I think I'm going to feel anxious about my application to our Earthquake Physics program until I get that official letter, no matter how many good signs I'm getting from people. So far, nobody has said or done anything that leads me to believe I won't be getting in - quite the contrary, really. I don't think a department expecting to reject someone would keep inviting that person to participate in departmental events and discussions, nor would someone on the application committee, when passing me in the hall, go, "We looked at your application today," and give a big grin and a thumbs up. And yet I am still nervous, and have a couple of weeks to go before I get that letter. Ahhh!
One of the departmental functions in which I have been invited to participate is Journal Club. Every Thursday, the EP faculty and graduate students meet to discuss recent research related to a specific theme (this quarter's is earthquake prediction) - this alternates between one person choosing a bigger article, which we all read and discuss, and three people presenting shorter articles of interest from a particular journal they've been assigned. I've even been assigned a journal to search; everyone figured Nature would be a good place to start, since it's written to be understood by scientists of all types, not just insiders.
The first meeting of the quarter was this Thursday (I would have written about it that evening if my hard drive hadn't thought Wednesday night would be a fine time to die). We all read an article from Journal of Geophysical Research by K.F. Tiampo, J.B. Rundle, S. McGinnis, S.J. Gross, and W. Klein called "Eigenpatterns in southern California seismicity."
I had an immediate uh-oh moment due to not understanding the very first word of that title, but I looked it up and forged ahead with reading anyway. I found it pretty hard to wade through much of the text as well, due in part to some terminology issues, but also because the authors kept using really elaborate and circuitous phrasing to describe things that weren't particularly complicated. Basically, what they did was take the catalog of southern California earthquakes and plug it into a computer program that checked, for any given location that has an earthquake on any given day, where else in the region tends to have quakes on the same day (and where else tends to have fewer quakes than usual on that day). Considering the types of patterns they were trying to pick out, one would expect the terms "aftershock," "triggered slip," and "stress change" to turn up a lot, but they really didn't.
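To make the basic idea concrete, here's a toy sketch of that kind of co-occurrence check - this is my own illustration, not the authors' actual method (they do something fancier with eigenpatterns), and the tiny catalog and site names are completely made up:

```python
# Toy sketch: turn an earthquake catalog into per-site daily activity
# series, then ask which sites tend to be active on the same days
# (positive correlation) or on opposite days (negative correlation).
# The catalog and site names below are hypothetical.

from collections import defaultdict

# Hypothetical catalog entries: (day, site, magnitude)
catalog = [
    (1, "A", 3.2), (1, "B", 3.5),
    (2, "A", 4.0), (2, "B", 3.1),
    (3, "C", 3.8),
    (4, "A", 3.3), (4, "B", 3.9),
    (5, "C", 4.1),
]

days = range(1, 6)
sites = sorted({site for _, site, _ in catalog})

# Mark each site as active (1) or quiet (0) on each day
active = defaultdict(set)
for day, site, mag in catalog:
    active[site].add(day)
series = {s: [1 if d in active[s] else 0 for d in days] for s in sites}

def correlation(x, y):
    """Pearson correlation of two equal-length 0/1 activity series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5 if vx and vy else 0.0

for i, s1 in enumerate(sites):
    for s2 in sites[i + 1:]:
        print(s1, s2, round(correlation(series[s1], series[s2]), 2))
# Sites A and B always rupture on the same days (correlation 1.0),
# while C is only active on days when A and B are quiet (-1.0).
```

In this cartoon, the "fewer quakes than usual" part of the method shows up as a negative correlation. You can also see why the paper's choices matter: change the magnitude cutoff or the length of the time bin and the 0/1 series change, so the correlations change with them.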
But even with my not understanding all of the terminology or the specifics of the method, I was still able to pick up on plenty of hints that this paper is full of it.

For one, there were some outright factual errors. The paper repeatedly dated the Joshua Tree-Landers-Big Bear sequence to 1991 rather than 1992, which is bad enough in itself, but becomes even more worrisome considering that the arbitrary cutoff date for some of the calculations was the end of 1991. It didn't look like Landers was included in those figures, but some were not so clear. There were also generalizations/assumptions in the paper. The authors stated that Parkfield quakes happen every 22 years, like clockwork, not that the average interval is 22 years. Big difference! And such statements do not credibility make.

Another red flag came up very early in the article, when the authors stated that their goal was to identify "all possible space-time seismicity configurations." All possible? That's lofty and ambitious, particularly when they only have some sixty years of data to work with. You certainly can't come up with all possible patterns in such a small chunk of geological time, particularly not when most of the major faults in the area did not have a major rupture in that span (one would think they'd want to get the San Andreas in on their predictions), nor when there are who knows how many as-yet-undiscovered thrust faults whose current inactivity also puts them out of the calculation. I think "all possible" would be impossible even with much more data than this study used, so the fact that they've even aimed for it seems pretty ridiculous.

Lastly, when events below magnitude 3.0 were taken out of the picture, or when the time interval used to check correspondence was changed, the results came out completely different from each other - a far cry from hard and fast prediction rules.
I came to the Thursday discussion with my low opinion of the article, but I decided I wasn't going to say anything if nobody else had a problem with it. I was admittedly quite worried that I was completely off base, but those worries went away when the professor presenting the article started with, "I'm going to present this paper, or at least the parts that I understood." Turns out that everyone else thought the paper was full of it as well, so I felt much better about my reading/comprehension skills and participated in the discussion after all. People pointed out even more inconsistencies and problems, including some tweaking of the scale on the last couple of diagrams to make the results look more significant. We all agreed, though, that the paper would have gotten on our nerves much less if the authors had said, "We tried this method and found no correlation," rather than trying to make a big deal out of a couple thousandths of a percent. No correlation is a perfectly legitimate result to an experiment. Disappointing, certainly, but legitimate. But I guess that's not the sort of result that draws the mainstream media toward your ZOMG Earthquake Predictiashun Breakthrough, and apparently Tiampo et al. got a decent share of attention for this paper, so there we go.
Next week is one of the sessions in which three people share shorter articles. It's not my turn yet, since I figured I ought to hold off until I see how it works, but I may volunteer for the session two weeks after that.