I have moved the blog to a new server at civilstat.com where I can host custom visualizations, maps, code, etc. more easily. The old posts are still archived here for now but new material will only appear on the new site.
The process of doing science, math, engineering, etc. is usually way messier than how those results are reported. Abstruse Goose explains it well:
In pure math, that’s usually fine. As long as your final proof can be verified by others, it doesn’t necessarily matter how you got there yourself.
Now, verifying it might be hard, for example with computer-assisted proofs like that of the Four Color Theorem. And teaching math via the final proof might not be the best way, pedagogically, to develop problem-solving intuition.
But still, a theorem is either true or it isn’t.
However, in the experimental sciences, where real-world data is inherently variable, it’s very rare that you can really say, “I’ve proven that Theory X is true.” Usually the best you can do is to say, “I have strong evidence for Theory X,” or, “Given these results it is reasonable to believe in Theory X.”
(There’s also decision theory: “Do we have enough evidence to think that Theory X is true?” is a separate question from “Do we have enough evidence to act as if Theory X is true?”)
In these situations, the way you reached your conclusions really does affect how trustworthy they are.
Some of the paper’s recommendations only make sense for limited types of analysis, but in those cases it is sensible advice. I thought the contrast between their two descriptions of Study 2 (“standard” on p. 2, versus “compliant” on p. 6) was very effective.
I’m not sure what to think of their idea of limiting “researcher degrees of freedom.”
For example, they discourage a Bayesian approach because “Bayesian statistics require making additional judgments (e.g., the prior distribution) on a case-by-case basis, providing yet more researcher degrees of freedom.”
I’m a bit hesitant to say that researchers should be pigeonholed into the standard frequentist toolkit and not allowed to use their best judgment!
If canned frequentist methods are unsuitable for the problem at hand, or underestimate uncertainty relative to a carefully-thought-out, problem-appropriate Bayesian method, you may not be doing better after all…
However, like the authors of this paper, I do support better reporting of why a certain analysis was judged to be the right tool for the job.
Ideally, more of us would know Bayesian methods and could justify the choice between frequentist and Bayesian approaches for the problem at hand, rather than always saying “the frequentist approach is standard” and stopping our thinking there.
I see some kind of vague showy, wiggling lines — here and there an E and a B written on them somehow, and perhaps some of the lines have arrows on them — an arrow here or there which disappears when I look too closely at it. When I talk about the fields swishing through space, I have a terrible confusion between the symbols I use to describe the objects and the objects themselves. I cannot really make a picture that is even nearly like the true waves.
As it turns out, Feynman probably did have something like synaesthesia:
As I’m talking, I see vague pictures of Bessel functions from Jahnke and Emde’s book, with light-tan j’s, slightly violet-bluish n’s, and dark brown x’s flying around. And I wonder what the hell it must look like to the students.
The letter-color associations in this second quote are a fairly common type of synaesthesia. The first quote above sounds quite different, yet still plausibly like synaesthesia: “I have a terrible confusion between the symbols I use to describe the objects and the objects themselves”…
I wonder whether many of the semi-mystical genius-heroes of math & physics lore (also, for example, Ramanujan) have had such neurological conditions underpinning their unusually intuitive views of their fields of study.
I love the idea of synaesthesia and am a bit jealous of people who have it. I’m not interested in drug-induced versions but I would love to experiment with other ways of experiencing synthetic synaesthesia myself. Wired Magazine has an article on such attempts, and I think I remember another approach discussed in Oliver Sacks’ book Musicophilia.
I have a friend who sees colors in letters, which helps her to remember names — I’ve heard her think out loud along these lines: “Hmm, so-and-so’s name is kind of reddish-orange, so it must start with P.” I wonder what would happen if she learned a new alphabet, say the Cyrillic alphabet (used in Russian etc.): would she associate the same colors with similar-sounding letters, even if they look different? Or similar-looking ones, even if they sound different? Or, since her current associations were formed long ago, would she never have any color associations at all with the new alphabet?
Also, my sister sees colors when she hears music; next time I see her I ought to ask for more details. (Is the color related to the mood of the song? The key? The instrument? The time she first heard it? etc. Does she see colors when practicing scales too, or just “real” songs?)
Finally, this isn’t quite synaesthesia but another natural superpower in a similar vein, suggesting that language can influence thought:
…unlike English, many languages do not use words like “left” and “right” and instead put everything in terms of cardinal directions, requiring their speakers to say things like “there’s an ant on your south-west leg”. As a result, speakers of such languages are remarkably good at staying oriented (even in unfamiliar places or inside buildings) and perform feats of navigation that seem superhuman to English speakers. In this case, just a few words in a language make a big difference in what cognitive abilities their speakers develop. Certainly next time you plan to get lost in the woods, I recommend bringing along a speaker of Kuuk Thaayorre or Guugu Yimithirr rather than, say, Dutch or English.
The human brain, ladies and gentlemen!
Yesterday’s earthquake in Virginia was a new experience for me. I am glad that there was no major damage and there seem to have been no serious injuries.
Most of us left the building quickly — this was not guidance, just instinct, but apparently it was the wrong thing to do: FEMA suggests that you take cover under a table until the shaking stops, as “most injuries occur when people inside buildings attempt to move to a different location inside the building or try to leave.”
After we evacuated the building, and once it was clear that nobody had been hurt, I began to wonder: how do you know when it’s safe to go back inside?
Assuming your building’s structural integrity is sound, what are the chances of experiencing major aftershocks, and how soon after the original quake should you expect them? Are you “safe” if there were no big aftershocks within, say, 15 minutes of the quake? Or should you wait several hours? Or do they continue for days afterwards?
Maybe a friendly geologist could tell me this is a pointless or unanswerable question, or that there’s a handy web app for that already. But googling turns up no direct answer, so I dig into the details a bit…
FEMA does not help much in this regard: “secondary shockwaves are usually less violent than the main quake but can be strong enough to do additional damage to weakened structures and can occur in the first hours, days, weeks, or even months after the quake.”
I check the Wikipedia article on aftershocks and am surprised to learn that events in the New Madrid seismic zone (around where Kentucky, Tennessee, and Missouri meet) are still considered aftershocks to the 1811-1812 earthquake! So maybe I should wait 200 years before going back indoors…
All right, but if I don’t want to wait that long, Wikipedia gives me some good leads:
First of all, Båth’s Law tells us that the largest aftershock tends to be of magnitude about 1.1-1.2 points lower than the main shock. So in our case, the aftershocks for the 5.9 magnitude earthquake are unlikely to be of magnitude higher than 4.8. That suggests we are safe regardless of wait time, since earthquakes of magnitude below 5.0 are unlikely to cause much damage.
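The arithmetic behind that estimate is just a subtraction, but here it is as a sketch (the 1.1–1.2 gap is an empirical rule of thumb, not a guarantee, and the function name is mine):

```python
def bath_largest_aftershock(main_magnitude, gap=1.15):
    """Båth's Law: the largest aftershock is typically about
    1.1-1.2 magnitude units below the main shock. We use the
    midpoint of that range as a default."""
    return main_magnitude - gap

# For the magnitude-5.9 Virginia quake:
print(round(bath_largest_aftershock(5.9), 2))  # about 4.75
```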
Actually, there are several magnitude scales, and other important variables too (such as the intensity and depth of the earthquake)… but for the sake of argument, let us use 5.0 (about the same on the Richter and moment magnitude scales) as our cutoff for safety to go back inside. By Båth’s Law, though, any aftershocks of the 5.9 quake are then unlikely to be dangerous at all, and now I’m itching to do some more detailed analysis. Besides, quakes above magnitude 4.0 can still be felt, and are probably quite scary coming right after a bigger one. So let us say we are interested in the chance of an aftershock of magnitude 4.0 or greater, and keep pressing on through Wikipedia.
We can use the Gutenberg-Richter law to estimate the relative frequency of quakes above a certain size in a given time period.
The example given states that “The constant b is typically equal to 1.0 in seismically active regions.” So if we round up our recent quake to magnitude around 6.0, we should expect about 10 quakes of magnitude 5.0 or more, about 100 quakes of magnitude 4.0 or more, etc. for every 6.0 quake in this region.
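That scaling is easy to sketch, taking b = 1.0 on faith for now (whether that b-value suits the east coast is exactly the open question below):

```python
def gr_relative_count(m_low, m_ref=6.0, b=1.0):
    """Gutenberg-Richter law: the number of quakes of magnitude >= M
    scales as 10^(-b*M). So per quake of magnitude >= m_ref, we expect
    10^(b * (m_ref - m_low)) quakes of magnitude >= m_low."""
    return 10 ** (b * (m_ref - m_low))

print(gr_relative_count(5.0))  # 10.0 quakes >= 5.0 per >= 6.0 quake
print(gr_relative_count(4.0))  # 100.0 quakes >= 4.0 per >= 6.0 quake
```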
But here is our first major stumper: is b=1.0 appropriate for the USA’s east coast? It’s not much of a “seismically active region”… I am not sure where to find the data to answer this question.
Also, this only says that we should expect an average of ten 5.0 quakes for every 6.0 quake. In other words, we’ll expect to see around ten 5.0 quakes some time before the next 6.0 quake, but that doesn’t mean that all (or even any) of them will be aftershocks to this 6.0 quake.
That’s where Omori’s Law comes in. Omori looked at earthquake data empirically (without any specific physical mechanism implied) and found that the aftershock frequency decreases more or less proportionally with 1/t, where t is time after the main shock. He tweaked this a bit and later Utsu made some more modifications, leading to an equation involving the main quake amplitude, a “time offset parameter”, and another parameter to modify the decay rate.
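The modified Omori (Utsu) form is usually written n(t) = K / (t + c)^p. A sketch, with parameter values that are purely illustrative (not fitted to any real catalog):

```python
def omori_rate(t, K=100.0, c=0.1, p=1.1):
    """Modified Omori (Utsu) law: aftershock rate n(t) = K / (t + c)^p
    at time t (days) after the main shock. K scales with main-shock
    size, c is the time-offset parameter, and p adjusts the decay
    (p = 1 recovers Omori's original 1/t decay). All values here
    are made up for illustration."""
    return K / (t + c) ** p

# The rate falls off steeply: one hour, one day, one week after the quake.
for t in (1 / 24, 1.0, 7.0):
    print(round(t, 3), round(omori_rate(t), 1))
```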
Our second major stumper: what are typical Omori parameter values for USA east coast quakes? Or where can I find data to fit them myself?
Omori’s Law gives the relationship for the total number of aftershocks, regardless of size. So if we knew the parameters for Omori’s Law, we could guess how many aftershocks total to expect in the next hour, day, week, etc. after the main quake. And if we knew the parameters for the Gutenberg-Richter law, we could guess what proportion of quakes (within each of those time periods) would be above a certain magnitude.
Combining this information (and assuming that the distribution of aftershock magnitudes is typical of the overall quake magnitude distribution for the region), we could guess the probability of a magnitude 4.0 or greater quake within the next day, week, etc. The Southern California Earthquake Center provides details on putting this all together.
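One way to combine them is the Reasenberg–Jones approach: treat aftershocks as a Poisson process whose rate above magnitude M is an Omori decay scaled by a Gutenberg–Richter factor, so the chance of at least one qualifying aftershock in a window is 1 − exp(−expected count). The parameter values below are, I believe, the generic southern California ones from Reasenberg and Jones; treat them strictly as placeholders until east-coast values turn up:

```python
import math

def expected_aftershocks(m_min, t1, t2, main_mag=5.9,
                         a=-1.67, b=0.91, c=0.05, p=1.08):
    """Expected number of aftershocks of magnitude >= m_min between
    t1 and t2 days after the main shock. Reasenberg-Jones-style rate:
    10^(a + b*(main_mag - m_min)) / (t + c)^p, integrated in closed
    form (valid for p != 1). Parameters are placeholders, not fitted
    to east-coast data."""
    scale = 10 ** (a + b * (main_mag - m_min))
    return scale * ((t1 + c) ** (1 - p) - (t2 + c) ** (1 - p)) / (p - 1)

def prob_at_least_one(m_min, t1, t2, **kw):
    """Poisson probability of seeing at least one such aftershock."""
    return 1 - math.exp(-expected_aftershocks(m_min, t1, t2, **kw))

# Chance of an M >= 4.0 aftershock in the first day vs. the first week:
print(prob_at_least_one(4.0, 0.0, 1.0))
print(prob_at_least_one(4.0, 0.0, 7.0))
```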
What this does not answer directly is my first question: given a quake of magnitude X, in a region with Omori and Gutenberg-Richter parameters Y, what is the time T such that, if no aftershock of magnitude 4.0 or greater has occurred by time T, one probably never will?
If I can find typical local parameter values for the laws given above, or good data for estimating them; and if I can figure out how to put it together; then I’d like to try to find the approximate value of T.
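One way to make the question concrete: wait until the chance of an M ≥ 4.0 aftershock in the *following week* falls below some tolerance. A self-contained sketch, again using a Reasenberg–Jones-style rate with placeholder parameter values (nothing here is fitted to east-coast data, and both function names are mine):

```python
import math

def expected_next_week(T, m_min=4.0, main_mag=5.9,
                       a=-1.67, b=0.91, c=0.05, p=1.08):
    """Expected number of aftershocks of magnitude >= m_min in the
    week following day T, from a Reasenberg-Jones-style rate
    10^(a + b*(main_mag - m_min)) / (t + c)^p integrated in closed
    form. Parameter values are placeholders only."""
    scale = 10 ** (a + b * (main_mag - m_min))
    return scale * ((T + c) ** (1 - p) - (T + 7 + c) ** (1 - p)) / (p - 1)

def all_clear_day(risk_tolerance=0.05):
    """Smallest T (in days) at which the Poisson probability of at
    least one qualifying aftershock in the following week drops below
    the tolerance. A simple scan is fine for a rough answer."""
    T = 0.0
    while 1 - math.exp(-expected_next_week(T)) > risk_tolerance:
        T += 0.1
    return T

print(all_clear_day())  # days to wait, under these made-up parameters
```

Because p is so close to 1, the decay has a heavy tail, so the answer is quite sensitive to the parameters; that slow decay is the same reason New Madrid events can still count as aftershocks two centuries on.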
Stumper number three: think some more about whether (and how) this question can be answered, even if only approximately, using the laws given above.
I know this is a rough idea, and my lack of background in the underlying geology might give entirely the wrong answers. Still, it’s a fun exercise to think about. Please leave any advice, critiques, etc. in the comments!