Archive for the ‘statistics’ tag

#  Spurious Correlations →

July 2nd, 2014 at 17:00 // In Worth Distraction 

Spurious correlations are a common and obvious problem afflicting a lot of science. Tyler Vigen’s site is dedicated to collecting them. They’re pointless fun to see. Here’s how the divorce rate in Maine is “driven” by the consumption of margarine across the US:

[Image: spurious correlation between the Maine divorce rate and US margarine consumption]
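The mechanism is easy to reproduce: any two series that both trend over the same stretch of years will correlate strongly, whatever they measure. A minimal sketch in Python, with invented numbers standing in for the real divorce and margarine figures:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two unrelated quantities that both happen to drift downward over a decade.
# The numbers are made up; only the shared trend matters.
divorce_rate = 5.0 - 0.10 * np.arange(10) + rng.normal(0, 0.05, 10)
margarine_lbs = 8.0 - 0.40 * np.arange(10) + rng.normal(0, 0.20, 10)

# The common trend alone produces a near-perfect Pearson correlation.
r = np.corrcoef(divorce_rate, margarine_lbs)[0, 1]
print(f"r = {r:.2f}")  # close to 1, despite no causal link
```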

#  The Effectiveness of Vaccines →

June 25th, 2014 at 15:03 // In Worth Seeing 

David Mendoza put together a pretty awesome series of charts showing how effective the introduction of the measles vaccine was in stopping new cases across the United States. This one really requires no introduction:

[Chart: measles vaccine effectiveness in the US]

#  How Americans Die →

June 12th, 2014 at 13:00 // In Worth Seeing 

A neat little visual tour of how the causes of death among Americans have changed over time. Nothing mind-blowing, and I recommend it as much for its cool technology as for its novel insights, but it’s worth a look.

[Image: How Americans Die]

#  Observing Love on Facebook →

February 21st, 2014 at 16:37 // In Worth Considering 

In honor of Valentine’s Day last week, Facebook published a number of interesting posts based on the unfathomable quantities of data they possess. The specific effects are generally understandable but not necessarily what you would have predicted. On the frequency of Facebook activity as a relationship starts:

During the 100 days before the relationship starts, we observe a slow but steady increase in the number of timeline posts shared between the future couple. When the relationship starts (“day 0”), posts begin to decrease. We observe a peak of 1.67 posts per day 12 days before the relationship begins, and a lowest point of 1.53 posts per day 85 days into the relationship. Presumably, couples decide to spend more time together, courtship is off, and online interactions give way to more interactions in the physical world.

[Chart: Facebook posts per day versus relationship timeline]

The post about the start of relationships got the most attention, but their profiles of religion, age, and breakups also interested me.

(via The Atlantic)

#  Tips for Healthy Science Skepticism →

November 27th, 2013 at 10:25 // In Worth Reading 

It’s a bit of an old saw here by now, but I think there’s a lot of “science,” especially as reported in popular culture, that’s utterly bogus. In the guise of helping politicians, the Nature blog has a good piece about how to be intelligently skeptical of scientific claims. This point about “publication bias” looms large in my mind:

Because studies that report ‘statistically significant’ results are more likely to be written up and published, the scientific literature tends to give an exaggerated picture of the magnitude of problems or the effectiveness of solutions.
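The exaggeration is easy to watch happen in a toy simulation: run many small studies of the same modest effect, “publish” only the significant ones, and the average published estimate overshoots the truth. Every parameter below (effect size, sample size, study count) is an assumption for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect = 0.2      # a modest real difference between groups
n, studies = 30, 2000  # small studies, many of them

published = []
for _ in range(studies):
    treated = rng.normal(true_effect, 1, n)
    control = rng.normal(0, 1, n)
    t, p = stats.ttest_ind(treated, control)
    if p < 0.05 and t > 0:  # only 'significant' positive results get written up
        published.append(treated.mean() - control.mean())

print(f"true effect: {true_effect}")
print(f"mean published estimate: {np.mean(published):.2f}")
# The published literature reports an effect far larger than the truth.
```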

The list goes much deeper, too. Here’s a harder issue I’d nearly forgotten about (my last brush with sample-size significance was nearly a decade ago):

Effect size matters. Small responses are less likely to be detected. A study with many replicates might result in a statistically significant result but have a small effect size (and so, perhaps, be unimportant). The importance of an effect size is a biological, physical or social question, and not a statistical one. In the 1990s, the editor of the US journal Epidemiology asked authors to stop using statistical significance in submitted manuscripts because authors were routinely misinterpreting the meaning of significance tests, resulting in ineffective or misguided recommendations for public-health policy.
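The flip side of the same arithmetic: with enough replicates, even a trivial difference clears p < 0.05, which is exactly why significance alone can’t tell you whether an effect matters. A quick sketch with an invented, negligible effect:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# A difference of 0.02 standard deviations: real, but almost certainly unimportant.
a = rng.normal(0.00, 1, 200_000)
b = rng.normal(0.02, 1, 200_000)

t, p = stats.ttest_ind(a, b)
cohens_d = (b.mean() - a.mean()) / np.sqrt((a.var() + b.var()) / 2)
print(f"p = {p:.1e}, Cohen's d = {cohens_d:.3f}")
# p is vanishingly small ('highly significant') while d is negligible.
```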

(via The Browser)

#  The Coach Who Never Punts →

November 19th, 2013 at 18:23 // In Worth Watching 

Without telling anyone I sold this blog to Grantland, and now I’m only going to link to their videos. Hope you don’t mind.

I kid, but two-in-a-row is something I’d typically avoid. Still, I have two, so you’re going to go watch a video about the coach of Pulaski Academy, a small private high school in Arkansas, whose strategy is to never (with very few exceptions) punt away the football. He credits that strategy, along with his unconventional almost-all-onside-kicks approach, with allowing his small school to win so many state championships.
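The logic behind the no-punt strategy is plain expected-value arithmetic. Here’s a toy comparison; every probability and point value below is a made-up guess for illustration, not the coach’s actual numbers:

```python
# Hypothetical 4th-and-2 near midfield; all inputs are illustrative guesses.
p_convert = 0.60      # chance of picking up the first down
ev_if_convert = 2.5   # expected points from the continued drive
ev_if_fail = -1.5     # expected points given the opponent's short field
ev_after_punt = -0.8  # expected points given field position after a punt

ev_go = p_convert * ev_if_convert + (1 - p_convert) * ev_if_fail
print(f"go for it: {ev_go:+.2f} points, punt: {ev_after_punt:+.2f} points")
# With these guesses, going for it is worth about 1.7 more points per decision.
```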

(via kottke)

#  Why Most Published Research is False →

November 29th, 2011 at 14:31 // In Worth Reading 

I’m a bit of a connoisseur of this type of thing, so I’m embarrassed that I only today found an utterly fantastic plain-English argument from Alex Tabarrok about why you should discount almost every news story about a really interesting new finding by scientists. (I’m a connoisseur of this kind of thing because of the number of intelligent people who seem to treat every new study about a wonder-substance or agent-of-death as meaningful.) These guidelines are a good summary (the first one is easy to simulate; see the sketch after the list):

1) In evaluating any study try to take into account the amount of background noise. That is, remember that the more hypotheses which are tested and the less selection which goes into choosing hypotheses the more likely it is that you are looking at noise.

2) Bigger samples are better. (But note that even big samples won’t help to solve the problems of observational studies which is a whole other problem).

3) Small effects are to be distrusted.

4) Multiple sources and types of evidence are desirable.

5) Evaluate literatures not individual papers.

6) Trust empirical papers which test other people’s theories more than empirical papers which test the author’s theory.
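The first guideline is easy to demonstrate: test enough hypotheses where nothing is going on, and “significant” findings appear right on schedule. A minimal sketch under the usual 5% threshold, with the group sizes and hypothesis count chosen arbitrarily:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# 100 hypotheses, all false: both groups drawn from the same distribution.
false_positives = 0
for _ in range(100):
    a = rng.normal(0, 1, 50)
    b = rng.normal(0, 1, 50)
    if stats.ttest_ind(a, b).pvalue < 0.05:
        false_positives += 1

print(f"{false_positives} 'significant' findings out of 100 true nulls")
# Expect about 5 by chance alone: background noise dressed up as discovery.
```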

(via Tabarrok himself, in a shorter but good post about a specific study’s failure)

#  Expensive Wine Words →

February 26th, 2011 at 12:42 // In Worth Knowing 

Another in the large pile of “most things about wine are bullshit” stories. This author did a statistical analysis:

Using descriptions of 3,000 bottles, ranging from $5 to $200 in price from an online aggregator of reviews, I first derived a weight for every word, based on the frequency with which it appeared on cheap versus expensive bottles. I then looked at the combination of words used for each bottle, and calculated the probability that the wine would fall into a given price range. The result was, essentially, a Bayesian classifier for wine.
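What he describes is essentially a naive Bayes text classifier. A minimal sketch of the same idea; the reviews below are invented (the real analysis used 3,000 of them):

```python
from collections import Counter

# Toy training data: review words for cheap vs. expensive bottles (invented).
cheap_reviews = ["fruity fresh easy good pleasant", "clean simple fresh good"]
pricey_reviews = ["complex tannins velvety intense", "elegant complex silky old"]

def word_counts(reviews):
    return Counter(w for r in reviews for w in r.split())

cheap, pricey = word_counts(cheap_reviews), word_counts(pricey_reviews)
n_cheap, n_pricey = sum(cheap.values()), sum(pricey.values())
vocab = set(cheap) | set(pricey)

def price_class(description):
    """Naive Bayes: multiply per-word likelihoods, with add-one smoothing."""
    p_cheap = p_pricey = 1.0
    for w in description.split():
        p_cheap *= (cheap[w] + 1) / (n_cheap + len(vocab))
        p_pricey *= (pricey[w] + 1) / (n_pricey + len(vocab))
    return "expensive" if p_pricey > p_cheap else "cheap"

print(price_class("complex velvety tannins"))  # -> expensive
print(price_class("fresh fruity good"))        # -> cheap
```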

(via more of what i like)

#  Hitler vs. Stalin →

February 25th, 2011 at 18:02 // In Worth Knowing 

Timothy Snyder offers some new details on the age-old question of “who was worse?” Doing the morbid calculus with new data leads to a result that turns the conventional wisdom (Hitler the eviler, Stalin the deadlier) on its head. I wouldn’t pull this quote, except that I know not everyone likes to read NYRB articles:

All in all, the Germans deliberately killed about 11 million noncombatants, a figure that rises to more than 12 million if foreseeable deaths from deportation, hunger, and sentences in concentration camps are included. For the Soviets during the Stalin period, the analogous figures are approximately six million and nine million. These figures are of course subject to revision, but it is very unlikely that the consensus will change again as radically as it has since the opening of Eastern European archives in the 1990s.

#  The Decline Effect →

January 22nd, 2011 at 11:36 // In Worth Reading 

Some scientific researchers are worried that the strength of experimental effects seems to decline over time. I know science’s fallibility is something of an old saw around here, but until I see more smart people taking it seriously, I doubt that will change. Jonah Lehrer’s conclusion pretty well captures what I want more people to realize:

We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.

(via reddit)