Monday, March 25, 2013

Every move you make....Are you watching you?


Monitoring....There's something about turning health into numbers and targets that makes it all seem so controllable, isn't there? And many people - including many doctors - just love gadgets and measuring things. No wonder there is so much monitoring in health and fitness.

Actually, there's too much monitoring in some health matters. Some monitoring could cause anxiety without benefit, or lead to actions that do more harm than good.

Professor Paul Glasziou, author of Evidence-Based Monitoring, talked about this on Monday at Evidence Live. For monitoring to be effective there has to be:
  • valid and accurate measurement,
  • informed interpretation, and
  • effective action that can be taken on the results.  
Then there has to be an effective monitoring regimen.

None of that is simple. Frequent testing can mean you end up acting on random variations, not real changes in health. There's more at Statistically Funny about when statistical significance can mislead and the statistical risks of multiple testing.
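
To make that concrete, here's a minimal sketch (my own invented example, not from the post) of how daily self-monitoring of a perfectly stable measurement can still trigger "alerts" purely through random variation. All the numbers are made up.

```python
# Simulate a year of daily readings of a value that never really changes,
# and count how often ordinary measurement noise crosses an alert threshold.
import random

random.seed(42)

TRUE_VALUE = 100          # the person's real, unchanging level (hypothetical)
MEASUREMENT_NOISE = 5     # day-to-day wobble in the body/device (hypothetical)
ALERT_THRESHOLD = 108     # an arbitrary "act on this" cut-off

alerts = 0
days = 365
for day in range(days):
    reading = random.gauss(TRUE_VALUE, MEASUREMENT_NOISE)
    if reading > ALERT_THRESHOLD:
        alerts += 1

print(f"False alarms in a year of daily self-monitoring: {alerts}")
# With these made-up numbers, roughly 5% of days exceed the threshold,
# even though nothing about the person's health ever changed.
```

The more often you test, the more chances random noise gets to look like a real change - which is why a sensible monitoring regimen matters as much as the measurement itself.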

Self-monitoring can be a path to freedom and better health in some circumstances - if you use insulin or an anticoagulant like warfarin, for instance. But constant monitoring of everything you can measure is a whole other kettle of fish. You can read more about this, monitoring apps and 'the quantified self' in my guest blog at Scientific American: 'Every breath you take, Every move you make...' How much monitoring is too much?

Saturday, March 9, 2013

Nervously approaching significance



We're deluged with claims that we should do this, that or the other thing because some study has a "statistically significant" result. But don't let this particular use of the word "significant" trip you up: when it's paired with "statistically", it doesn't mean it's necessarily important. Nor is it a magic number that means that something has been proven to work (or not to work).

The p-value on its own really tells you very little. It is one way of trying to tell whether the result is more likely to be "signal" than "noise". If a study sample is very small, only a big difference is likely to reach statistical significance, while even a small difference can reach it in a bigger study.
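
Here's a rough illustration of that sample-size effect (my own example, with invented numbers, assuming numpy and scipy are available): the same underlying difference can miss or clear the conventional p < 0.05 line depending only on how many people are studied.

```python
# Compare two groups with an identical true difference, at two sample sizes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_difference = 2.0     # a genuine but modest difference between groups
spread = 10.0             # person-to-person variability

for n_per_group in (20, 2000):
    group_a = rng.normal(50.0, spread, n_per_group)
    group_b = rng.normal(50.0 + true_difference, spread, n_per_group)
    t_stat, p_value = stats.ttest_ind(group_a, group_b)
    print(f"n = {n_per_group:4d} per group -> p = {p_value:.4f}")

# Typically the small study gives a large ("non-significant") p-value and the
# large study a tiny one, even though the underlying difference is the same.
```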

But statistical significance is not a way to prove the "truth" of a claim or hypothesis. What's more, you don't even need the p-value, because other measures tell you everything the p-value can tell you, and more useful things besides.
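
One of those more useful measures is a confidence interval. As a hedged sketch (invented numbers, using the usual normal-approximation formula), a 95% confidence interval for a difference in means tells you the plausible size of the effect as well as whether "no difference" is plausible - the p-value only hints at the latter.

```python
# 95% confidence interval for a difference between two hypothetical group means.
import math

mean_a, sd_a, n_a = 52.0, 10.0, 120   # hypothetical treatment group
mean_b, sd_b, n_b = 49.0, 10.0, 120   # hypothetical control group

difference = mean_a - mean_b
standard_error = math.sqrt(sd_a**2 / n_a + sd_b**2 / n_b)
low = difference - 1.96 * standard_error
high = difference + 1.96 * standard_error

print(f"Difference: {difference:.1f}, 95% CI {low:.1f} to {high:.1f}")
# Here the interval (roughly 0.5 to 5.5) excludes zero - so the result would
# also count as "statistically significant" - but it additionally shows how
# big or small the real difference could plausibly be.
```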

This is roughly how the statistical test behind the p-value works. The test starts from the assumption that what the study is looking for is not true - that is, that the "null hypothesis" is true. The statistical test then estimates how likely you would be to see the result you got, or one even further from "null" than that result, if the null hypothesis really were true.

If the p-value is under 0.05 (less than 5%), then the result is compatible with what you would expect if the hypothesis actually is true. But it doesn't prove it is true, and you can't conclude too much based on that alone. Setting the threshold for statistical significance at 0.05 is the same as setting the confidence level for the test at 95%. That is common practice, but still a bit arbitrary.
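
A back-of-envelope sketch of that logic (my own example, not from the post): simulate a world where the null hypothesis is true - here, a fair coin - and ask how often a result at least as extreme as the one observed turns up.

```python
# Estimate a one-sided p-value by simulation under the null hypothesis.
import random

random.seed(1)

observed_heads = 60       # the "result you got" in 100 coin tosses
n_tosses = 100
simulations = 50_000

at_least_as_extreme = 0
for _ in range(simulations):
    heads = sum(random.random() < 0.5 for _ in range(n_tosses))  # fair coin = null hypothesis
    if heads >= observed_heads:
        at_least_as_extreme += 1

p_value = at_least_as_extreme / simulations
print(f"One-sided p-value: {p_value:.3f}")
# Around 0.03 here: if the coin really were fair, 60 or more heads would be
# fairly unusual - but not impossible, and a number like this on its own
# doesn't "prove" the coin is biased.
```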

You can read more about statistical significance over here in my blog, Absolutely Maybe - and in Data Bingo! Oh no! and Does it work? here at Statistically Funny.

Always keep in mind that a statistically significant result is not necessarily significant in the sense of "important". It's "significant" only in the sense of signifying something. A sliver of a difference could reach statistical significance if a study is big enough. For example, if one group of people sleeps a tiny bit longer per night, on average, than another group, that could be statistically significant. But it wouldn't be enough for one group to feel more rested than the other.

This is why people will often say something was statistically significant, but clinically unimportant, or not clinically significant. Clinical significance is a value judgment, often implying a difference that would change the decision that a clinician or patient would make. Others speak of a minimal clinically important difference (MCID or MID). That can mean they are talking about the minimum difference a patient could detect - but there is a lot of confusion around these terms.

Researchers and medical journals are more likely to trumpet "statistically significant" trial results to get attention from doctors and journalists, for example. Those medical journal articles are a key part of marketing pharmaceuticals, too. Selling copies of articles to drug companies is a major part of the business of many (but not all) medical journals. 

And while I'm on the subject of medical journals, I need to declare my own relationship with one I've long admired: PLOS Medicine - an international open access journal. As well as being proud to have published there, I'm delighted to have recently joined their Editorial Board.


(This post was revised following Bruce Scott's comment below.)