You are probably all too often confused by what you read and hear in the news about medicine and medical breakthroughs. Last week’s wonder drug has now proven toxic or ineffective. An apt example: after being bombarded for a couple of years about the dangers of vitamin D deficiency, a panel of the Institute of Medicine concluded that it is unnecessary to get blood tests for vitamin D or to take supplements of vitamin D or calcium to prevent osteoporosis. So, just what's going on? To explain these turnarounds, an article in a recent issue of Newsweek magazine states that almost everything you hear about medicine is wrong.
Finding errors in medical research
The article cites the writings of John P. A. Ioannidis, recently appointed chief of the Prevention Research Center at Stanford University, who, along with other medical scholars, has been examining errors in medical research for some time. And these purported errors are not found in studies carried out by inexperienced or poorly trained physicians and published in fly-by-night journals. Rather, these critics are referring to research carried out by recognized experts and published in prestigious, well-edited journals.
Population studies vs. controlled clinical trials
One example of an erroneous finding cited in the article: garlic was initially shown to lower blood cholesterol levels, a result not confirmed by later, more careful research. Other prominent examples come from research done in the early 2000s: one set of studies concluded that vitamin E protected against coronary heart disease (CHD), while many others found that hormone replacement therapy (HRT) reduced the incidence of CHD in postmenopausal women. Subsequent controlled clinical trials (CCTs) found that neither vitamin E nor HRT protects against CHD. But, in my opinion, this reversal of the original conclusions does not mean that those studies were wrong. Instead, the error was the overzealous reaction of physicians, patients, and others to the reported benefits of vitamin E and HRT, based on population and observational studies, which can show only associations, not cause-and-effect relationships. Unfortunately, erroneous prescription of vitamin E and HRT continued for a number of years because the relevant CCTs required large numbers of participants and were not completed until 10 to 15 years later.
Many studies are too small
Another source of confusion is the many small studies reported in the news, which show, for example, that the 35 people who took drug "X" had 10 percent fewer headaches than those who took a placebo. Such findings may be worthy of larger future studies; as they stand, however, the studies are too small, and the differences they observe may have statistical significance but no clinical importance.
What can you do to separate the medical chaff from the wheat?
Although I have tried in my practice and writing to rely only on well-established findings, I confess to having sometimes acted or written on apparently exciting new reports that later turned out to be wrong.
Here's what you should keep in mind:
• Look at more than the headlines of medical stories. Be skeptical if they don’t provide adequate information to judge the validity and importance of the findings.
• Remember that the findings of population and observational studies must be verified by CCTs.
• Don’t overreact to trivial findings in small studies.
• Beware of "breakthroughs," which are uncommon or even rare.