While Peter was trying to calm the risk communication waters... here's my suggestion about embedding reporters: Don't.
There was some important research in JAMA showing that about 60 percent of adolescents don't respond to the first antidepressant they receive but do respond well to second-line medication plus cognitive behavioral therapy. That jibes with earlier results of crossover studies demonstrating about a 60 percent failure rate among depressed adults on their first SSRI.
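Run the arithmetic on those two figures and sequencing starts to look powerful. Here's a back-of-the-envelope sketch in Python; the round numbers come from the percentages cited above, not from data lifted out of the JAMA paper itself:

```python
# Back-of-the-envelope arithmetic using the two 60 percent figures cited
# above; round numbers for illustration, not data from the JAMA paper.
first_line_response = 0.40    # ~60% of adolescents fail the first antidepressant
second_line_response = 0.60   # ~60% of those failures respond to a switch plus CBT

cumulative = first_line_response + (1 - first_line_response) * second_line_response
print(f"Cumulative response after two treatment lines: {cumulative:.0%}")  # 76%
```

Roughly three in four patients end up responding once a second line is tried, which is a very different picture from what any single first-line trial shows.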
Of course, that could explain why, in many studies, you get zero effect versus placebo. That, and, as Fred Goodwin has pointed out, the FDA refuses to allow crossover designs (adaptive trials of a sort) in neuropharm. And of course, as genetic testing for antidepressant response gets closer, as David Meltzer and others noted at a conference held by the ..., that trial-and-error-and-miss approach to medication might be history.
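To spell out why heterogeneous response can flatten a trial's result, here's a minimal sketch, with hypothetical effect sizes of my own choosing, of how averaging responders and non-responders in a single parallel-arm trial shrinks the measured drug-placebo gap:

```python
# A minimal sketch of the dilution argument above. All numbers are
# hypothetical, chosen only to illustrate the mechanism.
responder_fraction = 0.40    # suppose only ~40% respond to this particular drug
effect_in_responders = 10.0  # hypothetical improvement on a symptom scale
placebo_effect = 4.0         # hypothetical improvement on placebo

# Non-responders improve only as much as placebo does, so the drug arm's
# mean blends the two groups and the measured gap shrinks toward zero.
drug_arm_mean = (responder_fraction * effect_in_responders
                 + (1 - responder_fraction) * placebo_effect)
print(f"Drug arm mean improvement: {drug_arm_mean:.1f}")                    # 6.4
print(f"Measured gap vs placebo: {drug_arm_mean - placebo_effect:.1f}")     # 2.4, not 6.0
```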
But you had to search mightily for that story yesterday in the media or the blogosphere.
Instead, it was a sloppy meta-analysis (meta, as in sort of; analysis, as in let the computer do it) of lots of placebo-controlled trials of varying quality, design, and exclusion criteria that seemed to suggest something anyone who has taken an antidepressant knows is crap: that they don't work. And of course, while the media and the blogosphere sneer at the source of the research funding, they are willfully blind to the incredibly poor design of the meta-analysis in question, as well as to the limits of meta-analysis in general. Here's one of many hard-science bloggers piling on the silly stew of studies that was passed off as science.
"Way too loose of p-values for false positives in studies, in medicine (and social sciences) compared to natural sciences, is one reason to not read too much into any individual study that claims antidepressants are ineffective, like the Public Library of Science meta-analysis of individual studies did.
P-values of the same looseness as in medicine/social sciences have been used to claim intercessory prayer actually works on sick people (http://www.religioustolerance.org/medical6.htm), for example, or here (http://skepdic.com/refuge/bunk21.html)
"I’m not saying that the results of a meta-analysis are no stronger than the weakest study in its umbrella. I am saying that, with p values as loose as they are in health/medicine (and social sciences), is that no massive amount of individual research studies being included under one meta-analysis will make the meta-analysis’ results anything more than a little bit stronger than the best individual study.
In other words, in medicine, and in social sciences, meta-analysis adds a very modest bump, nothing more. The problem is, most people believe it does much more than that when it doesn’t.
Or, to put it another way, meta-analysis is no better than the material it’s analyzing."
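His point about loose thresholds is easy to demonstrate. Here's a toy simulation, mine and not the blogger's, using numpy and scipy: even when a drug does nothing at all, roughly one trial in twenty clears the usual p < 0.05 bar, and those false positives get swept into meta-analyses alongside everything else.

```python
# A toy simulation of the loose-threshold point: run many trials of a drug
# with NO true effect and count how often p < 0.05 comes up anyway.
# This is my illustration, not the blogger's code.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials = 1000
false_positives = 0
for _ in range(n_trials):
    drug = rng.normal(0.0, 1.0, size=50)     # null: no real effect in either arm
    placebo = rng.normal(0.0, 1.0, size=50)
    _, p = stats.ttest_ind(drug, placebo)
    if p < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_trials} null trials came out 'significant'")  # ~50, i.e. ~5%
```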
And here's another question: if they don't work, then why do they have such horrible but rare side effects? Is it really possible that drug companies just pumped out drugs that were completely worthless except for the fact that they caused kids to commit suicide?
But that's precisely the mindset of the media and, to an even greater extent, of http://www.pharmalot.com and http://www.fiercepharma.com. Both sites totally ignored the JAMA study and focused on the meta-analysis. The JAMA study was a randomized controlled trial, and the meta-analysis... well, what can I say except what my high school English teacher used to say about the difference between Cliff Notes and a novel: meta-analysis is to real research what masturbation is to sex. Does anyone do any analysis of the underlying analyses anymore, or are journalists and bloggers just posting the facts that fit their preconceived notions? Does it tell you anything that meta-analyses are easy to churn out and rush into print just before, say, the next FDA meeting of import?
Such perspectives only encourage people not to take their medicines, or to stop taking them, because they "don't work." It's irresponsible.
Oh yeah... Here's the study that made antidepressants worthless....