According to Wikipedia, "Potboiler or pot-boiler is a term used to describe a poor quality novel, play, opera, or film, or other creative work that was created quickly to make money to pay for the creator's daily expenses (thus the imagery of 'boil the pot'[1], which means 'to provide one's livelihood'[2]). Authors who create potboiler novels or screenplays are sometimes called hack writers."
Medical journals are full of potboilers written by hack writers who throw together reports designed to feed the media's appetite for stories that fit the anti-Big Pharma narrative...
Case in point:
Outcome Reporting Among Drug Trials Registered in ClinicalTrials.gov
- Florence T. Bourgeois, MD, MPH;
- Srinivas Murthy, MD; and
- Kenneth D. Mandl, MD, MPH
Can you guess what the conclusion of the study was from the following headline?
Review Suggests Bias in Drug Study Reporting
Industry-funded trials more likely to have positive findings than other studies, analysis shows
www.businessweek.com/lifestyle/content/healthday/641567.html
You can spend your own $15 to get this potboiler online, or you can read the juicy parts of the hatchet job here. You just have to suffer through my commentary.
"Results were considered favorable if they were statistically significant (based on P values or CIs) and supported the efficacy or safety of the test drug or not favorable if they were not statistically significant for the efficacy or safety of the test drug (25). For noninferiority trials, if the test drug was equal to the comparison drug, the results were also classified as favorable."
An industry-sponsored study showing any benefit, even if barely statistically significant, is considered positive. An industry-sponsored study showing no difference in outcome between the test treatment and another drug or a placebo... that's also called a positive study.
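To make the loophole concrete, here is a minimal sketch of the classification rule as quoted above; the function and parameter names are my own illustration, not the authors' code.

```python
# A sketch of the quoted "favorable" rule: any statistically significant
# result supporting the test drug is favorable, and a noninferiority "tie"
# with the comparator is favorable too. Names and the alpha threshold are
# assumptions for illustration only.

def classify_outcome(p_value: float, favors_test_drug: bool,
                     noninferiority: bool = False,
                     equal_to_comparator: bool = False,
                     alpha: float = 0.05) -> str:
    """Return 'favorable' or 'not favorable' per the quoted definition."""
    # Superiority framing: significance plus any benefit counts, no matter
    # how small the effect.
    if p_value < alpha and favors_test_drug:
        return "favorable"
    # Noninferiority framing: being merely equal to the comparator also counts.
    if noninferiority and equal_to_comparator:
        return "favorable"
    return "not favorable"

# A barely significant win and a flat-out tie land in the same bucket.
print(classify_outcome(p_value=0.049, favors_test_drug=True))           # favorable
print(classify_outcome(p_value=0.20, favors_test_drug=False,
                       noninferiority=True, equal_to_comparator=True))   # favorable
```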
There are also these important findings:
"Trials funded by industry were more likely to be phase 3 or 4 trials (88.7%; P < 0.001 across groups), to use an active comparator in controlled trials (36.8%; P = 0.010 across groups), to be multicenter (89.0%; P < 0.001 across groups), and to enroll more participants (median sample size, 306 participants; P < 0.001 across groups). Government-funded trials were most likely to be placebo-controlled (56.2%), whereas trials funded by nonprofit or nonfederal sources were least likely to be multicenter (24.6%) and tended to have the smallest sample size (median, 50 participants). Industry-funded trials were also most successful at enrolling the anticipated number of participants, with 84.9% of trials enrolling at least 75% of the planned number of participants (P < 0.001 across groups)"
In other words, drug companies ran more post-market studies (increasingly required) and confirmatory trials (always required), and those trials were more diverse and larger. That explains in part the higher percentage of trials showing statistically significant efficacy. Smaller studies tend to be underfunded and underpowered, less likely to enroll enough patients to achieve a level of confidence that the results are reliable, and more likely to be early-phase studies looking at other endpoints. No wonder industry-sponsored trials are more likely to be "positive." And just to be sure, the researchers toss treatment toss-ups into the positive category. Nothing like creating your own standards. I wonder how many product managers would regard a no-difference result as "positive."
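For a sense of scale, here is a back-of-the-envelope power calculation of my own, using the median sample sizes quoted above and an assumed modest effect size, showing why the larger industry trials would be expected to reach statistical significance far more often even when the drugs work equally well.

```python
# Rough power comparison for the median trial sizes quoted above
# (306 participants vs. 50, split evenly across two arms). The effect size
# of 0.3 is an assumption for illustration, not a figure from the paper.
from math import sqrt
from scipy.stats import norm

def two_sample_power(effect_size: float, n_per_arm: int, alpha: float = 0.05) -> float:
    """Approximate power of a two-sided, two-sample z-test."""
    z_crit = norm.ppf(1 - alpha / 2)
    noncentrality = effect_size * sqrt(n_per_arm / 2)
    return norm.cdf(noncentrality - z_crit)

effect = 0.3  # same assumed true effect for both trials
for total_n in (306, 50):  # median sample sizes quoted above
    print(total_n, round(two_sample_power(effect, total_n // 2), 2))
# Roughly 0.75 power at n=306 versus roughly 0.18 at n=50: the small trial
# usually comes up "not favorable" even when the drug genuinely works.
```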
So the authors twist the obvious into a conspiracy about how industry funding deliberately puts a happy face on otherwise lousy results... But in the world of medical publishing, skewing data to stick it to pharma is, dare I say, a positive.
So are the headlines about the study, which I bet few reporters even read.