Latest Drugwonks' Blog
Then Peter Bach, in his NEJM article, revealed that the entire enterprise -- the foundation not only of the liberal effort to bend the curve on health care through government rationing, but of a billion-dollar business designed to "coach" people out of treatments for back pain, breast cancer, prostate cancer and more, all to the financial benefit of hospitals and health plans -- is medically bankrupt and statistically corrupt.
Here's Buzz Cooper putting the Atlas through the document shredder:
An important article appeared in today’s NYT, describing a new paper by Peter Bach, which is in today’s NEJM. Peter’s paper (“A Map to Bad Policy“) debunks the Dartmouth Atlas and cautions against its use. As I said in the Wash Post in September, the Dartmouth Atlas is the ”Wrong Map for Health Care Reform.”
More damning even than Peter’s analysis was Elliott Fisher’s reply: “Dr. Fisher agreed that the current Atlas measures should not be used to set hospital payment rates, and that looking at the care of patients at the end of life provides only limited insight into the quality of care provided to those patients. He said he and his colleagues should not be held responsible for the misinterpretation of their data.” Really? It was someone else’s interpretation? OK, Elliott, you’re not responsible. Just stand in the corner.
Peter is not the only leading epidemiologist to debunk Dartmouth in recent days. There’s also the report this week from the U of Wisconsin and RWJ by Pat Remington (another leader), showing that people who have the poorest health (and, therefore, the highest health care costs) live in the poorest counties (see my blog report and an earlier discussion of poverty and health care). And there’s the recent paper by Ong and Rosenthal (co-authored by Jose Escarce, editor of HSR, the leading health services research journal), showing that, when all care is measured (not simply end-of-life care, as measured by Dartmouth), hospitals that provide more have lower mortality, which was confirmed in the current issue of Medical Care by Barnato and associates at the U of Pittsburgh. When it rains, it pours.
What’s doubly important about the death of the Dartmouth Atlas is that it was the cornerstone of health care reform. Right from the start, Peter Orszag, director of OMB and the administration’s architect of health care reform, accepted Dartmouth’s ideological principles that health care spending was driven by doctors and hospitals who over-treated and over-charged, to no benefit. The funds for health care reform were readily available by simply getting rid of geographic differences. That alone would save 30% of health care spending ($700B). And that could be accomplished by making everything look like Mayo (white, middle class and efficient) and by having more primary care physicians (which Mayo doesn’t). And best of all, it could assure that no new taxes would be needed, just as President Obama had promised.
And here's Peter Bach explaining why cost driven health care policy is, well, bad for patients...
Say Hospital A and Hospital B each has a group of patients with a fatal disease. Hospital A gives each patient a $1 pill and cures half of them; Hospital B provides no treatment. An Atlas analysis would conclude that Hospital B was more efficient, since it spent less per decedent. But all the patients die at Hospital B, whereas only half of the patients do at Hospital A, where the cost per life saved is a bargain at $2. Although $1 cures are rare, changing the price or efficacy of the pill does not alter the fundamental problem with examining costs alone when cost differences are sometimes associated with outcome differences.
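Bach's two-hospital example can be worked through explicitly. The sketch below uses a hypothetical cohort of 100 patients (the $1 pill and 50 percent cure rate are from his illustration) to show how a cost-per-decedent metric rewards the hospital that treats no one:

```python
# Toy version of Bach's example: Hospital A gives every patient a $1
# pill that cures half of them; Hospital B provides no treatment.
def atlas_metrics(cohort, pill_cost, cure_rate):
    spending = cohort * pill_cost
    deaths = round(cohort * (1 - cure_rate))
    lives_saved = cohort - deaths
    # Atlas-style "efficiency": spending per decedent.
    cost_per_decedent = spending / deaths
    # What actually matters: spending per life saved.
    cost_per_life = spending / lives_saved if lives_saved else float("inf")
    return deaths, cost_per_decedent, cost_per_life

hospital_a = atlas_metrics(cohort=100, pill_cost=1.0, cure_rate=0.5)
hospital_b = atlas_metrics(cohort=100, pill_cost=0.0, cure_rate=0.0)

print(hospital_a)  # (50, 2.0, 2.0): $2 per decedent, $2 per life saved
print(hospital_b)  # (100, 0.0, inf): $0 per decedent, no lives saved
```

On the per-decedent metric, Hospital B looks more "efficient" even though every one of its patients dies.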
And finally Bach challenges what, until recently, the media, most foundation types, and the underachieving liberal health bloggers drank as daily Kool-Aid: that the Dartmouth Atlas controlled for severity of illness by looking only at dead people.
Another methodologic problem is that Atlas analyses assess hospital efficiency overall on the basis of costs incurred for nonrepresentative patients — decedents who were enrolled in fee-for-service Medicare. This group varies among hospitals in terms of severity of illness and is not representative of a given hospital’s overall spending pattern. Regarding illness severity, Atlas researchers note on their Web site (www.dartmouthatlas.org/faq/hospital.shtm) that they focus on “patients who died so that [they can be sure] that patients were similarly ill across hospitals,” further explaining that “by definition, the prognosis of all patients [who died was] identical — all were dead . . . therefore, variations [in resource use] cannot be explained by differences in the severity of illness.” But since some hospitals take care of sicker patients than others, the average severity of illness of patients who die also varies among hospitals. This fact is being ignored when all spending differences are attributed to differences in efficiency.
"Dr. Peter Bach of Memorial Sloan-Kettering argues against using the Dartmouth measures to financially reward and penalize hospitals. There is a healthy and vital debate about how best to change hospital incentives. None of this, however, calls the Dartmouth researchers’ decades of highly respected work—or their fundamental findings—into question. If anything, the debate reinforces the importance of their research."
http://www.newyorker.com/online/blogs/newsdesk/2010/02/the-cost-conundrum-persists.html#ixzz0gOWwIFNc
Meanwhile Elliott Fisher sought the refuge of the Dartmouth student newspaper to set the record straight:
“There’s a pretty good correlation between treatment of patients over 65 and under 65,” Fisher said.
In a response published in the New England Journal of Medicine, Fisher wrote that Medicare data is closely associated with a single hospital, making it a good measure of hospital effectiveness.
Bach’s final criticism of the Atlas data focused on the variation of illness severity between hospitals. Because of this variation, hospitals that care for patients with very severe illnesses could appear less efficient than those that take care of patients with less severe cases, even if they actually operate with the same level of efficiency, he said.
Fisher responded in the interview that the Dartmouth Atlas data is “carefully adjusted” for variations such as illness severity, poverty and price differences.
Talking to a college student -- admittedly, a very smart college student -- Fisher can get away with a mixture of evasion and distortion.
1. There is no correlation between health care spending for people age 65 and over and spending for younger people unless you truly control for severity of illness, which the Atlas fails to do: it never follows two people with the same disease from baseline over time and compares outcomes. The Medicare/non-Medicare variance is wide, as the Bach article shows.
2. Dartmouth and Fisher claim to limit their analysis to those with a list of common chronic diagnoses -- about 90 percent of all people who die in Medicare. They then adjust for a few things like primary diagnosis, age and sex. They have never shown that they can capture the variability between hospitals in severity of illness, and they cannot, even using far more sophisticated measures. Nor do they think they need to risk-adjust for the regional variations. When MedPAC did that, most of the variations went away.
Despite all this, both Gawande and Fisher persist in claiming: "that Bach’s criticism does not undermine the main finding of the Atlas data: regional variations in health care spending show that higher costs associated with specialized procedures do not necessarily lead to better health outcomes. Costs can be lowered by creating more accountable hospitals and moving away from a “toxic payment system” that creates incentives for unnecessary tests, according to Fisher."
http://thedartmouth.com/2010/02/22/news/atlas
Why would they say that when the entire methodology is shot to hell?
Because both (and not just them) make big money off of speeches, consulting and businesses that push the less is more approach.
And the commercialization of the Dartmouth Atlas -- coaching people to ration their own care, in my opinion -- culminated in the formation of a company called Health Dialog by the Dartmouth folks. HD was purchased by a British concern for $750 million. And HD has lobbyists in Washington pushing the adoption of its products in the current health care reform bill.
http://www.nytimes.com/2010/02/23/health/23niss.html?pagewanted=2
I am sure that Nissen was waiting for the climactic scene when the CEO either tries to bribe him or have him poisoned in his hotel room.
Unfortunately, the response he received from the GSK higher-ups was a suggestion to do an analysis of when people actually had their heart problems, since other studies did not show CV side effects in the short time frame he was showing (see below). And the worst thing about that response was that GSK did not offer to pay him tens of millions of dollars to do such a study -- a study Nissen has gladly done before using his ultrasound measurement of heart problems.
Of course there is no mention of Nissen secretly sending a copy of his NEJM paper to Congress in violation of the embargo requirement and not sharing it with the FDA.
Nor is there any mention of what NHLBI said in light of the RECORD study... "NHLBI staff reviewed the scientific findings and arranged for Data Safety and Monitoring Board (DSMB) meetings for both trials. The BARI 2D and ACCORD DSMB meetings reviewed in-depth analyses of cardiovascular disease rates in patients receiving rosiglitazone versus other diabetes drugs in the two trials. The DSMBs also conducted a thorough review of the recently published data on heart attacks and deaths in patients receiving rosiglitazone.
Each DSMB provided recommendations to the NHLBI on how the totality of evidence on rosiglitazone should affect the conduct of BARI 2D and ACCORD. The NHLBI carefully reviewed the DSMBs' recommendations and also thoroughly reviewed the recently published meta-analysis of rosiglitazone and the RECORD trial, along with the accompanying editorials published online May 21 and June 5 by the New England Journal of Medicine. "
http://www.nhlbi.nih.gov/new/press/07-rosi-qa.htm
Oh...
The NHLBI concurs with the DSMBs' recommendation that both the BARI 2D and ACCORD data, viewed in the context of the recent publications, contain no observations that would justify a recommendation to terminate rosiglitazone treatment in the research setting of either study.
The New York Times claims that the RECORD study was poorly designed. In fact, the interim RECORD analysis done to follow up on Nissen's claim had limited statistical power because of an unexpectedly low event rate and incomplete follow-up (a mean of 3.7 years instead of the planned median of 6 years). Still, the interim analysis had the same internal statistical reliability as Nissen's analysis. What the Times fails to note is that when Nissen lumped together all 42 studies, he glossed over the fact that the individual studies were not conducted the same way, with patients with the same disease or drug dose. Further, the 42 studies were small and overall event rates were low, partly because trial durations were relatively short, ranging from 24 to 52 weeks -- about 25 to 50 percent shorter than RECORD. As anyone who does statistics for a living will tell you, adding up a bunch of underpowered, short-term studies does not give you a well-powered study with long-term predictive effects.
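The power point can be illustrated with a back-of-the-envelope calculation. The event rates and sample sizes below are hypothetical, chosen only to mimic small short-term trials with rare events; this is not RECORD's or Nissen's actual data:

```python
import math

def power_two_proportions(p1, p2, n_per_arm, alpha=0.05):
    """Approximate power of a two-sided two-proportion z-test
    (normal approximation; adequate for a rough sketch)."""
    z_alpha = 1.959964  # two-sided 5% critical value
    se = math.sqrt(p1 * (1 - p1) / n_per_arm + p2 * (1 - p2) / n_per_arm)
    z = abs(p2 - p1) / se - z_alpha
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# A single small trial: 200 patients per arm, rare events, and a 50%
# relative increase in risk (0.5% vs. 0.75%) -- essentially no power.
single = power_two_proportions(0.005, 0.0075, n_per_arm=200)

# Pooling many such trials raises the nominal power substantially.
pooled = power_two_proportions(0.005, 0.0075, n_per_arm=14000)

print(round(single, 2), round(pooled, 2))  # roughly 0.05 and 0.76
```

Pooling boosts power to detect the average effect over the 24-to-52-week windows the trials actually observed; it says nothing about events beyond those windows, which is the long-term prediction problem.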
Here's what one article -- written by Sanjay Kaul and colleagues said about the study: "The investigators' own subgroup analyses, which were limited to the small trials alone or to the 2 large trials (DREAM [Diabetes Reduction Assessment with Ramipril and Rosiglitazone Medication] and ADOPT [A Diabetes Outcome Progression Trial]), did not demonstrate statistically significant associations (1). Furthermore, one might reasonably question whether results from the 3 trials that targeted patients with Alzheimer disease (n = 1) or psoriasis (n = 2) who did not have diabetes should be combined with results from other trials that included patients with diabetes or prediabetes. Because rosiglitazone is already contraindicated in patients with heart failure, one might also reasonably limit the assessment of risk to patients without that contraindication and not combine data from the single study in patients with diabetes who had congestive heart failure with data from other studies. Incidentally, this trial exhibited the highest number of myocardial infarctions (n = 5) and cardiovascular deaths (n = 3) among all the small trials in the rosiglitazone treatment group (1).
Read Article
Some people can't handle not being FDA commissioner.... Meanwhile we will keep a running count of the media types who fail to put Nissen, Avandia and other studies about the medicine in context and simply write stories based on the narrative of the insider exposing (once again) the evil drug company.
Jim Pinkerton over at Serious Medicine Strategy does a great job at undermining the assertion of those who claim we spend too much on expensive new technologies and bleating about why prices of technologies go up, not down:
"The price of a CT scanner rose about 55 percent in 30 years, from an inflation-adjusted $1.4 million to $2.2 million. That’s a big increase, but, as Goetz notes, the number of X-ray “slices” a machine can perform has risen from four to 64. That’s a 16-fold increase, or 1600 percent. Which is to say, the increase in the quality of CT machines has risen almost 30 times faster than the increase in the cost. If that’s not a Moore’s Law-level increase, it’s pretty close. "
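Taking the post's figures at face value, the arithmetic is easy to check. The numbers below are Goetz's as quoted, not independent data:

```python
# CT scanner: inflation-adjusted price and slice count over ~30 years.
old_price, new_price = 1.4e6, 2.2e6
old_slices, new_slices = 4, 64

price_increase_pct = (new_price - old_price) / old_price * 100  # ~57%
slice_factor_pct = new_slices / old_slices * 100  # 16-fold, stated as "1600 percent"

# How much faster capability grew than cost:
ratio = slice_factor_pct / price_increase_pct
print(round(price_increase_pct), round(ratio))  # ~57 and ~28
```

The exact figures come out to roughly a 57 percent price increase and capability growing about 28 times faster than cost, close to the post's "about 55 percent" and "almost 30 times."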
Moreover, as I noted in a post to Jim's blog, the increase in the use of PET/CT scans is not profit driven; it is a function of what new technologies, contrast agents and science can do, particularly in predicting, staging and diagnosing a wide variety of cancers. And the cost of a scan in inflation-adjusted terms has declined while the accuracy has increased. These two factors explain diffusion, consistent with Jim's observation.
"...they are highly effective in staging, detecting and personalizing all types of cancer. Moreover, the information from imaging has been critical to the next wave of personalized medicine: development of -- guess what? -- to predict the path of each cancer tumor by linking a CT report to the rearrangement of the genetic code for the tumor. Why is this possible, because of a rapid decline in the cost of sequencing along with an increase in speed and precision. Moore's Law. And this approach could replace CT scans when they become price competitive. Why low cost? A greater increase in use, a further fall in production costs and competition. "
But only if government doesn't get in the way of its use.
Anyone want a government agency to determine whether this new "fingerprinting" of cancers should be used?
seriousmedicinestrategy.blogspot.com/2010/02/can-we-afford-ct-scans.html
Scientists Spot Genetic 'Fingerprints' of Individual Cancers
THURSDAY, Feb. 18 (HealthDay News) -- Researchers have found a way to analyze the "fingerprint" of a cancer, and then use that fingerprint to track the trajectory of that particular tumor in that particular person.
"[This technique] will allow us to measure the amount of cancer in any clinical specimen as soon as the cancer is identified by biopsy," said study co-author Dr. Luis Diaz, an assistant professor of oncology at Johns Hopkins University. "This can then be scanned for gene rearrangements, which will then be used as a template to track that particular cancer."
Diaz is one of a group of researchers from the Ludwig Center for Cancer Genetics and Therapeutics and the Howard Hughes Medical Institute at Johns Hopkins Kimmel Cancer Center that report on the discovery in the Feb. 24 issue of Science Translational Medicine.
This latest finding brings scientists one step closer to personalized cancer treatments, experts say.
"These researchers have determined the entire genomic sequence of several breast and colon cancers with great precision," said Katrina L. Kelner, the journal's editor. "They have been able to identify small genomic rearrangements unique to that tumor and, by following them over time, have been able to follow the course of the disease."
One of the biggest challenges in cancer treatment is being able to see what the cancer is doing after surgery, chemo or radiation and, in so doing, help guide treatment decisions.
"Some cancers can be monitored by CT scans or other imaging modalities, and a few have biomarkers you can follow in the blood but, to date, no universal method of accurate surveillance exists," Diaz stated.
Almost all human cancers, however, exhibit "rearrangement" of their chromosomes.
"Rearrangements are the most dramatic form of genetic changes that can occur," study co-author Dr. Victor Velculescu explained, likening these arrangements to the chapters of a book being out of order. This type of mistake is much easier to recognize than a mere typo on one page.
But traditional genome-sequencing technology simply could not read to this level.
Currently available next-generation sequencing methods, by contrast, allow the sequencing of hundreds of millions of very short sequences in parallel, Velculescu explained.
For this study, the researchers used a new, proprietary approach called Personalized Analysis of Rearranged Ends (PARE) to analyze four colorectal and two breast cancer tumors.
First, they analyzed the tumor specimen and identified the rearrangements, then tested two blood samples to verify that the DNA had been shed into the blood, sort of like a tumor's trail of bread crumbs.
"Every cancer analyzed had these rearrangements and every rearrangement was unique and occurred in a different location of genome," said Velculescu. "No two patients had the same exact rearrangements and the rearrangements occurred only in tumor samples, not in normal tissue," he noted.
"This is a potentially highly sensitive and specific tumor marker," Velculescu added. Levels of the biomarkers also corresponded with the waxing and waning of the tumor.
"When the tumor progresses, the relative amount of the rearrangement increases in the blood and goes down after chemotherapy," Diaz said. "It tracks very nicely with the clinical history of the tumor."
The method would not be used for cancer screening and more research needs to be done to make sure PARE doesn't detect low-level tumors that don't actually need any treatment.
Although this approach is currently expensive (about $5,000 versus $1,500 for a CT scan), the authors anticipate that the cost will come down dramatically in the near future, making PARE more cost-effective than a CT scan.
Under the terms of a licensing agreement, three of the study authors, including Velculescu, are entitled to a share of royalties on sales of products related to these findings.
Maine lawmakers are debating a bill that would require pharmaceutical companies to retrieve unused prescription drugs from households across the state. The measure is designed to prevent those medicines from ending up in Maine’s water supply.
Protecting Maine’s drinking water is of paramount importance. But the bill won’t make Maine’s water any cleaner. And it could raise the price of medicines and stifle biopharmaceutical research.
(The bill is LD 821, “An Act to Support Collection and Proper Disposal of Unused Drugs”)
For starters, more than 90 percent of the drugs that find their way into tap water come from the waste products of those who have ingested their medicines correctly — not from folks improperly flushing their meds down the toilet. So this “take back” program would do nothing to prevent the main source of pharmaceutical chemicals in water.
Moreover, the safest and most effective way to dispose of drugs is to toss them in the household trash, in sealed plastic bags to keep them away from children and pets. That waste is eventually held in secure landfills, preventing any chemicals from leaking into surface waters.
A secondary function of the bill, according to its supporters, is to get pharmaceuticals out of the hands of those who might abuse them.
Any legislation that takes the issue of drug abuse seriously, however, needs to provide communities with the resources to teach their citizens about the dangers of prescription drugs. This bill does not.
Even if the Maine Legislature went back to the drawing board and attempted to design a bill that actually improved residents’ health, lawmakers still would be hard-pressed to create effective regulations. That’s because water quality, by its very nature, isn’t a state issue — it’s a national issue.
The Mississippi River Watershed runs through 31 states. The Chesapeake Bay Watershed serves 17 million people in seven states plus the District of Columbia. There’s no doubt that water safety deserves serious attention from the government. And Maine lawmakers should do their best to ensure that the local water supply remains free of pollutants. But this bill would be ineffective at the state level.
The problems with this bill go beyond its inefficacy. It actually could jeopardize the health care of patients across the country.
Creating the complex and expansive network required to regularly collect unused drugs from resident homes would be a very costly enterprise. Consequently, this bill would dramatically drive up the operating costs for medical research firms, leaving less money for scientists to develop new drugs. That’s particularly bad news for those suffering from conditions like cancer and Parkinson’s, whose lives depend on future medical innovation.
What’s more, drug makers likely will try to recoup dollars lost on recovery programs by raising prices on their products. So folks would see higher pharmacy bills at a time of great economic instability.
Maine lawmakers have crafted a bill that fails to address the important issues of water purity and prescription drug abuse, and actually harms people’s health care.
They can do better.
http://www.nytimes.com/2010/02/19/opinion/19krugman.html?em
"What would work? By all means, let’s ban discrimination on the basis of medical history — but we also have to keep healthy people in the risk pool, which means requiring that people purchase insurance. This, in turn, requires substantial aid to lower-income Americans so that they can afford coverage.
And if you put all of that together, you end up with something very much like the health reform bills that have already passed both the House and the Senate.
What about claims that these bills would force Americans into the clutches of greedy insurance companies? Well, the main answer is stronger regulation; but it would also be a very good idea, politically as well as substantively, for the Senate to use reconciliation to put the public option back into its bill."
Maybe the Krug forgets that it was the very prescriptions he is pushing that helped contribute to the rise in premiums. That, and the fact that insurance companies tried to hold the line on premium increases in 2008. In any event, he ignores the fact that individual premiums under Obamacare will increase by up to 35 percent in 2016 as all these regulations kick in (that's according to CBO). And a public option will not be less expensive, according to both CBO and CMS, unless there are drastic price controls. Those would keep prices in check in the short term, but not overall spending.
Buying across state lines is not a panacea and would likely have unintended consequences (as with any health care policy). But there are a lot of things that could be done to keep premium costs down and predictable, starting with allowing people to buy catastrophic coverage. However, leftists like Krugman would never allow that choice.
Here’s the abstract:
The application of cost-effectiveness analysis in healthcare has become commonplace in the US, but the validity of this approach is in jeopardy unless the proverbial $US50,000 per QALY benchmark for determining value for money is updated for the 21st century. While the initial aim of this article was to review the arguments for abandoning the $US50,000 threshold, it quickly turned to questioning whether we should maintain a fixed threshold at all. Our consideration of the relevance of thresholds was framed by two important historical considerations. First, cost-effectiveness analysis was developed for a resource allocation exercise where a threshold would be determined endogenously by maximizing a fixed budget across all possible interventions and not for piecemeal evaluation where a threshold needs to be set exogenously.
Second, the foundations of the $US50,000 threshold are highly dubious, so it would be unacceptable merely to adjust for inflation or current clinical practice. Upon consideration of both sides of the argument, we conclude that the arguments for abandoning the concept for maintaining a fixed threshold outweigh those for keeping one. Furthermore, we document a variety of reasons why a threshold needs to vary in the US, including variations across payer, over time, in the true budget impact of interventions and in the measurement of the effectiveness of interventions. We conclude that while a threshold may be needed to interpret the results of a cost-effectiveness analysis, that threshold must vary across payers, populations and even procedures.
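The mechanics of a cost-effectiveness threshold are simple; what the authors dispute is the fixed cutoff. A minimal sketch (all costs and QALY gains below are hypothetical) shows how the same intervention flips between "worth it" and "not worth it" as the threshold varies by payer:

```python
def icer(delta_cost, delta_qalys):
    """Incremental cost-effectiveness ratio: extra dollars per extra QALY."""
    return delta_cost / delta_qalys

# Hypothetical intervention: costs $30,000 more, adds 0.8 QALYs.
ratio = icer(30_000, 0.8)  # $37,500 per QALY

# The same ICER, judged against different payer thresholds:
for threshold in (20_000, 50_000, 100_000):
    verdict = "cost-effective" if ratio <= threshold else "not cost-effective"
    print(f"${threshold:,}/QALY threshold: {verdict}")
```

Against the conventional $50,000 benchmark the intervention passes; against a tighter $20,000 threshold it fails, which is exactly why the authors argue the threshold must vary across payers, populations and procedures rather than being fixed.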
‘‘Cost-effectiveness analysis can skirt life valuation by relying instead on the premise that we want our limited resources to achieve maximal benefits (which may be set in units that we prefer not to value monetarily).’’
[Thompson and Fortress, 1980, p. 555[1]]
The complete article can be found here.
The purpose of the study: “To obtain prospective evidence of whether industry support of continuing medical education (CME) affects perceptions of commercial bias in CME activities.”
The method: “The authors analyzed information from the CME activity database (346 CME activities of numerous types; 95,429 participants in 2007) of a large, multispecialty academic medical center to determine whether a relationship existed among the degree of perceived bias, the type of CME activity, and the presence or absence of commercial support.”
CME conflicts of interest! The Cleveland Clinic! A study with zero industry funding or researcher connections! Speed dial to Senator Grassley, right?
Not so fast.
The study’s conclusion: “This large, prospective analysis found no evidence that commercial support results in perceived bias in CME activities. Bias level[s] seem quite low for all types of CME activities and [are] not significantly higher when commercial support is present.”
Could this be the reason there was no mainstream media coverage or press releases from Congress?
You be the judge.
The study can be found here.
Steve Usdin (scribe extraordinaire of BioCentury) offers up some intriguing PDUFA pensées under the seasonally appropriate headline, "PDUFA Blizzard."
The article can be found here.
Once again proving the point that, when it comes to matters PDUFA, the best din is Usdin.
“Podium policy” (when regulators give speeches or media interviews that announce new regulatory expectations) is never a good idea — least of all when the policy implicates First Amendment values and involves potential criminal enforcement.
The issue at hand is pharma and social media – an issue where there is already significant confusion. And now, unfortunately, there’s more.
Consider the comments of Jean-Ah Kang (special assistant to DDMAC director Tom Abrams) in her recent interview with Ignite Health:
“The bottom line is this is a regulated industry, and if you choose to do promotion in that area just make sure that at the end of the day what we’re looking at is in the best interest of public health.”
Dr. Kang then defines what she meant by “the public health”:
“Meaning, is this prescription drug promotion truthful? Is it balanced? Is it accurate? Is it false or misleading? That’s the big picture at the end of the day.”
And then she offers some qualifications:
“Several things come to mind with use of intent. We have regulations and again, they’re not black and white per se, but they exist … Even though someone may not have intended something, if the end result is that the public is misled then it’s a problem.”
And finally, “I mean people have gone to jail over these serious public health issues. So just be aware of the regulatory environment.”
“Intent” to promote against the “best interest of the public health” via regulations that are “not black and white” and over which “people have gone to jail.” Talk about "net impressions." The implications of her remarks are chilling. Chilling, frustrating and disappointing – but not necessarily surprising. After all, it’s all about ambiguity.
Ambiguity is power. That’s why interpretation of FDA regulations (on social media and a host of other issues) is such a vibrant cottage industry. Regulated industry, on the other hand, seeks clarity. Industry wants bright lines. They want to know the rules. They want predictability. This may sound simple, but it has proven to be a fractious bureaucratic kulturkampf within the FDA. “Change is not required,” as management guru W. Edwards Deming once said. “Survival is not mandatory.” And nowhere is this truer or more dangerous than at DDMAC.
Regulators change industry behavior by changing the rules of the game. But changing the minds of regulators, having them embrace bright lines rather than vague, ever-changing expectations based on undefined notions of what serves “the public health,” is a distinctly more challenging proposition.
If the FDA wants to remain relevant (and out of court), they should develop clear rules that safeguard the important First Amendment values at stake. And this is about more than just the speech rights of companies. It’s about the rights of the Internet user (yes, you) to obtain information from a full range of sources – not just the government and plaintiffs lawyers and snake oil salesmen.
Dr. Kang is a smart person and a real believer in the potential of social media to advance the public health and she does her best to portray the FDA process as thoughtful and deliberative. It is. But, at a bare minimum, we are entitled to something more (or perhaps it would be better to say something less) than Jean-Ah’s remarks. After all, the FDA doesn’t have the authority to regulate or even define what is “bad” for the public health writ large. That’s regulatory creep of the first order. (Her complete interview can be found here.)
When I served at the FDA, we struggled with how to both regulate and advance the new field of pharmacogenomics. As Commissioner McClellan said at the time, “pharmacogenomics is a new field, but we intend to do all we can to use it to promote the development of medicines. By providing practical guidance on how to turn the explosion of pharmacogenomic information into real evidence on new drugs, we are taking an important step toward that goal.” The same philosophy of “regulator as colleague” should also be true for the new dynamic of social media. It’s like a game of chutes and ladders. FDA should act as a guide to the ladders and a sentry against the chutes — rather than as the ogre at the foot of the bridge.
“I know it when I see it” as an approach to social media regulation doesn’t cut it. Predictability is power in pursuit of the public health. And social media is as powerful a tool for advancing the public health today as any medical breakthrough you care to name. In 2010, healthcare begins at search.
Predictability is the result of creative, forward-thinking leadership that rises above bureaucratic ambiguity. And it’s never easy, because swimming against the tide of an entrenched bureaucracy never is. But as Commissioner Hamburg and other agency change agents (Drs. Sharfstein, Woodcock and Goodman to name three) demonstrate, it is possible.
As Winston Churchill said, “Ease is relative to the experience of the doer.”
It’s unlikely that Washington will pass meaningful health care reform any time soon. Yet health care costs are still exploding — making quality care unaffordable for too many Americans and putting a financial burden on us all.
Surprisingly, though, there’s a smart move that health insurers can make that will lower costs for consumers and insurers alike, and improve patient health: Reduce co-pays on prescription drugs.
High drug prices lead many Americans to skip doses or quit prescriptions entirely. Yet it isn’t prescription drug prices that are rising — it’s patients’ out-of-pocket costs, or co-pays. Over the past several years, insurance companies have become increasingly reluctant to foot the bill for brand-name medications. Indeed, since 2000, co-pays have increased four times faster than prescription drug prices.
Patients respond to higher co-pays by skipping their meds more often. In 2003, researchers at the University of Oregon studied the effects of introducing a $2 to $3 co-pay for prescription meds among 17,000 patients. Adherence to treatment dropped by 17 percent.
Some insurers are even refusing to cover new prescription drugs. According to a study from Wolters Kluwer Health, insurers’ denial rate for brand-name meds was 10.8 percent at the end of 2008 — a 21 percent jump from the year before.
Abandoning treatment — a practice known as "non-adherence” — has serious consequences for patient health. For instance, people with hypertension who neglect their meds are more than five times more likely to experience a poor clinical outcome than those who don’t. Heart disease patients are 1.5 times more likely.
It also results in higher medical costs, as patients who go off their meds often end up in the hospital. Minor conditions that might have been controlled by inexpensive medications can sometimes balloon into life-threatening illnesses that require surgery or other costly treatments.
This makes sense. After all, a daily cholesterol-lowering drug is far less expensive than emergency heart surgery.
As Congress figures out what to do next on health care reform, private insurers can act now to control their own costs and vastly improve medical outcomes, all while making health care more affordable for average Americans. Reducing drug co-pays is the way to do it.