The U.S. is a wealthy country, and it might be supposed that this helps explain why health care costs so much. But a number of countries with lower health care costs have higher per capita GDPs, and based on per capita GDP alone, the U.S. spends $2,500 more than it should on health care.
Medical Malpractice System
In 2004, the Congressional Budget Office (CBO) estimated that the entire cost of the malpractice system accounted for only about 2% of the overall costs of health care. While malpractice crises, characterized chiefly by sudden, dramatic increases in malpractice insurance premiums, stress physicians and other health care providers, the economic impact on the health care system as a whole appears to be slight. To counter this, the American Medical Association and other medical organizations asserted that the CBO ignored the cost of “defensive medicine,” which accounted for as much as $120 billion a year in health care costs. This contention was disputed on a number of grounds, not the least of which was that it was virtually impossible to determine what medical practices were “defensive.”
In October 2009, the CBO released a report finding that malpractice reforms would reduce health care spending by more than $40 billion over the next 10 years. This is far lower than the figures for defensive medicine that had been put forward by organized medicine, and the savings would amount to only about one-tenth of one percent of current annual health care expenditures. More importantly, the CBO report does not necessarily support the idea that malpractice reforms reduce defensive medicine, defined as practices that do not produce overall patient benefit. One of the three recent studies that the CBO relied on was equivocal on this point.
A second recent study was more unequivocal: “On the other side of the ledger, malpractice liability leads to modest reductions in patient mortality; the value of these more than likely exceeds the cost impacts of malpractice liability.” (The third study did not assess the impact of reforms on patients’ health.)
If aging, wealth, better outcomes, and malpractice costs do not explain the high cost of U.S. health care, what does?
High Prices and Profits
The McKinsey Global Institute estimates that U.S. drug prices are 50% higher than the average prices for other industrialized countries. This is almost entirely attributable to higher costs for brand name drugs. The drug industry argues that these prices are necessary in order to pay for the research and development costs of new drugs. This is disingenuous. In the first place, the drug industry makes enormous profits. In 2008, pharmaceuticals were the third most profitable industry sector in the country (following communications and oil), with a profit margin of 19.3%. Second, the vast majority of new drugs on the market do not offer significant improvements over existing modalities. Finally, as a lawyer who represented drug manufacturers for nine years, I can attest that drug prices are not based on the costs of manufacturing and R&D, but on what the market will bear.
Hospital care in the U.S. is about twice as costly as in other industrialized economies.
U.S. physicians earn on average far more than their counterparts in other countries: twice as much as doctors in Germany and four times as much as doctors in the U.K. The gap is even greater for specialists. One argument in favor of these incomes is that physicians provide the ultimate in social benefit by promoting health and saving lives, and therefore deserve to earn as much as they do. Given the lack of evidence-based medicine, however, the amount of good that physicians do is debatable. Moreover, this argument ignores the fact that physicians in other countries provide the same social benefit for far less money.
Another argument that is often asserted to justify physician incomes in the U.S. is that medical school in other countries such as Germany and Britain is free, while in the U.S. physicians need to repay large medical school loans. A 2007 report by the Association of American Medical Colleges (AAMC) states that most medical school graduates defer loan repayment until after their relatively low-paid residency periods, and that they then will pay between approximately $1,000 and $2,300 a month to retire their debt, depending on the type of medical school (public or private), the type of loan (federal, MEDLOAN), and the repayment period (10 years vs. 25 years). The report also estimates that the average physician income after taxes in 2006 was $119,130, which would make the loan repayment between 10 and 23 percent of after-tax income. This is a hefty chunk, particularly for lower-paid practice areas such as family medicine and pediatrics. What about specialists? According to the Wall Street Journal, in their first year after residency, neurologists, cardiologists, and anesthesiologists earn on average between $200,000 and $400,000, suggesting that the argument about medical school debt is less compelling for these practitioners (and, of course, helping to explain why these specialties are so popular).
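The 10-to-23-percent range follows directly from the AAMC figures quoted above; checking the arithmetic:

```python
# Loan repayment as a share of after-tax income, using the figures
# quoted above from the 2007 AAMC report.
monthly_payment_low = 1_000   # low end of the AAMC monthly repayment range
monthly_payment_high = 2_300  # high end of the range
after_tax_income = 119_130    # estimated average physician income after taxes, 2006

share_low = (monthly_payment_low * 12) / after_tax_income
share_high = (monthly_payment_high * 12) / after_tax_income

print(f"{share_low:.0%} to {share_high:.0%} of after-tax income")
# → 10% to 23% of after-tax income
```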
Average administrative costs for health care in the U.S. are 7%. Although some other countries with multipayer systems have similar costs, this is about twice the average in the industrialized world. We pay higher administrative costs in part because of the paperwork burden that results from the complexity of our system, and in part because of the profit margins of health insurance companies.
Intensity of Services
High health care costs are the result of both high prices and heavy utilization (or “intensity”) of health care services. Elective outpatient surgery is the fastest-growing sector of the health care industry. Comparisons with other OECD countries show that patients in the U.S. undergo three times more ambulatory procedures than patients in other advanced countries; C-sections, knee replacements, cataract surgery, and revascularization procedures are particularly out of sync. High-cost diagnostic procedures such as CT scans and MRIs also are used far more frequently in the U.S. Most importantly, there is no conclusive evidence that the greater intensity of services translates into better health outcomes.
Why is utilization so high in the U.S.? A major reason is the technological imperative: the desire by hospitals and physicians to offer the most advanced medical techniques, fueled by intense industry marketing efforts. Another reason is patient demand. This too is encouraged by industry marketing, such as direct-to-consumer drug ads, which account for almost 15% of drug company promotional expenditures and result in increased spending on prescription drugs. In addition, the more the patient’s care is paid for by insurance, the less incentive the patient has to restrain demand.
But the overriding reason utilization is so high is simply that it can be. With the exception of the number and duration of hospital stays, it has proven extremely difficult to dampen the intensity of services. One cause is “patient creep,” the widely documented tendency to provide high-tech care to broader and broader populations of patients, including older patients and those with less severe symptoms. Another cause is the lobbying power of organized medicine. Medicare has tried to limit physician payments through a mechanism called the “sustainable growth rate” (SGR), which is supposed to reduce physician payments across the board in the following year if total annual Medicare physician spending exceeds a certain limit; each time, however, Congress has been prevailed upon to block the reduction and replace it with a payment increase. (In fairness to the opponents of the SGR, it has been criticized for indiscriminately reducing payments to all physicians rather than just to those who are particularly responsible for increases in utilization.)
Over the years, as well as during the current health reform debate, suggestions have been made to radically alter the U.S. health care system in ways that might restrain or reduce health care expenditures, such as replacing the present public-private system with a single-payer public system similar to those of Canada or the U.K., or eliminating the for-profit health care sector. For reasons that we do not have time to go into, neither of these options is politically realistic for the foreseeable future; the closest we have come to such a drastic overhaul of the system was the Clintons’ failed effort in the early 1990s, and it would have adopted neither of these approaches. (For that matter, it also is not realistic to expect that the operation of the health care system will be entrusted to the private market free of substantial government regulation, and there is little evidence that such an approach would reduce spending.)
Another proposal is to place greater financial risk on patients and their families so that they will have an incentive to spend more wisely. This approach, called “consumer-driven health care,” is the subject of another feature story. As explained there, it has too many problems to become a significant cost-cutting option. In any event, patients in the U.S. already bear one of the greatest financial burdens for their health care compared to patients in other countries, and yet this has not reduced the rate of health care cost increases. More importantly, only 10% of patients are responsible for 70% of all health care expenditures, and these patients, who have acute medical emergencies or prolonged chronic ailments, are unlikely to be able to substantially limit their spending.
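The concentration figure just cited puts a hard ceiling on what consumer cost-sharing could accomplish. A back-of-the-envelope sketch (the 20% behavioral response is purely an illustrative assumption, not a figure from the text):

```python
# Upper bound on savings from consumer cost-sharing, given the
# concentration of spending cited above.
high_cost_share = 0.70             # share of spending by the sickest 10% of patients
other_share = 1 - high_cost_share  # spending that price-sensitive patients control

# Hypothetical: suppose cost-sharing induced the healthier 90% of patients
# to cut their utilization by a fifth (an illustrative assumption only).
behavioral_cut = 0.20

total_savings = other_share * behavioral_cut
print(f"System-wide savings: {total_savings:.0%}")
# → System-wide savings: 6%
```

Even under a generous behavioral assumption, the addressable spending is only the 30% not tied to acute or chronic high-cost cases.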
A final idea that is not likely to be implemented is to place hospitals on annual budgets, similar to the way hospital care is financed in Canada. Although this was proposed during the current reform debate by both Representative John Conyers and Senator Bernie Sanders, administrative hurdles and concerns about longer queues for care make it an unrealistic option.
More promising approaches are summarized by the phrase “crack the WIP,” with WIP standing for waste, intensity, and prices.
If, as it appears, the greater amounts that we spend on health care do not translate into significantly better outcomes or patient satisfaction, then it is likely that at least some of what is being done is unnecessary. Medical necessity is an elusive concept, however. The clearest example of an intervention that would not be necessary would be one that provided no benefit whatsoever to the patient. Then there are interventions that provide benefits, but also adverse effects or other harms, so that there is little or no net benefit to the patient. And there may be interventions that provide some net benefit, but not enough to justify the cost. When policy-makers talk about curbing waste, they often are talking about all three, but there are serious complications when considering the second and third types of interventions as wasteful. One question is: Wasteful according to whom?
Obviously, the determination of whether or not a treatment will provide net benefit depends in part on the patient’s preferences and aversion to risk. Discovering these specifics about the patient is the task of the informed consent process, but we know that this process is highly flawed. For example, 18-45% of surgery patients cannot recall the major risks of surgery, many cannot answer basic questions about the services or procedures they agreed to receive, and 44% do not know the exact nature of their operation.
As for the role of cost, we have already acknowledged that the more that patients are covered by health insurance, the less reason they have to care about costs versus benefits. This means that, in order to reduce spending, the balancing of costs and benefits will have to be done by someone other than the patient, and under a fee-for-service payment system, someone other than the physician, who stands to gain financially the more services that are provided. Yet this is what managed care tried to do through utilization review, and what triggered the backlash against managed care by physicians and patients that crippled its ability to hold down costs.
Even if we were comfortable with some third party making medical necessity decisions, or if we switched to a different payment system that did not reward physicians merely for providing more services, about which more will be said later, how would we know which interventions would provide net benefit to the patient? Presently, there is little scientific evidence on which to base these judgments. The result is practice based overwhelmingly on anecdote and trial-and-error, which in turn leads to what in retrospect looks like a great deal of waste. Hence, creating more scientific evidence is one of the major initiatives being fostered by the Obama administration.
Unfortunately, this is easier said than done. One enormous challenge is that the clinical studies that traditionally have been relied upon to generate evidence are extremely expensive and so time-consuming that the interventions being tested often have been overtaken by newer technologies before or shortly after the studies are completed. An even greater obstacle is the fact that clinical studies measure outcomes in large cohorts of subjects. Given our growing understanding of individual differences in the nature of disease and in response to treatment, the results of most of these studies cannot tell us much about what the outcome will be for an individual patient. One solution is to design studies that take individual genetic and other differences into greater account, but this makes them harder and more expensive to conduct. Another solution, being advocated by the Obama administration, is to mine effectiveness data from large electronic medical records systems. But this requires setting up and maintaining those systems, which is incredibly costly in itself and diverts caregivers’ attention from patients to the completion of increasingly detailed and lengthy computerized forms.
Even if a substantial amount of personalized evidence of effectiveness becomes available, there is likely to be considerable political resistance if it is used by government officials or insurers to deny services. Critics already are characterizing such decision-makers as “death panels.” Opposition successfully derailed the Administration’s initial proposal to use stimulus money to spur “cost-effectiveness” research, which attempts to translate the benefits of alternative treatment approaches into standardized units called “quality-adjusted life years” (QALYs), and then to compare the treatments in terms of their cost per QALY. (Cost-effectiveness research does indeed raise many ethical and legal concerns. Not the least is that it tends to devalue the lives of persons with disabilities, which is a violation of the Americans with Disabilities Act.) More recently, resistance to evidence-based practice has been illustrated by the backlash against revised practice guidelines for mammograms.
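The QALY arithmetic mentioned above is straightforward: each treatment’s cost is divided by the quality-adjusted life years it yields, and treatments are then ranked by cost per QALY. A minimal sketch, with hypothetical treatments and invented numbers (nothing here comes from an actual analysis):

```python
# Cost-effectiveness comparison in cost per QALY, using hypothetical data.
# QALYs = life-years gained, weighted by quality of life (1.0 = full health).
treatments = {
    "treatment_a": {"cost": 50_000, "life_years": 4.0, "quality_weight": 0.75},
    "treatment_b": {"cost": 20_000, "life_years": 2.0, "quality_weight": 0.90},
}

def cost_per_qaly(t):
    qalys = t["life_years"] * t["quality_weight"]
    return t["cost"] / qalys

# Rank treatments from most to least cost-effective.
for name, t in sorted(treatments.items(), key=lambda kv: cost_per_qaly(kv[1])):
    print(f"{name}: ${cost_per_qaly(t):,.0f} per QALY")
```

The controversy described in the text is not over this arithmetic but over using the resulting ranking to deny coverage.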
Another approach to reducing waste is “pay for performance” and other forms of “value-based” payment, whereby providers are paid on the basis of patient outcomes rather than the intensity of services. Intuitively this makes a lot of sense, but like evidence-based practice, implementing it is more difficult than it might seem. Without evidence of what will produce good outcomes, how are providers to know what to do? Their response may well be to “cherry-pick” patients, accepting only those with conditions that are likely to respond well to treatment. The solution is to adjust the measurement of patient outcomes by the severity of their condition, but this is also challenging: According to the Centers for Medicare and Medicaid Services, for example, none of the available methods for adjusting for severity can account for even half the variation in the costs of care for Medicare patients.
There is widespread agreement that, in order to reduce the volume of services consumed by patients, the remuneration of providers must be decoupled from the intensity or number of the medical services a patient receives. There are numerous proposals for how to do this. One approach is to pay providers a lump sum based on the patient’s condition. A version of this is the DRG system under Medicare, which pays hospitals a fee per admission based primarily on the patient’s diagnoses, thereby eliminating any incentive to keep the patient in the hospital longer or provide extra services. A similar payment system is being suggested for chronic care, where physician groups would be paid a fixed amount to care for a patient’s chronic condition. Again, a major concern is how to keep providers from gaming the system.
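The incentive difference between fee-for-service billing and a DRG-style lump sum can be made concrete. In this toy sketch, all per-day rates and DRG amounts are hypothetical; the point is only that fee-for-service revenue grows with each extra day and billed service, while the lump sum is fixed by the diagnosis at admission:

```python
# Toy contrast of fee-for-service vs. DRG-style lump-sum hospital payment.
# All rates and DRG amounts below are hypothetical, for illustration only.

DRG_RATES = {"pneumonia": 9_000, "hip_replacement": 14_000}

def fee_for_service_revenue(days, services, per_day=1_500, per_service=400):
    # Revenue rises with each additional day and each billed service.
    return days * per_day + services * per_service

def drg_revenue(diagnosis):
    # Revenue is fixed per admission, whatever the length of stay.
    return DRG_RATES[diagnosis]

short_stay = fee_for_service_revenue(days=4, services=10)  # 4*1500 + 10*400
long_stay = fee_for_service_revenue(days=8, services=20)   # 8*1500 + 20*400
print(short_stay, long_stay, drg_revenue("pneumonia"))
```

Under the lump sum, a longer, more service-heavy stay earns the hospital nothing extra, which is exactly the incentive the DRG design removes.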
Most physicians are paid on a fee-for-service basis, giving them an incentive to provide as many services for patients as possible. Managed care has tried to limit this by penalizing physicians financially for providing too much, and by paying them a lump sum per patient per calendar period (“capitation”). This has spawned criticisms that physicians then have an incentive to provide too few services. The solution would seem to be to put physicians on salaries, and not allow their salaries to fluctuate based on behaviors that are cost-saving but not beneficial to patients. Some physicians fear that this will convert them into mere employees, with too little professional status or autonomy, but there are plenty of successful models of salaried professionals, including, in the case of physicians, Mayo and other multi-specialty clinics, about which more later.
Develop Practice Guidelines
A different approach to reducing intensity is to inform physicians and patients about what care is medically appropriate through practice guidelines. This topic is worthy of a conference all its own, but suffice it to say that although heralded with much promise, practice guidelines in large part have been a disappointment. Lars Noah, Professor of Law at the University of Florida, explains why.
First, practice guidelines may be based on opinion and habit rather than on sound science. As John D. Ayres observes in an understatement quoted by Noah, guidelines “are constructed on a somewhat fragile data base.” Second, Noah points out, “[t]he process of developing guidelines, which some commentators have described as ‘haphazard,’ may itself introduce serious distortions.” There may be conflicts of interest on the part of the entity issuing the guidelines, such as when specialty societies seek to preserve their turf against inroads by non-specialists, or when guidelines are issued by health insurers or drug companies. Third, a guideline may become outdated, and the issuing entity may not employ an adequate method for updating it. Fourth, there is a proliferation of guidelines and no clear way to identify which guidelines are better than others, and which are definitive. Conceivably, the push for better evidence-based medicine will make better guidelines possible, but this depends on overcoming the problems with evidence-based research discussed earlier.
It is important to hold down or reduce the prices of care not only for its own sake, but also to counteract any perverse effects from reducing intensity. If the volume of services is decreased, a provider who wishes to continue to earn the same amount of money has an incentive to raise prices (and vice versa if prices are reduced). One way to reduce prices is to reduce the demand for health care. This is unlikely to occur for a number of reasons, including the aging of the population and the number of people with unhealthy lifestyles. Prices instead must be controlled directly, with health care purchasers declining to pay more. Government price controls, similar to those implemented during World War II and adopted on a short-term basis in 1971 by President Nixon, are one approach, but economists generally agree that this postpones and exacerbates, rather than solves, the problem.
The alternative is for health care purchasers to have enough market power to drive hard bargains with providers. Democratic health reform efforts therefore would give the government the power to negotiate drug prices on behalf of Medicare Part D, rather than leave prices to be negotiated between suppliers and the smaller private health plans that provide Part D coverage. One rationale for a public insurance plan option is that it would be able to spearhead price reductions that would then spread to private plans as well.
One way to cushion a significant drop in physician income would be to reduce or eliminate their medical school indebtedness. But no one is seriously proposing this, presumably because of the cost to taxpayers.
As a way of holding down costs without reducing quality, President Obama has focused considerable attention on one model for delivering health care: the large, multi-specialty clinic, represented by institutions such as the Mayo and Cleveland Clinics. As noted earlier, their physicians are salaried, and they can achieve economies of scale that are not possible for smaller providers. They also are highly efficient, with management implementing numerous time- and labor-saving systems, such as integrated medical records. And they tout their ability to avoid fragmentation of care and achieve quality through the use of multi-specialty teams.
But critics point to the fact that these clinics operate under very special circumstances, and doubt that their experience can be widely duplicated. According to the Washington Post, for example, “Mayo's patients are wealthier, healthier and less racially diverse than those elsewhere in the country. It has few poor patients. It limits the number of procedures it performs per patient, but the rates it charges private insurers and self-paying patients is higher than average, allowing it to thrive despite the lower Medicare spending cited by its supporters.” The Cleveland Clinic also has received criticism for overcharging uninsured patients and for not providing enough care to poorer members of the local community.