Richard Kahn, PhD, recently gave a keynote address at the Diabetes Technology Society meeting, which was held October 25-27, 2007, in San Francisco. The Diabetes Technology Society meeting brings together leading research scientists from industry and academia to share their advances in technologies involved in improved diabetes care. Dr. Kahn is the Chief Scientific and Medical Officer of the American Diabetes Association. His presentation addresses the history behind our current diabetes technologies and considers the implications of diabetes technological development for the present and future.
Diabetes Technology in the 21st Century
Delivered by Richard Kahn,
Chief Scientific and Medical Officer of the American Diabetes Association,
for the Diabetes Technology Meeting 2007
It’s a great honor and privilege for me to be here today to talk to you about my perspective on the future of diabetes technology. I appreciate being given this opportunity by David and his planning committee, although I must admit that, despite my title and position in the American Diabetes Association, I am far from being an expert on this subject, which all of you know much better than I.
I’m going to structure this talk as both a historical perspective of American medicine in general, and diabetes specifically, as well as an effort to predict the future. Of course, predicting the future is a sure thing if the audience has a short memory span. But today, that’s a risky proposition. I do think we have a better chance of predicting what will happen in coming years if we consider where we’ve come from, and where it seems we’re going. Although in a horse race many surprises occur in the homestretch, if we look carefully at how well the horses have trained, and where they’re placed as they go around the track, we’re likely to be more accurate in our prediction of who will win.
So let’s start with my view of the history of American medicine along with the development of diabetes care over the last century. I think it’s fair to characterize American medicine over the last hundred years as a relationship between doctor and patient in which the physician determined what is best for the patient, and the patient was expected to comply with all recommendations. It was understood that physicians acted with only the interests of the patient in mind, and what they did, or recommended, sprung from a vast compendium of knowledge that was unfathomable to patients.
Doctors had innumerable years of training in the complexity of the human body, the countless afflictions we might acquire, and how best to treat them all. They were society’s best and brightest professionals, and therefore, everyone tacitly assented to the unwritten proposition that they knew what they were doing. None of this was ever overtly questioned by either the medical profession or the laity. Despite a low-grade level of chronic grumbling on both sides, and occasional attention being called to certain inadequacies, everyone agreed that the system worked well.
In diabetes care, history really starts in the 1920s with the discovery of insulin, followed not long after by the hypothesis, and general agreement within the medical profession, that good glycemic control was important in reducing the ravages of the disease. The clinical measurement of glucose control at that time relied on laboratory measurements of blood glucose at the time of an office visit and, for some patients, urine glucose measurements at home.
Now fast forward to the late 1960s—a period that ushered in the concept of individual self-determination. The generation of that time loudly voiced their discontent with the inequities of society, and expressed a marked resentment of any voice of authority. Paternalism and unquestioned obedience were no longer tolerated, if not outright disdained. Many people, motivated by the process that was given the name “consciousness-raising,” became aware for the first time that it was fair and proper to question anyone with standing, or anyone who had a demonstrable influence over events that affected society.
Physicians were not excluded. Articles in the popular press began to challenge their basic premise of authority, their presumption of uniform effectiveness, and even their benevolence. But unlike most other arenas of society, medicine itself was also changing rapidly during that time. So rapidly in fact that society was unable to get a sufficient grip on medicine’s complexity to be able to question or direct the care delivered, or, even more frustrating, to place value on the care received.
The unprecedented rate at which new medical knowledge was accumulated and new technologies discovered, revolutionized diagnosis and therapy. The process of clinical decision making was fast becoming inordinately complex. Choices were wider and much harder to make, and were fueled by the rapid development of university medical centers, the rise of the pharmaceutical and device industry, and the increasing availability of government grants for biomedical research.
In diabetes care, the early ‘60s ushered in the availability of oral drugs—the sulfonylurea tolbutamide was licensed in 1957, and others followed soon after. New insulins and glucagon also emerged, and like the rest of medicine, diabetes care was no longer a simple proposition. Indeed, internists specializing in diabetes care were self-proclaimed diabetologists.
What would prove to be the single most revolutionary breakthrough in the management of diabetes since the discovery of insulin was introduced quite inauspiciously in 1964. In a two-page advertisement in the journal Diabetes, the Ames Company introduced test strips for blood glucose monitoring. Dramatic in its simplicity, the ad pictured a finger from which a drop of blood was suspended over a Dextrostix. The caption proclaimed, “A one minute test for blood glucose.” The era of diabetes self-management thus began, even though physicians over many previous decades had stressed the importance of good eating habits and proper nutrition.
But the full impact of monitoring would have to wait an additional decade until meters became available to self-monitor glucose at home. Soon after came the invention of a sharp lancet, and then meters that were truly portable. The A1C test also appeared in the late ‘70s, and all this signaled the fact that the medical-industrial complex would benefit those with diabetes as it did most other diseases and conditions.
It was this blizzard of new discoveries in the ‘60s, ‘70s and ‘80s, appearing alongside the notion of a patient’s right to make informed decisions about the course of his or her own care, which spurred the public to make demands on their doctors, and on the profession, to become more open and transparent in their decision making. Sharing of all available information, including the benefits and risks of drugs and interventions became institutionalized, in both the practice of medicine and in communication between clinicians. This, in turn, stimulated the need for a greater quantitation of clinical benefit, and along with ever improving statistical methodologies, physicians would now have much more information to make informed decisions.
Although patients were now, in the ‘80s, routinely asking questions, and physicians could no longer get by easily without any discussion of treatment options and effects—our health care system was still structured to pay any cost incurred, regardless of whether the service was necessary or appropriate, or whether there were any robust studies showing clinical benefit.
With essentially no limit to spending, and the corporate imperative to reward shareholders, the medical industry established enormous marketing and sales budgets. The then embryonic discipline of technology assessment could easily be overwhelmed by advertisements and sales reps who deftly pitched the latest medical test or procedure. Medical proselytizers emerged, called key opinion leaders, to support industry’s contention that every new development was worth far more than its cost. Absent the imperative of performing a randomized controlled study, technology was usually sold on the basis of observational studies and simple proof that the technology worked.
In the late ‘80s, in diabetes care, self-monitoring of blood glucose was simply a given. That is, it was of virtually unquestioned benefit, although still not commonly done by most patients. And a similar benefit was bestowed on nearly every other office-based practice related to diabetes care.
Tests, procedures, drugs—whatever got approved was marketed as if there were no shortcomings, no patient who couldn’t benefit, no limits or constraints to their use, and, most important, everything was well worth the price—any price.
In hindsight, therefore, it would seem natural that the early ‘80s gave birth to the term “evidence-based medicine”, as well as clinical guidelines and standards promulgated by organizations with an interest in helping clinicians sort through an increasingly complex world of medical practice. In diabetes, clinical guidelines first appeared in 1988, generated by my organization, and were almost entirely driven by observational studies and consensus expert opinion.
From the viewpoint of a patient, new medical technology and treatments were awesome advances that held enormous promise to change the course of diseases, reduce pain, and bring an end to suffering. And patients had to pay hardly anything, usually nothing.
In a totally unconscious and unwitting confluence of circumstances that would feed on each other, patients wanted health care at any cost, clinicians were ready and eager to deliver therapy with virtually no constraints, industry couldn’t deliver novel tools fast enough, and payers (particularly employers) had no choice but to pony up.
Not surprisingly, therefore, the cost of health care in the United States soared. From just a low single-digit share of the gross national product in the ‘70s and ‘80s, the cost of health care rose almost exponentially. In 2006, it exceeded 16% of our GDP. This amounts to spending about $7,100 on health care each year for every man, woman, and child in America. Do we get our money’s worth? The answer is: hardly.
Health care in America costs more, per capita, than in any other country in the world—and sadly, by the vast majority of measures one can think of, our health outcomes are nowhere near the best in the world, even when you adjust for age, race, sex, and socioeconomic status. Moreover, recent reports suggest that only about half of US adults or children receive care consistent with current recommendations. In other words, the technology, medications, and information are all there—but they are not getting to the patient. We seem to have what we need, but putting it into routine practice seems to elude us.
Perhaps worse, there are studies showing that regions of the country with higher health care spending levels do not experience better health outcomes, nor do they gain better access to care, or even report greater satisfaction with the care they do get. In fact, a recent report showed that in many states, there was a negative association between quality of care and health care spending.
Among the 30 most developed nations in the world, the United States ranks near the bottom on most standard measures of health status, and among 192 nations for which 2004 data are available, the United States ranks 46th in average life expectancy from birth and 42nd in infant mortality. Conversely, we lead the world in per capita use of most diagnostic and therapeutic medical technologies.
All these facts are not obscure pieces of trivia. Our government, payers, health policy gurus, managed care organizations, employers, and business coalitions recite them like children’s rhymes. They are the gospel that draws converts into the church of health care reform.
The current preoccupation with cost and outcomes wasn’t just hatched. In the early ‘90s, when advancements in care were in high gear, insurers, payers, and health gurus invented a new paradigm to stem the financial bleed. It was the child of rising costs, a perceived misplaced emphasis on treatment rather than prevention, and an attempt to reform health care into a true market-driven enterprise. They called it managed care, and a collateral offshoot was a new interest in cost-effectiveness. Thus began the era of quality improvement.
In diabetes, the ‘90s ushered in a preoccupation with glycemic control—and rightfully so. New drugs to lower blood glucose were commercialized, and strip and meter sales soared.
The ‘90s also gave birth to many other advances in technology without which we would have made little progress in controlling the ravages of the disease. Laser photocoagulation, insulin pumps, angioplasty and bypass procedures, monofilaments for foot exams, sophisticated glucose meters, and many more technological advances have given people with diabetes a far better life than was imagined even a decade or two earlier. They have certainly saved lives, improved many more lives, and made diabetes manageable for millions of people. Yet in some instances, for example self-monitoring in type 2 patients on oral drugs, the technology was not preceded by any long-term, randomized controlled study showing benefit. Of note, for that technology, even now, we do not have robust proof that it works in such patients, yet Medicare is spending over $1 billion to provide it.
In the ‘90s, the quality improvement movement reached diabetes care as well. In 1994, diabetes performance measures were invented, and certifying bodies like NCQA and JCAHO, along with the largest payer of all—Medicare—rapidly embraced them.
Confirming what most experts suspected, the quality of diabetes care in America left much to be desired, and increasing attention would now be paid to improving performance across all the parameters we knew for certain to be important. At the same time, because providing health care was becoming a true business, health care systems were astonished to find how much of their costs were due to diabetes. For example, diabetes now costs Medicare nearly one-third of its entire budget.
As the new millennium was about to begin, attention to cost and quality throughout all of medicine took center stage. The words overuse, underuse, and misuse entered the medical lexicon.
A preoccupation with how we get our money’s worth, or how we fix this broken system, became the focus of conferences and of articles in almost every issue of leading medical journals. The rallying cry was named health care reform, and under that umbrella, the deficiencies of our health care system are documented almost daily.
And in the late ‘90s consumers discovered the internet—not only for shopping, but to learn about diseases and their treatments. What they should be doing, getting, and avoiding, and most importantly, what constitutes quality health care, would become the subject of dedicated health information web sites, trafficked by millions of people daily. Google recently reported that diseases and their treatments get more searches than any other subject or topic we know. The public was turning into health care shoppers, and with that knowledge and sense of confidence, they have now lost nearly all reluctance to question any procedure, test, or treatment. The phrase “Is this really of any value?” is the mantra now heard over and over by all of us.
In diabetes, the new millennium was accompanied by an emphasis on cardiovascular disease and diabetes, along with heightened attention to better and easier ways to measure glucose, deliver insulin and resolve the rate limiting factor that prevents us from achieving perfect glycemic control—hypoglycemia.
Insulin delivery devices and more sophisticated glucose monitoring devices were introduced, but medicine, as I’ve just reviewed, could no longer embrace the simple construct of “if you make it, they will buy it.” Technology introduced in recent years, and in the foreseeable future, undergoes much more scrutiny. The value proposition, cost-effectiveness, and, most important, the need to prove real benefit are concepts that have become no less of a concern than showing the technology actually works.
Unfortunately for patients, the quality of diabetes care in America is still unsatisfactory, and the financial toll of diabetes is inordinate and growing faster than ever. Those factors might suggest that any new technology would rapidly be embraced—but I don’t think that’s happening. That’s because we’ve moved into a cost-cutting environment, and one that asks for greater proof of effectiveness; no longer do advances easily translate into sure sales. For example, I believe the sale of insulin pumps or continuous glucose monitors would soar if there were a long-term, randomized controlled trial showing that either of these technologies really does benefit people with diabetes, and is worth the added cost.
So all that I’ve reviewed leads us up to today—and to tomorrow. Given our history, and a careful read of the current health care movement, predicting the future of diabetes technology should be easier than saying who will be our next President.
I think you can count on more attention being paid to technology that enables physicians to provide the evidence-based care that’s already available.
Addressing the gulf between what we know and what we get, a recent Institute of Medicine report titled “Crossing the Quality Chasm,” which has received incredible attention and endorsement, identified six areas for major improvement: we need to make health care more effective, efficient, safe, timely, equitable, and patient-centered.
It’s obvious that half of this list of critical areas for improvement does not speak to the need for more treatments or traditional technological advances: that is, we need more timely, equitable, and patient-centered care. Rather, these three all relate to needed improvements in what are called systems of care. And that’s where I believe new technology will be eagerly accepted.
Technology that reduces medical errors will be scooped up by hospitals and clinics; technology that identifies errors will be embraced much more than one that adds another way to make errors.
Technology that more accurately assesses the risk of disease and provides guidance on the most cost-effective therapy will greatly outsell new versions of the technology we already have. And technology that corrects inefficient or wasteful processes now rampant in hospitals, clinics and physician practices, will be greeted with open arms.
We need low-cost, highly effective solutions to our key problems, while avoiding the introduction of advances that result in incremental spending with no clear benefit. The golden goose of technological progress will remain alive and well nourished if we can turn our attention to the overarching problems American medicine must now confront; if we do, the rewards will be much greater.
In comparison to the money spent to develop new technologies, we spend a paltry sum on finding ways to improve the use of technologies we already have. For example, in one study it was estimated that just reminding physicians to act on elevated cholesterol values would be seven times more effective in preventing heart attacks than replacing older cholesterol-lowering drugs with newer, more potent ones. A technology that gets the 70% or so of people with diabetes who are not taking their medications as directed to start taking them as they should would likely have far greater benefit than another monitoring or insulin delivery device.
What medicine is also now desperate for is technology that facilitates accurate and timely communication between doctors, and electronic tools to remind doctors and patients to do what they’re supposed to do. We need those tools much more than we need yet another data set, treatment, or procedure.
We need technology that improves systems of care, not technology that makes the existing systems more complicated. And my bet is that another gadget that adds complexity will receive far more scrutiny, and have many more hoops to jump through, before any health plan or clinic approves its use.
Please know that I am not advocating that medical advances be abandoned in favor of system solutions—both are vital. And everyone agrees that the benefits of health care technology are often substantial. But we need, and will move toward, a new equilibrium between investing in new treatments and investing in delivering what we have.
I concede that solutions that make the delivery of quality care more systematic are not as sexy as implantable closed-loop islet factories, or new devices to make insulin delivery more convenient, but technology that improves systems of care is apt to save and improve more lives—and such technology will be amply rewarded in coming years.
I also predict there will be less tolerance for spending huge sums on new treatments, versus new technology that enables us to more effectively use what has long been available. We are fast approaching the point when progress in providing good care impresses the public and payers as much as the roll-out of a device that measures another molecule, or is another way to measure the same molecule.
But that’s not all that’s coming. There is yet another tsunami that is likely to materialize, given where we’ve come from and where we seem to be going. As health care costs continue to rise faster than inflation, thus squeezing consumers and payers alike, the principles of accountability—that each incremental dollar should provide something of real value to patients—will become increasingly important.
As the cost of treating disease continues to spiral upwards, more of the cost burden will be shifted to employees, who will then find their co-payments, insurance premiums, and deductibles climbing. The business case for repairing a dysfunctional delivery system will be overwhelming, and those who pay the bill, 200 million adults, not just a few thousand health plans and payers, will demand more and better evidence of benefit than what’s asked for today.
Surely, this will also lead consumers to finally consider, weigh, and ponder the cost of health care. They might well go shopping, not just for information, but for bargains and sales, or a package deal on a year’s worth of diabetes supplies and care. True competition in the provision of health care services is likely to take hold.
The public will surely wonder out loud why it is that nearly every advance in technology seems to lower prices—TVs, computers, phones—but in medicine—new technology is virtually always more expensive. I think we’re just beginning to see the backlash emerging. And if the public manages to shift more of the cost of health care onto the government, we can be certain that the feds will become even more pre-occupied with cost and quality.
And next it won’t be surprising when industry’s profit-margins, pricing structure, and marketing plans come under even greater scrutiny than we see now. The current impetus to get more evidence of medical benefit will be accompanied by an equivalent mandate that seeks greater value for fewer dollars spent.
All that I’ve said is certainly not meant to scare or criticize. We must not disparage technology, nor slight its contribution. We should, however, more fully appreciate that the nature of American medicine is changing. The years of unlimited spending and the ready acceptance of any technology that simply works are drawing to a close.
The culture and values of Americans have also changed. There is a growing belief amongst the public that medicine is a business, not just a service. And because it’s now quite all right to shop for care and direct the care that’s given, consumers are beginning to have cost and value on their mind when entering a health care facility, much as they do when entering Circuit City.
For new technology to be embraced and thrive, we must keep a watchful eye on the shifting sand, and not be deceived by a mirage of the past. I hope this discussion stimulates you to do your own study of how we got to where we are. I’m sure that will lead you to appreciate better the coming road ahead, and to plan accordingly. Thank you.