4 C’s of the Quantified Doctor-Patient Relationship
by Margalit Gur-Arie 11/09/2015
The four Cs for "quantifying" a doctor-patient relationship are: Choice for both patients and physicians, Competence of the physician, Continuity of the care relationship and treatment, and lack of physician Conflict of interest.
1. Choice

For patients, this means choice of practice, settings, primary care physician, specialists, hospitals, and choice among treatment alternatives. Surely the degree to which these choices are available to patients can be objectively calculated, rated and ranked, as is now fashionable. For example, where patients are assigned to physicians by third parties, the relationship would score a big fat zero. A point or two would be awarded to a vertically integrated system where patients can choose from the physicians employed by the group. Scores would be proportional to network size and variability for more traditional plans, with Medicare fee-for-service and cash-only practices getting the highest scores. Obviously, scores would need to be incremented or decremented to account for individual scenarios.
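A scheme like the one above could be sketched as a small scoring function. To be clear, the category names and point values here are entirely hypothetical, chosen only to illustrate how such a score might be assembled from objective facts about a plan:

```python
# Illustrative patient-choice score; categories and point values are
# hypothetical, not a validated instrument.

def choice_score(assignment_model: str, network_size: int = 0) -> float:
    """Return a 0-10 'choice of physician' score for a coverage model."""
    if assignment_model == "assigned_by_third_party":
        return 0.0                       # patient has no choice at all
    if assignment_model == "vertically_integrated_group":
        return 2.0                       # choice among employed physicians
    if assignment_model == "network_plan":
        # score grows with network size, capped below open access
        return min(8.0, 3.0 + network_size / 500)
    if assignment_model in ("medicare_ffs", "cash_only"):
        return 10.0                      # widest possible choice
    raise ValueError(f"unknown model: {assignment_model}")
```

Real scoring would of course need the adjustments described above for referral patterns, hospital privileges and plan policies.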
Choice of specialists and hospitals can be inferred from the same variables as measured above, but adjustments will need to be made to account for hospital privileges and referral patterns of the primary care physician. This too can be measured and scored pretty accurately from easily obtainable hard data. Choice among treatment alternatives is a bit trickier, particularly in primary care. Using process measures, sample documentation and insurance plan policies, one could derive an individualized measure of choices available to patients. It is important to note that here we are not measuring “appropriateness”, “stewardship of scarce resources” or how “wisely” people choose, nor do we measure “education” about options. We measure the actual availability of treatment options.
2. Competence

How does one measure physician competence? Arguably, all current "quality" measures, public reporting and board certifications aim to quantify and ensure precisely the competence of doctors, in a roundabout way that is failing to measure anything of consequence. If we describe a competent physician as one who stays up to date, has good technical and diagnostic skills, exhibits good clinical judgement and is cognizant of his or her own limitations (as Dr. Emanuel did), we could devise better ways to assess competence. Staying up to date is trivial to measure. Technical and diagnostic skills, as well as clinical judgement, are very difficult to assess objectively, and perhaps this is why all our faux measuring schemes seem woefully inadequate.
We can certainly envision physicians assessed by their peers (perhaps anonymously or through virtual grand rounds collaboratives), but professional competence cannot be discussed until we quantify the prerequisite time variable. It makes little difference whether a physician is competent or not, if the patient rarely sees the doctor, or if visits are limited to a few minutes of furious typing, clicking and scrolling. So here is one variable that can be objectively and rather easily quantified: time spent with patients by severity of chief complaint, patient health status and vulnerability. We can get fancy and measure frequency of visits and total time spent per patient per year, adjusted for a host of variables.
Another factor closely related to competence in primary care, and not explicitly addressed by the C's framework, is comprehensiveness. This too can be measured objectively. The range of conditions treated by the physician, and the list of those routinely referred out, can be compiled, ranked and assigned relative scores accounting for frequency of occurrence, along with patient characteristics. For example, a physician treating large numbers of elderly diabetics with multiple comorbidities would garner more competence points than a physician who spends most of his time taking telemedicine calls for minor and limited ailments. A physician who admits and manages her own patients when hospitalized would rank higher than physicians who never set foot in a hospital.
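In its simplest form, before any risk adjustment, a comprehensiveness measure of this sort reduces to the share of encounters the physician manages in-house versus refers out. The sketch below is a deliberately naive illustration of that idea (the condition names and counts are made up):

```python
# Hypothetical comprehensiveness index: the fraction of encounters the
# physician manages in-house, weighted by how often each condition occurs.
# A real measure would also adjust for condition complexity and patient mix.

def comprehensiveness(managed: dict, referred: dict) -> float:
    """managed/referred map condition -> encounter count; returns 0..1."""
    total = sum(managed.values()) + sum(referred.values())
    if total == 0:
        return 0.0
    return sum(managed.values()) / total
```

A physician managing 100 of 125 encounters in-house would score 0.8; one who refers out nearly everything would approach zero.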
3. Continuity of care

Continuity of care is another term for long lasting, comprehensive relationships, and it can be accurately quantified with very little effort. Both PCMH and standard patient experience surveys include vague attempts to quantify continuity, but those could be misleading. Continuity of care is now applied loosely to teams of clinicians, such as residency groups, and it does not account for how appointments are conducted. When the patient is seen by a team member, and the billing doctor sticks his head in for a few seconds to say hello, does this count as continuity? When any and all patient interactions that do not involve a face-to-face visit are "handled" by other team members, and never the physician, does that count as continuity? How about outsourcing complex care management in between visits altogether, which is the "unintended" consequence of the new Medicare chronic care management fee?
It is important not to confuse continuity of care with continuity of medical records, or care coordination, when quantifying this aspect of the doctor-patient relationship, but other than that this may be the easiest factor to quantify objectively. A physician who always sees his or her patients, is always available in between visits to provide clinical advice, and has maintained this relationship with individual patients over long periods of time, would score high on this factor. Almost by definition, solo practitioners and many direct primary care physicians should top the charts on continuity. Similar to the quantification of patient choice, here too we must account for the vagaries of health insurance marketplaces which are increasingly empowered to break any relationship at any time on a whim.
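One established way to put a number on this is the Usual Provider Continuity (UPC) index: the fraction of a patient's visits made to the physician they see most often. A minimal sketch (the physician identifiers are placeholders):

```python
from collections import Counter

# Usual Provider Continuity (UPC) index: the fraction of a patient's
# visits made to their most frequently seen physician. 1.0 means every
# visit was with the same doctor; values near 0 mean fragmented care.

def upc(visit_physicians: list) -> float:
    if not visit_physicians:
        return 0.0
    top_count = Counter(visit_physicians).most_common(1)[0][1]
    return top_count / len(visit_physicians)
```

A solo practitioner who sees every one of his or her patients personally would score 1.0, which is consistent with the observation above that solo and direct primary care physicians should top the charts.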
4. (non) Conflict of interest
This is arguably the most important factor in the doctor-patient relationship, and other than random incendiary headlines, there are no serious attempts to measure or even shed light on the mushrooming conflicts of interest systematically inserted into the traditional doctor-patient relationship. Ideally, physicians would always act solely in the best interest of the one patient in front of them. Most people still believe that this is the case, and most physicians will insist that regardless of circumstances, this is what they strive to do, but there are objective data points that could more precisely quantify the alignment of interests between doctors and patients.
We all know now that accepting even the smallest gifts from pharmaceutical companies represents a conflict of interest. But how about directly tying salaries, and other compensation for labor, to corporate revenues? How about enforcement of corporate protocols and suppression of "disruptive" behavior? How do these things jibe with the clinical judgement required by our "competence" factor? How about coercive "reimbursement" rates that force physicians to limit time spent with patients, and exclude certain patients from their practice? How about participation in incentive programs that pay doctors to substitute the interests of "society" for the individual interests of patients (as "misguided" and "wasteful" as those may be)? These are precisely quantifiable data.
Ideally, I would love to see a comprehensive, and frequently updated, list of all potential conflicts of interest for each physician, by health insurance plan, publicly displayed in every practice and on every practice website. Why? Because conflict of interest, whether by choice or externally imposed, affects the most basic ingredient of any relationship: trust. If you were charged with a crime, would you trust a lawyer who is paid to keep society safe from criminals? Would you trust an accountant who is paid to increase IRS revenues? Would you trust a hairdresser paid a fixed fee per client per year? Would you trust a mechanic who gets a little kickback from your insurance company to use the cheapest replacement parts for your car? Same goes for doctors.
In summary, there is absolutely no reason why we should not collect objective data, which is readily available in quantifiable formats, and combine it to create an informative picture of each physician and the environment in which he or she is practicing medicine. We may not be able to come up with a simplistic single score on some artificial scale, and we may not be able to punish or reward doctors for the “relationship measure”, but people have a right to know what lies behind studied communications and standardized compassion, and most of all, people have a right to know how health care reforms are affecting a physician’s ability to maintain relationships with patients. If I’m not mistaken, this is what transparency is all about.
Margalit Gur-Arie is the founder of BizMed. She writes regularly about the intersection of healthcare & technology on her site: On Health Care Technology. Follow her on Twitter at @margalitgurarie
And, here is Margalit Gur-Arie on how EHRs have become "oppressive straitjackets" for the practice of medicine —
A Health IT Developer’s Confession: How Bad Software Is Made and What to Do About It
Dec 7, 2015
By MARGALIT GUR-ARIE
It was a dark and stormy night. My computer didn't catch fire while typing the previous sentence. No alarms were triggered warning me about the quality of such an opening. I wasn't prompted to select subjects and predicates from dropdown lists. I typed the entire sentence, letter by letter, not at all dissimilar to its first rendering back in 1830. Computer software in general, and Microsoft Word in particular, magically removed the hassles of quills, ink, paper, blotters, sharpeners, ribbons, whiteout, carbon paper, dictionaries, and all the cumbersome ancillary paraphernalia needed to support authoring, but made no attempt to minimize the cognitive effort associated with writing well. Authoring great literature today requires as much talent and mastery as it did in the days of Edward Bulwer-Lytton.
For several decades, software builders have tried to help doctors practice medicine more efficiently and more effectively. As is often the case with good intentions, the results turned out to be a mixed bag of goods, with paternalistic overtones from the helpers and mostly resentment and frustration from those supposedly being helped.
Whether we want to admit it or not, the facts of the matter are that health IT, and EHRs in particular, have turned from humble tools of the trade into oppressive straitjackets for the practice of medicine. Somewhere along the way, the roles were reversed, and clinicians of all stripes are increasingly becoming the tools used by technology to practice medicine. A common misconception is that EHR designers produce lousy software because they don't understand how medicine is practiced. The real problem is that many actually do, and the practice of medicine is precisely what they aim to change. These high clerics of disruptive innovation would have you believe that "resistance to change" is equivalent to the resurrection of paper charts, thick ledgers, and medical information coded in secretive hieroglyphs. The truth is that physicians want to use modern computers, but they resent being used by computers. And the truth is that if we shed the orthodoxy imposed on us by self-serving "stakeholders", computer software can indeed help address various problems in health care, some in the here and now, most in a distant future.
One thousand and one elements
This may sound strange to some, but the first step towards putting EHRs back on the right track should be to stop trying to help physicians practice medicine. Clinical decision "support" in the form of alerts, disease-specific templates, mandatory checklists, required fields and rigid workflows are some of the things that must be removed from EHRs for two reasons. First, most of these "features" don't work very well anyway. Second, more often than not, the real purpose of said support is not clinical in nature. For example, alerts about generic substitutes for brand name medications, data fields that must be filled and checkboxes that must be clicked to satisfy billing codes, PQRS or Meaningful Use, and the wealth of screens to be traversed before an order can be placed, have no clinical value. In most cases, they actively detract from clinical work.
Some experts argue that EHRs are failing because they are nothing more than an old paper chart rendered on a computer screen. Many others are outraged by the fabled lack of interoperability (dissemination of information) or the lack of EHR usability, i.e. number of clicks, visual appeal, color schemes and ease of information retrieval. I would suggest that these dilemmas are peripheral to the one foundational problem plaguing current EHR designs – the draconian enforcement of structured data elements as the medium of human endeavor.
When Google mapped the Earth, it did not begin by mandating how to build and name roads and buildings. When we indexed and digitized books and articles, we did not require that authors change the way they write prose or poetry. When we digitized music, we did not require composers and performers to produce binary numbers at equidistant time intervals, and we did not make changes to musical instruments to allow for better sampling. We built our computerized tools to ingest, digest, slice, dice and regurgitate whatever humanity threw at us, without inconveniencing anybody. This is why good technology seems magical.
EHRs on the other hand, are obnoxiously demanding that people change how they think, how they work, and how they document their thoughts and actions, just so that the rudimentary software prematurely thrust upon them can function at some minimal level of proficiency. People don’t think in codified vocabularies. We don’t express ourselves in structured data fields. Instead of building computers that elegantly adapt to the human modus operandi, EHRs, unlike all other software tools before them, demand that humanity adjust itself to the way primitive computers work. The self-appointed thought leaders, who are taking turns at regulating the meaningful clicks of EHRs, are basically demanding that we discard the full spectrum of human communications, in favor of gibberish that supposedly serves a higher purpose.
All the pretty horses
What is the purpose of EHR documentation templates? There is practically no EHR in use today that does not include visit templates. Visit templates are a list of checkboxes, some with multiple nested levels, which allow documentation by clicks instead of by typing, writing, drawing or dictation. Visit templates are created for each disease and contain canned text for findings judged pertinent to that condition by template creators. In all fairness, many physicians like documentation templates because with just a few clicks you are able to generate all the documentation required nowadays to get paid for your work, pages and pages of histories, review of systems, physical examination, assessments and plans of care. Do doctors like templates because they believe this extensive documentation is necessary, or do they like templates because the checkboxes alleviate the pain of typing thousands of meaningless regulatory words? I suspect the latter.
Clinical templates, along with the automated clinical decision support they enable, are advertised as time savers for physicians. The time saved is the time previously spent with patients, and most importantly the time spent thinking, analyzing, and formulating solutions. For most, it’s also the time spent rendering thoughts in a manner that can be understood by another person. Furthermore, when your note taking is template driven, most of your cognitive effort goes towards fishing for content that fits the template (like playing Bingo), instead of just listening to whatever the patient has to say. Even in “efficient” practices where staff does the clicking and physicians have the luxury of asking “open ended” questions, the patient story, the quirky details that are irrelevant to the template, are not documented (highlighted, circled, noted on the margins, etc.) anymore. Is this a good thing?
If we proceed on the assumption that IBM Watson and the like are eventually going to be artificially intelligent enough, and big data are eventually going to be big enough, to respectively analyze and represent a complete human being, then yes, we can safely dispense with old fashioned human expertise. However, we are most certainly not there yet, and regardless of industry rhetoric, we are not certain that we will ever be there, and we are not even sure that we want to ever be there. While this utopia (or dystopia) is portrayed by interested parties as "inevitable", chances are that for at least several generations we will be forced to contend with imperfect digital renditions of medicine, instead of allowing EHRs to follow the growth of underlying technologies. This is akin to summarily confiscating and shooting all the horses on the day Henry Ford rolled the first Model T off his assembly line. Where would America be today, if we did that on October 1, 1908?
Furthermore, what type of doctors are we producing when we teach medicine by template, supported by clinical decision aids based on the same template, and assessed by quality measures calculated from template data? Medicine does not become precise just because we choose to discard all imprecise factors that we are not capable of fitting into a template. Standardization of processes and quality does not occur just because we choose to avert our eyes from the thick edges where mayhem is the norm. Dumbing physicians down is not the optimal strategy for bringing computer intelligence closer to human capabilities. EHRs should not be allowed to become the means of stifling the growth of human expertise, the barriers to natural interactions between people, or the levers pushed and pulled at will by greed and corruption.
Instead, EHRs could be the scaffolding for IBM Watson and other emerging contraptions to grow and become truly useful tools for both doctors and patients, and yes, also for legitimate and beneficial secondary uses of clinical information. Instead of mandating that doctors think and work in ways that serve Watson's budding abilities, we should require that Watson learn how to use the normal work products of humans. Instead of enforcing templated thought and workflows, whether through direct penalties for doctors or indirect certification requirements for software, we should work on teaching Watson how to parse and use human languages in all their complexity. Watson should grow up to be the multi-media scribe behind the computer screen, the means by which the analog music composed by physician-patient interactions is digitized into zeros and ones without loss of fidelity and without interference with the actual performance.
Billions of years of evolution endowed the lowliest human specimen with cognitive abilities that machines will most likely never attain. The glory is in the journey though. We need to accept delayed gratification, and we need to accept that the challenge will span centuries, not just one boom-bust cycle of a fleeting global economy.
We need to accept the fact that we will all die long before the ultimate goals are achieved, instead of declaring victory whenever each negligible incremental step is taken. If we are going to create a new form of intelligent life on earth, we need to assume the same humility Nature, or God, has been exercising since the dawn of time and counting. Otherwise, we are all just a bunch of hacks looking to make a quick buck on the backs of our fellow men and women.