In a prior post I explained how I agreed with the suggestion in a commentary [4] by Rebecca Russ-Sellers, Jerry Youkey, and Ronnie Horner that health services researchers should learn more about the practice of medicine. As I mentioned, the authors, along with Matthew Hudson, published a series of commentaries about remaking health services research (HSR), and I did not agree with all of their ideas. In this post, I'll push back on some of them. (For the reader's convenience, full references and links to all their commentaries to date are at the end of this post.)

In [1], citing the fact that there are still racial and ethnic disparities in health care and predicting the failure of the accountable care organization model, the authors claim HSR has not achieved its potential. To be sure, we should strive for greater achievement, but the continued existence of disparities and the anticipation of failure not yet realized is hardly a sound or complete basis for assessing HSR.

Still, the authors believe there is trouble in the field, and the heart of it is that it is not patient-focused enough.

[H]ealth services research, even when it investigates comparative effectiveness or is otherwise focused on patient-centered outcomes, is methodologically targeted to the “average” patient in a population of patients. [...] The average patient seldom exists in the reality of medical practice; it is a statistical concept with certainty to result in the wrong care at the wrong time or in the wrong way with the wrong outcomes for many patients.

This echoes a complaint articulated by Trisha Greenhalgh and colleagues and addressed by Bill Gardner on TIE.

You may think that the problem is that the evidence-based clinicians got off on the wrong foot by oversimplifying the patients when they developed their algorithms [or guidelines]. Maybe so, but this is almost unavoidable. The reason is that evidence-based medicine might also be called “medicine guided by statistical learning from data” and it is hard to learn anything from statistical data unless you can simplify your problem to some degree.

Ultimately one has to average over some number of patients to learn anything from data. Of course every patient is unique, but one cannot learn without generalizing (or "simplifying") to some extent.

Aaron Carroll made a similar point at The Upshot in discussing medical guidelines.

Some doctors [...] believe that patients should be treated as individuals, and think that guidelines, and evidence-based medicine, are too “cookbook,” remove doctors from the equation, treat patients all the same, and result in missed opportunities for better care. [...]

[But] guidelines aren’t meant to tell you how to take care of every patient. They’re meant to tell you how to take care of specific patients. They tell you that for certain patients who meet certain criteria, there is a best way to practice.

But a physician still must decide when a patient doesn’t meet the criteria, and if not, must treat that patient using judgment and experience. Guidelines don’t cover everything, but we should allow them to cover what they can.

I'm all for pushing the envelope on patient-centeredness, but one must recognize that there are limits and that those limits do not doom HSR or evidence-based medicine.

I'll conclude with one last point from [2] that rubbed me the wrong way.

[T]he preferred study is designed to minimize the impact on the provider in providing care to the patient and on the patient who is there to receive care.

Here too, one must recognize reasonable limits of this ideal. Sometimes studies must be disruptive both to be informative and to move the system in the direction it needs to go. We cannot presume that the status quo is ideal, after all. For example, Atul Gawande's interventions to encourage the use of checklists were disruptive! They inconvenienced providers! But they also improved care. The same might be said of interventions to promote shared decision making.

Of course, all things being equal, we don't want to inconvenience providers or patients. But we tolerate some degree of it for the greater good (e.g., medical education means that less experienced practitioners do some of the health care delivery). It cannot be any other way, and that's not the fault of HSR. That's the nature of collecting evidence.

References

1. Horner RD, Russ-Sellers R, Youkey JR. Rethinking health services research. Med Care. 2013;51:1031–1033.

2. Sinopoli A, Russ-Sellers R, Horner RD. Clinically-driven health services research. Med Care. 2014;52:183–184.

3. Russ-Sellers R, Hudson M, Youkey JR, et al. Achieving effective health service research partnerships. Med Care. 2014;52:289–290.

4. Russ-Sellers R, Youkey JR, Horner RD. Reinventing the health services researcher. Med Care. 2014;52:573–575.

Austin B. Frakt, PhD, is a health economist with the Department of Veterans Affairs and an associate professor at Boston University’s School of Medicine and School of Public Health. He blogs about health economics and policy at The Incidental Economist and tweets at @afrakt. The views expressed in this post are those of the author and do not necessarily reflect the position of the Department of Veterans Affairs or Boston University.
