Evidence-based medicine is not a new idea, but, as a concept, it is enjoying a new level of attention. If the term existed before 1990, Google's Ngram Viewer detects no mention of it. Almost any recent issue of a health policy journal, or any discussion of how to improve the efficiency of the health system, will mention the need to ground practice in clinical evidence, with less waste from overuse and fewer missed opportunities from underuse.

We're spending more in the pursuit of evidence, too. The American Recovery and Reinvestment Act of 2009 allocated $1.1 billion for comparative effectiveness research. The Affordable Care Act further supports research of this type by establishing the Patient-Centered Outcomes Research Institute (PCORI). So, at the moment, the future looks bright for the prospect of more information about what works and what doesn't in health care.

I assert, however, that as hard as it is to do high-quality research in this domain, it's the easy part. The much harder part is translating what is learned into practice. I've learned this firsthand, as a member of the New England Comparative Effectiveness Public Advisory Council (CEPAC). At each CEPAC meeting, my council-mates and I face the challenge of using comparative effectiveness research on treatments for a specific condition to decide what works, what doesn't, and for whom. It's always hard. No study makes all the comparisons one would like to see, for the populations of interest, with flawless methods. Every study has important limitations. Consequently, one is forced to interpolate and extrapolate from what is available, in many dimensions simultaneously.

Fundamentally, we are struggling with the problem of how best to inform providers, patients, and payers about what the evidence means for real-world practice. A related question: how does one get providers, patients, and payers to act on it? These are not easy problems, and I do not have full, foolproof answers. Though I could offer some partial ones, they'd be similar to what you've likely seen elsewhere: variants of getting the incentives right and other standard ideas. And organizations and initiatives like CEPAC play an important role.

There is, however, something about which I am more certain. There is an important prerequisite for health services researchers to assist in this translation challenge: understanding some basics of medical science and how evidence is (or could be) actually used in clinical settings. I've asked many professors in schools of public health whether their master's and PhD curricula require students to take a course on medical science. So far I haven't found one. Moreover, as far as I know, there does not exist a course in the country that teaches basic concepts of medicine to social scientists. (If you're aware of one, let me know.)

Does it strike you as odd that we are training students to be experts in health care delivery, organization, and policy without at least offering them the opportunity to learn some details of medical science? It's a bit like an engineer not knowing Newton's laws. Maybe this made sense at some time in the past, but given the current emphasis on evidence-based care and comparative effectiveness research, I think it is time that even health services researchers, health economists, and anybody who claims to be a health policy expert knew more about medical science.
I can't enter a CEPAC meeting expecting to participate without knowing something about the science behind the condition in question. Unlike some of the clinicians on the council, I have to do a little extra work to get up to speed. We all, as health services researchers, need to get up to speed. If we're going to talk the "evidence-based medicine" talk, we've got to walk the walk ... a little. (Crawl the crawl?)

But medical science is complex and vast. How can we begin to learn some basics? Here, I have answers.

For your textbook, I recommend Michael Hochman's 50 Studies Every Doctor Should Know: The Key Studies that Form the Foundation of Evidence Based Medicine. (See also the website 50studies.com.) Each chapter tackles a different hallmark study in preventive medicine, internal medicine, surgery, obstetrics, pediatrics, radiology, neurology, psychiatry, or systems-based practice. It's just the basics, but you can, of course, track down the full study for additional details.

I admit, 50 Studies Every Doctor Should Know is a bit dry. So, for your lectures, I recommend the SMART EM podcast. The hosts are two emergency care physicians. In each monthly episode, they dive deeply into the evidence on a particular condition and describe how it might inform practice. (I'd love to hear a similar podcast on primary care, but I am not yet aware of one.) SMART EM is not dry; the hosts are entertaining. Still, like 50 Studies, there's a lot of jargon. You'll have to pause and look things up. But that's part of the learning process.

Finally, for your homework, subscribe to the issue alerts of a few medical journals. NEJM and JAMA are fine choices, though there are others. Naturally, I would not recommend trying to read all or even most of the articles. If you're reading 50 Studies and/or listening to SMART EM, some will jump out at you. Pursue them. Also, pay attention to the editorials, as they highlight areas of contention and major new developments.

I promise you, if you do these things, you will soon increase your understanding of medical science. Having done so, you can then utter the words "evidence-based medicine" and know a great deal more about what they mean, how complex evidence-based practice really is, and some of the challenges in translating evidence into practice.

-Austin
