Most med schools in the US look basically the same. The first two years are mostly coursework, not too different from college. You sit in lecture and learn about how the body works, all the muscles and bones and blood and guts, and then you go to the anatomy lab and see it for yourself. You memorize biochemical pathways. You learn about the common diseases and some uncommon ones too, the latter mostly chosen because they illustrate a point about genetics or protein synthesis or whatever. And you have some exposure to the business of being a doctor, learning how to take a history and examine people.
Then, the last two years, you have a totally separate experience. You spend all your time in the hospital or in clinic, seeing patients along with residents and attending physicians. You rotate through many of the different parts of the hospital. You see babies born and sick kids and appendicitis and cancer. You decide what type of doctor you want to be, based on your personality and what you like to do and which specialty you got along with best, and you apply for residency.
There are variants, but this is how the majority of American medical students learn to be doctors. It’s been this way since 1910, when Abraham Flexner published his report on medical education. At that time, medical training wasn’t standardized, and many schools were turning out wildly substandard graduates. And without internship and residency being a requirement, med school was all you got – everything else you learned on the job, on real people, without oversight. Or didn’t, as the case may be. The Flexner report led to the standard 2+2 curriculum, and it worked pretty well – a couple years to learn the science of the body, and then a couple years of apprenticeship.
Except. Then it got codified and engraved in stone. And in the meantime, the world changed. Medical knowledge expanded by a gazillion-fold. We started doing internship and then residency and then fellowship and then post-fellowship fellowship (I’m looking at you, electrophysiology) to be able to master even one small niche. We got (or are finally straggling towards) EMR and UpToDate and internet streaming of lectures. We got a whole slew of education research on teaching and learning.
So why hasn’t medical education kept up? Blame the hierarchical, traditional mindset that comes with an apprenticeship model, where I do something this way because that’s how I was taught. I’ve lost count of the number of times I’ve heard that phrase over the past few years. It reminds me of the story about the newlywed whose husband asked her why she always cut the end off the roast. “Because that’s how my mother did it,” she says, so they ask her mom, who says that’s how HER mother did it, so they ask Grandma, who tells them it’s because her roasting pan was too small. Individual idiosyncrasies get passed down from attendings to one generation of residents to the next. And when you reach attending status yourself, you’re expected to pick through what you were taught and know what to discard and what to keep.
Medical education is not broken, exactly. We still turn out pretty decent doctors every year. But there’s definitely room for improvement, not just around the margins, but in the fundamentals of the system. And it matters for all of us, because better doctors are better for patients.
That’s what this blog is about: changing the fundamentals of the system. Rethinking how we train physicians, how we interact with patients and with each other, how hospitals work and how they can work better with new technology, not just for better diagnostics and therapeutics, but to change the way medicine is practiced. Because we all know that the US healthcare system could do better. And that starts with us, in medicine. Better doctors are better for patients.