There’s something built into the human mind that forces pattern recognition. It’s not just that we can recognize patterns; we must. Maybe the patterns we’re most inclined to see are human faces. The psychiatrist Curt Thompson, summing up attachment theory, says we all come into the world looking for someone looking for us. We never stop. So we see the “man on the moon” or Jesus’ face on a piece of toast. We’re wired for this.
That’s either an unfortunate or an appropriate metaphor following the release of the latest version of the large language model (LLM), GPT-4. This LLM is the latest development in the ever-growing bucket of artificial intelligence (AI), though most folks agree an LLM doesn’t have agency in the way we might expect true AI to have it. There are depths to this conversation that I can’t fathom, but it seems some of this work is causing people to reconsider our understanding of intelligence in the first place. This conversation between Ezra Klein and Kelsey Piper does a good job of introducing some of the basic concepts as well as the concerns.
The applications in medicine are vast. Some are obvious, like helping physicians form differential diagnoses based on a selected list of symptoms, exam findings, and study results. GPT has, after all, already passed the first two of three steps of the United States Medical Licensing Exam (USMLE). Other uses include reading radiological studies, pathology slides, or retinal images.
The drawbacks of LLM use are well known: GPT, for example, can confabulate inaccurate responses, particularly sources. It’s also unclear, as these models develop and as Klein and Piper discuss, whether true AI might develop goals and motivations of its own that might not serve the interests of humans. In a more mundane fashion, could it also corrode the reasoning capabilities of clinicians? Or might it sharpen those capabilities by pushing us beyond ourselves? We’ll also encounter the more garden-variety human evils with AI, as people program it to exploit others or simply ignore how implementation frameworks can exacerbate inequality. When AI makes a mistake, is it the fault of the creator, or should we create a new legal (ethical? relational?) category of liability just for this?
I think, as tools, LLMs and AI might do a lot of good, inside and outside of medicine. However, I worry about our capacity to keep them in the place of tools. Within the realm of that concern, I want to consider two hidden barbs of this technology, one for patients and one for clinicians. For patients, the utopian dream of AI in medicine offers unfettered healing which, as delightful as that sounds, untethers patients from the compassionate carers who make healing possible in the first place. For clinicians, the utopian dream of AI allows a machine to set the standard for medical care, forcing human clinicians to become more machine-like.
Healing Without Others
In the (admittedly bad) film Elysium, Earth is ruined. A space station orbiting the planet houses those who escaped and now live in luxury. They have access to machines that scan their bodies and keep them healthy by painlessly and quickly dispatching any disease. The impoverished earth-dwellers don’t have these machines, providing a major source of conflict for the story. That machine, in one form or another, finds its way into many stories. Wouldn’t it be nice if such a machine existed?
That technology promises more than just healing. It promises independence. The relationship between a patient and their clinicians (doctors, nurses, therapists, others) is one that requires vulnerability and trust. Edmund Pellegrino observed that this relationship is “a peculiar constellation of urgency, intimacy, unavoidability, unpredictability, and extraordinary vulnerability.” Notice that these are all true for the patient, not the clinician (until the clinician, through compassion, makes the patient’s concerns their own). The clinicians know so very much about the patient (or at least certain parts of the patient), while the patient usually knows very little about their clinicians (and certainly less about the corresponding parts of their clinicians’ lives).
Wouldn’t it be great if I could just climb into a pod in my living room and cure whatever disease I have? No need to share my awkward story with another person. Or I could at least confess my symptoms to a robot who could offer me the services I need. Maybe I could employ a robot caregiver for an ailing loved one who needs more supervision than I can provide. All this skips the awkward and cumbersome interaction with other humans who are, let’s admit it, not as precise or obedient as machines.
Dr. Google promises something similar on a more rudimentary scale. Your secrets are not safe with Dr. Google, but it feels more private than sharing your concerns with a human clinician, it costs nothing (well, no money), there are no stupid questions, and it’s instantaneous. Now, the chances are good that your symptom list will result in a differential diagnosis that includes cancer and, so far, Dr. Google can’t narrow that down further for you, diagnose anything, or treat the cancer that you might have. But Dr. Google is the beginning of a promise made by tech companies that we’re on the path toward freeing ourselves from interdependence.
In E.M. Forster’s The Machine Stops, everyone lives in a little room, connected via tele-video technology to everyone else. People rarely, if ever, leave their room. The Machine provides for their every need. Well, almost every need:
“Who is it?” [Vashti] called. Her voice was irritable, for she had been interrupted often since the music began. She knew several thousand people, in certain directions human intercourse had advanced enormously.
But when she listened into the receiver, her white face wrinkled into smiles, and she said:
“Very well. Let us talk. I will isolate myself. I do not expect anything important will happen for the next five minutes - for I can give you fully five minutes, Kuno. Then I must deliver my lecture on ‘Music during the Australian Period.’”
She touched the isolation knob, so that no one else could speak to her. Then she touched the light apparatus, and the little room was plunged into darkness.
“Be quick!” she called, her irritation returning. “Be quick, Kuno; here I am in the dark wasting my time.”
But it was fully fifteen seconds before the round plate that she held in her hands began to glow. A faint blue light shot across it, darkening to purple, and presently she could see the image of her son, who lived on the other side of the earth, and he could see her.
“Kuno, how slow you are.”
He smiled gravely.
“I really believe you enjoy dawdling.”
“I have called you before, mother, but you were always busy or isolated. I have something particular to say.”
“What is it, dearest boy? Be quick. Why could you not send it by pneumatic post?”
“Because I prefer saying such a thing. I want —”
“Well?”
“I want you to come and see me.”
Vashti watched his face on the blue plate.
“But I can see you!” she exclaimed. “What more do you want?”
“I want to see you not through the Machine,” said Kuno. “I want to speak to you not through the wearisome Machine.”
Later in their conversation, they continue to bicker about the value of in-person visits, with Vashti still resisting Kuno’s invitation to join him on a visit to the now-inhospitable surface of the planet:
“And besides—” [Vashti spoke]
“Well?”
She considered, and chose her words with care. Her son had a queer temper, and she wished to dissuade him from the expedition.
“It is contrary to the spirit of the age,” she asserted.
“Do you mean by that, contrary to the Machine?”
What Vashti had and valued was both independence and hyperconnectivity. But Kuno was dissatisfied. Something was lost when human presence was transmitted through the medium of the Machine. Despite having access to innumerable resources, lectures, and “ideas,” he also sought to experience the world itself firsthand, even risking his life to explore the toxic ruins of Earth. This sounds a bit like our world, doesn’t it, with the internet and smartphones in such easy reach? It’s hard to believe Forster wrote this story in 1909.
The lesson for those who would seek their healing from machines (or the Machine) is to behold how easily technology slips between humans, promising to enhance their relationships. Instead, it modifies them. It sets the tone. It frames the questions. It shapes the vision. For health and medicine, does that matter? Only to the extent that we believe people are more than machines themselves. If all we need is technical repairs to return us to health, then a machine will eventually be able to do most of that work - whether in one hundred years or a thousand. But if we’re more than parts upon which technique can intervene, then we may miss our healing.
Healing, in the deepest sense of the term, isn’t merely fixing. It’s restoration to wholeness, including one’s involvement in community and story. The healer is a part of that. Whether all the technical promises of AI come to fruition or AI fades away like so many other fads, the fervent hope in the promise of such technology betrays how much we believe the human clinician is not themselves an instrument of healing, when indeed they so often are - or else, if they aren’t, how badly they can hurt someone even as they prescribe the right things. Yes, yes, humans should be involved for now, but only because they’re necessary. What happens when we believe they’re no longer necessary?
In freeing ourselves from this relationship that requires trust and dependence, we also free ourselves from the conditions that cultivate those things that grow our very humanity: compassion, love, patience, wisdom, courage, and others. Is it good that people suffer and die? Absolutely not. But it is through caring for others who suffer and die that people hone these qualities within themselves and also sustain communities that make such qualities possible and transmissible. To the extent we outsource the care of those who suffer to machines, we will also face the atrophy of these qualities necessary to make caring, and healing, possible.
Machine-like Clinicians
I’ve quoted him before, but it’s worth recalling the words of Neil Postman here. He warned that we can be tricked into believing “we are at our best when acting like machines, and that in significant ways machines may be trusted to act as our surrogates. Among the implications of these beliefs is a loss of confidence in human judgment and subjectivity.” Postman cites The Principles of Scientific Management, which Frederick Taylor published just two years after Forster completed his short story. Postman described the thrust of Taylor’s work:
“…the primary, if not the only, goal of human labor and thought is efficiency; that technical calculation is in all respects superior to human judgment; that in fact human judgment cannot be trusted, because it is plagued by laxity, ambiguity, and unnecessary complexity; that subjectivity is an obstacle to clear thinking; that what cannot be measured either does not exist or is of no value; and that the affairs of citizens are best guided and conducted by experts.”
Technology generally and AI specifically offer to remediate these problems. In some ways, they seem to. I much prefer the electronic medical record (EMR) to paper charts. But the EMR is an unwieldy tool, and too often clinicians can be made to serve the tool rather than the other way ‘round - e.g., through documentation requirements, CPT coding, satisfying alerts, etc. The tail wags the dog.
There is no chance AI is going to start kicking doctors and nurses out of work in the near future. This isn’t about “turf.” From a technical standpoint, if a machine can do something better than a person, we have to ask ourselves why we’d still want the human to do the job. For the time being, humans will remain in close proximity to the suffering and dying, but under what I worry will be increasingly dehumanizing conditions and expectations wrought by the adoption of ever-advancing technology. Perhaps it’s not the AI itself, but rather our lack of capacity to distinguish what is humane activity, that makes it most dangerous. Machines will set the standard for precision, accuracy, and speed. Before long, the only relevant questions will be those the machine can answer for us. Uncanny valley? The valley will be all we know.
To mitigate the risks of innovations like AI, Nina Singh and colleagues argue in favor of “digital minimalism” in healthcare. Clinicians might reduce their exposure to the toxicity of digital contagion by recognizing that “clutter is costly, optimization is vital, and intentionality is satisfying.” They admit that “digital minimalism is more than the sum of its three tenets. We believe it can have the greatest effect when it is used as a framework to guide our health system’s relationship with technology. The current approach often focuses on saying ‘yes’ to each additional form of technology, without considering the cumulative impact, and then retroactively making small changes such as removing individual alerts.”
Their brief essay is a starting point, not the entire prescription. The best-case scenario is that such engagement would open the conversation to a deeper realization: what we need isn’t more technique applied differently or better, but a different way of being in relation to technology and one another. We must consider who we are before we consider what we do (even as we recognize that what we do shapes who we are).
This irony is apparent in the “Getting Things Done” craze, where people seek out novel and more efficient ways of scheduling their time and organizing their lives. The danger here, as Oliver Burkeman sees it, is that:
“We fill our minds with busyness and distraction to numb ourselves emotionally. (‘We labour at our daily work more ardently and thoughtlessly than is necessary to sustain our life,’ wrote Nietzsche, ‘because to us it is even more necessary not to have leisure to stop and think. Haste is universal because everyone is in flight from himself.’) Or we plan compulsively, because the alternative is to confront how little control over the future we really have.
… the more you try to manage your time with the goal of achieving a feeling of total control, and freedom from the inevitable constraints of being human, the more stressful, empty, and frustrating life gets. But the more you confront the facts of finitude instead - and work with them, rather than against them - the more productive, meaningful, and joyful life becomes.”
The techniques of digital minimalism, just like the techniques of clinical medicine, can be used to distract us from becoming the kind of people who can face human limitation and, paradoxically, transcend it, not by obviating it but by acknowledging it. That doesn’t mean we cast innovation and quality improvement to the wind. It does mean, though, that we elevate the value of moral formation to at least equal importance as those other things.
The most fundamental and hidden challenges and opportunities that come with these new technologies will be formative and existential, not necessarily technical. I hope the conversation grows to include those questions.
Trajectories
Following a meandering reading-path, sharing some brief commentary along the way.
“Liquid health. Medicine in the age of surveillance capitalism”
When society broadly and medicine narrowly lose sight of health, technology dictates the agenda. Is that a good thing? Giovanni Rubeis argues that we should be careful to understand and articulate the end toward which we’re aiming with medical technology.
“Protecting the legitimacy of medical expertise”
The developments about which the authors write reflect an erosion of trust in expertise. We don’t earn back that trust by “defending the existence of facts,” though that’s important. The epistemic gap can’t be closed with more information alone. Rather, we need to start by listening to those who feel distrustful, just as we would at the bedside. If lawmakers are getting more involved in the domain of medical expertise, this reflects a dissatisfaction on the part of lawmakers (who, if things were working properly, would be a surrogate for society) with how medicine is working. That should cause clinicians to reflect. Here’s a good companion piece.
Adam Rodman’s podcast on the history of medicine tackles the development of the “problem-oriented medical record” and how it’s come to shape clinician thinking. Listening to this podcast, I was surprised by how slow we’ve been to learn lessons from this technology (and that of the EMR) so that we can apply them to emerging technologies like AI.
Closing Thoughts
“Illness is the experience of living through the disease. If disease talk measures the body, illness talk tells of the fear and frustration of being inside a body that is breaking down. Illness begins where medicine leaves off, where I recognize that what is happening to my body is not some set of measures. What happens to my body happens to my life.”
Arthur Frank, At the Will of the Body
What does technological loneliness do to the human psyche, primed by hundreds of thousands of years of community and tribal relationships? A loneliness so profound that its supposed solution is LLM chatbots? There are aspects of the degradation of Gen Z’s mental health we aren’t talking about, which will make the Alpha cohort a great experiment in whether AI can teach soft skills.