A Machine Better Than Family
Artificial Surrogate Decision-Makers (i.e., the Patient Preference Predictor)
“I remember the first time I saw an artificial surrogate in action.”
Dr. Logan leaned against his desk.
“Of course, that’s what we called them then. They weren’t quite as sharp as the synthetic selves you’ve got today. I was working in the emergency department and this John Doe comes in, unconscious.”
Students furrowed their brows. Dr. Logan explained about John Does. This John Doe had a card with a number, instructing them to punch it into the medical record.
Dr. Logan and a few others had huddled around a computer. A string of text across a window introduced itself as Kori, David Roger’s artificial surrogate decision-maker. Their John Doe had a name. Kori provided a small window into David’s life - he was unmarried, had a brother who lived across the country, and had retired the year before from his work as an engineer. It then asked the team a question: “What are we trying to do to help David?”
The cursor blinked at them as they blinked at one another.
“I had the training by then. Our leadership said we should incorporate these Koris whenever we encountered one. But it made me uneasy. I could talk to a grieving wife in the waiting room. But chatting with a machine at the computer while my patient lay in another room? It was strange.”
David got better. Other patients came. Some got better, some got worse, some died, as patients do. Kori was sometimes there to shepherd families and clinicians alike.
“Then there was one where a man brought his wife in. She was having a lot of trouble breathing. A bad flare of COPD, which I think you all learned about a few weeks ago.”
In between gasps, she told the nurse about her Kori. Her husband looked around, aghast. They hadn’t talked about this. The woman didn’t just want Kori’s help. She had made Kori her surrogate decision-maker instead of her husband. It was all written down in an advance directive. Kori explained that it could make decisions the woman’s husband couldn’t, like saying no to a ventilator.
That’s what Kori did. By this time the woman was semi-conscious. “Her husband was furious. Screaming at staff, screaming at his wife. When he wouldn’t let us give her any morphine to ease her breathing, we had to call security. We had no choice. She was the patient, he wasn’t.” Kori’s cursor stood blinking at the woman’s bedside as her husband wailed in the parking lot. The woman died within the hour.
“In those early days, families had trouble adjusting. They’d object, yell, fight. It was common for people to appoint Kori without their family’s knowledge. A lot of people don’t trust their spouses.” Some students chuckled.
“It wasn’t very long, maybe a year or two, before these were common. Almost everyone had them. Sometimes Kori helped out, sometimes it was the main decision-maker. The hospital had Nori, a general artificial surrogate for people who didn’t have Kori and had no human to speak for them. These were just so much better at doing this than any human. There’s lots of evidence for it now.”
A student raised her hand. “Is there anything that’s worse now compared to then?”
“Worse?” Dr. Logan paused. “No, I don’t think so. But there is a difference. People cared enough to decide for others back then. There was something to human relationships that’s hard to see now. Families don’t want to intrude on their loved one’s desires. Everyone wants what they want. A person’s relationship with their synthetic self, just like with those old artificial surrogates, has become sacred.”
“You make it sound worse.”
“It’s different, but I think we’ve improved things. The outcomes are better. The synthetic selves are better companions than any human could be, more reliable. Now you don’t type to Kori, you talk with a video representation of the patient themselves, as if they were really there making the decisions. In a way they are. There’s nothing in someone that hasn’t been carried over to the synthetic self. There are no mysteries anymore. What is human caring in the face of all this progress?” Dr. Logan spread his arms wide, as if to fill the room with the riches of innovation they now enjoyed.
They constantly try to escape
From the darkness outside and within
By dreaming of systems so perfect
That no one will need to be good.
But the man that is will shadow
The man that pretends to be.
T.S. Eliot, “The Rock”
One of the well-worn questions in any clinician’s repertoire is, “What would they have wanted?”1 That is, what would this person, who now can no longer decide for themselves, have decided if they could tell us? Gone are the days when clinician and family conspired to decide what was best for a confused patient. Now we’re left with faltering attempts to discern “what they would have wanted.” While I’m not suggesting we return to the era of paternalism, it’s not at all clear people nowadays do indeed receive the care “they would have wanted.”
Sometimes this involves a document where the person, either a month or twenty years prior, declared treatments that may or may not be acceptable to them in certain circumstances. Chances are excellent that those pronouncements are too broad, too vague, or too irrelevant to apply in any given real-life clinical scenario. Even if they do apply, clinicians and families shrug as they say, “But is that what they’d want now?” People change their minds all the time.
Sometimes, bereft of any such documentation, families intuit their way through it, helped or hindered by commentary from the waiting clinician. Maybe there was a conversation that one time, or the family just knows. They know the kind of person this is.
Sometimes there’s no one. There’s a lonely patient, unconscious or otherwise incapacitated, known by no one except their medical record. It is, after all, the medical record that now sustains the most abiding relationship with a patient across our healthcare bureaucracy. In these circumstances, we’re left searching for policies and laws to guide, forming committees to render judgments by strangers for strangers.
It turns out surrogate decision-making is tough. Is there any way to make it better?
Annette Rid and David Wendler proposed one idea: let’s get a computer to do it. They call this the “patient preference predictor” (PPP). Here’s how it would work: someone would gather “extensive data on how individuals want to be treated in various situations.” This would be a lot of data on things like age, gender, religion, current functioning, attitudes and values, etc. There’d also be the opportunity to respond to various clinical dilemmas, similar to questions you might answer in an advance directive - e.g., would you want to be sustained on a ventilator if clinicians thought you had little chance of recovering? “Based on these data, statistical analysis would be used to identify which factors predict patients’ treatment preferences during periods of decisional incapacity. The identified predictors, including their weight and possible interaction with other predictors, would then be modeled statistically for predicting the treatment preferences of individual patients.”
Voilà! A surrogate decision!
Or at least some guidance for the human tasked with the surrogate decision-making. Rid and Wendler’s review of the literature and argument is extensive; I won’t reproduce all of it here. Importantly, though, they don’t intend that the PPP should replace human surrogate decision-makers, but only help them. They acknowledge that the PPP may provide a “strong default” for particular decisions, one that humans would need substantial justification to override, but they don’t claim that’s the only option. Early in its conception, their vision leaves much to be realized.
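To make that statistical framing concrete, here is a minimal, purely hypothetical sketch of what a PPP-style model might look like. The survey data, features, and choice of logistic regression are all invented stand-ins for illustration; Rid and Wendler don’t prescribe any particular modeling approach.

```python
# A minimal, hypothetical sketch of a PPP-style model: fit a simple
# classifier on survey responses linking demographic/values features to a
# stated treatment preference, then predict for a new incapacitated patient.
# All features, data, and the modeling choice are invented for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical survey: each row is a respondent; the outcome is whether they
# said they would want mechanical ventilation given a poor prognosis.
survey = pd.DataFrame({
    "age": [34, 67, 71, 45, 82, 58, 29, 76],
    "religious_affiliation": [1, 1, 0, 0, 1, 0, 1, 0],
    "baseline_function_score": [9, 6, 4, 8, 3, 7, 10, 5],
    "values_longevity_over_comfort": [1, 1, 0, 1, 0, 0, 1, 0],
    "would_want_ventilator": [1, 1, 0, 1, 0, 0, 1, 0],
})

X = survey.drop(columns="would_want_ventilator")
y = survey["would_want_ventilator"]
model = LogisticRegression().fit(X, y)

# The kind of output a PPP might hand a surrogate or clinician: an estimated
# probability that this particular patient "would have chosen" ventilation.
new_patient = pd.DataFrame([{
    "age": 70,
    "religious_affiliation": 0,
    "baseline_function_score": 5,
    "values_longevity_over_comfort": 0,
}])
print(model.predict_proba(new_patient)[0, 1])
```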
Since Rid and Wendler published their proposal, others have added to it. For example, Brian Earp and colleagues suggest making a personalized patient preference predictor (P4) to account for idiosyncrasies that aggregated demographic data might overlook. Patients would either provide or the algorithm would scrape personally created resources like texts, blogs, or individual surveys to inform its decisions.
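For a sense of what such personalization might mean mechanically, here is a similarly hypothetical sketch. The snippets, labels, and simple bag-of-words model are invented stand-ins, not anything Earp and colleagues specify.

```python
# A purely illustrative sketch of the "personalized" (P4) idea: learn a
# preference signal from text and apply it to a patient's own writing.
# The training snippets, labels, and model choice are invented stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical snippets labeled by whether they express wanting aggressive,
# life-prolonging treatment (1) or prioritizing comfort (0).
snippets = [
    "I want every possible treatment, no matter the odds",
    "Quality of life matters more to me than living longer",
    "Keep me going as long as the machines can keep me alive",
    "I would rather be comfortable at home than hooked up in an ICU",
]
labels = [1, 0, 1, 0]

vectorizer = TfidfVectorizer()
text_model = LogisticRegression().fit(vectorizer.fit_transform(snippets), labels)

# Apply the model to something the patient actually wrote (a hypothetical
# blog-post excerpt), yielding a "personalized" preference estimate.
patient_writing = ["Watching my father on a ventilator convinced me I never want that"]
prob_wants_aggressive_care = text_model.predict_proba(
    vectorizer.transform(patient_writing)
)[0, 1]
print(prob_wants_aggressive_care)
```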
Others, like Ryan Hubbard and Jake Greenblum, take a stronger stance. They argue that the “autonomy algorithm” (AA, as they call it) “should have sole decision-making authority [because] the AA will likely be better at predicting what treatment option the patient would have chosen. It would also be better at avoiding bias and, therefore, choosing in a more patient-centered manner.”
Despite hanging around the ethics literature for about a decade, despite surrogate decision-making so often bedeviling healthcare, and despite many patients and surrogates liking the idea, the PPP, the P4, and the AA have yet to be created, let alone empirically tested. How we define success and whether the PPP could ever achieve it involve one set of questions, but I’m more interested in what hopes and fears the proposal reveals even if the technology never comes to fruition. It’s these hopes and fears that shape relationships between patients and clinicians even now.
The Deciding Machine
It is true that surrogate decision-making and advance directives are both limited in realizing, in any given clinical scenario, a patient’s previously expressed wishes. The reasons for this are many (reviewed in the Rid and Wendler piece cited above, and also discussed here). First, people don’t really talk about this stuff, so surrogates might be uninformed. Even when conversations have taken place, surrogates are biased people with their own values and agendas. Advance directives are often vague, overly general, or irrelevant, and frequently inaccessible at the point of care. We also must contend with the fact that people aren’t very good at predicting what they will want in the future: they overestimate both their positive reactions to good outcomes and their negative reactions to bad outcomes (affective forecasting errors). If people can’t get it right for themselves, you can imagine how hard a time others will have in getting it right for them!
Whether it’s a person or a machine making the decision for someone, “substituted judgment” is a bit of a fiction. We can’t know with 100% certainty what the patient “would have chosen.” Even if it’s written down that they’d never want dialysis, we can’t know if they would have changed their mind in this circumstance (as many people do!). Clinicians and surrogates work together to tell a story about an acceptable choice that they both imagine this person could have made. This is why I prefer the word “could” over “would,” as it’s more honest about what’s happening: “what could they have chosen, knowing them and these circumstances?”
The PPP promises to obviate (not solve) these problems. By knowing enough about someone - either demographically or inferred from personal information - the machine could come closer to “what the patient would have chosen,” and with greater specificity, than any human surrogate could. Not only would this presumably benefit people who already have human surrogate decision-makers like spouses and siblings, but it would be of immense benefit to so-called “unbefriended patients,” those who have no one to speak for them in their hour of need.
Most of the concerns raised in discussions about the PPP relate to data privacy and process management. These concerns lead to questions like: How can we ensure we only use the PPP in cases where patients would want it used? How can we ensure the PPP only has access to information people want to give it? How will the PPP be used in any given instance: as a support, or as a replacement, for human surrogates?
These questions, important as they are, can keep us from perceiving other challenges the PPP might bring to the clinical encounter. The proposal of the PPP reveals certain beliefs about who we are and what we expect from our relationships, but we need the eyes to see them. The PPP would be a pharmakon - both remedy and poison.
No One Will Need To Be Good
Neil Postman warned that we can be tricked into believing “we are at our best when acting like machines, and that in significant ways machines may be trusted to act as our surrogates. Among the implications of these beliefs is a loss of confidence in human judgment and subjectivity.” He cited The Principles of Scientific Management, written by Frederick Taylor in 1911. Postman described the thrust of Taylor’s work:
“…the primary, if not the only, goal of human labor and thought is efficiency; that technical calculation is in all respects superior to human judgment; that in fact human judgment cannot be trusted, because it is plagued by laxity, ambiguity, and unnecessary complexity; that subjectivity is an obstacle to clear thinking; that what cannot be measured either does not exist or is of no value; and that the affairs of citizens are best guided and conducted by experts.”
Surrogate decision-making is afflicted by all sorts of problems. Technology generally and artificial intelligence specifically offer to remediate these problems, but the problems must first be framed technically. They are problems of efficiency. What are we missing in the current discussion about the PPP and surrogate decision-making more broadly?
There are boundaries that I suspect neither the PPP nor human surrogate decision-makers can or will cross. Consider a patient with the capacity to make decisions about whether he’s going to accept insulin while he’s in the hospital. Sometimes he accepts it, sometimes he doesn’t. He ends up getting his insulin about 10% of the time. Clinicians might not like it, but they sigh and say he has the right to make bad decisions. Now, suppose he becomes hypoactively delirious. He won’t resist any of the care offered by hospital staff. The clinicians turn to his wife and she says, “I know he hates insulin. Whenever he’s in the hospital, he only accepts it about 10% of the time. So I only want you to administer insulin 10% of the time.”
A bit strange, no? Clinicians might hesitate to carry out her decision, even if it was made via substituted judgment. Daniel Brudney explains that some forms of surrogate decision-making don’t represent actual consent, but rather hypothetical consent. He describes it this way:
“Hypothetical consent … is not a weaker form of actual consent. Rather, the two are conceptually distinct. Hypothetical consent has to do with what a person would agree to, not with what she has agreed to. Hypothetical consent has nothing at all to do with the exercise of a person’s will. It concerns what would be rational to do, not what is actually being done. … there is nothing morally binding about a hypothetical contract. Invoking such a contract is a way to get at what action makes the most sense in a given situation. Here, ‘choice’ is just a metaphor—a way to track what is reasonable to do. The will is not in the picture. This is equally true of the ‘What would the patient choose?’ question.”
Brudney goes on to argue that it’s authenticity, not self-determination, that provides the moral basis for inquiring about “what someone would want.” I would add that there’s value in choosing something for oneself rather than having it chosen for you. As Hilde Lindemann and James Nelson observe, “what people value about making decisions is not only getting the outcome they choose, but also getting the outcome because they choose it.” The authenticity is made real in the choosing, not just in the circumstances coming about that one would prefer.
Because authenticity (or self-determination, for that matter) doesn’t trump all other considerations, the balance should shift in cases of surrogate decision-making. This is what most clinicians would intuit with the wife who wants staff to give her husband only 10% of his insulin, even though her decision is predicated on substituted judgment. Surrogate decision-makers need greater justification for decisions that may cause harm than even the patient would need to provide.
It’s unlikely the PPP would offer this as a possibility either, though if one were strictly adherent to substituted judgment it should be the outcome at least some of the time. These limits should keep us honest about what we’re doing when we decide for another person. It’s not merely about checking the box on “what they would want.” We need to decide well.
While it seems no one is yet rushing to actually create the PPP, I suspect it’s only a matter of time before more people wake up to the possibility of AI being used for this purpose. The PPP was dreamed up before the large language model (LLM) revolution, and putting the two together will likely reinvigorate the quest to develop an actual PPP. However, even if it never comes to exist, it reveals the evergreen hope that our tools can save us from ourselves.
What’s so bad about that?
In trusting machines rather than people, we erode our capacity to learn how to trust people. Trust isn’t binary. Not only does it exist in gradations, but it exists as a capacity we have to develop. Trust lives. It can be well-nourished or starved. Someone can be too trusting and naive; another can be too untrusting and cynical. We develop our trust wisely and well by exercising judgment about whom we trust, when, and with what. Others help us in this, for good or ill. If machines usurp the circumstances in which we develop this capacity, I worry it will become harder for us to trust people and our relationships will become more anemic. The PPP could further tempt us into the belief about which Neil Postman warned: that the machine is far more trustworthy than any human.
The PPP reduces clinical decision-making, and the whole clinical encounter, to a desiccated, technical endeavor. The PPP is grounded in the presumption that a patient with the capacity to make a decision is given information and yields a response. It treats the patient as a machine, which is why we believe a machine could so easily replace them.2 But that’s not how decision-making works. It’s laden with emotion and values. It’s fraught with negotiation. Most people aren’t sitting at home pondering their goals of care. Instead, they develop their decisions and hone their values in the moment, when confronted with a choice. Clinicians are integral to developing that choice; they are, as Jennifer Blumenthal-Barby has argued, “choice architects.” There’s value in this negotiation, for the patient, their family, and the clinician. It’s unlikely clinicians can become fully themselves, as clinicians, without shaping their compassion amidst repeated bouts of this kind of negotiation. Families are afforded one way to show how they care through this kind of negotiation. Patients wrestle with their values through community in this kind of negotiation. The negotiation is about what matters most. It seeks to answer: how can we care well for this person now? As Brudney observed, in many cases the language of “what would he want” stands in for this work clinicians and families do together. Edwin Jesudason remarks that, instead of relying on AI to relieve us of surrogate decision-making,
“We can adopt a different stance. Deciding for incapacitated others should undo us, and we should take great care to see it does. Instead of outsourcing such work to AI, in the name of accuracy, ease or efficiency, we can and should suffer it together, compassionately. Honouring the role, we can take comfort that we’re more than just prediction machines. Our creative humanity, burnished by suffering, helps us counsel and console. Facing loss and hurt, we’re remade by each other. And if we’re not—we’re missing someone.”
The PPP for the unbefriended patient veils responsibility. One case where I can imagine the PPP would provide substantial benefit is helping clinicians decide for patients with no surrogate decision-maker (so-called “unbefriended patients,” or “unrepresented patients” - they go by several names). Every health system handles these situations differently. One example requires two attending physicians to agree on interventions requiring signature consent, and a multi-disciplinary committee (not involving frontline clinicians) to make decisions regarding life-sustaining therapy. Usually this is done while requesting that the state appoint a guardian.
But in cases where the surrogate decision-maker has no other information about this person’s preferences except what the PPP might provide, it’s less clear what it means for the surrogate decision-maker to be “responsible.” They don’t, in any real way, own this decision. The PPP could very well make the decision without them. What the human surrogate decision-maker becomes is a scapegoat - the one who signs their name next to what the PPP provides but with no real authority beyond that. Whatever happens to the patient is the result of the PPP’s output, not any decision-making process on the part of the humans involved.
But what about that community-level consent that protects the unconscious patient arriving in the emergency department? We might say the PPP offers a more complex form of what goes on there. The emergency department clinicians not only have permission but are obligated to provide life-saving interventions until they learn they should do otherwise (e.g., a surrogate decision-maker declines further intervention). Society consents for this person who cannot otherwise decide, until clinicians find someone to speak for them. Society has deliberated, via the legislative process, about the circumstances in which clinicians are permitted/obligated to intervene. So, too, society could deliberate, via the PPP (programming, policy, possibly legislation) about what kinds of output it will provide clinicians about unbefriended patients.
This becomes unwieldy fast. The state has a deeply rooted interest in preserving life, so it makes sense that society would consent to using some kind of “reasonable person” standard about incapacitated people in emergency settings. But what would it mean to legislate decisions about nuanced choices regarding cancer treatment, blood transfusions, surgery, and so on? What would it mean to legislate the weighing of longevity, function, and comfort? I doubt it can be done.
Not only that, but the fact that we know nothing about the unbefriended patient is just pushed back a step with the PPP, behind the pall of technology. Whatever nebulous methods and intuitions clinicians, committees, and guardians use to reach their decisions for someone who cannot express any kind of preference, the PPP’s methods will be just as nebulous. For example, to offer a decision in the care of a person with severe dementia who has no known care preferences and no one who could even offer insight into their values (let alone a surrogate decision-maker), the PPP would need to be informed not by the preferences of the general population but - impossibly - by the preferences of other people with severe dementia (just as we’d want a PPP helping make decisions for patients with cancer to be informed by the perspectives of people with cancer). Otherwise you’re just creating AI paternalism.
Burdening One Another
Gilbert Meilaender, when considering surrogate decision-making at the end of life, claims that he wants to burden his loved ones:
“Is this not in large measure what it means to belong to a family: to burden each other—and to find, almost miraculously, that others are willing, even happy, to carry such burdens? Families would not have the significance they do for us if they did not, in fact, give us a claim upon each other. At least in this sphere of life we do not come together as autonomous individuals freely contracting with each other. We simply find ourselves thrown together and asked to share the burdens of life while learning to care for each other. We may often resent such claims on our time and energies.
…
I hope, therefore, that I will have the good sense to empower my wife, while she is able, to make such decisions for me—though I know full well that we do not always agree about what is the best care in end-of-life circumstances. That disagreement doesn’t bother me at all. As long as she avoids the futile question, “What would he have wanted?” and contents herself with the (difficult enough) question, “What is best for him now?” I will have no quarrel with her. Moreover, this approach is, I think, less likely to encourage her to make the moral mistake of asking, “Is his life a benefit to him (i.e., a life worth living)?” and more likely to encourage her to ask, “What can we do to benefit the life he still has?” No doubt this will be a burden to her. No doubt she will bear the burden better than I would. No doubt it will be only the last in a long history of burdens she has borne for me. But then, mystery and continuous miracle that it is, she loves me. And because she does, I must of course be a burden to her.”
Now, you might say that’s all well and good for ol’ Meilaender. Let him make his choices, let me make mine. Some people don’t have families, or they don’t trust their families, or they want to make different arrangements with their families.3 But consider for a moment: there are people to whom you’re obligated. You may have chosen some of those obligations at first, but now you can’t so easily opt out. Meaningful community requires such obligations. We need one another. That interdependence is burdensome not because people themselves become burdens, but because they bear burdens they need help in bearing. Will we help them?
The PPP is an answer to a technically-framed question, but it’s a question we’ve been asking all along about surrogate decision-makers. It sees these burden-bearers as machines for making decisions, rather than people trying to care well for someone else - whether it be their spouse, their friend, or a ward of the state assigned to them. If we follow the PPP toward the freedom it promises, we’ll venture further down the path of making the clinical encounter more efficient and less human. Being human isn’t about being free from all obligations, but being free for the obligations of living and loving.
1. It’s also a terrible question, as others have noted and I discuss in the essay on surrogate decision-making. A much better replacement: “What could they have chosen, given all we know about them?”
2. Warning! Klara and the Sun spoiler alert! (which is why I’ve relegated this to the footnotes, even though it lets us behold where the logic of the PPP terminates): In Klara and the Sun by Kazuo Ishiguro, Josie is dying. She suffered an adverse effect from a procedure that was intended to boost her intelligence. Her parents solicit the help of a scientist, Mr. Capaldi, to transfer her consciousness to Klara, an “artificial friend.” Klara is purchased, ostensibly to be Josie’s friend, while her parents work with the hope of re-creating their daughter. Here’s a snippet of their dialogue which I think shows us, in its final form, the hope of technical processes to alleviate the burden of caring for one another:
‘So you see what’s being asked of you, Klara,’ Mr Capaldi said. ‘You’re not being required simply to mimic Josie’s outward behavior. You’re being asked to continue her for Chrissie [Josie’s mother]. And for everyone who loves Josie.’
‘But is that going to be possible?’ the Mother said. ‘Could she really continue Josie for me?’
‘Yes, she can,’ Mr Capaldi said. ‘And now Klara’s completed the survey up there, I’ll be able to give you scientific proof of it. Proof she’s already well on her way to accessing quite comprehensively all of Josie’s impulses and desires. The trouble is, Chrissie, you’re like me. We’re both of us sentimental. We can’t help it. Our generation still carry the old feelings. A part of us refuses to let go. The part that wants to keep believing there’s something unreachable inside each of us. Something that’s unique and won’t transfer. But there’s nothing like that, we know that now. You know that. For people our age it’s a hard one to let go. We have to let it go, Chrissie. There’s nothing there. Nothing inside Josie that’s beyond the Klaras of this world to continue. The second Josie won’t be a copy. She’ll be the exact same and you’ll have every right to love her just as you love Josie now. It’s not faith you need. Only rationality. I had to do it, it was tough but now it works for me just fine. And it will for you.’
3. The unbefriended patient is someone in truly tragic, but thankfully uncommon, circumstances. For the vast majority of people, there’s someone who cares, who is willing to shoulder the responsibility of caring for this person by deciding for them, if only someone would show them the way.
A very interesting article. However, one thing I have become more acutely aware of as I move through my career is that we in medicine often treat surrogates as tools. By this I mean that we enter a room, provide them with highly technical information, and then expect that they, much like a PPP, will provide us the decision so we can move through our day - in this way offloading from the provider much of the emotional burden of the decision. It becomes a far different discussion when the clinician views the surrogate as a co-patient, one who is also suffering the disease. It makes the goals of care discussions longer, but in many ways richer.
Another great article.
Joshua, I agree with you.
But what seems most compelling about some of the AI tools is their ability to carry on a dialogue. Imagine a machine surrogate that discussed end-of-life with someone, and captured the record of the person's responses. Or led that discussion with the family?
The machine could be deeply knowledgeable about particular cultural traditions. Even today, you can prompt LLMs to answer, "As if you were a Catholic ethicist..." What you will get is largely accurate, although rarely deep. Near-future LLMs might be better at this.
I wouldn't want to use such a machine as my surrogate (but I have faith in my wife's ability to be my surrogate). But perhaps a machine with which I had had an extended dialogue might be consulted by a physician or family member to get a sense of what I would think?