“In the morning they came up out of the ravine and took to the road again. He'd carved the boy a flute from a piece of roadside cane and he took it from his coat and gave it to him. The boy took it wordlessly. After a while he fell back and after a while the man could hear him playing. A formless music for the age to come. Or perhaps the last music on earth called up from out of the ashes of its ruin. The man turned and looked back at him. He was lost in concentration. The man thought he seemed some sad and solitary changeling child announcing the arrival of a traveling spectacle in shire and village who does not know that behind him the players have all been carried off by wolves.”
This is a day in the life of “the man” and “the boy” in Cormac McCarthy’s The Road. They trek across a barren landscape, ruined by some unnamed disaster, sometimes starving, sometimes hiding, often both. This world had faced, as we call it, the apocalypse. The Greek word doesn’t mean “end of the world,” but rather “unveiling” or “revelation,” which I suppose carries its own world-ending possibilities. What’s unveiled in McCarthy’s “post-apocalyptic” world, with its dead homes, dead people, and almost-dead hope?
The question, before I respond to it, brings to mind people I know living post-apocalyptic lives every day. They scavenge for hope after the news lands on them as if out of the sky: “You have cancer,” “Your kidneys aren’t working anymore,” “Time is short for your Mom,” “You can’t work anymore.” No brimstone has fallen, but the revelation has come.
Sometimes the apocalypse is more subtle. Someone lies in bed. They’re alone. They’ve been alone. They’ll die alone. If they’re in a hospice house, a volunteer or staff person is often nearby, but they’ll pass from the world unknown. The people who care for them in their estrangement testify to their human dignity, but no one mourns the way a spouse or child would mourn. Others, without hospice, die alone and undiscovered in old apartments. The revelation of a lonely person is a terrible thing.
Another apocalypse. A man estranged from his daughter as he dies from cancer. At the end, she cries for him. She had her father restored to her; they were reconciled. She mourns that they had only a year, but she also cries for joy that it was a year they wouldn’t have had at all if cancer hadn’t woken them to their need for one another. The revelation of love, like a candle carried against the wind, is a beautiful, fragile thing.
Our whole world is facing an apocalypse right now. Living and working together is hard. In bygone eras, people would talk about becoming patient, humble, generous, and the like. They would cultivate practices and communities around how to do this right. They would prepare themselves for the hard work of life together. Now we hope our technology can save us from the hard work of discerning the purpose for living, the substance of community, and the burden of caring. Elizabeth Broadbent and colleagues wrote an editorial discussing what robotics has to offer to human community:
“Both pet-like and humanoid companion robots have been shown to decrease loneliness and provide companionship either directly, via connecting users to others, or through reminders for social appointments. A systematic review of social robots for older adults found positive effects on engagement, interaction, and well-being, as well as reduced stress and loneliness. Robots may also help older people remain healthy and active in their homes rather than moving to assisted living centers.”
The promises are substantial, even reaching beyond the grave: “Advances in AI-driven speech synthesis mean that robots can be made to speak like old friends or even loved ones that have passed away.”
Clinicians are hopeful too: “Sixty-nine percent agreed that robots could provide companionship and augment the influence of human-to-human interaction for older adults suffering from social isolation and only 16% disagreed, whereas 15% were unsure. Seventy percent agreed that companion robots could improve mental health among isolated patients. Respondents felt that the main limitation was that robots could not yet replicate the full complexity of human companionship and hence that they were not ‘real companionship.’”
The authors admit this isn’t ideal: “We are not proposing that robot companions replace human companions, but robots may provide a bridge between what many isolated individuals need and what society currently provides.” Nevertheless, they end on a hopeful note: “With the right ethical guidelines, we may be able to build on current work to use robots to create a healthier society.”
It’s only a matter of time before the technology affords us the opportunity to provide lonely people with artificial friends (as Kazuo Ishiguro called them in Klara and the Sun). What we need are the “right ethical guidelines” - like how robots will store the data they receive (privacy issues) and what their responsibilities are (e.g., alerting emergency services). Those, too, will fall into place with enough ingenuity.
The Apocalypse of Our Daydreams
People who propose technical solutions to deep existential problems flourish what I worry are right answers to the wrong questions. The work appears predicated on what one writer has called the “four antihumanisms”: we’re stupid, obsolete, fragile, and hateful. In human relationships, all these problems come to the fore: people don’t show up, they lie, they lose their patience, they betray, they stumble. Machines promise to remediate or even obviate these messy human problems. We’ve lost the capacity to reform and discipline our hearts and minds, so we’ll instead use other skills to fashion out of silicon a shadow of what we hope we’d become. Our machines will be more reliable than we are.
But whether a fully functional robotic companion becomes a reality or not (we’ve been let down by the hype of other technologies), the mere pursuit of the promise is its own apocalypse, revealing a deep hope: maybe these haunting problems will respond to technical fixes, fixes that require nothing more of me than my credit card. We hope that our tools will save us because we gave up long ago on anything or anyone else doing so.
What do we see in this post-apocalyptic landscape? Well, I hope we come to discover that machines aren’t humans and, more importantly, humans aren’t machines. This means that while they might help with some things, they can never replace us in the most important things. That’s tricky, though: what’s most important when life has lost any coherent sense of the telos toward which it should be oriented?
Wendell Berry reflects on one salient difference between humans and machines:
“A machine, if shot into outer space never to return, would simply go on and on being a machine; after it ran out of fuel or traveled beyond guidance, it would still be a machine. A human mind, necessarily embodied, if shot into outer space never to return, would die as soon as it went beyond its sustaining connections and references. How far from home can a mind go and still be a mind? Probably no scientist has yet made this measurement, but we can answer confidently: Not too far. How far can a machine go from home (supposing, for the sake of argument, that a machine has a home) and still be a machine? Theoretically, if it is not destroyed, it can go on forever.”
This sounds like Berry isn’t speaking in favor of the human mind. It’s limited and fragile. Not only can it not survive in the vacuum of space, but it needs comfort, sleep, respect. It needs social references, also known as a home. It needs.
But what might, at first glance, appear a deficiency, Berry would argue, is a strength, indeed the essence. Who are we without our “sustaining connections and references”? These are the commitments that make us who we are. These commitments give content to human living. When we look to machines, the tools that have no home, to remediate the deficiencies that make us homeless, we’ll only drive ourselves further into the wilderness of alienated longing and loneliness.
Berry offers a repudiation to the way of the machine and a suggestion for resetting our perspective:
“What I am against—and without a minute’s hesitation or apology—is our slovenly willingness to allow machines and the idea of the machine to prescribe the terms and conditions of the lives of creatures, which we have allowed increasingly for the last two centuries, and are still allowing, at an incalculable cost to other creatures and to ourselves. If we state the problem that way, then we can see that the way to correct our error, and so deliver ourselves from our own destructiveness, is to quit using our technological capability as the reference point and standard of our economic life. We will instead have to measure our economy by the health of the ecosystems and human communities where we do our work.”
Our grasping after technological solutions in caregiving, then, might contribute in substantial ways to the economic status quo. It’s not enough to just avoid robotics. Our entire way of thinking about “productivity” must change. There’s very little money to be made off helping nursing assistants, physicians, and physical therapists grow in compassion; there’s a lot of money to be made off selling robots to assuage the clinician’s obsolescence and replace them where they can.
Shannon Vallor spoke with Illah Nourbakhsh and Mercer Gary at The Hastings Center on the question of whether AI should care for us. Economics made an appearance in their discussion as well. Vallor remarked that the responses caregivers receive to their caregiving help them become better caregivers - “You shouldn’t have said that,” “Where were you?” “Thank you,” the cold shoulder, the growing trust, and so on. We feel our way toward truth, goodness, and beauty together. She argues that without this feedback loop, human communities would lose the capacity to learn how to care. Financial incentives push us toward technical fixes and further away from the presence required to cultivate the capacity to care, as evidenced by the low salaries of those doing the most rigorous, hands-on caring work. I would add that we don’t even need the real robotic companion for this to happen. When we’re transfixed by the promise of what technology might afford us, we neglect caring capacities and communities. Machine-like reliability and precision become that much more attractive.
In the pallid light of this apocalypse, we see that the promise of the machine threatens to set the standard for human companionship. If we no longer care about those qualities that make human caring possible (they’ve atrophied), we’ll replace them with qualities manufactured within the machine. This is a standard we’re unable to meet. Imagine some future world in which a robot companion can work throughout the night to get all the chores finished, can guide an older adult with cognitive impairment through all the details of their day without fail, never complains, and has no other job except to shepherd this person. Humans can’t keep up with that except through ever more rigorous application of technology.
As a result, the promise of robotic companionship refocuses where we spend our time and money. Broadbent and colleagues argue that this could be a bridge to a world in which our priorities shift and we’re able to care more about human companionship. But time and money spent investing in robotic companionship is not time and money spent building a more humane world. These are two different worlds. One can’t be a bridge to the other. We already live in the world of robotic companions even if they don’t yet exist. It’s this world, the one we’re in right now, that supplants virtue with technique in pursuit of efficiency for its own sake and thus strives toward the robotic companion. With or without these companions, our world is becoming less and less responsive to human needs. Instead it manufactures ever less nourishing substitutes. This world, as I’ve observed of the hospital, offers blankets without warmth, food without fellowship, and sleep without rest.
The thing with less nourishing substitutes, like fast food, is that they’re just so easy. They fill our bellies with something, but over a long period of time and with repeated exposure, they harm us. The same could be true of the robot companion that demands so little - just payment. It doesn’t matter if we feel satiated and our aching loneliness is eased for the moment if we need not do the messy work of becoming persons more suited for human community. The anti-social ramifications of social technology are insidious. This is what we’ve chosen because the customer’s always right, never mind that we didn’t realize what we needed wasn’t for sale anyway.
Well, what if this is what people want? Maybe they’ll write in their advance directives: “When I lose the capacity to recognize other people, just make sure I have a robotic companion to keep me company so I don’t burden my family.” We should respect that, shouldn’t we?
We should think again. We reveal how much we value those who are assigned a robotic companion because no human bothers to spend any time with them. We reveal the type of relationship we believe these people are owed. We reveal our disbelief in the value of human presence. By offering robotic companionship, we devalue ourselves as potential companions: there’s nothing special about me that a robot couldn’t supply. The aforementioned advance directive becomes a document to codify all these evaluations so that they could then be taught to family members and clinicians alike.
What are some of the lessons? If there’s nothing special about me such that I could be replaced by a robot, then I come to evaluate my own life against the machine. I’ll be no better than a robot by whatever metrics the robotic rationale sets for me, the very concern I shared from Berry earlier. Even if someone tells their human companion, “I want you. I don’t want no stinkin’ robot,” the miasma of forsaken efficiency will hang over them. We may at some point stumble upon the realization that it’s irresponsible to entrust caregiving to some flighty human. With the inchoate hope of such reliability on the distant horizon, we invest in robotics research, even as we neglect the investment of time needed to cultivate the virtues required to use our tools well.
But have we learned to use our tools well? In medicine, we’re still reeling from the last big apocalypse: the advent of the electronic medical record (EMR). I really appreciate the EMR. It was a big improvement over the paper record, though I know some older clinicians disagree about that. Nevertheless, it has fundamentally altered the relationship many (most?) clinicians have with their patients for the worse. These clinicians spend most of their time shepherding the vast amount of data in the EMR rather than being present with their patient. Some can’t even peel their eyes away from it when in the room with their patient! The EMR didn’t free them up, either temporally, emotionally, or physically, to do more of what they hoped to do. Now EMRs have become such bureaucratic monsters, there is no taming the tentacles they’ve wrapped around the practice of medicine. Why would we think any other technological development would turn out differently? What did we learn - about the tools and about ourselves? Did we learn?
Which brings us to the darkest corner of robotic caregiver apocalypse. Maybe we want machines to give care because we have an ambivalent relationship with both ourselves as human and our hope to become something other than human. Machines promise to extend our agency while they indict those very foibles that make us human. In Klara and the Sun, the Mother speaks to Klara, her daughter’s artificial friend:
“‘It must be nice sometimes to have no feelings. I envy you.’
I considered this, then said: ‘I believe I have many feelings. The more I observe, the more feelings become available to me.’
She laughed unexpectedly, making me start. ‘In that case,’ she said, ‘maybe you shouldn’t be so keen to observe.’”
The Mother is a tragic character. Elsewhere in the story, we see she longs after the hope, frail and fading in the wake of Klara’s artificial apocalypse, that there’s something inside of us that can’t be reached and replicated by technology. But she also wrestles with that hope and, in this case, flippantly counsels Klara to avoid any attempt to become more human if she can.
The humanity isn’t worth the pain.
What We Need
There are things machines can’t deliver. Even the promise machines make when they don’t yet exist can corrode the quality of community we do have and impair the possibility of developing meaningful relationships. We don’t want to prepare our hearts, budgets, and calendars for humane community; we instead focus on what technology can efficiently offer.
What do we need to be in relationships with other people? I cared for someone once whose brother sat with her as she lay dying. He was at her bedside every day. He watched his sister’s chest rise and fall with every breath. He might say a word or two, but otherwise was quiet. Reminiscing, perhaps; I don’t know. We sometimes call this “sitting vigil” in hospice. We spoke little. I felt like I was treading on holy ground when I visited (and I was the visitor despite working there).
What did he need to be able to do that? Things like patience, empathy, care (even love!), and generosity (with his time and his money, for in this instance he took off from work). We need communities to sustain and cultivate these; they don’t come out of thin air. Palliative care in some ways depends on this. There’s very little technological mediation in my relationships with my patients. I’m there with them, and we talk about what matters most and the trade-offs they’re willing to make in pursuit of that. But those conversations are hard when so much threatens to distract us. These conversations and relationships require me to be a certain type of person. I can’t usually hide behind a procedure or a medication to avoid these challenges. Even when I can, that doesn’t help when the deeper matters show themselves again.
Let’s return to the question with which I started: What does McCarthy’s “post-apocalyptic” world show us? What can we see about what we need from one another?
At one point in the story, a starving thief stole almost everything the man and the boy had - the few cans of food, the plastic tarp that shielded them from rain and snow, the blankets that kept them warm. All that remained were the rags on their backs and feet, and the man’s pistol. This theft would condemn them to death.
Soon they found the thief and, under threat of vigilante justice (for that was the only justice in this world), the man recovered their goods. Not only that, but he forced the thief to give up his knife and clothing as well, condemning him to death. The boy begged for mercy, just as he had begged for them to take in and care for others during the journey, and just as before, the man denied him. It wasn’t safe. They left the thief naked on the side of the road, where he remained as the boy looked over his shoulder:
He’s not gone, the boy said. He looked up. His face streaked with soot. He’s not.
What do you want to do?
Just help him, Papa. Just help him.
The man looked back up the road.
He was just hungry, Papa. He’s going to die.
He’s going to die anyway.
He’s so scared, Papa.
The man squatted and looked at him. I’m scared, he said. Do you understand? I’m scared.
The boy didn’t answer. He just sat there with his head bowed, sobbing.
You’re not the one who has to worry about everything.
The boy said something but he couldnt understand him. What? he said.
He looked up, his wet and grimy face. Yes, I am, he said. I am the one.
We might believe the apocalypse was the disaster that killed the world. But this story, after the disaster, is the true apocalypse. Most people are revealed for who they are: so desperate to survive they’ll descend to theft, murder, and cannibalism. But in the man and his son, we see another revelation.
The man “worries about everything” - he and the boy must get to the coast, they must find their next meal, they must hide, they must survive. He plots their course as best he knows how. Why does he do it? He does it for the boy, his son.
The boy “worries about everything” - he cares about what happened to the people whose bodies are strewn in their path. He wants to be generous to another boy they see, and help others in need. He wants to have mercy on the thief. Why does he do it? Because whatever happened to the world hasn’t yet incinerated his heart. He loves his dad, and wants to invite others into that.
The world is ruined, but despite that, the man and boy persevere together. This little human community sustains them, even as it wavers under the weight of a dead planet. These two people care for each other as only people could. We learn more about the man and the boy because the other is there, but those are things we could have learned had one or the other been a robot. Robots often serve that mirroring function in stories, like in Klara and the Sun, I, Robot, R.U.R., and Star Wars. We get to see the other characters, and ourselves, more clearly through them.
With the man and the boy together in this apocalypse, we see something only that relationship could show us: what it means for one to care for the other even as the other cares for them in return. The Road shows us that at the end of all human ingenuity, when every tool has failed, two people might carry each other along in love. That’s what we need: a willingness to bear with our companions as we accompany them in love, even when it feels like we’re wandering in a wasteland. It’s something only humans can share. It can’t be manufactured or sold, it’s not novel or flashy, and it can’t be distilled into a five-point plan. Instead, maybe the robot apocalypse will give us eyes to see one another for the first time in a long time.
Trajectories
Following a meandering reading-path, sharing some brief commentary along the way.
Katherine Stamford reflects on something I encounter almost every day: inadequate informed consent regarding code status. This conversation is sped past by many inpatient clinicians (and not even addressed by outpatient clinicians) because the answer is presumed or the content uncomfortable (e.g., the clinician isn’t ready to wade into prognosis yet). Far more important than code status, though, is an appreciation for the patient’s overall goals of care. It’s not uncommon, when I ask another clinician what they think of someone’s goals of care, for them to respond with an answer related to code status. We need a deeper grasp of what these goals are, and how code status fits in that context.
Jennifer McCormick reflects on how informed consent might be bolstered to resist the belief (from patients, clinicians, and researchers alike) that research is intended to benefit individual patients, rather than produce generalizable scientific knowledge. While there is a chance in some trials that individual patients could benefit (e.g., phase III clinical trials), individual participant benefit is not the primary reason these trials exist. However, I think we’d need to go beyond informed consent to resist the therapeutic misconception. The whole milieu in which the research takes place would need to change. The room would need to feel different; the history and physical exam would need to be modified in some way to indicate this isn’t a typical clinical encounter; the researchers would need to appear different (e.g., not wearing white coats even if, ironically, white coats were brought to the bedside from the lab).
writes about her experience with a mysterious (eventually diagnosed) illness and the phenomenon of being (or feeling like) an “illness faker.” She recounts some of the people who have written memoirs of their illness experiences, all sharing the thread of being disbelieved. Although it doesn’t excuse the behavior, one reason clinicians are quick to dismiss people who don’t have a diagnosable illness is that it’s hard to endure one’s own powerlessness as a supposed healer before something that isn’t easily fixed. Medical school doesn’t prepare you for that. Instead, it prepares you to get the right answer through the rigorous application of technology. Compassion, without accompanying wisdom, withers under the heat of chronic, mysterious suffering. Again, that’s not an excuse, but hopefully a more productive call for clinicians to look beyond mere technique to better care for people with these types of problems.
Closing Thoughts
DOMIN: Yes, Alquist, they will. Yes, Miss Glory, they will. But in ten years Rossum’s Universal Robots will produce so much corn, so much cloth, so much everything, that things will be practically without price. There will be no poverty. All work will be done by living machines. Everybody will be free from worry and liberated from the degradation of labor. Everybody will live only to perfect himself.
HELENA: Will he?
DOMIN: Of course. It’s bound to happen. But then the servitude of man to man and the enslavement of man to matter will cease. Of course, terrible things may happen at first, but that simply can’t be avoided. Nobody will get bread at the price of life and hatred. The Robots will wash the feet of the beggar and prepare a bed for him in his house.
ALQUIST: Domin, Domin. What you say sounds too much like Paradise. There was something good in service and something great in humility. There was some kind of virtue in toil and weariness.
DOMIN: Perhaps. But we cannot reckon with what is lost when we start out to transform the world. Man shall be free and supreme; he shall have no other aim, no other labor, no other care than to perfect himself. He shall serve neither matter nor man. He will not be a machine and a device for production. He will be Lord of creation.
BUSMAN: Amen.
FABRY: So be it.
R.U.R., Karel Čapek
Most nascent robots tend to reduce the mental health of the humans who must work alongside them. Will this change as the software in robots comes to feel more like ChatGPT? An entire generation is now growing up with A.I., just as the generation before them grew up with mobile devices and unlimited scrolling. Where, indeed, is the consent for such a world?
The general purpose robots will come faster than many expect.