6 Comments

Joshua, I agree with you.

But what seems most compelling about some of the AI tools is their ability to carry on a dialogue. Imagine a machine surrogate that discussed end-of-life care with someone and captured a record of the person's responses. Or one that led that discussion with the family?

The machine could be deeply knowledgeable about particular cultural traditions. Even today, you can prompt LLMs to answer "as if you were a Catholic ethicist..." What you get back is largely accurate, although rarely deep. Near-future LLMs might be better at this.
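For what it's worth, that kind of persona prompting is trivial to set up today. Here's a minimal sketch using the OpenAI Python client; the model name, system prompt, and sample question are all illustrative assumptions on my part, not a recommendation of any particular setup.

```python
# Minimal sketch of persona prompting with the OpenAI Python client.
# The model name, system prompt, and question are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model choice; any chat model would do
    messages=[
        {
            "role": "system",
            "content": (
                "Answer as if you were a Catholic ethicist advising a "
                "family on end-of-life decisions. Ground your answers in "
                "that tradition, and say plainly when you are unsure."
            ),
        },
        {
            "role": "user",
            "content": "Is it permissible to decline artificial nutrition?",
        },
    ],
)

print(response.choices[0].message.content)
```

The system message does all the work here: it frames every subsequent answer through the chosen tradition, which is exactly why the output tends to be broadly accurate but rarely deep.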

I wouldn't want to use such a machine as my surrogate (though I have faith in my wife's ability to serve as mine). But perhaps a machine with which I had had an extended dialogue might be consulted by a physician or family member to get a sense of what I would think?

author

Hey Bill - I remain hesitant to consider outsourcing these conversations to a machine. Insofar as what you're describing could be likened to, say, reviewing your journal, it's one step removed from doing so. The intermediary is the machine, and it determines, by whatever means it uses, which answers to provide.

I'm realistic that the technology is being developed, and it won't surprise me if something between what I've described and what you've described is prevalent in clinical practice in the next 10-20 years. I worry that such a development will erode our capacity to care and our capacity to develop a capacity to care.


Thanks, Joshua.

On the one hand, I share your concern about caring.

On the other hand, all the futures I can imagine involve human/machine cognitive integration. If that's correct, we need to think through how we might engineer that integration so that it advances our values.

author

Having read Neil Postman, Jacques Ellul, Ivan Illich, L.M. Sacasas, and others, I struggle to imagine how that might happen.

Most of these technologies are a double-edged sword. Take the electronic medical record, for example. I much prefer it to a paper chart, but it's undeniable that the EMR has shaped the clinical relationship in all sorts of unsavory ways. I've reached a detente with it, but I hear patients describe clinicians who never look at them and spend the majority of the visit typing away. My own experience as a patient shows me this as well. A technology that has brought some good has also, in the name of efficiency, reshaped the clinical encounter in ways we couldn't have imagined beforehand.

Look also at how large swaths of "social media" have become "antisocial media." The tail wags the dog, people doomscroll, etc. The technology shapes us in ways we don't want.

I think the way we prepare ourselves to use our tools wisely and well is to develop the kinds of virtues that allow us to do so. But medical education does not have a focus on virtue ethics for clinicians, nor does broader society for clinicians and patients alike. While I don't think AI is necessarily bad, I don't think we're prepared as a society (and maybe as a species) for the formative impact it will have on us.


"I think the way we prepare ourselves to use our tools wisely and well is to develop the kinds of virtues that allow us to do so. But medical education does not have a focus on virtue ethics for clinicians, nor does broader society for clinicians and patients alike. While I don't think AI is necessarily bad, I don't think we're prepared as a society (and maybe as a species) for the formative impact it will have on us."

I could not agree more.


I also struggle to imagine how we can make these technologies work for us, but again, we have to try.

I am very lucky--and privileged--to have skilled and humane physicians and nurses caring for me. However, I also get great benefit from my access to the EMR. And without Substack, I would never have found _your writing_.

Your post raises the right questions. I hope to write something about the possible answers soon.
