In Guardians of the Galaxy, Vol. 3, we meet the High Evolutionary. Through centuries of experiments, he hopes to craft the perfect species and father the perfect civilization. His early experiments are seemingly grotesque amalgamations of metal and fur, but they become increasingly refined: smarter, stronger, even more peaceful. What a noble goal! Right? Well…
I have made the case - and stand by it still - that medical technology can intervene on the human body in three ways to sustain and restore health: by helping us live longer, by helping us do something (e.g., walk, eat, reach a milestone), and by helping to ease discomfort. There are at least two other uses for medical technology that exist outside this paradigm. The first is death, which is becoming increasingly popular. I’ve addressed it before, frequently return to it, and will likely do so again and again given its ascendancy. I don’t think death can be a goal of care, and I also don’t think a society should offer to kill those who suffer.
There’s another use for medical technology: enhancement. I’ve hesitated to tackle this topic because I’m worried I can’t do it justice. I’m not sure I have a handle on its nuances. But in the spirit of thinking out loud and in conversation, I’m going to enter in, tentative though I may be.
What is an enhancement?
Some medical technologies are only therapeutic. Most antibiotics, for example, have no other use apart from treating infections. Stimulants, on the other hand, have at least two non-therapeutic uses. They can be used recreationally, or they can be used to improve otherwise healthy attention and energy to even higher levels. Some believe this is helpful for test taking or for long drives. So, too, anabolic steroids might improve the performance of an athlete. Cosmetic surgery is sometimes an enhancement by bringing human form into alignment with an aesthetic standard. Controversially, contraception can be considered an enhancement, insofar as it’s being used for purposes that don’t restore or sustain health. Tattoos might fall in the category of enhancements. We can also consider more theoretical enhancements: bolstering brainpower with implanted microchips, engineering germ-line genetics for desirable traits, or extending the human lifespan to once unthinkable ages.
As you might be thinking, the line between therapy and enhancement is blurry. It also hinges on how one defines “health.” Because every experience is mediated through our bodies, almost anything can be tied back to health. A definition that is too broad can result in a total takeover by health fanaticism. Health scientists and clinicians would have something to say, as authorities, about everything, and any technology might be owed to anyone at any time. I’m not interested in exploring the chaos that ensues from that line of thinking right now. Instead, if we narrow our understanding of health to the one I’ve been using in this newsletter, then we’re in a better position to grapple with enhancements. Health is both the objective well-functioning wholeness of an organism and the subjective experience of being present and having agency in the world.
I also think we can set aside things we do to the body that aren’t explicitly biomedical. Again, definitions can be blurry here, but things like tattoos, hairstyles, makeup, piercings, and clothing, while perhaps interesting to consider in their own right, we’ll set aside. So, an enhancement for our purposes here is a biomedical intervention that improves something about the body beyond health. I recognize some of this is arbitrary - what’s the fundamental ethical difference between getting your nose pierced by a non-clinician and getting an elective rhinoplasty from a plastic surgeon (except the complexity of the procedure) - but if you’ll bear with me, I hope we’ll reach fruitful territory.
Are enhancements a good thing?
Joseph Vukov addresses this deceptively straightforward question in his book Perils of Perfection. When trying to consider why one thing might be good and another bad, you need to identify morally salient distinguishing factors between the two. Vukov uses the fictional example of someone who claims it’s morally permissible to drink wine but not beer. This person now needs to find some morally salient feature that separates these two beverages. The fact that wine is made from grapes, for example, doesn’t provide any sufficient basis for their claim. Murder, on the other hand, is differentiated from manslaughter by considering intention, a morally salient difference between the two acts. Vukov calls this the parity principle, meaning “two things should be treated as morally equivalent unless we can identify some morally relevant feature that one has and the other lacks.”
Consider coffee. Vukov points to coffee as a clear example of an enhancement.1 What’s the morally salient difference between the caffeine found in coffee and methylphenidate (a prescription stimulant, the likes of which might be desired to enhance academic tasks)? If you say methylphenidate is more likely to be abused, we’d need to know how frequently caffeine is abused and what the toll of that is. Or maybe the risks of an adverse effect are greater with methylphenidate than caffeine - you’d need to study that too. I’m not making a case for over-the-counter methylphenidate. Instead, following Vukov, I want to observe two things: first, we already engage in a great deal of enhancement. Second, it’s not so easy to pick out what makes any particular enhancement problematic, or even how we should regulate enhancements.
Vukov argues that we needn’t scrutinize individual technologies for the problem. Rather, if a technology is misused or inequitably distributed, we ourselves are the problem. Some might bristle at the language he uses to describe this, grounded as it is in Christian thought, but the reality is common knowledge regardless of what you call it: humans are fallen creatures. We’re fallible, frail, and biased. Not only are we prone to error, but we’re prone to evil. This state should temper our expectations for any technology that claims to be an enhancement because the object being enhanced (the human) as well as the one enhancing (the human) are so limited.2
Consider therapies. Sometimes we use therapies really well: accurately, precisely, and in a way that restores people to health while mitigating the pharmakon’s impact on other aspects of living. In other cases, though, therapies are misused: over-utilized, under-utilized, inequitably allocated, or twisted to serve roles for which they were never intended. We so often lack the wisdom to handle our therapies well. What makes us think we can wisely handle enhancements? Indeed, many of us struggle to keep the more mundane enhancements in our lives - from coffee to the smart phone - from taking over. Some have already succumbed to their power, drinking a pot of coffee every day just to get by while tethered to their email and social media at all hours.
But Vukov doesn’t leave us with only our fallenness. He also wants us to consider our fundamental dignity. It’s axiomatic, in many countries’ constitutions, in the UN’s Universal Declaration of Human Rights, and in many religions, that each person has inherent dignity and worth that isn’t contingent on anything except being human. Enhancements, by their very nature, pull for evaluations from us. So, we ask, “Is it better that with this enhancement, Sally will be smarter by these measures?” Perhaps it’s a good thing for someone to be smart, and maybe it’s even better for them to be smarter, but implicit in that evaluation is that it’s worse for someone to be less smart. We’re a hairsbreadth away from claiming that someone is better than someone else due to the differences in their intelligence. We’re at risk of either devaluing the intrinsic dignity of every human, or elevating the worth of some over all others based on some particular characteristic.
If you want to avoid that pitfall, then you also lose the impetus for why you might pursue an enhancement in the first place. Without resorting to ableism, one struggles to claim that an enhancement should be preferred over one’s present circumstances. Is a life with caffeine or a smart phone a better life than one without? Is a child who has been engineered with incredible athleticism and intelligence better than one who was created the old-fashioned way?
The “fallen dignity” view, as Vukov calls it, helps us to approach enhancement technologies with humility. We shouldn’t expect to be made better people - both because our worth is grounded in something unalterable and inalienable, and because we so often twist these technologies for nefarious purposes. At the same time, this also means that the technologies themselves aren’t, de facto, off limits.
Well, how close are we, really, to engineering our children? Or downloading our brains to the cloud? Even if none of that actually becomes reality, our hope for such a world reveals something about us: our dissatisfaction with limits and our presently fallible, vulnerable state. We want to transcend the entanglements of our humanity (thus, transhumanism). Such hopes will continue to shape us even if none of these projects come to fruition, because this is how we’ve been living for millennia. Rather than accept the tutelage of our limits, this hope drives our ever-restless striving against them.
We’re the problem, but sometimes the technology is too.
Before I move on to what I believe is one of Vukov’s most important points, I want to bring in Neil Postman. I think his voice helps scrub away some of Vukov’s naïveté by highlighting that there are some inherent qualities of technology that bias us toward particular uses; it’s not all on the user. Postman, writing about technological change in 1998, offers a guide for how we might navigate those choppy waters. I’ve reviewed it before, and I want to adapt it again for use in this conversation:
“All technological change is a trade-off.”
Vukov discusses some of these trade-offs. New technologies alter our relationships, our self-conception, and our capacity to act with authenticity. Some disruptive technologies create new jobs while at the same time eliminating swaths of labor. These trade-offs are explicitly considered when developing novel therapeutics. Clinicians and researchers alike are well-acquainted with considering at least a medication’s physiological burdens and benefits. Turning to enhancements, though, even the category name suggests that there is no trade-off in using technology for this purpose. Who doesn’t want more of the good things they have? Remembering at the outset of either innovation or use that all technological change is a trade-off may help us to reckon with the burdens of what we might otherwise hope would be entirely beneficial.
“Advantages and disadvantages of a new technology are never evenly distributed among a population. … new technology benefits some and harms others.”
Vukov also addresses this explicitly, albeit only partially. He reflects on the invention of the lightbulb, a technology that wasn’t evenly distributed until long after its creation. This is, according to Vukov, an inevitable growing pain of innovation. What he neglects is that even once a technology is widely accessible, its benefits and burdens will continue to accrue inequitably. Consider smart phones. Even in countries where these are available, the poorest may be mining the cobalt required to make the technology possible, under minimal regulation. Even on a more mundane level, something like the smart phone may benefit one person with routine tasks while its use becomes a life-deforming addiction for another.
“Every technology has a philosophy which is given expression in how the technology makes people use their minds, in what it makes us do with our bodies, in how it codifies the world, in which of our senses it amplifies, in which of our emotional and intellectual tendencies it disregards.”
This is what I think Vukov overlooks. Technologies are not blank slates nor are they morally neutral. The atomic bomb calls for one kind of use, and also reveals a particular way of engaging the world and thinking about others (for both the owner of the bomb and those who don’t own it). Cosmetic surgery can be used to restore a burn victim to health, yes, and it can also support unhealthy standards of beauty. Vukov would say this is due to our fallenness. That’s part of it. But I think it starts when the technology seems to ask, “Why not?” We’ll get to the question in a moment.
“Technological change is not additive; it is ecological. … A new medium does not add something; it changes everything.”
Psychotropic medication isn’t just one additional technology in our lives. It has, for better and for worse, changed everything. It has changed how we conceive of our identities, our behaviors, and our suffering. It has led us to ask: if a pill can make a depressed person feel relatively normal, can a pill also make a normal person feel great? The ventilator didn’t become just one more treatment among many. It transformed medicine and society. So too would germ-line editing or microchips in the brain, in ways we struggle to even acknowledge, let alone fathom.
“Media tend to become mythic. … a common tendency to think of our technological creations as if they were God-given, as if they were a part of the natural order of things.”
The story around contraception isn’t only a story about healthcare. As some tell it, it’s a story about justice and equality. The same goes for abortion. In both of these cases, biomedical technology is required to sustain the liberal interests of both the state and its individual citizens. Isn’t that remarkable? It suggests that we’re incapable of justice apart from biomedical intervention. Or maybe it suggests that we seek to remediate injustice through biomedical intervention. Either way, these technologies have thus become “mythic.” So, too, with the culture surrounding coffee. Why would we think any new enhancements wouldn’t also behave the same? Could we come to live in a world in which some enhancement becomes a part of our collective self-narrative?
Why (not) enhance?
As Vukov argues, the train’s already left the station on enhancement. If you enjoy coffee, then the question isn’t whether or not to enhance, but how and why. So it’s those two latter questions that can guide our use, but they’re a little too broad. Let’s narrow them a bit.
Do we know enough?
Some worry that tinkering with enhancement, usually of the more sci-fi variety, is akin to “playing God.” The High Evolutionary embraced the role: “There is no god! That’s why I stepped in.” There are at least two components to “playing God” - acting with god-like power, and possessing god-like knowledge. Anyone who would press this concern in good faith must also admit that we already act with all sorts of god-like powers: we breed plants and animals for particular traits, we transform the land, we pull ingredients together to cook meals. We are manipulators of our environment. We aren’t omnipotent, but the mere concern about “playing God” doesn’t help us differentiate between those areas in which we should use our (allegedly God-given) power and when we shouldn’t (short of an explicit divine prohibition).
The issue about possessing god-like knowledge is far more germane. Vukov cites the myth of Icarus, who was able to fly using wax wings crafted by his ingenious father, Daedalus. But what Icarus had in power he lacked in knowledge (did he know the wax would melt under the sun?), wisdom (he didn’t know how to maneuver the skies safely), and/or humility (nothing could stop him now). Flying by means of wax wings, in and of itself, wasn’t foolish. Icarus was foolish because he didn’t know the limits of flight. He lacked the wisdom to use his power safely.
We’re always operating with some degree of ignorance. That shouldn’t keep us from acting, but it should give us pause. An immense amount of research goes into understanding the physiologic impact of novel therapeutics. Virtually no research is conducted on how novel therapeutics will impact societies financially or existentially. Postman’s voice penetrates not one millimeter into modern-day scientific innovation, whether for therapy or for enhancement.
Who can do this?
Society turns to scientists and clinicians to develop and apply enhancements to the human body. They have the technical prowess to do this. However, as I’ve explored before relying on The Sorcerer’s Apprentice and Frankenstein, technical prowess is insufficient to guide innovation and intervention. Even in the world of therapy, scientists and clinicians usually fail to appreciate the manifold impact their innovations will have on the human person and society, how these technologies will contribute to and change the story we tell about ourselves (individually and collectively), and how the use of such technologies can become ends unto themselves, particularly at the end of life.
At least in the realm of therapy, clinicians have the standard of health to guide them. Some don’t abide by it, instead relying on the standard of patient preference. Even in those cases, health sets the boundary for what a clinician can decline to do, if they suspect that substantially more harm than benefit may come from a requested intervention. I’ve frequently revisited the idea that without this purpose, this end, this telos in view, clinicians and patients alike will wander through a wasteland of medical technology. This too often happens.
With enhancements, however, restoring and sustaining health isn’t in view. Enhancements move the human body beyond health. Certainly enhancements can negatively impact health (e.g., too much caffeine can give you anxiety and palpitations), but the benefit of an enhancement isn’t to restore or support health. When considering therapies and health, clinicians stop short of the final question: what is health for? In an age of pluralism, that is a question each must face on their own. Clinicians might join in responding to it in the context of something like psychotherapy, but in general, even those clinicians who actually have health in view aren’t helping patients prioritize health among the other values in their lives.
Enhancements, by not restoring and supporting health in the way therapies do, skip over that question about the telos of health. Enhancements bring us straight to one of the biggest and most enduring questions of our humanity: what is a good life? I don’t think clinicians are in a good place to help people answer this question, and therefore they shouldn’t be the ones applying enhancements.
What for?
This is perhaps the most important point raised by Vukov. The quest to enhance the human body and experience is a quest for the good life. Judgments about a “life worth living” are smuggled into an unguided philosophy of enhancement. These judgments are present in the realm of therapy as well, but they are far more obvious with enhancement. If we want to avoid these judgments, we need to subject the intervention to a greater purpose. This enhancement is going to make you more intelligent, for example - for what purpose? Vukov approaches the question from his Christian perspective, as others have. In our pluralistic society, no one ideology or religion can claim authority over the others in grappling with this question. Our response has been to throw answers to the question into the marketplace. If I buy into one idea and you another, that’s fine. We’re both consumers here. You have no right to challenge me on my “purchase,” nor should I challenge you.
Ironically, this allows a technological hope to rise above all other considerations. Daniel Callahan wrote:
“The greatest fear of liberal individualism is authoritarianism. But that fear, reasonable enough, fails to take account of the fact the power of technology, and the profit to be made from it, can control and manipulate us even more effectively than authoritarianism. Moral dictators can be seen and overthrown, but technological repression steals up on us, visible but with an innocent countenance, and is just about impossible to overthrow, even as we see it doing its work on us. Liberal individualism makes this scenario more easily possible, and that is why it is not a tolerable guide to the sensible use of medical knowledge and technology.”
What we need, then, aren’t spaces in which no ideologies, philosophies, and religions are allowed. It’s farcical to believe such spaces exist anyway; they’re only fronts to allow certain philosophies to dominate unappraised. Instead, what we need are spaces in which all these perspectives are in dialogue with one another. It is out of the deliberation from such spaces that we, as a society, might come to some understanding of whether we should introduce and embrace an enhancement. What will be - or could be - the guiding telos of one’s living? Even if we can’t agree, such dialogue might further clarify our differences in helpful ways. We could refine and reaffirm our commitments and come to know something more of our fellow citizen.
As the third volume of Guardians of the Galaxy winds its way to a conclusion, we see that the High Evolutionary’s aspirations are built on a foundation of degradation for anything and anyone that doesn’t meet his standard of perfection. He will incinerate entire planets in pursuit of his goal. He has no friends and does not know love. “All I wanted to do was make things perfect,” he laments. One of his creations responds, “You didn’t want to make things perfect. You just hated things the way they are.” This profound dissatisfaction with the givenness of life can drive us to enhance for the sake of enhancement. Like the poor Sorcerer’s Apprentice, the power can get away from us. Without anything greater by which to constrain our power, our enhancements and therapies run rampant. With such a guiding light, though, we can discern how and why to enhance in pursuit of a greater purpose.
I don’t think enhancement can be a goal of care, if by that we mean provided by a clinician in a medical context. I also don’t think enhancements are entirely out of bounds. How can we deliberate together about them, submitting them to a shared understanding of what constitutes a good life? Can we at all?
1. I’d amend this to say caffeine, not coffee, is the enhancement, but the point still stands.
2. There’s a great deal of literature out there about moral enhancement which Vukov doesn’t directly address in his book. I’ll save that for another time.