The messy morality of letting AI make life-and-death decisions

In a workshop in Rotterdam, Philip Nitschke, known to some as "Dr. Death" or "the Elon Musk of assisted suicide," is overseeing the final rounds of testing on his new Sarco device before shipping it to Switzerland, where he says its first user is waiting. This is the third prototype that Nitschke's nonprofit, Exit International, has 3D-printed and wired up. Number one was exhibited in Germany, Poland, and elsewhere. Number two, he says, was a disaster. Now he has ironed out the manufacturing errors and is ready to launch: "This is the one that will be used."

A coffin-size pod with Star Trek stylings, the Sarco is the culmination of Nitschke's 25-year campaign to "demedicalize death" through technology. A person who has made the decision to die must answer three questions before being sealed inside the machine: Who are you? Where are you? And do you know what happens when you press the button?

Once the button is pressed, the Sarco fills with nitrogen gas, and its occupant dies of asphyxiation within about five minutes.

A recording of this short interview will then be handed over to the Swiss authorities. Nitschke has not sought approval from Switzerland's government, but Switzerland is one of only a few countries that have legalized assisted death, on the condition that the person who wishes to die performs the final act themselves.

Nitschke aims to make assisted suicide as unassisted as possible, allowing people who have chosen to end their lives to do so with dignity and autonomy. "You don't really need a doctor for death," he says.

The Sarco uses nitrogen, a readily available gas, rather than the barbiturates used in euthanasia clinics, and it does not require a physician to give an injection or sign off on lethal drugs.

At least, that's the idea. So far, Nitschke has not been able to bypass the medical establishment entirely. Switzerland requires that people seeking assisted death demonstrate mental capacity, he says, which is typically assessed by a psychiatrist. There is still the view that if you ask to die, you must have some kind of undiagnosed mental illness, he says. But he believes he has a solution: Exit International is developing an algorithm that Nitschke hopes will allow people to perform a kind of psychiatric self-assessment on a computer. Anyone who passes the online test would receive a four-digit code to activate a Sarco. "That's our goal," Nitschke says, though he admits the project is proving very difficult.

To some, Nitschke may seem outrageous, even extreme, and his faith in the power and potential of algorithms may prove overblown. But he is not alone in wanting to involve technology and AI in life-or-death decisions.

Yet where Nitschke sees AI as a way to empower individuals to make the ultimate choice by themselves, others wonder whether AI can relieve humans of the burden of such choices. AI is already being used to triage, treat, and monitor patients across a growing range of health-care settings. As algorithms become a more important part of care, we must ensure that their role is limited to medical decisions, not moral ones.

Medical care is a finite resource. Patients must wait for appointments to receive tests or treatment. Those who need organ transplants must wait for suitable hearts or kidneys. Vaccines must be given first to those most in need (in countries that have them). And during the worst months of the pandemic, when hospitals faced shortages of beds and ventilators, doctors had to make snap decisions about who would get immediate care and who would not, sometimes with tragic results.

The covid crisis brought this issue into sharp focus and led many to wonder whether algorithmic solutions could be found. Hospitals around the world bought new AI tools, or co-opted existing ones, to assist with triage. Some hospitals in the UK that had been exploring AI tools to screen chest X-rays seized on them as a quick, cheap way to identify the most serious covid cases. Suppliers of this technology, such as Qure.ai, based in Mumbai, India, and Lunit, based in Seoul, South Korea, took on contracts in Europe and the US. Diagnostic Robotics, an Israeli company that supplies AI-based triage tools to hospitals in Israel, India, and the US, said it saw a sevenfold increase in demand for its technology during the first year of the pandemic. The health-care AI business has been booming ever since.

This rush to automate raises many questions with no easy answers. What kinds of decisions is it appropriate to hand to an algorithm? How should those algorithms be built? And who gets to decide how they work?

Rhema Vaithianathan, director of the Centre for Social Data Analytics and a professor at Auckland University of Technology in New Zealand, focuses on tech and welfare. She believes it is right for people to ask whether AI can assist with big decisions. "We should be addressing issues that clinicians find really difficult," she says.

One of her current projects works with a teen mental health service where self-harming behavior is diagnosed and treated. The clinic is in high demand and must maintain a high turnover, discharging patients as soon as possible to make room for new ones. Doctors face a difficult trade-off between continuing to treat existing patients and taking on new ones. Vaithianathan says doctors are reluctant to discharge patients for fear that they will self-harm. "That's their nightmare scenario."

Even when AI seems accurate, scholars and regulators alike call for caution.

Vaithianathan and her colleagues have tried to develop a machine-learning model that can predict which patients are most at risk of future self-harming behavior and which are not, using a wide range of data, including health records and demographic information, to give doctors an additional resource in their decision-making. She says, “I’m always looking to find those cases where a clinic is struggling and would appreciate an algorithm.”

The project is in its early stages, and so far the researchers have found that there may not be enough data to train a model that can make accurate predictions. But they will keep trying; Vaithianathan says the model does not need to be perfect in order to help doctors.
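For illustration only, here is a minimal sketch of how a risk model of this general kind might be trained and evaluated on tabular patient records. The file name, feature columns, and outcome label are hypothetical assumptions, not the team's actual data or pipeline.

```python
# Hypothetical sketch of a self-harm risk model trained on tabular records.
# The file name, feature columns, and outcome label are invented for
# illustration; this is not the researchers' actual data or pipeline.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("patient_records.csv")  # de-identified health and demographic data (assumed)
features = ["age", "prior_admissions", "medication_count", "days_in_treatment"]
X = df[features]
y = df["self_harm_within_6_months"]  # outcome label recorded after discharge (assumed)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# A simple baseline: standardize the features and fit a logistic regression.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Evaluate discrimination on held-out patients. An AUC near 0.5 means the data
# carry little predictive signal, which is one way a project like this can stall.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"held-out AUC: {auc:.2f}")
```

Even a baseline this simple makes the core question visible: whether the available records contain enough signal to rank risk better than chance.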

They are not the only ones trying to predict outcomes for patients after discharge. A review published in 2021 examined 43 studies in which researchers claimed to use machine-learning models to predict whether patients would be readmitted or die after leaving hospitals in the US. None of the models was judged clinically useful, yet the authors anticipate that such models will find their way into practice anyway, and they flag two problems. First, algorithms simply follow the data, and health data is dominated by data from white men and women, which reduces the models' predictive power for other groups. Second, the models cannot be trusted to make ethical decisions, and there is a risk that people will trust the machine rather than question its output.

That last problem is a recurring theme in David Robinson's new book, Voices in the Code, about the democratization of AI. Robinson, a visiting scholar at Berkeley's Social Science Matrix and a member of Apple University's faculty, tells the story of Belding Scribner. In 1960 Scribner, a nephrologist in Seattle, inserted a short Teflon tube known as a shunt into some of his patients' arms to prevent their blood from clotting during dialysis. The innovation allowed kidney disease patients to remain on dialysis indefinitely, transforming kidney failure from a fatal condition into a long-term, manageable one.

When word spread, Scribner was overwhelmed with requests for treatment, but he could not take everyone. He had to decide whom to help and whom to turn away, and he quickly realized that this was not a medical decision but an ethical one. He set up a committee of laypeople to make it. Their choices were not perfect, of course: reflecting the prejudices of the time, the committee favored married men with jobs and families.

The way Robinson tells it, the lesson we should take from Scribner's work is that certain processes, whether bureaucratic, technical, or algorithmic, can make difficult questions seem neutral and objective. They can mask the moral aspects of a decision, and sometimes its terrible consequences.

As Robinson writes, bureaucracy can be used to "convert hard moral problems into boring technical ones." The phenomenon existed long before computers, but software-based systems can accelerate and amplify it. "Quantification can be used as a moral anesthetic," he writes.

Whatever the process, what matters is letting the moral anesthetic wear off and examining the painful consequences of the decision. For Scribner, that meant asking a panel of laypeople, rather than a group of supposedly objective doctors meeting behind closed doors, to decide whose lives to save. Today, it could mean asking for audits of high-stakes algorithms. For now, the auditing of algorithms by independent parties is more wish-list item than standard practice, but Robinson shows how it can happen.

By the 2000s, an algorithm had been developed in the US to identify recipients of donated kidneys, but some people were unhappy with how it was designed. In 2007, Clive Grawe, a kidney transplant candidate from Los Angeles, told a room full of medical experts that their algorithm was biased against older people like him. The algorithm had been designed to allocate kidneys in a way that maximized the number of years of life saved, minimizing waste of scarce organs. That approach favored younger, whiter, and wealthier patients, Grawe and others argued.

Such bias in algorithms is not uncommon. What is rarer is for their designers to accept that there is a problem. After years of consultation with laypeople like Grawe, the designers came up with a more balanced way to maximize the number of years saved, among other things by taking overall health into account as well as age. One important change was that recipients would no longer be limited to the same age group as donors, who are often people who have already died. Some of those kidneys can now go to older people, as long as they are otherwise healthy. As with Scribner's committee, the algorithm will not make decisions that everyone would accept. But the process by which it was developed is hard to fault.
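To make the stakes of such design choices concrete, here is a toy sketch, not the real US kidney allocation system, of how changing a scoring rule changes who gets priority. All candidates, fields, and weights are invented for illustration.

```python
# Toy illustration of allocation scoring (not the real US kidney allocation
# system). All candidates, fields, and weights are invented.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    age: int
    expected_life_years_gained: float  # estimated benefit from a transplant
    years_on_waitlist: float

candidates = [
    Candidate("A", 28, 35.0, 1.0),
    Candidate("B", 64, 12.0, 6.0),
    Candidate("C", 51, 20.0, 4.0),
]

def life_years_only(c: Candidate) -> float:
    # Rule in the spirit of the original design: maximize expected life-years saved.
    return c.expected_life_years_gained

def balanced_score(c: Candidate) -> float:
    # Rule in the spirit of the revision: still weight expected benefit, but also
    # credit time already spent waiting, so older candidates are not always last.
    return 0.7 * c.expected_life_years_gained + 4.0 * c.years_on_waitlist

for rule in (life_years_only, balanced_score):
    ranking = sorted(candidates, key=rule, reverse=True)
    print(rule.__name__, "->", [c.name for c in ranking])
```

With the first rule, the youngest candidate always comes first; with the invented weights in the second, candidates who have waited longer move up the list. The point is not the numbers but that the moral trade-off lives in the choice of scoring rule.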

“I didn’t want to sit there and give the injection. If you want it, you press the button.”

Philip Nitschke

Nitschke, too, is asking hard questions. A former doctor who burned his medical license after a long-running legal dispute with the Australian Medical Board, Nitschke has the distinction of being the first person to legally give a voluntary lethal injection. In the nine months between July 1996, when the Northern Territory of Australia brought in a law legalizing euthanasia, and March 1997, when Australia's federal government overturned it, Nitschke helped four of his patients to kill themselves.

The first, a 66-year-old carpenter named Bob Dent, who had suffered from prostate cancer for five years, explained his decision in an open letter: “If I were to keep a pet animal in the same condition I am in, I would be prosecuted.”

Nitschke wanted to support his patients' decisions, but he was uncomfortable with the role they were asking him to play. So he made a machine to take his place. "I didn't want to sit there and give the injection," he says. "If you want it, you press the button."

The machine was not much to look at; it was essentially a laptop hooked up to a syringe. But it served its purpose, and it was later acquired by the Science Museum in London. The Sarco is an iteration of that original device. Nitschke hopes the next step will be an algorithm that can perform a psychiatric assessment.

But there is a good chance those hopes will be dashed. Creating a program that can assess someone's mental state is a complex problem. As Nitschke himself notes, doctors do not agree on what it means for a person of sound mind to choose to die. "You can get a dozen differing answers from a dozen different psychiatrists," he says. In other words, there is no common ground on which to build an algorithm.

But perhaps that is not the takeaway here. Like Scribner, Nitschke is asking what counts as a medical decision, what counts as an ethical one, and who gets to make the call. Scribner thought that laypeople, representing society as a whole, should choose who received dialysis, because when patients have more or less equal chances of survival, who lives and who dies is no longer a technical question. As Robinson describes, society must make such decisions, but the process can still be encoded into an algorithm so long as it is inclusive and transparent. For Nitschke, assisted suicide is an ethical decision that each individual must make for themselves, and the Sarco, along with the theoretical algorithm he envisions, would simply protect their ability to do so.

As populations grow and resources are stretched, AI is likely to become more useful, even essential. But the real work lies in acknowledging the absurdity and arbitrariness of many of the decisions AI will be asked to make. That's up to us.

Robinson says that creating algorithms is akin to writing legislation. "In a certain way, the question of how to make software code that will regulate people is just one example of how to make laws," he says. People can disagree about the merits and uses of high-stakes software, but ultimately it is people who are responsible for the laws they have.
