Responsible AI has a burnout problem

Margaret Mitchell had been at Google for two years before she realized she needed a break. “I started having frequent breakdowns,” says Mitchell, who co-founded and led the company’s Ethical AI group. “That was not something that I had ever experienced before.”

Only after she spoke with a therapist did she understand the problem: she was burnt out. Due to stress, she ended up taking medical leave.

Mitchell, who is now an AI researcher and chief ethics scientist at Hugging Face, is not the only one who has had this experience. Burnout is becoming increasingly common in responsible-AI teams, says Abhishek Gupta, the founder of the Montreal AI Ethics Institute and a responsible-AI consultant at Boston Consulting Group.

Activists and regulators are putting increasing pressure on companies to ensure that their AI products minimize potential harms before being released. In response, companies have invested in teams that evaluate how these systems affect our lives, societies, and political systems.

Tech companies like Meta have been forced by courts to provide additional mental-health support and compensation for employees such as content moderators, who are often required to sort through graphic and violent material that can be distressing.

But teams that work on responsible AI are often left to fend for themselves, employees in the field told MIT Technology Review, even though the work can be just as psychologically draining as content moderation. Ultimately, this can leave people on these teams feeling undervalued, which can lead to burnout.

Rumman Chowdhury, who leads Twitter’s Machine Learning Ethics, Transparency, and Accountability team and has been a pioneer in applied AI ethics, faced that problem in a previous role.

At one point, she says, she was exhausted and the situation felt almost hopeless. All the practitioners MIT Technology Review interviewed spoke enthusiastically about their work: it is fueled by passion, a sense of urgency, and the satisfaction of solving real problems. But without the right support, that sense of mission can become overwhelming.

“It almost feels like you can’t take a break,” Chowdhury says. “There are many people in tech companies whose job is to protect the platform’s users. And there is this feeling like if I take a vacation, or if I am not paying attention 24/7, something really bad is going to happen.”

Mitchell continues to work in AI ethics, she says, “because there’s such a need for it, and it’s so clear, and so few people see it who are actually in machine learning.”

But the challenges are plenty. Organizations place a lot of pressure on individuals to fix big, systemic problems, and those individuals often face a near-constant barrage of criticism online.

Cognitive dissonance

The role of an AI ethicist or someone in a responsible-AI team varies widely, ranging from analyzing the societal effects of AI systems to developing responsible strategies and policies to fixing technical issues. Typically, these workers are also tasked with coming up with ways to mitigate AI harms, from algorithms that spread hate speech to systems that allocate things like housing and benefits in a discriminatory way to the spread of graphic and violent images and language.

Trying to fix deeply ingrained issues such as racism, sexism, and discrimination in AI systems might, for example, involve analyzing large data sets that include extremely toxic content, such as rape scenes and racial slurs.

AI systems often reflect or exacerbate the worst aspects of our societies, such as racism and sexism. The problematic technologies range from facial recognition systems that classify Black people as gorillas to deepfake software used to make porn videos that appear to feature women who have not consented. Dealing with these issues can be especially hard on women, people of color, and members of other marginalized groups.

And while burnout is not unique to people working in responsible AI, all the experts MIT Technology Review spoke to said they face particularly tricky challenges in that area.

“You are working on a thing that you’re very personally harmed by day to day,” Mitchell says. “It makes the reality of discrimination even worse because you can’t ignore it.”

But despite growing mainstream awareness about the risks AI poses, ethicists still find themselves fighting to be recognized by colleagues in the AI field.

Some even mock the work of AI ethicists. Stability AI’s CEO, Emad Mostaque, whose startup built the open-source text-to-image model Stable Diffusion, said in a tweet that ethics debates around his technology are “paternalistic.” Neither Mostaque nor Stability AI replied to MIT Technology Review’s request for comment.

“Most people working in the AI field are engineers, and they are not open to the humanities,” says Emmanuel Goffi, an AI ethicist and founder of the Global AI Ethics Institute, a think tank. Companies want a quick fix, Goffi says; they want someone to explain to them how ethical thinking should be applied across the whole organization.

“Psychologically, the most difficult part is that you have to make compromises every day–every minute–between what you believe in and what you have to do,” he says.

Mitchell believes the problem is exacerbated by the attitudes of tech companies, and of machine-learning teams in particular. Not only do you have to work on these hard problems, she says, but you also have to prove that they are worth your time. “It’s the complete opposite of support. It’s pushback.”

“There are people that think ethics is a useless field and that it’s negative about the progress [of AI],” Chowdhury says.

Social media makes it easy to criticize researchers. Chowdhury says there’s no point in engaging with people who don’t value what they do, “but it’s hard not to if you’re getting tagged or specifically attacked, or your work is being brought up.”

Breakneck speed

The breakneck pace of artificial-intelligence research doesn’t help either. New breakthroughs come thick and fast. In the past year alone, tech companies have unveiled AI systems that generate images from text, only to announce, just weeks later, even more impressive software that can create videos from text alone. That is remarkable progress, but each breakthrough comes with potential harms. Text-to-image AI could violate copyrights, and it might be trained on data sets full of toxic material, leading to unsafe outcomes.

“Chasing what’s hot, the hot-button issue on Twitter, is exhausting,” Chowdhury says. Ethicists can’t be experts on all the problems raised by every new breakthrough, she says, but she still feels she has to keep up with every twist of the AI information cycle to avoid missing something important. Being part of a well-resourced team at Twitter has helped, reassuring her that she does not have to carry the burden alone. “I know that I have the ability to go away for a week without worrying about it,” she says. But Chowdhury works at a large tech company with the resources and desire to hire a whole team to work on responsible AI. Not everyone is as fortunate.

People at smaller AI startups often face pressure from venture capitalists to grow the business, and the checks they receive from investors often don’t reflect the extra work required to build responsible tech, says Vivek Katial, a data scientist at Multitudes, an Australian startup that specializes in ethical data analytics.

The tech sector should demand more from venture capitalists to “recognize that they need to spend more for technology that’s going to be more responsible,” Katial says.

The trouble is, many companies can’t even see that they have a problem to begin with, according to a report released by MIT Sloan Management Review and Boston Consulting Group this year. AI was a top strategic priority for 42% of the report’s respondents, but only 19% said their organization had implemented a responsible-AI program. Some may believe they are considering AI’s risks, but they simply aren’t hiring the right people or giving them the resources to make responsible AI a reality, says Gupta.

“That’s where people start to experience frustration and experience burnout,” he adds.

Growing demand

Before long, companies may not have much choice about whether they back up their words on ethical AI with action, because regulators are starting to introduce AI-specific laws.

The EU’s upcoming AI Act and AI liability law will require companies to document how they are mitigating harms. In the US, lawmakers in New York, California, and elsewhere are working on regulation for the use of AI in high-risk sectors such as employment. In early October, the White House unveiled the AI Bill of Rights, which lays out five rights Americans should have when it comes to automated systems. It is likely to spur federal agencies to increase their scrutiny of AI systems and companies.

Even though many tech companies have frozen hiring and threatened major layoffs amid a volatile global economy, responsible-AI teams are more important than ever, because launching unsafe or illegal AI systems could expose a company to large fines or a requirement to delete its algorithms. Last spring, for example, the US Federal Trade Commission ordered Weight Watchers to delete its algorithms after the company was found to have illegally collected children’s data. Companies invest heavily in developing AI models and maintaining databases, and being forced by a regulator to delete them entirely is a huge blow.

Burnout, or a persistent feeling of being undervalued, could push people out of the field, which would hurt AI governance and ethics research as a whole. The risk is especially acute because the people with the most experience in addressing AI-related harms may also be the most exhausted.

The loss of even one person can have huge ramifications for an entire organization, Mitchell says, because the expertise someone has built up is very hard to replace. In late 2020, Google fired its ethical AI co-lead Timnit Gebru, and it fired Mitchell a few months later; several other members of its responsible-AI team left within just a few months. Gupta says this kind of brain drain poses a “severe threat” to AI ethics and makes it harder for companies to follow through on their responsible-AI programs.

Last year, Google announced it was doubling its research staff devoted to AI ethics, but it has not commented on its progress since. The company told MIT Technology Review that it offers mental-health resilience training, runs a peer-to-peer mental-health support program, gives employees access to digital tools to aid with mindfulness, and can connect them to mental-health providers online. It did not respond to questions about Mitchell’s tenure at the company.

Meta said it has invested in benefits such as a program that gives employees and their families access to 25 free therapy sessions each year. Twitter said it offers employees free counseling and coaching sessions, as well as training on preventing burnout, and that it runs a peer-support program for mental health. None of the companies said it offers support aimed specifically at people working in AI ethics.

As AI compliance and risk management grow, tech executives must ensure that they are investing enough in responsible AI programs.

Change begins at the top, Gupta says: executives need to look at the dollars, time, and resources they are allocating to responsible AI. Otherwise, the people working on ethical AI are set up to fail.

Successful responsible-AI teams need enough people, resources, and tools to address problems, but they also need agency, internal connections, and the power to implement the changes they are being asked to make. And while tech companies offer plenty of mental-health resources, many of them focus on time management and work-life balance; Chowdhury says more support is needed for people who work on emotionally and psychologically jarring topics. Mental-health resources tailored to people working in responsible tech would also help, she adds. Mitchell says there has been no recognition of the harmful effects of working in this area, and no encouragement to step away from it.

“The only mechanism that big tech companies have to handle the reality of this is to ignore the reality of it,” she says.
