A bias bounty for AI will help to catch unfair algorithms faster


Today, a group of AI and machine-learning experts launched a bias bounty competition that they hope will speed up the process of uncovering embedded prejudice in AI systems.

The competition is inspired by bug bounties in cybersecurity and asks participants to create tools that can identify and mitigate algorithmic biases within AI models.

It is being organized by volunteers who work at companies such as Twitter, Splunk, and the deepfake-detection startup Reality Defender, and who call themselves the “Bias Buccaneers.”

The first bias bounty competition will focus on biased image detection. It’s a common problem: in the past, for example, flawed image-detection systems have misidentified Black people as gorillas.

Competitors are challenged to build a machine-learning model that labels each image with its skin tone, perceived gender, age group, and location, which makes biases in data sets easier to spot and measure. They will be given access to a data set of around 15,000 images of synthetically generated human faces. Participants are ranked on how accurately their model tags images and how long the code takes to run, among other metrics. The competition closes on November 30.
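To make the task concrete, here is a minimal, purely illustrative sketch of the kind of multi-attribute image tagger entrants would build: one shared feature extractor feeding a small classification “head” per attribute. The label sets and all function names below are our assumptions for illustration, not the official competition taxonomy, and the “model” is an untrained toy rather than a real classifier.

```python
import random

# Assumed label sets for illustration only -- not the official taxonomy.
LABELS = {
    "skin_tone": [f"tone_{i}" for i in range(1, 11)],
    "perceived_gender": ["feminine", "masculine", "unsure"],
    "age_group": ["0-17", "18-30", "31-50", "51+"],
}

random.seed(0)

def extract_features(pixels, dim=16):
    """Toy stand-in for a CNN backbone: random weighted sums of pixels."""
    return [sum(random.uniform(-1.0, 1.0) * p for p in pixels)
            for _ in range(dim)]

def tag_image(pixels):
    """Return one predicted label per attribute from untrained linear heads."""
    feats = extract_features(pixels)
    tags = {}
    for attr, classes in LABELS.items():
        # One random linear head per attribute; argmax picks the label.
        scores = [sum(random.uniform(-1.0, 1.0) * f for f in feats)
                  for _ in classes]
        tags[attr] = classes[scores.index(max(scores))]
    return tags

# Example: tag one fake 8x8 grayscale image (a flattened pixel list).
image = [random.random() for _ in range(64)]
tags = tag_image(image)
```

A real entry would replace the random projections with a trained network, and scoring would compare each predicted tag against the data set’s ground-truth labels while also timing the run.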

Microsoft and the startup Robust Intelligence have committed prize money of $6,000 for the winner, $4,000 for the runner-up, and $2,000 for third place. Amazon has contributed $5,000 toward computing power for the first set of entrants.

The competition is an example of a budding industry in AI: auditing for algorithmic bias. Twitter ran the first AI bias bounty in 2021, and Stanford University just concluded its first AI audit challenge. Meanwhile, the nonprofit Mozilla is creating tools for AI auditors. Such audits are likely to become more common: regulators and AI ethics experts have hailed them as a good way to hold AI systems accountable, and they are set to become a legal requirement in certain jurisdictions.

The EU’s new content moderation law, the Digital Services Act, includes annual audit requirements for the data and algorithms used by large tech platforms, and the EU’s upcoming AI Act could also allow authorities to audit AI systems. The US National Institute of Standards and Technology also recommends AI audits as a gold standard. The idea is that these audits will work like the inspections we see in other high-risk sectors, such as chemical plants, according to Alex Engler, who studies AI governance at the Brookings Institution.

The trouble is, there aren’t enough independent contractors out there to meet the coming demand for algorithmic audits, and companies are reluctant to give them access to their systems, argue researcher Deborah Raji, who specializes in AI accountability, and her coauthors in a paper from last June.

Cultivating that expertise is what these competitions aim to do. The hope is that they will inspire more engineers and researchers to acquire the skills and experience needed to conduct such audits. Much of the limited scrutiny of AI systems today comes from academics and the tech companies themselves; competitions like this one aim to create a new cohort of experts who specialize in auditing AI.

“We are trying to create an additional space for people who are interested, who want to get started, or who are specialists who don’t work in tech companies,” says Rumman Chowdhury, director of Twitter’s team on ethics, transparency, and accountability in machine learning and the leader of the Bias Buccaneers. These people could include hackers and data scientists, she says.

The Bias Buccaneers team hopes this bounty competition will be the first of many. Competitions like these not only encourage the machine-learning community to perform audits, but also promote a shared understanding of “how to audit” and “what types of audits we should be investing in,” says Sara Hooker, who leads Cohere for AI, a nonprofit AI research lab. The effort is “fantastic” and “absolutely needed,” says Abhishek Gupta, founder of the Montreal AI Ethics Institute and a judge in Stanford’s AI audit challenge. “The more eyes you have on a system, the more likely it is that we find flaws,” Gupta says.
