Who’s going to save us from bad AI?

About damn time. That was the reaction of AI policy and ethics experts to the news that the Office of Science and Technology Policy (the White House’s science and technology advisory agency) had released an AI Bill of Rights. The document lays out Biden’s vision of how citizens, technology companies, the US government, and the AI sector should work together.

It’s a great initiative, and long overdue. Until now, the US was one of the only Western nations without clear guidance on how to protect its citizens from AI harms. (As a reminder, these harms include wrongful arrests, suicides, and entire cohorts of schoolchildren being unfairly graded by an algorithm, and that’s just a start.)

Tech companies claim they want to reduce these kinds of harms, but it’s hard to hold them accountable.

The AI Bill of Rights outlines five protections Americans should have in the age of AI, including the right to data privacy, protection from unsafe systems, and assurances that algorithms won’t discriminate and that there will always be a human alternative. You can read more about it here.

The good news: the White House’s thinking on AI harms is mature, and that should be reflected in how the federal government approaches technology risks more broadly. While the EU is pressing ahead with regulations that ambitiously try to mitigate all AI harms at once, its AI Act could take years to finalize. The US approach, by contrast, can “treat one problem at a time,” and each agency can learn to handle AI challenges as they arise, says Alex Engler, who studies AI governance at the Brookings Institution in Washington, DC.

The bad news: the AI Bill of Rights omits some important areas of harm, such as law enforcement and worker surveillance. And unlike the actual US Bill of Rights, it is not binding law but a set of recommendations. “Principles are frankly not enough,” says Courtney Radsch, US tech policy expert for the human rights organization Article 19. “In the absence of, for instance, a national privacy act that sets some boundaries, it’s only going half the way.”

The US is walking a tightrope. It doesn’t want to look weak on the international stage, and mitigating AI harms is arguably one of its most important roles, since the US is home to most of the world’s largest and most successful AI companies. But that’s also the problem: the US lobbies against rules abroad that would constrain its tech giants, and at home it is loath to implement any regulation that might “hinder innovation.” The next two years will be crucial for global AI policy. If the Democrats don’t win a second term in the 2024 presidential election, these efforts could well be abandoned, or the progress made so far could be redirected by new people with different priorities. Anything is possible.

Deeper Learning

DeepMind’s game-playing AI has beaten a 50-year-old record in computer science

They’ve done it again: AI lab DeepMind has used its board-game-playing AI AlphaZero to discover a faster way to solve a fundamental math problem in computer science, beating a record that had stood for more than 50 years. The researchers trained a new version of AlphaZero, called AlphaTensor, to play a game in which winning meant solving the math problem in the fewest possible steps.

Why this is so important: Matrix multiplication is a fundamental calculation at the core of many applications, from displaying images on a screen to simulating complex physics. It is also crucial to machine learning itself. Speeding it up could make thousands of everyday computing tasks faster, cutting costs and saving energy. Read more from my colleague Will Heaven here.
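To make “fewest steps” concrete: the decades-old record AlphaTensor improved on traces back to Strassen’s 1969 algorithm, which multiplies two 2×2 matrices using seven scalar multiplications instead of the schoolbook eight. Here’s a minimal Python sketch of that idea (my illustration of the general technique, not DeepMind’s code):

```python
# Naive 2x2 matrix multiplication uses 8 scalar multiplications;
# Strassen's 1969 trick gets the same result with 7. AlphaTensor
# found new, even more efficient recipes for larger matrix sizes.

def naive_2x2(A, B):
    """Schoolbook multiplication: 8 scalar multiplications."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return [[a*e + b*g, a*f + b*h],
            [c*e + d*g, c*f + d*h]]

def strassen_2x2(A, B):
    """Strassen's algorithm: only 7 scalar multiplications."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

A, B = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
assert naive_2x2(A, B) == strassen_2x2(A, B)  # same answer, one fewer multiply
```

Applied recursively to big matrices split into blocks, saving even one multiplication per step compounds into a real speedup, which is why shaving a few multiplications off these recipes matters so much.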

Bits and Bytes

Google released an impressive text-to-video AI
Just a week after Meta revealed its text-to-video AI, Google has upped the ante. The videos produced by Google’s Imagen Video are much more detailed than Meta’s. But, like Meta, Google is not releasing its model into the wild because of “social biases and stereotypes which are challenging to detect and filter.” (Google)

Google’s new AI can hear a snippet of a song–and then keep on playing
The technique, called AudioLM, generates naturalistic sounds without the need for human annotation. (MIT Technology Review)

Even after $100 billion, self-driving cars are going nowhere
What a quote from Anthony Levandowski, one of the field’s biggest stars: “Forget about profits–what’s the combined revenue of all the [AV] companies? It’s a million dollars. Maybe. It’s probably closer to zero.” (Bloomberg Businessweek)

Robotics companies have pledged not to weaponize their tech
Six of the world’s largest robotics companies, including Boston Dynamics, have pledged not to weaponize their robots. (Unless it’s for defense purposes, that is.)

Meanwhile, defense AI startup Anduril says it has developed loitering munitions, also known as suicide drones, and this is apparently just the start of its new weapons program. Last summer, I wrote about how military AI startups are thriving. The invasion of Ukraine has forced militaries to upgrade their arsenals. Silicon Valley is poised to capitalize. (MIT Technology Review)

This is life in the Metaverse
A fun story about life in the Metaverse and its early adopters. This was the first Metaverse story where I could see the appeal. But it didn’t make me want to plug in and play anytime soon. (The New York Times)

There’s a new AI that allows you to create interiors
The model was built in five days on top of the open-source text-to-image model Stable Diffusion, and it generates snazzy interiors. It’s great to see people using the model to build new applications. I can see this tech being used by real estate agents and Airbnb hosts. (InteriorAI)

See you next time,

Melissa
