The AI myth Western lawmakers get wrong

The EU is currently negotiating a new law, the AI Act, which will prohibit member states, and possibly even private companies, from implementing AI systems for social scoring of the kind China is widely believed to operate. The problem is that the law is “essentially banning thin air,” says Vincent Brussee of the Mercator Institute for China Studies, a German think tank.

Back in 2014, China announced a six-year plan to build a system rewarding actions that build trust in society and penalizing the opposite. Eight years later, China has just released a draft law that attempts to codify existing social credit pilots and guide future ones.

There have been some contentious local experiments, such as one in the small city of Rongcheng in 2013, which gave every resident a starting personal credit score of 1,000 that could be raised or lowered depending on how their actions were judged. The local government has since removed some controversial criteria, and people can now opt out.

But these experiments have not gained traction elsewhere and do not apply to the entire Chinese population. There is no national, all-seeing social credit system that ranks people.

As my colleague Zeyi Yang explains, “the reality is, that terrifying system doesn’t exist, and the central government doesn’t seem to have much appetite to build it, either.”

What has been implemented is mostly pretty low-tech. This is a “mixture of attempts to regulate financial credit, enable government agencies to share data, and promote state-sanctioned moral values,” Zeyi writes.

Kendra Schaefer, a partner at Trivium China, a Beijing-based research consultancy, who compiled a report on the subject for the US government, couldn’t find a single case in which data collection in China led to automated sanctions without human intervention. The South China Morning Post found that in Rongcheng, human “information gatherers” would walk around town and note down misbehavior with a pen and paper.

The myth originates from a pilot program called Sesame Credit, developed by the Chinese tech company Alibaba. Brussee says it was an attempt to assess creditworthiness using customer data at a time when most Chinese people didn’t have a credit card. The effort was conflated with the social credit system as a whole in what Brussee calls a “game of Chinese whispers,” and the misunderstanding took on a life of its own.

The irony is that while US and European politicians portray this as a problem stemming from authoritarian regimes, systems that rank and penalize people are already in place in the West. Algorithms that automate decisions are being rolled out to deny people housing, jobs, and basic services.

Authorities in Amsterdam have used an algorithm to rank young people from disadvantaged areas based on their likelihood of becoming criminals. They claim the goal is to prevent crime and provide better, more targeted support.

In reality, human rights organizations argue, it has led to increased discrimination and stigmatization: these young people face more police stops, home visits by authorities, and stricter supervision from schools and social services.

It’s easy to criticize a dystopian algorithm when it doesn’t actually exist. But as legislators in the US and the EU try to reach a common understanding of AI governance, they would do better to look closer to home. Americans don’t even have a federal privacy law that would provide basic protections against algorithmic decision-making.

There is also a dire need for governments to conduct honest, thorough audits of the way authorities and companies use AI to make decisions about our lives. They may not like what they see, but that only makes it more important for them to continue looking.

Deeper Learning

A bot that watched 70,000 hours of Minecraft could unlock AI’s next big thing

Research company OpenAI has built an AI that binged on 70,000 hours of videos of people playing Minecraft in order to play the game better than any AI before. The breakthrough hinges on a technique called imitation learning, which can be used to train machines to perform a wide range of tasks by watching humans do them first. It raises the possibility that sites like YouTube could be a rich source of training data.
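
To make “learning by watching” concrete, here is a minimal sketch of behavioral cloning, the simplest form of imitation learning: a model is trained with ordinary supervised learning to predict the action a human demonstrator took, given what the demonstrator saw. The observation size, action space, and data below are all invented for illustration; OpenAI’s real system is vastly larger and first infers action labels from raw video before training this way.

    # Minimal behavioral-cloning sketch (synthetic data, hypothetical sizes).
    import torch
    import torch.nn as nn

    OBS_DIM = 64        # hypothetical flattened observation size
    NUM_ACTIONS = 8     # hypothetical discrete action space

    # Stand-ins for (observation, human action) pairs mined from video.
    observations = torch.randn(1000, OBS_DIM)
    human_actions = torch.randint(0, NUM_ACTIONS, (1000,))

    # The policy maps an observation to logits over possible actions.
    policy = nn.Sequential(
        nn.Linear(OBS_DIM, 128),
        nn.ReLU(),
        nn.Linear(128, NUM_ACTIONS),
    )
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(10):
        optimizer.zero_grad()
        logits = policy(observations)
        loss = loss_fn(logits, human_actions)  # match the demonstrator's choices
        loss.backward()
        optimizer.step()

    # At play time, the policy imitates the humans it watched.
    with torch.no_grad():
        action = policy(torch.randn(1, OBS_DIM)).argmax(dim=-1)

The appeal of the approach is that the “labels” are simply recorded human behavior, which is why vast video archives like YouTube look so promising as training data.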

Why it’s a big deal: Imitation learning can be used to train AI to control robot arms, drive cars, or navigate websites. Some people, such as Meta’s chief AI scientist, Yann LeCun, think that watching videos will eventually help us train an AI with human-level intelligence. Read Will Douglas Heaven’s story here.

Bits and Bytes

Meta’s game-playing AI can make and break alliances like a human

Diplomacy is a popular strategy game in which seven players compete for control of Europe by moving pieces around on a map. Players must communicate with one another and spot when others are lying. Cicero, Meta’s latest AI, was able to deceive human players on its way to winning.

It’s a significant step toward AI that can help with complex problems, such as negotiating contracts and planning routes around busy traffic. But I won’t lie: it’s a disturbing thought that an AI can deceive humans so successfully. (MIT Technology Review)

We might run out of data to train AI language programs

The trend towards creating larger AI models means that we need bigger data sets to train them. The trouble is, we might run out of suitable data by 2026, according to a paper by researchers from Epoch, an AI research and forecasting organization. This should encourage the AI community to find ways to make more use of existing resources. (MIT Technology Review)

Stable Diffusion 2.0 is out

The open-source text-to-image AI Stable Diffusion has been given a big facelift, and its outputs are looking a lot sleeker and more realistic than before. It can even do hands. The pace of Stable Diffusion’s development is remarkable: its first version launched only in August. We will likely see even more advances in generative AI in the year ahead.
