Artificial intelligence is creating a new colonial world order
This story is the introduction to MIT Technology Review’s series on AI colonialism, which was supported by the MIT Knight Science Journalism Fellowship Program and the Pulitzer Center. You can read part 1 right here.
My husband and I love to eat and to learn about history, so shortly after we were married, we decided to honeymoon on the southern coast of Spain. The region, once ruled by Romans, Muslims, and Christians, is known for its beautiful architecture and rich fusion cuisines.
I had no idea how this personal trip would intersect with my reporting. Over the last few years, a growing number of scholars have argued that the impact of AI is repeating the patterns of colonial history. European colonialism, they argue, was marked by the violent seizure of land, the extraction of resources, and the exploitation of people, including through slavery, for the economic enrichment of the conquering country. While it would be overly simplistic to say that the AI industry is repeating this violence wholesale, it is using other, more insidious means to enrich the wealthy and powerful at the expense of the poor.
I had already begun to investigate these claims when my husband and I set out on our journey through Seville, Cordoba, Granada, and Barcelona. As I simultaneously read The Costs of Connection, one of the foundational texts that first proposed the concept of "data colonialism," I realized these cities were the birthplaces of European colonialism: cities through which Christopher Columbus traveled as he voyaged back and forth to the Americas, and through which the Spanish crown transformed the world order.
In Barcelona especially, many physical remnants of this past survive. The city is well known for its Catalan modernism, the style popularized by Antoni Gaudí, the architect behind the Sagrada Familia. The movement was born when Spanish colonial business owners, having amassed wealth abroad, funneled it into extravagant mansions.
One of the most famous, known as the Casa Lleó Morera, was built early in the 20th century with profits made from the sugar trade in Puerto Rico. While tourists from around the world now visit the mansion for its beauty, Puerto Rico still suffers from food insecurity, because for so long its fertile land produced cash crops for Spanish merchants instead of sustenance for the local people.
As we stood in front of the intricately carved facade, with its flora, mythical creatures, and four women holding the most important inventions of the time, including a lightbulb and a telephone, the parallels between colonial extraction and global AI development came into focus.
The AI industry doesn't seek to conquer land as the conquistadors did in Latin America and the Caribbean, but it shares the same appetite for profit. The more users a company has, the more data it can collect from their movements and activities, and the more of that data it can convert into wealth.
The industry no longer exploits labor through mass-scale slavery, which in the colonial era led to the spread of racist beliefs that dehumanized entire populations. But it has found new ways to exploit cheap and precarious labor in the Global South, methods shaped by the implicit idea that these populations don't need, or are less deserving of, economic stability and livable wages.
MIT Technology Review's new AI Colonialism series, which will publish throughout this week, digs into these and other parallels between AI development and the colonial past by examining communities that have been profoundly changed by the technology. In part one, we head to South Africa, where AI surveillance tools, built on the extraction of people's behaviors and faces, are re-entrenching racial hierarchies and fueling a digital apartheid.
In part two, we head to Venezuela, where AI data-labeling firms found cheap and desperate workers amid a devastating economic crisis, creating a new model of labor exploitation. The series also examines ways to change these dynamics. Part three features ride-hailing drivers in Indonesia who are learning to resist fragmentation and algorithmic control by building power through their communities. Part four concludes in Aotearoa (the Māori name for New Zealand), where an Indigenous couple is reclaiming control over their community's data and revitalizing its language.
Together, these stories show how AI is impoverishing the communities and countries that have no say in its development, the same communities and countries already ravaged by colonial empires. They also suggest how AI could be so much more: a way for the historically disadvantaged to assert their culture, voice, and right to determine their own future. That is the ultimate goal of this series: to broaden our view of AI's effect on society and to begin to see how things could be different. It's not possible to talk about "AI for everyone" (Google's rhetoric), "responsible AI" (Facebook's rhetoric), or "broadly distribut[ing]" its benefits (OpenAI's rhetoric) without honestly acknowledging and confronting the obstacles in the way.
Now a new generation of scholars is championing a "decolonial AI" that would return power from the Global North back to the Global South, from Silicon Valley back to the people. I hope this series can serve as a starting point for that conversation, and an invitation to explore more.
I'm a journalist specializing in investigative reporting; my writing has appeared in the New York Times and other publications.