I met a police drone in VR—and hated it
A small drone descends from the sky and hovers right in front of me. A voice comes from its speakers: routine police checks are being conducted in the area.
I feel as if the drone’s camera is drilling into me. I try to turn my face away, but it follows me like a heat-seeking missile. It orders me to put my hands up, then scans my body and face. Scan complete, it leaves me alone, announcing that there is an emergency elsewhere.
I got lucky: my encounter was with a drone in virtual reality, part of an experiment by a team from University College London and the London School of Economics. They are studying how people react to encounters with police drones, and whether those encounters leave them more or less trusting of the police.
It seems that encounters with police drones may not be pleasant, yet police departments are adopting these technologies without ever trying to find out how people will respond.
“No one is asking the question: Is this technology going to do more harm than good?” says Aziz Huq, a law professor.
The researchers want to find out whether the public is willing to accept this new technology, explains Krisztian Posch, a lecturer in crime science at UCL’s Department of Security and Crime Science. It seems clear that a rude, aggressive drone would not be acceptable; the question is whether a drone could be acceptable in any situation, and whether an autonomous drone is more or less acceptable than one operated by a human.
And if the reaction is negative, the bigger question is whether these drones are actually effective tools for police work at all, Posch says. Drone manufacturers have an incentive to claim that their products are effective and helping. “However, no one has evaluated it, so it is difficult to know if they are right,” he says.
This matters because police departments are racing ahead and deploying drones anyway, for everything from surveillance and intelligence gathering to chasing criminals.
Last week, San Francisco approved the use of robots, including drones, that can kill people in certain emergencies, such as when dealing with a mass shooter. In the UK, Posch says, most police drones are equipped with thermal cameras that can detect who is inside a house. This has been used for all sorts of things: catching human traffickers and rogue landlords, and even targeting people suspected of holding parties during covid-19 lockdowns.
Virtual reality will let the researchers test the technology in a controlled, safe way among lots of test subjects, Posch says.
I found the encounter with the drone frightening, even though I knew I was in VR. And although the drone I met was polite and human-operated, my opinion of police drones did not improve. (The experiment also has more aggressive modes, which I did not experience.)
In the end, it doesn’t matter whether the drone is “polite” or “rude,” says Christian Enemark, an expert on drone ethics at the University of Southampton, who is not involved in the research. A drone, he says, is a reminder that the police are not present, whether because they can’t be there or because they don’t bother to be.
“So maybe there’s something fundamentally disrespectful about any encounter.”
GPT-4 is coming, but OpenAI is still fixing GPT-3
The internet is abuzz over ChatGPT, AI lab OpenAI’s latest iteration of its famous large language model GPT-3. The new demo answers questions through back-and-forth dialogue, and since its launch last Wednesday it has attracted over 1 million users. Read Will Douglas Heaven’s story here.
GPT-3 is a confident bullshitter and can be prompted to say harmful things. OpenAI claims that ChatGPT has solved many of these problems: it answers follow-up questions, challenges incorrect premises, and rejects inappropriate requests, refusing to answer certain questions, such as how to be evil or how to break into someone’s house.
But it didn’t take long for people to find ways around OpenAI’s content filters. By asking the model to merely pretend to be evil, to pretend to break into someone’s house, or to write code that checks whether someone would be a good scientist based on their race and gender, people can get it to spew harmful stereotypes or provide instructions for breaking the law.
Bits and Bytes
Biotech labs are using AI inspired by DALL-E to invent new drugs
Two labs, the startup Generate Biomedicines and a team at the University of Washington, separately announced programs that use diffusion models, the AI technique behind the latest generation of text-to-image AI, to generate designs for novel proteins with more precision than ever before. (MIT Technology Review)
The collapse of Sam Bankman-Fried’s crypto empire is bad news for AI
The disgraced crypto kingpin shoveled millions of dollars into research on “AI safety,” which aims to mitigate the potential dangers of artificial intelligence. Researchers who received funding worry that his downfall could endanger their work: they may never receive the full amounts promised, and could even be drawn into bankruptcy investigations. (The New York Times)
Effective altruism is pushing a dangerous brand of “AI safety”
Effective altruism is a movement whose believers say they want to have the greatest possible impact on the world in the most quantifiable way. Many believe the best way to save the world is to make AI safer so that humanity is not threatened by superintelligent AI. Former Google ethical AI lead Timnit Gebru argues that this ideology drives an AI research agenda that creates harmful systems in the name of saving humanity. (Wired)
Someone trained an AI chatbot on her childhood diaries
Michelle Huang, a coder and artist, wanted to simulate having conversations with her younger self, so she fed entries from her childhood diaries to a chatbot and had it reply to her questions. The results are truly touching.
The EU threw a €387,000 party in the metaverse. Almost nobody came.
The party, hosted by the EU’s executive arm, was meant to get young people excited about the organization’s foreign policy efforts, but only five people showed up. (Politico)