Some people already talk to Amazon’s virtual assistant Alexa like she’s a real person, setting her up for jokes, and having conversations that go way beyond the basic commands like “Alexa, play ‘Hurt’ by Nine Inch Nails.” And that’s how people are treating a disembodied voice. But what if you could see her — and what if she looked disturbingly human?
Magic Leap, an augmented-reality startup, introduced the next evolution of the virtual assistant at its conference earlier this month. Mica performs many of the same functions as Alexa or Apple’s Siri, but when users wear Magic Leap’s augmented-reality glasses, they can also see her incredibly lifelike avatar. Mica smiles, makes eye contact and even yawns, making the interactions all the more convincing. “Our focus was to see how far we could push systems to create digital human representations,” John Monos, Magic Leap’s vice president of human-centered AI, said at the conference. “Above all else, her facial movements are what connect you to her.”
“The technical hurdles are to get the interactions and intelligence to the high level that people expect,” he said. Mica is a prototype, and Magic Leap hasn’t said when it expects a commercial version to be available, or what further subtle characteristics will be added before then to make her even more convincing.
Magic Leap’s vault toward the other side of the uncanny valley — the term used to describe the unsettled feeling people have when faced with something that seems almost human, but isn’t convincing enough — raises ethical and existential questions about the future of humanity and how we interact with machines, as well as security risks.
Earlier this year, leading experts in artificial intelligence research issued a 100-page report outlining the risks of malicious use of AI. The report warned that the technology could be used to make phishing scams — tricking people into revealing sensitive information like credit card numbers — more advanced and prevalent. “The costs of attacks may be lowered by the scalable use of AI systems to complete tasks that would ordinarily require human labor, intelligence and expertise,” the report said.
AI could also be used for malicious political purposes, including increasing the spread of fake news and making surveillance easier.
“We also expect novel attacks that take advantage of an improved capacity to analyze human behaviors, moods and beliefs on the basis of available data,” the report said. “These concerns are most significant in the context of authoritarian states, but may also undermine the ability of democracies to sustain truthful public debates.”
In addition to predictions of all the ways AI could be misused, the researchers made recommendations for how to mitigate the risks. One key recommendation was:
“Researchers and engineers in artificial intelligence should take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities and norms, and proactively reaching out to relevant actors when harmful applications are foreseeable.” Let’s hope the designers at Magic Leap heed these warnings before Mica-like entities end up in everyone’s home, collecting credit card information, spreading fake news and ratting us all out to the government.