In a video uploaded to Amazon’s web services and events channel on YouTube yesterday (June 22), the retail giant announced a new Alexa feature that can mimic any voice, even that of a deceased relative. For now, the update is only experimental, but its developers are optimistic that the advance could benefit millions of users.
During Amazon’s annual re:MARS conference, Rohit Prasad, the company’s head scientist for Alexa, showed off the innovation. The event, held in Las Vegas from June 21-24, highlights machine learning, automation, robotics and space. Full-event conference passes are listed at $1,499. Speaking before an audience, Prasad gave a demo of the feature.
“As you saw in this experience, instead of Alexa’s voice reading the book, it’s the kid’s grandma’s voice,” the scientist began. He noted that Alexa’s “human attributes” are especially important “in these times of the ongoing pandemic, when so many of us have lost someone we love.” He continued, “While AI can’t eliminate that pain of loss, it can definitely make their memories last.”
According to Amazon, the experimental update can duplicate a person’s voice from less than a minute of recorded audio. The general public doesn’t seem thrilled about the advance. “The fuck you will,” one user tweeted. Another pointed to the security risks, saying, “The phone attacking implications of this tool are not good at all. This will likely be used for impersonation.”
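Amazon has not published technical details of the feature, but the capability it describes, generating speech in a target voice from a short reference clip, resembles zero-shot voice cloning in existing open-source text-to-speech systems. As a rough illustration only, and not Amazon’s implementation, the sketch below uses the open-source Coqui TTS library’s YourTTS model, which conditions synthesis on a short reference recording; the file paths and reference clip are hypothetical.

```python
# Illustrative sketch of zero-shot voice cloning with the open-source Coqui TTS
# library (YourTTS model). This is NOT the Alexa feature described above; it only
# shows the general idea of conditioning synthesis on a short reference recording.
# Assumes: `pip install TTS` and a WAV file of the target voice (hypothetical paths).
from TTS.api import TTS

# Load a multilingual, multi-speaker model that supports cloning from a reference clip.
tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts", progress_bar=False)

# Synthesize new speech in the style of the reference speaker.
tts.tts_to_file(
    text="Once upon a time, there was a curious little robot.",
    speaker_wav="reference_voice.wav",  # hypothetical short clip of the target voice
    language="en",
    file_path="cloned_output.wav",      # hypothetical output path
)
```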
One person said that while they miss family members who have passed away, they don’t see artificial intelligence as the way to go: “I miss my parents deeply, but the idea of interacting with their voices to turn on lights or change the TV volume strikes me as not just surreal but upsetting.”
A YouTube commenter pointed out the audience’s lack of enthusiasm when the demonstration ended. “Do people really even want this technology to exist? Just because it could doesn’t mean it should,” the user said.
The FUCK you will.
Remember when we told you deepfakes would increase the mistrust, alienation, & epistemic crisis already underway in this culture? Yeah that. That times a LOT.
"Amazon has a plan to make Alexa mimic anyone's voice [w/o their consent]"https://t.co/kXm4EXKgp8
— Damien P. Williams, MA, MSc, ABD, Patternist (@Wolven) June 22, 2022
The phone attacking implications of this tool are not good at all — this will likely be used for impersonation.
At Amazon’s re:MARS conference they announced they’re working to use short audio clips of a person’s voice & reprogram it for longer speech https://t.co/5TkEIHoeXG
— Rachel Tobac (@RachelTobac) June 22, 2022
I miss my parents deeply, but the idea of interacting with their voices to turn on lights or change the TV volume strikes me as not just surreal but upsetting. https://t.co/EVmu4dxzv6
— Howard Sherman (@HESherman) June 23, 2022
I don't need Alexa to give me trauma with my dead grandma's voice in the middle of the night, thank you very much. https://t.co/zcZDUPPRMw
— 🐱🏳️🌈 Hugo the Pink Cat 🏳️🌈🐱 (@HugoThePinkCat) June 23, 2022
Get me the fuck out of this timeline. https://t.co/c1o1dRROEd
— Steve Silberman (@stevesilberman) June 22, 2022