Safeguards against AI voice impersonation
AI voice-modeling tools have been used to improve text-to-speech generation, create new possibilities for speech editing, and expand movie magic by cloning famous voices like Darth Vader’s. But the power of easily producing convincing voice simulations has already caused scandals, and no one knows who’s to blame when the tech is misused.
Earlier this year, there was backlash when some 4chan members made deepfake voices of celebrities making racist, offensive, or violent statements. At that point, it became clear that companies needed to consider adding more safeguards to prevent misuse of the technology, Vice reported—or potentially risk being held liable for causing substantial damage, like ruining the reputations of famous people.
The courts have not yet decided if or when companies will be held liable for harms caused by deepfake voice technology, or by any other increasingly popular AI technology, like ChatGPT, where defamation and misinformation risks seem to be rising.
There may be increasing pressure on courts and regulators to get AI in check, though, as many companies seem to be releasing AI products without fully knowing the risks involved.
Right now, some companies seem unwilling to slow down releases of popular AI features, including controversial ones that allow users to emulate celebrity voices. Most recently, Microsoft rolled out a new feature during its Bing AI preview that can be used to emulate celebrities, Gizmodo reported. With this feature, Microsoft seems to be attempting to dodge any scandals by limiting what impostor celebrity voices can be prompted to say.
Microsoft did not respond to Ars’ request for comment on how well safeguards currently work to prevent the celebrity voice emulator from generating offensive speech. Gizmodo pointed out that, like many companies eager to benefit from the widespread fascination with AI tools, Microsoft relies on its millions of users to beta test its “still-dysfunctional AI,” which apparently can be prompted to generate controversial speech by framing it as parody. Time will tell how effective any early solutions are in mitigating risks.
In 2021, the FTC released AI guidance telling companies that products should “do more good than harm” and that companies should be prepared to hold themselves accountable for the risks of their products. Last month, the FTC went further, telling companies, “You need to know about the reasonably foreseeable risks and impact of your AI product before putting it on the market.”