Forbes Technology Council: You Only Have One Voice — Don't Let Hackers Steal It

Published by Mike Fong on October 9, 2020

Synthetic voice fueled by deep learning had a coming-out party of sorts in 2016, when DeepMind's WaveNet and Adobe's VoCo were first unveiled. In those days, we saw early glimpses of how deep neural networks could be used to analyze the many nuances of a person's voice and speech patterns in order to create an artificial voice for that person. Combined with text-to-speech techniques, the cloned voice could be made to say virtually anything. But the output of those early attempts was largely robotic-sounding and not difficult to spot as fake.
