Don’t Believe Your Ears: How To Spot Voice Deepfakes

Deepfake technology uses AI to create synthetic media, including fake audio that appears totally authentic.

If you’ve seen Morgan Freeman denouncing himself, The Shining starring Jim Carrey instead of Jack Nicholson, or Spider-Man: No Way Home with Tobey Maguire in place of Tom Holland, you’ve seen deepfakes in action. The word combines “deep learning” and “fake” and refers to technology used to manipulate audio clips or video recordings. The underlying technique, the generative adversarial network, was introduced in 2014 by Ian Goodfellow, who later became Director of Machine Learning at Apple’s Special Projects Group.

Deepfake technology is harmless in itself. Until it’s not. In the hands of scammers it becomes a dangerous tool, full of opportunities for deception, defamation, and disinformation. Thankfully, such attacks are still surprisingly rare, but there have already been several high-profile cases involving voice deepfakes.

In 2019, scammers used this audio-alteration technology to shake down a UK-based energy firm. Posing as the CEO of the firm’s German parent company on a phone call, the scammer requested an urgent transfer of €220,000 to the account of a supposed supplier. A year later, in 2020, other scammers used a voice deepfake to steal up to $35 million from a Japanese company.

Deepfake artificial intelligence has advanced rapidly over the past few years, and machine learning can now produce convincing fakes of images, video, and audio.

“To determine whether an audio piece is fake or the speech of a real human, consider several characteristics: the timbre, manner, and intonation of the speech. A voice deepfake, for instance, will give itself away with an unnatural monotony,” said Dmitry Anikin, Senior Data Scientist at Kaspersky. Sound quality is another telltale feature.
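To make the monotony cue concrete, here is a minimal Python sketch that measures pitch variability in a recording. It assumes the librosa and numpy packages; the file name and the 35 Hz cutoff are illustrative assumptions, not a validated detector.

```python
# A rough heuristic for one red flag Anikin mentions: unnaturally flat pitch.
# Assumes librosa and numpy are installed; the threshold is illustrative only.
import librosa
import numpy as np

def pitch_variability_hz(path: str) -> float:
    """Standard deviation of the estimated pitch track, in Hz."""
    y, sr = librosa.load(path, sr=16000)
    # pyin estimates the fundamental frequency frame by frame;
    # unvoiced frames come back as NaN and are ignored by nanstd.
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    return float(np.nanstd(f0))

if __name__ == "__main__":
    std_hz = pitch_variability_hz("suspicious_call.wav")  # hypothetical file
    print(f"Pitch standard deviation: {std_hz:.1f} Hz")
    if std_hz < 35.0:  # illustrative cutoff: very flat pitch is one warning sign
        print("Unusually monotone speech: treat this recording with suspicion.")
```

A low score alone proves nothing; real detectors combine many timbre, prosody, and artifact features, but the sketch shows how the monotony cue can be quantified.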

So be alert to unintelligible speech and strange noises when listening to an audio message or taking a call. Remember that deepfake audio is produced by deep learning algorithms that generate artificial sounds mimicking human voices, which is why a fake can sound entirely authentic.
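To show how low the barrier to synthetic speech already is, here is a hedged sketch using the open-source Coqui TTS package for Python; the model name is one of its published pretrained English voices, and the output path is an arbitrary assumption.

```python
# A minimal sketch of off-the-shelf speech synthesis, assuming the
# open-source Coqui TTS package is installed ("pip install TTS").
from TTS.api import TTS

# Load a pretrained single-speaker English model; the first run
# downloads the weights automatically.
tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC")

# Render a sentence to a WAV file in a fully synthetic voice.
tts.tts_to_file(
    text="This voice was generated by a machine, not a person.",
    file_path="synthetic_voice.wav",  # arbitrary output name
)
```

Voice-cloning variants of such models additionally accept a short reference recording of a target speaker, which is exactly the capability behind the impersonation scams described above.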

“Currently, the technology for creating high-quality deepfakes is not available for widespread use. However, in the future it may become freely available, which could lead to a surge in related fraud,” said Anikin. “Most likely, attackers will try to generate voices in real time, to impersonate someone’s relative and lure out money, for example. Such a scenario is not realistic for now: creating high-quality deepfakes requires significant, scarce resources.” Low-quality audio fakes, however, take far fewer resources, and fraudsters can already exploit that.

Anikin shares tips on how to protect yourself from deepfakes:

  • You know that phone call your gut tells you is suspicious? Pay attention to it. Chances are you are reacting to poor sound quality, an unnaturally monotone voice, unintelligible speech, or background noise.
  • Keep your emotions in check, and don’t make decisions based on them in the moment. Wait. Ponder. Reflect. Think. Slow down.
  • Don’t share your personal details with anyone over the phone, and don’t transfer money to just anyone, even if they sound convincing. It is better to end the call and double-check the information through several other channels.
  • Use reliable security solutions on your devices for an extra layer of protection.