Deepfakes Evolve From Novelty To Serious Cyber Threat
Employees should know how to spot all types of disinformation
Melbourne, Australia – Apr. 4, 2022
The recent theft of $35 million, in which fraudsters used deepfake audio to emulate the voice of a UAE bank executive, has reinforced the threat posed by deepfakes — but deepfake videos remained mostly a curiosity until a manipulated video threatened to change the course of Russia’s invasion of Ukraine.
Word of a deepfake video of Ukrainian president Volodymyr Zelenskyy — which purported to show the country’s leader urging his countrymen to lay down their arms — spread quickly online after hackers compromised a Ukrainian TV station and broadcast it.
For those looking closely, the video had many of the classic hallmarks of deepfakes — awkward intonation, incongruous details, and the like — and few believe it actually tricked Ukrainians into surrendering to the Russian invaders.
Yet for KnowBe4 data-driven defense evangelist Roger Grimes, the video marked a dangerous escalation in the threat posed by deepfakes — which grew from fun curiosities into potentially powerful propaganda weapons overnight.
“I originally blew off deepfakes and hesitated talking about them because I thought it was all blather,” Grimes told Cybercrime Magazine. “But I do believe this was a line in the sand.”
“You can’t really get more serious than this, right? Where you have somebody deepfaking something that could lead to change in a country’s leadership and policy.”
With a vicious online propaganda war echoing the kinetic conflict currently devastating many parts of Ukraine — and the more silent cyber war that many argue has intensified behind the scenes with occasional high-profile casualties — the injection of even moderately convincing deepfakes is yet another variable.
Even more concerning: although the Zelenskyy video has enough flaws that Grimes believes it was unlikely to have been created by Russian propagandists, the widespread availability of free, open-source deepfake tools means the technology is now readily available to more-meticulous individuals with a political or ideological axe to grind.
A new imperative in cyber defense
Backed by increasingly capable generative adversarial network (GAN) tools — which automate the creation of deepfakes by having one AI engine generate forgeries while another AI evaluates them, iterating until the forgery can no longer be detected — high-quality deepfake tools are well within reach and quickly becoming more common.
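The adversarial feedback loop described above can be illustrated with a deliberately toy sketch — this is not a real deepfake model, and all names and parameters here are illustrative. A "generator" produces a single number, a "discriminator" judges how far it sits from real data, and the generator is nudged until the discriminator can no longer tell the difference:

```python
import random

# Toy illustration of a GAN-style adversarial loop (hypothetical example,
# not an actual deepfake tool): real data cluster around REAL_MEAN, and
# the generator's output drifts toward whatever fools the discriminator.

REAL_MEAN = 5.0  # the "real" distribution the generator tries to imitate

def discriminator(sample, real_estimate):
    """Score how 'fake' a sample looks: its distance from the real estimate."""
    return abs(sample - real_estimate)

def train(steps=1000, lr=0.05):
    g = 0.0              # generator's single parameter (its forgery)
    real_estimate = 0.0  # discriminator's running model of real data
    for _ in range(steps):
        real_sample = random.gauss(REAL_MEAN, 0.1)
        # The discriminator refines its model of what real data looks like...
        real_estimate += lr * (real_sample - real_estimate)
        # ...while the generator shifts toward fooling that model.
        g += lr * (real_estimate - g)
    return g, real_estimate

random.seed(0)
g, est = train()
print(f"generator output {g:.2f} vs. real data estimate {est:.2f}")
```

Real GAN tools pit two deep neural networks against each other in the same way; the point of the sketch is only the iterative generate-evaluate-improve cycle, which is what drives forgery quality upward without human tuning.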
That, cybersecurity specialists are rapidly learning, makes them a very real concern in a world where reliable attestations of identity have become critical to end-user security.
In an online world where creative and resourceful cybercriminals are rife, easy access to deepfake tools is accelerating the risk posed by video deepfakes — which add a new dimension to the audio-only scams that have been intermittently successful in tricking company executives into transferring sums of money.
Executives are particularly exposed because appearances at shareholder meetings and public interviews provide large amounts of potential deepfake source material.
“The weapons to create disinformation will scale dramatically,” Gartner VP analyst Darin Stewart warned in a recent analysis of deepfakes’ rapid expansion online.
“First, it will become much easier for bad actors to launch many of these attacks, so that even a small percentage of success will make it worth their while.”
“Second, powerful, easy-to-use deepfake technologies in the hands of the many make it simple for anyone to launch an attack against somebody they wish to target for whatever reason.”
Companies can no longer develop and release content into the online world and assume it will remain intact.
Rather, Gartner recommends that security practitioners develop robust attestation mechanisms, such as blockchain-based frameworks, which give viewers a way to confirm the provenance of published videos using information that cannot be stripped from them.
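The core idea behind such attestation can be sketched in a few lines — this is an assumed, minimal workflow, not any specific vendor's or blockchain's API. The publisher records a cryptographic digest of the video at release time; a viewer later recomputes the digest and compares it to the published attestation:

```python
import hashlib

# Minimal sketch of hash-based provenance (illustrative workflow only):
# any change to the content, however small, produces a different digest.

def fingerprint(content: bytes) -> str:
    """Digest a publisher would record (e.g. on a ledger) at release time."""
    return hashlib.sha256(content).hexdigest()

def verify(content: bytes, published_digest: str) -> bool:
    """A viewer's check of a file against the published attestation."""
    return fingerprint(content) == published_digest

original = b"...original video bytes..."
attested = fingerprint(original)

print(verify(original, attested))         # untouched file passes
print(verify(original + b"x", attested))  # any tampering is detected
```

A blockchain adds the property that the published digest itself cannot be quietly rewritten after the fact; the verification step on the viewer's side is the same comparison shown here.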
Another option is the newly released standard from the Coalition for Content Provenance and Authenticity (C2PA), which indelibly marks content with provenance information and builds on Adobe’s Content Authenticity Initiative (CAI).
Just how widely the standard will be adopted remains to be seen, but in the meantime it’s worth considering how deepfake detection techniques can become part of companies’ cybersecurity training.
Now that we can no longer trust the content we see, hear and read, what hope is there for consumers — or company executives — to avoid being misled by deepfakes?
Although mainstream media tends to take the time to verify important videos, Grimes said, the hit-driven blogosphere has a very different standard of proof — and that means “we all need to be doubting Thomases and skeptics until we have other trusted corroborating sources.”
“As deepfakes grow to be more and more part of our life, it is just incumbent upon every viewer to be more skeptical of any news story: video evidence is no longer video evidence.”
– David Braue is an award-winning technology writer based in Melbourne, Australia.
Sponsored by KnowBe4
KnowBe4 is the provider of the world’s largest security awareness training and simulated phishing platform that helps you manage the ongoing problem of social engineering. We help you address the human element of security by raising awareness about ransomware, CEO fraud and other social engineering tactics through a new-school approach to security awareness training. Tens of thousands of organizations like yours rely on us to mobilize your end users as your last line of defense.