A sophisticated fusion of artificial intelligence and media creation techniques, deepfake technology enables the generation of hyper-realistic images, videos, and audio recordings that can convincingly mimic real individuals. The technology can superimpose one person’s likeness and expressions onto another, blurring the boundaries between fact and fabrication.
However, the technology also has a dark side, which has recently raised profound concerns about privacy, trust, and the potential for malicious exploitation. As the technology evolves, so does the risk of its misuse, manifesting in scenarios where individuals’ faces are seamlessly transposed onto explicit content, fueling defamation, harassment, and emotional distress.
Last year saw the shocking case of high schooler Francesca Mani and dozens of other girls at a New Jersey school, victimized by deepfake pornography after schoolboys captured photos of the girls and used AI to turn these images into pornographic material.
Global popstars have also felt the consequences, with singer Taylor Swift being the latest victim, after deepfake pornographic images bearing her likeness spread rapidly across the social media platform X for almost a whole week last month.
Scammers have also been using the technology for financial gain. In one case, a finance worker at a Hong Kong-based multinational firm was tricked into paying out $25 million to fraudsters who posed as the company’s chief financial officer in a video conference call.
The Balkans haven’t been immune to such cases either – last August, a deepfake video of Serbian politician Dragan Djilas was deliberately aired on Serbian broadcaster TV Pink, showing him criticizing his own supporters and fellow party members.
A source based in Skopje told IT Logs that he too had recently been the victim of a deepfake attack: fraudsters took photos and videos he had posted on various social media channels, turned them into deepfake pornography and blackmailed him for financial gain.
The alarming cases of deepfake manipulation highlighted here are but a glimpse into a looming tidal wave of potential threats. The ease with which individuals can be targeted, misrepresented, or exploited for financial gain underlines the urgent need for comprehensive strategies to counter the coming surge of deepfake-related challenges.
(Don’t) trust your eyes and ears
As the next US elections loom on the horizon, the ominous use of deepfake technology also casts a shadow over the democratic process by spreading disinformation.
According to Philip Reitinger, president and CEO of the Global Cyber Alliance, deepfakes raise a completely new level of concern regarding disinformation.
“We have all been able, as a general matter, to trust our eyes and ears, and as a result video and image have powerfully influenced public opinion. Deep fakes allow our trust in our own senses to be abused to malicious purposes. And the risk affects everyone, undermining the reliability of electoral processes, permitting economic attacks on reputable businesses, and much more,” Reitinger tells IT Logs.
The maliciously crafted synthetic media could be employed to fabricate misleading speeches, statements, or actions attributed to candidates, in turn leading to confusion and eroding public trust in the authenticity of political discourse.
Additionally, the potential for deepfakes to exploit vulnerabilities in social media platforms and influence voters by disseminating deceptive content raises concerns about the resilience of democratic processes against technologically induced subversion.
“Fabricated content, with the potential to sway public opinion and disrupt political landscapes, poses a formidable challenge to the democratic narrative. We need to scrutinize multimedia content for subtle irregularities and unnatural movements, as well as validate the authenticity of content through cross-referencing with reliable counterparts,” Skopje-based cybersecurity analyst Suad Seferi tells IT Logs.
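The cross-referencing Seferi describes can be automated in part. Below is a minimal, hypothetical sketch (not any specific tool mentioned in this article) of one common building block, perceptual hashing: an image is reduced to a short fingerprint, and a small Hamming distance between fingerprints suggests two files show the same underlying image, while a large distance suggests they do not. Production systems would decode real image files (e.g. with Pillow or the `imagehash` library); here plain 8x8 grayscale matrices keep the example self-contained.

```python
def average_hash(pixels):
    """Return a 64-bit fingerprint of an 8x8 grayscale matrix (values 0-255).

    Each pixel contributes one bit: 1 if it is brighter than the image's
    average brightness, 0 otherwise. Small edits (recompression, mild
    brightness shifts) rarely flip many bits.
    """
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming_distance(a, b):
    """Count differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Synthetic stand-ins: a reference image, a lightly altered copy
# (as after re-encoding), and an unrelated image.
original = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
near_copy = [[min(255, p + 3) for p in row] for row in original]
unrelated = [[255 - p for p in row] for row in original]

d_copy = hamming_distance(average_hash(original), average_hash(near_copy))
d_other = hamming_distance(average_hash(original), average_hash(unrelated))
```

In practice, a platform could hash a suspect clip’s frames and compare them against fingerprints of known authentic footage; a near-zero distance for `near_copy` and a large one for `unrelated` illustrates the signal such a check relies on. This detects re-used or lightly edited source material, not AI-generated forgeries per se, which is why analysts like Seferi pair it with manual scrutiny for unnatural movements.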
Organizations such as the Global Cyber Alliance are working with many societal actors to support efforts to prevent technology, including AI, from being used to harm people, Reitinger adds.
“This includes efforts to secure the online lives of people and businesses, so there is less fodder for attacks, and policy efforts to reduce the effectiveness of deepfakes in manipulating public opinion,” he tells IT Logs.
Distinguishing truth from fiction
Vigilance and robust cybersecurity measures are thus imperative to safeguard the integrity of the electoral landscape from the insidious influence of deepfake manipulation.
“Knowledge, once shared, becomes a collective fortress. The undetectable nature of deepfakes also urges us to cultivate not only technological defenses but also a deeply human resilience,” Seferi points out.
For Bilyana Lilly, cybersecurity expert and chair of the cyber track at the Warsaw Security Forum, the damage deepfakes cause can already be seen in multiple, increasingly prevalent examples that showcase the technology’s far-reaching consequences.
Moreover, as deepfakes infiltrate political arenas, the erosion of public trust and the potential for election interference become increasingly apparent. These evolving examples not only emphasize the urgent need for robust countermeasures but also serve as a reminder of the profound societal challenges posed by deepfake technology.
“Deepfake audio and video content has targeted political leaders and celebrities all over the globe. It has been proliferated in warzones to erode morale and gain tactical advantage, such as in the Russian war against Ukraine; as well as in peacetime, to discourage voter participation such as in the recent audio deepfake of US President Biden. Also, deepfake is primarily used to generate porn images or videos of celebrities, especially of women, as in the latest case of Taylor Swift’s deepfake porn images,” Lilly explains.
While there is no definitive solution on the horizon yet, she says, there is always the possibility of government regulation, especially of the various platforms where such content is distributed.
“This example shows the dire need for government regulation of platforms which have no or inadequate policies on monitoring, identifying and taking down such content from their networks,” Lilly concludes.