A Deepfake Dilemma: Navigating the New Uncanny Valley

By Justin “Hutch” Hutchens | Trace3 Innovation Principal

It has been said that imitation is the sincerest form of flattery. With Artificial Intelligence (AI), the same holds true… but only up to a certain point. As generative AI technology becomes increasingly advanced, its ability to emulate reality is becoming unsettling and problematic. One of the most troubling byproducts of the generative-AI era is the “deepfake”: realistic but artificially created images, audio, or video. Deepfakes manipulate or superimpose existing media onto source content, making it appear as though individuals said or did things they never actually did, often with the intent to deceive or mislead viewers.


In this article, we will examine the growing problem of deepfake technology, discuss why it matters for enterprise business, and look at some of the emerging solutions that are beginning to tackle it.

 

The Deepfake Problem

The Uncanny Valley

While you may not have experienced it (yet), you can probably imagine what it would feel like to suddenly be shown a video of yourself doing something you know you never actually did. It would almost certainly be immediately unsettling and concerning.

Ironically, this unsettling sentiment arising from observing the artificial become “too close” to reality is not new. Over half a century ago, in 1970, Tokyo Institute of Technology robotics professor Masahiro Mori coined the phrase “the uncanny valley” to describe an intriguing and measurable human response to increasingly human-like robotic systems. Mori found that as robots appeared more humanlike, they became more appealing to the people interacting with them – but only up to a point. Beyond that threshold, the degree of human likeness becomes naturally unsettling, uncomfortable, and even revolting. This dramatic shift in emotional response is “the uncanny valley.”


Figure 1 – The Uncanny Valley showing emotional response relative to the human likeness of a robotic system

 
The New Uncanny Valley (of Deepfakes)

While the original notion of the uncanny valley concerned general human likeness, a very similar unsettling response arises as synthetic media comes closer to emulating the likeness of a specific person. This is the new uncanny valley we face in the context of deepfake technology.

In a recent interview reflecting on deepfakes resembling himself, well-known actor Keanu Reeves stated, “What’s frustrating about [deepfakes] is you lose your agency. […] That’s scary.” This new uncanny valley is the fear that Keanu described. 

To fully understand this new uncanny valley, it helps to speak in terms of a common subjective experience. To that end, imagine for a moment that you are Keanu Reeves (just so that we can all have a single point of reference). A group of fans have drawn pictures of you. Each picture is a more accurate depiction of you than the last, and as such, each one is more flattering. But now suppose that one of your fans hands you what looks like a photograph of you. The likeness is indistinguishable from reality, and yet you know from the content of the picture that it is not real – that these events never happened. Unlike the drawings, which triggered an increasingly positive response as they looked more like you, this one creates a deep and unsettling feeling in the pit of your stomach. You immediately start asking yourself questions: “Have other people seen this image of me?” “If they have, do they believe it is real?” This is the power of deepfake technology.


Figure 2 – The new uncanny valley of deepfake technology

 

This new uncanny valley is the alarm and concern felt by Tom Hanks when he realized a deepfake video of his likeness was being used to fraudulently promote a dental plan. This new uncanny valley is the disgust and horror experienced by the daughter of the late comedian George Carlin when a YouTuber used deepfake voice technology to create new content in Carlin’s voice and style for an hour-long comedy special entitled “I’m Glad I’m Dead.” And this new uncanny valley is the outrage expressed by music superstar Taylor Swift when sexually explicit deepfake images of her circulated on X (formerly Twitter) after the Kansas City Chiefs made it to the 2024 Super Bowl.

In each of these cases, the unsettling visceral response stems from the recognition that deepfake media presents such a powerful semblance of reality that it can sway other people’s beliefs about the things you’ve said or done, the places you’ve been, or the ideologies and opinions you hold. Your identity, your brand, and your entire reputation could be drastically altered or even destroyed by this technology in an instant. Unfortunately, in the court of public opinion, perception is reality. With this technology, any moderately tech-savvy person can now transform how anyone is perceived – including how the world perceives you.

 
But why does this matter in business?

Fortunately, most of us are not Taylor Swift, Tom Hanks, or Keanu Reeves. We are working professionals, so we do not need to worry about the emerging risks of deepfake technology, right?

Unfortunately, the answer is no. While deepfakes were a problem only for the public elite a couple of years ago, increasing availability and ease of use have made these technologies a threat to everyone – including businesses and organizations. In early 2024, a multinational company lost the equivalent of $25 million USD after an employee in its Hong Kong office fell victim to a deepfake scam impersonating the company’s CFO. But such scams only scratch the surface. Like individuals, businesses and organizations have a digital brand and identity that they need to protect. Some of the increasingly relevant business impacts of deepfake technology include:

Figure 3 – Business impacts of deepfake technology

Unfortunately, deepfake technology presents not only the potential for negative business impacts, but also the motive. In today’s increasingly polarized socio-political climate, companies are being unwittingly pulled into “culture wars.” Remaining silent and neutral can be just as problematic as taking a stance on ideological and social issues, and even organizations that attempt neutrality can easily find themselves in the crosshairs. The risk of becoming an ideological target is further exacerbated by the many affiliations and business partnerships that most organizations maintain. Companies are increasingly the targets of ideological attacks, commonly referred to as “hacktivism.” While hacktivists have historically engaged in website defacement and denial-of-service campaigns, the new weapon of choice is the incitement of internet mobs, increasingly propelled by deepfake media. Perception is everything in the court of public opinion, and damage to an organization’s brand and reputation can have devastating consequences for the company itself.

 
Brand Protection Tackles Deepfakes

Brand and digital identity protection solutions are not new. For over a decade, brand protection solutions have scraped the Internet looking for fake websites and social media profiles, and leveraged partnerships with platforms, hosting providers, and domain registrars to quickly take down offending content. But in recent years, the threats have been dramatically transformed by emerging deepfake capabilities. Fortunately, AI is a dual-use technology: the same capabilities that create deepfake risk can also be applied in new and creative ways to detect and mitigate it. A sketch of one of the classic scraping-era techniques follows below.
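
To make the traditional approach concrete, here is a minimal sketch of lookalike-domain detection, one of the long-standing techniques these scrapers rely on. Everything here is illustrative: the brand watchlist, similarity threshold, and candidate domains are hypothetical, and real products layer on homoglyph handling, WHOIS enrichment, and automated takedown workflows.

    # Toy sketch of classic brand-protection monitoring: flag candidate
    # domains that look confusingly similar to a protected brand name.
    # Hypothetical watchlist and domains; not any vendor's actual logic.
    from difflib import SequenceMatcher

    PROTECTED_BRANDS = ["trace3", "doppel"]  # illustrative watchlist
    SIMILARITY_THRESHOLD = 0.8               # tuned per deployment

    def similarity(a: str, b: str) -> float:
        """Return a ratio in [0, 1] of how closely two labels match."""
        return SequenceMatcher(None, a, b).ratio()

    def flag_lookalikes(candidate_domains: list[str]) -> list[tuple[str, str, float]]:
        """Return (domain, brand, score) for domains resembling a protected brand."""
        hits = []
        for domain in candidate_domains:
            label = domain.split(".")[0].lower()  # compare the leading label
            for brand in PROTECTED_BRANDS:
                score = similarity(label, brand)
                # Skip exact matches: those are the legitimate properties.
                if score >= SIMILARITY_THRESHOLD and label != brand:
                    hits.append((domain, brand, score))
        return hits

    if __name__ == "__main__":
        # Candidates would normally come from a newly-registered-domain feed.
        candidates = ["trace3.com", "tracee3.net", "d0ppel.io", "example.org"]
        for domain, brand, score in flag_lookalikes(candidates):
            print(f"{domain} resembles '{brand}' (similarity {score:.2f})")

Running this flags “tracee3.net” and “d0ppel.io” while ignoring the legitimate and unrelated domains; the point is simply that the traditional approach is string- and image-matching against known brand assets, which is exactly where it runs out of steam against deepfakes.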

The Trace3 Innovation Team recently had the opportunity to sit down with Doppel, an emerging leader in this space, to better understand their approach to digital identity protection and specifically their capabilities related to deepfake monitoring. Since 2023, their team has been providing brand protection services and deepfake monitoring for the upcoming 2024 US Presidential Election on behalf of leading candidates from both the Republican and Democratic parties, across a wide array of platforms including TikTok, YouTube, Instagram, Facebook, X (formerly Twitter), and Telegram. Through this process, they have detected hundreds of fake (impersonated) personas of top candidates.

One of the most fascinating parts of this conversation was understanding the degree of granularity and nuance required to determine whether an image is “problematic.” As we scrolled through the platform, I saw hyperbolic depictions of President Joe Biden with devil’s horns and a pitchfork, and depictions of former President Donald Trump with an Adolf Hitler mustache. The Doppel team explained that, while over-the-top and distasteful, examples like these often fall within the boundaries of free speech because they are clearly portrayed as parody or satire. Much of this distinction hinges on intent – specifically, whether the media content is meant to be interpreted as authentic. Similar nuance is required when supporting businesses in their brand protection efforts, where even unauthorized use of proprietary and copyrighted information may be allowable in certain contexts under the Fair Use doctrine. This becomes even more complex when considering the broad range of governmental regulations across countries and jurisdictions, and the unique terms of service of each social media and hosting platform on which the content might circulate.

Fortunately, the emerging capabilities of generative AI can be used to automate this fine-grained analysis. Historical brand protection techniques could compare an image against a company’s logo or photos of its executive staff, but until recently it was not possible to thoroughly analyze the content or context of those images. With emerging multimodal models, it is now possible to programmatically analyze the precise content of images, video, or audio, and to evaluate that content against specific regulatory and/or platform-specific parameters. In short, offending content can now be identified with more granularity, effectiveness, and efficiency than ever before.
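
As an illustration of what this kind of automated, context-aware review might look like, here is a minimal sketch that asks a vision-capable multimodal model (via the OpenAI Python SDK) to classify an image against a policy prompt. The policy wording, classification labels, and image URL are invented for illustration – this is an assumption-laden sketch, not a description of Doppel’s or any vendor’s actual pipeline.

    # Minimal sketch of multimodal content analysis for brand protection.
    # Assumes the OpenAI Python SDK and a vision-capable model (e.g., gpt-4o);
    # the policy text, labels, and URL below are hypothetical.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    POLICY_CONTEXT = """You are reviewing media for a brand-protection team.
    Classify the image as one of: AUTHENTIC, PARODY, or DECEPTIVE_IMPERSONATION.
    Treat clearly exaggerated or satirical depictions as PARODY. Flag content as
    DECEPTIVE_IMPERSONATION only if a reasonable viewer could believe it is real.
    Briefly justify the classification."""

    def assess_image(image_url: str, platform_rules: str) -> str:
        """Ask a multimodal model to judge an image against policy and platform rules."""
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": POLICY_CONTEXT},
                {
                    "role": "user",
                    "content": [
                        {"type": "text", "text": f"Platform rules: {platform_rules}"},
                        {"type": "image_url", "image_url": {"url": image_url}},
                    ],
                },
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        verdict = assess_image(
            "https://example.com/suspect-image.jpg",  # hypothetical URL
            "Manipulated media must be labeled as such; satire is permitted.",
        )
        print(verdict)

The key design point is that the regulatory and platform-specific parameters discussed above become part of the prompt context, so the same detection pipeline can apply different rules per jurisdiction or platform without retraining anything.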

In addition to the offerings from new market entrant Doppel, many established digital brand protection vendors are rapidly innovating and optimizing their offerings around the new capabilities that generative AI provides. The threat landscape is transforming quickly as AI capabilities grow and become more accessible. All hope is not lost, but continued innovation will be critical for defenders to keep protecting their brands and identities against increasingly sophisticated adversaries.



Justin “Hutch” Hutchens is an Innovation Principal at Trace3 and a leading voice in cybersecurity, risk management, and artificial intelligence. He is the author of “The Language of Deception: Weaponizing Next Generation AI,” a book focused on the adversarial risks of emerging AI technology. He is also a co-host of The Cyber Cognition Podcast, a show that explores the frontier of technological advancement and seeks to understand how cutting-edge technologies will transform our world. Hutch is a veteran of the United States Air Force, holds a Master’s degree in information systems, and routinely speaks at seminars, universities, and major global technology conferences.

 
