• Opinion
  • October 7, 2019
  • 13 minutes

Artificial Intelligence: What can government do about deepfakes?

Opinion: In a world of “fake news”, it’s getting even harder to trust what you see online

This article is written by Clare Welch, an MSc graduate in Media and Communications Governance from the London School of Economics and Political Science specialising in AI ethics and the socio-economic causes and effects of disinformation campaigns. For more like this, please see our digital government newsfeed. 


“The truth is rarely pure, and never simple.” – Oscar Wilde, The Importance of Being Earnest

Picture this: you are casually scrolling through your Facebook feed and come across a video of Facebook CEO Mark Zuckerberg.

In it, he says, “whoever controls the data controls the future.” Naturally, in light of the Cambridge Analytica scandal, you are alarmed by this apparently earnest proclamation. You question the veracity of the video; surely the chief executive of one of the most powerful companies in the world would not be so blasé about his thirst for power.

But the man in the video looks like the real Zuckerberg, has the same mannerisms as Zuckerberg, and speaks in the same, vaguely robotic tone as Zuckerberg. So, as the old adage goes, if it walks like a duck and talks like a duck…

However, in this case, the duck is not a duck.

What is a deepfake?

The video of Zuckerberg that circulated around the Internet in June was not, in fact, a video of Zuckerberg; it was a deepfake.

Created by a group of artists using machine learning, the video was initially posted to Instagram (which is owned by Facebook) before being flagged by two of Facebook’s fact-checking partners.

 

The Zuckerberg video followed closely on the heels of another viral doctored video — that of US House Speaker Nancy Pelosi appearing to slur her speech. In both cases, the falsified audio-video material spread widely before being flagged by the host platform. While third-party fact-checkers and other internet commentators were noticeably quicker to identify these videos as fake, the varied and inconsistent responses of Facebook and other social media platforms represent just the tip of the iceberg when it comes to the deepfake problem.

Deepfakes are realistic-looking images and videos altered to portray someone doing or saying something that never actually happened.

They have become something of a buzzword since they emerged in 2017. A portmanteau of “deep learning” and “fake,” deepfakes are made with pairs of neural networks arranged as “generative adversarial networks,” or “GANs,” which pit the two networks against each other. One network generates content modelled on source data while the other tries to spot the artificial content; both learn from each other, and the result is highly realistic fake audio and visual material.
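For readers who want to see the adversarial idea in practice, the sketch below shows a minimal GAN training loop in Python using PyTorch. The toy data, network sizes and training settings are illustrative assumptions only; real deepfake systems are far larger and work on images, video or audio rather than random numbers.

    import torch
    import torch.nn as nn

    # Illustrative sizes: a 16-dimensional noise vector is mapped to a
    # 32-dimensional "sample" standing in for an image or audio clip.
    latent_dim, data_dim = 16, 32

    # Generator: turns random noise into synthetic content.
    G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))

    # Discriminator: scores how likely a given sample is to be real.
    D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    loss_fn = nn.BCEWithLogitsLoss()

    for step in range(1000):
        real = torch.randn(128, data_dim)        # stand-in for genuine source data
        fake = G(torch.randn(128, latent_dim))   # generator's attempt at fakes

        # The discriminator learns to separate real samples from generated ones.
        d_loss = loss_fn(D(real), torch.ones(128, 1)) + \
                 loss_fn(D(fake.detach()), torch.zeros(128, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # The generator learns to produce samples the discriminator accepts as real.
        g_loss = loss_fn(D(fake), torch.ones(128, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

In a full deepfake pipeline the generator would be a large model trained on footage of the target person rather than random numbers, but the back-and-forth contest between the two networks works in the same way.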

What happens when it falls into the wrong hands?

Though the original technology has been around since 2017, when it was first used to insert people’s faces into pornography without their knowledge or consent, the capability of the AI needed to create realistic-looking fakes has rapidly advanced. For example, researchers at Stanford created an algorithm in the spring of 2019 that allows video editors to modify footage as easily as if they were using a word processor.

While most current projects, like the one out of Stanford, have benign intended uses, such as streamlining film editing in post-production, the same machine learning that enabled these studies is now widely available to anyone with an Internet connection and a curiosity to test the boundaries of what is possible.

More nefarious applications of GANs have led to programs like DeepNude, a deepfake app that allowed users to create realistic nude pictures of women within thirty seconds. The release of one AI system that generates fake text was pared back by its creators because its mimicry of human writing was so plausible that its best output was nearly impossible to distinguish from actual news stories, product reviews, and even tweets.


In August, the European Union’s law enforcement agency, Europol, reported the first-ever cybercrime committed using AI: criminals scammed a UK-based energy firm out of nearly €200,000 ($219,200) by using a program that almost perfectly mimicked a chief executive’s voice.

Given the proliferation of deepfake technology, it is no wonder that policymakers around the globe have picked up on the dangers it poses to society.

Though disinformation spread through image and video manipulation has been a challenge for governments for decades, deepfakes are an arguably more malignant threat. Most people trust what they see or hear more than what they read, for the simple reason that audio and video let them judge events as if they were first-hand eyewitnesses.

How do we make sense of our current situation?

The prominence of smartphones and instant sharing on social media has made videos and sound clips more commonplace than ever before, and they are often the first thing people encounter when a big news event happens. Deepfakes therefore threaten the basic facts of shared reality — the foundation for any functioning system of government.

Furthermore, deepfakes pose severe threats to fundamental liberties; reproducing someone’s likeness without their knowledge or consent, especially their voice and mannerisms, threatens one’s personhood. The rise of so-called “revenge porn” and other uses of deepfake AI for blackmail already affects many women, not to mention the potential for that sort of blackmail to be used for political purposes. On the other hand, the technology is also used as a form of artistic expression or a critique of power, as was the case with the fake Zuckerberg video from June.


Questions abound as to how best to protect individual rights to expression while also protecting society’s right to a free and accurate information system. The quality of deepfake technology, its resistance to detection, and its accessibility not only to intelligence agencies but also to non-state and civilian actors jeopardise the stability of society.

On a global scale, bad actors can use this technology to depict more or less anyone spouting inflammatory things, and to discredit opponents or incite violence. At the national level, deepfakes threaten the integrity of elections everywhere.

On social media, people often retweet or share information they see from others without bothering to check whether it is true, especially if it is negative or novel. Industry and policymakers alike agree that a new paradigm needs to be set, but the current proposals only scrape the surface of minimising deepfakes’ potential threat.

Merely flagging a video as fake will likely not stop the effect of a deepfake intended to spread disinformation; if people are exposed time and again to false information, they are likely to believe it. This is known as the illusory truth effect, and it often occurs with manipulated or synthetic content, as people only remember that they have seen the information before, not that they knew it was false. Nor is requiring all deepfake content to be watermarked a sustainable measure, as enforcing such legislation would be near impossible across state borders.

What can government do?

Not all hope is lost, though.

While the current state of the global media landscape, coupled with a lack of coherent standards for content regulation across platforms in different jurisdictions, paints a grim picture for the prospects of sustainable deepfake regulation, there are measures that all governments can and should take to pave the way for a new governance regime.

First, media literacy should be a priority for all citizens, regardless of age. Adequate education on how false information is spread and a more general awareness of what constitutes a harm online will not only help citizens identify deepfakes but will also help them identify misleading information elsewhere on the Internet.

Second, governments should reconsider their defamation, fraud, and misappropriation of likeness laws. This would help to define what a malicious deepfake is and would better protect everyday citizens from being victimised by developing technology.

Third, governments should partner more closely with industry to invest in better and more flexible technical solutions that will help to identify deepfakes for moderation purposes. Better identification of deepfakes would help social media platforms create more uniform policies regarding deepfakes and would help to protect people’s right to expression without allowing harmful content to proliferate.

Finally, governments need to establish transparency guidelines as to how and when government itself uses deepfake technology. Just as citizens have the right to be protected from foreign disinformation campaigns, they also have the right to know when and how their own government is deploying machine learning.

No matter how much we may wish it, deepfakes are only going to become more commonplace. Right now, it may seem that insulating societies from the negative consequences of advances in artificial intelligence is impossible (or – at the very least – improbable). However, we must acknowledge that the challenges posed by deepfakes are similar to challenges we as a society have overcome in the past.

Good policymaking can protect both contemporary society and future generations from the abuses of new technology. — Clare Welch

(Picture credit: Anthony Quintano/Wikimedia Commons)
