
Opinion

Address the problem of deepfakes | Analysis


War and sex have long been fundamental drivers of consumer technology. Satellite navigation, penicillin, microwave ovens and super glue all have their origins in the imperatives of the battlefield. Nor is it a coincidence that, for a couple of decades, the flagship consumer electronics show in Las Vegas ran alongside the adult entertainment expo, often in the same building. The Internet owes much to the United States military for its creation, and to the pornography industry for its rapid spread.

More recently, a third influence has joined these two drivers of consumer technology: politics. Political actors around the world have begun adopting new technologies to shape public opinion substantially. Porn and politics now sit together at the forefront of the consumer demand that drives digital technology.

The 2008 Barack Obama campaign marked the beginning of this trend, with the Republican opposition left playing catch-up against the social media onslaught. Subsequent electoral battles have each had their favoured technologies of the season, from targeted digital marketing through social media posts and tweets to echo chambers of viral political opinion built on personal messaging applications.

The recent video of a Bharatiya Janata Party (BJP) politician speaking in digitally altered English, with an accent calculated to appeal to a particular voter base, has prompted accusations of the first resort to deepfakes in Indian politics. The episode forces the question: will deepfakes become the shiny new technological tools available to the propaganda industry?

Deepfake videos are a substantial advance on the crude doctored images of the nineties. They are produced with a family of artificial intelligence and deep learning techniques, collectively referred to as generative adversarial networks (GANs), that credibly imitate the real world, whether images, music, speech or prose. As the recent Nancy Pelosi episode showed, the greater the public availability of video footage of an individual, the easier it becomes to generate fake videos of them algorithmically.

There are three main problems with deepfakes that make them particularly worrying. The first concerns the compelling narrative that the moving image creates in our minds. From fake news to phishing emails, the global network is, of course, a hotbed of fraud and deception. But deepfake videos worry us because we place different levels of trust in what we read and what we see. The written word is an expression of something within a person's mind; video is a record of physical movement. Because we believe we have many more data points with which to visually assess and repudiate a fake in the latter case, we also place more trust in our judgment. A well-made fake, therefore, attracts far less doubt.

The second problem is that the way GANs operate makes deepfake videos much harder to refute. Even videos and audio clips manipulated with far less advanced technologies are not easy to debunk, given the technical processes involved in detecting alteration. The problem gets worse with GANs. These adversarial networks pit two neural networks against each other. The generator network analyses real-world data sets and produces new data that appears to belong to them, while the discriminator network evaluates the authenticity of the generated data. Through multiple rounds of cat and mouse between the two, the generated output reaches high levels of verisimilitude, yielding synthetic data that is almost indistinguishable from the real thing. By design, then, verifying whether data is synthetic requires substantial data and algorithms capable of analysing it. The questioning member of a family WhatsApp group can find his voice of reason drowned out in such situations.
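To make this cat-and-mouse dynamic concrete, here is a minimal sketch of a GAN training loop in PyTorch, fitting a toy one-dimensional Gaussian rather than video. The network sizes, learning rates and data here are illustrative assumptions, not anything described in this article; real deepfake systems scale the same adversarial loop up to images and video frames.

```python
# Minimal GAN sketch: generator vs discriminator on toy 1-D data.
# All architectures and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim = 8

# Generator: maps random noise to synthetic samples.
G = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores how "real" a sample looks, as a probability.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # "Real-world" data: samples drawn from a Gaussian centred at 3.
    real = 3 + torch.randn(64, 1)
    fake = G(torch.randn(64, latent_dim))

    # Discriminator round: learn to label real data 1 and generated data 0.
    opt_D.zero_grad()
    d_loss = (loss_fn(D(real), torch.ones(64, 1))
              + loss_fn(D(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_D.step()

    # Generator round: learn to make the discriminator call fakes real.
    opt_G.zero_grad()
    g_loss = loss_fn(D(G(torch.randn(64, latent_dim))), torch.ones(64, 1))
    g_loss.backward()
    opt_G.step()

# After many rounds, the generator's samples approach the real distribution,
# which is why mature GAN output is so hard to repudiate by inspection alone.
```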

The fact that human judgment can no longer serve as the first line of defence against this barrage of automatically generated deepfakes also makes clear that we face an ethical choice. The broad choice is between signing up to a world where truth is algorithmically determined, or one in which we protect the human element but deliberately forgo much of the progress that GANs, with their significant potential, could bring to artificial intelligence.

The ethical choice described above must at some point translate into regulatory action, a matter that lies predominantly within the domain of politics. Therein lies the third problem: the special attraction that deepfakes potentially hold for political campaigns. If political actors simultaneously benefit from GAN-scripted truths and falsehoods and write the rules that govern them, they would like nothing better than to leave matters to self-regulation. We have already seen this with the ineffective voluntary code that the Internet and Mobile Association of India adopted to tackle misinformation during the 2019 parliamentary elections. Deepfakes are far more worrying, and India must avoid a formulaic response.

When politics drives consumer technology, it differs ethically from the porn industry's early adoption. As Jonathan Coopersmith pointed out in 1998, the stigma attached to the latter stands in the way of any public acknowledgement of its support. With political actors, however, we run the greater risk of never evaluating the adopted technology for its long-term damage. For this reason too, independent regulators such as the Election Commission of India must begin to address the deepfake problem before it becomes an unmanageable crisis.

Ananth Padmanabhan is Dean, Academic Affairs, Sai University, and a visiting fellow at the Centre for Policy Research.

The opinions expressed are personal.

Hindustan Times

