Stability of our society threatened by disinformation via deep fakes endangering democracy

According to Arijit Goswami, the stability of our society is more threatened by disinformation than by anything else we can imagine. He writes:

It is a pandemic that has engulfed small and large economies alike. People around the world face threats to life and personal safety because of the volumes of emotionally charged and socially divisive pieces of misinformation, much of it fuelled by emerging technology. This content either manipulates the perceptions of people or propagates absolute falsehoods in society. {Is AI the only antidote to disinformation?}

We wonder whether he is not a little paranoid to think:

AI-based programmes are being used to create deep fakes of political leaders by adapting video, audio and pictures.

On Nov. 25, 2019, an article headlined “Spot the deepfake. (It’s getting harder.)” appeared on the front page of The New York Times business section. [Cade Metz, “Spot the Deepfake. (It’s Getting Harder.),” The New York Times, November 25, 2019, Section B, Page 1.]

It is true that we should be very careful and on the lookout for this misleading technology, one of the most worrying fruits of rapid advances in artificial intelligence (AI), which allows those who wield it to create audio and video representations of real people saying and doing made-up things. It has become possible to show people saying things they would never say, or to give them facial expressions that do not match the words being spoken.

We cannot ignore such technology, which is still developing and making it increasingly difficult to distinguish real audio and video recordings from fraudulent misrepresentations created by manipulating real sounds and images.

“In the short term, detection will be reasonably effective,”

says Subbarao Kambhampati, a professor of computer science at Arizona State University.

“In the longer run, I think it will be impossible to distinguish between the real pictures and the fake pictures.”[Cade Metz, op. cit.]

The technology has been used to sow the seeds of discord in society and create chaos in markets.

AI is also getting better at generating human-like content using language models such as GPT-3 that can author articles, poems and essays based on a single-line prompt. Doctoring of all types of content has been made so seamless by AI that open-source software like FaceSwap and DeepFaceLab can enable even discreet amateurs to be epicentres of social disharmony. In a time when humans can no longer comprehend where to place their trust, “technology for good” looks to be the only saviour. {Is AI the only antidote to disinformation?}
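It is worth pausing on how low the barrier to this kind of text generation has become. GPT-3 itself is not openly available, but as a rough illustration, even the smaller, freely downloadable GPT-2 model (accessed here through the open-source Hugging Face transformers library) will spin a single-line prompt into paragraphs of fluent-sounding text; the prompt below is simply an invented example:

```python
# A rough sketch, not GPT-3 (which is not openly available): the freely
# downloadable GPT-2 model, via the Hugging Face "transformers" library,
# continues a single-line prompt into fluent-sounding text.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Scientists announced today that"  # an invented single-line prompt
result = generator(prompt, max_length=100, num_return_sequences=1)

print(result[0]["generated_text"])
```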

With the popularity of social media, the danger exists that deep fake videos will be multiplied before they are recognised as such. The false representations could come to dominate the media landscape for days or even weeks.

“A lie can go halfway around the world before the truth can get its shoes on,”

warns David Doermann, the director of the Artificial Intelligence Institute at the University of Buffalo. [“House holds hearing on ‘deepfakes’ and artificial intelligence amid national security concerns,” CBS News, June 13, 2019.]

Despite numerous positive use cases of AI-generated synthetic media, deepfakes might also pose significant dangers to democracy and become the new standard of masterful manipulation. With this in mind, DRI Ukraine partnered with the Committee on the Development of Artificial Intelligence under the Ministry of Digital Transformation of Ukraine to organize a roundtable discussion “Deepfakes in Politics”. It was held on 14 December 2021 online, streamed both in English and in Ukrainian.   {Deepfakes in politics}

At the roundtable, Anna Bulakh challenged the European Parliament’s definition, arguing that synthetic media technology, and manipulation as such, is neutral.

What makes it “deepfake” and malicious is intent (why and what for) and attribution (by whom). What modern tech companies like Reface bring to the consumer is democratization of the technology where it becomes more available and thus accessible for anyone, for any intent.      {Deepfakes in politics}

she said.

At the moment, deepfake technology is still predominantly being used to create sexual videos of women without their consent, but politicians with malicious intentions could hire technical staff to damage an opponent. Fake videos have already been made of politicians endorsing views contrary to their own and of public figures confessing to wrongdoing. In those instances, the unusual contextual setting or the explicit acknowledgement of certain ‘facts’ gives a sign that they are deepfakes.

There is good reason to be concerned about the misuse of deepfakes to manipulate elections, perpetrate fraud in business, alter public opinion and threaten national security.

Fake news has often been found to share the same root – the place of origin before the spread of the news. The Fandango project, for example, uses stories that have been flagged as fake by human fact-checkers and then searches for social media posts or online pages that have similar words or claims. This allows the journalists and experts to trace the fake stories to their roots and weed out all the potential threats before they can spread out of control. {Is AI the only antidote to disinformation?}
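To make that idea a little more concrete, the kind of matching such projects rely on can be sketched in a few lines. The snippet below is not the Fandango system itself, only a minimal illustration of comparing fact-checked stories with social media posts by the words they share; the example texts and the 0.5 similarity threshold are invented for illustration.

```python
# Minimal illustration of finding posts that echo already-debunked stories
# by textual similarity (TF-IDF + cosine similarity). Not the Fandango
# system itself; the texts and the 0.5 threshold are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

flagged_stories = [
    "Miracle cure suppressed by doctors, leaked report claims",
    "Candidate caught on video admitting election fraud",
]
social_posts = [
    "Leaked report: doctors are hiding a miracle cure!",
    "Lovely weather at the lake this weekend",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(flagged_stories + social_posts)
story_vectors = matrix[: len(flagged_stories)]
post_vectors = matrix[len(flagged_stories):]

# For each post, check whether it closely resembles any flagged story.
similarity = cosine_similarity(post_vectors, story_vectors)
for post_index, scores in enumerate(similarity):
    if scores.max() > 0.5:
        print(f"Post {post_index} resembles flagged story {scores.argmax()}")
```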

One might ask whether the enjoyment of the gossip and pub talk found on Facebook and the like has not actually led to an increase in the distribution of deep-fake videos.

There is a marked difference between the way fake news and genuine news travel over social networks. Researchers from MIT suggest that fake news travels six times faster than genuine news to reach 1,500 people on Twitter. Moreover, the chain length of genuine news (the number of people who have propagated a social media post) was never above 10 but rose to 19 for fake news. This is partly because of swarms of bots deployed by malicious elements to make fake stories go viral.

Humans are equally responsible, as people usually share fake news faster without much critical thinking or a sense of judgment. GoodNews uses an AI engine to identify fake news using engagement metrics, as fake news shows more shares than likes, compared to vice versa for genuine news. Such techniques to capture suspicious content based on its spread can help prevent radicalization. {Is AI the only antidote to disinformation?}
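As a small illustration of the engagement signal described above (and emphatically not GoodNews’s actual model), one could flag posts whose shares outnumber their likes; the figures and the threshold below are invented.

```python
# Toy illustration of the engagement heuristic described above: a post whose
# shares outnumber its likes is treated as suspicious. Not GoodNews's actual
# model; the figures and the threshold are invented for the example.
posts = [
    {"id": "breaking-rumour", "shares": 950, "likes": 120},
    {"id": "holiday-photos", "shares": 40, "likes": 600},
]

def share_like_ratio(post):
    # Avoid division by zero for posts with no likes at all.
    return post["shares"] / max(post["likes"], 1)

for post in posts:
    verdict = "suspicious" if share_like_ratio(post) > 1.0 else "ordinary"
    print(f'{post["id"]}: {verdict} (ratio {share_like_ratio(post):.2f})')
```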

We should not only recognise the dangers of deep fakes and spreading fake news, but also alert people when we notice them. Combating disinformation must be our highest priority in safeguarding democracy.

Humanizing the approach to combating disinformation must be the highest priority in order to build a well-informed society of critical thinkers. A lack of proactive measures involving all stakeholders can lead to the rapid erosion of trust in media and institutions, which is a precursor to anarchy. Until humans learn to objectively evaluate online content, AI-based technologies have to be our ally in combatting disinformation online.

 

+

Preceding

  1. Gossip and fake news, opposite fact checking and facts presenting
  2. Conspiracy theories in plenty-fold
  3. The Media and Democracy
  4. Fact-Checking: Information Today
  5. Social Awareness & Social Involvement
  6. Living in a Post-Truth World
  7. Post-Truth, Revisited
  8. Donald Trump and his Republican supporters casting journalists as enemy combatants
  9. The First Great Information War 
  10. Deciphering Russia’s misinformation: How do we sort fact from fiction?

++

Additional reading

  1. Eyes on pages and messages on social media
  2. Social media for Trumpists and changing nature of warfare

+++

Related

  1. What is generative artificial intelligence (AI)?
  2. Post MA: An update on Artists, Python, Artificial Intelligence and Machine Learning
  3. Stanford University Professor Maneesh Agrawala On Video Editing Tools, Deep Fakes & More
  4. AI Tools Playing Center Role In Spotting Duplicate Images In Journals
  5. Deep Fakes & Its Impact On Journalism
  6. Here Are 5 Ways AI Can Compliment Modern Age PR
  7. Content Moderation Case Study: Dealing With ‘Cheap Fake’ Modified Political Videos (2020)
  8. An Audio Editing Tool that “Deep Fakes” Voices
  9. How Deep Fakes Are Shaping Entertainment In The Post Covid Era
  10. Can Blockchain Solve Data’s Integrity Problem?
  11. Deep Fakes are Scary Good
  12. Misinformation Is About To Get Worse
  13. Deep Fake Realism
  14. Birds Are Not Real! How a lighthearted conspiracy can help diffuse damaging ones in times of fear and deep fakes
  15. This New AI Makes DeepFakes for Animation Movies! 🧑‍🎨
  16. The Internet is a tool, not the answer
  17. We’re all deep fakes now
  18. The Price of Media’s ‘Cheap Speech’
  19. Volodymyr Zelenskyy slams ‘childish’ viral deepfake, but experts warn Russia’s cyber hit jobs won’t ‘always be so bad’
  20. Amazon’s plan for Alexa to mimic anyone’s voice raises fears it will be used for deepfakes and scams
  21. 8 Ways For The Average Zillennial To Safely Consume The News
  22. Electoral Fraud in the Digital Era-How is it perpetuated?
  23. Blinken’s Gone Bonkers: Russia Is Alleviating, Not Worsening, Sri Lanka’s Crisis!
  24. Sick Perceptions
  25. Just this simple timely warning folks…
  26. The Rag Man
  27. The Media is a Weapon of the Left
  28. Laura Ingraham’s reaction to her friend Pat Cipollone’s testimony shows how Fox was complicit in Trump’s coup | Media Matters for America
  29. False: Images of ‘smiling’ newscasters announcing Shinzo Abe’s death are doctored
  30. A trial run for the EU’s co-regulatory approach: the Strengthened Code of Practice on Disinformation
  31. The United States Government is Holding us Hostage
  32. Pope asks Signis to combat toxic media filled with hate, fake news
  33. Stupid innocents
  34. President Spokesman states rumours regarding SNA troops in Tigray War are false

Published by Guestspeaker

A joint effort of several authors who find that nobody can keep standing on the sidelines and that everyone must care about what is going on in today’s world. We are a group of people who do not mind that somebody has a totally different idea, but who are willing to share ideas with others, to be active, and to help others understand how today’s decisions will influence the future. Therefore we would love to see many others act today.
