Don’t believe your eyes? Living in a deepfake world

Seeing Used to Be Believing

Did you see the realistic video of Justin Timberlake chugging alcohol while having his mugshot taken? That was generated from his recent infamous mugshot alone – pretty impressive. Or the online advertisements misusing then Singapore Prime Minister Lee Hsien Loong’s image to sell investment opportunities? Way too far-fetched.

While these examples are obviously fake, many other pieces of AI-generated content are far harder to spot – and the results are, as we anticipated, bad.

Deepfake Technology in Numbers

Deepfakes in Singapore increased fivefold in the last year alone, with authorities warning that the technology could be misused to commit cybercrimes.

The Sumsub Identity Fraud Report 2023 also showed a tenfold increase in the number of deepfakes detected globally across all industries from 2022 to 2023.

To combat the advent of fake news influencing India’s nearly 969 million registered voters in its seven-phase elections this year, Meta has had to expand its third-party fact-checking network to include 12 partners – the tech giant’s largest network for a country thus far.

The democratization of digital tools makes it easier than ever to create deepfakes. The technology is more accessible and more sophisticated than ever – a potent combination for both good and ill.

Three Key Learnings

1. Cat and Mouse Game – Regulation Will Always Be Chasing Technology

Fraudsters are using AI tools to impersonate individuals with eerie precision and at a much wider scale, while manual detection frameworks remain imprecise and will soon be made redundant.

Articles such as “8 Steps to Detect a Deepfake”, with advice like “pay attention to blinking to see if the person is blinking enough or too much”, still leave too much room for guesswork and uncertainty. And it is only a matter of time before unnatural blinking is fixed too.

We need foolproof, technology-based detection systems, and international consensus on regulations and policies through the collaboration of governments, tech giants and civil society – and we need them to catch up fast.

2. Confirmation Bias – We See What We Want to See

At the heart of the misinformation problem is our own confirmation bias – the tendency to search for and favor information that aligns with our deep-rooted beliefs or values. This happens largely at an unconscious level, which makes it extremely difficult to undo. “Birds of a feather flock together” – we like people and things that are familiar and similar to us, and that pits us against those with differing views.

3. Education is Key – Zero-Trust Approach

We need to keep learning and educating others about the risks of seeing and believing. Media literacy and critical thinking will help us combat AI-empowered social engineering and manipulation attacks. In cybersecurity, the “zero-trust” approach means not trusting anything by default and instead verifying everything. When applied to humans consuming information online, it calls for a healthy dose of skepticism and constant verification.

Keep growing

Seeing used to be the bedrock of reality, but in this digital age it has become a question mark. Unfortunately, there is no silver bullet to eliminate the threat of deepfakes, so we need to remain vigilant in the face of digital deception as the boundaries between the physical and virtual worlds continue to blur.