Deepfake technology has existed for years, and it is increasingly being used to spread misinformation and impersonate real people. With the Singapore General Election approaching, this is becoming a greater concern for all of us. Imagine watching a viral video of a political candidate making a shocking statement, only to later find out that the video was completely fake. This is the power, and the danger, of deepfake technology.

Deepfakes can manipulate audio, video, and images to create highly realistic but entirely fake content. While they can be used for fun and entertainment, bad actors also exploit them for fraud, identity theft, and misinformation. In this article, we’ll explain how deepfakes work, why they are dangerous, and how you can spot and protect yourself from them.


How Do Deepfakes Work?

Deepfake technology relies on artificial intelligence (AI) and machine learning to create synthetic content that looks and sounds real. Two key techniques power deepfakes:

  • Generative Adversarial Networks (GANs) – Two neural networks trained against each other: a generator produces fake content while a discriminator tries to detect it, and each round of this competition makes the fakes more realistic.
  • Autoencoders – Neural networks that learn a compressed representation of a person's face, which can then be decoded onto someone else's face in a target video. This is the basis of classic face-swapping.

By using these methods, cybercriminals and scammers can create convincing videos that may deceive even the most skeptical viewers.
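
To make the GAN idea concrete, here is a minimal sketch of the adversarial training loop in PyTorch. It trains on a toy one-dimensional distribution rather than face images, and the model sizes, learning rates, and target distribution are all illustrative assumptions rather than a real deepfake system, but the generator-versus-discriminator dynamic is the same one that makes deepfakes progressively harder to spot.

```python
# Minimal sketch of a GAN training loop (toy 1-D data, not a real face model).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: turns random noise into a fake "sample".
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data: samples near 3.0
    fake = generator(torch.randn(64, 8))

    # 1) Train the discriminator to separate real from fake.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the (just-updated) discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

# Generated samples should have drifted toward the "real" value of ~3.0.
print(generator(torch.randn(5, 8)).detach())
```

Real face-swapping pipelines run this same loop over millions of face images instead of toy numbers, which is why the resulting fakes can fool human eyes.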


Why Are Deepfakes Dangerous?

Deepfake technology can be used for malicious purposes, including:

  • Misinformation & Fake News – Fake videos of politicians or public figures can influence opinions and mislead the public, especially during elections.
  • Fraud & Impersonation – Scammers can fake a person’s voice or appearance to steal money or sensitive information.
  • Cyberbullying & Blackmail – Deepfake videos can be used to harass or defame individuals.
  • Erosion of Trust – When people doubt even real videos, it creates confusion and makes it easier to dismiss the truth.

Real Deepfake Cases in Singapore

While deepfakes are a global concern, Singapore has already seen cases where this technology was misused. Here are some notable incidents:

1. Deepfake Nude Photos Circulated Among Students

In November 2024, students at the Singapore Sports School were found to have created and shared deepfake nude photos of their female classmates. This incident highlighted the malicious use of AI to generate non-consensual explicit content, raising concerns about consent, privacy, and cyber awareness among youths.

2. Extortion Attempts Targeting Government Officials

Over 100 public servants, including five Cabinet ministers, received extortion emails containing doctored images in late 2024. The perpetrators superimposed the officials’ faces onto explicit images and demanded cryptocurrency payments to prevent their release. The deepfake images were created using publicly available photos from sources like LinkedIn.

3. Fake Video of Prime Minister Promoting Cryptocurrency

In June 2024, a deepfake video surfaced featuring then-Senior Minister Lee Hsien Loong appearing to endorse a cryptocurrency investment platform called Quantum AI. The video was fabricated by overlaying false audio onto real footage of a speech by Mr. Lee, misleading viewers into believing he supported the scheme. He later clarified that the video was fake and expressed concern over the misuse of his image.

These cases illustrate the real dangers of deepfakes, from scams and blackmail to misinformation. They highlight the importance of vigilance and the need for robust measures to detect and counteract deepfake-related threats.


How to Identify a Deepfake

Here are some key signs that a video might be a deepfake:

1️⃣ Unnatural Facial Movements

  • Inconsistent blinking or a lack of natural eye movement (a detection sketch follows this checklist).
  • Awkward lip-syncing that doesn’t match speech.
  • Jerky or robotic head movements.

2️⃣ Strange Skin & Lighting Issues

  • Blurry or overly smooth skin with no pores or wrinkles.
  • Lighting inconsistencies (shadows in the wrong places).
  • Face edges appearing unnatural or flickering.

3️⃣ Audio & Voice Mismatches

  • A robotic or unnatural voice tone.
  • Lip movements that don’t sync with speech.
  • Odd pauses or background noise artifacts.

4️⃣ Distorted Backgrounds

  • Blurry or flickering objects around the person.
  • Strange warping near the neck, hairline, or ears.
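
For the technically inclined, the blinking cue under 1️⃣ can even be checked programmatically. Below is a rough sketch that counts blinks in a clip by tracking the eye aspect ratio (EAR) per frame, assuming OpenCV and MediaPipe are installed. The eye landmark indices follow the common FaceMesh convention, while the threshold and the input file name are illustrative assumptions, not calibrated values. A face that never blinks over a long clip is one possible red flag, not proof of a fake.

```python
# Rough sketch: count blinks via the eye aspect ratio (EAR) with MediaPipe FaceMesh.
import cv2
import mediapipe as mp

LEFT_EYE = [33, 160, 158, 133, 153, 144]    # p1..p6 around the left eye
RIGHT_EYE = [362, 385, 387, 263, 373, 380]  # p1..p6 around the right eye
EAR_THRESHOLD = 0.21                        # assumed "eye closed" cutoff

def eye_aspect_ratio(landmarks, idx):
    p = [(landmarks[i].x, landmarks[i].y) for i in idx]
    dist = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it drops sharply when the eye closes.
    return (dist(p[1], p[5]) + dist(p[2], p[4])) / (2.0 * dist(p[0], p[3]))

cap = cv2.VideoCapture("suspect_clip.mp4")  # hypothetical input file
face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1)
blinks, eye_closed, frames = 0, False, 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames += 1
    result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not result.multi_face_landmarks:
        continue
    lm = result.multi_face_landmarks[0].landmark
    ear = (eye_aspect_ratio(lm, LEFT_EYE) + eye_aspect_ratio(lm, RIGHT_EYE)) / 2
    if ear < EAR_THRESHOLD and not eye_closed:
        blinks, eye_closed = blinks + 1, True   # eye just closed: one blink
    elif ear >= EAR_THRESHOLD:
        eye_closed = False

fps = cap.get(cv2.CAP_PROP_FPS) or 30
print(f"{blinks} blinks over {frames / fps:.1f}s of video")
```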

How to Protect Yourself from Deepfake Scams

To stay safe, follow these simple steps:

  • Verify Before Trusting – Always check multiple sources before believing a video, especially during elections.
  • Be Wary of Urgent Requests – If someone asks for money or personal information over a video call, confirm their identity through another channel.
  • Enable Multi-Factor Authentication (MFA) – Prevents scammers from accessing your accounts even if they deepfake your voice (a short sketch of how one-time codes work follows this list).
  • Stay Updated on Cyber Threats – Follow cybersecurity news and educate yourself on emerging threats.
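
If you are curious why MFA holds up against deepfake-enabled impersonation, here is a small sketch of time-based one-time passwords (TOTP), the most common form of MFA, using the pyotp library. Even a scammer who clones your voice or face cannot produce the rotating six-digit code without the shared secret stored on your device.

```python
# Sketch of TOTP, the mechanism behind most authenticator-app MFA codes.
import pyotp

secret = pyotp.random_base32()   # shared secret, known only to you and the service
totp = pyotp.TOTP(secret)

code = totp.now()                # what your authenticator app would display
print("Current code:", code)
print("Server accepts it?", totp.verify(code))            # True within the ~30s window
print("Server accepts a guess?", totp.verify("000000"))   # almost certainly False
```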

Conclusion

Deepfake technology is evolving fast, making it harder to separate truth from fiction. Whether it’s during elections or in our daily lives, we must stay vigilant against misinformation and online deception. Always verify the authenticity of videos and educate others about the risks of deepfakes.

Because in today’s digital world, seeing is no longer believing.

🚨 Stay alert! If you come across a suspicious video, take a closer look and verify its source before sharing. Cross-check information with official government sources, reputable news outlets, or directly with the person or organization involved.
