Understanding Deepfake Technology: Why It’s A Risk To Your Business


Deepfake technology, whose outputs are colloquially known as ‘deepfakes’, has been getting a lot of attention in the mainstream news media in recent years.

But it’s no longer just a harmless entertainment medium: it poses a significant threat to your business when used for nefarious purposes.

A recent webinar from Barracuda MSP highlighted how the AI revolution has made deepfake technology a growing concern in today’s threat landscape.

As this is a highly topical subject, here are some of the highlights from the hour-long presentation.

Divider

Introduction

The webinar, Discerning Reality: Protecting Against Deepfakes, was held on 16th April 2024.

Presented by: Ilya Gaidarov, Senior Product Marketing Manager for Barracuda.

It kicked off with a bird’s-eye view of generative AI (GenAI), then explained a bit about deepfakes and how they’re being used by threat actors in the wild.

Ilya then took a look at the regulatory landscape before explaining ways we can defend against the threat of deepfakes.

 

Discerning Reality: Protecting Against Deepfakes On-Demand Webinar

Barracuda MSP have the full webinar available for you to watch on demand.

The full webinar replay is available here.

 


Divider

Generative AI – A Bird’s Eye View

To understand AI as it is today, there are a few terms we need to define.

Generative AI – GenAI learns from patterns in its training data and predicts the expected output based on what’s being asked of it.

Neural Networks – Computational frameworks that mimic the human brain’s structure and function through a system of nodes.

Multimodality – Ingests data across multiple mediums (text, speech, images and video), then creates output in various mediums too.

Hallucination – When GenAI creates inaccurate, misleading or false information. This occurs because of inaccuracies or bias in the training data.

Foundation Models – A deep learning architecture that takes multimodal data from multiple mediums and examines it at a component level to cross-analyse it more effectively.

A foundation model is continually fed information so that, over time, its structure changes and adapts. This allows it to qualify the output it creates more effectively.
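
To make the ‘system of nodes’ idea concrete, here’s a minimal sketch in Python of a single neural network node; the weights and inputs are illustrative only:

    import math

    # One node: a weighted sum of its inputs, squashed by an
    # activation function (here, the sigmoid).
    def node(inputs, weights, bias):
        weighted_sum = sum(i * w for i, w in zip(inputs, weights)) + bias
        return 1 / (1 + math.exp(-weighted_sum))

    # A node with three inputs (dummy values).
    print(node([0.5, 0.2, 0.9], [0.4, -0.6, 0.8], bias=0.1))

Networks chain thousands of these nodes into layers, and training adjusts the weights until the output matches the patterns in the training data.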

Output tasks include:

  • Information extraction
  • Sentiment analysis
  • Image captioning
  • Summarisation
  • Instruction interpretation and following
  • Object recognition
  • Question and answer
  • Code analysis and generation

Prompt Engineering

The better and more precise the prompts we use in GenAI, the better suited to our purpose the data provided becomes.
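
As a simple illustration, compare a vague prompt with a precise one. The ask_llm function below is purely hypothetical, standing in for whichever LLM chatbot or API you use:

    # ask_llm is a hypothetical stand-in for a real LLM client call.
    def ask_llm(prompt: str) -> str: ...

    # A vague prompt leaves the model guessing at scope, audience and format.
    vague = "Tell me about deepfakes."

    # A precise prompt constrains all three, so the output fits its purpose.
    precise = (
        "In 150 words, explain to a non-technical small-business owner "
        "what deepfakes are, and list three warning signs of a deepfake "
        "video call as bullet points."
    )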

Chatbots are the most common way end-users interface with large language models (LLMs). However, the landscape is vast in complexity, and it’s still growing.

The market for GenAI apps and software includes fine-tuned specialist models designed for specific tasks, and AI-integrated software for productivity, collaboration and more.

AI is a Game Changer for Attackers

GenAI has made it easier for cyber attackers to launch their attacks in a number of ways:

  • Automation has increased the sheer volume of attacks
  • Simplified personalisation of attacks
  • Made targeting easier
  • Improved code generation to exploit vulnerabilities

Divider

Deepfakes – What Are They?

Deepfakes are synthetic media: highly realistic video or audio created using AI, making people appear to say or do things they never did.

This is done through:

  • Digitally synthesised voice
  • Attribute manipulation
  • Lip syncing
  • Face reenactment
  • Entire face synthesis

The first example of a deepfake appeared on Reddit in 2017. Early deepfakes were made for amusement or advertising, but there’s a real risk behind them too.

The number of deepfake videos released online is increasing by 900% annually.

The Risks Around Deepfakes

Societal Risks

  • Propaganda
  • Electoral Interference

Individual Risks

  • Extortion
  • Reputational Damage

Business Risks

  • Scams
  • Phishing
  • Identity Theft


Divider

Examples of Attacks Using Deepfakes

Imposter Phone Scam

An elderly couple received a phone call from a lawyer claiming that their son had been involved in a car accident which resulted in the death of a diplomat. The lawyer informed them their son was in jail and needed money for bail, and put their son on the line to reassure them.

Panicking, the couple withdrew $15,000 from their savings and paid it into a bitcoin account. However, the whole thing was a scam: the son’s voice had been synthetically reproduced! AI tools need only between 5 and 15 seconds of audio of someone’s voice to accurately clone it.

In 2022 over $11m was stolen through thousands of imposter scams.

Impersonation Video Scam

A multinational company lost $25 million in a scam after employees at its Hong Kong branch were fooled by deepfake technology.

One incident involved a deepfake of its Chief Financial Officer ordering money transfers during a video conference call. During this call, everyone present was a deepfake, except for the victim.

The scammers used open-source intelligence (OSINT) to create the video deepfakes. For those of you unfamiliar with OSINT, this thread on Stack Exchange explains it in more detail.

State Actors

In March 2024, an Islamic State (IS) group carried out an attack on a concert hall in Moscow in which 140 people lost their lives and many more were injured.

Shortly after this, Russian State television aired a video of a top Ukrainian security official seemingly taking credit for this attack. But it was a deepfake, which combined footage from two recent videos with AI-generated audio.

It can be difficult to determine whether audio is real or a deepfake.

Recent research indicates that around 1 in 4 people can’t distinguish deepfake audio from the real thing. And while GenAI continues to improve, we can expect to see similar rates for video detection in future.


Divider

The Regulatory Landscape for Deepfake Technology

When it comes to regulation, the UK and the EU are slightly ahead of the US in terms of legislative protection against deepfakes.

European Regulation

  • UK Online Safety Act 2023 – made it illegal to share explicit images or videos that have been digitally manipulated (but only where they have intentionally or recklessly caused distress to an individual).
  • EU AI Act of 2024 – anyone who creates or disseminates a deepfake must disclose its artificial origin and provide information about the techniques used.

US Federal Regulation

While there are no federal laws prohibiting deepfakes at present, there are some proposals currently under review.

  • No AI FRAUD Act – Establishes a framework to protect people against AI-generated fakes by making it illegal to create a ‘digital depiction’ (including the appearance and voice) of any person, living or dead, without permission.
  • Nurture Originals, Foster Art and Keep Entertainment Safe (NO FAKES) Act – Established to protect the voice and visual likeness of performers.
  • Disrupt Explicit Forged Images and Non-Consensual Edits Act – Allows people to sue over faked pornographic images of themselves.

In terms of state regulation, fewer than half of US states have some form of regulation in place covering sexual deepfakes or the use of AI in elections.


Divider

How Defenders Can Respond

As their technology partner, your clients will be looking to you to mitigate the risks that deepfake technology poses to them and their businesses.

1. AI-Based Approaches

Deep learning AI models are increasingly being tested in the field to spot deepfakes; a simple sketch of the idea follows the examples below.

  • Example 1: Deep learning models trained to detect feature anomalies (such as eyes, ears, hands, etc.)
  • Example 2: Deep learning models trained to detect biological signs (blood flow, heart rate, etc.)
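
Here’s a minimal, hypothetical sketch in Python (using PyTorch) of the kind of binary classifier these approaches rely on. Real detectors are far larger and are trained on labelled examples of genuine and manipulated faces; the architecture below is an assumption for illustration, not any vendor’s actual model:

    import torch
    import torch.nn as nn

    # A tiny convolutional network that scores a face crop as real or fake.
    class DeepfakeDetector(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(32, 1)  # one logit: fake vs real

        def forward(self, x):
            x = self.features(x).flatten(1)
            return self.classifier(x)

    model = DeepfakeDetector()
    frame = torch.randn(1, 3, 224, 224)      # one face crop (dummy data)
    prob_fake = torch.sigmoid(model(frame))  # probability the frame is fake

What differentiates the two examples above is the training data: one model learns feature anomalies, the other learns biological signals.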

2. Defence in Depth Best Practice

Ensure you have a varied, layered security strategy in place to protect your assets and those of your clients.

Security Awareness Training – Encourage a methodology of verifying audio/video requests before trusting them, especially if the requests are abnormal or suspicious. Make sure that verification is done using a different channel (phone, email, Slack, etc.); a simple sketch of such a rule follows.
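
As an illustration, the Python sketch below encodes that out-of-band rule. The keywords and channel names are assumptions for the example, not a recommended policy:

    # Keywords that often mark abnormal requests (illustrative only).
    SUSPICIOUS_KEYWORDS = {"urgent transfer", "gift cards", "wire", "bail"}

    # Flag requests that arrive via audio/video and look abnormal, so that
    # staff confirm them on a *different* channel before acting.
    def requires_out_of_band_check(request_text: str, channel: str) -> bool:
        abnormal = any(k in request_text.lower() for k in SUSPICIOUS_KEYWORDS)
        return channel in {"phone", "video_call", "voicemail"} and abnormal

    if requires_out_of_band_check("Urgent transfer needed before 5pm", "video_call"):
        print("Verify via a separate channel (e.g. a known phone number) first.")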

Zero Trust – Going beyond multi-factor authentication (MFA), offering continuous verification of user and device identity.

Cybersecurity-as-a-Service – For example, 24/7 proactive monitoring and threat hunting.

Divider

Fighting AI with AI

One of the best ways to ensure you keep up with ever-developing deepfake technology is to choose security vendors with experience in leveraging AI and innovating with it in their solutions.

Barracuda have been integrating natural language processing into their anomaly-detecting email security solutions since 2017.

Email is still the number one vector for attackers to breach your network. 91% of all cyberattacks start with email.

Two downloadable resources for managed service providers (MSPs) are available from Barracuda MSP’s website.


Divider

Questions and Answers

The session finished with a quick Q&A to close.

Q1: How far behind are defences when it comes to accurately spotting a deepfake at the moment?

A1: The technology is only around 40% accurate at spotting deepfakes at the moment, so it’s currently behind. That’s why it’s always good advice to verify anything you’re not sure of using a different channel.

Q2: How can you prove the effectiveness of security technology at managing risk to decision makers?

A2: Email threat scanners should flag any suspicious activity, whether that’s emails from unusual geolocations, suspicious links or attachments, or requests for account details or confidential information.
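
As a rough illustration of the kinds of signals such scanners combine, here’s a hypothetical Python sketch; real products use many more signals, plus machine learning models:

    # Illustrative-only signals; the geolocation allow-list is an assumption.
    def flag_email(sender_geo, has_suspicious_link, asks_for_credentials):
        signals = {
            "unusual_geolocation": sender_geo not in {"GB", "US"},
            "suspicious_link_or_attachment": has_suspicious_link,
            "requests_confidential_info": asks_for_credentials,
        }
        return [name for name, hit in signals.items() if hit]

    # Flags to surface to the admin for this example email.
    print(flag_email("RU", True, False))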

Q3: Are there any additional training modules that Barracuda provide in how to better spot deepfakes?

A3: Not at the moment, but there will be something to cover deepfake technology in due course.

Q4: Is the distortion you sometimes see in video deepfakes intentional, as a tell-tale sign that the video you’re watching is a deepfake?

A4: No, it’s not intentional. It’s just that the sophistication of the technology isn’t quite there yet, though that could change in the future.

Divider

Tubblog’s Experiment in AI Voice Synthesis

We experimented with AI in a recent TubbTalk Bonusode we recorded. The audio was created using an AI tool called Descript.

If you want to listen to the result, you can find the Bonusode here.

 

Understanding Deepfake Technology Conclusion

Deepfake technology is being used more and more, and not just as a means of entertainment or clever advertising.

It’s now a tool for cyber attackers to use to get into your systems, steal your data and extort money from you.

Audio deepfakes are already difficult to distinguish from the real thing, and soon video may catch up too.

While many people are prepared for suspiciously-worded emails, are your clients as prepared for vishing attacks that sound like their CFO ordering an emergency bank transfer?

Have you seen a deepfake that one of your clients might well have fallen for? Or are you concerned that sophistication in this field, thanks to AI, could give you sleepless nights as an MSP in the future?

We’d love to hear your thoughts in the comments!
