
Deepfakes have added another layer of complexity to the social engineering landscape. These synthetic media, generated using artificial intelligence (AI) and machine learning, take the form of believable, realistic videos, images, audio, and text depicting events that never happened.
Even though a video is fake, its quality may be sophisticated enough to convince a casual viewer that it's authentic, making deepfakes one of the most dangerous forms of emerging content. They can disseminate disinformation and misinformation, posing risks to individuals, industries, and whole societies.
Research from Sumsub's 2023 Identity Fraud Report shows a 300% increase in deepfake content online between 2022 and 2023. This considerable increase suggests that the trend will continue in the years to come, presenting even greater difficulties to organisations and societies as the technology becomes cheaper and more widely available.
As information technology evolves, organisations must focus on training and awareness programmes to equip employees with the knowledge to detect and defend against social engineering and deepfakes.
Understanding Deepfake Terms and Technology
Deepfake has become the most common term used to describe deceptive and misleading synthetic content, but other terms are also helpful to understand when identifying what's real and what isn't.
For instance, cheapfakes are manipulated media created using readily available, inexpensive software. Shallowfakes is another term for audio-visual manipulations of existing content, such as using video editing software to slow down footage so that a speaker's words sound slurred and viewers think the high-profile person on camera is drunk. These categories of manipulated content are cheaper to produce, require less technical skill, and can be created at larger scale, making them easy to disseminate online, but they are not necessarily less threatening or damaging to organisations or individuals.
What sets deepfakes apart is that they are created using deep learning techniques. Deep learning is a subset of machine learning, which is itself a subset of AI.
Deepfake videos and images often feature people whose voice, face, or body has been digitally altered so that they appear to say something they never said or to be someone else entirely. Recent developments have also made deepfake technology far less resource intensive and consequently more accessible to the general population.
The rise of deepfakes is partly explained by the fact that people are more likely to believe what they see. Synthetic media such as manipulated photos and audio and video deepfakes can therefore be especially convincing and dangerously effective.
How Deepfakes Are Being Used
Deepfake technology is being used for a wide variety of purposes, and social engineering is only one of them. In one recent example of this particular use, a deepfake voice tricked a chief executive officer (CEO) into believing he was speaking to the chief executive of the company's parent group; the deepfake voice convinced the CEO to transfer a substantial sum of money to a third-party bank account. Other increasingly common uses include:
- Scams and hoaxes, which typically take the form of a fabricated video of a senior official admitting to criminal activity, such as financial crimes, or making false claims about an organisation's activities. The time and cost of disproving such accusations could have a major impact on the organisation's brand or public reputation.
- Automated disinformation attacks, which spread conspiracy theories and false information about political and social issues that can be very difficult to refute. These attacks can be used to manipulate societal belief systems and political elections and to dismantle the reputation of an organisation or individual.
What Can Organisations Do?
To counter the threat of deepfakes, organisations should develop strategies to bolster their reputation and address misinformation, as well as educate employees about what deepfakes are and how to spot them.
A good starting point for training employees to recognise a deepfake is teaching them what types of unusual activity and unnatural movement to watch out for in video content. For example:
- Unnatural or no eye movement
- Irregular or no eye blinking (a simple automated check for this cue is sketched after this list)
- Unnatural facial expressions and facial morphing
- Unnatural body shape
- Unnatural hair
- Abnormal skin colours
- Awkward head and body positioning
- Inconsistent facial positions
- Odd lighting, shifts in lighting between frames, misplaced shadows or discolouration of the image
- Poor lip-syncing, where mouth movements fail to match the spoken words
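Some of these cues can even be checked programmatically. The sketch below is a minimal illustration, not a production detector: it counts eye blinks in a clip using the eye aspect ratio (EAR) technique, assuming the OpenCV, dlib, and SciPy Python libraries and dlib's publicly available 68-point facial landmark model; the threshold value and file names are illustrative only.

```python
# A minimal sketch of one automated cue check: counting eye blinks via the
# eye aspect ratio (EAR), which drops sharply when an eye closes. Assumes
# OpenCV, dlib, and SciPy are installed and dlib's 68-point landmark model
# has been downloaded; thresholds and file names are illustrative.
import cv2
import dlib
from scipy.spatial import distance

def eye_aspect_ratio(eye):
    # Ratio of vertical to horizontal landmark distances for one eye.
    a = distance.euclidean(eye[1], eye[5])
    b = distance.euclidean(eye[2], eye[4])
    c = distance.euclidean(eye[0], eye[3])
    return (a + b) / (2.0 * c)

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

EAR_THRESHOLD = 0.21          # below this, treat the eye as closed
blinks, eye_closed, frames = 0, False, 0

cap = cv2.VideoCapture("suspect_clip.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames += 1
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        shape = predictor(gray, face)
        pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
        # Landmarks 36-41 and 42-47 outline the left and right eyes.
        ear = (eye_aspect_ratio(pts[36:42]) + eye_aspect_ratio(pts[42:48])) / 2.0
        if ear < EAR_THRESHOLD:
            eye_closed = True
        elif eye_closed:      # eye re-opened: count one completed blink
            blinks += 1
            eye_closed = False
cap.release()

minutes = frames / fps / 60
# A resting adult typically blinks roughly 15-20 times per minute; a far
# lower rate in talking-head footage is a cue worth a closer look.
print(f"{blinks} blinks in {minutes:.1f} min ({blinks / max(minutes, 0.01):.1f}/min)")
```

Heuristics like this are fragile, and newer deepfakes increasingly reproduce natural blinking, so automated cues should supplement the human checks above rather than replace them.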
Employers should also encourage media literacy amongst their employees by promoting reliable news sources. Building automatic checks into any process for disbursing funds will also help stop deepfake-enabled and similar frauds from succeeding; one such check is sketched below.
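As a concrete illustration, the minimal Python sketch below holds any large payment until it has been confirmed through a second, independent channel. Everything in it, including the PaymentRequest type, the threshold, and the helper names, is a hypothetical illustration of the control, not a real payments API.

```python
# A minimal sketch, not a real payments API: transfers above a threshold
# are held until confirmed through an independent channel (e.g. a call
# back to a directory-listed number), so a convincing voice or video
# alone cannot release funds. All names and values here are illustrative.
from dataclasses import dataclass

APPROVAL_THRESHOLD = 10_000              # e.g. GBP; set by treasury policy
confirmed_out_of_band: set[str] = set()  # filled by the callback workflow

@dataclass
class PaymentRequest:
    request_id: str
    payee: str
    amount: float
    requested_by: str

def record_callback_confirmation(request_id: str) -> None:
    # Call this only after a human has verified the request by phoning the
    # requester on a number taken from the company directory, never one
    # supplied in the original message.
    confirmed_out_of_band.add(request_id)

def disburse(req: PaymentRequest) -> None:
    if req.amount >= APPROVAL_THRESHOLD and req.request_id not in confirmed_out_of_band:
        raise PermissionError(f"{req.request_id} held pending out-of-band verification")
    print(f"Released {req.amount:,.2f} to {req.payee}")

# A 'CEO voice' request above the threshold is blocked until independently confirmed.
req = PaymentRequest("REQ-042", "Acme Supplies Ltd", 250_000, "group.ceo@example.com")
try:
    disburse(req)                        # blocked: no confirmation yet
except PermissionError as err:
    print(err)
record_callback_confirmation("REQ-042")
disburse(req)                            # now released
```

The design point is that the verification step sits outside the channel the request arrived on, so even a flawless deepfake of a senior executive cannot, on its own, satisfy the control.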
Whilst the odds of ending up on the wrong end of a deepfake or social engineering attack may be increasing, there are still ways to limit your business's exposure and mitigate the risks; a proactive approach to prevention and crisis management is a crucial component.
Our highly experienced team of qualified risk consultants works with clients across the UK to improve their understanding of their risk exposure and to develop bespoke programmes that meet each client's individual needs and reduce such risks. Contact us today to discuss how we can help you build a robust organisational defence against potential cyber threats.
To learn more about other social engineering threats, read Arming Your Employees Against the Future of Social Engineering: Part One.