When you’ve been in the security world long enough, you start to see old playbooks being reused with new technology. Case in point: ‘deepfake’ has become an increasingly common phrase in the news, describing digitally manipulated video used to misrepresent a person or falsify an identity. The latest example of deepfake targeting, in which a single successful video call resulted in a 25 million USD money transfer, captured people’s attention for a number of reasons. The main news value lay in the enormous sum the attackers were able to steal by faking one video call. In itself, the technical playbook used to trick the victim was nothing new. However, this example demonstrated once again just how much sophistication is possible when AI is orchestrated creatively. People generally fear a relatively new technology like AI because they can’t immediately grasp its full potential, and the unknown itself is unsettling. Technological advances also scare people when they seem to threaten their sense of security or their working lives, such as the prospect of losing a job to AI.
The social engineering techniques used by adversaries have continuously evolved, and these adversaries are usually faster to adopt new technologies for their benefit than we, the defenders, are to protect their victims. You can see examples of this in the not-too-distant past: in the days of modem connectivity, a common piece of malware would dial up a modem in the middle of the night and connect it to a toll number, leading to enormous bills. A few years ago, a rash of malicious Android apps hijacked mobile phones to dial toll numbers as a way to make quick and easy money – essentially a modern form of the old modem-dialler tactic. Cryptominers harvesting the compute power of infected systems were the next step in this evolution.
The human risk factor
History has shown us a number of examples of the old social engineering playbook in use. The technique of faking a senior executive’s voice from publicly available audio clips to pressure employees into taking action is already fairly well known. Faking a video session showing a range of people in a live, interactive call, however, reaches a new (and scary) level of cybercriminal sophistication, and it has sown a new and understandable level of fear around AI’s technological evolution. It is the perfect demonstration of how easily humans can be tricked or coerced into taking action – and of bad actors using this to their advantage. But the attack also highlights how a new piece of technology lets adversaries do the same tasks they have always done, only more efficiently. And the bad guys are adopting this technological advancement fast.
Unfortunately, the general public is still not fully aware of how social engineering techniques continue to evolve. Most people don’t follow security news and trust that these kinds of attacks will never happen to them. This is what makes the effectiveness of traditional security awareness training hard to prove: people don’t believe that they, as individuals, will be targeted. So when it does happen, they are unprepared and fall prey to the social engineering attack.
In the wake of this recent attack, questions were also raised about whether – if AI really is good enough to make these video scenarios look so realistic – an employee has any chance of detecting the fake. The fact is that human beings are not machines, and they will always be a risk factor as an organisation’s first line of defence because their level of security awareness varies (no matter how good the internal training might be). Imagine someone who has had a bad night or returns home late from a business trip or a sports event. They simply might not be as laser-focused on spotting modern social engineering techniques, or on the details, the following day. The big challenge is that AI won’t have an off day – its targeting will remain consistent.
The technology to fight these playbooks already exists – but it is not widely used
The fact that these kinds of plays keep working shows that businesses have not yet adapted their security and organisational processes to handle them. One way to counteract deepfake videos starts at the (security) process level.
My first idea is a simple one: to ensure that teleconferencing systems include a function to authenticate a logged-on user as a human being. A straightforward plug-in could do the job, employing two-factor authentication to verify an identity within Zoom or Teams, for example. Hopefully such an API would be fairly easy to develop and would be a huge step forward in preventing sniffing attacks via the phone as well.
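As a rough illustration of the idea, here is a minimal sketch in Python, assuming the pyotp library for time-based one-time codes. The enrolment store and the join hook are illustrative assumptions, not real Zoom or Teams plug-in APIs; a production version would verify against the corporate identity provider instead.

```python
# Minimal sketch of a second-factor check for call participants, assuming pyotp
# for time-based one-time codes. The enrolment store and the join hook below are
# illustrative placeholders, not real Zoom or Teams plug-in APIs.
import pyotp

# In production, each employee's TOTP secret would live in the corporate
# identity provider, not in a local dictionary.
enrolled_secrets = {
    "alice@example.com": pyotp.random_base32(),
}

def verify_participant(email: str, submitted_code: str) -> bool:
    """Return True only if the participant proves possession of their second factor."""
    secret = enrolled_secrets.get(email)
    if secret is None:
        return False  # unknown identity: treat as unverified
    return pyotp.TOTP(secret).verify(submitted_code)

def on_participant_join(email: str, submitted_code: str) -> None:
    # Hypothetical hook the conferencing platform would call when someone joins.
    if verify_participant(email, submitted_code):
        print(f"{email} verified via second factor")
    else:
        print(f"{email} could NOT be verified - flag this call for review")
```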
Additionally, the fearful mindset around AI has to change. It is an amazing piece of technology, not only when it is misused; society just needs to understand its boundaries. AI can actually be deployed to stop these kinds of modern attacks if security executives learn how to control the problem and use the technology to get ahead of the bad actors. Deception technologies already exist, and AI can be used to detect anomalies much faster and more effectively, showing its potential for good.
From a broader security perspective, adopting a Zero Trust mentality can enable organisations to continually improve their security posture at the process level. Zero Trust not only helps at the connectivity level; it can also improve security workflows, for example by verifying that everyone in a call is authenticated against an internal directory. Zscaler’s Identity Threat Detection and Response (ITDR) is already mitigating threats that target a user’s identity. With the help of this service, the risk to identities becomes quantifiable, misconfigurations are detected, and real-time monitoring of privilege escalations helps to prevent breaches.
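As a simple illustration of that workflow idea, the sketch below cross-checks a call roster against an internal directory. The in-memory set stands in for what would really be a lookup against an IdP or LDAP/SCIM service, all names are hypothetical, and it is not a description of how any vendor product works.

```python
# Minimal sketch of a directory cross-check for call participants. The in-memory
# set stands in for a real lookup against an IdP or LDAP/SCIM service; all names
# are hypothetical and this is not a description of any vendor product.
from typing import Iterable, List

internal_directory = {"alice@example.com", "bob@example.com"}

def unverified_participants(participants: Iterable[str]) -> List[str]:
    """Return the participants whose identity is not found in the internal directory."""
    return [p for p in participants if p.lower() not in internal_directory]

call_roster = ["alice@example.com", "cfo-lookalike@freemail.example"]
suspects = unverified_participants(call_roster)
if suspects:
    print("Participants not authenticated against the directory:", suspects)
```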
Finally – going back to the initial example of the successful deepfake – it is hard to believe that so much money can be transferred in a modern organisation without verification processes operating in the background. Organisations would be well advised to check the overall risk level of such processes within their own infrastructure. Putting solid administrative processes in place to reduce risk – not only in the security organisation, but for operational processes like payment authentication as well – would raise the barrier to an attack considerably. Not everything needs a technological solution. Sometimes a new procedure in which two people must sign off on a funds transfer is the step that protects the organisation from losing 25 million USD.
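A minimal sketch of such a two-person sign-off rule might look like the following; the threshold, names, and class are illustrative assumptions rather than a prescribed payment workflow.

```python
# Minimal sketch of a two-person sign-off rule for large transfers. The threshold
# and class are illustrative assumptions, not a prescribed payment workflow.
from dataclasses import dataclass, field

APPROVAL_THRESHOLD = 10_000  # transfers above this amount need two distinct approvers

@dataclass
class TransferRequest:
    amount: float
    beneficiary: str
    approvers: set = field(default_factory=set)

    def approve(self, employee_id: str) -> None:
        self.approvers.add(employee_id)

    def can_execute(self) -> bool:
        if self.amount <= APPROVAL_THRESHOLD:
            return len(self.approvers) >= 1
        # High-value transfers require two different people to sign off.
        return len(self.approvers) >= 2

request = TransferRequest(amount=25_000_000, beneficiary="unfamiliar offshore account")
request.approve("finance_clerk_01")
print(request.can_execute())  # False until a second, independent approver signs off
```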