When you’ve been in the security world long enough, you start to see old playbooks being reused with new technology. Case in point: ‘deepfake’ has become an increasingly common phrase in the news, describing digitally manipulated video used to misrepresent a person or falsify an identity. The latest example of deepfake targeting, in which a single convincing video call resulted in a 25 million USD money transfer, captured people’s attention for a number of reasons. The main news value lay in the enormous sum the attackers were able to steal by faking one video call. The technical playbook used to trick the victim was, in itself, nothing new. However, this deepfake demonstrated once again just how high a level of sophistication is possible when AI is orchestrated creatively. People generally fear a relatively new technology like AI because they cannot immediately grasp its full potential and fear the unknown. Similarly, technological advancements scare people when they seem to threaten their sense of security or their working lives, such as the prospect of losing a job to AI.
The social engineering techniques used by adversaries have continuously evolved, and these adversaries are usually faster to adopt new technologies for their own benefit than we, the defenders, are to protect potential victims. Examples abound in the not-too-distant past: in the era of modem connectivity, a common piece of malware would dial up a modem in the middle of the night and connect it to a toll number, generating enormous bills. A few years ago, a rash of malicious Android apps hijacked mobile phones to dial toll numbers as a way to make quick and easy money, which was essentially a modern form of the old modem-dialer tactic. Cryptominers harvesting the compute power of infected systems were the next step in this evolution.
The human risk factor
History offers plenty of examples of the old social engineering playbook in use. The technique of faking a senior executive’s voice by reusing publicly available audio clips to pressure employees into taking action is already fairly well known. Faking a video session showing a range of people in a live, interactive call, however, reaches a new (and scary) level of cybercriminal sophistication, and it has accordingly instilled a new and justified level of wariness around AI’s technological evolution. It is the perfect demonstration of how easily humans can be tricked or coerced into taking action, and of bad actors using this to their advantage. But the attack also highlights how a new piece of technology lets adversaries do the same tasks they have always done, only more efficiently. And the bad guys are exploiting this technological advancement fast.
Unfortunately, the general public is still not fully aware of how social engineering techniques continue to evolve. Most people do not follow security news and trust that these kinds of attacks will never happen to them. This is what makes traditional security awareness training so hard to prove effective: people simply do not believe that they, as individuals, will be targeted. So when it does happen, they are unprepared and fall prey to the social engineering attack.
In the wake of this recent attack, questions were also raised about how, if AI is really good enough to make these video scenarios look so realistic, an employee would have any chance of detecting the fake. The fact is that human beings are not machines, and they will always be a risk factor as an organisation’s first line of defence, because their level of security awareness varies no matter how good the internal training might be. Imagine someone who has had a bad night or has returned home late from a business trip or a sports event. They simply might not be as laser-focused on detecting modern social engineering techniques, or on paying attention to the details, the following day. The big challenge is that AI won’t have an off day: its targeting will remain consistent.
The technology to fight these playbooks already exists – but it is not widely used
The fact that these kinds of plays keep working shows that businesses have not yet adapted their security and organisational processes to handle them. One way to counteract deepfake videos starts at the (security) process level.
\tMy first idea is a simple one: to ensure that teleconferencing systems include a function to authenticate a logged-on user as a human being. A straightforward plug-in could do the job, employing two-factor authentication to verify an identity within Zoom or Teams, for example. Hopefully such an API would be fairly easy to develop and would be a huge step forward in preventing sniffing attacks via the phone as well.<\/p>\n
\tAdditionally, the mindset about being afraid of AI has to change. It is an amazing piece of technology, not only when it is misused. Society just needs to understand its boundaries. AI can actually be implemented to stop these sorts of modern attacks if security executives learn how to control the problem and use the technology to get ahead of the bad actors. Deception technologies already exist, and AI can be used to detect anomalies much faster and more effectively, showing its potential for good.<\/p>\n
\tFrom a more all-up security perspective, adapting a Zero Trust mentality for security can enable organisations to continually improve their security posture on the process level. Zero Trust could not only help on a connectivity level, but it could also improve security workflows, which helps to verify whether everyone in a call is authenticated against an internal directory. Zscaler\u2018s Identity Threat Detection and Response (ITDR) is already mitigating threats that are targeting a user\u2019s identity. With the help of the new service, the risk to identities is becoming quantifiable, misconfigurations are being detected, and real-time monitoring and privileged escalations are helping to prevent breaches. <\/p>\n
Finally – going back to the initial example of the successful deepfake – it is hard to believe that so much money can be transferred in a modern organisation without verification processes operating in the background. Organisations would be well advised to check the overall risk level of such processes within their own infrastructure. Solid administrative processes that reduce risk – not only in the security organisation, but in operational processes such as payment authentication as well – would raise the barriers to attack greatly. Not everything needs to be enhanced by a technological solution. Sometimes a new procedure requiring two people to sign off on a funds transfer is the step that protects the organisation from losing 25 million USD.
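The two-person sign-off described above can be sketched in a few lines. The threshold, class and method names below are illustrative assumptions, not a reference to any real payment system; the essential rules are that the requester can never approve their own transfer and that large amounts require two distinct approvers.

```python
from dataclasses import dataclass, field

# Assumed policy: transfers at or above this amount need two approvers.
APPROVAL_THRESHOLD = 10_000

@dataclass
class TransferRequest:
    amount: float
    beneficiary: str
    requested_by: str
    approvals: set = field(default_factory=set)

    def approve(self, approver: str) -> None:
        # Four-eyes principle: no self-approval, ever.
        if approver == self.requested_by:
            raise PermissionError("requester cannot approve their own transfer")
        self.approvals.add(approver)

    def is_executable(self) -> bool:
        required = 2 if self.amount >= APPROVAL_THRESHOLD else 1
        return len(self.approvals) >= required
```

Even when a deepfaked executive convinces one employee to initiate a payment, a rule like this forces the attacker to deceive a second, independent person before any money moves.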