What are Field Effect's thoughts on the use of AI?

(This article is currently under review)


Our Threat and Risk Intelligence team has put a lot of time into researching the security implications of AI tools as they become more readily available. 


The primary ways in which AI seems likely to impact cyber security are twofold:

  1. Facilitating social engineering attacks by mimicking the writing style (or, in limited cases, the voice) of trusted entities, or by making phishing emails appear more credible. 
  2. Facilitating polymorphic malware by regenerating unique malware code in real time. 


Social Engineering 

We have assessed that the use of AI is very likely to significantly improve the quality of social engineering campaigns and to allow more direct, customized targeting of specific organizations. As a result, these campaigns will be more difficult for users to identify as illegitimate.


There are additional steps that can be taken to better secure your users and organization against credible phishing and similar social engineering attempts: 

  • Enforce the use of Multi-Factor Authentication (MFA) for all accounts within your organization. This significantly reduces the likelihood that a phishing attempt will successfully compromise an account even if account credentials are stolen. 
  • If you believe there may be an elevated threat to your organization, or to certain high-value target users or assets, consider implementing MFA via a hardware device. Because physical access to the hardware device is required to authenticate, this method renders account compromise due to phishing virtually impossible. 
  • Continue to implement pre-existing phishing mitigations, such as warning banners on emails sent from external sources, user training on verifying the source of requests, security solutions monitoring for malicious host and network activity, etc.


Voice Impersonation Specifically 

While there are some examples of celebrities and other public figures having their voices spoofed by AI tools, this is a much less credible threat for other individuals. AI models require a very large data set of pre-recorded voice samples for an individual before believable impersonation is possible, and while this data does exist for public figures, it generally does not for everyone else. 


It is still possible that voice impersonation will become more credible in the future, though the general social engineering mitigations recommended above should remain effective against these threats as well. 


The bottom line is that users should be trained to verify the source of a request before providing sensitive information or actioning financial requests. The best way to do this is in person, or via a means of contact that is similarly difficult to intercept or spoof, such as a phone call to the known phone number of the individual making the request. 


Malware 

Several organizations have publicly speculated that the use of AI, including the possible integration of real-time calls to AI applications such as ChatGPT, could allow malware to rewrite itself during execution in an attempt to evade antivirus and similar security software. In particular, the security research organization HYAS Labs developed a proof-of-concept malware named BlackMamba that leverages calls to the web API of a Large Language Model (LLM) for this purpose, and asserted that it was able to defeat an unnamed EDR solution: HYAS - BlackMamba: Using AI to Generate Polymorphic Malware.


Multiple security vendors have since published responses to the conclusions drawn by HYAS Labs, including Field Effect: Field Effect - The Brass Tacks of AI and Cybersecurity. While this use of AI LLMs is certainly an interesting area of research, HYAS Labs' conclusion that it provides a novel mechanism for bypassing modern EDR solutions is somewhat overstated. Put simply, polymorphic malware is not new. Even if it was not previously generated with AI tools, it has long been a common threat actor technique, and while polymorphism can help evade the file-signature-based detection used by basic antivirus software, any modern EDR solution of good quality does not rely solely on a file matching the signature of previously seen malware for detection. 
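
To illustrate why file-signature detection alone struggles with polymorphic code, consider the short Python sketch below. It is purely illustrative: the samples and the "signature database" are invented for this example and are not related to any real malware or to any vendor's detection logic. Two samples that behave identically but differ by a single trivial change produce entirely different hashes, so a hash-based signature list only flags the variant it has already seen.

    import hashlib

    # Two functionally identical samples; variant_b differs only by a harmless comment.
    variant_a = b"import os\nos.system('whoami')\n"
    variant_b = b"import os  # padding\nos.system('whoami')\n"

    # A toy "signature database" containing only the hash of the known sample.
    known_bad_hashes = {hashlib.sha256(variant_a).hexdigest()}

    for name, sample in [("variant_a", variant_a), ("variant_b", variant_b)]:
        digest = hashlib.sha256(sample).hexdigest()
        print(f"{name}: flagged_by_signature={digest in known_bad_hashes}")

    # Output: variant_a is flagged, variant_b is not, despite identical behavior.

A regenerated variant evades the hash check every time, which is exactly why behavior-focused detection, rather than file signatures alone, is the relevant defence here.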


For example, Field Effect MDR targets the behavior of malware and threat actors to identify malicious activity regardless of its software source. Specifically, Field Effect MDR monitors key 'choke points': actions without which a threat actor cannot successfully compromise a network, such as attempts to escalate account privileges, move laterally between hosts, or tamper with defensive software. There are only a limited number of ways these actions can be accomplished, and no matter how novel or frequently regenerated a strain of malware is, at some point it will be forced to take one of them, and as a result will be caught. 
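
As a rough illustration of the behavior-based idea, the Python sketch below keys detections on the action being attempted rather than on what the binary looks like. The event format and rules are entirely hypothetical and do not represent Field Effect MDR's actual telemetry or detection logic.

    # Hypothetical, simplified "choke point" rules for illustration only.
    CHOKE_POINT_RULES = {
        "privilege_escalation": lambda e: e["action"] == "token_elevation",
        "defense_tampering":    lambda e: e["action"] == "stop_service" and e["target"] in {"av", "edr"},
        "lateral_movement":     lambda e: e["action"] == "remote_exec" and e["dest_host"] != e["src_host"],
    }

    def evaluate(event):
        """Return the names of any choke-point rules this event triggers."""
        return [name for name, rule in CHOKE_POINT_RULES.items() if rule(event)]

    # A never-before-seen binary still trips the tampering rule the moment it
    # tries to stop a security service, regardless of its file signature.
    event = {"action": "stop_service", "target": "edr",
             "src_host": "ws-01", "dest_host": "ws-01"}
    print(evaluate(event))  # ['defense_tampering']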


Summary 

In conclusion, while easy access to AI and LLM tools puts some of the more advanced malware and social engineering techniques within reach of those who previously lacked the skills to use them, the techniques themselves are not new and do not pose a significantly novel threat to existing cyber security solutions. 


There is some additional risk in the form of more credible social engineering campaigns, though again, the base mechanisms by which these campaigns compromise accounts and assets have not changed, and many of the mitigations already in place to prevent successful social engineering attacks will remain effective against campaigns that leverage AI tools.
