
What Generative AI Means For Cybersecurity: Risk & Reward

In recent years, generative artificial intelligence (AI), especially Large Language Models (LLMs) like ChatGPT, has revolutionized the fields of AI and natural language processing. From automating customer support to creating realistic chatbots, we rely on AI much more than many of us probably realize.

The AI hype train definitely reached full steam in the last several months, especially for cybersecurity use cases, with the release of a wave of new security-focused AI tools.

Unfortunately, almost all of this attention focuses on the potential negative impacts of AI, while ignoring beneficial use cases to help organizations defend their networks. As we know, disasters almost always make for better primetime news viewing than cute puppies, and most of these articles have big “if it bleeds, it leads” energy. But is all this negative hype warranted?

What Do We Mean by Generative AI?

We asked GPT-4 to define “generative AI”, and this is what it had to say:

"Generative AI is a type of artificial intelligence that can make new stuff, like writing, pictures, or tunes, by learning from existing examples and making up its own creations based on what it's learned. It uses fancy techniques to make things that look real and original."

Sounds about right! At SURGe, we’re very excited about the possibilities these new AI tools offer defenders. That’s why we wrote this article: to explore both the benefits and the potential risks of generative AI for cybersecurity, and to take stock of where the technology stands today.
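
For the curious, reproducing that experiment takes only a few lines. Here’s a minimal sketch, assuming the official openai Python package, GPT-4 API access, and an OPENAI_API_KEY environment variable (the exact interface varies between package versions):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "user", "content": "Define 'generative AI' in plain language."}
        ],
    )
    print(response.choices[0].message.content)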

(Read our generative AI introduction.)

AI Trends & Challenges for Cybersecurity

As we mentioned above, there’s already been quite a lot written about this subject. Let’s start with what people are already saying, worrying about, and (sometimes) even doing when it comes to AI for cybersecurity.

Taking Over Jobs

A common theme in reports about generative AI is that it will impact some percentage of “insert job name here” over the next year. You could have read similar stories about scribes’ fears when Johannes Gutenberg invented the printing press (though Bi Sheng, who built movable type centuries earlier, might dispute the credit). No longer would scribes spend months copying books by hand.

Will ChatGPT take over security awareness writers’ jobs just as the printing press put thousands of monks out of business? Will it remove the need for detection engineers, just as the Jacquard loom once put large numbers of weavers out of work? Maybe, maybe not.

We often describe the interplay between threat actors and defenders as an “arms race,” and staying ahead in that race requires innovation. AI is great at augmenting, automating, and scaling existing capabilities, but it can’t (yet?) create entirely novel techniques. While some types of cybersecurity jobs may be reduced or even eliminated, humans will always be the driving factors behind good security, and we know how in-demand humans are in our industry.

But blog writers, beware: ChatGPT does quite a good job of creating 800- to 1,200-word diatribes on topics most people won’t read anyway.

Writing Malware

When most new technologies arrive, people find ways to use them for all sorts of purposes, and not all of those purposes are benevolent. ChatGPT was no different.

A number of people have been able to circumvent the built-in safeguards and get ChatGPT to write malware for them. For example, CyberArk reported that they got ChatGPT to create polymorphic malware: malware that continually mutates its own code to evade signature-based defenses. At first, ChatGPT refused to create the malware, but with more inventive prompting, it relented and produced the code.
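
For readers unfamiliar with the term, here’s a harmless toy sketch of what “polymorphic” means in practice: the same behavior re-encoded into different bytes on every generation, which is exactly what defeats naive signature matching. Everything here is illustrative, and the “payload” is just a string:

    import random

    # Toy illustration only: "polymorphic" means the bytes change on every
    # generation while the behavior stays the same, so a static signature
    # never matches twice. The "payload" here is a harmless string.
    PAYLOAD = b"hello from a perfectly benign payload"

    def generate_variant(payload: bytes) -> bytes:
        key = random.randrange(1, 256)             # fresh key per variant
        encoded = bytes(b ^ key for b in payload)  # simple XOR re-encoding
        # A real polymorphic engine would also mutate its decoder stub;
        # we just prepend the key so the variant can be decoded.
        return bytes([key]) + encoded

    def decode_variant(variant: bytes) -> bytes:
        key, encoded = variant[0], variant[1:]
        return bytes(b ^ key for b in encoded)

    v1, v2 = generate_variant(PAYLOAD), generate_variant(PAYLOAD)
    print(v1 != v2)                       # almost always True: different bytes
    print(decode_variant(v1) == PAYLOAD)  # True: same underlying payload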

Using AI to create new malware is undoubtedly concerning, but there are some important points to keep in mind:

You don’t need AI to create new malware, even as a non-programmer. Humans are already very prolific at creating malware, with thousands of new samples appearing every day, and it’s unclear whether the volume of AI-generated malware will ever approach those numbers. But we know that the bad guys will use these tools, so defenders need to understand how they can be used, and should use them to augment their own skill sets.

The idea of having a computer write the malware for you isn’t new, either. Over 20 years ago, the venerable Poison Ivy RAT included a generator that gave even neophyte threat actors an easy point-and-click method for customizing both the server and client sides of the malware.

Just because an AI created the malware doesn’t mean it’s any good. There are many factors to consider when judging the quality of a specific piece of malware. For our purposes, probably the most important are:

  • Whether the code runs. AI-generated code often doesn’t even compile or parse correctly at first (see the sketch after this list).
  • Whether it does what it’s supposed to do. Substantial testing will still be required to ensure the code works across different OS versions and user environments.
  • Whether it’s able to bypass AV and other security measures already in place.
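
To make the first point concrete, here’s a toy sketch of syntax-checking generated Python before ever running it; the GENERATED snippet is a hypothetical first-pass LLM response, not output from any real model:

    import ast

    # Hypothetical snippet, typical of a first-pass LLM response:
    # note the missing colon after the function signature.
    GENERATED = "def persist(path)\n    open(path, 'w').write('data')\n"

    try:
        ast.parse(GENERATED)  # syntax check only; nothing is executed
        print("Generated code parses cleanly")
    except SyntaxError as err:
        print(f"Generated code doesn't even parse: {err}")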

Having AI “write malware” and then use that malware successfully in the wild are two very different things.

Tricking People With More Sophisticated Spear Phishing

Adversaries are using generative AI to help them craft more convincing phishing emails. There have even been reports of attackers using AI tools to create realistic deepfake voices and videos to steal money and even gain employment. Since the 2022 Verizon Data Breach Investigations Report (DBIR) shows that phishing is still one of the top ways attackers gain an initial foothold in a network, it makes sense to pay very close attention to AI advances in this area.

Of course, the stereotypical phishing email is riddled with spelling and grammatical errors, often to the point where it would make your five-year-old look like a Pulitzer Prize winner. However, this is no longer a reliable phishing indicator. In fact, the general level of sophistication in phishing is rising, and this includes the message bodies, which are now often pitch-perfect.

Moreover, there’s evidence to suggest that some spelling and grammatical errors may be intentional, as a way to ensure that the attacker only works with recipients who are gullible enough to make it worth their time. If this is the case, attackers using AI wouldn’t want to produce perfect messages anyway.

In general, AI isn’t going to suddenly increase the sophistication of phishing attacks because they are already quite sophisticated. However, there are areas where it can help the phishers. For example, within a matter of moments, attackers using an AI-enabled search engine could gather in-depth information about their target and perfectly craft an entire campaign. And those broken English emails you’d typically get from phishers will most likely disappear with tools like ChatGPT at the adversary’s disposal.

There are legitimate worries when it comes to AI phishing, but they may not be what you think.
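
In the meantime, defenders can point the same models back at the problem. Here’s a minimal sketch of LLM-assisted phishing triage, assuming the openai Python package and GPT-4 API access; SUSPECT_EMAIL is a hypothetical message, and any real deployment would need to consider data handling and prompt injection:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

    # Hypothetical message pulled from an abuse mailbox for triage.
    SUSPECT_EMAIL = (
        "Dear valued customer, your account has been limited. "
        "Click here within 24 hours to verify your payment details "
        "or your account will be permanently closed."
    )

    # Ask for a structured judgment rather than free-form prose so the
    # output is easier to consume in a triage workflow.
    prompt = (
        "You are assisting a SOC analyst. Rate this email's likelihood of "
        "being phishing from 0-10 and list the indicators you relied on:\n\n"
        + SUSPECT_EMAIL
    )

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)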

Rise of the Machines: Experiments in Generative AI

Generative AI has definitely taken the world by storm, but have the machines risen? Is it time to get Sarah Connor involved? We don’t think so. Do we need to know more about both offensive and defensive uses for these technologies? Of course we do!

That’s why we’re excited to introduce a new blog series, “Rise of the Machines.” In this series, we’ll highlight our experiments using generative AI tools to help cybersecurity pros do their jobs better and faster. We’ll document our findings in our trademarked (not really) practical, fun, and zero FUD format.

And we want to emphasize that these will be experiments, not polished solutions. Some of them may work well, and some may fail spectacularly. You won’t know which is which, so you’ll just have to read the series to find out! But whether we succeed or fail, we’ll learn together how best to use these new tools to help keep our networks safe. And just to allay any fears: all of our experiments are done in a safe environment, without using any sensitive information.

We hope you’ll not only enjoy following along with our AI explorations but also get some inspiration for how you can make sure the machines are on your side, not rising up against you.


As always, security at Splunk is a family business. Credit to authors and collaborators: Shannon Davis, David Bianco, Ryan Kovar.

Posted by Shannon Davis

Security practitioner, Melbourne, Australia via Seattle, USA.
