AI vs AI: New algorithm automatically bypasses your best cybersecurity defenses



Researchers have created an AI that tweaks malware code, and it slipped past an anti-malware AI undetected. Is machine learning ready to face down cybersecurity threats?

At DEF CON this past weekend, Hyrum Anderson of security firm Endgame demonstrated an alarming AI application: modifying malware to defeat machine learning antivirus software.

The core premise of Endgame’s experiment was that every AI has blind spots, and those blind spots can be exploited by another AI. By hammering the antivirus software with slightly modified malware samples, the researchers were able to bypass its detection 16 percent of the time.

Sixteen percent may not seem like much, but if an AI can mutate malware to the point where it is still functional yet undetectable, it doesn’t matter how much of it gets through: one infection is enough when it can’t be found.

AI at war with itself

Tricking an AI into misidentifying objects—or code—is surprisingly simple. Google researchers were able to fool an image recognition AI by applying an “imperceptibly small” perturbation, derived from the network’s own gradients, to an image of a panda. The change, invisible to the human eye, caused the AI to misclassify the image, and to be more confident in its mistake than it had been in its original, correct assessment that the image showed a panda.

Image: Google
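
To make the trick concrete, here is a minimal sketch of the fast gradient sign method behind that panda example, written in PyTorch against a toy stand-in classifier. The model, the random input, and the epsilon value are illustrative assumptions, not Google's actual setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for an image classifier: 3x32x32 input, 10 classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # pretend this is the panda photo
true_label = torch.tensor([0])

# Compute the gradient of the classification loss with respect to the input pixels.
loss = loss_fn(model(image), true_label)
loss.backward()

# Nudge every pixel one tiny step in the direction that increases the loss.
epsilon = 0.01
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

The same gradient information that makes a network trainable is what lets an attacker steer it toward a confident wrong answer.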

The goal of anti-malware AI is a little different from recognizing images, but it relies on essentially the same kind of machine learning. The model is fed labeled samples so it can learn what constitutes malicious code, determines for itself which patterns to look for, and can then examine something it has never seen before and pick out potential malware.
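
As a rough illustration of that training process, the sketch below fits a scikit-learn classifier on made-up feature vectors and then scores an unseen sample. The features, sample counts, and model choice are assumptions for the sake of the example, not Endgame's or any vendor's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Pretend feature vectors for 1,000 known-benign and 1,000 known-malicious files.
# Real engines derive features such as byte histograms, imported API names,
# and header fields from the binaries themselves.
benign = rng.normal(loc=0.0, size=(1000, 64))
malicious = rng.normal(loc=0.7, size=(1000, 64))
X = np.vstack([benign, malicious])
y = np.array([0] * 1000 + [1] * 1000)

clf = GradientBoostingClassifier().fit(X, y)

# Scoring a file the model has never seen yields a confidence that it is malicious,
# which is exactly the signal Endgame's attacking AI later learned to drive down.
unseen = rng.normal(loc=0.7, size=(1, 64))
print("probability malicious:", round(clf.predict_proba(unseen)[0, 1], 3))
```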

Endgame’s thinking at this point? If an AI can learn to recognize potential malware, another AI should be able to learn by watching the anti-malware AI make its decisions and use that knowledge to develop the least detectable malware possible.

So Endgame researchers fired up their own malware-modifying AI, built on OpenAI’s Gym toolkit, and did just that.
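
What such an agent's training environment might look like is sketched below in the style of OpenAI Gym's reset()/step() interface. Every name here, from the action list to the apply_tweak helper, is a hypothetical placeholder rather than Endgame's released code.

```python
import random

# Hypothetical, functionality-preserving tweaks the agent can choose between.
ACTIONS = ["append_benign_bytes", "add_unused_import", "rename_section", "pack_payload"]

def apply_tweak(sample: bytes, action: str) -> bytes:
    """Stub for illustration; a real version rewrites the binary without breaking it."""
    return sample + action.encode()

class MalwareEvasionEnv:
    """Mirrors the reset()/step() convention of an OpenAI Gym environment."""

    def __init__(self, sample: bytes, scan):
        self.original = sample   # raw bytes of the malware sample
        self.scan = scan         # callable: bytes -> confidence the file is malicious
        self.sample = sample

    def reset(self):
        self.sample = self.original
        return self.scan(self.sample)        # observation: the defender's confidence

    def step(self, action_index: int):
        self.sample = apply_tweak(self.sample, ACTIONS[action_index])
        score = self.scan(self.sample)
        reward = 1.0 - score                 # less detection, more reward
        done = score < 0.5                   # classifier no longer flags the file
        return score, reward, done, {}

# A random-action rollout against a stub scanner, just to show the loop shape.
env = MalwareEvasionEnv(b"MZ...payload...", scan=lambda s: max(0.0, 0.9 - 0.01 * len(s)))
obs = env.reset()
for _ in range(20):
    obs, reward, done, _ = env.step(random.randrange(len(ACTIONS)))
    if done:
        break
```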

The malware AI threw code at its opponent, an anti-malware AI. After each scan it noted how certain the defender was that a sample was malicious, then kept tweaking the variants that produced the lowest confidence scores.

Several rounds of tweaks later, the malware AI was able to create code that not only bypassed the anti-malware AI’s detection but still worked once it got through.
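
For a feel of how that feedback loop works, here is a toy version in plain Python: apply a random tweak, keep the variant the defender is least sure about, and stop once the score drops below the detection threshold. The stub classifier and the padding "mutation" are invented for the example; a real attack must keep the malware functional at every step.

```python
import random

random.seed(1)

def scan(sample: bytes) -> float:
    """Stub anti-malware model: confidence (0..1) that the sample is malicious."""
    return max(0.0, 0.95 - 0.004 * sample.count(b"\x00"))

def mutate(sample: bytes) -> bytes:
    """Stub tweak: pad the file with bytes this toy classifier reads as benign."""
    return sample + bytes(random.randint(1, 8))

sample = b"MZ...malicious payload..."
score = scan(sample)

for round_number in range(200):
    if score < 0.5:                      # defender no longer flags it
        break
    candidate = mutate(sample)
    candidate_score = scan(candidate)
    if candidate_score < score:          # keep only tweaks that lower confidence
        sample, score = candidate, candidate_score

print(f"confidence after {round_number} rounds: {score:.2f}")
```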

AI: Not really that intelligent

The success of Endgame’s experiments in using AI to trick AI is worrying, and it should be. The tech industry is investing a great deal of money and time in machine learning, yet its models can still be fooled by nothing more than small, targeted tweaks to their input.

That’s not to say that machine learning and AI don’t have their niches; there is no better way to process the enormous amount of data we have and continue to generate daily. It’s also fantastic at making technology more user-friendly, though with limitations, such as simply not being able to understand language all that well.

If we’re going to trust machine learning to secure our networks, we need to accept something: It’s just as easy to train a machine to recognize malware as it is to train another machine to defeat it.

Don’t put all your eggs into the machine learning basket; that’s just asking for someone to come along and prove how unintelligent your AI really is.

You’ll be able to take Endgame’s malware-modifying AI for a spin at its GitHub repository, but it isn’t live yet.

Top three takeaways for TechRepublic readers:

  1. Security researchers at Endgame programmed an AI to modify malware so that it could slip past anti-malware AI. In their tests, 16 percent of the modified attacks got through without being recognized as malicious code.
  2. The malware-modifying AI tested the anti-malware AI by throwing code at it, then modifying the versions that drew the lowest confidence scores. Eventually it was able to create malware that slipped by undetected and was still functional once unpacked.
  3. Machine learning AI is good at some things but not at others, as shown by this experiment and by tests Google published in 2015. It takes only slight modifications to fool an AI into thinking malware, images, or other categorizable data is something else, which suggests it may not be ready for applications like security just yet.

Original story at TechRepublic

Link: www.techrepublic.com/article/ai-vs-ai-new-algorithm-automatically-bypasses-your-best-cybersecurity-defenses/