Security Advisor Middle East | Issue 24 | 02.2018


FEATURE

HOW ARE CYBER CRIMINALS USING MACHINE LEARNING?

Machine learning algorithms will improve security solutions, helping human analysts triage threats and close vulnerabilities more quickly. But they will also help threat actors launch bigger, more complex attacks.

Defined as the "ability for (computers) to learn without being explicitly programmed," machine learning is huge news for the information security industry. It is a technology that can potentially help security analysts with everything from malware and log analysis to identifying and closing vulnerabilities earlier. It could also improve endpoint security, automate repetitive tasks, and even reduce the likelihood of attacks resulting in data exfiltration. The problem is, hackers know this and are expected to build their own AI and machine learning tools to launch attacks. Machine learning-based attacks in the wild may remain largely unheard of at this time, but some of the following techniques are already being leveraged by criminal groups.

Increasingly evasive malware

Malware creation is still largely a manual process for cyber criminals. They write scripts to make up computer viruses and trojans, and leverage rootkits, password scrapers and other tools to aid distribution and execution. But what if they could speed up this process? Could machine learning help create malware?

The first known example of using machine learning for malware creation was presented in 2017 in a paper entitled "Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN." In the paper, the authors revealed how they built a generative adversarial network (GAN) based algorithm to generate adversarial malware samples that, critically, were able to bypass machine learning-based detection systems.

In another example, at the 2017 DEFCON conference, security company Endgame revealed how it used Elon Musk's OpenAI framework to create customised malware that security engines were unable to detect. Endgame's research was based on taking binaries that appeared malicious and changing a few parts so that the code would appear benign and trustworthy to antivirus engines. Other researchers, meanwhile, have predicted machine learning could ultimately be used to "modify code on the fly based on how and what has been detected in the lab," an extension of polymorphic malware.
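To make the evasion idea concrete, the toy sketch below shows the core trick behind this line of research: repeatedly perturbing a sample's feature vector until an ML detector no longer flags it. Everything here is invented for illustration (the detector's weights, the feature meanings, the greedy flip strategy); it is not Endgame's or the MalGAN paper's actual method, and real attacks must also preserve the malware's functionality.

```python
# Toy illustration only: greedy black-box evasion of a linear
# ML detector by flipping binary features one at a time.
# The detector, weights and features are hypothetical.
import numpy as np

# Hypothetical detector: a linear model over binary file features
# (e.g. "imports VirtualAlloc", "has packed section", ...).
weights = np.array([2.0, 1.5, -0.5, 3.0, -1.0, 0.8])
bias = -2.0

def detector_score(x):
    """Probability that the detector flags the sample as malicious."""
    return 1.0 / (1.0 + np.exp(-(weights @ x + bias)))

# A 'malicious' sample: a binary feature vector the detector flags.
sample = np.array([1, 1, 0, 1, 0, 1], dtype=float)

# Greedy evasion: at each step, try flipping every feature and keep
# the single flip that lowers the detector's score the most; stop
# once the score drops below the 0.5 decision threshold.
x = sample.copy()
for _ in range(len(x)):
    if detector_score(x) < 0.5:
        break
    candidates = []
    for i in range(len(x)):
        trial = x.copy()
        trial[i] = 1 - trial[i]  # flip feature i
        candidates.append((detector_score(trial), i))
    _, best_i = min(candidates)
    x[best_i] = 1 - x[best_i]

print("original score:", round(float(detector_score(sample)), 3))
print("perturbed score:", round(float(detector_score(x)), 3))
```

Against a simple linear model a handful of flips is enough; the GAN-based work cited above automates the same search against black-box detectors, with a generator network learning which perturbations evade a substitute model.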

