Can machine learning be used to shore up cyber defenses?

Deep learning can be a vital supplementary tool for cybersecurity.

The meteoric rise of malware has put us all at risk. We are engaged in a never-ending race with cybercriminals to protect systems, plug gaps, and eradicate vulnerabilities before attackers can exploit them. The front line grows by the day as we share more data and connect new devices to our networks through the rise of the Internet of Things.

Keeping up with the fast pace of new malicious threats is a real challenge. If it takes longer to scan for malware than it takes the malware to gain a foothold or exfiltrate data, then we are stuck in a detect-and-remediate pattern. Prevention would be preferable. One possible path to accurate prediction and real-time prevention is the development of machine learning algorithms.

Threats evolve quickly

Whoever you go to for malware stats, the numbers are frightening. The AV-Test Institute registers more than 250,000 new malicious programs every day. A new malware specimen emerged every 4.6 seconds in 2016, according to G Data, and that figure dropped to 4.2 seconds in the first quarter of 2017. As many as 72 million unique URLs were documented as malicious in that quarter alone.

We’re in the midst of a serious malware epidemic and we need new weapons in the fight against cybercriminals. The traditional approach of blacklisting URLs requires either frequent updates to be pushed out to users or security systems that query cloud-based blacklists. In both cases there are potential performance impacts, and there’s also a delay between the discovery of a malicious URL and protection against it.
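
To make that delay concrete, here is a minimal sketch of the exact-match lookup a locally cached blacklist performs. The file name and helper functions are illustrative assumptions, not any vendor's implementation; the point is simply that a URL discovered after the last update is not in the set yet.

```python
# A minimal sketch of a locally cached URL blacklist (illustrative only).

def load_blacklist(path="blacklist.txt"):
    """Load known-bad URLs from a local cache that is refreshed by pushed updates."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def is_blocked(url, blacklist):
    """Exact-match lookup: protection lags until the list is next updated."""
    return url in blacklist

# Usage sketch: anything discovered after the last update slips through.
# blacklist = load_blacklist()
# is_blocked("http://known-bad.example/payload", blacklist)  # True only if already listed
```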

It’s possible to ramp up the security levels by blocking wider domains, but actions like that lead to false positives. We could enjoy complete protection, but if it means blocking access to everything, it’s obviously not a viable solution. So, what’s the answer?

Machine learning models

There has been plenty of buzz about the potential of machine learning for countless industries, including cybersecurity, but there isn’t much clarity on precisely what it can do. The basic idea is to emulate the human brain with an artificial neural network that’s able to ingest huge data sets. These models learn through human guidance, and trial and error, until they can accurately recognize suspicious URLs or probable malware.

As the machine learning model improves, the hope is that it can reach the point where it correctly predicts what is malware and automatically blocks access to it. There’s a great deal of work in crafting a predictive model like this. The data must be properly prepared, the model must be designed, and then you must train and validate it, before evaluating its effectiveness.
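
As a rough illustration of that workflow, the sketch below prepares a handful of labelled URLs, trains a small scikit-learn neural network on character n-gram features, and evaluates it on a held-out split. The toy data, feature choice, and network size are assumptions made purely for demonstration; this is not the model Sophos or any other vendor actually uses.

```python
# Prepare data, design a model, train, validate, evaluate -- a toy end-to-end pass.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report

# 1. Prepare the data: labelled URLs (1 = malicious, 0 = benign). Made-up examples.
urls = ["http://login-verify.example/update.exe", "https://www.example.org/news",
        "http://free-prizes.example/claim.php",   "https://docs.example.com/guide"]
labels = [1, 0, 1, 0]

# Turn raw URLs into character n-gram features the model can ingest.
vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(3, 5))
X = vectorizer.fit_transform(urls)

# 2. Split off a validation set.
X_train, X_val, y_train, y_val = train_test_split(
    X, labels, test_size=0.5, stratify=labels, random_state=0)

# 3. Design and train a small neural network.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# 4. Evaluate before trusting it to block anything automatically.
print(classification_report(y_val, model.predict(X_val)))
```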

For a deep dive into deep learning, read up on how Sophos has been developing just such a threat detection model through machine learning.

Challenges for machine learning

There are many prerequisites for an effective machine learning model. You obviously want to strike a good balance between high detection rates and minimal false positives. It needs to be fed a stream of relevant, quality data on a vast scale, but it also has to be lean in order to make real-time decisions and be truly proactive.
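
Those two requirements pull against each other, and both fall out of the same confusion-matrix counts. The figures in this back-of-the-envelope sketch are invented purely to show the calculation.

```python
# Detection rate vs. false positive rate from confusion-matrix counts.

def detection_rate(true_positives, false_negatives):
    """Share of real threats the model catches (also called recall)."""
    return true_positives / (true_positives + false_negatives)

def false_positive_rate(false_positives, true_negatives):
    """Share of benign items wrongly flagged -- the cost of over-blocking."""
    return false_positives / (false_positives + true_negatives)

# Hypothetical evaluation run: 9,800 of 10,000 threats caught,
# 50 of 100,000 clean items wrongly flagged.
print(detection_rate(9_800, 200))        # 0.98
print(false_positive_rate(50, 99_950))   # 0.0005
```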

The potential rewards are so great that the whole industry is moving towards tackling these challenges. Security systems that can leverage big data to consistently and accurately predict and shut out threats in a sea of shifting variables could dramatically reduce the impact of cybercrime. No wonder that ABI Research predicts that machine learning in cybersecurity will boost big data, intelligence, and analytics spending to $96 billion by 2021.

On the horizon

As exciting as developments like machine learning and genetic algorithms are, it’s important to remember that we’re talking about supplementary technologies here. We still need security strategies, proper training for staff, stringent security testing, and a host of tools to protect our networks and data. The guiding hand of cybersecurity experts is essential for these models to continue improving.

These technologies won’t replace humans; they’ll simply empower us with more accurate information. It may be years before machine learning leads to highly accurate autonomous systems, but there’s no doubt that these models will prove a valuable ally as we strive for greater security in cyberspace.

 

This article was originally posted on CSOOnline >

The Darwin defense: can ‘genetic algorithms’ outsmart malware?

Coming to a future near you: software code that mutates and evolves.

We often talk about computer systems and information security in biological terms. Threats and defenses evolve, viruses run rampant, and machines learn by emulating the neural networks in our brains. Cybersecurity is an endless war between attackers and defenders, just as biology is a war between predators and prey.

What if we could create an automated process of selection for computer programs, where the fittest would survive and adapt to become more robust, closing vulnerabilities and fixing bugs with each new self-replicating version? That’s precisely what some researchers are working on, and it may lead us to a future where software repair and security are automated, without the input of coders.

The malware mountain

Malicious software, or malware, is an enormous problem. The AV-Test Institute registers more than 250,000 new malicious programs every day. Trying to combat that threat is far from easy, especially with limited time and resources. Cybercrime damages will cost $6 trillion annually by 2021, according to Cybersecurity Ventures, up from $3 trillion in 2015.

In a competitive market where new features and devices are developed as quickly as possible, security often takes a back seat. The need to secure the IoT is a good example. We’re connecting billions of devices to our networks that offer new potential points of entry for hackers. Many of these IoT devices lack basic security provisions or they have not been properly configured to take advantage of the security they do offer.

A single default password may hand an attacker the keys to your digital kingdom. Even with a stringent update policy and a string of security patches, which is not the state of play for most businesses, much less the average home user, there is still risk. New vulnerabilities emerge all the time and updates can create as many bugs as they fix.

The Darwin defense

The concept of a genetic algorithm was pioneered by John Henry Holland, a professor of psychology, electrical engineering, and computer science. He recognized the potential of applying Darwin’s concept of natural selection to computers. Now Stephanie Forrest, who earned her Ph.D. at the University of Michigan under Holland, is applying these genetic algorithms to software.

The idea is to allow different versions of a computer program to mate and merge their code. Some of the time, the new versions work better than their predecessors. Each software version is judged on its ability to perform the functions it was originally created for. Weak versions that don’t perform well are culled. Promising new variants survive and mate. There’s also an element of unexpected innovation that comes through mutation, providing desirable new features.
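
The loop itself is simple to sketch. The toy example below evolves bit strings against a made-up fitness function, which stands in for "how well does this version perform its original job"; real program-repair research applies the same evaluate, cull, mate, and mutate cycle to program code rather than bit strings.

```python
# A toy genetic algorithm: evaluate fitness, cull the weak, mate survivors, mutate.
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]        # hypothetical "ideal" genome
POP_SIZE, MUTATION_RATE, GENERATIONS = 20, 0.05, 100

def fitness(genome):
    """Score a candidate by how well it performs its intended function."""
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):
    """Mate two parents by merging their 'code' at a random cut point."""
    cut = random.randint(1, len(a) - 1)
    return a[:cut] + b[cut:]

def mutate(genome):
    """Occasional random changes supply unexpected innovation."""
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)         # judge every version
    survivors = population[: POP_SIZE // 2]             # cull the weak half
    children = [mutate(crossover(*random.sample(survivors, 2))) for _ in survivors]
    population = survivors + children                   # promising variants breed

best = max(population, key=fitness)
print(best, fitness(best))
```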

With these genetic algorithms, programs essentially evolve through selective breeding and artificial adaptation. New generations can develop quickly with no need for human intervention. This automated process has the potential to get great results far more quickly and cheaply than traditional software development, where repairing bugs and closing vulnerabilities is slow and difficult.

Automation and evolution

Traditional software development has given way to a much faster process, and there’s a growing understanding that automation can introduce speed and consistency while freeing up talent to focus on areas where people can add more value. Artificial intelligence has benefitted enormously by borrowing from biology, so it stands to reason that security software could do the same.

As potential attack surfaces grow, there are countless risks to assess and remediate. There’s so much to consider, from third-party risk management to the growth of botnets. Cybersecurity professionals understand that this is a war that will never end. Hackers and cybercriminals continue to identify and exploit new avenues of attack. Just as innovation drives new software features, it leaves bugs and vulnerabilities in its wake.

Even with the help of a common set of principles, like NIST’s Cybersecurity Framework, it’s difficult to keep malware off your network. New vulnerabilities are discovered every day, but too many companies also fail to remediate known issues. Patching is a real problem that needs to be addressed.

It’s easy to see the exciting potential of automated, evolutionary software development for rapid bug fixes and enhanced security.

 

This article was originally posted on CSOOnline >