An AI Cybersecurity System May Detect Attacks with 85 Percent Accuracy

Dyllan Furness

Dyllan explores technology and the human condition for Tech Emergence. His interests include but are not limited to whiskey, kimchi, and Catahoulas.

How secure is your company’s online data?

Probably not as secure as you think. Recent figures from SecurityScorecard, a security risk benchmarking startup, suggest that the United States federal government ranks dead last in cybersecurity among major industries, despite having spent $100 billion on cybersecurity measures over the past decade.

IT and security teams are dangerously understaffed, with more than 200,000 cybersecurity jobs going unfilled. A RAND Corporation study estimates there are only about 1,000 top-level cybersecurity experts worldwide, against a global need for 10,000 to 30,000. Perhaps most remarkable is the expense: the British insurance company Lloyd's puts the annual cost of cyberattacks at $400 billion, and that figure excludes the significant portion of cybercrime that the World Economic Forum (WEF) says goes undetected.

So what’s a company to do? Better artificial intelligence may be the answer.

When it comes to detecting cyberattacks, today's security systems come in two forms: analyst-driven and machine-driven. Analyst-driven solutions are developed and maintained by security experts and rely on rule sets to scan for potential attacks. Their weakness is that any attack that doesn't fit neatly into the experts' rules slips by undetected, so the system overlooks new and unfamiliar attack methods.
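To make the contrast concrete, here is a minimal sketch of an analyst-driven scan in Python. The rules and log fields are hypothetical examples, not any vendor's actual rule set, but they illustrate why an attack outside the experts' rules slips through unnoticed.

```python
# Minimal sketch of an analyst-driven, rule-based detector.
# The thresholds and log fields below are hypothetical examples.

FAILED_LOGIN_THRESHOLD = 5      # rule written by a human expert
BLOCKED_PORTS = {23, 3389}      # e.g., Telnet and RDP

def scan_event(event: dict) -> bool:
    """Return True if the event matches a known attack rule."""
    if event.get("failed_logins", 0) > FAILED_LOGIN_THRESHOLD:
        return True
    if event.get("dest_port") in BLOCKED_PORTS:
        return True
    # Any attack pattern the experts didn't anticipate falls
    # through here and is silently ignored.
    return False

events = [
    {"failed_logins": 12, "dest_port": 443},   # caught: brute force
    {"failed_logins": 0,  "dest_port": 3389},  # caught: blocked port
    {"failed_logins": 0,  "dest_port": 8080},  # novel attack: missed
]
print([scan_event(e) for e in events])  # [True, True, False]
```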

The machine-driven form relies on anomaly detection performed by a machine learning algorithm. Anomaly detection has the opposite weakness: it tends to flag too many false positives. It therefore requires constant feedback from cybersecurity analysts, who tend to have too much on their plates to address and re-label every false positive.
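A correspondingly minimal sketch of the machine-driven form, using scikit-learn's IsolationForest on synthetic traffic features (the data, feature choice, and contamination rate are all illustrative assumptions), shows how readily such a detector over-flags:

```python
# Minimal sketch of unsupervised anomaly detection on network features.
# The synthetic data and contamination rate are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Mostly benign traffic (bytes sent, connection duration)...
benign = rng.normal(loc=[500, 30], scale=[100, 10], size=(990, 2))
# ...plus a handful of genuinely malicious events.
attacks = rng.normal(loc=[5000, 300], scale=[500, 50], size=(10, 2))
X = np.vstack([benign, attacks])

detector = IsolationForest(contamination=0.05, random_state=0)
labels = detector.fit_predict(X)  # -1 = flagged as anomalous

# With contamination=0.05 the forest flags ~50 of 1,000 events,
# so most of what lands on an analyst's desk is a false positive.
print(f"{int((labels == -1).sum())} events flagged for only 10 real attacks")
```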

Researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), in collaboration with machine learning startup PatternEx, have combined analyst-driven solutions with anomaly detection to develop what they claim is a drastically better solution. In fact, they claim the system, dubbed AI2, can predict 85 percent of cyberattacks with only occasional oversight from human experts. The teams presented their findings in a paper at last week's IEEE International Conference on Big Data Security in New York City.

The “AI-driven predictive cybersecurity platform” works by first combing through data and attempting to detect suspicious activity using unsupervised (that is, anomaly detection) methods. Once the system finishes filtering, it presents the suspicious activity to a human cybersecurity expert, who confirms which events are genuine attacks and dismisses the false alarms.

AI2 then builds what's called a supervised model from the expert's feedback. This model becomes a reference tool for the system when it scans for future attacks: AI2 consults the analyst-trained supervised model as it combs through additional data, again presents the suspicious activity to an analyst, who confirms the actual attacks, and that feedback is fed back into the supervised model. The system's detection becomes progressively more refined.
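The paper's exact architecture isn't detailed in this article, so the following is only a rough sketch of that human-in-the-loop cycle, with a random-forest classifier standing in for the supervised model and a hard-coded rule standing in for the analyst:

```python
# Sketch of an AI2-style feedback loop: unsupervised flagging,
# analyst labeling, and a supervised model retrained each round.
# The analyst stub and model choice are assumptions for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(1)

def analyst_labels(events):
    """Stand-in for the human expert (here, a hidden ground-truth rule)."""
    return (events[:, 0] > 2000).astype(int)  # 1 = confirmed attack

supervised = RandomForestClassifier(random_state=0)
labeled_X, labeled_y = [], []

for day in range(3):  # each round mimics one day of operation
    X = np.vstack([rng.normal([500, 30], [100, 10], (500, 2)),
                   rng.normal([5000, 300], [500, 50], (5, 2))])
    # Step 1: the unsupervised detector surfaces suspicious events.
    flagged = X[IsolationForest(random_state=0).fit_predict(X) == -1]
    # Step 2: the analyst confirms which flagged events are attacks.
    labeled_X.append(flagged)
    labeled_y.append(analyst_labels(flagged))
    # Step 3: retrain the supervised model on all feedback so far.
    supervised.fit(np.vstack(labeled_X), np.concatenate(labeled_y))
    print(f"day {day}: trained on {sum(len(y) for y in labeled_y)} labels")
```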

When these steps are repeated just a few times, AI2's researchers claim, you'll have a system with an 85 percent success rate at predicting cyberattacks.

AI2 stands out by utilizing three different unsupervised learning methods before presenting the flagged data to analysts. The addition of the supervised model, developed from the analysts' feedback, means the system can cut the volume of events it flags for review roughly five-fold in just a few days.
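The article doesn't name the three methods, so the sketch below uses three common scikit-learn detectors purely as stand-ins; the point is how requiring agreement among methods can shrink the queue an analyst has to review:

```python
# Sketch of combining several unsupervised detectors before review.
# These three detectors are stand-ins, not the paper's actual methods.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(2)
X = np.vstack([rng.normal([500, 30], [100, 10], (990, 2)),
               rng.normal([5000, 300], [500, 50], (10, 2))])

votes = sum([
    IsolationForest(contamination=0.05, random_state=0).fit_predict(X) == -1,
    LocalOutlierFactor(contamination=0.05).fit_predict(X) == -1,
    OneClassSVM(nu=0.05).fit_predict(X) == -1,
])
# Escalate only events flagged by at least two of the three methods,
# which sharply shrinks the review queue relative to any single method.
print(f"any method: {(votes >= 1).sum()}, majority vote: {(votes >= 2).sum()}")
```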

CSAIL research scientist Kalyan Veeramachaneni, who helped lead the project, describes AI2 thus: “The more attacks the system detects, the more analyst feedback it receives, which, in turn, improves the accuracy of future predictions… That human-machine interaction creates a beautiful, cascading effect.”

Meanwhile, even those unaffiliated with the project are excited. Nitesh Chawla, the Frank M. Freimann Professor of Computer Science at the University of Notre Dame, told MIT News, “This paper brings together the strengths of analyst intuition and machine learning, and ultimately drives down both false positives and false negatives.”

If AI2 works as planned, the system may provide IT and security teams with a valuable alternative to standalone anomaly detection and analyst-driven solutions. By combining the two approaches, the researchers have helped refine machine learning methods of cybersecurity while freeing up analysts to focus on other projects.

Image credit: Pixabay
