Artificial Intelligence’s Double-Edged Role in Cyber Security – with Dr. Roman V. Yampolskiy

Daniel Faggella

Daniel Faggella is Head of Research at Emerj. Called upon by the United Nations, World Bank, INTERPOL, and leading enterprises, Daniel is a globally sought-after expert on the competitive strategy implications of AI for business and government leaders.

Episode Summary

Cyber security is closely linked to advances in artificial intelligence. In this episode, we speak with Dr. Roman V. Yampolskiy about the cyber security factors and risks associated with AI. How is AI creating new security risks, and how can it be used to combat them? We look briefly into the future to discuss some of the potential ‘super’ AI risks to cyber security, and touch on what can be done now to hedge against known and unknown threats.


Guest: Dr. Roman V. Yampolskiy

Expertise

Computer Science and Engineering, Artificial Intelligence

Recognition in Brief

Dr. Roman V. Yampolskiy is a tenured associate professor at the Speed School of Engineering, University of Louisville. During his teaching career, he has received multiple recognitions, including (but not limited to): Distinguished Teaching Professor, Professor of the Year, Leader in Engineering Education, Top 4 Faculty, and the Outstanding Early Career in Education award. He is the founder and director of the Cyber Security Lab. In addition, he has authored many books, including his latest publication, Artificial Superintelligence: A Futuristic Approach.

Current Affiliations: University of Louisville, IEET, AGI, Kentucky Academy of Science, Research Advisor for MIRI, and Associate of GCRI.

Intelligent Autonomous Systems

Intelligent autonomy over actions is a hallmark of human beings. We accept that other humans have the same type of control, which flavors our world with both diversity and unpredictability. But what happens when artificially intelligent systems gain a similar level of autonomy over systems that we have put in place? Will their goals align or clash with ours? What are the potential risks and benefits?

This type of control is already playing out in the world of software and algorithmic systems, where there are high levels of security risk – posed both by the humans who control them and, potentially, by the automated systems themselves. Dr. Roman Yampolskiy’s research interests are focused on the types of potential AI security risks, the ramifications for society, and ways to prevent such risks from being realized (at least at catastrophic levels).

“You have intelligent systems trying to get access to resources, maybe through free email accounts, maybe participating in free online games…they take over the games and get the money out as quickly as possible. Some of the work I did is in profiling such systems, detecting them, preventing them,” says Roman.

In certain domains, such as finance and the military (though not limited to these sectors), the implications are insidious and potentially disastrous. There are systems, says Roman, that engage in stock trades and try to manipulate the market toward certain outcomes, which is illegal market behavior. “It is a huge problem to think about how much of our wealth is controlled by those systems,” he states. On the newer frontier of military-developed AI, it’s obviously important to be able to detect whether our drones have been hacked, says Yampolskiy. We’ve doubtless all heard the publicized threats of Chinese-based hackers tapping into U.S. companies’ corporate data systems as well.

On the topic of hacking, Roman refers to any type of intelligent system. People have figured out how to automate the process of finding targets through an AI system, identifying weaknesses in a system, predicting passwords, and more. Almost anything can be automated today, says Yampolskiy; it’s not beyond our current technology, and hackers are always busy finding new ways to get into a system.

We’re not just concerned about attacks from the outside, explains Roman; such attacks can also happen internally, which is often the case. Online casinos are a hotspot for this type of activity, where an employee with privileged access to other players’ cards is suddenly winning every hand and earning thousands of dollars. What’s surprising is that this type of foul play can go undetected for years, says Yampolskiy.
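To make the detection idea concrete, here is a minimal sketch in Python of how an improbable win streak like the one Roman describes could be flagged statistically. The function names, the assumed fair-play win rate, and the z-score threshold are all invented for illustration; they are not from the interview.

```python
import math

def win_rate_zscore(wins: int, hands: int, p_fair: float = 0.48) -> float:
    """Z-score of an observed win count against a fair-game baseline.

    p_fair is the assumed player win rate under fair play (illustrative).
    """
    expected = hands * p_fair
    std_dev = math.sqrt(hands * p_fair * (1 - p_fair))
    return (wins - expected) / std_dev

def flag_suspicious_players(records, threshold: float = 4.0):
    """Return player IDs whose win counts are statistically improbable.

    records: iterable of (player_id, wins, hands) tuples. Requiring a
    minimum number of hands avoids flagging small-sample lucky runs.
    """
    return [pid for pid, wins, hands in records
            if hands >= 100 and win_rate_zscore(wins, hands) > threshold]

# Invented data: "emp7" (an insider) wins nearly every hand.
players = [("p1", 230, 500), ("p2", 251, 500), ("emp7", 480, 500)]
print(flag_suspicious_players(players))  # -> ['emp7']
```

Under fair play, a z-score above 4 corresponds to odds of very roughly 1 in 30,000, so ordinary hot streaks almost never trip the flag, while an insider winning nearly every hand stands out immediately.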

Anticipating Automated Intelligence

To what level is AI actually involved in the hacking process itself? While there are many areas in which AI may act as a line of defense where human beings fail (such as those outlined in this opinion piece by Rob Enderle), Yampolskiy is concerned with the ways in which AI may turn against our systems. “We’re starting to see very intelligent computer viruses, capable of modifying drone code, changing their behavior, penetrating targets,” says Roman. This is in addition to standard hacking scripts that are already available and becoming more sophisticated across a variety of intelligent systems. But Roman is mostly concerned about what will happen in a couple of years, when most “hackers” will be largely automated.

What can be done today to combat these threats? Roman describes an “arms race” in developing intrusion detection systems that spot anomalies in system behavior. The most rudimentary detection systems have been around for years, but they are ever advancing. Software that monitors a person’s credit activity and catches suspicious transactions, for example, can be quite successful at forming profiles of other intelligent systems and detecting oddities, explains Yampolskiy.
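As a rough illustration of that kind of profiling, the Python sketch below builds a per-account spending profile and flags transactions that fall far outside it. The account history, the threshold, and the function names are invented for the example and are not from the interview.

```python
from statistics import mean, stdev

def build_profile(history):
    """Summarize an account's past transaction amounts."""
    return {"mean": mean(history), "stdev": stdev(history)}

def is_anomalous(amount, profile, z_threshold=3.0):
    """Flag a transaction far outside the account's usual range.

    A z-score above 3 means the amount is more than three standard
    deviations from the account's historical mean -- a crude but
    common first-pass anomaly signal.
    """
    if profile["stdev"] == 0:
        return amount != profile["mean"]
    z = abs(amount - profile["mean"]) / profile["stdev"]
    return z > z_threshold

history = [42.10, 18.75, 55.00, 23.40, 61.20, 37.90]  # typical purchases
profile = build_profile(history)
for amount in (48.00, 2500.00):
    print(amount, "suspicious?", is_anomalous(amount, profile))
```

Production fraud systems use far richer features (merchant, location, timing) and learned models, but the underlying idea, profile the normal and flag the deviant, is the same one Roman outlines.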

It seems more than pertinent to be working on such systems now, with a future and perhaps super-intelligent AI in mind. When I ask Roman if there’s anything we can think about or build now to prepare for possible futures, he notes that he’s currently working on a project that looks at all the possible ways AI can become dangerous, tries to classify and group those AI systems and threats in a meaningful way, and then strategizes about what we can do to address each one.

Roman emphasizes that each AI system poses a completely different problem; military AI is just one system that opens up a host of potential security risks. Some of the broad categories include mistakes in code, problems with goals, ensuring that systems align with human values, and the issue of irrational people developing dangerous AI for their own narrow ends. Each of these must be dealt with in a different manner, says Roman.

“If you think about it, human safety, human security, it’s exactly the same problem…any person could potentially be very dangerous, and there’s an infinite number of ways that can happen…yet somehow society functions even though it’s threatened…I hope that after we understand how many infinite ways there are for AI to fail, we’ll concentrate on those that are truly dangerous,” he states.

Just as with human cloning, explains Roman, we don’t really understand how these advanced AI systems would work. As a society, we’ve generally decided not to clone humans just yet; it’s illegal and unfunded in most places. It might be a good idea to do the same with general AI: it’s fine to develop narrow AI, but specific projects in general AI could be put on hold while we develop better safety mechanisms, he suggests.

The broad and unknown nature of AI system-related threats has driven Yampolskiy to widen his field of view, so to speak. “I’m looking more at solutions which are universal enough to cover all cases…it will be useful to control (the system) while developing it, so you can test it, so that the system has limited access to resources while we’re learning about its behavior,” says Roman.
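That “limited access to resources” idea maps onto familiar sandboxing practice. As a loose, Unix-only sketch in Python (the specific limits and the script name untrusted_agent.py are assumptions for illustration, not anything Roman specifies), a test harness can run an untrusted program under hard CPU and memory caps while its behavior is observed:

```python
import resource
import subprocess

def run_confined(cmd, cpu_seconds=5, mem_bytes=256 * 1024 * 1024):
    """Run a command under hard CPU-time and memory caps (Unix only).

    preexec_fn applies the limits in the child process just before
    exec, so the confined program cannot raise them back.
    """
    def apply_limits():
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))

    return subprocess.run(
        cmd,
        preexec_fn=apply_limits,
        capture_output=True,
        timeout=cpu_seconds + 5,  # wall-clock backstop
    )

# Hypothetical usage: confine an untrusted script while observing it.
result = run_confined(["python3", "untrusted_agent.py"])
print(result.returncode, result.stderr[:200])
```

Genuinely containing a capable AI system would require much stronger isolation (virtual machines, network cutoffs, formal verification), but the principle is the one Roman describes: observe under restriction before expanding access.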

 
