
Is MIT’s “AI2” The Next Step In Cybersecurity?

Cybersecurity is one of the most pressing problems on the planet right now. Nearly everything we value lives online: personal photo albums, business plans and government secrets. All of it needs protection, and for every barrier we learn how to erect, hackers find just as many ways to sneak around it. MIT's AI2 promises to shift that balance.

Meet AI2

AI2, an artificial intelligence developed at the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory (CSAIL) together with the startup PatternEx, is designed to improve as it learns. Using "active learning," AI2 can detect 85% of cyber-attacks. Most importantly, it also reduces false positives by a factor of five.

Take a look at the video below, provided by MIT CSAIL:

While it can seem inevitable that robots will replace humans in just about everything, AI2 doesn't replace human analysts at all; it works alongside them to make them more efficient. Like any new hire, AI2 starts out pretty simple on its first day. Artificial intelligence is nowhere near its world-conquering "Terminator" stage, as shown by how easily it can be fooled (like when the internet recently taught a Twitter bot to be racist). But with human interaction, it can gradually learn how to handle particular tasks.

How AI2 Works

Think of AI2's work as a cycle measured in days. As the video above shows, on day one AI2 combs through the data, detects suspicious activity, and logs it.
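
MIT hasn't published AI2's internals here, but the day-one step — scoring events by how unusual they look, before any human labels exist — is classic unsupervised anomaly detection. Below is a minimal sketch of that idea in Python, using scikit-learn's IsolationForest as a stand-in detector and made-up feature data; none of this is AI2's actual code.

```python
# Day one (sketch): rank logged events by anomaly score, no human labels yet.
# IsolationForest is a stand-in detector; the feature matrix is placeholder data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# One row per logged event, e.g. bytes sent, failed logins, distinct IPs contacted.
events = rng.normal(size=(10_000, 3))

detector = IsolationForest(random_state=0).fit(events)
scores = -detector.score_samples(events)   # higher score = more anomalous
ranked = np.argsort(scores)[::-1]          # most suspicious events first
top_200 = ranked[:200]                     # what day two hands to the analyst
```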

On day two, AI2 shows the top 200 most abnormal events to a human analyst, who provides feedback. AI2 uses that feedback to build a supervised model for handling future activity, combining its supervised and unsupervised models to find the best way to approach problems and slowly adapt. It also creates predictive models to use the next day.

By day three, AI2 uses those predictive models to take on cyberattacks preemptively, recommending stronger security measures to counter likely attacks. Think of it like this: if robbers break into your house through a particular window, it stands to reason they'll try that window, or another one nearby, again. AI2 might suggest locking every window in the house and adding a camera or two at the most vulnerable points.
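
To make days two and three concrete, here's a continuation of the same sketch: the analyst's verdicts on those top 200 events train a supervised classifier, which then scores the next day's traffic preemptively. Again, the data and the "analyst" labels are invented for illustration, not AI2's real pipeline.

```python
# Days two and three (sketch): fold analyst feedback into a supervised model,
# then use it to flag the next day's events before an attack succeeds.
# All data is placeholder; the analyst labels are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Day two: features of the 200 events the analyst reviewed, plus their verdicts
# (1 = confirmed attack, 0 = benign). Here we pretend 10 were real attacks.
reviewed_events = rng.normal(size=(200, 3))
analyst_labels = np.zeros(200, dtype=int)
analyst_labels[:10] = 1

classifier = RandomForestClassifier(random_state=0)
classifier.fit(reviewed_events, analyst_labels)

# Day three: score a fresh day of traffic; events that resemble confirmed
# attacks rise to the top of the next review queue (and can trigger defenses).
new_events = rng.normal(size=(10_000, 3))
attack_probability = classifier.predict_proba(new_events)[:, 1]
next_review_queue = np.argsort(attack_probability)[::-1][:200]
```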

What MIT CSAIL and PatternEx plan to do with AI2 is still uncertain. They could market and license it, or simply inspire others to take the same approach to A.I. and cybersecurity. Only time will tell.

It seems we can't get away from A.I. It's our future, and teaching these programs how to learn may be the best way to move society forward. Recently we've been introduced to a 30-year-old program that has spent half a lifetime learning how to think, and another that can find any image on the planet. So do you think A.I. is our future, or our downfall? Let us know in the comments below.

Source: MIT via Wired