AI in Security Carries as Many Questions as Answers
Companies are adopting machine intelligence even though there are still issues and questions regarding its performance, a new report on AI use in cybersecurity shows.
January 2, 2019
Nearly three-quarters of all organizations have implemented security projects that have some level of machine intelligence built in. And the more security alerts a company sees in a day, the more likely it is to turn to machine intelligence to deal with the flood.
Those are just two of the conclusions reached in a new white paper, "The State of AI in Cybersecurity: The Benefits, Limitations and Evolving Questions," published today by Osterman Research. The report, based on survey responses from more than 400 organizations with more than 1,000 employees, examines how those organizations use AI and the results they have seen from that use.
"AI is certainly, thanks to very strong marketing, winning the hearts and minds, not of the practitioners but of the broader executive suite," says Ramon Peypoch, chief product officer of ProtectWise, which sponsored the Osterman research. "They're being taken with the idea of allowing teams to do more and be more productive."
While companies are definitely employing machine intelligence in security, the perception of its value is not universally positive. According to the report, 60% of organizations employing AI think that AI makes investigations of alerts faster. The same proportion report that AI improves the efficiency of their security staff.
The more an organization employs machine intelligence, the more positive its perception of the technique's effectiveness. In companies that have deployed machine intelligence in 10% or less of their security applications, 49% see it speeding their investigation of alerts. In companies employing machine intelligence in more than 10% of their security applications, that number rises to 69%.
Still, machine intelligence isn't perceived as perfect. Some 60% of responding organizations say that it doesn't deal with zero-day or advanced threats, and roughly half complain that it generates too many false positives. These issues are due at least partially, say some experts, to the difficulty in properly training machine learning engines.
"You have very few machine learning professionals that can handle and clarify and gain meeting from the data," says Heather Lawrence, a researcher at the Nebraska Applied Research Institute. She points out that machine learning professionals are rarely experienced in cybersecurity, while cybersecurity experts tend to have no real data science experience. The disconnect slows improvement and wide, effective deployment. "You still need somebody who can understand the data going in and the data going out. It hasn't yet been automated to a point where you can remove the professional to actually get meaning from the data," Lawrence explains.
Peypoch looks at the data in the report and sees future progress as almost inevitable. "AI is one tool for driving efficiencies. It can make your limited staff more effective, but it's not going to replace human staff anytime soon," he says. "AI is an approach, a journey for most organizations deploying it, and I think we're at an early point of deployment, of maturity and sophistication."
Searching for a ready metaphor for the current state of adoption, Peypoch turns to sports. "I don't think we're even in the first inning; the teams are still on the field warming up prior to the game starting."