The market for artificial intelligence-based security products has seen exponential growth, with a recent report estimating the global market at $14.9 billion in 2021 and projecting it could reach $133.8 billion by 2030.

“Most interestingly, we see behavioral analysis tools increasingly using AI,” noted Pillsbury Public Policy partner Brian Finch in a recent interview with NBC Los Angeles. “By that I mean tools analyzing data to determine behavior of hackers to see if there is a pattern to their attacks—timing, method of attack, and how the hackers move when inside systems. Gathering such intelligence can be highly valuable to defenders.”

But while the use of AI for security purposes can prove beneficial, hackers can exploit it as well. “For instance, AI can be used to identify patterns in computer systems that reveal weaknesses in software or security programs, thus allowing hackers to exploit those newly discovered weaknesses,” Finch said.

Additionally, he noted, “Given the economics of cyberattacks—it's generally easier and cheaper to launch attacks than to build effective defenses—I'd say AI will be on balance more hurtful than helpful. Caveat that, however, with the fact that really good AI is difficult to build and requires a lot of specially trained people to make it work well. Run-of-the-mill criminals are not going to have access to the greatest AI minds in the world.”
