That warning from security researchers is driven home by a team from IBM Corp. that has used the artificial intelligence technique known as machine learning to build hacking programs that could slip past top-tier defensive measures. The group will unveil details of its experiment at the Black Hat security conference in Las Vegas on Wednesday.

State-of-the-art defenses generally rely on examining what the attack software is doing, rather than the more commonplace technique of analyzing software code for danger signs. But the new genre of AI-driven programs can be trained to stay dormant until they reach a very specific target, making them exceptionally hard to stop.
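The "dormant until it reaches a very specific target" idea can be illustrated with a minimal, hypothetical sketch. The article does not describe IBM's actual mechanism; the hash-based trigger below is an assumption chosen only to show the core trick: the program carries no readable description of its target, just a derived fingerprint, so behavior-based defenses see nothing suspicious until the matching environment is found. All names and attributes here are invented for illustration.

```python
import hashlib

# Hypothetical target fingerprint baked into the program at build time.
# A defender reading the code sees only a digest, not who the target is.
EXPECTED_DIGEST = hashlib.sha256(b"target-user|target-host").hexdigest()

def observed_attributes(user: str, host: str) -> bytes:
    # Stand-ins for environmental features a real program might read
    # (username, hostname, etc.); combined into one byte string.
    return f"{user}|{host}".encode()

def is_target(user: str, host: str) -> bool:
    # The program stays inert unless the digest of the observed
    # environment matches the baked-in fingerprint.
    digest = hashlib.sha256(observed_attributes(user, host)).hexdigest()
    return digest == EXPECTED_DIGEST

print(is_target("some-user", "some-host"))      # False: stays dormant
print(is_target("target-user", "target-host"))  # True: condition met
```

A machine-learning variant replaces the hash comparison with a trained model recognizing the target (for example, from a face or voice), which is even harder to reverse-engineer than a digest.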

No one has yet reported catching malicious software that clearly relied on machine learning or other variants of artificial intelligence, but that may just be because the attack programs are too good to be caught.

Researchers say that, at best, it's only a matter of time. Free artificial intelligence building blocks for training programs are readily available from Alphabet Inc.'s Google and others, and the ideas work all too well in practice.

"I absolutely do believe we're going there," said Jon DiMaggio, a senior threat analyst at cybersecurity firm Symantec Corp. "It's going to make it a lot harder to detect."



New genre of artificial intelligence programs takes computer hacking to another level