
AI Will Become Better Than Humans At Hacking

It could also help us build better, more secure software

David Braue

Melbourne, Australia – Jun. 22, 2021

It may be years away, but Bruce Schneier is convinced that writing bug-free application code is well and truly on the horizon. All we have to do, he explains, is teach AI how to hack us — and to explain what it’s doing.

Schneier, a well-known security thinker, writer, and fellow and lecturer at the Harvard University Kennedy School, spent a lot of last year’s COVID lockdowns contemplating the long-tail effects of greater AI adoption and — as you do in lockdown — writing a soon-to-be-published book about it.

One of the conclusions he reached is that AI will inevitably become better than humans at hacking — and nearly everything else.

“Hacking is a very human process,” he told Cybercrime Magazine, “that is all about figuring out a loophole. Whether it’s in computer code, or the tax code, or in financial regulations, it’s about studying all the rules and finding the cracks — the things the designers didn’t anticipate.”

And while humans have dominated the field of hacking to date, ever-smarter AI systems “are getting better” at doing the same kind of contextual analysis and identifying loopholes or exceptions — which, in the case of secure coding, means vulnerabilities.


Cybercrime Radio: Bruce Schneier, Security Technologist

“The Coming AI Hackers”


“This is a human process that will be taken over by AIs,” he explained, noting that the steady emergence of automated code-scanning systems means “AI is already finding vulnerabilities in systems — and while they’re not that good at it, they are finding buffer overflows, they find different hacks and exploits.”
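
To make that concrete, the short C sketch below shows the kind of flaw those automated scanners routinely flag: an unchecked strcpy into a fixed-size buffer, next to the bounded variant a patch would introduce. The functions and names are invented for illustration and not drawn from any real codebase.

```c
#include <stdio.h>
#include <string.h>

/* Illustrative only: the classic stack buffer overflow (CWE-121) that
 * automated code scanners hunt for. strcpy() copies attacker-controlled
 * input into a fixed 16-byte buffer with no length check, so anything
 * longer than 15 characters overruns the stack. */
void greet_vulnerable(const char *name) {
    char buffer[16];
    strcpy(buffer, name);              /* no bounds check */
    printf("Hello, %s\n", buffer);
}

/* The safer variant a scanner (or a human patch) would suggest:
 * snprintf() never writes past the end of the buffer. */
void greet_safe(const char *name) {
    char buffer[16];
    snprintf(buffer, sizeof(buffer), "%s", name);
    printf("Hello, %s\n", buffer);
}

int main(int argc, char **argv) {
    if (argc > 1) {
        greet_safe(argv[1]);           /* use the bounded version */
    }
    return 0;
}
```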

Ever-faster patching of application vulnerabilities has allowed disciplined companies to stay abreast of newly discovered software weaknesses — but ultimately, Schneier warned, systems that have been built to learn over time will do just that, steadily improving until they outpace human defenders.

“My guess is that this follows the trajectory of pretty much all AI systems,” he said, citing examples such as chess or go.

“They start out much worse than humans, then slowly get better while humans stay the same. Then one year, computers cross them — and for the rest of time, computers are better than humans at that task.”

AI vs society

Yet ever-better AI isn’t only going to improve the scanning of application source code for bugs that its human masters missed; given the right parameters, Schneier pointed out, it will increasingly be used to identify and exploit loopholes in the very human systems that keep our society functioning.

Given AI’s increasing semantic understanding, for example, pointing an AI engine at the substance of a national tax code could very well uncover perfectly legal loopholes that would let companies or individuals create new ways of minimizing the tax they pay.

The problem: AIs still aren’t very good at explaining how they work — so it’s entirely possible that many of their findings will be arrived at in ways we don’t immediately understand.

As a point of comparison, Schneier referenced the scandal involving carmaker Volkswagen, which in 2017 was fined $2.8 billion for developing and installing software in 11 million cars that detected when a vehicle was undergoing regulatory emissions testing and switched it into a compliant mode it did not use on the road.

That software, Schneier said, was hacking the system from inside “a black box, and no one knew that it was doing that except the designers. And they got away with it for over a decade.”

Now, he said, imagine an AI doing the same thing — coming up with a solution to a problem that achieves the desired outcome, but lies outside of the acceptable behavior imposed on the problem by regulatory frameworks and other policies.

“Imagine an AI being told to design an engine to maximize performance while also passing all emissions control tests,” he said. “It could independently come up with that hack — but nobody would realize that it was a hack because of the explainability problem.”
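
To see how small such a hack can look in practice, here is a minimal C sketch of the defeat-device pattern he describes: detect the test, then behave differently. Every name, heuristic and threshold here is invented for illustration; the real code has never been published in this form.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical sketch of a "defeat device": all names and thresholds
 * are invented for illustration and reflect no real ECU code. */
typedef struct {
    double steering_angle_deg;        /* scripted dyno cycles involve almost no steering */
    double seconds_at_constant_speed; /* ...and long stretches at fixed speeds */
} VehicleState;

/* Crude heuristic: does the current drive look like a regulatory test cycle? */
static bool looks_like_emissions_test(const VehicleState *s) {
    return s->steering_angle_deg < 1.0 && s->seconds_at_constant_speed > 30.0;
}

/* The hack: run emissions controls at full strength only when being watched. */
static double emissions_control_level(const VehicleState *s) {
    if (looks_like_emissions_test(s)) {
        return 1.0;   /* test mode: fully compliant, passes the inspection */
    }
    return 0.3;       /* road mode: favor performance, emit far more pollutants */
}

int main(void) {
    VehicleState on_dyno = { 0.2, 120.0 };
    VehicleState on_road = { 15.0, 4.0 };
    printf("control level under test: %.1f, on the road: %.1f\n",
           emissions_control_level(&on_dyno),
           emissions_control_level(&on_road));
    return 0;
}
```

The point of the sketch is how unremarkable the logic is: a handful of lines buried among millions, invisible to anyone who never thinks to look for it.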

With AI becoming integrated into all kinds of systems, this type of weakness reflects the challenge society faces in ensuring that problem-solving AIs play by the rules — and that their aptitude for large-scale analysis isn’t exploited by people looking to circumvent important controls.

“AI doesn’t have that common sense,” Schneier said. “You have to specify everything — yet you can’t, and the systems are opaque.”

“I worry about AI hackers and what they do if, say, a large investment banking firm tells them to hack financial networks or find new loopholes. You can program the AI not to behave differently during testing [as in the VW example] but there are thousands of other ways that we can’t program in.”

“This is still kind of science fiction,” he said, “and there is a lot between here and there. But it’s not stupid science fiction.”

“We have very little visibility into the security of these systems, yet we must rely on them — so we need to think about how we can patch our social and political systems at the speed we can patch our tech systems.”

David Braue is an award-winning technology writer based in Melbourne, Australia.

Go here to read all of David’s Cybercrime Magazine articles.