Will AI usher in a new era of hacking?

Cybersecurity firms are using more AI technologies. Will hackers do so too?
DARPA’s Cyber Grand Challenge held its final round on Thursday.

Credit: Michael Kan

It may take several years or even decades, but hackers won’t necessarily always be human. Artificial intelligence—a technology that also promises to revolutionize cybersecurity—could one day become the go-to hacking tool.

Organizers of the Cyber Grand Challenge, a contest sponsored by the U.S. Defense Advanced Research Projects Agency (DARPA), gave a glimpse of the power of AI at the August event. Seven supercomputers battled each other to show that machines can indeed find and patch software vulnerabilities.

Theoretically, the technology could be used to perfect any code, ridding it of exploitable flaws. But what if that power were turned to malicious ends? The future of cyberdefense might also pave the way for a new era of hacking.

The possible dangers

Cybercriminals might, for instance, use those capabilities to scan software for previously unknown vulnerabilities and then exploit them for ill. Unlike a human, though, an AI could do this with machine efficiency. Hacks that were once time-consuming to develop might become cheap commodities in this nightmare scenario, as the sketch below suggests.
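What would that kind of automated bug hunting look like at its simplest? The sketch below is a bare-bones mutation fuzzer, a toy version of the technique rather than anything resembling a Cyber Grand Challenge entrant; the target binary ./parse_input and the seed input are hypothetical, chosen purely for illustration.

```python
import random
import subprocess

# A toy mutation fuzzer: repeatedly corrupt a known-good input, feed it to a
# target program, and save any input that makes the program crash.
# "./parse_input" is a hypothetical target binary used only for illustration.

SEED = b"GET /index.html HTTP/1.1\r\n\r\n"

def mutate(data: bytes) -> bytes:
    """Return a copy of the seed with a handful of random bytes flipped."""
    buf = bytearray(data)
    for _ in range(random.randint(1, 8)):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def fuzz(iterations: int = 10_000) -> None:
    for i in range(iterations):
        candidate = mutate(SEED)
        try:
            proc = subprocess.run(
                ["./parse_input"],
                input=candidate,
                capture_output=True,
                timeout=5,
            )
        except subprocess.TimeoutExpired:
            continue  # a hang can also signal a bug, but skip it here
        # On POSIX, a negative return code means the process was killed by a
        # signal (e.g., SIGSEGV), a classic symptom of a memory-safety bug.
        if proc.returncode < 0:
            print(f"iteration {i}: crash (signal {-proc.returncode})")
            with open(f"crash_{i}.bin", "wb") as f:
                f.write(candidate)

if __name__ == "__main__":
    fuzz()
```

Real tools layer coverage feedback, symbolic execution, and automatic patching on top of this loop; the point is simply that the loop itself needs no human in it.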

It’s a risk cybersecurity experts are well aware of, at a time when the tech industry is already developing self-driving cars, more advanced robots, and other forms of automation. “Technology is always frightening,” said David Melski, vice president of research for GrammaTech.

Melski’s company was among those that built a supercomputer to participate in August’s Cyber Grand Challenge. His firm is now considering using that technology to help vendors prevent flaws in their internet of things devices or make internet browsers more secure.

“However, vulnerability discovery is a double-edged sword,” he said. “We are also increasingly automating everything.”

So it’s not hard for security experts to imagine a potential dark side—one where AIs can build or control powerful cyberweapons. Melski pointed to the case of Stuxnet, a malicious computer worm designed to disrupt Iran’s nuclear program.

“When you think about something like Stuxnet getting automated, that’s alarming,” he said.

Tapping into the potential

“I don’t want to give any ideas to anyone,” said Tomer Weingarten, CEO of security firm SentinelOne. But AI-driven technologies that crawl the internet, looking for vulnerabilities, might be among the future realities, he said.

That streamlining of cybercrime has already begun. Buyers on the black market can hire “rent-a-hacker” services, built with slick web interfaces and easy-to-understand commands, to pull off crimes such as infecting computers with ransomware.

Security experts wonder if AI will be used for malicious hacking.

Credit: Michael Kan

Weingarten said it’s possible these rent-a-hacker services may eventually incorporate AI technologies that can design entire attack strategies, launch them, and calculate the associated fee. “The human attackers can then enjoy the fruits of that labor,” he said.

However, the term AI is a loaded one. Tech companies may all be talking about it, but no company has created a true artificial intelligence. The industry has instead come up with technologies that can play games better than a human, act as digital assistants, or even diagnose rare diseases.

Cybersecurity firms such as Cylance have also been using a subset of AI called machine learning to stop malware. That work involves building mathematical models, trained on malware samples, that gauge whether activity on a computer is normal.

“Ultimately, you end up with a statistical probability that this file is good or bad,” said Jon Miller, chief research officer at the security firm. The machine learning detects the malware more than 99 percent of the time, he said.

“We’re continually adding new data (malware samples) into the model,” Miller said. “The more data you have, the more accurate you can be.”
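As a rough illustration of what Miller is describing, and not Cylance’s actual system, the sketch below trains a classifier on a few invented file-feature vectors and outputs a probability that an unseen file is malicious. The feature set, numbers, and labels are all made up; real products derive thousands of features from millions of labeled samples.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy training data: each row is a feature vector extracted from a file,
# here (size in KB, byte entropy, count of suspicious API imports).
# All values and labels are invented for illustration.
X_train = np.array([
    [512.0, 7.9, 12],   # small, high-entropy, many suspicious imports
    [2048.0, 7.5, 9],
    [4096.0, 4.1, 0],   # large, low-entropy, clean imports
    [1024.0, 3.8, 1],
])
y_train = np.array([1, 1, 0, 0])  # 1 = malware sample, 0 = benign sample

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# The output is what Miller describes: a statistical probability that a
# given file is good or bad, which a product can threshold to block malware.
unknown_file = np.array([[768.0, 7.7, 10]])
p_malicious = model.predict_proba(unknown_file)[0, 1]
print(f"Probability this file is malicious: {p_malicious:.2f}")
```

Feeding fresh malware samples into the training set and retraining is, in miniature, what Miller means by continually adding new data into the model.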

Escalation

A drawback is that machine learning can be expensive. “We spend half a million dollars a month on computer models,” Miller said. That money goes toward leasing cloud computing services from Amazon to run the models.

Anyone who attempts to use AI technologies for malicious purposes might face this same barrier to entry, and they’ll need to secure top talent to develop the programming. Over time, though, the costs of computing power will inevitably decrease, Miller said.


Still, the day when hackers resort to using AI may be far off. “Why hasn’t this been done? It’s just not necessary,” he said. “If you want to hack somebody, there are already enough known flaws in everything.”

To this day, many hacks begin with a phishing email containing malware sent to the target. In other cases, the victims secured their logins with weak passwords or neglected to update their software with the latest patches, making them easier to hack.

AI technologies like machine learning have shown the potential to solve some of these problems, said Justin Fier, director for cyber intelligence at security firm Darktrace. But it may be only a matter of time before hackers upgrade their arsenal.

That will pit cybersecurity firms against the hackers, with AI on the frontlines. “It seems like we’re heading into a world of machine versus machine cyber warfare,” Fier said.
