News: No one is safe from internet attacks, and AI defenses can't help, Google security veteran says
- Software powered by artificial intelligence is better for cyberattacks than cyberdefense, says a founding member of Google's security team.
- Heather Adkins advised consumers not to put certain personal information in their email communications.
- A hack "can happen to anyone," Adkins said.
A cybersecurity expert who has protected Google's systems for 15 years said Monday that no one is safe from internet attacks, and that software powered by artificial intelligence can't do much to defend against them.
Heather Adkins, director of information security and privacy and a founding member of Google's security team, also advised consumers not to put sensitive personal information in their online communications.
"I delete all the love letters from my husband," Adkins told several thousand people gathered for TechCrunch Disrupt 2017, a technology conference in San Francisco, after telling them "some stuff" like personal information shouldn't be put in emails.
Network attacks "can happen to anyone ... anywhere," Adkins said during an onstage interview in which she urged startups to assume they would get hacked eventually and to prepare a response plan.
Google has said that more than 1 billion people use its Gmail service.
Adkins' remarks came several days after the credit-reporting firm Equifax revealed one of the largest data breaches to date.
Adkins explained that AI-powered security software is not particularly effective even against attack methods that have been known since the 1970s.
"The techniques haven't changed. We've known about these kinds of attacks for a long time," Adkins told the crowd, pointing to a 1972 research paper by James Anderson.
While AI is very good at launching cyberattacks, it isn't necessarily any better than conventional systems for defense, because it produces too many false positives.
"AI is good at spotting anomalous behavior, but it will also spot 99 other things that people need to go in and check" out, only to discover it wasn't an attack, says Adkins.
The problem with applying AI to security is that machine learning requires feedback "to learn what is good and bad ... but we're not sure what good and bad is," especially when malicious programs mask their true nature, she said.
Asked what advice she would give businesses trying to keep their networks safe, Adkins recommended "more talent ... less technology."
"Pay some junior engineers and have them do nothing but patch," she said.