Generative AI technologies will boost hackers’ ability to trick people, according to a top National Security Agency official.
Generative artificial intelligence has made headlines recently with the emergence of OpenAI’s ChatGPT technology and other capabilities that can generate new content, such as text or images, based on their training using large language models or other tools.
During an event hosted by the Center for Strategic and International Studies on Tuesday, NSA cybersecurity director Rob Joyce was asked if technology like ChatGPT will help hackers create more effective phishing messages.
“I absolutely believe that,” Joyce said. “People have gone across the scale of, you know, how worried should they be about ChatGPT? I will tell you the technology’s impressive, right. It is really sophisticated. Is it going to, in the next year, automate all of the attacks on organizations? Can you give it a piece of software and tell it to find all the zero-day exploits for it? No. But what it will do is it’s going to optimize the workflow. It’s going to really improve the ability for malicious actors who use those tools to be better or faster.”
“And in the case of the malicious foreign actors, it will craft very believable native language, English text that could be part of your phishing campaign or your interaction with a person or your ability to build a backstory — all the things that will allow you to do those activities or even malign influence, right. That’s going to be a problem. So, is it going to replace hackers and be this super AI hacking [tool]? Certainly not in the near term, but it will make the hackers that use AI much more effective, and they will operate better than those who don’t,” he added.
There’s also a risk that foreign competitors might try to steal the intellectual property behind some of these generative AI technologies.
“I can’t talk to any specific threats. But … all of our industrial advancements that are game-changing have been targeted in the past, right. Whether it’s material science or chemicals or battery technology, I don’t care what it is — if we’ve innovated it and have the state of the art, you know it’s been under pressure from China and others to pull that and steal and bypass the investments our companies are making to develop it. And so I see no reason that there’s not a major focus on getting those [AI] models and bypassing all the investment and, you know, the capital it took to develop them,” Joyce said.
However, while generative artificial intelligence poses a threat, officials also believe it could be a useful tool for the U.S. intelligence community. The NSA and CIA, for example, are expressing interest in these types of capabilities.
Still, the technology currently has major shortcomings, experts say.
“I don’t know how many of you have played with … name your favorite model out there, but they hallucinate. And that’s the technical term of art meaning they will generate data that’s not real. I have to be able to generate real data to bring it to a company or the president [of the United States] or the warfighters. So, I have to get to the point where we’re understanding that outputs are factually accurate. In my world, that’s a high bar,” Joyce said.
“The idea that it will sort and provide acceleration — just like I talked about the advantage to the adversary — you know, every single day our analysts are overwhelmed with … what they have to focus on. If it can raise things up and even if it’s only 90% right on the things it surfaces, then the human can work in a much more enriched flow. And at that point, they become more effective,” he added. “So, I think that’s … the sweet spot. It’s a tool, [but] that’s not going to replace our analysts.”
Leaders at OpenAI have acknowledged some of the risks associated with the technology, including the threat of hallucinations and cyberattacks, while also touting its benefits.
The Pentagon is eyeing generative AI as a tool that could aid the military with “decision support and superiority.”
The Defense Department is scheduled to host a conference in McLean, Virginia, in June to discuss the technology. The confab is still in the early stages of planning, a DOD spokesperson told DefenseScoop recently.
However, Pentagon officials are also concerned about the hallucination problems that Joyce mentioned on Tuesday.
“On the trusted AI and autonomy front, we are working with companies that are developing these large language models. And we recognize, as all of you should recognize, that these things don’t have any common sense and they are word completion programs that use statistics to complete thoughts, you know, based on prompts,” Deputy CTO for Critical Technologies Maynard Holliday said at the Potomac Officers Club’s annual R&D Summit last month.
“It still hallucinates greatly and it does not have a corpus of defense-specific information that would give us decision advantage yet. And so, you know, we recognize that we’re going to have to develop that corpus of data so that these large language models can be useful — and then to minimize, you know, the hallucinatory side effects of what these large language models do,” he added.