As the Pentagon explores how generative artificial intelligence capabilities like ChatGPT can be deployed to support the department, Space Systems Command Chief Information Officer Col. Jennifer Krolikowski is hopeful that the technology can be useful, while recognizing that it still has shortfalls.
“I’m cautiously optimistic with it,” Krolikowski said Wednesday during a virtual conference hosted by C4ISRNET. “The enablement of that tech is very, very exciting to see, but I also think you need to have a bit of a critical eye on it.”
ChatGPT is an online chatbot launched in late 2022 by the research firm OpenAI that has gone viral. Broadly, generative artificial intelligence is able to take inputs from humans and create a range of content, including audio, code, images, text, videos and more.
Since its rise in popularity, U.S. government organizations like the Central Intelligence Agency, the Defense Information Systems Agency and others within the Defense Department have begun ruminating over how the technology could be used to aid their work.
Krolikowski pointed to the benefits of using tools like ChatGPT for day-to-day tasks, such as drafting a long document that would normally take significant time and resources, and then having humans go in and make edits afterward.
“It’s always easier to edit a document than it is to generate a document,” she said. “So I think using something like a ChatGPT to author, but then for us to put our actual brain cells against it to edit and then to validate it.”
Validating information from generative AI would be a key responsibility for the department in order to avoid inaccuracies, Krolikowski noted.
The Pentagon is worried about the generation of false information, known as "hallucinations," associated with the technology. The department plans to host a conference in June to examine how it can leverage the tech without spreading false information.
Rob Vietmeyer, the DOD's chief software officer, also stressed the importance of dealing with misinformation created by these types of platforms. He noted that one task will be better understanding why the models sometimes produce inaccuracies in their answers, as well as addressing other ethical concerns.
Still, AI will certainly have a role in the Defense Department’s data-centric future, Vietmeyer said during the virtual conference on Wednesday.
“As you look at all of the telemetry that’s going to be coming off of our environments, if you look at the skill sets that are going to be needed to assess software, looking for security vulnerabilities, apply attack models — AI is going to just be a core component,” he said. “And we’re going to have to work through the hallucinations and the other concerns that we have with these models. But it’s going to be an ongoing adventure.”