
No, ChatGPT didn’t write DOD’s latest autonomous weapons policy — but similar tech might be used in the future

The department's "always looking at ways to take advantage of" emerging tech, a tech policy chief said.
(Getty Images)

Officials did not use ChatGPT or any other generative artificial intelligence capability to write the recent update to the Pentagon’s overarching policy governing the use of autonomy in weapon systems. But that certainly doesn’t mean Michael Horowitz, director of the Defense Department’s Emerging Capabilities Policy Office, is ruling out such technologies to inform his team’s future policymaking activities.

“There’s, I think, broad agreement — especially in light of what’s become more public over the last few years, and not just ChatGPT but lots of advances in AI — that there will be capabilities coming online, mostly driven by the commercial sector that we think could have a substantial impact. And so, the department’s always looking at ways to take advantage of those,” Horowitz said during a virtual event hosted by the Institute for Security and Technology on Wednesday.

Launched in late 2022, ChatGPT quickly gained popularity online as a chatbot, built by the research firm OpenAI, that can interact with humans and perform tasks (with some accuracy) in a conversational manner. It is an application of generative AI, a field focused on developing and refining models, including large language models, that can generate audio, code, images, text, video and other media when humans direct them to.

The viral interest ChatGPT has sparked in recent months has already caught the attention of some federal entities. The Central Intelligence Agency, for example, is preparing to explore how generative AI could affect its operations, and the Defense Information Systems Agency recently announced plans to add the emerging technology to its forthcoming mid-year fiscal 2023 tech watch list.


When asked on Wednesday whether such tools were used by the officials who crafted the recent update to DOD Directive 3000.09, “Autonomy in Weapon Systems,” Horowitz playfully responded: “If there’s any part of the directive that you think is unclear — that was definitely written by ChatGPT and not by a person.”

“Jokes aside, the answer to that question is no,” he quickly added. 

Horowitz then shared his own personal opinion (not the DOD’s, he emphasized) on applying this tech: it generally doesn’t make sense, he said, to run most Pentagon policies or official documents through generative AI tools immediately before publication.

Instead, ChatGPT-like capabilities could bring some value earlier, in the drafting process.

A utility “could be in feeding it paragraphs and asking it to summarize” the information, and then “seeing whether ChatGPT thinks it says what you think it says. But that is not something [DOD] did with the directive,” Horowitz said. 


“That is definitively an unofficial position. And also, please follow the law. If anybody here works for the government, please follow all the regulations,” he added.
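To make that workflow concrete, here is a minimal, purely illustrative sketch of the summarize-and-compare check Horowitz described, assuming the publicly available OpenAI Python client; the model name, prompt and draft text are hypothetical choices for illustration, not anything DOD uses or endorses.

```python
# Illustrative sketch only -- not a DOD tool or process.
# Assumes the OpenAI Python client (`pip install openai`) and an API key
# available in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize(paragraph: str) -> str:
    """Ask a chat model to restate a draft paragraph in one sentence."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # hypothetical choice; any chat model works
        messages=[
            {
                "role": "system",
                "content": "Summarize the following draft paragraph in one sentence.",
            },
            {"role": "user", "content": paragraph},
        ],
    )
    return response.choices[0].message.content


# Placeholder draft text for illustration.
draft = "Paste a draft policy paragraph here."
print(summarize(draft))
# A drafter would then compare the model's summary against the intended
# meaning -- checking, as Horowitz put it, whether the paragraph says
# what you think it says.
```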
