“These are really powerful tools. There are a lot of questions, I think, about the security of the models themselves,” Mieke Eoyang, the deputy assistant secretary of Defense for cyber policy during the Joe Biden administration, told POLITICO Magazine in a wide-ranging interview about these concerns.
In our conversation, Eoyang also pointed to expert fears about AI-induced psychosis — the idea that long conversations with a poorly calibrated large language model could spiral into ill-advised escalation of conflicts. At the same time, she discussed a somewhat countervailing concern: many of the guardrails on public LLMs like ChatGPT or Claude, which discourage violence, are poorly suited to a military that needs to be prepared to take lethal action.
Eoyang still sees a need to think quickly about how to deploy these tools — in the parlance of Silicon Valley, "going fast" without "breaking things," as she wrote in a recent opinion piece. How can the Pentagon innovate and minimize risk at the same time?