Anthropic Appoints National Security Expert Richard Fontaine to Governing Trust Amid Growing Defense Ties
AI startup Anthropic has appointed Richard Fontaine, a seasoned national security expert, to its Long-Term Benefit Trust, just a day after unveiling new AI models tailored for U.S. national security use. The move signals Anthropic’s deepening alignment with defense priorities and comes as the company actively expands its government-facing initiatives.

The Long-Term Benefit Trust is a key part of Anthropic’s governance structure, designed to prioritize safety and long-term societal benefit over pure commercial interests. The trust holds the authority to elect certain members of the company’s board of directors. Fontaine joins a group that includes Zachary Robinson of the Centre for Effective Altruism, Neil Buddy Shah of the Clinton Health Access Initiative, and Kanika Bahl of Evidence Action.

CEO Dario Amodei stated that Fontaine’s addition will strengthen the trust’s strategic oversight at a time when AI is increasingly intertwined with national security concerns. Fontaine, who will not hold equity in the company, previously served as a foreign policy adviser to Sen. John McCain and as president of the Center for a New American Security, a leading Washington, D.C. think tank. He has also taught security studies at Georgetown University.

Anthropic’s growing engagement with defense clients includes a recent collaboration with Palantir and Amazon Web Services to deliver AI tools to U.S. defense agencies. Fontaine’s appointment further reflects the company’s strategic positioning as AI becomes a cornerstone of both commercial innovation and national defense.

Anthropic joins other major AI players in this push toward military applications. OpenAI is strengthening its ties with the Pentagon, Meta has opened its Llama AI models to defense partners, and Google is tailoring its Gemini AI for classified settings. Cohere is also working with Palantir to expand AI deployment in government use cases.

With Fontaine’s guidance, Anthropic appears poised to navigate the complex intersection of AI innovation and geopolitical security with a stronger focus on responsible governance.