Promotion of AI Use by Government and Enhancing Security Are Key Themes Likely to Shape U.S. AI Action Plan
As artificial intelligence continues to expand across public and private sectors, the Trump administration’s AI policy is starting to take shape. Recent AI guidance from the White House focuses on accelerating responsible AI adoption by federal agencies and enhancing the security of AI data and models, and gives us a likely preview of the AI Action Plan expected to be released next month. Two key themes have shaped the administration’s actions to date: 1) promoting AI use by the government, and 2) prioritizing security and protection.
To date, the administration’s actions are consistent with our predictions at the beginning of the term. The administration has made it a priority to advance and accelerate AI innovation responsibly, in ways that take steps to earn the public’s trust. While it takes a relatively light regulatory approach, we expect the AI Action Plan to leverage the federal government to model responsible and innovative AI use, use AI to enhance national security, and promote U.S. economic competitiveness by encouraging the use of AI systems and models developed by U.S. companies.
Promoting AI Adoption by Federal Agencies - Actions to Date
In case you missed it, two memoranda released in April — Accelerating Federal Use of AI through Innovation (M-25-21) and Driving Efficient Acquisition of Artificial Intelligence (M-25-22) — updated and replaced key components of the Biden administration’s AI policies (EO 14110), which President Trump repealed on his first day in office. Though largely technical in nature, the memos offer insight into how the administration is currently framing its approach to AI. Both documents outline the intention to use AI in innovative and effective ways to make government more efficient and to improve services provided by federal civilian agencies. In a surprise to some, they also restate prior policy positions focused on fostering public trust and direct agencies to mitigate, as appropriate, risks to privacy, civil rights, and civil liberties throughout the AI acquisition lifecycle.
Security Is Part of the Plan
The administration has also signaled growing attention to the security of AI systems. In April 2025, the Cybersecurity and Infrastructure Security Agency (CISA) and the Federal Bureau of Investigation (FBI), in partnership with international counterparts, released joint guidance titled Creating Secure Infrastructures for AI and Data Security.
This guidance outlines best practices for deploying AI in sensitive or critical contexts and marks the first publicly released federal document under the current administration to specifically address the resilience of AI infrastructure. It suggests a broader concern with the operational readiness of AI systems in national security and public sector environments.
What to Expect in the AI Action Plan
As we await the comprehensive AI Action Plan, directed by one of President Trump’s first executive orders, we look to the administration’s initiatives to date to predict its scope. We conclude it will likely promote AI use by federal agencies and address national security objectives. What we do not expect is new regulation directed at how industry develops and uses AI responsibly, outside of its impact on national security.
Testing AI systems for security and to prevent unintended uses is a national security imperative and an area where both the federal government and private companies have critical roles to play. Anthropic and HackerOne, for example, recently expanded a bug bounty program designed to identify ways users might circumvent Anthropic’s safety defenses to obtain information about chemical, biological, radiological, and nuclear weapons.
Also important is testing to mitigate other unintended outcomes from AI systems that can negatively impact individuals and the economy. The federal government can and should build public trust in AI, but the government is not the only actor that must take steps to ensure AI is developed and used in ways that benefit individuals and society. The OMB memos require federal agencies to implement minimum risk management practices for “high-impact” AI use cases (those that may affect rights, safety, or access to services), including discontinuing the use of AI that is not performing at an appropriate level. Ideally, those themes will be carried through in the AI Action Plan.
Federal agencies are actively working to develop the plans required by the OMB memos, and agency leadership will soon have the opportunity to review a draft of the AI Action Plan. We encourage agency leaders to embrace the opportunity to advance their missions and improve operations by incorporating AI into their strategies. By building best practices, such as independent research and testing of AI systems, into these plans, the administration can drive innovation responsibly and securely while achieving its AI policy objectives.