The New Rules for Selling AI to the U.S. Government
Harvey Morrison: Co-Founder/CEO, Marion Square
For the past two years, most of the conversation around artificial intelligence in government has focused on experimentation.
Agencies have been running pilot programs. Innovation offices have been testing generative AI. Technology companies have been racing to demonstrate how their models can improve mission outcomes.
But the next phase of the market is beginning to emerge.
The federal government is shifting from AI experimentation to AI procurement, and that shift requires something very different: rules governing how AI systems interact with government data, infrastructure, and decision-making processes.
A recently released draft clause from the General Services Administration titled “Basic Safeguarding of Artificial Intelligence Systems” provides one of the clearest signals yet of what those rules will look like.
While the clause is still in draft form, it outlines detailed requirements governing data ownership, supply chain trust, explainability, and interoperability.
Taken together, these provisions point toward a clear direction for federal AI adoption:
The government wants the power of AI, but it will not surrender control of its data, systems, or operational decision-making in the process.
For technology companies pursuing federal markets, that distinction matters.
Government Data Must Remain Government Data
One of the most significant provisions in the draft clause addresses ownership of information generated through AI systems used by federal agencies.
Under the proposed language, the government retains full ownership of all data submitted to an AI system and all outputs generated by that system, including prompts, responses, analyses, derivative data, and generated content.
Contractors and service providers may only use that data for the limited purpose of performing the contract itself, and those rights are revocable and non-transferable.
More importantly, the clause explicitly prohibits contractors from training, fine-tuning, or improving AI models using government data.
This requirement reflects a growing concern among policymakers that government deployments of AI could inadvertently become a training resource for commercial systems.
For many AI vendors, this means federal deployments will require strict separation between government workloads and commercial model development pipelines.
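In practice, that separation is often enforced at the data-pipeline level: records from government workloads are tagged and filtered out before any training or fine-tuning job runs. The sketch below is illustrative only; the tenant labels, record structure, and function names are assumptions, not part of the draft clause.

```python
# Hypothetical sketch: excluding government-tagged records from a
# commercial training pipeline. All names here are illustrative.
from dataclasses import dataclass


@dataclass
class Record:
    tenant: str  # e.g. "gov" or "commercial"
    text: str


def training_batch(records: list[Record]) -> list[Record]:
    """Return only records eligible for model training.

    Government workloads are excluded entirely, reflecting the draft
    clause's prohibition on training, fine-tuning, or improving models
    with agency data.
    """
    return [r for r in records if r.tenant != "gov"]


batch = training_batch([
    Record("gov", "agency prompt"),
    Record("commercial", "opted-in sample"),
])
# Only the commercial record survives the filter.
```

A filter like this is only meaningful if the tenant tag is applied reliably at ingestion, which is why the clause's "strict separation" tends to mean separate storage and pipelines, not just a flag.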
A Push Toward “American AI Systems”
The clause also introduces a supply-chain requirement that reflects broader shifts in U.S. technology policy.
The draft language requires contractors to use “American AI Systems,” defined as systems developed and produced in the United States.
The use of foreign AI systems or systems incorporating components controlled by non-U.S. entities is prohibited for contract performance.
This provision mirrors similar trends across federal policy in areas such as semiconductors, telecommunications infrastructure, and cybersecurity technologies.
As artificial intelligence becomes more central to national competitiveness and government operations, policymakers increasingly view AI platforms as critical infrastructure rather than simply software tools.
Explainability and Human Oversight
The proposed clause also introduces strict requirements around transparency in AI decision-making.
Contractors must provide mechanisms that allow the government to implement human oversight, traceability, and intervention within AI systems.
At a minimum, AI systems must provide access to:
• summarized intermediate processing steps
• model routing decisions and associated rationale
• documentation of retrieval methods used in response generation
• source attribution for materials used to produce outputs
These provisions reflect a practical reality. Many federal missions, such as law enforcement, benefits administration, national security analysis, and public policy decisions, require systems whose outputs can be validated and explained.
Opaque “black box” systems are unlikely to meet that standard.
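One way to picture these four minimum access points is as a structured trace attached to each response, which oversight tooling can inspect after the fact. The field names and values below are illustrative assumptions, not a schema from the draft clause.

```python
# Illustrative sketch of an auditable response trace covering the four
# minimum access requirements: intermediate steps, routing rationale,
# retrieval documentation, and source attribution. Field names assumed.
import json

trace = {
    "intermediate_steps": [
        "summarized query into retrieval terms",
        "ranked candidate passages by relevance",
    ],
    "routing": {
        "model": "model-a",
        "rationale": "query classified as a policy lookup",
    },
    "retrieval": {
        "method": "keyword index over agency documents",
        "documents_searched": 1200,
    },
    "source_attribution": [
        {"source": "agency-handbook.pdf", "pages": [4, 5]},
    ],
}

# Serializing the trace keeps it machine-readable for audit tooling.
trace_json = json.dumps(trace, indent=2)
```

A record like this is what makes human intervention practical: a reviewer can see which model handled the request, what it consulted, and where each claim came from.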
Preventing Vendor Lock-In
The draft clause also reinforces a long-standing principle in federal technology procurement: interoperability.
Contractors must ensure that AI outputs and associated data can be exported in open, machine-readable formats such as JSON or XML, enabling agencies to migrate information between platforms.
The intent is straightforward.
Agencies must be able to adopt AI technologies without becoming permanently dependent on a single vendor platform.
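The export requirement can be met with standard serialization rather than anything exotic. The sketch below shows the same output records rendered as both JSON and XML; the record structure is an assumption for illustration.

```python
# Minimal sketch: exporting AI outputs in open, machine-readable formats
# so an agency can migrate records between platforms. Structure assumed.
import json
import xml.etree.ElementTree as ET

outputs = [
    {"id": 1, "prompt": "status summary", "response": "All systems nominal."},
]

# JSON export: a direct serialization of the records.
json_export = json.dumps(outputs)

# XML export: the same records as elements with child fields.
root = ET.Element("outputs")
for rec in outputs:
    item = ET.SubElement(root, "output", id=str(rec["id"]))
    ET.SubElement(item, "prompt").text = rec["prompt"]
    ET.SubElement(item, "response").text = rec["response"]
xml_export = ET.tostring(root, encoding="unicode")
```

The point is less the format than the property: any competing platform can parse either export without access to the original vendor's software.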
Strict Data Protection Requirements
The proposed clause also includes detailed requirements governing how government data must be protected.
Contractors must implement safeguards to prevent unauthorized access, disclosure, or misuse of government data.
These safeguards include:
• “eyes-off” data handling procedures that limit human access to government data
• logging and justification requirements for any human access that occurs
• logical segregation of government data from other customers’ data
• continuous monitoring to detect unauthorized access or misuse
At the conclusion of a contract, all government data must be securely deleted from contractor systems, with written certification provided to the contracting officer.
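The logging-and-justification requirement implies that human access is gated: no entry in the audit log, no access. The sketch below illustrates that pattern under stated assumptions; the log structure and function names are hypothetical.

```python
# Hedged sketch of logging-and-justification for human access to
# government data. All structures here are illustrative assumptions.
from datetime import datetime, timezone

access_log: list[dict] = []


def record_access(user: str, dataset: str, justification: str) -> None:
    """Append an auditable entry before any human access occurs.

    Access without a documented justification is refused outright,
    mirroring the clause's "eyes-off" default.
    """
    if not justification.strip():
        raise ValueError("human access requires a documented justification")
    access_log.append({
        "user": user,
        "dataset": dataset,
        "justification": justification,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })


record_access("analyst-01", "case-files", "incident triage, ticket 4821")
```

In a real deployment the log would live in tamper-evident storage and feed the continuous-monitoring requirement, rather than an in-memory list.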
The Early Framework for Federal AI Procurement
Although the clause is still in draft form, it offers a clear preview of how the federal government intends to operationalize AI adoption.
Several themes emerge.
First, data sovereignty is non-negotiable. Government data will not be used to improve commercial AI models.
Second, AI systems must remain explainable and auditable, allowing agencies to understand and validate system outputs.
Third, supply chain trust is becoming a requirement, with growing emphasis on domestically developed AI systems.
Finally, interoperability remains central, ensuring agencies retain flexibility in their technology environments.
Taken together, these principles represent the early foundation of a federal AI procurement framework.
What This Means for AI Companies
For AI companies pursuing federal markets, the message is increasingly clear.
Success will not be determined solely by the performance of AI models.
It will depend on whether platforms are designed to operate within the government’s requirements for governance, security, and transparency.
This means supporting:
• strong data isolation
• explainable and auditable decision paths
• exportable and interoperable data architectures
• compliance with emerging federal AI policy
The federal AI market will not simply reward the most advanced models.
It will reward the platforms that are architected to operate inside government systems, policies, and trust frameworks.
In that sense, the next phase of the federal AI market is not just about better models.
It is about government-grade AI.