Federal agencies are implementing Model Context Protocol, an open-source standard, to ensure AI chatbots like ChatGPT and Claude can accurately access and use publicly available government data when responding to user queries.
Multiple federal agencies are deploying Model Context Protocol (MCP) to bridge the gap between large language models and government datasets. The Census Bureau and Government Publishing Office have both released public MCP servers on GitHub. A pilot study by US Digital Corps fellows testing MCP with ChatGPT and Gemini showed accuracy rates jumping from near 0% to 95% when querying USASpending and CDC PLACES data. The Census Bureau's MCP server is publicly available with configuration instructions, though it doesn't yet cover all Census data. GPO launched its MCP server in January for GovInfo data, including congressional documents and Federal Register postings. The Centers for Medicare and Medicaid Services and Treasury Department both reference MCP deployment efforts in their 2025 AI use case inventories.
Large language models are pretrained on internet data and generate responses based on statistical likelihoods, but they struggle to access federal data sources even though that information is publicly available. Government officials have long faced a related challenge: much public data reaches people through intermediaries rather than directly. The Commerce Department published AI-ready data best practices approximately a year ago, and the Federal Committee on Statistical Methodology later built on that work, specifically encouraging agencies to experiment with MCP. Anthropic created MCP in 2024 and donated it to the Agentic AI Foundation, an offshoot of the Linux Foundation, in December. The protocol has been adopted by major platforms including Claude, ChatGPT, Gemini, and Copilot.
MCP functions as a "switchboard operator" that ensures LLMs can access what they need to respond to user prompts. Data owners like the Census Bureau create an MCP server that acts as an intermediary between the model and an existing application programming interface (API) for a dataset. The Census Bureau plans to experiment, start small, iterate, and expand in areas where there is demand. The pilot study conducted detailed testing with Claude Sonnet 4, comparing performance with and without MCP servers. When asked about CDC PLACES data for Worcester County, Massachusetts, Claude without MCP searched the web, attempted to reach the dataset directly, failed, and ultimately provided steps on how to access the data manually. With MCP implemented, Claude immediately produced the requested statistic with confidence intervals and population data.
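The intermediary pattern described above can be sketched in plain Python. This is an illustrative simplification, not the Census Bureau's actual server: a real MCP server speaks the protocol's JSON-RPC transport and performs the HTTP request itself, but the core idea is the same, with the server advertising "tools" that simply wrap calls to an existing public API. The tool name, handler, and query fields below are hypothetical.

```python
import urllib.parse

# Illustrative sketch of the MCP intermediary pattern: the server exposes
# "tools" the model can call, and each tool wraps an existing public API.
# The endpoint is the real Census Data API base URL, but the tool name and
# parameters are simplified placeholders, not the bureau's actual server.

CENSUS_API = "https://api.census.gov/data/2022/acs/acs5"

def build_census_query(variables: list[str], state_fips: str) -> str:
    """Translate a tool call's arguments into the underlying API request URL."""
    params = {"get": ",".join(variables), "for": f"state:{state_fips}"}
    return CENSUS_API + "?" + urllib.parse.urlencode(params)

# A minimal tool table: an MCP server advertises entries like these to the
# model, which decides when to invoke them in response to a user prompt.
TOOLS = {
    "get_acs_estimate": {
        "description": "Fetch American Community Survey estimates for a state.",
        "handler": build_census_query,
    }
}

def handle_tool_call(name: str, arguments: dict) -> str:
    """Dispatch a model's tool call to its handler. A real server would
    perform the HTTP request and return the API's JSON to the model."""
    return TOOLS[name]["handler"](**arguments)

if __name__ == "__main__":
    # Example call: total population (B01001_001E) for Massachusetts (FIPS 25).
    print(handle_tool_call(
        "get_acs_estimate",
        {"variables": ["NAME", "B01001_001E"], "state_fips": "25"},
    ))
```

Because the server only translates tool calls into requests against an API that already exists, the model never needs the dataset in its training data; it retrieves the statistic at answer time, which is why the pilot saw accuracy jump once MCP was in place.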
"What's interesting, I think, about LLMs in this context, is that they actually amplify an age-old problem that we've had, which is: a lot of our data reaches the public through indirect sources," Luke Keller, the US Census Bureau's chief innovation officer and bureau AI lead, said at a recent virtual event.
Keller noted that widespread industry adoption is why the Census Bureau began exploring MCP, saying it has "a lot of wind in its sails." He cautioned that while data owners "should be taking MCP quite seriously," the standard "is not the answer. It's one of many answers."
"For the first time, GPO is providing an officially supported method to allow the use of LLMs and AI agents to 'converse' with GovInfo, the world's only certified trustworthy digital repository," GPO stated in its January announcement.
The US Digital Corps fellows report concluded, "Our results indicate the enormous potential of MCPs to advance accessibility of federal data. With MCP deployment, agencies can enable accurate, reliable machine interpretation and unlock more nuanced interactions with federal data for academic research, policy development, and program evaluation."
Model Context Protocol is an open-source standard that connects AI models with data sources so they can provide answers directly based on specific information. Data owners create MCP servers that act as intermediaries between the model and existing APIs for datasets. Federal law requires government data to be machine-readable, meaning information can be easily processed by computers without loss of semantic meaning. However, that law was written during the API era, and in the age of AI, its meaning is changing.
This development directly addresses an accuracy problem as more Americans turn to AI chatbots for information about government data and services. The technology also has implications for legal researchers, policy analysts, academics, and anyone conducting research or analysis using federal datasets. As Commerce's Dominique Duval-Diop noted, making data work with AI technologies isn't optional; it's consistent with statutory obligations to ensure public data accessibility, though those obligations now extend beyond traditional API-era interpretations.
That adoption by Claude, ChatGPT, Gemini, and Copilot suggests broad compatibility across leading AI systems.
The MCP servers from Census Bureau and GPO are publicly available on GitHub for anyone to configure with their models.
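For readers curious what "configure with their models" means in practice, client setup typically amounts to telling a chat application how to launch the server locally. A hypothetical entry in a desktop client's MCP configuration file might look like the following; the server name, path, and launch command are placeholders for illustration, not the agencies' documented instructions.

```json
{
  "mcpServers": {
    "census-data": {
      "command": "python",
      "args": ["/path/to/cloned-census-mcp/server.py"]
    }
  }
}
```

Once registered, the client launches the server and the model can call its tools automatically when a prompt asks about the underlying data.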
MCP servers act as intermediaries between existing APIs and AI models, suggesting agencies can use their current data infrastructure.