Policy Backgrounder: Future of AI Policy: Insights from Stakeholder Input

Policy Backgrounders

CED’s Policy Backgrounders provide timely insights on prominent business and economic policy issues facing the nation.

The Administration received more than 10,000 responses to its request for information related to artificial intelligence (AI) policy. The comments highlighted both areas of broad agreement and divergence across technology firms, producers of copyrighted content, researchers, and state governments. The Administration is scheduled to release its AI action plan in July, providing a window into its thinking on these issues.

Key Insights

  • Respondents shared broad agreement on the importance of maintaining a leading US role in the development of AI technology and governance. However, comments highlighted disagreements about the measures (e.g., export controls) that should be used to maintain that lead.
  • Technology firms and producers of copyrighted content also generally disagreed about the application of the fair use doctrine, with developers generally favoring broader application of fair use and producers advocating for stricter copyright enforcement.
  • Respondents noted that powering AI models will require a significant increase in energy production, recommending increased public sector investments and streamlined permitting for projects.
  • With Congress unlikely to consider a broad AI regulatory framework soon, these unresolved issues may be addressed by state lawmakers. Without Federal preemption, this could lead to a patchwork regulatory environment that may pose compliance challenges for developers and uneven protections for the public.

Background

In January, the President issued an Executive Order directing Federal agencies to develop an AI action plan “to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.” Following the release of the Order, the White House Office of Science and Technology Policy issued a request for information (RFI), inviting the public to comment on the highest policy priorities that should be included in the government’s action plan on a range of topics, including model development, use cases, and intellectual property. The government received more than 10,000 responses to the RFI, including from academic institutions, state and local governments, industry associations, and private firms.

Common Themes

Key respondents from technology firms, trade associations, researchers, and state governments identified several common themes in their recommendations for policymakers.

US Global Leadership & National Security

The strategic importance of US leadership in the development of AI technology and governance emerged as a common theme among technology firms, often cast in the framework of a geopolitical competition between a democratic order led by the US and an authoritarian alternative led by China. Many firms suggested that the US should play a leading role in developing market-oriented, interoperable international technical standards and regulatory frameworks that would encourage innovation and facilitate fair competition across jurisdictions. Respondents also emphasized potential national security concerns related to AI, noting the potential for “dual-use” – civilian and military – applications with implications for biosecurity, cybersecurity, and military capabilities.

Respondents offered a range of potential policy approaches to these concerns with some advocating for more aggressive measures and others recommending a more restrained approach. Some firms encouraged the Federal government to treat frontier AI models as “critical national assets,” calling for increased funding for national security testing infrastructure, restrictions on exporting certain chips and model weights, lowering thresholds for foreign chip procurement reviews, establishing formal channels for coordination between frontier AI labs and the intelligence community, and enhancing security protocols at frontier labs to guard against model theft. Others, while acknowledging important national security considerations, favored less aggressive measures. These include facilitating communication between the government and technology companies to improve security along the supply chain and technology stack, developing voluntary technical and security standards, and deploying AI to help defend government systems.

The comments revealed disagreement among technology companies regarding export controls for AI chips. Some favored tightening export controls on certain chips, including by making the Biden Administration’s Diffusion Rule, a new export control policy, more restrictive. However, others called for rethinking US export control policy – with some arguing that it is ineffective – and criticized the Diffusion Rule, arguing that it is overly complex, makes US chipmakers less competitive, and encourages impacted countries to shift away from US technology.

Notably, the Trump Administration recently announced that it would not enforce the Diffusion Rule and would move to rescind it, though the Administration continues to pursue the underlying policy objective of limiting adversaries’ access to advanced chips and AI technology. In February, the Administration released an “America First Investment Policy,” which announced that the US will “establish new rules to stop United States companies and investors from investing in industries that advance the PRC’s national Military-Civil Fusion Strategy” while “allowing only those investments that serve American interests.” This will presumably cover investments in at least some AI technologies.

In addition, on the same day the Administration announced its decision regarding the Diffusion Rule, the Department of Commerce released narrower guidance relying on existing regulations. The guidance found that advanced chips used to train AI models could enable military intelligence applications and the development of weapons of mass destruction, and it outlined related export activities that would require government approval.

Intellectual Property

Respondents highlighted the need to update and clarify intellectual property laws to respond to the unique considerations AI models have begun to introduce, noting that existing ambiguity is a potential barrier to innovation. Some respondents noted that improving digital access to non-copyrighted assets (e.g., information and images held by the National Archives, the Library of Congress, the Smithsonian Institution, and other government agencies) would assist with model development. However, respondents expressed differing views on protecting intellectual property rights and copyrighted material. Some firms, particularly AI developers, argued that using copyrighted materials in AI model development should fall under the fair use doctrine, while others, typically firms that create such content and their trade associations, emphasized the importance of strong copyright protections, arguing that such protections do not hinder model development. Instead, these copyright owners favor negotiating rights agreements with model developers.

Some respondents also encouraged policymakers to support the development of open-source AI models – which, they argue, are more secure, promote competition, lower barriers to entry, and provide greater transparency – by avoiding model export controls that would favor closed models. Open-source proponents also suggested that the government should adopt an “open by default” software policy.

Infrastructure & Manufacturing

Across respondents, there was broad agreement that significant investments in US technical infrastructure (e.g., datacenters and broadband networks) and energy production will be needed to support continued AI innovation. Firms called on the government to accelerate the permitting approval process for energy and datacenter projects, invest in grid-enhancing technologies, and create incentives to improve energy efficiency. Respondents representing state and local governments acknowledged the importance of investing in energy infrastructure but also emphasized that local communities should have decision-making authority over infrastructure projects. Firms also recommended policy changes that would support co-locating datacenters with power plants to reduce transmission losses and increase investment in advanced energy sources such as small modular nuclear reactors.

Respondents also emphasized the importance of promoting domestic manufacturing of both chips and the components needed to expand technical and energy infrastructure. Firms recommended a range of potential policy measures to advance these goals including extending the CHIPS Act’s Section 48D tax credit designed to support advanced manufacturing, establishing new tax incentives for research and development, expediting approvals for domestic use of chemicals needed in chip manufacturing, and revising “Made in America” standards to allow for more supply chain flexibility. Respondents also called for enhanced public-private partnerships to support AI research and development.

Regulation & Governance

Respondents expressed a range of views on regulating AI uses, with some noting that burdensome regulation could stifle innovation and others expressing concern that insufficient oversight could pose risks. Several respondents recommended a risk-based approach to regulation tailored to specific AI use cases rather than sweeping, one-size-fits-all frameworks. Many firms also suggested that the government could play a central role in developing technical standards, and some recommended a tiered classification system to assess and regulate models based on their capabilities and potential misuse risks rather than on model or training data size. Respondents also recommended defining prohibited use cases and establishing disclosure and transparency requirements so humans will know when they are interacting with an AI model.

Several AI researchers highlighted the need for a governance framework that both adapts to rapid technological change and responds to potential risks. Some respondents recommended a “functional equivalence” approach – for example, requiring that developers of models providing medical advice be held to the same accountability standards as human medical providers. Others echoed technology firm calls for safety standards that could be enforced through auditing ecosystems, mandatory for high-risk systems and voluntary for lower-risk systems. Several respondents also highlighted the importance of giving individuals control over how AI systems may use data related to them (particularly in sensitive settings such as education, healthcare, and employment) and providing transparency and explainability when AI models are used for decision-making in some contexts (e.g., employment or lending).

Respondents also highlighted tensions between Federal and state approaches to regulation. Some firms noted that a patchwork of state AI laws poses compliance challenges, while others representing state governments argued that state-level regulation provides improved opportunities for policy innovation and responsiveness compared to Federal regulation. 

Talent & Workforce Development

A notable shared area of concern for respondents across categories – including technology firms, AI researchers, and state governments – is the need for investments in labor force education and training, not only to supply the talent that will continue to advance AI innovation but also to prepare the broader labor force to succeed in an economy impacted by AI. Respondents broadly expressed support for expanding STEM and technical training programs at all levels of education. Firms also emphasized the role of businesses in upskilling the workforce and supporting public educational programs. Respondents also called for improving AI literacy, developing trade skills through supporting and modernizing apprenticeship programs, and expanding the use of microcredentials.

Policy Outlook

The RFI responses can be seen as an outline of the policy tensions among AI stakeholders rather than a roadmap for potential legislative or regulatory measures along the lines of CED’s Solutions Brief “Principles for AI Guardrails in the US,” designed to promote safety, security, and innovation. While many of the objectives highlighted by respondents have bipartisan support in principle – e.g., maintaining US leadership in AI development, investing in domestic infrastructure and the manufacturing base, enhancing STEM education – policymakers frequently disagree about the specific policy measures needed to achieve those objectives. In addition, other areas raised by respondents – e.g., intellectual property, export controls, and approaches to risk mitigation – are heavily debated both among policymakers and other stakeholders. Congress has not signaled an intention to advance legislation establishing a regulatory framework for AI in the near term, indicating that these policy tensions are unlikely to be resolved soon. In the absence of Federal clarity, however, US states are likely to continue advancing their own rules, which has prompted some lawmakers to propose a Federal moratorium on state AI regulations.

Conclusion

As the July 22 deadline for the AI Action Plan approaches, key questions remain unanswered. While the Administration has consistently voiced support for advancing US leadership in AI innovation, it has yet to articulate clear positions on several critical policy areas, including data privacy, intellectual property, and open-source governance. Whether the forthcoming action plan will introduce substantive policy shifts or remain largely strategic is unknown.

Meanwhile, the US may benefit from closely observing developments in the European Union, which, after approving the world’s first comprehensive AI regulatory regime last year, recently indicated a desire to simplify its approach to facilitate innovation and investment. Diverging global regimes risk imposing complex and conflicting compliance burdens on developers, particularly startups and cross-border platforms. 

May 22, 2025
