Nearly three-quarters of the S&P 500—72% of companies—now flag AI as a material risk in their public disclosures. That’s up from just 12% in 2023, underscoring how rapidly AI has moved from experimental pilots to business-critical systems, and how urgently boards and executives are bracing for reputational, regulatory, and operational risks.
Reputational risk tops the list, cited by 38% of companies. Firms warn that failed AI projects, missteps in consumer-facing tools, or breakdowns in service could quickly erode brand trust. Cybersecurity risks follow, disclosed by 20% of firms. Filings flag a dual peril: AI expands the surface on which a company can be attacked while also arming adversaries with more sophisticated tools.
“We’re seeing a clear theme emerging across disclosures: Companies are worried about AI’s impact on reputation, security, and compliance. The task for business leaders is to integrate AI into governance with the same rigor as finance and operations, while communicating clearly to maintain stakeholder confidence,” said Andrew Jones, author of the report and Principal Researcher at The Conference Board.
These findings come from a new report by The Conference Board and ESGAUGE, based on Form 10-K filings from S&P 500 companies available through August 15, 2025.
1—Disclosure Trends
Public company disclosure of AI as a material risk has surged in the past two years.
- From 2023 to 2025, the share of companies reporting AI-related risks jumped from 12% to 72%.
While all sectors are disclosing risks, financials, health care, and industrials have seen the sharpest rise.
- From 2023 to 2025, the number of companies disclosing AI-related risks jumped in financials (from 14 to 63 companies), health care (from 5 to 47), and industrials (from 8 to 48).
- Why these sectors? Financial and health care companies face regulatory and reputational risks tied to sensitive data and fairness, while industrials are scaling automation and robotics.
2—Reputational Risks
Implementation failures, consumer-facing mistakes, and privacy breaches are the leading sources of risk.
- Reputational risk is the most frequently cited AI concern, disclosed by 38% of companies in 2025.
- Implementation & adoption (45 companies): Reputational fallout may follow if AI projects fail to deliver promised outcomes, are poorly integrated, or are perceived as ineffective.
- Consumer-facing AI (42 companies): Missteps—such as errors, inappropriate responses, or service breakdowns—are considered highly damaging, particularly for consumer-oriented brands.
- Privacy and data protection (24 companies): Mishandling sensitive information is flagged as a reputational hazard; breaches can spark regulatory action and public backlash.
“Reputational risk is proving to be the most immediate and visible threat from AI adoption. One lapse—an unsafe output, a biased decision, or a failed rollout—can spread rapidly, driving customer backlash, investor skepticism, and regulatory scrutiny in ways that traditional failures rarely do,” said Brian Campbell, Leader of The Conference Board Governance & Sustainability Center.
3—Cybersecurity Risks
Cybersecurity risk tied to AI was cited by 20% of firms in both 2024 and 2025.
- Companies note that AI both enlarges attack surfaces—through new data flows, tools, and systems—and strengthens adversaries by enabling more sophisticated, scalable attacks.
- AI-amplified cyber risk (40 companies): Disclosures describe AI as a force multiplier, escalating the scale, sophistication, and unpredictability of cyberattacks.
- Third-party and vendor exposure (18 companies): Disclosures point to vulnerabilities stemming from reliance on cloud providers, SaaS platforms, and external partners.
- Data breaches and unauthorized access (17 companies): Breaches are a central concern, with firms emphasizing how AI-driven attacks can expose sensitive customer and business data.
4—Legal & Regulatory Risks
Legal and regulatory risk stands out as one of the most persistent themes in AI reporting.
- Unlike reputational or cybersecurity risks, which can manifest quickly, legal risk is framed as a longer-tail governance challenge that can lead to protracted litigation, regulatory penalties, and reputational harm.
- Evolving regulation (41 companies): Firms cite difficulty in planning AI deployments amid fragmented and shifting rules. For example, the EU AI Act is frequently flagged for its strict requirements on high-risk systems, conformity assessments, and non-compliance penalties.
- Compliance and enforcement (12 companies): Many disclosures warn that new AI-specific rules will bring heightened compliance obligations and potential enforcement actions.
- Cross-cutting legal risks (6 companies): Filings highlight uncertainty over how courts will treat IP claims tied to AI training data or who bears liability when autonomous AI systems cause harm.
5—Emerging Risks
Intellectual property, privacy, and adoption risks are now surfacing in company disclosures.
- These risks reflect unsettled legal environments as well as strategic uncertainty around business models, customer relationships, and long-term competitiveness:
- Intellectual property (24 companies): Firms highlight risks spanning copyright disputes, trade-secret theft, and contested use of third-party data for model training.
- Privacy (13 companies): Firms warn of exposure involving sensitive data under the General Data Protection Regulation (GDPR), the Health Insurance Portability and Accountability Act (HIPAA), and California privacy laws (CCPA/CPRA).
- Technology adoption (8 companies): Several firms point to risks in execution—high costs of new platforms, uncertain scalability, and the possibility of under-delivering on promised returns.