
C-SUITE PERSPECTIVES

Are Companies Paying Enough Attention to AI Risks?

10 NOVEMBER 2025

Companies in the S&P 500 are increasingly disclosing AI-related risks. Find out what this means for C-Suite leaders and boards.


More than 70% of the S&P 500 disclosed material AI risks in 2025, up from only 12% in 2023. What are the biggest AI-related risks for these companies, and how can they integrate AI into governance and risk frameworks? 

  

Join Steve Odland and guest Andrew Jones, principal researcher at the Governance & Sustainability Center of The Conference Board, to discover why AI disclosures have soared since 2023, the challenges of divergent regulations in the EU and US, and why AI further complicates cybersecurity.

 

For more from The Conference Board: 

Are Companies Paying Enough Attention to AI Risks?



C-Suite Perspectives

C-Suite Perspectives is a series hosted by our President & CEO, Steve Odland. This weekly conversation takes an objective, data-driven look at a range of business topics aimed at executives. Listeners will come away with what The Conference Board does best: Trusted Insights for What’s Ahead®.

C-Suite Perspectives provides unique insights for C-Suite executives on timely topics that matter most to businesses as selected by The Conference Board. If you would like to suggest a guest for the podcast series, please email csuite.perspectives@conference-board.org. Note: As a non-profit organization under 501(c)(3) of the IRS Code, The Conference Board cannot promote or offer marketing opportunities to for-profit entities.


Transcript

Steve Odland: Welcome to C-Suite Perspectives, a signature series by The Conference Board. I'm Steve Odland from The Conference Board and the host of this podcast series, and in today's conversation, we're going to talk about how businesses are grappling with AI risk.

Joining me today is Dr. Andrew Jones, principal researcher at The Conference Board in our Governance & Sustainability Center. Andrew, welcome.

Andrew Jones: Thanks so much, Steve. Great to be back with you again.

Steve Odland: Andrew, you've written a new report, and it focuses on the S&P 500, so the larger firms, but talk to us about the scope of what you looked at for AI disclosure, AI risk, and so forth. What does AI mean in this context?

Andrew Jones: Yeah, sure. Happy to, Steve. And yeah, the Governance & Sustainability Center at The Conference Board put out some new research recently in a brand new report on how, as you said, S&P 500 companies are describing AI-related risks in the risk factors section of their annual SEC filings. That's their Form 10-K. We've looked back over the last few years, starting in 2023 all the way up to 2025. So, it's capturing the period where AI has quickly mainstreamed and been adopted by a lot of companies.

And for those who don't know, the risk factor section of the Form 10-K is a standardized, legally required disclosure meant to alert investors to material business risks. So a risk that a reasonable investor might consider important when making decisions. So looking at these risk disclosures can be a really valuable lens on how companies, how boards, how executives are thinking about risks, thinking about what's material, thinking about what's ahead, and how they're adapting their governance.

And AI is obviously a really interesting area because of how quickly it's exploded in the last few years. It's quite a timely and quite an innovative study. No one else has really done this kind of work yet, and I think it really offers a window into how big companies are integrating AI into their frameworks, into their enterprise risk management frameworks, into their governance processes, and how it's becoming a key responsibility for both the C-Suite and the board.

Steve Odland: So this study looked at the S&P 500 companies and their disclosures, what they've written about it. And so the scope of AI is whatever it was for that company that they deemed to be material, right?

Andrew Jones: That's right. It's a really good point that we talk about AI sometimes in this broad way, obviously under the umbrella of AI, of artificial intelligence. There's a lot of different technologies, different processes, different capabilities, right? Generative AI has obviously captured a lot of attention in the last few years, and some companies might have been disclosing about generative AI and those kind of models and what they mean, but also talk about other technologies, too, such as machine learning, deep learning, computer vision, predictive analytics.

There's a lot of technologies here that together mark a new era of, I guess a new era of almost cognitive infrastructure, where AI is being embedded across functions and products and decisions. And companies are grappling with what that means, both internally for their workforce with operations, but also externally for their products and how they engage with their customers.

Steve Odland: So basically, this isn't a laundry list of everything that they're doing in AI. This is essentially a disclosure, a legal disclosure, essentially, or a governance disclosure that informs investors and external parties what the risks are around their AI use. So it's not meant to be a catalog of anything, but tell us, what are the legal requirements for AI disclosure?

Andrew Jones: Obviously, we've looked at Form 10-K filings, right? So this is a standardized, legally required annual report. And within that, US public companies are obligated by the SEC, the Securities and Exchange Commission, to disclose their material risks. So these are risks that a reasonable investor would consider important when making investment decisions. In other words, risks that could significantly affect a company's financial performance, operations, or reputation.

Different companies and different industries obviously will be disclosing different kinds of things there. There might be some commonalities, but every company has its own sort of, I guess, its own risk exposure, its own sort of material issues. And AI is obviously starting to bubble up into that conversation. As companies integrate AI, whether it's into their operations or among their workforce, or into their products, it's raising new questions about risk: whether AI systems fail, whether they misinform, or whether they perpetuate bias. And also the broader sphere of risks attached to AI, related to regulations and litigation and investments.

So there's a long list of things here, right? There's a broad universe of risk, but companies are obligated to disclose what they think is a material risk. And looking at these disclosures, therefore, we can see a lot of things, including, as you said, how companies perhaps are thinking about investor expectations, thinking about future legal exposure, thinking about how boards are approaching governance and oversight of AI.

So it's one lens of many, but it is an important lens and an important window, I think, onto how companies are thinking about AI, and not just thinking about the opportunities of AI, but also the risks that can potentially materialize from that.

Steve Odland: Now your report talks about the surge here in disclosures just since '23—'23, '24, '25. Talk about the percentage of companies now disclosing some sort of material risk for AI and how that's changed. And then, what does that mean about the rapid pace of AI adoption?

Andrew Jones: Yeah, it's a really interesting finding. So we found as recently as 2023—we're talking two years ago, the 2023 filing year—only 12% of the S&P 500 referred to AI at all in their risk factors. So, had at least one material, AI-related risk disclosure. This year, it's 72%. That's a rapid increase in a very short space of time.

I think it just sort of really illustrates and underscores how AI has so quickly shifted from being something that's new, something that's shiny, something that's niche, something that's perhaps in its pilot stages, to really starting to be embedded in the enterprise. And companies across industries are starting to act on it and deploy it and invest in it and bring it into their compliance, into their operations, into their brand, into their products. So I think it just really illustrates how quickly that has happened.

And perhaps also at the same time, given that we're talking here about risks, it shows that we're at an important moment where boards and executives are starting to think not just about the opportunities but about the challenges associated with this now. And how do we govern and have oversight of AI in a way that is effective and efficient and responsible? So a very, very quick surge, Steve, in AI. I think it just illustrates that AI, it's on the table now. It's table stakes. It's in every conversation. And this is now a material risk for many companies.

Steve Odland: And so as you look across the Fortune 500, the S&P 500, there's a lot of different sectors represented. Do you see any trends across sectors?

Andrew Jones: We do. So when we break down data on the S&P 500, we tend to slice it into the 11 sectors that come under the GICS standard. So every public company is assigned to one of these 11 sectors. And the truth is, first of all, all sectors are disclosing on AI risk, as you can imagine, given how many companies, as we just said, are doing it.

But this recent surge in disclosure is really concentrated in about four or five industries. And these are financials, health care, industrials, IT, so information technology, and consumer discretionary. So these are quite different kind of sets. There's obviously a lot of different companies under those five. But we feel that perhaps the reason why is they maybe share two traits, one of which is high data intensity. The second is direct human or regulatory exposure.

So as AI is becoming embedded into decisions, these risks around bias and reliability and privacy and oversight have become financially material. So it's a broad thing, but it is interesting how it's a handful of sectors that have really led this charge so far, at least from a risk disclosure perspective.

Steve Odland: It's interesting because I can't imagine any company not having some sort of AI deployment, whether it's just simply a large language model or use of ChatGPT for enhanced search or a little bit of Gen AI. We talked about the surge, but the flip side is, you've got 30% who are not disclosing anything as a material risk. What does that say?

Andrew Jones: It's a really good point, yeah. But still, while 72% is a big number, there's a significant number that aren't saying anything yet about AI risk. That's not to say they won't be soon. We attribute this to a few things. I think one, it might just be simply that disclosures haven't caught up with practice yet. It'd be interesting to see what this number looks like next year. Some companies may also still be folding AI implicitly into other disclosures. So perhaps they talk about cybersecurity. They don't quite foreground the AI angle yet, but maybe that's coming.

But we do expect this number to continue to grow, Steve. I think it will become just commonplace to acknowledge at least one, if not more, material AI risks in the years to come.

Steve Odland: So if you think about, your point was that there are people who are, companies that are heavily tech-oriented or heavily regulated are the ones that lean a little bit more to the disclosure side. So you can see that with the banks that are heavily regulated. And so therefore, the disclosures are around trying to make sure that they don't get sued, they're following the laws on these things. And if your business is IT or AI-related, of course, your whole business model's predicated on it.

What are the most common categories of AI risk that you see, now that we've looked at the sectors?

Andrew Jones: We grouped every single AI risk disclosure into broad categories. These are broad categories that themselves have many subcategories, but we found three broad themes really dominated across the disclosures, and those were reputational risk, cybersecurity, and thirdly, legal and regulatory, and particularly regulatory. So these are broad categories, and we can go into what comes under them and what doesn't come under them.

But yeah, these are really where companies' focus is trained, where they're really looking right now. Reputational risk, in particular, is the most common we found. About 40% of all S&P 500 firms disclose something relating to reputation this year, and that might encompass bias, it might encompass misinformation, might encompass privacy lapses. And so on. But other categories, very important, too.

Steve Odland: Now, what kind of regulatory risk did you see disclosed here? I guess regulatory risk is always an issue in any company that follows regulations. And if you read the risk factors in these reports, you always see factors related to whatever industry they're in, that's tied to regulatory risk.

So is a bunch of this just that there're new regulations and so therefore, companies feel obligated to do that? I'm trying to get to, how much of this is just process-oriented, regulatory-oriented, and how much of this is that the world has changed and we really have a more risky situation because of it?

Andrew Jones: It's a really great question. I think on the one hand, you are right that there's always going to be an element of that, that there's always new regulations, right? There's always diverging regulations, there's always complexity and uncertainty around regulations. Thinking, for example, in the G&S Center, The Conference Board, we do a lot of work on ESG and sustainability disclosure regulations. And Steve, that is a huge sort of minefield and a rapidly evolving field.

But I think when it comes to AI, there is something new, as well. And actually most of the risk disclosures that refer to regulations on AI typically refer to the challenges of uncertainty and the divergences between jurisdictions, and how difficult it is for firms to plan their AI deployments amid diverging frameworks. And the key divergence—particularly in US terms, for US multinationals—is between the EU, where you're seeing the EU AI Act starting to come in, which is really going to take full effect in the next couple of years and has quite big obligations on companies, and the US, which has a very different sort of approach, where there isn't really a unified federal law, but there are state-level regulations emerging, which is creating a kind of patchwork that companies are struggling to navigate.

So I think that's the really challenging thing about AI that, on the one hand, companies are in a rapid race to invest, to deploy, to integrate into their business, into their operations, into their products. But there's such uncertainty as to how to navigate these different regimes. And particularly, the US-EU divergence, that is probably only going to continue to grow in the years to come.

Steve Odland: Yeah. It's a good point. A lot of these companies in the S&P 500 are multinational, and therefore it's not just a US issue. And so you've got different countries regulating in different fashions, and yet a company needs to act pretty consistently across all of their operations.

So that's a challenge. But even in the US, these, as you said, these multi-state regulatory schemes are very difficult for companies to navigate. And it feels like these states should get together or the federal government should step in and homogenize these rules.

Andrew Jones: You're so right, Steve. It's a real challenge, where the EU effectively has created the world's first big comprehensive binding AI law, and it has all these different aspects and risk-tiered systems and all these disclosure obligations. Whereas, yeah, the US is different, and the federal government and the executive branch have very much emphasized innovation and infrastructure and global competitiveness, and trying to have a light-touch regulatory approach.

But with the states, it's very different, and there are different state laws. Some are very focused on specific issues, whether it's bias in employment or deepfake disclosure laws, but others are quite ambitious in their own right. I'm thinking, particularly, of emerging regulatory frameworks in places like Colorado, California, and Texas, which are very, very ambitious and have different kinds of approaches and different levels of emphasis.

And it's going to be difficult for companies to stay ahead of all of this in the next few years in a way that still allows them to be nimble and agile and compete and invest and deploy. So it's definitely, I think you can see that in the risk disclosures. As recently as a year or two ago, you didn't really see many disclosures about this. Now it's quickly emerging as a material issue. So I think it is definitely top of mind for boards and for C-Suites.

Steve Odland: We're talking about AI risk in companies and disclosure. We're going to take a short break and be right back.

Welcome back to C-Suite Perspectives. I'm your host, Steve Odland from The Conference Board, and I'm joined today by Dr. Andrew Jones, principal researcher in the Governance & Sustainability Center at The Conference Board. OK, Andrew, we're talking about your new paper, which our listeners can access on our website, TCB, and just click on the Governance & Sustainability Center, and they'll find your paper. It's a really interesting thing.

Cybersecurity risks have been disclosed for a very long time, decades and decades, as digital transformation has come into companies since the 1970s. And I don't know, this sort of feels like just the next wave of digital transformation. AI is, at its root, software, right? It does then introduce a whole other layer of cybersecurity risk, too.

Andrew Jones: It really does. And I think some of the cybersecurity risks being disclosed by companies are very interesting. And as I mentioned earlier in the podcast, cybersecurity risk was actually the second-most-disclosed category. About 20% of all S&P 500 firms, both this year and last year, said something about AI specifically, not just talking about cybersecurity in broad terms, but AI specifically and what it means for cybersecurity.

And I think this is interesting: actually, most of those disclosures talk in general terms about how AI is effectively just massively complicating the whole issue of cyber risk and cybersecurity. AI itself is a force multiplier that increases the scale, speed, and unpredictability of cyberattacks. And I think hostile actors can use AI to really just create much more sophisticated attacks and find those sorts of surfaces to get in.

At the same time also, the company's use of AI is itself inviting new kinds of challenges if hostile actors are able to, for example, inject prompts into their own system. So it's a real big concern, and I think this is something that's going to be, we expect this to grow more in visibility as we see high-profile incidents. And AI just, it just really complicates what was already a very challenging area. So a very interesting field, Steve, and something we'll be keeping a close eye on.

Steve Odland: The Conference Board just released its latest CEO Confidence Index for the fourth quarter, and we asked about AI and adoption. And we asked specifically of CEOs what their organization's primary driver was for AI investment. And it came back that the number one driver for investment in AI was to drive cost reduction and operational efficiency. Number two was new product and service innovation. Three was customer experience enhancement.

So, as you think about risk disclosure, then, everything you've talked about follows that. There's always the cybersecurity risk and people abusing AI tools, but the risk factors follow along these priorities for AI investment.

Andrew Jones: They do. It's a really good point. I think it does, and I think it shows how, as companies scale up investment in those priorities, it also just creates or enhances or intensifies the risks, as well. And something we've heard a lot in our own engagements with chief legal officers, with heads of AI, with board directors, is how companies might benefit from differentiating between different types of AI and different types of uses from a risk perspective.

So for example, if you have a consumer-facing AI product, perhaps you put a bit more emphasis and a bit more controls on that from a risk management perspective than you do on a workforce-facing product, because obviously, the reputational implications of it failing, misinforming customers, or perpetuating bias are enormous. And companies I think are waking up to how AI does both intensify certain risks that are already there—thinking, for example, cybersecurity—and also maybe fundamentally change or introduce new risks at the same time.

Steve Odland: It's interesting, we talked about the cybersecurity and regulatory risks, but early on, you mentioned reputational risks. What do you mean by that?

Andrew Jones: So we found this is the most frequently disclosed AI risk category, right? And the truth is it's a broad category. It encompasses a lot of stuff. But when we talk about reputational risk, we're talking about how brand trust or stakeholder confidence can be impacted by AI performance. And we found a lot of things that came under the umbrella of reputation. Perhaps I'll just touch on a few that were the most disclosed.

Interestingly, Steve, often when people think of reputational harm, they might think of privacy and data protection issues, or generative AI creating hallucinations or perpetuating bias. And those are in there, and we find companies are disclosing more on those kinds of issues. But actually the most commonly disclosed reputational AI risk related to the potential damage from overpromising or failed AI deployments that do not meet expectations. So in other words, the risk that AI might not deliver the promise that we hope it does.

So I think it's just really interesting how companies are looking at reputation and AI through very much a multidimensional lens. They're thinking about reputation in terms of consumers and stakeholders, but they're also thinking about the reputational harm that may ensue if AI doesn't work out.

Steve Odland: Well there's so many pieces that AI impacts. If you're talking about generative AI, you know this, you've mentioned hallucinations. But even beyond that, you have AI producing content or design or creating things. You don't know if they're violating somebody else's IP. You don't know whether this stuff is right or wrong a lot of times. All of these things go to people trusting the products and services of a company. And so this is part and parcel of the reputational risk.

Andrew Jones: It's a really good point. Yeah. And it's extremely difficult to keep up, right? It's difficult to keep up from a corporate perspective, from a risk management perspective, from a governance perspective. And I think it just illustrates how, as AI use goes mainstream and it's incorporated into all sorts of corners of the business and the enterprise, AI risk is also cutting into all corners of the enterprise, too. And it's cutting across legal, financial, operational, and ethical dimensions. And it just raises huge governance challenges.

And I think that's perhaps the key to all of this. That if firms are going to harness the potential of AI while avoiding or mitigating these kind of potential risks, some of which have potentially huge material impacts, it will require very effective and very agile governance and oversight.

Steve Odland: Another place that reputational risks creep in is in recruiting. And a lot of human resource departments have implemented AI tools in order to sort through the enormous volume of resumes and applications that people get for jobs. In other words, you just program the screening into these tools, and they're finding sometimes that these tools are screening out certain classes or certain types of people, maybe on demographic data and so forth, and can be discriminatory. So that's another place where reputational risk creeps in, and the need to have human beings really govern the implementation of AI.

Andrew Jones: Really important point, Steve. I think people are waking up to the fact that AI can often, I guess, perpetuate certain biases or could, as you said, perhaps exclude certain groups, certain categories, particularly demographic categories, perhaps because the training data it's trained on has gaps in it, or even because of the way it's processed. And that can then take on and perpetuate already existing disproportionate impacts and biases.

It's a really important point. And these kinds of issues, if they come to light or they lead to discriminatory outcomes, can be reputationally catastrophic. And it's worth pointing out that there is litigation emerging on, for example, the use of AI in workforce hiring practices. Companies need to be very careful here, for sure.

Steve Odland: And it cuts both ways, because you also have the government clamping down on DEI-related and quota-related training and recruiting. And so, either way, whether it's a positive bias or a negative bias, it can be risky, as well.

We're talking about disclosures in your report mostly. The concept of disclosures come out of a company's ERM, enterprise risk management planning, which usually sits with the audit committee. Sometimes there's a risk committee in financial services and other industries, but it's usually the audit committee. But that whole ERM process needs to be rethought in order to include all of these kinds of AI risks, because the risk factor reporting comes out of that process.

Andrew Jones: Agreed. And we've done some interesting sort of early work in the Governance & Sustainability Center on where companies, and specific boards, are starting to sit AI responsibility. And to your point, Steve, yeah, we're often finding the audit committee is where it sits, for the reasons you said, I think—the oversight of risk or the oversight of risk management. But clearly, I think to your point, there is a huge need for evolution here to refresh and even rethink some of the risk exposures of companies.

And also think about how you govern all of that, right? And how you govern across functions, as well. And I think the more effective approaches here are those that really do bring in voices from across the company. I'm thinking, obviously, risk and compliance, technology, but also legal, communications, ESG. There needs to be a lot of voices here to ensure that AI risks, one, aren't going undetected, and two, are being managed and mitigated in an effective and agile way.

Steve Odland: So just wrapping up here, as you look ahead, what are some areas of AI risks that you think will become more prominent and companies need to be thinking about just in case they have blind spots?

Andrew Jones: So it's a really great question, and I think you already mentioned one of these, Steve, which is around the workforce and labor relations, and what the scaling and integration of AI might mean. We all know that automation and AI can impact employment patterns, can widen skill gaps, can lead to changes within workforce composition. And that does raise risks, right? And we didn't really find any companies really disclosing on these issues yet. But I think the labor and retraining costs and reputational elements could become material in the years ahead.

And in addition to that, we'd also perhaps point out two other things, one of which is the environmental and sustainability dimensions here. No risk disclosures talk about this yet, but we know training and running large AI models can consume a lot of energy, can consume a lot of water. AI's resource intensity might start attracting more investor attention in the years to come. And we might see certain companies in particular, like the hyperscalers, the tech companies, the data center providers, putting a bit more emphasis on that area.

And one other thing I'd point to that I think is across the board and something that a lot of companies still need to grapple with is the next generation of AI, right? And particularly agentic AI, that is a lot more autonomous, has a lot less human involvement. And I think the unpredictability and lack of clear human control could trigger a whole range of new liability and governance challenges. So there's a lot that companies should be keeping on their radar here.

Steve Odland: Yeah. And if you look at the stock market today, we are in a bit of an AI bubble. It reminds me of 1999 and the dot-com era bubble, where the market got frothy. And a key risk here is that these AI companies, or companies that have talked about implementation of AI or whose business model revolves around it, are overpromising to their investors. And you know, no line continues at a steep up, up, up. And so there are all sorts of market risks and valuation risks out there, and any loan and debt covenants that revolve around that. So lots of financial things that companies need to think about, as well.

Andrew Jones: Really great point, Steve. I completely agree, and I'd add as well that those market risks are obviously intertwined with geopolitical risks. At the same time, we're seeing AI at the global level almost become a sort of proxy for economic power. And the US, China, and the EU are all pursuing quite different approaches to AI, but obviously the end goal is clear: to position as a leader in the space. So that complicates all of the market and industry-level risks, as well. So it's a really fascinating area, and companies need to stay ahead and need to look ahead and need to be closely monitoring the environment.

Steve Odland: OK, we'll leave it there. Dr. Andrew Jones, thanks for being with us today.

Andrew Jones: Great to be with you, Steve. Always a pleasure.

Steve Odland: And thanks to all of you for listening to C-Suite Perspectives. I'm Steve Odland, and this series has been brought to you by The Conference Board.
