
C-SUITE PERSPECTIVES

How AI Coaching Is Redefining Leadership Development

24 NOVEMBER 2025


AI is redefining what effective leadership development looks like. Coaching, once limited to the few, can now reach the many, empowering managers and employees alike.

Diana Scott welcomes Allan Schweyer, both of The Conference Board, to discuss how embedding AI coaches into daily workflows is redefining employee development, scaling mentorship, and enabling organizations to cultivate a more agile and emotionally intelligent workforce.

For more from The Conference Board:  

How AI Coaching Is Redefining Leadership Development




C-Suite Perspectives

C-Suite Perspectives is a series hosted by our President & CEO, Steve Odland. This weekly conversation takes an objective, data-driven look at a range of business topics aimed at executives. Listeners will come away with what The Conference Board does best: Trusted Insights for What’s Ahead®.

C-Suite Perspectives provides unique insights for C-Suite executives on timely topics that matter most to businesses as selected by The Conference Board. If you would like to suggest a guest for the podcast series, please email csuite.perspectives@conference-board.org. Note: As a non-profit organization under 501(c)(3) of the IRS Code, The Conference Board cannot promote or offer marketing opportunities to for-profit entities.


Transcript

Diana Scott: Welcome to C-Suite Perspectives, a signature series by The Conference Board. I'm Diana Scott, Center Leader of the US Human Capital Center at The Conference Board, and the guest host of this podcast.

Today we'll discuss how AI coaching is transforming leadership and the future of workforce development. Joining me is Allan Schweyer, Principal Researcher, Human Capital Center. Welcome, Allan. Allan, your report suggests AI coaches could soon deliver 90% of workplace coaching. That's a pretty aggressive figure.

What does that imply for the future architecture of talent development inside large organizations?

Allan Schweyer: It means that, potentially, the benefits of coaching, which, you know, is one of the most effective development tools available in the workplace, could be made available to every worker and every manager for about 90% of their everyday coaching needs, according to our research.

But that doesn't mean that human coaches go away. AI handles routine coaching, like goal tracking, feedback, rehearsal, meeting preparation, nudges, after-action learning, and so on, while human coaches focus mostly on senior executives, as they always have, where the decision stakes are high, and help them navigate complex relationships and so on.

Diana Scott: When you think about the kinds of organizations that are doing this, 90% is a pretty high figure. You do highlight some examples of companies that are piloting AI coaches enterprise-wide. What separates the organizations that scale successfully from those that get stuck in the pilot phase?

Allan Schweyer: First, to do this right, you should test multiple solutions with dozens of end users, workers and managers, ideally in pilots. Then these workers and managers should report back on things like ease of use, accuracy of the information the AI coach delivers, and obviously applicability to their work. Leaders should choose a solution that embeds in the flow of work. By that I mean it should be able to integrate with email, with any collaboration tools like Teams or Slack that the organization uses, even the HRIS and other systems, so that the AI can draw from those sources and also engage with the worker within the systems that they use every day. That's really important.

Another thing is that organizations should also establish some cross-functional governance, involving HR, technology leaders, legal, obviously procurement professionals and so on, to develop standards, especially around data security and privacy.

Because nothing will shut this down faster than people not trusting it. So select a supplier that can clearly meet your criteria around privacy. And then monitor, too. You can't just implement it and leave it. You have to look at issues like language glitches when it speaks to people in different languages.

There are sometimes issues, hallucinations, which we see with AI in general. And then you need escalation procedures for cases where a person being coached with AI might cross the line in terms of what they discuss with it. How do you deal with things like that? It's complicated, like everything else we do.

Right, Diana? It's not as easy as it looks.

Diana Scott: Yeah. You bring up a really important point. Individuals in an organization, you talked about how they have to actually trust this. Is that something in your research that you were able to dive into?

How do you get individuals to actually trust this system and how do you get them to trust that this system is not going to reveal their deep, dark secrets?

Allan Schweyer: Yeah, it's a real challenge because the more you use AI coaching, the more people who are using it in the organization, the more valuable the data and content they create becomes to the organization.

And yet you cannot violate the individual conversation with the AI coach. All that information has to be held and used in the aggregate. To do that, first, use things like your business resource groups or your employee resource groups to ensure that you have some inclusive adoption so that everyone has an opportunity to use this if they want to.

And then focus on establishing and keeping trust by giving opt-in controls. If you're going to link the AI to a person's calendar or email, ask their permission first and honor their preferences if they don't want that done.

Also, of course, only use the data in the aggregate. Most of the suppliers that we spoke to, Diana, they do this for you. They keep the data at their end and they do not give the organization any access to individual coaching conversations.

Diana Scott: I assume that becomes a very important thing to communicate to the users: the fact that the data is not housed in the company, it's actually housed with the outside provider. That must give a sense of comfort to the actual user, so that probably builds trust.

It sounds like to be successful in implementing this system, you're talking about making this coaching part of daily work. Typically, when somebody is being coached, it can't all be on the individual. You've gotta provide some sort of nudges. You've gotta give direction before, during, and after coaching. How does that work when the coach is not a person? Talk a little bit about that.

Allan Schweyer: Yeah, for sure.

Diana Scott: It's a machine.

Allan Schweyer: We've gotta keep remembering that the AI coach is not a person.

Diana Scott: But it feels like a person, right? I mean, I've heard it actually feels like a person.

Allan Schweyer: It feels like a person, and over 90% of participants in our experiments said that it really does a good job with empathy and showing understanding. So it draws people in. They start talking to it; they share things with it that they might not even share with a trusted friend, because they might be embarrassed to say certain things. But it's an AI coach, and you should try to keep remembering that it isn't a real person.

But one of the real potentially transformative benefits of AI coaching is that it can be embedded, as you say, in the flow of their work. That means it's kind of always looking over your shoulder. If it's turned on and you're doing your email or you're just having a conversation on Teams, it's observing both ends of the conversation, by the way. And it might offer you a suggestion, might remind you that this person reacts better to this type of approach than that type. It might do that in real time in some cases. That can be annoying to some people and extremely helpful to other people.

So again, it comes back to the opt-in. If I feel that's helpful to me, then I want to turn it on. I think leaders should encourage that, though. Not enforce it, but encourage it so that they do get used to the AI.

If you think about the AI as, maybe not a person, but a discreet colleague who really only cares about your success, then all of us could benefit from that sort of teammate. We just have to get used to it being over our shoulder and offering us advice without us soliciting that advice.

Diana Scott: Yeah. So now I'm sitting in my seat as a chief human resources officer (CHRO). What kind of cultural levers do you think a CHRO can really use to make this AI coaching stick? Because you're talking about piloting it and having to then embed it into the culture. What makes it go beyond just being a tool and actually becoming a habit?

Allan Schweyer: Well, I think it's some of the things we talked about before. You mentioned good communication, ensuring that everyone knows the conversations that they are having with their AI coach are confidential, giving them that ability to opt in. That's the table stakes, right? You have to start there.

And then you need to really be good about selecting a solution that is user-friendly, that does embed, that isn't constantly over your shoulder. It doesn't need to be telling you what you're doing wrong every five minutes. It needs to have judgment in a lot of ways, artificial judgment, so that people appreciate nine out of 10 of the nudges they receive.

Let's say that's your bar: that people see the benefit of being told about their blind spots. We have a really good case study about a manager who was ready to demote an individual they had promoted only a few months before, because that person was not showing the type of initiative and risk-taking that was required for the position.

But the AI, in looking through the past sessions, the one-on-ones that the two had, and looking at the manager's own blind spots and that individual's cultural differences, was able to show the manager that maybe their definition of taking initiative was quite a bit different from the employee's, given their background.

I think a manager might appreciate that. And when you can have those kinds of small wins, I think over time the CHRO's job in convincing workers to adopt this and use it regularly will be solved in most cases.

Diana, just the last point there: we've seen organizations go from pilot and then, over the course of 90 to 100 days, expand that to several thousand employees, and then over the course of a year to upwards of 100,000 employees. Everyone has access to it, and these organizations are achieving use rates, which means not using it once but coming back and using it again and again, of 75% to 80%.

Diana Scott: That's amazing. I want to go back to the point that you just made about the leader actually being coached, essentially, to think differently by the AI coach that's been employed to coach the employee. That signifies to me a mindset shift on the part of the leader that is really significant. So the coach is actually not only coaching the employee but perhaps coaching the manager as well, which is a different way to think about this. That's something you might talk about a little bit, because I think it's significant. It's different from how a lot of people view coaching. They think it's always just one way. This is a two-way coaching effort, right?

Allan Schweyer: Yeah, absolutely. We interviewed quite a few leaders in organizations who have begun to implement, or are in different stages of implementing, AI coaching. One of the things they talked about was bringing in a new manager, someone who's been an individual contributor, making that very difficult transition for the first time to leading other people, and how big a gap there often is in skill sets. An AI coach can really accelerate their ability to become an effective manager quickly, so they maybe don't have to go through all of that pain of making mistakes, and so on, to become the type of manager who is better at coaching others. So, absolutely. I think one of the benefits of AI coaching is that it can coach the coach.

We have a big section of the paper, by the way, around that. How does AI help human coaches, whether they are manager coaches or professional certified executive coaches? And there's a great deal of potential there for an AI coach to be a coach's assistant.

Diana Scott: Yeah, and I think what you're saying there is that AI coaching is really reshaping the whole psychological contract between employer and employee. It really is reshaping how we think about growth, and perhaps privacy and trust as well. I don't know if you have some comments about that, but this is big.

Allan Schweyer: Yeah. In the paper we say it's potentially transformative and that word's thrown around a little bit. But I think in this case it's likely true. It's too early to say but if you think about what AI coaching can do, we've talked about it performing 90% of routine coaching for every worker, for every manager across the organization.

We've talked about it helping managers become better coaches. We've talked about it helping professional certified executive coaches become better coaches. What we haven't talked about very much is how, in the aggregate, it can give a CHRO access to an unprecedented level of real-time workforce intelligence, insights that come from hundreds and then thousands of coaching sessions. Think about engagement surveys: we do them once a year, twice a year, and so on, we scour the results and try to make some changes, and that's self-reported data. This is actual data, real conversations, coming in constantly, every day, that a CHRO can act on, in the aggregate again, to see patterns and insights that can help them improve the culture. So I agree. I think this is a big deal and is potentially transformative for organizations.

Diana Scott: Thank you. We're going to take a short break now and we'll be right back with more of my conversation with Allan Schweyer.

Welcome back to C-Suite Perspectives. I'm your host, Diana Scott, Center Leader of the US Human Capital Center at The Conference Board. I'm joined by Allan Schweyer, Principal Researcher from the Human Capital Center.

Allan, we were just talking about the transformative nature of this AI coaching tool, and we were beginning to delve into the kind of access to information that CHROs might have by using it. Let's talk a little bit more about that. I want to dive into that. For CHROs presenting to boards or talking to their chief financial officers (CFOs), what do you think is the most persuasive way to frame the return on investment (ROI) or the strategic value of AI coaching?

Allan Schweyer: Suppose you have a budget of, let's just say, $1 million per year for coaching. With that $1 million, you can have so many hours of very expensive human coaching, $400 or $500 an hour or more, for your executives, and you generally exhaust your budget in most years by providing human executive coaches to your leaders.

And then you expect your managers to act as coaches as well. Maybe even, in some organizations, you talk about peer coaching and you provide some learning and training around that. But, you know, people are limited; they have limited time. And of course managers have a greater span of control than they've ever had, as we know. It all adds up in terms of the cost of time.

If you take that $1 million budget, maybe you carve out a piece, I'm not going to say a little piece, but a much less expensive piece, to license a secure and private AI coaching platform that gives AI coaching to everyone. Then you could expect a pretty hefty impact on ROI, I think. Because, again, we know effective coaching is one of the best developmental tools.

We also know from our research, and I mentioned this earlier, that people find that it provides concrete advice and next steps, is empathetic, is understanding, and so on. So you help your workforce, whether that's 5,000 or 100,000 people or whatever number it is, become better at conversations and better at understanding the people they work with.

One of our CEOs put it well. He said, "You can raise the collective emotional intelligence, the EQ of the organization, by 10 points," something he said he'd been looking to do for years and years and was able to do with AI coaching. So then you can imagine an organization that just functions better: better well-being, better collaboration, less negative conflict.

No, you don't want everyone agreeing with each other; you want positive conflict, because people are being coached on how to work with each other more effectively. That's one side: you expand it to your whole workforce, and it takes care of, as we say, about 90% of the routine coaching at a price that's far less than what you would pay for human coaching.

This reserves human coaching, again, for the big decisions, the decisions that executives tend to make, for emotionally fraught conversations where values and ethics might come in, and for complex relationships. So you're actually focusing that human coaching better than you were in the past. There's ROI in that, and then, of course, there's large potential ROI in the collective insights you receive from having AI coaching across the enterprise.
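
To make the budget comparison concrete, here is a back-of-the-envelope sketch in Python. The $1 million budget and the $400-$500 hourly rate come from Allan's example above; the per-seat AI license price and the headcount are illustrative assumptions, not figures from the research.

```python
# Back-of-the-envelope comparison of two ways to spend a coaching budget.
# Dollar figures for human coaching come from the example above; the AI
# seat price and headcount are illustrative assumptions only.

BUDGET = 1_000_000        # annual coaching budget (from the example)
HUMAN_RATE = 450          # $/hour for executive coaching (midpoint of $400-$500)
AI_SEAT_PRICE = 150       # assumed annual per-seat cost of an AI coaching platform
HEADCOUNT = 5_000         # assumed number of employees covered

# Option 1: spend everything on human executive coaching.
human_only_hours = BUDGET / HUMAN_RATE

# Option 2: license AI coaching for the whole workforce, then reserve the
# remainder for human coaching of the high-stakes ~10% AI doesn't cover.
ai_cost = AI_SEAT_PRICE * HEADCOUNT
human_hours_remaining = (BUDGET - ai_cost) / HUMAN_RATE

print(f"Human only: ~{human_only_hours:,.0f} hours for a small group of executives")
print(f"Blended:    AI coaching for {HEADCOUNT:,} employees (${ai_cost:,}), "
      f"plus ~{human_hours_remaining:,.0f} human coaching hours for executives")
```

Under these illustrative assumptions, the blended model still leaves several hundred hours of human coaching for high-stakes executive work while extending some form of coaching to everyone.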

Diana Scott: Can you talk a little bit more about that? You started to talk about that before our break. There is real insight that comes from that. Describe a little bit about what that might be and how that might be helpful to executives across the company, to the CHRO, or even to a board in terms of understanding a little bit more about the ethos and the culture of the organization.

Allan Schweyer: Yeah. It's always good to start this with an emphasis on privacy and never looking into people's AI coaching conversations. People have to be 100% certain you're not going to do that, or they're just not going to use it. Once you've solved that, you've earned the trust and you've kept the trust, then you should look at the aggregate data.

Your supplier, if you're using an external AI coaching provider, can help you with that. You can use it to look at patterns as they're developing. So this could be employee sentiment, it could be like an engagement survey. You're using it to understand how your employees are feeling, what kinds of tasks they're asking their AI coach to help them with, which can inform learning and development.

As you go a little bit more granular, you might learn about issues within divisions. It gets a little more dangerous when you look down to the team level, because you're getting closer to the individual. But you might spot emerging issues that you could resolve more quickly if you understand what's happening.

These are all cultural things. Insights that you get from conversations that people have with their AI coaches, in the aggregate, really can affect every aspect of human capital management. If you think about it, it can give you insights into emerging skills. Some organizations are using it to help build skills taxonomies as they become skills-based or skills-first organizations.

They're extracting that type of intelligence, which definitely helps with more precise learning and development and with keeping employees engaged. You could also infer skills and interests from this data to think about how you might optimally deploy people and teams. It can be used in workforce planning.

We're still discovering the uses. In the paper we have some vignettes that we've taken from talking to organizations. You'll see uses in recruiting, uses in inclusion and belonging, use cases for workforce planning, and, by the way, health and well-being.

Diana Scott: It sounds like there's a treasure trove of data and information and learning that can come out of this. You alluded to this. Obviously, as you begin to analyze communication and behavior patterns and everything, you begin to delve into some potentially dangerous areas. So talk a little bit about some of the ethical guardrails that you might need to put in place before you think about full-scale deployment. What are some of the things that companies are doing just to make sure that they're guarding against crossing any lines?

Allan Schweyer: Yeah. We haven't talked about the difference between generic large generative AI models, your ChatGPTs and Copilots and Geminis, and purpose-built AI coaching solutions. The generic models are where most organizations tend to start. You have a curious human resources leader who says, "I'm going to try this for my own career development," or, "I'm going to do something over here with an AI coach." And then they're intrigued, and maybe they're the catalyst for something like a formal pilot.

The thing about the generic solutions, ChatGPT and so on, is that they are owned or licensed by the organization; there's a version of them running on your servers, in your organization. In those circumstances you cannot say that the organization could not access individual conversations, because it can. You can promise that it won't, but that's a little harder, because you can't say that it can't, if you follow.

The solutions that are purpose-built for AI coaching, including some of the ones we tested, are the ones that will put that firewall between the organization and individual conversations. That can be an important part of it. Whatever you choose, you want to adopt a privacy-by-design solution. That means encryption of these conversations.

It also means retention standards. When you retain coaching conversations over time, the data gets bigger and better; you've got historical conversations to look back at and synthesize with current conversations. You don't necessarily want to delete them, but you do have to have some standards around how long you're going to retain a coaching conversation. And, obviously, aggregate-only reporting.

Keep a human in the loop for sensitive topics, to intervene when necessary and to act like an ombudsman if someone has a question or a challenge to what's going on in the organization.

You should also, as I mentioned, be aligning across functions. It's usually driven by the chief human resources officer or a senior leader in that group, but bring legal in early on. Bring communications in early on. Leverage your employee resource groups.

And, as I mentioned, don't make employees use it. Don't tell employees that you're going to integrate it with their calendar and their email; ask for permission, and offer opt-in for these types of things. Then come up with a series of trust metrics so you can track, for example, how many times people are challenging this.

What is your opt-in rate? That's a really good one. If you've said people can opt in to have the AI integrate with their calendar, email, and other systems, what percentage of your workforce is saying okay versus no? Because that shows trust. So have some metrics, track them, and see how you're doing, so you have a trust barometer.
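
Below is a minimal sketch, in Python, of two of the mechanics Allan describes: aggregate-only reporting, which surfaces a theme only when enough distinct people raise it, and an opt-in rate tracked as a trust barometer. The minimum cohort size and the record formats are illustrative assumptions, not the design of any particular vendor's platform.

```python
# Illustrative sketch only: aggregate-only reporting and an opt-in trust metric.
# The minimum cohort size and record formats are assumptions for illustration.

MIN_COHORT = 20  # assumed smallest group for which a theme may be reported

def aggregate_themes(sessions, min_cohort=MIN_COHORT):
    """Surface coaching themes only in the aggregate.

    `sessions` is an iterable of dicts like {"user_id": "u1", "theme": "feedback"}.
    A theme is reported only if at least `min_cohort` distinct users raised it,
    so no report can be traced back to an individual conversation.
    """
    users_per_theme = {}
    for s in sessions:
        users_per_theme.setdefault(s["theme"], set()).add(s["user_id"])
    return {theme: len(users)
            for theme, users in users_per_theme.items()
            if len(users) >= min_cohort}

def opt_in_rate(employees):
    """Trust barometer: share of the workforce that has opted in to
    calendar/email integration. `employees` is a list of dicts like
    {"id": "u1", "opted_in": True}."""
    if not employees:
        return 0.0
    return sum(1 for e in employees if e.get("opted_in")) / len(employees)
```

Tracked over time, a falling opt-in rate would be an early signal that trust needs repair before the program expands further.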

Diana Scott: That sounds like really great advice. From a global perspective, we know there's some growing divergence between the EU's AI Act approach and the somewhat more flexible approach in the US. How should organizations think about that when they're deploying AI in talent development?

Allan Schweyer: Yeah, I think this one's a simple one. It sounds like it could get very complex if you try to overthink it. There are also differences among states in the US in terms of privacy rules and laws. So I think the right thing to do for any organization is to look at everywhere they operate, whether that's internationally or across different states in this country, and default to the strictest of those regimes. That's especially true for Europe because, as you mentioned, it has some of the strongest protections for the individual. So just default to the standards of the strictest regime and you should be okay.

Diana Scott: That makes sense. Final question for you, Allan. As you look about five years out, do you think AI is going to make leadership development more human or is it going to risk making it more mechanized? And does that give you optimism about where this is headed?

Allan Schweyer: Yeah, that's the question of the day. If you'd asked me that before we started this research, I would've said it'll become more automated and mechanized. But I think paradoxically, it'll make it more human if it's designed well.

You know why? Because, first, the 90% of coaching that we're talking about is routine, everyday stuff. This leaves more time for human coaches to attend to the 10%, the coaching that deals with values, emotion, corporate politics, complex relationships with stakeholders inside and outside the organization. Those things that AI really cannot do.

We haven't talked about that too much but there's a lot that AI coaching can't do yet. This could change in five years but we're talking about now. And I know we're talking about five years out but I'm going to say I still think it's going to be more human. When AI acts as a human coach's assistant, for example, it remembers prior conversations. It can help the human coach remember what you talked about three sessions ago and why that should be brought into the next conversation. It can do administrative things like that.

And what happens when it takes those things off the human coach's plate? It frees a human coach to be more human, to focus on human issues rather than administrative issues.

We talked about AI accompanying the worker, being that teammate looking over their shoulder. If it can anticipate their needs, like we've been talking about, and help them strengthen their relationships with team members, that's going to make the workplace a happier, more friendly and collaborative place. So I think more human.

All bets are off if we get some form of superintelligence in the next five years or something of that nature; then, you know, I'll be wrong about this.

Diana Scott: I love your optimism, Allan, and I'll take that AI makes us more human. It's an augmentation tool. It makes us more human. I'll take it. And thank you for joining us today, Allan.

Allan Schweyer: Thank you. I appreciate it.

Diana Scott: And thanks to all of you for listening to C-Suite Perspectives. I'm Diana Scott, and the series has been brought to you by The Conference Board.
