AI is a powerful accelerant for performance systems, but only when employees trust the system. Here’s how Microsoft does it.
Artificial intelligence is helping enterprises improve their performance systems and incentives. How can companies capitalize on this opportunity while also instilling trust, governance, and clarity that motivates employees and improves their decision-making?
Join Diana Scott, Human Capital Center Leader at The Conference Board, and guest Nandini Ramaswamy, corporate vice president of seller incentives at Microsoft, to find out how Microsoft uses AI to improve performance-based decision-making, why trust is the crucial differentiator, and how to measure whether your AI strategy is working.
C-Suite Perspectives is a series hosted by our President & CEO, Steve Odland. This weekly conversation takes an objective, data-driven look at a range of business topics aimed at executives. Listeners will come away with what The Conference Board does best: Trusted Insights for What’s Ahead®.
C-Suite Perspectives provides unique insights for C-Suite executives on timely topics that matter most to businesses as selected by The Conference Board. If you would like to suggest a guest for the podcast series, please email csuite.perspectives@conference-board.org. Note: As a non-profit organization under 501(c)(3) of the IRS Code, The Conference Board cannot promote or offer marketing opportunities to for-profit entities.
Diana Scott: Welcome to C-Suite Perspectives, a signature series by The Conference Board. I'm Diana Scott, leader of the Human Capital Center and the guest host of this podcast series.
Today, we'll be discussing operating at scale in the AI era: how businesses are redesigning performance, trust, and control. As AI scales across enterprises, the real challenge for leaders isn't just the technology itself. It's whether performance systems are designed to preserve trust, control, and strategic alignment at speed.
Joining me is Nandini Ramaswamy, corporate vice president of seller incentives at Microsoft. Welcome, Nandini.
Nandini Ramaswamy: Hello. It's a pleasure to be here. Thank you, Diana.
Diana Scott: Oh, and thank you for being with us. Nandini, to start out, I'd love you to just tell our listeners a little bit about your organization within Microsoft.
Nandini Ramaswamy: I lead our global seller incentives organization that shapes how sales compensation drives growth, behavior, and trust at global enterprise scale. Our remit is to inspire our sellers to maximize their rewards and enable durable company growth and customer success. We are accountable for global incentive strategy and operations across all our businesses, with over 40,000 sellers spanning comp design, quota and target-setting, analytics and insights, payouts, governance, seller experience, and technology transformation. We are at the heart of empowering human ambition with AI.
Diana Scott: Wow, that sounds like a huge remit. So tell me now, how is AI fundamentally changing how decisions are made and how behaviors are reinforced across organizations?
Nandini Ramaswamy: The two most important elements in any AI solution, in my mind, are intelligence and trust. The biggest shift is that decision-making can happen much faster, intelligently, and starts to become continuous and proactive. So historically in my space, for example, leaders looked backward after a quarter closed to understand what happened. But with AI, we are designing systems that surface insights in flight, while there's still time to course-correct.
In my team, that means AI-driven insights are embedded directly into the seller and leader experience, and that can help managers understand performance trends much faster. We are using AI to reinforce behaviors. Sellers get faster answers, clearer explanations, and fewer surprises, which builds trust and reinforces desired actions.
Diana Scott: Wow. So it sounds like AI is accelerating decision-making across the enterprise. So what role is it then really playing in performance systems?
Nandini Ramaswamy: Yeah, that's such a good question. In my mind, as you think about performance systems in general, when AI shows up in performance systems, it amplifies whatever foundations already exist, for better or worse. So in a well-designed system, AI removes human toil, the busywork, right? It speeds up insights and allows leaders to see and act earlier and more confidently. Now, in incentive compensation specifically, the focus is on clarity. Sellers want clarity.
And it's on data integrity. You want trust. And governance. This is a space where you need an immense amount of governance to make sure there is no exposure. So if sellers don't understand how they are measured, or leaders don't trust the data behind the outcomes, AI can seem like a black box, and pretty scary. That's why, in my organization, we are modernizing end-to-end, from plan design through seller support, with a single source of truth, a unified data platform, and embedded governance.
It's a journey we are on, to be candid, but it is a journey we are very intentional about. The accountability inherent in performance systems is still there. It forces leaders to question whether their systems are simple enough—and simplicity is such a big, fundamental piece in all of this—trusted enough, and aligned enough to scale.
What's changed in my mind is not that AI makes decisions for people, but it changes what people spend time on. Leaders can focus on judgment and trade-offs rather than just being stuck in the data or volume of information in the current era.
Diana Scott: Wow. So you've kind of described seller incentives as the proving ground for operating in this AI era. So tell me, what are you seeing actually motivates the sellers today, then, and how is that evolving?
Nandini Ramaswamy: Yeah, it is early days, I mean, to some extent. But you know, ultimately sellers still want fairness. They want fairness. They want clarity. They want confidence that their effort will be rewarded. So what has changed is their tolerance for friction and their expectations, as our customers.
In an AI-enabled world, sellers expect answers much faster and explanations that they can trust independently. So that's why we are focused so heavily on seller experience. Through tools embedded within our seller portal, which is built on Microsoft Dynamics, and seller-facing Copilot experiences, sellers see performance, earnings, and guidance in one place. When sellers understand the why behind outcomes, motivation becomes much more natural and, hopefully, there are fewer escalations.
AI allows us to scale the clarity. It helps remove ambiguity, reduce disputes, and make performance feedback more immediate. And in that environment, incentives reinforce trust, which is ultimately what sustains motivation at scale and, therefore, performance at scale.
Diana Scott: OK, so let's turn to governance, though. So, governance seems to be shifting from more static roles to something that's kind of more continuous and embedded. So how has governance changed in recent years, and what does it look like actually in practice?
Nandini Ramaswamy: You're so right. Governance has moved from being a check at the end of the process to being designed into the system itself. It's an imperative. When AI is involved, static rules and after-the-fact reviews just don't scale fast enough.
So my team has embedded governance across four factors: data, which is fundamental; process, with continuous improvement of how we run processes; tools, all our tooling and systems; and AI, starting with authoritative data sources, standard intake and prioritization for AI use cases, and clear human ownership.
And this is an important point. Human ownership. We have this concept of DRI, directly responsible individual, and we want a DRI for every agent as we conceptualize, build, and therefore execute against it. AI helps us operate faster. Absolutely, yes. But people still own decisions, and that decision-making is such a critical piece of it.
And therefore, governance becomes embedded from the very onset. Practically, this means shared visibility into AI workstreams, so transparency. And we measure those outcomes through bowlers. Bowlers are metrics that measure business impact and continuous process innovation. So we can see performance, trends, and the associated results much faster than ever before.
And again, governance is throughout all of it, all decisions being made, and all execution that we do. It is just part and parcel of all of that.
Diana Scott: I mean, I think we all keep saying there's got to be a human in the middle.
Nandini Ramaswamy: Exactly.
Diana Scott: In all of this, AI is a fantastic tool, but a human will always be in the middle.
Nandini Ramaswamy: Human intelligence plus trust. Exactly.
Diana Scott: Absolutely. Nandini, we're going to take a quick break and then we'll be right back with more of this conversation with Nandini Ramaswamy.
Welcome back to C-Suite Perspectives. I'm your host, Diana Scott, leader of the Human Capital Center, and I'm joined by Nandini Ramaswamy, corporate vice president of seller incentives at Microsoft.
So we were talking about governance, but now I want to turn this conversation to something a little bit different. As AI increases output, measurement becomes much more complex, doesn't it? So can you talk a little bit about how you and your team are defining and measuring success in this whole new environment?
Nandini Ramaswamy: Measurement is, again, very foundational, because that's the feedback loop. So, we're very intentional, very intentional about what we measure. So we have two new muscles that we are strengthening in my org. Number one, we are using bowlers. I spoke about bowlers just in the last comment, as well.
What are bowlers? Bowlers are basically business impact metrics. What's interesting about the bowler frame is that it doesn't allow one to overindex on just one metric at the expense of overall performance. So you measure multiple metrics that together contribute to true business impact.
And the second piece is AI evals, evaluations. So we ensure agents deliver real business value by systematically measuring agent performance, quality, and reliability against defined outcomes. And we can keep improving against that. So this means improving accuracy, speed, and consistency, predictability, at a global scale.
So from an experience standpoint, our sellers and leaders are spending less time navigating complexity and more time driving outcomes. That's the measurement we really want to go after, right? And so if AI increases volume but erodes trust, that's a failure, to be very clear. Success is when AI quietly removes human toil, the busywork, raises confidence and trust, and allows humans to focus on higher-value work and on decisions and outcomes.
And so measurement is the foundational block, in addition to process, that helps define whether AI is successful and therefore has impact.
Diana Scott: I really love, Nandini, how you constantly weave in this concept of trust and how important trust is. It's not just about improving output or speed. There has to be an underlying element of trust in all of this. I think that's really wonderful, and it's so important, as well.
There's a risk that teams are going to optimize for speed over alignment, though. So, what do you think leaders most underestimate when they're operating at speed with AI?
Nandini Ramaswamy: Optimizing for speed over alignment is a very real risk in the current context, right? Especially given org pressure, leader pressure, peer pressure.
So speed absolutely becomes a very real possibility, a threat, to some extent. What I would offer up is that leaders often underestimate how quickly misalignment can scale. The point being, don't optimize for speed alone; optimize for alignment. Because AI moves fast, and if strategy, metrics, measurements, and incentives aren't aligned, you can get systemic challenges at scale.
And we all know about GIGO, right? Garbage in, garbage out. This means we need to make sure we have clean data, optimized processes with constant, continuous process improvement, strong governance, iterative evals, and measurement via bowlers, which are balanced, and we need to refresh all of these all the time.
And these are table stakes, so that's why I would offer up that we start by anchoring everything to a clear North Star vision. And then in my team, we prioritize simplification first: reducing plan complexities, standardizing experiences, and aligning to the company's strategy before layering on automation and agents.
Because AI doesn't eliminate the need for leadership judgment at all. In fact, it actually demands more of it, much earlier and with greater clarity. So definitely optimize for alignment, coupled with speed. It's an "and."
Diana Scott: Right, and getting back to this concept of alignment. I mean, I think the opposite of alignment is obviously misalignment. So if organizations get things wrong, misalignment then becomes something that could actually scale rather quickly, I assume. How can leaders then detect when AI is amplifying the wrong behaviors rather than improving the outcomes?
Nandini Ramaswamy: The signals actually come very quickly. The earliest signals show up in trust and behavior, again, right? Sellers escalate more, not less, and that can absolutely be measured through escalation management. So that is an early signal. If leaders start questioning the data more often, or if exceptions increase, these are all red flags.
Again, it goes back to trust. When mistrust and misalignment take hold, the signals become visible very quickly. That is why feedback loops and listening systems are even more important. They matter so much in this current era because you have to act on them very early.
We continuously monitor these systems: feedback loops, listening systems, satisfaction scores, and operational outcomes. AI, again, should reduce toil, not generate work. When it does the opposite, when it's creating noise and distraction, leaders really need to pause and adjust very quickly. And being able to adjust quickly is a very real possibility; that is the positive piece of it. But we have to do this all the time.
The overarching comment would be: speed, early, and constant. Making sure all of that happens, all the time, is the imperative for making AI actually work at scale.
Diana Scott: Yeah, that's absolutely true. And I think the fact that we all know that the human element really is judgment. Judgment is always going to be something that AI is never going to bring to the process. Human judgment will always be part of this.
So, finally, I'm going to ask you this. As AI continues to scale, what are the most important questions you think board members and executives should be asking to ensure that performance systems really do build trust, and you've mentioned trust a lot, as well as control and long-term value?
Nandini Ramaswamy: And I'm going to say trust one more time because the most important questions really aren't about technology. They're about design and trust. Design, in that they need to be simple enough. Are our performance systems simple enough to be understood? Are they governed well enough to scale responsibly? Do our incentives reinforce the behaviors we want as the business evolves?
Boards should ask whether AI is removing toil or obscuring accountability, because the latter is not the point of this. Is AI strengthening alignment, or is it amplifying noise? And are leaders measuring outcomes, or are they measuring activity?
And so if you can get very clear from the very onset about these three fundamentals, then I think we have the right governance. The other piece, of course, is that from the earliest stages, we have to measure constantly, continuously, and iteratively.
AI will scale whatever system you give it. The real leadership opportunity is making sure that the system is the right one to scale. And I will go back to human empowerment plus trust, right? Human intelligence plus trust is what actually comes into play.
Diana Scott: Wow. Thank you. Nandini, I really appreciate your joining us today to talk about this incredibly interesting and very, very important topic. So, thank you.
Nandini Ramaswamy: Thank you so much for having me. It's a pleasure.
Diana Scott: And thanks to all of you for listening to C-Suite Perspectives. I'm Diana Scott, and this series has been brought to you by The Conference Board.
C-Suite Perspectives / 11 May 2026