HR Executive published a story recently about the growing use of quotas and incentives of various kinds to make employees use AI tools. This is a spectacularly bad idea.
There are many reasons why. To begin, if you make people do something and measure it, you get what you measure, which is rarely the underlying change you want. In most cases, quota and incentive programs are self-reported—“I used 10% more AI this quarter”—but what we really want is the ability to do existing work faster and better, and especially to solve new problems.
Let’s assume that my leader has given me a quota for this quarter. How will I respond? Knowing there is no way to check my claim, I could of course just make it up—or, if I feel guilty, I could use ChatGPT more often, even when I don’t need to. Rather than looking up caterers for my event, I ask AI to do it. It generates March Madness bets, draft memos I won’t use and so forth. I use it more but get no real value from it.
Suppose our leadership is a little more sophisticated and wants me to document, as part of my performance appraisal, three new ways I have used AI this year. That sounds better, although I can still make stuff up. We still have to trust that what employees tell us really is an improvement.
Among people who say they are using AI, most are simply letting the Copilot or Gemini built into their search engines run searches for them. That saves time, but, in my experience, the results are not better if you want a serious answer based on credible sources. Faster is easy to measure; measuring “better” is quite difficult.
What we are doing so far isn’t working well, as you may have seen from the recent State of AI in Business study, which found that only about 5% of AI projects seem to work and produce real results for the organization. Introducing AI to improve work output in meaningful ways is a difficult task that cuts across jobs. Individual employees cannot do that on their own. Quotas and incentives for individuals are not a substitute for organizational change.
Opportunity for employees to figure out AI use works better than incentives
The great opportunity here is letting employees—not just individually but in work groups—use their discretion to figure out how to do things better. Think about the great success of “lean production,” where employees took responsibility for improving quality, productivity and performance in their work areas and, in the process, eliminated the need for much of the supervision. Why did they do so? In part to make their own jobs easier and better, and in part because they cared about the organization and its success. It’s a group effort, not an individual one.
What should we be doing to increase the effective use of AI at work? First, stop scaring employees by talking about how much headcount we hope to cut by using it. Top-down, across-the-board efforts to introduce AI are just not going to work if employees think the goal is to kill their own jobs.
Second, drop quotas and incentives for using AI. Instead, give some time and resources to first-mover groups who have an idea of what to change. Take the ones who succeed and have them explain to others what they did and how they did it. If you don’t have any, find an example among your vendors. Bring them in to sit with groups over lunch or coffee and explain what they did and especially how they did it. If there are no questions, people aren’t listening. We need to see examples of a change in a context that looks like ours to see what to do—and the more examples we see, the better.
Giving recognition and rewards to groups that use AI in ways that truly improve the workflow is a good idea. But successes are unlikely to pour forth at once and on schedule. They have to be seeded by giving people with imagination, who may not be easy to identify in advance, the time to play around with an AI tool and the opportunity to talk to people who have at least a little experience with it. Rather than just expecting individual employees to come up with great improvements, it makes more sense to solicit proposals for bigger uses that require support from the organization.
Third, this is a place where psychological safety really matters. It is about the need to take some risk and feel like we won’t be punished if we fail. If we believe we will be dinged if we get permission to try something and it doesn’t work (likely because of a lack of cooperation), we won’t even try.
Finally, even if employees believe AI won’t take their jobs, if they think becoming more productive will just mean more of the same work to do, we will kill the incentive to use it. This is happening now with programmers: AI writes the initial code, and the humans do the more boring work of checking for mistakes across ever more generated code. The idea that AI will take over the boring work and leave employees with the interesting tasks looks like a myth. Yes, it writes the reports now, but employees check them. If employees can’t see how AI makes them better off, they have no incentive to try it. The idea that we can make them innovate with quotas and incentives is also a myth.