Executive anxiety about productivity loss has spurred scores of U.S. organizations to institute return-to-office policies. A new report from Seramount, a global talent services firm in Washington, D.C., however, contends that what leaders perceive as a productivity problem associated with remote or hybrid work is actually a “measurement” problem.
The report found that many leaders today still use legacy office-era metrics such as visible activity to judge performance, rather than measuring “outcomes, alignment and impact.” The report, based on conversations with more than 100 CHROs, urges leaders to adapt to the times.
Stephanie Larson, principal, strategic research at Seramount, explains that the research points to an “AI productivity paradox” at work: AI is able to make work faster, but not necessarily better. “And that,” she says, “is what makes it a productivity problem.”
At the same time, she adds, AI can lower the cost of production, but not the cost of judgment. So, if employers focus only on using AI to speed up processes, they may get more output, but also more need for reviews, more rework, more ambiguity—and ultimately, longer cycle times.
“AI actually can weaken engagement, because people lose clarity about what good performance looks like and where accountability sits,” she explains. Because of that tension, organizations need to be asking whether they are building the “human judgment” needed to make AI’s acceleration valuable.
To Larson, AI should act as a “thought partner, not just a tool,” adding that AI can be most useful when it helps people think better, not when it does the thinking for them.
“I believe we miss how their strengths can help us interrogate and improve our work. For HR leaders, that means building a workforce that knows how to use AI—not just to produce more, but to question more,” she says.
Employees also need to ask: Is this accurate? What might I be missing? What context or nuance got flattened? What risk am I taking on if I rely on automated output?
“Fluency with the AI tool is not the same as judgment,” she says.
Larson explains that many organizations are racing to scale AI adoption, favoring deployment speed over building employee judgment and decision-making capabilities. That trade-off, she adds, can drive four significant risks:
- Reputational risk: Polished but lower-quality work can be circulated before anyone catches potential problems.
- Revenue risk: Managers can end up spending time correcting output that only looked efficient upfront.
- Leadership risk: Many of the tasks AI is absorbing were never just tasks; they were training grounds where people learned judgment.
- Inclusion risk: AI tends to amplify the systems already in place, so early differences in access to training, manager support and room to experiment can quickly widen into larger gaps in capability and opportunity.
“When it comes to talent development, HR should be prioritizing ways to ensure employees can effectively review, challenge and refine AI-generated output,” Larson says.
Larson says she would focus on critical thinking, writing, revision, communication and problem-solving—capabilities normally framed as “soft skills.” However, she adds, there is “nothing soft” about the ability to communicate clearly, weigh competing perspectives, anticipate counterarguments or make sound decisions in complex moments.
“I spent nearly 15 years in higher education, most recently as an English professor, and that background still shapes how I think about AI,” she says. “We need humanists and social scientists now more than ever, because critical thinkers know how to question, critique, contextualize and challenge something, not just accept it at face value.”
The best use for AI
Looking ahead, Larson says, the organizations that lead the pack will be those that understand the objective is not merely faster workflows but “smarter goals”: better judgment, wider trust and more equitable access to growth.
“In a world where AI can help nearly everyone be productive faster, the real differentiator becomes whether an organization is still advancing its people: whether employees are learning to think critically,” she says.
To Larson, that means the strongest, most successful organizations will use AI to strengthen human capability.
“They will protect the developmental experiences, mentorship and accountability structures that build future leaders,” she says. “That will show up in performance because the work is better, in culture because people trust the system more and in retention because people will stay where they can continue to grow.”