AI officially overtook both immigration and DEI as the top area where employers are bracing for changing regulations and workplace policies, according to a new report from law firm Littler.
The annual report, which surveyed about 300 C-suite executives, found that 84% of respondents expect AI-related policy or regulatory changes in the next year. That figure is double what it was at this time last year.
Niloy Ray, co-chair of the AI & Technology Practice Group at Littler, says that the shift highlights a number of realities, including “natural growth” in the number of businesses that have implemented AI over the last year.
“This year, we’re closer to AI becoming the median expectation, as most employers are using it and grappling with regulatory and compliance obligations,” Ray says.
At the same time, the regulatory landscape is growing more complex. Multiple measures are moving forward to regulate AI in a number of states, while the Trump administration continues to press for a uniform approach that limits state oversight.
“Smart employers are expecting more and more change,” Ray says of the regulatory landscape.
Despite the pace of AI adoption, governance is lagging.
The report finds that 68% of organizations do have a formal policy around AI use in the workplace, nearly double where the figure stood last year. However, such policies are often not comprehensive enough, Ray says. Just over half have restrictions on the information that can be entered into the tools, while approximately the same percentage have a formal process to approve the tools and use cases.
Without such provisions, organizations risk undermining both the workforce's confidence in AI and its actual use of the tools.
“Too many businesses, in my experience, don’t provide a clear enough pathway to using AI, because their governance and policies tend to put the compliance and risk mitigation burden on the end user, rather than the organization as a whole shouldering that,” Ray says.
For instance, when policies dictate to employees that their AI use cannot subject the organization to bias, hallucinations or plagiarism, it’s unrealistic to expect end users to figure out such risks on their own. Employees should be empowered to use AI, Ray says, in a way that matches the organization’s “enthusiasm for AI use and level of risk aversion.”
Policies should be created through a “multi-stakeholder process,” involving a governing group from leaders across functions. Guidelines should stipulate approval, implementation, use and development of AI and must include very clear, “not prospective, but descriptive” objectives.
At the end-user level, it’s vital to have “clear do’s and don’ts,” Ray says. A handbook, for instance, should cover which AI tools are approved, processes for seeking approval outside of that list and basic reminders on how to treat confidential information.
“All of those basic rules of the road are what the company as a whole has decided on—and then it’s made clear to employees that this is what the philosophy is of the business, which has taken on the burden of deciding which tools and uses meet expectations,” Ray says. “Then, employees can go forward empowered, knowing the business has taken on the burden of managing risk.”
See also: What an employment attorney wants HR leaders to know about AI risk
A rising role for HR
When it comes to implementation, HR appears to be ground zero for most companies, reflecting the “critical” role CHROs play in shaping the development and rollout of AI strategy.
When asked which areas of the organization AI was being used in, HR topped the list—tied with IT—at 54%. After these two functions, AI use is most common in marketing, sales and communications, followed by legal and compliance.
HR is the linchpin that turns AI policy from hypothetical into reality, Ray says.
“When the rules meet reality, if they haven’t been tested against what employee workflows and expectations are, people won’t use it,” he notes. “CHROs understand how AI is going to be used.”
They also represent the function that "beats to the collective heartbeat of its people," which will be key to driving adoption.
“We can’t forget,” Ray says, “the human part of the equation is the hardest one to manage and also the most critical to protect.”