Unions are negotiating who gets notified before a new scheduling algorithm goes live, who sits on tech committees, and what limits exist on electronic monitoring and surveillance technologies.
An inventory of union contracts by the UC Berkeley Labor Center shows that collective bargaining is already being used to give workers a voice in whether automated management systems improve jobs or degrade them.
Workers in several sectors have already won contracts that constrain how monitoring data can be used for attendance, performance evaluation, discipline or interference with protected activity.
The policy signal
In April, OpenAI published an industrial policy blueprint calling for formal worker co-governance of automation deployment across U.S. employers. “Give workers a voice in the AI transition to make work better and safer, including a formal way to collaborate with management to make sure AI improves job quality, enhances safety and respects labor rights,” the blueprint states.
While OpenAI is a tech vendor with business interests of its own, it’s notable that the organization building some of the world’s most powerful automation tools is calling for worker governance as a policy baseline.
OpenAI’s call for worker co‑governance parallels a still‑pending legislative effort in Congress, according to a brief from law firm Fisher Phillips. The No Robot Bosses Act, introduced in the Senate in 2023, would bar employers from relying solely on automated decision systems for hiring, scheduling, pay and termination decisions, and would require human review and bias testing of those tools.
Read more: That AI notetaker could be your next compliance problem
The employee signal
About two‑thirds of U.S. workers cite risk associated with emerging technology as a top macro concern, according to MetLife’s 24th Annual Employee Benefit Trends Study, released earlier this year. That puts technology risk alongside mental health and geopolitical instability as pressures workers are navigating right now.
As companies move faster on automation, the MetLife study finds that feeling connected at work and using human-centered skills remain the strongest drivers of workforce performance. Employees who feel connected are 25% more productive and show 15% stronger retention.
Findings from the Washington Center for Equitable Growth, a research and grantmaking organization, document the forms of automated management that workers are pushing back against: algorithmic scheduling, pace‑setting software and continuous performance monitoring. These were identified as pain points across sectors including logistics, healthcare, retail and financial services.
According to the Washington Center for Equitable Growth, many union contracts now include advance notice requirements when employers use automated management and surveillance tools, and some agreements include limits on how these tools affect workers’ job conditions. These provisions often emerged from workplace conflicts and grievances over the introduction of algorithmic tools.
Read more: AI’s $130M lobbying blitz hands HR the real AI compliance burden
What HR leaders should do now
For organizations interested in following OpenAI’s suggestions, there are a few areas for HR leaders to consider.
First, map what’s already deployed. Many organizations have more algorithmic management tools in place than HR leaders realize. These are often adopted at the department level without central visibility or policy.
Second, build a formal input process before the next deployment. This requires a defined protocol addressing who gets consulted, at what stage and what criteria determine whether a tool meets the bar for job quality.
Third, set explicit limits on harmful uses. The OpenAI industrial‑policy framework names workload intensification, narrowed autonomy and undermined scheduling and pay as areas that HR leaders can operationalize through written policies, vendor contracts and manager guidance.
“There is also a risk that the economic gains concentrate within a small number of firms like OpenAI, even as the technology itself becomes more powerful and widely used,” according to the blueprint. “Workers using AI might well agree that it’s increasing their productivity without believing they’re seeing the benefits.”