European employers are rapidly adopting workforce technology, but many are lagging in AI compliance, putting workforce management and regulatory obligations at risk. That’s the stark finding from global employment and labor law practice Littler’s 2025 European Employer Survey, which surveyed over 400 HR professionals, in-house lawyers and business executives across 14 European countries this fall.
The contradiction
Seventy-one percent of European employers have reassessed or are actively reassessing job responsibilities due to the implementation of AI. On the high end, in Italy, that figure reaches 79%. On top of that, more than a quarter have reduced hiring or cut jobs as a direct result of AI deployment.
Fewer than 20% of organizations say they are “very prepared” for the EU AI Act. Its provisions, which cover most workplace AI applications, take effect in August 2026, less than nine months away. The law requires high-risk AI systems to meet strict standards, including high-quality input data to prevent discrimination, proper human oversight, robust security and clear traceability of information.
At the other end of the scale, a fifth of employers report being “not at all prepared,” the same share as a year ago. Despite twelve months passing and the deadline drawing closer, overall readiness has not improved.
“AI is transforming how work gets done, but most employers haven’t built the compliance infrastructure to sustain that transformation legally,” says Deborah Margolis, Littler senior counsel in the U.K.
What compliance actually requires
The EU AI Act imposes substantial obligations on employers who deploy “high-risk” AI systems, which include recruitment tools, performance evaluation systems, task allocation algorithms and employee monitoring technologies.
Employers must use AI according to instructions, assign humans to oversee AI decisions, monitor systems for risks, keep usage logs and conduct fundamental rights impact assessments.
Engaging in prohibited AI practices, such as manipulative or discriminatory systems, can result in fines of up to €35 million, or 7% of a company’s global annual turnover from the previous year, whichever is higher. Other breaches, for example failing to maintain proper risk management, documentation or human oversight for high-risk AI, can lead to fines of up to €15 million, or 3% of global annual turnover, whichever is higher.
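The “whichever is higher” rule means the exposure scales with company size. A minimal sketch of that arithmetic (the thresholds are from the figures above; the function name and inputs are illustrative, not part of the Act):

```python
def max_fine(turnover_eur: float, prohibited_practice: bool) -> float:
    """Illustrative ceiling on an EU AI Act fine: the fixed cap or the
    turnover-based cap from the prior financial year, whichever is higher."""
    if prohibited_practice:
        cap, pct = 35_000_000, 0.07  # prohibited AI practices
    else:
        cap, pct = 15_000_000, 0.03  # other breaches, e.g. high-risk obligations
    return max(cap, pct * turnover_eur)

# For a company with €1bn global annual turnover, a prohibited practice
# exposes it to max(€35m, 7% of €1bn) = €70m, not the flat €35m cap.
```

For small firms the fixed cap dominates; past roughly €500 million in turnover, the percentage takes over.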

The preparation gap
According to Littler, among the 80% of employers who say they are “at least somewhat prepared” for the EU AI Act, most have not completed foundational compliance steps:
- Only 51% have reviewed or updated AI use policies
- Only 47% have identified which compliance obligations apply to them
- Only 40% have conducted training for relevant teams
- Only 34% have conducted internal audits of AI use
- Only 29% have assigned internal ownership for compliance
- 10% have taken no steps at all
“Our survey suggests that there is currently a lack of preparedness for the EU AI Act, which is a concern given the scale of the law’s compliance obligations and the significant penalties for non-compliance,” Margolis notes.
The works council factor
AI isn’t just a regulatory compliance issue; it’s becoming a labor relations flashpoint. Forty-four percent of employers report that AI has arisen in discussions with trade unions or works councils over the past year. In Germany, that figure reaches 52%.
Works councils want to understand AI’s impacts on company strategy, employee privacy and potential job displacement. In jurisdictions with strong codetermination rights like Germany and the Netherlands, implementing AI systems that alter job responsibilities likely triggers mandatory consultation requirements.

“There have been numerous works council disputes related to remote work policies, particularly in jurisdictions like the Netherlands where employees don’t have a ‘right’ to work from home,” says Dennis Veldhuizen, a Littler partner in the Netherlands.
He adds that works councils are playing an increasingly active role in discussions with management about AI deployment.
Employers who implement AI-driven workforce changes without adequate consultation face potential legal challenges, implementation delays and damaged employee relations.
What ‘very prepared’ looks like
The 18% of employers who say they are very prepared for the EU AI Act, mostly large companies, have moved from updating policies to implementing operational changes. They’ve conducted comprehensive AI audits, assigned cross-functional task forces, engaged external expertise and established documented human oversight protocols.
“It’s more critical than ever that businesses identify their obligations, audit their current exposure, conduct training and assign a cross-functional task force to oversee these efforts,” Margolis emphasizes.
To assess whether an AI-driven workforce strategy is legally sustainable, the Littler report suggests five critical questions:
- Can you list every AI system currently used in HR processes?
- Have you identified which systems qualify as “high-risk” under the EU AI Act?
- Who is clearly responsible for AI Act compliance?
- Have you consulted with works councils about AI implementation?
- Can you demonstrate human oversight of AI-driven employment decisions?
The survey data suggests fewer than one in five European employers can answer “yes” to all five questions.
The compounding complexity
AI compliance doesn’t exist in isolation. The survey reveals multiple intersecting pressures for HR leaders across Europe:
Pay Transparency Directive
Only 24% of employers say they are very prepared for the directive’s June 2026 deadline, which will require employers to make pay information visible. Littler notes that few have mapped how to comply with both the directive and the AI Act when using AI for compensation decisions.
U.S.-Europe divergence
For employers with U.S. operations, 79% say they struggle to manage divergent regulatory approaches, especially around inclusion, equity and diversity initiatives required in Europe but considered problematic in the U.S.
Return-to-office monitoring
Sixty-three percent of employers have increased or plan to increase required in-person workdays, often using technology to monitor compliance. But those monitoring tools may themselves qualify as high-risk AI systems.
Ongoing changes
Since this survey was conducted, the regulatory framework has shifted. Recently, the European Commission unveiled its Digital Omnibus package, proposing amendments to ease certain AI Act requirements and potentially extend implementation timelines for high-risk AI systems.
The European Parliament and Council must still approve these changes, which include exemptions for AI used in narrow procedural tasks and adjustments to compliance deadlines. However, the proposals don’t eliminate employers’ fundamental obligations around AI transparency, human oversight and risk management.
Littler’s survey findings underscore that most organizations haven’t yet built the foundational compliance infrastructure these regulations require, regardless of when final deadlines take effect.
“Are you using AI to transform your workforce faster than you’re preparing to comply with AI regulations?” the report implicitly asks.
Nearly three-quarters of European employers are reshaping jobs for AI, but with just 18% fully prepared for the EU AI Act, the answer for many HR leaders is currently yes.