The rapid news cycle concerning the workforce and AI is putting pressure on HR professionals to address tech topics with lightning speed.
According to McKinsey Global Institute’s July report, Generative AI and the future of work in America, the U.S. is in an era of workforce development accelerated by generative AI. This will require most employers to expand their hiring practices, with the report predicting that 30% of work hours will shift from being done by humans to automated systems in the next six to seven years.
The report forecasts that 11.8 million workers may need to adapt their current line of work by 2030. Currently, many of these people work in office support, customer service and food services. “Employers will need to hire for skills and competencies rather than credentials, recruit from overlooked populations (such as rural workers and people with disabilities) and deliver training that keeps pace with their evolving needs,” researchers write.
Organizational leaders must balance steadiness with the adoption of culture-changing new technology. Stakeholders face significant questions, such as determining which jobs AI will reduce or enhance. Meanwhile, HR relies on systems and services that tech companies are now delivering with new layers of machine learning. And with shorter-than-ever planning horizons, large-magnitude decisions are leaving some companies, and their HR and tech teams, feeling rushed.
7 top AI priorities for HR leaders
To help employers navigate this rapidly changing environment, we talked to thought leaders and founders from the C-suite, product development, and legal and compliance about what HR should focus on today when it comes to AI. Here’s what we found out:
Jordi Romero, founder and CEO of Factorial
To effectively prepare for AI regulations amid the changing employment landscape, HR leaders should adopt a cautious yet proactive approach. Embracing new AI-powered technologies requires avoiding hype-driven predictions and instead fostering a curious and open mindset. HR leaders are encouraged to experiment with these technologies on a smaller scale, such as within their own HR teams, to gauge their practicality and benefits.
Malcolm Burenstam Linder, CEO and co-founder of Alva Labs
Regulation isn’t trying to rig the system against AI—we’ve seen examples of flawed execution, and regulation will reduce those instances. As with so many other technologies, AI models need to be auditable; if they can demonstrate that they can accurately predict job success and safeguard users’ data and privacy, then there’s no reason to fear them. I think we can expect these early examples of legislation to be catalysts for global change, influencing other regions across the world as they face up to what using AI in hiring really means.
Chris Briggs, SVP identity and head of product at Mitek
The first area related to AI that leaders working across business functions, including HR, should focus on is mitigating bias. Even the type of AI "bias" that makes headlines isn't usually intentional; more often, it is a case of low or inconsistent accuracy across demographic boundaries. Ensuring that your AI systems are trained on a diverse dataset that reflects a diverse talent pool is key to mitigating this risk, which can unintentionally lead to discrimination. Additionally, transparency should be prioritized with the goal of building trust in the technology and in the organization as a whole.
Asha Palmer, SVP of compliance at Skillsoft
Setting up a risk assessment should be high on your priority list. While the use of AI is inevitable, it'll be important to not only track its use but also assess the risks involved. We can't take a "set it and forget it" approach; assessing AI's use and its impact on the business will be an ongoing process. Similarly, we cannot deploy AI in the workplace without first setting up training and learning opportunities that help both leaders and employees understand AI's power and pitfalls. Whether you're onboarding new talent or reskilling the talent in place, learning and development should go hand in hand with the deployment of the technology.
Amanda Monroe, labor and employment attorney at Michelman & Robinson
Companies need to be thoughtful and not fully erase the human element from human resources. There will remain decision-making processes that are generally more of an art than a science that may be difficult (at first) for AI to seamlessly replicate. As we are already beginning to see, AI’s success rate will largely depend on the “brain” or “library” from which it is pulling. Thus, this is a great time for employers to ensure that their internal forms, policies and processes are fully compliant, up-to-date and accurate such that the integration of AI provides reliable information and data.
Andreea Wade, VP of product strategy at iCIMS
Keep in mind that AI should be used as a productivity aid to get better-quality starting points, to give better context for decisions, to improve experiences and to reduce time and streamline processes. The concern comes in when the processes and technologies used aren’t ethical or responsible. All decisions should begin and end with human decision points. Technology vendors and tools must be technically robust and safe, inclusive and fair, private and secure, transparent and accountable.
Vladimir Polo, CEO of AcademyOcean
A top priority is to train employees on how to use AI responsibly. As artificial intelligence becomes more incorporated into all parts of work, it is critical to provide AI literacy training that helps people grasp AI concepts, advantages and limits. Employees should be trained on the possible biases and ethical implications of working with AI systems. By fostering responsible AI use and building a culture of AI awareness, organizations can address risks, create confidence in AI systems and help workers embrace AI as a useful tool in their everyday work.
HR tech considerations checklist
- Experiment within your own HR teams to test new tech first – Jordi Romero, founder and CEO of Factorial
- Investigate how AI models are auditable and demonstrate accuracy – Malcolm Burenstam Linder, CEO and co-founder of Alva Labs
- Prioritize transparency with the goal of building trust in technology – Chris Briggs, SVP identity and head of product at Mitek
- Put a risk assessment high on your priority list and keep eyes on it – Asha Palmer, SVP of compliance at Skillsoft
- Check your internal data and integrate AI based on reliable information – Amanda Monroe, labor and employment attorney at Michelman & Robinson
- Insist that all decisions begin and end with human decision points – Andreea Wade, VP of product strategy at iCIMS
- Introduce training within your organization to teach AI responsibility – Vladimir Polo, CEO of AcademyOcean