Navigating the EU AI Act: Implications for HR and recruitment

The EU AI Act sets strict rules for using AI in HR, covering hiring, evaluations, and workplace decisions. Employers must ensure transparency, fairness, and staff AI literacy, with the first obligations applying from February 2025. Non-compliance risks fines and reputational harm. Proactive steps, such as updating AI policies and maintaining human oversight, are essential for compliance and for building trust in the workplace.

Earlier this year, the EU Artificial Intelligence Act (“AI Act”) entered into force. The AI Act introduces a risk-based legal framework for AI systems. It applies both to companies located in the EU that deploy AI systems and to companies that develop and place AI systems on the EU market, or put AI systems into service in the EU under their own name or trademark, irrespective of where the provider is located. The AI Act’s obligations will become applicable in phases, starting on 2 February 2025.

How the AI Act Applies in Recruitment and Employment

The AI Act classifies two categories of employment-related AI systems as high risk: (i) AI systems intended to be used for the recruitment or selection of individuals, in particular to place targeted job advertisements, to analyze and filter job applications, and to evaluate candidates; and (ii) AI systems intended to be used to make decisions affecting the terms of work-related relationships, the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behavior or personal traits or characteristics, or to monitor and evaluate the performance and behavior of individuals in such relationships.

Therefore, employers deploying AI systems for candidate screening, employee evaluation, and other employment-related decision-making in the EU must take appropriate steps to comply with the AI Act’s requirements for the use of high-risk AI systems. There are, of course, other scenarios in which the use of AI in the workplace could trigger obligations, but these are the most obvious and the most relevant to employers based on current use cases.

Key Obligations

Transparency: Employers must inform candidates and employees about the use of a high-risk AI system in recruitment and employment, explaining how the AI system will function and how decisions will be made. Individuals have the right to request an explanation of the role of the AI system in the decision-making procedure and of the main elements of the decision taken.

Data Management: To the extent employers exercise control over the input data used in high-risk AI systems, they are required to ensure that this data is relevant and sufficiently representative in view of the system’s intended purpose, which helps to prevent discriminatory outcomes.
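For technical teams supporting HR, a minimal sketch of what such a representativeness check might look like is shown below. It assumes a pandas DataFrame of candidate records; the column name, reference shares, and 20% tolerance are illustrative assumptions rather than anything prescribed by the AI Act.

```python
# Minimal sketch of a representativeness check on candidate input data.
# Column name, reference shares, and the 20% tolerance are illustrative
# assumptions, not requirements taken from the AI Act.
import pandas as pd

def representation_gaps(candidates: pd.DataFrame,
                        reference_shares: dict,
                        group_column: str = "gender",
                        tolerance: float = 0.20) -> dict:
    """Return groups whose share of the data deviates from the reference
    share by more than the chosen relative tolerance."""
    observed = candidates[group_column].value_counts(normalize=True)
    gaps = {}
    for group, expected in reference_shares.items():
        actual = float(observed.get(group, 0.0))
        if expected > 0 and abs(actual - expected) / expected > tolerance:
            gaps[group] = round(actual - expected, 2)
    return gaps

# Example with made-up figures; a non-empty result would prompt a closer
# review of the data before it is fed into a screening tool.
df = pd.DataFrame({"gender": ["F", "M", "M", "M", "F", "M", "M", "M"]})
print(representation_gaps(df, {"F": 0.5, "M": 0.5}))
```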

Monitoring: Employers using high-risk AI systems in recruitment and employment must continuously monitor the operation of those systems following the instructions provided by the AI system’s provider, and identify any risks arising from their use.
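By way of illustration only, the sketch below shows one way a deployer’s technical team might log AI-assisted screening outcomes and flag unusual score drift for review. The fields recorded, the drift check, and the threshold are assumptions made for this example; actual monitoring steps should follow the provider’s instructions for use.

```python
# Illustrative sketch of operational logging for an AI-assisted screening
# tool. The fields recorded and the drift threshold are assumptions made
# for the example; real monitoring should follow the provider's
# instructions for use.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_screening_monitor")

def record_outcome(candidate_id: str, model_version: str,
                   score: float, human_reviewed: bool) -> None:
    """Keep a structured record of each AI-assisted screening outcome."""
    log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "candidate_id": candidate_id,
        "model_version": model_version,
        "score": score,
        "human_reviewed": human_reviewed,
    }))

def score_drift_flag(recent_scores: list, baseline_mean: float,
                     threshold: float = 0.15) -> bool:
    """Flag for human review if average recent scores drift noticeably
    from the baseline observed at deployment (threshold is illustrative)."""
    if not recent_scores:
        return False
    return abs(sum(recent_scores) / len(recent_scores) - baseline_mean) > threshold

record_outcome("cand-001", "v1.2", 0.81, human_reviewed=True)
print(score_drift_flag([0.91, 0.88, 0.95], baseline_mean=0.70))  # True
```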

Human Oversight: Employers must ensure appropriate human oversight over the operation of high-risk AI systems for recruitment and employment activities to ensure fairness and accuracy.
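As a minimal sketch, and assuming an AI-assisted screening workflow, the example below shows one way to make human review the final step before any decision is recorded. The data structure and function names are hypothetical; the AI Act does not prescribe a particular implementation.

```python
# Hypothetical human-in-the-loop gate: the AI system only recommends,
# and no decision is recorded as final without a named human reviewer.
# The dataclass fields and function names are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScreeningResult:
    candidate_id: str
    ai_recommendation: str           # e.g. "advance" or "reject"
    ai_score: float
    reviewer: Optional[str] = None   # set once a human has reviewed
    final_decision: Optional[str] = None

def apply_human_oversight(result: ScreeningResult, reviewer: str,
                          reviewer_decision: str) -> ScreeningResult:
    """A human reviewer confirms or overrides the AI recommendation;
    the reviewer's identity is recorded for accountability."""
    result.reviewer = reviewer
    result.final_decision = reviewer_decision
    return result

# Example: the reviewer overrides an automated "reject" recommendation.
r = ScreeningResult("cand-042", ai_recommendation="reject", ai_score=0.38)
r = apply_human_oversight(r, reviewer="hr.manager@example.com",
                          reviewer_decision="advance")
print(r.final_decision)  # "advance" -- the human decision governs
```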

Data Protection Impact Assessment (DPIA): Where a high-risk AI system processes personal data, employers are required to conduct a DPIA. The DPIA should evaluate the potential impact of AI systems on individuals’ rights and freedoms and propose mitigation measures, where necessary. 

AI Literacy: Effective as of 2 February 2025, employers must ensure that staff members and other persons dealing with the operation and use of AI systems on their behalf have a sufficient level of AI literacy, tailored to their technical knowledge, experience, education, and the context in which the AI systems are used. In practice, employers must implement robust, up-to-date AI training programs to meet this requirement. This is a general requirement applicable to any AI system and is not specific to high-risk AI systems.

Workers’ Representatives: Employers using high-risk AI systems in the workplace are required to inform workers’ representatives and the affected workers before deployment. This will likely coincide with existing works council obligations.

If a company is a “provider” of high-risk AI systems for use by deployers (employers) in their HR activities, it will be subject to more stringent “provider” obligations under the EU AI Act. This includes conducting conformity assessments, establishing and implementing a comprehensive risk management system throughout the lifecycle of the AI system, implementing data quality, data governance, and data management requirements, maintaining comprehensive technical documentation of the AI system, providing adequate information to deployers of the high-risk AI system about how to operate the system safely (instructions for use), and implementing post-market monitoring.

Non-compliance with the AI Act can result in complaints, investigations, fines, litigation, operational restrictions and damage to a company’s reputation. The GDPR continues to apply where AI systems process personal data.

Conclusion

Companies that use AI systems in the context of their human resources activities should take proactive steps to review their AI practices in light of the new requirements under the EU AI Act. This is required both to comply with the new law and to build trust with candidates and employees. Employers should implement the necessary compliance measures, such as drafting or reviewing AI governance policies and procedures and ensuring human oversight and transparency. The requirement to ensure AI literacy of staff members will take effect sooner than other obligations and should be prioritized, where possible.
