It’s evident that Artificial Intelligence (AI) is revolutionizing Human Resources (HR) in countless ways: it streamlines recruitment and talent acquisition, improves candidate experience, enhances employee onboarding and training, enables predictive analytics for employee engagement and retention, and increases the efficiency of operations such as employee assessment and payroll management.

Yet, do you truly understand how your company is utilizing AI? Have you ever scrutinized the nature of the algorithms you are buying? 

Imagine if AI tools discriminated against certain genders or age groups, if chatbots exhibited aggressive behaviour, if your tools violated fundamental rights, or if the black-box nature of the algorithms prevented you from understanding hiring decisions.

These issues aren’t hypothetical—they’ve happened. 

iTutor Group settled a lawsuit for unjustly rejecting candidates, Amazon’s AI showed bias against female applicants, Google faced criticism for tagging black people as “gorillas” and CVS was sued for using AI facial recognition in screening job applicants, just to name a few. 

Could similar errors be lurking in your company, unnoticed?

Consider Amazon’s case: the machine-learning platform examined a decade’s worth of resumes from a male-dominated sector and subsequently favoured male resumes over female ones. Are your AI tools properly trained?

Programming errors and biases in AI algorithms can lead to unfair hiring practices, biased assessments, and unequal opportunities for diverse employees. Relying solely on AI for decisions may result in culturally mismatched hires and disengaged employees. Moreover, as AI-driven data collection expands, it’s crucial to securely store and ethically utilize data to protect privacy rights.

Such failures not only tarnish your HR policies – preventing you from attracting talented candidates – but also carry reputational and legal risks, with penalties reaching 7% of your global annual turnover under the new EU AI Act.

By the way, do you know how your company will be impacted by the EU AI Act? If not, you should. If you use AI, you accept the responsibility to adhere to all regulations, assess potential consequences, limit liabilities, and understand the risks involved.

What needs to be done:

Strategic oversight: Management must understand how AI is used and define a robust ethical framework, compliance rules, and operational guidelines (for instance, have you decided whether to disclose to candidates how AI is used in the process and what it measures?).
Risk assessment: With the EU AI Act in effect, risk assessments are imperative. These will lead to changes in decision-making processes, the creation of governance structures, the renegotiation of contracts, and the development of new roles and procedures.
Ensure fairness and diversity: Scrutinize the data used by AI, examine whether the factors it weighs promote diversity, foster a diverse HR team to identify and address biases, monitor AI outputs rigorously (see the sketch after this list), and maintain human involvement to establish meaningful connections with candidates and employees and uphold organizational values.
Comprehensive training: Foster a widespread understanding of, and compliance with, these standards. Assuming employees already know the rules can lead to significant oversights, as demonstrated by Samsung’s 2023 incident in which confidential code was leaked via ChatGPT.
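For teams that want something concrete to start from when monitoring AI outputs, the sketch below shows one simple check: comparing selection rates across candidate groups against the "four-fifths" rule of thumb. The data, group names, and 0.8 threshold are illustrative assumptions only, not legal advice or a complete fairness audit.

```python
# A minimal sketch of one way to monitor AI screening outputs for adverse impact.
# The sample data and the 0.8 ("four-fifths") threshold are illustrative assumptions.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, passed_screening) tuples -> selection rate per group."""
    totals, passed = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            passed[group] += 1
    return {g: passed[g] / totals[g] for g in totals}

def adverse_impact_flags(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the highest rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: (rate, rate < threshold * best) for g, rate in rates.items()}

# Fabricated screening outcomes for illustration:
sample = (
    [("group_a", True)] * 48 + [("group_a", False)] * 52
    + [("group_b", True)] * 30 + [("group_b", False)] * 70
)

for group, (rate, flagged) in adverse_impact_flags(sample).items():
    note = "  <-- review for adverse impact" if flagged else ""
    print(f"{group}: selection rate {rate:.2f}{note}")
```

A check like this does not prove or disprove discrimination; it simply surfaces disparities that should trigger human review and, where needed, legal and technical investigation.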

Embracing AI challenges can be a competitive advantage, but it requires more than technical management—it demands careful consideration of legal, ethical, and operational impacts. 

Let me make it clear: I’m not opposed to disruption. In fact, for the past 8 years, I’ve been advising clients whose business models thrive on disruption. As someone deeply involved in guiding companies through regulatory landscapes, I firmly believe that confronting these challenges directly isn’t just essential—it’s an opportunity to propel businesses forward and cultivate a hiring process characterized by fairness, transparency, and the ability to attract a diverse pool of talented candidates.

Adolfo Mesquita Nunes

Lawyer (AI Compliance)

Partner at Pérez-Llorca

