Navigating AI & Automated Systems: DOL’s Guidelines and Legal Implications
- August 27, 2024
- Posted by: Selen Warner
- Category: Blog
Earlier this year, the U.S. Department of Labor’s (DOL) Wage and Hour Division (WHD) issued new guidance clarifying employer responsibilities under federal labor laws concerning the use of automated systems and artificial intelligence (AI). The Field Assistance Bulletin No. 2024-1, titled “Artificial Intelligence and Automated Systems in the Workplace under the Fair Labor Standards Act and Other Federal Labor Standards,” aims to assist WHD field staff in applying the Fair Labor Standards Act (FLSA) and related laws to the increasing integration of automated systems and AI technologies.
The guidance emphasizes several critical areas where federal labor laws intersect with automated systems and AI deployments. One key area highlighted is the accurate tracking of hours worked. Employers utilizing AI to monitor work time, breaks, or geographic location must ensure compliance with federal minimum wage, overtime, and other wage requirements. The guidance warns that these technologies, while beneficial for productivity tracking, may sometimes inaccurately count hours worked, potentially leading to violations if not properly managed.
Moreover, the bulletin stresses that productivity-measuring technologies, such as those tracking keystrokes or internet usage, do not replace the obligation to accurately account for hours worked under the FLSA. Employers are reminded of their duty to exercise diligence in tracking employees’ hours, including waiting times and short breaks, and to oversee AI systems that generate predictive time entries to ensure those entries reflect hours actually worked.
According to the guidance, human oversight is necessary to ensure that automated systems and AI calculate wages accurately and pay employees in accordance with minimum wage and overtime laws. Systems that adjust pay rates based on factors such as demand fluctuations or worker efficiency must be carefully monitored to prevent wage law violations.
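To make that oversight duty concrete, the following is a minimal, hypothetical sketch in Python of the kind of weekly payroll review a human might run on top of an automated timekeeping system: it compares AI-logged hours against human-reviewed compensable hours and flags weeks where the effective hourly rate falls below the federal minimum wage or the FLSA overtime premium appears unpaid. The record fields, thresholds, and function names are illustrative assumptions rather than anything prescribed in the DOL bulletin, and the overtime check simplifies the “regular rate” calculation to a single base hourly rate.

```python
from dataclasses import dataclass

FEDERAL_MIN_WAGE = 7.25    # federal minimum wage, 29 U.S.C. § 206(a)(1)
OVERTIME_THRESHOLD = 40.0  # weekly hours before the FLSA 1.5x premium applies


@dataclass
class WeeklyPayRecord:
    employee_id: str
    ai_logged_hours: float    # hours the automated system recorded as worked
    compensable_hours: float  # hours after human review adds waiting time, short breaks, etc.
    gross_pay: float          # what was actually paid for the workweek
    base_rate: float          # agreed straight-time hourly rate


def flag_flsa_issues(record: WeeklyPayRecord, tolerance: float = 0.25) -> list[str]:
    """Return human-readable flags for a payroll reviewer; an empty list means no issue found."""
    flags = []

    # 1. Predictive or automated time entries that undercount compensable time.
    if record.ai_logged_hours + tolerance < record.compensable_hours:
        flags.append("AI-logged hours fall short of reviewed compensable hours")

    # 2. Effective hourly rate below the federal minimum wage.
    if record.compensable_hours > 0:
        effective_rate = record.gross_pay / record.compensable_hours
        if effective_rate < FEDERAL_MIN_WAGE:
            flags.append(f"effective rate ${effective_rate:.2f}/hr is below the federal minimum wage")

    # 3. Overtime premium missing for hours over 40 in the workweek.
    overtime_hours = max(0.0, record.compensable_hours - OVERTIME_THRESHOLD)
    if overtime_hours > 0:
        minimum_due = (OVERTIME_THRESHOLD * record.base_rate
                       + overtime_hours * record.base_rate * 1.5)
        if record.gross_pay + 0.01 < minimum_due:
            flags.append(f"pay ${record.gross_pay:.2f} is below the ${minimum_due:.2f} owed with the 1.5x overtime premium")

    return flags


if __name__ == "__main__":
    week = WeeklyPayRecord("E1042", ai_logged_hours=43.0, compensable_hours=45.5,
                           gross_pay=652.50, base_rate=14.50)
    for issue in flag_flsa_issues(week):
        print(issue)
```

In practice the regular rate must also reflect nondiscretionary bonuses, shift differentials, and similar payments, and state wage laws may impose stricter requirements, so a check like this is a starting point for human review rather than a compliance guarantee.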
The guidance also addresses the administration of leaves under the Family and Medical Leave Act (FMLA), cautioning that automated systems handling leave requests must comply with FMLA requirements and refrain from soliciting excessive information from employees. There may be risks of systemic violations if AI or automated systems are not appropriately calibrated to manage employee leave in accordance with federal law.
The bulletin also warns against the use of AI technologies as lie detectors, noting that technologies that analyze biometric data such as eye movements, voice patterns, and other physical responses to detect deception may violate the Employee Polygraph Protection Act (EPPA) unless they fall within the law’s limited exemptions.
Recently, in the Commonwealth of Massachusetts, a legal dispute arose over CVS Health Corp. and CVS Pharmacy Inc.’s use of artificial intelligence tools in job interviews, which are alleged to function as “lie detectors.” The lawsuit was filed by a candidate who applied for a supply chain position with CVS. The case centers on CVS’s use of video-interview technology developed by HireVue, Inc. as part of its application process. During the recorded video interviews, applicants are prompted to respond to questions such as “What does integrity mean to you?” and “Tell me about a time that you acted with integrity.” After recording these responses, HireVue uploads the video to a third-party platform called Affectiva, which uses AI to scrutinize applicants’ facial expressions, eye contact, voice intonation, and inflection.
This case highlights the dual nature of AI as a powerful tool for HR professionals, offering potential benefits but also posing significant risks that can lead employers into legal challenges. By using AI to analyze facial and vocal cues to assess candidate integrity, companies like CVS aim to streamline decision-making and improve hiring outcomes. However, the lawsuit illustrates how such technologies can inadvertently lead to legal disputes, particularly when they intersect with laws designed to protect applicants’ rights and ensure fair employment practices.
Such assessments can be culturally biased, disadvantaging candidates from backgrounds where behaviors such as direct eye contact differ from Western norms. For instance, individuals from some Eastern cultures may engage in less direct eye contact, which an AI system could misread and score against them. This raises significant concerns about the fairness and potential discriminatory effects of using AI tools to evaluate “cultural fit” in hiring decisions. Employment attorneys and AI experts point out that these technologies, while promising efficiency and objectivity, risk perpetuating biases based on race, gender, or disability, characteristics protected under federal anti-discrimination law, including Title VII of the Civil Rights Act of 1964.
The U.S. Equal Employment Opportunity Commission (EEOC) has prioritized addressing discriminatory hiring practices involving AI in its Strategic Enforcement Plan for fiscal years 2024-2028. In August 2023, the EEOC resolved its first “AI bias” lawsuit, which involved allegations that an employer used AI to exclude candidates aged 55 and older from consideration. Such cases highlight significant legal risks for companies deploying these technologies. While some argue AI can enhance efficiency and objectivity in hiring, others caution that without careful oversight, it may amplify existing biases and hinder efforts toward diverse and inclusive workplaces.
As companies increasingly adopt AI and automated systems in their human resources operations, the DOL’s guidance serves as a critical reminder of the legal obligations and potential risks involved. Employers are advised to implement robust oversight mechanisms, regularly review their AI policies and practices, and stay informed about evolving regulatory requirements to mitigate compliance risks effectively. By proactively addressing these considerations, businesses can leverage AI technologies while upholding fair labor practices and ensuring compliance with federal and state labor laws and standards.