
The Equal Employment Opportunity Commission (EEOC) recently secured another agency victory, procuring a six-figure settlement in its merit lawsuit against iTutorGroup.1 The defendant, iTutorGroup, composed of three integrated companies, hired tutors based in the United States to provide online tutoring services from their homes and other remote locations to students in China.2 The EEOC alleged that iTutorGroup programmed its application software to automatically reject female applicants over the age of 55 and male applicants over the age of 60. As a result of the programming, iTutorGroup excluded more than 200 otherwise qualified applicants in violation of the Age Discrimination in Employment Act. The parties settled the suit for $365,000.
This lawsuit and settlement are indicative of the agency’s targeted initiative to address the use of artificial intelligence (AI) and the potential for discrimination in violation of the various federal laws under its purview. In 2021, the EEOC launched its Artificial Intelligence and Algorithmic Fairness Initiative, aimed at ensuring that the use of software, including AI, machine learning and other novel technologies, in the recruitment and hiring process complies with federal civil rights laws. In furtherance of that goal, the EEOC has aggressively targeted the issue by releasing a number of technical assistance documents.3 On May 12, 2022, the EEOC issued “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees.”4 The guidance “explain[s] how employers’ use of software that relies on algorithmic decision-making may violate existing requirements of Title I of the [ADA] … and provides practical tips … on how to comply with the ADA…”5
Most recently, in May 2023, the EEOC issued a technical assistance document titled “Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964.” This guidance discusses how existing Title VII requirements may apply when assessing adverse impact caused by employment selection tools that use AI.
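One benchmark commonly applied in adverse impact analyses of this kind is the “four-fifths rule” from the EEOC’s Uniform Guidelines: if one group’s selection rate is less than 80% of the most-selected group’s rate, the selection procedure may warrant closer review. The short Python sketch below illustrates the arithmetic; the group labels and applicant counts are hypothetical, invented purely for illustration.

```python
# Illustrative four-fifths rule check for an AI-driven screening tool.
# Group labels and counts are hypothetical examples.

outcomes = {
    "Group A": {"applicants": 200, "selected": 100},  # selection rate 50%
    "Group B": {"applicants": 150, "selected": 45},   # selection rate 30%
}

# Selection rate = selected / applicants for each group.
rates = {g: o["selected"] / o["applicants"] for g, o in outcomes.items()}
highest_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest_rate
    # A ratio below 0.80 is the rule-of-thumb threshold that may
    # indicate adverse impact and call for further analysis.
    status = "possible adverse impact" if impact_ratio < 0.80 else "within threshold"
    print(f"{group}: rate {rate:.0%}, impact ratio {impact_ratio:.2f} ({status})")
```

In this hypothetical, Group B’s impact ratio is 0.60, well below the 0.80 threshold, so an actual audit would flag the tool for deeper review. The four-fifths rule is a screening heuristic, not a legal safe harbor; the EEOC guidance notes that other statistical measures may also be relevant.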
Additionally, the EEOC, the Consumer Financial Protection Bureau, the Department of Justice’s Civil Rights Division and the Federal Trade Commission issued a joint statement “reiterat[ing their] resolve to monitor the development and use of automated systems and promote responsible innovation.”6
Looking Ahead
As companies look toward streamlining and optimizing human resources functions, we can expect increased utilization of AI technology. Currently, nearly one in four organizations uses automation and/or AI to support HR-related activities, and one in four plans to start using, or to increase its use of, automation or AI in recruitment and hiring over the next five years.7
Insurance Implications
When it comes to employment practices liability insurance (EPLI) and AI implications in the workplace, there are several important considerations:
- AI systems used in hiring and human resources can inadvertently introduce bias into decision-making.
- If your company already uses or plans to use AI technology in recruiting, consider conducting audits to confirm there is no unintended disparate impact on a particular demographic group, which could create legal exposure.
- Discrimination and disparate impact claims such as the ones at the center of this matter constitute employment practices wrongful acts under standard EPLI policies.
- Claims brought by the EEOC alleging wrongful employment practices would likewise constitute claims as defined by an EPLI policy. These types of claims should be reported under your EPLI policy, and NFP will assist its clients in doing so.
- As the use of AI becomes more prevalent, along with its concurrent risks as illustrated above, it is likely to become a component in the insurance underwriting process.
- Using an outside vendor for recruiting or HR services may still lead to claims against your company as the actual employer. Inquire about the vendor’s use of AI, its third-party discrimination liability insurance and its professional liability insurance. Additionally, when selecting an AI vendor, ensure that the vendor has conducted its own bias audits.
- The legal and regulatory landscape for AI in the workplace is continually evolving. EPLI policies may need to adapt to changes in laws and regulations related to AI and employment practices. You will need an insurance broker that is well-versed in EPLI and the evolving AI exposure.
Discuss potential coverage for AI-related legal exposures with your NFP account representative.
Questions? Contact:
Matthew G. Schott
Managing Director
Management, Cyber and Professional Liability
NFP Property & Casualty Services Inc.
P: 856.287.1496 | matthew.schott@nfp.com | NFP.com
Moire L. Morón, Esq.
Vice President, Advocacy and Technical Claims
200 Park Ave. | Suite 3202 | New York, NY 10173
P: 404.504.3819 | moire.moron@nfp.com | NFP.com
Jonathan Franznick
SVP, Head of Claims Advocacy
200 Park Ave. | Suite 3202 | New York, NY 10173
P: 212.301.1096 | M: 908.461.1389
jonathan.franznick@nfp.com | NFP.com
Kevin M. Smith
Senior Vice President, Management and Professional Liability
NFP Property & Casualty Services Inc.
M: 201.314.0801 | kevin.m.smith@nfp.com | NFP.com