
Adapting existing legal guidelines to the new era of AI technology, the Equal Employment Opportunity Commission (EEOC) recently clarified the responsibilities of employers that use artificial intelligence with respect to applicants and workers with vision impairments. The new technical assistance document, issued on July 26, lays out the reasonable accommodation obligations employers have when using AI to screen candidates or assess worker performance so that they don’t run afoul of the Americans with Disabilities Act (ADA).

Understanding the Basics

The ADA not only prohibits employers from discriminating against applicants and workers with disabilities, but it also mandates that employers offer reasonable accommodations to ensure applicants have equal opportunity during the hiring process and to ensure qualified employees can perform the essential functions of their position. The EEOC’s most recent guidance focuses on the obligations arising when applicants or employees have vision impairments that require accommodation.

The EEOC provides a detailed list of possible accommodations that employers can review as needed when such situations arise. There is a wide range of possible changes to the application process or to the day-to-day workplace that may aid those with vision impairments. For example, assistive technology such as text-to-speech software, or accessible materials such as braille or large print, could serve to satisfy the law’s requirements.

Key Takeaways from the EEOC’s Technical Assistance

Here are some of the key points employers need to know from the technical assistance document.

Employers may need to make a new kind of reasonable accommodation.

The new technical assistance document emphasizes that employers may have an obligation to make reasonable accommodations for applicants and employees with visual disabilities when using decision-making tools involving AI or algorithms. While this might seem like an obvious point to employers well-versed in their legal compliance obligations, it is an increasingly common issue given the rise of AI technology being deployed by employers – and thus one that the EEOC considered worthy of special mention.

AI tools might be unintentionally creating problems.

The EEOC stressed that AI decision-making tools might unintentionally screen out qualified individuals with disabilities in the application process and could negatively impact qualified employees on the job – which could lead to ADA violations. For example, an applicant or employee may have a visual disability that reduces the accuracy of an AI assessment used to evaluate them.

Employers have available options.

The EEOC identified alternative testing formats as a potential solution when AI products cause problems for individuals with disabilities who require accommodation. The alternative testing formats could provide a more accurate assessment of the applicant’s or employee’s ability to perform the position (unless, of course, the alternative creates an undue hardship).

The EEOC has provided a helpful example to explain how employers’ obligations may arise.

The agency’s guidance provided the following example to illustrate how these situations may arise. An employer begins using an AI-fueled algorithm that takes into account the employee’s average number of keystrokes per minute to evaluate productivity. An employee with a vision impairment who uses voice recognition software instead of a keyboard may be rated poorly by the algorithm and thus lose out on a promotion or other job opportunity as a result, which, according to the EEOC, could be an ADA violation.

Employers should take proactive measures to reduce the chance of committing an ADA violation.

The EEOC recommends that employers proactively provide information to applicants and employees about how the AI tool evaluates them, thus alerting those with visual disabilities that the technology might not accurately assess their qualifications. In combination with such disclosure, the agency recommends that employers provide instructions on how the applicant or employee can seek a reasonable accommodation if they believe one is needed.

Using the example above, if the employer had informed its employees ahead of time that they would be assessed partly on the basis of keyboard usage, an employee with a vision impairment would have known to request an alternative means of measuring productivity as a reasonable accommodation. For example, the employer could have deployed an alternative method that takes voice recognition software into account when assessing productivity.

While not carrying the force of law, the new EEOC release is a key signal of how the agency will act.

Technical assistance documents like this one aren’t statutes or formally promulgated regulations and thus do not carry the force of law. However, they can – and will – be cited by agency investigators and lawyers in administrative and civil actions. And judges will carefully consider the information contained in the technical assistance document when weighing decisions before them.

This Guidance Is Part of a Growing Trend

Finally, employers should realize that this guidance does not exist in a vacuum, but is part of a bigger trend. Earlier this summer, the EEOC warned employers using AI to assist with hiring or other employment-related actions that it will apply long-standing legal principles to today’s evolving environment. This followed the EEOC teaming up with the Department of Justice (DOJ), the Federal Trade Commission, and the Consumer Financial Protection Bureau to announce that they would be scrutinizing potential employment-related biases that can arise from using AI and algorithms in the workplace. Indeed, within the past year, the EEOC and DOJ jointly released a pair of guidance documents warning that relying on AI to make staffing decisions might unintentionally lead to discriminatory employment practices. Thus, we fully expect that in the coming weeks, the EEOC, and perhaps some of the other agencies named above, will issue regulations addressing AI’s potential for making decisions based on age, race, and sex in violation of Title VII and the Age Discrimination in Employment Act.

Both Georgia law and the federal laws relevant to your workforce are constantly changing. You can count on Schwartz Rollins to continue to keep you updated on developments that can affect your business. Please contact one of our attorneys at Schwartz Rollins, or our legal assistant, Vicki Perry, at 404.844.4130 if you have questions or would like to discuss how this guidance may affect your hiring and employee evaluation practices.
