Federal Agencies Say Employers’ Use of AI and Hiring Algorithms May Lead to Disability Bias: 5 Takeaways | Fisher Phillips

  • Identify the AI and algorithms you use in the hiring process

    In a tight job market with record job openings, many employers are turning to technology to ease staffing burdens. It is estimated that over 80% of all employers use some form of AI or algorithms to help with all types of HR functions. So, the first step to ensuring compliance is to inventory any AI-based tools or algorithms your company uses for HR functions. This is vital, especially in large enterprises, where the use of these tools may not be well known.

    Software programs may use algorithms – sets of step-by-step rules a computer follows to accomplish a task – and artificial intelligence – meaning the computer performs tasks that would usually be done by a person.

    According to the DOJ's guidance, employers may use this technology for purposes such as:

    • To show job vacancies to targeted groups;
    • To decide whether a candidate meets the professional qualifications;
    • To interview candidates online;
    • To administer computer-based tests that measure a candidate’s skills or abilities; and
    • To score candidates’ resumes.

    In the hiring process, algorithmic decision making may include the use of:

    • Chatbots and virtual assistants that ask questions to screen job candidates;
    • Scanners that classify resumes by keywords (see the sketch after this list);
    • Software that monitors and rates employees based on keystrokes;
    • Technology that assesses candidates based on facial expressions and speech patterns; and
    • Testing software that scores candidates based on their personality traits or skills.
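
    To make the resume-scanner example above concrete, here is a minimal sketch, in Python, of how such a keyword screener might work. The keywords, threshold, and sample resumes are all hypothetical and not drawn from any actual vendor's product; the point is only to show how a simple, facially neutral rule ends up making hiring decisions.

    ```python
    # Minimal sketch of a keyword-based resume screener.
    # The keywords and threshold are hypothetical; real vendor tools are
    # far more complex, but the decision logic is similar in kind.

    REQUIRED_KEYWORDS = {"python", "sql", "project management"}
    PASS_THRESHOLD = 2  # advance resumes matching at least this many keywords

    def score_resume(text: str) -> int:
        """Count how many required keywords appear in the resume text."""
        text = text.lower()
        return sum(1 for kw in REQUIRED_KEYWORDS if kw in text)

    def screen(resumes: dict[str, str]) -> list[str]:
        """Return the names of candidates whose resumes pass the threshold."""
        return [name for name, text in resumes.items()
                if score_resume(text) >= PASS_THRESHOLD]

    resumes = {
        "candidate_a": "Led project management for a SQL reporting team.",
        "candidate_b": "Built Python data pipelines; strong SQL background.",
        # Same skills, described differently -- and silently screened out:
        "candidate_c": "Managed delivery of database reporting projects.",
    }

    print(screen(resumes))  # ['candidate_a', 'candidate_b']
    ```

    This mechanical matching is exactly what the agencies are concerned about: a candidate who describes equivalent experience in different words, or whose resume reflects a disability-related gap in employment, can be screened out without any human ever reviewing the application.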

    Employers can use these tools to “save time and effort, increase objectivity, or reduce bias,” but “use of these tools may disadvantage job applicants and employees with disabilities,” the EEOC noted in its technical assistance document.

    Moreover, state legislatures are already grappling with these issues. For example, Illinois recently passed the Artificial Intelligence Video Interview Act, which imposes strict requirements on employers who use software to analyze video interviews, including specific notice requirements, limits on information sharing, and reporting obligations. Washington State is considering similar legislation. Expect many other states and cities to follow suit.

  • Provide reasonable accommodation

    The ADA requires most employers to provide reasonable accommodations to job applicants and employees with disabilities unless it would cause undue hardship.

    According to the DOJ and EEOC, you can set qualification standards that are job-related and consistent with business necessity, but you must provide reasonable accommodations that enable applicants and employees with disabilities to meet those standards, especially if you use AI or algorithms as part of the hiring process.

    Reasonable accommodations are changes you can make to help an applicant with a disability apply for a job. Under the guidelines, you can tell applicants or employees what steps an assessment process includes and ask them if they will need reasonable accommodations to complete it. For example, you can offer specialized equipment or alternative test formats. But you don’t have to lower production or performance standards, eliminate an essential function, or provide an accommodation that would create undue hardship.

    The DOJ noted that existing technical standards, such as the Web Content Accessibility Guidelines (WCAG), provide helpful guidance on how to ensure that website functionality is accessible to people with disabilities, including those who are blind.

    In addition, the EEOC provided a list of accessibility questions for employers to ask software vendors. For example: “Are documents presented to candidates or employees in alternative formats? If so, what formats?”

  • Regularly review programs for potential bias and unintended “screening out”

    Be sure to review your hiring tools for potential bias before using them and periodically thereafter, as even technology is not immune to bias. An algorithm may not be intentionally designed to screen out applicants based on a protected category, but because it may be modeled on the qualities of existing top performers, it can unintentionally exclude a disproportionate number of qualified applicants in a protected category. Many technology vendors claim their tools are “bias-free,” but you should take a close look at exactly which biases the technology claims to eliminate. For example, it may focus on eliminating biases related to race, gender, national origin, color, or religion, but not necessarily biases related to disability. Keep in mind that there are many types of disabilities, and hiring technologies may impact each one differently, the DOJ noted. A minimal sketch of one common first-pass check follows below.
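
    To make that review concrete, here is a minimal sketch, in Python and using entirely hypothetical applicant-flow numbers, of the “four-fifths rule” comparison from the EEOC's Uniform Guidelines on Employee Selection Procedures, a common first-pass screen for disparate impact. The group labels below are illustrative assumptions, not a prescribed methodology; the Uniform Guidelines formally address race, sex, and ethnicity, and the same arithmetic is often applied informally to other groups, including disability.

    ```python
    # Minimal four-fifths rule check, a common first-pass screen for
    # disparate impact. All counts below are hypothetical; a real review
    # would use actual applicant-flow records, ideally via an independent
    # auditor. A ratio below 0.8 is a flag for closer review, not a
    # legal conclusion.

    def selection_rate(selected: int, applicants: int) -> float:
        """Fraction of a group's applicants that the tool advanced."""
        return selected / applicants

    # Hypothetical applicant-flow counts for two groups.
    groups = {
        "disclosed_disability": {"applicants": 120, "selected": 30},
        "no_disclosed_disability": {"applicants": 880, "selected": 400},
    }

    rates = {name: selection_rate(g["selected"], g["applicants"])
             for name, g in groups.items()}
    highest_rate = max(rates.values())

    for name, rate in rates.items():
        impact_ratio = rate / highest_rate  # ratio vs. most-selected group
        flag = "REVIEW" if impact_ratio < 0.8 else "ok"
        print(f"{name}: selection rate {rate:.1%}, "
              f"impact ratio {impact_ratio:.2f} [{flag}]")
    ```

    The same selection-rate comparison is essentially what an independent “bias audit” formalizes, a point that matters for the New York City law discussed below.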

    As an example, the EEOC said employers who administer pre-employment tests that measure personality, cognitive, or neurocognitive traits may want to consult with neurocognitive psychologists to help identify and correct ways in which the tests could unintentionally screen out people with autism or with cognitive, intellectual, or mental health-related disabilities.

    In addition to monitoring AI tools for potential disability-related bias, you should ensure that your selection process does not negatively impact job applicants based on other protected characteristics. For example, if your system automatically rejects applicants who live more than 20 miles from the workplace, you may be unintentionally limiting the ethnic and racial diversity of the applicants you consider, given area demographics (a minimal sketch of such a rule follows below). Or, if the AI tool only considers candidates from certain schools or with minimum education criteria, it may unintentionally exclude diverse candidates with non-traditional work experience who would otherwise be qualified for the position.
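
    To illustrate, here is a minimal sketch, in Python, of the kind of distance rule described above. The workplace location, applicant coordinates, and 20-mile cutoff are all hypothetical; the point is that the rule never mentions a protected class, yet its effect depends entirely on who lives where.

    ```python
    # Minimal sketch of a distance-based auto-reject rule.
    # Coordinates and the cutoff are hypothetical.

    from math import radians, sin, cos, asin, sqrt

    EARTH_RADIUS_MILES = 3958.8

    def haversine_miles(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in miles."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = (sin((lat2 - lat1) / 2) ** 2
             + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
        return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

    WORKPLACE = (41.8781, -87.6298)  # hypothetical: downtown Chicago
    MAX_COMMUTE_MILES = 20.0

    def passes_distance_rule(applicant_latlon):
        dist = haversine_miles(*WORKPLACE, *applicant_latlon)
        return dist <= MAX_COMMUTE_MILES

    # Anyone filtered out here should still be counted in the applicant-flow
    # data used for the selection-rate check sketched earlier, so the rule's
    # demographic effect is measured rather than hidden.
    print(passes_distance_rule((42.0451, -87.6877)))  # ~12 miles: True
    print(passes_distance_rule((41.5250, -88.0817)))  # ~34 miles: False
    ```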

    Consider obtaining (or requiring vendors to obtain) independent bias audits of all AI and algorithm-based tools. A law recently passed in New York City requires employers to obtain a “bias audit” for automated employment decision tools: an impartial evaluation by an independent auditor that tests, at a minimum, the tool’s disparate impact on individuals based on their race, ethnicity, and gender. The law also contains strict notice and opt-out requirements and takes effect on January 1, 2023. Expect other states and major cities to adopt similar bias-audit requirements.

  • Use gamification software with caution

    Do you use “games” as part of the hiring process? Games can be used to assess personality traits and job-related skills while making the hiring process more engaging and fun for candidates.

    The DOJ and EEOC have warned, however, that employers must ensure that these games assess only job-related skills and abilities, rather than a candidate’s sensory or manual impairments or speaking skills.

    For example, a blind candidate should not be automatically eliminated because they cannot play a particular online game that measures memory. They may well be able to perform the essential functions of the job, and an alternative assessment should be offered.

    “If a test or technology screens out someone because of a disability when that person can actually do the job, an employer should instead use an accessible test that measures the candidate’s job skills, not their disability, or provide further adjustments to the hiring process so that a qualified individual is not screened out due to a disability,” the DOJ said.

  • Expect more guidance and regulations

    EEOC Chair Charlotte Burrows said the agency is “committed to helping employers understand how to benefit from these new technologies while complying with labor laws.” In October 2021, the agency launched its Artificial Intelligence and Algorithmic Fairness Initiative, which addresses how the use of these technologies in the workplace can contribute to systemic employment discrimination. Since this is a priority for the EEOC, you can expect more guidance on the topic from the agency.

    Additionally, states and cities are beginning to regulate employers’ use of AI tools. As noted above, New York City employers using AI technology will face significant compliance obligations starting in 2023, Illinois already regulates video interviewing, and California’s Fair Employment and Housing Council is also weighing potential AI regulations. Many other states are considering laws prohibiting the use of discriminatory algorithms in areas such as insurance and banking, and it would be easy to extend those laws to cover employers as well.