A Worker-Resistant Approach to AI Is Harming Our Workforce, Economy, and Civil Rights
By Kya Hector
While the rise of artificial intelligence (AI) in the workplace brings potential benefits such as increased efficiency, workers are also vulnerable to the harms posed by these new technologies. We must take intentional steps to protect the civil rights of workers and ensure that they are able to reap the benefits of technological innovation, just as business leaders do.
The Biden-Harris administration has already taken steps to address civil rights issues posed by the use of AI and workplace technologies, including through the White House’s Blueprint for an AI Bill of Rights, the Department of Labor’s AI Principles for Developers and Employers, and the EEOC’s Artificial Intelligence and Algorithmic Fairness Initiative. These documents and initiatives emphasize the need to center worker empowerment and to include workers in the development and deployment of such technologies in the workplace. While these efforts represent progress, more must be done to ensure workers are not unjustly exploited while corporations continue to benefit from their labor. Without immediate intervention, negative health outcomes, non-diverse hiring pools, and an unprepared workforce will continue to harm workers and the economy.
A worker-resistant approach to incorporating AI in the workplace restricts the talent pool that companies can draw from in the first place, as AI algorithms pick up on the bias already apparent in our hiring patterns. AI hiring software has been shown to penalize women candidates in the tech field, candidates with disabilities, and candidates of color. Reliance on AI hiring without human oversight could cement our existing hiring biases into exclusionary algorithms. Combined with the recent attacks on DEI initiatives across the country, biased algorithms will create formidable barriers to employment for marginalized populations. These factors will create a workforce that does not reflect the rich diversity that exists in the United States, hampering the potential of companies and violating the civil rights of prospective workers.
Furthermore, a worker-resistant approach to AI creates unhealthy work environments for employees. Some estimates report that the number of large firms monitoring their employees has doubled since the beginning of the COVID-19 pandemic. With surveillance ranging from monitoring keystrokes to retaining full access to worker systems, it is inevitable that workers feel more anxiety and pressure on the job. Inside Amazon warehouses, for example, workers wear devices that track their productivity through an algorithm, and those who underperform according to the algorithm are sent automated dismissal notices. These workers also receive real-time warnings for being “off task” for too long; some have even reported being unable to use the bathroom without risking a warning. This surveillance has been linked to negative health outcomes in a study of Amazon warehouse workers, which reported that the increased monitoring and focus on speed have led to higher rates of burnout, exhaustion, and injury. This increased pressure can present higher risks for disabled and pregnant workers, who for years have lacked protections to ensure their health and safety on the job. The Leadership Conference and our coalition partners continue to advocate for policy and legislative changes to protect workers, including the Pregnant Workers Fairness Act (PWFA), which went into effect last year. Even so, more must be done to address the potential civil rights harms presented by the use of emerging technologies.
Though there are many potential dangers for workers associated with AI, unionization efforts have also created opportunities for workers to benefit from it. Unions and collective bargaining can be instrumental in easing the transition for current workers while ensuring they have a voice in how AI is implemented in the workplace. This is demonstrated in the contract that SAG-AFTRA reached with the AMPTP, which ensures that performers consent to, and are compensated for, the use of AI to replicate their work. Unions have also organized around workplace surveillance and won workers greater job security through collective bargaining, giving workers a say in how technology is used in their place of work.
Additionally, advocates are working together to identify principles and standards to guide the use of AI and technology and protect the civil rights of workers. In 2020, a coalition of civil rights and tech policy organizations published the Civil Rights Principles for Hiring Assessment Technologies in an effort “to guide the development, use, auditing, and oversight of hiring assessment technologies, with the goals of preventing discrimination and advancing equity in hiring.” In 2022, the Center for Democracy & Technology (CDT), The Leadership Conference, and a coalition of national civil rights and workers’ rights organizations published the Civil Rights Standards for 21st Century Employment Selection Procedures, a detailed set of policy recommendations on the tools and methods that employers use to recruit and assess workers.
We still have more work to do to ensure workers also benefit from new technology and that its use does not contribute to greater occupational segregation or deepen wage gaps for workers of color, women, LGBTQI+ workers, workers with disabilities, and others. Reports predict a 50 percent hiring gap for AI-related positions, as many workers lack the knowledge needed to work in AI. With few programs equipped to teach the skills needed to work with AI, workers have limited opportunities to gain those skills, and the programs that do exist are often expensive and difficult for working adults to take part in. But we can get ahead of this for the future workforce by ensuring AI education is included in our K-12 and higher education curricula. AI has the potential to help our communities, but if workers aren’t equipped to successfully enter the future of work, they will not reap its benefits.
The Leadership Conference and its Center for Civil Rights and Technology will continue to advocate for concrete rules of the road that protect workers. Any AI tools used by employers to screen or surveil employees must undergo sufficient and transparent testing to ensure fair, equitable, and unbiased results and to confirm they will not produce inequitable outcomes for historically disadvantaged groups. Further, regular auditing requirements must be put in place to ensure these systems continue to function in a non-discriminatory manner, and Congress must empower relevant federal agencies to carry out rulemaking and enforcement.
Kya Hector was a summer 2024 undergraduate intern at The Leadership Conference on Civil and Human Rights.