FOR IMMEDIATE RELEASE
September 13, 2023 4:11 PM
“I am a proud member of the Alphabet Workers Union-CWA, representing over 1,400 workers within Alphabet, Google's parent company. I've driven up to DC from North Carolina to tell lawmakers: when it comes to regulating AI, don’t trust Google, trust Google workers.
I have spent 10 years working as a Google rater, and I am one of thousands of invisibilized “ghost workers” tasked with rating the quality and accuracy of responses from Google’s AI chatbot, Bard. I am paid only $14.50 an hour, even though Google claims that its “extended workforce” earns a minimum of $15.00 an hour. Why? Because Google excludes workers like me from its own minimum standards and benefits. I make less working on Google’s important AI products than my own daughter makes working at a fast food restaurant.
The working conditions for AI workers like me are incredibly strenuous. The time allotted for AI tasks seems to be random. Raters are often given two or three minutes to finish a task that takes 10 minutes to verify accurately. If we take too long, our accounts can be locked for low productivity by an AI on the back end, which means no paid work until the issue is addressed days or weeks later. We are told we can release timed tasks we cannot finish, but too many releases can also lead to lockouts by that same automated system. We are not only building AI tools; we are being managed by AI. Most raters simply rate as accurately as possible in the small amount of time we have, but this leaves dangerous gaps that could misinform or mislead users.
Google refuses to acknowledge us and refuses to sit down at the table with us to talk about our working conditions and the quality of their AI product. This is why I, with the help of fellow union members, wrote a letter to Congress during the first AI hearings earlier this year.
To put it bluntly: raters are underpaid, overstressed, and have low morale. A group of us were fired for being vocal about working conditions, even though Google claims that workers should not face retaliation for organizing. We have since gotten our jobs back with back pay, but the threat of termination is always there. Working conditions affect the quality of our work, and when dealing with AI, a poor-quality product is dangerous. The dangers go beyond workers’ rights. Imagine a future where AI is the primary source of information, spitting out incorrect medical advice, changing historical facts, or misrepresenting candidates’ views. This is the future we are looking at if we do not address the poor working conditions for raters. Poor working conditions will mean a poor-quality product and a dangerous future.
As Google raters and AI workers, we represent the future of work: we both evaluate and are managed by AI algorithms. And if Google can treat us without standards, then we are driving toward a future where any and all workers are managed by AI and are building AI tools without dignified wages and working conditions.
On behalf of Alphabet Workers Union-CWA I implore regulators: speak directly with us, real AI workers, and together we can establish regulations that protect workers and all of us.”