Krista Pawloski remembers the single defining moment that shaped her opinion on the ethics of artificial intelligence. As an AI worker on Amazon Mechanical Turk, a marketplace that allows companies to hire workers to perform tasks like entering data or matching an AI prompt with its output, Pawloski spends her time moderating and assessing the quality of AI-generated text, images and videos, as well as some factchecking.
Roughly two years ago, while working from home at her dining room table, she took up a job designating tweets as racist or not. When she was presented with a tweet that read "Listen to that mooncricket sing", she almost clicked on the "no" button before deciding to check the meaning of the word "mooncricket", which, to her surprise, was a racial slur against Black Americans.
"I sat there considering how many times I may have made the same mistake and not caught myself," said Pawloski.
The potential scale of her own errors and those of thousands of other workers like her made Pawloski spiral. How many others had unknowingly let offensive material slip by? Or worse, chosen to allow it?
After years of witnessing the inner workings of AI models, Pawloski decided to stop using generative AI products herself and now tells her family to steer clear of them.
"It's an absolute no in my house," said Pawloski, referring to how she doesn't let her teenage daughter use tools like ChatGPT. When she meets people socially, she encourages them to ask AI about something they are very knowledgeable in so they can spot its errors and understand for themselves how fallible the tech is. Pawloski said that every time she sees a menu of new tasks to choose from on the Mechanical Turk site, she asks herself if there is any way what she's doing could be
Continue Reading on The Guardian