
Team BiasBlocker

BiasBlocker: We asked a language model to identify racism and it tried to erase baby Hitler

Biased copy can take many forms: problematic or offensive word choices; narrow framing of a topic; leading language that steers the reader toward a conclusion; omission of detail; or vocabulary chosen to appeal to the reader's emotions.

There are many aspects to our project, including the fact that half of it is being developed in Arabic. But at the three-month milestone of our six-month development cycle, and for the purposes of this blog, we're going to focus only on what happened when we tried to create training data with ChatGPT, and on how popular language models currently interpret bias and racism in the English language.

Image from Teresa Berndtsson / Better Images of AI / Letter Word Text Taxonomy / CC-BY 4.0
