"Robo-writers: the rise and risks of language-generating AI"

Interesting and worrying piece in Nature this week on the perils of AI language models.

But researchers with access to [OpenAI’s] GPT-3 [language model] have also found risks. In a preprint posted to the arXiv server last September, two researchers at the Middlebury Institute of International Studies in Monterey, California, write that GPT-3 far surpasses GPT-2 at generating radicalizing texts. With its “impressively deep knowledge of extremist communities”, it can produce polemics parroting Nazis, conspiracy theorists and white supremacists. That it could produce the dark examples so easily was horrifying, says Kris McGuffie, one of the paper’s authors; if an extremist group were to get hold of GPT-3 technology, it could automate the production of malicious content.

This brings to mind my recent piece warning about the use of natural-language AI to rewrite scientific documents.

GPT-3 is splashy and attention-grabbing, but there is plenty of legitimate worry about what it and methods like it may enable. As they say, where there's smoke, there's fire.

https://www.nature.com/articles/d41586-021-00530-0

Jim Bagrow