AI vs AI
Fake news has evolved from light-hearted clickbait into something with serious political repercussions, as seen recently when Russian actors spread fake news targeting Ukraine’s elections. In response to this escalating threat, Harvard and MIT researchers have collaborated to create an AI that spots AI-generated fake text.
The tool, called the Giant Language Model Test Room (GLTR), exploits the fact that AI text generators rely on statistical patterns when producing text; in effect, it checks whether a passage is too predictable to have been written by a human. You can try it for yourself here. GLTR improves the detection rate of fake text by 17 percentage points, from 55% to 72%.
GLTR checks the predictability of a text by calculating the statistical likelihood that each word would be chosen after the word preceding it. As you can see below, each word is highlighted in a colour ranging from purple to green, representing how statistically likely that word is to follow the one before it. The least likely words are purple, those in the middle are red and yellow, and the most likely words are highlighted in green. The first sample below is genuine, human-written text: it shows a mixture of reds, yellows, and purples. By contrast, the text below that is predominantly green and yellow, indicating that it is machine-generated.
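The rank-and-bucket idea can be sketched in a few lines. The snippet below is a toy illustration, not GLTR itself: where GLTR ranks each word against a large language model's predictions (GPT-2), this stand-in uses a tiny bigram frequency table built from a short corpus, then maps each word's rank to GLTR-style colour buckets (top 10 green, top 100 yellow, top 1000 red, everything else purple). All names here are hypothetical.

```python
from collections import Counter, defaultdict

# Toy stand-in for a large language model: a bigram frequency table
# built from a tiny corpus. GLTR uses GPT-2; this only illustrates
# the "rank each word, then bucket by rank" idea.
CORPUS = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat ate the fish . the dog ate the bone ."
).split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(CORPUS, CORPUS[1:]):
    bigrams[prev][nxt] += 1

def rank_of(prev: str, word: str) -> int:
    """Rank of `word` among the model's predictions after `prev`
    (1 = most likely). Unseen words rank past the known vocabulary."""
    ranked = [w for w, _ in bigrams[prev].most_common()]
    return ranked.index(word) + 1 if word in ranked else len(ranked) + 1

def colour(rank: int) -> str:
    # GLTR's buckets: top 10 -> green, top 100 -> yellow,
    # top 1000 -> red, everything beyond -> purple.
    if rank <= 10:
        return "green"
    if rank <= 100:
        return "yellow"
    if rank <= 1000:
        return "red"
    return "purple"

def highlight(text: str):
    """Pair every word (after the first) with its colour bucket."""
    words = text.split()
    return [(w, colour(rank_of(p, w))) for p, w in zip(words, words[1:])]

print(highlight("the cat sat on the mat"))
```

With a real language model the buckets spread out: human text tends to mix in red and purple words, while generated text stays mostly green, which is exactly the visual cue GLTR gives its users.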
The Harvard and MIT researchers are heading in the right direction with diagnosing fake text. However, the relatively low success rate of 72% is not yet sufficient to justify a strict enforcement regime in which all detected fake text is taken down, as there is still a 28% chance that the flagged text was written by a human. Improving the AI’s accuracy should therefore be a key focus for the future.