Watching Readers' Eyes May Help AI
When we read, our eyes move across the page in complex ways: lingering on longer words, skipping over shorter ones, widening when we feel strong emotion. Studying these patterns can give us clues about how our brains process language, and hence how to replicate that processing with AI.
Researchers at ETH Zurich, a university in Switzerland, think that gaze data could offer clues about how computers should go about reading a text. Eye movements are one of the few direct and simple windows into the brain, since brain waves are far too complex to interpret.
Over time, the neural networks that try to understand the meaning of language have improved immensely; if you used a voice assistant a few years ago, you will remember how robotic it sounded compared to the far more natural way it works now. That progress is great, but the way we achieved it is less so: the gains have come mostly from massive amounts of data and the ever-increasing computing power we throw at our AI projects. If we understood how our gaze works, however, we could give AI a clue about which words to look out for when decoding the meaning behind a piece of language.
The researchers collected gaze data and used it to train a neural network that takes into account how long a reader looks at each word. They found this significantly helped with a variety of tasks, including detecting hate speech, analysing sentiment and spotting grammatical errors. That makes gaze a potentially fruitful addition to current networks, and one step in the right direction towards AI that can use language as naturally as a human.
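To make the idea concrete, here is a minimal sketch (not the researchers' actual model) of one simple way gaze could feed into such a network: weighting each word's feature vector by the relative time readers spent fixating on it, so longer-fixated words contribute more to the sentence representation. The function name and the toy embeddings are illustrative assumptions.

```python
import numpy as np

def gaze_weighted_average(word_vectors, fixation_ms):
    """Average word vectors, weighting each by its share of total fixation time.

    Illustrative sketch only: real models combine gaze with learned
    attention rather than using raw durations directly.
    """
    durations = np.asarray(fixation_ms, dtype=float)
    weights = durations / durations.sum()       # normalise so weights sum to 1
    return weights @ np.asarray(word_vectors)   # weighted sum of word vectors

# Toy example: 3 words with 4-dimensional embeddings. The second word was
# fixated longest (300 ms), so it dominates the sentence feature.
vecs = np.array([[1.0, 0.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0, 0.0],
                 [0.0, 0.0, 1.0, 0.0]])
feat = gaze_weighted_average(vecs, fixation_ms=[100, 300, 100])
# feat = [0.2, 0.6, 0.2, 0.0] — the long-fixated word carries the most weight
```

A downstream classifier (for hate speech, sentiment, or error detection) could then take this gaze-weighted representation as input instead of a plain average.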