We’ve all done it. We’re texting on our phones, and auto-complete suggests words to finish a sentence or phrase without our having to type it all.
More often than not, though, accepting those suggestions one after another to build a full sentence produces incoherent gibberish.
It’s not hard to see why. Artificial intelligence has managed to best humans in a variety of activities. From playing chess and analysing emotions to driving cars and translating conversations in real time, AI is becoming more and more adept at being human every day.
But there’s one mundane task that has consistently eluded our algorithmic brethren – that of reading. The literacy rate amongst computers is dismal. Despite forward strides in employing deep learning to learn statistical correlations and pore over countless pages of text in a matter of seconds, computers still lack the ability to comprehend the meaning of sentences, even simple ones.
Case In Point
Take Google’s “Talk to Books” project, which purports to answer any question by reading thousands of books. The tool isn’t particularly accurate. Answers vary according to how the question is framed and in too many cases the correct answer is either lost in a pool of incorrect guesses or not found at all.
Why is AI Bad at This?
So why is AI so bad at making sense of sentences? Because it misses something linguists call compositionality – the ability to derive the meaning of a sentence from the meanings of its component words and phrases. AI reads well only if it’s guided through the entire process. And in those rare instances when computers do manage to read sentences accurately, it’s rather like a student who does well at tests without recognising any of the subject material.
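To see what compositionality means in practice, here is a minimal sketch – a toy, with an invented four-word grammar and made-up word meanings – where the meaning of a whole sentence is computed by combining the meanings of its parts:

```python
# Toy illustration of compositionality: sentence meaning is built by
# composing word meanings. The lexicon and grammar here are invented
# for illustration only.

# Word meanings: adjectives are predicates over a height value (in cm).
LEXICON = {
    "tall":  lambda height: height >= 180,
    "short": lambda height: height < 160,
}

def negate(pred):
    """'not' composes with a predicate to yield its negation."""
    return lambda height: not pred(height)

def sentence_meaning(words, height):
    """Evaluate toy sentences of the form '<name> is [not] <adjective>'."""
    idx = 2                      # skip "<name> is"
    negated = words[idx] == "not"
    if negated:
        idx += 1
    pred = LEXICON[words[idx]]   # look up the adjective's meaning
    if negated:
        pred = negate(pred)      # compose it with the meaning of "not"
    return pred(height)

print(sentence_meaning("alice is tall".split(), 185))      # True
print(sentence_meaning("alice is not tall".split(), 185))  # False
```

Adding "not" flips the truth value systematically, because the sentence's meaning is a function of its parts – exactly the property today's statistical models struggle to learn on their own.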
Another reason why AI struggles with language is the lack of carefully structured data. To make AI adept at a task, you need thousands of data samples for algorithms to comb through, learning patterns and correlations that they can then apply to new sentences and phrases. Such data sources are limited and take a lot of effort to compile and guide machines through. This is why an algorithm may be fine at guessing the next word but a complete dunce when asked to write a poem.
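The "guessing the next word" trick is easy to sketch. Below is a minimal bigram model – one crude stand-in for the statistical pattern-learning described above, trained on an invented ten-second corpus – that predicts each next word from counts of which word followed which. One guess at a time looks sensible; chaining the guesses drifts into repetitive nonsense, much like phone auto-complete:

```python
from collections import Counter, defaultdict

# Invented toy corpus; real systems train on vastly more text.
corpus = (
    "the cat sat on the mat . the cat sat on the rug . "
    "the dog chased the cat ."
).split()

# Learn the pattern: count which word follows which (bigram counts).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word(word):
    """Guess the most frequent next word seen after `word`."""
    return following[word].most_common(1)[0][0]

# A single guess is reasonable...
print(next_word("cat"))  # 'sat'

# ...but chaining greedy guesses loops into gibberish, because each
# choice sees only the one word before it, not the whole sentence.
word, sentence = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))  # 'the cat sat on the cat sat on the'
```

The chained output is locally plausible at every step yet globally incoherent – the same failure mode as accepting consecutive auto-complete suggestions.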
Until now, that is.
Back to the Future
Recent leaps in AI tech have enabled machines to generate whole chunks of coherent text on their own, instantaneously. All you need to do is input a line or headline and – voila! – the algorithms work their magic and you’re given a full-fledged article, story, summary or even novel. And the text is coherent and readable enough to mistake the robot writer for a human one (incidentally, if you’re looking for a book by a robot, try this textbook on lithium-ion batteries, the first such textbook ever).
Long Read Ahead
Before you start writing eulogies for novelists and journalists, you should know that AI tech is still far from perfect when it comes to writing entirely coherent passages. When Vox tried generating an article from an impromptu headline, the programme wrote a piece on a creature that was both “mouse-like” and the size of a horse. That’s because AI can’t grasp real-world facts, like relative size, the way a human writer can.
But a more serious problem – since it’s likely robot writers will only get better at what they do soon – might be the fact that AI can’t distinguish right from wrong or fact from fiction. Imagine engineering a programme that can generate hundreds of fake news stories in a matter of minutes. Stories written like a journalistic report, with hyperlinks, quotes, images and a matter-of-fact tone. The power such a machine could wield in misinforming the public would be tremendous – and a challenge to contain, to say the least.