
Future of Artificial Intelligence: All About Elon Musk-Backed OpenAI's New Language Generator GPT-3

Jul 21, 2020 10:33 AM 4 min read
Editorial

Look up Sharif Shameem (@sharifshameem), Founder of Debuild, a platform that helps build web apps faster, on Twitter, and you will see his feed flooded with videos testing out the latest internet buzz - GPT-3.

And it is not just Shameem: over the past few days, samples testing out OpenAI's new machine learning language model have been circulating widely on social media.

And GPT-3’s language capabilities are truly spectacular. It can compose creative fiction, generate functioning code, write thoughtful business memos and much more. You can check for yourself; we are truly blown away! So we dug further.

What is OpenAI's New Language Generator GPT-3 and How Does it Work?

After originally publishing its GPT-3 research in May, Elon Musk-backed OpenAI gave select members of the public access to the model last week via an API, and since then the internet has been abuzz with tech enthusiasts testing, admiring, critiquing and debating this latest technology. 
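For the technically curious, here is a rough sketch of what a request looked like for those early-access users, using the openai Python client of the time. The API key is a placeholder and the prompt is our own illustrative choice; treat this as a sketch of the interface rather than a definitive recipe.

    import openai

    # Access during the private beta required an invite-only API key.
    openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

    # Ask the model to continue a prompt. "davinci" was the name of the
    # largest engine exposed through the beta API at the time.
    response = openai.Completion.create(
        engine="davinci",
        prompt="Write a short business memo announcing a new remote-work policy:",
        max_tokens=100,
        temperature=0.7,  # higher values make the continuation more adventurous
    )

    # The API returns one or more completions; print the first.
    print(response.choices[0].text)

Everything GPT-3 does, from the memos to the code, flows through this one interface: give it text, and it gives statistically plausible text back.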

GPT-3 is the third generation of OpenAI’s Generative Pre-trained Transformer - a general-purpose language model that uses machine learning to translate text, answer questions and predictively write text.

It uses a vast bank of English text and highly powerful computer models (called neural nets) to spot patterns and learn its own rules of how language operates (those rules are encoded in 175 billion parameters, the tunable weights of its network). It then builds on these patterns to produce entirely original text, whether an article, a memo or a snippet of code.

Essentially, GPT-3 is an extremely sophisticated text predictor.

But how is GPT-3 able to generate these predictions? GPT-3 has effectively ingested a huge swathe of the text available on the internet. It is this vast amount of data that enables it to generate a statistically plausible response to the input it is given: looking at everything written so far, it repeatedly predicts which word is most likely to come next.
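To make "sophisticated text predictor" concrete, here is a toy sketch of that core loop: score the candidate next words given the text so far, pick one in proportion to its plausibility, and repeat. The tiny hand-written probability table is purely our illustration; GPT-3 learns such statistics from its training data at vastly greater scale, with far longer context than a single previous word.

    import random

    # A toy "language model": probabilities of the next word given only
    # the previous word. These numbers are invented for illustration.
    NEXT_WORD_PROBS = {
        "the": {"cat": 0.5, "dog": 0.3, "market": 0.2},
        "cat": {"sat": 0.6, "ran": 0.4},
        "dog": {"ran": 0.7, "sat": 0.3},
        "sat": {"down": 1.0},
        "ran": {"away": 1.0},
        "market": {"crashed": 1.0},
    }

    def generate(prompt, n_words=4):
        words = prompt.split()
        for _ in range(n_words):
            probs = NEXT_WORD_PROBS.get(words[-1])
            if probs is None:
                break  # nothing plausible to say next
            # Sample the next word in proportion to its probability,
            # just as an autoregressive model does at every step.
            choices, weights = zip(*probs.items())
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate("the"))  # e.g. "the cat sat down"

Swap the lookup table for a neural network with 175 billion parameters and a context of whole paragraphs rather than one word, and you have the rough shape of GPT-3.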

Interestingly, OpenAI initially declined to make GPT-2, GPT-3’s predecessor, fully available to the public because of its ability to create extremely realistic and coherent “fake news” articles based on something as simple as an opening sentence.

In an example shared by MIT Technology Review, GPT-2 wrote a full, coherent fake news article based on nothing more than the opening sentence, “Russia has declared war on the United States after Donald Trump accidentally…”

Russia has declared war on the United States after Donald Trump accidentally fired a missile in the air. 

Russia said it had “identified the missile’s trajectory and will take necessary measures to ensure the security of the Russian population and the country’s strategic nuclear forces.” The White House said it was “extremely concerned by the Russian violation” of a treaty banning intermediate-range ballistic missiles. 

The US and Russia have had an uneasy relationship since 2014, when Moscow annexed Ukraine’s Crimea region and backed separatists in eastern Ukraine.

Needless to say, trained on a much larger set of data (including an archive of the web called the Common Crawl), GPT-3 is far more potent.

Potential Impact, Use and Threats of OpenAI's GPT-3 Algorithm

Although it is far too early to say which sectors might benefit the most from this technology, GPT-3's potency makes it adaptable to all sorts of tasks that involve language. "I can see a future in which every doctor just asks GPT-3 what the cause of a certain set of symptoms one of their patients might have would be, and then gives a reasonable response," says Shameem.

A common discourse that accompanies the development of any new technology is the potential threat it poses to industries and workers. But some, like Shameem, are more optimistic than others when it comes to the potential impact and threats of GPT-3.

“I think it’ll lead to more productive coders,” says Shameem. “With every level of abstraction that programming has brought us, it only increased the number of potential coders, because it decreased the skill level required to become a productive programmer. I think it’s actually going to result in far more people becoming programmers.”

And when it comes to threats like fake news and deep fakes, he says, “With every tool there are dangers, but I think the pros outweigh the cons. I think GPT-3 is inherently a net positive for humanity, and that we’re better off having it than not.”

 

Achilles’ Heel: Limitations of AI and GPT-3

As with all technology, GPT-3 comes with its own set of limitations, and as we all know, Artificial Intelligence (AI) has traditionally struggled with “common sense”.

As per a Forbes article, trained on a dataset of half a trillion words, GPT-3 is able to identify multitudes of linguistic patterns, but it possesses no internal representation of what these words actually mean. It has no semantically grounded model of the world or of the topics on which it discourses. Therefore, when faced with concepts, content or even phrasing that the internet’s corpus of existing text has not prepared it for, it is at a loss.

Here’s an intriguing illustration by Kevin Lacker, who put GPT-3 through a Turing test-style quiz, and to quote him, “GPT-3 is quite impressive in some areas, and still clearly subhuman in others.”

FIN.
