We have all had some type of interaction with a chatbot. It’s usually a small pop-up in the corner of a website offering customer support, often complicated to navigate, and almost always frustratingly non-specific.
But imagine a chatbot, enhanced with artificial intelligence (AI), that can not only expertly answer your questions, but also write stories, give life advice, even compose poems and code computer programs.
It seems that ChatGPT, a chatbot launched last week by OpenAI, is achieving these results. It has generated a lot of excitement, and some have gone so far as to suggest that it could signal a future in which AI has dominance over human content producers.
What has ChatGPT done to warrant such claims? And how could it (and its future iterations) become indispensable in our daily lives?
What can ChatGPT do?
ChatGPT is based on the older OpenAI text generator, GPT-3. OpenAI builds its text generation models by using machine learning algorithms to process large amounts of text data, including books, news articles, Wikipedia pages, and millions of websites.
By ingesting such large volumes of data, models learn the complex patterns and structure of the language and gain the ability to interpret the desired result of a user’s request.
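At their core, text generators of this kind are trained to predict what comes next given the preceding context. As a rough illustration only (real models use neural networks trained on billions of words, not anything this simple), the following toy sketch learns which word tends to follow which by counting pairs in a tiny corpus:

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction, the training objective behind
# models like GPT-3. This bigram counter merely records which words follow
# which; real models learn far richer patterns with neural networks.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def most_likely_next(word):
    """Return the word most frequently observed after `word`."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("sat"))  # prints "on"
```

Scaled up enormously, this kind of statistical pattern-learning is what lets a model absorb the structure of a language from its training data.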
ChatGPT builds a sophisticated, abstract representation of the knowledge in its training data, which it draws on to produce output. That’s why it writes relevant content rather than just spouting grammatically correct nonsense.
While GPT-3 was designed to continue a text prompt, ChatGPT is optimized for engaging in conversation, answering questions, and being helpful. Here’s an example:
A screenshot of the ChatGPT interface while explaining the Turing test.
ChatGPT immediately caught my attention by correctly answering the test questions I asked my undergraduate and graduate students, including questions that required coding skills. Other academics have had similar results.
In general, it can provide genuinely informative and helpful explanations on a wide range of topics.
ChatGPT can even answer questions about philosophy.
ChatGPT is also potentially useful as a writing assistant. It does a decent job of drafting copy and coming up with seemingly “out of the box” ideas.
ChatGPT can give the impression of brainstorming “out of the box” ideas.

The power of feedback
Why does ChatGPT seem so much more capable than some of its older counterparts? A lot of this is probably down to how it was trained.
During its development, ChatGPT was shown conversations in which human AI trainers demonstrated the desired behavior. Although a similar model trained this way already exists, called InstructGPT, ChatGPT is the first popular model to use this method.
And it seems to have given it a great advantage. The addition of human feedback has helped steer ChatGPT in the direction of producing more helpful responses and rejecting inappropriate requests.
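The idea behind this kind of training can be caricatured as follows. This hypothetical sketch (an assumption for illustration, not OpenAI’s method, which trains a neural reward model via reinforcement learning from human feedback) simply accumulates trainer ratings per kind of response and prefers the kind with the highest score:

```python
# Caricature of learning from human feedback: each trainer rating nudges a
# score for a kind of response, and the system prefers higher-scored kinds.
# The real approach (reinforcement learning from human feedback) instead
# trains a neural reward model from comparisons between whole responses.
scores = {"helpful_answer": 0.0, "refusal": 0.0, "harmful_answer": 0.0}

def record_feedback(kind, rating):
    """rating is +1 (trainer approves) or -1 (trainer disapproves)."""
    scores[kind] += rating

# Trainers reward helpful answers and refusals of bad requests,
# and penalise harmful output.
for _ in range(5):
    record_feedback("helpful_answer", +1)
    record_feedback("harmful_answer", -1)
record_feedback("refusal", +1)

def choose(candidate_kinds):
    """Pick the candidate response kind with the highest learned score."""
    return max(candidate_kinds, key=lambda k: scores[k])

print(choose(["helpful_answer", "harmful_answer"]))  # prints "helpful_answer"
print(choose(["refusal", "harmful_answer"]))         # prints "refusal"
```

However crude, this captures why human feedback steers the model both toward helpfulness and toward refusing inappropriate requests.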
ChatGPT often rejects inappropriate requests by design.
Refusing to consider inappropriate input is a particularly important step in improving the safety of AI text generators, which can otherwise produce harmful content, including bias and stereotyping, as well as fake news, spam, propaganda, and fake reviews.
Previous text generation models have been criticized for regurgitating gender, racial, and cultural biases contained in training data. In some cases, ChatGPT successfully avoids reinforcing such stereotypes.
In many cases ChatGPT avoids reinforcing harmful stereotypes. In this list of software engineers, it features both masculine and feminine sounding names (although they are all very Western).
However, users have already found ways to circumvent its existing safeguards and elicit biased responses.
The fact that the system often accepts requests to write fake content is further proof that it needs refinement.
Despite its safeguards, ChatGPT can still be misused.

Overcoming limitations
ChatGPT is arguably one of the most promising AI text generators, but it is not free of bugs and limitations. For example, the coding advice platform Stack Overflow temporarily banned the chatbot’s responses due to inaccuracies.
A practical problem is that ChatGPT’s knowledge is static; it does not access new information in real time.
However, its interface allows users to give feedback on the model’s performance by indicating the ideal responses and reporting harmful, false, or useless responses.
OpenAI intends to address existing issues by incorporating this feedback into the system. The more feedback users provide, the more likely ChatGPT is to reject requests that lead to an unwanted result.
One possible improvement could come from adding a “trust indicator” feature based on user feedback. This tool, which could be built on top of ChatGPT, would indicate the model’s trust in the information it provides, leaving it up to the user to decide whether or not to use it. Some question and answer systems already do this.
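Such a trust indicator could be as simple as aggregating user votes on each answer into a score shown alongside the response. The sketch below is purely hypothetical (the function names, vote store, and 0.5 default are illustrative assumptions, not an existing feature or API):

```python
# Hypothetical "trust indicator": aggregate user feedback on an answer into
# a score between 0 and 1 that could be displayed next to the response.
# All names and defaults here are illustrative assumptions.
votes = {}  # answer_id -> [helpful_count, unhelpful_count]

def report(answer_id, helpful):
    """Record one user's verdict on an answer."""
    counts = votes.setdefault(answer_id, [0, 0])
    counts[0 if helpful else 1] += 1

def trust(answer_id):
    """Fraction of users who marked the answer helpful (0.5 if no votes yet)."""
    helpful_count, unhelpful_count = votes.get(answer_id, [0, 0])
    total = helpful_count + unhelpful_count
    return helpful_count / total if total else 0.5

report("a1", True)
report("a1", True)
report("a1", False)
print(round(trust("a1"), 2))  # prints 0.67
```

The user would then see, say, “67% of readers found this answer helpful” and decide for themselves whether to rely on it.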
Read more: What’s the secret to making sure the AI doesn’t steal your work? Work with it, not against it
A new tool, but not a human replacement
Despite its limitations, ChatGPT works surprisingly well for a prototype.
From a research standpoint, it marks a breakthrough in the development and deployment of human-aligned AI systems. On the practical side, it’s already effective enough for some everyday applications.
It could, for example, be used as an alternative to Google. While a Google search requires you to browse through a number of websites and dig even deeper to find the desired information, ChatGPT answers your question directly, and often does so well.
ChatGPT (left) can, in some cases, be a better way to find quick answers than Google search.
Moreover, with user feedback and a more powerful GPT-4 model on the horizon, ChatGPT could improve significantly in the future. As ChatGPT and similar chatbots become more popular, they are likely to find applications in areas like education and customer service.
However, while ChatGPT may end up doing some tasks traditionally done by people, there are no signs that it will replace professional writers any time soon.
While they may impress us with their skills and even apparent creativity, AI systems are still a reflection of their training data and do not have the same capacity for originality and critical thinking as humans.
Read more: Instead of threatening jobs, AI should help human writers