Introducing GPT-4: The Revolutionary AI Model by OpenAI
A Breakdown of the 5 Biggest Differences from Previous Versions
OpenAI’s new GPT-4 AI model has made its big debut and is already powering everything from a virtual volunteer for the visually impaired to an improved language learning bot in Duolingo. But what sets GPT-4 apart from previous versions like ChatGPT and GPT-3.5? Here are the five biggest differences between these popular systems.


What’s in a Name?
First, though, what’s in a name? Although ChatGPT was originally described as being GPT-3.5 (and therefore a few iterations beyond GPT-3), it is not itself a version of OpenAI’s large language model, but rather a chat-based interface for whatever model powers it. The ChatGPT system that exploded in popularity over the last few months was a way to interact with GPT-3.5, and now it’s a way to interact with GPT-4.

Multi-Modal Understanding
The most noticeable change to this versatile machine learning system is that it is “multimodal,” meaning it can understand more than one “modality” of information. ChatGPT and GPT-3 were limited to text: They could read and write but that was about it (though more than enough for many applications). GPT-4, however, can be given images and it will process them to find relevant information.

Malicious Prompt Resistance
For all that today’s chatbots get right, they tend to be easily led astray. A little coaxing can persuade them that they are simply explaining what a “bad AI” would do, or some other little fiction that lets the model say all kinds of weird and frankly unnerving things. People even collaborate on “jailbreak” prompts that quickly let ChatGPT and others out of their pens. GPT-4, on the other hand, has been trained on lots and lots of malicious prompts — which users helpfully gave OpenAI over the last year or two.

Expanded Memory
These large language models are trained on millions of web pages, books, and other text data, but when they’re actually having a conversation with a user, there’s a limit to how much they can keep “in mind,” as it were (one sympathizes). That limit with GPT-3.5 and the old version of ChatGPT was 4,096 “tokens,” which is around 8,000 words, or roughly four to five pages of a book. GPT-4 has a maximum token count of 32,768 — that’s 2^15, if you’re wondering why the number looks familiar. That translates to around 64,000 words or 50 pages of text, enough for an entire play or short story.
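For developers, those limits are easy to measure: OpenAI’s open-source tiktoken library exposes the same tokenizers the models use, so you can count how much of a context window a prompt consumes before sending it. The sketch below is illustrative only; the prompt text and the budget figures are assumptions, and exact word counts depend on the tokenizer.

```python
# Minimal sketch: counting tokens against a context-window budget.
# Assumes the open-source `tiktoken` package; the prompt and budgets are illustrative.
import tiktoken

def count_tokens(text: str, model: str = "gpt-4") -> int:
    """Return how many tokens `text` occupies under the given model's encoding."""
    encoding = tiktoken.encoding_for_model(model)
    return len(encoding.encode(text))

prompt = "Explain the difference between GPT-3.5 and GPT-4 in one paragraph."
used = count_tokens(prompt)

# Budgets matching the context windows mentioned above.
for limit in (4_096, 32_768):
    print(f"{used} tokens used, {limit - used} remaining in a {limit:,}-token window")
```
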
Multilingual Capabilities
The AI world is dominated by English speakers, and everything from data to testing to research papers is in that language. But of course the capabilities of large language models are applicable to any written language and ought to be made available in those languages. GPT-4 takes a step toward this by demonstrating that it can answer thousands of multiple-choice questions with high accuracy across 26 languages, from Italian to Ukrainian to Korean.

Integrated Steerability
“Steerability” is an interesting concept in AI, referring to a model’s capacity to change its behavior on demand. This can be useful, as when the model takes on the role of a sympathetic listener, or dangerous, as when people convince it that it is evil or depressed. GPT-4 integrates steerability more natively than GPT-3.5, and users will be able to change the “classic ChatGPT personality with a fixed verbosity, tone, and style” to something more suited to their needs.
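OpenAI has described exposing this through the chat API’s “system” message, which sets the assistant’s persona before the conversation starts. Here is a minimal sketch; the model name, the persona wording, and the pre-1.0 openai Python client interface are assumptions for illustration.

```python
# Sketch of steering tone and style via a system message in the chat completions API.
# Assumes the pre-1.0 `openai` package and OPENAI_API_KEY set in the environment.
import openai

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        # The system message prescribes verbosity, tone, and style up front.
        {"role": "system", "content": "You are a terse Socratic tutor. Reply only with guiding questions."},
        {"role": "user", "content": "Why is the sky blue?"},
    ],
)
print(response.choices[0].message.content)
```
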
Conclusion
There are lots more differences between GPT-4 and its predecessors, most of them more subtle or technical than these. No doubt we will learn of many more as the months wear on and users put the newest language model through its paces. Want to test GPT-4 out yourself? It’s coming to OpenAI’s paid service ChatGPT Plus, will soon be available via API for developers, and a free demo will probably follow.
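If you already have API access, one quick way to see whether GPT-4 has been enabled for your account is to list the models your key can reach. A small sketch, assuming the pre-1.0 openai Python client and an OPENAI_API_KEY in the environment:

```python
# Sketch: checking whether this API key can see the GPT-4 model yet.
# Assumes the pre-1.0 `openai` package; "gpt-4" is OpenAI's published model identifier.
import openai

available = {model.id for model in openai.Model.list().data}
print("GPT-4 enabled" if "gpt-4" in available else "Not yet; join the waitlist")
```
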
Credit: https://techcrunch.com/2023/03/14/5-ways-gpt-4-outsmarts-chatgpt/