GPT-4

Sam Altman, the CEO of OpenAI, spoke about the impending GPT-4 release in a question-and-answer session at the AC10 online meetup.
He said in the session that, contrary to popular belief, GPT-4 will not be any bigger than GPT-3 but will use more compute. A 100-trillion-parameter model will not be GPT-4 and is still far off; OpenAI is getting much more performance out of smaller models and may never need a model that large. Altman also said that AGI will likely require new algorithmic breakthroughs that have not happened yet, rather than simply building bigger models.
GPT-4 will focus more on coding via Codex, a descendant of GPT-3. It is worth noting that OpenAI recently released Codex through its API in a private beta, and Codex is also the basis for GitHub Copilot. Codex understands more than a dozen programming languages and can interpret simple commands in natural language and execute them on users' behalf, allowing developers to build natural-language interfaces to existing applications.
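As a rough illustration of that natural-language-to-code workflow, here is a minimal sketch of calling Codex through the OpenAI completions API. It assumes the pre-1.0 `openai` Python package, private-beta access, and the `davinci-codex` engine name; the exact engine name and client interface may differ from whatever OpenAI currently offers.

```python
# Sketch: turn a plain-English instruction into code with Codex.
# Assumes the pre-1.0 "openai" package and private-beta access to
# the "davinci-codex" engine (both may have changed since).
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    engine="davinci-codex",
    # A docstring-style prompt describing the desired code.
    prompt='"""\nWrite a Python function that returns the n-th Fibonacci number.\n"""\n',
    max_tokens=150,
    temperature=0,
    stop=['"""'],
)

# The generated code comes back as the completion text.
print(response["choices"][0]["text"])
```

The same pattern scales up to the "natural language interface to existing applications" idea: the prompt describes the user's intent, and the completion is executable code against the application's API.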
The progress will come from OpenAI working on all aspects of GPT (data, algorithms, fine-tuning, etc.). GPT-4 will likely be able to work with longer context and be trained with a different loss function; Altman said OpenAI has "line of sight" for this.
Coding seems to be another major application area for GPT models. One example is Microsoft's recently announced GPT-3-based assistive feature for its PowerApps software, which converts natural language into code snippets. Given Altman's recent statements, OpenAI is expected to lean further into this capability with the new instalment of GPT.
What about GPT-5?
GPT-5 might be able to pass the Turing test, but Altman suggested it would probably not be worth the effort.
Sources:

https://analyticsindiamag.com/gpt-4-sam-altman-confirms-the-rumours/
https://news.knowledia.com/US/en/articles/sam-altman-q-and-a-gpt-and-agi-lesswrong-aa6293dde7b95b537f5f50e37c861645fd9b4dbb