ChatGPT-maker OpenAI’s ‘next big thing’ may have landed in trouble – Times of India

OpenAI's ambitious next-generation AI project, GPT-5 (codenamed Orion), is facing significant challenges, raising questions about the timeline and feasibility of its launch. According to a report, development of the next generation of ChatGPT has been delayed.
The Wall Street Journal has reported that despite over 18 months of development and two large-scale training runs, Orion has fallen short of expectations. Citing sources, the report indicates that while it performs better than current models, the improvements haven’t justified the enormous computational costs, estimated at half a billion dollars for a six-month training run.
The report said that in early 2024, OpenAI renewed its efforts to develop Orion, its next-generation AI model, with a focus on using improved data. In the first few months of the year, researchers conducted a series of smaller-scale training runs to build confidence and refine their approach.
By May, OpenAI’s research team felt prepared for another large-scale training run for Orion, which they anticipated would continue until November.
Once the training began, researchers discovered a problem in the data: it wasn't as diverse as they had thought, potentially limiting how much Orion would learn.
The problem hadn’t been visible in smaller-scale efforts and only became apparent after the large training run had already started. OpenAI had spent too much time and money to start over.

Why the GPT-5 launch is behind schedule

Development of GPT-5 has not only gobbled up dollars but is also running behind schedule, news that comes as a blow to OpenAI's partner and major investor Microsoft, which had anticipated seeing GPT-5 by mid-2024.
The delay also casts doubt on OpenAI CEO Sam Altman's prediction that GPT-5 would represent a "significant leap forward" in AI capabilities.
GPT-5 is intended to unlock new scientific discoveries and perform complex tasks, with researchers hoping it will be less prone to the errors and "hallucinations" that plague current AI models.