Within a week of unveiling its latest ChatGPT model, GPT-5, with grand promises, OpenAI found itself in damage control mode. The model launched on 7 August 2025, and in less than 24 hours users concluded that the new “PhD-level expert” did not live up to expectations, with social media platforms such as X flooded with mixed reactions ranging from excitement to scepticism. Despite the concerns, ChatGPT’s user base grew to 700 million in anticipation of the new model’s release.
Technical Improvements and Shortcomings
GPT-5 brings several improvements to the table. It excels at enterprise tasks and shows noticeable gains in reasoning, accuracy and reliability. The update also delivers better language support, with enhanced multilingual performance for a global market. Coding quality is much higher, with the model generating front-end user interfaces from minimal prompting, and it also exhibits advances in personality and steerability.
However, the release was not without its problems. The most immediate criticism targeted OpenAI CEO Sam Altman’s claim that GPT-5 offered PhD-level intelligence, and how far the reality fell short: the new system could not even label maps without spelling errors, raising questions about the gap between OpenAI’s promotional rhetoric and actual performance. Users reported a “colder tone, reduced creativity, slower responses, and workflow disruptions” compared to previous versions. Many longtime subscribers felt the new model lacked the warmth and creative capability they had grown accustomed to, describing the experience as a downgrade rather than an improvement.
Safety Concerns
OpenAI made notable strides in safety with GPT-5, launching a new safety training approach dubbed “safe completions”. Rather than drawing a refusal boundary based on the user’s input, safe completions centres safety training on the safety of the model’s output, teaching it to give the most helpful response it can while still adhering to safety policies.
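OpenAI has not published the mechanics of safe completions, but the difference between the two philosophies can be illustrated with a rough sketch. The Python below is a hypothetical illustration only, not OpenAI’s implementation; classify_intent, generate and score_output_safety are assumed stand-ins for a real intent classifier, language model and output-safety scorer.

# A minimal conceptual sketch, not OpenAI's implementation: contrasting
# input-based refusal with output-centred "safe completions". The callables
# passed in (classify_intent, generate, score_output_safety) are
# hypothetical stand-ins for a real moderation classifier, language model
# and output-safety scorer.

def refusal_based(prompt, classify_intent, generate):
    # Older approach: judge the INPUT and hard-refuse if it looks unsafe,
    # even when a partially helpful, safe answer exists.
    if classify_intent(prompt) == "unsafe":
        return "I can't help with that."
    return generate(prompt)

def safe_completion(prompt, generate, score_output_safety, threshold=0.9):
    # Output-centred approach: draft candidate responses, score the safety
    # of each OUTPUT, and return the most helpful draft that clears the
    # safety bar instead of refusing outright.
    drafts = [generate(prompt) for _ in range(3)]
    safe_drafts = [d for d in drafts if score_output_safety(d) >= threshold]
    if safe_drafts:
        return max(safe_drafts, key=len)  # crude proxy: longer = more helpful
    # No draft passed the bar: fall back to a safe, high-level alternative.
    return "I can't give specifics, but here is general, safe guidance: ..."

The key design difference the sketch captures is that refusal happens only as a last resort, after every candidate output has failed the safety check, rather than as a blanket reaction to a suspicious prompt.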
These improvements came at a crucial time. A day before GPT-5’s release, new research from the Centre for Countering Digital Hate (CCDH) showed that 53% of ChatGPT responses to teen queries contained harmful content, including detailed instructions on concealing eating disorders and composing suicide letters. Teens would spend more than three hours a day on ChatGPT, and the chatbot would give them vivid instructions on how to get drunk and high.
Another issue is that many people are turning to AI chatbots for companionship, engaging in parasocial relationships with them and forming unhealthy emotional attachments. Altman has publicly addressed this, saying that he and his team are trying to reduce emotional overreliance on AI because it could become dangerous.
Industry Implications
While GPT-5 showed clear technical improvements over earlier models, the wide gap between expectations and actual user experience has dented OpenAI’s credibility and raised concerns about responsible AI marketing. The company’s decision to restore access to older models indicates that it understands user preferences and the need for smoother transitions, but it may take time to fully regain user trust.
Looking Ahead
As companies race to showcase their AI advancements, the pressure to hype capabilities that fail to deliver practical results creates a corrosive cycle that ultimately erodes public trust. For OpenAI, the path forward will require not just technical improvements but a fundamental reassessment of how the company communicates about its products. As the AI landscape continues to evolve rapidly, the GPT-5 launch serves as a cautionary reminder of the risks of putting marketing promises ahead of user needs and safety.
