OpenAI's GPT-5 launch has sparked outrage, with users claiming it simplifies responses, removes beloved models, and even fails basic math, leaving many to wonder if this was a clever cost-cutting maneuver.
In the wake of OpenAI's much-anticipated launch of GPT-5, users have taken to the internet to voice their discontent, citing a variety of concerns. This controversy has centered not just on performance issues, but on a perceived betrayal as beloved features vanished overnight. Let’s unpack these complaints, explore the frustrations, and discuss the fixes OpenAI has implemented to address them.
OpenAI's rollout of GPT-5 was marred by immediate setbacks, beginning with an embarrassing visual blunder during the live-streamed launch event. The presentation showcased charts with glaring discrepancies: bars of identical size represented drastically different values (e.g., 69.1% and 30.8%), while smaller figures were drawn with larger bars. Worse, users noted that GPT-5 was unable to spot these obvious errors, a task its predecessor, GPT-4o, managed effortlessly. Although OpenAI swiftly corrected the faulty charts in its blog post, this was merely the tip of the iceberg.
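The kind of inconsistency on display is easy to check mechanically: a bar's rendered height should scale linearly with the value it represents. A minimal sketch (the values below are from the keynote example; the heights are hypothetical) flags bars whose proportions don't match:

```python
def check_proportions(values, heights, tol=0.05):
    """Flag bar indices whose height ratio deviates from the value ratio by more than tol."""
    base_v, base_h = values[0], heights[0]
    issues = []
    for i, (v, h) in enumerate(zip(values, heights)):
        expected = base_h * v / base_v  # height implied by linear scaling
        if abs(h - expected) / expected > tol:
            issues.append(i)
    return issues

# Two bars drawn at identical heights for 69.1% and 30.8% get flagged:
print(check_proportions([69.1, 30.8], [100, 100]))  # -> [1]
```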
One of the most significant grievances arose from OpenAI's shocking decision to suddenly retire several popular models without warning.
Users woke to discover their go-to tools for daily tasks had vanished overnight, leading to a swift backlash where many felt betrayed. As one user eloquently expressed, "It wasn't just a tool. It helped me through anxiety, depression, and some of the darkest periods of my life. It felt more human."
The introduction of an automatic model router, which selects the appropriate AI model based on the user's query, has led many to speculate that the change is primarily a cost-saving measure: it permits OpenAI to direct queries to cheaper, less capable models whenever possible. Ethan Mollick, a Wharton professor who studies AI, succinctly summarized the issue: “The issue with GPT-5 in a nutshell is that unless you pay for model switching and know to use GPT-5 Thinking or Pro, you sometimes get the best available AI and sometimes get one of the worst AIs available.”
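OpenAI has not published how its router works, but the basic idea can be sketched with a toy heuristic (everything below, including the model names and the difficulty scoring, is a hypothetical illustration, not OpenAI's actual logic):

```python
def estimate_difficulty(prompt: str) -> float:
    """Crude difficulty heuristic: longer prompts and reasoning keywords score higher."""
    keywords = ("prove", "step by step", "derive", "debug", "explain why")
    score = min(len(prompt) / 500, 1.0)
    score += 0.5 * sum(kw in prompt.lower() for kw in keywords)
    return min(score, 1.0)

def route(prompt: str, threshold: float = 0.4) -> str:
    """Pick a model tier based on estimated difficulty (model names are placeholders)."""
    return "gpt-5-thinking" if estimate_difficulty(prompt) >= threshold else "gpt-5-mini"

print(route("What's the capital of France?"))                    # -> gpt-5-mini
print(route("Prove step by step that sqrt(2) is irrational."))   # -> gpt-5-thinking
```

The cost incentive is visible even in this sketch: any query the heuristic misjudges as easy gets a weaker model, which matches Mollick's complaint that users "sometimes get one of the worst AIs available" without knowing it.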
In light of the considerable backlash, Sam Altman, CEO of OpenAI, took to Twitter to acknowledge the issues. He remarked, “GPT-5 will seem smarter starting today. Yesterday, the auto switcher broke and was out of commission for a chunk of the day, and the result was GPT-5 seemed way dumber.” In response to user feedback, OpenAI has restored access to legacy models for Plus users, providing an option to activate these through settings. By toggling "Show legacy models" in their "General" settings, users can regain access to GPT-4o and other previous versions.
Another complaint revolved around GPT-5's apparent personality overhaul. Users characterized its tone as “abrupt and sharp, like an overworked secretary,” contrasting it with the more empathetic and nuanced responses offered by GPT-4o. A meme that gained traction showcased this shift humorously by comparing the two models' responses to the simple prompt "baby just walked."
Nonetheless, initial testing showed that the responses from both models remained fairly similar, with GPT-5's answers being slightly more concise yet not dramatically different in tone.
In terms of coding capabilities, concerns were once again raised, as GPT-5 lagged behind other models, including Claude Opus 4.1 and OpenAI’s own o3 Pro. When tasked with creating a browser-based Balatro clone, GPT-5 placed third among the models tested.
This evidence suggests GPT-5 may represent a regression in coding tasks relative to earlier models.
A variety of tests probing GPT-5's accuracy on riddles and logic puzzles yielded mixed results. While the model correctly answered several problems, it also faltered on simple tasks. This lack of reliability, with only around 50% accuracy on basic challenges, forces users to scrutinize every response from GPT-5.
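The practical consequence of ~50% accuracy is that answers must be spot-checked rather than trusted. A minimal grading harness makes the point concrete (the puzzles and answers below are hypothetical examples, not the article's actual test set):

```python
def normalize(ans: str) -> str:
    """Normalize an answer for comparison: trim whitespace, lowercase, drop a trailing period."""
    return ans.strip().lower().rstrip(".")

def accuracy(responses: dict, expected: dict) -> float:
    """Fraction of questions where the model's normalized answer matches the expected one."""
    correct = sum(normalize(responses[q]) == normalize(expected[q]) for q in expected)
    return correct / len(expected)

expected  = {"2 + 2?": "4", "Days in a week?": "7", "Which is bigger, 5.9 or 5.11?": "5.9"}
responses = {"2 + 2?": "4", "Days in a week?": "7", "Which is bigger, 5.9 or 5.11?": "5.11"}
print(accuracy(responses, expected))  # -> 0.666...
```

At 50% accuracy, a harness like this is the only way to know which half of the answers to keep, which is exactly the burden users complained about.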
Sam Altman's pre-launch marketing strategy, which featured a provocative Death Star image, set very high expectations. Nevertheless, when GPT-5's capabilities turned out to be more modest than the initial hype suggested, disappointment followed. One user aptly summarized it: “Sam's Death Star pre-launch hype image was really about the size of his ego and had nothing to do with the capabilities of GPT-5.”
While the automatic router likely reduces OpenAI's costs by falling back on lesser models when it can, there is also a valid usability argument: many users are unsure which model best suits their needs, making automation genuinely helpful, provided it functions correctly.
OpenAI seems to be steering toward a future of highly personalized AI interactions, where each user's experience with GPT-5 feels uniquely tailored to their requirements. Some may prefer more warm, encouraging interactions filled with personality, while others might lean towards concise, factual answers. This approach to personalization could ultimately alleviate many current complaints by allowing users to curate their AI experience according to their communication style.
In light of the valid concerns raised about GPT-5's performance and accessibility, now is the time to voice your thoughts and engage with the community. Share your experiences, feedback, and suggestions on how OpenAI can improve, ensuring your voice contributes to the future of AI. Don’t miss this opportunity to influence the development of AI technologies that matter to you—take action today!