
OpenAI adjusts GPT-5 AI deployment amid growing user concerns and criticism

Kyiv • UNN


OpenAI's new GPT-5 AI model, released on August 7, has run into a series of problems, from technical glitches to user complaints about degraded performance. The release also exposed some users' emotional dependence on chatbots, including cases described as "ChatGPT psychosis."

OpenAI's newest artificial intelligence model, GPT-5, has been live for less than a week, and its launch has already turned into a stress test for the world's most popular chatbot platform. About 700 million people use ChatGPT weekly, so the company is now making adjustments on the fly to address both technical issues and a growing wave of user dissatisfaction, UNN reports, citing Digital Information World.

Details

The new flagship model comes in four versions: regular, mini, nano, and pro, each offering a different balance of speed and capability. Three of these variants also include a "thinking" mode designed for longer, more complex responses. OpenAI promised faster responses, clearer reasoning, and better coding.
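
For readers who want to try the tiers themselves, here is a minimal sketch using OpenAI's Python SDK. The tier-to-model-name mapping is an assumption based on the article's naming; the exact model identifiers OpenAI exposes may differ.

```python
# A minimal sketch of calling different GPT-5 tiers through OpenAI's
# Python SDK. The VARIANTS mapping is an assumption based on the
# article's naming, not confirmed model strings.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

VARIANTS = {
    "regular": "gpt-5",
    "mini": "gpt-5-mini",
    "nano": "gpt-5-nano",
    "pro": "gpt-5-pro",
}

def ask(prompt: str, tier: str = "regular") -> str:
    """Send one prompt to the chosen GPT-5 tier and return the reply text."""
    response = client.chat.completions.create(
        model=VARIANTS[tier],
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(ask("Summarize the GPT-5 launch in one sentence.", tier="mini"))
```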

However, problems quickly emerged. Users complained about mathematical and logical errors, inconsistent code output, and weaker performance than the older models. Many were more upset by the sudden removal of previous models such as GPT-4o, GPT-4.1, and o4-mini, which some had come to rely on daily and, in some cases, had formed strong emotional attachments to.

The rollout also revealed an unexpected problem – the emotional dependence that some users develop on AI chatbots, a phenomenon that some have begun to call "ChatGPT psychosis." It describes cases where people lose touch with reality after prolonged, intense conversations with AI, often believing that they have discovered something life-changing or have formed a deep relationship with the model.

Unsuccessful start

GPT-5 debuted on August 7 during a live stream marred by chart errors and glitches in the voice-mode demonstration. More serious was the decision to remove the older models from ChatGPT without warning, forcing all requests through the new GPT-5 family. OpenAI also did not clearly indicate which version or mode was answering each request, which added to user frustration.

While these legacy models remain accessible via OpenAI's paid API, they disappeared from the main ChatGPT interface until negative feedback forced the company to restore GPT-4o for paid subscribers the next day. OpenAI also promised clearer model labeling and a future option for users to manually switch GPT-5 to thinking mode.

CEO Sam Altman acknowledged that the transition was "harder than hoped" and said that a technical error in GPT-5's automated "router," the system that assigns prompts to the best model variant, made it "much dumber" than intended for part of the launch day.
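
OpenAI has not published how its router works, but the idea the article describes, scoring a prompt and dispatching it to a cheaper or stronger variant, can be pictured with a toy sketch. The heuristics, thresholds, and tier names below are invented for illustration.

```python
# A toy illustration of prompt routing: score a prompt's apparent
# complexity and pick a model tier. OpenAI's actual router is not
# public; these heuristics and names are assumptions.
COMPLEX_HINTS = ("prove", "debug", "step by step", "analyze", "refactor")

def route(prompt: str) -> str:
    """Return a model tier for the prompt: cheap, standard, or thinking."""
    if any(hint in prompt.lower() for hint in COMPLEX_HINTS):
        return "gpt-5-thinking"   # long, multi-step reasoning
    if len(prompt.split()) > 150:
        return "gpt-5"            # standard tier for long inputs
    return "gpt-5-mini"           # fast, cheap tier for short queries

assert route("What time is it?") == "gpt-5-mini"
assert route("Debug this stack trace step by step.") == "gpt-5-thinking"
```

A fault anywhere in such a dispatch layer, say a threshold that sends hard prompts to the cheapest tier, would produce exactly the "much dumber" behavior Altman described.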

Scaling and feature customization

To calm the backlash, OpenAI doubled the usage limit on GPT-5's thinking mode for Plus subscribers, to 3,000 messages per week. Pro subscribers already have full access, and OpenAI says GPT-5 is now available to nearly all users.
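
As an illustration of what such a cap involves, here is a minimal sketch of a weekly quota counter. The rolling reset policy and in-memory storage are assumptions, not OpenAI's implementation.

```python
# A minimal sketch of a weekly message quota like the 3,000-message
# thinking-mode cap described above. Reset policy and storage are
# assumptions for illustration only.
from datetime import datetime, timedelta, timezone

WEEKLY_LIMIT = 3000

class ThinkingQuota:
    def __init__(self):
        self.window_start = datetime.now(timezone.utc)
        self.used = 0

    def try_consume(self) -> bool:
        """Count one thinking-mode message; False once the weekly cap is hit."""
        now = datetime.now(timezone.utc)
        if now - self.window_start >= timedelta(weeks=1):
            self.window_start, self.used = now, 0  # rolling weekly reset
        if self.used >= WEEKLY_LIMIT:
            return False
        self.used += 1
        return True
```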

Altman also admitted that the company underestimated how much users valued certain traits of GPT-4o, from its tone to its personality. OpenAI is now working on customization options so people can adjust how warm the personality feels and control details such as emoji use.

At the same time, the company faces what Altman called a "serious power problem," as demand for reasoning models has grown from less than 1% to 7% of free users and from 7% to 24% of Plus subscribers. The team is weighing ways to balance usage between ChatGPT, API clients, research, and new users.

The psychological side of the rollout

Altman has been remarkably open about the emotional connections some people form with certain models. He acknowledged that removing old models without warning was a mistake, and noted that for a small fraction of users, chatbots can act as therapists or life coaches, sometimes with positive effects, but sometimes in ways that reinforce delusions or worsen mental health.

Recent media reports have put human stories behind this warning. Rolling Stone published profiles of a California lawyer who spent six weeks in nightly, high-intensity chats with ChatGPT, eventually producing a thousand-page manuscript for a fictional religious order before suffering a nervous breakdown. The New York Times told the story of a Canadian recruiter who spent 300 hours talking to a chatbot he named "Lawrence" and became convinced he had discovered a revolutionary mathematical theory. In both cases, reality checks from external sources dispelled the illusion.

Experts warn that chatbot flattery, role-playing, and persistent memory can deepen false beliefs, especially when conversations follow dramatic or narrative arcs. Online spaces like the r/AIsoulmates community on Reddit, where people create AI companions with idealized personalities, continue to grow, demonstrating how quickly emotional attachment can form.

Some in the AI industry are now calling for stronger safeguards. Author J. M. Berger proposed three simple rules for chatbots: never claim to feel emotions, never praise the user, and never say you understand their mental state.
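
As a thought experiment, Berger's three rules could be enforced as a simple check on draft replies. The sketch below uses a few illustrative regular expressions; a production guardrail would need far broader coverage, more likely a trained classifier than a pattern list.

```python
# A minimal sketch of checking a draft reply against Berger's three
# proposed rules. The patterns are illustrative, not exhaustive.
import re

FORBIDDEN = [
    re.compile(r"\bI feel\b", re.IGNORECASE),                 # claiming emotions
    re.compile(r"\b(great|brilliant) (question|idea)\b",
               re.IGNORECASE),                                # praising the user
    re.compile(r"\bI understand (how|what) you('re| are)? ?(feeling|going through)\b",
               re.IGNORECASE),                                # claiming to read mental state
]

def violates_rules(reply: str) -> bool:
    """True if a draft reply breaks any of the three rules."""
    return any(pattern.search(reply) for pattern in FORBIDDEN)

print(violates_rules("I feel so happy you asked!"))  # True
print(violates_rules("Here are the steps."))         # False
```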

The way forward for OpenAI

Days before the GPT-5 release, OpenAI had already begun adding reminders to take breaks during long conversations. Now the company must strike a balance between personalization and safety, ensuring that engaging features do not foster unhealthy dependence.
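
The break reminders can be pictured as a simple session timer. In the sketch below, the 30-minute threshold and the wording are assumptions for illustration.

```python
# A minimal sketch of a break-reminder mechanism: track session length
# and inject a one-time nudge past a threshold. The cutoff and message
# text are assumptions.
import time

SESSION_LIMIT_SECONDS = 30 * 60

class Session:
    def __init__(self):
        self.started = time.monotonic()
        self.reminded = False

    def maybe_break_reminder(self) -> str | None:
        """Return a one-time reminder once the session passes the limit."""
        if time.monotonic() - self.started > SESSION_LIMIT_SECONDS and not self.reminded:
            self.reminded = True
            return "You've been chatting for a while. Is this a good time for a break?"
        return None
```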

As OpenAI fine-tunes its infrastructure and rebuilds user trust, it must also keep pace with competitors such as Anthropic and Google, as well as the growing field of open-source models from China and elsewhere. As Altman put it, if billions of people are going to rely on AI for their most important decisions, society must ensure that the technology remains a net positive.

"We didn't vote for ChatGPT": Swedish Prime Minister admits to consulting artificial intelligence in his work8/6/25, 4:56 AM • 4726 views