Global AI leader Anthropic is losing user trust over serious operational flaws, including a failure to respond to customer inquiries for over a month, despite its superior technology.
Imagine this. To handle your work more efficiently, you decided to hire a cutting-edge ‘digital butler.’ This butler is a genius capable of summarizing complex papers and solving difficult coding problems in an instant. But one day, this butler starts double-charging your credit card or suddenly locks your front door and refuses to let you in. Panicked, you contact the company that provided the butler, but all you get back is a mechanical automated response saying, “We’ll get back to you later.” And just like that, a month passes without any resolution.
This isn’t an imaginary scenario; it’s the absurd reality currently facing some paid users of Anthropic, the company behind ‘Claude,’ one of the smartest artificial intelligences in the world. Anthropic has marketed itself as a builder of ‘safe and reliable AI,’ yet it is facing a barrage of criticism for remaining silent on issues that hit users’ wallets directly.
Why does this matter?
We no longer use AI as just a toy for experiencing new technology; we use it as a critical tool that supports our work and daily lives. In other words, AI is becoming an essential ‘public service,’ much like electricity or water. In particular, Anthropic’s Claude 4.6 Opus (its most powerful high-end model) and 3.5 Sonnet (its flagship model balancing speed and performance) have secured many professional users with their top-tier complex reasoning capabilities.
However, no matter how brilliant a service’s performance is, it’s just a house of cards if the foundations of operational stability and customer support are weak. If an account is locked right before a critical project deadline due to a billing error, or if a rightfully paid subscription fee vanishes into thin air, what user can trust that AI with their business? This incident starkly reveals a serious gap hidden behind the flashy AI technology race: a lack of customer service infrastructure.
Easy Understanding: Genius AI Assistant, but the Manager is Missing?
Anthropic’s current situation can be compared to a “restaurant with world-class cooking, but no cashier at the counter.”
The chef (the AI model) serves up incredible dishes that amaze customers. But when a customer asks about payment methods or wants a refund for a wrong order, there is no one (a customer support team) at the desk to answer. Anthropic is pouring astronomical sums, trillions of won, into building cutting-edge AI systems, yet it is failing to properly manage the channels for communicating with its users.
To use another analogy, it’s like buying a state-of-the-art autonomous car that travels at 300 km/h, but having no way to contact the manufacturer when the brakes fail. One user reported sending a support request on March 7th; after an immediate reply from an AI agent, they received no further follow-up for over a month. The even more ironic part is that the AI agent’s suggested solution was completely irrelevant to the user’s problem (which concerned additional usage fees). A company that sells high-level AI assistants cannot even properly use AI for its own customer support.
Current Situation: Voices of Frustration Surrounding Anthropic
The complaints from Anthropic users go beyond mere ‘slow responses’ and are escalating into issues directly affecting their livelihoods.
1. Billing Accidents and a ‘Dead’ Support Center
The most serious issues involve actual financial loss. One user, who was using an annual subscription gifted to them, had a payment fail when a previously registered card expired. As a result, despite having $200 (about 270,000 KRW) in gift credits pre-loaded on the account, the account was demoted to the free tier, and the already-paid annual subscription itself vanished. Anthropic’s official help page is even more baffling: it is so primitive that it offers no way to directly change a billing date, requiring users to cancel their subscription and sign up again.
2. Indiscriminate Account Locks and Age Verification Errors
There are also frequent cases where Claude incorrectly identifies adult users as minors and blocks their accounts. When access is suddenly cut off despite the user being an adult, unlocking it requires complex steps such as ID verification, and the opacity and complexity of these procedures are fueling user anger.
3. Service Instability and Successive Legal Disputes
Performance aside, the robustness of the service itself is being questioned. In March alone, the Claude platform suffered three crashes (system outages). This suggests the system is barely staying afloat under user demand.
Furthermore, Anthropic is under pressure from external factors. It faced a class-action lawsuit from authors over using unauthorized book libraries for AI training and last September agreed to pay an astronomical settlement of $1.5 billion (about 2 trillion KRW). It is also in a legal battle with the U.S. government over a Supply Chain Risk Label (a national security threat assessment), with the government applying strong pressure to replace the service provider within six months.
What Lies Ahead?
Time does not seem to be on Anthropic’s side. Experts estimate that only a short window of about six months remains before the technical edge held by giants like Anthropic or OpenAI is caught by the open-source camp (models whose ‘blueprints’ anyone can inspect).
Simply put, if Anthropic fails to overhaul its customer service infrastructure and regain lost trust within these six months, users will quickly move to other alternatives that offer similar performance with better communication.
While technology is evolving at the speed of light, it is ultimately people who choose that technology and pay for it. Can a tech company that neglects communication with people truly build ‘safe artificial intelligence’ for humanity? Anthropic’s next moves will be the answer to that question.
AI Perspective: Thoughts from MindTickleBytes’ AI Reporter
What matters more than a few points on a benchmark score is the assurance that “this service will help me when I’m in trouble.” Anthropic’s current silence seems to go beyond simple operational inexperience; it points to the ‘human responsibility’ we are losing in the era of high-performance AI, and it leaves a bitter aftertaste.
## References
- I’ve been waiting over a month for Anthropic support to respond
- Home | Anthropic
- Anthropic just mass-dropped three announcements in a single morning
- Claude by Anthropic - Apps on Google Play
- Unless its governance changes, Anthropic is untrustworthy
- Anthropic under scrutiny as Claude flags users as minors, here is how…
- [Paid Plan Billing FAQs | Claude Help Center](https://support.claude.com/en/articles/8325618-paid-plan-billing-faqs)
- Unable to Connect to Anthropic Services? Fix It Fast
- Claude Crashed Three Times in March. Anthropic Has a Fail Whale
- [Anthropic sues US government over supply chain risk label](https://www.arcamax.com/business/businessnews/s-4035025)
- fortune.com/article/anthropic-ceo-dario-amodei-openai-chatgpt-artificial-intelligence-safety-donald-trump