Anthropic predicts its AI systems will match Nobel Prize-winning intellect by late 2026 or early 2027. Meanwhile, actual customers have been waiting over a month for someone to respond to their billing problems. The contrast tells you everything you need to know about where AI companies put their priorities.
In early March, users started noticing unexpected charges on their Anthropic accounts. One Claude Max subscriber reported approximately $180 in mystery charges appearing between March 3 and 5. As of April 8, 2026, they're still waiting for a response. That's over a month of radio silence from a company building systems that supposedly rival humanity's brightest minds.
When the Robots Work Better Than Customer Service
The irony is almost too perfect. Anthropic’s Claude can write code, analyze complex documents, and hold nuanced conversations about philosophy. But try to get a human at Anthropic to look at your credit card statement? Good luck.
This isn't just about one frustrated customer. Multiple users reported similar billing issues in March, with charges they didn't authorize or couldn't explain. Some tried updating their payment methods with three different credit cards (Mastercard, Visa, and American Express) plus Stripe Link. Every single one was rejected. That's not user error. That's a system problem.
The Real Cost of Moving Fast
AI companies love to talk about racing toward artificial general intelligence. They’re less excited to discuss the boring infrastructure that keeps actual customers happy. Support tickets. Billing systems. The unglamorous work of making sure people can actually use your product without getting phantom charges.
Anthropic isn’t alone in this. The entire AI industry has a pattern of building incredible technology while treating customer support as an afterthought. But when you’re charging people money—especially unexpected amounts—that afterthought becomes a real problem.
The timing makes it worse. In April 2026, Anthropic pushed out version 2.1.88 of its Claude Code software package. The update accidentally included a file that shouldn’t have been there, causing its own set of headaches. TechCrunch called it “a month” for Anthropic, and they weren’t wrong.
What This Means for Regular Users
If you’re using AI tools from any company, not just Anthropic, pay attention to your credit card statements. Set up alerts for charges over a certain amount. Screenshot your usage and billing history regularly. These tools are powerful, but the companies behind them are still figuring out basic operational stuff.
The bigger question is what happens when these AI systems become more integrated into critical services. If a company can’t handle billing support now, what happens when their AI is making decisions about healthcare, finance, or infrastructure?
The Nobel Prize Problem
Anthropic’s prediction about Nobel-level AI by late 2026 or early 2027 is ambitious. Maybe they’ll get there. But intelligence isn’t just about solving complex problems. It’s also about responding when someone needs help. It’s about building systems that work reliably. It’s about treating customers like humans, not data points.
An AI that can match a Nobel laureate but works for a company that ignores billing complaints for a month isn’t actually that intelligent. It’s just another example of tech companies optimizing for the flashy stuff while neglecting the basics.
For now, those unexpected charges from March remain unexplained. The support tickets remain unanswered. And somewhere, an AI system is probably getting smarter by the minute, completely unaware that the humans who built it can’t figure out how to process a refund.
That’s the AI industry in 2026: building minds that could change the world, one ignored customer at a time.
đź•’ Published: