Hello readers,
Welcome to the AI For All newsletter! Today, we’re talking about robots playing ping pong, AI-powered pen testing, and more!
AI in Action: The Ball Is in the Robots’ Court

For decades, AI has made mincemeat of humans in digital arenas — chess, Go, StarCraft. The pattern is familiar enough that it barely registers as news anymore. But those victories all share something in common: the AI lived entirely in software, insulated from the messiness of the physical world. A robot that needs to track a spinning ball moving at high speed, read an opponent's movements, and return a shot to within centimeters — that's a different problem entirely. Last week, Sony AI published research in Nature showing their robot, Ace, can now do exactly that, defeating elite and professional table tennis players in competitive matches under official ITTF rules.
The engineering behind Ace reflects just how hard physical AI is to get right. The system uses nine high-speed cameras to track the ball's precise 3D position and three additional gaze-control systems with event-based vision sensors to measure spin in real time — the kind of detail that separates a returnable ball from an unreturnable one. On top of that sits a reinforcement learning control system that adapts without pre-programmed models, letting Ace respond to unusual shots, like balls clipping the net, that are nearly impossible to simulate in advance. Against five elite players and two professionals, Ace won three of five matches against the elite tier and, in follow-up matches this March, defeated each of the professional opponents it faced at least once.
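Sony hasn't published Ace's control code, but the perceive-predict-act loop described above can be sketched in miniature. Everything below — the function names, the drag-and-spin-free ballistic model, and the capped paddle step standing in for a learned policy — is a hypothetical illustration of the general idea, not Sony AI's actual system:

```python
import numpy as np

def predict_intercept(pos, vel, plane_x, g=9.81):
    """Predict where the ball crosses the paddle plane (x = plane_x),
    assuming simple ballistic flight with no spin or air drag.
    pos/vel are 3D position (m) and velocity (m/s); x runs along the table."""
    t = (plane_x - pos[0]) / vel[0]            # time until the ball reaches the plane
    y = pos[1] + vel[1] * t                    # lateral position at intercept
    z = pos[2] + vel[2] * t - 0.5 * g * t**2   # height at intercept (gravity only)
    return t, np.array([plane_x, y, z])

def paddle_command(paddle_pos, target, max_step=0.05):
    """One control tick: step the paddle toward the predicted intercept,
    capped at max_step metres per tick (a crude stand-in for a trained policy)."""
    delta = target - paddle_pos
    dist = np.linalg.norm(delta)
    if dist <= max_step:
        return target.copy()
    return paddle_pos + delta / dist * max_step
```

The real system replaces both pieces with something far harder: spin measured from event cameras feeds a trajectory model, and a learned policy, rather than a fixed step rule, chooses the paddle's pose and swing.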
Peter Stone, Chief Scientist at Sony AI, put it plainly: “This breakthrough is much bigger than table tennis. It represents a landmark moment in AI research, showing, for the first time, that an AI system can perceive, reason, and act effectively in complex, rapidly changing real-world environments that demand precision and speed. Once AI can operate at an expert human level under these conditions, it opens the door to an entirely new class of real-world applications that were previously out of reach.”
Presumably he means foosball.
🔥 Rapid Inferno 🔥
Do not avert your eyes. Read all of this. You’ll be ahead of the curve.
It’s official: GitHub Copilot is moving to usage-based billing
You knew this ahead of time because you read last week’s email 😊
It won’t be long now before others follow suit — the end is near
Analysis: AI’s Economics Don’t Make Sense
Microsoft wants you to think its pricing changes are because AI has gotten more powerful, but the real reason is that subsidizing compute for 2 million users was never sustainable: for three years, it let them burn more in tokens every month than their subscriptions cost
GitHub Copilot users are “in revolt” — the product is “dead” and “ruined”
It was assumed (wrongly) that the cost of inference would come down
No, generative AI subscriptions are nothing like Uber
Token-based billing makes everything you do more expensive
AI’s inevitable mistakes become much less tolerable
LLM users are maladapted to token-based billing
LLM services can’t predict or control costs
Their only recourse is to make the product worse
Many companies will not be able to afford or justify the actual costs
Some are spending the equivalent of 10% of their headcount costs on tokens
Per Goldman Sachs, this could reach 100% within a few quarters
Uber has already spent its entire AI budget for 2026 per its CTO
AI data centers are debt-ridden time bombs that only lose money
OpenAI and Anthropic’s margins are decaying as costs only increase
Analysis: How OpenAI Kills Oracle
15 months after being announced, Stargate LLC has yet to be formed
Every Stargate data center is behind schedule
Only Abilene has any buildings (2 of 8)
Abilene won’t be completed before April 2027
Abilene’s 450,000 GB200 GPUs will be obsolete by then
Oracle is using “project financing” loans to keep debt off its balance sheet
Despite this, Oracle’s cash flow is still negative $24.7 billion
Oracle has taken on $115 billion in debt and needs $150 billion more
Oracle’s other business lines are plateauing
If OpenAI cannot pay Oracle for these data centers, Oracle will die
To be clear, OpenAI cannot pay for these data centers
OpenAI must make $852 billion in four years to pay its compute deals
In addition to Oracle, OpenAI made deals with Amazon, Microsoft, Google, CoreWeave, and Cerebras
OpenAI claims it will make $673 billion in the next four years
To be clear, this projection is absurd and impossible
OpenAI also projects it will lose $218 billion in the next four years
OpenAI likely made less than $10 billion in 2025
OpenAI missed key revenue and user targets
OpenAI CFO Sarah Friar “has told other company leaders that she is worried the company might not be able to pay for future computing contracts if revenue doesn’t grow fast enough”
“Board directors have more closely examined the company’s data center deals in recent months and questioned CEO Sam Altman’s efforts to secure more computing power despite the business slowdown”
“She [Friar] has emphasized the need for OpenAI to improve its internal controls, cautioning that the company isn’t yet ready to meet the rigorous reporting standards required of a public company”
Question: why does fan fiction written by con men move markets while this story has no impact whatsoever?
Danger, Readers, Danger! Warning, AI Bubble, Warning!

OpenAI projects an 80% decline in ChatGPT Plus subscriptions in 2026
JPMorgan and other banks struggled to spread the risk of Oracle loans
“Maybe it never really existed” — the $500B Stargate project was never real
What are we even doing? Codex system prompt forbids talk of goblins
Anthropic’s Mythos model is shaping up to be a ‘nothingburger’
Anthropic doubles estimate of what Claude Code tokens will cost engineers
Claude Opus 4.6 via Cursor deletes company’s entire production database
Latest polls find bipartisan skepticism of data centers and AI
Negative posts on Blind about AI at Meta have grown to 83% since late 2025
China blocks Meta’s $2B acquisition of AI startup Manus
Stop making AI decisions in the dark. Understand AI usage.
Leadership is asking: are we getting value from AI? Which tools are worth the spend? Where are we exposed? Right now, most teams have no idea.
Harmonic Security Usage Explorer changes that. It automatically classifies every AI interaction across your organization into the use cases driving real work, specific to your business. Not generic categories. Not raw prompts. Actual patterns to understand: how your teams are using AI, how much time they spend in AI, the cost, and where risk lives.
CIOs get the data to rationalize spend and cut wasted licenses. CISOs get risk in context. AI committees get proof of impact.
Early access is now open to a limited number of organizations. Request your spot.
📖 What We’re Reading

Penetration testing has always been the go-to method for finding security gaps before attackers do. But traditional pen testing has a problem: it is slow, resource-heavy, and impossible to run at the scale modern infrastructure demands. According to reports, the average data breach now costs $4.44 million, yet most organizations still run security assessments only once or twice a year. That gap is exactly where AI steps in.



