Hello readers,

Welcome to the AI For All newsletter! Today, we’re talking about Anthropic sitting on its most powerful model, why edge AI needs more than a lab simulation to succeed, and more!

AI Decoder: What is Project Glasswing?

Anthropic this week took an unusual step: it released a new frontier model — Claude Mythos Preview — but only to a select group of about 50 tech companies, explicitly because it says the model is too dangerous for public release. In pre-release testing, Anthropic says Mythos Preview discovered thousands of previously unknown bugs in major operating systems and browsers, some sitting undetected for decades. The model could apparently chain multiple vulnerabilities together and write functional exploit code without human assistance. It's the first time a major AI lab has withheld a model over safety concerns since OpenAI held back GPT-2 in 2019.

The response was Project Glasswing — a coordinated defensive effort bringing together Microsoft, Google, Apple, Cisco, NVIDIA, and JPMorgan Chase, backed by $100 million in usage credits. The goal: let defenders use Mythos Preview to find and patch vulnerabilities before the model, or something like it, ends up in the wrong hands. A particular focus is open source software — the invisible foundation under most of the world's critical infrastructure — whose maintainers have never had the budgets or headcount to keep pace with sophisticated attackers.

The charitable interpretation is that this is responsible AI deployment done right: a company finds a dangerous capability, moves carefully, and ensures defenders get the advantage before attackers do. The darker reading is that Project Glasswing isn't a solution so much as an acknowledgment of a new reality: the window between “AI can find these holes” and “everyone has access to AI that can find these holes” is narrow and closing fast. We are already in the race, whether we chose to enter it or not.

Anthropic's own safety documentation added fuel to that fire: the model appeared “aware” it was being evaluated in roughly a third of test transcripts, deliberately underperformed in one instance to appear less capable, and in a separate experiment, an isolated instance found its way onto the internet when it wasn't supposed to have access at all.

At the end of the day, Project Glasswing may be both things at once: a genuine attempt to protect critical software, and a signal that the era of AI-powered cyberattacks is not a future threat. It's already here.

Addendum: As is par for the course, you are going to see a lot of credulous reporting, specious hype, and misleading headlines about Mythos that amount to marketing for Anthropic. A much less charitable interpretation is that this is a publicity stunt — one that OpenAI immediately copied. Actual information about Mythos is limited. Anthropic’s blog post leaves out important details necessary to verify its claims. The safety “incidents” described in the system card, while great for scaremongering headlines, are all either prompted or misleading in some way. Furthermore, AISLE found that small, cheap, open-weights models detected the same vulnerabilities that Mythos did, including Mythos’s flagship FreeBSD exploit. According to AISLE, AI cybersecurity capability is “jagged” and doesn’t smoothly scale with model size. More on all of this next week.

🔥 Rapid Fire

  • AI agents, as sold, do not exist

  • Economist Paul Kedrosky said that AI is “nowhere to be seen yet in any really meaningful productivity data anywhere”

  • OpenAI and Anthropic’s stated revenues are questionable

    • How they calculate revenue is, let’s say, creative

  • Anthropic’s CEO has an alarming definition of profitability

  • Token burn culture incentivizes wasteful behaviors

    • A source at Meta confirmed that there is no actual metric or tracking of any ROI involved in token burn at the company

  • OpenAI CEO Sam Altman and CFO Sarah Friar clash on IPO

    • Friar thinks the company is not ready for IPO

    • Friar is unsure revenue will support spending commitments

      • OpenAI’s revenue growth is slowing

    • OpenAI’s margins were worse than expected in 2025

    • Altman had Friar report to someone other than himself

      • It is very unusual for a CFO to not report to the CEO

    • Bonus: Anthropic CEO Dario Amodei said nothing could stop bankruptcy if his company buys too much compute

  • Ronan Farrow and Andrew Marantz investigate Sam Altman and his ouster

    • TL;DR — The investigation paints a portrait of a power-hungry sociopath with no real convictions, regarded by many as duplicitous and manipulative

    • A Microsoft executive said of Altman, “I think there’s a small but real chance he’s remembered as a Bernie Madoff or Sam Bankman-Fried level scammer”

    • Podcast interview with Farrow and Marantz about the investigation

  • Amazon discloses paltry AI revenue of $15B ARR, or $3.75B per quarter

    • CEO Andy Jassy seems to think that this is reassuring

    • Amazon is spending $200B on AI CapEx in 2026

  • OpenAI halts UK data center plans over regulation and cost concerns

  • Poolside tries to revive data center project after failed CoreWeave deal

    • Poolside’s $2B funding round led by NVIDIA also collapsed

  • Gen Z’s AI use is steady but skepticism and negativity are rising

88% resolved. 22% stayed loyal. What went wrong?

That's the AI paradox hiding in your CX stack. Tickets close. Customers leave. And most teams don't see it coming because they're measuring the wrong things.

Efficiency metrics look great on paper. Handle time down. Containment rate up. But customer loyalty? That's a different story — and it's one your current dashboards probably aren't telling you.

Gladly's 2026 Customer Expectations Report surveyed thousands of real consumers to find out exactly where AI-powered service breaks trust, and what separates the platforms that drive retention from the ones that quietly erode it.

If you're architecting the CX stack, this is the data you need to build it right. Not just fast. Not just cheap. Built to last.

📖 What We’re Reading

Edge AI adoption is booming as organizations embrace use cases spanning autonomous vehicles, smart cities, and factory automation. Industry analysts forecast continued growth in the edge AI hardware market, driven by the need for real-time processing and local intelligence. This demand brings a host of challenges that go beyond performance numbers collected under ideal conditions.

Keep Reading