Hello readers,
Welcome to the AI For All newsletter! Today, we’re talking about Google’s stealthy edge AI challenge to Apple’s built-in voice dictation, protecting data in autonomous AI systems, and more!
AI Toolkit: Google AI Edge Eloquent

What: Google AI Edge Eloquent is a free, on-device dictation app for iOS that transcribes your speech in real time, automatically removes filler words, and offers one-tap tools to polish, formalize, shorten, or expand your text. It runs entirely on your phone using Google's Gemma-based speech recognition models — no subscription, no server, no usage cap — and quietly appeared in the App Store in early April 2026 with virtually no fanfare from Google.
How: Speak into Eloquent and it transcribes live, then automatically cleans up the raw output when you stop — stripping the ums and ahs and smoothing everything into readable prose. From there, you can apply transformation modes (formal, short, long, key points) or just copy the cleaned text straight to your clipboard. One of its more underrated features is how it builds your personal vocabulary: correct a name or piece of jargon once, and Eloquent quietly adds it to its dictionary so you don't have to again. The whole experience is designed to get out of your way.
Why: If you've ever fought with Apple's built-in dictation — and on iOS, most people have — Eloquent is a genuine upgrade. The interface for reviewing and accepting AI edits is clean and intuitive, and the edge-based processing means it works even without a signal, which matters more than the privacy angle for most users. It's not quite frictionless yet: the polish feature can occasionally over-edit longer dictations, treating substantive phrases as filler worth cutting. And as seamless as the clipboard workflow is, the app's ceiling is limited until it becomes a native iOS keyboard — right now you're always one context-switch away from wherever you actually want the text to land.
Our favorite part: The dictionary-learning flow. Most dictation apps make you configure custom vocabulary upfront, which nobody does. Eloquent just watches you make corrections and adapts. It's a small thing, but it's the kind of creature comfort that makes a utility feel like it was built by someone who actually uses it.
Pricing:
Free: Full access to all transcription, polish, and transformation features. Fully offline mode included, no subscription required.
Deals or promotions: No paid tier at time of writing. Given Google's track record, that could change — or the app could quietly disappear. Enjoy it while it's here.
🔥 Rapid Fire Inferno
Analysis: The Hater’s Guide to OpenAI
Sam Altman exhibits a “pattern of deception”
OpenAI did not make $13B in revenue in 2025
OpenAI likely made less than $10B
OpenAI counts sales before Microsoft’s 20% share
OpenAI cannot and will never be able to afford $600B in compute spend
The media fails to mention this and other obvious problems
OpenAI CFO Sarah Friar does not believe OpenAI is ready to go public
Friar is concerned about slowing revenue growth and about the company’s ability to pay its bills
Friar has been left out of conversations around financial planning for data center capacity and no longer reports to Altman
Startup governance expert Eric Ries said that some of OpenAI’s accounting practices would have been considered “borderline fraudulent” in other eras
Ries told The New Yorker, “The company levered up financially in a way that’s risky and scary right now” (OpenAI disputes this)
Commentary: The deceptive and dangerous marketing of AI
How we talk about “AI” (LLMs) needs to change
Anthropic’s users are facing erratic rate limits and model performance
What a user pays for an LLM subscription is massively subsidized
The economics of LLMs are such that there is no “profit lever”
Compute demand is unpredictable — companies are guessing
Guessing incorrectly has material financial consequences
Anthropic’s Mythos model is all deceptive marketing
Anthropic’s developers suggested that Mythos had “broken containment and sent a message” when it was actually instructed to send the message and never escaped any container
AI CEOs co-opt doomerism for marketing purposes (at their own peril)
The man who threw a Molotov cocktail at Sam Altman’s house was partially inspired by a piece of doomer fan fiction that was itself inspired by Altman’s doomer marketing, and whose author was suggested for the Nobel Peace Prize by … Altman himself
OpenAI investors question $852B valuation as strategy shifts
OpenAI CRO Denise Dresser accuses Anthropic of inflating its revenue
This is the pot calling the kettle black
Stargate continues to collapse — Microsoft takes over Norway data center
Snap is the latest company to engage in AI washing
Snap was pressured by an activist investor to reduce costs and headcount
Snap is unprofitable and mentioned AI to lift its declining stock price
It sounds better to Wall Street to attribute layoffs to “AI efficiencies” than to admit a bad business — Snap’s stock rose 6%
Snap’s $400M deal with Perplexity collapsed
Perplexity likely couldn’t afford to pay $400M over one year
Microsoft, Amazon, Oracle, Meta, Block — none of these layoffs happened because LLMs replaced workers; they were driven by reasons unrelated to AI (overhiring) or by the need to free up cash to burn on AI infrastructure
LinkedIn data shows that the hiring decline is not because of AI
Addendum: TechCrunch adds the obligatory “yet” to the end of the headline for no reason — four years in and we’re still talking about LLMs in the future tense instead of the stagnant reality
Yeah, we’re in a bubble: struggling shoe retailer Allbirds pivots to AI 🙃
Allbirds announced a deal to raise $50M in funding
$50M is not enough to buy the GPUs such a venture requires, let alone the associated hardware
Addendum: CNBC says AI infrastructure “can be lucrative” and cites NVIDIA as its sole example. NVIDIA holds a monopoly on GPUs, but it does not run the data centers. CoreWeave, Oracle, and others build and run the data centers, and they are very much not lucrative.
Benchmarks for AI model performance are flawed and unreliable
Microsoft starts removing “unnecessary” Copilot buttons from Windows 11
Google releases native Gemini app for macOS
In a World of AI Agents: Intent > Identity
AI-powered bots aren’t just logging in anymore. They’re mimicking real users, slipping past identity checks, and scaling attacks faster than ever.
Thousands of companies worldwide trust hCaptcha to protect their online services from automated threats while preserving user privacy.
Now is the time to take control of your security.
📖 What We’re Reading
The expansion of AI-driven systems is redefining the scope of big data security. As organizations scale their use of autonomous agents and ML pipelines, they face a new class of threats. Distributed denial-of-service (DDoS) attacks now target inference endpoints and orchestration layers, while data leaks emerge from overlooked vectors. The security challenge is no longer limited to protecting data at rest or in transit. It now extends to protecting data while AI systems are processing it, requiring organizations to secure both the information and the operations that manipulate it. Addressing these risks requires a combination of architectural foresight, real-time behavioral monitoring, and strict access governance. Without these guardrails, system complexity can outpace an organization’s ability to contain threats.