Hello readers,
Welcome to the AI For All newsletter! Today, we’re talking about how AI research has revealed new details about how human vision works — and how AI vision could work. That, and the biggest AI news of the week.
AI in Action: A new way to see

For 50 years, neuroscience textbooks taught that the brain's visual cortex works essentially as an edge detector — identifying sharp contrasts between light and dark. A new study from Stanford University and the University of Göttingen has upended that model, and AI is what made the discovery possible.
Researchers built AI-powered digital twins of individual mouse neurons — virtual replicas that behave like the real cells — and used them to rapidly predict which images would activate specific neurons. That process surfaced a previously unknown third type of neuron with a two-part receptive field: one part processes high-frequency textures like fur or feathers, while the other processes broader shapes and arrangements, like a face. Together, they allow the brain to separate objects from their backgrounds far more efficiently than edge detection alone could explain. Predictions from the AI models were later confirmed through targeted experiments on real mouse brains at Stanford.
The practical implications extend well beyond neuroscience. Most modern computer vision systems are still built on the old edge-detection model — meaning they can struggle to identify objects in cluttered or complex environments. By incorporating these newly discovered two-part neurons into AI design, researchers believe vision systems could get significantly better at real-world object recognition.
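To make the idea concrete, here is a minimal NumPy sketch of the kind of two-part receptive field described above — a hypothetical neuron whose response combines a high-frequency "texture" component with a broad low-frequency "shape" component. This is an illustration of the concept, not the study's actual model, and all function names here are our own.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """2-D Gaussian kernel, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def conv2d(image, kernel):
    """Naive 'same'-size 2-D convolution with edge padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="edge")
    out = np.empty_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def two_part_response(image):
    """Toy two-part receptive field (illustrative only):
    - texture part: high-frequency detail, image minus a small blur
    - shape part:   coarse layout, a broad blur of the image
    Gating the texture signal by the shape signal makes textured
    regions that also form a coherent shape stand out from the
    background, which pure edge detection cannot do."""
    texture = image - conv2d(image, gaussian_kernel(5, 1.0))
    shape = conv2d(image, gaussian_kernel(21, 6.0))
    return np.abs(texture) * shape
```

On a toy image with a finely textured patch (think fur) on a smooth background, the gated response lights up inside the patch and stays near zero elsewhere — edges alone would only trace the patch's outline.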
The study is also a showcase for digital twins as a scientific tool. Rather than running endless physical experiments, researchers used AI to simulate millions of image combinations in seconds — dramatically accelerating a discovery that traditional methods might have taken years to surface.
🔥 Rapid Fire
Commentary: How the war in Iran might make the AI bubble worse
SoftBank is trying to raise a $40B loan to cover its OpenAI investment
SoftBank is already $56.5 billion in debt
OpenAI is no longer part of the planned expansion of Stargate Abilene
Stargate Abilene is delayed and will not be finished by mid-2026
Oracle was dissatisfied with the revenue it was making
Anthropic’s lifetime revenue is ~$5B, per an affidavit filed in a DoD lawsuit
Anthropic spent over $10 billion on training and inference
NVIDIA wants to spend $26 billion to build its own models
ChatGPT and other chatbots approved for use in the Senate
Senators will ask ChatGPT questions like ‘what did he say?’
Superhuman faces class action lawsuit over Grammarly AI feature
Judge orders Perplexity to stop its AI agent from trying to shop on Amazon
AI vision systems don’t see the way you do, and that could be a problem
AI Agents Are Reading Your Docs. Are You Ready?
Last month, 48% of visitors to documentation sites across Mintlify were AI agents—not humans.
Claude Code, Cursor, and other coding agents are becoming the actual customers reading your docs. And they read everything.
This changes what good documentation means. Humans skim and forgive gaps. Agents methodically check every endpoint, read every guide, and compare you against alternatives with zero fatigue.
Your docs aren't just helping users anymore—they're your product's first interview with the machines deciding whether to recommend you.
That means:
→ Clear schema markup so agents can parse your content
→ Real benchmarks, not marketing fluff
→ Open endpoints agents can actually test
→ Honest comparisons that emphasize strengths without hype
In the agentic world, documentation becomes 10x more important. Companies that make their products machine-understandable will win distribution through AI.
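The "clear schema markup" point above usually means embedding schema.org JSON-LD in the page so agents can parse metadata without scraping prose. A minimal sketch, assuming a hypothetical docs page — the field values here are illustrative, not a real product's metadata:

```python
import json

# Hypothetical schema.org JSON-LD a docs page might embed so agents
# can read the page's type, title, and freshness programmatically.
doc_schema = {
    "@context": "https://schema.org",
    "@type": "TechArticle",
    "headline": "Getting started with the Example API",
    "about": "REST API reference",
    "dateModified": "2025-01-01",
}

# Rendered as a <script> tag in the page's <head>.
snippet = (
    '<script type="application/ld+json">'
    + json.dumps(doc_schema)
    + "</script>"
)
```

A human never sees this block, but an agent comparing your docs against a competitor's can consume it directly.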
📖 What We’re Reading
“If you need a system to stop a robotic arm because a human walked through a laser curtain, use a hard-wired sensor and simple logic code. It is cheap, fast (<10ms), and 100% reliable. Do not route this signal through an LLM to ask, "Do you think there is a human there?" The latency and the risk of hallucination are unacceptable for immediate safety threats.
GenAI belongs in the domain of Optimization and Prediction (seconds to hours), not in the domain of Reflex and Safety (milliseconds).”
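The excerpt's distinction can be made concrete with a sketch of the reflex layer: a plain polling loop over a safety input that stops the arm with deterministic logic, no model or network call in the path. `read_input` and `stop_arm` are hypothetical stand-ins for hardware hooks; a real interlock would be hard-wired, but the control logic is this simple.

```python
import time

TRIP_THRESHOLD = 1  # digital input: 1 = light curtain beam broken

def light_curtain_broken(read_input):
    """Check the safety input directly -- no model, no inference."""
    return read_input() >= TRIP_THRESHOLD

def safety_loop(read_input, stop_arm, poll_s=0.001):
    """Reflex-layer interlock: deterministic, millisecond-scale.
    `read_input` and `stop_arm` are hypothetical hardware hooks.
    The stop is unconditional the instant the beam is broken."""
    while True:
        if light_curtain_broken(read_input):
            stop_arm()
            return
        time.sleep(poll_s)
```

Every branch here is auditable and bounded in time — exactly the properties an LLM in the loop would give up.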




