Hello readers,

Welcome to the AI For All newsletter! Today, we’re talking about how AI research has revealed new details about how human vision works — and how AI vision could work. That, and the biggest AI news of the week.

AI in Action: A new way to see

For 50 years, neuroscience textbooks taught that the brain's visual cortex works essentially as an edge detector — identifying sharp contrasts between light and dark. A new study from Stanford University and the University of Göttingen has upended that model, and AI is what made the discovery possible.

Researchers built AI-powered digital twins of individual mouse neurons — virtual replicas that behave like the real cells — and used them to rapidly predict which images would activate specific neurons. That process surfaced a previously unknown third type of neuron with a two-part receptive field: one part processes high-frequency textures like fur or feathers, while the other processes broader shapes and arrangements, like a face. Together, they allow the brain to separate objects from their backgrounds far more efficiently than edge detection alone could explain. Predictions from the AI models were later confirmed through targeted experiments on real mouse brains at Stanford.

The practical implications extend well beyond neuroscience. Most modern computer vision systems are still built on the old edge-detection model — meaning they can struggle to identify objects in cluttered or complex environments. By incorporating these newly discovered two-part neurons into AI design, researchers believe vision systems could get significantly better at real-world object recognition.

The study is also a showcase for digital twins as a scientific tool. Rather than running endless physical experiments, researchers used AI to simulate millions of image combinations in seconds — dramatically accelerating a discovery that traditional methods might have taken years to surface.

🔥 Rapid Fire

AI Agents Are Reading Your Docs. Are You Ready?

Last month, 48% of visitors to documentation sites across Mintlify were AI agents—not humans.

Claude Code, Cursor, and other coding agents are becoming the actual customers reading your docs. And they read everything.

This changes what good documentation means. Humans skim and forgive gaps. Agents methodically check every endpoint, read every guide, and compare you against alternatives with zero fatigue.

Your docs aren't just helping users anymore—they're your product's first interview with the machines deciding whether to recommend you.

That means:
→ Clear schema markup so agents can parse your content
→ Real benchmarks, not marketing fluff
→ Open endpoints agents can actually test
→ Honest comparisons that emphasize strengths without hype

In the agentic world, documentation becomes 10x more important. Companies that make their products machine-understandable will win distribution through AI.

📖 What We’re Reading

“If you need a system to stop a robotic arm because a human walked through a laser curtain, use a hard-wired sensor and simple logic code. It is cheap, fast (<10ms), and 100% reliable. Do not route this signal through an LLM to ask, "Do you think there is a human there?" The latency and the risk of hallucination are unacceptable for immediate safety threats.

GenAI belongs in the domain of Optimization and Prediction (seconds to hours), not in the domain of Reflex and Safety (milliseconds).”
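In code, the reflex layer the quote describes is nothing more than deterministic boolean logic — no network call, no model, no ambiguity. A minimal, hypothetical sketch (function and parameter names are illustrative, not from any real controller API):

```python
# Sketch of a hard-wired-style safety reflex: a few boolean checks that
# run in microseconds. Per the quote, an LLM has no place on this path.

def safety_interlock(curtain_beam_intact: bool, estop_pressed: bool) -> str:
    """Return 'STOP' the instant the light curtain is broken or the
    e-stop is pressed; otherwise allow the arm to run."""
    if estop_pressed or not curtain_beam_intact:
        return "STOP"   # cut motor power immediately — no judgment call
    return "RUN"

# A human walks through the laser curtain: the beam breaks, the arm stops.
assert safety_interlock(curtain_beam_intact=False, estop_pressed=False) == "STOP"
# Normal operation.
assert safety_interlock(curtain_beam_intact=True, estop_pressed=False) == "RUN"
```

The whole point is that this logic is exhaustively testable and its latency is bounded — properties a probabilistic model cannot offer on a millisecond safety path.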
