Jeremy's Favorites
Everything you need to know about AI agents - TED
About the Video
- Swami Sivasubramanian discusses AI agents and their capabilities: understanding a goal, writing code, testing it, and not only learning from mistakes and errors but fixing them immediately and retesting until the goal is completed.
Unconfuse Me with Bill Gates — Episode 2: Sal Khan
About the Podcast
- Sal Khan is a true pioneer of harnessing the power of technology to help kids learn. Bill Gates sits down with the founder of Khan Academy to talk about why tutoring is so important, how Khanmigo — Khan Academy's new AI-powered tutor — is making the most of ChatGPT, and how we can keep teachers at the center of the classroom in the age of AI.
Twesha's Favorites
The Coming Disruption: How Open-Source AI Will Challenge Closed-Model Giants
Key Points from the Article
- Open source is catching up fast. Models from Meta (LLaMA), Mistral, and DeepSeek now match or exceed proprietary alternatives for many real-world tasks — and the gap is closing faster than most expected.
- The ethics of closed AI. Closed models (OpenAI, Anthropic, Google) argue that restricting access improves safety. Open-source advocates counter that transparency is essential for accountability — you can't audit a black box.
- Power concentration is the real risk. When a handful of companies control the most capable AI, they also control who gets access, on what terms, and at what price. Open source redistributes that power.
- DeepSeek changed the conversation. Its open-source reasoning frameworks demonstrated that frontier-level capabilities don't have to come with a closed-source price tag — or a walled garden.
- Even OpenAI is reconsidering. CEO Sam Altman acknowledged the company may be on the "wrong side of history" on openness — a striking admission from the lab that started the closed-model era.
Kurt's Favorites
Anthropic Won't Release Its Most Powerful AI — Because It's Too Good at Hacking
Key Points from the Article
- Anthropic built Claude Mythos Preview — its most capable model to date — and then decided not to release it to the public.
- In pre-release testing, Mythos didn't just find software vulnerabilities — it found thousands of high- and critical-severity bugs across major operating systems, web browsers, and the Linux kernel. It also uncovered a 27-year-old vulnerability in OpenBSD capable of crashing any machine running it.
- The model can identify undiscovered zero-day flaws and weaponize them — a combination Anthropic concluded was too dangerous to put in the open.
- Instead of a public launch, Anthropic announced Project Glasswing: a limited rollout exclusively for defensive cybersecurity purposes, with partners including AWS, Apple, Cisco, CrowdStrike, Google, Microsoft, and NVIDIA.
- This marks the first time in roughly seven years that a leading AI lab has publicly withheld a model over safety concerns — a significant moment for the industry.
OpenAI's $12 Billion Azure Bill — and the Partnership That's Coming Apart
Key Points from the Article
- OpenAI spent over $12 billion on Azure inference through mid-2025 — and the costs scale linearly with usage, consuming revenue almost as fast as it comes in.
- In its own pre-IPO investor documents, OpenAI flagged its dependence on Microsoft as a material business risk — a remarkable admission about a partner that has invested $13 billion in the company.
- Microsoft responded by quietly building its own competing models. In April 2026, it launched three proprietary foundation models under the Microsoft AI (MAI) brand — the clearest signal yet that it's hedging its OpenAI bet.
- OpenAI then signed a $50 billion deal with AWS, making Amazon the exclusive third-party cloud distributor of OpenAI Frontier — a move that may conflict with existing Azure exclusivity terms. Microsoft is reportedly considering legal action.
- The partnership that kicked off the modern AI boom is fracturing under the weight of money, competing interests, and infrastructure that neither side can afford to walk away from.