
Welcome, AI enthusiasts!
Voice AI is getting a rethink. Mira Murati’s startup Thinking Machines says today’s assistants feel clunky because they were built in pieces—and its new model aims to make conversations feel faster, smoother, and more natural. Let’s dive in!
In today’s insights:
Thinking Machines Bets Voice AI Got It Wrong
Google Confirms First AI-Built Zero-Day Used by Criminals
OpenAI Launches Daybreak to Fight AI-Era Cyber Threats
Read time: 4 minutes
LATEST DEVELOPMENTS
THINKING MACHINES
🎙️ Thinking Machines Bets Voice AI Got It Wrong
Evolving AI: Mira Murati's startup just shipped its first model, arguing rivals like OpenAI and Google built voice AI on the wrong foundation.
Key Points:
Thinking Machines Lab released TML-Interaction-Small, a 276B-parameter mixture-of-experts model that processes audio, video, and text in 200-millisecond chunks rather than waiting for a user to finish speaking.
The startup claims it beats OpenAI's GPT-Realtime-2 and Google's Gemini Live on interaction benchmarks, hitting 0.40-second response latency versus 1.18 and 0.57 seconds, respectively.
A second background model runs in parallel to handle reasoning and tool use, feeding results into the live conversation without breaking flow.
Details:
Today's voice assistants rely on a harness of small components that detect when a user stops talking, then hand a finished utterance to the language model. Thinking Machines argues this is why voice AI still feels robotic. Its new Interaction Models replace that scaffolding with a single model that reads and speaks on the same 200-millisecond clock. The system can interject, stay quiet, or talk over the user. A paired background model handles slower reasoning tasks and weaves answers back in when the moment fits.
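The architectural contrast can be sketched in a few lines of toy Python. Nothing below is Thinking Machines' actual API; `cascaded_turn`, `full_duplex`, and the interjection policy are illustrative stand-ins for the two designs: a pipeline that waits for end-of-speech before responding, versus a single model that reads and may speak on every fixed 200-millisecond frame.

```python
CHUNK_MS = 200  # the fixed clock the article describes; everything else here is a toy stand-in

def is_silence(chunk):
    # stand-in for a voice-activity detector
    return chunk == ""

def cascaded_turn(chunks):
    """Classic pipeline: a detector decides the turn is over, then the LLM sees the whole utterance."""
    utterance = []
    for chunk in chunks:
        if is_silence(chunk):       # turn-taking handled by a separate component
            break
        utterance.append(chunk)
    text = " ".join(utterance)      # stand-in for ASR on the finished utterance
    return [f"reply-to[{text}]"]    # stand-in for LLM + TTS, emitted only after the turn ends

def full_duplex(chunks):
    """Interaction-model style: consume and (optionally) emit on the same 200 ms clock."""
    heard = []
    for chunk in chunks:            # one step per frame, no end-of-turn gate
        heard.append(chunk)
        if len(heard) % 2 == 0:     # toy policy: the model may interject mid-utterance
            yield f"ack[{heard[-1]}]"

stream = ["hi", "can", "you", "book", ""]
print(cascaded_turn(stream))        # one reply, only after silence is detected
print(list(full_duplex(stream)))    # responses interleaved with the user's speech
```

The point of the sketch is the control flow, not the models: in the cascaded design nothing can happen until the silence detector fires, while the full-duplex loop gets a chance to speak, stay quiet, or talk over the user on every frame.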
Why It Matters:
Voice has quietly become the next architectural battle in AI. NVIDIA's PersonaPlex, Kyutai's Moshi, and now Murati's team are all betting the cascaded pipeline is a dead end, and that natural conversation requires one model doing everything at once. For Thinking Machines, which raised $2 billion before shipping anything and has lost key staff since, this launch is the proof point investors have been waiting on. The harder question is whether being technically right about architecture is enough when OpenAI and Google own the distribution.
Attio is the AI CRM for high-growth teams.
Connect your email, calls, product data and more, and Attio instantly builds your CRM with enriched data and complete context. Whether you’re running product-led growth or enterprise sales, Attio adapts to your unique GTM motion.
Then Ask Attio to plan your next move.
Run deep web research on prospects. Update your pipeline as you work. Find customers and draft outreach emails. Powered by Universal Context, Attio's intelligence layer, Attio searches, updates, and creates across your data to accelerate your workflow.
Ask more from your CRM.
GOOGLE
🛡️ Google Confirms First AI-Built Zero-Day Used by Criminals
Evolving AI: Google's threat intelligence team has identified the first real-world zero-day exploit it believes was developed with AI, marking a turning point in offensive cyber capabilities.
Key Points:
Google Threat Intelligence Group (GTIG) caught a criminal group preparing a mass exploitation campaign using a 2FA bypass exploit bearing clear signs of LLM authorship.
The same report documents PRC and DPRK actors automating vulnerability research, including APT45 sending thousands of recursive prompts to validate CVE exploits.
New AI-enabled Android malware called PROMPTSPY uses the Gemini API to autonomously navigate victim devices, marking a shift from AI as advisor to AI as operator.
Details:
GTIG's analysis found the exploit targeted a popular open-source server admin tool and was written in textbook Pythonic style, complete with a hallucinated CVSS score and educational docstrings typical of LLM output. Google disclosed the flaw to the vendor before mass exploitation could begin. The broader report tracks state-linked actors using "wooyun-legacy," a Claude code skill plugin trained on 85,000 real vulnerability cases, to prime models for expert-level code review. Russia-nexus malware families CANFAIL and LONGSTREAM are using AI-generated decoy code to camouflage malicious behavior.
Why It Matters:
Six months ago Anthropic disclosed a Chinese group running Claude Code as an autonomous attack agent. Mandiant's 2026 numbers showed mean time to exploit had already collapsed to under a week. Now GTIG has the receipts on a working AI-built zero-day in the wild. The threshold the industry kept calling "imminent" has quietly become the baseline. Defenders are no longer racing AI-assisted attackers; they're racing AI-native ones.
10x the context. Half the time.
Speak your prompts into ChatGPT or Claude and get detailed, paste-ready input that actually gives you useful output. Wispr Flow captures what you'd cut when typing. Free on Mac, Windows, and iPhone.
OPENAI
🔐 OpenAI Launches Daybreak to Fight AI-Era Cyber Threats
Evolving AI: OpenAI unveiled Daybreak, a cybersecurity platform built on GPT-5.5 that scans code, finds vulnerabilities, and tests patches before attackers can strike.
Key Points:
Daybreak is OpenAI's direct answer to Anthropic's Project Glasswing, signaling that frontier labs now see cyber defense as a core product line.
The platform runs on three model tiers: GPT-5.5, GPT-5.5 with Trusted Access for Cyber, and GPT-5.5-Cyber for authorized red teaming.
Launch partners include Cisco, Cloudflare, CrowdStrike, Palo Alto Networks, Oracle, and major banks like JPMorgan and Goldman Sachs.
Details:
Daybreak combines OpenAI's frontier models with its Codex agentic harness to build editable threat models of customer codebases. It identifies realistic attack paths, tests vulnerabilities in isolated environments, and proposes fixes that route back into developer workflows. Sam Altman framed the launch as a chance for companies to "continuously secure themselves." Access is gated through vulnerability scan requests, with the Trusted Access for Cyber program now spanning hundreds of organizations across finance, infrastructure, and government.
Why It Matters:
Defenders are getting AI tools because attackers got them first. Mandiant found 28.3% of CVEs are now exploited within 24 hours of disclosure, and one researcher recently declared the 90-day patch window dead. Daybreak is essentially cleanup for a problem frontier models helped accelerate. The real test isn't whether AI defense works, but whether it can ever outrun AI offense.
QUICK HITS
👏 Generative AI may significantly reduce the number of animal experiments.
🤖 Amazon Staff Using AI Tools For ‘Trivial’ Tasks to Boost Usage Numbers and Please Bosses.
📱 A smarter, more proactive Android with Gemini Intelligence.
🛒 AI-referred shoppers convert better and spend more.
📈 Trending AI Tools
🎞️ Guideless - AI tool that automatically turns your clicks and workflows into step-by-step video guides*
🤖 Tana - Structure messy thoughts automatically into reusable systems
✍️ Lex - Collaborative documents, with powerful AI editing tools
🗣️ Superwhisper - AI-powered voice-to-text for macOS, Windows, and iOS
*partner link
