In partnership with

Welcome, AI enthusiasts
The whale is back. DeepSeek just shared a new way to train large AI models, and analysts say it challenges how these systems have been built for nearly a decade. Let's dive in!
In today's insights:
DeepSeekās new AI training breakthrough
The next phase of AI adoption
OpenAI ramps up audio AI efforts ahead of device launch
Read time: 4 minutes
LATEST DEVELOPMENTS
DEEPSEEK
🐳 DeepSeek's new AI training breakthrough
Evolving AI: China's DeepSeek published a new AI training method that analysts say could change how large models scale.
Key Points:
DeepSeek introduces a new training method called mHC.
Analysts call it a breakthrough for stable scaling.
The research may point to DeepSeek's next model.
Details:
DeepSeek tackled a long-standing limit in how modern AI models are built. Since 2015, almost all large language models pass information forward through a single main pathway between layers. That design is stable, but it also limits how much information the model can carry as it grows. Researchers have tried widening that pathway so more information can flow, but doing that usually causes training to collapse. Models become unstable, forget earlier layers, or require huge amounts of memory and compute. DeepSeek's new method, called mHC, shows how to widen that internal pathway without breaking training. It adds extra internal connections but keeps them mathematically constrained, so the model can share more information while still behaving predictably. In simple terms, it gives the model a wider "brain highway" without losing control.
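For intuition, here is a minimal sketch (in PyTorch) of what "widen the pathway, but keep the new connections constrained" can look like: several parallel residual streams mixed by a row-normalized matrix. The class name, the stream count, and the softmax constraint are our illustrative assumptions; the article does not describe mHC's actual math, so treat this as a toy stand-in, not DeepSeek's method.

```python
import torch
import torch.nn as nn

class WidenedResidualBlock(nn.Module):
    """Toy stand-in: n parallel residual streams instead of one,
    mixed by a constrained matrix so activations stay bounded.
    (Hypothetical illustration, not DeepSeek's mHC.)"""

    def __init__(self, dim: int, n_streams: int = 4):
        super().__init__()
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )
        # Learnable weights for mixing information across streams.
        self.mix = nn.Parameter(torch.zeros(n_streams, n_streams))

    def forward(self, streams: torch.Tensor) -> torch.Tensor:
        # streams: (n_streams, batch, seq, dim)
        # Row-softmax makes every output stream a convex combination
        # of the inputs, so stacking many layers cannot blow up norms.
        w = torch.softmax(self.mix, dim=-1)
        mixed = torch.einsum("ij,jbsd->ibsd", w, streams)
        # Run the layer on one stream and add it back residually.
        update = self.ffn(mixed[0])
        return mixed + update.unsqueeze(0)
```

The point of the sketch: the extra streams are the "wider highway," and the convex-combination constraint is one simple way to keep the widened pathway from destabilizing training.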
Why It Matters:
What's interesting here is the shift from "who has the most GPUs" to "who trains smarter." DeepSeek is basically saying scaling does not have to mean runaway cost or unstable training, and analysts think this kind of architecture work could spread fast once other labs copy the idea. Add chip limits and export controls into the mix and you get even more pressure to win through efficiency, not brute force. If DeepSeek folds mHC into its next flagship model, it would be another sign that China's top labs are confident enough to publish key methods while still racing ahead on the model itself.
TOGETHER WITH ELEVENLABS
🎤 Great AI agents start with great prompts
Evolving AI: Learn how to build prompts for AI agents.
This new ElevenLabs guide walks developers through the frameworks, structure, and evaluation methods that make conversational agents reliable, secure, and context-aware.
Learn how to build and iterate on prompts that deliver real-world results, fast.
Download The Prompt Engineering Guide and start building smarter voice AI systems today.
MICROSOFT CEO SATYA NADELLA
✨ The next phase of AI adoption
Evolving AI: Satya Nadella says 2026 will mark a shift where AI is judged less by demos and more by what it actually delivers in the real world.
Key Points:
AI is moving from discovery to everyday use, with real impact lagging behind raw capability.
The next gains will come from systems and orchestration, not bigger standalone models.
AI should support human work and earn trust by solving real problems.
Details:
In a new outlook piece, Satya Nadella argues that AI is entering a more grounded phase. Model capabilities have grown fast, but turning them into daily value has been slower. He calls this a "model overhang." The focus now is diffusion: getting AI into real workflows. Nadella also says progress will come from combining models, tools, memory, and agents into full systems. He frames AI as scaffolding for people, helping teams work better rather than replacing them. Success, he writes, depends on whether these systems solve real problems and earn broad acceptance.
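To make "full systems, not standalone models" concrete, here is a bare-bones orchestration loop: a model that can either call a tool or answer, with shared memory threading through. Every name here (orchestrate, the event shapes, the tool registry) is a hypothetical sketch, not Microsoft's or anyone's actual API.

```python
# Hypothetical sketch of model + tools + memory wired into one system.
# "model" is any callable that reads the memory and returns either a
# tool call or a final answer; nothing here is a real library API.

def orchestrate(user_request: str, model, tools: dict, memory: list) -> str:
    memory.append({"role": "user", "content": user_request})
    while True:
        step = model(memory)                  # model proposes the next step
        if step["type"] == "tool_call":       # ...maybe a tool invocation
            result = tools[step["name"]](**step["args"])
            memory.append({"role": "tool", "content": result})
        else:                                 # ...or a final answer
            memory.append({"role": "assistant", "content": step["content"]})
            return step["content"]
```

The orchestration work Nadella points to lives in everything around this loop: which tools are exposed, what lands in memory, and the permissions and guardrails on each call.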
Why It Matters:
AI is hitting a new bottleneck: not intelligence, but rollout. Nadella's "model overhang" idea captures what a lot of teams are feeling right now: models can do plenty, but turning that into reliable work outputs takes orchestration, data access, permissions, and guardrails. That shift also lines up with the pressure Microsoft and others are facing to prove real adoption, especially when tools like Copilot still struggle to become daily habits inside orgs. If 2024 and 2025 were about model leaps, 2026 looks more like a systems year where the winners are the ones who make AI boring, dependable, and actually used.
OPENAI
🔊 OpenAI ramps up audio AI efforts ahead of device launch
Evolving AI: OpenAI is betting that the next big interface is not a screen but sound. Voice is moving to the center of how we use technology.
Key Points:
OpenAI is rebuilding its audio stack for a future device.
Big tech is shifting from screens to voice-first use.
New wearables aim to make audio always available.
Details:
OpenAI has spent the last two months quietly reshuffling teams to rebuild its audio models. This is not just about nicer voices in ChatGPT. According to The Information, it is groundwork for an audio-first personal device planned for about a year from now. The focus is on making voice feel natural, with interruptions, overlap, and timing that resemble real conversation. You can see the same thinking across Silicon Valley. Meta is using advanced microphones in Ray-Ban smart glasses to help people hear in noisy places. Google is turning search results into spoken summaries. Tesla is bringing a conversational assistant powered by xAI's Grok into its cars. Some screenless bets failed, like the Humane AI Pin, but the idea keeps coming back. Audio is slowly replacing the screen as the main way we interact with tech.
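The hard part called out above (interruptions, overlap, timing) has a name in voice engineering: barge-in. Here is a tiny sketch of the idea under stated assumptions: the event names and the player interface (is_playing, stop, play) are invented for illustration, and real audio stacks handle this with streaming models, not a toy loop.

```python
import queue

def conversation_loop(events: "queue.Queue[dict]", player) -> None:
    """Barge-in sketch: stop speaking the instant the user starts.
    `player` is assumed to expose is_playing(), stop(), and play(text);
    the event dicts and their fields are hypothetical."""
    while True:
        event = events.get()
        if event["type"] == "speech_start" and player.is_playing():
            player.stop()  # the user interrupted: yield the floor
        elif event["type"] == "utterance_end":
            reply = "placeholder reply to: " + event["text"]
            player.play(reply)  # speak only while the user is silent
```

Making that exchange feel natural, rather than walkie-talkie turn-taking, is exactly the timing problem OpenAI is reportedly rebuilding its audio stack around.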
Why It Matters:
When OpenAI rebuilds audio for a screenless device, it is a sign that the next interface fight is moving from what you look at to what you hear. Real conversation is hard: interruptions, talking over each other, timing, tone. If OpenAI can make that feel normal, voice assistants stop being a gimmick and start living in your car, your glasses, your home, basically everywhere. Meta is already shipping "Conversation Focus" on its AI glasses, and Google is testing Audio Overviews in Search, so the "always listening" future is showing up fast. That brings a real tradeoff too: convenience goes up, privacy pressure goes up with it.
QUICK HITS
🧱 dbt Labs published a fresh O'Reilly report on how to build AI apps on analytics stacks that are governed, discoverable, and ready for production use.
⚡ Elon Musk said xAI has secured a new building for MACROHARD, its third massive data center. The move pushes xAI's total training power close to 2 gigawatts.
📈 Zhipu AI kicked off a $560M share sale in Hong Kong at a reported $6.6B valuation, right after rolling out its new GLM-4.7 model and ahead of a planned IPO.
📱 Alibaba rolled out MAI-UI, an AI agent that can operate smartphone apps on its own and handle multi-step tasks directly on mobile devices.
🎮 Tencent open-sourced Hunyuan Motion 1.0, a 1B-parameter model that turns text prompts into 3D character animations for games and animation workflows.
🎥 Adobe confirmed a partnership with Runway, adding its video models, including Gen-4.5, to the Adobe Firefly AI studio.
🚀 Trending AI Tools
🗣️ ElevenLabs – Free text-to-speech generator used by many*
🗺️ Mapify – Turn notes, data, or audio into smart mind maps with AI chat
📩 Marblism – Automate tasks like writing, support, and emails with one AI-powered tool
🔐 Nebulock – Autonomous AI threat-hunting platform that scans enterprise systems and responds in real time
🎯 Job Copilot – AI job-seeking agent
*partner link