
Welcome, AI enthusiasts
Anthropic says three Chinese labs used thousands of fake accounts to secretly tap into Claude, racking up millions of prompt exchanges in what it calls a massive distillation attack. The fallout could reshape how AI companies protect their models. Let’s dive in!
In today’s insights:
Anthropic accuses Chinese labs of AI fraud
Google adds AI agents to Opal
Anthropic introduces plugins for Claude
Read time: 4 minutes
LATEST DEVELOPMENTS
ANTHROPIC
🥊 Anthropic accuses Chinese labs of AI fraud
Evolving AI: Anthropic accuses Chinese labs of model distillation fraud.
Key Points:
Anthropic says DeepSeek, Moonshot, and MiniMax ran 24,000 fake accounts to extract Claude’s capabilities.
Over 16 million exchanges were allegedly used to replicate model behavior at lower cost.
The company frames the issue as national security, urging industry and government coordination.
Details:
Anthropic claims three Chinese AI labs conducted large-scale distillation attacks against Claude, using 24,000 fraudulent accounts to generate over 16 million prompt exchanges. Distillation is a standard method for compressing models, yet Anthropic argues competitors used it to replicate capabilities without building them from scratch. OpenAI previously raised similar accusations against DeepSeek. The debate exposes tension between open web training practices and proprietary model protection.
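For context on the technique itself: in standard knowledge distillation, a smaller "student" model is trained to match a "teacher" model's output distribution rather than raw labels. A minimal sketch of the classic soft-label loss, purely illustrative and not Anthropic's or any accused lab's actual pipeline:

```python
import math

def softmax(logits, temperature=1.0):
    """Turn raw logits into a probability distribution, softened by temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the student's.

    Minimizing this trains the student to mimic the teacher's output behavior
    without ever seeing the teacher's weights or training data, which is why
    API responses alone are enough to fuel it at scale.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that already matches the teacher incurs zero loss;
# a mismatched one incurs a positive penalty to train against.
teacher = [2.0, 1.0, 0.1]
print(distillation_loss(teacher, teacher))          # 0.0
print(distillation_loss(teacher, [0.1, 1.0, 2.0]))  # positive: mismatch
```

The higher the temperature, the softer the teacher's distribution, exposing more of its "dark knowledge" about near-miss answers, which is exactly the behavioral signal 16 million prompt exchanges could capture.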
Why It Matters:
If Anthropic is right, “model moats” just got a lot thinner: you can pour billions into training, only for a competitor to copy the behavior by hammering your API with scripted prompts at scale. That pushes every lab toward stricter access controls fast: tougher account verification, sharper rate limits, and more aggressive bot and behavior fingerprinting, all of which can add friction for normal developers too. For teams building products on top of Claude or similar models, this is a heads-up to treat LLM access like any other critical dependency: expect policy shifts with little notice, and have a fallback plan if a provider clamps down after an abuse wave.
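That fallback plan can be as simple as an ordered preference list with graceful degradation. A hypothetical sketch, where the provider names and `call_provider` wrapper are illustrative stand-ins, not a real SDK:

```python
# Hypothetical sketch: provider names and call_provider() are illustrative
# stand-ins for whatever vendor SDKs your product actually wraps.
class AllProvidersFailed(Exception):
    pass

def complete_with_fallback(prompt, providers, call_provider):
    """Try each provider in preference order; fall through on failures.

    `providers` is an ordered list, e.g. ["primary-model", "backup-model"];
    `call_provider(name, prompt)` should raise on rate limits, revoked keys,
    or policy blocks so the next provider gets a chance.
    """
    errors = {}
    for name in providers:
        try:
            return call_provider(name, prompt)
        except Exception as exc:  # in real code, catch vendor-specific errors
            errors[name] = exc
    raise AllProvidersFailed(errors)

# Usage: a primary that suddenly clamps down is skipped transparently.
def fake_call(name, prompt):
    if name == "primary-model":
        raise RuntimeError("403: access suspended pending review")
    return f"[{name}] answer to: {prompt}"

print(complete_with_fallback("summarize Q3", ["primary-model", "backup-model"], fake_call))
```

In production you would also want per-provider timeouts, logging of which fallback fired, and prompt templates tested against every model on the list, since outputs rarely transfer one-to-one.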
TOGETHER WITH DATADOG
🔐 AI Security Best Practices
Evolving AI: Your Guide to Building Secure AI Applications.
As AI adoption accelerates, new attack surfaces are emerging across infrastructure, supply chains, and model interfaces. Datadog’s AI Security Best Practices Guide breaks down how to secure:
The underlying components that host and run AI applications
The software and data that an AI application uses to operate
The entry points and business logic that enable a user to interact with an AI application
This guide provides actionable strategies to help teams strengthen AI security without slowing innovation.
GOOGLE
🤖 Google adds AI agents to Opal
Evolving AI: Google expands Opal with autonomous app-building agents.
Key Points:
Google introduces a new agent inside Opal that builds mini apps from text prompts and plans tasks on its own.
The agent runs on Gemini 3 Flash and selects tools like Google Sheets to store memory across sessions.
Opal expands globally and faces growing competition from Lovable, Replit, Wabi, and Emergent.
Details:
Google has added an autonomous agent to Opal, its vibe-coding app that lets users build mini web apps without writing code. Powered by Gemini 3 Flash, the agent can interpret text prompts, create step-by-step plans, and choose the right tools to complete tasks. It can connect to services like Google Sheets to store information across sessions, such as maintaining a shopping list. The agent is interactive, asking users for missing details and guiding next steps. Since launching in the U.S. in July 2025, Opal has expanded to more than a dozen countries and is now integrated into the Gemini web app through a visual editor.
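Google hasn't published Opal's internals, but the plan-act-remember pattern described above can be sketched generically. Everything here is hypothetical: a JSON file stands in for an external store like a spreadsheet, and the keyword-trigger "planner" stands in for Gemini's actual tool selection:

```python
import json
from pathlib import Path

# Hypothetical sketch of an agent loop: pick a tool, run it, persist state.
MEMORY_FILE = Path("agent_memory.json")

def load_memory():
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}

def save_memory(memory):
    MEMORY_FILE.write_text(json.dumps(memory))

def run_agent(task, tools):
    """Route the task to the first tool whose trigger word appears in it.

    A real agent would have an LLM plan the steps; matching the memory-
    across-sessions behavior only needs state written somewhere durable.
    """
    memory = load_memory()
    for trigger, tool in tools.items():
        if trigger in task:
            result = tool(task, memory)
            save_memory(memory)
            return result
    return "no tool matched; ask the user for more detail"

def shopping_list_tool(task, memory):
    item = task.split("add ")[-1]
    memory.setdefault("shopping_list", []).append(item)
    return f"list is now: {memory['shopping_list']}"

print(run_agent("add milk", {"add": shopping_list_tool}))
```

Run it twice and the list survives between invocations, which is the whole point: the agent's memory lives in a tool the user already trusts, not inside the model.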
Why It Matters:
When Opal can turn a plain-text prompt into a mini app that picks its own tools and keeps state in something familiar like Google Sheets, the day-to-day payoff is fast: fewer copy-paste routines, fewer “where did I save that list” moments, and quicker one-person automations for things like shopping trackers, content checklists, lead triage, and internal request forms. It’s a clear push to make agent-style workflows feel normal inside the apps people already open, which raises the bar for teams building in tools like Replit and Lovable.
ANTHROPIC
🚀 Anthropic introduces plugins for Claude
Evolving AI: Enterprises gain private plugin marketplaces and cross-app AI workflows.
Key Points:
In Claude, companies can now build private plugin marketplaces, giving admins tighter control over plugins, connectors, and skills across teams.
New connectors span Google Workspace, Docusign, financial data providers and more.
Claude can complete multi-step projects across Excel and PowerPoint, carrying context between apps.
Details:
Enterprises can now create internal plugin marketplaces that centralize how teams access and manage AI workflows. Anthropic’s new Customize hub unifies plugins, skills, and connectors, with guided setup and stronger admin controls, including private repositories and per-user provisioning. Structured slash commands simplify execution for employees. New connectors link Claude with tools like Google Workspace, Docusign, and major financial platforms. Early cross-app support lets Claude move from Excel analysis to PowerPoint presentations in a single flow.
Why It Matters:
Private plugin marketplaces mean Claude can be rolled out like normal enterprise software: IT picks approved tools, teams grab what they need, and access stays controlled through connectors to systems like Google Workspace or Docusign. The big trend is “AI agents inside your stack,” not another chat tab, and the new Customize hub plus marketplace files make that rollout repeatable across departments. OpenTelemetry tracking and structured slash-command forms make it easier to see spend, audit tool calls, and run workflows without prompt gymnastics. Pair that with Claude moving work from Excel to PowerPoint in one run, and finance, ops, and HR teams get fewer copy-paste handoffs and faster turnaround on real deliverables.
Better prompts. Better AI output.
AI gets smarter when your input is complete. Wispr Flow helps you think out loud and capture full context by voice, then turns that speech into a clean, structured prompt you can paste into ChatGPT, Claude, or any assistant. No more chopping up thoughts into typed paragraphs. Preserve constraints, examples, edge cases, and tone by speaking them once. The result is faster iteration, more precise outputs, and less time re-prompting. Try Wispr Flow for AI or see a 30-second demo.
QUICK HITS
🎮 New Microsoft gaming chief has “no tolerance for bad AI”.
💰 Amazon to spend $12 billion in Louisiana on AI data centers.
👏 Council on AI ethics formed to balance innovation with human dignity.
🌊 Google claims it's building data centers that barely use any water.
📈 Trending AI Tools
🗣️ Wispr Flow - Voice-to-text AI that turns speech into clear, polished writing in every app*
🤖 Opal - Google’s AI tool that turns simple text prompts into mini web apps
🩺 Lotus Health AI - Your AI doctor, powered by real doctors & leading medical evidence
🎵 Riffusion - Makes music from text prompts using AI-driven audio models
*partner link
