
Welcome, AI enthusiasts
Sam Altman has stepped into the growing debate around AI’s environmental impact, pushing back on viral claims about energy and water usage. Speaking during a recent AI summit in India, the OpenAI CEO argued that many comparisons circulating online misunderstand how AI systems actually consume resources. His comments, especially a comparison between training AI models and educating humans, quickly sparked discussion across the tech world. Let’s dive in!
In today’s insights:
Sam Altman Sparks Debate Over AI Energy Use
Fake faces generated by AI are now “too good to be true”
Google's Gemini 3.1 Pro is here, and it just doubled its reasoning score
Read time: 4 minutes
LATEST DEVELOPMENTS
SAM ALTMAN
⚡ Sam Altman Sparks Debate Over AI Energy Use
Evolving AI: Sam Altman pushes back on claims about AI energy and water use, arguing many online comparisons misunderstand how AI systems actually consume resources.
Key Points:
Altman called viral claims about ChatGPT’s water usage “completely untrue.”
He said total AI energy use is a valid concern as adoption grows fast.
He compared AI training energy to the decades required to educate a human.
Details:
Speaking at an event hosted by The Indian Express during an AI summit in India, OpenAI CEO Sam Altman addressed growing criticism around AI’s environmental footprint. He dismissed viral claims that each ChatGPT query uses large amounts of water, saying modern data centers no longer rely on the cooling methods behind those estimates. Altman agreed overall energy use matters as AI adoption accelerates and said the focus should shift toward cleaner power sources like nuclear, wind, and solar. He also argued that energy comparisons are often framed unfairly, noting that human intelligence requires decades of food, education, and infrastructure before producing expertise. The comments quickly spread online, with supporters calling it a discussion about efficiency per task and critics saying it downplays the rising electricity demand from expanding data centers.
Why It Matters:
Altman’s “training a human” line blew up because it dodges the part people can’t ignore anymore: AI is turning data centers into a real power story, not a nerd argument about one prompt. The IEA now expects global data center electricity use to roughly double by 2030, with AI a key driver, and US government analysis says data centers could rise to a big share of US electricity use by 2028. At the same time, the EU is pushing mandatory reporting on data center energy performance, which is basically a sign that “trust us” is no longer enough. When a single quote can spark this much heat, it’s because the next phase of AI isn’t just better models, it’s who gets the grid access, who pays for the upgrades, and how fast clean generation can keep up.
Snippets that scale your voice
Save and insert standard intros, calendar links, and bios by voice so recurring emails and updates take seconds. Wispr Flow keeps your tone and speeds execution. Try Wispr Flow for founders.
AI RESEARCH
🕵️ Fake Faces Generated by AI Are Now “Too Good to Be True”
Evolving AI: New research shows AI-generated faces now fool most people, raising concerns for identity verification and online trust.
Key Points:
AI-generated faces are now as convincing as real photos.
Even “super recognizers” struggle to reliably detect fakes.
People strongly overestimate their ability to spot AI images.
Details:
A study by researchers at UNSW Sydney and the Australian National University, published in the British Journal of Psychology, tested 125 participants on their ability to distinguish real faces from AI-generated ones. Results showed average participants performed only slightly above chance, while expert “super recognizers” did only marginally better. Modern AI faces no longer contain obvious glitches. Instead, they appear unusually average, highly symmetrical, and statistically typical. Researchers warn that growing realism combined with human overconfidence increases risks around fraud, scams, and identity verification systems. The team is now studying individuals who may act as “super AI face detectors” to help improve future safeguards.
Why It Matters:
When AI faces look more “real” than real, the whole internet gets easier to game. Think fake LinkedIn profiles, catfishing, scam accounts, and even KYC checks that lean too heavily on selfies. What’s scary here is the combo the researchers found: most people are basically guessing, yet still feel confident they’re good at spotting fakes, and even super recognizers only get a small boost. So trust is going to shift from “I can tell” to boring but necessary stuff like liveness checks, stronger verification, and tracking where an image came from, because our eyes are no longer a reliable filter.
GOOGLE GEMINI
🚀 Google’s Gemini 3.1 Pro Doubles Its Reasoning Score
Evolving AI: Google released Gemini 3.1 Pro, reporting a major jump in reasoning performance across new logic benchmarks.
Key Points:
Gemini 3.1 Pro doubles reasoning performance compared to Gemini 3 Pro in testing.
New scores place it among the strongest reasoning models, though rivals still lead in some rankings.
Benchmarks show progress, but real-world performance remains uncertain.
Details:
Google introduced Gemini 3.1 Pro as the next upgrade to its flagship AI model, focused on stronger reasoning and problem solving. The model scored 77.1% on the ARC-AGI-2 benchmark, more than doubling Gemini 3 Pro’s reasoning performance on unfamiliar logic tasks. It builds on the recent Gemini 3 Deep Think update, a research mode designed for complex science and engineering problems. On Humanity’s Last Exam, Gemini 3.1 Pro reached 44.4%, improving on Gemini 3’s earlier record, though Anthropic’s Claude Opus 4.6 still leads some broader capability and safety rankings.
Why It Matters:
Google is pushing Gemini 3.1 Pro into the places people actually work like the Gemini app, NotebookLM, and developer stacks, and it’s showing big jumps on harder reasoning tests like ARC-AGI-2. But the bigger trend is how fast the top spot flips now: Deep Think can score higher by spending more time thinking, and competitors like Claude are still leading on some broader capability and safety-style rankings. So for most teams, the question is less “who won this week” and more “which model is reliable for my daily workflow, at my budget, with the right tradeoffs.”
The Lithium Boom Is Heating Up
Lithium stock prices grew 2X+ from June to January. $ALB climbed 227%. $LAC hit 151%. $SQM, 159%. But the real winner may be a private stock, EnergyX. Their tech can recover 3X more lithium than traditional methods, leading General Motors to invest. Now they’re preparing to unlock up to 9.8M tons of lithium. Buy private EnergyX shares alongside 40k+ people before EnergyX’s share price increases after 2/26.
This is a paid advertisement for EnergyX Regulation A offering. Please read the offering circular at invest.energyx.com. Under Regulation A, a company may change its share price by up to 20% without requalifying the offering with the Securities and Exchange Commission.
QUICK HITS
⚖️ Lawyer says Google shut down his Gmail, Voice and Photos after NotebookLM upload.
🔬 AI reveals unexpected new physics in the fourth state of matter.
🎮 Microsoft’s new gaming CEO vows not to flood the ecosystem with ‘endless AI slop’.
🚔 Met police using AI tools supplied by Palantir to flag officer misconduct.
⛪ Pope Leo XIV has urged priests not to use artificial intelligence to write their homilies or to seek “likes” on social media platforms like TikTok.
⚠️ Google VP warns that two types of AI startups may not survive.
🛡️ Anthropic launches Claude Code Security for AI-powered vulnerability scanning.
📈 Trending AI Tools
🎬 Descript - Edit audio and video like text with AI*
🚀 Replit - Build apps and sites with AI.
🎞️ Filmora AI - Edit video like never before with the magic of AI features.
🎙️ Krisp - AI meeting assistant.
*partner link