⚖️ Character AI sued for a teenager’s suicide

Also: Biden sets new AI safety guidelines for national security agencies

In partnership with

Welcome, AI enthusiasts

Character AI faces a serious lawsuit following the tragic death of a teenager. The lawsuit raises questions about AI's role in mental health and user safety. Meanwhile, President Biden has rolled out new guidelines for national security agencies, focusing on AI safety and keeping human control in critical decisions. And OpenAI has made waves by speeding up media generation 50-fold, setting the stage for near-instantaneous content creation.
Let’s dive in! 

In today’s insights:

  • Character AI sued for a teenager’s suicide

  • Biden sets new AI safety guidelines for national security agencies

  • OpenAI speeds up AI media creation

Read time: 4 minutes

🗞️ LATEST DEVELOPMENTS

Evolving AI: Character AI and Google are being sued over a teenager's death allegedly linked to interactions with the company's chatbots.

Key Points:

  • A 14-year-old's mother is suing Character AI, alleging the company lacked adequate safety measures.

  • The lawsuit alleges the chatbots were misleading and acted as therapists without any license to do so.

  • Character AI has now added more safety features, like reminders and alerts for users.

Details:

Character AI is facing a lawsuit from the mother of a teenager who died by suicide after using its chatbots, including ones modeled after popular fictional characters from Game of Thrones that also provided mental health advice. The lawsuit alleges the company marketed its chatbots as safe for young users but failed to adequately protect them, and accuses it of negligence and of offering mental health advice without proper qualifications. In response, Character AI has added new safety features, such as reminders during long sessions, filters that block sensitive content for minors, and a pop-up directing users to a suicide prevention hotline when needed.

Why It Matters:

This case underscores how important strong safety safeguards are for AI platforms, especially those used by young people. Companies must balance building engaging new tools with protecting their users, and the stakes rise when chatbots take on personal or emotional roles. As AI tools spread, responsible development is essential to prevent harm to vulnerable users.

Unique Investment Opportunity: Whiskey Casks

Here’s an investment opportunity you didn’t know you were missing - whiskey casks.

But where to start?

Vinovest differentiates its whiskey investing platform through strategic sourcing and market analysis. With Vinovest, you can invest in Scotch, American, and Irish whiskey casks, providing diverse and flexible exit options.

The Vinovest team targets high-growth markets and caters to a range of buyers, from collectors to brands using casks for cocktails. This approach not only enhances your liquidity but also increases your portfolio’s resilience against market fluctuations. Discover how Vinovest’s innovative strategy sets it apart from competitors.

Source: Reuters

Evolving AI: President Biden signs a memo calling for new rules to keep AI safe in national security.

Key Points:

  • Biden says humans must stay involved in AI decisions, especially for targeting and security.

  • AI is not allowed to decide on asylum, track people by religion or ethnicity, or label anyone as a terrorist without human review.

  • Intelligence agencies will focus on protecting AI technology from foreign threats.

Details:

President Biden has signed a memo directing national security agencies, including the Pentagon, to put safety rules in place for how they use AI. The memo requires humans to stay in control of AI systems used for tasks like weapons targeting, and it bars AI from deciding who gets asylum, tracking people based on their religion or ethnicity, or labeling someone a "known terrorist" without a human involved. Agencies are also told to protect AI technology and AI chips from theft or espionage by foreign countries. The U.S. AI Safety Institute will play a key role in this effort, checking AI tools before they are used so they can't be misused.

Why It Matters:

This memo underscores the importance of keeping humans in charge of consequential AI decisions, especially in national security. The goal is to ensure AI is not used in ways that could harm people without proper human oversight. For industries tied to defense and intelligence, these rules set ethical guardrails while still allowing the technology to advance.

Evolving AI: OpenAI researchers have built a model that generates media 50 times faster than before.

Key Points:

  • OpenAI's new sCM model can make images, videos, and audio 50 times faster than older models.

  • The model maintains high quality with just two sampling steps, making it far faster without losing detail.

  • Faster media creation could lead to AI that works in almost real-time.

Details:

OpenAI researchers have developed a new type of AI model called a continuous-time consistency model (sCM), which dramatically speeds up the creation of media like images, video, and audio. Older models, called diffusion models, take dozens or even hundreds of steps to finish an image, but sCM can do it in just two, producing a picture in a split second instead of several seconds. The largest version of the model has 1.5 billion parameters and, running on a single A100 GPU, generates a high-quality image in just 0.11 seconds. Tests show its images match the quality of older diffusion models while using far less time and compute.
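
To put the speedup in perspective, here is a minimal, purely illustrative Python sketch (not OpenAI's actual sCM code; the denoise function is a hypothetical stand-in for a trained network). Since generation cost is dominated by the number of network calls, dropping from roughly 100 diffusion steps to 2 consistency steps cuts the work by about 50x.

import numpy as np

def denoise(x, t):
    # Stand-in for one forward pass of a trained network; in a real
    # model this call is the expensive part (a large neural net).
    return x * (1 - t)  # toy placeholder, not a real denoiser

def diffusion_sample(shape, num_steps=100):
    # Diffusion-style sampling: start from noise and refine iteratively,
    # calling the network once per step (~num_steps expensive calls).
    x = np.random.randn(*shape)
    for i in range(num_steps):
        t = 1 - i / num_steps
        x = denoise(x, t)
    return x

def consistency_sample(shape):
    # Consistency-style sampling as described for sCM: map noise to an
    # image with just two network calls instead of dozens or hundreds.
    x = np.random.randn(*shape)
    x = denoise(x, 1.0)   # first call: coarse jump from pure noise
    x = denoise(x, 0.5)   # second call: refinement
    return x

# Both samplers return an output of the same shape; the two-step version
# simply makes ~50x fewer network calls than a 100-step diffusion loop.
img = consistency_sample((64, 64, 3))
print(img.shape)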

Why It Matters:

AI tools for media creation could now work in real time, delivering instant results. For video editing, live performances, or interactive art, being able to produce high-quality images or videos almost instantly could change how people work. The speedup also means more people could use this AI without needing extremely powerful computers.

Which image is real?

🎯SNAPSHOTS

Direct links to relevant AI articles.

🤗 Hugging Face launches new software that helps companies automate the technical implementation of AI models into working applications

👩‍🎓 Student advises peers against using AI for school assignments

🤝 Anchor co-founders reunite to build AI educational startup Oboe

🕵️ UK watchdog probes Alphabet's deal with Anthropic

📝 Google DeepMind has added text recognition to its AI watermarking technology

🔚 Policy expert Miles Brundage parts ways with OpenAI after years

📈 Trending AI Tools

  • 🛠️ Netjet - No-code website builder (link)

  • 🔗 Lume - Automate data mappings with AI (link)

  • 🧑‍💼 ClearwordAI - AI meeting assistant (link)

  • 🧮 Equals - An AI assistant for your spreadsheet (link)

  • 📜 Twip AI - YouTube script generator (link)
