🤖 Why Google's AI Guru believes robots could outsmart us soon

Also: America's big AI Plan


Welcome, AI enthusiasts

Happy Wednesday. The AI landscape is evolving at a rapid pace! Google's AI guru says machines might soon become smarter than we are, the U.S. has unveiled a new plan for AI, and news publishers are squaring off against AI companies. Lastly, have you ever wondered why big companies seem so eager to scare us about AI?

Let's dive in!

In today’s insights:

  • Why Google's AI Guru believes robots could outsmart us soon

  • America's big AI Plan: What it means

  • Is AI stealing the news?

  • Why big companies scare us about AI

Read time: 4 minutes

🗞️ LATEST DEVELOPMENTS

Source: Getty Images

Evolving AI: Shane Legg, a co-founder of Google DeepMind, says there is roughly a 50% chance we will have AI as smart as people by 2028.

Key Points:

  • Shane Legg stands by his long-held estimate: a 50-50 chance of human-level AI by 2028.

  • Two big 'buts': how do we define and measure 'smart', and can we scale these systems up without serious trouble?

  • What happens next matters enormously, but no one can say for sure.

Details:

Did you mark 2028 on your calendar? You might want to. Shane Legg, a co-founder of Google DeepMind, believes that year could be special: he puts the odds at 50-50 that we will have AI systems that think as well as people do by then. But there are caveats. First, how do we define what 'smart' really means? And second, can we scale these systems up without making them wasteful and unreliable?

Why It Matters:

Shane Legg's prediction is far from certain, but it's worth taking seriously. If he's right, life could change in profound ways: we could solve many hard problems, but also face entirely new risks. Think of it as a sealed box; you don't know what's inside until you open it, and it could be very good or very bad. That's how we should think about 2028 and human-level AI. Are we ready for what might happen?

Source: DigWatch

Evolving AI: President Joe Biden has a sweeping new plan for artificial intelligence (AI): rules designed to keep people safe and protect their privacy.

Key Points:

  • New rules to make AI safe and reliable.

  • Companies must share safety test results with the government.

  • The plan protects your privacy and rights.

Details:

President Biden's plan touches many parts of life, from national security to your personal data. Companies developing powerful AI systems must share their safety test results with the government. The plan also addresses preventing dangerous uses of AI in the sciences. Worried about AI-generated fake videos? The plan aims to establish ways to tell real content from synthetic.

The Relevance:

This is not just an American story. The U.S. is setting an example for the rest of the world on how to govern AI, weighing innovation against keeping people safe. Will other countries follow these ideas? That's a big question, and only time will tell.

Source: Adobe Stock

Evolving AI: AI companies and news publishers are fighting over who owns written articles. Is AI helping or hurting journalism?

Key Points:

  • News publishers say AI companies use their articles without permission.

  • They also say AI is siphoning away readers and revenue.

  • Companies like OpenAI and Google are already facing lawsuits over it.

Details:

The News Media Alliance, a trade group representing news publishers, says AI companies are taking articles and using them improperly. In a lengthy report, it argues that AI systems are trained on huge volumes of news content and then produce output closely resembling the original articles. Publishers spend money and time producing the news, the group says, while AI companies reap the benefits in users and revenue. OpenAI and Google have even been taken to court over this.

Our thoughts:

Why is this important? This fight is about more than news companies and AI: it raises the question of who owns written work and how it can be used. The publishers have a point. But what if AI could make news better and more accurate? Who wins then? It's worth thinking about as this story unfolds.

Source: Steve Jennings / Stringer/Getty Images

Evolving AI: Andrew Ng, a co-founder of Google Brain, says big companies are deliberately stoking fear of AI. Why? They want less competition.

Key Points:

  • Ng says big companies use fear to keep smaller players out of AI.

  • Industry leaders have compared AI's risks to nuclear war, which could prompt tighter regulation.

  • Europe may be the first to put new rules on AI.

Details:

Is the fear of AI taking over the world a story big companies tell us? Andrew Ng, a co-founder of Google Brain, thinks so. He recently said these companies use scare tactics because they want strict AI laws that make it hard for smaller players to compete. Some AI leaders have even compared AI's risks to nuclear war; Ng suspects such claims could be a ploy to shape legislation.

Why It Matters:

We talk a lot about how AI could be risky. But what if the real risk is stifling new ideas in AI? If Ng is right, big companies may end up controlling AI's future by making us afraid of it. So do we need strict laws to keep us safe, or will those laws simply entrench the incumbents? We should watch this closely as countries draft new AI rules.

🎯 SNAPSHOTS

Direct links to relevant AI articles.

🛡️ Shield AI lands funding: $200M to scale AI pilot

🥊 Artists vs. AI Art Generators: A Copyright Infringement Case

🆕 A new world of AI understanding: Asking questions is the key

📈 Trending AI Tools

  • 📊 Flexberry: AI assistant for business analysts (link)

  • 📜 Super AI: Generative AI document processing (link)

  • 🧾 Receipt AI: Receipt management with AI and text messages (link)

  • 🎙️ Solidly: Automatically transcribe and summarize your meetings (link)

  • 📚 StudyCards: AI-powered flashcard generation for learning (link)

  • 🧩 Magic ToDo: Break down tasks into sub-tasks with AI (link)

  • 🎨 Slatebox: Create editable visualizations using natural language prompts and mind-maps (link)

  • 🎧 Dexa: Bot which allows you to chat with your favourite podcasts (link)
