🚀 GPT-4 is BACK on top!

Also: Exclusive interview with Yann LeCun


Welcome, AI enthusiasts

OpenAI has once again taken the lead on the LLM leaderboard with significant improvements to GPT-4, solidifying its position as the top chatbot. Meanwhile, a new AI-powered music app called Udio has launched, letting users create up to 600 songs per month for free. Meta also gave us the exciting opportunity to interview its Chief AI Scientist, Yann LeCun, whose pioneering contributions to deep learning and neural networks have been fundamental to modern AI. Let's dive in!

In today's insights:

  • OpenAI Improves GPT-4 and is Back on Top

  • Former Google DeepMind researchers launch AI-powered music creation app Udio

  • Exclusive Interview with AI Godfather Yann LeCun

Read time: 7 minutes

🗞️ LATEST DEVELOPMENTS

Source: Ideogram prompted by Evolving AI

Evolving AI: OpenAI updates ChatGPT to be better, simpler and faster and is back on top of the LLM Leaderboard.

Key Points:

  • Enhanced GPT-4 Turbo version for premium users.

  • Direct and concise language in responses.

  • Updated knowledge base and improved capabilities in writing, math, logic, and coding.

Details:

OpenAI has rolled out a new model named "gpt-4-turbo-2024-04-09," exclusively for its premium users. The updated ChatGPT aims to answer with more clarity and less fluff, making conversations quicker and easier to follow. Trained on information up to December 2023, it is also better at writing, math, logical reasoning, and coding, helping it provide more accurate and relevant answers.

Why This Matters:

Anthropic's Claude 3 Opus AI surpassed OpenAI's GPT-4 on the Chatbot Arena leaderboard upon its release a few weeks ago. This marked the first time GPT-4 had been dethroned since its debut over a year ago. In response, OpenAI has decided to enhance GPT-4 comprehensively to regain the top position.

Have an AI Idea and need help building it?

When you know AI should be part of your business but aren't sure how to implement your concept, talk to AE Studio.

Elite software creators collaborate with you to turn any AI/ML idea into a reality – from NLP and custom chatbots to automated reports and beyond.

AE Studio has worked with early stage startups and Fortune 500 companies, and we're ready to partner with your team. Computer vision, blockchain, e2e product development, you name it, we want to hear about it.

Evolving AI: A new app called Udio, created by ex-Google DeepMind experts, makes it easy for anyone to create their own music quickly.

Key Points:

  • Backed by big names like will.i.am and Common.

  • Generates fully mastered tracks in under a minute.

  • Features robust copyright protection.

Details:

Udio is an exciting new tool that lets you make professional music fast, using just a few simple inputs like genre or lyrics. The company, founded by former Google DeepMind researchers, sees its product as a tool to enhance human creativity. The free beta version allows users to generate up to 600 songs per month. Endorsed by tech and music industry leaders, this app provides quick, polished music tracks tailored to your preferences. It also offers tools to make further adjustments, ensuring your music is both unique and high quality. Udio opens up the world of music creation to everyone, whether you're just starting out or you're already making music.

Our Thoughts:

AI music generators such as Udio and Suno.ai raise the question of whether they represent competition for human musicians or are more likely to establish themselves as tools for background music and sound samples. Either way, their capabilities are evolving from gadgets to serious music production tools.

EVOLVING AI EXCLUSIVE - Q&A WITH YANN LECUN

Source: Meta

We're excited to bring you an interview with Yann LeCun, Chief AI Scientist at Meta and one of the pioneering figures in artificial intelligence. His contributions to deep learning and neural networks have been foundational to modern AI.

Thanks to the enthusiastic response from our followers on Instagram, we received over 350 questions. We've selected the most compelling ones to feature.

Enjoy this exclusive read!

Note: We've made Yann's answers more concise to fit the newsletter. You can read the Q&A with the full answers here.

Q: If you could just introduce yourself, that would be great.
A: My name is Yann LeCun. I'm the Chief AI Scientist at Meta and I'm also a professor at New York University.

Q: How long have you worked at Meta?
A: A little over 10 years.

Q: And what's your favourite thing about working at Meta?
A: Openness.

Q: Given the rapid advancements in AI, how do you envision its role in shaping the future of our society and the global community?
A: In the short term, predicting impacts is straightforward, but the long-term effects of AI will be extensive and somewhat unpredictable. My vision is that AI assistants will mediate nearly all our interactions, both digital and personal. We'll communicate with them via smart glasses and smartphones, and they'll constantly assist us, enhancing our intelligence. It's like having a team of virtual experts at our disposal, making us all managers to a degree. These AI assistants will handle details and execution, while we set their objectives. Globally, at the level of humanity, we will be smarter because we're going to be augmented by those AI systems. The transformation of society might be similar to what happened after the invention of the printing press: in the 15th century, people learned to read and got smarter; we had the Enlightenment; the feudal system was displaced by democracy – there could be a similar trend here.

Q: That links to one of the questions we've got: do you see AI integration in daily tasks enhancing or hindering human creativity? I guess taking away the admin from our lives would enhance creativity. But what's your view on that?
A: It will certainly enhance creativity and allow people to be more creative. For example, people who couldn't produce music before because they couldn't play an instrument now can, using computers and digital audio workstations. Similarly with digital art: AI can produce art in ways that get around the technicality of it.

Q: What are some of the projects that you're working on that you're most excited about at the company?
A: I work on long-term projects focused on the next generation of AI systems. Over the coming years, we'll see advancements in systems that can answer questions and assist with daily tasks through AI assistants, and create content like text, images, audio, and videos. We expect a major shift in AI architecture soon, enabling capabilities like understanding the physical world, having persistent memory, and planning and reasoning. My work involves developing systems that learn about the world similarly to human babies or animals.

Q: How do language models work exactly?
A: Language models are trained by taking a text, removing some words, and using a large neural network to predict the missing words. Sometimes, the prediction is limited to words on the left. Once trained, these models understand language, grammar, and semantics. They can be fine-tuned for specific tasks like answering questions or detecting hate speech. Additionally, they generate text by predicting one word at a time, based on previous words, which allows them to produce text continuously, although this can lead to inaccuracies.
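As a toy illustration of the next-word loop Yann describes (nothing like the transformer networks behind real LLMs, just a hypothetical word-level bigram counter), the idea can be sketched in a few lines of Python:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; a real LLM trains a huge neural network on trillions of tokens.
corpus = "the cat chases the mouse in the kitchen".split()

# Count how often each word follows each other word: a bigram "language model".
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most likely next word, or None if the word was never seen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

def generate(start, n=4):
    """Produce text one word at a time, feeding each prediction back in."""
    words = [start]
    for _ in range(n):
        nxt = predict_next(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(generate("cat"))
```

Real models replace the counting table with a neural network that outputs a probability for every token in its vocabulary, but the generation loop, predicting one token at a time from the previous ones, is the same; that loop is also where the inaccuracies Yann mentions can compound.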

Q: And what are the challenges involved with that?
A: Predicting text involves generating a probability distribution for potential words, like guessing 'mouse' when the cat chases something in the kitchen. But video prediction, like anticipating a cat's actions in a video, is far more complex due to the numerous possibilities. Computers struggle with this because there are too many variables to consider. Instead of predicting every pixel, a feasible approach is for the system to learn an abstract representation of the video's content and predict at this level. This isn't a generative model, which interestingly suggests a non-generative future for AI, contrary to current trends.
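The probability distribution over potential words that Yann mentions can be made concrete with a small softmax sketch; the candidate words and scores below are invented purely for illustration:

```python
import math

# Hypothetical model scores (logits) for the word after
# "the cat chases the ___ in the kitchen". Higher score = more plausible.
logits = {"mouse": 4.0, "ball": 2.0, "laser": 1.5, "fridge": 0.5}

# Softmax turns raw scores into probabilities that sum to 1.
total = sum(math.exp(score) for score in logits.values())
probs = {word: math.exp(score) / total for word, score in logits.items()}

for word, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{word:>6}: {p:.2f}")
```

Video prediction would need a distribution like this over every pixel of every frame, which is why, as Yann argues, predicting in a learned abstract representation is more feasible.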

Q: As AI continues to advance, how do you propose we manage its risks, such as those highlighted by concerns over AI safety and control, while also staying critically engaged with its development as it begins to match and possibly surpass human intelligence?
A: Our systems are significantly behind human intelligence. Currently, they can't invent new things (especially LLMs) because they're trained only on language, which represents just a fraction of human knowledge. Most early human learning, like that of animals, is non-linguistic.

To illustrate, LLMs train on about 10 trillion tokens (words or sub-words), equating to 20 trillion bytes; reading all of that would take a human around 100,000 years, and it covers essentially all publicly available internet text. Contrast this with a child's data intake: processing around 20 megabytes per second through the visual system, a child matches the LLM's entire training volume in just 300 hours, and by age four, after some 16,000 waking hours, has taken in roughly 50 times more data than the LLM.
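This back-of-the-envelope comparison is easy to check with the figures given in the answer (2 bytes per token, 20 MB per second of visual input, 16,000 waking hours by age four):

```python
# All figures come from the interview answer above.
llm_bytes = 10e12 * 2          # 10 trillion tokens at ~2 bytes each = 20 trillion bytes
visual_rate = 20e6             # ~20 MB per second through the visual system
waking_hours_by_age_4 = 16_000

child_bytes = waking_hours_by_age_4 * 3600 * visual_rate
print(f"child vs LLM data: {child_bytes / llm_bytes:.0f}x")        # roughly 50-60x

hours_to_match = llm_bytes / (visual_rate * 3600)
print(f"hours for a child to match the LLM: {hours_to_match:.0f}")  # just under 300
```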

Human intelligence largely stems from interacting with the physical world; language is secondary, though still crucial. Developing AI with human-like intelligence through text alone seems unlikely. Achieving this level involves incremental progress from basic animal-like AI to more sophisticated systems, incorporating real-world interaction safeguards. Expecting sudden leaps to advanced AI (AGI) that could threaten humanity is unrealistic.

Q: As we move closer to achieving AGI, what key scientific breakthroughs or milestones do you believe are still required to achieve it?
A: No, it won't happen suddenly. Basic concepts may emerge over the next five to ten years, with ongoing progress but no sudden event. It's akin to the development of turbojet engines, which took decades to advance from frequent failures to reliable, long-distance travel.

Similarly, AI will likely take years, if not decades, to evolve. Future AI systems will differ significantly from current ones, capable of planning and imagining the consequences of their actions. Such objective-driven AI systems can be constrained by guardrails to ensure safety, a stronger guarantee than we have with humans, whose behavior we can only regulate through laws.

Q: How do you see AI contributing to our understanding of the universe and solving longstanding scientific mysteries?
A: AI offers hope and initial successes in understanding various phenomena, not just in physics or chemistry but also in biology and society. Wherever complex interactions lead to emergent phenomena, traditional models struggle to predict the resulting properties. For example, the mysterious behaviour of materials like graphene, which becomes a superconductor when twisted at a certain angle, defies reductionist theories. Here, AI can help predict interactions, such as how water molecules interact with substrates, aiding in processes like hydrogen separation, which could be vital for combating climate change through sustainable energy production.

Q: What role do you envision for AI in addressing fundamental global challenges like climate change and wars?
A: Researchers are exploring AI's potential for predicting catalyst properties and discovering new materials, such as for batteries, where traditional methods struggle because of the reliance on scarce elements like lithium. AI could revolutionize battery efficiency at scale. It might also aid in controlling plasma for fusion reactors, a challenge that has persisted for decades despite our understanding of the principles. Could it help with carbon capture or things like that? There is a lot of hope for AI in materials science, and it also shows promise in medicine, for instance in understanding biological mechanisms such as interactions between proteins. A lot of people are working on this, and it's very fascinating.

Q: And what are the AI products that people can go and use at the moment on Meta platforms that you think are the best ways to engage with AI?
A: In the US, Meta AI is a dialogue system; it's not yet available in Europe or the UK due to regulatory complexities. It works through devices like the Ray-Ban Meta glasses I'm wearing, equipped with cameras and microphones for voice interactions, providing real-time information or translations. Soon we'll see smart glasses with display features for instant translations, similar to the 2013 movie "Her," where AI is central to everyday interactions.

Q: Meta has taken a very strong position on open source and its large language models, so, could you tell us a bit more about why you've done that and perhaps answer some of the critics that have criticised this decision?
A: Yes. Our company has been deeply involved in open-sourcing for over fifteen years, covering everything from platform infrastructure to software and hardware design, including widely-used AI platforms. This approach encourages community involvement to improve and diversify our platforms through shared insights and feedback.

Additionally, it fosters an ecosystem that supports the growth of an entire industry. For example, the release of Llama 2, our open-source LLM, has sparked the rise of many startups that specialize in customizing LLMs for various uses, like language adaptation or specific industry applications.

One such project in India is adapting Llama 2 to support all 22 official languages of the country, which is a significant achievement.

Q: Can you expand on translation tools for Llama?
A: We support translation projects for unwritten languages, preserve local dialects, and provide voice access for the illiterate. Millions have adapted our models to their languages and cultures, making a significant impact.

There's a future where AI assists in every digital interaction. We must avoid a monopoly by a few U.S. West Coast companies, which mainly focus on major Western languages, neglecting global dialects. This limitation could narrow the AI's worldview, echoing specific value systems and political views.

A diverse range of AI assistants is as crucial as a free, diverse press for protecting democracy and cultural diversity. Thus, open-source AI is essential, not just desirable.

Despite concerns that open-source AI could be misused by malevolent entities, such risks are inherent to all technologies. Restricting access might actually increase risks by limiting AI's opinion diversity.

Closing off access to AI poses greater dangers than any minor increase in safety it might offer.

🎯 SNAPSHOTS

Direct links to relevant AI articles.

āœļøĀ Job replacements: Texas is replacing thousands of human exam graders with AI.

šŸŽĀ Apple: Apple plans Mac line overhaul with AI-focused M4 chips.

🆕 Amazon: Amazon adds AI expert Andrew Ng to board as GenAI race heats up.

🤖 Meta AI: Meta AI releases OpenEQA to spur 'embodied intelligence' in artificial agents.

📈 Trending AI Tools

  • 🖥️ WPTurbo - AI-powered WordPress development tool (link)

  • 🎞️ Spikes Studio - AI-powered video editing tool (link)

  • 📊 Syllaby - AI-powered social media strategy tool (link)

  • 🎙️ Castmagic - AI content platform for podcasts and meetings (link)

  • 🌐 WebWave - Create a website with skills you already have (link)

  • 📹 OpusClip - Turn long footage into viral-ready short clips (link)

  • 🔉 Gotalk.ai - Transform text into voiceovers with AI (link)
