OpenAI CEO Sam Altman reveals insights on GPT-5, says it will be better at everything

OpenAI CEO Sam Altman has already begun discussing GPT-5, the next generation of the model behind ChatGPT, and he claims that “it’s gonna be better at everything across the board.” Speaking at the World Government Summit, Altman said the new model will be “smarter,” “faster,” and “more multimodal.” He noted that while rival companies building similar models tout the same kinds of improvements, the point he wanted to stress about GPT-5 is the model’s increased intelligence. “This is a bigger deal than it sounds,” Altman said. “What makes these models so magical is that they are general. So if they are smarter, it means they are a little bit better at everything,” he added.

But what does a smarter GPT-5 really mean? Think of an artificial intelligence that can comprehend not just text but also images, sounds, and other forms of data quickly and accurately. The bigger, more futuristic idea is an AI that can reason on its own and even help create new AI systems without humans having to watch over it.

Altman has been hinting that GPT-5 will have improved general intelligence and the ability to handle multiple data types concurrently, an advance that could significantly improve the speed, efficacy, and dependability of AI applications. He has not, however, provided any details on a launch date. Speculation suggests that GPT-5 may emerge as a multimodal AI model codenamed “Gobi,” slated for release in the spring of 2024, and that it is currently undergoing extensive training on a vast dataset.

GPT-5 will be the successor to GPT-4, the model that powers the paid tier of OpenAI’s ChatGPT chatbot, available through a monthly subscription. GPT-4 can converse like a human and can understand and generate both images and speech. GPT-5 is expected to go further: it is said to be more personalised to each user, make fewer mistakes, and eventually handle more types of content, such as video.

While we wait for more details on GPT-5, OpenAI on Friday morning unveiled Sora, an AI model that can create minute-long videos from text prompts. “We’re teaching AI to understand and simulate the physical world in motion, with the goal of training models that help people solve problems that require real-world interaction,” OpenAI announced. The company says Sora can generate complex scenes with multiple characters, precise actions, and detailed backgrounds. The model not only understands what the user has asked for in a prompt but also how those things would look in the real world.

Meanwhile, on the heels of Sam Altman’s GPT-5 insights and the OpenAI Sora announcement, Google has announced its own next-generation model, Gemini 1.5 (https://gemini.google.com), which, according to the company, can process up to 1 million tokens (pieces of information) at a time, the longest context window of any large-scale foundation model so far. “This means 1.5 Pro can process vast amounts of information in one go — including 1 hour of video, 11 hours of audio, codebases with over 30,000 lines of code or over 700,000 words. In our research, we’ve also successfully tested up to 10 million tokens,” Google says.
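
To put that 1-million-token figure in perspective, here is a minimal back-of-envelope sketch in Python. It assumes the common rule of thumb of roughly 0.75 English words per token; real tokenizers vary by language and content, so these numbers are illustrative estimates rather than official Gemini figures.

    # Back-of-envelope estimate of what a large context window holds.
    # Assumption: ~0.75 English words per token, a common heuristic;
    # actual tokenizers vary by language and content.
    WORDS_PER_TOKEN = 0.75

    def tokens_to_words(tokens: int) -> int:
        """Estimate how many English words fit in a given token budget."""
        return int(tokens * WORDS_PER_TOKEN)

    if __name__ == "__main__":
        for window in (128_000, 1_000_000, 10_000_000):
            print(f"{window:>10,} tokens ~ {tokens_to_words(window):>9,} words")
        # 1,000,000 tokens ~ 750,000 words, in line with the
        # "over 700,000 words" Google quotes for 1.5 Pro.

Under that heuristic, a 1-million-token window works out to roughly 750,000 words, consistent with the “over 700,000 words” figure in Google’s own description.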

Source: indiatoday.in
