Artificial intelligence has advanced rapidly, yet it continues to fall down on the basics. Demis Hassabis, the head of Google DeepMind, says this inconsistency is what stands in the way of Artificial General Intelligence (AGI).
On Google’s developer podcast, he pointed to the Gemini model, which can display extraordinary skill in advanced reasoning. “It shouldn’t be that easy for the average person to just find a trivial flaw in the system,” he said.
Why AI excels and fails at the same time
Hassabis gave a striking example. With DeepThink, a reasoning technique, Gemini can win gold medals at the International Mathematical Olympiad, a competition considered the toughest in the world. Yet the same system can “still make simple mistakes in high school maths.”
He called this “uneven intelligences” or “jagged intelligences”. “Some dimensions, they’re really good; other dimensions, their weaknesses can be exposed quite easily,” he added.
Google CEO Sundar Pichai has also used the phrase. On Lex Fridman’s podcast in June, he described the current stage of AI as “AJI” or “artificial jagged intelligence”, where systems achieve brilliance in some areas but fail in others.
Why bigger models won’t fix it
Hassabis is clear that size alone is not the solution. “Some missing capabilities in reasoning and planning in memory” still need to be cracked, he said. He also called for “new, harder benchmarks” to test strengths and weaknesses with greater precision.
Without such measures, AI will continue to look powerful but remain fragile when pushed beyond narrow tasks.
The race to AGI
Companies such as Google and OpenAI are competing to reach AGI, where machines can reason, plan and learn like humans. For now, systems remain prone to hallucinations, misinformation and errors that undermine trust.
In April, Hassabis predicted AGI could emerge “in the next five to 10 years,” though he made clear this depends on solving the fundamental problems of inconsistency.
OpenAI’s take on the gaps
Sam Altman, CEO of OpenAI, has made similar points. Ahead of the release of GPT-5, he described it as a big step but not the destination.
“This is clearly a model that is generally intelligent, although I think in the way that most of us define AGI, we’re still missing something quite important, or many things quite important,” he said in a press call.
Altman argued that one crucial capability is absent. “One big one is, you know, this is not a model that continuously learns as it’s deployed from the new things it finds, which is something that to me feels like AGI. But the level of intelligence here, the level of capability, it feels like a huge improvement,” he said.
Both Hassabis and Altman agree that AGI will not be reached through scale alone. The future depends on consistency, stronger reasoning, independent learning and reliable memory.
Until then, AI will remain powerful but flawed, capable of world-class performance in one moment and tripped up by simple maths in the next.