Gemini, equipped with Google DeepMind's 'Deep Think' capability, has set a historic milestone by achieving a gold-medal standard score at the 2025 International Mathematical Olympiad while adhering to official rules.
Introduction: Was Mathematics an ‘Impregnable’ Fortress for AI?
Imagine teenage math geniuses from around the world gathered in one place. They are the contestants of the International Mathematical Olympiad (IMO), the world’s most prestigious ‘brain olympics,’ held annually since 1959 Source 1.
This competition isn’t just a test where you plug memorized formulas into equations. The problems presented are like bizarre and complex logical mazes never seen before. Participants must find their own path and provide logical proofs. It is, quite literally, a place that tests the limits of human intelligence.
However, in July 2025, unbelievable news emerged from this ‘sanctuary of human intellect.’ Google DeepMind’s artificial intelligence, Gemini, achieved a ‘gold-medal standard’ performance in this competition Source 4. It wasn’t just about getting a high score; it was an ‘officially recognized’ record obtained while following all the official rules of the competition Source 5. How exactly did Gemini solve these difficult math problems? And what does this show us about our future?
Why It Matters
If you ask the chatbots we use every day, “What is 1234 multiplied by 5678?”, they answer without much trouble. But when given a complex request like “Explain logically why this theorem holds,” AI often becomes confused or hallucinates, producing plausible-sounding but false answers.
Math Olympiad problems are the pinnacle of this ‘logical reasoning.’ They demand a sophisticated thinking process that chains one conclusion to the next from established facts, rather than mere calculation. Gemini’s achievement is important for three main reasons:
- Evolution into True ‘Thinking AI’: Gemini has moved beyond simply memorizing and outputting data to possessing reasoning capabilities—the ability to think deeply and build logic like a human Source 7.
- Triumph of ‘General’ AI, Not Just Math-Specific: This model isn’t a specialized robot modified only to be good at math. It is remarkable that a ‘general-purpose language AI,’ like the one we use for everyday conversation, has reached a world-class level in mathematics Source 7.
- Official Recognition: While there have been announcements of AI solving math problems in the past, this achievement is decisively different as it was directly verified and officially recognized by the IMO competition coordinators Source 4.
Understanding the Secret: Gemini’s ‘Deep Think’
How did Gemini accomplish such an amazing feat? The key lies in a technology called ‘Deep Think.’ To understand this, let’s imagine a scenario.
[Imagine: Two Students in a Maze] Two students enter a complex maze.
- Student A (Traditional AI): Runs forward blindly. When hitting a dead end, they panic and try any path again. They might get out by luck, but mostly they get lost.
- Student B (Deep Think Gemini): Takes a map out of their bag and marks their current position. When coming to a fork in the road, they think to themselves, “There’s a high probability of a dead end if I go this way,” and adjust their path. If they realize they’ve entered a wrong path, they immediately turn back and devise a different strategy (a process sketched in the code right after this list).
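To make Student B’s strategy concrete, here is a minimal, purely illustrative Python sketch of a backtracking maze search. To be clear, this is not DeepMind’s implementation and says nothing about how Deep Think works internally; the `solve_maze` function and the grid are invented for this article. It only demonstrates the general idea behind the analogy: remember where you have been, and turn back from dead ends instead of wandering blindly.

```python
# Illustrative only: a tiny backtracking search over a grid maze.
# This mirrors "Student B": mark visited spots, retreat from dead ends.

def solve_maze(maze, start, goal, path=None, visited=None):
    """Return a list of positions from start to goal, or None if no path exists."""
    if path is None:
        path, visited = [start], {start}
    if start == goal:
        return path

    row, col = start
    for dr, dc in [(1, 0), (-1, 0), (0, 1), (0, -1)]:            # try each direction
        nxt = (row + dr, col + dc)
        r, c = nxt
        in_bounds = 0 <= r < len(maze) and 0 <= c < len(maze[0])
        if in_bounds and maze[r][c] == 0 and nxt not in visited:
            visited.add(nxt)                                     # mark the map
            result = solve_maze(maze, nxt, goal, path + [nxt], visited)
            if result is not None:                               # found a way through
                return result
            # dead end down this branch: fall through, back up, try the next direction
    return None

# 0 = open corridor, 1 = wall
maze = [
    [0, 1, 0, 0],
    [0, 1, 0, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 0],
]
print(solve_maze(maze, start=(0, 0), goal=(3, 3)))
```

The important part is what happens after a failed branch: the search retreats and tries the next option, rather than charging ahead like Student A.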
1. The Meeting of “Intuition” and “Deliberation”
Think about when we solve a quiz. There is intuition, where the answer comes to mind as soon as you see the problem, and there is a deliberation process, where you write things down on paper and weigh them one by one. While existing AIs mainly relied on the first ‘intuition’ to produce quick answers, the enhanced Gemini goes through a process of reviewing and correcting its own thoughts through ‘Deep Think’ Source 8.
2. Analogy: An Honors Student with a ‘Scratchpad’
Simply put, Gemini equipped with ‘Deep Think’ is like an ‘honors student holding a scratchpad (notebook).’ When given a problem, it doesn’t just spit out an answer. Instead, it solves the problem by talking to itself on the scratchpad: “Let’s try solving the first step like this,” “Oh? I’m stuck here. Then let’s try another method.” Through this process, it reduces errors and approaches the correct answer.
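The same “talking to itself on a scratchpad” loop can be summarized in a few lines of Python. Again, this is only a hedged sketch of the general propose-check-revise pattern described above; `propose_step` and `find_flaw` are hypothetical placeholders (in a real system they would be calls to the language model itself), not any actual Google DeepMind API.

```python
# Illustrative sketch of the "scratchpad" pattern: draft, check, revise.
# propose_step() and find_flaw() are hypothetical placeholders.

def solve_with_scratchpad(problem, propose_step, find_flaw, max_rounds=5):
    scratchpad = []                                    # record of attempts so far
    for _ in range(max_rounds):
        draft = propose_step(problem, scratchpad)      # "let's try solving it like this"
        flaw = find_flaw(problem, draft)               # "hmm, am I stuck anywhere?"
        scratchpad.append((draft, flaw))
        if flaw is None:                               # no objection found: accept the draft
            return draft
        # otherwise loop again, with the failed attempt and its flaw now on the scratchpad
    return None                                        # give up after too many revisions

# Toy usage: "solve" x + 3 == 10 by proposing candidates 0, 1, 2, ...
toy_propose = lambda problem, pad: len(pad)                      # next candidate number
toy_check = lambda problem, x: None if x + 3 == 10 else "x + 3 != 10"
print(solve_with_scratchpad("x + 3 == 10", toy_propose, toy_check, max_rounds=10))  # -> 7
```

The scratchpad is simply a growing list of earlier attempts and the objections raised against them, which is what lets each new attempt improve on the last.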
3. Mathematics Solved Through Natural Language Alone
Even more surprising is that Gemini didn’t solve the math by writing complex computer programming code. It developed logic using the words people use every day, namely ‘Natural Language’ Source 8. It won the gold medal by building logic with words, just like an experienced mathematician explaining things calmly by your side.
Current Status: Amazing Records Set by Gemini
Gemini’s performance in this competition goes far beyond just being ‘good.’
- Score: Gemini recorded a total of 35 points out of a possible 42 at this year’s IMO, a score on par with the world’s top students Source 6.
- Solving Ability: It perfectly solved 5 out of 6 high-difficulty problems. This is a record that even human geniuses find hard to achieve within the time limit Source 8.
- Official Certification: This entire process was conducted in strict compliance with the official rules of the IMO and received official recognition from the competition’s organizing committee Source 3, Source 4.
The project was led on the technical side by Thang Luong of Google DeepMind, joined by Edward Lockhart Source 3. Together they have shown the world that AI can carry out high-level intellectual work, not merely serve as a simple tool.
What’s Next?
The Math Olympiad gold medal is about more than just “AI doing your math homework.”
- Acceleration of Science and Technology: Mathematics is the fundamental language of all science. AI capable of proving complex formulas will play a decisive role in solving humanity’s challenges, such as developing new drugs or designing efficient energy grids to tackle climate change.
- Innovation in Logic Fields: Significant changes are expected in fields that require high-level logical reasoning, such as programming or legal document review. AI with ‘Deep Think’ should excel at tasks like writing code with fewer errors or spotting contradictions buried in complex legal documents.
- Leap in Personal Education: We will have the perfect personal tutor who doesn’t just give the answer but guides you logically: “Since you thought this way here, let’s approach it in that direction next time.”
Through this achievement, Google’s Gemini has clearly shown that AI has moved beyond the stage of simply ‘summarizing’ information to the stage of ‘solving’ complex problems Source 9.
AI’s Take
The news of Gemini’s IMO gold medal poses an important question to us: “If AI enters the realm of creative logic, which we thought was the exclusive domain of humans, what is the role of humans?” However, just as mathematicians use AI as a new tool to make even greater mathematical discoveries, perhaps we can think of it as gaining a reliable partner in ‘deep-thinking AI.’ The day when we can enjoy solving the complex equations of life and the world together with AI seems not far off.
References
- Advanced version of Gemini with Deep Think officially achieves gold-medal standard at the International Mathematical Olympiad - Google DeepMind Blog
- Advanced version of Gemini with Deep Think officially achieves gold-medal standard at the International Mathematical Olympiad - AI Future Thinkers
- AI in Mathematics: Gemini with DeepThinking Sets New Standard at IMO - Promptwire
- Gemini Deep Think learns math, wins gold medal at International Math Olympiad - Ars Technica
- World’s First AI Wins Gold in IMO: Google’s Gemini Scores 35, Shocks Judges - 36Kr
- Gemini with Deep Think Achieves Gold at International Math Olympiad - Maginative
- Google DeepMind’s Gemini wins Mathematical Olympiad gold using only natural language - THE DECODER
- Google DeepMind Achieves Gold-Level Math Olympiad Result With Gemini Deep Think - TechRepublic