Google's Gemini 2.5 Deep Think model solved 10 out of 12 problems at the 2025 International Collegiate Programming Contest (ICPC), standing shoulder to shoulder with the world's top developers.
The Day AI Surprised the World’s Best Minds
Imagine this: brilliant college students from around the world, gathered in teams, facing 12 formidable problems that must be solved within 5 hours. These are not simple math exercises; it is an extreme battle of wits that demands designing complex algorithms (step-by-step procedures for solving problems) and writing error-free code.
This is the scene at the World Finals of the International Collegiate Programming Contest (ICPC), often called the 'Coding Olympiad'. This year, however, an 'AI contestant' appeared at this intense event and surprised everyone: Google's latest model, Gemini 2.5 Deep Think. (Gemini achieves gold-medal level at the International Collegiate Programming Contest World Finals - Google DeepMind)
Gemini did more than just participate; it achieved ‘gold medal-level’ results, rivaling the best human teams. We explain in easy-to-understand terms how AI was able to solve such complex problems and what this means for our future.
Why is this important?
Until now, the AI most of us encountered was mainly 'good at talking': ask it a question, and it would combine information from the internet into a plausible answer. Solving problems in a coding competition, however, is a completely different story.
- The Era of Real 'Reasoning' Opens: Coding problems cannot be solved by simply outputting memorized information. You must break the problem into small units, establish a logical sequence, and verify your own work for errors. Gemini reaching gold-medal level is strong evidence that AI has begun to engage in Deep Think (delving logically and deeply into complex problems) like a human. [Gemini just aced the world's most elite coding competition - what it means for AGI | ZDNET](https://www.zdnet.com/article/gemini-just-aced-the-worlds-most-elite-coding-competition-what-it-means-for-agi/)
- Reaching Human Expert Level: This achievement shows that AI is moving beyond being a simple assistant and can compete on equal footing with the top 1% of human experts in specific fields. (Google CEO Sundar Pichai celebrates Gemini's gold win at world coding contest: 'Such a profound leap' - The Times of India)
- A Stepping Stone to AGI (Artificial General Intelligence): On the path to AGI (AI capable of performing any intellectual task a human can), 'mathematics' and 'coding' are considered the hardest mountains to climb. The key point is that Gemini is conquering both in succession. (Google's AI Achieves Historic Gold Medal Performance in Programming Competition, Marking Major AGI Milestone - Folio3 AI)
In Plain Terms: How did Gemini solve the problems?
The way Gemini 2.5 Deep Think works is similar to a talented architect.
As an analogy: no one blindly starts stacking bricks when building a house. First they draw up a complete blueprint, design the foundation, and plan exactly where the plumbing will go before construction begins. Likewise, Gemini 2.5 Deep Think does not immediately start writing code when faced with a complex problem. Instead, it uses advanced reasoning to decompose the problem into several smaller components and finds solutions step by step. [Gemini just aced the world's most elite coding competition - what it means for AGI | ZDNET](https://www.zdnet.com/article/gemini-just-aced-the-worlds-most-elite-coding-competition-what-it-means-for-agi/)
The Scorecard in Numbers
- Solved 10 out of 12 problems: It accurately solved 10 of the 12 highly difficult problems presented in the competition. (Google's Gemini Stuns World Finals: AI Outscores Top Coders for "Gold" Medal Performance)
- 2nd Place Overall: If Gemini had been registered as an official participant, it would have ranked 2nd overall. Considering that human gold-medal teams usually solve 10-11 problems, there is effectively no difference in skill. (Google's Gemini Stuns World Finals: AI Outscores Top Coders for "Gold" Medal Performance)
- A Problem That Beat Humans: Notably, Gemini alone solved a problem that not a single one of the 139 participating human teams could crack. (Gemini AI solves coding problem that stumped 139 human teams at ICPC World Finals - Ars Technica)
In simple terms: at a school full of top students, everyone took the hardest exam, and the AI student placed 2nd while also solving a 'killer question' that every other student got wrong.
Current Situation: How far have we come?
In fact, this 'gold-medal march' by Google's Gemini is not its first. The technology underlying this model previously achieved gold medal-level results at the International Mathematical Olympiad (IMO). (Gemini achieved gold-medal performance at the International Collegiate Programming Contest World Finals - Google DeepMind)
Mathematics and coding share a common demand for rigor. In a novel, a slightly awkward sentence still conveys its meaning, but in math or code, even a 0.1% error produces a wrong answer. Google CEO Sundar Pichai underscored the significance of the result, calling it "such a profound leap." (Google CEO Sundar Pichai celebrates Gemini's gold win at world coding contest: 'Such a profound leap' - The Times of India)
Of course, challenges remain. This achievement came in a competition environment with fixed rules; it remains to be seen whether the model can perform as well in real-world software development, where requirements are complex and ambiguous.
What will happen next?
What kind of changes will the possibilities shown by Gemini bring to our daily lives?
- An Era Where Anyone Can Be a Programmer: The 'democratization of coding' will accelerate: even without mastering programming syntax, people will be able to have AI turn a well-explained idea into a sophisticated program.
- Breakthroughs in Science and Technology: AI strong in complex calculations and logical modeling will become the best partner in solving human challenges such as new drug development, climate crisis response, and new material design.
- A New Definition of Intelligence: As AI enters areas requiring high-level logical thinking, the human role will evolve from ‘solving problems directly’ to ‘collaborating with AI to establish more valuable hypotheses’.
The achievements of Google DeepMind's Gemini 2.5 Deep Think go beyond a simple news item. They signal that AI has evolved from a 'talking parrot' into a 'partner that ponders and solves problems alongside us'. (Google Gemini Achieves Gold-Medal Performance at International Collegiate Programming Contest World Finals)
AI’s Take
This achievement by Gemini suggests that AI has officially entered the realm of human ‘wisdom (problem-solving ability)’ in addition to ‘knowledge’. In particular, solving a problem that 139 teams failed demonstrates the possibility that AI can find new logical paths that human collective intelligence has not yet discovered. In the future, AI will move beyond being a simple tool and establish itself as a ‘co-researcher’ solving the most difficult mathematical and logical challenges facing humanity.
References
- Gemini achieves gold-medal level at the International Collegiate Programming Contest World Finals - Google DeepMind
- [Gemini achieved gold-medal performance at the International Collegiate Programming Contest World Finals - 67nj](https://www.67nj.org/gemini-achieved-gold-medal-performance-at-the-international-collegiate-programming-contest-world-finals)
- Google CEO Sundar Pichai celebrates Gemini's gold win at world coding contest: 'Such a profound leap' - The Times of India
- Google Gemini Achieves Gold-Medal Performance at International Collegiate Programming Contest World Finals
- Gemini AI solves coding problem that stumped 139 human teams at ICPC World Finals - Ars Technica
- Google's AI Achieves Historic Gold Medal Performance in Programming Competition, Marking Major AGI Milestone - Folio3 AI
- Google's Gemini Stuns World Finals: AI Outscores Top Coders for "Gold" Medal Performance
- OpenAI and Gemini Win Gold at ICPC 2025: OpenAI Scores Perfectly, Crushes Competitors
- [Gemini just aced the world's most elite coding competition - what it means for AGI | ZDNET](https://www.zdnet.com/article/gemini-just-aced-the-worlds-most-elite-coding-competition-what-it-means-for-agi/)
FACT-CHECK SUMMARY
- Claims checked: 8
- Claims verified: 8
- Verdict: PASS