Google's new Gemini AI model can show you that it's really "thinking"

Scrapbook style image of a Google logo sticker with Bard and Gemini AI stars, on top of a Google-themed runway with a light blue background filled with various shapes.
(Image credit: Rael Hornby / Laptop Mag, Google / Google DeepMind)

In September, OpenAI rolled out its o1-preview for ChatGPT, a new series of models designed with strong reasoning capabilities to deliver more thoughtful answers rather than speedy ones. Since then, multiple companies have rolled out their own reasoning AI models, such as DeepSeek-R1 and Alibaba's QwQ-32B-Preview.

Now, it's Google's turn at bat.

The new reasoning AI model from Google is called Gemini 2.0 Flash Thinking, and according to Jeff Dean, Chief Scientist for Google DeepMind, it's "an experimental model that explicitly shows its thoughts." Dean goes on to say, "this model is trained to use thoughts to strengthen its reasoning," an exciting prospect for those wary of how AI forms its answers.

Google's new Gemini 2.0 Flash Thinking model is a step in the right direction

Because a reasoning AI model is designed to show its thoughts as it forms an answer, it's much easier for the AI to catch its own mistakes and correct itself. That said, TechCrunch tested Gemini 2.0 Flash Thinking Experimental and got a lackluster result on one question: asked how many R's are in the word "strawberry," the model incorrectly answered "two."
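For the record, the correct answer is three, something you can confirm with a couple of lines of ordinary Python (a trivial sketch for illustration, nothing specific to Google's tools):

    # Quick sanity check: count the letter "r" in "strawberry"
    word = "strawberry"
    r_count = word.lower().count("r")
    print(f"'{word}' contains {r_count} R's")  # prints 3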

This is still an experimental version of the AI model, and it's entirely possible it's better equipped to tackle complex queries than simple ones. In the example below, Jeff Dean shows Google's new AI model solving a physics problem and explaining its thoughts as it goes.

Google product lead Logan Kilpatrick also took to X to share an exciting example of Gemini 2.0 Flash Thinking solving "a challenging puzzle involving both visual and textual clues." In the shared video, you can see the AI model's entire thought process from start to finish.

Kilpatrick drives home the point that this is "still an early version" of the model, but seeing it in action is quite impressive. Rather than only seeing the answer an AI model lands on, you get a full, in-depth look at how the model approached the question and exactly how it arrived at its final answer.

The one downside? It'll take a bit longer to get your answer. Because the AI model has to write out its thoughts rather than simply "think" them, coming to a conclusion can take longer than it would for a non-reasoning AI model.

In the physics problem shown by Jeff Dean above, for example, it took just over 37 seconds for the AI model to complete its thinking process. But the end result seems well worth it. For someone who turns to AI for help with a coding, physics, or math problem, seeing the model's thought process can help explain where you got tripped up or which concept you overlooked.

There's clearly still a long way to go before reasoning AI models are where we'd like them to be (like recognizing three R's in "strawberry" instead of two), but the small steps companies are taking feel like positive progress.

Sarah Chaney
Contributing Writer

Sarah Chaney is a freelance tech writer with five years of experience across multiple outlets, including Mashable, How-To Geek, MakeUseOf, Tom’s Guide, and of course, Laptop Mag. She loves reviewing the latest gadgets, from inventive robot vacuums to new laptops, wearables, and anything PC-related. When she's not writing, she's probably playing a video game, exploring the outdoors, or listening to her current favorite song or album on repeat.