I was hoping to consolidate my AI stack after subscribing to Google AI Pro, thinking one subscription could replace three tools (Claude, Perplexity, and ChatGPT).
Two months later, I’m still using all three plus Google AI Pro. While it unlocked some truly impressive features, I learned that having access to better tools doesn’t automatically mean they fit the way I think and work. Here’s what changed and what didn’t.
Deep research is legitimately different
Handles multi-step synthesis better than search
Deep Research breaks complex queries into multi-point investigation plans, autonomously searches the web, and synthesizes its findings into comprehensive multi-page reports in just 5 to 10 minutes. Many people lump it in with quicker web search, but it's more like hiring a junior analyst for an hour.
I used it to investigate healthcare AI compliance frameworks across three jurisdictions. Deep Research created a customized research plan, autonomously navigated hundreds of websites, compared regulatory PDFs, and delivered a cited report I could export directly to Google Docs. ChatGPT and Claude would have required me to break the task into five separate prompts and synthesize the results manually.
The catch: you get 20 Deep Research reports per day with AI Pro, which sounds generous until you realize each report can take more than 10 minutes. For a quick fact check or fast iteration, I still turn to Perplexity. Its interface is faster when I only need an answer, not a full report.
NotebookLM Plus eliminated the friction you didn’t know you had
Higher limits made a difference
NotebookLM Plus (included in Google AI Pro) raises the limits to 300 sources per notebook and 20 daily audio overviews, compared to the free tier's 50 sources and three audio overviews per day. That numerical jump matters more than it seems.
Now, when I’m writing for a specific client, I can ask NotebookLM to cross-reference tone preferences, pull examples from previous work, or generate an audio overview I can listen to while I travel. NotebookLM grounds its answers in the uploaded sources, with inline citations for each claim, which means I don’t get generic AI advice but a synthesis based on the project’s actual history.
Where it breaks: NotebookLM is best for reference-heavy work that synthesizes existing material. It doesn’t generate new ideas well. When I need to brainstorm creatively or write something from scratch without source constraints, I still open Claude. Claude’s Artifacts feature creates standalone outputs in a dedicated window where you can view, edit, and build on them in real time, which works better for iterative writing.
Gemini’s integration advantage is real but limited
Works best when you’re already in the Google ecosystem
The most useful part of Google AI Pro is the Gemini side panel in Gmail, Docs, Sheets, and Slides that lets you ask questions, write content, and analyze data without leaving your workspace.
I was reviewing a 4,000-word article in Google Docs and realized I needed to tighten the conclusion. Instead of copying it into Claude or opening a new tab, I highlighted the section, opened the Gemini panel, and asked it to condense the passage to 120 words while retaining the key points. Done in 30 seconds, no context switching.
the limitation: This only helps if you work in Google Workspace. Gemini can’t access local files, so I still use Claude Projects for work-in-progress drafts where I need persistent context across sessions. Claude Projects also allows you to upload documents and maintain context across multiple conversations, with files automatically referenced in each chat within that project.
The model itself is… fine.
I keep comparing it to Claude, and it keeps losing.
Gemini 3.1 Pro is competent. It handles technical explanations well, processes large context windows without choking, and hallucinates less than previous versions. But when I’m writing client content or working through complex edits, I instinctively reach for Claude.
Claude’s voice feels more natural for editorial work. When I ask it to match my tone or cut the fluff from a draft, the result reads like something I would actually write. Gemini’s suggestions often seem technically correct but slightly off: the writing is too formal, or it adds qualifiers I would never use.
Where Gemini wins: structured data tasks. When I need to analyze a CSV, generate comparison tables, or organize messy research into clean categories, Gemini handles it without a problem. It’s also noticeably faster than Claude for quick queries, with responses arriving almost instantly. But speed doesn’t matter if I have to rewrite the result anyway.
I am paying for the package, not the chatbot.
Deep Research and NotebookLM justify the cost
Here’s my honest take: I don’t subscribe to Google AI Pro because Gemini is my favorite model. I subscribe for Deep Research, NotebookLM Plus with its higher generation limits, 5TB of storage, and Gemini integration across Google apps, plus features I haven’t fully explored yet.
The subscription expanded the range of problems I can solve without manual workarounds. And for productivity work, that’s what really matters.





