> Gemini 2.5 Pro has a context window of 1 million tokens, and Google wants to raise that to 2 million tokens soon. 1 token is roughly 0.75 words, so 1 million tokens would be in the ballpark of 3,000 pages of code.
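The estimate in the quote is easy to sanity-check; a quick sketch, where the words-per-token and words-per-page ratios are just common rules of thumb, not anything Google publishes:

```python
# Back-of-envelope check of the numbers above (both ratios are rough assumptions).
TOKENS = 1_000_000
WORDS_PER_TOKEN = 0.75   # common rule of thumb for English text
WORDS_PER_PAGE = 250     # dense page of prose, assumption

words = TOKENS * WORDS_PER_TOKEN
pages = words / WORDS_PER_PAGE
print(f"{words:,.0f} words, roughly {pages:,.0f} pages")  # 750,000 words, roughly 3,000 pages
```

Code is usually less dense per page than prose, so the real figure for source files could easily differ by a factor of two in either direction.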
You mean around 3,000 files of 3,000 characters each? That is a lot. I've played with some other LLMs in agentic AIs, but at work we use Copilot, and when I add context via drag and drop it seems to be limited to a dozen or so files.
Still, I don't fully understand how such a huge context works for Gemini. I assume you don't provide the whole context with every request? So it keeps (but also updates) context for a specific session?
Gemini is better than Sonnet if you have broad questions that concern a large codebase; the context size seems to help there. People also use subagents for specific purposes to keep each context manageable, where possible.
On a related note, I think the agent metaphor is a bit harmful because it suggests state, while the LLM itself is stateless.
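The statelessness point can be made concrete: in typical chat-completion APIs, it is the client that keeps the conversation and resends the entire history with every request. A minimal sketch, where `send_to_model` is a hypothetical stand-in for a real API call, not any specific vendor's SDK:

```python
# Sketch: the "agent" has no memory; the client-side history list is the state.
# send_to_model is a hypothetical placeholder for a chat-completion API call.
def send_to_model(messages):
    # A real implementation would POST `messages` to an LLM endpoint.
    # The key point: the full history travels with every single request.
    return f"reply after seeing {len(messages)} messages"

history = []
for user_turn in ["What does foo() do?", "And what about bar()?"]:
    history.append({"role": "user", "content": user_turn})
    reply = send_to_model(history)  # entire history, every time
    history.append({"role": "assistant", "content": reply})

print(len(history))  # 4 entries: the client, not the model, holds the state
```

Providers do add server-side conveniences (session objects, context caching) on top, but underneath each request is self-contained, which is why long sessions get slower and more expensive as the history grows.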