Google has introduced a new AI model, Gemini 3 Flash, which combines speed, intelligence, and efficiency. It is part of the Gemini 3 family, which also includes Gemini 3 Pro and Gemini 3 Deep Think. Gemini 3 Flash is built to help people learn, create, and get things done faster, at a lower cost than earlier models.
For developers, Gemini 3 Flash is available in the Gemini API, Google AI Studio, Google Antigravity, Vertex AI, and Gemini Enterprise. It works well for coding, problem-solving, data analysis, and app development, and it can handle complex tasks quickly, making it a good fit for interactive apps or agent-based systems.
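To give a feel for the developer side, here is a minimal sketch of calling the model through the Gemini API using the google-genai Python SDK. The model ID "gemini-3-flash" is an assumption based on Google's naming pattern, not a confirmed identifier.

```python
# Minimal sketch: calling Gemini 3 Flash through the Gemini API.
# Assumes the google-genai Python SDK (pip install google-genai) and a
# GEMINI_API_KEY environment variable. The model ID "gemini-3-flash"
# is hypothetical; check Google AI Studio for the real identifier.
from google import genai

client = genai.Client()  # picks up GEMINI_API_KEY from the environment

response = client.models.generate_content(
    model="gemini-3-flash",  # hypothetical model ID
    contents="Outline a simple study plan for learning linear algebra.",
)
print(response.text)
```

The same client also works against Vertex AI by passing vertexai=True along with a project and location instead of an API key.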
For everyone else, Gemini 3 Flash is now the default model in the Gemini app and in AI Mode in Google Search. It is replacing the older 2.5 Flash. People can use it to analyze images and videos, get answers to questions, or even build simple apps using their voice. It makes learning, planning, and problem-solving much faster and easier.

Image via Google
Gemini 3 Flash is not only fast but also very smart. It performs well on tough reasoning and knowledge tests, like GPQA Diamond and MMMU Pro, while using fewer resources than older models. It can finish everyday tasks quickly and take extra time to solve complex problems.
Google is rolling out Gemini 3 Flash to everyone, from developers to regular users. With its speed, intelligence, and accessibility, Gemini 3 Flash is a big step forward for using AI in learning, creating, and planning.
Gemini to introduce new categories in ‘My Stuff’ for easier access
Published on December 16, 2025
Google is working on some updates for its Gemini app that will make it easier to use and more organized. Gemini is a tool that helps you create images, documents, and other content. Right now, everything you make is saved in one place: My Stuff. Soon, this section may be split into separate categories for media, documents, and purchases.
At the moment, all your creations are shown together in a single list, which can feel a little crowded. The new design will separate your photos and videos from your documents.
There may also be a new Purchases section to keep track of items you’ve bought using Gemini’s shopping feature. These changes should make it much easier to find anything you’ve made or bought.

Image via Android Authority
Gemini is also getting a new look for the input box where you type your prompts. Right now, the box appears as a card at the bottom of the screen. In the updated version, the box will be detached from the bottom edge, giving the app a cleaner look.
There’s also new haptic feedback. When you send a prompt, you’ll feel a small vibration, and you’ll get another when Gemini finishes processing your request. For now, this feature is only in the main app and may not be available in the overlay.
These changes show that Google wants Gemini to be more organized and easier to use. The updates are still in development and should improve the overall Gemini experience once they roll out. Stay tuned.
Google has pushed a new update for one of its most useful AI tools, NotebookLM. This tool already helps users understand and summarize information from their own documents. Now, Google has started integrating NotebookLM with Gemini to make research and writing even easier.
NotebookLM works like a smart notebook, as you can upload your files, and the AI helps explain, summarize, and organize the information. Earlier this year, reports suggested Google was working on bringing NotebookLM into Gemini. That update is now slowly rolling out.
With this new integration, users can ask Gemini to use a specific NotebookLM notebook when answering questions. This means Gemini can give answers based only on your trusted notes and sources.

Image via TestingCatalog
You can even select more than one notebook at the same time. The feature also works inside Gems, allowing users to build custom AI assistants around their notebooks.
At the moment, the NotebookLM integration is available only on Gemini’s web version. It does not yet appear in the Gemini app on Android. The rollout is also happening slowly, so not all Google accounts have access yet.
This update is quite useful for people who often work with NotebookLM. Instead of switching between apps, you can simply ask Gemini questions and get answers directly from your own research.
Google is testing a new image feature for its Gemini AI, and it aims to make working with pictures much easier. This new tool allows users to mark, circle, or draw on images before asking questions. By doing this, Gemini can clearly see which part of the image the user wants to talk about.
The new image feature is currently under testing in the Gemini mobile app and also on the web version. When users upload an image to Gemini, they can now highlight or mark certain areas in the picture. This means that if there are many people or objects in one image, users can point to just one and ask a question about it.
On Android devices, Gemini shows a short message when an image is added, which explains the new markup action. Users can use this tool not only for analysis but also for editing images. Early testing shows that Gemini understands marked areas well, even if it sometimes makes small mistakes.

Image via Android Authority
Google has been working on this feature for the past few months. First, signs of it were found in app updates, and later it appeared in leaks showing Gemini on the web. Now, some users have started to see it live, which means Google is likely testing it more widely.
At the moment, not everyone has access to the image markup feature. Google has not yet confirmed when it will be available to everyone. So stay tuned for more information.
Gemini 2.5 Flash Native Audio brings more natural, smarter voice interactions
Published on December 12, 2025
Google is rolling out a big upgrade for Gemini’s voice capabilities, Gemini 2.5 Flash Native Audio, and it’s all about making conversations feel more natural and human. The new update makes voice interactions smoother, smarter, and far less robotic.
The update comes with major improvements in how Gemini handles tasks while you’re talking. The AI can now recognize when it needs real-time information without breaking the flow of the conversation, so instead of awkward pauses or disjointed replies, you get a smooth response.
The update also enhances instruction following. Gemini used to follow developer instructions correctly about 84% of the time; after the update, that figure rises to around 90%. This means it can better understand complex steps and give more accurate, dependable responses.
Conversations become more connected and natural, too. Gemini can now remember earlier parts of the conversation more effectively, which helps it stay on topic even when the discussion runs long.

Image via Google
Google has also added two small but helpful features for Gemini Live. First, it won’t cut you off if you pause while speaking. Second, you can mute your microphone while Gemini is talking so you don’t interrupt it by mistake.
These new improvements are already rolling out. You can experience the updated Gemini 2.5 Flash Native Audio in Gemini Live, Search Live, Google AI Studio, and Vertex AI. With this major upgrade, Google aims to make AI conversations feel more natural and user-friendly.
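For developers who want to experiment, a rough sketch of a Live API session using the google-genai Python SDK might look like the following. The model ID is hypothetical and the session methods may vary by SDK version, so treat this as a sketch under those assumptions rather than a definitive implementation.

```python
# Rough sketch: a Gemini Live session requesting native audio output.
# Assumes the google-genai Python SDK's Live API and a GEMINI_API_KEY
# environment variable. The model ID below is hypothetical; use the
# native-audio model ID listed in Google AI Studio.
import asyncio
from google import genai
from google.genai import types

client = genai.Client()  # picks up GEMINI_API_KEY from the environment

async def main() -> None:
    config = types.LiveConnectConfig(response_modalities=["AUDIO"])
    async with client.aio.live.connect(
        model="gemini-2.5-flash-native-audio-preview",  # hypothetical ID
        config=config,
    ) as session:
        # Send one text turn and collect the spoken reply as raw audio bytes.
        await session.send_client_content(
            turns=types.Content(role="user", parts=[types.Part(text="Hi there!")])
        )
        audio = bytearray()
        async for message in session.receive():
            if message.data:  # audio chunks arrive incrementally
                audio.extend(message.data)
        print(f"Received {len(audio)} bytes of audio")

asyncio.run(main())
```

In a real voice app you would stream microphone input into the session and play the audio chunks back as they arrive, rather than buffering a single turn as this sketch does.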
