Google has updated its Gemini AI assistant. Previously, Gemini could only see your primary calendar; now it can also access your other calendars, including work, family, and personal calendars, as well as calendars shared with you. This makes it easier to keep track of all your events in one place.
With this update, Gemini can check all your calendars when you ask about your schedule. You don’t have to look at each calendar one by one. It shows all your events in one view.
You can also ask Gemini to add new events to any calendar. For example, you can tell it to put a meeting in your work calendar or a birthday in your family calendar. Gemini can also update or delete events on calendars where you have permission.
There are still some things Gemini cannot do. It cannot invite people to events. It also cannot change the location or description of an event that already exists. You will still need to do these things yourself.
Even with these limits, the update makes Gemini noticeably more useful. Seeing all your schedules in one place helps you stay organized and manage your time across work, family, and personal commitments.
Now, Gemini can help you add events, check your full schedule, and keep everything under control. It makes managing your busy life easier and faster.
Google is preparing some exciting new upgrades for its Gemini AI, and early signs of these changes have been found inside the Google app. The features are not live yet, but they give a clear idea of what Google may release soon through a new section called Gemini Labs.
A recent teardown of the latest Google app by Assemble Debug has revealed several upcoming features. One of them concerns Gemini Live, the voice-based assistant: Google now appears to be working on a “Live Thinking Mode.”
Until now, Gemini has offered separate modes for quick answers and more thoughtful ones. The new feature would let Gemini reason more deeply while holding real-time voice conversations.
Image – SammyFans
Google is also working on improvements to the Deep Research feature. Gemini already has a research mode, and Google seems to be refining it further. These upgrades could help users get clearer, more detailed answers when researching complex topics or professional tasks.
The company is also developing a feature called UI Control. This would let Gemini interact with apps on your phone to get tasks done for you. This idea connects closely with Project Astra, Google’s long-term goal of creating a powerful assistant that understands what’s happening on your screen and around you.
The Gemini Labs system looks like it will let users turn individual experiments on or off, so they only test what they are interested in. While none of these features are available yet, their presence in the app’s code suggests they could arrive soon. Stay tuned.
Gemini now uses Personal Intelligence to connect your data
Published on January 14, 2026
Google has introduced a new Gemini feature, Personal Intelligence, which aims to make AI feel more human and useful. Gemini can now understand your personal context by safely connecting to your Google apps, such as Gmail, Photos, Search, and YouTube.
This feature is rolling out in beta for users in the US who subscribe to Google AI Pro or AI Ultra. Personal Intelligence is optional and turned off by default. If you choose to turn it on, Gemini can use your own data to give answers that actually fit your life.
Before this update, Gemini could only pull one thing at a time, such as a single email or photo. It couldn’t connect the dots.
Now, it can think across your data. Google achieves this with something called “context packing”: rather than looking at everything at once, Gemini selects only the most important emails, images, or searches needed to answer your question.
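Google hasn’t published how context packing works under the hood. Purely as an illustration, here is a minimal sketch of what budget-constrained selection of personal items could look like; the `Item` type, the `relevance()` scoring, and the token budget are hypothetical assumptions, not Google’s implementation.

```python
# Hypothetical sketch of "context packing": pick only the most relevant
# items (emails, photos, searches) that fit within a model's context budget.
# All names, scores, and numbers here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Item:
    source: str   # e.g. "gmail", "photos", "search"
    text: str     # the item's content, serialized as text
    tokens: int   # rough size of the item once handed to the model

def relevance(query: str, item: Item) -> float:
    """Toy relevance score: fraction of query words found in the item.
    A real system would use embeddings or a learned ranker instead."""
    words = set(query.lower().split())
    hits = sum(1 for w in words if w in item.text.lower())
    return hits / max(len(words), 1)

def pack_context(query: str, items: list[Item], budget: int = 2000) -> list[Item]:
    """Greedily keep the highest-scoring items until the token budget is used."""
    ranked = sorted(items, key=lambda it: relevance(query, it), reverse=True)
    packed, used = [], 0
    for it in ranked:
        if used + it.tokens <= budget:
            packed.append(it)
            used += it.tokens
    return packed

if __name__ == "__main__":
    items = [
        Item("gmail", "Your flight to Denver departs at 7:45 AM on Friday", tokens=40),
        Item("photos", "Boarding pass screenshot, Denver trip", tokens=60),
        Item("search", "Recipe for banana bread", tokens=30),
    ]
    # Only the flight email and boarding-pass photo fit the budget and
    # score highly, so they are the items "packed" into the model's context.
    for it in pack_context("when does my flight to denver leave", items, budget=100):
        print(it.source, "->", it.text)
```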

Image via Android Authority
The feature is powered by Gemini 3, Google’s most advanced AI model so far. It has better reasoning skills and can handle very large amounts of information. Even so, Gemini only uses what is necessary to keep answers clear and fast.
Gemini can also work with text, images, and videos together. It may pull details from photos, confirm them in emails, and combine that with online search results.
Google says users stay in control. You can choose which apps to connect, turn personalization off at any time, and see where Gemini gets its information. Personal Intelligence is a big step toward smarter, more personal AI.
Most Samsung Galaxy users don’t realize how much AI helps them
Published on January 13, 2026
Samsung has been using artificial intelligence (AI) in its smartphones for many years. Galaxy AI is now available on more than 400 million devices around the world. Many people use these features often without even knowing it.
Galaxy AI is designed to make life simpler. Approximately 80% of Samsung Galaxy users have tried AI features, and more than two-thirds use them regularly. AI helps with everyday tasks like texting, calling, taking photos, editing, and checking the weather.
Interestingly, many people don’t realize how often they use AI. A recent Samsung survey of 2,000 adults in the US found that 90% of people use AI on their phones, but only 38% know it.
More than half think they don’t use AI at all, yet when shown a list of common phone features, 84% recognized that they rely on AI-powered tools every day.

Source – Samsung Mobile Press
AI works quietly in the background to make Samsung phones smarter. Features like weather alerts, call screening, autocorrect, voice assistants, and auto-brightness all use AI. AI also improves the camera: many people use Night Mode or automatic photo slideshows to take better pictures, often without knowing AI is helping.
The future of Samsung Galaxy AI is quite promising as it will continue to become more personal, conversational, and helpful. Samsung is helping users get things done faster, stay organized, and enjoy a more seamless mobile experience every day.
Confirmed: Google’s Gemini models to boost Apple AI features
Published on January 12, 2026
Apple and Google have announced a new partnership that will make Apple devices smarter. Google’s Gemini AI models will be used in Apple’s AI system. This will help improve Siri and other Apple Intelligence features.
According to the official statement, Apple and Google have entered a multi-year collaboration. The next generation of Apple Foundation Models will be built on Google’s Gemini technology and cloud systems, helping Apple deliver more personalized experiences and “innovative new features” across its devices.
The main result of this partnership will be an updated Siri, expected to arrive with iOS 26.4 in March or April. This new Siri will be able to understand personal context better, be aware of what’s on the screen, and give more control within different apps.
Joint Statement: Apple and Google have entered into a multi-year collaboration under which the next generation of Apple Foundation Models will be based on Google's Gemini models and cloud technology. These models will help power future Apple Intelligence features, including a…
— News from Google (@NewsFromGoogle) January 12, 2026
Apple also confirmed that its Apple Intelligence system will continue to run on Apple devices and Apple’s Private Cloud Compute servers. This ensures users’ data remains private, keeping Apple’s strict privacy standards in place.
Currently, it is not clear if existing Apple Intelligence features will also use Gemini models. Apple has not shared details about this.
This partnership is notable because Apple has usually relied on its own AI. By pairing its system with Google’s advanced models, Apple could offer smarter, more helpful features while keeping user information safe. Stay tuned for more information.
