I was genuinely impressed the first time Grok replied to me.
Elon Musk’s AI tool didn’t sound like a generic chatbot trying to be funny. It sounded like someone I actually knew. It spoke like a Nigerian, like my friends. It knew when to drop “omo” and when to slip in “abeg.” It understood our jokes without needing a thread of context. And when it got something wrong, it corrected itself fast enough to feel almost human.
I guess the years of posts we’ve written on X have trained Grok well. We fed these platforms our personalities for free, and now Grok reflects them back to us. It switches moods easily. It can be playful, sharp, unserious, and then suddenly serious again.
Like every other major AI tool, Grok also hallucinates.
It says things that are wrong with total confidence. It seamlessly blends facts with jokes and half-truths. We’ve seen the same behavior in ChatGPT, Claude, and other AI bots. Somehow, we’ve accepted it as normal.
What makes Grok different is not that it can generate harmful content. The difference is where it lives.
Grok is built into X, which has over 600 million active users.
You don’t need to download another app or switch platforms. You generate content inside X and share it immediately, where it can be reposted within minutes and pushed by algorithms designed for virality.
What seems like convenience is actually the risk. The network effect amplifies Grok’s mistakes, making them louder and harder to contain.
This weakness became impossible to ignore earlier this month, when Anita Natasha Akida (popularly called Tasha), a Nigerian reality TV star, called out Grok. People had been prompting the bot to generate edited versions of her photos, and Grok responded with humiliating and inappropriate images.
Grok replied and apologized to her. It promised never to edit her images again.
Minutes later, it broke that promise and generated another image mocking her.
In recent months, women across countries have described variations of the same violation: their photos turned into sexual content. The bot hasn’t even spared minors. Whether intentional or not, the tool made this harm easy.
Governments have noticed and started taking action. Malaysia and Indonesia became the first two countries to block Grok outright. Turkey has followed with its own restrictions. The U.K. and France are discussing whether Grok violates existing digital safety laws.
I use the standalone Grok app, but I rarely touch it directly on X. That choice alone says something. When you place AI inside a social network designed for attention and conflict, moderation becomes much harder, harm spreads faster, and accountability gets blurry.
I still believe Grok shows us something important. It might be the closest we have come to humanizing AI. Its cultural intelligence is real. But intelligence without judgment is not enough.
If xAI can’t teach Grok the difference between “I can” and “I should,” it will keep facing bans and keep harming real people. And if we, as users, keep treating these tools like toys just because they’re entertaining, then we’re also choosing what kind of internet we’re willing to live with.