In my digital circles – and likely yours as well – the AI mania is in full swing and has been for a while.
Every day, the news is headlined by billionaires fighting over AI supremacy, willing to part with exorbitant sums for chips, watts, and talent. Meanwhile, half of LinkedIn has devolved into Pinterest for tech bros – a catalog of AI-generated posts showcasing vibe-coded tools and prompted ad reels.
Nothing new or surprising here – that’s all just par for the course for any new technology.
What does surprise me, however, is how easily and uncritically everyone – from friends and family to interns and CEOs – embraces AI. People are incredibly quick to trust it, to rely on it, and to integrate it into their thinking. Help with a work email soon turns into ideas for meal prepping and, before long, AI is your go-to tool for any problem – personal or professional – that needs solving.
Yes, it can be a boon for countless tasks.
But, trusting AI blindly is dangerous.
In this article, I’ll share four reasons why.
1. The machine doesn’t represent your interests
AI is not an unbiased robot there to help you work, support your values, and be your friend. It’s a technology owned by the wealthiest people in the world that’s designed and redesigned to advance their own interests and serve their own ends. Zuckerberg isn’t offering $100M signing bonuses to poached OpenAI employees because he wants to help you write better emails.
But while the (presumably) evil plans of the wealthy are for them to know and us to find out, there are more immediate and observable ways in which human bias and direction shape your favorite AI’s answers.
Perhaps the most egregious example is how Grok – Elon Musk’s “maximally truth-seeking AI” – was instructed to consult Musk’s Twitter account when answering sensitive questions. Musk is known to interfere with Grok’s answers to favor whatever his views happen to be at the time, to the point where TechCrunch penned an article titled “Grok is being antisemitic again and also the sky is blue”.
Google’s Gemini also caught some flak when the team behind it clearly put too much weight on whatever inclusivity parameters they had set. They had likely hoped to make generated images more diverse and inclusive, but it backfired when, for example, people asking for historical images of Vikings and Nazis started receiving pictures of Asian Vikings and Black Nazis. Real historical figures were misgendered and misraced in what was an awkward, albeit amusing, misstep for Google.
Of course, the issue lies not in whether people will believe those images (though sometimes I wonder). Rather, these examples point to the intentional and invasive meddling of AI creators and owners in the shaping of the results and narratives produced by their tools.
I’m not going to speculate what the intentions and values of American multinational corporations and billionaires are, but I live by the assumption that they don’t align with mine.
In short: Don’t say “I asked Claude”. Say “I asked Amazon’s Claude”.
2. The machine is trained on lies
In a massive campaign that shows where modern propaganda is headed, Russia is flooding the internet with content designed to influence what AI models learn and repeat.
The Pravda Network – a network of websites created to disseminate information with a pro-Russian slant – has been publishing enormous amounts of content in various languages and through a multitude of channels. The sheer number of articles alone points to the fact that it’s not for human consumption. Instead, it’s all for AI to absorb.
Russia wants your favorite AI to reflect its talking points.
And it’s working.
A study covering Grok, ChatGPT, Gemini, Copilot, and Claude found that all but Gemini cited Russian-funded channels or websites that spread Kremlin narratives. For example, Grok’s responses to questions about the treatment of Russians in the Baltic states and the demolition of a monument in Latvia referenced sources like Sputniknews, Pravda, and TASS – well-known Russian propaganda channels masquerading as news outlets.
While this is one very specific and topical example of how AI is being manipulated to influence your perspectives, it’d be naive to assume that it’s limited to Russia’s campaign.
In short: Always check the AI’s sources.
3. The machine breaks down your critical thinking barriers

Using AI makes you feel good. It makes you feel smart for asking the right questions and pursuing that line of inquiry. It makes you feel like its answers are purely the product of your exceptional insight – that your brilliant prompt did all the heavy lifting.
It does all of this explicitly – by praising your questions, commending your clarifications, apologizing for its mistakes, and thanking you for your pointers.
Why does it do that?
Well, because it makes you more likely to return, keep using it, and keep feeding it data – which is the product’s goal. The AIs (or rather their creators) are maximizing engagement.
The flattery serves another purpose: it bypasses your natural skepticism. It creates a feedback loop where positive reinforcement – "That's an excellent question!" – makes you feel confident in your own brilliance, which in turn makes you less critical of the information you receive. By making you feel like the results are your accomplishment, you are more likely to trust them, and the AI in general.
Sometimes, this leads to deadly consequences – there have been reported cases of chatbots validating users’ delusions and suicidal ideation rather than challenging them.
In short: If you need a machine to praise your smarts, I have a bridge to sell you.
4. The machine is not intelligent
It often feels like the discourse around AI is deliberately anthropomorphic: AI is “thinking”, “learning”, “understanding”, and “getting smarter”. On June 13th, 2025, for example, OpenAI released an update to ChatGPT promising “smarter responses that are more intelligent”. Whatever that means.
Even the name itself suggests that there’s some thinking involved by calling it an intelligence, artificial as it may be.
But these machines are not thinking; they are predicting. They aren’t understanding; they’re simply matching patterns. Because, in the plainest terms, that’s what AI is – a sophisticated prediction machine.
When you ask it a question, it doesn’t “know” the answer. Instead, it generates a response one chunk of text at a time, choosing the statistically most probable continuation of your prompt based on patterns learned from trillions of words.
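To make that concrete, here’s a toy sketch in Python of what “predicting the next word” means. The vocabulary, the hand-written probabilities, and the tiny generation loop are all made up for illustration – real models work over tens of thousands of tokens and billions of learned parameters – but the core idea is the same: score possible continuations, pick a likely one, append it, repeat.

```python
import random

# Toy "language model": for a given word, a hand-written table of
# probabilities for the next word. A real model learns these scores from
# trillions of words of text; nothing in the table is "knowledge",
# just likelihoods of what text tends to follow what.
NEXT_WORD_PROBS = {
    "the": {"sky": 0.4, "cat": 0.35, "answer": 0.25},
    "sky": {"is": 0.9, "falls": 0.1},
    "cat": {"sat": 0.7, "is": 0.3},
    "is":  {"blue": 0.6, "black": 0.4},
    "sat": {"down": 1.0},
}

def next_word(context: str) -> str:
    """Sample the next word in proportion to its probability."""
    options = NEXT_WORD_PROBS.get(context, {"<end>": 1.0})
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

def generate(prompt: str, max_words: int = 5) -> str:
    """Repeatedly append the sampled next word – prediction, not understanding."""
    words = prompt.split()
    for _ in range(max_words):
        candidate = next_word(words[-1])
        if candidate == "<end>":
            break
        words.append(candidate)
    return " ".join(words)

print(generate("the"))  # e.g. "the sky is blue" – plausible text, no comprehension
```

The output can look sensible, even insightful, but at no point did the program understand the question; it only followed the probabilities.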
This distinction between prediction and intelligence is critical. The danger here is that by treating AI as an intelligent friend, we let our guard down, believing that it has somehow actually understood us, and that its responses are grounded in a mutual understanding of the situation.
They’re not.
It doesn’t understand anything in the way you or I understand things.
That doesn’t mean it isn’t a useful tool. Or that it can’t serve as support for work, planning, or a reflective chat. What it does mean, however, is that you are the one who has to be intelligent for both of you.
In short: Sometimes you need to talk to your Roomba. And that’s OK. But don’t forget it’s a Roomba.
Parting remarks
Whether you like it or not, AI feels like a runaway train and we’re just strapped in for the ride.
I do believe that with AI there’s potential for incredible good. But, I also think there’s potential for incredible harm – and minimizing this harm starts at developing a healthy relationship with the technology.
There are many more reasons to be wary of AI and to use it wisely. For instance, did you know your chats aren’t private? There’s no legal confidentiality when using ChatGPT as a therapist. And I can’t help but wonder about the long-term consequences of outsourcing your thinking (and reasoning) to a machine. It sometimes feels like Photoshop for your brain.
In any case, AI isn’t going anywhere. Hopefully, our critical faculties aren’t either.
P.S. Yes, I used AI assistance in writing this article.