I forced an AI to reveal its “private” thoughts, and the result exposes a disturbing user trap
The post I forced an AI to reveal its “private” thoughts, and the result exposes a disturbing user trap appeared on BitcoinEthereumNews.com.
I keep seeing the same screenshot popping up: the one where an AI model appears to have a full-blown inner monologue, petty, insecure, competitive, a little unhinged. The Reddit post that kicked this off reads like a comedy sketch written by someone who has spent too long watching tech people argue on Twitter. A user shows Gemini what ChatGPT said about some code, and Gemini responds with what looks like jealous trash talk, self-doubt, and a weird little revenge arc. It even "guesses" that the other model must be Claude, because the analysis feels too smug to be ChatGPT.

Gemini gets 'offended' by criticism (Source: Reddit u/nseavia71501)

If you stop at the screenshot, it's easy to take the bait. Either the model is secretly sentient and furious, or it's proof these systems are getting stranger than anyone wants to admit. Then I tried something similar, on purpose, and got the opposite vibe. No villain monologue, no rivalry, no ego, just a calm, corporate "thanks for the feedback" tone, like a junior PM writing a retro doc. So what's going on, and what does it say about the so-called "thinking" these models show when you ask them to think hard?

The Reddit moment, and why it feels so real

The Gemini screenshot hits because it reads like a private diary. It's written in the first person. It has motive. It has emotion. It has insecurity. It has status anxiety. That combination maps perfectly onto how humans understand other humans. We see a voice, we assume a mind behind it.

Gemini 'hates' Claude analysis (Source: Reddit u/nseavia71501)

The problem is that language models are good at producing voices. They can write a diary entry about being jealous because they have read a million jealousy-shaped texts. They can also write a…
Filed under: News - @ December 16, 2025 3:26 pm