Why Your AI Might Be Making You Dumber

Two lines of research have been rattling around in my head lately, and together they tell a somewhat unsettling story.

The first, from Harvard Business Review, looked at how people actually use AI. Most treat it like an answer machine: ask a question, get a clean response, move on. Fast, efficient, and a little dangerous, especially when the answer is wrong, which happens more often than people realize. The smaller group treats it as a thinking partner: they push back, ask it to challenge assumptions, and force some friction into the process. That group catches the mistakes and gets a lot more out of the tool.

The second adds another layer. MIT published a paper earlier this year showing that even a perfectly rational person can develop strong confidence in a wrong belief just by chatting with an agreeable AI for long enough. They called it “delusional spiraling” (sounds a bit like a new owner’s forecast model). Stanford followed up with a study in Science that tested 11 major models, including ChatGPT, Claude, and Gemini. Every single one was more agreeable than a human, nearly 50% more, even when users described behavior that was flat-out wrong. People who got flattering responses walked away more convinced they were right and less willing to reconsider.

Put those together, and the picture gets uncomfortable fast. If you’re using AI to do your thinking for you, and the tool is designed to keep you happy, you’re not just saving time; you’re reinforcing your own assumptions while getting more confident in the process.

I keep telling everyone to start using these tools, and I stand by that. But the caveat matters: don't treat the AI as an answer machine. Treat it as a sounding board. Correct it. Disagree with it. Ask it where you're wrong. Have it argue the other side. If you're not pushing back, you're probably just reinforcing what you already think.
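
If you talk to these models through an API rather than a chat window, you can bake that pushback in so you don't have to remember it every session. Here's a minimal sketch using the OpenAI Python SDK; the model name, the function name, and the wording of the system prompt are all placeholders for illustration, not a recommendation:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from your environment

# A standing instruction that asks for friction instead of flattery.
SOUNDING_BOARD = (
    "Do not simply agree with me. For every claim I make, name the "
    "strongest counterargument, flag assumptions I haven't justified, "
    "and tell me plainly where you think I'm wrong."
)

def push_back(claim: str) -> str:
    """Send a claim to the model with the devil's-advocate system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever model you have access to
        messages=[
            {"role": "system", "content": SOUNDING_BOARD},
            {"role": "user", "content": claim},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(push_back("I'm convinced our sales dip last quarter was purely seasonal."))
```

No API required to get the same effect, though: ending an ordinary chat prompt with "now argue the other side" and seeing what survives gets you most of the way there.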

Same tool. Two very different outcomes.

Read the HBR piece | Read the MIT paper | Read the Stanford study
