Is ChatGPT really making us dumb and lazy?

Since ChatGPT’s debut in 2022, generative AI has rapidly entered our work, study, and personal lives, speeding up research, content creation, and much more at an unprecedented rate.

Enthusiasm for generative AI tools has understandably surged, with adoption outpacing even that of the internet or the PC. But experts warn we should proceed with caution. As with every new technology, generative AI can propel society forward in countless ways, but it can also bring consequences if left unchecked.

One of those voices is Natasha Govender-Ropert, Head of AI for Financial Crimes at Rabobank. She joined TNW founder Boris Veldhuijzen van Zanten on the latest episode of “Kia’s Next Big Drive” to talk AI ethics, bias, and whether we’re outsourcing our brains to machines.

Check out the full interview — recorded en route to TNW2025 in Kia’s all-electric EV9:

One question should be on our minds: as we turn to generative AI for answers more and more, what impact could this reliance have on our own intelligence?

A recent MIT study on using ChatGPT to write essays has spawned a slew of sensationalist headlines, from “Researchers say using ChatGPT can rot your brain” to “ChatGPT might be making you lazy and dumb.” Is that really the case?

Your brain on gen AI

Here’s what actually happened: Researchers gave 54 Boston-area students an essay task. One group used ChatGPT, another used Google (without the help of AI), and the third had to write using nothing but their brains. While they wrote, their brain activity was measured using electrodes.

After three sessions, the brain-only group showed the highest levels of neural connectivity. ChatGPT users? The lowest. It seemed the AI-assisted folks were cruising on autopilot while the others had to think harder to get words on the page.

For round four, roles reversed. The brain-only group got to use ChatGPT this time, while the AI group had to go solo. The result? The former improved their essays. The latter struggled to remember what they’d written in the first place.

Overall, the study found that over its four-month duration, brain-only participants outperformed the other groups at the neural, linguistic, and behavioural levels, while those using ChatGPT spent less time on their essays, often simply copying and pasting instead.

English teachers who reviewed their work said it lacked original thought and “soul.” Sounds alarming, right? Perhaps, but the truth is more complicated than the sensationalist headlines suggest.

The findings were less about brain decay and more about mental shortcuts. They showed that over-relying on LLMs can reduce mental engagement. But with active, thoughtful use, those risks may be avoided. The researchers also emphasised that, while the study raised some interesting questions for further research, it was also far too small and simple to draw definitive conclusions.

The death of critical thinking?

While the findings (which are yet to be peer reviewed) certainly call for further research and deeper reflection on how we should use this tool in educational, professional, and personal contexts, perhaps what’s actually rotting our brains is TL;DR headlines devised for clicks over accuracy.

The researchers seem to share these concerns. They created a website with an FAQ page urging reporters not to use inaccurate language that sensationalises the findings.

[Image: FAQ disclaimer beginning “Is it safe to say that LLMs are, in essence, making us…”]
Source: FAQ for “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task”, https://www.brainonllm.com/faq

Ironically, they attributed the resulting “noise” to reporters using LLMs to summarize the paper and added, “Your HUMAN feedback is very welcome, if you read the paper or parts of it. Also, as a reminder, the study has a list of limitations we list very clearly both in the paper and on the webpage.”

There are two conclusions that we can safely draw from this study:

  • More research into how LLMs should be used in educational settings is essential
  • Students, reporters, and the public at large need to remain critical of the information they receive, whether from the media or from generative AI

Researchers from the Vrije Universiteit Amsterdam are concerned that, with our increasing reliance on LLMs, what might really be at risk is critical thinking, or our ability and willingness to question and change social norms.

“Students may become less likely to conduct extensive or comprehensive search processes themselves, because they defer to the authoritative and informed tone of the GenAI output. They may be less likely to question — or even identify — the unstated perspectives underlying the output, failing to consider whose perspectives are being glossed over and the taken-for-granted assumptions informing the claims.”

These risks point to a deeper problem in AI. When we take its outputs at face value, we can overlook embedded biases and unchallenged assumptions. Addressing this requires not just technical fixes, but critical reflection on what we mean by bias in the first place.

These issues are central to Govender-Ropert’s work at Rabobank, where her role focuses on building responsible, trustworthy AI by rooting out bias. But as she pointed out to Veldhuijzen van Zanten on “Kia’s Next Big Drive,” bias is a subjective term and needs to be defined for each individual and each company.

“Bias doesn’t have a consistent definition. What I consider to be biased or unbiased may be different to somebody else. This is something that we as humans and as individuals need to decide. We need to make a choice and say this is the standard of principles that we will enforce when looking at our data,” said Govender-Ropert.

Social norms and biases are not fixed but ever-changing. As society evolves, the historical data we train our LLMs on does not. We need to remain critical and challenge the information we receive, whether from our fellow humans or our machines, to build a more just and equitable society.


