Is AI Diminishing Our Ability to Think?


Is AI Causing Cognitive Erosion or Cognitive Evolution?

Ritvik Nayak

Just yesterday, I had that moment when you suddenly realize how much technology has changed the way you think, learn, and live. I had asked an AI coding copilot to help me write a bit of code for a website I've been building. While it was convenient and surprisingly accurate, it also left me disappointed when I thought about what I was really doing: outsourcing my brain. This isn't just about me, though. AI is becoming a fixture of our daily lives. I accept that it may simply be a tool to boost efficiency and make our lives easier, but in the process, it might indeed be reducing our own brainpower.


If AI does all the work and all the thinking for us, are we slowly but surely losing our cognitive abilities? (IMAGE SOURCE)

However, this is not a new fear. Throughout history, every transformative technology has been blamed for making humans lazy. The calculator was blamed for poor math skills. The Internet was seen as the death of deep thinking and of physical encyclopedias. Now AI is today's scapegoat, with critics arguing that it is eroding our critical thinking skills and creative processes.

The Allure of Convenience That AI Offers

Undeniably, one thing is true: AI tools like ChatGPT, Google Bard, or DALL-E have made some previously difficult tasks seem easy and completable in seconds. Why bother spending hours writing an essay when you can get a decent draft in seconds? Why toil trying to debug code when a tool can tell you where you've gone wrong? As I like to say, AI is the intellectual equivalent of fast food: fast, accessible, and easy to consume (please excuse my terrible humor). As we get used to leaning on AI for convenience, we lose the grit, deep thought, and determination that we once brought to these tasks.


Before the advent of AI, solving a Rubik's Cube was a genuinely difficult task that required focus and determination. Now, a young child who wants to solve it can simply ask ChatGPT for the exact solution. But does that make the next generation dumber? Image by Jadson Thomas on Pexels.

Cognitive offloading, the psychological term for relying on external aids to perform mental tasks, has been around for millennia. Thousands of years ago, writing began to replace memory; later, calculators replaced complex mental arithmetic, and search engines replaced encyclopedias. But AI represents a leap: it doesn't just store or search information; it generates ideas, answers, and even art. This makes its influence on human cognition uniquely significant.

Killing Creativity and Reviving It

One of the more polarizing effects of AI is on creativity and art. Critics argue that AI merely generates content, causing humans to lose their authenticity and originality. When students write essays with the help of AI, or artists use generative tools to create masterpieces, some wonder whether they are losing their ability to think creatively. This is especially true in schools and universities, where AI tools are strongly discouraged because many believe they cause students to lose their ability to think.


In most universities and schools, the use of AI tools to help with schoolwork or essay-writing is very strongly discouraged, as teachers and professors believe it causes students to become lazier, lose essential thinking skills, and never truly learn, relying on tools to get work done instead. In fact, a PhD student accused of using AI to write his final thesis (which got him expelled) went as far as suing his university to prove that he was innocent. Image by Julia M Cameron on Pexels.

Yet, there’s a counter-argument, AI is simply a tool — not a human or a proffessional at any task. This is indeed very true. While I did mention that artists might use AI to create masterpieces, this probably will never happen at a large scale. The reason being that AI is not perfect. Instead of spending the time to write an entire prompt to specifically capture everything an artist wants to portray, they would rather do it themselves, keeping their authenticity, originality and credibility. Artists who were caught using AI to generate their works probably would earn much less compared to those creating them themselves. Think of authors using AI to brainstorm plot lines or musicians using AI tools to compose harmonies they might never have thought of. Far from eroding creativity, AI could amplify it, provided we remain actively involved in the process rather than passively consuming what it offers.


While it is true that AI-generated art can be deceiving, modern AI art detectors, and even regular human artists or art inspectors, can tell the difference between an authentic, original piece and a somewhat sloppy AI artwork. The essence of real art is its beauty and its authenticity. Otherwise, anyone could make a masterpiece with just two sentences. Image by mali maeder on Pexels.

But, We Might Actually Be Getting Smarter…

There’s another way of looking at this. Perhaps AI isn’t degrading how we think but instead upgrading it. By automating the more mundane tasks and decision-making, AI liberates mental real estate for higher-order thinking. When calculators became ubiquitous, mathematicians didn’t become obsolete-they moved on to higher-order problems. There are several hundreds of different areas in mathematics that calculators can never come even remotely close to doing. Similarly, there are tons of different things that AI can’t do. I’m not saying that AI won’t be advanced or developed further in the future, it most certainly will. But one thing is certain — there is always room for improvement.

And AI will never be able to do what humans are capable of doing.

The Education Dilemma

As I’ve mentioned before, the tussle is most fierce in classrooms. With AI-powered learning platforms like Koobits or AI tutors from Khan Academy, students receive individualized explanations and problem-solving strategies. But critics counter that these might discourage independent learning and prevent actual improvement without completing very specific tasks. They argue that if every question has an immediate answer, where’s the room for struggle-the very crucible of learning?


Here’s a school teacher spreading his concerns about Khan Academy. While it is indeed over a decade old, it does address the concerns of many other educators who feel the same way. They believe that the platform’s mastery level challenges are not suitable for actual learning — the necessity to get 5 in a row is highly demanding and students may accidentally mis-enter their answers, especially when we consider the fact that it is all digital. Image taken by author. (ORIGINAL REVIEW SOURCE)

But I myself disagree with this. I used to use Khan Academy to learn a great deal about the world, across hundreds of different topics. Without it, I probably would never have become interested in artificial intelligence in the first place. The website's detailed explanations are not the direct, immediate answers that some critics describe. Every question and subject on the platform is explained slowly, making sure the viewer understands every single concept. And because of this detailed learning process, the mastery challenge mentioned in the teacher's review above simply stands as a test to ensure that one has learned every aspect thoroughly.

Get Ritvik Nayak’s stories in your inbox

Join Medium for free to get updates from this writer.

But even educators are beginning to seek common ground. Rather than flatly banning AI, they have started teaching responsible AI use, keeping their students focused on thinking and problem-solving skills while treating AI as a secondary tool. There is no inherent reason why a role for AI in education should diminish human intelligence; used effectively, it has the potential to be a pathway to deeper knowledge. However, students must also do their part by making sure that their main objective is not just to complete a task; it should be to learn, to understand.


With the modern generation, most young students' real goal in school is simply to complete a task, not to do it to the best of their capacity. I'm a 12-year-old myself, and I'm seeing this a lot in my class. Young students are abusing the AI educational tools that my school offers to simply find the answers to questions, not to work them out, much like what critics describe. This kind of lazy learning most definitely won't work in the real world; truly understanding a concept and learning its solution is the only way to apply it to the real world. Image by Anastasiya Gepp on Pexels.

The Technology Underpinning

But what really makes AI powerful, and possibly disruptive, is the technology behind it — the vast amounts of data and the complexity of the neural networks that underpin modern AI, especially LLMs. As I’ve mentioned in a ton of my articles, these machines do not think in the same way as humans do; however, they can process and produce information at unprecedented speeds and volumes.

Take GPT models, for example: these have been trained on billions of text samples. That explains their ability to predict the most likely next word in a sentence with remarkable accuracy. This is why ChatGPT often seems to know what you're going to ask before you even finish a prompt; it's like the autocomplete on a text message, only far more capable.


Look at ChatGPT’s suggested autocomplete for my prompt ‘Suggest me…’ I haven’t completed it yet. However, it takes the most popular prompts for ‘Suggest me…’ from it’s database and displays them here. Image by author.

As impressive as that performance is, it also means these systems are no less biased and no more insightful than their training data. Overusing them breeds not only intellectual laziness but also propagates every latent societal bias those datasets carry.

By doing tasks for us that we once did ourselves, AI may render us less capable of problem-solving, critical thinking, and even empathy, skills that emerge from grappling with complexity.

The question, then, is not whether AI is intrinsically good or bad; it is a matter of how we use it. Do we let it replace our thinking entirely, or do we integrate it as a tool that augments human intellect? The answer lies in intentionality. By deploying the technology strategically, asking it for clues rather than solutions, we can safeguard the core of human intelligence while embracing the benefits of automation.
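What "clues, not solutions" can look like in practice is easiest to show with a tiny example. The sketch below is only illustrative and assumes the OpenAI Python SDK with an API key in your environment; the model name and prompts are my own placeholders, not a recommendation.

```python
# A minimal sketch of asking AI for hints instead of answers.
# Assumes the OpenAI Python SDK (`pip install openai`) and an OPENAI_API_KEY
# environment variable; the model name and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "Why does my recursive Fibonacci function slow down badly past n = 35?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model works here
    messages=[
        {
            "role": "system",
            "content": (
                "You are a tutor. Never give the final answer or finished code. "
                "Reply with at most two short hints and one question that pushes "
                "the student to work the rest out on their own."
            ),
        },
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)
```

The constraint lives in the system prompt: the struggle of actually solving the problem stays with the student, while the model only nudges.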

In the end, AI doesn’t diminish our ability to think unless we allow it to. Like any tool, its impact depends on the user. And if we approach AI with curiosity and caution, it might not replace human thought but spark it in ways we’ve yet to imagine.
