
In 2022, OpenAI released ChatGPT, the first major large language model made available to the general public, starting an industrial arms race that would turn the world on its head and forever change how we work, create, live, and, most importantly, think.
Generative AI (i.e., AI that generates content from a user prompt) has been championed as the great solution to human productivity: a wonderful digital assistant that can lift all those petty, menial tasks off your hands and free up your mind for the real problems while it takes care of the rest. The ability to offload unwanted work and planning onto an AI secretary is a fantastic proposition, one that LLMs like ChatGPT and Google’s Gemini have so far been surprisingly capable of fulfilling. So, if generative AI is all that it’s cracked up to be, why don’t we see a huge boost in efficiency?
Recent studies present an interesting view of why this is, and suggest that AI may not be the digital savior Silicon Valley keeps pushing it to be. To put it bluntly, AI is fundamentally changing how we think.
Results from new research out of several German universities are starting to point towards a hunch educators have been holding since they first started seeing AI-generated papers on their desks: AI is hurting our ability to think critically and independently about information and dulling our personal problem-solving capabilities.
In a survey conducted by Pew Research among a sample of 1,453 teen students aware of ChatGPT, 69% considered researching a new topic an acceptable use of generative AI, which suggests that a large percentage of students are likely to use gen AI to expedite their research. While this initially seems like a legitimate and harmless use of LLMs in academics, the issue becomes apparent when the statistic is combined with results from the German research mentioned above. The joint study by Matthias Stadler, Maria Bannert, and Michael Sailer, titled “Cognitive ease at a cost: LLMs reduce mental effort but compromise depth in student scientific inquiry,” found that while students using LLMs experienced less cognitive strain, they also showed reduced engagement with the source material and produced less relevant and less substantial arguments than students who engaged directly with research. Stadler et al. continue: “This outcome supports the findings of prior studies suggesting that a higher degree of interaction with diverse and sometimes challenging information—as often encountered in traditional searches—may promote better comprehension and processing of learning material [as opposed to searches using LLMs].” The study reveals a concerning relationship between the use of generative AI and a reduction in an individual’s ability to think critically about the information in front of them.
Stadler et al.’s study provides considerable new evidence to back the claims of the 25% of K-12 teachers who say AI is harmful to education, a harm that only worsens when students put generative AI to less ethical uses, such as summarizing readings (depriving themselves of the opportunity to reach their own conclusions) or generating writing and edits for them. In these circumstances, neuroplasticity and skill atrophy become the driving forces behind AI’s negative consequences.
Neuroplasticity is one of the primary factors behind skill acquisition: the brain strengthens certain areas based on the strain placed on them, much as muscles grow during exercise. This means that to improve your ability to think in a certain way or handle different problems, you need to give your brain practice in those areas. When we outsource our activities to AI, our brain no longer interacts with the process of writing, researching, coding, etc. in the way it needs to in order to adapt and grow stronger. Neuroplasticity also cuts both ways: if you aren’t using certain skills, your brain will devote those resources to something else, leading to skill atrophy. In other words, outsourcing work to generative AI means losing the skills to produce that work on our own, or, in the case of education, never learning those skills in the first place.
Unfortunately, AI isn’t going anywhere: Salesforce (a corporate software company that has conducted several large-scale surveys about AI in the workplace) reports that 45% of the Americans it surveyed were already using generative AI regularly. As AI continues to develop, we will need to treat the way we think about and use it the same way we treat general media literacy. Ethical, proper, and responsible use of AI tools will need to become part of the curriculum to prevent over-reliance and to allow the next generations to think critically about the impact AI has on our interactions with information.
Understanding how to use AI as a tool instead of a crutch, along with which kinds of tasks are appropriate to delegate, can not only protect against skill atrophy but also enable us to use generative AI tools in a genuinely beneficial manner.