Artificial intelligence may one day make humans obsolete—just not in the way that you’re thinking. Instead of AI getting so good at completing tasks that it takes the place of a person, we may just become so reliant on imperfect tools that our own abilities atrophy. A new study published by researchers at Microsoft and Carnegie Mellon University found that the more humans lean on AI tools to complete their tasks, the less critical thinking they do, making it more difficult to call upon the skills when they are needed.
The researchers tapped 319 knowledge workers—people whose jobs involve handling data or information—and asked them to self-report details of how they use generative AI tools in the workplace. Participants reported the tasks they were assigned, how they used AI tools to complete them, how confident they were in the AI's ability to do the task, how able they felt to evaluate that output, and how confident they were in their own ability to complete the same task without any AI assistance.
Over the course of the study, a pattern emerged: the more confident a worker was in the AI's capability to complete the task, the more they could feel themselves taking their hands off the wheel. Participants reported reduced "perceived enaction of critical thinking" when they felt they could rely on the AI tool, raising the potential for unexamined over-reliance on the technology. This was especially true for lower-stakes tasks, the study found, where people tended to be less critical. While it's very human to have your eyes glaze over during a simple task, the researchers warned that this could portend "long-term reliance and diminished independent problem-solving."
By contrast, the less confidence workers had in the AI's ability to complete the assigned task, the more they found themselves engaging their critical thinking skills. In turn, they typically reported more confidence in their ability to evaluate what the AI produced and improve upon it on their own.
Another noteworthy finding of the study: users who had access to generative AI tools tended to produce "a less diverse set of outcomes for the same task" compared to those without. That passes the sniff test. If you're using an AI tool to complete a task, you're limited to what that tool can generate based on its training data. These tools aren't infinite idea machines; they can only work with what they have, so it checks out that their outputs would be more homogenous. The researchers wrote that this lack of diverse outcomes could be interpreted as a "deterioration of critical thinking" for workers.
The study does not dispute that there are situations in which AI tools may improve efficiency, but it does raise warning flags about the cost of that efficiency. By leaning on AI, workers start to lose the muscle memory they've developed from completing certain tasks on their own. They start outsourcing not just the work itself, but their critical engagement with it, assuming the machine has it handled. So if you're worried about getting replaced by AI and you're using it uncritically for your work, you just might create a self-fulfilling prophecy.
who could have predicted this
dunno who could but the “AI” definitely couldn’t
Perfect, definitely using this in my class for current events tomorrow.
i feel like this is in the same category of thing as "calculators make your math skills worse" because, yeah, they kinda do for the same reasons outlined here. You aren't exercising the skills necessary to do it by hand, and so they atrophy or never develop. Obvi with AI the consequences are more dire; a calculator is right 100% of the time (provided you gave it correct inputs), but generative AI can be wrong, make shit up, and miss things in ways that are harder to catch since it's frequently fed more complex problems
To that end, i appreciate the point this article seems to put forward: don’t never use it, but use it with care and understanding of the principles behind it. The people who don’t trust the AI and take time to double check and improve its work are using it well, I feel, because they’re still exercising the problem solving skills and knowledge base they need, while outsourcing some of the busywork. A good parallel, I think, is using a calculator to find the sin or square root of a number. You can do it by hand, but it’s so tedious and lengthy that having a table or calculator do it for you frees you up to get to the meat of whatever you’re actually trying to do
I have a mild pet peeve with the student mantras of "will we ever use this in real life" and "why can't I use a calculator" for this reason exactly; even if outside tools exist to solve these problems, it's deeply important to build the thinking skills to solve them yourself