AI in education is no longer a future scenario. Students at every level, from middle school to university, already use AI tools to write essays, solve problems, and prepare for exams. The debate is not whether AI belongs in classrooms. It is whether students will use it to learn faster or to stop learning altogether.
The research collected across 2024, 2025, and early 2026 gives a clearer picture than most people expect. The answer is not simple, and it depends heavily on how the AI is used.
Where the Evidence Shows Real Learning Gains
A 2025 randomized controlled trial published in Scientific Reports compared students learning through a custom AI tutor with students in standard active-learning classes. The result: students using the AI tutor learned significantly more in less time. They also reported feeling more engaged and more motivated than the classroom group.
That is not an isolated finding. A systematic review published in npj Science of Learning analyzed 28 studies covering 4,597 K-12 students using intelligent tutoring systems. Findings were generally positive, particularly when the AI system adjusted content in real time based on individual student performance. Systems that included self-assessment prompts and skill-level tracking improved metacognitive skills alongside academic scores.
A Brookings Institution review from February 2026 reinforced this. It found that well-designed AI tutoring platforms consistently produced learning gains, greater knowledge transfer, improved motivation, and efficiency across multiple studies. In particular, it noted that AI tools can address the long-standing problem of classes “pitched to the median,” where strong students get bored and struggling students fall behind. Personalized pacing tackles something traditional classrooms have never fully managed.
So the potential benefit is real. But the conditions that make it real are specific.
Where the Risk of Cognitive Dependency Is Also Real
The same research that shows learning gains also shows a consistent concern: overreliance. And students themselves are raising this alarm.
A December 2025 RAND Corporation survey of 1,214 American youth found that 67 percent of students agreed that the more students use AI for schoolwork, the more it will harm critical thinking skills. That figure was up more than 10 percentage points from just ten months earlier. The concern grew as usage grew.
A May 2025 survey of 262 undergraduate students, published on arXiv, found that students’ top concerns about AI chatbots in education were risks to academic integrity, accuracy of information, loss of critical thinking skills, and the development of overreliance. It was not researchers raising these concerns; it was the students using the tools every day.
Research from the University of Oslo, published in late 2024, framed this clearly: when students use AI as a shortcut rather than as a thinking aid, deep learning stops. The paper argued that AI can assist in producing text and enhancing productivity, but that it undermines genuine cognitive effort when it replaces, rather than supports, the student’s own reasoning process.
This matters because critical thinking is not a side skill. A February 2026 paper that synthesized research across education, cognitive science, and psychology identified what it called AI-driven cognitive offloading: the tendency to delegate thinking to the AI rather than developing it. The paper found that this pattern risks undermining intellectual autonomy and the ability to reason independently.
Furthermore, a September 2025 international workshop at Cambridge, involving 19 researchers from 11 countries, concluded that AI works best as a mediating tool within human-centered learning, not as a replacement for dialogue, debate, and teacher interaction. The collaborative and social elements of learning are not optional extras. They build the kind of thinking AI cannot replicate.
The Difference Between Good and Bad AI Use in Education
The distinction the research keeps returning to is not whether students use AI, but how.
AI tutoring that asks follow-up questions, prompts reflection, and adjusts difficulty based on where a student actually struggles tends to improve learning. AI tools that simply produce answers on demand tend to replace the cognitive work that produces learning.
A literature review from November 2025, covering studies from 2005 to 2025, found that AI tutoring systems are most effective when used to complement, rather than replace, human instruction. Effectiveness dropped when AI substituted for teacher interaction entirely.
The practical implication is this: a student who uses AI to check their reasoning, explore a concept from a different angle, or get instant feedback on a draft essay is learning. A student who pastes a question and copies the output is not.
Schools and universities have started setting rules around this distinction. Some require students to show their reasoning process. Others use AI-detection tools. Both approaches treat the tool itself as neutral and focus on the behavior around it.
What This Means Going Forward
AI in education is not going to disappear. The question is whether its rollout is shaped by evidence or by enthusiasm.
The research from 2025 and 2026 suggests that AI tools can genuinely improve outcomes when designed around sound pedagogy, paired with teacher involvement, and used to support thinking rather than replace it. When those conditions are absent, the same tools accelerate exactly the dependency students already say they fear.
That tension is not a reason to ban AI from classrooms. It is a reason to be precise about how it is introduced.
Frequently Asked Questions
1. Does AI in education improve student learning outcomes?
Yes, under the right conditions. A 2025 randomized controlled trial in Scientific Reports found that students using a well-designed AI tutor learned significantly more in less time than students in active-learning classes. A systematic review of 28 studies covering 4,597 K-12 students also found generally positive effects, particularly when AI systems adapted to individual student performance and included self-assessment features.
2. Does AI harm critical thinking skills in students?
Research suggests it can, depending on how it is used. A December 2025 RAND Corporation survey found that 67 percent of students believed increased AI use would harm critical thinking. A 2026 review across cognitive science and education research identified AI-driven cognitive offloading as a real risk, where students delegate reasoning to AI rather than developing it themselves. The risk is highest when AI replaces thinking rather than supporting it.
3. How should schools use AI in education responsibly?
Research consistently recommends using AI as a complement to human instruction, not a replacement for it. Effective AI tools ask students follow-up questions, prompt self-reflection, and adjust difficulty based on real performance. Schools should pair AI use with clear expectations around reasoning and process, and maintain teacher involvement as the primary instructional relationship. An international research workshop at Cambridge in 2025 concluded that AI works best within human-centered learning environments.