A recent Fox News article highlighted a growing concern among educators: artificial intelligence is making students dangerously dependent on technology and eroding critical thinking. Teachers and professors alike report seeing more polished but shallow work—assignments that look impressive on the surface but reveal little real reasoning underneath. Those concerns are understandable. But blaming AI misses the deeper truth.
AI did not break learning. It exposed how fragile our educational model already was.
For decades, schools have operated on a flawed assumption: if a student produces the right output, then learning must have occurred. Essays, worksheets, and discussion posts are graded as artifacts, not as evidence of how a student actually thinks. In a world where AI can generate a well-structured, grammatically perfect paper in seconds, that assumption collapses. The problem is not that students now have powerful tools. The problem is that our system was built to reward finished products instead of intellectual process.
This weakness begins long before college.
For more than two decades, K–12 education has been shaped by standardized testing regimes that emphasize recall, speed, and selecting the “right” answer over deep analysis and original thought. Students learn to optimize for points rather than understanding. They become skilled at recognizing patterns on multiple-choice exams but have far fewer opportunities to defend an argument, synthesize ideas, or wrestle with ambiguity. By the time they reach college, many students have rarely been asked to explain why they think something — only whether they chose the correct option. AI did not create this gap. It simply makes it visible.
In the modern workplace, no one is paid to avoid tools. Engineers use Copilot. Analysts use AI to summarize and analyze data. Consultants use AI to draft and refine reports. Leaders use AI to test scenarios and explore strategy. What separates high performers from mediocre ones is not whether they use AI, but whether they can judge, challenge, and improve what it produces. That is what critical thinking looks like in practice.
Education, however, still treats learning as if producing a document proves understanding. Students are graded on what they submit, not on how they reasoned their way there. When AI can produce polished work on demand, that gap becomes impossible to ignore. The real risk is not that students will use AI. It is that schools will keep assessing learning in a way that no longer measures thinking at all.
The most important question is no longer, “Did the student use AI?”
The real question is, “Can the student explain how they arrived at this conclusion?”
A student who truly understands their work can defend it. They can explain why they chose a particular thesis, what evidence persuaded them, what alternatives they considered, and where their reasoning changed. A student who outsourced their thinking cannot. This is how doctoral committees operate. It is how professional interviews work. It is how accountability functions in the real world. Education at every level should be held to the same standard.
One practical way to move in this direction is to require what can be called a Chat Prompt Bibliography. Just as students cite books and articles, they should also submit the prompts they used with AI tools and short reflections explaining how they evaluated and refined the outputs. This does not punish AI use. It makes thinking visible. It shows whether a student simply accepted what a machine gave them or whether they questioned, iterated, and improved it. In industry, skilled professionals never take the first output as final. That cognitive work should be what schools evaluate.
A significant part of the tension around AI in education is not coming from students at all, but from educators. Many teachers and faculty were trained for a world that did not include generative AI, and they have not been given the time, training, or institutional support to adapt their teaching to it. When instructors are unsure how these tools work or how they should be used, it becomes difficult to design assignments that meaningfully engage students rather than simply police them. The result is frustration on both sides of the classroom.
The solution is not to ban AI, but to train educators to use it well. Schools should invest in professional development that helps teachers understand how generative tools work, how to design assignments that require explanation and defense of ideas, and how to use AI as a teaching assistant rather than a threat. When educators are equipped to work with these tools, they can shift the classroom from monitoring compliance to cultivating thinking — which is exactly what AI makes possible when used correctly.
At the same time, schools must update what “technology literacy” actually means. Teaching students how to format a document in Microsoft Word — or how to use any single office application in isolation — belongs to another era. Spreadsheets, presentations, email, and collaboration tools are now deeply integrated with AI. The skill students actually need is not learning menus and buttons, but learning how to use intelligent systems to analyze information, generate drafts, test ideas, and collaborate more effectively. AI literacy belongs alongside writing, logic, and mathematics as a core intellectual discipline.
But integrating technology intelligently does not mean saturating classrooms with screens.
Years of research have consistently shown that heavy screen use interferes with attention, focus, and sustained reading: the very cognitive abilities students need to think deeply, write clearly, and reason well. When screens replace long-form reading, handwriting, and face-to-face discussion, they change how students process information. This directly affects the kind of thinking students are able to do.
That is why K–12 education should be far more thoughtful about when and how technology is introduced, rather than assuming more screens automatically mean better learning. Students need to learn how to think without machines before they learn how to think with them. Physical books, handwritten notes, sustained attention, and real dialogue build memory, reasoning, and synthesis in ways no device can replicate. When schools issue devices like Chromebooks for constant use, they are not reducing screen time — they are extending it into the home, compounding the effect.
This is not an argument against artificial intelligence. In industry, we are not slowing down with AI; we are accelerating. A new kind of workforce is being built right now, and it will rely on these tools every day. But the people who succeed will not be the ones who generate the most text. They will be the ones who think most clearly.
The future of education does not belong to those who ban AI.
It belongs to those who teach students how to think in a world where AI exists.