HEat Index, Issue 95 – AI for Learning and AI for Operations

February 20, 2026


This week, I came across an interview that put into words something I've been thinking about for a while but hadn't fully articulated. An AI administrator at Columbia University is publicly skeptical of the very technology his institution has asked him to champion. I found his perspective worth spending some time with, because I think the questions he's raising deserve more than a passing read. And although my second article this week is also about AI, it is not a contradiction. There is an important difference between AI as a shortcut through learning and AI as a tool for smarter institutional work, and I think it is worth drawing that line clearly. 

After reading today’s issue, share your thoughts about AI's role on your campus in the comments! 

AI Skepticism 

From Why One AI Administrator Is Skeptical of AI | Inside Higher Ed 

Matthew Connelly, vice dean for AI initiatives at Columbia University, explains why he is skeptical that AI tools actually improve learning.

Our Thoughts  

Connelly's skepticism resonates with me, and not just because I've seen the pattern play out before. I think healthy skepticism about technology in education is not only warranted but necessary. Learning is genuinely hard. It's cognitively demanding, it's nonlinear, and it requires struggle. That's not a bug in the process. It is the process. When we introduce tools that make it easier to produce the appearance of learning without doing the underlying cognitive work, we aren't helping students. We're helping them bypass the thing we're supposed to be delivering.

Connelly's concern that students are "losing the ability to think for themselves" connects directly to what researchers have found about how learning actually works. Cognitive load theory tells us that meaningful learning requires effortful processing, making connections, retrieving information, grappling with new material, and building durable mental models over time. Research on desirable difficulties has shown for decades that the conditions that make learning feel easier in the moment tend to produce weaker long-term retention. If AI removes the friction from the learning process, it may also remove the learning itself.

Reading this article, I kept thinking about Audrey Watters' book Teaching Machines, which chronicles more than a century of promises that automation would bring efficiency and improvement to education. Psychologist Sidney Pressey wrote in 1933 that "there must be an industrial revolution in education." We are still hearing that same argument, just in a different wrapper. The narrative has always been the same: technology will individualize instruction, free teachers from drudgery, and make the whole enterprise cheaper and more effective. And the research has consistently shown that the outcomes rarely match the pitch. Connelly is right to note that the people pushing ed-tech have been proven wrong repeatedly, and the burden of proof should sit squarely with them. Saying "this time is different" is not evidence.

There is an important distinction worth drawing here, though, because I don't think all AI use in education is equivalent. There is a meaningful difference between learning to use AI as a tool to supplement and extend professional work and using AI to replace the work that produces learning in the first place. A historian who uses machine learning to analyze thousands of archival documents is doing something fundamentally different from a student who uses a chatbot to generate a paper they were supposed to write themselves. The first application is a professional skill that could be worth developing. The second is a shortcut through the very process that builds the judgment students need to evaluate AI output critically. And this is the part that worries me most. If students don't develop that judgment through real practice, they won't know when the AI is wrong, when it's oversimplifying, or when it's missing context that matters. We will have produced graduates who are fluent in prompting but unable to assess what comes back.

Connelly's sharpest observation is also his simplest. "It's like they're eating the seed grain." That framing captures the real risk. If we allow AI to replace the intellectual labor that builds capability in our students, we are not preparing them for a more complex future. We are producing a generation that is dependent on systems they cannot interrogate or override. That is a bad outcome for students, for institutions, and for the employers who will eventually hire them. Institutions that want to prepare students for an AI-integrated workforce should be teaching them to think critically alongside AI, not outsourcing their thinking to it entirely. Those are very different goals, and right now, too many campus AI strategies are chasing the first without acknowledging the risk of the second.

Using AI Safely 

From Here are 3 ways to mine AI for insights, and do it safely | University Business 

As part of a series on Navigating AI, Alcino Donadel looks at ways institutions are using AI without compromising institutional data.

Our Thoughts  

If you read our first article this week and came away thinking I'm skeptical of AI across the board, I want to offer some important nuance. My concern about generative AI in the classroom is specifically about what happens when we allow it to short-circuit the cognitive work that produces learning. That is a distinct issue from how administrators and staff use AI to improve institutional operations, and it is worth separating the two conversations clearly.

This short piece from University Business is worth your time precisely because it keeps that distinction in focus. The administrators interviewed are not talking about replacing professional judgment with AI output. They are talking about using AI as what one of them calls a "strategic thinking partner" and another describes as a "table-side consultant" that helps surface blind spots and generate ideas. That framing matters. It positions the human as the author of decisions and the AI as a prompt for better thinking, which is exactly the right orientation for anyone who wants to use these tools well.

While all three principles mentioned in the article are good advice, my favorite is the third one: do not outsource your thinking. This connects directly to what Connelly was warning about in our previous article, just from the other side. The risk is not AI itself. The risk is dependency. When we stop interrogating AI output because it is faster and easier to accept it, we lose the very institutional knowledge and contextual understanding that make our judgments valuable in the first place. The "verify, verify, verify" mantra mentioned in the article is not just caution. It is the practice that keeps professional expertise intact.

One thing I would add that the article does not address directly is that higher education has a genuine structural advantage here that we do not always acknowledge. We are actually quite good at exploring new ideas collaboratively, sharing what works, and adapting practices across institutions. EDUCAUSE, regional consortia, and peer networks give us channels to share responsible AI use cases in ways that most industries simply do not have. Rather than waiting for a vendor to tell us how AI should work in higher education, we have the communities and the expertise to figure that out ourselves. We just have to use them. 

Allen Taylor
Senior Solutions Ambassador at Evisions

Allen Taylor is a self-proclaimed higher education and data science nerd. He currently serves as a Senior Solutions Ambassador at Evisions and is based out of Pennsylvania. With over 20 years of higher education experience at numerous public, private, small, and large institutions, Allen has successfully led institution-wide initiatives in areas such as student success, enrollment management, advising, and technology, and has presented at national and regional conferences on his experiences. He holds a Bachelor of Science degree in Anthropology from Western Carolina University and a Master of Science degree in College Student Personnel from The University of Tennessee, and is currently pursuing a PhD in Teaching, Learning, and Technology at Lehigh University. When he’s trying to avoid working on his dissertation, you can find him exploring the outdoors, traveling at home and abroad, or in the kitchen trying to coax an even better loaf of bread from the oven.
