When I sat down to write this week’s issue, I didn’t intend to write solely about AI. Perhaps, having recently edited the first blog in our new series, AI, Responsibly, I still had it on my mind. Or maybe AI is simply so impactful to higher education that there’s often a lot of news about it. Either way, here are three interesting ideas about AI from this week’s readings: reimagining higher education’s purpose in an AI-driven world; why AI literacy matters; and how including AI critics can strengthen AI policymaking.
After reading today’s issue, share your thoughts about AI in the comments!
AI and Challenges for Higher Education
From Beyond Tool or Threat: GenAI and the Challenge It Poses to Higher Education | EDUCAUSE Review
Generative artificial intelligence isn’t just a new tool—it’s a catalyst forcing the higher education profession to reimagine its purpose, values, and future.
Our Thoughts
There is a lot to like in this piece. It asks us to look at the worth of higher education through a different lens, not by defending old habits, but by naming what colleges and universities uniquely do when machines can draft so much of a final product. I read that as an invitation to design for thinking, judgment, and public purpose. With public confidence in higher education wavering, now is the time to show how institutions do the human parts of work and citizenship that AI tools cannot touch.
As someone pursuing a PhD in learning science, I appreciate the call to shift assessment from product to process. That is not a retreat from rigor. It is simply a clearer way to see learning. When we ask students to document how they plan, iterate, and justify decisions, we evaluate the quality of thinking rather than the accident of a polished product. There are nudges in this direction already, with guidance that favors authentic tasks, explicit transparency about tool use, and assessment designs that make reasoning visible. The point is not to try to outrun AI, but to embrace it where it helps and judge students on the choices they make with it.
If higher education takes this view seriously, the narrative changes. We reclaim value by making the process of learning visible and by showing how students use AI to think better, not to think less. We rebuild trust by graduating people who can frame the problem, weigh the tradeoffs, and act with judgment in an algorithmic world. That is a hopeful path. It is also a practical one that aligns mission, pedagogy, and market signals. The result is not a smaller claim for college; it is a sharper one that meets this moment on purpose.
AI Literacy First
From Why AI literacy must come before policy | Times Higher Education
Faculty at the University of Canterbury argue that students need AI literacy before they can be expected to follow AI policies effectively.
Our Thoughts
What stands out to me here is the way it surfaces something we have largely overlooked. We sprinted to write rules for AI and only later asked whether students and faculty had the foundational understanding to follow them. The evidence says many campus policies still lean toward protecting the institution and policing authenticity rather than empowering people who choose to use the tools. This suggests we have not provided the educational scaffolding that allows our communities to learn and grow. Perhaps the better sequence is the one argued here: build literacy first, translate that literacy into clear guidelines, and only then set policy that can be taught, practiced, and assessed.
Luckily, the authors offer an AI literacy framework that you can put to use on your own campus. Their SAIL framework provides a foundation on which you can build your own institutional AI literacy curriculum. It starts with concepts, so people understand how AI works and where it fails; moves to cognitive and applied skills, so they can interrogate outputs, surface assumptions, and make better choices; and ends with digital citizenship, so the work is ethical, transparent, and aligned with mission. Short courses built on SAIL make it easier to scale that capacity across a campus rather than leaving it to early adopters.
Finally, there is an equity case here. Literacy is a gatekeeper. When institutions publish rules without shared understanding, students with less prior exposure to AI are more likely to stumble, and faculty are left to reconcile mixed messages in the classroom. Treating AI literacy as a foundation, not an afterthought, is how you avoid uneven enforcement and build trust that policies are for learning rather than only for discipline. By centering literacy in our AI strategies, we can create a practical path to responsible use, better teaching, and policies people can follow with confidence.
Including AI Critics
From Sometimes We Resist AI for Good Reasons | The Chronicle of Higher Education
Kevin Gannon, professor of history and director of the Center for the Advancement of Faculty Excellence at Queens University, details why you should include AI critics in discussions of AI policy.
Our Thoughts
What I appreciate about this piece is the reminder that policy gets better when more people help write it. For two years we have asked campuses to police AI while also telling students to build AI skills, and the result has been mixed messages and uneven practice. Students say they are often unsure when AI is allowed, and only a small share of institutions report having a comprehensive strategy for AI use. If trust is the goal, invite the full set of voices into the room, including the skeptics who are naming the tensions the rest of us feel. That is how you move from conflicting signals to policies people can actually follow.
The timing is right for that shift. Surveys find many campuses are still drafting or piloting guidance, and global bodies keep pointing us to the same foundation. Build literacy first, co-design clear guidelines with faculty and students, and only then set policy that can be taught, practiced, and assessed. UNESCO’s guidance and EDUCAUSE’s action plan push in that direction, and the research literature on GenAI policy warns against narrow, integrity-only approaches that ignore teaching and learning. Policies that begin with shared understanding and classroom-level use cases tend to travel farther than rulebooks written to minimize institutional risk.
Hearing all voices makes the policy stronger: when diverse perspectives show up together, the guidance that emerges is both principled and workable. That is the kind of steady, inclusive work that will help us figure out the future together.