HEat Index, Issue 100 – Fixing College Assessment, the Importance of Writing, and Something New

March 27, 2026


Wow! Our 100th issue. When we started this weekly blog, I never imagined we'd attract hundreds of regular weekly readers. A huge thanks to each and every one of you who show up week after week to read my ramblings on the latest higher ed news! I'm humbled that you find value in what we're doing here. Now, on to this week's issue. 

First up, we look at an innovative approach from an Australian university to tackling classroom assessments in the age of AI. From there, we discuss the importance of writing and how the process of editing and revising written work is an important part of a student's learning experience. Finally, we close with a new section of featured stories worth sharing. 

After reading today’s issue, share your thoughts about the importance of long writing in the comments! 

Fixing Assessment That AI Broke 

From AI Broke College Assessment. One University Believes It’s Got a Fix. | The Chronicle of Higher Education 

The University of Sydney proposes a solution to the AI-allowed-or-not-allowed assessment problem. 

Our Thoughts  

There is a lot to admire in what the University of Sydney is trying to do. Its two-lane framework is genuinely practical in its approach. Rather than trying to build higher and higher walls around assignments in an arms race that AI developers will eventually win, it starts with the more honest question of what good assessment actually looks like and works backward from there. That focus on learning is what makes this approach compelling. The finding that roughly 90 percent of existing assessments were no longer AI-proof is striking, but I suspect it would not look much different at most American institutions if anyone ran the same audit. 

Although American higher education is structurally different from Australian higher education, I don’t think that’s a good reason to simply dismiss their ideas. AI presents real challenges for higher education globally. The most impactful moment in this article, for me, was Danny Liu's comment about business leaders telling him they don't know if they can trust universities anymore. That sentence should land hard for anyone working in higher education right now. We have spent the last several years worrying about public confidence in higher education, debating whether degrees are worth the investment, and watching employer surveys from AAC&U show that while 70 percent of employers still express confidence in higher education, that confidence is increasingly conditional on graduates demonstrating specific competencies, including AI literacy. If employers begin to suspect that the credential itself no longer verifies what a graduate actually knows and can do, the consequences for institutional credibility are significant. 

Sydney's model is not perfect, and it may not be directly portable. But it represents the kind of serious, institution-wide rethinking that this moment demands. The two-lane framework gives faculty a structured set of options rather than leaving them to figure it out alone, which is where most American institutions currently are. It also sends a clear signal to students and to the market that the university takes the verification of learning seriously. Whether American institutions adopt this specific model or develop their own, the underlying question is the same: Can you prove that the students you are graduating have actually learned what your credential says they have? If the answer is "not reliably," then no amount of AI detection software or syllabus language is going to solve the problem. The redesign has to happen at the level of assessment itself, and it needs to happen with urgency. 

The Importance of Wrestling with Ideas 

From In Defense of Long Writing | Inside Higher Ed 

Julia Morgan McKenzie, director of the Writing Center at Williams College, reflects on why time spent writing matters. 

Our Thoughts  

While I don't normally feature opinion pieces, I chose to feature this one because I think it says something important that a lot of people in higher education are feeling but may not have the language for yet. The author is watching faculty voluntarily dismantle one of the most foundational learning practices in higher education, not because they have found something better, but because they are afraid of the challenges AI presents to written assessments. It's worth spending a few minutes with.  

I might be naive, or maybe just nostalgic, but I believe that writing is one of the most important things students do during their time in college. Writing is not just a form of communication. It is one of the oldest and most enduring forms of it, predating printed books and universities alike. For thousands of years, the act of putting thoughts into written language has been one of the primary ways human beings have organized what they know, challenged what they believe, and worked out what they think. The reason writing has persisted as a cornerstone of education is not tradition for its own sake. It is because writing requires a kind of sustained cognitive effort that few other activities demand. You have to slow down. You have to commit to a position. You have to defend it. And when you revise, you have to be willing to admit that your first attempt was incomplete or wrong, and then do the hard work of making it better.  

The research supports this. One study found that students who engaged in writing-based learning significantly improved their critical thinking skills, specifically in analysis and inference, compared to students who did not write. The researchers attributed this to the cognitive demands of drafting and revision, which require students to organize their thoughts, evaluate their reasoning, and clarify meaning through iteration. That process, the messy, nonlinear work of writing and rewriting, is not a byproduct of learning. It is learning. As the National Commission on Writing emphasized, writing is not simply a way for students to demonstrate what they know; it is a way for them to discover what they know. 

2025 survey by the American Association of University Professors found that 69 percent of faculty believe AI is hurting student success. A College Board survey of over 3,000 faculty found that more than 84 percent agree AI reduces students' critical thinking, originality, and deep engagement with course material. Yet only 21 percent feel confident guiding AI use in their classrooms. That gap between concern and confidence is where the damage is happening. Faculty know something is being lost, but many feel unequipped to protect it, and some are choosing to simply remove the opportunity rather than navigate the uncertainty. As the article puts it, some faculty are "choreographing this loss," and their students are noticing. 

The section about students coming to the writing center, worried because they are not being asked to write at all, stuck with me. The students feel shortchanged. They feel like their professors are taking the easy way out. Those are not the words of students looking for a loophole. Those are the words of students who came to college expecting to be challenged and finding that the challenge has been removed. Students should sit with their ideas. They should wrestle with them. They should engage with other people's arguments and try to understand what an author meant before responding. That process is uncomfortable, and it is supposed to be. It is also irreplaceable.  

That feeling of being shortchanged is not just a student satisfaction issue. It is a retention issue and a credibility issue. Students who feel they are not getting the education they were promised are more likely to disengage, transfer, or leave entirely. At a time when public confidence in higher education is near historic lows and institutions are fighting to justify the cost of attendance, the last thing any campus needs is for its own students to feel like the rigor has been hollowed out. If a student is paying tens of thousands of dollars per year and walking away from courses without ever having written a substantive paper, that is not just a pedagogical failure. It is a broken promise. We spend enormous energy trying to recruit students, retain students, and demonstrate value to families and employers. Voluntarily removing one of the most rigorous and developmentally important parts of the academic experience works against all three of those goals. 

I understand the pressure faculty are under. I understand that 85 percent of college students are using generative AI for coursework and that the tools are only getting better. I understand the temptation to design around the problem rather than through it. But when we stop asking students to write, we are not protecting them from AI. We are conceding to it. And in doing so, we are taking away one of the most powerful tools for intellectual development that we have ever had. The author calls it "long writing," and I think that phrase is exactly right. It requires time, trust, and the willingness to let students struggle. That is not a problem to be solved. It is the point.  

Sparks 

Each week, we try to bring you two or three interesting stories and commentary from the week's higher education news, but that often means leaving some good articles on the cutting room floor. For our 100th issue, we decided to shake things up and highlight a few more articles we feel are worth your time. We hope you enjoy this new addition to The HEat Index!

  • Responding to the Signs of the Times (Inside Higher Ed) - Sara Custer, editor in chief at Inside Higher Ed, makes the case that higher education institutions should take a both-and approach: preparing students for the workforce while maintaining the principles and practices that make a four-year degree valuable. She presents an approachable justification, with examples, that leaves the reader believing her ideas are achievable.
  • The AI ‘hivemind’: Why so many student essays sound alike (The Hechinger Report) - A study from the University of Washington finds that different AI models often produce eerily similar text from the same prompt. If you’re interested in AI, this was a fascinating look at how the models behave.
  • Cornell Module Builds Critical Thinking in AI Era (Inside Higher Ed) - Researchers at Cornell University have developed an online course module that improves students’ critical thinking skills. Since critical thinking is one skill that employers say they expect from college graduates, exploring new ways of teaching it is something all institutions should be interested in.  
    Allen Taylor
    Senior Solutions Ambassador at Evisions

    Allen Taylor is a self-proclaimed higher education and data science nerd. He currently serves as a Senior Solutions Ambassador at Evisions and is based out of Pennsylvania. With over 20 years of higher education experience at numerous public, private, small, and large institutions, Allen has successfully led institution-wide initiatives in areas such as student success, enrollment management, advising, and technology, and has presented at national and regional conferences on his experiences. He holds a Bachelor of Science degree in Anthropology from Western Carolina University and a Master of Science degree in College Student Personnel from The University of Tennessee, and is currently pursuing a PhD in Teaching, Learning, and Technology from Lehigh University. When he's trying to avoid working on his dissertation, you can find him exploring the outdoors, traveling at home and abroad, or in the kitchen trying to coax an even better loaf of bread from the oven.
