For those able to attend, I hope you had an energizing experience at the Evisions Conference, Ellucian Live, or both over the past week! In this week's issue, we look at two reports about changes in the composition of financial aid packages and close with an opinion piece about the importance of learning the "why" in the new age of AI.
Merit Aid vs. Need-Based Aid
From Merit Aid Outpacing Need-Based Aid Among All Institutions | Inside Higher Ed
Two new reports find that merit aid usage has increased and now outpaces need-based financial aid.
Our Thoughts
I've written before about the need for real price transparency in higher education, and this article gives me another reason to keep pulling on that thread. Before I get into the specifics, though, I want to start by reframing what's actually happening here. "Merit aid" sounds like an institution rewarding academic excellence. However, when you read that 75.2 percent of students with no financial need at small-endowment private institutions received institutional aid, or that a student with no demonstrated need at a large-endowment private institution walks away with an average award of over $24,000, we're no longer discussing financial aid earned through academic achievement. Instead, we're looking at a pricing strategy dressed up with a friendlier name.
Enrollment management offices call this tuition discounting, and they run sophisticated models to calculate exactly how much discount a particular student needs to yield at a particular institution. What shows up in a student's award letter as a scholarship is, on the institutional side, a line item in a net tuition revenue projection. Some of these awards are funded through institutional endowments, but many are simply a discount on the published price: revenue the institution never collects. According to the NACUBO annual tuition discounting study, the institutional discount rate at private nonprofits has recently climbed above 56 percent for first-year students. This means that almost no one at private nonprofits pays the published sticker price, and the gap between the published price and the actual price has become the single most confusing aspect of how families experience the college financial process.
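To make the discount-rate idea concrete, here is a minimal sketch of the arithmetic. Aside from the 56 percent figure cited above, every number here (the cohort size, the sticker price, the aid total) is invented for illustration, not drawn from the NACUBO study:

```python
def discount_rate(gross_tuition_revenue: float, institutional_aid: float) -> float:
    """Institutional discount rate: grant aid awarded as a share of
    the gross tuition revenue the sticker price would have generated."""
    return institutional_aid / gross_tuition_revenue

# A hypothetical first-year cohort: 1,000 students at a $60,000 sticker price.
gross = 1_000 * 60_000   # $60M in gross tuition revenue at full price
aid = 33_600_000         # $33.6M awarded back as institutional grants

rate = discount_rate(gross, aid)
net_per_student = (gross - aid) / 1_000

print(f"Discount rate: {rate:.0%}")                       # Discount rate: 56%
print(f"Average net tuition per student: ${net_per_student:,.0f}")
```

At a 56 percent discount rate, the "real" average price in this sketch is $26,400, less than half the sticker, which is exactly the published-versus-actual gap families have no reliable way to see in advance.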
Once you see merit aid as a pricing strategy rather than a program that rewards academic achievement, a lot of the distributional patterns in the NACAC study make more sense. Small-endowment privates are the most aggressive discounters because they have the least pricing power and the most acute revenue pressure. Wealthier students receive larger total aid packages at private institutions because enrollment models show that a student who can pay $60,000 but is offered $20,000 in aid generates more net revenue than a student who can pay $20,000 and is offered $15,000 in aid, even though the second student has greater need. White students are more likely to receive merit aid than nonwhite students because the metrics that trigger merit awards (test scores, unweighted GPAs, activity profiles, honors curricula) track the accumulated resources of a student's upbringing. When you build a rewards system around inputs that correlate with wealth and family social capital, you produce an award distribution that tracks racial and socioeconomic inequality rather than merit or potential. That outcome isn't a failure of the system; it's the system performing as designed.
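The yield arithmetic above can be sketched in a few lines. The $60,000 sticker price and the simplifying assumption that an enrolling student pays the lesser of the discounted price and what the family can afford are mine, not the article's; real enrollment models are far more elaborate, but the directional logic is the same:

```python
STICKER = 60_000  # hypothetical published price

def net_revenue(ability_to_pay: float, aid_offered: float) -> float:
    """Net tuition revenue if the student enrolls: the lesser of the
    discounted price and what the family can actually pay."""
    discounted_price = STICKER - aid_offered
    return min(discounted_price, ability_to_pay)

# The article's two illustrative students:
full_pay = net_revenue(ability_to_pay=60_000, aid_offered=20_000)   # 40,000
high_need = net_revenue(ability_to_pay=20_000, aid_offered=15_000)  # 20,000

print(full_pay, high_need)
```

Even under this generous simplification, the full-pay student yields twice the net revenue of the high-need student, which is why the discount flows toward families who need it least.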
I know that last point may feel uncomfortable for some, and to others, it may feel like I'm dancing around some type of DEI discussion. That's not my intention, but I'd rather name it clearly. We tend to talk about these disparities as though they reflect a gap in the aid process that could be closed with better intentions, but the aid process isn't broken. It's doing what enrollment management designed it to do—maximize net tuition revenue by matching discounts to students whose yield behavior can be predicted. Equity isn't a variable in that equation unless an institution explicitly makes it one. Many don't, not because they don't care, but because they can't afford to.
This brings us full circle to price transparency. Families making what is, for many of them, the largest financial decision of their lives are doing it without access to anything resembling a real price. Net price calculators were supposed to help with this, but many produce estimates that vary widely from the final award. The actual award comes late in the decision cycle, after students have already shaped their college list based on assumptions about affordability that the institution could have corrected months earlier but chose not to. The College Board's most recent Trends in College Pricing report shows the average published price at private nonprofit four-year institutions sitting above $43,000, while the average net price after grant aid lands closer to $17,000. Those are not the same number, and a family looking at the first one has no reliable way to predict the second.
For me, the question is not whether merit aid is good or bad as a general practice. Each campus will have to decide that for itself. The more pressing question is whether your institution is prepared for a more informed public conversation about what tuition pricing is actually doing. If two major studies in quick succession draw attention to how much institutional aid is going to students who don't need it, and if that conversation reaches families and legislators already skeptical of the cost of attendance, institutions that can explain their pricing clearly and defend it on something more than revenue grounds will be in a stronger position than those that can't. That requires some uncomfortable internal honesty about what merit aid actually accomplishes, who it serves, and whether the current discount strategy is sustainable given the demographic and financial pressures every campus is already navigating. The institutions that come through the next decade in the best shape will probably be the ones that stop treating their pricing model as something to be obscured and start treating it as something to be communicated.
The Why of Learning
From When AI Can Do Everything, What Is Left to Learn? | The Chronicle of Higher Education
Chrysanthos Dellarocas, professor of information systems at Boston University, argues that instructors should spend less time focusing on the what of learning and more time focusing on why the material is learned in the first place.
Our Thoughts
I wish I saw more of this type of writing about AI in higher education. As I finish up my PhD, I've spent quite a bit of time reading and thinking about the impacts of AI in the classroom. So much of that writing is either panicked accounts of students cheating or proclamations that everything will be fine if faculty just adapt to the new normal. Dellarocas charts a middle path: he sat with the uncomfortable moment when his course felt obsolete, worked through it honestly, and came out the other side with a redesigned course and a useful framework. I'm sure he's not the only faculty member to work through this problem in his course. We need to see more examples of faculty writing about what they learned when they did something similar.
I really like his distinction between artifact production and artifact reasoning because it names the two components of a learning activity that our assessments have long treated as one. For decades we treated the ability to produce an essay, a query, a model, or an analysis as evidence of the thinking that produced it. That shortcut worked because you could not build the artifact without building the capability, but AI has broken that pathway. Production of a learning artifact no longer equates to mastery or even understanding of the subject matter. Instead, we are forced to dig deeper to validate whether students actually understand the material.
This leads to the most practically useful idea in the piece—the reframing of the post-exercise debrief from "Did it work?" to "What did you have to decide to make it work, and what would have happened if you had decided differently?" That is a small change in classroom practice with significant implications for what students actually learn, and a faculty member can try it next week without redesigning anything structural.
Between this piece, the University of Sydney's two-lane assessment framework, and the defense of long writing we discussed a few weeks ago, the outlines of what serious adaptation looks like are starting to come into focus. It is not detection software, honor code revisions, or waiting for the technology to stabilize. It is faculty and institutions redesigning the experience so it still teaches what they were actually trying to teach, while helping our students sit with the discomfort that real learning requires. The more examples like this one we have in circulation, the easier it becomes for the next faculty member facing that mid-lecture moment to imagine what to do next.
Sparks
- Colleges Were Sweating a Major Compliance Deadline. Now the Justice Dept. Has Delayed It. (The Chronicle of Higher Education) - The Justice Department has delayed compliance with the new federal accessibility standards for digital materials until April 2027. I included this in case you missed the announcement and were panicking about being out of compliance.
- The college transfer generation (Community College Daily) - Bruno V. Manno looks at recent research on the increasing population of transfer students. I appreciate that in addition to looking at the data, he also provides some suggestions for how to improve the transfer process.
- Employers say they struggle to find graduates with the right AI skillset (Higher Ed Dive) - A new report from Pearson and Amazon Web Services reveals a mismatch between the AI skills employers expect and the preparation new graduates are receiving. Given the pace of AI development and the number of failed enterprise AI projects, I'm not convinced employers know what to expect, but the study still matters because it feeds the narrative that institutions aren't preparing graduates for the workforce.

