What’s Happening in the Market
Mos, a student aid startup, is being accused (including by former employees) of massively inflating its outcomes and numbers (for instance, claiming to have helped 400,000 students access financial aid, when only about 30,000 have paid for services; the rest simply received an email from Mos). The founder, human rights activist Amira Yahyaoui, denies any wrongdoing. (Source: NYT)
Spotify has thrown its hat into the EdTech ring. It just launched a freemium online course offering in the UK, in partnership with Skillshare and the BBC. The courses cover everything from music production and business tools to…how to become a course creator! Last week we talked about how upskilling winners command a captive market, and this is a great example.
What We’re Talking About
The criticism lobbed at Mos seems to have some merit, and it is concerning in light of the whole Frank and JPMorgan fiasco ("Former executives at Frank college aid startup plead not guilty to JPMorgan fraud"). Why is it so appealing for VC-backed startups to try to monetize low-income students and FAFSA? Sure, there's probably "something something helping the underbanked," but it is weird, right? Regardless of whether the criticism against Mos holds up, or whether Frank's founders go to prison, why do startup founders think there is a ton of money to be made connecting aid-seeking students to FAFSA forms? To be fair, the federal government's inefficiency in deploying FAFSA does seem to imply a role for middlemen.
Another FAFSA delay means that two-thirds of colleges now don't believe they'll be able to process financial aid in time for April admissions decisions. There is a backlog of roughly 2 million applications, and many states are contemplating pushing back their own statewide financial aid deadlines.
One Big Idea: Guest Post
We’re so excited to host our second guest post, by James Mattiace. We love learning about assessments and best practices, and James thinks a lot about assessment reform.
James is a veteran educator, teacher, principal, and author of several articles on assessment reform. He has been featured on podcasts and webinars as well as in Ken O’Connor’s Repair Kit for Grading, 3rd Edition, and is a consultant for Big Questions Institute. He is currently authoring a book on what to do once you have adopted Standards Based Grading.
“The Average is Mean” by James Mattiace
Before we dive in, a heads up to our diverse audience: from educators to athletes, and chefs to musicians, there's something here for you. If you are a statistician or actuary, then you should stop reading right now before your brain explodes.
The average is mean. It is a truly aggressive, bully level, steal your lunch money kind of way to represent one’s achievements. It works great for random sampling and baseball stats, but the way we employ it in educational settings is downright cruel.
Take Sloan, for instance, a student who initially struggles with geometry. Despite a rocky start and some early low scores, Sloan eventually finds motivation through a tutor who connects the dots (pun intended) between geometry and its importance for law school entrance exams. This revelation kick-starts Sloan's improvement, yet the traditional method of averaging grades – including early quizzes, exams, a final project, and class participation – pegs them at a C+. This average fails to reflect Sloan's late but significant understanding and mastery of the subject.
Any endurance athlete (runner, cyclist, swimmer) knows that the first attempt at a distance shouldn't be averaged in with all of the other attempts; instead, they measure their performance by their best time, or even more tellingly, their most common time. Averages are also hostage to outliers: measure the average wealth of a group of 10 people, and the moment Bill Gates walks in the room, the average says everyone is a billionaire.
Moreover, the inclusion of zero scores in education can drastically lower an average, demonstrating how a few missteps can disproportionately penalize a student's overall grade.
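The two distortions above (an outlier inflating a mean, a zero deflating one) are easy to see with a few lines of Python. The numbers here are purely illustrative, not real student or wealth data:

```python
from statistics import mean, median

# Hypothetical gradebook: four strong scores and one zero for a missed assignment.
scores = [90, 85, 88, 92, 0]
print(mean(scores))    # 71.0 -- a single zero drags a B+/A- student to a C-
print(median(scores))  # 88   -- far closer to typical performance

# The wealth example: one extreme outlier dominates the mean, barely moves the median.
wealth = [50_000] * 10 + [100_000_000_000]  # ten ordinary people, then one billionaire
print(mean(wealth))    # roughly 9.1 billion per person
print(median(wealth))  # 50,000
```

In both cases the median (or mode) tells a more honest story than the mean, which is exactly the complaint being made about gradebooks.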
But students aren't the only ones being wronged by averages. Teacher recruitment databases sometimes ask administrators to rate teachers 1-10 on a host of issues like classroom management, timeliness, or attitude. One lackluster or underwhelming rating could drag your average below the threshold for the first cut in a recruiting situation, meaning you don't even get looked at. So the shoe can sometimes end up on the other foot.
To address these issues, schools could adopt more refined grading strategies. One approach is to prioritize the most frequently occurring scores (mode) and recent achievements, offering a more accurate representation of a student's current abilities. Alternatively, a decaying average that assigns less weight to earlier scores could mitigate the impact of a rough start. Some teachers toss the lowest score before performing the final calculations, though that seems more of a band-aid than a solution.
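To make the alternatives concrete, here is a small sketch of the strategies above applied to a hypothetical score series for a student like Sloan (rocky start, then mastery). The scores and the decay weight are illustrative assumptions, not any school's actual rubric:

```python
from statistics import mean, multimode

# Hypothetical quiz/exam scores, oldest first: a rough start, then mastery.
scores = [55, 60, 70, 85, 90, 90, 95]

# 1. Simple mean: the early stumbles drag the grade down (~77.9, a C+).
simple = mean(scores)

# 2. Mode: the most frequently occurring score(s).
most_common = multimode(scores)  # [90]

# 3. Decaying average: each newer score carries more weight than the last.
def decaying_average(values, decay=0.65):
    """Weight the i-th oldest of n scores by decay**(n-1-i), so recent work dominates."""
    n = len(values)
    weights = [decay ** (n - 1 - i) for i in range(n)]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# 4. The "band-aid": toss the lowest score, then average the rest.
drop_lowest = mean(sorted(scores)[1:])
```

Under these assumptions the decaying average lands in the high 80s and the mode at 90, both far closer to where Sloan actually finished than the simple mean's C+.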
The most effective method, however, might be to move away from numerical grading altogether, evaluating students against specific success criteria or standards. This approach would provide a clearer picture of a student's competencies and readiness for future challenges, ensuring that Sloan's journey towards law school is judged on relevant achievements rather than a flawed numerical average.