Looking back at how accurate last year’s rankings ended up being
For the last several years I have been putting out my numbers-based transfer portal rankings. The job has gotten harder as there are more entries every year, but I’ve done my best to figure out ways to streamline the process.
The objective is to try to put out rankings that are based on more than just gut feeling and are directionally correct. I am not advocating that Player A has a score of 78.3 and Player B has a score of 78.1 and therefore Player A is definitively better than Player B. There are inherent difficulties in comparing across positions, and on top of that, some teams might be looking for a 6’3 outside red-zone threat to complement their roster while others need a 5’10 speedster in the slot. The goal is to put players into similar tiers.
I use three main components as part of the rankings, and admittedly all of them are flawed in some way. But the only way to do rankings like these is to have some type of numerical input for every player, so options are somewhat limited.
The first part uses the 247Sports Composite recruiting rankings. If Player A and Player B have the exact same college resume, you’re probably going to prefer the one who was a five-star over the one who was a two-star as a tiebreaker. For those who haven’t appeared in a college game yet, recruiting rankings are the only data input we have. They aren’t perfect, but they’re better than nothing. I scale back the impact of the recruiting ranking the longer a player has been in college but never drop it to zero.
The second part uses adjusted snap counts. I take the number of offensive/defensive snaps a player has played and compare it to the average number expected for someone who has been in college as long as they have. That means a freshman starter will probably get a higher experience score from my system than a senior backup, even though he’s played fewer overall snaps, because the average freshman doesn’t play very much.
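As a rough illustration of the adjusted snap count idea, here is a minimal sketch. The expected-snaps-by-class averages below are placeholders I made up for the example, not the actual numbers used in the rankings:

```python
# Sketch of the adjusted snap count idea. The expected-snaps averages
# below are invented for illustration, not the real values.
EXPECTED_SNAPS = {
    "freshman": 150,
    "sophomore": 350,
    "junior": 500,
    "senior": 600,
}

def experience_score(snaps_played: int, class_year: str) -> float:
    """Compare a player's snaps to the average for his class year."""
    expected = EXPECTED_SNAPS[class_year]
    return snaps_played / expected

# A freshman starter outscores a senior backup despite the raw totals:
freshman_starter = experience_score(600, "freshman")  # 4.0
senior_backup = experience_score(250, "senior")       # ~0.42
```

The point of the normalization is exactly what the comparison shows: 600 snaps is a huge total for a freshman but 250 is well below par for a senior, so the freshman scores higher.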
The third part uses PFF player grades. There are plenty of flaws with PFF’s methodologies but they provide a numerical way to compare across positions and are usually directionally correct. The players who most people view as the best tend to have the highest PFF grades even if they aren’t one-to-one overlaps.
What are rankings though without some measure of accountability? This is my attempt at said accountability.
One year of data isn’t necessarily the best way to track things. There are going to be players who had multiple years of eligibility left last offseason, were only so-so this season, and become all-conference level in future seasons. But for right now, this is what we’ve got. I’m putting together my own productivity measure that factors in how much someone played and how well they played, using snap counts and PFF grade. A player who played 700+ snaps this season with a PFF grade of 85.0+ would receive a perfect 100.0 for productivity. Things scale down from there, with the playing time and performance components weighted evenly.
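One simple way to express that productivity measure is below. This is my reading of the description (linear scaling toward each cap with an even 50/50 split), not necessarily the exact internal formula:

```python
def productivity_score(snaps: int, pff_grade: float) -> float:
    """Evenly weighted blend of playing time and performance.

    700+ snaps with an 85.0+ PFF grade maxes out at 100.0; both
    components are assumed to scale linearly up to those caps.
    """
    playing_time = min(snaps / 700, 1.0) * 50   # half the score: how much he played
    performance = min(pff_grade / 85.0, 1.0) * 50  # half the score: how well he played
    return playing_time + performance

full_time_elite = productivity_score(750, 88.0)   # hits the 100.0 ceiling
part_time_middling = productivity_score(350, 63.75)  # lands at 62.5
```

A part-time player at half the snap cap with a grade three-quarters of the way to the cap lands at 25 + 37.5 = 62.5, which matches the “evenly weighted” framing above.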
I’m making two exclusions for the analysis: players who played zero snaps and players who had a transfer grade below 40.0.
The reason for taking out the ones who played zero snaps is so that I’m not punishing the rankings if a player suffered a preseason ACL tear, for instance. I could research every single player to find out whether he got benched or got hurt, but that would take months, so instead I’m going to assume it’s injury/eligibility-related for this part.
The reason for taking out the ones with a transfer grade below 40.0 is that very few players transferring up to the P4 level scored below that threshold. Either they were coming from D2 or below, where I didn’t have any data, or they excelled in at best one of the three main categories. An example of a 40.0 player was OT Jordan Hall, who transferred from UAB to North Carolina last year. He was a low-three-star recruit in 2023 who redshirted his freshman year, then played 156 snaps as a reserve in 2024, making one start with a below-average PFF grade. That’s who we’re talking about. Only about one in eight transfers below a 40.0 ended up at a power conference school, and many of those were either walk-ons or kickers/long snappers.
TRANSFERS FROM FCS
Let’s start by looking at the players transferring up from the FCS level. I have players grouped by their total Transfer Grade in 10-point bins, with the average productivity score shown for each bin. A darker color indicates that there were more total players in that bin; the 60-70 range is the most common overall.
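Mechanically, dropping each player into his 10-point bin is just integer division. A quick sketch with made-up grades:

```python
from collections import Counter

def grade_bin(transfer_grade: float) -> str:
    """Map a transfer grade to its 10-point bin, e.g. 64.2 -> '60-70'."""
    low = int(transfer_grade // 10) * 10
    return f"{low}-{low + 10}"

# Invented sample grades, just to show how bin counts come together:
sample_grades = [64.2, 68.9, 71.5, 55.0, 62.3, 66.0, 48.7]
bin_counts = Counter(grade_bin(g) for g in sample_grades)
most_common_bin = bin_counts.most_common(1)[0][0]  # '60-70' in this toy sample
```

The bin counts are what drive the shading in the tables: more players in a bin, darker the cell.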
You can see here that, for the most part, the productivity score went up as transfer grades went up, once you exclude the bins with very low sample sizes. The ones who went up from the FCS to the G5 ranks climbed higher until you get to those with an 80+ transfer grade. Only two players fell into that bucket, though: one was a solid starting QB in Conference USA while the other flamed out.
It’s a similar story for the P4 transfers, where there are slight increases up until the 80+ category, which has a very small sample size, since just about the only players who can score that high coming from the FCS are former P4 players who dropped down and are now transferring back up.
It also isn’t surprising that the scores for the FCS to G5 level are higher than the FCS to P4 level within similar bins. If those players are in fact similar in quality, then you’d expect them to play more and play better against worse competition in the G5 than they would for a power conference team.
TRANSFERS FROM G5
The results are similar for the players transferring from the G5 ranks, although there are definitely some bigger plateaus. For the players transferring down to FCS there are essentially just two groups: those who scored between 40 and 60, and those in the 60-80 range.
It’s an even bigger middling mess for the players transferring within the G5, as anyone between a 40 and a 70 finished with nearly identical year-one productivity. There’s a big jump up in the 70-80 range and another for the 80+, although that’s admittedly a very small sample once again. Only four of the players staying within the G5 were in the 80+ range, but three of them played 750+ snaps with a 73+ PFF grade, which signals they were all-conference or close-to-it type performers.
For the ones moving up a level to the P4, there is a lot more separation between the various bins until you reach the 70-90 range, where there is almost no difference. Roughly half of those players wound up being consistent starters at their new school with 500+ snaps, and about 40% of them finished with a PFF grade of 70+.
TRANSFERS FROM P4
Now let’s look at the transfers coming from the P4 level.
The ones who dropped all the way down to FCS once again fell into roughly two groupings, 40-60 and 60+, that were very close in their ultimate production scores. If a player who hasn’t performed very well drops all the way down to that level, he’ll probably play but may not be a clear star. The ones who showed they could at least be part-time contributors at the power conference level were usually impact players in FCS.
Each of the bins for the ones who dropped down just one level from P4 to G5 goes up until you hit the 80+ mark, which had only two members (QB Maalik Murphy from Duke to Oregon State and RB Jaylan Glover from Utah to UNLV).
Finally, the P4 to P4 transfers went up between every bin as their transfer grade rose, which is what I’d like to see. There were bigger jumps from 40 to 50, 60 to 70, and 80 to 90, and smaller jumps between the others.
TOP-RATED TRANSFERS
The highest grouping was that 90+ bucket, players I would normally say are clear all-conference candidates. There were seven who qualified last year. Here’s how they did:
QB Tyler Van Dyke, Wisconsin to SMU- Tore ACL previous season and did not return in 2025 while rehabbing
RB Jaydn Ott, Cal to Oklahoma- 69 snaps, 54.5 PFF grade; Never broke into RB rotation for a CFP team
QB Carson Beck, Georgia to Miami- 776 snaps, 80.6 PFF grade; Starting QB for CFP team
QB Nico Iamaleava, Tennessee to UCLA- 684 snaps, 74.6 PFF grade; Starting QB for non-bowl team
ED David Bailey, Stanford to Texas Tech- 510 snaps, 92.4 PFF grade; All-conference pass rusher for CFP team
WR Dane Key, Kentucky to Nebraska- 581 snaps, 64.7 PFF grade; Starting WR for bowl team, career low rec yds
QB Preston Stone, SMU to Northwestern- 778 snaps, 67.6 PFF grade; Starting QB for bowl team, career low stats
It ended up a bit of a mixed bag. Four of the seven were quarterbacks who either had been supplanted at their school after starting previously or whose school felt it could find better NIL value elsewhere. That included three who left schools that had made the College Football Playoff but failed to advance to the semifinals and thought they could upgrade at quarterback. Nico and Stone both went to schools with much worse supporting casts and unsurprisingly struggled at times. Beck went to another team with CFP-level talent and squeaked into the tournament as the last team in the field.
David Bailey was by far the biggest home run. He put up great advanced stats as a former four-star recruit while at Stanford but when surrounded by other all-conference level players he became arguably the best pass rusher in the country this season.
Dane Key was never a star for Kentucky but was 3-for-3 in putting up above-average seasons while playing for a team that didn’t have a great passing game around him. He went to Nebraska, which theoretically should’ve had a good passing game, but Dylan Raiola greatly underperformed despite having plenty of weapons to throw to, and Key saw career-worst stats almost across the board as a senior.
Finally, Jaydn Ott had battled injuries during his last season at Cal, but the thought was he’d get healthy and once again be an impact player for Oklahoma. Instead, he was seemingly in the doghouse from day one. It appeared to be a case of a GM paying NIL money for a player the coaching staff didn’t really want all that badly, and he never made an impact.
What lessons can we take from those case studies?
If a player didn’t finish the previous season as a starter despite earlier success, he probably isn’t going to immediately bounce back just by finding a new home. For this year’s rankings I added a clause that penalizes players whose playing time appeared to be going backwards during the year right before they transferred, to account for either injury or declining performance.
LIKELY STARTERS AND STANDOUTS
Let’s finish out with two more views. The first is the percentage of players transferring into each level of the sport who wound up playing at least 400 regular season snaps (again excluding those who played zero snaps). Using Washington as an example, 400 is roughly the cutoff where there were 11 players on each side of the ball who met that threshold. Depending on injuries and how often a coaching staff rotates, that may be a little higher or lower, but if you played 400 snaps you were at the very least a heavy rotation piece and probably started at least a few games.
For players who transferred down to the FCS level there’s a clear jump up from the 50 bin to the 60 bin but then what should’ve been the elite players actually didn’t play all that much. It’s probably reasonable to conclude that if a player has a 70+ grade but still can’t find a spot in the G5 or P4 then there’s likely something else going on that my system isn’t capturing.
At the G5 level there’s a relative plateau for just about everyone between a 40 and a 70 which caps out below 40%. That takes a big jump up to over 50% for the 70 bin and finally to over 60% for the 80 bin.
You can tell that there’s a minimum score needed at the P4 level to have any kind of reasonable shot at starting, as only 14.5% of those between a 40 and a 50 ended up playing that much. There were gradual steps up in the 50 and 60 bins, but just like at the G5 level, those players still started less than 40% of the time. That popped up to closer to 60% or better for anyone with a score of at least 70. So if you’re hoping for Washington to land a starter in the portal, it’s safest if they’re at least at that score.
To finish it off, let’s add in performance rather than just playing time. Here’s our true standout list: the percentage of players who both played at least 400 snaps and finished with an overall PFF grade of 70.0+. For Washington this past year, only two transfers would’ve qualified: LT Carver Willis and S Alex McLaughlin. DT Anterio Thompson played 397 snaps including the bowl game, so he didn’t quite make it on that part of the criteria.
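The two thresholds behind these last views can be written as a pair of simple filters. The stat lines below are hypothetical, chosen only to show each way a player can miss (they are not the actual Washington numbers):

```python
def is_starter(snaps: int) -> bool:
    """At least a heavy rotation piece: 400+ regular season snaps."""
    return snaps >= 400

def is_standout(snaps: int, pff_grade: float) -> bool:
    """Starter-level playing time AND a PFF grade of 70.0 or better."""
    return is_starter(snaps) and pff_grade >= 70.0

# Hypothetical stat lines illustrating the three outcomes:
qualifies = is_standout(644, 74.0)        # enough snaps, good grade
misses_on_snaps = is_standout(397, 80.0)  # graded well, just under 400 snaps
misses_on_grade = is_standout(550, 64.7)  # played plenty, graded too low
```

Since every standout is by definition a starter, the standout percentages in each bin can never exceed the starter percentages from the previous view.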
By definition these numbers had to be smaller than the ones in the starter section, since this is a subset of that group. The sub-60 bins are below 20% at all levels. Maybe you end up finding a high-level starter who scored that low in the rankings, but the odds say that’s incredibly unlikely, closer to 5% at the P4 level.
Once you get above 70, though, it becomes closer to a one-in-three proposition to land that caliber of player among those who stay healthy/eligible enough to play at least one snap. It would be nice to have a little more reliability than that, but even the NFL draft is a crapshoot, with millions of dollars poured into scouting a much smaller group of players.