How to Evaluate Transfer Portal Players Using Advanced Analytics

The transfer portal has fundamentally changed how college basketball rosters are built. Every spring, thousands of players enter the portal, and coaching staffs at every level — from D1 power conferences to JUCO — are sorting through them trying to find the right fits.

The problem is that most of this evaluation is still done the old way: watching highlight clips, calling around for background, and eyeballing box score stats. That approach gets coaches burned. You bring in a guy averaging 18 points in a one-bid league, his highlights look electric, and then he gets to the SEC and cannot crack 8 points a game because the athleticism and defensive intensity are on a completely different level. But the same approach also causes you to miss things — the mid-major forward averaging 8 points whose advanced metrics scream breakout candidate, the conference translation gap you did not account for, or the undervalued role player whose efficiency and defensive impact would make him the perfect fit for your roster if you knew where to look.

I have been on both sides of this. I played D1 basketball at the University of Maryland as a walk-on and transferred to Coppin State as a scholarship athlete, where I led the team in conference three-point percentage at 41% (min. 1 attempt per game) and won the “Unsung Hero” award. Then I spent three years as an assistant coach at Coppin State under Juan Dixon, where part of my job was evaluating transfer portal targets. During that time, we won a conference regular season championship, reached a conference tournament championship, and tied the Coppin record for non-conference D1 wins. After that, I became the head coach at Clarke County High School in Virginia, where we had 20 wins and made the state tournament for the first time since 2008 — which earned me Winchester Star Coach of the Year in 2024. Last summer I coached alongside Juan Dixon in The Basketball Tournament (TBT), reaching the Elite 8 for the first time in Shell Shock history.

I am telling you all of that not to brag, but so you understand that the framework I am about to walk you through is not theoretical. It is an approach used to evaluate real players at the D1 level — refined with publicly available analytics tools that any coach or analyst can access.

Here is how to evaluate transfer portal players using advanced analytics, step by step.

Step 1: Start With the Right Data Sources (Not Just Highlights)

Before you look at a single stat, you need to know where to get reliable data. Highlight tapes are marketing materials. They show you what the player wants you to see. Analytics show you everything that happened, including the possessions that never make the reel.

Here are the tools I recommend. All of them are either free or very cheap:

KenPom (kenpom.com, $24.95/year) — This is the gold standard for tempo-adjusted team and player ratings. KenPom adjusts every stat for opponent strength, pace, and game context. When someone says a team has the “#1 defense in the country,” they are usually citing KenPom’s adjusted defensive efficiency. For player evaluation, KenPom gives you offensive and defensive ratings, usage rates, and the Four Factors for team context.

Barttorvik (barttorvik.com, free) — Barttorvik is the most underrated free tool in college basketball. The T-Rank ratings are built on a similar framework to KenPom, but Barttorvik adds recency bias (games from more than 40 days ago are gradually de-emphasized) and offers incredible customization. You can filter player stats by opponent strength tier, by game location, by conference-only play. You can pull game logs and see exactly how a player performed against top-100 opponents versus bottom-200 opponents. That distinction alone can change your entire evaluation.

Sports Reference / CBB (sports-reference.com, free) — This is your basic box score stats, game logs, season totals, and career numbers for every D1 player. It is the starting point for any evaluation.

EvanMiya (evanmiya.com, $30/year, optional) — EvanMiya brings Bayesian Performance Rating (BPR), lineup data, and an Indispensability Score that measures how much worse a team would be without a given player. His transfer portal rankings use projections rather than just raw past stats, which gives you a forward-looking view. Multiple D1 coaching staffs have publicly endorsed his tool.

You do not need all of these. KenPom plus Barttorvik plus Sports Reference gives you 90% of what you need. Add EvanMiya if you want the extra edge on lineup impact and player projections.

Free Download: Basketball Analytics Glossary & Cheat Sheet

Every stat in this post (and more) explained in plain English — with benchmarks, where to find each one online, and a printable cheat sheet. Enter your email and we’ll send it to you.

Step 2: Look at the Stats That Actually Matter

Not all stats are created equal when evaluating transfers. Here is what to focus on and what to be cautious about.

Stats that matter most:

True Shooting Percentage (TS%) — This is the single best measure of scoring efficiency because it accounts for twos, threes, and free throws in one number. A player shooting 45% from the field but 38% from three and 82% from the line is a very different scorer than a player shooting 45% from the field but 28% from three and 65% from the line. TS% captures that difference. For reference, the D1 average is typically around 54%. Anyone above 58% is efficient. Below 50% is a concern.

Usage Rate — This tells you what percentage of team possessions a player uses while on the floor (via a shot attempt, free throw attempt, or turnover). Usage is essential context. A player averaging 18 points per game on 30% usage is a primary scorer carrying a heavy load. A player averaging 18 points on 20% usage is hyper-efficient in a smaller role. You need to know which one you are getting because it tells you how their production will translate when their role changes on your team.

Per-40 Minute Stats — Per-game stats punish players who did not get enough minutes and inflate players who played 35+ minutes a night. Per-40 minute stats normalize for playing time, which is critical in the portal because you are often looking at players who were stuck behind somebody else. The caveat: per-40 stats can be misleading for very low-minute players (under 15 minutes per game). Use them as directional, not gospel, for guys playing limited minutes.

Offensive and Defensive Rating (ORtg/DRtg) — These are points produced (or allowed) per 100 possessions. They strip out pace and give you a tempo-free look at individual impact. Available through KenPom and Barttorvik.

Turnover Rate (TOV%) — The percentage of possessions a player turns the ball over. Anything above 20% is a red flag, especially for guards. High turnover rates are one of the hardest habits to fix. If a guy is turning it over 22% of the time in the OVC, it is probably going to get worse in the Big Ten.

One important note: what matters most depends on position. For a point guard, assist rate and turnover rate carry more weight than raw scoring — a PG who creates for others efficiently and takes care of the ball is more valuable than one who just puts up points. For a wing, scoring efficiency and versatility on both ends matter most. For a big, rebounding rates (both offensive and defensive), rim efficiency, and shot-blocking impact are the priority. Evaluate players against the standards for their position group, not against a universal checklist. A 12% assist rate is fine for a center but a red flag for a point guard.
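The two workhorse numbers above, true shooting and per-40 rates, are easy to compute yourself from box score totals. Here is a minimal sketch in Python using the standard public formulas (the 0.44 free-throw coefficient is the widely used possession estimate; the sample players are made up for illustration):

```python
def true_shooting(points, fga, fta):
    """TS% = PTS / (2 * (FGA + 0.44 * FTA)).
    The 0.44 coefficient estimates how many FTAs end a possession."""
    return points / (2 * (fga + 0.44 * fta))

def per_40(stat_total, minutes):
    """Scale a counting stat to a 40-minute pace."""
    return stat_total / minutes * 40

# Two hypothetical players with identical raw FG% but different shot diets:
# Player A: 500 pts on 380 FGA, 160 FTA (lives at the line, shoots threes)
# Player B: 430 pts on 380 FGA, 90 FTA
print(round(true_shooting(500, 380, 160), 3))  # 0.555
print(round(true_shooting(430, 380, 90), 3))   # 0.512
```

Same field goal percentage, a four-point gap in true shooting. That gap is exactly what raw FG% hides.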

Stats to be cautious with:

Points per game — Meaningless without context. Conference level, pace, usage rate, and efficiency all matter more than the raw number.

Raw field goal percentage — Does not account for threes or free throws. Use TS% instead.

Counting stats in non-conference play — Non-conference schedules are wildly uneven. A player putting up 22 and 10 against three mid-majors and two low-majors in November may look very different when you check conference-only splits. Always filter for conference play first.

Putting It Together: The Patrick Wessler Example

Let me show you what this looks like in practice with a real player who is having a breakout season this year.

Patrick Wessler is a 7-foot, 250-pound center from Charlotte, North Carolina. He was the 148th-ranked recruit in the 2022 class according to 247Sports, the third-best player in North Carolina, and the 24th-best center nationally. He chose Virginia Tech over Providence, USC, NC State, and Ole Miss.

Then he disappeared.

He redshirted his entire freshman year. As a redshirt freshman, he appeared in just 13 games with a total of 50 minutes on the floor for the entire season. As a redshirt sophomore in 2024-25, he played in 31 games but started only once. He averaged 3.9 points and 2.9 rebounds in 10.6 minutes per game. If you pulled up his page on Sports Reference and glanced at the box score, you would see a three-year college player averaging under 4 points a game. A lot of evaluators would move on right there.

But the analytics told a completely different story.

Efficiency was elite. Wessler shot 62.5% from the field at Virginia Tech — the highest effective field goal percentage on the entire roster, per KenPom. That is not a fluke number from garbage time. He scored 10 points at Duke on 5-of-6 shooting. He scored 10 against Miami. When he got real minutes, he produced at a high rate every single time.

The late-season sample confirmed it. After teammate Mylyjael Poteat went down with an injury in late February, Wessler finally got extended minutes against ACC competition. Over that stretch against Miami, Louisville, Syracuse, North Carolina, and Clemson, he averaged 7.4 points and 4.0 rebounds per game while shooting 81.8% from the field — 18 of 22. He also blocked four shots in that span. This was not a guy feasting on weak opponents. This was ACC production at an elite efficiency rate.

The per-40 minute numbers were screaming. When you take Wessler’s production and scale it to 40 minutes, his projected output was dramatically higher than the 3.9 points his box score showed. He was producing at a high level on a per-minute basis — he just was not getting the minutes. The raw stats were a product of opportunity, not ability.
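You can verify that claim in three lines. Scaling the per-game averages quoted above (3.9 points and 2.9 rebounds in 10.6 minutes) to a 40-minute pace:

```python
minutes = 10.6
pts, reb = 3.9, 2.9

# Per-40 of per-game averages is valid because minutes are also per game.
pts_per_40 = pts / minutes * 40
reb_per_40 = reb / minutes * 40

print(round(pts_per_40, 1), round(reb_per_40, 1))  # 14.7 10.9
```

Roughly 15 points and 11 rebounds per 40 minutes: starter-level production hiding inside a 10-minute role.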

What the framework said: If you ran Wessler through the evaluation steps in this post — check efficiency metrics, look at per-40 numbers, account for the conference he was playing in (ACC), and flag the late-season sample as a confirmation signal — you would have identified him as a player who could be a dominant starter at the mid-major level if given a full role.

What actually happened: Wessler transferred to UNC Wilmington in the CAA. This season, he is averaging 13.8 points and 9.6 rebounds per game while shooting 60.6% from the field. He was named All-CAA First Team. He posted a 19-point, 14-rebound, 6-block game against Hampton. He put up 21 and 13 in the CAA Tournament. UNCW won the CAA regular season championship with a 26-5 record, and Wessler was the anchor of the team.

From 3.9 points per game in the ACC to 13.8 points, 9.6 rebounds, and an All-Conference selection. The box score at Virginia Tech said “backup.” The analytics said “starter who has not gotten his chance yet.” The analytics were right.

That is what this framework does. It finds the Patrick Wesslers before everyone else does.

Step 3: The Conference Translation Problem

This is where most evaluations go wrong — and where analytics give you the biggest edge.

A player averaging 16 points and 7 rebounds in the Patriot League is not the same player as someone averaging 16 and 7 in the SEC. The competition level is drastically different. If you do not account for conference strength when projecting how a player will perform on your roster, you are guessing.

Here is how to do it with publicly available data:

Use KenPom’s conference average adjusted efficiencies. Every conference has an average AdjOE (adjusted offensive efficiency) and AdjDE (adjusted defensive efficiency). The difference between conferences tells you how much harder it is to score and defend at the next level. If a player is coming from a conference with an average AdjDE of 100 to your conference where the average AdjDE is 94, he is facing significantly tougher defenses. His raw stats will almost certainly drop. The question is by how much.

Use Barttorvik’s opponent-strength splits. This is the single most useful filter for projecting transfers. Pull up the player’s game logs on Barttorvik and filter for games against top-100 KenPom opponents only. How did he perform against good teams? If a guy scores 20 per game overall but drops to 11 per game against the top 100, that tells you his production is heavily inflated by weak competition. Conversely, if a mid-major scorer maintains 16+ against top-100 opponents, that is a strong signal his game will translate up.

Factor in role changes. A player going from a 28% usage rate on a bad team to an 18% usage rate as your third option is going to see his counting stats drop no matter what. That does not mean the transfer failed — it means his role changed. Project the new usage rate based on your existing roster, then estimate what his efficiency would look like at that usage based on the trends in his game log.
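The conference and role adjustments above can be combined into a rough back-of-the-envelope projection. The multiplicative model below is a simplification I am using purely for illustration (real staffs tune these factors against historical transfer data), and the AdjDE values and player numbers are hypothetical:

```python
def project_ppg(ppg, old_conf_adjde, new_conf_adjde, old_usage, new_usage):
    """Rough transfer scoring projection.
    Scoring is scaled by how much tougher the new conference's defenses
    are (lower average AdjDE = tougher, so the ratio is < 1 when moving
    up) and roughly linearly by the player's new share of possessions.
    """
    defense_factor = new_conf_adjde / old_conf_adjde
    usage_factor = new_usage / old_usage
    return ppg * defense_factor * usage_factor

# Hypothetical: a 16 ppg scorer leaving a league whose average AdjDE is
# 104 for one averaging 98, and dropping from 28% usage to 22% usage as
# the third option on his new roster.
print(round(project_ppg(16, 104, 98, 0.28, 0.22), 1))  # 11.8
```

Even a crude model like this beats guessing: it forces you to decide, before the season starts, what a "successful" stat line for the transfer actually looks like.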

The Wessler example from Step 2 is a textbook case of conference translation working in a player’s favor. He was competing against elite-level athletes in the ACC every night, and his efficiency was still off the charts. Moving to the CAA, where his 7-foot frame, soft touch, and high-level efficiency created a mismatch that most opponents simply could not handle, was exactly what the conference translation data would have predicted. The same framework works in reverse — it tells you when a player moving up in competition is likely to see his numbers drop, and by roughly how much.

Step 4: The Red Flag Checklist

Before you get excited about any portal prospect, run through these warning signs:

High usage + low efficiency. A player with a 28% usage rate and a TS% below 52% is a volume scorer. He is getting shots because his team has no one else, not because he is efficient. When he moves to a better team with better options, he either accepts a reduced role (not guaranteed) or continues chucking (bad outcome).

Conference play stats significantly worse than non-conference. If the gap is more than 15-20% on key stats, the non-conference numbers are inflating the picture.

Low free throw percentage for a guard. Below 70% from the line for a guard is a real concern. Free throw shooting is the best predictor of whether shooting percentages are sustainable. A guy hitting 36% from three on low volume but shooting 63% from the line is a regression candidate.

High turnover rate (above 20%). Turnovers are one of the stickiest habits in basketball. They rarely improve significantly in a new environment, and they often get worse against tougher competition.

Declining year-over-year stats without injury. If a junior is producing less than he did as a sophomore, something is going on. Could be coaching fit, could be effort, could be that the game sped up on him. Regardless, a downward trajectory is a red flag.

Very low minutes despite being an upperclassman. If a junior is playing 12 minutes a game at a mid-major, his coaching staff does not trust him. Coaching staffs see these players in practice every day. If a player cannot earn minutes from a mid-major coach, ask yourself why you think he will perform for you, even if your conference is a tier lower.
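Most of this checklist is mechanical enough to automate as a first screening pass. Here is a sketch with the thresholds taken straight from the checklist above; the stat-dictionary field names and the sample prospect are my own invention, not any site's API:

```python
def red_flags(p):
    """Return the warning signs a prospect trips.
    `p` is a dict of season stats; thresholds mirror the checklist."""
    flags = []
    if p["usage"] > 0.28 and p["ts_pct"] < 0.52:
        flags.append("high usage + low efficiency (volume scorer)")
    if p["nonconf_ppg"] > 0 and p["conf_ppg"] / p["nonconf_ppg"] < 0.80:
        flags.append("conference scoring 20%+ below non-conference")
    if p["position"] == "G" and p["ft_pct"] < 0.70:
        flags.append("guard shooting under 70% from the line")
    if p["tov_pct"] > 0.20:
        flags.append("turnover rate above 20%")
    return flags

prospect = {  # hypothetical portal guard
    "position": "G", "usage": 0.30, "ts_pct": 0.50,
    "conf_ppg": 13.0, "nonconf_ppg": 19.0,
    "ft_pct": 0.64, "tov_pct": 0.23,
}
for f in red_flags(prospect):
    print("FLAG:", f)
```

The softer flags (declining trajectory, low minutes as an upperclassman) still need your judgment and a film check, but a pass like this clears out the obvious misses before you spend an evening on tape.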

Step 5: Build a Systematic Evaluation (Not a Gut Feeling)

The biggest mistake coaches make in the portal is evaluating players based on vibes. You watch a highlight, you like what you see, you call around and hear good things, and you pull the trigger.

The coaches who win in the portal are the ones who build a system. They define exactly what stats they care about, what thresholds they set for each position, and what conference translation adjustments they apply — before they start looking at individual players. Then they run every prospect through that same framework so they are comparing apples to apples.

That is exactly what I built The Breakdown’s Transfer Portal Evaluation Engine to do.

The Engine pulls player data from Barttorvik, runs conference-to-conference translation projections, identifies comparable historical transfers, and flags red flags automatically. Instead of spending hours manually pulling stats from three different websites, you type in a player’s name and get a comprehensive evaluation in seconds. You can run 50 players through it in the time it would normally take you to evaluate 3.

It covers every D1 player, with JUCO and D2/D3 coverage expanding over time. It includes a watchlist feature where you can track prospects and save notes.

The Engine launches before the portal opens on April 7. If you want early access, join the waitlist here — you will be the first to know when it goes live.

The Bottom Line

The transfer portal is not going away. If anything, it is going to get more competitive every year as more programs invest in analytics-driven roster building. The coaching staffs that win in the portal will be the ones who combine traditional evaluation (film, relationships, character assessment) with systematic analytics.

You do not need a six-figure analytics budget to do this. KenPom, Barttorvik, and a disciplined framework will put you ahead of the staffs that are still relying on points-per-game and highlight reels.

Start with the data. Build a system. Trust the process.


Kent Auslander is the founder of The Breakdown. He is a former D1 player (Maryland, Coppin State), former D1 assistant coach (Coppin State), and 2024 Coach of the Year (Clarke County HS, Virginia). His Transfer Portal Evaluation Engine launches April 2026. Join the waitlist →
