On Improving the NCAA Tourney: Part I

Posted by Brad Jenkins (@bradjenk) on March 28th, 2017

Last June the National Association of Basketball Coaches (NABC) empaneled an ad hoc committee whose stated purpose was to provide the NCAA Division I Men's Basketball Selection Committee with a perspective from men's basketball coaches and their teams regarding selection, seeding and bracketing for the NCAA Tournament. The NCAA has in recent years become increasingly receptive to considering and making changes, and as this year's event reaches its climax, we decided to offer some specific recommendations to bolster the best three weeks in sports. Let's focus today on improvements to the selection and seeding process.

John Calipari is one of the members of the recently created NABC ad hoc committee formed to make recommendations to the NCAA Selection Committee. (Kevin Jairaj/USA TODAY Sports)

Bye Bye, RPI

Whenever the subject arises of improving the primary metric that the Selection Committee uses, there is one recurring response: Either dump the RPI altogether, or dramatically limit its influence. The good news is that we may finally be headed in that direction. A month after the NABC formed its committee and began communicating with the NCAA, the following statement was made as part of an update on the current NCAA Selection Committee:

The basketball committee supported in concept revising the current ranking system utilized in the selection and seeding process, and will work collaboratively with select members of the NABC ad hoc group to study a potentially more effective composite ranking system for possible implementation no earlier than the 2017-18 season.

Moving away from the RPI as the primary method for sorting teams into composite tiers would be a huge step toward improving the balance of the field. For years we have heard committee members make the point that a school's RPI ranking is just one factor of many on its resume. But then the same committee members turn right around and cite that team's record against the top 50 or top 100, or its strength of schedule rating, all of which are derived from the RPI. That means the outdated metric remains, even in today's era of advanced analytics, a highly significant influence on how teams are judged. The real harm occurs when the RPI overrates an entire conference, which leads to its member institutions likewise being over-seeded. Placing five to seven teams well above their proper seed lines can have a substantial negative impact on the overall balance, and corresponding fairness, of the entire NCAA Tournament. Here are three recent examples.

[Table: each conference's RPI rating vs. its KenPom and Sagarin rankings, three recent examples]

In each of the above scenarios, the conference's RPI rating, and therefore the rating of all its members, was considerably higher than the rankings of the two most respected metrics in the business: Ken Pomeroy's and Jeff Sagarin's. When the committee seeded schools from those three conferences, it clearly relied too heavily on the RPI's evaluations, placing their teams several slots higher in the bracket than they deserved. And probably not coincidentally, those teams badly underperformed in those three years. In fact, the Pac-12's lousy 4-7 performance in the 2016 NCAA Tournament came entirely against lower-seeded opponents. This isn't to say that the Pac-12 wasn't a very good conference that year, but it was obvious to nearly any observer that it didn't boast seven of the nation's top 32 teams, which is how it was seeded.
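For context, the RPI itself is a simple weighted formula: 25 percent a team's own winning percentage, 50 percent its opponents' winning percentage, and 25 percent its opponents' opponents' winning percentage. A minimal Python sketch shows why an entire conference tends to rise or fall together: three-quarters of the rating comes from opponents' records, and in conference play those opponents are mostly league mates.

```python
# Minimal sketch of the basic RPI formula. The NCAA's actual version
# also weights wins by location (road wins count more than home wins);
# this shows only the core structure.

def rpi(wp, owp, oowp):
    """Return a team's RPI from its winning percentage (wp), its
    opponents' winning percentage (owp, excluding games against this
    team), and its opponents' opponents' winning percentage (oowp)."""
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp

# Three-quarters of the rating comes from opponents' records, which in
# conference play are mostly a team's own league mates.
print(round(rpi(0.750, 0.600, 0.550), 3))  # 0.625
```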

The good news is that we appear to be moving away from the RPI-centric model. In late January of this year, representatives of the NCAA hosted what can generally be described as an NCAA Tournament Analytics Summit. The meeting included some of the best college basketball ratings experts around, including Pomeroy and Sagarin. Pomeroy in particular came away impressed and encouraged by the process that future selection committees may use to evaluate and compare teams, and the hope is that a new and improved methodology can be in place in time for the 2017-18 NCAA Tournament selection process.

Location, Location, Location

Switching to a straight-up composite ranking derived from multiple metrics, however, won't completely fix things. We would also like to see the committee take another step and incorporate game location into its evaluations for selection and seeding. Currently, a school's win-loss record in road and neutral-site games is the focus when its performance away from home is evaluated, but that fails to consider the strength of the opponents in those particular games. The solution is to rate a team's schedule on both factors at once: quality of opponent and location of the game. According to KenPom, beating the 90th-best team on the road is about as difficult as defeating the 50th-best team at a neutral site or the 20th-best team at home. So let's pick a somewhat arbitrary number (maybe 25?) to add to or subtract from an opponent's ranking to determine the true difficulty of each game. For example, if a school plays the 40th-rated team at home, that game is adjusted to a difficulty rating of 65, since the home venue makes the matchup play like a neutral-site game against a weaker team. KenPom already does something similar, giving an "A" rating to a game that grades out as a top-25 matchup once the contest is adjusted for location.
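To make the proposal concrete, here is a minimal Python sketch of that adjustment. The function name and the flat 25-slot offset are our own illustration of the idea, not an official NCAA or KenPom formula:

```python
# Sketch of the location-adjusted difficulty idea described above.
# The flat 25-slot offset is the article's admittedly arbitrary
# illustration, not an official NCAA or KenPom number.

LOCATION_OFFSET = 25

def adjusted_difficulty(opponent_rank, location):
    """Return an opponent's effective rank after adjusting for venue.

    location is 'home', 'neutral', or 'road'. A home game is easier,
    so the opponent counts as a worse (higher-numbered) team; a road
    game is harder, so the opponent counts as a better (lower-numbered)
    team; a neutral-site game is left unchanged.
    """
    if location == "home":
        return opponent_rank + LOCATION_OFFSET
    if location == "road":
        return max(1, opponent_rank - LOCATION_OFFSET)
    return opponent_rank

# The article's example: hosting the 40th-rated team plays like a
# neutral-site game against the 65th-rated team.
print(adjusted_difficulty(40, "home"))  # 65
```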

Was Wayne Tinkle’s Oregon State squad overrated in 2016? Probably. (Godofredo Vasquez/USA Today)

To see how much this could impact things, just look at Oregon State's 2015-16 schedule. Using the RPI as its sorting tool, the committee saw that the Beavers compiled a 4-8 record against the top 25 teams in college basketball. Had it instead used KenPom's location-adjusted top 25, a more accurate assessment of schedule strength, Oregon State's record would have been a far less impressive 1-8. And that almost assuredly would have dropped the Beavers below the #7 seed line where they landed.
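Using the adjusted_difficulty sketch from above, computing a location-adjusted top-25 record takes only a few lines. The schedule below is hypothetical, not Oregon State's actual 2015-16 results, but it shows how the raw and adjusted records can diverge in the same direction:

```python
# Hypothetical games as (opponent_rank, location, won) tuples. These
# are illustrative only, NOT Oregon State's actual 2015-16 results.
games = [
    (5,  "home",    True),   # adjusts to 30: no longer a top-25 game
    (40, "road",    False),  # adjusts to 15: now counts as top-25
    (18, "neutral", True),
    (60, "road",    False),  # adjusts to 35: still excluded
    (10, "neutral", False),
]

raw = [g for g in games if g[0] <= 25]
adj = [g for g in games if adjusted_difficulty(g[0], g[1]) <= 25]

raw_wins = sum(1 for (_, _, won) in raw if won)
adj_wins = sum(1 for (_, _, won) in adj if won)
print(f"Raw top-25 record:      {raw_wins}-{len(raw) - raw_wins}")  # 2-1
print(f"Adjusted top-25 record: {adj_wins}-{len(adj) - adj_wins}")  # 1-2
```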

Open Hearts and Minds for the Little Guys

Perhaps the biggest problem for the committee each year is how to assess the teams that play in non-power conferences. With so few opportunities for resume-enhancing wins under the current methodology, those squads are at a distinct disadvantage. The solution is for the committee to consider other evaluation tools when its standard method does not provide enough information. Consider, for example, a situation that occurred during the 2016 selection process.

The committee chose Tulsa and Temple over St. Mary's and Valparaiso ostensibly because the former pair owned more top-50 wins, but those two teams also had many more chances to play top-50 opponents. Since St. Mary's and Valparaiso didn't have the same opportunities, the committee should have utilized another metric to properly evaluate them. This logic should also apply to seeding: making Wichita State (a KenPom top-10 team) a #10 seed in this year's event was extremely unfair to both the Shockers and the other teams in their pod (Dayton and Kentucky). This availability problem in evaluating teams from mid-major leagues is only going to get worse as several power conferences move to 20-game league schedules, further limiting opportunities for teams from other leagues to schedule tough non-conference games.
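One candidate for that "other metric," offered here purely as an illustration rather than anything the committee uses, is to judge top-50 results against the number of top-50 opportunities a team actually had:

```python
# Sketch of an opportunity-aware comparison (our illustration; the
# committee publishes no such metric). A team that went 2-5 against
# the top 50 arguably outperformed one that went 3-12, even though
# the latter owns "more top-50 wins."

def top50_rate(wins, opportunities):
    """Top-50 win rate per opportunity; None when a team had no
    top-50 games at all (the mid-major availability problem)."""
    return wins / opportunities if opportunities else None

# Hypothetical resumes, not the actual 2016 numbers:
print(top50_rate(3, 12))  # 0.25 -> many chances, modest results
print(top50_rate(2, 5))   # 0.4  -> few chances, better rate
```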

Two More For Consideration

Until several years ago, the committee valued teams' recent performance more than overall season results. We would like to see a return to that protocol, although in a slightly different form. Rather than simply considering schools' win-loss records over their last 10 or 12 games, the committee should find a way to equalize the metric. Conference tournament play at the end of the season creates several extra contests for some teams but not others, pushing key data points from a month earlier out of the window for the teams that end up playing more games. We'd instead like to see the committee start from a fixed review date (maybe February 1, or perhaps the mid-February date when it releases the top 16 overall seeds) and then weigh performance from that point on, adjusted for game location in the same way described above. The logic for putting slightly more emphasis on recent play is simple: teams develop differently over the course of the season (some get better; others get worse), but it is the March version of a team, not the November one, that competes in the Big Dance.
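As a rough sketch of the fixed-date idea (the cutoff and the data layout here are our own assumptions, not committee policy), the review would simply filter the season's games by date and then apply the same location adjustment described above:

```python
from datetime import date

# Sketch of a fixed review window. The cutoff date and the game
# fields are illustrative assumptions, not committee policy.
CUTOFF = date(2017, 2, 1)

def recent_games(games, cutoff=CUTOFF):
    """Keep only games on or after the cutoff, so teams that pile up
    extra conference-tournament games don't have older results pushed
    out of a rolling last-10 window."""
    return [g for g in games if g["date"] >= cutoff]

# Each hypothetical game carries a date, opponent rank, and venue, so
# the location adjustment sketched earlier applies to the filtered
# list as well.
season = [
    {"date": date(2017, 1, 14), "rank": 60, "loc": "road",    "won": True},
    {"date": date(2017, 2, 18), "rank": 22, "loc": "home",    "won": False},
    {"date": date(2017, 3, 9),  "rank": 35, "loc": "neutral", "won": True},
]
print(len(recent_games(season)))  # 2 games fall inside the window
```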

One last suggestion for the committee is to change the policy of ignoring games played against non-Division I schools when evaluating teams. The stated intent of this rule is to avoid rewarding teams for beating up on overmatched schools, but the opposite is actually happening. Total number of wins is no longer a major factor in the committee's decision-making process, but non-conference strength of schedule most certainly is. Right now, if a team schedules a Division II or NAIA school as a patsy game instead of a very weak Division I squad, its overall schedule strength is not negatively affected. It would be much fairer to give every non-Division I opponent a rating equal to the lowest Division I program (#351), thereby appropriately penalizing such scheduling. We'd probably see substantially fewer of those games in very short order.
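A minimal sketch of that fix, with the data layout assumed for illustration: when computing schedule strength, every non-Division I opponent simply counts as the lowest-rated Division I program instead of being excluded:

```python
LOWEST_D1_RANK = 351  # 351 Division I programs in 2016-17

def sos_rank(opponent):
    """Rank used in strength-of-schedule math (sketch). A non-Division I
    opponent counts as the weakest Division I program instead of being
    excluded from the calculation entirely."""
    if opponent["division"] != "D1":
        return LOWEST_D1_RANK
    return opponent["rank"]

# A Division II patsy game now drags schedule strength down exactly as
# much as playing the weakest Division I team would.
print(sos_rank({"division": "D2"}))              # 351
print(sos_rank({"division": "D1", "rank": 88}))  # 88
```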

In Part Two, which will be published later this week, we will look at some possible improvements to the NCAA Tournament's bracketing process.
