Every season, pundits and commentators gush about the importance of team continuity and experience. The teams that return the most players, the thinking goes, will transition most seamlessly into the new season, while the teams integrating a crop of new freshmen and transfers are less likely to play up to their full potential before the new year. But is this maxim, repeated by so many inside the game, actually true? Is there data to support it?
KenPom recently developed a new statistic called minutes continuity, which measures “what percentage of a team’s minutes are played by the same player from last season to this season.” This allows us to analyze whether teams with greater continuity overachieve at the beginning of the season and teams with lesser continuity underachieve. While it is possible that almost any preseason ranking mechanism (including KenPom’s) would already account for player continuity, any positive effect would most likely show up in the first half of the season: the teams with more continuity would benefit earlier, while the teams with less continuity would catch up as the season wore on. To determine whether this is true, we examined team performance versus preseason expectation in two groups (based on Pomeroy’s list (paywall)): the 40 teams with the most continuity, and the 40 teams with the least continuity.
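As a rough illustration of how such a statistic might be computed: for each returning player, take the smaller of his share of team minutes last season and this season, then sum across the roster. This is a sketch of the general idea, not KenPom's exact formula, and the roster data below is entirely hypothetical.

```python
def minutes_continuity(last_season, this_season):
    """Each argument maps player -> share of team minutes (shares sum to 1.0).

    A player can only 'carry over' the smaller of his two shares, so the
    result is the fraction of this season's minutes played by returners
    in roles comparable to last season's.
    """
    returning = set(last_season) & set(this_season)
    return sum(min(last_season[p], this_season[p]) for p in returning)

# Hypothetical roster: three returners, one departure (Park), one newcomer (Diaz).
last = {"Smith": 0.30, "Jones": 0.30, "Lee": 0.20, "Park": 0.20}
this = {"Smith": 0.25, "Jones": 0.35, "Lee": 0.20, "Diaz": 0.20}

print(f"{minutes_continuity(last, this):.0%}")  # prints 75%
```

Under this scheme, a team returning its whole rotation in the same roles scores 100 percent, while a fully rebuilt roster scores 0.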
First, let’s discuss the teams with greater continuity. The top of this list seems to defy conventional wisdom: teams such as UT Arlington, Weber State, Princeton, and Belmont are a combined 6-12 against Division I opponents this season. Looking beyond that small sample, though, the top 40 teams in continuity now sit an average of 2.75 spots lower in the KenPom rankings (a shift of less than 1 percent) than they did at the beginning of the season. That runs counter to the conventional wisdom that teams with greater continuity adjust to the new season better than those without, but a drop of less than one percent is hardly a meaningful deviation. On the other end of the spectrum, the 40 teams with the least continuity have improved by an average of 0.43 spots, which is hardly any deviation at all.
That is not to suggest that continuity has no measurable effect. Interestingly, teams with greater season-to-season continuity tend to deviate less from their preseason ranking: the standard deviation of rank changes for the top 40 teams in continuity is 19.8 spots, while for the bottom 40 it is 27.4 spots. The near-zero net deviation for the low-continuity group masks a few wild over- and under-performing teams canceling each other out, like Central Florida jumping from 185th in the preseason to 84th, or Hawaii free-falling from 159th to 200th. Both teams are among the bottom 40 in continuity this season, and swings that dramatic occur only in the low-continuity group.
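The distinction between the two groups can be made concrete with toy numbers (made up for illustration, not the actual 40-team samples): two sets of rank changes can share the same near-zero mean while having very different spreads, and it is the standard deviation that separates them.

```python
# How a near-zero average rank change can hide wild individual swings.
from statistics import mean, pstdev

# change = current rank - preseason rank (negative = improved)
high_continuity = [3, -2, 5, -4, 1, -3]        # small, offsetting moves
low_continuity = [-101, 41, 60, -55, 30, 25]   # a UCF-style jump, a Hawaii-style fall

for label, changes in [("high continuity", high_continuity),
                       ("low continuity", low_continuity)]:
    print(f"{label}: mean {mean(changes):+.1f}, std dev {pstdev(changes):.1f}")
```

Both lists average out to zero, but the low-continuity group's standard deviation is many times larger, which is exactly the pattern the real top-40 and bottom-40 samples show.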
What we’re left with is that continuity does not seem to affect team quality in any unaccounted-for way, but it does affect our ability to project team quality accurately. Thinking logically, this makes good sense: it’s easier to predict a team full of known quantities. Inexperienced or newly constructed teams do not necessarily take longer to gel over the course of the season; rather, it is our perceptions of them that take a while to crystallize. Another explanation is that continuity makes a team more consistent and less prone to wild swings in quality, which would itself make low-continuity teams harder to predict. In truth, the two explanations go hand in hand.