The NBA regular season is one heck of a marathon. From late October all the way through mid-April, every NBA team competes in eighty-two 48-minute contests: 41 in its home venue and another 41 spread across the league's other cities and arenas. Out of this arrangement, a natural "home-court advantage" tends to materialize. For as long as the Association has been around, teams have been more successful at home than on the road. In fact, home teams have never won less than 54% of their games in any season in league history, and have posted single-season home winning percentages as high as 68.5% within the past forty years.

This home-court advantage has been theorized to exist for a multitude of reasons. For one, players and team personnel may be more comfortable performing at home, where they have an established routine and a consistent schedule. After all, being at home allows a player to sleep in his own bed, drive to the game in his own car, and surround himself with his own friends and family. Visiting teams, meanwhile, may be at a disadvantage because road venues vary widely (there is only one "home," but twenty-nine different "road" environments, after all), bringing with them unforeseen circumstances, breaks from routine, hostile surroundings, and the like. Furthermore, it has been found that visiting teams often receive fewer days of rest between games than their home counterparts. Still, all of these factors materialize before the game itself, which brings a whole other spectrum of potential nuisances. Among these are belligerent, jeering fans, who task themselves with making the opposing team as uncomfortable as possible, and even referees, who have been shown to be susceptible to the psychological toll that home venues and their crowds take pride in inflicting on both opponents and independent officials.

Prior research supports many of these points. For starters, Gregory Trandel and John Maxey, in their book on competitive balance, found substantial evidence that home-court advantage plays a significant and consistent role in tilting the outcomes of games toward the host. Cecilia Noecker and Paul Roback, meanwhile, dug deeper and found that referee bias is a significant component of home-court advantage and its quirks. Referees want to appear fair and unbiased, both to fans and to players, and this inclination may disproportionately benefit players (or even teams, which recognize and exploit the trend) who drive toward the basket and embellish contact. Marshall Jones, in turn, found that home-court advantage largely materializes very early in the game, by way of large early leads and such, and thus may favor the players who occupy the floor at those times; in other words, starters. Lastly, Oliver Entine and Dylan Small added to prior research by hammering home the idea that road teams are hampered by schedule makers: as visitors, they tend to receive less rest than a corresponding home team would, on average.

Because this home-court advantage pervades NBA basketball at the team level, many assumptions exist about how home or road environments affect individual player performance. The nature of these assumptions is crucial, for team rotations are largely founded upon beliefs held by team management and coaching staffs. These people determine how much and how often certain players play; if they hold misguided opinions about how starters, bench players, or both wax and wane in performance depending on venue, they may do a disservice to their teams, their fans, and the players themselves. For instance, many assume that star players, those at the absolute height of the NBA game, are less dependent on their surrounding environment than less-heralded players, and are thus able to assert themselves and their skills more readily. Bench players, generally speaking, would seem to be the losers in this framing. Not only are they assumed to be more susceptible to outside influences (and thus thrown off their game more readily), but if starters, by virtue of their status, are regarded as the most capable and effective players on a team, they would figure to be used more heavily on the road, where games have proven tougher for visitors to win. Bench players would thus get the short end of the proverbial stick as far as the rotation is concerned, and the road, where fans are more hostile, hotels and strangers replace homes and family members, surroundings become unfamiliar, and otherwise uniform schedules fall prey to all kinds of unforeseen circumstances, would act as a kryptonite of sorts.

It is also possible that teams, citing preconceived beliefs that bench players are less reliable in the face of adversity, simply suppress their reserves' road productivity (as reflected in composite statistics) by limiting their playing time more severely on the road than at home. This would hamper bench players' ability to contribute on the road, inflate the discrepancy between road and home production, and effectively reinforce the idea that bench players are far less reliable in hostile environments than starters.

To actually test these common assumptions and clarify my (and the public's) understanding of home-court advantage and how it affects not only teams but also individuals in the NBA, I designed an investigation to this very end. My study revolves around a fundamental question: "How does the discrepancy in home vs road ability for NBA starters compare to the discrepancy in home vs road ability for NBA bench players?"

I will answer this question using a metric created by John Hollinger, game score, which is calculated as follows: PTS + 0.4*FG - 0.7*FGA - 0.4*(FTA - FT) + 0.7*ORB + 0.3*DRB + STL + 0.7*AST + 0.7*BLK - 0.4*PF - TOV. In words: take the number of points a player scores, add 0.4 times his made field goals, and subtract 0.7 times his field goal attempts. Then subtract 0.4 times his missed free throws (attempts minus makes), and add 0.7 times his offensive rebounds and 0.3 times his defensive rebounds. From there, add his steals, 0.7 times his assists, and 0.7 times his blocks, then subtract 0.4 times his personal fouls and all of his turnovers. Calculated on a per-game basis from basic NBA box scores, game score pools these statistics, weights them, and provides a method by which to measure individual players' composite per-game productivity.
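As a quick illustration, the formula above translates directly into a short function. The box-score numbers in the example call are invented for illustration only:

```python
def game_score(pts, fg, fga, ft, fta, orb, drb, stl, ast, blk, pf, tov):
    """John Hollinger's game score for a single player-game,
    using the weights quoted above."""
    return (pts
            + 0.4 * fg          # made field goals
            - 0.7 * fga         # field goal attempts
            - 0.4 * (fta - ft)  # missed free throws
            + 0.7 * orb         # offensive rebounds
            + 0.3 * drb         # defensive rebounds
            + stl               # steals
            + 0.7 * ast         # assists
            + 0.7 * blk         # blocks
            - 0.4 * pf          # personal fouls
            - tov)              # turnovers

# A hypothetical 20-point outing:
print(game_score(pts=20, fg=8, fga=15, ft=4, fta=5,
                 orb=1, drb=5, stl=2, ast=4, blk=1, pf=3, tov=2))
```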

With this in mind, I record the name of each player who qualified for statistical titles in the most recent NBA season (minimum 58 games played in 2014-15, as determined by league officials) and exclude any player who does not qualify. Then I categorize each player as either a starter or a bench player, depending on whether he started or came off the bench for the majority of the games in which he appeared. From here, I add up each player's game scores across all home games for a composite "home" value, do the same for all road games, then divide each total by the number of home or road appearances for an average value. I then divide each player's per-game road value by his corresponding home value, and this number (something like, say, 0.95) serves as one of many such data points. Essentially, if such a value is below 1.00, it is fair to surmise that the player tended to perform better at home than on the road, at least as far as game score is concerned; if the number is greater than 1.00, the opposite is true. I repeat this process for every qualified player in the most recent NBA season for which I have data, all of it drawn from each qualified player's 2014-15 game logs page.
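The per-player computation described above can be sketched as follows, assuming a player's game log has already been loaded as a list of (venue, game score) pairs; the toy log below is made up for illustration:

```python
def road_home_ratio(game_log):
    """Mean road game score divided by mean home game score.
    game_log is a list of (venue, game_score) pairs, where venue
    is either "home" or "road"."""
    home = [gs for venue, gs in game_log if venue == "home"]
    road = [gs for venue, gs in game_log if venue == "road"]
    return (sum(road) / len(road)) / (sum(home) / len(home))

# Toy log: this player averages 13.0 at home and 12.0 on the road,
# so the ratio lands below 1.00 (better at home).
log = [("home", 12.0), ("home", 14.0), ("road", 11.0), ("road", 13.0)]
print(road_home_ratio(log))  # about 0.923
```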

Upon finishing this particularly lengthy data collection process, I have a road-home statistic for each qualified player in the 2014-15 NBA season. This information allows me to compare home-road discrepancies among particular players of interest, to examine how the data are distributed across an entire season, and the like. Ultimately, however, it allows me to answer my guiding question: "Is the discrepancy in home vs road ability for NBA starters different from the discrepancy in home vs road ability for NBA bench players?" Accordingly, in my quest for a statistically substantive answer, I have a pair of hypotheses. My null hypothesis states the following: there is no difference in the discrepancy in home vs road ability between starters and bench players in the NBA. My alternate hypothesis asserts that there is a greater discrepancy in home vs road ability for bench players than there is for starters. Ultimately, using the collected game score data, I will be able to identify evidence bearing on these hypotheses, make corresponding observations and conclusions about the variability of player performance in the NBA, and shed light on the topic as it pertains to the modern era.

The following visuals display the fruits of the data collection process. First, a dotplot of road-home game score values for all 134 qualified starters from the 2014-15 NBA season:

Next, the basic statistics and five-number summary describing the distribution of these starters' values:

Then, a dotplot of all 134 qualified bench players’ road-home values:

Lastly, the statistics that summarize the aforementioned data points for qualified bench players in the 2014-15 NBA regular season:

Comparing the two, both distributions of road-home values are mostly symmetric, though slightly skewed to the right (a little more so for the bench players than for the starters). Furthermore, the mean and median of the starters' data (0.947 and 0.934, respectively) are slightly less than the mean (0.964) and median (0.935) for the bench players, giving the starters' distribution a lower center. This is important: even before running a simulation, I can see that qualified bench players in the '14-15 season actually experienced a smaller drop in performance on the road than starters did, which has the potential to directly contradict the assumptions underlying the alternate hypothesis. As for spread, the bench players' standard deviation is 0.083 greater than the starters', and their IQR is greater by about 0.129, indicating that the bench players' data are more spread out and variable. Lastly, the starters' distribution has four outliers (at 0.488316, 1.43754, 1.46343, and 1.59757), while the bench players' has only one (at 1.93782).
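For reference, summary statistics and outliers like those reported above can be reproduced for any list of road-home ratios. The sample values below are made up, and I assume the conventional 1.5 × IQR fence for flagging outliers (the write-up does not state which rule was used):

```python
import statistics

def summarize(values):
    """Center, spread, and outliers (beyond 1.5 * IQR from the
    quartiles) for a list of road-home ratios."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return {
        "mean": statistics.mean(values),
        "median": statistics.median(values),
        "stdev": statistics.stdev(values),
        "iqr": iqr,
        "outliers": [v for v in values if v < low or v > high],
    }

# Invented sample with one high outlier:
sample = [0.85, 0.90, 0.93, 0.95, 0.96, 1.00, 1.02, 1.60]
print(summarize(sample))
```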

To further explore, and to actually test my initial hypothesis, I ran a simulation. The following serves as my test statistic: difference in means (starters – bench players) = 0.947-0.964 = -0.017.

While I simulated the distribution of this test statistic via an online statistical applet, the simulation can also be run by hand: write each of the 268 road-home values on note cards, shuffle the cards, and deal them at random into two piles of 134 (one for the "starters" and one for the "bench players," essentially). Then find the mean of each pile, subtract the means (starters minus bench), and record the simulated difference on a dotplot. Repeat this process many times.
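The note-card procedure amounts to a permutation test, which can be sketched in code. This is a generic version, not the applet actually used; in practice the two arguments would be the 134 road-home ratios for each group:

```python
import random

def permutation_pvalue(starters, bench, trials=10_000, seed=42):
    """Estimate P(simulated mean difference >= observed difference),
    assuming group labels are interchangeable (the null hypothesis)."""
    observed = (sum(starters) / len(starters)
                - sum(bench) / len(bench))
    pooled = starters + bench       # all 268 "note cards"
    n = len(starters)
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)         # shuffle the cards
        pile1, pile2 = pooled[:n], pooled[n:]  # deal two piles
        sim = sum(pile1) / len(pile1) - sum(pile2) / len(pile2)
        if sim >= observed:         # at least as extreme as observed
            hits += 1
    return hits / trials

# Example with tiny made-up groups:
print(permutation_pvalue([0.95, 0.90, 0.93], [0.99, 1.02, 0.97]))
```

With the real data, the proportion of shuffled differences at or above the observed -0.017 is the estimated p-value.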

I conducted one hundred trials of such a simulation, assuming the discrepancy produced by starters was equal to that produced by bench players, and recorded the difference in means for each trial on the following dotplot:

The above visual shows the differences in mean road-home values that could occur simply by random chance, assuming that the discrepancy in ability on the road vs at home is the same for starters and bench players. Because 76 of the simulated differences were greater than or equal to -0.017, the p-value comes in at 0.76. In other words, assuming starters and bench players have the same discrepancy in road vs home ability, there is roughly a 76% chance of obtaining a difference in means greater than or equal to -0.017 by random chance alone.

Since the p-value is so high, at 0.76, I fail to reject the null hypothesis, which states that there is no difference in the discrepancy in home vs road ability between starters and bench players in the NBA. I do not have convincing evidence that the discrepancy in ability on the road vs at home is greater for bench players than it is for starters.

While I do not have convincing evidence in favor of the alternate hypothesis, it is important to note that I cannot extrapolate this conclusion beyond the 2014-15 season, to which my investigation is limited. My data come from that one season alone, so the scope of inference stretches no further.

Moreover, it ought to be acknowledged that I do not know the cause of any reduced ability on the road, for either starters or bench players. It could stem from hostile environments, scheduling, rotations, ill-conceived game plans that disadvantage certain players, or some combination of these factors. To actually explore this, I would have to run an experiment tailored to that very end, which I did not do here.

There were, then, a few ways to make this investigation better. For one, I could have taken a much larger sample: one that spanned multiple seasons, pools of players, and eras. This would allow me to make more wide-ranging conclusions about starters and bench players in general. Furthermore, I could have converted game score into a per-minute value and then added a minutes-played qualification to protect against insubstantial sample sizes. This would be valuable because it would let me judge players more directly on their own merits, since their statistics would be less dependent upon potentially ill-conceived, generalized minute restrictions imposed by coaches and team personnel, who could be acting on unsubstantiated assumptions about the reliability of certain types of players on the road. As it stands, my investigation cannot distinguish whether certain groups of players genuinely perform better in and of themselves or whether their numbers merely reflect how the coaching staff manages their opportunities to contribute.

As for the results of the investigation itself, I do not have any convincing evidence that bench players are more significantly hampered by the disadvantages spawned by road venues. I speculated that perhaps they would be trusted less, that their playing time would be cut short by teams having a smaller margin for error on the road, and the like, but it is equally possible that more losing by road teams leads to more playing time (of the garbage-time variety) for marginal bench players and disproportionately inflates their box score statistics. With all of this said, it may also simply be the case that no particular pattern favoring one group of players on the road exists. Life on the road is a heavy burden for players, no matter their role. And, as far as this investigation is concerned, there is not much evidence that bench players, on a grand scale, are any more susceptible to its whims than their counterparts in the starting lineup.
