A lot has been going on recently, but here, at last, is my belated blog post describing the ranking method that Wesley Colley invented. This post is largely a simplified rewording of his paper describing the method, with some commentary and fencing-specific notes added by me. It is just an introduction to the problem; the actual solution that Colley came up with will be described in a subsequent blog post.
Colley invented his ranking method as an answer to the question: "What is a fair ranking of the college football teams competing in Division I-A?"
"Why not use the usual ranking method that is used for just about every other sport? One point for a tie, two or three for a win, add up the points over the whole season, and use aggregate score differential as a tiebreaker. Is there anything special about college football?" many a sports fan would respond in bewilderment. In response, Colley argues: no, that would not give reasonable results, and that is because there is indeed something special about college football.
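For concreteness, the conventional scheme our sports fan has in mind can be sketched in a few lines. This is a minimal illustration with hypothetical teams and scores, using three points for a win and one for a tie:

```python
# A sketch of the conventional points-table ranking: three points for a
# win, one for a tie, aggregate score differential as the tiebreaker.
# Teams and results below are made up for illustration.
from collections import defaultdict

# Each result: (home team, away team, home score, away score)
results = [
    ("A", "B", 21, 14),
    ("B", "C", 10, 10),
    ("C", "A", 7, 28),
]

points = defaultdict(int)
diff = defaultdict(int)

for home, away, home_score, away_score in results:
    diff[home] += home_score - away_score
    diff[away] += away_score - home_score
    if home_score > away_score:
        points[home] += 3
    elif home_score < away_score:
        points[away] += 3
    else:  # tie
        points[home] += 1
        points[away] += 1

# Rank by points, breaking ties on score differential.
table = sorted(points, key=lambda t: (points[t], diff[t]), reverse=True)
print(table)  # → ['A', 'B', 'C']
```

This works fine when every team plays every other team, or at least schedules of comparable difficulty, which is exactly the assumption that breaks down here.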
"What? What can be so special about college football?" our sports fan would respond. The answer is actually several things:
- There are many teams involved, in excess of 100
- Each team plays only a small fraction of the other teams – roughly a dozen matches in a season
- The opponents that a given team faces in a season are not chosen to give each team an equal and fair schedule, but to satisfy a whole host of other wishes and constraints
- Once the regular season is finished, there is no playoff to find which two teams are worthy of playing in the final match – instead, the two finalists are chosen according to their regular season performances
The astute reader – that is, anyone who has come this far – will of course notice a difficult problem: how can one possibly choose the two teams most worthy of finalist status, given that different teams can face wildly different levels of difficulty in the regular season, several teams may finish the regular season without a loss, and there is no playoff to weed out those whose good results were due to dumb luck and/or a bunch of weak opponents?
A vexing problem indeed – and one with a lot of real-world consequences riding on it. The final of the college football season draws tens of thousands of spectators, with millions more watching on TV, and the total amount of money involved in the game runs to many millions of dollars.
For some time, the finalists were selected by polling knowledgeable people (coaches, journalists). That approach has a big drawback: it is hard to combat subjectivity, and even harder to combat the appearance of subjectivity. In some years, both of the teams selected to play the final had lost at least one game during the regular season, while a third team had won all of its games. Those doing the selecting considered the opponents of that third team to be of lesser strength, so that its unbeaten record meant less than the near-perfect records of the chosen finalists. Good luck convincing the fans of the team not selected, though.
"OK then – why not simply adjust the schedules of the teams so that all teams get schedules of equal strength? That should solve the problem!" our sports fan will retort. However, that is not so easy.

First of all, this is college football, which means that many of the players will not have played at this level before, and the overall player turnover from year to year is quite high. Previous results therefore do not mean as much as in a series where player careers are longer – if the players who won lots of games last year have graduated, those wins should count for less when estimating team strength for schedule-making purposes. But how much less? That is an essentially unanswerable question.

Secondly, when the schedules are made, lots of other preferences must be considered. The various colleges want to play against old antagonists, since such matchups generate lots of viewership and thus revenue. For several colleges, travel distances are a factor – teams are quite large, and the USA is a big country. Some teams will want to maximise their chance of a regular season with lots of wins by choosing weak opponents, a practice known as padding the schedule. Other teams will try to maximise viewership by having well-known, and thus generally good, opponents in their home games.

Finally, there is no governing body with the power to impose a schedule ensuring equal aggregate opponent strength against the wishes of individual colleges, even if agreement could be reached on what equal aggregate opponent strength would mean in the first place.
So, we have a competition in which some competitors face several strong opponents while others face a lot of weak ones, and we are to figure out which two competitors, out of more than a hundred, are the best ones after each has played roughly a dozen games.
That means that the original question – "What is a fair ranking of the college football teams competing in Division I-A?" – can be transformed into: "Given only the information of which competitor beat which opponent for all games played in a competition, how does one rank the competitors in a fair way that takes into account that some competitors have played much harder opposition than others?"
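The transformed question can be made concrete with a toy example. The sketch below uses hypothetical teams and games of my own invention: from win/loss information alone, a plain win fraction cannot separate two unbeaten teams, even when everything one team's opponents did suggests they were the far tougher opposition:

```python
# A toy illustration of why win/loss records alone are not enough.
# Alpha and Beta are both unbeaten, but Alpha's opponents (X, Y) beat
# Beta's opponents (U, V) in every head-to-head meeting.
from collections import Counter

# Each game: (winner, loser)
games = [
    ("Alpha", "X"), ("Alpha", "Y"),  # Alpha beat X and Y
    ("Beta", "U"), ("Beta", "V"),    # Beta beat U and V
    ("X", "U"), ("X", "V"),          # Alpha's opponents beat
    ("Y", "U"), ("Y", "V"),          # Beta's opponents every time
]

wins, losses = Counter(), Counter()
for winner, loser in games:
    wins[winner] += 1
    losses[loser] += 1

teams = set(wins) | set(losses)
win_fraction = {t: wins[t] / (wins[t] + losses[t]) for t in teams}
print(win_fraction["Alpha"], win_fraction["Beta"])  # → 1.0 1.0
```

Both teams come out at 1.0, so any ranking based on win fraction (or win count) alone declares a tie; a fair answer has to pull schedule-strength information out of the pattern of who beat whom, which is precisely what Colley's method does.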
In the next blog post, I will outline how Colley solves that problem.