Thursday, October 24, 2013

College Football, the BCS, and the So-Called Playoff System


I will start this post with a disclosure: I am a Baylor graduate, and therefore regard the BCS with more than a small measure of mistrust.  Having seen how large schools with money and influence have essentially bought championship trophies over the years, I doubt the claims from the media and those same self-anointed feudal barons of football that the system works in anything like a fair and reasonable fashion.  The simple fact is that after more than a century of organized football there is still no national championship for FBS football built on a playoff system, even though literally every other major sport has a functional playoff, including the ‘lower’ divisions of football.

But it’s one thing to be cynical, and another to dive into the deep end without a thought to the process.  We are in the last year of the BCS system for – allegedly – determining the national champion of college football, to be replaced next year by a skimmed-milk version of a playoff.  This post examines elements of both the BCS and the 4-team playoff that will succeed it.

So far as I am concerned, college football has never had a true national champion.  There have been teams so dominant that they were champions by consensus; there are schools with trophies in their showcases that claim championships, which in truth prove only that school’s ability to manipulate voters and a corrupt system; and there are schools which played well enough to deserve a place in the discussion but were wrongly denied one for any number of subjective reasons, such as belonging to the ‘wrong’ conference, not beating the ‘right’ opponents, or some other excuse.  The BCS system was allegedly going to correct this, but in practice the façade quickly failed.  Fans did not accept the BCS as valid, even though in most years the championship game placed favored contenders against one another and produced a credible result.
The simple fact is that fans demand playoffs for FBS football: not some ‘+1’ system or another contrived means to keep fat cats happy, but a legitimate playoff where the champion proves itself on the field against the top contenders.  The BCS simply does not pass the smell test.




For this essay, I looked into the BCS system, primarily using the official BCS site and a very helpful blog called BCS Know How.



The basics of the BCS are pretty simple: the system uses a formula to rank teams, and the top two teams in the final BCS standings play each other for the championship.  The problems start with the way those rankings are determined.  The short version of the BCS formula is that two human polls, the USA Today Coaches Poll and the Harris Poll, count for two-thirds of the formula, with four results selected from six computer polls making up the remaining one-third.  The human polls are the heavy movers.
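
To make that two-thirds/one-third split concrete, here is a minimal sketch of how such a composite works.  The 0-1 normalization of each component is my own illustrative assumption for the example, not the published BCS arithmetic.

```python
# A minimal sketch of the weighting described above: the Harris Poll, the
# Coaches Poll, and the computer average are each treated as a 0-1 share
# and weighted equally, so the two human polls together carry two-thirds
# of the final score.  (Illustrative assumption, not the official formula.)

def bcs_score(harris_share, coaches_share, computer_share):
    """Average the Harris, Coaches, and computer components equally."""
    return (harris_share + coaches_share + computer_share) / 3.0

# A team the human voters love but the computers merely like still outscores
# a team in the opposite position, because the human polls carry twice the
# combined weight of the computers.
print(bcs_score(0.95, 0.95, 0.70))   # ~0.867
print(bcs_score(0.70, 0.70, 0.95))   # ~0.783
```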

As for the Harris Poll, the New York Times has written about the “motley group of voters in the poll — which includes former financial consultants, television executives and Internal Revenue Service employees.”


I’m sure we are all glad to know that one-third of the decision on ranking the top college football teams rests on a base of voters who often “have nothing to do with football”.  Particularly troubling about the NYT story is the observation that most Harris Poll voters know only about the teams they see on network TV; if a team is not nationally televised, the voters will not know much at all about them, and will rely on secondary impressions.

The other human poll, the USA Today Coaches Poll, might at first appear to be a more reasonable look at the best teams.  But there are maggots in the Cheerios here, too.  First, the voting panel is just 62 coaches from the American Football Coaches Association, which means that dozens of FBS coaches are denied a vote for purely arbitrary reasons.  But the bias goes far deeper.

A 2011 empirical study by Drs. Michael Strodnick and Scott Wysong determined that coaches tend to favor their own teams and conferences, and are consistently biased in favor of large and traditional schools and against smaller and non-traditional schools.


Essentially, the Harris and Coaches Polls both rely on emotional preference, something not used in NCAA Basketball RPIs or valid playoff systems.

Recognizing this flaw, the BCS committee decided to add the results of six computer polls: those from Jeff Anderson & Chris Hester, Richard Billingsley, the Colley Matrix, Kenneth Massey, Peter Wolfe, and Jeff Sagarin.


At first these polls may seem very scientific and trustworthy, but problems show up pretty quickly.  First, of course, is the point that the computer polls make up only one-third of the BCS score – the subjective human polls are twice as heavy in weight as the computer polls.  Second, the BCS formula rejects the highest and lowest computer ranks, for no valid reason.  The argument might be made that the BCS is trying to eliminate outliers, except that with only six computer polls, rejecting two of the results means rejecting fully one-third of the data, comparable to rejecting the votes of 21 coaches or 35 of the Harris Poll voters.  Further, any valid poll methodology already accounts for outliers internally, so there is no sound reason for the BCS to reject a poll’s result unless doing so is a tacit admission that the poll itself is invalid.  And since the BCS process may reject results from any of the six polls, this indicts the entire poll spectrum.
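
A quick sketch of what that trimming does in practice; the values below are invented, not actual poll results.

```python
# Drop the single highest and single lowest of the six computer results,
# then average the remaining four.  With only six polls, the trim discards
# one-third of the computer data.  (Invented numbers for illustration.)

def trimmed_computer_average(results):
    """Average the middle four of six computer results."""
    trimmed = sorted(results)[1:-1]
    return sum(trimmed) / len(trimmed)

six_results = [0.91, 0.88, 0.86, 0.84, 0.79, 0.62]
print(trimmed_computer_average(six_results))               # 0.8425
print(f"share of computer data discarded: {2 / 6:.0%}")    # 33%
```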
A closer look at the computer polls reveals reason for concern as well.  For example, Massey, Wolfe, and Sagarin include FCS schools in their rankings, but Anderson & Hester, Billingsley, and Colley* do not.  This alters the population sample of the various polls, preventing a genuine apples-to-apples comparison.

(* Colley does not list discrete FCS schools in its rankings, but includes ranks for ‘FCS Groups’ without explanation or member identification)

At the express direction of the BCS committee, margin of victory may not be considered in the computer polls.  Pollsters like Sagarin and Massey have shown polite dissent by releasing not only the official BCS rankings using an ‘approved’ formula, but also a set of rankings using margin as a valid consideration.  Sagarin goes so far as to list a formula called ‘PREDICTOR’ which includes victory margins, and has commented that he considers those rankings more valid in projecting actual game winners.  I understand the desire to eliminate manipulation of the process by dumping points on an opponent, as Alabama did to Baylor in 1979, for example, by using their starters to score 28 points in the fourth quarter to make their win look more impressive.  However, when a team controls the game throughout, pulls its starters in the third quarter and still wins by a crushing margin, that fact is a salient indicator which should be considered.  I also found the refusal by most of the polls to reveal their exact formulas or algorithms a bad sign, as this prevents the replication which is a requisite condition of any valid scientific case.
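
As a toy illustration of why excluding margin matters (this is not any of the official BCS computer formulas, and the scores below are invented), compare the same four results rated on wins and losses alone versus with a capped margin of victory:

```python
# A toy tally, not an official BCS computer formula: rate four invented
# results once on win/loss alone, and once crediting a capped margin of
# victory, to show how the extra information separates otherwise
# identical-looking teams.

games = [
    ("A", "C", 45, 10),   # A routed C by 35
    ("A", "D", 38, 7),    # A routed D by 31
    ("B", "C", 20, 17),   # B edged C by 3
    ("B", "D", 24, 21),   # B edged D by 3
]

def rate(games, use_margin, cap=21):
    ratings = {}
    for winner, loser, w_pts, l_pts in games:
        value = min(w_pts - l_pts, cap) if use_margin else 1
        ratings[winner] = ratings.get(winner, 0) + value
        ratings[loser] = ratings.get(loser, 0) - value
    return ratings

print(rate(games, use_margin=False))  # {'A': 2, 'C': -2, 'D': -2, 'B': 2}: A and B look identical
print(rate(games, use_margin=True))   # {'A': 42, 'C': -24, 'D': -24, 'B': 6}: the capped margin separates them
```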

In 2010, BCS Know How did a series on the computer polls and revealed some specifics about each poll:

Anderson & Hester:  Does not consider previous years’ results.  For strength of schedule (SOS), counts opponents and opponents of opponents.  Sets a ‘conference strength’, then uses that as a benchmark to set a specific SOS.  Ranks according to wins, losses, SOS by formula, wins against poll top 25, and losses to poll non-top 25.


Billingsley:  Does consider previous years as a starting point for rankings.  A premium is awarded for staying undefeated.  Considers the SIZE of the attendance where the game is being played as a factor in game quality, which is a bias in favor of larger schools.


Colley Matrix:  Based on winning percentage and SOS.  Does not consider previous years in rankings.  Punishes teams for playing ‘weak’ opponents.  (A sketch of Colley’s published method appears after these summaries.)


Kenneth Massey:  Claims to base rankings on an ‘equilibrium point for probability model applied to binary (win or loss) outcome of each game’.  That seems to mean you get points for winning when you are not supposed to win, and lose points for losing when you are not supposed to lose.  Previous years are considered, and so are the date of the game and the venue.


Peter Wolfe:  Eats small children then spits out their bones to find the results.  Well, maybe not, but Dr. Wolfe does not say much at all about how he determines his rankings.   All he will say is that he tries to create a “maximum likelihood estimate” of a team winning a given game.   Wolfe counts results from 730 different schools, by far the broadest population sample, but his limited explanation seems to suggest a modified transitive property theory, mitigated by not counting margin.  He does not say, but the explanations imply that prior years are considered when determining expectations.  Finally, Dr. Wolfe is a professor at UCLA, which may or may not influence his initial assumptions.


Jeff Sagarin:  Counts preseason rankings; ratings compound both opponents’ records and the records of opponents’ opponents.  The preseason ranking factors are removed when the BCS ranks are first released.  Undefeated and single-loss teams gain a premium value, and road wins are considered especially important.
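
Of these six, the Colley Matrix is the one whose method has been published in full, so it makes a useful illustration of what these computer rankings actually compute.  Here is a minimal sketch of Colley’s core linear system, based on my reading of the published method and using invented results; it is not the BCS implementation.

```python
import numpy as np

# A minimal sketch of the Colley Matrix method noted above: ratings come
# from wins, losses, and schedule only, with no margin of victory.
# Each team i satisfies
#   (2 + games_i) * r_i - sum(r_j over i's opponents) = 1 + (wins_i - losses_i) / 2
# and the ratings fall out of solving that linear system.

games = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")]   # (winner, loser), invented

teams = sorted({t for g in games for t in g})
idx = {t: i for i, t in enumerate(teams)}
n = len(teams)

C = 2.0 * np.eye(n)    # Colley matrix: every diagonal entry starts at 2
b = np.ones(n)         # right-hand side: starts at 1 for every team

for winner, loser in games:
    w, l = idx[winner], idx[loser]
    C[w, w] += 1       # one more game played for each team
    C[l, l] += 1
    C[w, l] -= 1       # one more head-to-head meeting between the pair
    C[l, w] -= 1
    b[w] += 0.5        # the winner gains half a point on the right side
    b[l] -= 0.5        # the loser gives up half a point

ratings = np.linalg.solve(C, b)
for team in sorted(teams, key=lambda t: -ratings[idx[t]]):
    print(team, round(float(ratings[idx[team]]), 3))
```

Notice that only who beat whom enters the computation; margins never appear, which is exactly the constraint the BCS imposes on all six polls.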

Now, let’s have a look at the playoff system starting next year.  According to BCS Know How:


The playoffs will use a six-bowl system, three ‘contract’ bowls and three ‘host’ bowls, even though the abridged playoff itself involves only four teams.  The highest-ranked champion from the five minor conferences (CUSA, Mountain West, Sun Belt, AAC, and MAC) gets one of twelve spots in a system suspiciously similar in appearance to the BCS, with the champions of the six major conferences and five ‘at-large’ positions to be determined by a selection committee, which also announces the rankings.

The selection committee will release a ‘Top 20’ ranking each week beginning in Week 8 of the season, very much like the BCS rankings.  Two semifinal games will be held either Dec. 31 or Jan. 1 of each year, with a championship game the first Monday in January that is at least six days after New Year’s Day. 

The committee will have absolute authority in selecting the four teams in the playoffs, and although its members “will be instructed to weigh strength of schedule, win-loss record, head-to-head victories with other teams in contention and whether the team won its own conference”, it’s impossible to know how objective that process will prove to be in practice.
No information has been released about who will be on the selection committee, or what criteria will be used to choose its members.
No metric has been announced to determine which teams should be chosen for the playoffs.

No information has been released regarding whether polls will play a role in the playoff team selection, or if so how they could be used.