Dropped Scores and Championship Tables

Over the years, and especially at the final round at Loton a few weeks back, people have asked me why my British Championship Table doesn't always match up with the official table. The reason is that I handle the dropped scores differently to others. I'll get into more detail in a moment, but first a quick bit of background.

The reason I have a table on this site is that, before BARC took over the championship, there wasn't a reliably updated public table available and, since I wanted to see how things stood, I decided it was worth setting one up here for people to use. Over time it has become gradually more automated, to the point where I can normally get it updated before the cars have come back down the hill - subject to phone signal, battery life and the screen not filling with water, as it did once this year! Now that there is a table on the official championship site this one isn't needed as much, but it doesn't take much effort to keep it updated so it will probably stick around for a bit.

So, why are the tables sometimes different, and which one is wrong? Well, neither is wrong; it is just down to a difference in the way dropped scores are included in the total. The way other people work is to count the total points scored until they get to the final 6 rounds, where they take off the lowest scores one at a time. This makes it simple to calculate, but it means that for 28 of the 34 rounds you are looking at total points scored, even though the championship is decided with 6 dropped scores. The way I work is to always drop the 6 lowest scores, which I think makes for a more useful figure. This does mean that on my table everyone has 0 points for the first 6 rounds, but that isn't really a problem as my table also takes into account the tie-break rules, which include counting dropped scores, so it can still put the drivers in order.

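To make the difference concrete, here is a minimal Python sketch of the two running totals as I've described them above. The function names and the score list are made up, and I'm assuming this season's numbers of 34 rounds and 6 dropped scores; it isn't the code behind either table.

```python
DROPS = 6      # dropped scores in this season's championship
ROUNDS = 34    # rounds in the season

def official_style_total(scores):
    """Running total the standard way: count everything until the final
    6 rounds, then take off the lowest scores one at a time."""
    drops_so_far = max(0, len(scores) - (ROUNDS - DROPS))  # 0 until round 29
    return sum(scores) - sum(sorted(scores)[:drops_so_far])

def my_style_total(scores):
    """Running total my way: always throw away the 6 lowest scores, so
    anyone with 6 or fewer scores sits on 0 points."""
    return sum(sorted(scores)[DROPS:])

# made-up scores after 8 rounds, one entry per round (0 for a missed round)
scores = [10, 8, 0, 9, 7, 10, 6, 9]
print(official_style_total(scores))  # 59 - no drops yet, so a straight sum
print(my_style_total(scores))        # 20 - only the best 2 of the 8 count
```
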
To try to prove, to myself as much as anyone, that my method was more informative, I decided to look at the data for some of the closest battles of the season: Scott vs Trevor, Alex vs Roger and Will vs John, plus Scott vs John chosen at random. Below you will see a graph that shows the points difference between the two drivers after each round, as given by my table (green line) and the official tables (blue line). Use the drop-down box to select a different battle.

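For what it's worth, each line on the graph is just the gap between the two drivers' running totals, recalculated after every round. A rough sketch of that calculation, reusing the two total functions from the sketch above with invented score lists, looks like this.

```python
def gap_after_each_round(driver_a, driver_b, total_fn):
    """Points gap after each round, positive when driver_a is ahead,
    using whichever running-total function is passed in."""
    return [total_fn(driver_a[:rnd]) - total_fn(driver_b[:rnd])
            for rnd in range(1, len(driver_a) + 1)]

# invented scores for two drivers over the first ten rounds
a = [10, 9, 8, 10, 7, 10, 9, 10, 8, 10]
b = [9, 10, 7, 9, 10, 8, 10, 9, 9, 9]

blue_line = gap_after_each_round(a, b, official_style_total)  # totals only
green_line = gap_after_each_round(a, b, my_style_total)       # drops applied throughout
```
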
Starting with the Scott vs Trevor graph, you will notice both methods agree on the final outcome, Scott winning by 12 points, but the way they got there is quite different, the most significant point being after round 22 when, just looking at totals, Trevor was 5 pts ahead but, taking into account dropped scores, Scott was 8 pts ahead. Taking the dropped scores out early has the effect of smoothing the graph, in the same way the dropped scores are, partly, there to help smooth out the effect of missed rounds.

The Alex vs Roger graph shows that at round 32 Alex was 2 pts ahead using my method but 3 pts behind using the other way. Alex had a late surge to pass Roger and claim number 4, and by taking dropped scores into account my table showed Alex passing him before the other table did.

Will vs John is a bit less clear. At first sight the standard method of leaving the dropped scores until the end seems to predict the final result sooner, but it doesn't show how close things were around round 28, when the gap was much smaller than the 5 pts the standard table shows.

Scott vs John was thrown in just to see how a non-close battle would look, but it does show some interesting points. There are a few rounds where John outscored Scott (3, 14, 21 & 22), and looking at the standard blue line it would appear that the gap between the drivers was closing. What actually happened is that Scott picked up a low score that he could then include in his drops, allowing him to count a larger score that he would otherwise have dropped. This means that in those rounds John actually drifted further behind, and the green line shows that.

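To see why, here is a tiny made-up example using my_style_total from the first sketch. Once a driver has more than 6 scores, a new low score simply joins the drop pile and pushes the largest previously dropped score back into the counted total, so the counted total climbs by that larger score rather than by the points just scored. The numbers are invented purely to show the effect.

```python
before = [4, 5, 6, 6, 7, 7, 8, 8]   # invented scores from the first 8 rounds
after = before + [2]                # a poor score in round 9

print(my_style_total(before))  # 16 - the best 2 of 8 count (8 + 8)
print(my_style_total(after))   # 23 - the best 3 of 9 count (8 + 8 + 7)
# the counted total jumps by 7 (the score that used to be dropped), not by 2
```
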
So this is just a post to explain why there sometimes appear to be differences between the tables and why I have decided to handle the dropped scores the way I have. There are good reasons for doing it the other way, mostly that it is much easier to calculate, but I feel the way I work it out gives a clearer picture of the state of play.

Let me know if you have any thoughts or questions. I may set these graphs up to update live next year.