This is a follow-up to my report on 2023's King In The North (KITN) competition. In it I noted that KITN attracted a lot of players with high UK BHGS rankings. However, whilst preparing some tables for publication, I noticed something unexpected concerning the rankings: hence this (rabbit hole) post.
Going into the competition, my expectation was that I would struggle against significantly higher ranked players and that the top places would all be taken by the highest ranked players. I certainly found the former to be all too true at KITN 2023 but did the latter turn out to be true too?
What do rankings really measure?
After the competition, and probably for the first time, I began to think about what the rankings actually represent. All ranking systems have their quirks. The BHGS simply adds the weighted scores from a player's first six tournaments; from then on, only the best six scores are retained.
This means playing often (up to six tournaments) will improve your BHGS ranking irrespective of your performance. So, if anything, and for many players, rankings are more a measure of tournament attendance than an indicator of performance.1, 2 For those who play in lots of tournaments, the ranking switches to representing them at their best.
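As a minimal sketch of that rule (my reading of it, not official BHGS code, and with made-up weighted scores), the ranking total could be computed like this:

```python
def bhgs_ranking_score(weighted_scores):
    """Sum the best six weighted tournament scores. With six or fewer
    results every score counts, so each extra tournament can only
    raise the total."""
    return sum(sorted(weighted_scores, reverse=True)[:6])

# Footnote 1 in action: a player with four results trails an equally
# proficient player who has six.
print(bhgs_ranking_score([10, 12, 9, 11]))          # 42
print(bhgs_ranking_score([10, 12, 9, 11, 10, 12]))  # 64
```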
Are rankings predictive?
At this point I began to wonder just how predictive the rankings actually are. To investigate this I ran a simple comparison: how well did the UK BHGS rankings predict the final placings at KITN? Not in each game, but across the whole weekend.
The prediction I used was the simplest I could think of: the highest-ranked player was predicted to finish first and the lowest-ranked player to finish last. Nothing complex.
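As a minimal sketch of that comparison (the player names below are made up; the real data is linked further down):

```python
# Players from highest to lowest BHGS ranking, and the same players
# in their actual finishing order at the tournament.
ranking_order = ["Ann", "Bob", "Cat", "Dan"]
final_order   = ["Cat", "Ann", "Dan", "Bob"]

# Predicted place is simply the position in the ranking list.
predicted = {player: place for place, player in enumerate(ranking_order, start=1)}

# Positive difference = finished better than predicted; negative = worse.
for place, player in enumerate(final_order, start=1):
    print(f"{player}: predicted {predicted[player]}, finished {place}, "
          f"difference {predicted[player] - place:+d}")
```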
In the results table you can see how well, or indeed how badly, the top ten places (Position) matched those predicted by the UK Rankings (Predicted), along with the difference between the two. Specifically:
- There were only four players in the top ten from the top ten ranked players at the competition. John Hogan and Gary Lind did significantly better than their rankings predicted, finishing second and fourth respectively.
- Likewise, there were only six players in the bottom ten from the bottom ten ranked players. Chris Proudfoot and Dave Allen seem to have had particularly "bad" tournaments (I am sure they enjoyed the weekend).
- Looking more broadly, there were only four players from outside the top fifteen ranked players in the top fifteen places.
All the data is tabulated here.
From this you can see that the rankings were not a very good predictor of performance at KITN. They had some predictive value, as the last point above demonstrates, but only in the very broadest sense; certainly not good enough to make things a foregone conclusion. It's always dangerous to generalise from just one piece of analysis, but I suspect this will always be the case.
Other factors involved
Why should rankings be so poorly predictive? Don't high rankings mean stronger players? I see a number of factors that may explain why this is not always the case:
- The BHGS rankings favour players who play regularly and frequently, but not every good player does.2, 3
- BHGS rankings do not represent the full range of results for players who've played in more than six tournaments: they represent their best six results, not their overall performance.
- Players may have chosen an army for fun, and to challenge themselves, rather than simply picking a "killer" army.
- The theme may contain a player's favourite army, one that really suits their style of play. This should boost their chances of outperforming their ranking. Of course, the converse is also true.
- The mechanism of the draw usually means that players quickly end up playing opponents of similar ability (on the day). This makes matches closer and the result more subject to random factors.
- Random chance (luck). A few bad dice rolls at the wrong time can sway a game, especially if the game is close.
- Players are human and not every player plays at their very best all the time.
So it's not a case of rankings being good for "absolutely nothing": things seem to work out if you look at a broad enough picture, but as soon as you look at the detail they are far less useful. I suppose, after all, they are just one (slightly complicated) way of looking at previous performance, like any league table.
Finally, award yourself a gold star [sic] if you spotted the Edwin Starr allusion.
- A player who has attended four tournaments will have a better ranking than an equally proficient player who has only attended two. Once players have played in more than six tournaments the situation changes. ↩
- By June 2023, only 31 (16.8%) of 185 UK players had played in more than six tournaments. ↩
- At KITN 2023, only 11 (36.7%) of 30 players had played in more than six tournaments. ↩
2 comments:
Rankings probably don't mean very much except to the players in the rankings.
I'm biased, as in over 30 years of gaming, competition players at shows tend to have competition-winning, badly painted armies sat on appalling terrain (there are exceptions) and tend towards the "my army should have won xyz war rather than the other side who did win" attitude.
Sorry if this is rather downbeat.
No need to apologise for being downbeat or heavy on the stereotypes, Scarlet! Next thing you'll be talking about the great unwashed and their rucksacks.
Now I agree, there can be some shocking attempts at painting on show at competitions, but not always; I try to buck this trend. Terrain has to be practical and easily transported and, I agree, is often symbolic rather than a piece of scenic modelling.
The point of the post was to ask: can rankings predict the outcome of a competition or are they a record of events gone by?