The Waving Flag: Musing On ADLG & BHGS Rankings

Friday, 29 August 2025

Musing On ADLG & BHGS Rankings

Introduction

Every once in a while I check the UK Art de la Guerre (ADLG) ranking provided by the British Historical Games Society (BHGS). Not least because I find them statistically quite intriguing if somewhat strange. This week I decided to dig a little deeper and do more than just note my latest ranking.

How the rankings work

The rules are convoluted, closer to an algorithm than simple arithmetic:

  • Players receive points based on where they finish not on the result of individual games.
  • The points available differ by event: larger competitions offer more points.
  • The rankings only contain the results of events in the previous twelve months.
  • All results are included until a player's seventh event; from then on, only their six best scores count.

Until players have attended their seventh event, their ranking records both their performance and their attendance (a tracker). For example: two players can have different rankings not because of performance differences but simply because one player has attended more events.

More importantly, below seven events the ranking is a "warts and all" measure. Beyond that, poorer results get dropped and the ranking begins to represent the best of a player.
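The scoring rule described above can be sketched in a few lines of Python. This is a minimal illustration, not BHGS code: `ranking_score` and the example scores are my own.

```python
# Minimal sketch of the BHGS ranking rule as described above.
# Assumption: each entry in event_scores is the points awarded for
# a finishing position at one event.

def ranking_score(event_scores):
    """Return a player's ranking total for the last twelve months.

    Up to six events: every result counts (performance + attendance).
    Seven or more: only the six best scores count.
    """
    if len(event_scores) <= 6:
        return sum(event_scores)
    return sum(sorted(event_scores, reverse=True)[:6])

# Two players with identical per-event performance but different attendance
# get different rankings - attendance acts as a tracker below seven events:
print(ranking_score([40, 40, 40]))          # 120 - three events
print(ranking_score([40, 40, 40, 40, 40]))  # 200 - five events, ranked higher
```

Note how, once the seventh event arrives, the worst scores simply vanish from the total.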

Confused? I was and I still am. Read on for more detail and analysis. Be warned it's a longish read.

Rankings! What are they good for?

In an effort to understand what was going on, I put together some basic facts and figures from the rankings for the twelve months to 20 August 2025:

Metric                               Value
Total Attendance                     733
Number of active players             186
Attended 1 Event                     63 (33.9%)
Attended 1 to 6 Events               151 (81.2%)
Attended 7 or more Events            35 (18.8%)
Average Events per player            3.9
Most Events attended (one player)    15
Average score per round (All)        47.2
Maximum average score (per player)   93.7

I think this gives a decent overview of the rankings and raises some interesting points.

A tale of two halves

The first thing that struck me was how many players had only attended one event: 63 of 186 (33.9%) ranked players. I was also surprised at the very large majority that attended fewer than seven events: 151 of 186 (81.2%).

Looking a bit deeper, I saw that the 35 players (18.8%) who attended seven or more events accounted for 44.6% of the total attendance in the last twelve months. In this group the average attendance was 9.3 events compared to 3.9 for all players.
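As a quick sanity check, the quoted shares and averages are mutually consistent. This is just back-of-the-envelope arithmetic; the variable names are mine, not the BHGS's.

```python
# Checking the attendance arithmetic quoted above.
total_attendance = 733
play_a_lots = 35                 # players with 7+ events
occasionals = 186 - play_a_lots  # 151 players with 1-6 events

# 44.6% of total attendance comes from the play-a-lots...
play_a_lot_events = round(0.446 * total_attendance)   # ~327 events
avg_play_a_lot = play_a_lot_events / play_a_lots      # ~9.3, as quoted

# ...leaving the rest to the occasionals.
occasional_events = total_attendance - play_a_lot_events   # ~406 events
avg_occasional = occasional_events / occasionals           # ~2.7, as quoted

print(round(avg_play_a_lot, 1), round(avg_occasional, 1))
```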

So the pool of ranked players can be split into two groups. Both contribute roughly half the total attendance, but the rankings measure something quite different for each group.

There are the "occasionals" (1 to 6 events, 81.2% of players), for whom the ranking is basically an individual tracker, and the "play-a-lots" (7 to 15 events, 18.8% of players), for whom the ranking measures their best results.

The attendance data is best illustrated in these charts:

This clearly shows the 55:45 attendance split between players attending 1 to 6 events and those attending 7 to 15.

The above shows the importance of the 81% (151) of "occasional" players who attended fewer than seven events. Bear in mind six events is an average of one competition every two months. This group accounted for just over half (55.4%) the attendance at events; their low individual attendance being offset by their far greater numbers.

Who are the rankings for?

I'm not sure about the purpose and consistency of the current rankings. To summarise the data for the rankings dated 20 August 2025:

Group                  Occasionals                Play-a-lots
Events Attended        1 to 6                     7 or more
Size (Players)         151 (81.2%)                35 (18.8%)
Share of Attendance    55.4%                      44.6%
Average Events         2.7                        9.3
Ranking measures       All results & attendance   Best results

The split looks quite neat to me. Both groups are equally important to event attendance, if of very different sizes. I'm sure the numbers will vary with each issue of the rankings, but I doubt the pattern will change significantly.

However, it does raise the question of which group the rankings best serve. I can't see a way they serve either group well, and wonder if something based on average performance might serve both groups better.

Perhaps this is why the international ADLG rankings use the Elo system, which uses all game results (not event placings) to produce a rolling ranking. Of course, the current system is simple to maintain, and an Elo ranking would require some form of database.
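For contrast, here is a minimal Elo-style update. The formula and K-factor of 32 are textbook Elo, not necessarily what the international ADLG rankings actually use; `elo_update` is my own illustrative name.

```python
# A minimal, textbook Elo update - illustration only, not ADLG's code.
def elo_update(rating_a, rating_b, score_a, k=32):
    """Return updated (rating_a, rating_b); score_a is 1, 0.5 or 0."""
    expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
    delta = k * (score_a - expected_a)
    return rating_a + delta, rating_b - delta

# Every single game moves both ratings, win or lose - unlike the BHGS
# system, where a poor event can simply be dropped from the best six.
print(elo_update(1500, 1500, 1))   # evenly matched: winner gains 16 points
```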

More is better?

I've always felt that ADLG is a game, like other wargames, where players improve the more often they play. That's certainly been my experience. As the chart below shows, the data seem to bear this out:

I was initially tempted to conclude that the average score for "occasional" players attending 1 to 5 events is fairly flat (between 32 and 38), and that it's only once players rack up six or more events (and then join the "play-a-lot" group) that their average score starts to increase. I was also tempted by the notion that the increase is mainly due to better play. On reflection, I doubt the latter is the case.

From six events on, a player's ranking stands a chance of rising: each additional event offers an opportunity to improve on a low score. It's possible, but not guaranteed. Conversely, performing badly doesn't always have a negative effect. Only if a player has a run of poor results will their ranking decline, but this could take many months.

Viewed this way, the "algorithm" behind the rankings is a big factor behind the increase in average scores with event attendance. It's not just player performance. As more and more scores are dropped, the rankings take on a really different hue. They start to truly represent the "best a player can get" (with apologies to Gillette), not the average standard of play.
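This ratchet is easy to demonstrate: once only the best six scores count, an additional result can never lower the total, however poor it is. A quick Python sketch (`best_six` and the random scores are my own illustration):

```python
# Demonstrating the "ratchet": once only the best six scores count,
# adding another event can only raise the total or leave it unchanged.
import random

random.seed(1)  # reproducible example

def best_six(scores):
    return sum(sorted(scores, reverse=True)[:6])

scores = [random.randint(20, 60) for _ in range(6)]
total = best_six(scores)
for _ in range(9):                  # nine further events, good or bad
    scores.append(random.randint(20, 60))
    new_total = best_six(scores)
    assert new_total >= total       # never falls, whatever the result
    total = new_total

print(total)
```

Only when twelve months elapse and old scores age out of the window can the total actually fall.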

Everywhere I looked I found anomalies. For example: there are four players with six events in the top twenty. Although they are not "play-a-lots", they are clearly good players and probably better than some above them who have dropped poor results. They are better than their ranking suggests.

On this basis I'm not really sure what faith to place in the "play-a-lot" rankings. There are good players out there who play often, are consistent and who do well in many events. For these players I'm sure the rankings are meaningful, but I'm not sure about the rest.

One way round the problem with the "play-a-lots" would be to compare annual UK rankings over a number of years or to compare the UK with the international rankings. I'm not sure I want to attempt either of these (before you ask); a step too far perhaps?

Closing remarks

Now that I've had a good poke around in the data (which I really did enjoy), the best I can say is that the rankings exist and are updated regularly. And, if you are one of the 80% of players who don't attend more than six events, they are a good way of tracking your results.

I know this sounds harsh given the work that goes into them, but they don't really stand up to rigorous scrutiny.

I have been told that I look into things like this far too deeply. Yet I know players who take a very keen interest in the rankings (bragging rights and all that). I've also heard it said that the UK rankings are just a bit of fun, but they are too misleading for that.

I will be treating them with a large pinch of the proverbial salt from now on, and possibly looking at the international Elo rankings a little more often.

2 comments:

Jonathan Freitag said...

Hey Martin, if I am understanding these data and your results, I suggest that the two player groups really ought to be stratified and analyzed separately, since each is measuring results differently. Also, how do you reconcile the claim that players tend to improve with the number of games played against the alternative that good players simply play more frequently, reinforcing success?

Since you have the underlying data, perhaps you could test different ranking systems to see if the resulting inferences can be improved? That might be an interesting exercise.

Vexillia said...

Thanks for the comments.

Firstly, I agree with you that the two groups should be treated separately. They are very different.

Secondly, the only way to reconcile the two is using experience and arithmetic:

[1] Experience tells me that playing often means you play quicker, are more decisive, and learn to avoid stupid mistakes, all of which increase the chance of a better score.
[2] Arithmetically, once only the top six scores count, players stand a chance of improving their top six every time they play another event, whether through luck, weaker opponents or a stronger army choice. The more they play, the more chances they have. If they don't improve, they stand still (in the short term). This may well be why the average score increases with events attended; it's not necessarily better play.
[3] If an average score across all events were used instead, it would rise and fall with each additional game.

Finally, I don't have the underlying data so testing other options isn't possible. Plus I've spent enough time on this as it is.

Salute The Flag

If you'd like to support this blog why not leave a comment, or buy me a beer.