What are the Odds? Modeling Win Probability in League of Legends

My personal Holy Grail of League of Legends statistics has always been an accurate, theoretically sound predictor of in-game win probability. Some of my earliest explorations in advanced LoL stats came in the form of win probability modeling, and I’ve always kept a close eye on the attempts others were making in that space, but up until now I haven’t had the necessary data structure, resources, or time to put together my own model.

Shortly after joining Esports One as the Head of Esports Data Science, I knew I’d found the right environment and opportunity to make this happen. I’m now excited to share an early look at the beta version of my win probability model.

Scroll down to see an example of the model in action!

I’m not going to go into technical details aside from saying the core model is a logistic regression—most of the details will remain proprietary—but in a moment I’ll share an example of how the model interpreted one of the games from the LCS 2019 Summer Finals.

I knew the time was right to start talking about this model publicly after I spent part of the LCS Finals sitting with Tyler “FionnOnFire” Erzberger and “field testing” the model’s predictions. Every so often, I would plug game state numbers from the current game into a calculator and ask Fionn to make a prediction about which team was favoured to win, and at what percentage. Time after time, the calculator landed within 5 percentage points of Fionn’s estimate! That outperformed even my own expectations, and I think it says something about Fionn’s understanding of LoL, too!

My model is not only built on sound statistical foundations and a comprehensive understanding of the underlying data; it also captures the nuances of pro LoL and the complex interrelationships between its game variables. I’ve controlled for factors like game time, the different types of elemental drakes, towers, Baron Nashor, Elder Dragon, Inhibitors, and much more, each reflected according to the way it influences the game.
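
To give a concrete sense of the general shape, here is a minimal sketch of what a logistic regression over game-state snapshots might look like. Everything below is illustrative: the file name, feature columns, and training setup are hypothetical stand-ins, not the real model’s proprietary inputs.

```python
# A minimal, illustrative sketch of a logistic regression win probability
# model. The file name, column names, and feature set are hypothetical
# stand-ins, not the real model's proprietary inputs.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# One row per game-state snapshot (e.g. per minute), labeled with the
# eventual outcome of the game from the blue side's perspective.
snapshots = pd.read_csv("game_state_snapshots.csv")

features = [
    "game_minute",      # current game clock, in minutes
    "gold_diff",        # blue minus red total gold
    "tower_diff",       # blue minus red towers destroyed
    "infernal_drakes",  # one count column per elemental drake type...
    "mountain_drakes",
    "cloud_drakes",
    "ocean_drakes",
    "baron_kills",      # Baron Nashor kills
    "elder_kills",      # Elder Dragon kills
    "inhib_diff",       # inhibitors taken, blue minus red
]

X = snapshots[features]
y = snapshots["blue_side_won"]  # 1 if blue side won the game, else 0

model = LogisticRegression(max_iter=1000)
model.fit(X, y)

# Predicted blue-side win probability for any game state:
some_state = snapshots[features].iloc[[0]]
print(model.predict_proba(some_state)[0, 1])
```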

When you put it all together and apply it to Game 4 of the LCS Finals between Cloud9 and Team Liquid, one of the most hotly contested games of the series, you get a data visualization like this:


[Chart: win probability over the course of the game, Cloud9 vs. Team Liquid, Game 4 of the LCS 2019 Summer Finals]

The chart doesn’t indicate every event, but I highlighted certain key moments to illustrate where the swings in probability came, and in some cases where there wasn’t a swing.

When Cloud9 picked up First Blood, it moved the needle in their favour, giving them their highest probability of the game. For the next several minutes, the game felt subjectively like it was moving further in Cloud9’s direction, but in reality Team Liquid was holding the game quite even. Around 12 to 13 minutes, Cloud9 took down a tower, but Team Liquid equalized with a mountain drake, again keeping the game’s overall state fairly close.

The first big swing came around 20 minutes, when Team Liquid won a team fight and took a tower, then followed it up with a cloud drake a couple of minutes later. This brought Liquid’s probability up to around 85%.

Cloud9 fought back with a team fight win of their own, dropping Liquid’s win probability to around 65% at 27:00, before Team Liquid dropped the hammer in a sequence from 28:00 to 30:00, taking a mountain drake, multiple kills, a Baron, and an Inhibitor. By 30:00, Team Liquid were nearly 95% favourites to win, and they closed out the game expeditiously from there.

The value of modeling win probability

The ability to predict and chart win probability is enormously valuable to LoL data science, and not only as a way to generate predictions and betting models (though that’s the context most analysts working in this space start from). If we only think about the betting industry, we’re drastically underselling the value of this type of analysis. There are substantial practical applications for understanding the game itself and enabling further analysis of teams, players, and the metagame.

Consider a common topic of discussion in LoL over the last few years: the relative usefulness of each type of elemental drake. With a well-specified win probability model, we can accurately measure the difference each drake type makes to a team’s probability of winning, depending on the game state at the time it was killed. (I will probably do a separate post digging specifically into this topic, since I find it personally fascinating.)
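
To make that concrete, continuing the hypothetical sketch from earlier, you could hold a game state fixed and compare the predicted probability with and without one extra drake of each type. Again, the feature names and numbers here are invented for illustration:

```python
# Continuing the hypothetical sketch above: compare the marginal win
# probability added by one drake of each type at a fixed game state.
import pandas as pd

baseline = pd.DataFrame([{
    "game_minute": 20, "gold_diff": 1500, "tower_diff": 1,
    "infernal_drakes": 0, "mountain_drakes": 0,
    "cloud_drakes": 0, "ocean_drakes": 0,
    "baron_kills": 0, "elder_kills": 0, "inhib_diff": 0,
}])
base_prob = model.predict_proba(baseline[features])[0, 1]

for drake in ["infernal_drakes", "mountain_drakes",
              "cloud_drakes", "ocean_drakes"]:
    state = baseline.copy()
    state[drake] += 1  # add exactly one drake of this type
    prob = model.predict_proba(state[features])[0, 1]
    print(f"{drake}: {prob - base_prob:+.3f} win probability")
```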

We can also extend some of my 2015 work on evaluating teams’ performance in different phases of the game, adding much more nuance to an assessment of how well they perform in the early game compared to the mid or late game, for example. Since this model isn’t locked to the 15:00 mark, we can segment the changes in win probability at any time interval we choose, and paint a much more useful picture of expected vs. actual outcomes.
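
As a hypothetical sketch of what that segmentation could look like, here is one way to score each phase of a game by the change in predicted win probability across it. The phase boundaries and the probability series are invented for illustration:

```python
# Hypothetical sketch: score each phase of a game by the change in a team's
# predicted win probability across that phase.
def phase_swings(probs, phases=((0, 15), (15, 25), (25, None))):
    """probs: list of per-minute win probabilities, index = game minute."""
    last = len(probs) - 1
    swings = []
    for start, end in phases:
        end = last if end is None else min(end, last)
        if start >= last:
            break
        swings.append(round(probs[end] - probs[start], 3))
    return swings

# e.g. a team that fell behind early but won the mid and late game:
probs = [0.50, 0.48, 0.45] + [0.40] * 13 + [0.55] * 10 + [0.80] * 9 + [0.99]
print(phase_swings(probs))  # [-0.1, 0.15, 0.44] -> early, mid, late swings
```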

A win probability model unlocks these types of applications because it is the most fundamentally solid way we can measure a game of League of Legends. What I mean is that, as I’ve said on the stage at the Sloan Sports Analytics Conference and elsewhere, LoL really only has one reliable “dependent variable”. There’s only one way to win a game of LoL: kill the enemy Nexus. Literally every other component of the game must be interpreted in light of how much it helps or hinders in killing the Nexus. This is different from any traditional sport, where there is always an intermediary dependent variable—runs, goals, points, lap times—that is directly, fundamentally tied to the win/loss outcome.

Statistically, this means that we can’t create a measurement chain to smaller components of the game unless we can draw a clean statistical line from a game action all the way through to the win/loss outcome. In baseball, it’s possible to statistically link the outcome of a specific pitch to the effect it has on the likelihood of giving up a run. Since runs = wins, that model is very effective and (relatively speaking) uncomplicated. In League, we must take the further step of relating a game action (say, a champion kill) all the way to the eventual game outcome. That drastically broadens the complexity of what we’re modeling, and we haven’t even begun touching on the snowball effect or other complicating factors yet!

A reliable, robust win probability model becomes the anchor for further analyses, so it’s crucially important that we get the win probability right. To ensure that the model functions well, we must feed it clean, well-structured data, and we must understand every nuance of every input variable. The model can’t have theoretical gaps like failing to recognize inhibitor respawns; it can’t ignore key factors like the current time on the game clock; it can’t treat every point of gold as equal at all times. It must hold up under any combination of game circumstances.
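
In practice, much of that care lives in feature construction. Here is a hypothetical illustration of two of the points above: interacting gold with the game clock so a gold lead is not worth the same at all times, and counting an inhibitor only while it is actually down. The function and field names are invented:

```python
# Hypothetical feature construction illustrating two of the points above.
INHIB_RESPAWN_SECONDS = 300  # inhibitors respawn five minutes after falling

def build_features(state):
    """state: dict-like snapshot of one moment in a game."""
    minute = state["game_seconds"] / 60.0

    # A point of gold is not worth the same at all times: interacting the
    # gold difference with the clock lets the model learn a time-varying
    # value for a gold lead.
    gold_x_time = state["gold_diff"] * minute

    # An inhibitor only matters while it is down; once it respawns, it
    # should stop contributing to the game state.
    inhibs_down = sum(
        1 for killed_at in state["enemy_inhib_kill_times"]
        if state["game_seconds"] - killed_at < INHIB_RESPAWN_SECONDS
    )

    return {"minute": minute,
            "gold_diff": state["gold_diff"],
            "gold_x_time": gold_x_time,
            "inhibs_down": inhibs_down}

# e.g. at 22:00, up 1,800 gold, with one enemy inhibitor taken at 18:00:
print(build_features({"game_seconds": 1320, "gold_diff": 1800,
                      "enemy_inhib_kill_times": [1080]}))
```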

What comes next?

With standards so high, a model like this one can never be completely, 100% perfected. Aside from keeping the model’s coefficients up to date with new data as the game itself changes, I want to continue to broaden the model to capture more nuance. For example, ideally I’d like to be able to react to champion power curves and matchups to better measure the influence of team compositions. I believe that adjusting for team composition in an incomplete way could be more harmful than just not factoring it in at all, so I’m going to approach this challenge with caution, but some level of champion handling should be possible.

Another way to extend the model is to factor in which teams are playing, favouring teams that have proven their superiority in past matches. In other words, instead of starting the model at an even 50/50, we might give SK Telecom T1 a pre-game advantage if they are facing the Jin Air Green Wings.
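
One hypothetical way to implement that, sketched below, is to convert a pre-game team-strength estimate (say, an Elo-style rating difference) into a log-odds offset and combine it with the in-game model’s output, so the prediction starts from an informed prior rather than 50/50. All of the numbers and function names here are invented for illustration:

```python
import math

def pregame_prior(rating_diff, scale=400.0):
    """Hypothetical: turn an Elo-style rating difference (team A minus
    team B) into a pre-game win probability for team A."""
    return 1.0 / (1.0 + 10.0 ** (-rating_diff / scale))

def adjusted_probability(in_game_prob, prior):
    """Shift the model's in-game output by the pre-game prior in log-odds
    space, instead of starting every game at an even 50/50."""
    logit = math.log(in_game_prob / (1.0 - in_game_prob))
    prior_logit = math.log(prior / (1.0 - prior))
    return 1.0 / (1.0 + math.exp(-(logit + prior_logit)))

# e.g. a hypothetical 200-point rating edge for SK Telecom T1 over
# Jin Air Green Wings gives SKT roughly a 76% pre-game probability:
print(pregame_prior(200))                              # ~0.76
print(adjusted_probability(0.50, pregame_prior(200)))  # an even game state still favours SKT
```

Combining in log-odds space keeps the blended prediction properly bounded between 0 and 1, and the prior’s influence naturally shrinks once the in-game evidence becomes lopsided.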

The model also needs to be recalculated using second-by-second data, instead of minute-by-minute as I’ve done with this beta. I don’t expect this change to produce any real difference in the specification of the model, but it may allow greater precision in separating the effects of some events, and of course it will generate more fluid prediction lines. To get this per-second data, I’ll rely on a combination of official live data feeds (which will only be available for a few pro leagues) and Esports One’s computer vision technology.

Even though there’s still more work to do, I’m incredibly excited about what this model has accomplished so far. A comprehensive treatment of win probability unlocks a whole world of analytical potential. Building this model has been a dream of mine for a long time, so I’m proud of these results, and I’m thankful for the opportunity to work within Esports One to make it happen.

One thought on “What are the Odds? Modeling Win Probability in League of Legends”

  1. I wouldn’t give one team more weight at the beginning of each game based upon their consistency against a certain team in the past, as this may inaccurately weight the per-game data. (Teams change and grow outside of the quest to destroy the enemy Nexus, and the meta changes too.) Rather, I would suggest a separate module that uses consistency, or the lack thereof, against specific opponents to help predict winning or losing a series overall, based upon past encounters between teams.

    I’m excited to see the champion power curve effect on the favorability calculator. In fact, I would be more excited to see how it weights each champion throughout the game in general, as this could help Riot/Tencent balance their champions more effectively, if they care to do so. It seems like a great tool and a great opportunity for game improvement.
