Wednesday, April 13, 2011

Setting Lineups in Daily Leagues

There's always the temptation to play matchups in daily leagues. And to the extent that you possess correct additional information (such as batter-vs-pitcher stats), it helps. But it's easy to overdo this kind of guessing, and, generally speaking, your best bet is to play your best players on any given day.

This post is about to get a little wonky, so stay with me. I'll try to walk through this as simply and clearly as possible.

Let's simplify the situation: we'll say it's a 5x5 roto league, and a player can have either a good day or a bad day. A good day is when a player contributes to two or more statistics (nets a run, RBI, home run, or steal, or raises your team average); a bad day is when he fails to do so. We'll use a binary outcome measure: a good day has a value of 1 and a bad day a value of 0.

Further, we'll simplify the situation to a choice of two players. Player A has a good day 60% of the time, and Player B has a good day 50% of the time. That means, on any given day, Player A has an expected value of .6 (60% of 1) and Player B an expected value of .5. This is what you'd expect, on average, from starting either player on any given day.

In a vacuum, Player A is better. Playing him on any given day is a better move than using Player B. But 40% of the time, Player A gets you nothing, and on those days it might be better to use Player B. So how often would you have to be right for switching players day by day to beat simply sticking with Player A? In other words, how often do you have to guess correctly to have a season value better than .6?

First off, let's assume the success of each player is independent: Player A's good days don't have any effect on whether or not Player B has a good day. In that case, 30% of the time it doesn't matter who you play, since both Player A and Player B will have good days (0.6*0.5=0.3). And 20% of the time, it doesn't matter because both Player A and Player B will have bad days (0.4*0.5=0.2; one minus the probability of a good day for each player, or 1-0.6 for A and 1-0.5 for B). The remaining 50% of the time, exactly one of Player A and Player B will have a good day.
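If it helps to see the arithmetic laid out, here's a minimal sketch of those three cases, assuming (as above) that the two players' good days are independent:

```python
# Joint probabilities for the two-player setup, assuming independence.
p_a = 0.6  # Player A's chance of a good day
p_b = 0.5  # Player B's chance of a good day

both_good = p_a * p_b                    # doesn't matter who you start
both_bad = (1 - p_a) * (1 - p_b)        # also doesn't matter
exactly_one = 1 - both_good - both_bad  # your pick decides the outcome

print(both_good, both_bad, exactly_one)
```

Those three numbers come out to 0.3, 0.2, and 0.5, matching the breakdown above.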

[Before we continue, I need to define an expected payoff. An expected payoff is simply the sum, over all possible outcomes, of each outcome's value multiplied by the probability that it happens. For example, let's say you have a raffle of 100 tickets, and you buy one for a dollar. The prize for the ticket drawn is $200. That means if your ticket is drawn, you net $199 ($200-$1), and if not, you simply lose one dollar. There's a 1% chance your ticket is the one out of one hundred that wins (1/100=.01). The other 99% of the time, you don't win anything (1-.01=.99). The expected payoff for the raffle is then .01($199)+.99(-$1)= $1. So you can expect each one-dollar ticket to net you exactly one dollar. If you think this through, that makes sense. If you bought all 100 tickets, you'd spend $100 but be guaranteed the $200 prize, so for each dollar you spend you get that dollar plus one more back. Hence, the expected payoff is $1.]
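The raffle calculation, written out as a few lines of code (same numbers as the example above):

```python
# Expected payoff of the raffle: 100 tickets at $1 each, one $200 prize.
n_tickets = 100
price = 1
prize = 200

p_win = 1 / n_tickets        # 1% chance your ticket is drawn
payoff_win = prize - price   # net +$199 if you win
payoff_lose = -price         # net -$1 otherwise

expected_payoff = p_win * payoff_win + (1 - p_win) * payoff_lose
print(expected_payoff)
```

This prints 1.0: each one-dollar ticket is worth an expected dollar of profit.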

Continuing with the problem at hand: for the half of the days where your choice doesn't matter, your expected payoff is 0.3 (you score 1 on the 30% of days when both players are good, and 0 on the 20% when both are bad). For the remaining 50% of games, you have to guess right often enough to reach a total expected value of 0.6 (which is what you would get just playing Player A every day). That means you have to get at least an additional 0.3 out of the remaining 50% of games.

Since you have to get an expected .3 points out of 50% of the games, the ratio is .3/.5, or 60%. You have to guess right 60% of the time on the remaining half of the games just to break even. If you have enough information to tilt the odds that far in your favor, then bully for you. But chances are, you don't. In that case, you are going to guess right closer to 50% of the time, in which case your expected payoff for that half of the games is 0.25 (0.5*0.5). That means, if you try to switch players based on matchups, your expected payoff for a season is 0.55, which is meaningfully worse than the 0.6 you'd get just starting Player A every day.
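Putting the break-even logic together in one place (a sketch using the same 60%/50% numbers):

```python
# Break-even guess rate for the Player A (60%) vs. Player B (50%) pair.
p_a, p_b = 0.6, 0.5

both_good = p_a * p_b                               # banked no matter who you start
contested = 1 - both_good - (1 - p_a) * (1 - p_b)  # games where the pick matters

break_even = (p_a - both_good) / contested  # guess rate needed to match Player A
coin_flip = both_good + 0.5 * contested     # season payoff if you guess at 50%

print(break_even, coin_flip)
```

This prints 0.6 and 0.55: you need to be right 60% of the time on the contested games, and coin-flip guessing leaves you at 0.55 instead of 0.6.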

These numbers get more extreme the bigger the difference between the two players. If Player A had a success rate of 80%, then 40% of the time they both have good days and 10% of the time they both have bad days (giving you 0.4 for half the games). For the remaining half, you'd then have to guess right 80% of the time by the same math. If Player A has a success rate of 80% and Player B has a success rate of 40%, then 32% of the time they both have good days and 12% of the time they both have bad days. In that case, you would be guessing on the remaining 56% of the games. To get all the way up to .8, you'd have to guess right 86% of the time for those 56% of games. If you're only going to guess right 50% of the time, then your payoff for those remaining games is .28, leaving you with a total payoff of 0.6, well short of the 0.8 you'd get from just starting Player A.
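The same calculation generalizes to any pair of success rates. Here's a sketch of that generalization as two small functions (the function names are mine, not from the post):

```python
def break_even_rate(p_a, p_b):
    """Fraction of contested games you must call correctly to match
    simply starting Player A (the better player) every day."""
    both_good = p_a * p_b
    contested = 1 - both_good - (1 - p_a) * (1 - p_b)
    return (p_a - both_good) / contested

def matchup_payoff(p_a, p_b, guess_rate=0.5):
    """Expected per-game payoff when you guess on the contested games."""
    both_good = p_a * p_b
    contested = 1 - both_good - (1 - p_a) * (1 - p_b)
    return both_good + guess_rate * contested

print(break_even_rate(0.8, 0.5))  # the 80%/50% case
print(break_even_rate(0.8, 0.4))  # the 80%/40% case
print(matchup_payoff(0.8, 0.4))   # coin-flip guessing in the 80%/40% case
```

These come out to 0.8, roughly 0.857 (the "86%" in the text), and 0.6, matching the figures above.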

OK, that's a lot of numbers, and it might be hard to follow. There's a lot of probability theory inherent in this analysis: expected payoffs and so on. But the basic idea is this: with two players, you generally have to be better, and often much better, than 50% right on your start/sit decisions before playing matchups beats simply starting the better of the two players. And the better that player is, or the bigger the gap between the two, the harder it gets to improve on just playing him every day.
