I’ve written in the past about using machine learning to predict a pitcher’s next pitch, which is a particularly difficult problem. This post tackles the topic again. Spoiler alert: I didn’t find the Rosetta Stone of pitch prediction, though I did uncover a few interesting items.
About the Data and Approach
The dataset comprises (almost) all 30K of Max Scherzer’s MLB pitches, including the count and sequence of every pitch, which is exactly what we need for the chosen approach: a Hidden Markov Model (HMM). An HMM pairs observed states with the latent states that generate them, and the fitted model describes how states transition from one to another.
Here’s how an HMM works in this application of predicting the next pitch.
We first calculate transition probabilities, which is how often a certain type of pitch is followed by another type of pitch. For example, after throwing a fastball, there might be a 50% probability of throwing another fastball, a 25% probability of throwing a curveball, a 15% probability of throwing a slider, and a 10% probability of throwing a change-up.
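Estimating transition probabilities amounts to counting pitch-to-pitch pairs and normalizing. Here is a minimal sketch; the pitch labels ("FF", "CU", etc.) and the short toy sequence are hypothetical, not Scherzer’s actual data:

```python
from collections import Counter, defaultdict

def transition_probs(pitches):
    """Estimate P(next pitch | current pitch) from an ordered list of pitch types."""
    counts = defaultdict(Counter)
    # Tally each consecutive (current, next) pair
    for current, nxt in zip(pitches, pitches[1:]):
        counts[current][nxt] += 1
    # Normalize each row so the probabilities out of a pitch sum to 1
    return {
        p: {q: n / sum(c.values()) for q, n in c.items()}
        for p, c in counts.items()
    }

# Hypothetical sequence: fastball (FF), curveball (CU), slider (SL), change-up (CH)
probs = transition_probs(["FF", "FF", "CU", "FF", "SL", "FF", "FF", "CH", "FF"])
```

In this toy sequence, `probs["FF"]["FF"]` comes out to 0.4: two of the five pitches thrown after a fastball were another fastball.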
We next calculate emission probabilities. Given a certain pitch, the emission probability returns the likelihood of a certain count (e.g. 3-2, 1-2). For example, given that the pitch was a fastball, what is the probability that the count was 3-1?
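Emission probabilities follow the same counting pattern, this time pairing each pitch with the count it was thrown in. A sketch with hypothetical data:

```python
from collections import Counter, defaultdict

def emission_probs(pitches, counts):
    """Estimate P(count | pitch type) from paired pitch/count observations."""
    tallies = defaultdict(Counter)
    for pitch, count in zip(pitches, counts):
        tallies[pitch][count] += 1
    # Normalize within each pitch type
    return {
        p: {c: n / sum(t.values()) for c, n in t.items()}
        for p, t in tallies.items()
    }

# Hypothetical observations: three fastballs and one curveball with their counts
em = emission_probs(["FF", "FF", "CU", "FF"], ["0-0", "3-1", "1-1", "3-1"])
```

Here `em["FF"]["3-1"]` is 2/3: two of the three fastballs in the toy data came in a 3-1 count.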
Lastly, we use the Viterbi algorithm to find the most likely sequence of hidden states. In this instance, a hidden state would be a “pitch.” Essentially the idea is to see if, given the count and the predicted previous pitch, the Viterbi algorithm can reproduce the pitches thrown.
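A compact Viterbi implementation is sketched below. The two-pitch model and all of its probabilities are hypothetical, chosen only to keep the example small:

```python
def viterbi(observations, states, start_p, trans_p, emit_p):
    """Most likely sequence of hidden states (pitches) for the observed counts."""
    # V[t][s] = (probability of the best path ending in state s at step t, backpointer)
    V = [{s: (start_p[s] * emit_p[s].get(observations[0], 0.0), None)
          for s in states}]
    for obs in observations[1:]:
        row = {}
        for s in states:
            # Best previous state to transition from, and the score of that path
            prev, score = max(
                ((p, V[-1][p][0] * trans_p[p].get(s, 0.0)) for p in states),
                key=lambda pair: pair[1],
            )
            row[s] = (score * emit_p[s].get(obs, 0.0), prev)
        V.append(row)
    # Trace back from the most probable final state
    state = max(states, key=lambda s: V[-1][s][0])
    path = [state]
    for row in reversed(V[1:]):
        state = row[state][1]
        path.append(state)
    return list(reversed(path))

# Hypothetical two-pitch model: fastball (FF) and curveball (CU)
states = ["FF", "CU"]
start_p = {"FF": 0.7, "CU": 0.3}
trans_p = {"FF": {"FF": 0.6, "CU": 0.4}, "CU": {"FF": 0.8, "CU": 0.2}}
emit_p = {"FF": {"0-0": 0.5, "0-1": 0.5}, "CU": {"0-0": 0.2, "0-1": 0.8}}
path = viterbi(["0-0", "0-1"], states, start_p, trans_p, emit_p)
```

For long sequences like a full career of pitches, production implementations work in log probabilities to avoid numerical underflow; this sketch multiplies raw probabilities for readability.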
Below are Scherzer’s pitch transition probabilities for fastball, change-up, curveball, slider, and “other.” (“Other” is a catchall for the small number of cutters and sinkers he throws.) As we know, Scherzer goes to his fastball a lot. After any given pitch, he is pretty likely to follow up with a fastball.
The chart below displays Scherzer’s pitch emission probabilities, which, given a certain pitch, return the likelihood of a certain count. For example, many of the dots on the fastball line are comparatively close together, while the dots on the curveball line are more spread out. Therefore, given only the count, we can more easily predict whether the pitch was a curveball than whether it was a fastball.
I fed the Viterbi algorithm the sequence of counts for all of Scherzer’s pitches in his career. The algorithm uses the transition and emission probabilities to predict the sequence of pitches. (This approach isn’t perfect, as the last pitch in a game won’t necessarily be correlated with the first pitch of the following game, but this small glitch only affects the first observation in each new game.)
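Scoring the reconstruction then reduces to comparing the predicted pitch sequence against the actual one, position by position. A minimal sketch; the two short lists are hypothetical stand-ins for the real predicted and actual sequences:

```python
def sequence_accuracy(predicted, actual):
    """Fraction of positions where the predicted pitch matches the actual pitch."""
    matches = sum(p == a for p, a in zip(predicted, actual))
    return matches / len(actual)

# Hypothetical example: two of three predictions are correct
acc = sequence_accuracy(["FF", "CU", "FF"], ["FF", "FF", "FF"])
```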
Well, the algorithm has pretty poor accuracy, under 50%, mostly predicting a fastball in every scenario. Some adjustments could likely improve performance, though low accuracy is to be expected on a problem this hard.