Analytics: Improvement in the San Diego Chargers Offense

This author's attempt to create something akin to a cash flow analysis (starting cash, changes, ending cash) for the Chargers: starting with the 2012 value, then accounting for the value of players lost, the value of players gained (draft, free agency) and the changes in performance by players on the roster for both years, totaling to the ending 2013 value.



As we look forward to free agency, the draft and all the decisions facing the San Diego Chargers and other teams, I thought it would be interesting to revisit all the changes from last year to now, but using metrics as much as possible.  The idea was something very similar to a cash flow statement, outlined below:

  • Starting cash
  • Changes in cash (operations, financing and investments)
  • Ending cash

I have a subscription to Pro Football Focus (PFF) and wanted to use their points as the metric for such an analysis. The parallel "PFF flow" would look something like:

  • Ending PFF points as of 2012
  • Changes in points from then:  value of players lost, value of those gained (FA, draft) and changes in player performance on the roster
  • Ending PFF points as of 2013
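For the programmatically inclined, the roll-forward above is just simple bookkeeping. Here's a minimal Python sketch with made-up numbers (none of these are the actual Chargers figures):

```python
# Hypothetical sketch of the "PFF flow" roll-forward; all numbers are
# placeholders, not actual Chargers data.

def pff_flow(start, lost, gained, roster_change):
    """Roll a starting score forward: subtract departed players' value,
    add new arrivals (FA/draft), and apply returning players' change."""
    return start - lost + gained + roster_change

# A unit that starts at -0.38, loses -0.10 of value (a net gain, since
# the departed players scored negatively), adds 0.50 via free agency,
# and improves by 0.20 among returning players:
end = pff_flow(start=-0.38, lost=-0.10, gained=0.50, roster_change=0.20)
print(round(end, 2))  # 0.42
```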

So, I have done that and will show the results below for the offense.  There were several methodological obstacles to overcome that led me down an Excel wormhole from which I think I emerged largely intact.  In summary, the methodological problems were as follows:

  • PFF scores for players are helpful to compare players playing the same position (e.g. P. Manning vs. Philip Rivers) but not different positions (J.J. Watt vs. P. Manning).
  • Aggregate point totals for a position (e.g. a cumulative 401 points for all WRs vs. 248 for all HBs) do not connote aggregate offensive value (e.g. the WR position is not necessarily more important than the HB position).

In short, I could not just add the points up.  I did the following to circumvent these problems:

  • I tallied up players’ PFF points and gave each player a standard deviation score for his position (e.g. Rivers is 2.2 standard deviations above the mean for all QBs measured in 2013).
  • I weighted the players’ deviation scores for a team total based on snaps. For example, the HB position for SD is a weighted blend of the deviation score for Woodhead (44%), Mathews (42%) and Brown (14%).
  • I weighted each position (QB, RB, WR, etc.) based on its overall contribution to team offense by optimizing the weights against team-level metrics, specifically Football Outsiders' Offensive DVOA.
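For anyone who wants to replicate the first two steps, here is a small Python sketch. The snap shares for the three Chargers backs are from this post; the raw PFF grades are invented for illustration:

```python
# Steps 1 and 2: z-score each player against his position, then blend by
# snaps. The HB grades below are made up; only the snap shares are real.
from statistics import mean, pstdev

def z_scores(scores):
    """Standard-deviation score for each player vs. the position mean."""
    mu, sigma = mean(scores.values()), pstdev(scores.values())
    return {player: (s - mu) / sigma for player, s in scores.items()}

def team_position_score(z, snap_share):
    """Snap-weighted blend of the players' z-scores for one team."""
    return sum(z[player] * share for player, share in snap_share.items())

# Hypothetical league-wide HB grades (illustrative only)
hb_grades = {"Woodhead": 18.0, "Mathews": 2.0, "Brown": -1.0,
             "OtherHB1": 0.0, "OtherHB2": -4.0}
z = z_scores(hb_grades)
sd_hb = team_position_score(z, {"Woodhead": 0.44, "Mathews": 0.42, "Brown": 0.14})
```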

I can expand on my methods for anyone interested (assuming anyone is).  But here are some charts showing the type of analysis I was trying to do.


The blue bars are the metric I developed from the PFF data (my abbreviation is meant to connote PFF data, Weighted, Team, Offense) and the red bars are DVOA data expressed in standard deviations.  The -0.38 in 2012 means that San Diego's offense was 0.38 standard deviations worse than the mean offense in 2012.  The -0.65 for DVOA reads the same way: the team's total offensive DVOA was 0.65 standard deviations below the league mean in 2012.

The metrics are not identical but at least somewhat similar.  So what?  If I kept it at the team level, I would just be creating a new metric that is probably worse than existing team metrics at explaining team performance.  What I like about this metric is that its granularity goes down to the player level, so you can drill down to positions and players.

So here is the same data for San Diego, but providing aggregated position value for the offense.


The idea is that you can see the same change in aggregate at the team level (-0.379 to 1.254) but also the component parts of the offense that comprise that score.  DVOA and other team-level metrics typically don't allow that.

The obvious observations are as follows:

  • QB play dominated the offensive metrics: it dragged the score down last year and drove it up this year.
  • RB had a surprisingly large impact on the aggregate score, but it makes sense: the PFF metrics loved Danny Woodhead, and the HB position carried a fairly high weight at the team level, meaning that, according to this data, HB performance had a strong impact on team performance.
  • It is hard to read, but all of the metrics were negative last year, except a small positive score for WR.  In 2013, all were positive, except for a small negative score for TE.

The validity of these conclusions is clearly open to discussion, but I find it interesting to see what the data says.  The following table may help make clear some of the arithmetic.  Below is a chart showing San Diego's calculations for 2012 and 2013.


The WGT STDEV by POS column shows the team STDEV for a particular position (detail shown below), the WEIGHT column is the weighting given each position according to its importance in explaining overall offensive performance, and the final two columns are the product of STDEV x WEIGHT.

To emphasize, the WEIGHT column is the result of Solver (Excel's optimization tool) assigning weights to the various positions so as to maximize the correlation between this PFF-based data and DVOA at the team performance level.
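Outside Excel, the same Solver step can be reproduced with an off-the-shelf optimizer. Below is a sketch using scipy; the data is randomly generated, and the constraint that the weights be nonnegative and sum to one is my assumption, not necessarily how Solver was configured:

```python
# Solver analog: choose position weights to maximize the correlation
# between weighted PFF position scores and team DVOA (synthetic data).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
pos_scores = rng.normal(size=(32, 5))   # 32 teams x 5 position groups
true_w = np.array([0.5, 0.2, 0.15, 0.1, 0.05])
dvoa_z = pos_scores @ true_w + rng.normal(scale=0.1, size=32)

def neg_corr(w):
    # Negative correlation, since minimize() minimizes
    return -np.corrcoef(pos_scores @ w, dvoa_z)[0, 1]

res = minimize(neg_corr, x0=np.full(5, 0.2),
               bounds=[(0, None)] * 5,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
weights = res.x  # learned position weights
```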

Below is the calculation for the HBs, highlighted in purple above:


Most of the above should be intuitive, except the STDEV column.  That is calculated by taking the player's score and comparing it to the mean of all the scores for players in the same category (e.g. HB).  So Danny Woodhead was 2.4 standard deviations above the mean, which is a very high score; he got 44.2% of the snaps and contributed 1.07 of the total 1.14 team score.
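As a sanity check on that arithmetic (the inputs are the rounded figures quoted here, so the product comes out slightly off the 1.07 in the table):

```python
# Woodhead's line item: z-score times snap share = contribution to the
# team HB score. Inputs are the rounded values quoted in the text.
woodhead_z, snap_share = 2.4, 0.442
contribution = woodhead_z * snap_share
print(round(contribution, 2))  # 1.06 (vs. 1.07 in the table; rounding)
```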

Finally, I want to show a "roll forward."  Below is the waterfall chart for the HB position.  It starts with last year's score, takes away the players that left, adds in free agents (no drafted players at HB) and ends with this year's score.


This starts with last year's score of 0.33, or a third of a standard deviation above the mean.  The players in the LOST category included Jackie Battle and Curtis Brinkley (when playing HB).  They had negative scores, so losing them counted as a gain.  Then comes the FA bounce, driven by Danny Woodhead's extraordinary year.  ROSTER is the change in performance by players still on the roster, Brown and Mathews; their scores both went down, so a bit of a loss.  We did not have any draftees at HB, so nothing there, ending at 1.14, the score for 2013.  Our HB production was a bit over a standard deviation above the mean, up from 0.33, driven mostly by Woodhead.
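The waterfall is just a running sum. The start (0.33) and end (1.14) are from this post; the three middle bars are hypothetical magnitudes chosen only to show the bookkeeping, since the exact component values live in the chart, not the text:

```python
# HB waterfall as a running sum. Start/end are from the post; the LOST,
# FA and ROSTER magnitudes are illustrative guesses.
steps = [("2012 score", 0.33),
         ("LOST (Battle, Brinkley)", +0.05),  # negative players leaving helps
         ("FA (Woodhead)", +0.90),
         ("ROSTER (Mathews, Brown)", -0.14),
         ("DRAFT", 0.0)]
total = sum(value for _, value in steps)
print(round(total, 2))  # 1.14, the 2013 HB score
```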

Here is a summary of all players, showing the flow from their own score to the impact on the aggregate team score.



  • POS: the player's position (players can play multiple positions, especially on the OL)
  • Player name
  • Player STDEV: the player's score derived from his PFF score, expressed as standard deviations away from the mean of all players at that position in that year
  • Snap %: the portion of the team's snaps at that position that the player played
  • Player STDEV x Snap %: the player's contribution to the team score for the position
  • POS Weight: the weight the position (e.g. QB, RB, etc.) received based on how the position's score relates to total team offensive performance
  • Player Impact: (Player STDEV x Snap %) x POS Weight (~importance to total team offense) = the impact the player had on total team offense
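The last column is a straight product of the others. In code, with hypothetical example inputs:

```python
# Player Impact = (Player STDEV x Snap %) x POS Weight
def player_impact(player_stdev, snap_pct, pos_weight):
    """A player's contribution to the team's total offensive score."""
    return player_stdev * snap_pct * pos_weight

# Hypothetical: a player 2.0 standard deviations above his position's
# mean, taking 80% of the snaps at a position weighted 0.30:
impact = player_impact(2.0, 0.80, 0.30)  # 0.48
```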

In this case, unsurprisingly, Rivers had a huge impact: 0.913 out of the team's 1.254.  What was troubling was how much, according to this data, Gates hurt the team: the TE position is pretty important, and he had a pretty bad year.

So, in conclusion, this data correlates well with metrics that explain team level performance (DVOA) but also allows very granular analysis within and across teams and positions.  My next steps, as I see them, are:

  • Refine the approach to improve the correlation to team-level metrics and ensure intuitive, useful position weightings in the process.  Optimizing this way can lead to funky results.  I ought to be able to accomplish similar goals with linear regression analysis, which could lead to materially different weights assigned to the positions.
  • Show the defensive data
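On the first point, the regression version is straightforward to try outside Excel. A sketch with synthetic data (the "true" weights are invented so the fit has something to recover):

```python
# Ordinary least squares of team DVOA on position scores, the regression
# alternative to the Solver run. Data is synthetic, for illustration.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(32, 5))                    # 32 teams x 5 positions
true_w = np.array([0.5, 0.2, 0.15, 0.1, 0.05])  # invented weights
y = X @ true_w                                  # synthetic "DVOA"
weights, *_ = np.linalg.lstsq(X, y, rcond=None)
# OLS weights are unique and come with standard diagnostics, at the cost
# of allowing negative weights unless you constrain them.
```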

One interesting thing was how dramatic the offensive improvement was across the board, save TE.  It would be interesting to see how often teams have had such dramatic changes with largely the same personnel.  This offense was driven by the same Rivers who struggled last year.  KC had a similar offensive improvement, but they also got a new, better QB, so it may be hard to separate the impact of the coaching there.  McCoy/Whisenhunt worked some serious magic.