Moneyball for software engineering

How metrics-driven decisions can build better software teams.

While sports isn’t always a popular topic in software engineering circles, if you find yourself sitting in a theater watching “Moneyball,” you might wonder whether the story’s focus on statistics and data can be applied to the technical domain. I think it can, and a growing number of software companies are starting to use Moneyball-style techniques to build better teams. Here, I’d like to offer a brief introduction to some of these ideas and how they might be applied.

The basic idea of the Moneyball approach is that organizations can use “objective” statistical analysis to be smarter and build more competitive teams, which means:

  • Studying player skills and contributions much more carefully and gathering statistics on many more elements of what players do to categorize their skills.
  • Analyzing team results and player statistics to discover patterns and better determine what skills and strategies translate to winning.
  • Identifying undervalued assets, namely those players, skills, and strategies that are overlooked but which contribute heavily to winning.
  • Creating strategies to obtain undervalued assets and build consistent, winning teams.

While smaller-market baseball teams such as the Oakland A’s were the original innovators of Moneyball, these techniques have now spread to all teams. The “old-school” way of identifying talent and strategies based on experience and subjective opinion has given way to a “new school” that relies on statistical analysis of situations, contributions, and results. Many sports organizations now employ statisticians with wide-ranging backgrounds to help evaluate players and prospects, and even to analyze in-game coaching strategies. Paul DePodesta, who was featured in both the book and the film, majored in economics at Harvard.

In software engineering, most of us work in teams. But few of us utilize metrics to identify strengths and weaknesses, set and track goals, or evaluate strategies. Like baseball teams of the past, our strategies for hiring, team building, project management, performance evaluation, and coaching are mostly based on experience. For those willing to take the time to make a more thorough analysis through metrics, there is an opportunity to make software teams better — sometimes significantly better.

Measure skills and contributions

It starts with figuring out ways to measure the variety of skills and contributions that individuals make as part of software teams. Consider how many types of metrics could be gathered and prove useful for engineering teams. For example, you could measure:

  • Productivity by looking at the number of tasks completed or the total complexity rating for all completed tasks.
  • Precision by tracking the number of production bugs and related customer support issues.
  • Utility by keeping track of how many areas someone works on or covers.
  • Teamwork by tallying how many times someone helps or mentors others, or demonstrates behavior that motivates teammates.
  • Innovation by noting the times when someone invents, innovates, or demonstrates strong initiative to solve an important problem.
  • Intensity by keeping track of relative levels and trends, such as increases or decreases in productivity, precision, or other metrics.
  • Effort by looking at the number of times someone goes above and beyond what is expected to fix an issue, finish a project, or respond to a request.

These might not match the kinds of metrics you have used or discussed before for software teams. But gathering and analyzing metrics like these for individuals and teams will allow you to begin to characterize their dynamics, strengths and weaknesses.
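
As a concrete illustration, here is a minimal sketch of how per-engineer tallies like these might be recorded for a given period; the class and field names are hypothetical, and the units are whatever your tools and observations can supply:

```python
from dataclasses import dataclass

# Hypothetical per-engineer metrics for one period (e.g., a quarter).
# Field names mirror the categories above; values come from issue
# trackers, support systems, or simple personal observation.
@dataclass
class EngineerMetrics:
    name: str
    productivity: float   # tasks completed or total complexity delivered
    precision: float      # production bugs or support issues traced back (lower is better)
    utility: float        # number of areas covered
    teamwork: float       # times helping or mentoring others
    innovation: float     # notable inventions or strong initiatives
    intensity: float      # trend in productivity or precision over time
    effort: float         # times going above and beyond expectations

# Example: rough tallies for one engineer.
alice = EngineerMetrics("Alice", productivity=42, precision=3, utility=4,
                        teamwork=7, innovation=2, intensity=1.2, effort=5)
```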

Measure success

In addition to measuring skills and contributions, applying Moneyball strategies requires a way to measure success. In sports, wins and losses make that simple. In software, there are many ways a team may be judged to succeed or fail, depending on the kind of software it works on, such as:

  • Looking at the number of users acquired or lost.
  • Calculating the impact of software enhancements that deliver benefit to existing users.
  • Tracking the percentage of trial users who convert to customers.
  • Factoring in the number of support cases resulting from user problems.

From a Moneyball perspective, measuring the relative success of software projects and teams is a key step to an objective analysis of team dynamics and strategies. In the end, you’ll want to replicate the patterns of teams with excellent results and avoid the patterns of teams with poor results.
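
As one possible illustration, a team-level score that supports relative comparisons might be sketched like this; the signal names and weights are assumptions for the example, not a prescribed formula:

```python
# A rough, hypothetical success score for a team or project over a period.
# The weights are placeholders; the point is only to combine signals like
# those above into a single number that supports relative comparisons.
def success_score(users_gained: int, users_lost: int,
                  enhancement_impact: float, trial_conversion_rate: float,
                  support_cases: int) -> float:
    return (1.0 * (users_gained - users_lost)
            + 2.0 * enhancement_impact
            + 50.0 * trial_conversion_rate
            - 0.5 * support_cases)

# Example: two teams compared on the same scale.
team_a = success_score(300, 40, enhancement_impact=12.0,
                       trial_conversion_rate=0.18, support_cases=25)
team_b = success_score(120, 15, enhancement_impact=30.0,
                       trial_conversion_rate=0.25, support_cases=10)
print(team_a, team_b)
```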

Focus on relative comparisons

One important Moneyball concept is that exact values don’t matter, relative comparisons do. For example, if your system of measurement shows that one engineer is 1% more productive than another, what that really tells you is that they are essentially the same in productivity. But if one engineer is 50% or 100% more productive than another, then that shows an important difference between their skills.

You can think of metrics as a system for categorization. Ultimately, you are trying to rate individuals or teams as high, medium, or low in a set of chosen categories based on their measured values (or pick any other relative scale). You want to know whether an engineer has high productivity compared to others or shows a high level of teamwork or effort. This is the information you need at any point in time or over the long term to identify patterns of success and areas for improvement on teams. For example, if you know that your successful teams have more engineers with high levels of innovation, intensity, or effort, or that 80% of the people on a poorly performing team exhibit a low level of teamwork, then those are useful insights.

Knowing that your metrics are for categorization frees you from needing exact, precise measurements. This means a variety of mechanisms can be used to gather metrics, including personal observation, without as much concern about whether the numbers are perfectly accurate. You can assume a certain margin of error (plus or minus 5-10%, for example) and still gather meaningful data for determining the relative level of people’s skills and a team’s success.
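
A rough sketch of that categorization step, assuming you simply bucket each value against the group median with a generous tolerance band, might look like this:

```python
from statistics import median

# Bucket raw metric values into relative categories. Exact values don't
# matter; we only care whether someone is well above, near, or well below
# the group median. The 25% band is an assumed tolerance that absorbs the
# measurement noise discussed above.
def categorize(values: dict[str, float], band: float = 0.25) -> dict[str, str]:
    mid = median(values.values())
    buckets = {}
    for name, value in values.items():
        if value > mid * (1 + band):
            buckets[name] = "high"
        elif value < mid * (1 - band):
            buckets[name] = "low"
        else:
            buckets[name] = "medium"
    return buckets

# Example: a 1% difference lands in the same bucket; a 100% difference doesn't.
print(categorize({"Alice": 42, "Bob": 41.5, "Carol": 84, "Dave": 20}))
# {'Alice': 'medium', 'Bob': 'medium', 'Carol': 'high', 'Dave': 'low'}
```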

Identify roles on the team

With metrics like those discussed above, you can begin to understand the dynamics of teams and the contributions of individuals that lead to success. A popular concept in sports that is applicable to software engineering is that of “roles.” On sports teams, you have players that fill different roles; for example, in baseball you have roles like designated hitters, pitchers, relievers, and defensive specialists. Each role has specific attributes and can be analyzed by specific statistics.

Roles and the people in those roles are not necessarily static. Some people are true specialists, happily filling a specific role for a long period of time. Some people fill one role on a team for a while, then move into a different role. Some people fill multiple roles. The value of identifying and understanding roles is not to pigeonhole people, but to analyze the makeup and dynamics of various teams, particularly with an eye toward understanding the mix of roles on successful teams.

By gathering and examining metrics for software engineering teams, you can begin to see the different types of roles that comprise the teams. Using sports parlance, you can imagine that a successful software team might have people filling a variety of roles, such as:

  • Scorers (engineers who have a high level of productivity or innovation).
  • Defenders (engineers who have a high level of precision or utility).
  • Playmakers (engineers who exhibit a high level of teamwork).
  • Motivators (engineers who exhibit a high level of intensity or effort).

One role or skill is not necessarily more valuable than another. Objective analysis of results may, in fact, lead you to conclude that certain roles are underappreciated. Having engineers who are strong in teamwork or effort, for example, may be as crucial to team success as having those who are highly productive. You might also come to realize that the perceived value of other roles or skills doesn’t match the actual results. For example, you might find that teams composed predominantly of highly productive engineers are not necessarily more successful if they are lacking in other areas, such as precision or utility.
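
A minimal sketch of mapping those ratings to roles might look like the following; the mapping simply mirrors the definitions in the list above and is an assumption for illustration, not a fixed taxonomy:

```python
# Assign roles from one engineer's per-metric ratings (high/medium/low),
# e.g., produced by applying the categorization idea metric by metric.
# An engineer can fill more than one role.
def roles_for(ratings: dict[str, str]) -> list[str]:
    roles = []
    if "high" in (ratings.get("productivity"), ratings.get("innovation")):
        roles.append("scorer")
    if "high" in (ratings.get("precision"), ratings.get("utility")):
        roles.append("defender")
    if ratings.get("teamwork") == "high":
        roles.append("playmaker")
    if "high" in (ratings.get("intensity"), ratings.get("effort")):
        roles.append("motivator")
    return roles or ["no standout role yet"]

# Example:
print(roles_for({"productivity": "high", "precision": "medium",
                 "teamwork": "high", "effort": "low"}))
# ['scorer', 'playmaker']
```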

Develop strategies to improve teams

With metrics to help you identify and validate the roles that people play on software teams, and thereby identify the relative strengths and weaknesses of your teams, you can implement strategies to improve those teams. If you identify that successful teams have certain strengths that other teams lack, you can work on strengthening those aspects. If you identify “undervalued assets,” meaning skills or roles that weren’t fully appreciated, you can bring greater attention to developing or adding those to a team.

There are a variety of techniques you can use to add more of those important skills and undervalued assets to your teams. For example, using Moneyball-like sports parlance, you can:

  • Recruit based on comparable profiles, or “comps” — profile your team, identify the roles you need to fill, and then recruit engineers who exhibit the strengths needed in those roles (see the sketch after this list).
  • Improve your farm system — whether you use interns, contract-to-perm, or you promote from within, gather metrics on the participants and then target them for appropriate roles.
  • Make trades — re-organize teams internally to fill roles and balance strengths.
  • Coach the skills you need — identify those engineers with aptitude to fill specific roles and use metrics to set goals for development.
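
As a sketch of the “comps” idea from the first item above, assuming engineers and candidates have already been mapped to roles (as in the earlier role sketch), you might profile a team’s role mix and surface candidates who fill the gaps:

```python
from collections import Counter

# Compare a team's current role mix against a desired mix and report
# the under-represented roles. The desired mix is a judgment call you
# would base on what your successful teams look like.
def role_gaps(team_roles: list[list[str]],
              desired: dict[str, int]) -> dict[str, int]:
    have = Counter(role for roles in team_roles for role in roles)
    return {role: want - have.get(role, 0)
            for role, want in desired.items()
            if want > have.get(role, 0)}

# Surface candidates whose role profiles cover at least one gap.
def matching_candidates(candidates: dict[str, list[str]],
                        gaps: dict[str, int]) -> list[str]:
    return [name for name, roles in candidates.items()
            if any(role in gaps for role in roles)]

# Example (hypothetical data): a team light on playmakers and motivators.
team = [["scorer"], ["scorer", "defender"], ["defender"]]
gaps = role_gaps(team, desired={"scorer": 2, "defender": 2,
                                "playmaker": 1, "motivator": 1})
print(gaps)  # {'playmaker': 1, 'motivator': 1}
print(matching_candidates({"Evan": ["playmaker"], "Fran": ["scorer"]}, gaps))
# ['Evan']
```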

Blend metrics and experience

One mistaken conclusion that some people might reach in watching the movie “Moneyball” is that statistical analysis has completely supplanted experience and subjective analysis. That is far from the truth. The most effective sports organizations have added advanced statistical analysis as a way to uncover hidden opportunities and challenge assumptions, but they mix that with the personal observations of experienced individuals. Both sides factor into decisions. For example, a sports team is more likely to draft a new player if both the statistical analysis of that player and the in-person observations of experienced scouts identify the player as a good prospect.

In the same way, you can mix metrics-based analysis with experience and best practices in software engineering. Like sports teams, you can use metrics to identify underappreciated skills, identify opportunities for improvement, or challenge assumptions about team strategies and processes by measuring results. But such analysis is also limited and therefore should be balanced by experience and personal observation. By employing metrics in a balanced way, you can also alleviate concerns that people may have about metrics turning into some type of divisive grading system to which everyone is blindly subjected.

In conclusion, we shouldn’t ignore the ideas in “Moneyball” because it’s “just sports” (and now a Hollywood movie) and we are engineers. A lot of smart people are pioneering new ways to analyze individuals and teams, and software engineering has team-based dynamics similar to those of sports teams. Using similar techniques, with metrics-based analysis augmenting our experience and best practices, we have the same opportunity to develop new management techniques that can result in stronger teams.

