Wednesday, December 30, 2009

Simplicity and the art of performance measurement



I spend a lot of time measuring things. Because it's the end of the year, that means doing reviews where I work. After many years of measuring things, I've come up with a few simple tips that I hope will help you next time you have to measure something important.

"Make things as simple as possible, but no simpler."

The quote above is often attributed to Einstein, but what he actually said was:

It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience.

The quote and its variants provide an interesting lesson on how the proper balance of simplicity and detail can change based on the audience. The shorter version is a lot more meaningful to a general audience, but to his 1933 Oxford audience of theoretical physicists, his original wording was more appropriate. The lesson for us: recognize the background and interests of our audience and provide an appropriate level of simplicity. How simple to make things is not always obvious.

Many years ago I was put in charge of a fairly large technology group at a Philadelphia-based ad agency (now part of G2 Worldwide).  One of my first changes to the department was to think about all the intricate bits that make programmers (who made up the majority of my staff) successful -- training, dedication, demonstrated skill, defect rate, in-process and end-point quality assessments, breadth of experience, utilization and billability, and so on.  I created a point-based system that assigned a weight to each of these factors and communicated it to the team. The "dungeons and dragons review system" (as it quickly became known) did not last long.

While perfectly transparent (very much in vogue these days), logical (no one ever really disputed the factors), and almost completely objective, the D&D system was nearly impossible for anyone to focus on while doing their job. They didn't get it, and I scrapped that system.

The replacement system used a simple rubric that folks could easily understand. The work that went into the D&D system wasn't wasted; I just found a better way to present it. A simple letter grade was assigned per project, and the rubric worked like this (working from the bottom up):

  • F -- Do nothing and you fail.
  • D -- Earn a D by having a high-quality work product.  Nothing matters more than quality, but quality alone only earns you a D.
  • C -- A passing grade required being on-time and on-budget.  Note that being on-time and on-budget didn't matter unless you also had high quality.
  • B -- Reuse something from another project.  This was something critical to our strategy at the time, and helped keep us competitive in technical bids.
  • A -- Contribute something for reuse.  This was one of those things that everyone wants to do anyway.  It's clearly valuable and was an important part of the strategy, but unless more people reuse things than create reusable things, it wouldn't work financially.
The actual presentation was done with a triangle graphic with the higher grades toward the top, but you get the idea: quality is foundational, reuse is an even-better-if.
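Read as a cumulative ladder, the rubric lends itself to a short sketch. This is a hypothetical illustration rather than the actual system used at the agency; the function and its boolean inputs are names I've made up for the example:

```python
def project_grade(high_quality: bool, on_time: bool, on_budget: bool,
                  reused_component: bool, contributed_reusable: bool) -> str:
    """Return a letter grade for a project under the rubric.

    Each grade builds on the one below it: schedule and budget only
    count on top of quality, and reuse only counts on top of all three.
    """
    if not high_quality:
        return "F"  # do nothing (or deliver poor quality) and you fail
    if not (on_time and on_budget):
        return "D"  # quality alone earns a D
    if not reused_component:
        return "C"  # high quality, on-time, and on-budget: passing
    if not contributed_reusable:
        return "B"  # also reused something from another project
    return "A"      # also contributed something for others to reuse
```

The ordering of the checks is the whole point: a project that reuses components but blows the budget still lands at D, which is exactly the "quality is foundational" message the triangle graphic conveyed.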

The new system was easily understood by the team and by management.  Supplementary detail was provided for exactly what we meant by key terms like "high quality", "on-time", "on-budget", and so on.  In short, the detail was available when you needed it, but it didn't cloud the high level.

Using a rubric like the one shown above is helpful, but there are many ways to build a similarly successful set of metrics for your team, your site, or pretty much anything complicated that involves a lot of people. Just keep the following in mind:

  • Focus on only the most important aspects of the thing you're measuring.
  • Reduce emphasis on (or eliminate) metrics for things you don't control or can't impact.
  • Find a way to express the goals in terms that the audience can understand.
  • Have background materials available when more detail is needed.  Understanding in concept is one thing; doing and effecting positive change requires greater comprehension and detail.
  • Make sure the goals are measurable and that everyone understands how to calculate them.
  • Encourage review and discussion of the relevant metrics as things change.
  • Be forward looking: communicate the goals and metrics up-front. The more people understand how the system works, the more they can do to support it.
  • Set expectations with the relevant stakeholders (the team and management in this case) that the metrics will show opportunity for improvement, followed by improvement, followed by new opportunities.

And through the process, you inevitably won't ace everything. We are all high achievers (well, at least where I work), and we should not be discouraged when the metrics identify room for improvement -- that is what they're there for.
