Thursday, March 4, 2010

Doing it the right way: Asking "why am I doing this?"

It's a simple question: "why am I doing this?" Unfortunately, too often we forget to ask it. In marketing/advertising, some people call it the "so what?" test. I've heard a tech colleague call it the "stupid test" -- but call it whatever you want, it just doesn't get asked enough. Below are a few of my favorite patterns of the question going unasked:

The Builder

I love hard-core tech people. The sorts of people who have deep, intellectual, and detailed technical debates like this one on SOAP vs. REST. The kind of people who just live to build things with technology. Often these people take overly complicated routes to doing something just because they can. At times, I am one of these people. For example, I want to do this so my tomato plants stay healthy this summer. Just because it is fun does not mean it's the right thing to do.

Even if it takes very little time, building the wrong thing can be very expensive over the life of what you're building. Before you start, make sure you know why it's there in the first place, and make sure there isn't a more efficient way to get it done.

The Template

Templates are wonderful. Patterns are great. But they don't always fit every problem. Before you use a pattern, ask what part is relevant to the task at hand. What outcome do you want to drive? What risks exist? Quantify them and make a decision about what part to use and what not to. Here are a few common examples of the productivity-killing template:
  • The software document that someone filled out instead of authored. It has all the fields filled in, it's umpteen pages long, but it's not useful to its audience.
  • The form that asks several questions but never seems to hit the core issue.
  • The software process that includes steps that do not create any value or reduce any risk in your scenario.
  • The software package/library/API that was chosen too quickly and doesn't fit what the business needs.
We can't always control the template, but we can apply common sense. Keeping in mind why it exists, who will use it and what the real purpose is (which may not be obvious) can mean a big boost in effectiveness.

The Review

I am very particular about code reviews. Done properly, they can be incredibly effective at removing defects. Unfortunately, most code reviews are not done properly. The problem isn't limited to code reviews: security audits, requirements or prototype reviews, and software package selections can all fall into the trap where we focus only on what is top of mind rather than the core purpose.

Too many code reviews focus on items of little value to anyone or, worse, on trivial debates about spacing or naming. Security audits can focus on compromises or breaches that have little quantifiable cost to the business, leaving more significant risks untested. Business reviews of requirements too often focus on what is top of mind rather than what is most critical to resolve early.

It is amazing what we can accomplish when we step back and take an orderly and rigorous approach to solving the problem at hand. Review the code to remove defects. Review the requirements to understand the limitations. Audit the software to understand risk to the business.

Best Practice?  Not if you're doing it wrong.

Occasionally it is tempting to just stop building anything. Don't write any documents. Don't have templates for IT requests, forms for employee reviews, requirements documents or code reviews. After all, if it doesn't work, why do it?

Because if you have more than a handful of people at your company, then you do need processes. Lots of them, probably. You need forms and checklists and reviews to make sure that risk is adequately controlled, people remain productive, and things keep running even when the really smart key people aren't there. Most importantly, you need to adapt and adjust those processes to be more efficient by having everyone on the team ask why and contribute to making the whole system run more effectively.

...and luckily, it costs almost nothing.

Friday, February 12, 2010

3 Sure-Fire Ways to Make Your Website Fail


I have been part of some spectacular failures in my career (if you've ever done anything on the web and you're honest, you probably have too). I'm not talking about the stuff that is late or a little over budget or whatever -- these are the projects that never produced a return anywhere near the amount invested. I believe most of them were avoidable, many with not much more than a little forethought to realize that the optimal solution was -- as is sometimes the case -- simply not to play the game.

No goals? Fix that. Fast.

I'll start by taking it for granted that you already believe a project needs goals to be successful. At the very least, even if you don't, someone managed to get some money to pay for the project and they probably (hopefully) had a reason, even if it's not immediately obvious (if you're a vendor, sometimes your client won't tell you right away, which is a big mistake on their part). If you work for a company that routinely gives large sums of money to projects for no reason, you should leave that company, because they are going to go out of business.

That said, let's assume that it is your job to come up with, review, or approve the key performance indicators (KPIs) used to set and monitor your project's success. Here are 3 quick and easy ways to make sure the project is a complete disaster.

Tip 1: Use only industry-specific goals

Do the same thing as everyone in your industry and you have to compete with all of them. Not fun. This might work for a time, but eventually a more efficient competitor will enter your industry and shut you down. Just ask any airline that was around before about 1990.

If you actually want to be successful, figure out what about your project supports your unique position in the market and as a company. Hint: page views isn't it. Visitors? Better, but no. Carve out what is unique about your strategy and make sure your KPIs reflect what you're trying to do. More on this in a later post.

Tip 2: Set it and forget it

Let's say that someone else accidentally found the perfect mix of goals and metrics and fully optimized your site and marketing programs to maximize the key outcome: units sold per marketing dollar spent. No more chance to really hurt the company, right? Nope! You can still do some real damage. How? It's as easy as can be -- just keep doing what you're doing.

Most sites do a full redesign every 2-3 years. Some adjust incrementally a lot more often than that. Why do they bother? Because tastes change. So do entire industries, the economy, target consumers and their behavior, and pretty much everything else. If a site does any A/B testing at all, it has learned something and likely adjusted accordingly. Your KPIs need to evolve with the business, its target audience, and your overall strategy and tactics.

Validate your tactics and metrics on a regular basis to make sure you're still measuring the right things. Even if you don't update the site often (which alone is a pretty good way to lose relevance), it is the rare tactic or metric that holds optimal for half a year, so plan to re-validate at least once a quarter and adjust as necessary to keep things on track. Your site is changing; make sure your dashboard is too.

Tip 3: Make the goals incredibly complicated

This is as easy as can be -- the detailed reports and raw data you're looking at are almost certainly too complicated for your peers. Your detailed and incredibly complicated charts might look great, but they're not going to convince your boss, your peers, or the folks with the money to spend it correctly. As I posted before, figuring out how simple to make things isn't as easy as it seems.

A recent example of mostly pointless complexity can be found in the Twitter blog post on the Super Bowl. It's really pretty (it's the image at the top of this post), but it's hard to imagine how anyone could use it to improve anything. Even a little. If you want busy executives to understand the insights you're trying to convey, you need to make them dead simple. Focus. Simple. Good.

Of course you'll need to do a lot of analysis to be successful -- good metrics don't come cheap. If you want the people making the decisions to listen, find a way to simplify and focus on the items that are most important. Process the data down using terms the business can understand and believe in.

These simplified goals need to be:
  • Easy to understand
  • Expressed in terms familiar to your audience
  • Built on very simple math or transformations (ideally none)

The last point is a tough one. Throw up a sigma on the projector (that E-looking thing that means summation) and I'll love it, but most people fear Greek symbols and won't trust what you're saying. Explain in very basic terms your audience can understand. You can gradually increase the complexity over time, assuming the audience grows with you, but if a 5th grader doesn't understand the basics of how you got to your numbers, neither will a busy executive.
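To make "simple math" concrete, here is a minimal sketch of what processing the data down might look like. The field names and figures are hypothetical stand-ins for whatever your analytics export actually contains; the point is that columns of raw rows reduce to one sentence anyone in the room can repeat back:

    # Hypothetical per-day analytics rows -- stand-ins for a real export.
    daily = [
        {"date": "2010-02-01", "visits": 12400, "orders": 310},
        {"date": "2010-02-02", "visits": 11900, "orders": 285},
        {"date": "2010-02-03", "visits": 13050, "orders": 352},
    ]

    visits = sum(d["visits"] for d in daily)
    orders = sum(d["orders"] for d in daily)

    # No sigma on the projector: one number, in the audience's own terms.
    print(f"{visits:,} visits produced {orders:,} orders -- "
          f"about {100 * orders / visits:.1f} orders per 100 visits.")

The sums happen behind the scenes; the slide just says "about 2.5 orders per 100 visits," which a 5th grader -- and therefore a busy executive -- can follow.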

Monday, February 1, 2010

The Toyota Recall: In Defense of Lean

 

Why Lean needs defending

In case you haven't heard, Toyota recently had a really big issue requiring a massive recall. Since Toyota is closely associated with the Lean process, some pretty reputable folks are taking it a little too far and hinting that Lean could be part of the failure. This is a mistake for many reasons, but most importantly it misses several key points about what Lean is and what it isn't. It also touches on a few of my favorite methodological pet peeves. Before we get to that, let's start with the basics:

Toyota doesn't use Lean

Toyota uses TPS: the Toyota Production System. It significantly predates Lean, a term coined in the late 80s in a graduate thesis recognizing Toyota's unique processes. TPS has much in common with Lean (since Lean was based on TPS), but they're not the same thing. Early Lean literature focused primarily on systematic improvement to reduce waste at the same level of quality; quality was not explicitly a Lean goal, even though Toyota incorporated Deming's Total Quality Control techniques into TPS in the 1960s. TPS includes a number of things that Lean doesn't, for a very simple reason -- TPS has been custom fit to the Toyota corporate strategy over the last 50+ years. Lean can help pretty much anyone do anything, so you're going to run into lots of edge cases where it simply has no opinion. TPS is pretty specific about building cars, and it *does* include quality control measures, so what happened?

Processes only work when you use them

According to an Economist article from several months ago, "People inside the company believe [earlier] quality problems were caused by the strain put on the fabled Toyota Production System by the headlong pursuit of growth." In the same article, the Economist points out that Toyota sales were down nearly 24% in 2009; it does not take much imagination to guess how little management and the folks on the line want you pulling the andon cord for something less than obvious.

But assuming the all-too-easy-to-believe story of executive pressure or line laziness isn't fair to Toyota. Given that the National Highway Traffic Safety Administration (NHTSA) conducted six separate investigations that failed to find the defect, the problem is clearly more complex than a simple assembly-line slip.

Quality control can't end once the product has been delivered

One pet peeve of mine is that it is very difficult to tell how good something is until you have owned it for a while. We need something that provides a broader view of quality than narrowly defined, short-term surveys that can't tell the difference between bad cupholders and cars that break down. Some parts of the car matter a lot more than others, and to optimize your quality spend, you must target those areas appropriately. Catastrophes are sure to occur, and to resolve them we do not need public apologies, noise in the press, and mass speculation; we need systematically increased focus on the areas that matter and methods of measuring accountability.

The very worst thing that Toyota could do -- and I doubt they will -- is what quality expert and statistician George Box calls "Management by Disaster," where one disaster leads to a chain of inefficiency without addressing the core problem. According to Box, this overreaction to unlikely events happens when:
  • Systems are changed in response to occasional disasters,
  • by people with little direct knowledge of the problem, and
  • the changes are never checked to see whether they solved the issue.
Luckily, most data-driven organizations avoid such mistakes, and having a well-established process like TPS provides a foundation for enacting effective change and improving measurement to provide meaningful accountability. Processes like Lean, TPS, Six Sigma, and others were designed to use defects as input to improve the process. While the tragic deaths of the past months provide a very public and painful lesson, they should also remind us of why we have process in the first place. To be sure, someone (probably lots of people) screwed up, but terminating them won't stop this from happening again. Improvement for anything complicated happens only with careful and systematic change, and we should not limit our scope to Toyota. We must demand more precise and meaningful measures across the industry rather than meaningless apologies and finger pointing between Toyota and its suppliers.

Oh, and...

If you're wondering what happened to the guy who wrote the graduate thesis that coined "Lean"... he was recently named President of Hyundai America, one of the only car companies to see positive sales growth in the 2009 recession. This, and his prior role as VP of Product Development and Strategic Planning, should remind us that the lessons of effective systematic thinking apply to a lot more than just building cars and writing code.

Wednesday, January 6, 2010

Reflections on a waterfall



I recently re-read Tom DeMarco's 1982 classic Controlling Software Projects: Management, Measurement, and Estimates. It is interesting how little changes in almost 30 years (wow). One passage in particular intrigued me:


Software developers may indeed resent the idea that an empirical projection approach could be meaningfully applied to their work.  It implies that yet another area of anarchy has begun to succumb to order.  Human beings love anarchy, in limited doses, and regret its passing.  The more order a manager introduces into the development process, the more he or she ought to consider ways to reintroduce small amounts of "creative disorder" (brainstorming sessions, war games, new methods, new tools, new languages) as a means of keeping bright minds involved.

I picked up the book again because I remembered reading something -- years ago -- about a study showing that underestimating tasks actually caused them to take longer -- or at least that's how I remembered it. I'm a lot more careful about believing (or assuming) causation these days, but the argument seemed perfectly logical, and I can say unscientifically that I owe much of my on-time percentage to being realistic with the estimates and the staff (even with fixed deadlines). If anyone happens to know the study or the book, I'd love to hear about it.

In any case, it was a pleasant surprise to pick DeMarco back up, and it is remarkable how little of it has been invalidated by nearly 30 years of new technologies, processes, and the like.  In a few places, he even makes some brilliant predictions:

While an asynchronous implementation would be in many ways the most natural one (that is, most closely related to the underlying perception of the requirement), present day technology discourages it.  ....  I believe that this practice will change drastically during the 1980s: By the end of the decade, we'll be routinely building systems with a maximum of asynchronism, rather than the minimum.
Clearly, the web pushed asynchronous development far past what DeMarco could have predicted, helped along quite a bit by object-oriented methods, which would take several more years to come into vogue.  You can't write a tech book and expect all of it to stay relevant (it wouldn't be technology if it didn't change), and clearly there are some outdated sections.  Much more interesting are the sections that should be long-ago defunct but aren't.

In 1982, the software development lifecycle at any respectable dev shop was based on the waterfall method (heavy design, then dev, test, etc.).  The term "Agile" wouldn't be introduced for another 20 years (although Boehm -- who wrote the introduction -- would publish papers on the spiral model, a predecessor of the Agile movement, a few years later).  As such, you might expect a great deal of the waterfall-centric procedures and suggestions to be outdated and irrelevant.  Not so. Agile and iterative development don't much change the core message: that a process should be repeatable, measurable and, as he notes above, predictable.

I wonder how many of the other tech books I've read recently will last on my shelf nearly as long.

Wednesday, December 30, 2009

Simplicity and the art of performance measurement



I spend a lot of time measuring things. It's the end of the year and, where I work, that means doing reviews. After many years of measuring things, I've come up with a few simple tips that I hope will help next time you have to measure something important.

"Make things as simple as possible, but no simpler."

The quote above is often attributed to Einstein, but what he actually said was:

It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience.

The quote and its variants provide an interesting lesson in how the proper balance of simplicity and detail changes with the audience. The shorter version is a lot more meaningful to a general audience, but for his 1933 Oxford audience of theoretical physicists, the original wording was more appropriate. The lesson for us: recognize the background and interests of our audience and provide an appropriate level of simplicity. How simple to make things is not always obvious.

Many years ago I was put in charge of a fairly large technology group at a Philadelphia-based ad agency (now part of G2 Worldwide).  One of my first changes to the department was to think about all the intricate bits that make programmers (who made up the majority of my staff) successful -- training, dedication, demonstrated skill, defect rate, in-process and end-point quality assessments, breadth of experience, utilization and billability, and so on.  I created a point-based system that assigned a weight to each of these factors and communicated it to the team. The "dungeons and dragons review system" (as it quickly became known) did not last long.

While perfectly transparent (very much in vogue these days), logical (no one ever really disputed the factors), and almost completely objective, the D&D system was near impossible for anyone to focus on while doing their job. They didn't get it, and I scrapped it.
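For the curious, here's a minimal sketch of the shape of that system. The factors and weights below are hypothetical (the originals are long gone), but they illustrate the problem: the output is a single weighted number, and nothing about a 6.8 tells a programmer what to do differently on Monday.

    # Hypothetical factors and weights -- illustrative only.
    WEIGHTS = {
        "quality": 0.30,
        "on_time": 0.20,
        "on_budget": 0.15,
        "demonstrated_skill": 0.20,
        "utilization": 0.15,
    }

    def dnd_score(ratings):
        """Weighted sum of per-factor ratings on a 0-10 scale."""
        return sum(w * ratings.get(factor, 0.0) for factor, w in WEIGHTS.items())

    print(dnd_score({"quality": 8, "on_time": 6, "on_budget": 7,
                     "demonstrated_skill": 7, "utilization": 5}))  # 6.8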

The replacement was a simple rubric that folks could easily understand. The work that went into the D&D system wasn't wasted; I just found a better way to present it. A simple letter grade was assigned per project, and the rubric worked like this (from the bottom up):

  • F -- Do nothing and you fail.
  • D -- Earn a D by having a high quality work product.  Nothing matters more than quality, but quality alone only earns you a D.
  • C -- A passing grade required being on-time and on-budget.  Note that being on-time and on-budget didn't matter unless you also had high quality.
  • B -- Reuse something from another project.  This was critical to our strategy at the time, and helped keep us competitive in technical bids.
  • A -- Contribute something for reuse.  This was one of those things that everyone wants to do anyway.  It's clearly valuable and was an important part of the strategy, but unless more people reuse than create things that are reusable, it wouldn't work financially.
The actual presentation used a triangle graphic with the higher grades toward the top, but you get the idea: quality is foundational; reuse is an even-better-if.
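Because the rubric is really just a ladder, it is easy to state precisely. Here is a minimal sketch; the predicate names are mine, and each condition stands in for the detailed definitions we kept in the supplementary material:

    def project_grade(high_quality, on_time, on_budget,
                      reused_component, contributed_reusable):
        """Climb the rubric from the bottom: each grade
        requires everything below it."""
        if not high_quality:
            return "F"  # do nothing (or ship poor work) and you fail
        if not (on_time and on_budget):
            return "D"  # high quality alone earns a D
        if not reused_component:
            return "C"  # quality plus on-time and on-budget is a pass
        if not contributed_reusable:
            return "B"  # also reused something from another project
        return "A"      # also contributed something for reuse

Anyone on the team could trace their project through those five questions in their head, which is exactly what the point system could not offer.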

The new system was easily understood by the team and by management.  Supplementary detail was provided for exactly what we meant by key terms like "high quality", "on-time", "on-budget", and so on.  In short, the detail was available when you needed it, but it didn't cloud the high level.

Using a rubric like the one shown above is helpful, but there are many ways to accomplish a similarly successful set of metrics for your team, your site, or pretty much anything complicated that involves a lot of people. Just keep the following in mind:

  • Focus on only the most important aspects of the thing you're measuring.
  • Reduce emphasis on -- or eliminate -- metrics for the things you don't control or can't impact.
  • Find a way to express the goals in terms that the audience can understand.
  • Have background materials available when more detail is needed.  Understanding a concept is one thing; doing it and effecting positive change requires greater comprehension and detail.
  • Make sure the goals are measurable and that everyone understands how to calculate them.
  • Encourage review and discussion of the relevant metrics as things change.
  • Be forward looking: communicate the goals and metrics up-front. The more people understand how the system works, the more they can do in support of it.
  • Set expectations with the relevant stakeholders (the team and management in this case) that the metrics will show opportunity for improvement, followed by improvement, followed by new opportunities.

And through the process, inevitably you won't ace everything. We are all high-achievers (well, at least where I work) and we should not be discouraged when the metrics identify room for improvement -- that is what they're there for.

Sunday, December 27, 2009

3 Tips to grow leaders (or improve most anything)

I like to take Christmas week off every year. I do this mostly to spend some quality time with my family, but one of my favorite fringe benefits is that I typically get just enough down time to think about the things I've done that I'm proud of, how to do more of them and what I'd like to accomplish in the coming year. This year was no exception.

One of the things I'm most proud of this year has been my role as a mentor; I've had some good success with a number of the people I've worked with: some in an official capacity, some former employees who continue to ask my advice, and a few who've never worked for me at all but somehow have come to value my perspective. While I'd like to think that I have some special coaching gene and a unique situation, it's probably not true. The formula I follow is fairly standard, and I hope this post helps a few people get started or refine their efforts at growing leaders.

Tip 1: Find out if they want to be a leader

...because some people don't. Being an effective leader in most companies means you're going to have to stay ahead of the fires, and that takes a lot of work. It also means you're going to have to deal with problems that other people create and put your ego aside, which some people simply can't -- or don't care to -- do.

Even if they do want to reach the end state of being a leader (if there is such a thing), many aren't willing to put in the effort it takes to get there. This is OK. For ambitious people (like me and many of my friends) this is very hard to understand and, at times, deeply frustrating. Don't get frustrated. Not everyone has to be a leader, and not everyone has to keep moving up in their career. Be happy for their contentedness and move on.

If they don't want to be a leader, or they're not willing to put in the effort to get there, move along and find someone else to mentor. No one likes being nagged, and you're doing them and yourself a disservice if you try to push someone who doesn't want to be pushed.

Tip 2: Find some goals they can actually achieve

Every company I've ever worked at has a standard review template and part of it generally has a slot for "things to work on." This section of the review is typically filled with things like:
  • Improve written communication
  • Learn two new technologies
  • Gain deeper database skills
These are almost totally meaningless if the person getting the review isn't in a position to use those two new technologies. And with something extremely vague like communication skills -- what needs to improve? Even if this mythical employee works on all three of these items -- how do we know they've succeeded? What happens when they do? Where is the follow-up? This system doesn't work if your goal is actually to help someone grow. Avoid review-style goal setting and start with something meaningful, measurable, and attainable.

Look at the project they're currently on, or likely will be on in the future. If you don't know what they're going to be working on, either wait or set some kind of meaningful preparation goal. Do not set a goal that is not at least a little bit measurable; it won't help them and it wastes a lot of time. If you already have a motivated leader-in-training, setting pointless goals is a good way to lessen their motivation and confidence. Instead, find something they can actually achieve in a real-life situation, such as:

  • Write 3 emails to the client or team about a contentious situation (if you work in consulting, these are often in abundance). Make sure they are timely (sent soon after the event), informative, include a meaningful follow-up, and are free from spelling and grammatical mistakes.
  • Using the technology of your current project, identify one of the most critical code areas (run often, high risk, etc.), review the code, and explain what is done optimally, what can be improved, and how. Review the suggestions for improvement with an expert in that technology and make the appropriate improvements without negatively affecting timelines or project quality.
Note that these goals are measurable -- we know whether or not they've happened -- and (at least in our hypothetical situation) feasible. This leader-to-be should be able to send a few emails, and as long as the code improvements are not done at the expense of other tasks, even giving it a try should make everyone happy.

A goal isn't feasible if it works at cross-purposes with their daily tasks, so don't put someone in that situation. Often this means they'll have to work more hours or spend some time on a weekend. There are always constraints of some kind, so figure out what they are and make sure to work within them.

Tip 3: Follow up and revise

The first time you set a goal for someone, you will probably set a bad one. It might be too much of a stretch and they can't do it (although this is uncommon), it might be unrealistic or at cross-purposes with their immediate supervisor (not a good thing) or, most commonly, it will be too easy.

Easy goals probably won't help them grow much. Hard goals will demotivate. Take some time up front to calibrate, so you can set goals that allow the leader-in-training to grow without getting demotivated, reprimanded (learning usually means making some mistakes, so choose the task carefully), or overconfident. Give meaningful and constructive feedback often. If you're not good at giving meaningful and constructive feedback -- get good, fast. Make review and revision part of the process so you can identify unrealistic goals early and get a quick, easy win out of the way, so neither of you wastes time that could be spent on something more effective.


With so much else to do, why bother?

The process for developing a leader isn't much different from improving anything else: departmental processes, website conversion, performance/scalability testing, business KPIs, and so on. Budget a realistic amount of time and, assuming you have a willing participant, you will have something to be very proud of and -- if my experience is at all representative -- make some wonderful long-term trusted colleagues and friends. The recurring dividends they provide will be significant to the company and to you personally.

Saturday, December 26, 2009

...and we're back

After a hiatus due to a move from Philly to NYC, a job change, some wedding planning, and countless other minor adjustments, I'm going to try to start this back up. I figure if I can keep up my travel blog, I should be able to write a few posts about what I do with the rest of my time -- or at least write a bit more about the stuff I post to my twitter feed.