Thursday, March 4, 2010

Doing it the right way: Asking "why am I doing this?"

It's a simple question: "Why am I doing this?"  Unfortunately, too often we forget to ask it. In marketing/advertising, some people call it the "so what?" test. I've heard a tech colleague call it the "stupid test" -- but call it whatever you want, it just doesn't get asked enough. Below are a few of my favorite patterns where it goes unasked:

The Builder

I love hard-core tech people. The sorts of people who have deep, intellectual, and detailed technical debates like this one on SOAP vs. REST. The kind of people who just live to build things with technology. Often these people take overly complicated routes to doing something just because they can. At times, I am one of these people. For example, I want to do this so my tomato plants stay healthy this summer.  Just because it is fun does not mean it's the right thing to do.

Even if it takes very little time, building the wrong thing can be very expensive over the life of what you're building. Before you start, make sure you know why it's there in the first place and that there isn't a more efficient way to get it done.

The Template

Templates are wonderful. Patterns are great. But they don't always fit every problem. Before you use a pattern, ask what part is relevant to the task at hand. What outcome do you want to drive? What risks exist? Quantify them and make a decision about what part to use and what not to. Here are a few common examples of the productivity-killing template:
  • The software document that someone filled out instead of authored. It has all the fields filled in, it's umpteen pages long, but it's not useful to its audience.
  • The form that asks several questions but never seems to hit the core issue.
  • The software process that includes steps that do not create any value or reduce any risk in your scenario.
  • The software package/library/API that you chose too quickly and that doesn't fit what the business needs.
We can't always control the template, but we can apply common sense. Keeping in mind why it exists, who will use it and what the real purpose is (which may not be obvious) can mean a big boost in effectiveness.

The Review

I am very particular about code reviews. If done properly, they can be incredibly effective in removing defects. Unfortunately, most code reviews are not done properly.  The problem isn't limited to code reviews, either. Security audits, requirement or prototype reviews, and software package selection can all fall into the trap of focusing only on what is top of mind rather than on the core purpose.

Too many code reviews focus on items of little value to anyone or, worse, on trivial debates about spacing or naming.  Security audits can focus on compromises or breaches that have little quantifiable cost to the business, leaving more significant risks untested.  Business reviews of requirements too often focus on what is top of mind rather than on what is most critical to resolve early.

It is amazing what we can accomplish when we step back and take an orderly and rigorous approach to solving the problem at hand. Review the code to remove defects. Review the requirements to understand the limitations. Audit the software to understand risk to the business.

Best Practice?  Not if you're doing it wrong.

Occasionally it is tempting to just stop building any of it. Don't write any documents. Don't have templates for IT requests, forms for employee reviews, requirements documents, or code reviews. After all, if it doesn't work, why do it?

Because if you have more than a handful of people at your company, then you do need processes. Lots of them, probably. You need forms and checklists and reviews to make sure that risk is adequately controlled, people remain productive, and things keep running even when the really smart key people aren't there. Most importantly, you need to adapt and adjust those processes to be more efficient, with everyone on the team asking why and contributing to making the whole system run more effectively.

...and luckily, it costs almost nothing.

Friday, February 12, 2010

3 Sure-Fire Ways to Make Your Website Fail


I have been part of some spectacular failures in my career (if you've ever done anything on the web and you're honest, you probably have too). I'm not talking about the stuff that is late or a little over budget or whatever -- these are the projects that never produced a return anywhere near the amount invested. I believe most of them were avoidable, many with nothing more than a little forethought to realize that the optimal solution was, as is sometimes the case, simply not to play the game.

No goals? Fix that. Fast.

I'll start by taking it for granted that you already believe that a project needs goals to be successful. At the very least, even if you don't, someone managed to get some money to pay for the project, and they probably (hopefully) had a reason, even if it's not immediately obvious (if you're a vendor, sometimes your client won't tell you right away, which is a big mistake on their part). If you work for a company that routinely gives large sums of money to projects for no reason, you should leave that company, because they are going to go out of business.

That said, let's assume that it is your job to come up with, review, or approve the key performance indicators (KPIs) that are used to set and monitor your project's success. Here are 3 quick and easy ways to make sure the project is a complete disaster.

Tip 1: Use only industry-specific goals

Do the same thing as everyone in your industry and you have to compete with all of them. Not fun. This might work for a time, but eventually a more efficient competitor will enter your industry and shut you down. Just ask any airline that was around before about 1990.

If you actually want to be successful, figure out what about your project supports your unique position in the market and as a company. Hint: page views isn't it. Visitors? Better, but no. Carve out what is unique about your strategy and make sure your KPIs reflect what you're trying to do. More on this in a later post.

Tip 2: Set it and forget it

Let's say that someone else accidentally found the perfect mix of goals and metrics and fully optimized your site and marketing programs to maximize the key outcome of units sold per marketing dollar spent. No more chance to really hurt the company, right? Nope! You can still do some real damage. How? It's as easy as can be -- just keep doing what you're doing.

Most sites do a full redesign every 2-3 years. Some adjust incrementally a lot more often than that. Why do they bother? Because tastes change. So do entire industries, the economy, target consumers and their behavior, and pretty much everything else. If a site does any A/B testing at all, it has learned something and likely adjusted accordingly. Your KPIs need to evolve with the business, its target audience, and your overall strategy and tactics.

Validate your tactics and metrics on a regular basis to make sure you're still measuring the right things. Even if you don't update the site often (which alone is a pretty good way to lose relevance), it is the rare tactic or metric that stays optimal for half a year, so plan to re-validate at least once a quarter and adjust as necessary to keep things on the right path. Your site is changing; make sure your dashboard is too.
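
One cheap way to run that quarterly check: see whether the metric you keep reporting still moves with the outcome the business actually cares about. Here is a minimal sketch of the idea -- the metric names and numbers below are invented for illustration, not from any real site:

    # Hypothetical quarterly sanity check: does the KPI we report still track
    # the outcome we actually care about? All names and figures are made up.

    def pearson(xs, ys):
        """Plain Pearson correlation, no external libraries needed."""
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        var_x = sum((x - mean_x) ** 2 for x in xs)
        var_y = sum((y - mean_y) ** 2 for y in ys)
        return cov / (var_x * var_y) ** 0.5

    # Four quarters of made-up data: page views keep climbing while units sold drift down.
    page_views = [1.2e6, 1.4e6, 1.6e6, 1.9e6]
    units_sold = [52_000, 51_500, 49_800, 48_900]

    r = pearson(page_views, units_sold)
    if r < 0.5:
        print(f"correlation is only {r:.2f} -- this KPI may no longer reflect the business")

If the number that looks great on the dashboard no longer tracks the outcome you actually want, that's your cue to revisit the metric, not to celebrate the dashboard.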

Tip 3: Make the goals incredibly complicated

This one is as easy as can be -- the detailed reports and raw data you're looking at are almost certainly too complicated for your peers. Your detailed and incredibly complicated charts might look great, but they're not going to convince your boss, your peers, or the folks with the money to spend it correctly. As I posted before, figuring out how simple to make things isn't as easy as it seems.

A recent example of mostly pointless complexity can be found in the Twitter blog post on the Super Bowl. It's really pretty (it's the image at the top of this post), but it's hard to imagine how anyone could use it to improve anything. Even a little. If you want busy executives to understand the insights you're trying to convey, you need to make it dead simple. Focus. Simple. Good.

Of course you'll need to do a lot of analysis to be successful -- good metrics don't come cheap. But if you want the people making the decisions to listen, find a way to simplify and focus on the items that are most important. Process the data down into terms the business can understand and believe in.

These simplified goals need to be:
  • Easy to understand
  • Expressed using terms familiar to your audience
  • Built on very simple math or transformations (ideally none)

The last point is a tough one. Throw up a sigma on the projector (that E-looking thing that means summation) and I'll love it, but most people fear Greek symbols and won't trust what you're saying. Explain things in very basic terms your audience can understand. You can gradually increase the complexity over time, assuming the audience grows with you, but if a 5th grader doesn't understand the basics of how you got to your numbers, neither will a busy executive.
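
To see the difference, here is a hypothetical example (the campaign numbers are invented): the exact same cost-per-lead figure, written first as the kind of formula most of the room will tune out, then in the plain terms that actually land.

    % The same made-up metric, written two ways.
    % Formula version (technically correct, and the quickest way to lose the room):
    \[
      \mathrm{CPL} \;=\; \frac{\sum_{i=1}^{n} \mathrm{spend}_i}{\sum_{i=1}^{n} \mathrm{leads}_i}
      \;=\; \frac{\$50{,}000}{2{,}500} \;=\; \$20
    \]
    % Plain-English version (what actually goes on the slide):
    % "We spent $50,000 on the campaign and got 2,500 leads, so each lead cost us $20."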

Monday, February 1, 2010

The Toyota Recall: In Defense of Lean

 

Why Lean needs defending

In case you haven't heard, Toyota recently had a really big issue requiring a massive recall. Since Toyota is closely associated with the Lean process, some pretty reputable folks are taking it a little too far and hinting that Lean could be part of the failure. This is a mistake for many reasons, but most importantly it misses several key points about what Lean is and what it isn't. It also touches on a few of my favorite methodological pet peeves. Before we get to that, let's start with the basics:

Toyota doesn't use Lean

Toyota uses TPS: the Toyota Production System. It significantly predates Lean, a term coined in the late 80s in a graduate thesis describing Toyota's unique processes. TPS has much in common with Lean (since Lean was based on TPS), but they're not the same thing. Early Lean literature focused primarily on systematic improvement to reduce waste at the same level of quality; quality was not explicitly a Lean goal, even though Toyota incorporated Deming's Total Quality Control techniques into TPS in the 1960s. TPS includes a number of things that Lean doesn't, and for a very simple reason -- TPS has been custom fit to the Toyota corporate strategy over the last 50+ years. Lean can help pretty much anyone do anything, so you're going to run into lots of edge cases where it simply has no opinion. TPS is pretty specific about building cars, and it *does* include quality control measures, so what happened?

Processes only work when you use them

According to an Economist article from several months ago, "People inside the company believe [earlier] quality problems were caused by the strain put on the fabled Toyota Production System by the headlong pursuit of growth." In the same article, the Economist points out that Toyota sales were down nearly 24% in 2009; it does not take much imagination to guess how little management and the folks on the line want you pulling the andon cord for anything less than obvious.

But assuming the all-too-easy-to-believe story of executive pressure or line laziness isn't fair to Toyota. Given that the National Highway Traffic Safety Administration (NHTSA) conducted six separate investigations that failed to find the problem, it is clearly more complex than a simple assembly-line issue.

Quality control can't end once the product has been delivered

One pet peeve of mine is that it is very difficult to tell how good something is until after you have owned it for a while. We need something that provides a broader view of quality than narrowly defined, short-term surveys that can't tell the difference between bad cupholders and cars that break down. Some parts of the car matter a lot more than others, and to optimize your quality spend, you must target those areas appropriately. Catastrophes are sure to occur, and to resolve them we do not need public apologies, noise in the press, and mass speculation; we need a systematic, increased focus on the areas that matter and methods of measuring accountability.

The very worst thing that Toyota can do -- and I doubt they will -- is what quality expert and statistician George Box calls "Management by Disaster," where one disaster leads to a chain of inefficiency without addressing the core problem. According to Box, this overreaction to unlikely events happens when:
  • Systems are changed in response to occasional disasters,
  • by people with little direct knowledge of the problem, and
  • never checked to see whether or not they solve the issue
Luckily, most data-driven organizations avoid such mistakes, and having a well-established process like TPS provides a foundation for enacting effective change and for the improved measurement that makes accountability meaningful. Processes like Lean, TPS, Six Sigma and others were designed to use defects as input to improve the process. While the tragic deaths of the past months provide a very public and painful lesson, they should also remind us of why we have process in the first place. To be sure, someone (probably lots of people) screwed up, but terminating them won't stop this from happening again. Improvement in anything complicated happens only with careful and systematic change, and we should not limit our scope to Toyota alone. We must demand more precise and meaningful measures across the industry rather than meaningless apologies and finger-pointing between Toyota and its suppliers.
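
For a concrete (if toy) illustration of what "defects as input" looks like, here is a minimal sketch of a Shewhart-style p-chart, the sort of statistical quality-control arithmetic associated with Deming-style programs and Six Sigma. The batch sizes and defect counts are invented; the point is that you investigate the batches that fall outside the expected range, rather than reacting to every blip or only to disasters.

    # Toy p-chart: flag batches whose defect rate falls outside control limits.
    # Limits: p_bar +/- 3 * sqrt(p_bar * (1 - p_bar) / n). All numbers are invented.
    from math import sqrt

    batch_size = 500
    defects_per_batch = [7, 9, 6, 8, 22, 7, 5, 9]

    p_bar = sum(defects_per_batch) / (batch_size * len(defects_per_batch))
    sigma = sqrt(p_bar * (1 - p_bar) / batch_size)
    ucl = p_bar + 3 * sigma
    lcl = max(p_bar - 3 * sigma, 0.0)

    for i, d in enumerate(defects_per_batch, start=1):
        p = d / batch_size
        if not (lcl <= p <= ucl):
            print(f"batch {i}: defect rate {p:.3f} outside [{lcl:.3f}, {ucl:.3f}] -- investigate")

Run against the made-up data above, only the fifth batch gets flagged -- which is exactly the behavior you want from a process that treats defects as signal rather than as occasions for finger-pointing.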

Oh, and...

If you're wondering what happened to the guy who wrote the graduate thesis that coined "Lean"... he was recently named President of Hyundai America, one of the few car companies that saw positive sales growth in the 2009 recession. This, and his prior role as VP of Product Development and Strategic Planning, should remind us that the lessons of effective systematic thinking apply to a lot more than just building cars and writing code.

Wednesday, January 6, 2010

Reflections on a waterfall



I recently re-read Tom DeMarco's 1982 classic Controlling Software Projects: Management, Measurement, and Estimation. It is interesting how little changes in almost 30 years (wow). One passage in particular intrigued me:


Software developers may indeed resent the idea that an empirical projection approach could be meaningfully applied to their work.  It implies that yet another area of anarchy has begun to succumb to order.  Human beings love anarchy, in limited doses, and regret its passing.  The more order a manager introduces into the development process, the more he or she ought to consider ways to reintroduce small amounts of "creative disorder" (brainstorming sessions, war games, new methods, new tools, new languages) as a means of keeping bright minds involved.

I picked up the book again because I remembered reading something -- years ago -- about a study showing that underestimating tasks actually caused them to take longer, or at least that's how I remembered it. I'm a lot more careful about believing (or assuming) causation these days, but the argument seemed perfectly logical, and I can say unscientifically that I owe much of my on-time percentage to being realistic with the estimates and the staff (even with fixed deadlines). If anyone happens to know the study or the book, I'd love it if you let me know.

In any case, it was a pleasant surprise to pick up some DeMarco again, and it was remarkable how little of it has been changed by nearly 30 years of new technologies, processes, and the like.  In a few places, he even makes some brilliant predictions:

While an asynchronous implementation would be in many ways the most natural one (that is, most closely related to the underlying perception of the requirement), present day technology discourages it.  ....  I believe that this practice will change drastically during the 1980s: By the end of the decade, we'll be routinely building systems with a maximum of asynchronism, rather than the minimum.
Clearly, the web pushed asynchronous development far past what DeMarco would have predicted, but it was certainly helped along quite a bit by object-oriented methods, which would take several years after he wrote this to come into vogue.  You can't write a tech book and expect all of it to stay relevant (it wouldn't be technology if it didn't change), and clearly there are some outdated sections.  Much more interesting are the sections that should be long since defunct but aren't.

In 1982, the software development lifecycle at any respectable dev shop was based on the waterfall method (heavy design, then dev, test, etc.).  The term "Agile" wouldn't be introduced for another 20 years (although Boehm -- who wrote the introduction -- would publish papers on spiral processes, a predecessor of the Agile movement, a few years later).  As such, you might expect a great deal of the waterfall-centric procedures and suggestions to be outdated and irrelevant.  Not so. Agile and iterative development don't much change the core message: a process should be repeatable, measurable and, as he notes above, predictable.

I wonder how many of the other tech books I've read recently will last on my shelf nearly as long.