Six lessons learned from six failed software implementations.


We’ve been working with a customer on a RapidResponse implementation. The customer is delighted to witness a successful implementation after six previous failed attempts with other solutions. That’s right, six attempts – three different solutions, each tried twice. Imagine the time and money lost. Here are the lessons learned from that experience – each applicable to both vendors and their customers:

  • Long implementations are a recipe for disaster! You can expect management changes certainly within a two year time frame. If a project is not delivering value when management changes, it will likely be scrapped!
  • The absolute key to managing duration is managing scope and expectations. Repeat after me: "That can be done in phase two."
  • Don’t underestimate a user’s need to understand HOW a solution works. Users need visibility into both what the solution recommends and why it made those recommendations. Without an understanding of the decision-making path, users are more likely to lose confidence and reject the results.
  • Planning optimizers are very difficult to keep “fed” with current data and to adapt to changes in products, production processes and the supply chain network. The broader the scope in terms of planning horizon and the range for optimization, the harder it is to keep them producing useful results. In many cases, the entire approach to using optimization may need to be rethought.
  • Real systems require high-quality data. If the data is not good to start with, tools and processes to improve and maintain quality must be part of the solution. A single, agreed-to set of data is an absolute must.
  • Going live doesn’t equal success. Despite many solutions “going live”, if they don’t deliver real value, they will inevitably be abandoned or scrapped and processes will return to the previous status quo (in supply chain management, that usually means Excel).
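The data-quality point lends itself to automation. As a minimal sketch, here is what a basic validation pass over planning data might look like; the field names (part numbers, lead times) are illustrative assumptions, not taken from RapidResponse or any specific system:

```python
# Minimal sketch of automated data-quality checks for planning data.
# Field names ("part_number", "lead_time_days") are hypothetical examples.

def check_item_master(records):
    """Return a list of (index, issue) pairs for records that fail basic checks."""
    issues = []
    seen_parts = set()
    for i, rec in enumerate(records):
        part = rec.get("part_number")
        if not part:
            issues.append((i, "missing part_number"))
            continue
        if part in seen_parts:
            issues.append((i, "duplicate part_number"))
        seen_parts.add(part)
        lead_time = rec.get("lead_time_days")
        if lead_time is None or lead_time < 0:
            issues.append((i, "invalid lead_time_days"))
    return issues

records = [
    {"part_number": "A100", "lead_time_days": 14},
    {"part_number": "A100", "lead_time_days": 7},   # duplicate part
    {"part_number": "B200", "lead_time_days": -3},  # negative lead time
    {"lead_time_days": 5},                          # missing part number
]
print(check_item_master(records))
# [(1, 'duplicate part_number'), (2, 'invalid lead_time_days'), (3, 'missing part_number')]
```

Running checks like these on every data refresh, rather than once at go-live, is what “tools and processes must be part of the solution” means in practice.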

While much of the above is not new, it’s amazing to me how often these pitfalls rear their ugly heads. Better implementation processes and practices can certainly help, but at the end of the day, if the solution’s technology paradigm is inconsistent with the ultimate needs and wants of the user, then no “best practice” is going to rectify that.



- March 12, 2011 at 4:23am
+1 lesson - a weekly meeting to push the project forward (mostly kicking ass), led by a really big boss in the company.
Giulio Cantone
- March 18, 2011 at 3:13am
My lesson's name is: simulation.

You can spend weeks analyzing business processes and company procedures; more weeks mapping them into system functions and writing down pages of specification documents.
But you'll never have the full view of ALL system requirements and their relations.
Key users must get their hands on the system, ASAP, in order to validate their ordinary and non-ordinary procedures. This should happen very early in the implementation process.
Forget boring and never-ending workshops, and meetings with presentations and lots of words. Just sit in front of a computer with a small group of expert users and USE the system.
Of course this is possible only if the software vendor already has a "touchable" product to show. If not, ask the vendor to come back with a base product and start simulating from that.

Thus, a second lesson: mistrust software you cannot immediately test yourself, at least partially, with real data (check the vendor's customer list and ask those customers for their opinions). Verify the vendor's response time in making changes to the system to meet your requirements, both before going live and especially after the go-live. In the latter case, even one day makes the difference.
