I am adamant that good decisions can be made without perfect data

Tom Wailgum over at CIO.com wrote a blog post titled “Supply Chain Data: Real-Time Speed Is Seductive and Dangerous” in which he quotes from an Aberdeen report by Nari Viswanathan and Viktoriya Sadlovska. Tom writes about the adoption of real-time data: “Before any company hits the accelerator, it would be wise to ensure that the existing and new supply chain data is sound: Bad data delivered that much faster is still bad data—and can lead to worse decision-making.” I agree with nearly everything Tom writes, but I don’t buy into this quest for data nirvana. Let us look outside of supply chain for an example where the data quality is good enough to make sensible decisions. We know that child mortality is higher in poor countries than in rich countries.

The UNICEF mission is “To reduce child mortality by two-thirds, from 93 children of every 1,000 dying before age five in 1990 to 31 of every 1,000 in 2015.” I’m good with the first part of the UNICEF mission (reduce child mortality by two-thirds) and will continue to donate to them on this basis. It’s the second part that confuses me. Does it really matter whether child mortality in poor countries is 93 per 1,000 births or 100 per 1,000 births? I would just go with “more than 90 per 1,000 births.” I just don’t see how the precision of the statistics improves the quality of UNICEF’s decisions. And I think too often we confuse the two issues of quality of decision and quality of data.

Before I am misunderstood, let me state quite clearly that data quality can always be improved. There is no question in my mind that all enterprises should have “data police” who ensure that the data is of reasonably good quality. But let’s all recognize that data quality is like a sales forecast: we all need to get better at it, but we will NEVER get it absolutely right, as in complete, correct, and there when we need it. What we want to avoid is getting it absolutely wrong. In addition, I think that data latency is a key element of data quality, and on this point I don’t agree with Tom Wailgum: the speed with which you receive data is a big part of its quality. I would much rather have partially correct data quickly than precise data slowly. One of the most important insights that can be gained from data is trend. Trend is often more important than the actual value, and trend is totally absent from Tom Wailgum’s discussion; the sketch after the list below makes this concrete. In other words, data should have three major measures of quality:

  • Completeness
  • Correctness
  • Timeliness
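
To make the trend point concrete, here is a minimal sketch in Python. The demand numbers and the 15% error band are entirely made up for illustration; the point is that a trend fitted to noisy data almost always points in the same direction as the trend in the true data:

```python
# A minimal sketch, not production code: all numbers are hypothetical.
# It shows that a trend estimated from noisy ("incorrect") data usually
# points in the same direction as the trend in the true data.
import random

random.seed(42)

true_demand = [100 + 5 * week for week in range(12)]                   # true weekly demand, rising
noisy_demand = [d * random.uniform(0.85, 1.15) for d in true_demand]   # roughly 15% data error

def slope(series):
    """Least-squares slope of a series: units of demand per week."""
    n = len(series)
    xbar = (n - 1) / 2
    ybar = sum(series) / n
    num = sum((x - xbar) * (y - ybar) for x, y in enumerate(series))
    den = sum((x - xbar) ** 2 for x in range(n))
    return num / den

print(f"true trend:  {slope(true_demand):+.1f} units/week")
print(f"noisy trend: {slope(noisy_demand):+.1f} units/week")
# Both slopes come out positive: demand is rising. The decision
# ("build ahead" vs. "hold back") is the same despite imperfect data.
```

Waiting for cleaner numbers would sharpen the slope estimate slightly, but it would not change its direction, and the direction is what drives the decision.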

In case we forget, people are operating supply chains right now with the quality of the data they have right now. They are making multi-million-dollar decisions for the long term based upon the current data. They are making thousands of decisions on a daily basis - expedite this PO, cancel that PO, promise this date to a customer, ... - based upon the current data. In many cases they are using paper, pencil, and gut instinct to make these decisions, and more often than not they are the right decisions. Maybe not precisely correct, but still correct.

I am sure many of you have horror stories of bad decisions having been made using bad data. I am equally sure that you have horror stories of bad decisions having been made on good data. And, by the way, I am sure you have many stories of good decisions having been made on bad data. Above all, I am sure that many of your horror stories will revolve around having known about something too late, and many of your good stories will revolve around having known about something quickly.

The value of knowing sooner is the central lesson to be learned from the famous Beer Game, which illustrates the Bullwhip Effect. Also, let us not confuse the quality of the decision with the quality of the data. In other words, the decision might be directionally correct without being precise, and infinitely better than doing nothing. For example, it may be correct to split a purchase order (PO) for 1,000 units and expedite part of the quantity in order to meet unexpected customer demand. We may choose to expedite 500 when it would have been better to expedite 600, but expediting 500 would be a lot better than not expediting any of the PO. The decision to split the order and expedite part of it is 100% correct.
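
As a back-of-the-envelope illustration, here is a small sketch of that trade-off. Every cost figure in it is hypothetical, invented purely to show the shape of the comparison:

```python
# Hypothetical illustration of the split-PO decision. All costs are invented.
UNEXPECTED_DEMAND = 600      # units the customer needs now
EXPEDITE_COST = 4            # extra cost per unit expedited (hypothetical)
SHORTAGE_COST = 50           # lost margin and goodwill per unit short (hypothetical)

def total_cost(expedited_units):
    """Cost of expediting a given quantity against the unexpected demand."""
    shortage = max(0, UNEXPECTED_DEMAND - expedited_units)
    return expedited_units * EXPEDITE_COST + shortage * SHORTAGE_COST

for qty in (0, 500, 600):
    print(f"expedite {qty:>3}: cost {total_cost(qty):>6,}")
# expedite   0: cost 30,000   (do nothing: the worst outcome by far)
# expedite 500: cost  7,000   (imprecise, but captures most of the value)
# expedite 600: cost  2,400   (the "perfect" answer, only modestly better)
```

The gap between expediting 500 and expediting the “perfect” 600 is small compared with the gap between acting and not acting at all, which is exactly the point.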

The quality of the data may mean the difference between expediting 500 rather than 600. I can accept that imprecision far more readily than the inaction caused by waiting for “better” data before making a decision. Naturally, we all want to avoid the situation where it would have been better not to split the order at all, but poor data quality led to a decision to split it. In general, I think the quality of supply chain data is a lot better than that. The reason to have “data police” is, of course, that no one knows in advance which incorrect data will lead to disastrous decisions. If the current data says “go North East,” that is good enough for me.

Leave it to the “accountants” to decide to “go 47 degrees 28 seconds,” especially if it takes 2 minutes to decide to “go North East” and 2 days to decide to “go 47 degrees 28 seconds.” By the time the “accountants” have reached their conclusion, the entire demand and supply picture will have changed anyway. In closing, I think we should all take a word of advice from Warren Buffett, who wrote in the Washington Post that “... it is better to be approximately right than precisely wrong.” I argue that most of the time waiting for precise data is precisely wrong, and that acting quickly based upon the existing data is approximately right.

Discussions

Ryan Humphrey
- April 27, 2010 at 4:15 PM
I agree with Trevor Miles. I’d also add that what delays the timeliness of decisions in many cases is deciding not to act if you don’t have enough data. I’ve seen many business cases get sent back to the drawing board because someone wanted “more data.” For me, it comes down to a few simple questions: By not having the data or doubting the data quality, would I make a different decision? Would I go in a different direction? Does it really matter? I’ve also been a fan of quick decision making that is “directionally correct.” I’d say that no decision, in some cases, can be worse than a bad decision.
Ron Freiberg
- April 28, 2010 at 10:09 AM
I totally agree with both Trevor and Ryan; we all know the personality traits of differing business managers. There are the 80/20 guys like myself, who make rather critical decisions based on 80% of the good data being available, and then there are the more obsessive types who never make a decision, on the grounds that they never have 150% of the available data, regardless of how trivial the data may be. The obsessive sorts just frustrate me to no end; no amount of data is good enough to blast them off dead center, and in the end the missed opportunities outweigh most errors in the decisions that would have been made otherwise. Additionally, we all know that business is cyclical and things change, but at least if the decision is made to move forward, you are heading in a direction you wanted to go; you just have to be mindful that things change and you may have to modify your course along the way.
Nari Viswanathan
- April 28, 2010 at 10:26 PM
Interesting discussion. Reminds me of the binary operation NOT(NOT(1))=1.
Ryan Humphrey
- April 29, 2010 at 7:54 AM
or ABS(-1) = 1
