Truth, Lies, and Statistical Modeling in Supply Chain – Part 1

By Kinaxis 1 Feb 2013

I want to take us all down into the weeds for this series of blogs, "Truth, Lies, and Statistical Modeling in Supply Chain."

I came to the conclusion that this would be necessary after talking to colleagues and customers about how we model all of our manufacturing and supply chain systems using deterministic models, when in fact everything around us is stochastic. My first clue was the number of puzzled looks I got, until someone was brave enough to ask me to explain "deterministic" and "stochastic". I came to these terms late in my education, and purely through luck, so I am not surprised that they are not widely known or understood.

Conflict #1: Most systems that run our supply chain use precise mathematical models that assume complete identification and prediction of variables (Deterministic), yet we operate in a highly unpredictable environment (Stochastic).

In essence, a deterministic approach assumes that:

  • A system always operates in an entirely repeatable manner, i.e. no randomness
  • We have an exact understanding of how a system works
  • We can describe the way a system works in precise mathematical equations

In contrast, a stochastic approach assumes that:

  • There is always some element of a system that cannot be understood and described, which exhibits itself as randomness
  • We can never fully understand and describe how a system works because it is boundless
  • As a consequence, systems cannot be described in precise mathematical equations
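To make the contrast concrete, here is a minimal sketch (all figures are assumed for the example, and the subroutine name is mine) that models a three-step process both ways: the deterministic version always yields the same total lead time, while the stochastic version yields a different total on every run.

    ' Illustration only: the same three-step process modeled deterministically
    ' (fixed 10-day steps) and stochastically (10 days on average, give or take).
    Sub DeterministicVsStochastic()
        Dim trial As Integer, stp As Integer
        Dim total As Double, p As Double
        Debug.Print "Deterministic total: " & 3 * 10 & " days, every time"
        For trial = 1 To 5
            total = 0
            For stp = 1 To 3
                Do: p = Rnd: Loop While p = 0   ' Norm_Inv rejects a probability of 0
                ' Each step takes 10 days on average with a 2-day standard deviation
                total = total + WorksheetFunction.Norm_Inv(p, 10, 2)
            Next stp
            Debug.Print "Stochastic trial " & trial & ": " & Format(total, "0.0") & " days"
        Next trial
    End Sub

A deterministic planning system works only with the 30-day answer; reality delivers something different every time.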

I am an engineer by training, and like most engineers I had very little exposure to statistics and probability theory until I studied Industrial Engineering and Operations Research at the graduate level. I cannot tell you how many of my friends studying 'harder' engineering disciplines, such as Electrical or Mechanical, referred to Industrial as 'Imaginary Engineering' precisely because it deals in 'fuzzy' concepts that cannot be described fully.  I stumbled into Queuing Theory by accident, and it was a complete revelation to me.  It is the study of the effect of randomness on serial processes, such as manufacturing and supply chains.  Even in Queuing Theory, most of the analysis focuses on Exponential distributions because solutions are easier to derive; using a Normal distribution makes it nearly impossible to derive solutions for anything other than very simple problems.  As I have discussed previously, given my deep grounding in deterministic analysis, it took me a long time to accept some of the fundamental observations from stochastic analysis about its impact on measures such as throughput and capacity utilization.
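One of those counter-intuitive observations can be shown in a few lines. Here is a minimal sketch (the subroutine is my own illustration, not from any course material) using the standard M/M/1 queuing result that the average number waiting is Lq = rho^2 / (1 - rho) at utilization rho: waiting grows explosively as utilization approaches 100%.

    ' Illustration only: classic M/M/1 queuing formula, showing how the queue
    ' explodes as capacity utilization approaches 100%.
    Sub MM1_Utilization()
        Dim rho As Double   ' capacity utilization
        Dim lq As Double    ' average number waiting in queue (excluding in service)
        Debug.Print "Utilization", "Avg queue length"
        For rho = 0.5 To 0.96 Step 0.05
            lq = rho ^ 2 / (1 - rho)
            Debug.Print Format(rho, "0%"), Format(lq, "0.0")
        Next rho
    End Sub

At 50% utilization the average queue is 0.5 jobs; at 95% it is over 18. That is exactly the kind of stochastic result a deterministic mindset resists.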

But it wasn’t until I started a course in Discrete Event Simulation that I was introduced to the LogNormal distribution, and that is what I want to concentrate on in this blog. My next blog will be about what this means for a supply chain planning approach based upon optimization.

Conflict #2: Many supply chain models act as if variability is distributed symmetrically around an average (Normal distribution), yet most incidents and their respective magnitudes (e.g. demand spikes, supply delays) are highly skewed (LogNormal distribution).

Anyone who has dealt with equipment failure will be intuitively familiar with the LogNormal distribution. Most of the time failures occur around a fairly well-established mean time between failures; every now and again a piece of equipment fails very soon after being commissioned or repaired; but very seldom does a piece of equipment run for much longer than the mean time between failures (the Weibull distribution can also be used for failures). If the time between failures were distributed according to a Normal distribution, we would expect it to be symmetrically distributed around the mean, meaning the equipment would be just as likely to last a little longer than normal as to fail a little earlier than normal.  But equipment failures don’t work that way.  Using a Normal distribution rather than a LogNormal distribution means that we under-estimate the risk of the equipment failing early and over-estimate the risk of it failing late. There is a very good article in the Oct 2009 Harvard Business Review (HBR) titled “The Six Mistakes Executives Make in Risk Management” by Nassim N. Taleb, Daniel G. Goldstein, and Mark W. Spitznagel that addresses the consequences of assuming a Normal distribution.  In a section called “We assume that risk can be measured by standard deviation”, they state that:

Standard deviation—used extensively in finance as a measure of investment risk— shouldn’t be used in risk management. The standard deviation corresponds to the square root of average squared variations—not average variations. The use of squares and square roots makes the measure complicated. It only means that, in a world of tame randomness, around two-thirds of changes should fall within certain limits (the –1 and +1 standard deviations) and that variations in excess of seven standard deviations are practically impossible. However, this is inapplicable in real life, where movements can exceed 10, 20, or sometimes even 30 standard deviations. Risk managers should avoid using methods and measures connected to standard deviation, such as regression models, R-squares, and betas.

And yet the Normal distribution and associated standard deviation measures are used in the quintessential supply chain risk management practice of Inventory Optimization.  The classic inventory optimization equations all try to mitigate the risk of not having inventory to satisfy customer demand by defining the safety stock required to achieve a desired customer service level.  Here is the rub: by assuming a Normal distribution when in fact a LogNormal distribution applies, we both underestimate the risk of running out of inventory and carry too much inventory. For those of you who doubt that a LogNormal distribution applies to demand, just think of the number of times you see an unexpectedly large order versus an unexpectedly small one.  Let’s explore the consequences for inventory optimization by starting with the distributions of demand and supply lead times, the principal variables used to calculate inventory.

Conflict #3: The more variable the elements, the less effective the standard models are (the proof is in the math!)

Starting on the demand side, the Coefficient of Variation – CoV = StdDev/Mean – is often used to measure demand variability, with a CoV below 0.25 considered stable demand and a CoV above 1.5 considered very variable. Below is a diagram – I apologize for the quality – of example CoV values for several industries.  As we can see from these diagrams, it is really only Consumer Packaged Goods companies – Bottled Food Product Mixes and Dry Packaged Food Products – that see demand with a CoV much below 1 for a significant portion of their items, and only a few experience items with a CoV below 0.25.  These are so-called High Volume/Low Mix industries.  The rest of the industries experience a significant proportion of their demand from items with a CoV greater than 1.  These are Low Volume/High Mix industries.
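For reference, the CoV of a demand history can be computed directly in Excel; a minimal sketch (the function name is mine), using the sample standard deviation:

    ' Illustration only: Coefficient of Variation of a range of demand values.
    Function DemandCoV(demand As Range) As Double
        DemandCoV = WorksheetFunction.StDev_S(demand) / _
                    WorksheetFunction.Average(demand)
    End Function

Entered as, say, =DemandCoV(A1:A52) over a year of weekly demand buckets.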

The significance of these graphs is that demand with a CoV much above 0.2 cannot be following a Normal distribution, and is most likely following a LogNormal distribution. To demonstrate this I ran a little experiment in Excel, which resulted in the following graph measuring the probability of generating negative values from a Normal distribution with a mean of 100 and different standard deviations determined by the CoV (the Excel code is at the end of the blog). For each CoV value I sampled 1,000,000 values from a Normal distribution with a mean of 100 and a standard deviation of 100*CoV, and counted the negative values generated.  The probability is simply the number of negative values divided by 1,000,000.

As can be seen from the diagram, there is about a 16% chance of a negative value with a CoV of 1, and nearly a 31% chance of a negative value with a CoV of 2. In other words, if we assume that demand is Normally distributed and from experience we know the CoV to be 2, then we must be experiencing negative demand nearly 1/3 of the time. So, here is my question: when did you last see a negative demand?  Sure, there are returns, but demand in a period isn’t going to be negative.  So there must be something else driving the high variability, namely that demand does not follow a Normal distribution; most likely it is following a LogNormal distribution. In many business-to-business transactions there is a minimum purchase quantity, which very often drives purchasing behavior, but every now and then there will be a large demand spike.  This is the classic behavior of a LogNormal distribution.

The same can be said for supply lead time, the other input to inventory optimization calculations. Much of the time we experience the lead times we expect, sometimes we experience long lead times, but never a negative lead time. Yet modeling supply lead times with a Normal distribution implies that at times we would experience a negative lead time. In other words, very often supply lead time will also follow a LogNormal distribution.

So what does a LogNormal distribution look like? Example 1: A key point to note is that both distributions in the diagram below have a median of 1, but the mode and mean are very different.   Also note that the CoV for the solid line is about 0.25 (0.25/1) whereas for the dashed line the CoV is about 0.61 (1/1.63).  Neither is anywhere near the CoV of 3 experienced in some of the industries above.
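As a cross-check on the simulation, the same probabilities can be read straight off the Normal CDF; a minimal sketch (subroutine name mine):

    ' Illustration only: exact probability that a Normal distribution with
    ' mean 100 and standard deviation 100 * CoV produces a negative value.
    Sub NegativeDemandRisk()
        Dim cov As Double
        For cov = 0.5 To 3 Step 0.5
            ' P(X < 0) for X ~ Normal(100, 100 * cov)
            Debug.Print "CoV = " & cov & ": " & _
                Format(WorksheetFunction.Norm_Dist(0, 100, 100 * cov, True), "0.0%")
        Next cov
    End Sub

This prints roughly 2.3% at a CoV of 0.5, 15.9% at a CoV of 1, and 30.9% at a CoV of 2, matching the simulated curve.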

Example 2: Conversely, every LogNormal distribution in the diagram below has a mean of 100.  The only difference between the curves is the CoV, i.e. the degree of variability. The smaller the standard deviation (smaller CoV), the more the LogNormal distribution looks like a Normal or Gaussian distribution, but note that by the time the CoV reaches 0.4 there is a big difference. Also note that there are no negative values. Even for a CoV of 0.2 the distribution is not perfectly symmetrical about the mean, as can be seen from the fact that the curve goes to zero at about 60 on the left (40 below the mean of 100), but at about 175 on the right (75 above the mean of 100).
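The curves in this example can be reproduced from just the mean and the CoV. If demand X is LogNormal, then ln(X) is Normal with sigma^2 = ln(1 + CoV^2) and mu = ln(mean) - sigma^2 / 2. Here is a minimal sketch (function name mine) that draws random demands accordingly:

    ' Illustration only: draw a random demand from a LogNormal distribution
    ' with a given mean and CoV. If X is LogNormal, ln(X) is Normal with
    '   sigma^2 = ln(1 + CoV^2)  and  mu = ln(mean) - sigma^2 / 2
    Function LogNormalSample(mean As Double, cov As Double) As Double
        Dim mu As Double, sigma As Double, p As Double
        sigma = Sqr(Log(1 + cov ^ 2))     ' VBA's Log is the natural logarithm
        mu = Log(mean) - sigma ^ 2 / 2
        Do: p = Rnd: Loop While p = 0     ' LogNorm_Inv rejects a probability of 0
        LogNormalSample = WorksheetFunction.LogNorm_Inv(p, mu, sigma)
    End Function

Calling LogNormalSample(100, 0.4) repeatedly reproduces the CoV = 0.4 curve above: no negative values, a mode below 100, and a long right tail.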

So what does this mean for supply chains?  Let’s go back to the idea that safety stock is used to mitigate the risk of losing a sale because there was no inventory available to satisfy demand. The classic equation used for a single-tier reorder point calculation is:

    Reorder Point = Average Demand × Average Lead Time + Safety Stock

    Safety Stock = Z × √(Average Lead Time × StdDev(Demand)² + Average Demand² × StdDev(Lead Time)²)

where Z is the service-level factor taken from the Normal distribution, demand is expressed per period, and lead time is in periods. The key point is that this classic safety stock equation is flawed because it assumes a Normal distribution. See these links for confirmation that a Normal distribution is assumed and that the calculations are based upon means and standard deviations:

http://en.wikipedia.org/wiki/Safety_stock
http://www.inventoryops.com/safety_stock.htm
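Rendered as code, the calculation in those links looks like the following minimal sketch (function and argument names are mine); demand is per period and lead time is in periods:

    ' Illustration only: classic single-tier reorder point with Normal-based
    ' safety stock.
    Function ReorderPoint(avgDemand As Double, sdDemand As Double, _
                          avgLeadTime As Double, sdLeadTime As Double, _
                          serviceLevel As Double) As Double
        Dim z As Double, safetyStock As Double
        ' Z factor from the standard Normal, e.g. 1.645 for a 95% service level
        z = WorksheetFunction.Norm_S_Inv(serviceLevel)
        safetyStock = z * Sqr(avgLeadTime * sdDemand ^ 2 _
                            + avgDemand ^ 2 * sdLeadTime ^ 2)
        ReorderPoint = avgDemand * avgLeadTime + safetyStock
    End Function

For example, ReorderPoint(100, 60, 4, 1, 0.95) returns about 657: cycle stock of 400 plus a Normal-based safety stock of about 257.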

Each of these articles uses the mean and standard deviation of both demand and supply lead times, as well as the Z factor from a Gaussian or Normal distribution. If we determine safety stock based upon the average demand, which is the usual manner of determining safety stock, we are keeping too much inventory to satisfy ‘most’ demand – as measured by the mode, or peak, of the distribution – and yet too little inventory to satisfy peak demand – as measured by the upper 95% confidence limit. As illustrated by the graph below, once the CoV is above about 1.3, demand that follows a LogNormal distribution has a lower peak demand at the 95% confidence limit than demand that follows a Normal distribution. At a CoV of 2.5 the Normal distribution over-estimates the peak demand by nearly 36% using a 95% confidence limit, and by over 86% using a 90% confidence limit.  Since safety stock is used to mitigate the risk that we will run out of inventory to satisfy peak demand, clearly we are carrying too much stock in the cases where highly variable demand follows a LogNormal distribution.  The same can be said for supply lead time, where we are over-estimating the risk of not being able to get supply in time.
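These over- and under-estimates can be checked directly by comparing percentiles; a minimal sketch (subroutine name mine) for distributions sharing a mean of 100 and the same CoV:

    ' Illustration only: peak demand (95th percentile) under Normal vs
    ' LogNormal assumptions, for the same mean of 100 and the same CoV.
    Sub PeakDemandComparison()
        Dim cov As Double, mu As Double, sigma As Double
        Dim pNorm As Double, pLogn As Double
        For cov = 0.5 To 2.5 Step 0.5
            ' LogNormal parameters matching mean 100 and this CoV
            sigma = Sqr(Log(1 + cov ^ 2))
            mu = Log(100) - sigma ^ 2 / 2
            pNorm = WorksheetFunction.Norm_Inv(0.95, 100, 100 * cov)
            pLogn = WorksheetFunction.LogNorm_Inv(0.95, mu, sigma)
            Debug.Print "CoV " & cov & ": Normal " & Format(pNorm, "0") & _
                "  LogNormal " & Format(pLogn, "0")
        Next cov
    End Sub

Up to a CoV of about 1.3 the LogNormal percentile is the larger of the two; beyond that the Normal assumption over-states peak demand, reaching the roughly 36% gap at a CoV of 2.5.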

And it’s not just about demand and supply

But it isn’t only demand or supply that follows a LogNormal distribution, or some other skewed distribution. In a paper titled “Log-normal Distributions across the Sciences: Keys and Clues” the authors point to many natural processes that follow a LogNormal distribution. In manufacturing we see skewed distributions for failure rates, yield, lead times, and many more variables. And yet we model all of these as single values in our planning systems and assume that any variability follows a Normal distribution. The consequence is that we over-compensate with inventory and capacity buffers without truly understanding the associated risks. We would be far better off reducing these buffers and adopting a much more agile approach that accepts that occasionally shift happens.  Plan for the expected and adjust to the exception.

Side note: For those of you interested in running the experiment to determine how many negative samples will be drawn from a Normal distribution with different CoV values, the code below can be dropped into Excel, provided you rename one of the worksheets “Norm_Inv”:

    ' Counts negative samples drawn from a Normal distribution with mean 100
    ' and standard deviations of 1 to 350, i.e. CoV values of 0.01 to 3.5.
    Sub getCount()
        Dim i As Integer
        Dim j As Long, n As Long
        Dim r As Single
        ' Rnd can return exactly 0, which Norm_Inv rejects; skip those draws
        On Error Resume Next
        With Sheets("Norm_Inv")
            For i = 1 To 350                ' standard deviation, so CoV = i / 100
                n = 0
                For j = 1 To 1000000        ' 1,000,000 samples per CoV value
                    r = WorksheetFunction.Norm_Inv(Rnd, 100, i)
                    If r < 0 Then n = n + 1
                Next j
                .Cells(i + 1, 2) = i        ' column B: standard deviation
                .Cells(i + 1, 3) = n        ' column C: count of negative samples
            Next i
        End With
    End Sub

More blogs in this series:
Truth, Lies, and Statistical Modeling in Supply Chain – Part 2
Truth, Lies, and Statistical Modeling in Supply Chain – Part 3

