There has been a lot of talk about in-memory databases, but it's important to recognize that not all in-memory databases are created equal. Typically, the mega-software vendors' databases are optimized for performing queries on large but fairly simple data sets. While some of the query times cited are appealing, raw query performance is only part of the equation, and I would argue it's the easy part. Certainly, the ability to query millions of records in a couple of seconds is scientifically interesting, but there is little business value in query speed alone. Business value comes from computational speed: the speed it takes to support an online and mobile supply chain community of thousands of users simultaneously performing complex "what-if" scenarios and analytics on large data sets, calculating things like capacity constraints, clear-to-build, available-to-promise (ATP) and capable-to-promise (CTP), excess and obsolescence avoidance, part substitutions, S&OP, forecasting, and more.

That's where we come in. When dealing with complex data relationships, the proprietary database employed by RapidResponse provides a significant performance advantage over the alternatives. Here's just one example (detailed in the technical paper, RapidResponse - How is it so fast?): on a Dell desktop system, RapidResponse can run a simple query on over 4 million BOM records in 0.170 seconds. That's pretty cool, but where it gets really meaningful is with a complex calculation, such as running complete netting and counting the number of planned order recommendations. That was done in 45 seconds. RapidResponse is also very efficient at caching, so running the same complex calculation again takes only 5 seconds. This is the type of calculation our customers can perform whenever, and as often as, they wish. In fact, users can simultaneously request the same calculation and still get their independent answers within seconds.
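The cold-versus-warm numbers above (45 seconds for the first run, 5 seconds on repeat) reflect a familiar pattern: caching the result of an expensive calculation so repeated requests are served from memory. Here is a minimal Python sketch of that idea; the `run_netting` function and its toy logic are hypothetical stand-ins for illustration, not RapidResponse internals:

```python
from functools import lru_cache

# Hypothetical stand-in for an expensive supply chain calculation,
# e.g. complete netting over a bill of materials for a scenario.
@lru_cache(maxsize=128)
def run_netting(scenario_id: str) -> int:
    # Toy logic: derive a "planned order recommendation" count
    # from the scenario id. A real engine would do heavy work here.
    return sum(ord(c) for c in scenario_id) % 1000

# The first call computes the result; repeated calls with the
# same scenario are answered straight from the in-memory cache.
first = run_netting("baseline")
second = run_netting("baseline")
assert first == second
```

The design point is that the cache key is the scenario, so many users asking the same question pay the computation cost once.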
That's because our analytics code is compiled directly into the database engine, where it has direct access to the in-memory data. Less movement of data between the database and the analytics layer means much better performance. The superficial is fun, but it's the substance that matters. By the way, John Sicard posted a great blog last year that gives the history behind our in-memory technology leadership, which dates back 25 years. You can check it out here: http://blog.kinaxis.com/2010/06/in-memory-in-style/
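The performance argument above can be illustrated with a toy Python sketch (this is an assumed, simplified model for illustration, not RapidResponse code): one analytic copies the data out of the "engine" before computing, the other operates directly on the in-memory rows.

```python
# Toy in-memory "table": a list of (part, quantity) rows.
table = [("part-%d" % i, i % 50) for i in range(100_000)]

def total_via_copy(rows):
    # Mimics a client-side analytic: the data is first copied
    # into an intermediate representation (data movement) and
    # only then aggregated.
    copied = [{"part": p, "qty": q} for p, q in rows]
    return sum(r["qty"] for r in copied)

def total_in_place(rows):
    # Mimics analytics compiled into the engine: the aggregation
    # runs directly over the in-memory rows, with no copy.
    return sum(q for _, q in rows)

# Both approaches compute the same answer; the in-place version
# skips the copy step entirely.
assert total_via_copy(table) == total_in_place(table)
```

The copy-free version does strictly less work per row, which is the essence of keeping analytics next to the data rather than shipping the data to the analytics.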
For supply chain management, not all in-memory databases make the cut.