A successful simulation project is a lot more than just building a model. When making important decisions, you want to make sure that you are fully confident in the simulation results.
Recently, Production Support 56 shared their latest LinkedIn newsletter, in which they discussed the joys of simulation. Were your great simulation results due to a feature, or was it a bug? We thought it was such an interesting read that we wanted to share it with our users!
Read on to find out more and check out Production Support 56’s newsletter here.
One of the great joys of simulation is when you hit upon an interesting result. A result that means the operational design needs to be changed. A result that could save the client lots of money in terms of capital or operational costs. Or could generate lots more product in the same time-frame with no extra resources. The client is going to love this result.
Then the fear sets in, and you start wondering: is this a great result, or is there something wrong with my programming? Was the great result due to a feature or a bug?
I’m going to use the term client for the person who commissioned the simulation. It could be your boss or the project engineer or whoever the shouty one is.
Does your simulation have good foundations?
If you have just thrown the simulation together to have a quick look, then you need to be cautious about getting too excited. The result indicates you should investigate further and produce a more detailed simulation. Resist the urge to start insisting on operational changes straight away: giving bad advice at this stage could really undermine confidence in your model.
So, does your simulation have good foundations? When you started, did the simulation have a clear question that needed answering, and more importantly, have you stuck to it? If mission creep has set in, then the foundation for the great result might not be there. I have talked about getting the question right elsewhere.
Secondly, do you understand the mechanisms underneath the simulation? If you do not understand how changing a key variable affects the outcome, how can you be certain of the real-life result? I would recommend a good process map here, and an understanding of how each cog in the operation works.
Thirdly, have you validated the base case with a process owner? Running through the simulation with somebody who is familiar with the operation, using a standard set of conditions, can quickly help identify some silly oversights. Once the simulation is matching the performance of normal operations, you can have the confidence to take it further.
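In practice, the base-case check can be as simple as comparing the simulation's headline figure against what the operation normally achieves. A minimal sketch, assuming a hypothetical daily-output figure and a 5% tolerance agreed with the process owner:

```python
# Minimal base-case validation sketch. The observed figure, the
# simulated figure, and the 5% tolerance are all illustrative
# assumptions, not values from any real project.

def within_tolerance(simulated: float, observed: float, tol: float = 0.05) -> bool:
    """True if the simulated figure is within tol (as a fraction) of observed."""
    return abs(simulated - observed) <= tol * observed

# e.g. a simulated daily output of 480 units against an observed 500
print(within_tolerance(480, 500))  # True: close enough to normal operations
```

If the check fails, that is the moment to sit down with the process owner and hunt for the oversight, before any exciting scenarios are run.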
A more detailed discussion about getting the foundation right can be found here.
Taking it too far
When you were collecting data for the model, you’ll have defined normal operational ranges for each element. For example, a production line might work with between 2 and 6 operators, or a piece of equipment could run at between 10 and 20 litres per minute. These variables are the levers within the model: you can pull the levers and see how it affects the outcome. If your interesting result had all the levers set within their normal ranges, then great, pass go and collect £200.
However, if you are working outside of the normal ranges, you’ll need to talk to the process owners to determine if your scenario is feasible. Putting 50 people on a production line that has been designed for 4 is going to be a tight squeeze, and there may not be the physical room to expand.
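One way to keep this discipline is to record the agreed normal ranges and flag any scenario that steps outside them. A minimal sketch, using hypothetical lever names and the example ranges above:

```python
# Sketch of a lever range check. The lever names and ranges are
# illustrative assumptions, standing in for whatever the process
# owners agreed during data collection.

NORMAL_RANGES = {
    "operators": (2, 6),        # people on the production line
    "flow_lpm": (10.0, 20.0),   # equipment flow, litres per minute
}

def out_of_range_levers(scenario: dict) -> list:
    """Return the levers a scenario sets outside their normal range."""
    flagged = []
    for lever, value in scenario.items():
        low, high = NORMAL_RANGES[lever]
        if not (low <= value <= high):
            flagged.append(lever)
    return flagged

# A scenario with 50 operators gets flagged for a feasibility chat.
print(out_of_range_levers({"operators": 50, "flow_lpm": 15.0}))  # ['operators']
```

Anything that comes back flagged is not necessarily wrong, it is simply a prompt to go and ask the process owners whether the scenario is physically feasible.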
Okay, so if the foundations are good, and the optimised scenario is feasible, I would do one more check before popping the champagne corks, and that’s to look at the trend data.
Trend data
Trend data is a really powerful way of looking at simulation results. This is where you plot the simulation’s output as a function of a key variable. For example, the time it takes to make a product against the number of people on the production line. It might take one person a year to build a car, but a hundred people could make it in a day, and a thousand people could not make it any faster than a hundred.
Trend data makes it easier to spot optimum conditions, and it also describes the relationship between the variable and the output. The process owner may have a feel for this, and if the trend data agrees with their experience and intuition, then they are more likely to believe and act on the result.
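To make the car-building example concrete, here is a minimal sketch of generating trend data from a toy model in which only part of the work can be shared across the crew. The split between serial and parallel work is an illustrative assumption; the shape of the trend, not the numbers, is the point:

```python
# Toy trend-data sketch: days to build one car versus crew size,
# assuming some work cannot be shared out (an Amdahl-style model).
# SERIAL_DAYS and PARALLEL_DAYS are made-up illustrative figures.

SERIAL_DAYS = 1.0       # work that cannot be divided across the crew
PARALLEL_DAYS = 364.0   # work that divides evenly across the crew

def build_days(crew: int) -> float:
    """Days to build one car with a crew of the given size."""
    return SERIAL_DAYS + PARALLEL_DAYS / crew

for crew in (1, 10, 100, 1000):
    print(f"{crew:>5} people: {build_days(crew):8.2f} days")
```

Plotting numbers like these shows the diminishing returns immediately: the jump from one person to ten saves hundreds of days, while going from a hundred people to a thousand barely moves the figure at all.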
It was a feature
Congratulations: you have confirmed your model has a solid foundation, your scenario is feasible, and the trend behaviour makes sense. You can now be confident your great result is due to a feature and not a bug. Your next job is to make sure it is acted upon. I would recommend the following two activities:
- Organise a meeting with all the project stakeholders and demonstrate the model using the normal operation conditions and the optimised ones. Let them ask lots of ‘what-if’ questions and answer these through the model.
- Provide each stakeholder with a short summary of the results adding in any key ‘what-if’ questions asked during the meeting.
Whether it goes any further is up to the project leader now.
It was a bug
Oh dear, you have my commiserations. If it helps, we’ve all been there. If you are lucky, you now have a couple of hours of hair pulling as you track the bug down. On a positive note, you now have a better model and understand the process you are modelling a little better. Happy hunting.