In his article "The FUTURE of WFM," Blair talked about how the Erlang formulas have long been at the heart of determining staffing requirements, and he pointed out some of the challenges of applying this basic formula to today's complex contact centers. But, of course, whatever model you use to determine staffing requirements, if you put rubbish in, you will get garbage out.
In 2012, one of the first articles I wrote was about the "volume forecast," or the number of calls forecasted to be offered in the case of a call center. This could just as easily be webchats, emails, or other types of back-office admin offered for an agent to complete, depending upon the channel you are forecasting. Along with other key assumptions such as handle time (how long a piece of work will take to complete), this is a crucial input to the staffing requirement model. In the original article, I mentioned two primary methods if you put aside guesswork (not that you should dismiss guesswork entirely; there is a time and place even for it).

In this article, I will focus on time-series methods and the types of models that exist today that could feasibly be used for contact center forecasting, depending on the granularity of intervals and the data you have available. In essence, time-series methods rely solely on history, extrapolating it forward as a forecast. Typically, a time-series forecast will capture elements such as the current level, the trend (is it going up or down), and seasonal patterns. If a series has no trend or seasonal pattern, a time-series method will simply generate a flat-line forecast.
Common Time Series Methods
There are several time series methods, which can be classified into three broad categories: simplistic (conceptually linear), Holt-Winters exponential smoothing, and the Box-Jenkins (ARIMA) method.
Simplistic Method
This includes moving averages (simple, cumulative, or weighted), percentage growth (the difference between two values over time), and a line of best fit (the least-squares method). The "simplistic" label is a giveaway. Still, there are benefits to these methods: they produce results quickly and without requiring strong statistical expertise from the analyst.
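To make the first of these concrete, here is a minimal sketch of a weighted moving average forecast. The volumes and weights are invented for the example and not drawn from any particular WFM system.

```python
def weighted_moving_average(history, weights):
    """Forecast the next value as a weighted average of the most
    recent len(weights) observations (newest observation last)."""
    recent = history[-len(weights):]
    return sum(w * v for w, v in zip(weights, recent)) / sum(weights)

# Last four weeks of offered calls; heavier weight on recent weeks.
volumes = [1200, 1150, 1300, 1250]
forecast = weighted_moving_average(volumes, weights=[1, 2, 3, 4])
print(round(forecast))  # -> 1240
```

Because recent weeks carry more weight, the forecast leans toward the latest observations, which gives a crude allowance for trend.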
Surprisingly, most WFM systems rely on simple forecasting methods; the most common of these is the weighted moving average, which allows for some seasonality and trend. This is often sufficient, especially when there are strong leading indicators, low volatility, and you are not attempting to forecast too far into the future. However, as contact centers become more difficult to forecast, many workforce planning analysts supplement their forecasting process with tools (often spreadsheet-driven) outside the WFM system, because more accurate forecasts can almost always be generated using other time-series methods.
Holt-Winters Exponential Smoothing Method (Triple Exponential Smoothing)
This method typically performs well in terms of accuracy and is simple enough to create in Excel in many cases. In layman's terms, the Holt-Winters method structurally models three aspects of a time series: a typical starting value (level), a slope over time (trend), and a cyclical repeating pattern (seasonality). Each of these three components is adaptive and can be calibrated using Excel's Solver. If you want to learn more about this method, there is a wealth of online information; I've even found YouTube videos with step-by-step instructions on creating a version of the model in Excel. In summary, exponential smoothing is useful when there aren't enough data points, or when arrival is too volatile, for more complex models like Box-Jenkins.
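For readers who prefer code to Excel, here is a bare-bones additive Holt-Winters sketch. The initialisation scheme, smoothing constants, and data are my own illustrative choices; in practice you would calibrate alpha, beta, and gamma against historical error (with Excel's Solver, or an optimiser such as scipy).

```python
def holt_winters_additive(series, season_len, alpha, beta, gamma, horizon):
    """Return `horizon` forecasts beyond the end of `series` using
    additive triple exponential smoothing."""
    # Initialise level, trend, and seasonal components from the
    # first two seasons of data.
    level = sum(series[:season_len]) / season_len
    trend = (sum(series[season_len:2 * season_len]) -
             sum(series[:season_len])) / season_len ** 2
    seasonals = [series[i] - level for i in range(season_len)]

    # Update level, trend, and seasonality as each observation arrives.
    for i in range(season_len, len(series)):
        s = seasonals[i % season_len]
        last_level = level
        level = alpha * (series[i] - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        seasonals[i % season_len] = gamma * (series[i] - level) + (1 - gamma) * s

    # Project level + trend forward, re-applying the seasonal pattern.
    return [level + (h + 1) * trend + seasonals[(len(series) + h) % season_len]
            for h in range(horizon)]

# Two years of quarterly volumes with an upward trend and seasonality.
data = [100, 120, 140, 110, 105, 126, 147, 115]
print(holt_winters_additive(data, season_len=4, alpha=0.3,
                            beta=0.1, gamma=0.2, horizon=4))
```

The three smoothing constants control how quickly the level, trend, and seasonal estimates adapt to new data, which is exactly the "adaptive" property described above.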
Box-Jenkins (ARIMA) Models
Both George Box and Gwilym Jenkins were British statisticians, so including them gave me a sense of national pride.
Box-Jenkins models are similar to exponential smoothing models in that trends and seasonal patterns are adaptive; however, while they can be automated, they remain (on average) too complex for Excel to handle, necessitating specialized languages such as Python or C++ to transform the data into a state where meaningful analysis can be applied. ARIMA is also distinguished by its reliance on autocorrelations (time patterns) rather than a more structural method of adjusting level, trend, and seasonality. Box-Jenkins models outperform exponential smoothing models when more data is available and arrivals are less volatile (more stable), i.e., when the past is a stronger predictor of the future.
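Full Box-Jenkins modeling (identification, differencing, diagnostics) is best left to dedicated tooling such as the statsmodels library in Python, but the autocorrelation idea can be illustrated with the simplest member of the family, an AR(1) model fitted by least squares. The data and values below are invented for the example.

```python
def fit_ar1(series):
    """Estimate (c, phi) for y[t] = c + phi * y[t-1] via least squares,
    i.e., regress each value on the value one period before it."""
    x = series[:-1]  # lagged values
    y = series[1:]   # current values
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var = sum((a - mean_x) ** 2 for a in x)
    phi = cov / var          # strength of the lag-1 autocorrelation
    c = mean_y - phi * mean_x
    return c, phi

def forecast_ar1(series, c, phi, horizon):
    """Iterate the fitted recurrence forward `horizon` steps."""
    out, last = [], series[-1]
    for _ in range(horizon):
        last = c + phi * last
        out.append(last)
    return out

data = [112, 118, 115, 121, 119, 124, 122, 127]
c, phi = fit_ar1(data)
print(forecast_ar1(data, c, phi, horizon=3))
```

Note how the model's only input is the series' relationship with its own past, with no explicit level, trend, or seasonal components; that is the distinction from the structural Holt-Winters approach described above.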
Which method is best?
So there are a lot of methods here, but this isn't a how-to guide (though if you're interested in learning, there's plenty of free information on the web), and this article may raise more questions than it answers. For example, which method is the most effective? The simple answer is that none of the methods listed above is optimal in every situation. Choosing the best method frequently comes down to the analyst's expert knowledge of, and experience with, the data at hand. It is also worth experimenting with the various techniques and choosing the one that produces the least variance when comparing past actuals to what you would have forecasted with the data available at the time.
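That comparison against past actuals can be sketched as a simple one-step-ahead backtest: re-forecast each past period using only the data that would have been available at the time, then compare each method's errors. The methods, actuals, and warm-up length here are illustrative.

```python
def naive(history):
    """Forecast the next period as the last observed value."""
    return history[-1]

def moving_average(history, window=3):
    """Forecast the next period as the mean of the last `window` values."""
    return sum(history[-window:]) / window

def backtest(series, method, warmup):
    """One-step-ahead errors over the holdout after `warmup` periods,
    forecasting each point from only the history before it."""
    return [series[i] - method(series[:i]) for i in range(warmup, len(series))]

actuals = [100, 104, 98, 110, 107, 112, 109, 115]
for name, m in [("naive", naive), ("3-period MA", moving_average)]:
    errors = backtest(actuals, m, warmup=3)
    mean_abs = sum(abs(e) for e in errors) / len(errors)
    print(f"{name}: mean absolute error {mean_abs:.1f}")
```

Whichever method would have produced the smallest, most consistent errors on this historical replay is the natural candidate to carry forward.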
Of course, combining all of the above through multilevel forecasting can produce excellent results on paper. However, it should be noted that I have yet to see a situation in which the time and effort required to produce multilevel forecasting in a contact center environment outweighs the value in forecasting accuracy provided. Therefore, it is typically reserved for the manufacturing sector when forecast levels in different hierarchies are required (e.g., product hierarchy, customer hierarchy, region hierarchy). However, I'd love to hear from someone who has had a different experience.
So, to end on a potentially contentious note, the most accurate method is not always the best method. As mentioned in the Measuring Forecast Error article, it is critical to measure your accuracy constantly. Techniques such as MAPE will tell you the size of your forecast error, and Poisson analysis (as described in Ger Koole's article "What is the best achievable forecast accuracy?") will tell you the probability of a deviation and its likely impact on accuracy, helping you determine what size of error objective is fair.
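MAPE itself is a one-line calculation: the average of |actual - forecast| / actual, expressed as a percentage. The figures below are invented for the example.

```python
def mape(actuals, forecasts):
    """Mean absolute percentage error across paired actuals/forecasts."""
    return 100 * sum(abs(a - f) / a
                     for a, f in zip(actuals, forecasts)) / len(actuals)

actuals = [200, 250, 220, 240]
forecasts = [210, 240, 230, 235]
print(f"{mape(actuals, forecasts):.1f}%")  # prints 3.9%
```

One caveat worth remembering: because the error is divided by the actual, MAPE breaks down on intervals with zero or very low volume, which is common at fine granularity in contact centers.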
However, neither MAPE nor Poisson will tell you how efficiently you are forecasting, or whether the various methods you employ improve or degrade the forecast. To determine this, a simple process known as forecast value added (FVA) is used. It requires a little extra effort upfront, but in the long run it can significantly improve forecasting accuracy and reduce forecasting man-hour costs by helping you avoid pointless forecasting steps. It essentially compares the forecasting accuracy of each stage in your current approach to the simplest, least labor-intensive method of forecasting (namely, a "naïve" forecast).
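The FVA idea can be sketched as follows: score each stage of the process with MAPE, then report how many percentage points of error each stage adds or removes relative to the stage before it, starting from the naïve forecast. The stage names and figures are invented for the example; a positive FVA means the stage improved accuracy, a negative one means it degraded it.

```python
def mape(actuals, forecasts):
    """Mean absolute percentage error across paired actuals/forecasts."""
    return 100 * sum(abs(a - f) / a
                     for a, f in zip(actuals, forecasts)) / len(actuals)

actuals        = [500, 520, 480, 510]
naive_fc       = [490, 500, 520, 480]   # previous period's actual
stat_forecast  = [505, 515, 490, 505]   # statistical model output
final_forecast = [520, 530, 470, 520]   # after manual overrides

steps = [("naive baseline", naive_fc),
         ("statistical model", stat_forecast),
         ("manual overrides", final_forecast)]

prev = None
for name, fc in steps:
    err = mape(actuals, fc)
    if prev is not None:
        # FVA = error of the previous stage minus error of this stage.
        print(f"{name}: FVA {prev - err:+.1f} percentage points")
    prev = err
```

In this made-up run the statistical model adds value over the naïve baseline while the manual overrides take some of it back, which is precisely the kind of pointless step FVA is designed to expose.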
Check out the weWFM Podcast on Apple or Spotify
Spotify: https://spoti.fi/3J5gsJh
Apple: https://apple.co/3HskI58