A few years ago, a study of audit frequencies found that companies that completed three store audits per year had the lowest shrink. Of course, loss prevention professionals realize that “auditing” itself doesn’t reduce shrink; it is the corrective actions, and the emphasis on proper practices, that audits highlight that reduce loss. Still, audit frequency is an ongoing discussion point in every customer-facing business. The challenge is not just one of frequency but also whether an audit score accurately indicates true performance over time. So how often should we audit, and how do we know whether our scores reflect average performance?
To answer that question, first we must consider the content and parameters of our audit tools.
Audits serve many masters. A single tool often inspects a range of activities: safety questions, compliance with POS standards, physical security measures, and customer service. The more of these we combine in a single audit event, the less likely the audit score is to reflect any one area of concern. While this is really a conversation about weighting questions and reviewing scores by section, experience teaches that people focus on the single total score rather than on the section scores within the audit. That means that when we consider audit frequency we must contend with the notion that the total score will shape opinions on the operational success of a particular location - so this final score becomes a more important reflection of performance than any problems revealed by subsections.
An audit is just a snapshot in time, covering a prescribed period for review. Unless all the audits occur in all locations within the same week or month, some stores may benefit from timing while others are penalized. Technically, then, it is possible for a store with better overall performance to score lower than a store with worse overall performance based solely on the inspection period. Although we tend to focus on “exceptions,” it is still our goal to ensure that audit scores properly reflect performance and compliance. If we state that a store received a 95 on its audit and is therefore “operationally” sound, we want to ensure that the statement is true and valid. This is a good argument against a once-a-year audit: timing could skew the results.
If our goal is to use audit scores, or snapshots, to determine compliance and performance, then we must ensure that the snapshot includes the proper sampling to support such predictions. Sample size makes for an interesting, if somewhat confusing, discussion. We know, for example, that we can poll 500 people and estimate the political leanings of 500,000 people within roughly six percentage points at 99% confidence. Although I am not a statistician, those same mathematical relationships apply to the sampling of behaviors found in our audits. But the success of that sampling rests heavily on both the sample size and the randomness of selection.
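As a rough check on that polling claim, here is a minimal sketch. It assumes the standard margin-of-error formula for an estimated proportion, with the worst-case proportion p = 0.5; the specific numbers are my illustration, not from the study:

```python
import math

# Margin of error for an estimated proportion at 99% confidence
# (z is approximately 2.576), assuming the worst case p = 0.5
# and a poll of 500 people.
z = 2.576
n = 500
margin = z * math.sqrt(0.5 * (1 - 0.5) / n)
print(f"margin of error: +/-{margin:.1%}")  # about +/-5.8%
```

A 500-person poll pins the proportion within about six points at 99% confidence, and the same logic scales to sampling the days and transactions behind an audit score.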
In short, if we want our audit scores to paint a more accurate picture of store performance, we must sample the correct amount of material (a sufficient number of days) and select that sample at random (a variety of days). If we want to determine overall compliance in refund handling, we need to know how many refunds are conducted in a year and then sample in accordance with the statistical requirements. That would be a cumbersome requirement when one considers that an audit might include fifty to seventy-five questions, each of which would require a count of the total population (number of activities) to determine the correct sample. Since, however, we mainly rely on the total score, the average of all the questions within the audit, we can shortcut this requirement by looking strictly at the number of operating days in a year - I’ll say 363.
With 363 days in the year, to be 95% confident that our score reflects all days, we need to sample 77 days of activity. That is another argument against a once-a-year audit: too many days and too much information to review in a four- to six-hour visit. If we increase the audit frequency to two times per year and sample a month’s worth of information each time, we get close to the required 77 days; at three times per year we exceed it, which is why three audits a year is even better - the average score now reflects true store performance with 95% confidence. Within the audit questions we can make other sampling decisions too. For example, if we conduct 3,500 refunds a year, we need to review about 94 of them for 95% confidence - roughly ten days’ worth of random refunds.
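The 77-day and 94-refund figures can be reproduced with the standard sample-size formula for a proportion, with a finite-population correction applied. This is a sketch rather than the author’s stated method; it assumes 95% confidence (z of about 1.96), a plus-or-minus 10% margin of error, and the worst-case p = 0.5, since those parameters reproduce the numbers in the text:

```python
import math

def sample_size(population, z=1.96, margin=0.10, p=0.5):
    """Sample size for estimating a proportion, with the
    finite-population correction applied."""
    n0 = z ** 2 * p * (1 - p) / margin ** 2   # infinite-population sample size
    return math.ceil(n0 / (1 + (n0 - 1) / population))

print(sample_size(363))    # days to sample from a 363-day year  -> 77
print(sample_size(3500))   # refunds to review out of 3,500      -> 94
```

Note how little the required sample grows as the population grows: 77 of 363 days, but only 94 of 3,500 refunds. That is why sampling beats reviewing everything.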
Greater sampling yields a truer picture, better predictions, and quicker correction.
Although this may sound counter-intuitive, it is not the bad scores that we should question. A low score indicates an issue during the period of inspection, and once the issue is identified we can correct it. A passing score may suggest no further action or concern - but that is true only if we have a high degree of confidence that the passing score actually reflects overall daily performance. To have that confidence, we need to sample the proper number of days - a total of 77. As stated, that’s a lot of information to review in one day, so our best method is to divide the sampling across time to add randomness, increase focus, and reduce any one day’s required resources.
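One way to divide that sampling across time is to draw the 77 review days at random from the full year and split them across the planned visits. This is a sketch; the three-visit count and the day-numbering scheme are my assumptions for illustration:

```python
import random

OPERATING_DAYS = 363   # operating days in the year, per the article
DAYS_NEEDED = 77       # days needed for 95% confidence, per the article
AUDITS_PER_YEAR = 3    # hypothetical number of audit visits

# Draw the review days at random from the whole year, then split the
# workload roughly evenly across the audit visits.
days = random.sample(range(1, OPERATING_DAYS + 1), DAYS_NEEDED)
per_visit = [sorted(days[i::AUDITS_PER_YEAR]) for i in range(AUDITS_PER_YEAR)]

for visit, batch in enumerate(per_visit, start=1):
    print(f"Audit {visit}: review {len(batch)} randomly chosen days")
```

Each visit then reviews 25 or 26 days of activity instead of all 77 at once, which preserves the randomness of the sample while spreading the workload.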
The bottom line is that we don’t need to review 100 percent of activities; we just need the proper sample. That sample becomes a much better indicator when spread over time and collected three or four times a year.