Introduction
In scientific terms, accuracy refers to how close a measurement is to the true or accepted value.
In the world of people counting, it is how close the number of people reported by the counting system is to the number of people who actually went IN and OUT of an area.
Note that because a Vector's count lines work independently and can be positioned in different parts of the field of view, with different settings, it is possible and likely that the accuracy of each will be slightly different. Even lines positioned in exactly the same place with the same underlying settings will most likely have slightly different accuracies, due to the different paths taken across each line (i.e. from opposite directions) and because of the human factor, which can often be unpredictable and sometimes confusing.
The main purpose of deriving a Vector's accuracy in a given location with a given setup is to prove to the end customer that the count data coming out of it is sensible and usable. If the accuracy is deemed insufficient, the aim should be to use the data to make informed changes to the installation or configuration, improving accuracy until the installation is optimal. Using the Vector's built-in Validation functionality is the best way to see where a device went wrong, so that changes can be made (for example, tweaking line positions and/or Register Height Filter settings) and future similar occurrences are counted correctly.
Accuracy Examples
Irisys quotes the accuracy of a Vector as 99.5%.
That means that for every 200 people walking IN and OUT of a doorway, one person may be inadvertently missed or double counted.
By comparison, if a 3rd party device states an accuracy of 99%, this would mean that for every 100 people walking IN and OUT it would miss or double count 1 person. And for a device stating an accuracy of 90%, one in ten people walking IN and OUT is either missed or double counted.
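These figures follow directly from the quoted accuracy percentage. As a hypothetical sketch (not Irisys tooling), the expected number of miscounts for a given accuracy and traffic level can be computed like this:

```python
def expected_miscounts(accuracy_pct: float, people: int) -> float:
    """Expected number of missed or double-counted people for a
    quoted accuracy percentage over a given number of passes."""
    return people * (1 - accuracy_pct / 100)

# 99.5% accuracy: roughly one error per 200 people
print(expected_miscounts(99.5, 200))
# 99% accuracy: roughly one error per 100 people
print(expected_miscounts(99.0, 100))
# 90% accuracy: roughly one error in ten
print(expected_miscounts(90.0, 100))
```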
Calculating Accuracy
It is a common support question as to why the total IN and total OUT values are not the same at the end of the day, and conversely, customers are often happy with the accuracy when the two totals are the same or very similar.
But it is a common mistake to compare the IN count value with the OUT count value and derive meaning from the difference between the two. If the IN and OUT counts are very similar then it might be a good sign, but it could hide an issue which affects the accuracy of both count values equally.
Just by looking at the IN and OUT values, it cannot be known if they are both accurate, both too low, both too high, or one is too high and the other too low.
To determine accuracy you need to compare the value(s) reported by the counting system with the true value(s) seen at the entrance/doorway. The only way to get the true values is via a manual validation, where you count the traffic yourself.
Manual validation can be done 'live', i.e. whilst on site and watching what happens, or by making use of the Vector's validation recording functionality, whereby you can record a sequence and then play it back later.
As an example:
- Suppose a Vector is configured to count in both directions through a doorway. Over a few hours of manual auditing you count 367 people going IN and 362 going OUT, while the Vector over the same timeframe counts 365 people going IN and 360 going OUT.
- The IN error rate is ( 365 - 367 ) / 367 = -0.0054, which equates to an accuracy of 99.46% (slight undercount)
- And the OUT error rate is ( 360 - 362 ) / 362 = -0.0055, which equates to an accuracy of 99.45% (slight undercount)
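The calculation above can be sketched in a few lines of code. This is illustrative only, not Irisys software; it uses the error-rate formula (counted - true) / true, with the accuracy figure being 100% plus the error rate (a negative error rate is an undercount, a positive one an overcount):

```python
def accuracy_pct(counted: int, true_count: int) -> float:
    """Accuracy as a percentage, from the device count and the
    manually validated true count."""
    error_rate = (counted - true_count) / true_count
    return 100 * (1 + error_rate)

# Figures from the worked example: IN 365 vs 367, OUT 360 vs 362
print(f"IN accuracy:  {accuracy_pct(365, 367):.2f}%")
print(f"OUT accuracy: {accuracy_pct(360, 362):.2f}%")
```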
Statistical Relevance
Statistical relevance helps to ensure that a result is valid and not due to chance or to some other factor. When a finding is significant, it simply means you can feel confident that it's real, and that you didn't just get lucky (or unlucky) in your counting sample.
In the people counting world, this simply means that you need a large enough sample of counts in order to draw valid conclusions from the data.
As an extreme example, if one person walks in through a door and the Vector counts that person, this does not mean the device is 100% accurate. Equally, if a second person walks in and for some reason the Vector misses that person, that does not mean the accuracy is 50%. Only when a sufficient number of people have entered will the Vector's natural accuracy be achieved. As this shows, if flows are too low, small errors can feature too prominently; conversely, a very successful short run can give unfair confidence in the accuracy.
Most statisticians agree that 100 people across each count line is the minimum required to provide a sensible accuracy figure, but the more people beyond that, the better. In quiet environments where 100 people may not be achievable in a day, any accuracy figure derived should be used with caution. In reality, more than 100 per direction is preferred, and for this reason known busy times of the day (lunchtimes, for example) are the best times to schedule a validation.
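The small-sample effect described above can be illustrated with a quick simulation. This is a hypothetical sketch, not anything measured on a real device: it assumes a counter that miscounts each person with a fixed 0.5% probability (the quoted figure, used here purely for illustration) and shows how the measured accuracy only settles near the true value as the sample grows.

```python
import random

random.seed(1)  # fixed seed so the illustration is repeatable

def measured_accuracy(sample_size: int, error_prob: float = 0.005) -> float:
    """Simulate counting `sample_size` people, each miscounted with
    probability `error_prob`, and return the measured accuracy %."""
    errors = sum(1 for _ in range(sample_size) if random.random() < error_prob)
    return 100 * (1 - errors / sample_size)

# With few people the figure jumps around; with many it converges.
for n in (10, 100, 1000, 10000):
    print(f"{n:>5} people -> measured accuracy {measured_accuracy(n):.2f}%")
```

With only 10 people, a single miscount swings the figure by 10 percentage points; with 10,000 it barely moves, which is why validations over busy periods give the most trustworthy results.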