Sadly, spin and distortion in support of a dubious parochial objective are a feature of pretty much everything these days, so nothing can be taken at face value any longer. This just wastes a huge amount of time and effort arguing over the numbers rather than spending it on solving the problem.
A case in point is the suggestion that anything up to 70% of Atos WCAs are wrong. Nobody quite says this, but many are happy for it to be the impression you walk away with; after all, it “proves” beyond doubt that Atos is rubbish.
However, it appears that only 9% of assessment decisions have been overturned at appeal, as recently verified by the independent fact-checker Fullfacts.org. (I have always found Fullfacts to be about as straight as anyone can be.)
The REAL issue is therefore whether 9% is acceptable and, if not, what is. It doesn’t sound a lot, less than 1 in 10, but in such a sensitive area it is clearly not good enough; equally, zero “errors” is an unrealistic target. Someone needs to define and agree what an acceptable error rate actually is.
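Part of the argument over the numbers is almost certainly a matter of denominators: a success rate among the minority of decisions that are actually appealed is a very different quantity from the proportion of all decisions overturned, and conflating the two is how an alarming headline figure and a modest 9% can both sound plausible at once. A rough sketch of that arithmetic is below; the appeal rate and appeal success rate used are invented, illustrative numbers, not official statistics.

```python
# Illustrative arithmetic only: appeal_rate and appeal_success_rate below are
# invented numbers for the sake of the example, not official DWP/Atos figures.

def overall_overturn_rate(appeal_rate: float, appeal_success_rate: float) -> float:
    """Share of ALL decisions overturned = share appealed x share of appeals that succeed."""
    return appeal_rate * appeal_success_rate

# Example: if 25% of decisions were appealed and 40% of those appeals succeeded,
# only 10% of all decisions would be overturned, even though "40% of appeals
# succeed" makes a far more damning headline.
print(f"{overall_overturn_rate(0.25, 0.40):.0%}")  # -> 10%
```

The point is not these particular numbers, which are made up, but that the choice of denominator alone can turn the same underlying data into either a scandal or a footnote.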
The sooner we do this, the sooner we can have a sensible discussion about how best to achieve the target (and by when) and establish a series of milestones to get us there. This is no more than good business practice, and I really cannot understand why it was not sorted out at the very beginning of this project. The fact that it was not is inexcusable, and even the Harrington reports do not address this basic issue. How can anyone possibly say that the current overall process is capable of achieving an acceptable error rate without knowing what that rate should be?
The concept of KPIs and the like has been much maligned, but the one management cliché that I have found always holds true is “if you don’t measure it, you can’t manage it”.
[PS: I do realise that WCA “errors” are not all of equal size, and perhaps views would be different if they were all just at the margins.]