Saturday, 4 August 2012

WCA - back to basic management best-practice

We (including Prof Harrington) have lost sight of some of the basics. They are not particularly to do with politics or health; they are just sound management best practice. At present we are in a steady downward spiral – more decision errors, more appeals and longer queues, all making it harder to measure what is really going on. Improving the right-first-time rate is the logical place to start reversing the trend.

The quality of any discrimination process (which is essentially what the WCA is: fit for work or not fit for work) is judged by its error rate in both directions – false positives and false negatives, as the statisticians say. It is not just the proportion of errors that matters but also their size – are they at the margins or at the extremes? An error at the extreme would be, for example, declaring a terminally ill person fit for work – something we know has happened, and the type of error that is wholly unacceptable and entirely avoidable. Errors at the margins are far tougher to eliminate, so we have to be somewhat more tolerant of them.
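To make the two error directions, and the idea that their size matters, a little more concrete, here is a minimal sketch using entirely hypothetical counts – nothing below is a real DWP or WCA statistic:

```python
# Hypothetical illustration only - none of these counts are real DWP/WCA figures.
# Convention here: a "positive" decision means found Fit For Work (FFW).

# Imaginary batch of 1,000 first-time WCA decisions
true_positive  = 500   # truly fit, correctly found FFW
false_negative = 40    # truly fit, wrongly found not FFW
true_negative  = 400   # truly not fit, correctly found not FFW
false_positive = 60    # truly not fit, wrongly found FFW (the damaging direction)

total = true_positive + false_negative + true_negative + false_positive

# Error rate in each direction
false_positive_rate = false_positive / (false_positive + true_negative)  # share of the genuinely unfit wrongly passed as FFW
false_negative_rate = false_negative / (false_negative + true_positive)  # share of the genuinely fit wrongly failed
overall_error_rate  = (false_positive + false_negative) / total

print(f"False-positive rate: {false_positive_rate:.1%}")   # 13.0% in this made-up batch
print(f"False-negative rate: {false_negative_rate:.1%}")   # 7.4%
print(f"Overall error rate:  {overall_error_rate:.1%}")    # 10.0%

# Size matters as well as count: an error at the extreme (e.g. a terminally ill
# person declared FFW) should weigh far more heavily than a borderline call.
# One crude way to capture that is a severity weighting on the errors.
marginal_errors = 95   # borderline cases wrongly decided (hypothetical)
extreme_errors  = 5    # clear-cut cases wrongly decided (hypothetical)
weighted_error_score = marginal_errors * 1 + extreme_errors * 10
print(f"Severity-weighted error score: {weighted_error_score}")
```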

No discrimination process is perfect. Just as people are occasionally sent to prison wrongly despite the rigours of the justice system, we have to accept that even the very best WCA process will make some errors. The question, from a compassionate and humanitarian standpoint, is what error rate we would realistically regard as "acceptable". Once we have a view on that, we can ask whether the current process is capable of achieving it – if it is not, the sooner we realise it the better. As it is, everyone assumes it can, but in honesty how can anyone say, when we have no idea what we are aiming for?
Measuring first-time decision errors against all WCAs undertaken suggests an error rate of around 8-10% – see the sketch after the questions below. So the questions for HMG, in the hope of extracting straight answers, are:
1) Does it regard this as acceptable?
2) Can it prove the improvement it claims through the changes implemented over the past 4 years?
3) What success rate is it aiming for? Having an ambition is perfectly reasonable, indeed desirable. [It will of course avoid answering this at all costs, but if it does not know, how will anyone know when we get there? There is no point looking for further improvement once the ceiling has been reached – that is a complete waste of time and money.]
4) Is the current process capable of reaching this ambition:
· If yes, what specific changes are needed to bridge the gap?
· If no, when does the re-think start?
My own view on 1) is possibly yes, so long as the errors are confined to the margins and are resolved quickly – but that is a subject for debate.
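For what it is worth, here is a rough sketch of the measure referred to above – first-time decision errors taken against all WCAs undertaken – with placeholder numbers chosen purely for illustration:

```python
# Sketch of the right-first-time measure discussed above.
# All counts are illustrative placeholders, not published DWP figures.

total_wcas_completed = 100_000   # all first-time WCA decisions in the period
decisions_overturned = 9_000     # first decisions later changed (reconsideration or appeal)

first_time_error_rate = decisions_overturned / total_wcas_completed
right_first_time_rate = 1 - first_time_error_rate

print(f"First-time error rate: {first_time_error_rate:.1%}")   # 9.0% - inside the 8-10% band quoted above
print(f"Right-first-time rate: {right_first_time_rate:.1%}")   # 91.0%

# Question 3: without a stated ambition there is nothing to steer towards.
target_error_rate = 0.05   # purely illustrative target, not a government figure
gap = first_time_error_rate - target_error_rate
print(f"Gap to the (illustrative) target: {gap:.1%}")           # 4.0% still to close
```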

2 comments:

Anonymous said...

You seem to assume that the WCA is a valid test.

It is important to keep apart two very different concepts: ‘Fit for Work’ (FFW), as defined by DWP, and ‘Truly Fit For Work’ (TFFW), i.e. capable of doing the kind of work that exists in the real world.

HCPs/DMs (the healthcare professionals and decision makers) determine whether you are FFW; they are not interested in whether you are TFFW. Appeal tribunals determine whether the HCP/DM has assessed you according to the law; they don’t care whether you are TFFW.

We know how many people have been found FFW, how many have appealed and how many have won their appeals. But we don’t know how many of those found FFW are TFFW, or whether those found FFW who choose not to appeal are even correctly FFW, let alone TFFW. Nor do we know how many of those who lose their appeals are TFFW. (It may well be that many of them are not fit for work in any meaningful sense; they just don’t fit into DWP’s boxes.)

Improving the right-first-time rate will not help those who may be FFW, but are definitely not TFFW. Unfortunately for them, in the real world employers expect job applicants to be TFFW.

Tia Junior said...

I would say first that my posts here are not intended as scientific theses, so they do not spell out every aspect of whatever I have chosen to pontificate about.

The suitability of the test itself is irrelevant to the point I am making here. I would prefer that the Government were not intent on screwing me, but if it is, I would at least like it to be efficient about it and not waste my money too.

It is impossible to design a test that is both 100% accurate and cost-effective in an area like this, where the science is inexact, so you can only drive the errors as low as you reasonably can – the real skill then lies in how sympathetically the residual errors are managed.

I agree with you – the focus should be on what it takes to hold down a job, not just the ability to perform a series of relatively meaningless tasks. More than anything else, employers need reliability, and this is the one thing many sick or disabled people find hardest to provide. There has been euphemistic talk about task repeatability that largely misses the point and is only now, after all this time, gaining some real traction through the descriptor definitions – we’ll see to what degree when the next wave is published officially. It would help a lot if (for example) the CBI were to make this point.

Linked to this is the other issue conspicuous by its absence – the slightest clue as to what the “W” in WCA actually means; “some general form of work” is a cop-out to say the least. How can you possibly test someone’s ability to do something without saying what that “something” is? The only indisputable definition of work is the one the physicists use, which in this context is useless.

The other option is to create jobs around these limitations – flexitime, working from home and so on – but ironically HMG has decided to wind down the few facilities of that kind that do exist.

We can only work with the data available and have to accept that some data that might be interesting, or even useful, is too difficult to collect. The Government claims all sorts of improvements for which it has not one shred of supporting evidence, but other than pointing this out there is not much else one can do. As a result, much energy goes into debating the statistics rather than the problem itself – a common diversionary tactic for any regime engaged in disingenuous and nefarious activity. You will have seen figures that almost suggest the error rate is 70%, which is simply not the case.
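On that last point, a plausible source of such wildly different headline figures is simply the choice of denominator – a small illustration, again with made-up counts rather than real statistics:

```python
# Why headline percentages can differ so wildly: the denominator.
# All counts below are made up for illustration - not real DWP or tribunal statistics.

total_wcas_completed = 100_000   # all first-time decisions in the period
appeals_heard        = 20_000    # decisions actually taken to appeal
appeals_upheld       = 8_000     # appeals decided in the claimant's favour

rate_vs_all_decisions = appeals_upheld / total_wcas_completed   # the "error rate" sense used in this post
rate_vs_appeals_only  = appeals_upheld / appeals_heard          # same data, much scarier-looking number

print(f"Overturned as a share of ALL decisions: {rate_vs_all_decisions:.0%}")   # 8%
print(f"Overturned as a share of appeals heard: {rate_vs_appeals_only:.0%}")    # 40%

# Quoting a rate based on a narrower denominator (appeals only, or an even
# narrower subset of them) as though it applied to all decisions is one way
# inflated-looking percentages can get into circulation.
```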

My aim here was to highlight how GENUINE improvements can be made quickly by better using the information already available, rather than to design a panacea.