Guided by voices

A press release rehashed as a news article on the BBC website today – “Lie detector software saves £500k” – states:

Lie detector software has saved a south London council almost £500,000 it was losing through benefit fraud.

During a phone call Voice Risk Analysis (VRA) software analyses minute changes in the voices of those claiming benefits to see if they are lying.
During the pilot, almost 1,700 people were assessed, of which 377 had their benefits stopped or decreased.

For starters, VRA isn’t a lie detector – there is no such thing. Even the polygraph isn’t really a lie detector, and in any case evidence from either machine cannot be used in court. The success rate of VRA itself is not particularly encouraging:

What we see here is that 198 true statements were correctly determined to be true but 118 true statements were incorrectly determined to be false. We also see that 127 false statements were correctly determined to be false but 73 false statements were incorrectly determined to be true. In short, 37.3 per cent of the true statements were adjudged to be false and 36.5 per cent of the false statements were adjudged to be true.
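Working the quoted counts through (a quick sketch using only the four figures given above):

```python
# Reported trial counts: VRA judgements on statements whose
# truth was independently known.
true_correct = 198    # true statements judged true
true_wrong = 118      # true statements judged false
false_correct = 127   # false statements judged false
false_wrong = 73      # false statements judged true

# Error rates quoted in the text
true_as_false = true_wrong / (true_correct + true_wrong)
false_as_true = false_wrong / (false_correct + false_wrong)

print(f"{true_as_false:.1%} of true statements judged false")  # 37.3%
print(f"{false_as_true:.1%} of false statements judged true")  # 36.5%
```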

While a strong case can be made for using such tools as one of many parts of an assessment process, with due care taken over their fallibility and the risks involved, there’s nothing to suggest that is the case here, particularly given how the council is trumpeting it. Processing 1,700 cases in five months (17 or so per working day) is pretty swift work, to boot.

This is further exacerbated by the base rates – the false positive and false negative rates are similar, but assuming the vast majority of claims are genuine, more genuine claimants are going to be denied benefits they deserve than bogus claimants are going to get away scot-free.

The only remaining thing in their favour is they can be a deterrent – albeit a fairly expensive one – and that their ongoing usefulness as a deterrent depends on how they are used. Used carefully and judiciously these are a good idea; used as a quick fix to plough through thousands of claims and they quickly become ineffective and even oppressive.

2 thoughts on “Guided by voices”

  1. You’ll be glad to know that, on those figures, the FP/FN rate can’t have been anywhere near as high as 36-7%, or far more than 377 people would have been caught in the net (mostly false positives).

    It’s a base rate fallacy thing – the rate of the actual phenomenon in the population interacts with the FP/FN rate. Assuming for simplicity that the FP and FN rates are the same, if you plug in a ‘false’ rate of 20% (i.e. VRA gets it wrong one in five times) you’re forced to conclude that the software’s identified 49 true positives and 328 false positives. Lower miss rates give better results, obviously – 5% ‘false’ gives 308 true positives and 69 FP – but the rate needs to be pretty low to give decent results (12.5% would give approximately 50/50 FP/TP). All this is holding the final figure of 377 constant, but since we know that number it seems reasonable to do so.
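The commenter’s arithmetic can be checked with a short sketch: holding the 1,700 assessments and 377 flagged claims fixed, and assuming (as the comment does) that the false positive and false negative rates are equal, the base rate of bogus claims is forced by the numbers, and the true/false positive split follows.

```python
def tp_fp_split(error_rate, assessed=1700, flagged=377):
    """Given a symmetric error rate, solve for the base rate p of
    bogus claims that yields `flagged` positives in total, then
    split those positives into (true positives, false positives)."""
    e = error_rate
    # flagged = assessed*p*(1-e) + assessed*(1-p)*e  =>  solve for p
    p = (flagged - assessed * e) / (assessed * (1 - 2 * e))
    tp = assessed * p * (1 - e)
    return round(tp), round(flagged - tp)

print(tp_fp_split(0.20))  # (49, 328) -- the comment's 20% case
print(tp_fp_split(0.05))  # (308, 69) -- the comment's 5% case
```

At a 12.5 per cent error rate the split comes out at roughly 192 true positives to 185 false positives – the “approximately 50/50” point the comment mentions.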
