Statistics, surveys and the truth

Do our survey exercises bring us close to an accurate view of the state of things in our industry? “Yes, but…,” says Richard Schwartz, as Global Custodian publishes its annual Fund Services surveys – Prime Brokerage, Hedge Fund Administration and Private Equity Administration.

Most people are familiar with the phrase “Lies, damned lies and statistics”, popularised by Mark Twain, though whether he was actually quoting Benjamin Disraeli or came up with it himself is a matter of conjecture.

The late newspaper columnist Art Buchwald once came to the (tongue-in-cheek) conclusion that, “The buffalo isn’t as dangerous as everyone makes him out to be. Statistics prove that in the United States more Americans are killed in automobile accidents than are killed by buffalo.” 

When I first became involved with Global Custodian’s surveys, there were still firms that (allegedly) took survey results into account when deciding on annual bonuses. While this may have been a sign of the regard in which GC survey results were held, it was also, to my mind, an invitation to try to game the survey.

Statistical conclusions do not carry the certainty of the kind of maths we learned at school, with its right and wrong answers. The truth is that there is an element of subjectivity to the results and to the way they are selected for presentation, however rigorously we try to manage the data collection process.

For a start, respondents are attempting to quantify a qualitative judgement, which may itself fluctuate with circumstance. Secondly, there is no unanimity on what the results should reflect beyond a broad assessment of what clients of individual providers think of the service they are receiving. Although people may like to turn the process into an informal competition – and it can obviously be spun like that – it isn’t really.

By way of example, not everyone agrees on how different factors should be weighted. Are some services more important than others? Are some respondents more important than others? The question of weighting is one on which I think I’m slowly changing my mind. GC surveys have long been weighted to give the largest respondents a greater say in the results, the argument being that the more assets they represent, the more their views should count. Most of us would find that objectionable in a political context, so why is it acceptable here?

There is an answer to that. Weighting is traditionally used to correct for biases in the response data actually collected. But this assumes a target demographic for the survey. If, for example, you want to gather opinions on a particular issue that represent, let’s say, equal numbers of blue-eyed and brown-eyed adults over 70, and you end up with a data set that comprises 70% blue-eyed and 30% brown-eyed responses, you may want to correct for that by weighting each response so that the two groups contribute equally to the result.
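For readers who like to see the arithmetic, a minimal sketch of that kind of correction is below. The 50/50 target, the group labels and the scores are invented for the example; it is not a description of how GC weights its own surveys.

```python
# Illustrative post-stratification weighting for the hypothetical
# eye-colour example above. All figures are made up for the sketch.

# Observed share of responses per group, and the target share
observed = {"blue": 0.70, "brown": 0.30}
target = {"blue": 0.50, "brown": 0.50}

# Each response is weighted by target share / observed share, so an
# over-represented group counts for less per response, and vice versa.
weights = {group: target[group] / observed[group] for group in observed}
print(weights)  # blue ~0.71, brown ~1.67

# A weighted average of (group, score) responses then corrects the bias:
responses = [("blue", 6.0), ("blue", 5.5), ("brown", 6.8)]
total = sum(weights[g] * score for g, score in responses)
norm = sum(weights[g] for g, _ in responses)
print(total / norm)  # bias-corrected mean score
```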

GC surveys, however, do not start out with such a defined demographic. As long as the respondent is a verified client of the provider they are rating, their response will be accepted. Granted, a provider may want to attract a response profile that maps to their overall client base for purposes of comparison with their own internal market research, but there is no retrospective adjustment of the respondent pool beyond verification of client status.

I certainly don’t want to give the impression that customer perception surveys should be regarded flippantly. There is much that participants, both rated service providers and respondents, can learn from the results to reward the effort of participation. But, although I wouldn’t go nearly as far as Mark Twain, I can sort of see his point.