
Is Afrobarometer's latest poll reliable?

Gareth van Onselen questions the dependability of the recent Idasa opinion survey

Afrobarometer (in conjunction with the Institute for Democracy in South Africa, IDASA) has released the findings of a recent poll it undertook, "A comparative series of national public attitude surveys on democracy, markets and civil society in Africa", and while it contains a few interesting insights, they are obscured by a series of poorly thought-through questions and a generally shoddy questionnaire. It is indicative of the low standard of market research, and of its interpretation, in South Africa today.

The poll comprised face-to-face interviews, conducted in 11 languages with a nationally representative probability sample of 2 400 respondents (a random sample, in which each member of the population has an equal chance of being selected). It was conducted across all nine provinces in October and November 2008, and the fieldwork was done by a company called Citizen Surveys.
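For a rough sense of what a sample of that size buys you, the sketch below works out the textbook sampling margin of error, assuming simple random sampling; it is illustrative only, since a clustered, multi-stage field design of the kind national face-to-face surveys typically use would widen the figure.

import math

# Illustrative only: worst-case (p = 0.5) margin of error at 95% confidence
# for a simple random sample of 2 400 respondents. A clustered national
# face-to-face design will usually have a design effect that widens this.
n = 2400
p = 0.5          # worst-case proportion
z = 1.96         # 95% confidence

margin = z * math.sqrt(p * (1 - p) / n)
print(f"Approximate margin of error: +/-{margin * 100:.1f} percentage points")
# -> roughly +/-2 percentage points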

A poll of this size (over 150 questions) costs a vast amount of money to administer - millions of rands, to be precise. The majority of that cost goes to paying fieldworkers, the people who go door to door and conduct the interviews. So a basic requirement is ensuring you have a good questionnaire; get that wrong and all the money you spend on fieldwork will be wasted: a bad question will produce a meaningless answer.

This necessity, however, doesn't seem to carry much weight with Afrobarometer and IDASA - one or both of whom presumably drew up the survey - as there is a series of basic errors throughout the questionnaire.

Take Question 9.c (page 9), for example. It asks: "Over the past year, how often, if ever, have you or anyone in your family been physically attacked?" The options each respondent was presented with were the following: "Never"; "Just once or twice"; "Several times"; "Many times"; "Always" and "Don't know".

How people experience crime is a relative concept. Being attacked, say, three times might well constitute "several times" for some people and "many times" for others. Perhaps more significantly, Afrobarometer itself, which would have to interpret the findings, might have its own idea of exactly how many times "several times" is.

In short, by not quantifying the options, there is every chance that respondents and Afrobarometer will interpret the answers differently.

And that is before you get to the option "Always" - an answer as odd as it is meaningless. How is someone attacked "Always"? What does it mean? Bizarre.

Here is another bit of mad polling: Question 18 (page 12). It presents respondents with two statements and asks them to identify which one of the two they agree with. The options are: "Agree very strongly with 1"; "Agree with 1"; "Agree with 2"; "Strongly agree with 2"; "Agree with neither" and "Don't know". Here are the two statements:

Statement 1: "People are like children; the government should take care of them like a parent. Since leaders represent everyone, they should not favour their own family or group."

Statement 2: "Government is like an employee; the people should be the bosses who control the government."

This kind of question is standard in social research. It is an attitudinal question designed to gauge what ideas/values/principles people associate with. Political parties often use them to test slogans and key messages.

The trick, however, is to ensure you have one idea per statement. If you conflate two ideas, then you have no idea what people are associating themselves with.

Statement 1 is a good example of this sort of bad practice. The first half of the statement concerns the role of the state, the second half nepotism and corruption. So, when people say they agree or strongly agree with Statement 1, it's hard to know what they are agreeing with: that nepotism is bad or that the idea of the welfare state is good.

The consequence is that the answers are ambiguous and, as a result, ultimately worthless.

Question 49 (page 35) is also problematic. It asks: "How much do you trust opposition political parties, or haven't you heard enough about them to say?" The options: "Not at all"; "Just a little"; "Somewhat"; "A lot" and "Don't know/Haven't heard enough".

To suggest that "opposition parties" are a uniform group is, of course, silly. South Africa's opposition parties differ fundamentally in a number of respects, from the Afrikaner Weerstandsbeweging on the far right to the Azanian People's Organisation on the far left. In the same fashion, people will inevitably have different opinions about different parties - they may distrust the Independent Democrats, say, but trust the Democratic Alliance. To force respondents to amalgamate their views is, in essence, to generate an artificial answer, which would again be meaningless.

You can be sure that, were Afrobarometer to ask the same question with regard to a series of individual political parties, the responses generated would differ dramatically.

There is a seemingly endless number of other mistakes, and space does not allow for a full interrogation of each of them, but here's one more worth mentioning. Although the blurb on the IDASA website doesn't say whether the respondents selected are registered voters, one must assume they aren't, simply because the survey describes them as ‘citizens' and at no point identifies them as registered voters. Either way, however, there is a problem.

If the respondents aren't registered voters, what's the point of asking a voting intention question (Question 97, page 85)? You will be getting the opinions of people who aren't going to vote, which skews the picture given by those who actually are eligible to vote. There might be some esoteric reason for generating that sort of information, but it's entirely unhelpful in the run-up to an election, for it tells you very little of substance.

If the respondents are registered voters, then it is hard to fathom why Afrobarometer hasn't cross-tabulated by race (the best it has to offer is urban versus rural respondents). South African politics is divided along racial lines; one might not like that fact, but that doesn't mean it isn't a fact. And you aren't going to learn much of any real consequence without cross-tabulating people's responses by race.
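For readers unfamiliar with the term, cross-tabulation simply means breaking one set of answers down by another variable. The sketch below shows the idea in Python with pandas; the respondents, the race categories and the answers are invented purely for illustration and are not drawn from the Afrobarometer dataset.

import pandas as pd

# Hypothetical respondent-level data, invented to illustrate the idea of
# cross-tabulating a survey response against a demographic variable.
responses = pd.DataFrame({
    "race": ["Black", "White", "Coloured", "Black", "Indian", "White", "Black"],
    "trust_opposition": ["Not at all", "A lot", "Somewhat", "Just a little",
                         "Somewhat", "A lot", "Not at all"],
})

# Row percentages: within each group, what share gave each answer?
crosstab = pd.crosstab(responses["race"],
                       responses["trust_opposition"],
                       normalize="index") * 100
print(crosstab.round(1))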

(And, I have to say, the voting intention question (Question 97) is itself problematic. For one, 21% of respondents refused to answer - which suggests there is something wrong with the way the question was asked; that figure is simply too high. A further 8% were undecided. So nearly 30% of respondents didn't answer the question, which makes the figures for those who did answer entirely unreliable, because you don't know which way the undecided and the refusals will fall.)
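A rough bounding exercise shows why that level of non-response matters. In the sketch below, only the 21% refused and 8% undecided figures come from the survey; the headline support numbers are invented for illustration.

# Illustrative only: how a 29% block of refused/undecided respondents can
# swing a headline figure. The 55/16 split is invented; only the 21% refused
# and 8% undecided come from the survey write-up.
party_a = 55.0                   # hypothetical stated support (% of all respondents)
party_b = 16.0                   # hypothetical stated support
refused_undecided = 21.0 + 8.0   # 29% gave no usable answer

# Bound the true gap by assigning the silent 29% entirely one way or the other
best_case_gap = (party_a + refused_undecided) - party_b
worst_case_gap = party_a - (party_b + refused_undecided)

print(f"Gap if all non-responders back A: {best_case_gap:.0f} points")
print(f"Gap if all non-responders back B: {worst_case_gap:.0f} points")
# -> anywhere between 10 and 68 points, which is why the headline number
#    tells you little on its own.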

Market research in South Africa is defined by poor standards at both ends of the spectrum. On the one hand, as the Afrobarometer poll illustrates, conceptualising a good methodology, producing a sound questionnaire and properly carrying out fieldwork is often a bridge too far for a number of companies that claim to specialise in this sort of thing; on the other hand, those ‘experts' whose job it is to interpret the results often fail to properly understand raw data and what, exactly, a particular response tells you (but that's the subject for another discussion).

Polls are often used by the media as the subject of big headlines and dramatic statements (see here and here, for example), but look a little closer and very often those statements don't hold up to scrutiny, the reason being that the poll they are based on is unreliable.

What one can say with some conviction is that this latest Afrobarometer poll is certainly not a model of good market research.

Gareth van Onselen is director of special issues for the Democratic Alliance. This article was originally posted on the DA's volunteer website: www.contributetochange.org.za (free sign up required)
