The story so far: Ahead of election results on June 4, several exit polls predicted the return of the Bharatiya Janata Party-led National Democratic Alliance to power, with a tally of more than 300 seats for the BJP alone. One pollster, Today's Chanakya, predicted 400 seats (plus or minus 15) for the NDA, and another, Axis My India, said the NDA would win an average of 381 seats. All these polls were way off the mark, as the results showed.
What were the vote share projections?
The CSDS-Lokniti post-poll survey predicted a vote share of 46% for the NDA and 35% for the INDIA bloc (excluding the Left, the Trinamool Congress in West Bengal and the AAP in Punjab), with an error margin of 3.08 percentage points. The results showed the NDA had bagged 292 seats (a 43.63% vote share) and the INDIA bloc 205 seats (excluding the Trinamool Congress, which won 29 seats) with a vote share of 37%. CSDS-Lokniti did not project seats for the alliances but predicted that the NDA would return with a majority. Its vote share figures were within the error margin, though roughly 2.4 points too high for the NDA.
Axis My India projected 47% for the NDA and 39% for the INDIA bloc; the actual results showed that it overestimated the NDA vote share beyond the error margin. C-Voter projected 353 to 383 seats for the NDA, with a vote share of 45.3% for the alliance and 38.9% for the BJP alone; the BJP figure was 2.3 points higher than the party's actual vote share of 36.56%. Its figures for the INDIA bloc were also roughly 2.4 points lower than the actual mark. While its vote shares were within error margins nationally, its seat tallies were way off across several States.
What are exit polls? How are they different from opinion polls?
Opinion polls are sample surveys in which a cross section of the electorate is randomly chosen and interviewed about its choice of party or candidates. These polls can be conducted either in person or remotely, as with telephonic surveys. Exit polls ask voters about their choice right after they have cast their votes, sometimes just outside the polling booth. Some pollsters prefer "post-poll surveys", conducted at voters' residences after they have voted. CSDS-Lokniti's poll was a post-poll survey; others, such as Axis My India's, were exit polls.
Did methodology matter in the way exit polls got the numbers wrong?
For exit polls to be accurate, certain factors have to be kept in mind: the sample size, how the sample is selected, how the survey is conducted, and how the sample is weighted against estimates of the population.
The sample has to be representative; sheer size is immaterial as long as the sample is large enough to statistically predict the winner. If the sample is randomly chosen and large enough to accurately detect a candidate winning more than 40-45% of the vote, which is generally the case in Indian elections, then even a representative sample of around 20,000-odd respondents is enough to predict winners in a country with a voting population of close to 100 crore. One can conduct larger surveys, with more than 20,000 or even lakhs of respondents, but the key to tracking the mood of the electorate is good representation.
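This is the familiar square-root law of sampling: under simple random sampling, the margin of error on a vote-share estimate falls only with the square root of the sample size. A minimal Python sketch of the textbook calculation (the 45% figure and the sample sizes are illustrative, and real surveys carry additional error from design effects):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a vote-share estimate p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A 45% vote-share estimate from 20,000 respondents carries a margin
# of roughly +/- 0.7 percentage points.
print(f"{margin_of_error(0.45, 20_000):.4f}")  # ~0.0069
# Quadrupling the sample only halves the margin; from here,
# representativeness, not raw size, drives accuracy.
print(f"{margin_of_error(0.45, 80_000):.4f}")  # ~0.0034
```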
CSDS-Lokniti's total sample size was 19,663 respondents across 23 States and 193 parliamentary constituencies, while Axis My India's was 5,82,574 across all 543 constituencies. Yet the former got its vote share predictions within the error margins while the latter didn't.
For good representation, the sample has to be chosen randomly (to avoid bias) and in a stratified manner (to avoid missing out on any section of the population). The ideal way to choose a random but stratified sample is to use electoral rolls to identify respondents.
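A schematic sketch of what that can look like in practice; the two strata and the 35:65 urban-rural split here are illustrative assumptions, not any agency's actual design:

```python
import random

# Hypothetical electoral roll: (voter_id, stratum) pairs; roughly 35% of
# entries are tagged "urban" and the rest "rural".
roll = [(i, "urban" if i % 20 < 7 else "rural") for i in range(100_000)]

def stratified_sample(roll, targets):
    """Draw a random sample within each stratum, with per-stratum sizes
    chosen to mirror the stratum's share of the population."""
    chosen = []
    for stratum, n in targets.items():
        members = [voter for voter in roll if voter[1] == stratum]
        chosen.extend(random.sample(members, n))
    return chosen

# 1,000 respondents split 35:65 to match the assumed urban-rural ratio.
respondents = stratified_sample(roll, {"urban": 350, "rural": 650})
print(len(respondents))  # 1000
```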
Once sampling is done and the list of respondents is identified, they need to be weighted on the basis of how different sections are represented in the population: the percentages of women, Dalits, minorities, the majority population, and urban versus rural voters. After a representative sample has been prepared, surveyors conduct the interviews. Ideally, these are face-to-face and in the respondent's own language.
As many respondents will not be comfortable relaying their voting choice to the surveyor, questions should be framed so that the choice can be ascertained indirectly, or a mechanism should be provided that lets the respondent record his or her choice without revealing it openly to the surveyor.
Do respondents reveal their choice?
There is a high probability that many respondents, especially those from marginalised sections, either do not reveal their voting preferences or need to trust the surveyor before opening up about their choices. It is possible that the pollsters who got this election wrong either under-sampled marginalised voters, or that respondents did not trust their surveyors enough to give the right answer, or misled them.
Once the information is collated, there are other steps to be followed. How do surveyors allocate "undecided" choices? For example, a question like 'Whom did you vote for?' may draw answers such as 'Don't know', 'No opinion', 'No response' or 'Won't tell'. Should surveyors omit them? Or should they weight them on the basis of the proportion of decided choices? These are important considerations for the pollster.
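One common convention, though not necessarily what any of these agencies did, is to drop the non-answers and renormalise, which implicitly allocates the undecided in proportion to the decided choices. A toy sketch with made-up numbers:

```python
# Illustrative raw responses, including non-answers.
responses = {"Party A": 430, "Party B": 370, "Won't tell": 120, "Don't know": 80}

NON_ANSWERS = {"Won't tell", "Don't know", "No opinion", "No response"}
decided = {party: n for party, n in responses.items() if party not in NON_ANSWERS}
total = sum(decided.values())

# Renormalising over decided responses spreads the undecided
# across parties in proportion to the decided choices.
shares = {party: n / total for party, n in decided.items()}
print(shares)  # {'Party A': 0.5375, 'Party B': 0.4625}
```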
What is the process after the surveys are carried out?
Once the survey is done, the results should be matched against estimated demographic information. If 12 of every 100 respondents are Dalits but Dalits actually make up 15% of that population, their recorded choices can be uniformly weighted up to stand in for the 15. But if only 39 women in a population of 100 are interviewed, extrapolating the views of those 39 to the roughly 48 women in the population (the likely actual estimate) is problematic, because women do not vote as a single category. This could be one reason why the Axis My India poll got its estimates wrong in many States: the men-to-women ratio in its sample was 69 to 31.
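In weighting terms, the two examples above amount to applying a post-stratification weight, the group's population share divided by its sample share, to each respondent in the group. A short sketch using the figures above:

```python
def post_strat_weight(pop_share: float, sample_share: float) -> float:
    """Weight given to each respondent in a group so that the group
    counts for its actual share of the population."""
    return pop_share / sample_share

# Dalit respondents: 12% of the sample vs 15% of the population.
# A modest up-weight, generally safe to apply uniformly.
print(post_strat_weight(0.15, 0.12))  # 1.25

# Women: 31% of Axis My India's sample vs roughly 48% of the electorate.
# An up-weight above 1.5 makes 31 voices speak for 48, which is risky
# because women do not vote as a single category.
print(round(post_strat_weight(0.48, 0.31), 2))  # 1.55
```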
Most of the pollsters who had tied up with TV channels used their surveys to predict seat shares; CSDS-Lokniti didn't. Vote share to seat share conversions can be done in different ways. The most common method is to assess the swing in a party's vote share from previous elections, either in a State or, to be more accurate, in a particular region, or, if the sampling allows, in a particular constituency. The swing for or against a party, set against the same for its opponent(s), provides the basis for judging whether an incumbent will be returned from a particular constituency or whether a party can retain a certain number of seats in a region or in a State as a whole.
As veteran psephologist and media personality Prannoy Roy points out in his book The Verdict, written with Dorab R. Sopariwala, some pollsters look at swings from previous polls and the "index of opposition unity" to determine the margin of victory for a particular candidate, and so predict seat shares from vote shares.
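A hedged sketch of both ideas: a uniform swing applied to a constituency's previous result, and the index of opposition unity, conventionally defined as the largest opposition party's share of the combined opposition vote. All figures are illustrative, and the actual models pollsters use are considerably more elaborate:

```python
def apply_uniform_swing(prev_shares: dict, swing: float, party: str) -> dict:
    """Add a swing (in vote-share points) to one party and deduct it from
    the others in proportion to their previous shares. A toy model, not
    any pollster's actual conversion."""
    new = dict(prev_shares)
    others_total = sum(v for p, v in new.items() if p != party)
    new[party] += swing
    for p in new:
        if p != party:
            new[p] -= swing * new[p] / others_total
    return new

def opposition_unity_index(opposition_shares: list) -> float:
    """Index of opposition unity: the largest opposition party's share of
    the combined opposition vote, as a percentage."""
    return 100 * max(opposition_shares) / sum(opposition_shares)

# A seat the incumbent won 44-38-18 last time, with a 3-point swing to the challenger.
prev = {"Incumbent": 44.0, "Challenger": 38.0, "Third": 18.0}
print(apply_uniform_swing(prev, 3.0, "Challenger"))
# {'Incumbent': ~41.9, 'Challenger': 41.0, 'Third': ~17.1} -- now a toss-up.

# A fragmented opposition (38 and 18) has an index of about 68;
# a consolidated opposition would push it towards 100.
print(round(opposition_unity_index([38.0, 18.0]), 1))  # 67.9
```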
There are other methods to evaluate this as well, but conversion of vote shares to seat shares also requires the pollster to be aware of the political dynamics of a particular State, its regions, and what transpired in proximate elections there.
For example, in a State where a particular party wins an Assembly election, it enjoys what is called a "honeymoon effect", and this has a bearing on the Lok Sabha poll if that election is held only a few months after the Assembly polls. In such cases it is better to calculate swings for parties using the Assembly polls in those States as the base rather than the previous Lok Sabha poll.
None of the pollsters who tied up with major television channels got their vote-to-seat conversions right. Since none of them has revealed what they consider their "secret sauce", the conversion process, it is difficult to ascertain why they got it wrong.
Is a close election difficult to predict?
It is evident that pollsters in India mostly get the winner of an election, and seat shares closer to reality, when the outcome is decisive. When elections are close, as this Lok Sabha election was, pollsters are rarely accurate on vote and seat shares.
Whether a polling agency has done a good survey is clear from what it reveals about its methodology: the sample size, the mode of survey, the representativeness of the sample, and the built-in error margins. A survey that does not reveal these should not be taken seriously.