OPINION

Poll surveys, toll fees

Primer Pagunuran

It’s akin to comparing apples and oranges. Drawn on a Venn diagram, poll surveys and toll fees share commonalities as well as differences. When Darrell Huff wrote his book, “How to Lie with Statistics,” it was an admission, in effect, that statistical data could lie, bluff and mislead, even become unjust, when a trivial extrapolation is pushed to the extreme.

In practice, election polling requires a sponsor. Any polling firm worth its salt would prefer that business arrangement to an opinion survey done as a public service at its own expense. There are costs involved in launching a survey of, say, 1,800 or so respondents across geographical areas and demographics to arrive at a representative sampling of the whole population.

Like toll fees, poll surveys tend to be rivalrous and exclusionary. This is especially true of election polling done under a “sponsorship” or for an upper middle-class client, which consequently cannot insulate itself from the typical corporate bias of, say, the think tank commissioning or sponsoring the poll.

In the first place, a survey attended by any degree of bias would defeat its purpose. The poll surveys on senatorial candidates released by Pulse Asia and Social Weather Stations (SWS), with shortlists indicative of hasty guesswork, could be morally invalidated.

SWS limited the field of senatorial wannabes to only 40, while Pulse Asia extended it to 74. Strictly speaking, the margin of error turns on the number of respondents sampled, not on the length of the candidate list; still, one can readily say that a survey covering a broader field, and drawing on a greater number of respondents (i.e., Pulse Asia’s), would come closer to a representative sampling of the population.
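For readers who want the arithmetic, the familiar rule of thumb puts the 95-percent margin of error for a sample proportion at about 1/√n, where n is the number of respondents. A minimal sketch in Python, assuming simple random sampling and using the illustrative 1,800-respondent figure cited earlier (not a number reported by either pollster):

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        # 95-percent margin of error for a sample proportion under
        # simple random sampling; p = 0.5 is the worst case
        return z * math.sqrt(p * (1 - p) / n)

    print(f"n = 1,800: +/- {margin_of_error(1800):.1%}")  # about +/- 2.3 points

Note that the 40 or 74 names on a shortlist never enter the formula; only the respondent count does.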

But the non-inclusion, for example, of an easily identifiable senatorial wannabe like Rep. Rodante Marcoleta in their presupposed universe mirrors the apparent weakness, nay bias, of the polling outfit. How convenient to fixate on bunching the same old dynastic personages onto the list even before they had filed their certificates of candidacy.

This alone proceeds from a mistaken, if premature, assumption, probably path-dependent on some patronizing bourgeois or class bias, and poorly grounded in the political environment. If the universe is pre-selected without rational justification or scientific explanation, then it clearly reeks of injustice to shortlist only 20 percent (SWS’s 40) or 40 percent (Pulse Asia’s 74) of the 184 senatorial bets.

Surveys could be said to be unreliable despite the two major survey firms’ self-serving claims of sound methodology and random samples reasonably representative of the whole universe. Indeed, nothing else could explain how the two pollsters exhibited “striking” variances in their rankings of the top 12 senatorial bets.

Recall how actor Robin Padilla topped the senatorial race in the May 2022 elections, much to the surprise of political analyst Ranjit Rye, who is affiliated with OCTA Research, which had conducted a pre-election poll. The actor was not number one in that poll, though he was in the top 10; what OCTA predicted was either Loren Legarda or Raffy Tulfo as the “top pick in the senatorial race.”

Opinion survey statistics can lie, proof that Huff was right. An outlier can still rise to the top of the actual results.

Surveys have so powerful an influence on voting patterns that they may well be banned throughout the pre-election period for their mind-conditioning, bandwagon effect and class patronage. Clearly, our Supreme Court never thought of statistics as the “dismal science” at all.

What incontestably compelling norm governs the shortlisting of senatorial candidates to be overranked or underranked in election polling? In time, scholars can invoke the studies already done on the “existential threats” posed by poll surveys.

Whether poll surveys are biased or not, laboratory experiments show that, when presented to either informed or uninformed populations, they manifest some pre-deterministic Darwinian effect.