The polls were off in Australia’s election – but it’s the uniformity that has experts really asking questions

TruthLens AI Suggested Headline:

"Experts Question Uniformity of Polling Results in Australian Elections"

AI Analysis Average Score: 8.2
These scores (0-10 scale) are generated by TruthLens AI's analysis, assessing the article's objectivity, accuracy, and transparency. Higher scores indicate better alignment with journalistic standards.

TruthLens AI Summary

In the recent Australian election, while pollsters accurately indicated that the Labor party had the most support, they significantly underestimated Labor's actual vote on both primary and two-party-preferred measures. Experts have raised concerns about the uniformity of the polling results across multiple organizations, noting that all polls showed the same pattern: exaggerating support for the Coalition and One Nation while underestimating Labor. Murray Goot, an emeritus professor of politics, highlighted that this consistency raises questions about the reliability of polling methodologies, especially since ten different polls conducted around the same time yielded similar inaccuracies. Although the final election results fell within the margin of error for some polls, the collective miscalculation points to a systemic issue in polling practices rather than isolated errors.

Pollsters have cited the unprecedented number of votes for third parties and a significant proportion of undecided voters as challenges that complicated their predictions. Some analysts, like Peter Lewis from Essential, noted that their methodology, which included undecided voters, helped capture the momentum towards Labor even if it did not fully predict the extent of the victory. Concerns about 'herding', where pollsters may adjust their results to align with competitors, have surfaced, suggesting a reluctance to deviate from established trends. Even after the polling failure of the 2019 election, the lack of transparency surrounding polling methodologies remains a critical issue. Goot pointed out that modern polling techniques often rely on non-random samples, which may not accurately represent the broader population, leading to skewed results. Without improvements in transparency and methodology, the future reliability of polling in Australia remains uncertain.

TruthLens AI Analysis

The article provides insights into the discrepancies in polling data during Australia's recent election. While the polls accurately indicated that the Labor party had a lead, they failed to capture the actual extent of that support. This uniformity across various polls raises significant questions among experts regarding the reliability and methodology of polling organizations.

Concerns About Uniformity in Polling Data

Experts like Murray Goot point out that the uniformity of the polls – all underestimated Labor and overestimated the Coalition – suggests potential flaws in polling methodologies. This consistent bias in the results, even where individual polls fell within their stated margins of error, indicates a systemic issue that could mislead public perception and understanding of voter sentiment.

Challenges Faced by Pollsters

Some pollsters have acknowledged that the presence of a record number of third-party votes and undecided voters complicated their predictions. Despite these challenges, they maintain that the overall trends were captured correctly. This highlights a possible disconnect between the methodology used and the dynamic political landscape, suggesting that traditional polling methods may need reevaluation.

Public Perception and Political Implications

The article hints at a broader narrative where public confidence in polling could be undermined due to consistent inaccuracies. Such a situation could influence voter turnout and trust in political institutions. The implications of this could extend beyond just election outcomes, affecting the overall political climate in Australia.

Potential Manipulative Elements

While the article does not overtly manipulate facts, the focus on the uniformity of polling results can create a narrative that questions the integrity of polling organizations. This could lead to skepticism among voters regarding the validity of future polls. Such skepticism can be detrimental, particularly when elections are approaching, as it may foster disillusionment with the democratic process.

Reliability of Information

The article presents a combination of factual reporting and expert opinion, which lends it a degree of credibility. However, the emphasis on the uniformity of polls and the potential for systematic bias raises concerns about the objectivity of polling practices. This indicates that while the information is grounded in reality, it also serves to provoke thought about the reliability of polling data.

The intention behind publishing this article seems to be to highlight and question the reliability of polling data in the context of the recent election, encouraging readers to critically evaluate the information they receive from these sources.

Unanalyzed Article Content

Pollsters correctly called that Labor had the most support going into Saturday’s election, but all the polls also underestimated Labor on both primary and two-party preferred measures.

While the final results are within the stated margin of error for some of the polls, experts are worried about something else: across all of the polls, the results are too uniform.

“They all exaggerated the Coalition. They all underestimated Labor. They all exaggerated One Nation and so on. All of them,” says Murray Goot, an emeritus professor of politics at Macquarie University.

“And there are 10 of these polls. Roughly [taken] at the same time.”

A simple average of the final polls had Labor's two-party-preferred vote at 52.3% and its primary vote at 31.6%. On the current count, these are smaller misses than in the 2019 election, when pollsters incorrectly showed Labor ahead.

But all of the polls this election were wrong in the same direction.

Several pollsters have told Guardian Australia the record number of votes for third parties and the record level of soft and undecided voters made their jobs more difficult – but say most were not too far off the mark in the end.

“Essential, along with most polls, accurately picked up that the trend was moving towards Labor during the course of the campaign,” says Peter Lewis, the executive director of Essential, who runs Guardian Australia’s Essential poll.

“Additionally our methodology of including undecideds means that with the final 4.8% that declared ‘unsure’ the week before the poll leaning Labor, the polling captured the momentum, if not the final ‘landslide’.”


The RedBridge director, Kos Samaras, says there was “a lot of heavy backgrounding” that the public polls were wrong. RedBridge’s final poll had Labor at 53% to 47%, and it was showing Labor was doing well in key seats and in Queensland.

“We were recording pretty big numbers for Labor and I thought maybe that was an aberration,” he says.

“Clearly it was not.”

There should be some “noise” in polling, with estimates bouncing around. A month before election day, some on social media were already questioning whether the polls were too stable. Election watchers raised similar concerns in 2019.

Adrian Beaumont, an election analyst at the Conversation, suspects “herding”, when pollsters consciously or unconsciously adjust their results to match those published by their competitors, so they won’t be singled out if they are wrong.

Guardian Australia is not accusing any of the pollsters of herding in this campaign.

In the 2019 election, there was such a small spread among the polls that the Nobel prize-winning astrophysicist Brian Schmidt calculated the odds of it being chance at greater than 100,000 to 1.
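The article does not describe Schmidt's actual calculation, but the underlying idea can be illustrated with a short Monte Carlo sketch: simulate independent polls subject only to sampling noise, and ask how often their spread would be as tight as what was published. Every parameter below (the "true" vote share, the sample size, the observed spread) is a hypothetical placeholder, not a figure from 2019:

```python
import numpy as np

rng = np.random.default_rng(42)

true_share = 0.515      # hypothetical "true" two-party-preferred share
n_polls = 10            # number of independent final polls
sample_size = 1500      # assumed respondents per poll
observed_spread = 0.01  # hypothetical max-minus-min spread across polls

# Simulate many "elections": each draws n_polls independent samples and
# records the spread between the highest and lowest reported share.
n_sims = 100_000
shares = rng.binomial(sample_size, true_share,
                      size=(n_sims, n_polls)) / sample_size
spreads = shares.max(axis=1) - shares.min(axis=1)

p = (spreads <= observed_spread).mean()
print(f"P(spread <= {observed_spread:.0%} by chance) ~= {p:.6f}")
# A vanishingly small probability is the statistical signature of herding:
# the polls agree more closely than independent sampling would allow.
```

With a sample of 1,500, a single poll's share has a standard error of roughly 1.3 points, so ten independent polls should span several points; a 1-point spread is far too tight to arise by chance, which is the shape of Schmidt's argument.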

“The polls were afraid of showing a Labor victory by a landslide margin,” Beaumont says of the polls this year. “That’s why they were out – the polls understated Labor’s vote.

“If you go back a couple of weeks, Roy Morgan had Labor winning 55.5% of the two-party vote. But then in the week before the election they came back down to 53%. They stayed at 53 in the final poll, which was published on the Friday before the election. If they hadn’t herded they may well have been the most accurate of pollsters.”

In response to questions, Roy Morgan says there were “no changes in sampling and methodology” over the final weeks of the campaign, except to stop survey respondents nominating candidates who were not running in their electorates after the final candidate list was announced.

“This is standard practice for pollsters during election campaigns,” Roy Morgan’s poll manager, Julian McCrann, says. “If there was any ‘herding’ it was towards us – we led the pack and picked up the swing to the ALP well before any other pollster.”

McCrann also points out that several projections now show the final result will be about 54-46, “which is closer to a 53-47 result [final published Roy Morgan Poll] than a 55.5-44.5 result”.

Goot dismisses the idea there could have been a “late swing” towards the Labor party in the days after polls were collecting data. There were five polls conducted within a day or two before the election, he says, and they were no more accurate than earlier polls.


“I can’t speak for other pollsters, but as far as Essential is concerned there is no herding by us,” says Lewis.

“We ran double samples through the campaign but stuck to the methodology which we disclose through the Australian Polling Council.”

There’s a lot of art to polling, including making assumptions about how preferences will flow and choosing how to “weight” survey samples so that they match the population at large.

Throughout the campaign, some election analysts showed these methodological assumptions can have huge impacts, by recalculating published polls using preference flows from the last election rather than respondents’ stated preferences.

“There was a fair discrepancy [among polls] depending on what you did,” says Goot.

“In one case it was a difference of at least two percentage points between going with one [method] or another.”

But there is no consensus on which method is more accurate.
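The discrepancy Goot describes comes down to which preference flows a pollster assumes. A minimal sketch (all figures invented for illustration; they are not any pollster's published numbers) shows how the same primary votes can yield different two-party-preferred estimates under the two methods:

```python
# Hypothetical primary votes summing to 1.0.
primaries = {"Labor": 0.32, "Coalition": 0.34, "Greens": 0.13,
             "One Nation": 0.08, "Other": 0.13}

# Assumed share of each minor party's preferences flowing to Labor,
# under the two methods the article contrasts (both sets invented).
flows_last_election = {"Greens": 0.85, "One Nation": 0.35, "Other": 0.50}
flows_respondent_stated = {"Greens": 0.80, "One Nation": 0.30, "Other": 0.45}

def labor_2pp(primaries, flows_to_labor):
    """Allocate minor-party primaries to the majors via assumed flows."""
    return primaries["Labor"] + sum(
        primaries[party] * share for party, share in flows_to_labor.items()
    )

for name, flows in [("last-election flows", flows_last_election),
                    ("respondent-stated flows", flows_respondent_stated)]:
    print(f"{name}: Labor 2PP = {labor_2pp(primaries, flows):.1%}")
```

On these invented numbers the two methods land roughly two points apart on identical primaries, the scale of difference Goot mentions.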

Goot thinks herding was possible in this election – but “without transparency it’s difficult to know”. Either way, he says, the industry has more fundamental problems – such as whether survey samples cover everyone who should be included, and how pollsters handled “nonresponse” groups unable or unwilling to participate.

Modern polling samples aren’t randomly drawn from the population. Rather, companies get access to “panels” of people from online databases. These databases are put together from a variety of sources, including loyalty programs, but we know very little about them.

“What we suspect is they contain a very small percentage of all possible people that could be in it, and that should be in it,” says Goot.

“[Pollsters] all say that they’ve got the best selection to draw on, but one possibility is that most of them go to the same source, and that doesn’t have all that many people in it. Some of the people answering the polls may be in more than one [poll].”

Lewis says sourcing samples is a “challenge”, but that Essential’s outreach team “work hard to minimise the need for weighting”.

McCrann says Roy Morgan interviews about 1,500 Australians each week. “And that is via multi-mode interviewing including online, telephone and face-to-face interviewing and we aren’t using the same databases of any rival pollster,” he says.

While polling companies request a spread of genders, ages and locations for their panels, not everyone invited responds, so pollsters must proportionately scale up or down the groups that do – a process known as weighting.

But weighting relies on the assumption that those who do and don’t respond to surveys are roughly similar. If this isn’t true, or the number of responses is very small, it can introduce other issues.

“If, for example, the young people who respond are going to vote Green in reasonably large numbers, and the young people that don’t respond are going to vote Green in much smaller numbers, then if you weight you’re going to exaggerate the Green vote,” Goot says.
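Goot's scenario can be made concrete with a toy weighting calculation. In this sketch (every number invented), young people respond at low rates, so their answers are weighted up; because the young people who do respond lean Green more heavily than those who don't, the weighted estimate overshoots the true rate:

```python
# Population age profile vs the sample actually achieved (all invented).
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
sample_counts = {"18-34": 150, "35-54": 400, "55+": 450}
n = sum(sample_counts.values())

# Post-stratification weight per group: population share / achieved share.
# Under-represented young respondents get a weight of 2.0 here.
weights = {g: population_share[g] / (sample_counts[g] / n)
           for g in sample_counts}

# Hypothetical Green support among those who DID respond in each group.
green_among_responders = {"18-34": 0.35, "35-54": 0.10, "55+": 0.05}

# Weighted estimate: responders stand in for their whole age group.
weighted_est = sum(weights[g] * sample_counts[g] * green_among_responders[g]
                   for g in sample_counts) / n

# But suppose young NON-responders vote Green at only 20%. The true
# population rate is then lower than the weighted estimate suggests.
true_rate = 0.30 * 0.20 + 0.35 * 0.10 + 0.35 * 0.05

print(f"weighted estimate: {weighted_est:.1%}")  # ~15.8%
print(f"actual rate:       {true_rate:.1%}")     # ~11.2%
```

Weighting corrects who is counted, but it cannot correct for responders differing from non-responders within the same group, which is exactly the failure mode Goot describes.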

Even after the 2019 election polling failure, there’s little transparency – but Goot believes there was a “good step” towards it with the polling council, formally established in 2020.

Still, he notes, “not all the pollsters are members. And members don’t have to disclose very much.

“They have to tell us what factors they weight by, but not how they do this. They have to put up their questions. They don’t tell us anything much about sampling, response rates or any of the other things that can go wrong with the sample.”

Source: The Guardian