Trial reveals flaws in tech intended to enforce Australian social media ban for under-16s

TruthLens AI Suggested Headline:

"Preliminary Findings Highlight Flaws in Age Verification Technology for Australia's Social Media Ban"

AI Analysis Average Score: 8.0
These scores (0-10 scale) are generated by TruthLens AI's analysis, assessing the article's objectivity, accuracy, and transparency. Higher scores indicate better alignment with journalistic standards.

TruthLens AI Summary

The preliminary findings from a trial assessing technology aimed at enforcing Australia's social media ban for users under 16 indicate significant flaws in the current methods. The operators of the trial, which includes various artificial intelligence tools for analyzing voices and facial features, admitted that the age verification systems are not guaranteed to be effective. The trial revealed that face-scanning technologies had an accuracy rate of only 85% within an 18-month range, raising concerns about the potential misidentification of users. For instance, some teenage participants were mistakenly classified as being in their 20s or 30s. Experts involved in the trial acknowledged these limitations, emphasizing that the best reported accuracy of age estimation was only within a year and a month of the actual age, suggesting that the technology requires careful consideration and design to manage these constraints effectively.

Moreover, the trial's findings highlighted privacy concerns, with some technology providers reportedly seeking to collect excessive personal information. The operators noted that certain systems might allow regulators or law enforcement to trace individuals' actions related to age verification, potentially increasing the risk of privacy breaches. While the trial identified a variety of technological options and commended some for their handling of personal data, it also raised alarms about the disproportionate collection and retention of data by some providers. As the Albanese government prepares to implement the social media ban in December, the trial's outcomes are critical, as the legislation does not specify how enforcement should occur. The final report from the trial, which is expected to detail its findings more comprehensively, will be crucial in shaping the approach to age verification in Australia and addressing concerns about the effectiveness and privacy implications of the proposed systems.

TruthLens AI Analysis


Unanalyzed Article Content

Technology to check a person’s age and ban under 16s from using social media is not “guaranteed to be effective” and face-scanning tools have given incorrect results, concede the operators of an Australian government trial of the scheme.

The tools being trialled – some involving artificial intelligence analysing voices and faces – would be improved through verification of identity documents or connection to digital wallets, those running the scheme have suggested.

The trial also found “concerning evidence” some technology providers were seeking to gather too much personal information.

As “preliminary findings” from the trial of systems meant to underpin the controversial children’s social media ban were made public on Friday, the operators insisted age assurance can work and maintain personal privacy.


The preliminary findings did not detail the types of technology trialled or any data about their results or accuracy. Guardian Australia reported in May that the ACCS said it had only trialled facial age estimation technology at that stage.

One of the experts involved with the trial admitted there were limitations, and that there will be incorrect results for both children and adults.

“The best-in-class reported accuracy of estimation, until this trial’s figures are published, was within one year and one month of the real age on average – so you have to design your approach with that constraint in mind,” Iain Corby, the executive director of the Age Verification Providers Association, told Guardian Australia.

Tony Allen, the project director, said most of the programs had an accuracy of “plus or minus 18 months” regarding age – which he admitted was not “foolproof” but would be helpful in lowering risk.

The Albanese federal government’s plan to ban under 16s from social media, rushed through parliament last year, will come into effect in December.

The government trial of age assurance systems is critical to the scheme. The legislation does not explicitly say how platforms should enforce the law and the government is assessing more than 50 companies whose technologies could help verify that a user is over 16.

The ABC reported on Thursday that teenage children in the trial were identified by some of the software as being aged in their 20s and 30s, and that face-scanning technology was only 85% accurate in picking a user’s age within an 18-month range. But Allen said the trial’s final report would give more detailed data about its findings and the accuracy of the technology tested.

The trial is being run by the Age Check Certification Scheme and testing partner KJR. It was due to present a report to government on the trial’s progress in June but that has been delayed until the end of July. On Friday, the trial published a two-page summary of “preliminary findings” and broad reflections before what it said would be a final report of “hundreds of pages” to the new communications minister, Anika Wells.

The summary said a “plethora of options” were available, with “careful, critical thinking by providers” on privacy and security concerns. It concluded that “age assurance can be done in Australia”.

The summary praised some approaches that it said handled personal data and privacy well. But it also found what it called “concerning evidence” that some providers were seeking to collect too much data.

“Some providers were found to be building tools to enable regulators, law enforcement or coroners to retrace the actions taken by individuals to verify their age, which could lead to increased risk of privacy breaches due to unnecessary and disproportionate collection and retention of data,” it said.


In documents shared to schools taking part in the study, program operators said it would trial technologies including “AI-powered technology such as facial analysis, voice analysis, or analysis of hand movements to estimate a person’s age”, among other methods such as checking forms of ID.

Stakeholders have raised concerns about how children may circumvent the ban by fooling the facial recognition, or by getting older siblings or parents to help them.

Friday’s preliminary findings said various schemes could fit different situations and there was no “single ubiquitous solution that would suit all use cases” nor any one solution “guaranteed to be effective in all deployments”.

The report also said there were “opportunities for technological improvement” in the systems trialled, including making them easier to use and lowering risk.

This could include “blind” verification of government documents, via services such as digital wallets.

Corby said the trial must “manage expectations” about effectiveness of age assurance, saying “the goal should be to stop most underage users, most of the time”.

“You can turn up the effectiveness but that comes at a cost to the majority of adult users, who’d have to prove their age more regularly than they would tolerate,” he said.

Corby said the trial was working on risks of children circumventing the systems and that providers were “already well-placed” to address basic issues such as the use of VPNs and fooling the facial analysis.

Source: The Guardian