Whitehall’s ambition to cut costs using AI is fraught with risk

TruthLens AI Suggested Headline:

"UK Government Faces Challenges in Implementing AI Solutions for Public Services"

AI Analysis Average Score: 7.9
These scores (0-10 scale) are generated by TruthLens AI's analysis, assessing the article's objectivity, accuracy, and transparency. Higher scores indicate better alignment with journalistic standards.

TruthLens AI Summary

This week, a Dragons’ Den-style event is taking place where technology companies will pitch their ideas for enhancing automation within the British justice system. The event is part of a broader initiative by the cash-strapped Labour government to use artificial intelligence (AI) and data science to save money while improving public services. Critics have warned that the government may be overly optimistic about AI's capabilities. Nevertheless, the Department of Health and Social Care has announced an AI early warning system aimed at identifying dangerous maternity services following recent scandals, and Health Secretary Wes Streeting has said he wants robotic assistance to be involved in one in eight operations within the next decade. Again and again, the government is relying on technology to address pressing issues that traditionally would have been resolved through increased staffing or funding.

The push for digitization, spearheaded by Prime Minister Keir Starmer and Science and Technology Secretary Peter Kyle, has led to closer partnerships with major U.S. tech firms such as Google and Microsoft. While proponents argue that technology is essential for modernizing public services, implementing AI in sensitive areas like welfare assessment carries significant risks. A recent report from the Ada Lovelace Institute found that a majority of the public (59%) were concerned about AI being used to assess welfare eligibility, and that respondents trusted private companies less than government bodies to take on such responsibilities. The institute has urged a comprehensive review of the influence of technology firms on public policy and called for transparency to ensure that AI solutions prioritize the public good over corporate profit. As the government continues to explore AI's potential, balancing innovation against public trust remains a critical challenge.

Unanalyzed Article Content

A Dragons’ Den-style event this week, where tech companies will have 20 minutes to pitch ideas for increasing automation in the British justice system, is one of numerous examples of how the cash-strapped Labour government hopes artificial intelligence and data science can save money and improve public services.

Amid warnings from critics that Downing Street has been “drinking the Kool-Aid” on AI, the Department of Health and Social Care this week announced an AI early warning system to detect dangerous maternity services after a series of scandals, and Wes Streeting, the health secretary, said he wants one in eight operations to be conducted by a robot within a decade.

AI is being used to prioritise actions on the 25,000 pieces of correspondence the Department for Work and Pensions receives each day and to detect potential fraud and error in benefit claims. Ministers even have access to an AI tool that is supposed to provide a “vibe check” on parliamentary opinion to help them weigh the political risks of policy proposals.

Again and again, ministers are turning to technology to tackle acute crises that in the past might have been dealt with by employing more staff or investing more money.

The push to digitise government, which is led by the prime minister, Keir Starmer, and his science and technology secretary, Peter Kyle, has brought the government into close contact with the biggest US tech companies. Google, Microsoft, Palantir, IBM and Amazon were all in attendance at a Ministry of Justice roundtable discussion last month. Starmer and Kyle are not alone. Countries from Singapore to Estonia have been increasingly embracing AI in public service delivery.

Jeegar Kakkad, a director at the Tony Blair Institute for Global Change – one of the organisations arguing for greater use of technology and which is part-funded by tech firms – put the argument like this: “Our systems are broken. They cannot keep up with demand. You have a couple of choices: keep trying to make a broken system work with traditional approaches – more money, more immigrants to fill the gap in the workforce – or you have to use technology.

“I think the answer is technology, but we have to make sure we have agency in how we design these systems, they are human-designed and we put rules in place.”

Kyle has recently been at pains to stress that the government is doing everything it can to enable big tech firms to thrive in Britain. At London Tech Week last month, he told executives about regulation and planning policies designed to make their businesses run better, saying: “All of this adds up to a government that is on your side.”

When it comes to injecting technology into public services, ministers face a choice: whether to “build or buy”. The temptation is often to issue contracts to private companies to achieve the fastest and greatest impact. For the tech companies, a huge pot of revenue is at stake. The value of UK public sector tech contracts rose to £19.6bn last year, up from £14.4bn in 2019, according to Tussell, which researches government contracts.

But introducing AI and automation into public services is riskier than using it to help drivers navigate busy roads or to recommend songs to music fans. When citizens interact with public services, they are often at their most vulnerable.

For example, last week the Ada Lovelace Institute, an independent research body, found that 59% of the public were concerned about the idea of AI being used to assess welfare eligibility, compared with 39% of people worried about the use of facial recognition technology in policing.

And the public is also showing signs of concern about the motivations of private technology companies. The same polling found the public was significantly less likely to trust private companies to deliver technology that could assess welfare eligibility or predict the risk of cancer than government bodies (although governments are less trusted than academics and not-for-profits).

The institute urged MPs to launch a review of the “role of technology companies and the bodies funded by them in shaping the policy and media narrative on the benefits of public sector AI; and the effectiveness of existing measures that aim to tackle conflicts of interest and ‘revolving door’ dynamics between government and the technology sector”.

It said: “At a time when AI is being offered as a solution to a wide range of public sector problems, the public are concerned about the motivations of private sector involvement. The public expects transparency and that public sector AI prioritises people over profit.”

Source: The Guardian