The BBC was among the groups to complain about the feature, after an alert generated by Apple’s AI falsely told some readers that Luigi Mangione, the man accused of killing UnitedHealthcare CEO Brian Thompson, had shot himself.
The feature had also inaccurately summarised headlines from Sky News, the New York Times and the Washington Post, according to reports from journalists and others on social media.
“There is a huge imperative [for tech firms] to be the first one to release new features,” said Jonathan Bright, head of AI for public services at the Alan Turing Institute.
Hallucinations – where an AI model makes things up – are a “real concern,” he added, “and as yet firms don’t have a way of systematically guaranteeing that AI models will never hallucinate, apart from human oversight.
“As well as misinforming the public, such hallucinations have the potential to further damage trust in the news media,” he said.
Media outlets and press groups had pushed the company to pull back, warning that the feature was not ready and that AI-generated errors were adding to issues of misinformation and falling trust in news.
The BBC complained to Apple in December, but the company did not respond until January, when it promised a software update that would clarify the role of AI in creating the summaries. The summaries were optional and available only to readers with the latest iPhones.
That prompted a further wave of criticism that the tech giant was not going far enough.