Apple is facing criticism from the BBC after its new AI-powered iPhone feature, Apple Intelligence, generated a misleading headline about a high-profile murder case in the US.
Launched in the UK earlier this week, Apple Intelligence uses artificial intelligence to summarise and group together notifications for users. However, the system incorrectly summarised a BBC News article, making it appear that Luigi Mangione, the man arrested in connection with the murder of UnitedHealthcare CEO Brian Thompson in New York, had shot himself.
The headline read, "BBC News: Luigi Mangione shoots himself," a claim that was false.
#AppleIntelligence - the new AI feature on Apple phones - effectively made up a BBC News story and broke it as a headline story. The BBC isn't happy about it. pic.twitter.com/JBsueLAimA
— Back the BBC (@back_the_BBC) December 13, 2024
A spokesperson for the BBC confirmed the corporation had contacted Apple to raise concerns and ask that the problem be fixed. "BBC News is the most trusted news media in the world," the spokesperson said, adding that it was essential for audiences to be able to trust the journalism published under the BBC's name.
Despite the error, the rest of the AI-powered summary, which included updates on the overthrow of Bashar al-Assad's regime in Syria and on South Korean President Yoon Suk Yeol, was reportedly accurate.
The BBC is not alone in encountering misrepresented headlines due to the technology.
A similar issue occurred in November when Apple Intelligence grouped three unrelated New York Times articles into a single notification, one of which incorrectly read, "Netanyahu arrested," referencing an International Criminal Court warrant for Israeli Prime Minister Benjamin Netanyahu, rather than an actual arrest.
Apple AI notification summaries continue to be so so so bad
— Ken Schwencke (@schwanksta.com) November 22, 2024 at 12:52 AM
Apple's AI-powered summary system, available on iPhone 16 models and on the iPhone 15 Pro and later devices running iOS 18.1 or higher, is designed to reduce notification overload and help users prioritise important updates. But concerns have been raised about the reliability of the technology, with Professor Petros Iosifidis of City, University of London calling the mistakes "embarrassing" and criticising Apple for rushing the product to market.
This isn't the first time AI-powered systems have produced inaccurate claims. In April, X's AI chatbot Grok was criticised for falsely claiming that Prime Minister Narendra Modi had lost the election before it had even taken place.
Seriously @elonmusk?
PM Modi Ejected from Indian Government. This is the "news" that Grok has generated and "headlined." 100% fake, 100% fantasy.
Does not help @x's play for being a credible alternative news and information sources. @Support @amitmalviya @PMOIndia pic.twitter.com/lIzMSu1VR8
— Sankrant Sanu सानु संक्रान्त ਸੰਕ੍ਰਾਂਤ ਸਾਨੁ (@sankrant) April 17, 2024
Google's AI Overviews tool has also made bizarre recommendations, such as suggesting "non-toxic glue" to stick cheese to pizza and advising people to eat one rock per day.