
Just a sample of the Echomail archive



 Message 20,881 of 20,883 
 Leroy N. Soetoro to All 
 Google AI Overviews put people at risk o 
 04 Jan 26 01:18:10 
 
XPost: comp.ai.philosophy, comp.internet.services.google, sac.politics
XPost: alt.fan.rush-limbaugh, talk.politics.guns
From: leroysoetoro@americans-first.com

People are being put at risk of harm by false and misleading health
information in Google’s artificial intelligence summaries, a Guardian
investigation has found.

The company has said its AI Overviews, which use generative AI to
provide snapshots of essential information about a topic or question,
are “helpful” and “reliable”.

But some of the summaries, which appear at the top of search results,
served up inaccurate health information and put people at risk of harm.

In one case that experts described as “really dangerous”, Google wrongly
advised people with pancreatic cancer to avoid high-fat foods. Experts
said this was the exact opposite of what should be recommended, and may
increase the risk of patients dying from the disease.

In another “alarming” example, the company provided bogus information
about crucial liver function tests, which could leave people with
serious liver disease wrongly thinking they are healthy.

Google searches for answers about women’s cancer tests also provided
“completely wrong” information, which experts said could result in
people dismissing genuine symptoms.

A Google spokesperson said that many of the health examples shared with
them were “incomplete screenshots”, but from what they could assess they
linked “to well-known, reputable sources and recommend seeking out
expert advice”.

The Guardian investigation comes amid growing concern that AI-generated
content can confuse consumers who may assume it is reliable. In November last
year, a study found AI chatbots across a range of platforms gave
inaccurate financial advice, while similar concerns have been raised
about summaries of news stories.

Sophie Randall, director of the Patient Information Forum, which
promotes evidence-based health information to patients, the public and
healthcare professionals, said the examples showed “Google’s AI
Overviews can put inaccurate health information at the top of online
searches, presenting a risk to people’s health”.

Stephanie Parker, the director of digital at Marie Curie, an end-of-life
charity, said: “People turn to the internet in moments of worry and
crisis. If the information they receive is inaccurate or out of context,
it can seriously harm their health.”

The Guardian uncovered several cases of inaccurate health information in
Google’s AI Overviews after a number of health groups, charities and
professionals raised concerns.

Anna Jewell, the director of support, research and influencing at
Pancreatic Cancer UK, said advising patients to avoid high-fat foods was
“completely incorrect”. Doing so “could be really dangerous and
jeopardise a person’s chances of being well enough to have treatment”,
she added.

Jewell said: “The Google AI response suggests that people with
pancreatic cancer avoid high-fat foods and provides a list of examples.
However, if someone followed what the search result told them then they
might not take in enough calories, struggle to put on weight, and be
unable to tolerate either chemotherapy or potentially life-saving
surgery.”

Typing “what is the normal range for liver blood tests” also served up
misleading information, with masses of numbers, little context and no
accounting for nationality, sex, ethnicity or age of patients.

Pamela Healy, the chief executive of the British Liver Trust, said the
AI summaries were alarming. “Many people with liver disease show no
symptoms until the late stages, which is why it’s so important that they
get tested. But what the Google AI Overviews say is ‘normal’ can vary
drastically from what is actually considered normal.

“It’s dangerous because it means some people with serious liver disease
may think they have a normal result then not bother to attend a
follow-up healthcare meeting.”

A search for “vaginal cancer symptoms and tests” listed a pap test as a
test for vaginal cancer, which is incorrect.

Athena Lamnisos, the chief executive of the Eve Appeal cancer charity,
said: “It isn’t a test to detect cancer, and certainly isn’t a test to
detect vaginal cancer – this is completely wrong information. Getting
wrong information like this could potentially lead to someone not
getting vaginal cancer symptoms checked because they had a clear result
at a recent cervical screening.

“We were also worried by the fact that the AI summary changed when we
did the exact same search, coming up with a different response each time
that pulled from different sources. That means that people are getting a
different answer depending on when they search, and that’s not good
enough.”

Lamnisos said she was extremely concerned. “Some of the results we’ve
seen are really worrying and can potentially put women in danger,” she
said.

The Guardian also found Google AI Overviews delivered misleading results
for searches about mental health conditions. “This is a huge concern for
us as a charity,” said Stephen Buckley, the head of information at Mind.

Some of the AI summaries for conditions such as psychosis and eating
disorders offered “very dangerous advice” and were “incorrect, harmful
or could lead people to avoid seeking help”, Buckley said.

Some also missed out important context or nuance, he added. “They may
suggest accessing information from sites that are inappropriate … and we
know that when AI summarises information, it can often reflect existing
biases, stereotypes or stigmatising narratives.”

Google said the vast majority of its AI Overviews were factual and
helpful, and it continuously made quality improvements. The accuracy
rate of AI Overviews was on a par with its other search features like
featured snippets, which had existed for more than a decade, it added.

The company also said that when AI Overviews misinterpreted web content
or missed context, it would take action as appropriate under its
policies.

A Google spokesperson said: “We invest significantly in the quality of
AI Overviews, particularly for topics like health, and the vast majority
provide accurate information.”


--
November 5, 2024 - Congratulations President Donald Trump.  We look
forward to America being great again.

We live in a time where intelligent people are being silenced so that
stupid people won't be offended.

Every day is an IQ test. Some pass, some, not so much.

Thank you for cleaning up the disasters of the 2008-2017, 2020-2024 Obama
/ Biden / Harris fiascos, President Trump.

Under Barack Obama's leadership, the United States of America became
The World According To Garp.  Obama sold out heterosexuals for Hollywood
queer liberal democrat donors.

--- SoupGate-Win32 v1.05
 * Origin: you cannot sedate... all the things you hate (1:229/2)



(c) 1994,  bbs@darkrealms.ca