Health misinformation creates "whack-a-mole" situation for tech platforms

World Health Organization analysis found misinformation in up to 51 per cent of social media posts
This March 18, 2010, file photo shows the YouTube website in Los Angeles. THE CANADIAN PRESS/AP/Richard Vogel

When Dr. Garth Graham thinks about health misinformation on social media platforms, he envisions a garden.

No matter how bountiful or verdant that garden is, even the head of YouTube's global health division admits it's often in need of tending.

"How do you weed and pull out the bad information?" he asked.

"But also: how do you plant the seeds and make sure people have access to good information as well as high quality information?"

For social media companies, these have become perennial questions that have only grown in importance as the number of platforms multiplied and people began spending increasing amounts of time online.

Now, it's not uncommon to spot misinformation with almost every scroll.

A 2022 paper published in the Bulletin of the World Health Organization reviewed 31 studies examining how prevalent misinformation is. The analysis found misinformation in up to 51 per cent of social media posts associated with vaccines, up to 28.8 per cent of content associated with COVID-19, and up to 60 per cent of posts related to pandemics.

An estimated 20 to 30 per cent of YouTube videos about emerging infectious diseases were also found to contain inaccurate or misleading information.

The consequences can be harmful, if not deadly.

Research the Council of Canadian Academies released in 2023 said COVID-19 misinformation alone contributed to more than 2,800 Canadian deaths and at least $300 million in hospital and ICU visits.

Platforms take the risks seriously, Graham said in an interview. "We are always concerned about anything that may produce harm."

That concern often leads platforms to remove anything violating their content policies.

YouTube, for example, has banned content denying the existence of some medical conditions or contradicting health authority guidance on prevention and treatment.

Examples embedded in its medical misinformation policy show the company removes posts promoting turpentine, gasoline and kerosene as a treatment for certain conditions because these substances cause death. Ivermectin, used to treat parasitic worms in animals and humans, and hydroxychloroquine, a malaria drug, are also barred from being promoted as COVID-19 cures.

When it comes to vaccines, YouTube bans videos alleging immunizations cause cancer or paralysis.

Facebook and Instagram parent company Meta Platforms Inc. declined to comment for this story and TikTok did not respond to a request for comment, but in broad strokes, these companies have similar policies to YouTube.

Yet Timothy Caulfield, a University of Alberta professor focused on health law and policy, still spots medical misinformation on platforms. He recently asked his students to search for stem cell content and several posts spreading unproven therapies came up easily.

Still, he sympathizes with some of the challenges tech companies face because he sees conquering health misinformation as a game of "whack-a-mole."

He says there's a nimbleness to spreaders of misinformation, who are often motivated to keep finding ways to circumvent removal policies because their posts can boost profits and brands or spread an ideology.

"They can work around the moderation strategies, but that just shows how we're not going to fix this with one tool," Caulfield said.

"This is going to be an ongoing battle."

In its misinformation policy posted on its website, Meta acknowledges the difficulties, saying "what is true one minute may not be true the next minute."

"People also have different levels of information about the world around them and may believe something is true when it is not," the policy says.

In an attempt to keep up, Meta relies on independent experts to assess how true content is and whether it is likely to directly contribute to imminent harm before it is removed. Third-party fact-checking organizations are also contracted to review and rate the accuracy of its most viral content.

At YouTube, human reviewers, including some who form an "intelligence desk" monitoring posts and news to detect trends that might need to be mitigated, work alongside machine learning programs, which the company says are well suited to detecting patterns in misinformation.

Platforms also put some responsibility on credible health-care practitioners and institutions, highlighting their content to make it easier for users to find trustworthy information.

YouTube, for example, has partnered with organizations including the University Health Network and the Centre for Addiction and Mental Health in Toronto.

CAMH runs a YouTube channel where medical professionals explain everything from schizophrenia to eating disorders. Production funding came from YouTube, but the institutionBԪַs resources were used for script writing and clinical review, CAMH spokeswoman Hayley Clark said in an email.

Graham sees it as a good example of the health-care profession "meeting people where they are," which he said is "how we battle misinformation."

"(Credible information) has to be in the palm of people's hands so that they can have dinner conversations, so when they're sitting down in their couch that they're empowered," he said.

But when it comes to other organizations and doctors, "we can't assume that all of them have the capacity to do this," said Heidi Tworek, an associate professor at the University of British Columbia, whose research focuses on the effects of new media technologies.

These organizations want to get credible information out, but in a cash-strapped, time-pressed health-care industry, there's always another patient to help.

"Some health-care institutions would say, 'OK, we've got X amount of money, we've got to choose what we spend it on. Maybe we want to spend it on something other than communications,'" Tworek said.

In some instances, doctors are also doing it "off the side of their desk" because they think it is valuable, but that subjects them to new risks like online attacks and sometimes even death threats.

"Some people don't want to enter those spaces at all because they see what happens to others," she said.

To better combat medical misinformation, she would like platforms to act more responsibly because she often notices their algorithms push problematic content to the top of social media timelines.

However, she and Caulfield agree health misinformation needs an all-hands-on-deck approach.

"The platforms bear a lot of responsibility. They're becoming like utilities and we know the impact that they have on public discourse, on polarization," Caulfield said.

"But we also need to teach critical thinking skills."

That could begin at school, where students could learn how to identify credible sources and detect when something could be incorrect, lessons he's heard Finland begins teaching in kindergarten.

No matter when or how that education takes place, he said the bottom line is "we need to give citizens the tools to discern what's misinformation."
