Inside horror Facebook bug that led to MORE dangerous posts being shown to users for 6 MONTHS

A FACEBOOK bug led to the platform mistakenly showing users more harmful content for six months.

According to The Verge, content identified as misleading or problematic was prioritized in users' feeds when it should have been hidden.

Internal documents show that the software bug was identified by engineers and took half a year to fix.

Facebook disputed the report, which was published Thursday, saying that it "vastly overstated what this bug was."

The glitch ultimately had "no meaningful, long-term impact on problematic content," according to Joe Osborne, a spokesman for parent company Meta.

But it was serious enough for a group of Facebook employees to draft an internal report referring to a "massive ranking failure" of content.

In October, the employees noticed that some content that had been marked as questionable was nevertheless being favoured by the algorithm to be widely distributed in users' News Feeds.

The content had been flagged by external media outlets that are members of Facebook's third-party fact-checking program.

"Unable to find the root cause, the engineers watched the surge subside a few weeks later and then flare up repeatedly until the ranking issue was fixed on March 11," The Verge reported.

But according to Osborne, the bug affected "only a very small number of views" of content.

That's because "the overwhelming majority of posts in Feed are not eligible to be down-ranked in the first place," Osborne explained.

He added that other mechanisms designed to limit views of "harmful" content remained in place, "including other demotions, fact-checking labels and violating content removals."

Facebook's fact-checking program launched in 2018 and aims to identify content that is harmful and misleading.

Under the program, Facebook pays to use fact checks from around 80 organisations, including media outlets and specialized fact-checkers, on its platform, WhatsApp and on Instagram.

Content rated "false" is downgraded in news feeds so fewer people will see it.

If someone tries to share that post, they are presented with an article explaining why it is misleading.

Those who still choose to share the post receive a notification with a link to the article. No posts are taken down.

Fact-checkers are free to choose how and what they wish to investigate.
