Facebook gets serious about fighting fake news
By AP - Dec 17, 2016 - Last updated at Dec 17, 2016
NEW YORK — Facebook is taking new measures to curb the spread of fake news on its huge and influential social network. It will focus on the "worst of the worst" offenders and partner with outside fact-checkers and news organisations to sort honest news reports from made-up stories that play to people's passions and preconceived notions.
The social network will make it easier for users to report fake news when they see it, which they will be able to do in two steps, not three. If enough people report a story as fake, Facebook will pass it to third-party fact-checking organisations that are part of the non-profit Poynter Institute's International Fact-Checking Network.
Five fact-checking and news organisations are working with Facebook on this: ABC News, The Associated Press, FactCheck.org, Politifact and Snopes. Facebook says this group is likely to expand.
Stories that flunk the fact check will not be removed from Facebook. But they will be publicly flagged as "disputed", which will push them lower in people's news feeds. Users can click on a link to learn why a story was flagged. And if people decide they want to share the story with friends anyway, they can, but they will get another warning.
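As a rough illustration, the reporting-and-flagging flow described above could be sketched as follows. This is a minimal sketch under assumed names, thresholds and data structures; Facebook has not published how its system actually works.

```python
# Hypothetical sketch of the report -> fact-check -> "disputed" flag flow.
# All names, thresholds and structures here are invented for illustration.

REPORT_THRESHOLD = 100       # assumed number of user reports before escalation
DISPUTED_RANK_PENALTY = 0.5  # assumed multiplier that pushes disputed stories down

class Story:
    def __init__(self, url):
        self.url = url
        self.reports = 0
        self.disputed = False

def send_to_fact_checkers(story):
    """Stub: hand the story off to partner fact-checking organisations."""
    print(f"escalating {story.url} to third-party fact-checkers")

def report_fake(story):
    """Record a user report; escalate once enough reports accumulate."""
    story.reports += 1
    if story.reports >= REPORT_THRESHOLD and not story.disputed:
        send_to_fact_checkers(story)

def mark_disputed(story):
    """Called when fact-checkers conclude a story is false; it is flagged, not removed."""
    story.disputed = True

def feed_score(story, base_score):
    """Disputed stories stay in the feed but rank lower."""
    return base_score * DISPUTED_RANK_PENALTY if story.disputed else base_score

def share(story, confirm_anyway=False):
    """Sharing a disputed story triggers an extra warning first."""
    if story.disputed and not confirm_anyway:
        return "warning: this story has been disputed by third-party fact-checkers"
    return "shared"
```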
Why fake news matters
"We do believe that we have an obligation to combat the spread of fake news," said John Hegeman, vice president of product management on news feed, in an interview. But he added that Facebook also takes its role to provide people an open platform seriously, and that it is not the company's place to decide what is true or false.
Fake news stories touch on a broad range of subjects, from unproven cancer cures to celebrity hoaxes and backyard Bigfoot sightings. But fake political stories have drawn outsized attention because of the possibility that they influenced public perceptions and could have swayed the US presidential election.
There have been dangerous real-world consequences. A fake story about a child sex ring at a Washington, DC, pizza joint prompted a man to fire an assault rifle inside the restaurant.
By partnering with respected outside organisations and flagging, rather than removing, fake stories, Facebook is sidestepping some of the biggest concerns experts had raised about it exercising its considerable power in this area. For instance, some worried that Facebook might act as a censor — and not a skillful one, either, being an engineer-led company with little experience making complex media ethics decisions.
"They definitely don't have the expertise," said Robyn Caplan, researcher at Data & Society, a nonprofit research institute funded in part by Microsoft and the National Science Foundation. In an interview before Facebook's announcement, she urged the company to "engage media professionals and organisations that are working on these issues".
Facebook and fake news
Facebook CEO Mark Zuckerberg has said that fake news constitutes less than 1 per cent of what's on Facebook, but critics say that's wildly misleading. For a site with nearly 2 billion users tapping out posts by the millisecond, even 1 per cent is a huge number, especially since the total includes everything that's posted on Facebook — photos, videos and daily updates, in addition to news articles.
In a study released Thursday, the Pew Research Center found that nearly a quarter of Americans say they have shared a made-up news story, either knowingly or unknowingly. Forty-five per cent said that the government, politicians and elected officials bear responsibility for preventing made-up stories from gaining attention. Forty-two per cent put this responsibility on social networking sites and search engines, and a similar percentage on the public itself.
Fake news stories can be quicker to go viral than news stories from traditional sources. That's because they are created for sharing: they are clickable, often inflammatory and play to emotional responses. Mike Caufield, director of blended and networked learning at Washington State University Vancouver, tracked whether real or fake news is more likely to be shared on Facebook.
He compared a made-up story from a fake outlet with articles in local newspapers. The fake story, headlined "FBI Agent Suspected In Hillary Leaks Found Dead In Apparent Murder-Suicide" from the nonexistent Denver Guardian, was shared 1,000 times more than material from the real newspapers.
"To put this in perspective, if you combined the top stories from the Boston Globe, Washington Post, Chicago Tribune, and LA Times, they still had only 5% the viewership of an article from a fake news," he wrote in a blog post .
Facebook is emphasising that it's only going after the most egregious fake news creators and sites, "the clear hoaxes spread by spammers for their own gain", wrote Adam Mosseri, vice president of product for Facebook's news feed, in a blog post Thursday.
Follow the money
The social network's first public step towards fixing the fake-news problem since the election was a statement barring fake-news sites from using its lucrative ad network. But the move was largely rhetorical: Facebook's policies already blocked sites that spread misleading information from its ad network, an automated system that places ads on sites across the Internet.
Now, Facebook says it has also eliminated the ability for spammers to masquerade as real news organisations by spoofing domains. And it says it is weighing a crackdown on publishers of fake news as well.
Depriving scammers of money could be effective.
"Google and Facebook are the single two biggest engines for monetisation," said Susan Bidel, a senior analyst at Forrester Research focusing on digital publishers. "I don't think you are ever going to completely eradicate it. But it could get down to a manageable level."
Facebook will not allow publishers to promote any story flagged as disputed. If this works, users should not see fake news stories in Facebook advertisements.
Robots vs falsehood
Facebook's main approach to problems has been to tackle them by studying its vast troves of user data, with algorithms that can be more effective than humans at certain tasks, and by favouring engineers over editors. Data rules all else at the Menlo Park, California, company.
Beyond the human fact-checkers, Facebook is also using its algorithms to de-emphasise fake news stories. For example, if people are significantly less likely to share an article after they have read it, it's a "really good sign that the article was misleading or not informative in some way", Hegeman said — sort of like when you try a cereal sample at the grocery store, then decide not to buy it.
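A toy version of that read-versus-share signal might look like the sketch below; the function, counts and threshold are assumptions for illustration, not anything Facebook has detailed.

```python
# Toy sketch of the signal Hegeman describes: if people who actually open an
# article become much less likely to share it than people who only saw the
# headline, treat that as a hint the article may be misleading.
# Purely illustrative; names and the 0.5 threshold are assumed.

def looks_misleading(shares_without_reading, impressions,
                     shares_after_reading, reads,
                     drop_threshold=0.5):
    """Return True if the share rate collapses once people have read the article."""
    pre_read_rate = shares_without_reading / max(impressions, 1)
    post_read_rate = shares_after_reading / max(reads, 1)
    if pre_read_rate == 0:
        return False
    return (post_read_rate / pre_read_rate) < drop_threshold

# Example: 8% share on the headline alone, but only 1% share after reading.
print(looks_misleading(80, 1000, 5, 500))  # True: sharing drops sharply after reading
```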
Fake news stories will not disappear from Facebook the way child pornography, spam and other illegal material do. That is not Facebook's goal.
"We believe providing more context can help people decide for themselves what to trust and what to share," Mosseri wrote.