How Facebook is trying to reduce the spread of misinformation
Facebook is working to stop misinformation from spreading and to help people identify what’s real by flagging fake content.
While Facebook currently uses third-party fact-checkers as well as machine learning to detect and fact-check fake news, the resulting warnings aren’t prominently displayed on questionable posts or articles.
When Facebook initially released its fake news initiative in 2017, it backfired. The company found that people were more likely to click and share the articles marked as fake. And while that content is tagged, the tag is easy for users to overlook; check out the image below.
Over the next month, Facebook will begin rolling out new labels that make fact-checking more transparent — ensuring people clearly know when something has been fact-checked. If content is marked false, a label will appear on top of the false photo or video, with a link to the fact-checker’s assessment.
This isn’t limited to Facebook. Instagram is also working to reduce and label misinformation shared by its users. For accounts that repeatedly post misinformation, Instagram will filter their content out of its Explore and hashtag pages.
While using third-party fact-checkers is effective — and allows the platform to be proactive — there’s a limit to how well software and machine learning can detect and fact-check fake news.
Introducing this tab is a way for the company to partner with over 200 credible news organizations to bring in the most relevant stories.
This new tab is also a way for Facebook to monitor publishers and ensure the content being shared follows a set of integrity standards. Content that violates those standards includes clickbait, hate speech, spam, and misrepresentation, to name a few.
While all of those tactics are solid initiatives for identifying fake news, there’s still the possibility that fake news will make its way across social media. When it comes to your brand, misinformation can also breed mistrust between companies and their customers.
So, what does this mean for marketers?
Fake news doesn’t affect only politicians. It can have serious consequences for your brand’s image.
Someone from the anonymous online message board 4chan posted tweets advertising “Dreamer Day” in an attempt to persuade undocumented immigrants to visit the coffee chain for free drinks. The social media ads included the company’s logo, graphics, and signature font.
The company had to work quickly to counter and remove the seemingly legitimate advertisements.
Not all fake news is created simply to harm companies; it can also be used for financial gain or to discredit a company or small business.
Although those examples involve billion-dollar organizations, this can happen to any company — even through something as small as a fake negative review or post.
As a marketer, it’s important to ensure your company can combat and mitigate such threats through social media, the press, and even a legal team (when necessary). Just as your company has a cybersecurity or breach-response protocol, there should be a plan in place for tackling malicious threats that surface on social media.
It’s also important to know what is being said about your company — from customers, employees, and your audience on social media. If you’re using HubSpot, one thing you could do is set up streams to monitor social posts that mention your company name.
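HubSpot’s streams handle this monitoring in-app, but the underlying idea — scanning incoming social posts for mentions of your brand name — can be sketched in a few lines of Python. The brand name and sample posts below are hypothetical, and a real monitor would pull posts from a social platform’s API rather than a hard-coded list:

```python
import re

def find_brand_mentions(posts, brand):
    """Return the posts that mention the brand as a whole word, ignoring case."""
    pattern = re.compile(r"\b" + re.escape(brand) + r"\b", re.IGNORECASE)
    return [post for post in posts if pattern.search(post)]

# Hypothetical posts pulled from a social feed
posts = [
    "Loving the new release from AcmeCo!",
    "Totally unrelated post about coffee.",
    "Is this AcmeCo ad real? Looks fake to me.",
]

mentions = find_brand_mentions(posts, "AcmeCo")
print(len(mentions))  # 2
```

Matching on whole words (`\b`) avoids false hits inside longer words, and the case-insensitive flag catches users who don’t capitalize your brand name.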
When mitigating threats to your brand online, be sure to remain professional and transparent. Don’t be afraid to share the issue with your audience. Your followers and customers will appreciate the transparency, and it could even generate positive feedback and reviews.