I don’t recall my exact search criteria, but I do remember being bombarded by a slew of sites that were firmly against vaccinations. While the validity and authorship of those articles may have been questionable, the sheer volume of results made me second-guess whether or not to vaccinate my son.
With that in mind, upon hearing about Pinterest’s new approach to stopping false information from surfacing in searches on its site, I was stunned and, initially, impressed.
This week, Pinterest told the Wall Street Journal that it began quietly blocking anti-vaccination search terms in late 2018 in an effort to limit the spread of misleading content.
In other words, users can still pin such content, but the company is actively preventing other users from finding it.
Now, it’s important to note that this doesn’t appear to be a political or social move, but rather one made to combat the spread of false information. Considering how much hot water this issue has gotten other social platforms into (e.g., Facebook), it’s a move worth talking about.
Should online platforms take responsibility for the content being posted on their networks?
Facebook and Google claim they are making strides to reduce anti-vaccination fodder (and other false information), but Pinterest is taking an aggressive stance to abolish it from its platform entirely.
Pinterest’s dramatic stance brings to light a growing trend in big-name platforms that have vowed to eradicate false content searches.
While the company hasn’t clearly defined which subjects it considers tied to harmful misinformation, examples where it is taking extra precautions include false claims about 9/11 and purported cures for serious medical illnesses.
But is this monitoring, blocking, and screening done by social media platforms really necessary?
I do not envy the social platforms trying to solve the political and ethical challenges of content regulation, but it is their responsibility to create an environment conducive to both businesses and users.