Sandberg stated, "In the immediate aftermath, we took down the alleged terrorist's Facebook and Instagram accounts, removed the video of the attack, and used artificial intelligence to proactively find and prevent related videos from being posted."
She continued, "We have heard feedback that we must do more — and we agree. In the wake of the terrorist attack, we are taking three steps: strengthening the rules for using Facebook Live, taking further steps to address hate on our platforms, and supporting the New Zealand community."
But What Will Happen Next?
We now know that Facebook is working with the governments of Australia and New Zealand to make Facebook Live more secure, but removing the Live feature entirely does not appear to be under consideration.
Rather, Facebook is exploring other avenues to keep the feature while implementing restrictions to regulate it and prevent crime/hate from being shared on the platform.
Here are the action items rolled out so far:
Restrictions for Select Users
While there are no specific details on the exact criteria for who would lose access to the Live feature, we do have a teaser from Sandberg, who stated,
“We are exploring restrictions on who can go Live depending on factors such as prior Community Standard violations."
We can infer that Facebook will restrict the Live feature for select users based on their past behavior on the platform.
While this can’t guarantee that other violent crimes won’t be streamed live on social media, it is a step in the right direction.
Improved AI Identification Tools
While many wish it had been faster, Facebook was able to remove the footage of the Christchurch attack in New Zealand in 17 minutes.
This removal involved more than stopping the live stream, as the video was quickly re-shared and spread across the platform. Facebook stopped the stream and identified over 900 different videos that displayed the violent crime.
Facebook has communicated that it will continue to use AI identification tools to improve its response time in removing harmful content from its platforms.
"While the original New Zealand attack video was shared Live, we know that this video spread mainly through people re-sharing it and re-editing it to make it harder for our systems to block it; we have identified more than 900 different videos showing portions of those horrifying 17 minutes.
People with bad intentions will always try to get around our security measures. That’s why we must work to continually stay ahead. In the past week, we have also made changes to our review process to help us improve our response time to videos like this in the future."
Banning White Nationalism and White Separatism
In the past, Facebook has banned racially based hate speech, and it is now taking this one step further by placing a ban specifically on white nationalism and white separatism.
Facebook released a letter Standing Against Hate stating, “Today we’re announcing a ban on praise, support, and representation of white nationalism and white separatism on Facebook and Instagram, which we’ll start enforcing next week. It’s clear that these concepts are deeply linked to organized hate groups and have no place on our services.”
They plan to take this even further by directing people in hate groups, and those who search for white supremacist content, to Life After Hate, an organization founded by former violent extremists that provides crisis intervention, education, support groups, and outreach.
What This All Means for Marketers
Although a full plan has not been rolled out, we can see that Facebook is accepting its responsibility as a widely used platform and taking multiple steps to ensure it is a safe and hate-free environment.
This is a situation marketers should keep tracking, as the details of how Facebook Live will be regulated are still not crystal clear. There is a possibility that Facebook will impose stricter requirements on businesses; although this is just speculation, it is something to be mindful of.
We will be watching this story closely and providing updates as they become available.