The content isn’t right, but to the frustration of thousands it isn’t always exactly, technically, legally, pick a comfort adverb, wrong. It can be violent, or graphic, or graphically violent, but if it doesn’t violate the community standards, Facebook moderators allow it to remain visible.
Two months ago, a campaign against skewed enforcement of the community standards succeeded in procuring a response: Facebook promised to do better. I questioned those words, and whether they would turn to action. In some respects, they have. Women, Action, and The Media estimates that 70% of the Pages reported during the #fbrape campaign have been removed. Two months ago, a campaign set out to make changes, and some changes have been made. Some.
One month ago, Facebook announced A New Review Policy For Pages and Groups. Content deemed in poor taste, but not in violation of standing FB policies, will no longer display ads. Now advertisers don’t have to worry about being called to boycott and shown images of bloodied and beaten women next to their brand. Facebook gets to increase profits. And users aren’t distracted by some pesky Summer Clearance Sale when gawking at the suffering of another human being. Problem solved! But, no, not really, not yet.
The only verifiable action taken to date is nothing more than a band-aid. It’s a quick fix, the kind that keeps the Board of Directors and Profiteers happy. It’s a distraction, an excuse.
This is a threesome here: Facebook the Company, Advertisers the Companies, and Users the People. Facebook wants to remain a place of content sharing, and profit sharing. Advertisers want to reach consumers, existing, new, and potential. Users want to share content.
Facebook is still going strong at full operational pace, gaining more users and selling to more advertisers than they are losing, improving and increasing in every positive category. Advertisers are still present, ever-present, and now have the relief of this new page review and ad policy, along with a streamlined approach to ad placement and presentation. Users, well, it’s life on Facebook as it’s always been life on Facebook.
The solution has covered two-thirds of the threesome: Facebook and Advertisers. Hateful content is created and shared and Users are burdened with the responsibility of reporting it. Some action is happening, there’s an expedited process now to report and request removal of especially heinous images, but it’s still not enough. Those images are still freely circulated until someone reports them.
A form of user-initiated protection would go a long way toward addressing a very serious issue. There could be a filter, perhaps. Given the option, I would opt out of seeing images marked as improper but not illegal. I can block users, apps, app invites, and events. But I cannot block content. Let me decide whether I want to open the image, rather than the current model of forcing it in my face.
They’re “working on it”, but in a world of instant gratification, two months without concrete changes is way overdue. They cannot stop stupid. They cannot stop people from creating and attempting to circulate hate speech, but they can mediate it, make it harder to reach people unwilling to innocently provide a page view, and make it more work for those who wish to promote violence and hate.
The voice against the hate fueled a large fire, and Facebook called in their reserves and battled the blaze. Now that the fire is under control, Facebook seems to have grown complacent about giving their users power and control over what they get out of the freely shared content Facebook is so proud of sharing freely.