As you surely already know, the opinions and attitudes of Facebook users vary widely.
After all, with 1.5 billion users, there are bound to be people expressing opinions that don’t exactly match those of other users.
Unfortunately, some of the views expressed in the posts, memes and videos can be quite offensive to some users.
But at what point does a post cross the line into what is commonly referred to nowadays as “hate speech”?
Well, Facebook has come up with a plan to find out, and they want you to help!
Starting today, some users are now seeing this question on every post that shows up in their newsfeed:
Does this post contain hate speech?
The reader can answer the question by clicking either “Yes” or “No”.
Apparently the actual content of a post has nothing to do with whether the question is asked. If you happen to be one of the users selected to answer it, you’ll see it on every single post that lands in your newsfeed.
I’m willing to give Facebook the benefit of the doubt and assume their intentions here are probably noble. I’m sure they’re just trying to find out which users are posting hateful or disturbing content on the Facebook website so they can put a stop to it.
But that being said, I can see a huge, glaring problem with this approach…
It would be very easy for someone to message all their friends and ask them to “mark” all the posts from a particular person as “hate speech” if that person tends to post things they don’t agree with.
Take politics, for example. If someone regularly puts up political posts that don’t exactly align with another user’s opinions, it would be very easy for that user to rally his/her friends to mark all of the “offending” person’s posts as “hate speech”.
Of course that shouldn’t really be a problem if a flagged account gets reviewed by a human. The reviewer should be able to quickly determine that all those “hate speech” flags are false.
But what if the reviewer himself also happens to disagree with the opinions stated in a flagged account’s posts? I’d like to think that everyone Facebook hires to review flagged accounts would do it honestly, but what if one of them decides to become an activist instead?
And that brings me to another potential problem: How can any individual accurately determine what is and what isn’t “hate speech”?
Like “great food” or “bad food”, the label that gets applied will be determined by the tastes (or in this case opinions) of the person doing the review.
Even worse, what if Facebook decides to rely on an algorithm instead of a human reviewer to determine the fates of flagged user accounts?
If the percentage of a user’s posts that get marked as hate speech crosses some numerical threshold, that user could either have their posting privileges suspended or be booted off of Facebook altogether.
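To see how that could go wrong, here’s a minimal sketch of what a naive threshold rule might look like. To be clear, this is purely hypothetical: the 30% cutoff, the `should_suspend` function, and every number below are made up for illustration, not anything Facebook has published.

```python
# Hypothetical sketch of a naive threshold-based moderation rule.
# Nothing here reflects Facebook's actual system; the cutoff and
# the numbers are invented purely to illustrate the concern.

HATE_SPEECH_THRESHOLD = 0.30  # made-up value: act if >30% of posts are flagged

def should_suspend(posts_flagged: int, total_posts: int) -> bool:
    """Return True if the share of flagged posts crosses the threshold."""
    if total_posts == 0:
        return False
    return posts_flagged / total_posts > HATE_SPEECH_THRESHOLD

# An ordinary user: 2 of 50 posts flagged -> 4%, no action taken.
print(should_suspend(posts_flagged=2, total_posts=50))   # False

# The same user after a coordinated "flag everything" campaign:
# 40 of 50 posts flagged -> 80%, suspended automatically.
print(should_suspend(posts_flagged=40, total_posts=50))  # True
```

Notice that nothing in a rule like that distinguishes honest flags from a coordinated campaign, which is exactly the loophole I described above.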
It all boils down to this: Two perfectly reasonable people could easily see the same post and have very different opinions about whether it contains “hate speech” or not. See the problem there?
This new “Does this post contain hate speech?” prompt appears to be a test, since only a relatively small number of users are seeing it thus far.
If so, I hope it’s a short-lived test because I seriously doubt that it’s going to work very well for either Facebook or their users.