I saw someone making bigoted and homophobic remarks about the first openly gay U.S. governor, Jared Polis, on (of all things) a Facebook thread about Britney Spears finally getting her conservatorship ended. After reading her hateful message to Gov. Polis about being married to another man, I told the person I wondered exactly how ugly a human being she is, and I flagged her post as hate speech. Facebook later confirmed that her comment was hate speech and removed it.
But Facebook also said I was harassing and bullying her by asking her this question and put me in Facebook Jail for 24 hours.
There was no appeals process or opportunity to respond before the deprivation commenced. This is a common experience for many of us on social media, likely many times over. And it is almost certainly because these decisions are made by algorithms that do not understand the moral context of the statement ("ugly human being"). Facebook has likely recoded its moderation to combat worsening eating disorders and depression (especially after the recent internal research leaks about teen girls on Instagram who struggle with self-image), and this kind of over-correction is the result.
Almost every one of these online policy decisions has a significant annual negative impact on commerce, because many social media users also rely on these platforms for advertising and sales. And yet responses to such matters are almost all determined (and sanctions imposed) by an automated process long before a person reviews the matter in question. Who wrote the Facebook Constitution, anyway? Why shouldn't there be a Social Media Bill of Rights that includes sufficient due process from actual human beings employed by major online platforms?