Evan Harvey
Professor Green
7/30/2018
Engl-400
The Toxic Environment of Facebook – Proposal
Social media is an evolving space often seen as a digital public forum where anyone's opinion can be heard and people can stay connected to whomever they want. For the most part, social media can be a positive experience for people who want to keep up with friends, see the news in a convenient place, or check up on how their favorite celebrities are doing. There is, however, a dark side of social media, one mired in hate, scams, and controversy that needs to be addressed. Laws and regulations have not yet caught up with how to handle social media, so it falls upon the companies themselves to control their sites and their users. Facebook is one of the biggest places people congregate, so starting there can set an example for other social media sites to follow. Facebook should increase its regulation of what comments and posts are allowed to be made and expand its programs for dealing with fake accounts in order to help stop the spread of hate and bigotry on its site.
It is very apparent that in today's society it is easier than ever to get your opinions heard, but this has also had the unsavory effect of allowing hate to spread as well. One does not have to look any further than the comment section of a news article on Facebook to get a glimpse of how bad it can be. An example can be seen in a recent CNN post about a new Barbie doll inspired by an Olympic fencer who wears a hijab. One of the comments reads, "Each doll comes complete with hijab and sword, bomb pack sold separately. Free copy of the Koran with each purchase" (Donailson). This type of message is not only overwhelmingly offensive but also sparked a great deal of controversy in the replies. How can posts like these be made and yet nothing be done about them? One could argue that this is simply the First Amendment at work, but this is a pure attack on an entire group of people; it adds nothing to the conversation and instead resembles the behavior of a schoolyard bully. This is only one comment out of many: you can check major news pages like Fox and CNN and see people hurling ridiculous verbal attacks from one side to the other, making the public even more divided than it already is. While these tirades can come from regular people, there are some accounts whose sole purpose is to spread hate, and they are not even real people.
One important factor that has helped fuel the recent controversies on many social media platforms is fake accounts. Fake accounts are just what they sound like: accounts with no real identity behind them, usually created to fulfill a single purpose, whether that is to scam people or simply to cause as much chaos as possible. This was most apparent during the 2016 election season between Donald Trump and Hillary Clinton, when operatives from countries like Russia flooded social media to denounce the Democratic Party and stir up as much controversy as possible between the two sides, something they are still actively doing to this day. A recent article from Recode points out that "Facebook disabled nearly 1.3 billion 'fake' accounts over the past two quarters . . . a reminder of what Facebook is up against just 18 months ago after it was learned that a Russian troll farm used Facebook to try and influence the 2016 presidential election" (Wagner and Molla). This is a significant example of how social media can go wrong when a foreign nation can directly influence the results of an election without even resorting to complicated tactics. The idea is simple, and it works: many individuals see these comments and react in different ways, those reactions conflict with one another, and a full-blown argument breaks out in the comment section. It is scary to think that something that is not even real can have so much influence on current events, and while Facebook does try to deal with these fake accounts, it is not doing enough. According to an article from the New York Times, "Despite months of talk about the problem of fraud facing Facebook and other tech companies, and vows to root it out, their sites remain infected by obvious counterfeits" (Shane and Isaac). If we are going to create a friendlier environment that unifies people instead of dividing them, then this issue needs to be addressed.
How these issues get solved ultimately comes down to how much effort Facebook is willing to put into creating a better platform. Starting with the first problem, the general spread of hate by individual users, there need to be stronger regulations, better efforts to spread awareness, and an overhauled reporting system. Facebook should clearly advertise what kind of content is and is not appropriate to share, which could mean taking some of its ad space and using it for tips about what people should consider before putting a post up for everyone to see. On the regulation side, it needs to be clearly defined what is not acceptable to post on the platform, with comments like the one about the hijab doll coming with a bomb serving as examples. The report system also needs a major overhaul because it is no longer serving its original purpose. An article from The Verge gives an example of how the system can be abused: "The strategy is simple – rack up enough abuse reports to knock the site off Facebook, effectively cutting it off from its audience," a tactic that was used to target journalists in Vietnam (Brandom). This is not what the report button was meant for; instead of dealing with these terrible attacks, it is being used to silence journalists who are trying to get the truth out.
As for dealing with fake accounts, the only real solution is a facial recognition system that actually works. Facebook already has such a system in place, but as the Chicago Tribune describes, "after The Post presented Facebook with a list of numerous fake accounts, the company revealed that its system is much less effective than previously advertised: The tool looks only for impostors within a user's circle of friends and friends of friends – not the site's 2-billion-user network, where the vast majority of doppelganger accounts are likely born" (Harwell). The quote makes a very important point: the facial recognition system only works within a person's circle of friends, and there is no reason a fake account impersonating someone would ever friend the original owner of that identity. In that sense, the system does effectively nothing to counter the fake accounts that are causing so much controversy, which is why it needs to be expanded to cover all of Facebook, not just part of it. Of course, this would raise serious privacy concerns, so it should primarily be an opt-in system that users can agree to in order to make sure no one is impersonating them. This will not catch all of the fake accounts, since not everyone will opt in, but any progress on this front helps.
With all of this, there is one glaring objection people could raise to targeting fake accounts and vicious comments: freedom of speech. Freedom of speech allows people to voice whatever opinion they want, even the opinions of hate groups, and it has been debated for a significant amount of time. While the Supreme Court ruled in Packingham v. North Carolina that the government cannot bar people from accessing social media, there is an important distinction to be made here. Facebook still has the power to regulate pages and comments because, at the end of the day, it is a private corporation. The company even says it has this power, as noted by the New York Times: "But social media sites are not bound by the First Amendment to protect user speech. Facebook's mission statement says as much, with its commitment to 'remove bad actors and their content quickly to keep a positive and safe environment'" (Kaminski and Klonick). Facebook has every right to take down speech that amounts to blatant attacks on others or comments intended to sow chaos. As for fake accounts, those are not allowed in the first place under its terms of service. There is a legitimate fear in letting Facebook decide what is and is not hate speech, but for the most part it is better to do something than nothing at all.
Fake accounts and hate speech have been spreading across Facebook and other social media sites, and the problem requires more attention than ever before. With the popularization of social media, today's society has become one of the most divided in recent memory, and these destructive forces of fake accounts and hateful posts are being used to drive that divide even deeper. We are supposed to be a proud nation, but we have slowly devolved into something else. While this will not fix the issue of hate by a long shot, doing something is better than nothing, and more regulation of what can be shown on social media is a good start.
Bibliography
Brandom, Russell. "Facebook's Report Abuse button has become a tool of global oppression." The Verge, https://www.theverge.com/2014/9/2/6083647/facebook-s-report-abuse-button-has-become-a-tool-of-global-oppression. Accessed 30 July 2018.
Donailson, Carlos. Comment on "For the first time ever, Barbie is wearing a hijab. The new doll is modeled after Olympic fencer Ibtihaj Muhammad and is part of broader effort by Mattel to diversify the Barbie line." CNN, 30 July 2018, 4:30 p.m., https://www.facebook.com/pg/cnn/posts/?ref=page_internal.
Harwell, Drew. "Facebook crackdown on fake accounts isn't solving the problem for everyone." Chicago Tribune, http://www.chicagotribune.com/news/nationworld/ct-facebook-fake-accounts-20180504-story.html. Accessed 30 July 2018.
Kaminski, Margot, and Kate Klonick. "Facebook, Free Expression and the Power of a Leak." The New York Times, https://www.nytimes.com/2017/06/27/opinion/facebook-first-amendment-leaks-free-speech.html. Accessed 2 August 2018.
Shane, Scott, and Mike Isaac. "Facebook Says It's Policing Fake Accounts. But They're Still Easy to Spot." The New York Times, https://www.nytimes.com/2017/11/03/technology/facebook-fake-accounts.html. Accessed 30 July 2018.
Wagner, Kurt, and Rani Molla. "Facebook has disabled almost 1.3 billion fake accounts over the past six months." Recode, https://www.recode.net/2018/5/15/17349790/facebook-mark-zuckerberg-fake-accounts-content-policy-update. Accessed 30 July 2018.