In Apple’s latest iOS 10 update there’s a new feature that allows you to search GIFs in iMessage. But after realizing that using the search term “butt” resulted in a PG-13 version of a My Little Pony character, the company censored the search feature by hiding said images.
Apple has always firmly opposed anything pornographic appearing in its search results. In fact, the platform is completely porn-free and will not allow any third-party apps to host pornographic images. All that being said, it’s a bit humorous to think that Apple made the mistake of allowing those “butt” images to show up.
Moreover, in case you weren’t aware, Apple censors most search terms related to pornography, and the company has a history of censorship. In 2013, it removed 500px, a photography app, from the App Store because of nude photos being published through it.
Apple is not the only company that censors what you can search and find on its platform. In fact, most major social media platforms, like Twitter, YouTube, Instagram and Facebook, censor us in some form. Whether it’s banning female nipples from photos or rigging an algorithm so a particular hashtag cannot trend, censorship in the social media age is scary, especially considering more than 60% of Americans receive their news on Facebook and Twitter.
Where governments previously had control, it appears the power has shifted into the hands of private social media companies.
Take Twitter, for example. The online social networking service is often criticized for not protecting its users and allowing pretty much anything on its platform. But that openness is also part of why people love it so much.
Twitter is also kind of like a filing cabinet that records and preserves our digital history for us. You can go to Twitter, search a hashtag and find photos, articles and all the opinions you could ever want to hear on a particular moment from (recent) history. That’s pretty cool, when you think about it. But what’s not cool is finding tweets that you never saw, tweets that seemingly disappeared even though the 140 characters were typed.
Twitter has been caught removing tweets, hiding tweets and deleting trending hashtags. Such accusations have raised questions about how such a prominent platform, one that sources the news for over half the population, gets away with censorship before our very eyes.
It was just a few months ago that Twitter was accused of censoring tweets under ‘#DNCLeak’ during the WikiLeaks e-mail debacle. In this particular scenario, conservative Twitter users accused the platform of removing the hashtag from the trending bar, despite an estimated 250,000 tweets about the leaked employee e-mails from the Democratic National Committee.
On the other side of the political spectrum, of course, is the whole Milo Yiannopoulos situation, which involved permanent censorship in the form of a ban. Another accusation comes from Trump supporters, who say the social platform concealed Donald Trump’s tweets asking for campaign donations.
But is this censorship? Or is it just the work of a flawed algorithm?
Twitter, like other social platforms, works off an automated algorithm that determines what appears in your news feed and when. The information that shows up is based on an analysis of whom you interact with and what kind of tweets the algorithm thinks you’ll be more interested in seeing.
In some cases, you might not see a person’s tweets in your timeline because of the algorithm. Then, when you go to that person’s Twitter profile, there will be a slew of tweets you never saw. It could be as innocent as an algorithm only pushing certain tweets to the front of your feed, but that very fact shows the kind of power Twitter has.
If its algorithm can control what you see and when, who’s to say that it can’t be told to flag a particular tag or keyword and stop tweets containing that language from being noticed?
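To make the concern concrete, here is a minimal, purely hypothetical sketch of how a feed-ranking step could silently suppress posts matching a flagged keyword. The blocklist, tweet structure and scoring function are all invented for illustration; Twitter’s actual ranking code is private and is not reflected here.

```python
# Hypothetical illustration only: shows how a ranking filter *could*
# drop flagged posts before the user ever sees them. This is NOT
# Twitter's actual algorithm.

FLAGGED_TERMS = {"#exampleblockedtag"}  # invented blocklist


def rank_feed(tweets, score):
    """Return tweets sorted by score, silently omitting flagged ones."""
    visible = [
        t for t in tweets
        if not any(term in t["text"].lower() for term in FLAGGED_TERMS)
    ]
    return sorted(visible, key=score, reverse=True)


tweets = [
    {"text": "Breaking news about #exampleblockedtag", "likes": 900},
    {"text": "A photo of my lunch", "likes": 3},
    {"text": "Concert announcement!", "likes": 50},
]

feed = rank_feed(tweets, score=lambda t: t["likes"])
# The flagged tweet never appears, no matter how popular it is;
# the reader has no way to tell it was filtered out.
```

The unsettling part, as the sketch shows, is that suppression is indistinguishable from ordinary ranking: the user simply never sees the post.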
I spoke with an employee from Instagram and she mentioned that when Instagram was first released, she spent a lot of time removing “dick pics” from the platform.
Instagram prohibits “graphic content,” a category that covers certain male and female body parts users aren’t allowed to publish photos of. However, while male nipples can be seen in images posted to the site, the mobile photo-sharing service will remove any image containing female nipples.
“Instagram needs to accept that we cannot live in a society that treats men and women differently,” writes Silje Mari, an artist based in Norway. “Men and women do not have the same rights, especially when it comes to Instagram. This is sexism, and yes I am a feminist fighting for equality. Now I worry Instagram may disable my account.”
In its community guidelines, Instagram names a few rules you have to follow, like only sharing photos that you took and not spamming people for fake followers and likes. It also describes its censorship of nudity, noting, “We know that there are times when people might want to share nude images that are artistic or creative in nature, but for a variety of reasons, we don’t allow nudity on Instagram.”
Instagram CEO Kevin Systrom revealed one of those reasons in an interview with Business Insider. Basically, it all goes back to Apple censoring nudity and pornographic images. Remember when I mentioned that third-party apps couldn’t host pornographic content? Apple considers female nipples pornographic, so in order for Instagram to remain on our smartphones (and keep Apple happy), it cannot #freethenipple any time soon.
But it’s not just female nipples that Instagram wants to censor. In March 2015, Rupi Kaur had an image of herself removed twice by the social platform. What was so wrong about it? Well, it showed her lying in bed (clothed) with a red menstrual stain on her underwear and bed sheets. Apparently, this was a violation of Instagram’s community guidelines, so the photo was removed.
“This just goes to show who is sitting behind the desk. And who is controlling the show. Who is controlling the media and who is censoring us,” wrote Kaur.
I recently wrote about how Facebook’s Trending platform is seriously flawed. But Facebook’s content curation in general is riddled with problems.
Let’s say you want to post something on Facebook, but you’re not sure whether or not it’s considered controversial. Facebook has a list of things it prohibits on its platform, including, but not limited to: nudity, hate speech, fake profiles and spam. If you see someone post something on Facebook that you think breaks these rules, you can report it, and then Facebook will remove it.
But what about when Facebook itself catches you breaking its rules or posting something it doesn’t like? You’ll get a warning that what you’re posting isn’t allowed, usually followed by a suspension from your account for a set amount of time. Anyone else feel like they’re in kindergarten again?
A few weeks ago, Facebook was in hot water for removing a post from, and suspending the account of, Espen Egil Hansen, CEO and editor-in-chief of the Norway-based publication Aftenposten. Hansen had posted the iconic “Napalm Girl” image by Nick Ut, which was a violation of Facebook’s Community Guidelines since the image contains a naked child.
Not only did Facebook delete the post, but it also deleted the article Hansen penned in response, in which he accused the company of censorship. Though Facebook ultimately backed down, apologized and ceased censoring the photo, the incident is one of many censorship concerns for the social platform.
The ‘Trending’ module, for example, operates off a combination of an algorithm and a few real humans. That means not only does the algorithm still make mistakes, but there’s also room for a biased employee to decide what they think should be trending and what shouldn’t.
A report conducted by Online Censorship, a platform that documents social media censorship from user-generated data, found that Facebook censors users more than any other social channel.
Unfortunately, since these private companies set the rules of their platforms and we blindly sign up for them, they technically aren’t doing anything illegal. But, that’s also because social media is still so new and we haven’t, until now, realized how these platforms impact our right to freedom of speech.
You don’t have to have an account on a social channel if you don’t want to, but if you choose to, then you have to play by their rules.
If you’re not happy with how one of these platforms operates, you can just delete your account and move on with your life, right? But before you do so, consider what that action would mean. It would mean that you’re aware of the control a social media company has, that you don’t like it, but that instead of doing anything about it, you’re just going to turn your back and walk away. If we all just did that, would we really win, though?
Remember: more than half our population gets our information from Facebook and Twitter. I don’t think it’s as easy as saying “everyone just stop using Facebook” but I also don’t think it’s as easy as telling companies they can’t filter our tweets and that everyone and anyone should just be able to do whatever they want on the Internet.
Instead, we should push companies to be more transparent about how they decide to censor users’ content. If your posts are censored online, report it and ask why. Submit an appeal and get an explanation from the site instead of just clicking “Okay” and waiting for your time-out to be over. The more of us who do this, the more awareness we can raise about how private companies regulate speech, and the more we can change it.
And then maybe, just maybe, social media will start to be fun again.