Two weeks ago, Facebook began suspending the accounts of members of the Sisters of Perpetual Indulgence, a drag and community service organization. Members who’d been active on Facebook under their stage names were locked out until they registered with their legal names.
For those members who wanted to keep their stage identities separate from the rest of their lives, at least online, Facebook’s actions threatened to tear down a critical wall of privacy. After attempted discussions with Facebook, a whole lot of media (and social media) attention, and a Change.org campaign that’s collected more than 36,300 signatures, on Wednesday Facebook’s chief product officer Chris Cox apologized to the group.
According to Facebook’s policy, members of Facebook must appear on the social network as they do in real life. In effect, “fake names” are not allowed.
It turned out that another Facebook user, who reported “several hundred of these accounts,” was behind the trouble, Cox confirmed in a Facebook post. The newly reported accounts were among “several hundred thousand fake name reports,” Cox wrote, “99 percent of which are bad actors doing bad things: impersonation, bullying, trolling, domestic violence, scams, hate speech, and more.”
In his Facebook post, Cox wrote that the names the Sisters took on Facebook were, in fact, authentic enough.
Too little, too late?
This episode brings two issues into focus, and not for the first time. The first of course is the subject of online identities and the value that members of an online community place on pseudonyms.
People who study social networks and human behavior have argued that a real name policy can be harmful, particularly to members of the LGBTQ community. For one, it robs marginalized groups of a safe space online. Facebook is hardly the first social network to enforce such a policy—Google+ came under fire for requiring legal names when it launched.
This becomes all the more problematic when you consider how unsafe online spaces can currently be. Other scholars have argued that the tool companies like Facebook use to protect people from offensive and abusive behavior is broken.
Kate Crawford, a researcher at Microsoft Research, and Tarleton Gillespie, associate professor at Cornell University, have argued that flagging—the system of reporting harmful content used, in some version, on every social platform, not just Facebook—isn’t enough to prevent or stop abusive behavior. Rob Meyer at The Atlantic explains this further, but the rub is that flags are a “technical solution” that papers over a deeply human problem: People do bad stuff.
A more suitable alternative, Crawford and Gillespie say, is also a more social one. Their solution: make post moderation open to public debate.
Such a system, they argue, would be better at locating the real “bad actors” while keeping harmless cases, like the Sisters and their pseudonyms, from getting caught up in the machinery.
To some extent, Cox seems to agree. “We see through this event that there’s lots of room for improvement in the reporting and enforcement mechanisms,” Cox wrote on Wednesday.
It’s worth noting that two weeks of media coverage, along with a campaign started by one of the Sisters under her real name, preceded Facebook’s response.