The Collision: Enforced Religion Meets the Internet

Miriam Badawi, daughter of Raif, from the Free Raif Badawi Facebook page.
Every human being alive today, provided they have access to even the most rudimentary computing hardware, is now a broadcasting platform. Compared to generations past, even for the least electronically visible among us, we have many times the reach for any thought or opinion we care to express. And we rarely have full control over who hears what we say.

The consequences for the expression of religious belief (or lack thereof) have been enormous. To my mind, the collision of the democratized Internet with the innate restrictions of certain faith traditions is the most significant development in the world of religion today. Never in human history have supernaturally based belief systems, so specific in their prescriptions for behavior and thought, been so open to scrutiny and criticism, and on such a mass scale.

And it is a collision, because the free expression of dissenting ideas is anathema to dogma, particularly state-enforced dogma. Heretics and religious dissenters are not unique to our era, of course, but never has it been so easy to broadcast one’s dissent, for religious authorities to become aware of that dissent, and for the rest of the world to be awoken to how those dissenters are being persecuted. They are no longer isolated to villages or insular nations. A heretical tweet can land one in jail, but one’s next tweet can then rally a movement to demand one’s freedom.

This collision has sparked a global crisis: a crackdown on free expression ranging from serial offenders such as Saudi Arabia and Pakistan to countries that fancy themselves democracies, like Turkey, Russia, and even Greece. Often these prohibitions against religious dissent are given names like “hurting religious sentiments” or “insulting religion,” but they all fall under the rubric of blasphemy laws.

It is remarkably easy to commit blasphemy today. Some victims of persecution have been willing agitators, intentionally trying to bring about change within their own societies, such as Saudi Arabia’s Raif Badawi, who began a website for the discussion of liberal opinion; or Alber Saber, the Egyptian secular activist arrested in 2012 for allegedly sharing links to the video “The Innocence of Muslims,” which sparked enraged protests and violence across the Islamic world. But on the other hand, we also have people like Alexander Aan, an Indonesian civil servant who began quietly expressing atheistic opinions on Facebook and soon faced the violent wrath of an angry mob and several months in prison as a result.

In other words, one need not seek out controversy to find it, and one need not set out to commit blasphemy publicly to find oneself in existential danger for expressing dissenting beliefs. A casual Facebook conversation or tweet can put one in just as much peril as being an intentional rabble-rouser online.

The silver lining to this is how easily the rest of us can become aware of this crisis, and each instance of it. Alexander Aan began his travails alone, but soon found that he had won the support of countless allies around the world, including leading human rights organizations, such as the one that employs me, the Center for Inquiry. These newfound allies, friends he could never have known he had without the same technology that placed him in danger in the first place, rallied to his cause and leveled a degree of political pressure at Indonesian authorities that they could not have anticipated when they first locked Alex up.

And for Raif Badawi – along with his fellow dissenter, Saudi human rights activist Waleed Abu Al-Khair, who also sits in prison for blasphemy-related offenses – their cause has likewise brought to bear the combined efforts of activists, human rights organizations, and even casual users of social media to push back against their persecution. Their case was recently brought before the UN’s Human Rights Council by my organization, which was an important enough step in itself. But when delegates of the Saudi government manically tried to silence our own representative, Josephine Macintosh, as she delivered her rebuke of Saudi Arabia’s human rights abuses, the video of the altercation went viral, exposing to tens of thousands of individuals the extent of Saudi Arabia’s crimes, the plights of Badawi and Al-Khair, and the fact that a growing movement was working so hard to push back.

But without Twitter and Facebook and other online media, we in the West might never know any of this. We might go on wholly unaware of and uninterested in the challenges faced by atheists and other religious dissenters around the world. Miriam Ibrahim, originally of Sudan, is a Christian woman who was sentenced to death for refusing to convert to Islam, but the outcry for her right to follow the faith of her choosing was heard at first exclusively online, and largely from atheists and secularists. The sheer volume of attention brought to her cause online led to breathless “mainstream” media coverage, which in turn brought the heat of the world’s gaze upon Sudan, which eventually released her. She is no agitator. She didn’t have a blasphemous blog or tweet religious satire. She, simply and quietly, refused to violate her conscience, and the online world turned up the volume on her behalf.

Religious belief, whatever good can be ascribed to it, nearly always brings with it the expectation of conformity of thought and deed, lest one earn the wrath of the creator of the universe. The Internet is, among other things, an engine for sifting, parsing, and critiquing information and opinion. The collision of these two phenomena in this early part of the 21st century is one whose shockwaves will be felt for generations to come.

Editors’ Note: This article is part of the Patheos Public Square 2014 Summer Series: Conversations on Religious Trends.


You can learn much more about blasphemy laws and the fight for free expression at CFI’s Campaign for Free Expression.

You Can Be Jailed for Internet Blasphemy Before You’ve Even Committed It in India

If you use the Internet in a particular state in India, you might be jailed for pre-crime.

I wish I were being overly dramatic, but it really does seem that a law amended earlier this month grants authorities in the Indian state of Karnataka Minority Report-like precognitive powers, allowing them to arrest someone who they believe might at some point violate its information technology laws.

Let me back up a bit. What first caught my attention was a bit of news hitting the tech blogosphere that Karnataka police were letting it be known that citizens would be violating the law by the mere act of “liking” something on Facebook that has “an intention of hurting religious sentiments knowingly or unknowingly,” and that folks should report any such activity they see to the police. (Never mind that it doesn’t make sense that something could have “an intention” of doing something “knowingly or unknowingly.”) This is reprehensible on its face, criminalizing not only “blasphemous” content, but even the appearance of approval of said content. It’s a human rights violation of the most obvious sort.

But following the links deeper into the originating reports, I find that this is just part of the problem. It seems that this is a way of enforcing what’s called, amazingly, the Karnataka Prevention of Dangerous Activities of Bootleggers, Drug-offenders, Gamblers, Goondas, Immoral Traffic Offenders, Slum-Grabbers and Video or Audio Pirates Bill, or the “Goondas Act.” (A goonda is a hired thug.) And it’s the “prevention” part of that title that’s key, because the act effectively brings any offenses under the state’s Information Technology and Copyright acts under its own umbrella and aims to stop them before they can actually be committed, according to the Bangalore Mirror:

Until now, people with a history of offences like bootlegging, drug offences and immoral trafficking could be taken into preventive custody. But the government, in its enthusiasm, while adding acid attackers and sexual predators to the law, has also added ‘digital offenders’. While it was thought to be against audio and video pirates, Bangalore Mirror has found it could be directed at all those who frequent [Facebook], Twitter and the online world, posting casual comments and reactions to events unfolding around them. [ … ]

Technically, if you are even planning to forward ‘lascivious’ memes and images to a WhatsApp group or forwarding a song or ‘copyrighted’ PDF book, you can be punished under the Goondas Act.

And once arrested, you can be held from 90 days to a full year before even seeing a judge to make your case. It’s horrifying. One section of the act even prohibits the “publishing of information which is obscene in electronic form,” which includes “any material which is lascivious or appeal to the prurient interest.” Sunil Abraham of the Centre for Internet and Society provides the Mirror with a terrifying and yet totally plausible example of what could happen:

If I publish an image of a naked body as part of a scientific article about the human body, is it obscene or not? It will not be obscene and, if I am arrested under the [original Information Technology] Act, I will be produced before the magistrate within 24 hours and can explain it to him. But now, I will be arrested under the Goonda Act and need not be produced before a magistrate for 90 days. It can be extended to one year. So for one year, I will be in jail even if I have not committed any wrong.

So what began for me as more fuel for the fire against blasphemy laws around the world (a battle my employer, the Center for Inquiry, has taken on as one of its core missions) revealed itself to be a police and surveillance state run utterly amok, persecuting those who might at some point violate some arbitrary and undefinable religious or moral sensibility.

Ferguson as Portrayed by Facebook and Twitter: Algorithms Have Consequences

If Facebook’s algorithm is a brain, then Twitter is a stream of consciousness. The Facebook brain decides what will and will not show up in your newsfeed based on an unknown array of factors, a major category of which is who has paid for extra attention (“promoted posts”). Twitter, on the other hand, is a firehose. If you follow 1,000 people, you’ll see more or less whatever they tweet, in the order they tweet it, whenever you decide to look.

As insidious as it feels, the Facebook brain serves a function by curating what would otherwise be a deluge of information. For those with hundreds or thousands of “friends,” seeing everything anyone posts as it happens would be a disaster. Everyone is there, everyone is posting, and no one wants to consume every bit of that.

Twitter is used by fewer people, often those more savvy in social media communication (though I’m sure that’s debatable). Its content is intentionally limited in scope to 140 characters, including links and/or images, and the expectation of following is different from Facebook’s. On Facebook, we “follow” anyone we know: family of all ages and interests, old acquaintances, and anyone else we come across in real life or online. On Twitter, it’s generally accepted that we can follow whomever we please, and as few as we please, with no existing social connection required. On Facebook I can only friend someone who agrees to it (though I can follow many more), but on Twitter I can follow anyone whose account is not private.

So the Facebook brain curates for you; the Twitter firehose is curated by you.
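The contrast can be sketched in a few lines of Python. This is purely illustrative: the `Post` record, the engagement scores, and the cutoff threshold are all made up, since neither platform publishes its actual ranking logic. The point is only structural: one feed sorts by the clock and hides nothing, while the other sorts and filters by an opaque score.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: int    # seconds since epoch
    engagement: float # opaque score: likes, shares, paid promotion, etc.

def firehose_feed(posts):
    """Twitter-style: everything you follow, newest first, nothing hidden."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def curated_feed(posts, threshold=0.5):
    """Facebook-style 'brain': an opaque score decides both the order
    and whether you see a post at all."""
    ranked = sorted(posts, key=lambda p: p.engagement, reverse=True)
    return [p for p in ranked if p.engagement >= threshold]

posts = [
    Post("local_reporter", 1407900000, 0.2),  # breaking news, little promotion
    Post("viral_meme", 1407890000, 0.9),      # older, but heavily engaged
]

print([p.author for p in firehose_feed(posts)])  # reporter first (newest)
print([p.author for p in curated_feed(posts)])   # reporter filtered out entirely
```

In this toy model, the low-engagement breaking-news post simply never appears in the curated feed, which is exactly the invisibility the next section describes.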

Over the past few days, we have learned that there are significant social implications to these differences. Zeynep Tufekci has written a powerful and much talked about piece at Medium centered on the startling idea that net neutrality, and the larger ideal of the unfiltered Internet, are human rights issues, illustrated by the two platforms’ “coverage” (for lack of a better word) of the wrenching events in Ferguson, Missouri. She says that transparent and uncensored Internet communication “is a free speech issue; and an issue of the voiceless being heard, on their own terms.”

Her experience of the situation in Ferguson, as seen on Twitter and Facebook, mirrored my own:

[The events in Ferguson] unfolded in real time on my social media feed which was pretty soon taken over by the topic — and yes, it’s a function of who I follow but I follow across the political spectrum, on purpose, and also globally. Egyptians and Turks were tweeting tear gas advice. Journalists with national profiles started going live on TV. And yes, there were people from the left and the right who expressed outrage.

… I switched to non net-neutral Internet to see what was up. I mostly have a similar composition of friends on Facebook as I do on Twitter.

Nada, zip, nada.

No Ferguson on Facebook last night. I scrolled. Refreshed.

Okay, so one platform has a lot about an unfolding news event, the other doesn’t. Eventually, Facebook began to reflect some of what was happening elsewhere, and Ferguson information did begin to filter up. But so what? If you want real-time news, you use Twitter. If you want a more generalized and friendly experience, you use Facebook. Here’s the catch, according to Tufekci:

[W]hat if Ferguson had started to bubble, but there was no Twitter to catch on nationally? Would it ever make it through the algorithmic filtering on Facebook? Maybe, but with no transparency to the decisions, I cannot be sure.

Would Ferguson be buried in algorithmic censorship?

Without Twitter, we get no Ferguson. The mainstream outlets have only lately decided that Ferguson, a situation in which a militarized police force is laying nightly violent siege to a U.S. town of peaceful noncombatants, is worth their attention, and this is largely because the story has gotten passionate, relentless coverage by reporters and civilians alike on Twitter.

Remember, Tufekci and I both follow many of the same people on both platforms, and neither of us saw any news of Ferguson surface there until long after the story had already broken through to mainstream attention on Twitter. What about folks who don’t use Twitter? Or don’t have Facebook friends who pay attention? What if that overlap was so low that Ferguson remained a concern solely of Twitter users?

And now think about what would have happened if there had been no Twitter. Or if Twitter had adopted a Facebook algorithmic model and imposed its own curation brain on content. Would we as a country be talking about the siege on Ferguson now? If so, might we be talking about it solely in terms of how these poor, wholesome cops were threatened by looting hoodlums, and never hear the voices of the real protesters, the real residents of Ferguson, whose homes were being fired into and whose children were being tear-gassed?

As I suggested on Twitter last night, “Maybe the rest of the country would pay attention if the protesters dumped ice buckets on their heads. Probably help with the tear gas.” Tufekci writes, “Algorithms have consequences.” I’ve been writing a lot about how platforms like Facebook and Twitter serve to define our personal identities. With the Facebook brain as a sole source, the people of Ferguson may have had none at all. With the Twitter firehose, we began to know them.

EU’s “Right to Be Forgotten” Hits Wikipedia, Blocking the Memory of the Web

In May, the European Union’s top court made the controversial ruling that search engines were responsible for upholding a so-called “right to be forgotten,” compelling Google, Bing, Yahoo, and others to cease indexing and displaying links to web pages that are “inadequate, irrelevant or no longer relevant” to a person making a complaint. This is not globally enforceable, of course, and applies only to the EU court’s jurisdiction.

Today, the Wikimedia Foundation, which operates Wikipedia, reported that its pages were among those being removed from Google’s indexes, and it let it be known that it was not pleased. In a blog post today, its legal representatives wrote (with emphasis mine):

As of July 18, Google has received more than 91,000 removal requests involving more than 328,000 links; of these, more than 50% of the URLs processed have been removed. More than fifty of these links were to content on Wikipedia.

That’s only the beginning of the problem, as the only reason the Wikimedia folks even know about these removals is that Google tells them, of its own volition.

Search engines have no legal obligation to send such notices. Indeed, their ability to continue to do so may be in jeopardy. Since search engines are not required to provide affected sites with notice, other search engines may have removed additional links from their results without our knowledge. This lack of transparent policies and procedures is only one of the many flaws in the European decision.

Since search engines are under no obligation to let anyone know what they’re not showing users, users have no way of knowing what they’re missing, or that there’s anything to miss. That’s the idea of the new rule, really, to “erase the memory” of the Internet to uphold some twisted notion of “fairness.”
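The mechanics of why users can't know what they're missing are worth making concrete. Here is a minimal sketch, with entirely made-up names and URLs: the indexed page itself is untouched, but the result list is silently filtered against a removal set, and nothing in what the searcher receives marks where a link was dropped.

```python
# Toy model of "right to be forgotten" de-indexing (illustrative only).
# The hypothetical index still contains every page; removal happens
# invisibly, at query time, in the results list.

INDEX = {
    "john doe": [
        "https://example-news.eu/1998/auction-notice",
        "https://example.org/wiki/John_Doe",
    ],
}

# Links the engine has agreed to "forget" for queries on this name:
REMOVED = {"https://example-news.eu/1998/auction-notice"}

def search(query):
    results = INDEX.get(query.lower(), [])
    # The filtered list carries no placeholder, count, or notice,
    # so the searcher has no way to tell anything was omitted.
    return [url for url in results if url not in REMOVED]

print(search("John Doe"))  # only the wiki link survives
```

A searcher who runs this query sees one clean result and has no reason to suspect a second ever existed, which is precisely the transparency problem Wikimedia is objecting to.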

Wikimedia’s executive director Lila Tretikov, in a separate post today, explained the stakes:

[T]he European court abandoned its responsibility to protect one of the most important and universal rights: the right to seek, receive, and impart information. As a consequence, accurate search results are vanishing in Europe with no public explanation, no real proof, no judicial review, and no appeals process. The result is an internet riddled with memory holes—places where inconvenient information simply disappears.

A few days ago, I wrote about the concept of fairness versus compassion in software, an idea of Ben Brooks’s. The gist was that “fairness” is where decisions are made with the lowest common denominator in mind in order to appeal to every possible use case, and “compassion” is where products and solutions are developed on a case-by-case basis, with each product fulfilling a limited set of needs, and doing so very well, at the expense of other needs, which are served by other products. Fairness gets you Microsoft Word, full of features for every possible scenario but also byzantine and bloated; compassion gets you OmmWriter, simplified, with a small set of tools that serve a small set of users extremely well.

It seems to me that the European Court was trying to be fair. There is a chance that a number of people may indeed be legitimate victims of content on the Web that is truly “inadequate, irrelevant or no longer relevant,” and that this content is genuinely harmful to them, to no greater purpose. And that sucks. But it wasn’t enough to simply act on the complaint in question (in this case, a Spanish man who wanted an auction listing from the 90s removed), so in all fairness, the Court decided that a blanket rule for such removals had to apply across the board, for every EU citizen, and for every search engine doing business there. To limit the scope of the case to one man wouldn’t be, to their minds, fair.

But that attempt at fairness has sacrificed compassion, compassion for the human beings who are now denied access to information once freely available, and who now have no way of knowing what it is they’re being denied. Opaque internal tribunals make the decisions on all of these cases, and as has been reported, there are at least tens of thousands of them, and likely far more. Compassion would have had individual cases of serious merit addressed, but fairness has harmed, potentially, everyone in the EU.

And the fact that it’s hitting Wikipedia, of all sites, so hard makes it all the more salient. Wikipedia, like the Web itself, has grown organically to become the very historical memory of the Internet. Like a real person’s memory, it is flawed and prone to gross human error, but it also has the benefit of human ingenuity and imagination, and we rely on it for better or worse. Google and other search engines are largely how we find everything on the Web, including Wikipedia. It’s like cutting off the neural connections to memories stored in your brain: they’re still there, but you can’t access them, because one of those memories is of something someone else would rather you forgot.

And think: couldn’t the right to be forgotten apply to physical media? Should we prune print encyclopedias and start rummaging through libraries with pairs of scissors, hunting for information deemed “inadequate, irrelevant or no longer relevant”?

Perhaps Wikimedia’s public stance will help generate some movement against the “right to be forgotten.” It’s one thing to wave people off of stupid things someone might have done at an auction 20 years ago. But blocking off access to the memory of the Web might finally make people anxious.

UPDATE: Tim Farley has some important advice to anyone who makes content on the Web regarding this issue that he put in the comments:

I’ve recently spoken about this issue on both the Skepticality podcast and the Virtual Skeptics webcast. You correctly point out that nobody is required to pass along these notices. However, Google is in fact notifying webmasters when they get a request affecting your website! Bloggers and webmasters need to be aware of this. If you commonly critique European people or companies on your site, then you should make sure you are properly registered with Google Webmaster Tools. Otherwise Google has no way to send you these notifications.

I’m on HuffPost Live, Talkin’ Nazis!

I think their usual lineup of guests must have all simultaneously perished, because HuffPost Live invited me to join a panel this evening, literally minutes before air time. I was happy to oblige, of course. (The host, Josh Zepps, has my boss on a lot.)
We’re discussing the recent moves by Hungary to ban Nazi and communist symbols, and whether laws that prohibit vile speech can ever be justified. I think I managed not to embarrass myself or my employers too badly. You be the judge.

Note: The protest I refer to in the piece was suddenly postponed to May 2 right after the broadcast.