Skepticism Warranted in the Panic Over Twitter Changes

Image by Shutterstock
The thing that’s been giving the online world a collective ulcer is the idea that Twitter is going to fundamentally change the way its service works by bringing Facebook-style curation to its real-time firehose. But is it really? Despite the recent rending of garments by the Twitter faithful, I have found that skepticism is warranted.

The panic began with the implementation of a system whereby tweets favorited by people you follow might appear out of context in your timeline, and things you favorite might emerge likewise in other people’s feeds. I wrote about how this was an ominous sign of Twitter mucking with what makes it great: real-time access to the zeitgeist, filtered only by whom you choose to follow.

Then the Wall Street Journal reported that Twitter’s CFO Anthony Noto had indicated that changes were coming to the traditional Twitter timeline:

Twitter’s timeline is organized in reverse chronological order, a delivery system that has not changed since the product was created eight years ago and one that some early adopters consider sacred to the core Twitter experience. But this “isn’t the most relevant experience for a user,” Noto said. Timely tweets can get buried at the bottom of the feed if the user doesn’t have the app open, for example. “Putting that content in front of the person at that moment in time is a way to organize that content better.”

Mathew Ingram’s analysis of this at GigaOm is what really had people reaching for their pitchforks and torches; he wrote that it “sounds like a done deal” and that coming modifications “could change the nature of the service dramatically.”

That sounds really scary to folks who have stuck with Twitter since the beginning and truly value what it provides. Twitter stands in stark contrast to the heavily curated experience of Facebook; its immediacy gives it its power and its unique position in the media universe.

But as even Ingram acknowledged in a subsequent post, this change — an algorithmic approach to the timeline — was probably meant for the “discover” tab of the site, which is already heavily curated by the service, and for Twitter’s search feature. In fact, that’s what the Journal article itself says:

Noto said the company’s new head of product, Daniel Graf, has made improving the service’s search capability one of his top priorities for 2015.

“If you think about our search capabilities we have a great data set of topical information about topical tweets,” Noto said. “The hierarchy within search really has to lend itself to that taxonomy.” With that comes the need for “an algorithm that delivers the depth and breadth of the content we have on a specific topic and then eventually as it relates to people,” he added.

Sure sounds to me like he’s just talking about search, since that’s what he actually says. It didn’t help matters, I suppose, that whoever wrote the Journal’s headline chose to blare, “Twitter Puts the Timeline on Notice.” Because, no, it didn’t. Not here, anyway.

After Ingram’s first piece, in fact, Dick Costolo, Twitter’s CEO (who I presume is in a position to know), tweeted, “[Noto] never said a ‘filtered feed is coming whether you like it or not’. Goodness, what an absurd synthesis of what was said.”

What really settles all of this for me, though, is an interview given by Katie Jacobs Stanton, who is Twitter’s new media chief. What I read from what she tells Re/Code is that Twitter’s current and long-term strategies continue to revolve around the real-time, unfiltered timeline. Here’s Stanton on Twitter as a companion to live TV viewing (emphasis mine):

What’s happened is that every day our users have been able to transform the service into this second screen experience while watching live TV because Twitter has a lot of those characteristics. It’s live, it’s public, it’s conversational, it’s mobile. Television is something really unique, really powerful about Twitter and we’ve obviously made a big investment in that whole experience.

Here she is on the value Twitter provides during breaking news events:

We have a number of unique visitors that come to Twitter to get breaking news, to hear what people have to say. Joan Rivers died [Thursday] and people were grieving on Twitter and talking about her, but they’re also coming to listen to the voices on Twitter as they pay respect to Joan Rivers. This happens all the time. There’s also the broader syndication of tweets. News properties of the world embed tweets and cite tweets. That’s really unique.

Note how she emphasizes that Twitter is not serving as this kind of platform merely incidentally; this role makes it uniquely crucial to the wider media universe, both for consumers of news and for those producing it.

She later refers to Twitter as “the operating system of the news.” This does not sound to me like a service that is intent on dismantling what its own media boss is touting as its chief virtue.

Twitter will of course go through changes, and its experiments with favorites-from-nowhere are an example of that. It won’t always be exactly as it is. But I’m beginning to believe that the recent panic (including my own) is unwarranted. I suspect that the people at Twitter understand why people use it as religiously and obsessively as they do, that they use it very differently from the way they use Facebook, and that there always needs to be a way for a user to tap into the raw stream.

Maybe, down the road, the initial experience Twitter presents a user with will be one that is more curated, more time-shifted, but I suspect that the firehose will always be there for the faithful.

Learning Not to Be Tormented by the Twitter Torrent

I took a vacation from work last week, but I’m not good at vacations. One way or the other, I usually find some way to taint what should be a chance to relax with stress and labor. Sometimes that source of stress can be my own children. Not so much this time. This time, it was Twitter.

At first I had narrowed this epiphany to the bunch of jerks who attacked me when I tweeted in support of Anita Sarkeesian, and after my post on video games’ brutalization of women. And yes, that was stressful, and it’s not my fault that lots of people are jerks and decide to act on their jerkishness. But as the sun set on the last day of my time off, I realized that jerks on Twitter weren’t the sole problem for me, nor even at the core. It’s Twitter.

Two weeks ago, like hundreds of thousands of people, I suspect, I allowed the harrowing and upsetting news from Ferguson, Missouri, to eat me alive, night after night. I felt a kind of moral obligation to keep my eyes affixed to Tweetdeck as every outrageous development crossed the zeitgeist in real time. I could be of no help, and I couldn’t change the minds of those who thought the police’s siege was justified, but I wouldn’t allow myself to stop internally churning over every distressing incident. Tweet by tweet. Helplessly watching and tweeting my feelings only served, in the end, to put a dent in my well-being.

It wasn’t a bad idea for me to be informed, or to feel a deep compassion for the peaceful people whose very humanity was being challenged by our system, there represented by a militarized police force. I know there is real value in being well-informed and empathetic. But there is educating yourself, and there is abusing yourself.

In the midst of the blowback over my tweets and posts about Anita Sarkeesian and video games, I found myself wounded with every attack. Sometimes the lashings came from people I sort of knew on Twitter, which stung in a particular way that unleashed all sorts of self-doubting anxieties. But even the stupid and overtly hostile attacks from trolls and other miscellaneous dingbats hurt. These were snarky, mean-spirited attempts at zingers from fools, devoid of sense, and they still upset me.

Put aside why I “let” these things affect me as severely as they do. Folks, this is what it’s like to be me. The better question to ask is why I place myself in a position where I can be affected.

Marco Arment recently discovered something similar after a Twitter-fight with a tech journalist, which apparently really got to him, and he’s not exactly a shrinking violet:

Much of the stress I felt during this is from the amount of access to me that I grant to the public….We allow people access to us 24/7. We’re always in public, constantly checking an anonymous comment box, trying to explain ourselves to everyone, and trying to win unwinnable arguments with strangers who don’t matter in our lives at all.

We allow this access because of what we feel we’re getting in return: all the benefits of the Twitter firehose, every tweet in real time from those we’ve chosen to follow, plus (and this is the big one) a platform for reaching everyone who chooses to follow us, plus however many people follow them, should they pass our content along. When Twitter’s great, it’s really, really great. “It’s like 51% good and 49% bad,” as Brent Simmons recently put it. “I don’t see it getting any better. Hopefully it can hold the line at just-barely-worth-it.”

And it’s not just about people ganging up on me, it’s about exposing myself to the Great Torrent of Feelings that Twitter can become. As I just talked about in my last post, Twitter and other social media can, at their worst, serve as platforms for one group of people to vociferously agree with each other at the expense of another group of people, who are just, like, “the worst.” Regardless of my orientation to this dynamic, whether I’m in the agreeing-group, the dissenting group, or simply watching it take place, it’s dispiriting, frustrating, and if it’s about an issue or group of people I care about, upsetting.

Here’s Frank Chimero, expressing something that rings true for me as well:

My feed (full of people I admire) is mostly just a loud, stupid, sad place. Basically: a mirror to the world we made that I don’t want to look into. The common way to refute my complaint is to say that I’m following the wrong people. I think I’m following the right people, I’m just seeing the worst side of them while they’re stuck in an inhospitable environment. It’s exasperating to be stuck in a stream.

And here’s a kicker: While the groupthink and mob dynamics are Twitter at its worst, I think Ferguson is an example of Twitter at its best: the real-time documentation of an existentially crucial event, with contributions from people who are participants and first-hand witnesses to developments, along with analysis and reaction from people watching from outside. But just because it’s important and useful doesn’t mean it’s healthy for a given individual to drink all of it in night after night.

As I grew wearier this week, I took breaks from Twitter. I didn’t do any kind of cold-turkey abstention or detox. I just put it aside for a while. I took almost one whole day away before finally writing and posting my article on video games, and over the course of the week, I affirmatively decided not to allow Twitter or social media to be a part of my routine or my passive phone-checking. I chose not to put it in front of me when I was playing with my kids. I assembled some toys and organized my daughter’s room with only a podcast, and little phone checking. I rode my bike more than I ever have. I turned off all of my iPad’s notifications so I could read a book on it. I even checked out some dead-tree books from the library, in large part so that when I was reading them, the Great Torrent of Feelings could not reach me.

As I so often write under this blog’s banner, social media is best used with great intention. I usually mean this in terms of fostering your personal identity or in curating what content you’re exposed to. But it also applies to how much of your time and attention you allow it to claim overall. I have defaulted, I fear, to a stance in which the Twitter Torrent was granted passage through my nervous system as often, and for as long, as anyone else using it wanted. This week, while not the most relaxing and diverting vacation I could have hoped for, has at least taught me to be more specific and, yes, intentional, about my time in the Torrent.

Hat tip to Alan Jacobs, through whom I found a couple of these quotes, and who usually deserves many hat tips.

Let’s Agree to Disagree with Those Other People: The Stifling of Meaningful Dissent Online

A recent study from Pew that’s getting a lot of attention suggests that social media use is contributing to a dynamic in which people are afraid to express opinions that dissent from what appears to be the majority consensus: Facebook, Twitter, and other social media seem to make people afraid to disagree out loud, both on these platforms and in real life.

The pushback against this that I’ve heard amounts to some version of this: “That can’t be true, people are arguing all the time online!” This anecdotal observation means, I suppose, that the study is bunk.

But I don’t think so. Believe me, I understand that there is a lot of disagreeing and arguing and dissenting online. But I think it’s largely between networks of people as opposed to within them. Perhaps the most obvious example from my own skepto-atheist experience is the nightly volley of tweets between atheists and religious believers, where each side is batting debate points back and forth over the answer to Life, the Universe, and Everything.

But these aren’t friends or amiable acquaintances having a thoughtful disagreement. These are people who are more or less strangers to each other, shelling one another with 140 characters’ worth of theological (or atheological) rhetoric.

No one in these debates need muster the courage to stand against the prevailing opinion of the crowd, as everyone knows where everyone stands before the argument even begins. The same can be said for much of what passes for “debate” online. Liberals versus conservatives is an easy example. If progressive activist X gets into a Twitter tit-for-tat with religious right crusader Y, it’s not an act of bravery on either participant’s part, but a chance to showboat. And that’s fine.

Someone who has a disagreement within their own circles, though, does face a tougher situation than they might have in the days before social media omnipresence (and near-omniscience). If someone has a different political point of view on a certain issue from most of their friends, it can take a little steeling of the spine to bring it up, and for most of recent history, this could be confined to a conversation among friends at a party or over drinks. If there was blowback or discomfort, the blast radius was limited to a few folks on an isolated occasion.

Today, however, say something that doesn’t jibe with your circle’s line of thinking on a touchy subject, and you can expect all hell to rain down upon you from friends, friends-of-friends, and anyone else whose social Venn diagram even slightly butts up against theirs. Depending on the subject and the opinion in question, the reaction can be intense, hostile, overwhelming, and ultimately silencing. I know this phenomenon has made me rather hesitant, which is part of what made it so difficult to post my previous article, as it dissented with what many of my friends believed (apparently very strongly). As of this writing I still haven’t even tweeted out the link myself.

And where it really gets interesting is how this spills over into meatspace. Give it a moment’s thought, and you can see how the stultification of dissent online can affect one’s in-person interactions. Everyone you know in real life is probably also connected to you online, and will more or less instantly become aware of any meaningful disagreements on sensitive issues, as well as aware of the torrent of pushback one receives online as a result. Your real-life friends, in other words, will both know you think differently about something and see how you’re being pilloried for it. This, I can imagine, makes both parties – the dissenter and the members of their networks – dubious about the wisdom of opening one’s mouth. Plus, any dissent aired solely in meatspace can quickly find its way online by way of someone else’s reporting of it.

At the same time, the social rewards of being part of a cohesive group with a strongly-held, identical opinion on an issue are also apparent. You are safe within your own camp to lob snark, sarcasm, talking points, or missives of righteousness at the opposing camp (which is equally cohesive) or at some other poor sucker who was damn fool enough to disagree with his or her friends. If they still are friends, because of course they now hold the wrong opinion about a Very Important Issue.

Freddie deBoer recently noted how this phenomenon spoils online discourse, not just in debate, but in all manner of expression:

The elite internet is never worse – never – than when the people who create it decide that so-and-so is just the worst. When everybody suddenly decides that someone is a schmuck, it leads to the most tiresome and self-aggrandizing forms of groupthink imaginable.

So yes, there is plenty of argument online. But actually relatively little open disagreement. The disagreement most people who sneer at this study are observing is really just agreement on the position that those other people (or that one poor dumb bastard) on the other side are wrong.

It’s people, astride very tall horses, agreeing at other people.

Twitter is Monkeying with What Makes it Great

Original image from Shutterstock.
Twitter has a lot of problems. It doesn’t seem to have the wherewithal to deal with abuse and harassment on its platform, it’s managed to antagonize the developer community by limiting anyone’s ability to make new apps and interfaces to the service, and, oh yeah, it still doesn’t really know how to make money for itself. But the core service is something truly valuable and truly simple, and in that simplicity it has been – dare I say it? Yes I dare! – revolutionary.

But under the shadow of Facebook’s supermassive user base and Google’s vast resources underpinning so much of what we know of as the Web, Twitter seems willing to at least experiment with making fundamental changes to what makes it so great in the first place.

Anyone using the Twitter web interface might have noticed already that not only are retweets (when one posts someone else’s tweet on their own timeline) appearing in users’ feeds, but so are other people’s favorites (when you click the star on a tweet). Not every favorite from everyone you follow, but those determined by algorithm to be of potential interest to you.

Here’s how Twitter itself explains the new order (with my emphasis):

[W]hen we identify a Tweet, an account to follow, or other content that’s popular or relevant, we may add it to your timeline. This means you will sometimes see Tweets from accounts you don’t follow. We select each Tweet using a variety of signals, including how popular it is and how people in your network are interacting with it. Our goal is to make your home timeline even more relevant and interesting.

That’s right, Twitter is playing with building its own Facebook-like curation brain. Or to put it another way, Twitter is putting kinks in its own firehose.

This is disconcerting to longtime Twitter users for a number of reasons. First is the relinquishing of control being forced on the user: what was once a raw feed of posts from a list of people entirely determined by the user will become one where content is inserted that the user may not even want there. As John Gruber put it, “That your timeline only shows what you’ve asked to be shown is a defining feature of Twitter.” Maybe not for long.

Another issue is that this content can be time-shifted, meaning that the immediacy of dipping into one’s Twitter stream for the second-by-second zeitgeist will become diluted at best, and meaningless at worst.

But also, this one relatively minor change in the grand scheme of things signifies an entirely different concept of what a “favorite” means on Twitter. It’s really never been entirely clear to me what clicking the star on someone’s tweet was supposed to signify, but as with many things on Twitter, folks have made it their own. For some it’s the equivalent of a nod or smile of approval without a verbal response; for others it serves the purpose of a bookmark, so they can return to a tweet later. It’s never been meant as a “signal” to Twitter to provide more content like that tweet. Importantly, it’s always mainly been between the user and the original tweeter (not entirely, as one can click through on a given tweet and see all those who have favorited it), and now that’s completely gone. Now you have to assume that your favorites, along with your retweets, will be broadcast, put in front of people in their timelines.

Dan Frommer says changes like this may be necessary for Twitter’s long-term viability:

The bottom line is that Twitter needs to keep growing. The simple stream of tweets has served it well so far, and preservationists will always argue against change. But if additions like these—or even more significant ones, like auto-following newly popular accounts, resurfacing earlier conversations, or more elaborate features around global events, like this summer’s World Cup—could make Twitter useful to billions of potential users, it will be worth rewriting Twitter’s basic rules.

But with events around the world being as they are, the value of the Twitter firehose hasn’t been this clear since perhaps Iran’s Green Revolution. For Twitter to be monkeying with its fundamentals, the things that make it stand apart from Facebook and other platforms, is frightening. I have to hope that if Twitter does take this too far, that another platform will emerge that can be all that was good about Twitter, and also attract a critical mass of users to make it valuable.

Maybe we should have given App.net more of a shot.

Ferguson as Portrayed by Facebook and Twitter: Algorithms Have Consequences

If Facebook’s algorithm is a brain, then Twitter is a stream of consciousness. The Facebook brain decides what will and will not show up in your newsfeed based on an unknown array of factors, a major category of which is who has paid for extra attention (“promoted posts”). Twitter, on the other hand, is a firehose. If you follow 1000 people, you’ll see more or less whatever they tweet, at the time they tweet it, at the time you decide to look.

As insidious as it feels, the Facebook brain serves a function by curating what would otherwise be a deluge of information. For those with hundreds or thousands of “friends,” seeing everything anyone posts as it happens would be a disaster. Everyone is there, everyone is posting, and no one wants to consume every bit of that.

Twitter is used by fewer people, often by those more savvy in social media communication (though I’m sure that’s debatable), the content is intentionally limited in scope to 140 characters including links and/or images, and the expectation of following is different than with Facebook. On Facebook, we “follow” anyone we know, family of all ages and interests, old acquaintances, and anyone else we come across in real life or online. On Twitter, it’s generally accepted that we can follow whomever we please, and as few as we please, and there need be no existing social connection. I can only friend someone who agrees to it on Facebook (though I can follow many more), but I can follow anyone I like on Twitter whose account is not private.

So the Facebook brain curates for you, the Twitter firehose is curated by you.

Over the past few days, we have learned that there are significant social implications to these differences. Zeynep Tufekci has written a powerful and much talked about piece at Medium centered on the startling idea that net neutrality, and the larger ideal of the unfiltered Internet, are human rights issues, illustrated by the two platforms’ “coverage” (for lack of a better word) of the wrenching events in Ferguson, Missouri. She says that transparent and uncensored Internet communication “is a free speech issue; and an issue of the voiceless being heard, on their own terms.”

Her experience of the situation in Ferguson, as seen on Twitter and Facebook, mirrored my own:

[The events in Ferguson] unfolded in real time on my social media feed which was pretty soon taken over by the topic — and yes, it’s a function of who I follow but I follow across the political spectrum, on purpose, and also globally. Egyptians and Turks were tweeting tear gas advice. Journalists with national profiles started going live on TV. And yes, there were people from the left and the right who expressed outrage.

… I switched to non-net-neutral Internet to see what was up. I mostly have a similar composition of friends on Facebook as I do on Twitter.

Nada, zip, nada.

No Ferguson on Facebook last night. I scrolled. Refreshed.

Okay, so one platform has a lot about an unfolding news event, the other doesn’t. Eventually, Facebook began to reflect some of what was happening elsewhere, and Ferguson information did begin to filter up. But so what? If you want real-time news, you use Twitter. If you want a more generalized and friendly experience, you use Facebook. Here’s the catch, according to Tufekci:

[W]hat if Ferguson had started to bubble, but there was no Twitter to catch on nationally? Would it ever make it through the algorithmic filtering on Facebook? Maybe, but with no transparency to the decisions, I cannot be sure.

Would Ferguson be buried in algorithmic censorship?

Without Twitter, we get no Ferguson. The mainstream outlets have only lately decided that Ferguson, a situation in which a militarized police force is laying nightly violent siege to a U.S. town of peaceful noncombatants, is worth their attention, and this is largely because the story has gotten passionate, relentless coverage by reporters and civilians alike on Twitter.

Remember, Tufekci and I both follow many of the same people on both platforms, and neither of us saw any news of Ferguson surface there until long after the story had already broken through to mainstream attention on Twitter. What about folks who don’t use Twitter? Or don’t have Facebook friends who pay attention? What if that overlap was so low that Ferguson remained a concern solely of Twitter users?

And now think about what would have happened if there was no Twitter. Or if Twitter adopted a Facebook algorithmic model, and imposed its own curation brain on content. Would we as a country be talking about the siege on Ferguson now? If so, might we be talking about it solely in terms of how these poor, wholesome cops were threatened by looting hoodlums, and never hear the voices of the real protesters, the real residents of Ferguson, whose homes were being fired into and children were being tear-gassed?

As I suggested on Twitter last night, “Maybe the rest of the country would pay attention if the protesters dumped ice buckets on their heads. Probably help with the tear gas.” Tufekci writes, “Algorithms have consequences.” I’ve been writing a lot about how platforms like Facebook and Twitter serve to define our personal identities. With the Facebook brain as a sole source, the people of Ferguson may have had none at all. With the Twitter firehose, we began to know them.

What Happens When You Starve the Facebook Brain?

The Facebook algorithm, the “brain” which decides what content to feature, what content to bury, and what content to put in front of you, is being tested mightily of late. One writer tried to game the Facebook brain by disguising his posts as major life events in hopes of seeing them rise to the top. Another tried to overwhelm the brain (and himself) by clicking “like” on literally everything he saw.

Elan Morgan had a different idea altogether. Instead of gaming the Facebook brain, she more or less ignored it. Taking the opposite tack from Mat Honan, the Wired writer who liked all content for 48 hours without discrimination (and suffered for it), Morgan stopped clicking like altogether. She describes her troubles with the entire concept:

I actually felt pangs of guilt over not liking some updates, as though the absence of my particular Like would translate as a disapproval or a withholding of affection. I felt as though my ability to communicate had been somehow hobbled. The Like function has saved me so much comment-typing over the years that I likely could have written a very quippy, War-and-Peace-length novel by now.

Rather than give the Facebook brain a deluge of contradictory feedback as Honan did, Morgan gave it none at all, leaving the Facebook brain with little data on which to base its curation. The result? Well, in a way, she got Facebook – the one we all used to like – back:

Now that I am commenting more on Facebook and not clicking Like on anything at all, my feed has relaxed and become more conversational.

Imagine that. This is what drew me to Facebook to begin with, when it still seemed to be a platform mainly for college students. It distinguished itself from MySpace by having a clean, uncomplicated interface, and with a news stream that didn’t necessitate going to an individual’s page to interact. And when you wanted to interact, to comment or ask a question, it was quick and easy.

But when Facebook turned so strongly in the direction of heavy algorithm-based curation as almost literally everyone began posting on it, it turned into something that resembled a WalMart lined with cheesy inspirational posters. Community and interaction became incidental to passive consumption of content. Passive, save for the “like.”

Morgan saw this too:

I had been suffering a sense of disconnection within my online communities prior to swearing off Facebook likes. It seemed that there were fewer conversations, more empty platitudes and praise, and a dearth of political and religious pageantry. It was tiring and depressing. After swearing off the Facebook Like, though, all of this changed. I became more present and more engaged, because I had to use my words rather than an unnuanced Like function. I took the time to tell people what I thought and felt, to acknowledge friends’ lives, to share both joys and pains with other human beings. It turns out that there is more humanity and love in words than there are in the use of the Like.

I think this is an experiment very much worth pursuing. As Mike Daisey wrote (on Facebook) in response to Morgan’s piece, “[I]t might help make it closer to being a discussion board, which is what I wish it to be.” Same here.

But there are different perspectives on the “like.” Anil Dash wrote back in 2011 how he uses likes, Twitter “favoriting,” and other forms of social media up-voting with specific intention:

[F]avoriting or liking things for me is a performative act, but one that’s accessible to me with the low threshold of a simple gesture. It’s the sort of thing that can only happen online, but if I could smile at a person in the real world in a way that would radically increase the likelihood that others would smile at that person, too, then I’d be doing that all day long.

This idea, likes as a stand-in for in-person smiles and nods, is part of what Morgan finds problematic, that they are substanceless. “The Like is the wordless nod of support in a loud room,” she writes. “It’s the easiest of yesses, I-agrees, and me-toos.”

There’s nothing wrong with yesses and me-toos, of course. Sometimes that’s really all that’s worth saying, and that’s okay. I think the trick is to know when a mere “like” or “fav” truly is sufficient, when a more substantive response is warranted, and when it’s best, or just okay, to let something go by without expressing an opinion at all. After all, not every opinion needs expressing, does it?

I keep coming back to the idea of using Facebook and other social media with intention, knowing that there is an algorithm behind this platform that dominates so much of our online experiences, and acting on that understanding. That might mean becoming far more judicious with your likes, and favoring prose responses over a mere thumbs-up. And maybe it means you eschew a reaction on Facebook’s platform altogether (thereby bypassing the Facebook brain), and put your response into a blog post, a tweet, or a private email.

Just because something starts or is discovered on Facebook doesn’t mean it has to stay there. That brain doesn’t own you.

The Spectacle of Ourselves: Social Media and the Superfluous Will

Image by Shutterstock.
Are we losing ourselves in social media? A lot of people feel that way, that we’re all just absorbing ourselves into some kind of swirl of ones and zeroes, and that our identities and individualities are being lost to “Big Data.” One way I’ve heard it put is that we’re all turning into the passengers on the Axiom, the starliner from WALL-E.

Rob Horning at The New Inquiry addresses this by way of his reading of Jean Baudrillard, the social critic who, writing in the 1980s, seems to have prophesied the coming of social media with a number of predictions about what it would lead to, which I won’t get into here. Horning quotes this bit from Baudrillard, and it rings true:

This is our destiny, subjected to opinion polls, information, publicity, statistics: constantly confronted with the anticipated statistical verification of our behavior, absorbed by this permanent refraction of our least movements, we are no longer confronted with our own will. … Now, where there is no other, the scene of the other, like that of politics and society, has disappeared. Each individual is forced despite himself into the undivided coherency of statistics.

Horning applies this to the example of things like Facebook’s feed curation (and I’d say Google’s person-by-person search algorithms), in that they give us what their data says we ought to want before we’ve ever requested it. He calls it “postauthenticity”:

Even if the prediction is initially wrong, preferential placement in the platform, and the efficacy of the subsequent feedback loops can make it so … Postauthenticity (social media plus Big Data) makes our will superfluous.

Okay so that does sound pretty unsettling. I certainly don’t want to think of myself as passively and thoughtlessly having my online experiences doled out to me a) without being in control of it and b) without even realizing or caring that I’m not in control.

Horning says:

Facebook promises to entertain you, but it turns out that promise is synonymous with manufacturing demand. … Within that model is where power is exercised, modulating behavioral outcomes at the level of populations.

I actually agree with his diagnosis here: this is indeed the power that Facebook wields, as do Google and, to a lesser extent, Amazon. Twitter wields it less so still, at least until it decides to exercise more of its power, deny users the “firehose,” and aim for enforced curation.

But I don’t see it as a crisis, at least no more so than with any other mass media with which we’re already familiar. Horning, channeling Baudrillard, talks about social media as a system that serves our selfhood to us, in which “the masses enjoy the spectacle of themselves as a kind of consumer good.” Which is a heavy concept, but not new. What else is television if not the masses enjoying the spectacle of themselves, a spectacle that is sold to them through exposure to advertisements and paid subscriptions?

Obviously the difference is that on TV, we’re not watching “us,” we’re watching other people. But the reason we watch TV, see theatre, read books, etc., is to place ourselves outside of our normal worlds and into someone else’s, be it something wholly fictional or something based in “reality.” It is escapism, with varying degrees of escape versus engagement. We place ourselves in those modes of entertainment because, as we watch, where else could our “selves” be?

On Facebook and other social media, obviously we’re more “ourselves.” We present our faces, our names, our biographical information, our day-to-day comings and goings, our likes and dislikes, and anything else that delights us, upsets us, or somehow reflects something about us that we want to show the world.

It’s easy to get lost in the rathole of Facebook, as much as I loathe it on ethical and aesthetic principles. It’s easy to waste countless hours and unfathomable patience on Twitter. And yes, one activity in which one can get lost is in constructing one’s own “self,” the version of you you’re going to show the world. I know I do! I’m at least under the impression that when I do it, it’s quite deliberate. Other times I am sure I am being unconsciously guided and dazzled by algorithms being processed on distant server farms.

But at the very least I do have that power to decide what I will and will not expose myself to. I can construct my online identity, aware all the while that it is only a representation, and not a whole. I can waste time in the various services’ ratholes, but all the while understand that I can change what I’m allowing myself to see, exit it at any time, and if I choose, never return.

With television, you can choose what you watch, or whether to watch at all, but once that choice is made, the choosing ends. You go strictly into passive mode, whereas social media can be either passive or active, serendipitous (or with the illusion of serendipity) or intentional.

All this said, I think we did and continue to lose the masses to “the spectacle of themselves” with television. Snotty smartypants so-and-sos like me have been complaining about the brain-rot of television since its advent, and rightly so. It’s a mind-deadener for most people, but for some, it can also be used for enrichment or engagement.

I don’t think social media is any different on that end. The poor “masses” will use it purely for the spectacle of themselves (and others of course), and some, perhaps relatively few, will use its power for more intentional, active purposes, for good or ill, just like with television, radio, and the printing press. Do Big Data and the hyper-curation of online experiences subsume people until they are lost? For many of us, yes, the will does become superfluous, and more and more so as these systems become more powerful. But they don’t, and I think won’t, subsume all of us. And that makes all the difference.

**

Unrelated side note: Horning makes sure to let us know how “frustrating” he finds Baudrillard’s byzantine style of writing, which I find fairly ironic considering how generally hoity-toity The New Inquiry is as a whole, and how in this same piece Horning himself writes things like this:

Maybe then, the way to resist the demand to make one’s subjectivity productive for capital is to use social media in a “hyperconformist” normcore way, emptying “self-expression” of its value for social-media companies and shifting the location of selfhood elsewhere by perpetually deferring its “genuine” expression.

Yeah. So.

Abuse on Twitter: Humans Can’t Always Just “Brush it Off”

Image by Matthew Keys.
People being assholes online is hardly new, though awful people using Twitter as a kind of heat-seeking missile to hurt people has only lately begun to rise to the level of a mainstream conversation.

There seem to be three legs to this stool: The responsibilities of the perpetrators of Twitter abuse, what the target of the abuse is obliged to either tolerate or resist, and what Twitter itself ought to be doing.

For leg 1, the people who use Twitter (or any medium) to hurt people’s feelings, to scare them, to threaten them, the answer is clear, and we need not dwell on it. They should drop dead.

Let us assume, though, that they will disagree with me and continue to both live and use Twitter as a vessel for their vileness. That leaves legs 2 and 3.

To get at leg 2, I recommend a conversation had on this topic on This Week in Google between hosts Gina Trapani, Jeff Jarvis, and Leo Laporte. Jarvis and Laporte, both of whom I admire very much, while sensitive to what (mostly) women endure online, seem focused on the idea of ignoring the abuse, blocking bad actors and not letting the harassment get to you, lest the bad guys win. (The exceptions Laporte makes are for actual threats of violence that warrant real-world intervention, and the kind of abuse that can harm his business, but that’s a different thing.)

It takes Trapani, however, to ground the conversation where I think it belongs, in the minds and perceptions of those who are not public figures, who have not signed up to be in the spotlight, and are human.

In the abstract, on paper, it sounds entirely reasonable to recommend simply brushing off the vitriolic spewings of idiots on Twitter. But as Trapani explains from her own experience when she first became a visible figure online, the utter onslaught of criticism, the “nitpicking” of every facet of her existence, was completely overwhelming. Again, she knows that she chose to be in this spotlight, and was able, over time, to become more or less inured to the attacks. But think of the countless (mostly) women on Twitter, especially now that the service is mainstream and not a geek/niche platform, who suffer the same level of abuse that a public figure might. Now tell me they should just brush it off.

Because we are humans, you see. It’s not enough to say we ought simply process the data and coldly weigh the costs and benefits of every action and then act for the optimal outcome. “Too much abuse? Block, put it out of your mind, and decide not to be affected.” It simply doesn’t work that way for human beings with feelings and memories and psychological baggage and hearts. When we’re attacked, either in person or through bits, we feel it, physiologically. We experience real emotions like fear, self-loathing, and depression. Whether the abuse is “genuine” or a real-world physical threat is beside the point. We’re not the computers.

I’ve felt fear from the Internet, and it’s the same feeling as fear in meatspace. And I’m not a real target, not like so many (mostly) women are. I can’t imagine turning that fear up by several orders of magnitude, just so I can be free to tweet.

So what ought the targets do? They ought to have a service that does a hell of a lot more to help them both be safe and feel safe.

This is leg 3: What Twitter ought to do about all this. Trapani and Jarvis both note that there simply must be algorithmic things that Twitter can implement to at least begin to create a safer online space. But to get a more concrete idea, I recommend a post by Danilo Campos titled, appropriately, “The Least Twitter Could Do.” He has some concrete ideas about the steps Twitter could take, such as allowing users to set an auto-block threshold for users with few followers, or blocking any account that a certain number of one’s friends have blocked. Campos says these are only “band-aids,” but they’d be something. And it’s not clear to me at all that Twitter has taken this seriously yet.
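The two rules Campos suggests are simple enough to sketch out. Below is a minimal illustration of that kind of heuristic; all names, data structures, and thresholds here are hypothetical stand-ins for whatever Twitter’s internal systems actually use, not anything the service exposes:

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    """Hypothetical stand-in for a Twitter account's relevant data."""
    handle: str
    follower_count: int
    blocked_by: set = field(default_factory=set)  # handles that have blocked this account

def should_auto_block(sender: Account, my_friends: set,
                      min_followers: int = 50,
                      friend_block_threshold: int = 3) -> bool:
    """Apply two of the 'band-aid' rules Campos describes:
    1. Auto-block mentions from accounts below a follower threshold
       (a user-configurable bar for throwaway harassment accounts).
    2. Block any account that enough of one's friends have already blocked.
    """
    if sender.follower_count < min_followers:
        return True
    friends_who_blocked = sender.blocked_by & my_friends
    return len(friends_who_blocked) >= friend_block_threshold
```

A fresh sockpuppet with two followers would be filtered by the first rule; an established account already blocked by three of your friends would be caught by the second. These are crude filters with obvious false positives (every new legitimate user starts with few followers), which is presumably part of why Campos calls them band-aids.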

Hey, I get it, they’re Silicon Valley, libertarian, information-wants-to-be-free types. But again, we’re talking about people, not machines, not startup manifestos or mission statements. Twitter’s got its infrastructure, its platform, its cultural power, and lots of engineering talent and money. It now just needs to give enough of a damn.

Quick! Get This Man to a Homeopathic Hospital!

There’s something particularly insidious about homeopathy, isn’t there? I can’t put my finger on it, but something about it gets under my skepto-atheist skin more than almost any other kind of pseudoscientific malarkey.

I think it has something to do with the fact that things like religion and faith are kind of vague and ethereal, making claims about things that are overtly and almost-explicitly imaginary, while homeopathy makes a nonsense claim about something that is actually supposed to be physically present; though a solution contains only a “memory” or “essence” of a substance, it’s still supposed to be there, if in only negligible amounts, and have some effect on you as a result. At least one’s qi or chakra or aura is as imaginary and ethereal as anything religion claims. Homeopathy is just straight up wrong.

This is all to say I made up a dumb joke on Twitter about what a homeopathic hospital might be like.

Homeopathic hospital: Huge empty building, one real doctor walks in, walks out again.

— Paul Fidalgo (@PaulFidalgo) August 1, 2014

And then other smart folks on Twitter took the idea and ran with it, and I thought I’d share some highlights.

@PaulFidalgo He’d have to jump around a bit on the inside first, because it’s hard to shake a hospital. — David Dennis (@The_Wolfster) August 1, 2014

@PaulFidalgo It would be a building with billions of staff and if any were doctors once, there’s no record of their ever having worked there — David Bradley (@sciencebase) August 1, 2014

Indeed, you need something that gives a similar effect to what you’re trying to cure. A mass-murderer wd be better. @AI_Joe @PaulFidalgo — Stephan Brun (@tibfulv) August 1, 2014

@tibfulv @AI_Joe @PaulFidalgo Clearly the Dr. wouldn’t just walk in and out. He would have to at least twerk for an hour or something. — SCROB TV (@scrobTV) August 1, 2014

@PaulFidalgo Construction workers then remove one corner room at random and attach it to another hospital. #LatherRinseRepeat — Len Sanook (@LenSanook) August 1, 2014

@LenSanook @PaulFidalgo After each reconstruction, they whack it ten times with an enormous leather and wood wrecking ball. — Charles Richter (@richterscale) August 1, 2014

@PaulFidalgo Homeopathic hospital: Huge empty building, occasionally the janitor opens the window to recirculate the BS. — Travis Estrella (@AI_Joe) August 1, 2014

@AI_Joe @PaulFidalgo a bunch of water coolers labelled “help yourself”? — CBat (@ImADataGuy) August 1, 2014

And then Thomas (@tehabe) sent me this video, which I can’t believe I’ve never seen, and won the day.