Fret No More


Time flies when you’re having fun, and it flies at Mach 5 when you’re not. When I hear my kids complain, “I’m bored,” I tell them how much I envy them. Oh, to be bored! To have no immediate demands on my time, energy, and attention! Boredom may appear to be an unpleasant state, but it’s also a harbinger and a breeding ground of things worth doing. It’s the preamble for activities of choice, not obligation.

By mere coincidence I read in succession two pieces on how terrible we humans are at perceiving time and its passage, and how we might alter those perceptions in a more meaningful and satisfying way. They are both entirely convincing, and yet they each offer conflicting ideal states of mind. Or they might not.

First, Alan Jacobs in The Guardian. (I have never met this man, but I swear I count him among the most valuable teachers of my life.) Jacobs refers to our culture, as driven by our various media, as “presentist.” He writes, “The social media ecosystem is designed to generate constant, instantaneous responses to the provocations of Now.” There’s no way to think deeply or consider alternate or broader perspectives because the fire hose of stimuli never ceases.

The only solution is to cultivate “temporal bandwidth,” which Jacobs defines as “an awareness of our experience as extending into the past and the future.” Less “now” and more “back then, now, and later.” And the way we do that is to read books. Old books, preferably. “To read old books is to get an education in possibility for next to nothing.”

That education sets the stage for one’s mind to not only absorb the wisdom and the mistakes of the past, but to contemplate how they “reverberate into the future”:

You see that some decisions that seemed trivial when they were made proved immensely important, while others which seemed world-transforming quickly sank into insignificance. The “tenuous” self, sensitive only to the needs of This Instant, always believes — often incorrectly — that the present is infinitely consequential.

But cultivating temporal bandwidth is happening less and less, it seems. And as Jacobs says in a separate post, “Those who once might have been readers are all shouting at one another on Twitter.”

But while Jacobs recommends steering us away from believing the present to be of prime significance, David Cain at Raptitude urges us to grasp the present more tightly, and let concerns about the past and future fade to the periphery.

And it is all to address the same basic problem: we feel washed away by the force and flow of time. Comparing an adult’s perceptions of time to a child’s, Cain writes:

As we become adults, we tend to take on more time commitments. We need to work, maintain a household, and fulfill obligations to others. […] Because these commitments are so important to manage, adult life is characterized by thoughts and worries about time. For us, time always feels limited and scarce, whereas for children, who are busy experiencing life, it’s mostly an abstract thing grownups are always fretting about. There’s nothing we grownups think about more than time — how things are going to go, could go, or did go.

Cain doesn’t point to social media or cultural illiteracy as culprits, but rather our disproportionate fixation on the past and the future. It may be that Cain is largely discussing a different scale of time than is Jacobs. Cain seems to be referring to our fixation on what has happened in the relatively recent past (10 minutes ago or 10 years ago, for example) and what the immediate future bodes (say, the next couple of hours or the next couple of months). Jacobs, by emphasizing the reading of “old books” (and by quoting lines from Horace) is certainly thinking of a much deeper past and a more distant future, spans that transcend our own lifetimes.

But as I said, Cain recommends regarding the past and future less and homing in on the present. “The more life is weighted towards attending to present moment experience, the more abundant time seems,” he says. And the way to attend to that present moment, as clichéd as it might sound these days, is through mindfulness, which can mean meditation or any activities “that you can’t do absent-mindedly: arts and crafts, sports, gardening, dancing.” Here’s why:

It’s only when we’re fretting about the future or reminiscing over the past that life seems too short, too fast, too out of control. When your attention is invested in present-moment experience, there is always exactly enough time. Every experience fits perfectly into its moment.

Note that Cain never mentions reading as one of those activities that one can’t do absent-mindedly. I don’t know about you, but if I read absent-mindedly I’m probably not actually reading at all, or at least not in such a way that I’ll retain anything. So whether or not he intended it, I’m throwing “reading books” into that list.

This is the bridge that connects these seemingly conflicting viewpoints, making them complementary. Much of this rests on the difference in time scale I referred to, which, if taken into account, begins to form a complete picture. Few would argue with the idea that fretting about the immediate past and future is detrimental to one’s experience of time, and few would claim that contemplation and consideration of history and the long-term repercussions of our actions is a waste of time.

The key word here might indeed be “fretting.” In this sense, the definition of “fretting” isn’t limited to “worrying,” but describes a broader practice of wasting energy and attention on things within a narrow temporal scope without taking any meaningful action to address whatever concerns might be contained within. We fret about choices we’ve made and what such-and-such a person is thinking about us or how we’ll ever manage to get through the day, week, or year with our sanity intact. We rarely fret about how the Khwarazmian Empire was woefully unprepared for the Mongol army under Genghis Khan in 1219, or how the human inhabitants of TRAPPIST-1d will successfully harvest the planet’s resources to support a growing populace.

And of course, nothing engenders fretting like social media. Already primed for fretting by the demands of work, family, and self-doubt, now we can fret in real time (and repeatedly) over anything relatives, acquaintances, total strangers, politicians, celebrities, and algorithms flash before our awareness. It is possible to exist in a state of permanent fret.

Let me tell you, time really freaking zooms when you’re fretting.

So let’s combine the recommendations of Jacobs and Cain to address our temporal-perception crisis. Let’s get off of Facebook and Twitter, let’s turn off the television, and let’s get to that stack of books (or list of ebooks if you prefer) and read. Let’s allow our brains to expand our awareness, considerations, and moral circle beyond this moment, this year, this era. Let’s not burden ourselves with the exhausting worries about what we’re reading or how long it will take to read it or what else we should be reading but aren’t. Let’s make time to chat with our kids and our parents, and write, tinker, draw, arrange, organize, build, repair, or tend as best suits us. Let’s stop and breathe and think of nothing for a few minutes as we focus on the present instant in time and space, even to the atomic level. And then let’s think big, daring, universe-spanning thoughts beyond all measure.

Let’s be bored, and let that boredom nudge, inspire, or shock us into activity, be it infinitesimal or polycosmic.

It will take practice. It will not be easy. Let’s accept that this, too, is a journey of time and effort and moments.

And let us fret no more.


If you feel so inclined, you can support my work through Patreon.

Madame Defarge’s Memes

At Wired, Issie Lapowsky summarizes some research that tells us something unsurprising: more or less no one is ever persuaded to change their mind about a political position because of a post they saw on Facebook.

I suppose people do actually think that their social media posts are badly-needed ammunition in the political war of ideas, and that their fierce, impassioned, and ironclad arguments will surely win over the misguided. I assume they really do think that. Intellectually.

But the truth, which I believe they at least feel at a gut level, is that these political social media posts are social tokens, signifiers of belonging to a particular group, earning good will and social capital by reaffirming that which they all already believe. That’s largely why I write political tweets: I think I can do so in a funny way and get some positive validation that might begin to fill the abyss that is my self-esteem. My zinger about Trump or my spirited defense of Hillary isn’t going to move the needle one teeny tiny little bit in anyone’s mind, and I have no expectation that it will.

At this level it’s harmless (other than those perilous moments when my tweets are not affirmed and I fail to achieve validation). The problem arises when the posts and tweets and memes go from social tokens to something more like Madame Defarge’s knitting. Outside of the more black-and-white world of election-year D vs. R posts, social media posts involving politics and heated social issues are designed to affirm via othering, drawing clear lines between the good people and everyone else who is irredeemably bad for failing to check every ideological box, whether they know those boxes exist or not.

And it’s not just reactions to one’s own posts that do this work. It’s the posts of others. Lapowsky writes:

The majority of both Republicans and Democrats say they judge their friends based on what they write on social media about politics. What’s more, 12 percent of Republicans, 18 percent of Democrats, and 9 percent of independents who responded say they’ve unfriended someone because of those posts.

So it’s not political persuasion, as we might like to believe, it’s shaking the trees for villains to fall out of, it’s political partitioning.

In the film Bananas, the Castro-like ruler Esposito delivers his first speech to his people, and tells them, “All citizens will be required to change their underwear every half-hour. Underwear will be worn on the outside so we can check.”

The kind of social media I’m talking about is that underwear you just changed, and you’re pretty damned proud that you did it after only 29 minutes.

The Real People Who Serve As the Internet’s Depravity Filter

An incredible investigative piece in Wired by Adrian Chen reports on the lives of contract content moderators, folks whose job it is to go through content posted to online platforms (such as Facebook, YouTube, Whisper, etc.) and deal with the content that violates a platform’s policies or the law. And yes, we’re talking about the really bad stuff: not just run-of-the-mill pornography or lewd images, but examples of humanity at its worst, from torture and sexual assault (involving adults, children, and animals) to beheadings.
Just reading Chen’s piece is a traumatic experience in and of itself, knowing what material is out there, what unthinkable behavior real people are engaging in, and what the relentless exposure to this content must do to the psyches of these grossly underpaid contract workers, whose lives are slowly being ruined, their well-being slowly poisoned, post by post and video by video. Simply reading this article will probably require some recovery time.

I can’t have a blog about tech, culture, and humanism without at least acknowledging what Chen has brought into daylight. I don’t think I have any novel observations at the outset, having just read it, still somewhat teetering on my heels. But here are some thoughts and questions that it raises for me:

First, the obvious: Are the major tech companies for whom this work is done really aware of what they put these moderators through? From the Bay Area liberal arts grads to the social-media-sweatshop moderators in the Philippines, hundreds of thousands of smart, sensitive human beings (and I think they must be smart and sensitive to have the kind of judgment and empathy required to do this kind of work) are having their minds eaten alive, losing their ability to trust, to love, to feel joy, with disorders that mirror, or explicitly are, post-traumatic stress. Do Mark Zuckerberg or Larry Page or whoever it is that runs Whisper give a damn? (Given how little Twitter has done to deal with abuse and harassment of its users, I think it’s safe to presume for now that they probably don’t.)

Also, now that we know what these folks are exposed to, what can we as users of these services do about it? What will we do about it? (I fear the answer is probably similar to what we all did when we learned about the conditions in factories in China: more or less nothing.)

Here’s what affected me the most about all of this. This report was a reminder of the depths of human depravity. Now, it’s not news that there are horrible people doing horrible things to each other, and likely ever shall it be. But something about the way it’s described in this report amplifies it for me. If these hundreds of thousands of moderators are being overwhelmed, deluged with violence and death and evil in all manner of their cruelly novel variations, how many of our fellow humans are perpetrators? These moderators are only catching the portion of these people who either get caught in the act or purposefully broadcast their actions. What more must be taking place? I can barely stand to ask the question.

Bearing witness to a video of a man doing something I cannot bear to recount here to a young girl, one moderator points us to the insidiousness of all of this, emphasis mine:

The video was more than a half hour long. After watching just over a minute, Maria began to tremble with sadness and rage. Who would do something so cruel to another person? She examined the man on the screen. He was bald and appeared to be of Middle Eastern descent but was otherwise completely unremarkable. The face of evil was someone you might pass by in the mall without a second glance.

Chen writes of how these moderators no longer feel they can trust the people in their day-to-day lives. You can see why.

Finally, I’ll be thinking about the fact that the very devices and services I am so fascinated and often entranced by are the delivery vessels for this horror. It is tempting to frame the tech revolution as a story of liberation and renaissance. But these tools are available to us all, to the best of us and the worst. What then? What now?

The iMortal Show, Episode 2: Your Self, in Pixels

Image by Shutterstock.

You are what you tweet? Do your “likes” tell the story of who you are?

For so many of us, major portions of our lives are lived non-corporeally. We don’t define ourselves solely by what we do in physical space in interaction with other live bodies, but also through pixelated representations of ourselves on social media.

How do we strike a balance between the real world and the streams of Twitter and Facebook? Can we be more truly ourselves online? When we tweet and share and comment, what parts of ourselves do we reveal, and what do we keep hidden or compartmentalized?

The way social media defines our identity is what we’re talking about on Episode 2 of the iMortal Show, with my guests: Activist and communications expert Sarah Jones, and master of tech, Twitter, and self-ridicule Chris Sawyer.

Download the episode here.

Subscribe in iTunes or by RSS.

The iMortal Show, Episode 2: “Your Self, in Pixels”

Originally recorded September 3, 2014.

Produced and hosted by Paul Fidalgo.

Theme music by Smooth McGroove, used with permission.

Running time: 43 minutes.

Links from the show:

Sarah Jones on Twitter, at Nonprophet Status, and her blog Anthony B. Susan.

Chris Sawyer (with “sentient skin tags”) on Twitter.

Previous episode: “Ersatz Geek”

iMortal posts:

Skepticism Warranted in the Panic Over Twitter Changes

Image by Shutterstock
The thing that’s been giving the online world a collective ulcer is the idea that Twitter is going to fundamentally change the way its service works by bringing Facebook-style curation to its real-time firehose. But is it really? Despite the recent rending of garments by the Twitter faithful, I have found that skepticism is warranted.

The panic began with the implementation of a system whereby tweets favorited by people you follow might appear out of context in your timeline, and things you favorite might emerge likewise in other people’s feeds. I wrote about how this was an ominous sign of Twitter mucking with what makes it great: real-time access to the zeitgeist, filtered only by whom you choose to follow.

Then the Wall Street Journal reported that Twitter’s CFO Anthony Noto had indicated that changes were coming to the traditional Twitter timeline:

Twitter’s timeline is organized in reverse chronological order, a delivery system that has not changed since the product was created eight years ago and one that some early adopters consider sacred to the core Twitter experience. But this “isn’t the most relevant experience for a user,” Noto said. Timely tweets can get buried at the bottom of the feed if the user doesn’t have the app open, for example. “Putting that content in front of the person at that moment in time is a way to organize that content better.”

Mathew Ingram’s analysis of this at GigaOm is what really had people reaching for their pitchforks and torches; he wrote that it “sounds like a done deal” and that coming modifications “could change the nature of the service dramatically.”

That sounds really scary to folks who have stuck with Twitter since the beginning and truly value what it provides. Twitter’s immediacy gives it its power and unique position in the media universe, standing in stark contrast to the heavily curated experience of Facebook.

But as even Ingram acknowledged in a subsequent post, this change — an algorithmic approach to the timeline — was probably meant for the “discover” tab of the site, which is already heavily curated by the service, and within Twitter’s search feature. In fact, that’s what the Journal article even says:

Noto said the company’s new head of product, Daniel Graf, has made improving the service’s search capability one of his top priorities for 2015.

“If you think about our search capabilities we have a great data set of topical information about topical tweets,” Noto said. “The hierarchy within search really has to lend itself to that taxonomy.” With that comes the need for “an algorithm that delivers the depth and breadth of the content we have on a specific topic and then eventually as it relates to people,” he added.

Sure sounds to me like he’s just talking about search, since that’s what he actually says. It didn’t help matters, I suppose, that whoever wrote the Journal’s headline chose to blare, “Twitter Puts the Timeline on Notice.” Because, no, it didn’t. Not here, anyway.

After Ingram’s first piece, in fact, Dick Costolo, Twitter’s CEO (who I presume is in a position to know) tweeted, “[Noto] never said a ‘filtered feed is coming whether you like it or not’. Goodness, what an absurd synthesis of what was said.”

What really settles all of this for me, though, is an interview given by Katie Jacobs Stanton, who is Twitter’s new media chief. What I read from what she tells Re/Code is that Twitter’s current and long-term strategies continue to revolve around the real-time, unfiltered timeline. Here’s Stanton on Twitter as a companion to live TV viewing (emphasis mine):

What’s happened is that every day our users have been able to transform the service into this second screen experience while watching live TV because Twitter has a lot of those characteristics. It’s live, it’s public, it’s conversational, it’s mobile. Television is something really unique, really powerful about Twitter and we’ve obviously made a big investment in that whole experience.

Here she is on the value Twitter provides during breaking news events:

We have a number of unique visitors that come to Twitter to get breaking news, to hear what people have to say. Joan Rivers died [Thursday] and people were grieving on Twitter and talking about her, but they’re also coming to listen to the voices on Twitter as they pay respect to Joan Rivers. This happens all the time. There’s also the broader syndication of tweets. News properties of the world embed tweets and cite tweets. That’s really unique.

Note how she emphasizes the fact that Twitter is not incidentally serving as this kind of platform, but that this makes it uniquely crucial to the wider media universe, for consumers of news and those producing it.

She later refers to Twitter as “the operating system of the news.” This does not sound to me like a service that is intent on dismantling what its own media boss is touting as its chief virtue.

Twitter will of course go through changes, and its experiments with favorites-from-nowhere are an example of that. It won’t always be exactly as it is. But I’m beginning to believe that the recent panic (including my own) is unwarranted. I suspect that the people at Twitter understand why people use it as religiously and obsessively as they do, that they use it very differently from the way they use Facebook, and that there needs to always be a way for a user to tap into the raw stream.

Maybe, down the road, the initial experience Twitter presents a user with is one that is more curated, more time-shifted, but I suspect that the firehose will always be there for the faithful.

Twitter is Monkeying with What Makes it Great

Original image from Shutterstock.
Twitter has a lot of problems. It doesn’t seem to have the wherewithal to deal with abuse and harassment on its platform, it’s managed to antagonize the developer community by limiting anyone’s ability to make new apps and interfaces to the service, and, oh yeah, it still doesn’t really know how to make money for itself. But the core service is something truly valuable and truly simple, and in that simplicity it has been – dare I say it? Yes I dare! – revolutionary.

But under the shadow of Facebook’s supermassive user base and Google’s vast resources underpinning so much of what we know of as the Web, Twitter seems willing to at least experiment with making fundamental changes to what makes it so great in the first place.

Anyone using the Twitter web interface might have noticed already that not only are retweets (when one posts someone else’s tweet on their own timeline) appearing in users’ feeds, but so are other people’s favorites (when you click the star on a tweet). Not all favorites from all followers, but those determined by algorithm to be of potential interest to you.

Here’s how Twitter itself explains the new order (with my emphasis):

[W]hen we identify a Tweet, an account to follow, or other content that’s popular or relevant, we may add it to your timeline. This means you will sometimes see Tweets from accounts you don’t follow. We select each Tweet using a variety of signals, including how popular it is and how people in your network are interacting with it. Our goal is to make your home timeline even more relevant and interesting.

That’s right, Twitter is playing with building its own Facebook-like curation brain. Or to put it another way, Twitter is putting kinks in its own firehose.

This is disconcerting to longtime Twitter users for a number of reasons. First is the relinquishing of control being forced on the user: what was once a raw feed of posts from a list of people entirely determined by the user will become one where content is inserted that the user may not even want there. As John Gruber put it, “That your timeline only shows what you’ve asked to be shown is a defining feature of Twitter.” Maybe not for long.

Another issue is that this content can be time-shifted, meaning that the immediacy of dipping into one’s Twitter stream for the second-by-second zeitgeist will become diluted at best, and meaningless at worst.

But also, this one relatively minor change in the grand scheme of things signifies an entirely different concept for what a “favorite” means on Twitter. It’s really never been entirely clear to me what clicking the star on someone’s tweet was supposed to signify, but as with many things on Twitter, folks have made it their own. For some it’s the equivalent of a nod or smile of approval without a verbal response, for others it serves the purpose of a bookmark, so you can return to it later. It’s never been meant as a “signal” to Twitter to provide more content like that tweet. Importantly, it’s always mainly been between the user and the original tweeter (not entirely, as one can click through on a given tweet and see all those who have favorited something), and now that’s completely gone. Now you have to assume that your favorites, along with your retweets, will be broadcast, put in front of people in their timelines.

Dan Frommer says changes like this may be necessary for Twitter’s longtime viability:

The bottom line is that Twitter needs to keep growing. The simple stream of tweets has served it well so far, and preservationists will always argue against change. But if additions like these—or even more significant ones, like auto-following newly popular accounts, resurfacing earlier conversations, or more elaborate features around global events, like this summer’s World Cup—could make Twitter useful to billions of potential users, it will be worth rewriting Twitter’s basic rules.

But with events around the world being as they are, the value of the Twitter firehose hasn’t been this clear since perhaps Iran’s Green Revolution. For Twitter to be monkeying with its fundamentals, the things that make it stand apart from Facebook and other platforms, is frightening. I have to hope that if Twitter does take this too far, another platform will emerge that can be all that was good about Twitter and also attract a critical mass of users to make it valuable.

Maybe we should have given more of a shot.

Ferguson as Portrayed by Facebook and Twitter: Algorithms Have Consequences

If Facebook’s algorithm is a brain, then Twitter is a stream of consciousness. The Facebook brain decides what will and will not show up in your newsfeed based on an unknown array of factors, a major category of which is who has paid for extra attention (“promoted posts”). Twitter, on the other hand, is a firehose. If you follow 1000 people, you’ll see more or less whatever they tweet, at the time they tweet it, at the time you decide to look.

As insidious as it feels, the Facebook brain serves a function by curating what would otherwise be a deluge of information. For those with hundreds or thousands of “friends,” seeing everything anyone posts as it happens would be a disaster. Everyone is there, everyone is posting, and no one wants to consume every bit of that.

Twitter is used by fewer people, often by those more savvy in social media communication (though I’m sure that’s debatable), the content is intentionally limited in scope to 140 characters including links and/or images, and the expectation of following is different than with Facebook. On Facebook, we “follow” anyone we know, family of all ages and interests, old acquaintances, and anyone else we come across in real life or online. On Twitter, it’s generally accepted that we can follow whomever we please, and as few as we please, and there need be no existing social connection. I can only friend someone who agrees to it on Facebook (though I can follow many more), but I can follow anyone I like on Twitter whose account is not private.

So the Facebook brain curates for you, the Twitter firehose is curated by you.
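The contrast can be made concrete with a toy sketch. To be clear, this is not either platform’s actual ranking code; the `engagement` and `promoted` fields are invented stand-ins for whatever opaque signals Facebook actually weighs:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    timestamp: int      # seconds since some epoch
    engagement: float   # hypothetical likes/shares signal
    promoted: bool = False

def firehose_feed(posts):
    """Twitter-style: everything you chose to follow, newest first."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def curated_feed(posts, limit=2):
    """Facebook-style: an opaque score decides what you see at all."""
    def score(p):
        # Invented weighting: engagement plus a big boost for paid posts.
        return p.engagement + (10.0 if p.promoted else 0.0)
    return sorted(posts, key=score, reverse=True)[:limit]

posts = [
    Post("ann", "breaking news, happening now", timestamp=300, engagement=1.0),
    Post("bob", "my vacation photos", timestamp=100, engagement=5.0),
    Post("adco", "buy our widgets", timestamp=200, engagement=0.5, promoted=True),
]

# The firehose shows everything, in reverse chronological order.
print([p.author for p in firehose_feed(posts)])   # ['ann', 'adco', 'bob']

# The curated feed surfaces the promoted and popular posts,
# and the timely post never appears at all.
print([p.author for p in curated_feed(posts)])    # ['adco', 'bob']
```

The point of the sketch is the last line: the breaking-news post, the newest and arguably most important item, is exactly the one the engagement-ranked feed drops.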

Over the past few days, we have learned that there are significant social implications to these differences. Zeynep Tufekci has written a powerful and much talked about piece at Medium centered on the startling idea that net neutrality, and the larger ideal of the unfiltered Internet, are human rights issues, illustrated by the two platforms’ “coverage” (for lack of a better word) of the wrenching events in Ferguson, Missouri. She says that transparent and uncensored Internet communication “is a free speech issue; and an issue of the voiceless being heard, on their own terms.”

Her experience of the situation in Ferguson, as seen on Twitter and Facebook, mirrored my own:

[The events in Ferguson] unfolded in real time on my social media feed which was pretty soon taken over by the topic — and yes, it’s a function of who I follow but I follow across the political spectrum, on purpose, and also globally. Egyptians and Turks were tweeting tear gas advice. Journalists with national profiles started going live on TV. And yes, there were people from the left and the right who expressed outrage.

… I switched to non net-neutral Internet to see what was up. I mostly have a similar composition of friends on Facebook as I do on Twitter.

Nada, zip, nada.

No Ferguson on Facebook last night. I scrolled. Refreshed.

Okay, so one platform has a lot about an unfolding news event, the other doesn’t. Eventually, Facebook began to reflect some of what was happening elsewhere, and Ferguson information did begin to filter up. But so what? If you want real-time news, you use Twitter. If you want a more generalized and friendly experience, you use Facebook. Here’s the catch, according to Tufekci:

[W]hat if Ferguson had started to bubble, but there was no Twitter to catch on nationally? Would it ever make it through the algorithmic filtering on Facebook? Maybe, but with no transparency to the decisions, I cannot be sure.

Would Ferguson be buried in algorithmic censorship?

Without Twitter, we get no Ferguson. The mainstream outlets have only lately decided that Ferguson, a situation in which a militarized police force is laying nightly violent siege to a U.S. town of peaceful noncombatants, is worth their attention, and this is largely because the story has gotten passionate, relentless coverage by reporters and civilians alike on Twitter.

Remember, Tufekci and I both follow many of the same people on both platforms, and neither of us saw any news of Ferguson surface there until long after the story had already broken through to mainstream attention on Twitter. What about folks who don’t use Twitter? Or don’t have Facebook friends who pay attention? What if that overlap was so low that Ferguson remained a concern solely of Twitter users?

And now think about what would have happened if there was no Twitter. Or if Twitter adopted a Facebook algorithmic model, and imposed its own curation brain on content. Would we as a country be talking about the siege on Ferguson now? If so, might we be talking about it solely in terms of how these poor, wholesome cops were threatened by looting hoodlums, and never hear the voices of the real protesters, the real residents of Ferguson, whose homes were being fired into and children were being tear-gassed?

As I suggested on Twitter last night, “Maybe the rest of the country would pay attention if the protesters dumped ice buckets on their heads. Probably help with the tear gas.” Tufekci writes, “Algorithms have consequences.” I’ve been writing a lot about how platforms like Facebook and Twitter serve to define our personal identities. With the Facebook brain as a sole source, the people of Ferguson may have had none at all. With the Twitter firehose, we began to know them.

What Happens When You Starve the Facebook Brain?

The Facebook algorithm, the “brain” which decides what content to feature, what content to bury, and what content to put in front of you, is being tested mightily of late. One writer tried to game the Facebook brain by disguising his posts as major life events in hopes of seeing them rise to the top. Another tried to overwhelm the brain (and himself) by clicking “like” on literally everything he saw.

Elan Morgan had a different idea altogether. Instead of gaming the Facebook brain, she more or less ignored it. Taking the opposite tack from Mat Honan, the Wired writer who liked all content for 48 hours without discrimination (and suffered for it), Morgan stopped clicking like altogether. She describes her troubles with the entire concept:

I actually felt pangs of guilt over not liking some updates, as though the absence of my particular Like would translate as a disapproval or a withholding of affection. I felt as though my ability to communicate had been somehow hobbled. The Like function has saved me so much comment-typing over the years that I likely could have written a very quippy, War-and-Peace-length novel by now.

Rather than give the Facebook brain a deluge of contradictory feedback as Honan did, Morgan gave it none at all, leaving it with little data on which to base its curation. The result? Well, in a way, she got Facebook – the one we all used to like – back:

Now that I am commenting more on Facebook and not clicking Like on anything at all, my feed has relaxed and become more conversational.

Imagine that. This is what drew me to Facebook to begin with, when it still seemed to be a platform mainly for college students. It distinguished itself from MySpace with a clean, uncomplicated interface and a news stream that didn’t necessitate going to an individual’s page to interact. And when you wanted to interact, to comment or ask a question, it was quick and easy.

But when Facebook turned so strongly in the direction of heavy algorithm-based curation as almost literally everyone began posting on it, it turned into something that resembled a WalMart lined with cheesy inspirational posters. Community and interaction became incidental to passive consumption of content. Passive, save for the “like.”
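The like-starving dynamic Morgan describes can be sketched as a toy ranker. The scoring weights and post data below are invented for illustration; Facebook’s real algorithm is proprietary and vastly more complex. The point is only that when likes are a heavily weighted signal, withholding them leaves comments to drive the ranking, so conversational posts surface:

```python
# Toy sketch: a like-heavy scoring function. The weights are made up,
# not Facebook's. When the like signal is withheld, comments become
# the dominant signal and discussion-heavy posts outrank easily
# likeable ones.

def score(post, like_weight=5.0, comment_weight=2.0):
    return post["likes"] * like_weight + post["comments"] * comment_weight

posts = [
    {"id": "viral-meme", "likes": 40, "comments": 1},   # easy to like
    {"id": "discussion", "likes": 3,  "comments": 25},  # sparks conversation
]

ranked_normally = sorted(posts, key=score, reverse=True)

# Simulate a user who never clicks Like: zero out the like signal.
ranked_without_likes = sorted(
    posts, key=lambda p: score({**p, "likes": 0}), reverse=True
)
```

Under these assumed weights, the meme wins the ordinary ranking, while the conversation-starter wins once likes are off the table.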

Morgan saw this too:

I had been suffering a sense of disconnection within my online communities prior to swearing off Facebook likes. It seemed that there were fewer conversations, more empty platitudes and praise, and a dearth of political and religious pageantry. It was tiring and depressing. After swearing off the Facebook Like, though, all of this changed. I became more present and more engaged, because I had to use my words rather than an unnuanced Like function. I took the time to tell people what I thought and felt, to acknowledge friends’ lives, to share both joys and pains with other human beings. It turns out that there is more humanity and love in words than there are in the use of the Like.

I think this is an experiment very much worth pursuing. As Mike Daisey wrote (on Facebook) in response to Morgan’s piece, “[I]t might help make it closer to being a discussion board, which is what I wish it to be.” Same here.

But there are different perspectives on the “like.” Anil Dash wrote back in 2011 how he uses likes, Twitter “favoriting,” and other forms of social media up-voting with specific intention:

[F]avoriting or liking things for me is a performative act, but one that’s accessible to me with the low threshold of a simple gesture. It’s the sort of thing that can only happen online, but if I could smile at a person in the real world in a way that would radically increase the likelihood that others would smile at that person, too, then I’d be doing that all day long.

This idea, likes as a stand-in for in-person smiles and nods, is part of what Morgan finds problematic, that they are substanceless. “The Like is the wordless nod of support in a loud room,” she writes. “It’s the easiest of yesses, I-agrees, and me-toos.”

There’s nothing wrong with yesses and me-toos, of course. Sometimes that’s really all that’s worth saying, and that’s okay. I think the trick is to know when a mere “like” or “fav” truly is sufficient, when a more substantive response is warranted, and when it’s best, or just okay, to let something go by without expressing an opinion at all. After all, not every opinion needs expressing, does it?

I keep coming back to the idea of using Facebook and other social media with intention: knowing that there is an algorithm behind this platform that dominates so much of our online experience, and acting on that understanding. That might mean becoming far more judicious with your likes and favoring prose responses over a mere thumbs-up. And maybe it means you eschew a reaction on Facebook’s platform altogether (thereby bypassing the Facebook brain entirely), and put your response into a blog post, a tweet, or a private email.

Just because something starts or is discovered on Facebook doesn’t mean it has to stay there. That brain doesn’t own you.

What the Facebook Brain Thinks of You


If you’re in public relations, journalism, entertainment, or another similar field, you already know that Facebook wields enormous power, probably far too much, because its algorithms chiefly decide to what degree any content you post will be seen by users. Only the people who work at Facebook can know precisely how every factor is weighed, but we do know that elements of the content itself help determine its delivery, as does how it’s received once it meets its first few sets of eyeballs.

You can almost think of Facebook as an acting agent.

When you sign up with the agency, you audition for the agent and show them what you’ve got. This is you initially submitting your content to Facebook. (Let’s not take this analogy too literally — obviously the agent can turn you away, whereas Facebook almost never declines to post something you submitted.)

The agent then shops you around to a few producers and directors and casting directors, and sees how you fare in the market. This is the first few moments of activity, if any, your content sparks on Facebook.

If you (the actor) start getting hired, make a good name for yourself, and engender more interest, the agent puts you further up their priority list and shows you off to more and more muckity-mucks, for higher pay and more exposure, which leads to even better jobs. This is when your post does well, earning a bunch of likes, comments, and shares, which in turn generate more likes, comments, and shares.

Or, the acting jobs don’t turn out, the agent loses interest, and you go into career limbo. This is the fate of most actors, really, and most content on Facebook.
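The agent analogy above is, at bottom, a feedback loop: early engagement buys more exposure, which buys more chances to engage. A minimal simulation, with every number invented for illustration (Facebook’s actual amplification factors are unknown), shows how small differences in initial reception compound:

```python
# Toy feedback loop: each round, the "agent" shows the post to an
# audience; the fraction who engage earns proportionally more viewers
# in the next round. The 3x amplification factor is a made-up
# assumption, not a real platform parameter.

def simulate_reach(engagement_rate, rounds=5, seed_audience=100):
    audience = seed_audience
    total_reach = 0
    for _ in range(rounds):
        total_reach += audience
        engaged = int(audience * engagement_rate)
        audience = engaged * 3  # hypothetical: each engagement earns ~3 viewers
        if audience == 0:       # the agent loses interest: career limbo
            break
    return total_reach

hit = simulate_reach(0.5)   # a post that lands with half its first viewers
flop = simulate_reach(0.1)  # a post that lands with one in ten
```

In this sketch, a fivefold difference in initial engagement becomes nearly a tenfold difference in total reach: the rich-get-richer dynamic that strands most posts, like most actors, in limbo.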

So what a couple of folks lately have been trying to do is game the agent, and we have examples from two different directions: as the content creator/submitter (the actor in the analogy) and as the audience for the content (the producers and casting directors and box office numbers).

On the first part, we have Caleb Garling at The Atlantic, who decided he’d package his content in a particular way to trick the Facebook algorithm into giving his content traction.

I wanted to see if I could trick Facebook into believing I’d had one of those big life updates that always hang out at the top of the feed. People tend to word those things roughly the same way and Facebook does smart things with pattern matching and sentiment analysis. Let’s see if I can fabricate some social love.

I posted: “Hey everyone, big news!! I’ve accepted a position trying to make Facebook believe this is an important post about my life! I’m so excited to begin this small experiment into how the Facebook algorithms processes language and really appreciate all of your support!”

You can guess what happened. Though totally manufactured, the post did very well, as opposed to some of his previous substantive posts.

Garling admits that he doesn’t know precisely why this worked. You’d need to do a lot more testing outside of a clever novelty stunt to understand what will and will not make Facebook lift your material. But it’s an intriguing way of thinking about how this electronic brain that makes so many decisions for us in terms of what content we see online actually works.

On the other side, we have the brilliant Mat Honan at Wired who decided not to submit content, but to respond to it. All of it. With no particular goal in mind, Honan decided to run an experiment in which he would click “like” on almost literally everything Facebook put in front of him for 48 hours, just to see how his Facebook experience would change. And the results were varying degrees of horrifying. If you hate Facebook now (as I do), just imagine if it were always like this (and this was just in the first 60 minutes):

My News Feed took on an entirely new character in a surprisingly short amount of time. After checking in and liking a bunch of stuff over the course of an hour, there were no human beings in my feed anymore. It became about brands and messaging, rather than humans with messages.

But over time, like a great body of water, or really, a brain with moods and dispositions, things changed. As a result of a rogue like on a conservative-leaning comment, things got ugly and nutty really fast, with a swarm of frightening hard-right vitriol flooding the feed. And this led to something we now know all too well in a world of curation-by-servers:

This is a problem much bigger than Facebook. It reminded me of what can go wrong in society, and why we now often talk at each other instead of to each other. We set up our political and social filter bubbles and they reinforce themselves—the things we read and watch have become hyper-niche and cater to our specific interests. We go down rabbit holes of special interests until we’re lost in the queen’s garden, cursing everyone above ground.

And those bubbles can get very small. We tend to think of this in terms of liberals and conservatives, or maybe Apple and Android fans. But look at what it does to niche within niche, like the skepto-atheosphere, where a burgeoning movement of folks who largely all ought to be on the same side cannot seem to stop eating each other alive online, and hunkering down with those who are ideologically pure – at ever-increasing rates of purity, and therefore ever-shrinking bubbles. As Felicia Day put it on her own blog, “We’re being tricked into believing that our small worlds are much bigger than they really are in the grand scheme of things.”

Garling might have tricked the Facebook brain into thinking he had posted content that was not what it seemed. Honan definitely tricked the Facebook brain into thinking he was asking to see all manner of content he never really would.

What this tells me is that, yes, the brain is fallible, but more importantly, that intention matters enormously when it comes to social media. In previous posts here, I’ve talked about how the “crisis” of this filter bubble can be mitigated by intentional self-curation: by being mindful of what you approve of, what you click, what you post, and what you seek out.

Meanwhile, you can’t allow what you see on social media, or what you post to it, to define who you are in your own eyes. So the other lesson is to be intentional in your own self-perception. An actor’s sense of self can rise or fall by the approval of their agent and the industry to which the agent presents them. But it shouldn’t. If the Facebook algorithm is a brain, it’s just one brain, and it’s not a very wise one.