It looks like I managed to completely miss February on here. The best-laid plans, eh? Well, I’m still alive, for what it’s worth. Life has been low-key stressing me out for the past month-plus, but should be getting back to some semblance of normalcy soon. (The 30-second version: at the start of the month, we discovered a slow leak in the kitchen plumbing, which had started to warp the flooring. Various mitigation measures were brought in — drying mats and industrial dehumidifiers and the like — but ultimately they ended up needing to pull up the flooring. And the counters. Neither of which they could match, so now they’re replacing the entire floor downstairs and getting new counters. Hurrah for home insurance!)
Not much else to report. I’ll be sure to get back to posting random links and writing the occasional screed soon. Thanks for sticking around.
My heart goes out to the journalists laid off this week at multiple organizations (and more besides): something like a thousand in the space of a week. Fast Company has a solid (and scathing) article about the recent BuzzFeed layoffs: BuzzFeed’s layoffs and the false promise of “unions aren’t for us”. It paints a pretty bleak picture of where things stand, why, and what we can expect more of in the future.
But as an outlet largely dependent on social platforms like Facebook, BuzzFeed was forced to follow platform trends. When Facebook announced it was focusing on video content, BuzzFeed turned its resources just to that. Brands like Tasty were born, which force-fed ubiquitous bird’s-eye view videos of generally unappetizing food to the masses. And for a while, this seemed to work. Videos were performing well, thanks to Facebook’s algorithmic push, and BuzzFeed once again looked like a digital trailblazer. But this bet was predicated on the whim of a social network known for pendulum strategy shifts at the expense of its clients; this pivot didn’t take into account what would happen if Facebook changed course. It shouldn’t come as a shock that Facebook did precisely that.
I’ve talked with a number of people about why I’m perfectly happy to live somewhere that is grey and rainy but almost never drops below 40 degrees (F), where a lot of the winter is spent right around 50. Hank is dropping some truth bombs here on how to make the most of living in a cold climate.
He’s right though: try to enjoy it. Gotta find the good parts while you can.
Over at Lost Garden, Daniel Cook has a fantastic piece looking at how to create “human-scale” online games, and why that’s a better approach to MMOs. This is some really well-thought-out stuff, and not just for games: what he’s really talking about is how to build community.
One way of thinking about the constraints suggested by Dunbar’s Layers is to imagine you have a budget of cognitive resources that can be spent on relationships. The physical limits of your human brain mean that you only have enough mental budget for a total of roughly 150 relationships.
Humans have developed a few tools that have expanded our ability to organize into groups well past our primate cousins—most notably language, but also large-scale systems of government and economics. In the early 2000s, people assumed that new technologies like online social networks could help break past Dunbar’s Number; by offloading the cost of remembering our friendships to a computer, we could live richer, more social lives, with strong relationships to even more people.
We now have copious data that this is not the case. Studies suggest that there’s still a limited budget of cognitive resources at play and even in online platforms we see the exact same distribution of relationships.
If anything, social networks damage our relationships. By making it possible for us to cheaply form superficial relationships (and invest our limited energy in maintaining them), such systems divert cognitive resources from smaller, intimate groups out towards larger, less-intimate groups. The result is that key relationships with best friends and loved ones suffer. And, unfortunately, it is the strength of these high-trust relationships that is most predictive of mental health and overall happiness.
It’s a long read (which you might have guessed by the size of my pull quote), but well worth it.
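To make that “budget” framing a little more concrete, here’s a toy sketch in Python. To be clear, this is entirely my own invention, not from the article: the ring sizes follow the commonly cited Dunbar layers (5/15/50/150, cumulative), but the upkeep costs and the total budget are made-up numbers, purely for illustration.

```python
# Toy model of the "relationship budget" idea from the pull quote above.
# Ring sizes follow the commonly cited Dunbar layers; the upkeep costs
# and total budget are invented for illustration only.

LAYERS = [
    # (layer, slots in this ring, upkeep cost per relationship)
    ("loved ones",       5, 20.0),
    ("close friends",   10,  5.0),   # 15 cumulative
    ("friends",         35,  1.5),   # 50 cumulative
    ("acquaintances",  100,  0.3),   # 150 cumulative
]

BUDGET = 150.0  # arbitrary units of cognitive effort

def spend(order):
    """Fill layers in the given order until the budget runs out."""
    budget = BUDGET
    for name, slots, cost in order:
        filled = min(slots, int(budget // cost))
        budget -= filled * cost
        print(f"  {name:>13}: {filled:3d}/{slots}")

print("Intimate relationships first:")
spend(LAYERS)

print("Cheap, superficial relationships first (the social-media pattern):")
spend(list(reversed(LAYERS)))
```

Run it and the second ordering fills all hundred acquaintance slots while the “loved ones” ring ends up empty, which is exactly the dynamic the article is warning about: cheap, superficial ties quietly crowd out the expensive, intimate ones.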
Most Americans today find work drudgery and leisure anxiously vacant. In our hours off work, we rarely achieve thrilling adventure or deliberate self-education, or engage in Whitmanian loafing. At the same time, faith is eroding in the idea that paid work can offer pleasure, self-discovery, a means for improving the world, or anything more than material subsistence.
I mean, they’re not wrong. I’m lucky enough to have a decent job with some flexibility to learn and grow, but jobs like that are decidedly not the majority of jobs out there. And while the work side might not be terrible at the moment, the “vacant leisure” is real. The author continues:
Recreational pursuits more demanding than fleeting digital absorption are, increasingly, acts of consumption. Leisure is not something you “do” but something you “buy,” whether in the form of hotels and cruises or Arianna Huffington–vetted mindfulness materials. The leisure industry provides work for some while promising relaxation to others, for a fee.
The sorry state of leisure is partly a consequence of an economy in which we are never fully detached from the demands of work. The category of “free” time is not only defined by its opposite (time “free” of work); it is subordinated to it. Free time, Theodor Adorno warns, “is nothing more than a shadowy continuation of labor.” Free time is mere recovery time. Spells of lethargy between periods of labor do little but prepare us for the resumption of work. Workers depleted by their jobs and in need of recuperation turn to escapist entertainment and vacuous hobbies. And the problem of figuring out when work is “over,” in an economy in which knowledge workers spend their job hours tweeting and their evening hours doing unpaid housework and child care, has never seemed more perplexing.
Yep. The conversation continues from there, and is worth the time to read.
I’m not sure how or why discussion of the literary theory of the “death of the author” cropped up recently, but it did. Lindsay Ellis has an excellent video essay about it (and really, go watch the rest of her stuff while you’re at it). It covers a lot of ground, including some of the impact that our current culture’s shift towards personal brand (and the subsequent association of any work with that brand) has on how we think about creators and their creations. Go watch:
This prompted a follow-up response by John Scalzi, which I think is also worth reading. In the process of discussing how the author is not the book and the book is not the author, he noted:
One side effect of this is that you should expect that at one point or another the authors whose work you admire will disappoint you, across a spectrum of behaviors or opinions. Because they’re human, you see. Think of all the humans you know, who have never disappointed you in one way or another. Having difficulty coming up with very many? Funny, that.
(Don’t worry, you’ve disappointed a whole bunch of people, too.)
I think that’s a pretty salient point to remember these days. That’s not to excuse people who do terrible things, nor to say there isn’t merit in boycotting the work of someone who did terrible things. But if you treat every gaffe or slight as unconscionable, you’re setting yourself up to be constantly outraged and constantly disappointed. Everyone’s line is going to be different, though, and it’s your call whether any particular occasion crosses that line. Relatedly:
A shitty human can write great books (or make lovely paintings, or fantastic food, or amazing music, etc), and absolutely lovely humans can be aggressively mediocre to bad artists. There is very little correlation between decency and artistic talent. You don’t need to be a good human in order to understand human behavior well enough to write movingly about it; remember that con men are very good judges of character.
With that said, if you discover that the writer of one of your favorite novels (or whatever) doesn’t live up to your moral or ethical standards, you’re not obliged to give them any more of your time or attention, because life is too short to financially or intellectually support people you think are scumbags. Likewise, you and you alone get to decide where that line is, and how you apply it. Apply one standard for one author, and a different one for another? Okay! I’m sure you have your reasons, and your reasons can just be “because I feel like it.” Just like in real life, you might put up with more bullshit from one person than another, for reasons that are personal to you.
Just more things to chew on.
Addendum: Neil Gaiman also touched on some of this topic (more specifically, the commingling of author and work, and people making assumptions about the author based on characters in their book), and makes a point I wanted to note:
Well, unless you are going to only write stories in which nice things happen to nice people, you are going to write stories in which people who do not believe what you believe show up, just like they do in the world. And in which bad things happen, just as they do in the world. And that’s hard.
And if you are going to write awful people, you are going to have to put yourself into their shoes and into their head, just as you do when you write the ones who believe what you believe. Which is also hard.
My last post discussed the state of social media (and frankly, the internet) and a possible future. I feel like I can say relatively objectively that things are pretty broken as they currently stand, in an actively-harming-society sort of way. But when you’re entrenched, it can be incredibly difficult to see a way out, even if you know you need to. I wanted to take a minute to talk about some of the hurdles to moving on from this mess. (Fair warning, this is a little long.)
I’ve been thinking a lot lately about what the internet will look like in the future. Right now, it’s dominated by social media in one form or another, with large, megacorp silos acting as our primary sources of information and discourse. This is a shift from the “DIY” homegrown state of the early internet — while there were absolutely megacorps that cornered entire niches of the internet (think of the likes of AOL, for instance), it didn’t feel like quite as much of a stranglehold. There was lots of room for growth, plucky startups, and homegrown projects: a proliferation of open source projects for running your own website in all sorts of wacky configurations (weird mishmashes of CMSes, blog software, forums, and galleries, with some shaky handcrafted glue between them all). There were lots of platforms, with different standards and protocols popping up all the time, and nothing talking to anything else all that well. Early attempts at cohesion had mixed success (OAuth, yay! RSS, woo! Trackbacks… um).
Given that sort of morass, it’s perhaps unsurprising that a lot of these homegrown solutions gave way to professionally run central services. Tired of fiddling with your tumble log? Go sign up for Tumblr. Microblogging micromanagement got you down? Check out this Twitter thing. Sick of managing 500 logins to different forums? Roll on over to Reddit. Gallery software galling you? Find your way to Flickr! They handle the backend, you provide the content, and your audience grows as their userbase does! Sounds great, right?