Over at the Atlantic, Daniel Markovits discusses how meritocracy harms everyone. While many of the examples he cites center on the wealthy elite, that’s kind of the point – it’s already clear to people outside that caste that the notion of meritocracy, while good in concept, is a sham in practice. As soon as the criteria for “merit” become understood, they are gamed by those in a position to do so, further entrenching the already wealthy (but with a slightly more palatable veneer than the up-front nepotism of previous systems).
Hardworking outsiders no longer enjoy genuine opportunity. According to one study, only one out of every 100 children born into the poorest fifth of households, and fewer than one out of every 50 children born into the middle fifth, will join the top 5 percent. Absolute economic mobility is also declining—the odds that a middle-class child will outearn his parents have fallen by more than half since mid-century—and the drop is greater among the middle class than among the poor. Meritocracy frames this exclusion as a failure to measure up, adding a moral insult to economic injury.
I agree that it’s a problem, and I don’t really know a good solution. It sounds like the author has some ideas (some of which sound a little… optimistic to me). It’s clear that things are coming to a head, though, and something is going to have to change.
If you’re just not a morning person, science says you may never be, over at Vox. This is from a few years ago, but I only came across it relatively recently: there’s increasing evidence that whether you’re a night person or a morning person has a genetic component, and fighting it can have significant impacts on your health. A salient bit:
Even people who are slightly more oriented to the evening — people who would like to sleep between 1 am and 9 am, say — may be faced with a difficult choice: Listen to your body, or force it to match the sleep habits of most everyone else?
Research has been gaining insight on that question. It turns out our internal clocks are influenced by genes and are incredibly difficult to change. If you’re just not a morning person, it’s likely you’ll never be, at least until the effects of aging kick in.
And what’s more, if we try to live out of sync with these clocks, our health likely suffers. The mismatch between internal time and real-world time has been linked to heart disease, obesity, and depression.
Brian Resnick, Vox
For the record, I’ve always been a night owl. During extended periods when I’ve set my own schedule, I tend to go to bed around 2am and get up around 10am. That’s still just 8 hours, but because it’s offset from the schedule most of society runs on, it still comes across as oversleeping. (These days, a mixture of work and the dog keeps me getting up earlier… but I also find myself needing a nap more often.)
It reminds me of something from The Devil’s Dictionary:
DAWN, n. The time when men of reason go to bed. Certain old men prefer to rise at about that time, taking a cold bath and a long walk with an empty stomach, and otherwise mortifying the flesh. They then point with pride to these practices as the cause of their sturdy health and ripe years; the truth being that they are hearty and old, not because of their habits, but in spite of them. The reason we find only robust persons doing this thing is that it has killed all the others who have tried it.
Configuring buildings and public spaces as selfie sets may well work for tourism promotion and the buzz of a launch, but once the novelty factor has worn off, the whimsy can grate and the flimsiness become all too apparent. The urge for quick, affordable spectacle often leads to stick-on, paper-thin cladding materials that look good in photographs, but weather terribly. The stained, peeling facades of the last decade stand as a grim testament to prioritising photographability over function.
Art is wonderful, and public art is essential. It enhances and enriches the spaces we inhabit. But when we insert an overtly capitalist motivation of cashing in on a craze, we sacrifice the care and consideration in how we craft this art.
Something this also gets me thinking about is replication – that is, our penchant for seeing an interesting photo and striving to replicate it. This isn’t new, and in many cases it’s by design: consider Yosemite, where you round a corner and get a reveal of El Capitan or Half Dome. You stop, say “Wow,” and take a picture. That experience was designed over a century ago, and is part of why we have 400 million nearly identical shots of those locations. It just bums me out, for some vague reason I can’t quite put my finger on, that rather than being inspired by an interesting photo, there is this impetus to simply replicate it. I should probably chew on that a bit.
This research project revisited the primary city of writer William H. Whyte’s Street Life Project and seminal study The Social Life of Small Urban Spaces (1980). It sought to understand how the types of new public spaces have changed some 40 years after he published his book and companion film, what has changed in how people use public realm spaces, and what makes well used spaces.
I think it’s interesting to juxtapose their findings with the expectations created by architects, urban planners, and the like when making mockups of their planned spaces. (Expectations versus reality.) It’s also interesting to think about how this could be applied not just to building more effective spaces, but to presenting more realistic crowd scenes in film and other kinds of performance.
My heart goes out to the journalists laid off this week at multiple organizations – something like a thousand people in the space of a week. Fast Company has a solid (and scathing) article about the recent BuzzFeed layoffs: BuzzFeed’s layoffs and the false promise of “unions aren’t for us”. It paints a pretty bleak picture of where things are at, why, and what we can expect more of in the future.
But as an outlet largely dependent on social platforms like Facebook, BuzzFeed was forced to follow platform trends. When Facebook announced it was focusing on video content, BuzzFeed turned its resources just to that. Brands like Tasty were born, which force-fed ubiquitous birds’-eye view videos of generally unappetizing food to the masses. And for a while, this seemed to work. Videos were performing well, thanks to Facebook’s algorithmic push, and BuzzFeed once again looked like a digital trailblazer. But this bet was predicated on the whim of a social network known for pendulum strategy shifts at the expense of its clients; this pivot didn’t take into account what would happen if Facebook changed course. It shouldn’t come as a shock that Facebook did precisely that.
Over at Lost Garden, Daniel Cook has a fantastic piece looking at how to create “human-scale” online games, and why that’s a better approach to MMOs. It’s really well-thought-out stuff, and not just for games: what he’s really talking about is how to build community.
One way of thinking about the constraints suggested by Dunbar’s Layers is to imagine you have a budget of cognitive resources that can be spent on relationships. The physical limits of your human brain mean that you only have enough mental budget for a total of roughly 150 relationships.
Humans have developed a few tools that have expanded our ability to organize into groups well past our primate cousins—most notably language—but also large-scale systems of government and economics. In the early 2000s, people assumed that new technologies like online social networks could help break past Dunbar’s Number; by offloading the cost of remembering our friendships to a computer, we could live richer, more social lives, with strong relationships to even more people.
We now have copious data that this is not the case. Studies suggest that there’s still a limited budget of cognitive resources at play and even in online platforms we see the exact same distribution of relationships.
If anything, social networks damage our relationships. By making it possible for us to cheaply form superficial relationships (and invest our limited energy in maintaining them), such systems divert cognitive resources from smaller, intimate groups out towards larger, less-intimate groups. The result is that key relationships with best friends and loved ones suffer. And, unfortunately, it is the strength of these high-trust relationships that are most predictive of mental health and overall happiness.
It’s a long read (which you might have guessed by the size of my pull quote), but well worth it.
Most Americans today find work drudgery and leisure anxiously vacant. In our hours off work, we rarely achieve thrilling adventure, deliberate self-education, or engage in Whitmanian loafing. At the same time, faith is eroding in the idea that paid work can offer pleasure, self-discovery, a means for improving the world, or anything more than material subsistence.
I mean, they’re not wrong. I’m lucky enough to have a decent job with some flexibility to learn and grow, but jobs like that are decidedly not the majority of jobs out there. And while the work side might not be terrible at the moment, the “vacant leisure” is real. The author continues:
Recreational pursuits more demanding than fleeting digital absorption are, increasingly, acts of consumption. Leisure is not something you “do” but something you “buy,” whether in the form of hotels and cruises or Arianna Huffington–vetted mindfulness materials. The leisure industry provides work for some while promising relaxation to others, for a fee.
The sorry state of leisure is partly a consequence of an economy in which we are never fully detached from the demands of work. The category of “free” time is not only defined by its opposite (time “free” of work); it is subordinated to it. Free time, Theodor Adorno warns, “is nothing more than a shadowy continuation of labor.” Free time is mere recovery time. Spells of lethargy between periods of labor do little but prepare us for the resumption of work. Workers depleted by their jobs and in need of recuperation turn to escapist entertainment and vacuous hobbies. And the problem of figuring out when work is “over,” in an economy in which knowledge workers spend their job hours tweeting and their evening hours doing unpaid housework and child care, has never seemed more perplexing.
Yep. The conversation continues from there, and is worth the time to read.
I’m not sure how or why discussion of the literary theory of the “death of the author” cropped up recently, but it did. Lindsay Ellis has an excellent video essay about it (and really, go watch the rest of her stuff while you’re at it). It covers a lot of ground, including the impact that our current culture’s shift towards personal brand (and the subsequent association of any work with that brand) has on how we think about creators and their creations. Go watch:
This prompted a follow-up response by John Scalzi, which I think is also worth reading. In the process of discussing how the author is not the book and the book is not the author, he noted:
One side effect of this is that you should expect that at one point or another the authors whose work you admire will disappoint you, across a spectrum of behaviors or opinions. Because they’re human, you see. Think of all the humans you know, who have never disappointed you in one way or another. Having difficulty coming up with very many? Funny, that.
(Don’t worry, you’ve disappointed a whole bunch of people, too.)
I think that’s a pretty salient point to remember these days. That’s not to excuse people who do terrible things, nor to say there isn’t merit in boycotting the work of someone who did terrible things. But if you treat every gaffe or slight as unconscionable, you’re setting yourself up to be constantly outraged and constantly disappointed. Everyone’s line is going to be different, though, and it’s your call whether any particular occasion crosses that line. Relatedly:
A shitty human can write great books (or make lovely paintings, or fantastic food, or amazing music, etc), and absolutely lovely humans can be aggressively mediocre to bad artists. There is very little correlation between decency and artistic talent. You don’t need to be a good human in order to understand human behavior well enough to write movingly about it; remember that con men are very good judges of character.
With that said, if you discover that the writer of one of your favorite novels (or whatever) doesn’t live up to your moral or ethical standards, you’re not obliged to give them any more of your time or attention, because life is too short to financially or intellectually support people you think are scumbags. Likewise, you and you alone get to decide where that line is, and how you apply it. Apply one standard for one author, and a different one for another? Okay! I’m sure you have your reasons, and your reasons can just be “because I feel like it.” Just like in real life, you might put up with more bullshit from one person than another, for reasons that are personal to you.
Just more things to chew on.
Addendum: Neil Gaiman also touched on some of this topic (more specifically, the commingling of author and work, and people making assumptions about the author based on characters in their book), and makes a point I wanted to note:
Well, unless you are going to only write stories in which nice things happen to nice people, you are going to write stories in which people who do not believe what you believe show up, just like they do in the world. And in which bad things happen, just as they do in the world. And that’s hard.
And if you are going to write awful people, you are going to have to put yourself into their shoes and into their head, just as you do when you write the ones who believe what you believe. Which is also hard.
Remember to forgive yourself, and to forgive others. It’s too easy to be outraged these days, so much harder to change things, to reach out, to understand.
Try to make your time matter: minutes and hours and days and weeks can blow away like dead leaves, with nothing to show but time you spent not quite ever doing things, or time you spent waiting to begin.
Meet new people and talk to them. Make new things and show them to people who might enjoy them.
Hug too much. Smile too much. And, when you can, love.
Over at the New York Times, Amanda Hess writes about The Existential Void of the Pop-Up ‘Experience’. (This came out in September and has been sitting in my tabs waiting to be blogged about since then. Oops.) It’s an interesting look at the panoply of “pop-up experiences” that have been popping up [sic] lately, where it’s all about the curated, Instagrammable experience. It kind of gets at something I noted when I lived in the Bay Area: people doing things less for the participatory doing, and more for the being seen doing. You hear folks talking about their “platform” and “personal brand” and the optics of things. Even things we do to appear authentic end up being to some degree performative. (As an aside, Lindsay Ellis has a recent and excellent video talking about this from the perspective of video blogging, called Manufacturing Authenticity (For Fun and Profit!).)
The central disappointment of these spaces is not that they are so narcissistic, but rather that they seem to have such a low view of the people who visit them. Observing a work of art or climbing a mountain actually invites us to create meaning in our lives. But in these spaces, the idea of “interacting” with the world is made so slickly transactional that our role is hugely diminished. Stalking through the colorful hallways of New York’s “experiences,” I felt like a shell of a person. It was as if I was witnessing the total erosion of meaning itself. And when I posted a selfie from the Rosé Mansion saying as much, all of my friends liked it.
I don’t know, maybe I’m not the target demographic, and I’m just an old curmudgeon who doesn’t “get” it. But there’s something that feels kind of funky about these manufactured, curated experiences. Hmm, that’s not fair: we’ve always curated experiences, chosen how we present things at both small and grand scales. I think the distinction is this: there’s participatory interaction, and then there’s performative interaction. These pop-ups seem to fall more into the latter category than the former, and that leaves us feeling… empty.