My Take on the WWDC Keynote: Net Positive

If you’ve not watched the WWDC Keynote yet, you can watch it here: WWDC 2016 Keynote. (You can also see a write-up over at Wired and a liveblog of the event at Engadget.) There are a few things that came up that I think are pretty notable.

First, Continuity continues to be a big push: Apple wants as seamless an ecosystem as possible across all devices and platforms in their stable of products. We started seeing some elements of this in Yosemite and El Capitan, and it looks like they’re doubling down on it in Sierra. I have some reservations about this — mainly, lock-in and whether or not it will play well with third parties. The concept itself, though, makes a ton of sense. I’m curious to see what sort of response we’ll see from Microsoft and Google in this space (MS is starting to point in this direction with their push towards a single core across platforms, but at the same time we’re seeing a de-emphasis of Windows Phone, so I’m curious how this will play out).

Second, Machine Learning. All the big players are getting into it (Siri, Alexa, Cortana, Google Assistant), and Apple has clearly invested heavily in this area, with tight integration of Siri into iOS and macOS. One thing I think is notable about Apple’s choices with this, though, is keeping the AI on-device, rather than web-driven. I’m very curious to see how this evolves in future releases.

Third, Security, Privacy, and Encryption. Several times in the keynote, they made a point of calling out that they’re NOT building profiles of users, and are keeping PII on your device, not on their servers. This emphasis on privacy (and security) pervades a number of the choices they’re making, which I applaud them for committing to. While I disagree with some of their product decisions (single-port computers, the charging port on the bottom of the new Magic Mouse, etc.), I genuinely appreciate that they’re sticking to their guns in the face of pressure from the government.

Fourth, Opening up new APIs. A big concern I’ve had in recent releases from Apple is continued lockdown of services, where it felt like if you weren’t Apple, you couldn’t play on the playground. This release sees several integrated services get opened up to third parties (Messages, Maps, Siri being the big three to me), which gives me some hope that Apple isn’t entirely forgetting what made OS X so great.

Fifth, Swift Playgrounds. It’s worth noting that this closed out the keynote, and for good reason. Apple is committing to bringing programming into education in a big way, by making what appears to be a robust learning app that targets youth where they are (mobile devices like iPads), teaching them a language they can directly use for real, complex applications. This is a big win for both Apple and STEM: for Apple, it gets a new generation of developers started on their tools, environment, and language, which you can bet will make an impact on what they choose to use in the future. For STEM, they’re providing free tools and free resources (entire books, including guides on how to teach it and incorporate it into your curriculum), already targeted to youth. That’s awesome. You can read more about the whole initiative at their Everyone Can Code page.
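
To give a flavor of what that looks like in practice, here’s a tiny, self-contained Swift snippet covering the kinds of concepts a beginner meets early on (values, loops, functions); the example is my own illustration, not something pulled from Apple’s actual Playgrounds curriculum.

```swift
// A first program of the sort a learner might write: a function, a loop,
// and string interpolation, in the same language used to ship real iOS
// and macOS apps. (My own illustrative example, not an actual lesson.)
func countdown(from start: Int) {
    for number in stride(from: start, through: 1, by: -1) {
        print("\(number)...")
    }
    print("Liftoff!")
}

countdown(from: 5)
```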

It wasn’t covered in the keynote, but has been brought up elsewhere: they’re also releasing a new file system, the Apple File System (APFS), replacing the old and creaky HFS+. This is significant: Apple’s been using HFS (and then an expanded HFS+) for basically the entire time Mac OS has existed. From reports, it sounds like a robust, next-generation file system, with features like native encryption, snapshots, and copy-on-write clones. While we likely won’t see the OS truly make the most of these new features until the version after Sierra, this is still quite interesting, and I’m excited to see what gets done with it.

Overall, it felt like a productive developer-centric keynote. It leaves me feeling cautiously hopeful about the future of the ecosystem, and that they’re placing their bets in the right places.

Loving Language

I was talking with a friend the other day about what I call “intellectual fappery.” This is in reference to the sort of academic papers (in particular in art criticism or art theory) that are so wrapped up in jargon and linguistic flourishes that they’re unreadable to a lot of people. It never made much sense to me that they’d do this (why make it harder to win people over to your point of view?), and I sort of presumed that it was some sort of smug self-aggrandizement, speaking opaquely to keep out the riff-raff. (Another alternative is that they’re simply too inept at communicating their ideas, and so hide them behind linguistic flourishes.)

I had a minor epiphany, though: while some may be putting on airs, a lot may simply be in love with language. Not in terms of communication, but as expression. Flowery turns of phrase, using ornate, overly complex language not because they want to obscure their message, but simply because they think ornate, overly complex language is pretty. In short, an entire body of writing (looking at you, art theory and art criticism) that prioritizes form over function.

This doesn’t really make it any more fun to read (especially since you’re usually subjected to it as part of studies, rather than reading it by choice), but it does at least seem like a kinder way to look at things.

Wading into the Brambles

I’ve been debating doing NaNoWriMo this year. I’ve participated on and off for years, though I’ve never finished. At this point, it’s been a long time since I’ve written fiction (or told any sort of story, fictional or not), and I miss it. It’s a little weird to say that, since a) there’s technically nothing stopping me from doing it now, and b) I was never all that amazing at it. (I’m trying really hard not to just completely bag on my writing ability, since people seemed to generally respond favorably to what they read, and to bear in mind Ira Glass’s quote on creative work, but it’s hard. Even with the stories I was moderately pleased with, there was SO MUCH room for improvement.)

I do miss it, though. It’s weird — I’ve felt blocked to the point of frustration for years now, and unable to bring myself to get past it, even though I know the answer is simply to keep at it until I get through the brambles. I’ve been thinking a lot about the dearth of creative outlets and making in my life, and it really struck home a little while ago. I was having a conversation with someone who is a maker and doer (and just generally an awesome person), and we were talking about hanging out sometime, and they said they looked forward to hearing/seeing what I make. I was instantly filled with embarrassment, because I felt like I had nothing to offer to that conversation. I love creative people — it’s what I’m attracted to, both in friends and otherwise — and when given this opportunity to make a more solid connection with someone I already liked and wanted to get to know better, I felt like I had nothing to contribute.

Note, it wasn’t anxiety, it was embarrassment. I was embarrassed — I felt like I was a poser who’d been called out on their facade. I realize that isn’t really fair to myself or entirely accurate — there’s room for people who celebrate art and creativity, who are supportive and the first to cheer others on, and that doesn’t somehow make them a sham. But feelings aren’t rational, and it doesn’t feel like enough to validate the role creativity plays in my personal identity.

So, it’s time to wade into the brambles again. It’s been so long that I don’t even remember what telling a story feels like on my tongue, the heft and shape of a narrative in my fingers. It’s time to correct that. I’m debating doing NaNoWriMo this year, and it almost doesn’t matter if I finish, as long as I actually begin.

User Experience(d)

Last week, I was at a family reunion filled with fabulous, intelligent, talented people whom I’m glad to call family. One thing I noticed: as people pulled out laptops and iPads and smartphones, or discussed some of the current technological hurdles they’re facing in their day-to-day lives, there was still a lot of frustration and implied distrust of the hardware or software being used. It really hammered home to me that there’s still a long distance left between usable and intuitive. They were adding complexity and hurdles that didn’t need to be there, because they were used to a previous mental model that was more complex.

I work with software and computers every day, and have for years. Even a lot of my hobbies end up taking place on computers. It’s easy to take for granted the human-computer interactions I do on a daily basis, because I do them regularly, and generally even if it’s a new piece of software or hardware, it still behaves similarly enough to other software that I can get the hang of it pretty quickly. The thing is, even with the pervasiveness of technology these days, I am an anomaly, not the norm. Many people — highly skilled, capable people — simply don’t have that background and context for understanding, nor the time or interest to gain it. As far as I see it, this is a lot of what user experience design is all about: finding that line between simplicity and complexity, where people have enough detail to understand what is happening (at least at a high level), but the design is still simple enough that they don’t have to invest cognitive energy to grasp how to use it.

Aiming for clarity is hard on its own, but what I was noticing is that it faces an additional hurdle: overcoming the complexities or mental models of previous designs. A big problem, particularly for older generations, seemed to be that they’d fallen out of sync with how experiences are designed now, and were burdened with the expectation of complexity or failure from experiences in the past. It’s easy to say “oh, well they just need to retrain themselves,” but that implies they have the cognitive energy, time, and interest to do so.

That’s not to say we shouldn’t keep working on improving the user experience, but it is something to bear in mind when developing software or hardware. I have a few ideas on how to accommodate this, some of which may be more palatable than others:

  • Evolving UX: Going with more iterative, minor changes rather than a large shift. This already happens to some extent (depending on the software), and sometimes it’s unavoidable that multiple changes will need to go in at once.
  • Documentation: Creating effective documentation can be invaluable for keeping older users up to speed on what’s happening. Three things I’d want to make sure to consider: keeping docs up to date to the current version of the software; keeping legacy docs for older versions; mapping the old user experience to the new user experience in change logs and within the docs themselves.
  • Usability Studies of Existing Users: Doing usability research has definitely become more prevalent, which is a good thing, but I feel like it tends to focus on how to attract new users, and doesn’t really give a lot of attention to existing users (I suspect at least partially under the presumption that once a user is committed to your product, they are less likely to take the additional effort to switch). It would be really interesting to make sure to include existing long-time users when doing usability studies. If considering retention of existing users isn’t on your radar, maybe you should reconsider.

Obviously, it’s impossible to please all of the people, and maybe more of this is already in progress than I’m aware of, but it does feel like we’ve got a distance left to go on learning to effectively clear out the cobwebs of past experiences.

Hypersigils, Identity, and the Internet

Back in 2010, I ended up having a really rewarding Twitter conversation with some very smart people, talking about hypersigils and how they apply to the internet. I’ve been thinking more about the topic lately, and wanted to expand on what was said before.

Let’s start with the term hypersigil. The term was coined by Grant Morrison, but the concept has been around for a lot longer than that. The term has a certain magickal [sic] connotation because of its origins, and I know that some folks get squicked out about that. If it makes you feel any better, just think of it as a psychological focus used to effect personal change, in the form of creating a narrative. If that’s still not enough, come up with a better term that does an even better job of packing loads of exformation into one compact word, and then popularize that instead. I’d love to hear it.

And the Alarm Went "Wii U, Wii U, Wii U…"

There have been some criticisms surrounding the new Nintendo console, the Wii U. I’ve seen complaints that they focused too much on the new controller, and glossed over the new console itself (seems valid, and something Nintendo has admitted to dropping the ball on). There are also other complaints that seem less valid, and a remarkable amount of press attention to Nintendo’s stock price dropping 10% after the announcement. The reactions seem fairly split down the middle, with the dividing line basically coming down to those who got a chance to demo a unit in person thinking it’s an interesting device, and those who didn’t thinking Nintendo has completely dropped the ball.

A New Console, Now?

You can find what specs have been published for the device on Nintendo’s website, so I won’t bother rehashing them here. The quick summary: beefed up processor, beefed up graphics capabilities, full HD support, all around decent specs for a modern console. It’s not a mindblowing leap forward, but that is not, and has not been, the point. The point is that the cost of higher-end graphics is finally low enough that they don’t have to sacrifice their target pricing model in order to compete graphically with the other consoles. So basically, they let their competitors take a significant loss on every console in order to support HD, and then once the technology had matured, caught up while having made a profit the whole time.

It makes sense that they’d put out the Wii U now. Look at their past development cycles:

  • NES – 1983 (Japan), 1985 (US)
  • SNES – 1990 (Japan), 1991 (US)
  • Nintendo 64 – 1996
  • GameCube – 2001
  • Wii – 2006
  • Wii U – 2012

It doesn’t take a rocket scientist to see the trend here: Nintendo puts out a new console every 5-6 years. By contrast, we’ve heard nothing concrete out of Microsoft or Sony about a new console (and even if we had, it’s unclear what they would be adding), and as recently as a few months ago, Sony was claiming the PS3 would have a 10 year product lifespan (it sounds like they are no longer saying this, instead claiming somewhere between 8 and 10 years). That means we can’t really expect a new console from either other major console company until at least 2013, more realistically 2014-2016. All of this puts Nintendo in a great position to release a new console now.
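
If you want to sanity-check that cadence, the arithmetic is simple enough to do on a napkin; here’s a quick sketch in Swift using the Japanese launch years from the list above (the 5-6 year figure is just the typical gap between releases).

```swift
// Back-of-the-envelope check of Nintendo's console cadence,
// using the Japanese launch years listed above.
let launches = [("NES", 1983), ("SNES", 1990), ("Nintendo 64", 1996),
                ("GameCube", 2001), ("Wii", 2006), ("Wii U", 2012)]

let gaps = zip(launches, launches.dropFirst()).map { $1.1 - $0.1 }
let average = Double(gaps.reduce(0, +)) / Double(gaps.count)

print(gaps)     // [7, 6, 5, 5, 6]
print(average)  // 5.8 years between consoles, on average
```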

What about their existing user base?

The Wii U is backwards compatible with the Wii, so it becomes a no-brainer for consumers to upgrade. Easy migration plus a HUGE existing install base (86 million units, versus Microsoft’s 55 million and Sony’s 50 million). So, again, why not put out a new console now? Getting out of sync with the other console makers’ schedules is a good thing: less competition for consumer dollars, and games currently in development can ostensibly add support for the new console fairly easily (known architecture, and comparable specs to other consoles).

The Stock Drop is Irrelevant

Full disclosure: I do own some shares in Nintendo (a whopping 4 shares). That said: I don’t care what the stock price is. Nintendo is a dividend-bearing stock, unlike a number of other technology companies. As long as it continues to make a profit, the stock price is largely irrelevant to existing investors, unless they are the sort who feel the constant need to buy and sell shares (to which I say: go play with your IPO stock bubbles and leave long-term investing alone).

So considering the nature of Nintendo’s stock, why the hell is the gaming press making a big deal about their stock drop? It has absolutely NO relation to the quality or nature of the new product. Further, it shows a lack of historical awareness: it’s not uncommon for stock prices to dip after a keynote — look at Apple. For years, even when they announced hugely successful products (products that were clearly going to be successful from the start, no less), their stock took a marked dip immediately after.

Disruptive Technology is Called Disruptive for a Reason

It feels like a lot of the people complaining about this announcement are complaining for the sake of complaining. They don’t understand the new technology and its potential effects, or in rarer cases understand but disagree with the direction. A lot of those complaints were also levied against the original Wii, which then swept the market for the last 5 years, with a significantly larger install base than either competitor. Iwata’s 2006 GDC keynote discussed expanding the market rather than only vying for the same hardcore gaming audience — this philosophy worked with the Wii, it worked with the DS, and Microsoft adopted a similar stance with the Kinect to great success. Given all this, it increasingly feels like the complaints are coming from a small subset of people who are either resistant to change, or simply have a myopic view of the gaming industry and the shifting landscape of the market.

Here’s something to think about: the gaming news media is comprised of people who love games. It’s why they chose that field. Don’t you think that this love of how games are now, or have been, might bias their views on what could shift or change the gaming industry?

Lions, Dashboards, and Calculators (Oh My!)

This summer, Apple is planning to release their next iteration of Mac OS X, 10.7 (codenamed “Lion”). From the looks of things, their primary focus this time around is interface improvements to make the user experience more fluid and effective. In general, I’m liking what I’ve been seeing, though the system requirements that have been coming out suggest that I’ll be on the hairy edge of being able to run it at all (a Core 2 Duo or higher is required, and I’m running the first Core 2 Duo MacBook Pro they offered), so I’m not sure how much real benefit I’ll be seeing in the near future. That said, one of the design changes they’re making seems like a horrible idea: they’re moving the Dashboard into its own space, rather than continuing to have it work as an overlay over whatever screen you’re on.

Given that the Dashboard is for quick-reach, simple widgets, this seems remarkably backwards, and more like something you’d do to get people to stop using it so it can be phased out in a later release. Think about it for a second: widgets are meant to show information at a glance, i.e. without significantly interfering with or distracting the user from their task at hand. While shoving several widgets into their own space simply seems like a bad idea, there are a few whose usefulness will be significantly reduced, most notably the calculator.

To be clear, the Dashboard calculator is not especially robust. It has no history or “tape”, no special functions, just your basic arithmetic. About the extent of its bells and whistles is that it accepts numeric input instead of forcing you to use the buttons. But you know what? That’s the point. It’s a simple calculator for when you want to run some numbers really quickly, without interfering with the rest of your workflow. More often than not, these numbers will be pulled off a website, an email, or a chat. You aren’t particularly invested in running the numbers, you just want to check them really quickly. This, specifically, is the value of the Dashboard calculator: just pull up the Dashboard and punch the numbers (still visible beneath the overlay) into the calculator for a quick total, without going through the process of loading up a separate application. I don’t want to have to constantly page back and forth between two screens just to run a quick number check. At that point, why not just use the actual Calculator app?

I doubt I’ll ever know, but I would love to find who made this particular design decision and ask them what on earth they were thinking.

Emotional Communication

There is no language I know of that exists today that is able to truly convey our emotions, our inner needs. The scope just isn’t there — the best we can do is approximate it. We have words that are supposed to convey meaning, but even then, exactly what they mean is so fluid and amorphous that the true intent is lost in translation. Think about some of the biggest emotions in our lives. Think about love for a moment. “I love you.” “I love this television show.” “I love this song.” “I love my family.” There are so many valid contexts for the verb “to love.” As far as language is concerned, they are all valid, and we treat them as such socially. But the emotions underneath vary wildly. As human beings, we try to pick up on this additional nuance and emotional intent through body language, through situational awareness, the timbre of the voice, the tension of the moment. All of which rests on the hope that those around us are observant enough to notice, and aware enough to interpret these signals correctly. It is frightening that so much of our emotional communication and well-being is reliant on others’ ability to perceive our comments in the way we intend. Given that, it is unsurprising that so many people feel isolated and alone.

Which brings us to another tool we have to try and communicate: if language does not have the tools to describe an emotion directly (not in a meaningful way, anyway), then it can at least describe it indirectly. Think about music, or books, or film, or photographs, or paintings, or any number of forms of art. The classic question of “What is art?” is easily answered for me: work intended to evoke an emotional or personal response in someone. It’s an imperfect tool — there will inevitably be a lot of people who don’t “get” it. It’s not a fault of the artist, or of the viewer — they simply lack the shared context to evoke a response. A photograph of a weathered fence post in a field may not speak to some, but for others it can evoke a personal memory of visiting their grandparents on the farm, or strike a chord more metaphorically, describing for a moment the feeling of isolation that the viewer may be feeling or have felt. Put simply: art describes emotions.

Personally, I tend to draw from media sources to describe a range of emotions and personal thoughts pretty often. I’ve been doing so all week with video clips and songs and quotes; this is hardly the first time, and I’m not remotely the only one. For every random silly link blog of goofy stuff out on the web, there is also a curated blog of someone trying to point at something in the hopes of getting their message across, and communicating something they feel is important to those around them. I post videos and quotes and songs and images to create a pastiche of who I am and how I’m feeling. (I’d be interested to see what interpretations people draw from the entries posted this past week.) Of course, I’m always afraid that I’m a bit of a Hector the Collector character when I do so, but if even one person gets and appreciates what’s shared, it’s worth it.

Casual Games is a Misnomer

There’s been a lot of discussion of the expanding “casual games” market for a while now, and frankly I think the term is causing some serious confusion because it’s not really a useful title. It’s not grossly inaccurate, it’s just not useful. We talk about a game like FarmVille being a casual game because the mechanics of the game are relatively simple. The structural rules of the game are simple, but where it falls down as being “casual” is that the social rules, the “meta” rules of the game, are more complex. There is a social give-and-take of desiring help from others, but also (hopefully) wanting to avoid annoying friends and relatives around you by flooding them with requests. People try to push those boundaries, though, to achieve more, to gain mastery of the game. The drive for achievement provokes a more “hardcore” approach to the game. The gameplay may be casual, but the intensity of play is more hardcore. The truly casual gamer doesn’t stick with it, because excelling requires more commitment than they are willing to give.

It would probably help at this point to define some terms and concepts, for the sake of clarity and communication.

  1. Play intensity: the level of investment of time, energy, and resources needed to achieve mastery of the game.
  2. Game mastery: the exact form mastery may take depends on the type of game, but generally involves a combination of implicit or explicit knowledge of how the game works that allows for maximizing the results of time spent playing. Maybe it will involve knowing every detail of a map and where and how players tend to play it; maybe it will involve having learned every skill combo in a game and developed the muscle memory to do each precisely and quickly.
  3. Investment threshold: the limit to how much time, energy, and resources a person is willing to invest in a given task. This varies from person to person and task to task, and is the crux of the difference between a “casual” and “hardcore” gamer.
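
To make the relationship between these three concrete, here’s a rough sketch in Swift (my own framing, with made-up names and arbitrary “effort units”, not a formal model): a player keeps going only as long as the play intensity a game demands stays under their personal investment threshold.

```swift
// A toy model of the terms defined above. The names and numbers are
// hypothetical; the point is only how the pieces relate.
struct Player {
    let name: String
    let investmentThreshold: Int   // how much effort they're willing to put in
}

struct Game {
    let title: String
    let playIntensity: Int         // effort required to keep progressing
}

func keepsPlaying(_ player: Player, in game: Game) -> Bool {
    // The moment the required intensity exceeds the threshold, they walk away.
    return game.playIntensity <= player.investmentThreshold
}

let casual = Player(name: "Casual", investmentThreshold: 3)
let hardcore = Player(name: "Hardcore", investmentThreshold: 9)
let endGameRaid = Game(title: "End-game raiding", playIntensity: 8)

print(keepsPlaying(casual, in: endGameRaid))    // false: "good enough" wasn't enough
print(keepsPlaying(hardcore, in: endGameRaid))  // true
```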

I am fundamentally a casual gamer. Considering I write a game-related blog, wrote two theses related to game development, and work in QA in the game industry, I suppose some might think this is inaccurate, but hear me out. “Casual” gaming isn’t about the quantity of game time, nor the quality of play; it’s about the approach to gameplay. Put simply, I’m not really invested in mastery of most games. I will totally try to complete the game, find as many secrets and bonus content and other goodies as I can, but when doing so requires ludological mastery and precision, I usually walk away from that content (and sometimes the game). I’m not mad about it (more on that in a moment), at most a little disappointed that I didn’t/couldn’t get whatever it was I was trying to do. An unspoken threshold of skill investment was exceeded. If “good enough” isn’t enough, then I’m done. That, to me, is the distinction of a casual gamer.

Think about hardcore gaming, and the concept of the ragequit. It’s a valid, if undesired, reaction to being placed in what is perceived as “unfair” conditions without clear methods for correction, and isn’t new — think about the trope of “taking your ball and going home.” But what, exactly, is unfair about it? The game itself ostensibly has immutable rulesets (short of hacks), and if the game designers did their job balancing the game well, then neither side has undue advantage over the other mechanically. The difference comes down to the players themselves and what level of mastery they’ve invested. In a ragequit situation, there’s generally at least one player who has a relatively high level of mastery of the game — they’ve invested the time and energy in understanding the mechanics of the game, and how to maximize their use of those mechanics in their favor. When you then pair them with someone who has the desire for mastery, but either hasn’t had the time needed to invest, or lacks the capacity for the necessary skills to compete, the mismatch results in a ragequit. A casual player may try to play, decide they don’t want to have to invest days or weeks into gaining mastery, and walk away. The behavior of the other players may have been exactly the same, but the likelihood of a ragequit is less, since the casual player isn’t as invested in the game.

Different games can have different requirements for game mastery, and still have both fall under the aegis of a casual or casual-friendly game. A more distinct delineation is to establish the play intensity of the game: examine the amount of investment in game mastery that is necessary to continue to move forward in the game. If there is little room for players who haven’t invested as many resources into mastery of the game (e.g. they didn’t spend hours playing the same zone or area, learning all its quirks and best solutions to the challenges it poses), then that game will only be attractive to players with a high investment threshold, i.e. it isn’t a casual game, no matter how simple the interface is, no matter how complex the game mechanics are.

Now, what really fascinates me are the games that find ways to straddle the line. While some consider World of Warcraft a hardcore game, I consider it a casual game: the requirements for game knowledge and expertise in order to proceed are relatively low — you can play without investing significant time in HOW to play (gaining mastery instead of moving forward in the game). But I tend to be an altaholic. If I were to try and get into raiding and high-level instances (what’s considered the “end game”), I’m quite positive my perception of the game would shift to considering it a more “hardcore” game — to raid effectively, from all accounts, requires a more in-depth understanding of the mechanics of the game, as well as specific details of the instances themselves.

So, with all this in mind, the question I find myself asking is: are these sorts of casual gamers worth accounting for? We’re a pretty fickle lot: happy to drop a game if it’s no longer satisfying, and likely to never even touch some of your mini-games or features. My vote is: yes, they should be accounted for when designing games. A game can still be layered and complex; it can still reward greater mastery, and encourage high intensity play; it can still penalize errors and poor play. BUT, the complexity and greater mastery should enhance the player experience, not hinder it. Give a broad range of allowable game mastery and play intensity, and let the player decide their own level of involvement.