There have been some criticisms surrounding the new Nintendo console, the Wii U. I’ve seen complaints that Nintendo focused too much on the new controller and glossed over the console itself (which seems valid, and something Nintendo has admitted to dropping the ball on). There are also other complaints that seem less valid, and a remarkable amount of press attention to Nintendo’s stock price dropping 10% after the announcement. The reactions seem fairly split down the middle, with the dividing line basically coming down to this: those who got a chance to demo a unit in person think it’s an interesting device, and those who didn’t think Nintendo has completely dropped the ball.
A New Console, Now?
You can find the specs that have been published for the device on Nintendo’s website, so I won’t bother rehashing them here. The quick summary: beefed-up processor, beefed-up graphics capabilities, full HD support, all around decent specs for a modern console. It’s not a mind-blowing leap forward, but that is not, and has never been, the point. The point is that the cost of higher-end graphics is finally low enough that Nintendo doesn’t have to sacrifice its target pricing model in order to compete graphically with the other consoles. So basically, they let their competitors take a significant loss on every console in order to support HD, and then, once the technology had matured, caught up while having made a profit the whole time.
It makes sense that they’d put out the Wii U now. Look at their past development cycles:
- NES – 1983 (Japan), 1985 (US)
- SNES – 1990 (Japan), 1991 (US)
- Nintendo 64 – 1996
- GameCube – 2001
- Wii – 2006
- Wii U – 2012
It doesn’t take a rocket scientist to see the trend here: Nintendo puts out a new console every 5-6 years. By contrast, we’ve heard nothing concrete out of Microsoft or Sony about a new console (and it’s unclear what they would even add at this point). As recently as a few months ago, Sony was claiming the PS3 would have a 10-year product lifespan (it sounds like they are no longer saying this, instead claiming somewhere between 8 and 10 years), meaning we can’t really expect a new console from either of the other major console companies until at least 2013, more realistically 2014-2016. All of this puts Nintendo in a great position by putting a new console out now.
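The cadence is easy to check with a few lines of Python, using the US launch years from the list above (nothing here beyond the dates already listed):

```python
# Gaps between successive Nintendo console launches (US launch years
# from the list above; N64 onward have a single listed year).
launch_years = [1985, 1991, 1996, 2001, 2006, 2012]
gaps = [b - a for a, b in zip(launch_years, launch_years[1:])]
print(gaps)  # [6, 5, 5, 5, 6] -- a new console every 5-6 years
```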
What about their existing user base?
The Wii U is backwards compatible with the Wii, so it becomes a no-brainer for consumers to upgrade. Easy migration plus a HUGE existing install base (86 million units, versus Microsoft’s 55 million and Sony’s 50 million). So, again, why not put out a new console now? Getting out of sync with the other console makers’ schedules is a good thing: less competition for consumer dollars, and games currently in development can ostensibly add support for the new console fairly easily (known architecture, and comparable specs to the other consoles).
The Stock Drop is Irrelevant
Full disclosure: I do own some shares in Nintendo (a whopping 4 shares). That said: I don’t care what the stock price is. Nintendo is a dividend-bearing stock, unlike a number of other technology companies. As long as it continues to make a profit, the stock price is largely irrelevant to existing investors, unless they are the sort who feel the constant need to buy and sell shares (to which I say: go play with your IPO stock bubbles and leave long-form investment alone).
So considering the nature of Nintendo’s stock, why the hell is the gaming press making a big deal about the drop? It has absolutely NO relation to the quality or nature of the new product. Further, it shows a lack of historical awareness: it’s not uncommon for stock prices to dip after a keynote. Look at Apple: for years, even when they announced hugely successful products (ones that were clearly going to be successful from the start, no less), their stock took a marked dip immediately after.
Disruptive Technology is Called Disruptive for a Reason
It feels like a lot of the people complaining about this announcement are complaining for the sake of complaining. They don’t understand the new technology and its potential effects, or in rarer cases understand it but disagree with the direction. Many of the same complaints were levied against the original Wii, which then swept the market for the last 5 years, with a significantly larger install base than either competitor. Iwata’s 2006 GDC keynote discussed expanding markets rather than continuing to vie only for the same hardcore gaming market. This philosophy worked with the Wii, it worked with the DS, and Microsoft adopted a similar stance with the Kinect to great success. Given all this, it increasingly feels like the complaints are coming from a small subset of people who are either resistant to change or simply have a myopic view of the gaming industry and the shifting landscape of the market.
Here’s something to think about: the gaming news media is made up of people who love games. It’s why they chose that field. Don’t you think that this love of how games are now, or have been, might bias their views on what could shift or change the gaming industry?
There’s been a lot of discussion of the expanding “casual games” market for a while now, and frankly I think the term is causing some serious confusion because it’s not really a useful title. It’s not grossly inaccurate, it’s just not useful. We talk about a game like Farmville being casual because its mechanics are relatively simple. The structural rules of the game are simple, but where it falls down as being “casual” is that the social rules, the “meta” rules of the game, are more complex. There is a social give-and-take of desiring help from others while also (hopefully) wanting to avoid annoying the friends and relatives around you by flooding them with requests. People try to push those boundaries, though, to achieve more, to gain mastery of the game. The drive for achievement provokes a more “hardcore” approach to the game. The gameplay may be casual, but the intensity of play is more hardcore. The truly casual gamer doesn’t stick with it, because excelling requires more commitment than they are willing to give.
It would probably help at this point to define some terms and concepts, for the sake of clarity and communication.
- Play intensity: the level of investment of time, energy, and resources needed to achieve mastery of the game.
- Game mastery: the exact form mastery may take depends on the type of game, but generally involves a combination of implicit or explicit knowledge of how the game works that allows for maximizing the results of time spent playing. Maybe it will involve knowing every detail of a map and where and how players tend to play it; maybe it will involve having learned every skill combo in a game and developed the muscle memory to do each precisely and quickly.
- Investment threshold: the limit to how much time, energy, and resources a person is willing to invest in a given task. This varies from person to person and task to task, and is the crux of the difference between a “casual” and “hardcore” gamer.
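To make the distinction concrete, the terms above can be sketched as a toy model. Everything in this snippet (the class, the function, the hours-per-week numbers) is my own illustration, not anything from a formal model:

```python
# Toy model: a player keeps playing while the play intensity a game
# demands stays under their personal investment threshold.
from dataclasses import dataclass

@dataclass
class Player:
    investment_threshold: float  # hours/week the player is willing to invest

def keeps_playing(player: Player, play_intensity: float) -> bool:
    """play_intensity: hours/week of practice the game demands for progress."""
    return play_intensity <= player.investment_threshold

casual = Player(investment_threshold=3.0)
hardcore = Player(investment_threshold=20.0)

# A game demanding ~10 hours/week of mastery-building retains only
# the player with the higher threshold.
print(keeps_playing(casual, 10.0), keeps_playing(hardcore, 10.0))
```

The point of the sketch is that “casual” lives in the player’s threshold, not in the game’s interface or mechanics.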
I am fundamentally a casual gamer. Considering I write a game-related blog, wrote two theses related to game development, and work in QA in the game industry, I suppose some might think this is inaccurate, but hear me out. “Casual” gaming isn’t about the quantity of game time, nor the quality of play; it’s about the approach to gameplay. Put simply, I’m not really invested in mastery of most games. I will happily try to complete the game and find as many secrets, bonus content, and other goodies as I can, but when doing so requires ludological mastery and precision, I usually walk away from that content (and sometimes the game). I’m not mad about it (more on that in a moment); at most I’m a little disappointed that I didn’t, or couldn’t, accomplish whatever I was trying to do. An unspoken threshold of skill investment was exceeded. If “good enough” isn’t enough, then I’m done. That, to me, is the distinction of a casual gamer.
Think about hardcore gaming, and the concept of the ragequit. It’s a valid, if undesired, reaction to being placed in what is perceived as “unfair” conditions without clear methods for correction, and isn’t new — think about the trope of “taking your ball and going home.” But what, exactly, is unfair about it? The game itself ostensibly has immutable rulesets (short of hacks), and if the game designers did their job balancing the game well, then neither side has undue advantage over the other mechanically. The difference comes down to the players themselves and what level of mastery they’ve invested in. In a ragequit situation, there’s generally at least one player who has a relatively high level of mastery of the game — they’ve invested the time and energy in understanding the mechanics of the game, and in how to maximize their use of those mechanics in their favor. When you then pair them with someone who has the desire for mastery, but either hasn’t had the time to invest or lacks the capacity for the skills necessary to compete, the mismatch results in a ragequit. A casual player may try to play, decide they don’t want to invest days or weeks into gaining mastery, and walk away. The behavior of the other players may have been exactly the same, but the likelihood of a ragequit is lower, since the casual player isn’t as invested in the game.
Different games can have different requirements for game mastery and still fall under the aegis of a casual or casual-friendly game. A more distinct delineation is to establish the play intensity of the game: examine the amount of investment in game mastery that is necessary to continue to move forward in the game. If there is little room for players who haven’t invested many resources into mastery of the game (e.g. they didn’t spend hours playing the same zone or area, learning all its quirks and the best solutions to the challenges it poses), then that game will only be attractive to players with a high investment threshold, i.e. it isn’t a casual game, no matter how simple the interface is, no matter how complex the game mechanics are.
Now, what really fascinates me are the games that find ways to straddle the line. While some consider World of Warcraft a hardcore game, I consider it a casual game: the requirements for game knowledge and expertise in order to proceed are relatively low — you can play without investing significant time in HOW to play (that is, without gaining mastery instead of moving forward in the game). But I tend to be an altaholic. If I were to try to get into raiding and high-level instances (what’s considered the “end game”), I’m quite positive my perception of the game would shift toward considering it a more “hardcore” game: raiding effectively, from all accounts, requires a more in-depth understanding of the mechanics of the game, as well as of the specific details of the instances themselves.
So, with all this in mind, the question I find myself asking is: are these sorts of casual gamers worth accounting for? We’re a pretty fickle lot: happy to drop a game if it’s no longer satisfying, and likely to never even touch some of your mini-games or features. My vote is yes: they should be accounted for when designing games. A game can still be layered and complex; it can still reward greater mastery and encourage high-intensity play; it can still penalize errors and poor play. BUT, the complexity and greater mastery should enhance the player experience, not hinder it. Give a broad range of allowable game mastery and play intensity, and let the player decide their own level of involvement.
There’s a lot of talk currently about the Level 3/Comcast mess, in which Comcast is demanding additional money from Level 3 (an internet backbone provider and current partner with Netflix for providing streaming media) before it will allow streaming media onto its network. Comcast’s reasoning is that Level 3 is acting as a Content Delivery Network (CDN), not just as an internet backbone, and thus no longer qualifies for the peering agreements that allow traffic between the two networks without additional fees. This is a bogus assertion, and feels like a money grab: Comcast’s customers are already paying for that bandwidth, and are making a legitimate request for the data being provided; all Level 3 is doing is sending the requested data. To then block data that the customer has paid for (twice: they pay Comcast for the bandwidth, and Netflix for the content) directly violates the principles of an open internet.
This is a prime example of why there are concerns over the imminent Comcast-NBC Universal deal (for those who haven’t been paying attention: Comcast is trying to purchase NBC Universal from General Electric for $6.5 billion CASH, plus an additional $7.5 billion invested in programming), in terms of media consolidation and vertical control effectively creating a walled garden. To quote Senator Bernie Sanders:
The sale of NBCU to Comcast would create an enormously powerful, vertically integrated media conglomerate, causing irreparable damage to the American media landscape and ultimately to society as a whole.
This is hardly the first time Comcast has been caught with their hand in the proverbial cookie jar, taking censorial action while claiming to be in favor of an open internet. Their behavior is antithetical to net neutrality on a fundamental and obvious level.
So, why does this matter to game development? A variety of reasons, actually. Regardless of what type of games you are talking about, modern gaming takes bandwidth: assets need to be downloaded, whether for a standalone game title or for the casual, cloud-based games you find on Armor Games or Kongregate or even Facebook. If there is any type of online component, there will be regular communication between client and server. This bandwidth costs money, and if developers have to start paying additional fees to be allowed into walled gardens, the cost may reach a point where it is no longer feasible for many developers to continue. Already, a number of games are looking at ways to mitigate the costs of hosting content, such as distributed download solutions like BitTorrent (yes, believe it or not, peer-to-peer isn’t just for illegal uses). While some price fluctuation is expected and reasonable as the market shifts and the costs of hosting and bandwidth change, at what point do developers (including smaller developers without the resources of large publishers) have to start dealing directly with Comcast or other gatekeepers for the right to sell their own product to the public? One of the biggest benefits of the internet (open access: not having to go through a gatekeeper process and large publishers to share your work with the world) is already being challenged by device-specific gates like the Apple App Store for the iPhone, and to a lesser extent the PlayStation Network, Xbox Live Arcade, and WiiWare. (I say lesser extent because those networks ostensibly can’t reach the rest of the internet without additional effort, if at all, whereas the iPhone App Store has no such issues.) We do not need, nor want, service providers blockading legitimate customers from our products.
According to the ESA’s reports, the five states serving as game development hubs in the US are California, Washington, Texas, New York, and Massachusetts. This shouldn’t come as a surprise to anyone; cities like Seattle, San Diego, Austin, and their peripheral towns are often mentioned in the gaming press. This is fine – certain hubs are expected to rise up in any industry, and game development, at $22 billion domestically per year, absolutely qualifies as an industry. However, it is becoming increasingly apparent that studios need to start expanding into new locations if they expect to continue to grow profitably. It comes down to cost: the cost of living, and the cost of business.
The cities and regions that game developers are based in right now tend to be expensive: the amount of money it takes to maintain the same quality of life is higher than in other cities. As an example, compare Portland, Oregon, and Seattle, Washington, two cities that offer similar climates, similar cultural opportunities, and overall a similar quality of life. In Seattle, average office lease rates run between $25 and $40 per square foot depending on where in the city you are (and where most of these companies are located, you’re looking at the high end of that range). A similar examination of Portland puts lease rates between $12 and $25 per square foot. (To put those prices in perspective, Bungie recently announced their move into downtown Bellevue, leasing 85,000 square feet. Assuming they got a killer deal and only paid $30 per square foot, that’s still $2,550,000. An equivalent space in Portland, at, say, $20 per square foot, is $1,700,000.) That’s an $850,000 price difference, and it’s only one part of the overall cost of doing business.
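The back-of-the-envelope math in that parenthetical works out as follows (a trivial Python sketch; the rates and square footage are the assumed figures quoted above, not actual lease terms):

```python
# Annual lease cost comparison using the illustrative figures above.
sq_ft = 85_000       # Bungie's reported Bellevue lease
seattle_rate = 30    # $/sq ft, assumed "killer deal" for Seattle-area space
portland_rate = 20   # $/sq ft, assumed mid-range Portland rate

seattle_cost = sq_ft * seattle_rate    # $2,550,000
portland_cost = sq_ft * portland_rate  # $1,700,000
print(seattle_cost - portland_cost)    # $850,000 difference per year
```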
Looking at the cost of living for the employees themselves, median apartment rental prices drop by nearly half between Seattle and Portland. While other price comparisons are less dramatic (the cost of heating a home doesn’t vary much, which is unsurprising considering the similar climates), it still works out to a net savings for the employee to be in Portland. What this means for employees is that they can maintain the same quality of life for less money. What it means for employers is that they can price their salaries accordingly (as they already do) and, again, save money to either a) bring down development costs, or b) hire more developers.
Of course, so far we’ve only discussed basic numbers, on the assumption that one would have to pay for everything involved. For a number of developers, this is already not the case: both Ontario and Quebec (and their respective cities, Toronto and Montreal) offer significant subsidies to game companies that build studios there. It was reported a few years ago that the city of Montreal and the province of Quebec combined subsidized over half the salaries for Ubisoft and EA, two major developers and game publishers. Ubisoft is expanding again, opening a new studio in Toronto, which has committed to investing $226 million in Ubisoft over the next ten years. Here in the U.S., 8 states have already passed initiatives to encourage game development, including significant tax breaks and other incentives to draw the industry in. The city of Savannah has gone so far as to offer a full year of office space free to any company willing to commit to offices there.
Now, I realize it is pretty rare that a company is in a position to be able to perform an en masse relocation (there have been a few examples, such as when Square moved from Washington to California, or when Bungie moved from Illinois to Washington), but that isn’t really what anyone is trying for: as development teams grow, new facilities are needed, and new development teams are created. These new studios and teams are in a prime position to make use of the lower development costs of setting up in a less expensive city. It would be foolish for a large game developer to not at least consider this when building out their next team.
The cities I expect to be great additions:
- Portland, Oregon: the city has so much going for it, and is already starting to undergo a bit of a cultural explosion thanks to its fantastic music and art scene, green policies, and laid-back atmosphere.
- Minneapolis/St. Paul, Minnesota: it’s been largely off the radar for a lot of people, yet sports a remarkable diversity within the area, low costs, and is something of a jewel of the central states.
- Boulder, Colorado: it is already becoming a pretty significant tech hotspot, housing a number of startups and offering a range of support for the software industry.
So, for about a year and a half now, I’ve been playing an online nation-simulation game called CyberNations, and I’ve been meaning to mention it for a while. It’s a Persistent Browser-Based Game (PBBG) that I was introduced to by Snikt and Co., and I’ve been lassoed into serving as Minister of Foreign Affairs for the UberCon Alliance for about a year (hopefully someone else will actually run against me in the election come December/January). If anyone is interested in trying out a nation-simulation game (it usually takes up maybe 5 minutes a day, tops; it’s only when you start getting political and into the metagame that it really starts to suck up your time), I’d definitely recommend checking it out.
If you do start a nation, ping me in game, and I’ll see about helping you get set up and running. The UberCon Alliance is a pretty peaceful place; we do what we can to keep our heads down and keep other alliances friendly with us. Thankfully, we’ve not had a skirmish with another alliance in my tenure as MoFA, and we aim to keep it that way.
The harbinger of game’s ascendancy to all aspect of the modern life is not some piece of evocative art or Citizen Kane-a-like. Instead, our future appears in the form of a glorified bathroom scale. Still, if we can improve people’s lives with a bathroom scale, just imagine how games can transform the rest of our world. (Danc at Lost Garden)
What gets me is that there are assholes out there who manage to get funding to pull this sort of stunt, while there are hundreds, if not thousands, of folks working on mods and indie games who would KILL for even a fraction of that funding, and who can’t even get a publisher to pick up the phone.
What am I talking about? A little game called “Limbo of the Lost”, which received publisher funding for at least 6, if not 10 (as claimed), years, and which just recently came out. The vast majority (not 50 or 60%, but more like 80 or 90%) of its content is directly stolen from other games, often without so much as a color change or added component. This is not an epic fail; this is a LEGENDARY failure, across the board: first on the part of the corrupt developers, who I hope NEVER work in the industry again (I’m sorry, you do not get a second chance after this), and second on the part of the publisher, for not practicing even an iota of due diligence in reviewing the game.
They really did nail it.
I’ve always enjoyed Jerry’s writing over at Penny Arcade, so I suppose it should come as no surprise that I think he damn near nailed the game industry metaphor when he said this:
The stakes are high, and getting higher, and publishers who were once merely gun-shy are now officially paranoid, rolling around in a padded cell until the drugs take effect. Part of the reason GDC made me uncomfortable is that I could feel its culture pressing on me from all sides, and I knew it wasn’t mine. But the other part was that I got a sense of how brutal that life is, how unstable it can be, how maddening, and I just wanted to come home and match gems or some shit. I didn’t want to see it anymore. I don’t want to think about a cow’s quiet eyes every time I grip a hamburger.