If You See Something…

Generally speaking, I’m not very political. But the current “If you see something, say something” Homeland Security campaign creeps me the hell out. I can’t help but think about the old Soviet and eastern bloc informant system, which was pretty heinously evil.

As a social phenomenon anonymous letters were a frequent occurrence in the USSR during the period of mass political terror. These were the years when physical destruction of the opposition and prosecutions of “enemies of the people” helped Stalin consolidate his dictatorship. Bloody “purges” accompanied by constant appeals for “vigilance” and the eradication of complacency in the struggle against wreckers (saboteurs), spies, and “internal” counterrevolution created through the entire country an atmosphere of mistrust and suspicion. Many Soviet citizens, in constant turmoil over the threat to their freedom and their lives, turned to informing as a means of self-preservation by proving their “reliability.” They were no more amoral in their attitude toward society than society was toward them. Informing was then extolled as a “moral duty” of the Soviet citizen.

Anonymous letters were written not only by those who retained their belief in the rectitude and infallibility of communist ideals but also by those who, by victimizing as many others as possible, hoped to stay safe; and by those who were seeking revenge against personal enemies among their acquaintances and colleagues. (Zemtsov, Ilya. Encyclopedia of Soviet Life, pp. 13-14.)

Casual Games is a Misnomer

There’s been a lot of discussion of the expanding “casual games” market for a while now, and frankly I think the term is causing some serious confusion because it’s not really a useful title. It’s not grossly inaccurate, it’s just not useful. We talk about games like Farmville as casual games because the mechanics are relatively simple. The structural rules of the game are simple, but where it falls down as “casual” is that the social rules, the “meta” rules of the game, are more complex. There is a social give-and-take of desiring help from others while also (hopefully) wanting to avoid annoying the friends and relatives around you by flooding them with requests. People try to push those boundaries, though, to achieve more, to gain mastery of the game. The drive for achievement provokes a more “hardcore” approach to the game. The gameplay may be casual, but the intensity of play is more hardcore. The truly casual gamer doesn’t stick with it, because excelling requires more commitment than they are willing to give.

It would probably help at this point to define some terms and concepts, for the sake of clarity and communication.

  1. Play intensity: the level of investment of time, energy, and resources needed to achieve mastery of the game.
  2. Game mastery: the exact form mastery may take depends on the type of game, but generally involves a combination of implicit or explicit knowledge of how the game works that allows for maximizing the results of time spent playing. Maybe it will involve knowing every detail of a map and where and how players tend to play it; maybe it will involve having learned every skill combo in a game and developed the muscle memory to do each precisely and quickly.
  3. Investment threshold: the limit to how much time, energy, and resources a person is willing to invest in a given task. This varies from person to person and task to task, and is the crux of the difference between a “casual” and “hardcore” gamer.

I am fundamentally a casual gamer. Considering I write a game-related blog, wrote two theses related to game development, and work in QA in the game industry, I suppose some might think this is inaccurate, but hear me out. “Casual” gaming isn’t about the quantity of game time, nor the quality of play; it’s about the approach to gameplay. Put simply, I’m not really invested in mastery of most games. I will totally try to complete the game, find as many secrets and bonus content and other goodies as I can, but when doing so requires ludological mastery and precision, I usually walk away from that content (and sometimes the game). I’m not mad about it (more on that in a moment), at most a little disappointed that I didn’t or couldn’t accomplish whatever I was trying to do. An unspoken threshold of skill investment was exceeded. If “good enough” isn’t enough, then I’m done. That, to me, is the distinction of a casual gamer.

Think about hardcore gaming, and the concept of the ragequit. It’s a valid, if undesired, reaction to being placed in what is perceived as “unfair” conditions without clear methods for correction, and isn’t new — think about the trope of “taking your ball and going home.” But what, exactly, is unfair about it? The game itself ostensibly has immutable rulesets (short of hacks), and if the game designers did their job balancing the game well, then neither side has undue advantage over the other mechanically. The difference comes down to the players themselves and what level of mastery they’ve invested. In a ragequit situation, there’s generally at least one player who has a relatively high level of mastery of the game — they’ve invested the time and energy in understanding the mechanics of the game, and how to maximize their use of those mechanics in their favor. When you then pair them with someone who has the desire for mastery, but either hasn’t had the time to invest or lacks the capacity for the skills necessary to compete, the mismatch results in a ragequit. A casual player may try to play, decide they don’t want to have to invest days or weeks into gaining mastery, and walk away. The behavior of the other players may have been exactly the same, but the likelihood of a ragequit is less, since the casual player isn’t as invested in the game.

Different games can have different requirements for game mastery and still fall under the aegis of a casual or casual-friendly game. A more distinct delineation is to establish the play intensity of the game: examine the amount of investment in game mastery that is necessary to continue to move forward in the game. If there is little room for players who haven’t invested as many resources into mastery of the game (e.g. they didn’t spend hours playing the same zone or area, learning all its quirks and the best solutions to the challenges it poses), then that game will only be attractive to players with a high investment threshold, i.e. it isn’t a casual game, no matter how simple the interface is, no matter how complex the game mechanics are.

Now, what really fascinates me are the games that find ways to straddle the line. While some consider World of Warcraft a hardcore game, I consider it a casual game: the requirements for game knowledge and expertise in order to proceed are relatively low — you can play without investing significant time in HOW to play (gaining mastery instead of moving forward in the game). But I tend to be an altaholic. If I were to try and get into raiding and high-level instances (what’s considered the “end game”), I’m quite positive my perception of the game would shift to considering it a more “hardcore” game — to raid effectively, from all accounts, requires a more in-depth understanding of the mechanics of the game, as well as specific details of the instances themselves.

So, with all this in mind, the question I find myself asking is: are these sorts of casual gamers worth accounting for? We’re a pretty fickle lot; happy to drop a game if it’s no longer satisfying, and probably won’t even use some of your mini-games or features. My vote is: yes, they should be accounted for when designing games. A game can still be layered and complex; it can still reward greater mastery, and encourage high intensity play; it can still penalize errors and poor play. BUT, the complexity and greater mastery should enhance the player experience, not hinder it. Give a broad range of allowable game mastery and play intensity, and let the player decide their own level of involvement.

Comcast, Walled Gardens, and Games

There’s a lot of talk currently about the Level 3/Comcast mess, where Comcast is demanding additional money from Level 3 (an internet backbone and current partner with Netflix for providing streaming media) before they will allow streaming media onto their network. Comcast’s reasoning is that Level 3 is acting as a Content Delivery Network (CDN), not just as an internet backbone, and thus no longer qualifies for the peering agreements that would allow for traffic between the two networks without additional fees. Which is a bogus assertion, and feels like a money-grab: Comcast’s customers are paying for that bandwidth already, and making a legitimate request for the data being provided — all Level 3 is doing is sending the requested data. To then block the data that the customer has paid for (twice: they pay Comcast for the bandwidth, and Netflix for the content) directly violates the principles of an open internet.

This is a prime example of why there are concerns over the imminent Comcast-NBC Universal deal (for those who haven’t been paying attention: Comcast is trying to purchase NBC Universal from General Electric for $6.5 billion in cash, plus an additional $7.5 billion invested in programming), in terms of media consolidation and vertical control effectively creating a walled garden. To quote Senator Bernie Sanders:

The sale of NBCU to Comcast would create an enormously powerful, vertically integrated media conglomerate, causing irreparable damage to the American media landscape and ultimately to society as a whole.

This is hardly the first time Comcast has been caught with their hand in the proverbial cookie jar, taking censorial action while claiming to be in favor of an open internet. Their behavior is antithetical to net neutrality on a fundamental and obvious level.

So, why does this matter to game development? A variety of reasons, actually. Regardless of what type of games you are talking about, modern gaming takes bandwidth: assets need to be downloaded, whether for a standalone game title or even the casual, cloud-based games you find on Armor Games or Kongregate or even Facebook. If there is any type of online component, there will be regular communication between client and server. This sort of bandwidth costs money, and if developers have to start paying additional fees to be allowed into walled gardens, the cost may reach a point where it is no longer feasible for many developers to continue. Already, a number of developers are looking at solutions to mitigate the costs of hosting content, such as distributed downloading via BitTorrent (yes, believe it or not, peer-to-peer isn’t just for illegal uses). While some price fluctuation is expected and reasonable as the market shifts and costs of hosting and bandwidth change, at what point do developers (including smaller developers without the resources of large publishers) have to start dealing directly with Comcast (or other gatekeepers) for the right to sell their own product to the public? One of the biggest benefits of the internet — open access, not having to go through a gatekeeper process and large publishers to share your work with the world — is already being challenged by device-specific gates, like the Apple App Store for the iPhone, and to a lesser extent the PlayStation Network, Xbox Live Arcade, and WiiWare. (I say lesser extent because those networks ostensibly can’t reach the rest of the internet without additional effort, if at all, whereas the iPhone App Store has no such issues.) We do not need, nor want, service providers blockading legitimate customers from our products.

Browser Hell

While there are a variety of methods to view the web, the vast majority of people use only one of a few options: Internet Explorer, Firefox, Safari, Opera, and (johnny-come-lately but gaining market share fast) Chrome. While it’s fantastic that each of these browsers is doing well enough to be considered a major player, the problem is that they all have some pretty serious failings.

The problems with IE are well documented, and frankly, given that it’s Windows-only, I’m going to gloss over it here by simply saying: don’t use it unless you have to. Don’t support it unless you have to. Just. Don’t. This may change with the upcoming IE9, as there’s been a BIG push by developers to get Internet Explorer up to date and standards compliant. If even half the features and support Microsoft has promised actually make it into the final product, Internet Explorer may well be worth another look. In the meantime, take a pass.

Next up is Firefox, a very popular open-source effort run by Mozilla. It’s free, it’s open source, it’s cross-platform, and there are lots of themes and profiles and extensions you can get to make the browser do more, all of which makes it the darling of the geek community. It isn’t without its faults, however: the same extensions that make Firefox useful often contribute to browser instability, but Firefox without extensions is… well, lackluster. Which is to say: a plain copy of Firefox is a perfectly serviceable browser, but lacks anything to set it apart from other major browsers. That, coupled with one of the slower load times and a rather substantial resource footprint, makes it a less than ideal solution for someone trying to run a lean, stable system.

While Safari doesn’t have anywhere near the usage rates of IE or Firefox, it’s still a major contender in the browser wars, for three reasons: 1) It’s the default browser on every Mac system, and has the highest browser share on Macintosh computers; 2) It’s the default (and, until Opera Mini managed to strongarm its way onto it, only) browser on the iPhone, iPod Touch, and iPad; and 3) It’s cross-platform and free. I’ve been a diehard Safari user since it came out, only occasionally switching to Firefox or Camino. However, as they’ve continued to add more features, the overall quality has (in my opinion) gone down. Reports of stability issues are prevalent on the Windows version, and I’ve been discovering massive resource consumption on my Mac. Since Safari 5, the memory footprint has grown significantly, causing repeated beachballs for the most basic browsing tasks because my laptop, with 2 GB of RAM, was out of memory. (My frustration with this is actually what prompted this post.) I can only assume it’s a memory leak that slipped past them, because I cannot fathom how that sort of resource consumption would be acceptable for a shipping product.

Opera is a trooper from the old browser wars. While it has incredible market penetration on devices and globally, as a desktop web browser it never really got a strong foothold in the U.S. They’ve continued to improve the browser over a number of years (the current version as of this writing is 10.60), and at this point boast one of the fastest, most standards-compliant browsers on the market, with a ridiculous number of features. Which is the problem: there are so many features and customizations and tie-in services like Opera Unite and Opera Link that it’s incredibly easy for the average user to get mired in unwanted complexity. Additionally, while they have support for widgets (which can even work as standalone applications from the desktop), I had trouble finding any plugins to fix some egregious oversights (despite all those features, Opera tends to only play with itself — service integration with third-party options like Evernote or Delicious is non-existent). I found some of the interface cumbersome, but was willing to work through that (all browsers have some quirks, after all); what put me off was the sheer number of browser themes that were Windows-only, leaving Mac users very few options for finding a more suitable interface.

The last of the “big” browsers I wanted to mention is Google’s foray into the browser market, Google Chrome, and its development sibling Chromium. Despite being very new, Chrome has already gained significant market share, and not without reason: it’s fast; it breaks page viewing into separate processes to keep the entire browser from crashing when one page hits bad code; and, well, it’s made by Google. Frankly, while I appreciated some of the features of Chrome, I found it to be an incredibly slipshod application. The user interface was inconsistent and unclear on numerous occasions, with the preferences window being a morass of poorly explained buttons and hidden panels, and the handling of tabs becoming utterly useless once you get much over 20 tabs open. It’s easy to start cutting them some slack by saying “It’s a beta,” but let’s be realistic here. Google has made a point of hiring some of the smartest, most talented, capable people on the planet, and has invested millions into the development and marketing of Google Chrome already. A product with that sort of backing feeling this slapdash is embarrassing for them and frustrating for the user. (Final gripe: despite the process-splitting meant to help prevent browser crashes, Chrome crashed on me when I tried to quit.)

So there you have it, the biggest, most popular browsers out there. The reality is that they all have MAJOR FLAWS, and there is major work that should be done on all of them. The bright side is that each of these browsers is under active development, so a lot of the work that needs to be done will be done. Until the problems are fixed, however, I’m inclined to look into one of the numerous smaller browser projects being developed out there, and hopefully find a diamond in the rough that blows the big boys out of the water.

Where to Build Your Next Team

According to the ESA’s reports, the five states serving as game development hubs in the US are California, Washington, Texas, New York, and Massachusetts. This shouldn’t come as a surprise to anyone; cities like Seattle, San Diego, Austin, and their peripheral towns are often mentioned in the gaming press. This is fine — certain hubs are expected to rise up in any industry, and game development, at $22 billion domestically per year, absolutely qualifies as an industry. However, it is becoming increasingly apparent that there is a need to start expanding into new locations if studios expect to continue to grow profitably. It comes down to cost: the cost of living, and the cost of business.

The cities and regions where game developers are based right now tend to be expensive: the amount of money it takes to maintain the same quality of life is higher than in other cities. As an example, compare Portland, Oregon, and Seattle, Washington, two cities that offer similar climates, similar cultural opportunities, and overall a similar quality of life. In Seattle, average office lease rates run between $25 and $40 per square foot depending on where in the city you are (and where most of these companies are located, you’re looking at the high end of that range). A similar examination of Portland puts lease rates between $12 and $25 per square foot. (To put those prices in perspective, Bungie recently announced their move into downtown Bellevue, leasing 85,000 square feet. Assuming they got a killer deal and only paid $30 per square foot, that’s still $2,550,000. An equivalent space in Portland, assuming, say, $20 per square foot, is $1,700,000.) That’s an $850,000 price difference, and that’s only one part of the overall cost of doing business.
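The back-of-the-envelope comparison above is easy to sketch out. The square footage is the figure Bungie reported; the per-square-foot rates are the assumed examples from the text, not actual lease terms:

```python
# Rough lease-cost comparison using the figures cited above.
# The per-square-foot rates are assumed examples, not actual lease terms.

def lease_cost(square_feet: int, rate_per_sqft: float) -> float:
    """Total cost of a lease at a flat per-square-foot rate."""
    return square_feet * rate_per_sqft

SQUARE_FEET = 85_000      # Bungie's reported Bellevue space
SEATTLE_RATE = 30.0       # assumed "killer deal" rate for downtown Bellevue
PORTLAND_RATE = 20.0      # assumed mid-range Portland rate

seattle = lease_cost(SQUARE_FEET, SEATTLE_RATE)
portland = lease_cost(SQUARE_FEET, PORTLAND_RATE)

print(f"Seattle:  ${seattle:,.0f}")             # Seattle:  $2,550,000
print(f"Portland: ${portland:,.0f}")            # Portland: $1,700,000
print(f"Savings:  ${seattle - portland:,.0f}")  # Savings:  $850,000
```

The point isn’t the exact numbers — rates vary by building and lease term — but that the delta scales linearly with square footage, so the gap only widens as a studio grows.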

Looking at the cost of living for the employees themselves, median apartment rents in Portland are nearly half those in Seattle. While other price comparisons are less dramatic (the cost of heating a home doesn’t vary much, which is unsurprising considering the two cities share a similar climate), it still works out to a net savings for the employee to be in Portland. What this means for the employee is that they can maintain the same quality of life for less money. What it means for employers is that they can price their salaries accordingly (as they already do) and, again, save money to either a) bring down development costs, or b) hire more developers.

Of course, so far we’ve only discussed basic numbers, on the assumption that one would have to pay for everything involved. For a number of developers, this is already not the case: both Ontario and Quebec (and their respective cities Toronto and Montreal) offer significant subsidies to game companies that build studios there. It was reported a few years ago that the city of Montreal and the province of Quebec combined subsidized over half the salaries at Ubisoft and EA, two major developers and publishers. Ubisoft is expanding again, opening a new studio in Toronto, where the provincial government has committed to investing $226 million in Ubisoft over the next ten years. Here in the U.S., eight states have already passed initiatives to encourage game development, including significant tax breaks and other incentives to draw the industry in. The city of Savannah has gone so far as to offer a full year of office space free to any company willing to commit to offices there.

Now, I realize it’s pretty rare for a company to be in a position to perform an en masse relocation (there have been a few examples, such as when Square moved from Washington to California, or when Bungie moved from Illinois to Washington), but that isn’t really what anyone is trying for: as development teams grow, new facilities are needed and new development teams are created. These new studios and teams are in a prime position to take advantage of the lower costs of setting up in a less expensive city. It would be foolish for a large game developer not to at least consider this when building out their next team.

The cities I expect to be great additions:

  1. Portland, Oregon: the city has so much going for it, and is already starting to undergo a bit of a cultural explosion thanks to its fantastic music and art scene, green policies, and laid-back atmosphere.
  2. Minneapolis/St. Paul, Minnesota: it’s been largely off the radar for a lot of people, yet sports a remarkable diversity within the area, low costs, and is something of a jewel of the central states.
  3. Boulder, Colorado: it is already becoming a pretty significant tech hotspot, housing a number of startups and offering a range of support for the software industry.

Intel's Social Media Guidelines

In an excellent example of corporate social transparency, Intel just posted the social media guidelines they expect their employees to follow when engaging the public. I think this is fantastic, and a great example of a major company “walking the walk” when it comes to social media and community interaction. For anyone engaging in online communities and social media, they’re an excellent guide to go by.

LiveBlog: CyborgCamp

9:43am: Currently in the Forum at Cubespace, waiting for opening remarks at CyborgCamp. Amber Case (@caseorganic) appears to be MC’ing.

9:50am: there are several extras for following what’s happening with CyborgCamp (#cyborgcamp): CyborgCamp Backchan.nl, CyborgCamp LiveStream, Twitter Tracking.

Should definitely check out the sponsors at CyborgCamp.com.

10:00am: Still going through sponsors, each is getting a chance to get up and sort of give their spiel as to what they do. I’ve yet to see any that aren’t worth checking out.

Explanation of an unconference — a mixture of established presentations and blocks of time where you can create breakout sessions. If you have something you want to discuss or present, just put it on a card and put the card on the grid. The point is to make these conferences work for you. There is no commitment as an attendee — go where you’re finding value; if something isn’t what you wanted, go somewhere else.

10:12am: Okay, starting to organize the unconferences and meeting back here in 30.