“Casual Games” is a Misnomer

There’s been a lot of discussion of the expanding “casual games” market for a while now, and frankly I think the term is causing some serious confusion, because it’s not a useful label. It’s not grossly inaccurate, it’s just not useful. We talk about a game like Farmville being casual because its mechanics are relatively simple. The structural rules of the game are simple, but where it falls down as “casual” is that the social rules, the “meta” rules of the game, are more complex. There is a social give-and-take of desiring help from others while (hopefully) wanting to avoid annoying the friends and relatives around you by flooding them with requests. People push those boundaries, though, to achieve more, to gain mastery of the game. The drive for achievement provokes a more “hardcore” approach: the gameplay may be casual, but the intensity of play is hardcore. The truly casual gamer doesn’t stick with it, because excelling requires more commitment than they are willing to give.

It would probably help at this point to define some terms and concepts, for the sake of clarity and communication.

  1. Play intensity: the level of investment of time, energy, and resources needed to achieve mastery of the game.
  2. Game mastery: the exact form mastery may take depends on the type of game, but generally involves a combination of implicit or explicit knowledge of how the game works that allows for maximizing the results of time spent playing. Maybe it will involve knowing every detail of a map and where and how players tend to play it; maybe it will involve having learned every skill combo in a game and developed the muscle memory to do each precisely and quickly.
  3. Investment threshold: the limit to how much time, energy, and resources a person is willing to invest in a given task. This varies from person to person and task to task, and is the crux of the difference between a “casual” and a “hardcore” gamer (sketched in code below).
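To make the relationship between these terms concrete, here is a deliberately toy sketch in Python. This is purely my own illustration (the names and numbers are invented, not drawn from any formal framework): a player keeps playing only while the mastery a game demands stays within their personal investment threshold.

```python
from dataclasses import dataclass

@dataclass
class Player:
    name: str
    investment_threshold: float  # hours of learning the player will tolerate

@dataclass
class Game:
    name: str
    mastery_required: float  # hours of learning needed to keep progressing

def keeps_playing(player: Player, game: Game) -> bool:
    # A player sticks with a game only while the mastery it demands
    # stays within what they are willing to invest.
    return game.mastery_required <= player.investment_threshold

casual = Player("casual", investment_threshold=5)
hardcore = Player("hardcore", investment_threshold=200)
endgame = Game("raid endgame", mastery_required=80)

print(keeps_playing(casual, endgame))    # False -- threshold exceeded, walks away
print(keeps_playing(hardcore, endgame))  # True -- happy to keep investing
```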

I am fundamentally a casual gamer. Considering I write a game-related blog, wrote two theses related to game development, and work in QA in the game industry, I suppose some might think this is inaccurate, but hear me out. “Casual” gaming isn’t about the quantity of game time, nor the quality of play; it’s about the approach to gameplay. Put simply, I’m not really invested in mastery of most games. I will totally try to complete the game and find as many secrets, bonus content, and other goodies as I can, but when doing so requires ludological mastery and precision, I usually walk away from that content (and sometimes the game). I’m not mad about it (more on that in a moment), at most a little disappointed that I didn’t, or couldn’t, do whatever it was I was attempting. An unspoken threshold of skill investment was exceeded. If “good enough” isn’t enough, then I’m done. That, to me, is the distinction of a casual gamer.

Think about hardcore gaming, and the concept of the ragequit. It’s a valid, if undesired, reaction to being placed in what is perceived as “unfair” conditions without clear methods for correction, and it isn’t new — think of the trope of “taking your ball and going home.” But what, exactly, is unfair about it? The game itself ostensibly has immutable rulesets (short of hacks), and if the game designers did their job balancing the game well, then neither side has undue advantage over the other mechanically. The difference comes down to the players themselves and what level of mastery they’ve invested. In a ragequit situation, there’s generally at least one player who has a relatively high level of mastery of the game — they’ve invested the time and energy in understanding the mechanics of the game, and in how to maximize their use of those mechanics in their favor. When you then pair them with someone who has the desire for mastery but either hasn’t had the time to invest or lacks the capacity for the necessary skills to compete, the mismatch results in a ragequit. A casual player may try to play, decide they don’t want to invest days or weeks into gaining mastery, and walk away. The behavior of the other players may have been exactly the same, but the likelihood of a ragequit is lower, since the casual player isn’t as invested in the game.

Different games can have different requirements for game mastery and still fall under the aegis of casual or casual-friendly games. A more distinct delineation is to establish the play intensity of the game: examine the amount of investment in game mastery that is necessary to keep moving forward in the game. If there is little room for players who haven’t invested as many resources into mastery of the game (e.g. they didn’t spend hours playing the same zone or area, learning all its quirks and the best solutions to the challenges it poses), then that game will only be attractive to players with a high investment threshold, i.e. it isn’t a casual game, no matter how simple the interface, no matter how simple the game mechanics.

Now, what really fascinates me are the games that find ways to straddle the line. While some consider World of Warcraft a hardcore game, I consider it a casual game: the requirements for game knowledge and expertise in order to proceed are relatively low — you can play without investing significant time in HOW to play (gaining mastery instead of moving forward in the game). But I tend to be an altaholic. If I were to try to get into raiding and high-level instances (what’s considered the “end game”), I’m quite positive my perception would shift toward considering it a more “hardcore” game — raiding effectively, from all accounts, requires a more in-depth understanding of the mechanics of the game, as well as of the specific details of the instances themselves.

So, with all this in mind, the question I find myself asking is: are these sorts of casual gamers worth accounting for? We’re a pretty fickle lot: happy to drop a game if it’s no longer satisfying, and we probably won’t even use some of your mini-games or features. My vote is yes, they should be accounted for when designing games. A game can still be layered and complex; it can still reward greater mastery and encourage high-intensity play; it can still penalize errors and poor play. BUT the complexity and greater mastery should enhance the player experience, not hinder it. Give a broad range of allowable game mastery and play intensity, and let players decide their own level of involvement.

Comcast, Walled Gardens, and Games

There’s a lot of talk currently about the Level 3/Comcast mess, where Comcast is demanding additional money from Level 3 (an internet backbone and current partner with Netflix for providing streaming media) before it will allow streaming media onto its network. Comcast’s reasoning is that Level 3 is acting as a Content Delivery Network (CDN), not just as an internet backbone, and thus no longer qualifies for the peering agreements that allow traffic between the two networks without additional fees. This is a bogus assertion, and it feels like a money grab: Comcast’s customers are already paying for that bandwidth and making a legitimate request for the data being provided — all Level 3 is doing is sending the requested data. To then block data that the customer has paid for (twice: they pay Comcast for the bandwidth, and Netflix for the content) directly violates the principles of an open internet.

This is a prime example of why there are concerns over the imminent Comcast-NBC Universal deal (for those who haven’t been paying attention: Comcast is trying to purchase NBC Universal from General Electric for $6.5 billion CASH, plus an additional $7.5 billion invested in programming), in terms of media consolidation and vertical control effectively creating a walled garden. To quote Senator Bernie Sanders:

The sale of NBCU to Comcast would create an enormously powerful, vertically integrated media conglomerate, causing irreparable damage to the American media landscape and ultimately to society as a whole.

This is hardly the first time Comcast has been caught with their hand in the proverbial cookie jar, taking censorial action while claiming to be in favor of an open internet. Their behavior is antithetical to net neutrality on a fundamental and obvious level.

So, why does this matter to game development? A variety of reasons, actually. Regardless of what type of games you are talking about, modern gaming takes bandwidth: assets need to be downloaded, whether for a standalone game title or for the casual, cloud-based games you find on Armor Games or Kongregate or even Facebook. If there is any type of online component, there will be regular communication between client and server. This bandwidth costs money, and if developers have to start paying additional fees to be allowed into walled gardens, the cost may reach a point where it is no longer feasible for many developers to continue. Already, a number of games are looking at ways to mitigate the costs of hosting content, such as distributed downloading via BitTorrent (yes, believe it or not, peer-to-peer isn’t just for illegal uses). While some price fluctuation is expected and reasonable as the market shifts and the costs of hosting and bandwidth change, at what point do developers (including smaller developers without the resources of large publishers) have to start dealing directly with Comcast (or other gatekeepers) for the right to sell their own product to the public?

One of the biggest benefits of the internet (open access: not having to go through a gatekeeper process and large publishers to share your work with the world) is already being challenged by device-specific gates like the Apple App Store for the iPhone, and to a lesser extent the PlayStation Network, Xbox Live Arcade, and WiiWare. (I say lesser extent because those networks ostensibly can’t reach the rest of the internet without additional effort, if at all, whereas the iPhone App Store has no such issues.) We do not need, nor want, service providers blockading legitimate customers from our products.

Browser Hell

While there are a variety of ways to view the web, the vast majority of people use one of a few options: Internet Explorer, Firefox, Safari, Opera, and (johnny-come-lately but gaining market share fast) Chrome. While it’s fantastic that each of these browsers is doing well enough to be considered a major player, the problem is that they all have some pretty serious failings.

The problems with IE are well documented, and frankly, given that it’s Windows-only, I’m going to gloss over it here by simply saying: don’t use it unless you have to. Don’t support it unless you have to. Just. Don’t. This may change with the upcoming IE9, as there’s been a BIG push by developers to get Internet Explorer up to date and standards compliant. If even half the features and support Microsoft has promised actually make it into the final product, Internet Explorer may well be worth another look. In the meantime, take a pass.

Next up is Firefox, the very popular open-source effort run by Mozilla. It’s free, it’s open source, it’s cross-platform, and there are lots of themes, profiles, and extensions you can get to make the browser do more, all of which makes it the darling of the geek community. It isn’t without its faults, however: the same extensions that make Firefox useful often contribute to browser instability, but Firefox without extensions is… well, lackluster. Which is to say: a plain copy of Firefox is a perfectly serviceable browser, but it lacks anything to set it apart from the other major browsers. That, coupled with one of the slower load times and a rather substantial resource footprint, makes it a less-than-ideal solution for someone trying to run a lean, stable system.

While Safari doesn’t have anywhere near the usage rates of IE or Firefox, it’s still a major contender in the browser wars, for three reasons: 1) it’s the default browser on every Mac system, and has the highest usage rates on Macintosh computers; 2) it’s the default (and until Opera Mini managed to strongarm its way onto the platform, the only) browser on the iPhone, iPod Touch, and iPad; and 3) it’s cross-platform and free. I’ve been a diehard Safari user since it came out, only occasionally switching to Firefox or Camino. However, as they’ve continued to add features, the overall quality has (in my opinion) gone down. Reports of stability issues are prevalent on the Windows version, and I’ve been discovering massive resource consumption on my Mac. Since Safari 5, the memory footprint has grown significantly, causing repeated beachballs for the most basic browsing tasks because my laptop, with 2 GB of RAM, was out of memory. (My frustration with this is actually what prompted this post.) I can only assume it’s a memory leak that slipped past them, because I cannot fathom how that sort of resource consumption would be acceptable in a shipping product.

Opera is a trooper from the old browser wars. While it has incredible market penetration on devices and globally, as a desktop web browser it never really got a strong foothold in the U.S. They’ve continued to improve the browser over the years (the current version as of this writing is 10.60), and at this point they boast one of the most standards-compliant, fastest browsers on the market, with a ridiculous number of features. Which is the problem: there are so many features, customizations, and tie-in services like Opera Unite and Opera Link that it’s incredibly easy for the average user to get mired in unwanted complexity. Additionally, while Opera supports widgets (which can even run as standalone applications from the desktop), I had trouble finding any plugins to fix some egregious oversights (despite all those features, Opera tends to only play with itself — integration with third-party services like Evernote or Delicious is nonexistent). Some of the interface I found cumbersome, but I was willing to work through that (all browsers have some quirks, after all); what put me off was the sheer number of browser themes that are Windows-only, leaving Mac users very few options for finding a more suitable interface.

The last of the “big” browsers I wanted to mention is Google’s foray into the browser market, Google Chrome, and its development sibling Chromium. Despite being very new, Chrome has already gained a significant market share, and not without reason: it’s fast; it breaks page viewing into separate processes to keep the entire browser from crashing when one page hits bad code; and, well, it’s made by Google. Frankly, while I appreciated some of the features of Chrome, I found it to be an incredibly slipshod application. The user interface was inconsistent and unclear on numerous occasions, with the preferences window being a morass of poorly explained buttons and hidden panels, and its handling of tabs becomes utterly useless once you get much over 20 tabs open. It’s easy to cut them some slack by saying “It’s a beta,” but let’s be realistic here. Google has made a point of hiring some of the smartest, most talented, most capable people on the planet, and has invested millions in the development and marketing of Google Chrome already. A product with that sort of backing feeling this slapdash is embarrassing for them and frustrating for the user. (Final gripe: despite the process-splitting meant to prevent browser crashes, Chrome crashed on me when I tried to quit.)
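As a toy illustration of why that per-page process model matters, here’s a minimal sketch in Python (my own illustration, nothing to do with Chrome’s actual C++ internals): each “page” renders in its own OS process, so one renderer dying doesn’t take the parent down with it.

```python
import multiprocessing as mp

def render_page(url: str) -> None:
    """Stand-in for a renderer process; the 'bad' page deliberately crashes."""
    if "bad" in url:
        raise RuntimeError(f"renderer for {url} hit bad code")
    print(f"{url} rendered fine")

if __name__ == "__main__":
    pages = ["http://example.com/good", "http://example.com/bad"]
    renderers = [mp.Process(target=render_page, args=(u,)) for u in pages]
    for r in renderers:
        r.start()
    for r in renderers:
        r.join()
        # A non-zero exit code means that renderer died -- but only that one.
        print(f"{r.name} exited with code {r.exitcode}")
    print("browser (parent) process still alive")
```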

So there you have it, the biggest, most popular browsers out there. The reality is that they all have MAJOR FLAWS, and major work remains to be done on all of them. The bright side is that each of these browsers is under active development, so a lot of that work will get done. Until the problems are fixed, however, I’m inclined to look into one of the numerous smaller browser projects being developed out there, and hopefully find a diamond in the rough that blows the big boys out of the water.

Where to Build Your Next Team

According to the ESA’s reports, the five states serving as game development hubs in the US are California, Washington, Texas, New York, and Massachusetts. This shouldn’t come as a surprise to anyone; cities like Seattle, San Diego, Austin, and their peripheral towns are often mentioned in the gaming press. This is fine — certain hubs are expected to rise up in any industry, and game development, at $22 billion domestically per year, absolutely qualifies as an industry. However, it is becoming increasingly apparent that studios need to start expanding into new locations if they expect to continue to grow profitably. It comes down to cost: the cost of living, and the cost of business.

The cities and regions where game developers are based right now tend to be expensive: the amount of money it takes to maintain the same quality of life is higher than in other cities. As an example, compare Portland, Oregon, and Seattle, Washington: two cities that offer similar climates, similar cultural opportunities, and overall a similar quality of life. In Seattle, average office lease rates run between $25 and $40 per square foot, depending on where in the city you are (and where most of these companies are located, you’re looking at the high end of that range). A similar examination of Portland puts lease rates between $12 and $25 per square foot. (To put those prices in perspective, Bungie recently announced their move into downtown Bellevue, leasing 85,000 square feet. Assuming they got a killer deal and only paid $30 per square foot, that’s still $2,550,000.) An equivalent space in Portland, assuming, say, $20 per square foot, is $1,700,000. That’s an $850,000 difference, and it’s only one part of the overall cost of doing business.
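For transparency, here’s that back-of-the-envelope math as a trivial Python sketch (the rates are the hypothetical per-square-foot, per-year figures from above, not actual lease terms):

```python
# Annual office lease comparison, using the hypothetical
# per-square-foot rates quoted above.
SQUARE_FEET = 85_000          # Bungie's reported Bellevue lease

seattle_rate = 30             # $/sq ft, assuming a "killer deal"
portland_rate = 20            # $/sq ft, mid-range for Portland

seattle_cost = SQUARE_FEET * seattle_rate    # $2,550,000
portland_cost = SQUARE_FEET * portland_rate  # $1,700,000

print(f"Seattle:  ${seattle_cost:,}")
print(f"Portland: ${portland_cost:,}")
print(f"Savings:  ${seattle_cost - portland_cost:,}")   # $850,000
```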

Looking at the cost of living for the employees themselves, median apartment rents drop by nearly half between Seattle and Portland. While other price comparisons are less dramatic (the cost of heating a home doesn’t vary much, which is unsurprising considering the similar climates), it still works out to a net savings for the employee to be in Portland. What this means for the employee is that they can live at the same quality of life for less money. What this means for employers is that they can price their salaries accordingly (as they already do) and, again, save money to either a) bring down development costs, or b) hire more developers.

Of course, so far we’ve only discussed basic numbers, on the assumption that one would have to pay for everything involved. For a number of developers, this is already not the case: both Ontario and Quebec (and their respective cities, Toronto and Montreal) offer significant subsidies to game companies that build studios there. It was reported a few years ago that the city of Montreal and the province of Quebec combined subsidized over half the salaries for Ubisoft and EA, two major developers and publishers. Ubisoft is expanding again, opening a new studio in Toronto, where the province has committed to investing $226 million in Ubisoft over the next ten years. Here in the U.S., eight states have already passed initiatives to encourage game development, including significant tax breaks and other incentives to draw the industry in. The city of Savannah has gone so far as to offer a full year of office space free to any company willing to commit to offices there.

Now, I realize it is pretty rare for a company to be in a position to perform an en masse relocation (there have been a few examples, such as when Square moved from Washington to California, or when Bungie moved from Illinois to Washington), but that isn’t really what anyone is trying for: as development teams grow, new facilities are needed and new development teams are created. These new studios and teams are in a prime position to take advantage of the lower costs of setting up in a less expensive city. It would be foolish for a large game developer not to at least consider this when building out their next team.

The cities I expect to be great additions:

  1. Portland, Oregon: the city has so much going for it, and is already starting to undergo a bit of a cultural explosion thanks to its fantastic music and art scene, green policies, and laid-back atmosphere.
  2. Minneapolis/St. Paul, Minnesota: it’s been largely off the radar for a lot of people, yet sports a remarkable diversity within the area, low costs, and is something of a jewel of the central states.
  3. Boulder, Colorado: it is already becoming a pretty significant tech hotspot, housing a number of startups and offering a range of support for the software industry.

The MacBook Update

As I’m sure many are aware, Apple updated their laptop line today. There are some interesting technological advances going on, but (as their stock fluctuations today can attest) there seems to be a large backlash against several changes they made to their lineup — some justifiable, some spurious. Let’s look at the spurious complaints first:

  • “There’s no DVI port!” — and were you making the same complaint when DVI started to supersede VGA? Let’s be objective about this: DisplayPort is a VESA-certified industry standard meant specifically to address the needs of the computing market, in the same way that HDMI is meant to address the consumer electronics market. There are adapters already in existence to convert from DisplayPort to DVI (or even VGA) and back again. I know it’s hard when new standards come out, but you need to recognize that they’re coming out because what we have is no longer suitable for moving forward. HDMI is a marked improvement over Component. Well, DisplayPort is a marked improvement over DVI.
  • “There’s no button on the trackpad!” — anyone who has been paying attention could see this coming: look at the iPhone and iPod Touch and tell me you couldn’t foresee virtualized buttons. There are some complaints from people who hate “tap-to-click,” and I can certainly concede that, but from hands-on reports of the new setup, the system is designed in such a way that your muscle memory to hit the button with your thumb will still work in exactly the same fashion. The current button on the trackpads drops a millimeter, maybe two — you are in effect already “tapping” the button. The short of it is that by going to a virtualized solution, it becomes easier to adapt the trackpad to specific needs and solutions. I’m certain I can’t be the only one who sees this.

There are definitely some very real gripes to be had, however:

  • “The black keyboard and black bezel are ugly.” — yes, I’m counting this as a real gripe. While from the exterior the new laptops are sexy, when you open them up, the result is a step backward; it is reminiscent of several offerings by Sony, Acer, even HP. Some are heralding it as a return to the Titanium PowerBook design philosophy, but I don’t really see that as a good thing. Why go back, when they clearly had so many options to move forward? Their external keyboards use a white-on-silver color scheme that would be markedly less jarring, let alone going silver-on-silver like the prior MacBook Pros. I consider this a valid complaint because part of what gets people to buy a Mac instead of a PC isn’t just the OS, it’s the hardware. The more it looks like everyone else’s offerings, the less reason there is to purchase the (more expensive) Mac option. Black on silver does not look good, I’m sorry. If they were going to go with the black bezel and black keyboard, in my opinion they should have gone with a black body. Either anodized or powder-coated black aluminum would still qualify for their EPEAT Gold rating, and would be more aesthetically unified overall.
  • “No FireWire in the MacBooks!” — completely agreed. I don’t know what the hell Apple was thinking. Adding a FireWire 800 port would not have been difficult, even in the smaller enclosure, and by doing so there would be a wealth of devices available, including daisy-chained hard drives and Apple’s own Target Disk Mode. Yes, that’s right, they’ve removed a technology that makes it easier to buy more of their products (by easing the process of migration). I understand the desire to further delineate between the MacBook and the MacBook Pro, but this is a grievous oversight.
  • “The dual graphics cards are neat, but you can only use one or the other!” — I’m on the fence as to whether this is a valid complaint. My suspicion is that when 10.6 rolls out and OpenCL and Grand Central become more of a reality, we’ll start seeing the ability to prioritize processes and send some to one card and others to the other. If not from Apple, then from a third-party developer. Given that NVIDIA has gone on record saying they’re supporting OpenCL, I think this is a reasonable prognostication. In the meantime, however, it’s just a “shiny-shiny” to give the marketers something to chew on. I really don’t care about the difference between a 4-hour and a 5-hour battery life — more often than not, if I’m in one place for that long, I’m able to plug in somewhere. So why not save the space in the laptop and just use the high-end graphics card? (Of course, I consider this yet another reason to believe that there WILL be communication across the two cards in the future.)

I’m still very interested in getting a new MacBook Pro, as my current machine is getting long in the tooth. Once I have a job with which I can justify the expense, I imagine I’ll be getting one of the new machines, but if you’re in the generation immediately prior, I’d be hard pressed to encourage an upgrade. Honestly, a part of me (as lustful for a new machine as I am) wants to wait and see if they start offering a gunmetal-black iteration in six months.

No AP, Please

Patrick highlights recent unacceptable behavior on the part of the AP over at Making Light. He makes some excellent points about how restrictive and ridiculous this sort of attempt at strong-arming individuals can be. A core principle of copyright law is the role of “fair use” in allowing others to provide feedback, response, analysis, and commentary on a given work, since copyright law itself exists as an incentive to promote scientific and cultural advancement. A blogger referencing a work (e.g. linking to the article, quoting specific passages, or summarizing/restating the basis of the article) clearly falls within this principle, on several fronts.

I will concede cases where the majority or entirety of an article is quoted, particularly when it is done without commentary, but that’s not what’s being discussed here. What’s happening in THIS circumstance is pure, unbridled greed, without even a nod to the law as it stands:

Notwithstanding the provisions of sections 106 and 106A, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use the factors to be considered shall include —

(1) the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;

(2) the nature of the copyrighted work;

(3) the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and

(4) the effect of the use upon the potential market for or value of the copyrighted work.

The fact that a work is unpublished shall not itself bar a finding of fair use if such finding is made upon consideration of all the above factors. (Copyright Act of 1976, 17 U.S.C. § 107)

Photoshop CS4's Interface

So, John Nack has previewed the new Photoshop interface, which has been drawing a fair amount of criticism around the ’net for being “un-Mac-like”. I think the criticism is frankly a lot of gnashing of teeth because it’s different, and very little else. As Nack points out, if you bother looking at some of the best “Mac-like” apps, including applications made by Apple itself, much of the new design parallels them closely. It’s a very clean, modern interface, and it keeps pace with the trend toward encapsulated applications (the document-driven, single-window experience). Frankly, I like it, and I look forward to it.

Let’s face it: any user who multitasks ends up with a boatload of windows open at any given time, and there still aren’t any really effective ways to manage them all. This is becoming increasingly problematic as we find ways to have more and more windows up at once (I’m looking at you, Spaces), and so user interfaces have been forced to rethink how they display their data, to better encapsulate it, so that everything related to a particular document STAYS with that document. Tabbed browsing was the start, but it’s entirely logical that this design philosophy would (and should) enter other applications. Some of my favorite applications are ones that integrate data into the session window — a prime example is Scrivener. In Scrivener, the inspector is attached to the document window, rather than sitting in a separate “inspector” pane or window. From a design perspective, this makes it absolutely clear which document you are inspecting, which is particularly important when you have multiple documents open at once. The application is designed so that everything you need to do to the document can be done from one document window, with multiple files within it. You can even split the window to display attached research files or another page of writing at the same time, or, if you decide you really NEED something to be in a separate window, that option is only a right-click away. That is GOOD DESIGN: it avoids juggling multiple windows just to get your work done.

Detractors who say it’s not “Mac-like” haven’t been paying attention. There is, of course, the opportunity to get it wrong and not make an effective interface, but this is true regardless of whether you’re talking about a unified interface or a multi-window one. It’s pretty clear, all the way down to the interface of the Finder, that we’re shifting toward a single-window-per-need design philosophy (if you don’t believe me, use the “Find…” option in OS X 10.5, or “Create Burn Folder”, or try out iChat with “Collect Chats into Single Window” turned on and tell me it’s not a better way to juggle a dozen conversations).

The key to what I’m saying is that it is PER DOCUMENT, or PER NEED. The places where I’ve seen single-window interfaces succeed are where elements that belong together are placed together. A window, in essence, becomes a method to encapsulate the data related to the task or project it was created for. As such, there are going to be times it DOESN’T make sense. Frankly, I’m just glad designers are realizing that there are times that it DOES.

Glory by Essie Jain

I know I’ve heard this song somewhere else, but for the life of me, I can’t place where. I do, however, know where I was (re)introduced to it: this afternoon, driving home from work, when Essie Jain came into KEXP for a live performance and interview, and I immediately became enamored with this young lady from London. Her interview was delightful and personable, and the music was simply stunning. She’s currently touring for the release of her first album, We Made This Ourselves, of which “Glory” is the first track.

If you’re looking for layers or technical complexity, you’re looking in the wrong place. “Glory” is primarily vocals and a guitar, with another guitar accompanying, adding texture to the melody. That’s about it. Despite this simplicity (or perhaps because of it), “Glory” manages to capture a particular mood and atmosphere that simultaneously reminds me of walking in the summer twilight as the day’s heat cools off, and of spending an evening curled up by a fireplace with a good book and hot chocolate as the snow falls outside. It may seem odd at first to have these two images juxtaposed, but if you think about it, they both depict the same mellow, dreamy state of being. It’s a great feeling to have, and that essence distilled into a song is equally great to listen to.

[“Glory” by Essie Jain Free MP3]

[Official Site]

[Essie Jain on MySpace]

Gone for Good by Morphine

Today I’d like to go back to an old favorite, Morphine. On this day in 1999, Mark Sandman, frontman of Morphine, died of a heart attack while on stage in Italy. While I’m sure he would have preferred not to die, I suspect he would have appreciated going out playing as he did. Sandman, like his compatriots in the band, was a consummate musician, often playing unique, heavily modified instruments to create an unparalleled, interesting sound. Given the circumstances, the song I’ve selected seems perhaps a little morbid, but appropriate: “Gone for Good” off Yes.

It’s a quiet song, just sparse guitar work and Sandman’s deep, resonating vocals. The lyrics paint a clear picture, and really convey a sense of loneliness and rejection, the unrequited lover coming to terms with the realities of a situation. “Never gonna walk up your walk and ring your bell and feel you fall into my arms. No, never gonna see you again — you’re gone for good.”

At various points in my life, this song has struck a personal chord with me. I won’t say all, but I imagine most of us have gone through one of these periods, where friends or family are dying, or you’ve been rejected by the love of your life, or hell, all of the above. You feel fragile, right on the hairy edge of breaking, as you realize the loss. This song perfectly captures that. It may, perhaps, depress some, but I hope listeners can appreciate the craftsmanship inherent in capturing that emotion and distilling it into a song, regardless of the mood it may put you in.

[“Gone for Good” on iTunes]

[Morphine on Wikipedia]

Tomato Song by PWRFL POWER

I’ll admit it: I’m a big fan of KEXP, and I often end up raiding their fine selection of Song of the Day podcasts for songs to review. I also, however, read their blog, where they often link to some really interesting independent artists, alternative tracks, remixes, and things you might not always hear on the radio. That’s the case with “Tomato Song” by PWRFL POWER. I haven’t been able to find info on buying an actual album, but he (yeah, it’s one guy: Kazutaka Nomura, a 22-year-old Japanese international student currently living in Seattle) is apparently touring a fair bit (mostly in Japan). He’ll be at the Capitol Hill Block Party at the end of the month, but I’ll still be in Vermont. Ah well.

It’s an interesting song: acoustic guitar with an unusual progression and simply sung lyrics, which gives it a childlike feel that belies the nuances within the lyrics. There’s nothing grandiose or overly complex about the song: it’s simply good. I imagine he puts on a great live show, so those of you on the east coast or in Japan, go check him out while you can (tour dates and locations are on his MySpace).

[“Tomato Song” by PWRFL POWER Free MP3]

[PWRFL POWER on MySpace]