And the Alarm Went "Wii U, Wii U, Wii U…"

There have been some criticisms surrounding the new Nintendo console, the Wii U. I've seen complaints that they focused too much on the new controller and glossed over the new console itself (which seems valid, and something Nintendo has admitted to dropping the ball on). There are also other complaints that seem less valid, and a remarkable amount of press attention to Nintendo's stock price dropping 10% after the announcement. The reactions seem fairly split down the middle, with the dividing line basically coming down to this: those who got a chance to demo a unit in person think it's an interesting device, and those who didn't think Nintendo has completely dropped the ball.

A New Console, Now?

You can find what specs have been published for the device on Nintendo's website, so I won't bother rehashing them here. The quick summary: beefed-up processor, beefed-up graphics capabilities, full HD support, all-around decent specs for a modern console. It's not a mind-blowing leap forward, but that is not, and never has been, the point. The point is that the cost of higher-end graphics is finally low enough that they don't have to sacrifice their target pricing model in order to compete graphically with the other consoles. So basically, they let their competitors take a significant loss on every console in order to support HD, and then, once the technology had matured, caught up while having made a profit the whole time.

It makes sense that they’d put out the Wii U now. Look at their past development cycles:

  • NES – 1983 (Japan), 1985 (US)
  • SNES – 1990 (Japan), 1991 (US)
  • Nintendo 64 – 1996
  • GameCube – 2001
  • Wii – 2006
  • Wii U – 2012

It doesn't take a rocket scientist to see the trend here: Nintendo puts out a new console every 5-6 years. By contrast, we've heard nothing concrete out of Microsoft or Sony about a new console (and if one is coming, it's unclear what it would add). As recently as a few months ago, Sony was claiming the PS3 would have a 10-year product lifespan (it sounds like they are no longer saying this, instead claiming somewhere between 8 and 10 years), meaning we can't really expect a new console from either of the other major console companies until at least 2013, more realistically 2014-2016. All of this puts Nintendo in a great position by putting a new console out now.

What about their existing user base?

Wii U is backwards compatible with the Wii, so it becomes a no-brainer for consumers to upgrade. Easy migration plus a HUGE existing install base (86 million units, versus Microsoft's 55 million and Sony's 50 million). So, again, why not put out a new console now? Getting out of sync with the other console makers' schedules is a good thing: less competition for consumer dollars, and games currently in development can ostensibly add support for the new console fairly easily (known architecture, and comparable specs to the other consoles).

The Stock Drop is Irrelevant

Full disclosure: I do own some shares in Nintendo (a whopping 4 shares). That said: I don't care what the stock price is. Nintendo is a dividend-bearing stock, unlike a number of other technology companies. As long as it continues to make a profit, the stock price is largely irrelevant to existing investors, unless they are the sort who feel the constant need to buy and sell shares (to which I say: go play with your IPO stock bubbles and leave long-term investing alone).

So considering the nature of Nintendo's stock, why the hell is the gaming press making a big deal about the stock drop? It has absolutely NO relation to the quality or nature of the new product. Further, it shows a lack of historical awareness: it's not uncommon for stock prices to dip after a keynote — look at Apple. For years, even when they announced hugely successful products (products that were clearly going to be successful from the start, no less), their stock took a marked dip immediately after.

Disruptive Technology is Called Disruptive for a Reason

It feels like a lot of the people complaining about this announcement are complaining for the sake of complaining. They don't understand the new technology and its potential effects, or in rarer cases understand but disagree with the direction. A lot of the same complaints were leveled against the original Wii, which went on to sweep the market for the last 5 years, with a significantly larger install base than either competitor. Iwata's 2006 GDC keynote discussed expanding the market rather than continuing to vie only for the same hardcore gaming audience — that philosophy worked with the Wii, it worked with the DS, and Microsoft adopted a similar stance with the Kinect to great success. Given all this, it increasingly feels like the complaints are coming from a small subset of people who are either resistant to change, or simply have a myopic view of the gaming industry and the shifting landscape of the market.

Here's something to think about: the gaming news media is made up of people who love games. It's why they chose that field. Don't you think this love of how games are now, or have been, might bias their views on what could shift or change the gaming industry?

Lions, Dashboards, and Calculators (Oh My!)

This summer, Apple is planning to release the next iteration of Mac OS X, 10.7 (codenamed "Lion"). From the looks of things, their primary focus this time around is interface improvements to make the user experience more fluid and effective. In general, I'm liking what I've been seeing, though the system requirements that have been coming out suggest I'll be on the hairy edge of being able to run it at all (a Core 2 Duo or higher is required, and I'm running the first Core 2 Duo MacBook Pro they offered), so I'm not sure how much real benefit I'll see in the near future. That said, one of the design changes they're making seems like a horrible idea: they're moving the Dashboard into its own space, rather than letting it continue to work as an overlay on whatever screen you're on.

Given that the Dashboard is for quick-reach, simple widgets, this seems remarkably backwards, and more like something you'd do to get people to stop using it so it can be phased out in a later release. Think about it for a second: widgets are meant to show information at a glance, i.e. without significantly interfering with or distracting the user from the task at hand. While shoving several widgets into their own space simply seems like a bad idea, there are a few whose usefulness will be significantly reduced, most notably the calculator.

To be clear, the Dashboard calculator is not especially robust. It has no history or "tape", no special functions, just your basic arithmetic. About the extent of its bells and whistles is that it accepts keyboard input instead of forcing you to use the buttons. But you know what? That's the point. It's a simple calculator for when you want to run some numbers really quickly, without interfering with the rest of your workflow. More often than not, these numbers will be pulled off a website, an email, or a chat. You aren't particularly invested in running the numbers; you just want to check them really quickly. This, specifically, is the value of the Dashboard calculator: just pull up the Dashboard and punch the numbers, which are still visible behind it, into the calculator for a quick total, without going through the process of loading up a separate application. I don't want to have to constantly page back and forth between two screens just to run a quick number check. At that point, why not just use the actual Calculator app?

I doubt I'll ever know, but I would love to find out who made this particular design decision and ask them what on earth they were thinking.

Emotional Communication

There is no language I know of that is able to truly convey our emotions, our inner needs. The scope just isn't there — the best we can do is approximate. We have words that are supposed to convey meaning, but exactly what they mean is so fluid and amorphous that the true intent is lost in translation. Think about some of the biggest emotions in our lives. Think about love for a moment. "I love you." "I love this television show." "I love this song." "I love my family." There are so many valid contexts for the verb "to love." As far as language is concerned, they are all valid, and we treat them as such socially. But the emotions underneath vary wildly. As human beings, we try to pick up on this additional nuance and emotional intent through body language, through situational awareness, the timbre of the voice, the tension of the moment. All of which rests on the hope that those around us are observant enough to notice, and aware enough to interpret these signals correctly. It is frightening that so much of our emotional communication and well-being relies on others' ability to perceive our comments the way we intend. Given that, it is unsurprising that so many people feel isolated and alone.

Which brings us to another tool we have to try and communicate: if language does not have the tools to describe an emotion directly (not in a meaningful way, anyway), then it can at least describe it indirectly. Think about music, or books, or film, or photographs, or paintings, or any number of forms of art. The classic question of "What is art?" is easily answered, for me: work intended to evoke an emotional or personal response in someone. It's an imperfect tool — there will inevitably be a lot of people who don't "get" it. That's not a fault of the artist, or of the viewer — they simply lack the shared context to evoke a response. A photograph of a weathered fence post in a field may not speak to some, but for others it can evoke a personal memory of visiting their grandparents on the farm, or strike a chord more metaphorically, capturing for a moment the feeling of isolation that the viewer may be feeling or have felt. Put simply: art describes emotions.

Personally, I tend to draw from media sources to describe a range of emotions and personal thoughts pretty often. I've been doing so all week with video clips and songs and quotes, and this is hardly the first time. I'm not remotely the only one, either — for every random silly link blog of goofy stuff out on the web, there is also a curated blog by someone trying to point at something in the hopes of getting their message across, and communicating something they feel is important to those around them. I post videos and quotes and songs and images to create a pastiche of who I am and how I'm feeling. (I'd be interested to see what interpretations people draw from the entries posted this past week.) Of course, I'm always afraid that I'm a bit of a Hector the Collector character when I do so, but if even one person gets and appreciates what's shared, it's worth it.

Casual Games is a Misnomer

There's been a lot of discussion of the expanding "casual games" market for a while now, and frankly I think the term is causing some serious confusion because it's not really a useful label. It's not grossly inaccurate, it's just not useful. We talk about a game like Farmville being casual because the mechanics of the game are relatively simple. The structural rules of the game are simple, but where it falls down as "casual" is that the social rules, the "meta" rules of the game, are more complex. There is a social give-and-take of wanting help from others while also (hopefully) wanting to avoid annoying the friends and relatives around you by flooding them with requests. People push those boundaries, though, to achieve more, to gain mastery of the game. The drive for achievement provokes a more "hardcore" approach to the game. The gameplay may be casual, but the intensity of play is more hardcore. The truly casual gamer doesn't stick with it, because excelling requires more commitment than they are willing to give.

It would probably help at this point to define some terms and concepts, for the sake of clarity and communication.

  1. Play intensity: the level of investment of time, energy, and resources needed to achieve mastery of the game.
  2. Game mastery: the exact form mastery may take depends on the type of game, but generally involves a combination of implicit or explicit knowledge of how the game works that allows for maximizing the results of time spent playing. Maybe it will involve knowing every detail of a map and where and how players tend to play it; maybe it will involve having learned every skill combo in a game and developed the muscle memory to do each precisely and quickly.
  3. Investment threshold: the limit to how much time, energy, and resources a person is willing to invest in a given task. This varies from person to person and task to task, and is the crux of the difference between a “casual” and “hardcore” gamer.

I am fundamentally a casual gamer. Considering I write a game-related blog, wrote two theses related to game development, and work in QA in the game industry, I suppose some might think this is inaccurate, but hear me out. “Casual” gaming isn’t about the quantity of game time, nor the quality of play; it’s about the approach to gameplay. Put simply, I’m not really invested in mastery of most games. I will totally try to complete the game, find as many secrets and bonus content and other goodies as I can, but when doing so requires ludological mastery and precision, I usually walk away from that content (and sometimes the game). I’m not mad about it (more on that in a moment), at most a little disappointed that I didn’t/couldn’t get whatever it was I was trying to do. An unspoken threshold of skill investment was exceeded. If “good enough” isn’t enough, then I’m done. That, to me, is the distinction of a casual gamer.
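To make these terms a bit more concrete, here is a toy sketch in Python. The names and numbers are entirely made up for illustration; the point is only to show how an investment threshold interacts with the mastery a piece of content demands.

```python
from dataclasses import dataclass

@dataclass
class Player:
    name: str
    investment_threshold: float  # hours of mastery-building this player will tolerate

@dataclass
class Content:
    name: str
    mastery_required: float  # hours of practice needed to clear it (its "play intensity")

def keeps_playing(player: Player, content: Content) -> bool:
    """A player sticks with content only while the mastery it demands
    stays at or below their personal investment threshold."""
    return content.mastery_required <= player.investment_threshold

casual = Player("casual gamer", investment_threshold=2)
hardcore = Player("hardcore gamer", investment_threshold=40)
bonus_boss = Content("optional bonus boss", mastery_required=15)

for p in (casual, hardcore):
    verdict = "keeps at it" if keeps_playing(p, bonus_boss) else "shrugs and walks away"
    print(f"{p.name} vs. the {bonus_boss.name}: {verdict}")
```

The numbers don't matter; what matters is that the same content, under the same rules, reads as casual-friendly or not depending entirely on the threshold the player brings to it.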

Think about hardcore gaming, and the concept of the ragequit. It's a valid, if undesirable, reaction to being placed in what is perceived as "unfair" conditions without clear methods for correction, and it isn't new — think about the trope of "taking your ball and going home." But what, exactly, is unfair about it? The game itself ostensibly has immutable rulesets (short of hacks), and if the game designers did their job balancing the game well, then neither side has an undue mechanical advantage over the other. The difference comes down to the players themselves and the level of mastery they've invested in. In a ragequit situation, there's generally at least one player who has a relatively high level of mastery of the game — they've invested the time and energy in understanding its mechanics, and in maximizing the use of those mechanics in their favor. When you then pair them with someone who has the desire for mastery, but either hasn't had the time to invest or lacks the capacity for the necessary skills to compete, the mismatch results in a ragequit. A casual player may try to play, decide they don't want to invest days or weeks into gaining mastery, and walk away. The behavior of the other players may have been exactly the same, but the likelihood of a ragequit is lower, since the casual player isn't as invested in the game.

Different games can have very different requirements for game mastery and still fall under the aegis of a casual or casual-friendly game. A more useful delineation is to establish the play intensity of the game: examine the amount of investment in game mastery that is necessary to keep moving forward in the game. If there is little room for players who haven't invested many resources into mastery (e.g. they didn't spend hours playing the same zone or area, learning all its quirks and the best solutions to the challenges it poses), then that game will only be attractive to players with a high investment threshold, i.e. it isn't a casual game, no matter how simple the interface is, no matter how complex the game mechanics are.

Now, what really fascinates me are the games that find ways to straddle the line. While some consider World of Warcraft a hardcore game, I consider it a casual game: the requirements for game knowledge and expertise in order to proceed are relatively low — you can play without investing significant time in HOW to play (gaining mastery rather than moving forward in the game). But then, I tend to be an altaholic. If I were to try to get into raiding and high-level instances (what's considered the "end game"), I'm quite positive my perception would shift toward considering it a more "hardcore" game — raiding effectively, from all accounts, requires a much more in-depth understanding of the mechanics of the game, as well as of the specific details of the instances themselves.

So, with all this in mind, the question I find myself asking is: are these sorts of casual gamers worth accounting for? We're a pretty fickle lot: happy to drop a game if it's no longer satisfying, and we probably won't even use some of your mini-games or features. My vote is yes, they should be accounted for when designing games. A game can still be layered and complex; it can still reward greater mastery and encourage high-intensity play; it can still penalize errors and poor play. BUT the complexity and greater mastery should enhance the player experience, not hinder it. Give a broad range of allowable game mastery and play intensity, and let the player decide their own level of involvement.

Comcast, Walled Gardens, and Games

There's a lot of talk currently about the Level 3/Comcast mess, where Comcast is demanding additional money from Level 3 (an internet backbone provider and current partner with Netflix for streaming media) before they will allow that streaming media onto their network. Comcast's reasoning is that Level 3 is acting as a Content Delivery Network (CDN), not just as an internet backbone, and thus no longer qualifies for the peering agreements that allow traffic between the two networks without additional fees. This is a bogus assertion, and feels like a money grab: Comcast's customers are already paying for that bandwidth and making a legitimate request for the data being provided — all Level 3 is doing is sending the requested data. To then block data that the customer has paid for (twice: they pay Comcast for the bandwidth, and Netflix for the content) directly violates the principles of an open internet.

This is a prime example of why there are concerns over the imminent Comcast-NBC Universal deal (for those who haven't been paying attention: Comcast is trying to purchase NBC Universal from General Electric for $6.5 billion in CASH, plus an additional $7.5 billion invested in programming), in terms of media consolidation and vertical control effectively creating a walled garden. To quote Senator Bernie Sanders:

The sale of NBCU to Comcast would create an enormously powerful, vertically integrated media conglomerate, causing irreparable damage to the American media landscape and ultimately to society as a whole.

This is hardly the first time Comcast has been caught with their hand in the proverbial cookie jar, taking censorial action while claiming to be in favor of an open internet. Their behavior is antithetical to net neutrality on a fundamental and obvious level.

So, why does this matter to game development? A variety of reasons, actually. Regardless of what type of games you are talking about, modern gaming takes bandwidth: assets need to be downloaded, whether for a standalone game title or for the casual, cloud-based games you find on Armor Games, Kongregate, or even Facebook. If there is any type of online component, there will be regular communication between client and server. This sort of bandwidth costs money, and if developers have to start paying additional fees to be allowed into walled gardens, the cost may reach a point where it is no longer feasible for many developers to continue. Already, a number of games are looking at ways to mitigate the costs of hosting content, such as distributed downloading solutions like BitTorrent (yes, believe it or not, peer-to-peer isn't just for illegal uses). While some price fluctuation is expected and reasonable as the market shifts and the costs of hosting and bandwidth change, at what point do developers (including smaller developers without the resources of large publishers) have to start dealing directly with Comcast (or other gatekeepers) for the right to sell their own product to the public? One of the biggest benefits of the internet, open access (not having to go through a gatekeeper process and large publishers to share your work with the world), is already being challenged by device-specific gates like the Apple App Store for the iPhone, and to a lesser extent the PlayStation Network, Xbox Live Arcade, and WiiWare. (I say lesser extent because those networks ostensibly can't reach the rest of the internet without additional effort, if at all, whereas the iPhone App Store has no such issue.) We do not need, nor want, service providers blockading legitimate customers from our products.
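To put a rough shape on that cost concern, here is a back-of-the-envelope sketch in Python. Every figure in it is a hypothetical placeholder (the download size, the per-gigabyte rate, the gatekeeper surcharge, the peer-offload fraction), not real CDN or ISP pricing; the point is only how quickly distribution costs scale, and what an extra toll on top of them does.

```python
# Back-of-the-envelope distribution cost model.
# All figures below are hypothetical placeholders, not real pricing.

game_size_gb = 2.0           # client download size
downloads = 100_000          # number of players grabbing it
cdn_rate_per_gb = 0.10       # hypothetical CDN transfer price, $/GB
gatekeeper_surcharge = 0.25  # hypothetical 25% "walled garden" toll
peer_offload = 0.40          # fraction of traffic served peer-to-peer instead

total_gb = game_size_gb * downloads
base_cost = total_gb * cdn_rate_per_gb
with_toll = base_cost * (1 + gatekeeper_surcharge)
with_toll_and_p2p = base_cost * (1 - peer_offload) * (1 + gatekeeper_surcharge)

print(f"Raw transfer:              {total_gb:,.0f} GB")
print(f"CDN cost alone:            ${base_cost:,.2f}")
print(f"...plus gatekeeper toll:   ${with_toll:,.2f}")
print(f"...offset by peer offload: ${with_toll_and_p2p:,.2f}")
```

Peer-assisted distribution shaves the hosting bill, but a percentage toll applies to whatever is left, which is exactly why gatekeeper fees worry developers living on thin margins.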

Browser Hell

While there are a variety of ways to view the web, the vast majority of people use one of only a few options: Internet Explorer, Firefox, Safari, Opera, and (johnny-come-lately but gaining market share fast) Chrome. It's fantastic that each of these browsers is doing well enough to be considered a major player; the problem is that they all have some pretty serious failings.

The problems with IE are well documented, and frankly, given that it's Windows-only, I'm going to gloss over it here by simply saying: don't use it unless you have to. Don't support it unless you have to. Just. Don't. This may change with the upcoming IE9, as there's been a BIG push by developers to get Internet Explorer up to date and standards-compliant. If even half the features and support Microsoft has promised actually make it into the final product, Internet Explorer may well be worth another look. In the meantime, take a pass.

Next up is Firefox, a very popular open-source effort run by Mozilla. It's free, it's open source, it's cross-platform, and there are lots of themes, profiles, and extensions you can get to make the browser do more, all of which makes it the darling of the geek community. It isn't without its faults, however: the same extensions that make Firefox useful often contribute to browser instability, but Firefox without extensions is… well, lackluster. Which is to say: a plain copy of Firefox is a perfectly serviceable browser, but lacks anything to set it apart from the other major browsers. That, coupled with one of the slower load times and a rather substantial resource footprint, makes it a less than ideal solution for someone trying to run a lean, stable system.

While Safari doesn't have anywhere near the usage rates of IE or Firefox, it's still a major contender in the browser wars, for three reasons: 1) it's the default browser on every Mac system, and has the highest usage rates on Macintosh computers; 2) it's the default (and until Opera Mini managed to strongarm its way on, the only) browser on the iPhone, iPod Touch, and iPad; and 3) it's cross-platform and free. I've been a diehard Safari user since it came out, only occasionally switching to Firefox or Camino. However, as they've continued to add more features, the overall quality has (in my opinion) gone down. Reports of stability issues are prevalent on the Windows version, and I've been discovering massive resource consumption on my Mac. Since Safari 5, the memory footprint has grown significantly, causing repeated beachballs for the most basic browsing tasks because my laptop, with 2 GB of RAM, was out of memory. (My frustration with this is actually what prompted this post.) I can only assume it's a memory leak that slipped past them, because I cannot fathom how that sort of resource consumption would be acceptable for a shipping product.

Opera is a trooper from the old browser wars. While it has incredible market penetration on devices and globally, as a desktop web browser it never really got a strong foothold in the U.S. They've continued to improve the browser over a number of years (the current version as of this writing is 10.60), and at this point it stands as one of the most standards-compliant, fastest browsers on the market, with a ridiculous number of features. Which is the problem: there are so many features, customizations, and tie-in services like Opera Unite and Opera Link that it's incredibly easy for the average user to get mired in unwanted complexity. Additionally, while it supports widgets (which can even work as standalone applications on the desktop), I had trouble finding any plugins to fix some egregious oversights (despite all those features, Opera tends to only play with itself — service integration with third-party options like Evernote or Delicious is non-existent). Some of the interface I found cumbersome, and while I was willing to work through that (all browsers have some quirks, after all), I was put off by the sheer number of browser themes that are Windows-only, leaving Mac users very few options for finding a more suitable interface.

The last of the "big" browsers I wanted to mention is Google's foray into the browser market, Google Chrome, and its development sibling Chromium. Despite being very new, Chrome has already gained a significant market share, and not without reason: it's fast; it breaks page viewing into separate processes to keep the entire browser from crashing when one page hits bad code; and, well, it's made by Google. Frankly, while I appreciated some of Chrome's features, I found it to be an incredibly slipshod application. The user interface was inconsistent and unclear on numerous occasions, with the preferences window being a morass of poorly explained buttons and hidden panels, and its handling of tabs becomes utterly useless once you get much over 20 tabs open. It's easy to cut them some slack by saying "It's a beta," but let's be realistic here. Google has made a point of hiring some of the smartest, most talented, most capable people on the planet, and has already invested millions into the development and marketing of Google Chrome. A product with that sort of backing feeling this slapdash is embarrassing for them and frustrating for the user. (Final gripe: despite the process splitting meant to help prevent browser crashes, Chrome crashed on me when I tried to quit.)
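Chrome's process-per-page approach is worth a quick illustration. The sketch below is not Chrome's architecture, just a minimal Python demonstration of the underlying idea: run each "page" in its own OS process so one of them hitting bad code can't take the others down.

```python
import multiprocessing as mp

def render_page(url: str) -> None:
    """Stand-in for a page renderer; one 'page' deliberately hits bad code."""
    if "badcode" in url:
        raise RuntimeError(f"renderer for {url} died")
    print(f"{url}: rendered fine")

if __name__ == "__main__":
    urls = ["example.com/home", "example.com/badcode", "example.com/news"]
    procs = [mp.Process(target=render_page, args=(u,)) for u in urls]
    for p in procs:
        p.start()
    for p, u in zip(procs, urls):
        p.join()
        status = "crashed" if p.exitcode != 0 else "ok"
        print(f"tab {u}: {status} (exit code {p.exitcode}); other tabs unaffected")
```

One renderer dies with a traceback, the rest finish normally, and the "browser" process is still around to report it, which is precisely the property that makes a whole-browser crash on quit feel so ironic.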

So there you have it, the biggest, most popular browsers out there. The reality is that they all have MAJOR FLAWS, and there is significant work that should be done on every one of them. The bright side is that each of these browsers is under active development, so a lot of the work that needs to be done will be done. Until the problems are fixed, however, I'm inclined to look into one of the numerous smaller browser projects out there, and hopefully find a diamond in the rough that blows the big boys out of the water.

Where to Build Your Next Team

According to the ESA's reports, the five states serving as game development hubs in the US are California, Washington, Texas, New York, and Massachusetts. This shouldn't come as a surprise to anyone; cities like Seattle, San Diego, Austin, and their peripheral towns are often mentioned in the gaming press. This is fine: certain hubs are expected to rise up in any industry, and game development, at $22 billion domestically per year, absolutely qualifies as an industry. However, it is becoming increasingly apparent that studios need to start expanding into new locations if they expect to continue to grow profitably. It comes down to cost: the cost of living, and the cost of business.

The cities and regions that game developers are based in right now tend to be expensive: the amount of money it takes to maintain the same quality of life is higher than in other cities. As an example, compare Portland, Oregon, and Seattle, Washington, two cities that offer similar climates, similar cultural opportunities, and overall a similar quality of life. In Seattle, average office lease rates run between $25 and $40 per square foot depending on where in the city you are (and where most of these companies are located, you're looking at the high end of that range). A similar examination of Portland puts lease rates between $12 and $25 per square foot. (To put those prices in perspective, Bungie recently announced their move into downtown Bellevue, leasing 85,000 square feet. Assuming they got a killer deal and only paid $30 per square foot, that's still $2,550,000.) An equivalent space in Portland, assuming, say, $20 per square foot, is $1,700,000. That's an $850,000 price difference, and that's only one part of the overall cost of doing business.
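The arithmetic is simple enough to sanity-check; here is the comparison from the paragraph above as a few lines of Python, using the square footage and per-square-foot rates cited there.

```python
# Office lease comparison using the figures cited above.
square_feet = 85_000   # Bungie's reported Bellevue lease
seattle_rate = 30      # $/sq ft, the assumed "killer deal" Seattle rate
portland_rate = 20     # $/sq ft, the assumed mid-range Portland rate

seattle_cost = square_feet * seattle_rate    # 2,550,000
portland_cost = square_feet * portland_rate  # 1,700,000

print(f"Seattle:    ${seattle_cost:,}")
print(f"Portland:   ${portland_cost:,}")
print(f"Difference: ${seattle_cost - portland_cost:,}")
```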

Looking at the cost of living for the employees themselves, median apartment rents drop by nearly half going from Seattle to Portland. While other price comparisons are less dramatic (the cost of heating a home doesn't vary much, which is unsurprising considering they share a similar climate), it still works out to a net savings for the employee to be in Portland. What this means for the employee is that they can live at the same quality of life for less money. What this means for employers is that they can price salaries accordingly (as they already do) and, again, save money to either a) bring down development costs, or b) hire more developers.

Of course, so far we've only discussed basic numbers, on the assumption that one would have to pay for everything involved. For a number of developers, this is already not the case: both Ontario and Quebec (and their respective cities, Toronto and Montreal) offer significant subsidies to game companies that build studios there. It was reported a few years ago that the city of Montreal and the province of Quebec combined subsidized over half the salaries for Ubisoft and EA, two major developers and game publishers. Ubisoft is expanding again, opening a new studio in Toronto, which has committed to investing $226 million into Ubisoft over the next ten years. Here in the U.S., 8 states have already passed initiatives to encourage game development, including significant tax breaks and other incentives to draw the industry in. The city of Savannah has gone so far as to offer a full year of office space free to any company willing to commit to offices there.

Now, I realize it is pretty rare that a company is in a position to perform an en masse relocation (there have been a few examples, such as when Square moved from Washington to California, or when Bungie moved from Illinois to Washington), but that isn't really what anyone is trying for: as development teams grow, new facilities are needed, and new development teams are created. These new studios and teams are in a prime position to take advantage of the lower costs of setting up in a less expensive city. It would be foolish for a large game developer not to at least consider this when building out their next team.

The cities I expect to be great additions:

  1. Portland, Oregon: the city has so much going for it, and is already starting to undergo a bit of a cultural explosion thanks to its fantastic music and art scene, green policies, and laid-back atmosphere.
  2. Minneapolis/St. Paul, Minnesota: it’s been largely off the radar for a lot of people, yet sports a remarkable diversity within the area, low costs, and is something of a jewel of the central states.
  3. Boulder, Colorado: it is already becoming a pretty significant tech hotspot, housing a number of startups and offering a range of support for the software industry.

The MacBook Update

As I'm sure many are aware, Apple updated their laptop line today. There are some interesting technological advances going on, but (as their stock fluctuations today can attest) there seems to be a large backlash against several changes they made to the lineup — some justifiable, some spurious. Let's look at the spurious complaints first:

  • “There’s no DVI port!” — and were you making the same complaint when DVI started to supersede VGA? Let’s be objective about this: DisplayPort is a VESA-certified industry standard meant specifically to address the needs of the computing market, in the same way that HDMI is meant to address the consumer electronics market. There are adapters already in existence to convert from DisplayPort to DVI (or even VGA) and back again. I know it’s hard when new standards come out, but you need to recognize that they’re coming out because what we have is no longer suitable for moving forward. HDMI is a marked improvement over Component. Well, DisplayPort is a marked improvement over DVI.
  • "There's no button on the trackpad!" — anyone who has been paying attention could see this coming — look at the iPhone and iPod Touch and tell me you couldn't foresee virtualized buttons. Some people complain that they hate "tap-to-click," and I can certainly concede that, but from looking at hands-on reports of the new setup, the system is designed in such a way that your muscle memory to hit the button with your thumb will still work in exactly the same fashion. The current button on the trackpads drops a millimeter, maybe two — you are in effect already "tapping" the button. The short of it is that by going to a virtualized solution, it becomes easier to adapt the trackpad to specific needs and solutions. I'm certain I can't be the only one who sees this.

There are definitely some very real gripes to be had, however:

  • "The black keyboard and black bezel are ugly." — yes, I'm counting this as a real gripe. While the new laptops are sexy from the exterior, when you open them up the result is a step backward; it is reminiscent of several offerings by Sony, Acer, even HP. Some are heralding it as a return to the Titanium PowerBook design philosophy, but I don't really see that as a good thing. Why go back, when they clearly had so many options to move forward? Their external keyboards use a white-on-silver color scheme that would be markedly less jarring, let alone going with silver-on-silver like they did with prior MacBook Pros. I consider this a valid complaint because part of what gets people to buy a Mac instead of a PC isn't just the OS, it's the hardware. The more it looks like everyone else's offerings, the less reason there is to purchase the (more expensive) Mac option. Black on silver does not look good, I'm sorry. If they were going to go with the black bezel and black keyboard, in my opinion they should have gone with a black body. Either anodized or powder-coated black aluminum would still qualify for their EPEAT Gold rating, and would be more aesthetically unified overall.
  • "No FireWire in the MacBooks!" — completely agreed. I don't know what the hell Apple was thinking. Adding a FireWire 800 port would not have been difficult, even in the smaller enclosure, and by doing so there would be a wealth of devices available, including daisy-chained hard drives and Apple's own Target Disk Mode. Yes, that's right, they've removed a technology that makes it easier to buy more of their products (by easing the process of migration). I understand the desire to further differentiate the MacBook from the MacBook Pro, but this is a grievous oversight.
  • "The dual graphics cards are neat, but can only use one or the other!" — I'm on the fence as to whether this is a valid complaint. My suspicion is that when 10.6 rolls out and OpenCL and Grand Central become more of a reality, we'll start seeing the ability to prioritize processes and send some to one card and others to the other. If not Apple, then a third-party developer. Given that nVidia has gone on record saying they're supporting OpenCL, I think this is a reasonable prognostication. (A rough sketch of what that kind of device selection might look like follows this list.) In the meantime, however, it's just a "shiny-shiny" to give the marketers something to chew on. I really don't care about the difference between a 4-hour and a 5-hour battery life — more often than not, if I'm in one place for that long, I'm able to plug in somewhere. So why not save the space in the laptop and just use the high-end graphics card? (Of course, I consider this yet another reason to believe that there WILL be communication across the two cards in the future.)
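For what it's worth, here is roughly what picking between the two GPUs could look like from a developer's side once OpenCL support lands. This is a speculative sketch using the pyopencl bindings; nothing in it is Apple's API, and the heavy/light split heuristic is made up for illustration.

```python
import pyopencl as cl

# Enumerate every GPU the OpenCL runtime exposes (e.g. an integrated
# 9400M and a discrete 9600M GT would both show up here).
gpus = []
for platform in cl.get_platforms():
    for dev in platform.get_devices():
        if dev.type & cl.device_type.GPU:
            gpus.append(dev)

if not gpus:
    raise RuntimeError("no OpenCL-capable GPU found")

# Made-up heuristic: send heavy work to the beefiest device (most compute
# units) and background/light work to the weakest; with a single GPU both
# queues simply land on the same device.
heavy_gpu = max(gpus, key=lambda d: d.max_compute_units)
light_gpu = min(gpus, key=lambda d: d.max_compute_units)

heavy_queue = cl.CommandQueue(cl.Context(devices=[heavy_gpu]))
light_queue = cl.CommandQueue(cl.Context(devices=[light_gpu]))

print(f"heavy work -> {heavy_gpu.name.strip()} ({heavy_gpu.max_compute_units} CUs)")
print(f"light work -> {light_gpu.name.strip()} ({light_gpu.max_compute_units} CUs)")
```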

I'm still very interested in getting a new MacBook Pro, as my current machine is getting long in the tooth and showing its age. Once I have a job and can justify the expense, I imagine I'll be getting one of the new machines, but if you're on the generation immediately prior, I'd be hard pressed to encourage an upgrade. Honestly, a part of me (as lustful for a new machine as I am) wants to wait and see if they start offering a gun-metal-black iteration in 6 months.

No AP, Please

Patrick highlights recent unacceptable behavior on the part of the AP over at Making Light. He makes some excellent points about how restrictive and ridiculous this sort of attempt at strong-arming individuals can be. A core principle of copyright law is the role of "fair use" in allowing others to provide feedback, response, analysis, and commentary on a given work, since copyright law itself exists as an incentive to promote scientific and cultural advancement. A blogger referencing a work (e.g. linking to the article, quoting specific passages, or summarizing/restating the basis of the article) clearly falls within this principle, on several fronts.

I will concede cases where the majority or the entirety of an article is quoted, particularly when it is done without commentary, but that's not what's being discussed here. What's happening in THIS circumstance is pure, unbridled greed, without even a nod to the law as it stands.

Notwithstanding the provisions of sections 106 and 106A, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use the factors to be considered shall include —

(1) the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;

(2) the nature of the copyrighted work;

(3) the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and

(4) the effect of the use upon the potential market for or value of the copyrighted work.

The fact that a work is unpublished shall not itself bar a finding of fair use if such finding is made upon consideration of all the above factors. (Copyright Act of 1976, 17 U.S.C. § 107)

Photoshop CS4's Interface

So, John Nack has previewed the new Photoshop interface, which has been drawing a fair amount of criticism around the 'net for being "un-Mac-like". I think the criticism is frankly a lot of gnashing of teeth because it's different, and very little else. As Nack points out, if you bother looking at some of the best "Mac-like" apps, including applications made by Apple itself, much of the new design parallels them closely. It's a very clean, modern interface, and keeps pace with the trend towards encapsulated applications (the document-driven, single-window experience). Frankly, I like it, and look forward to it.

Let's face it: any user who multitasks ends up with a boatload of windows open at any given time, and there have yet to be any really effective ways to manage them all. This is becoming increasingly problematic as we find ways to have more and more windows up at once (I'm looking at you, Spaces), and so user interfaces have been forced to rethink how they display their data, to better encapsulate it, so that everything related to a particular document STAYS with that document. Tabbed browsing was the start, but it's entirely logical that this design philosophy would (and should) spread to other applications. Some of my favorite applications are ones that integrate data into the session window — a prime example is Scrivener. In Scrivener, the inspector is attached to the document window, rather than sitting in a separate "inspector pane/window". From a design perspective, this makes it absolutely clear which document you are inspecting, which is particularly important when you have multiple documents open at once. The application is designed so that everything you need to do to the document can be done from one document window, with multiple files within it. You can even split the window to display attached research files or another page of writing at the same time, or, if you decide you really NEED something to be in a separate window, that option is only a right-click away. That is GOOD DESIGN: it avoids juggling multiple windows just to get your work done.

Detractors who say it's not "Mac-like" haven't been paying attention. There is, of course, the opportunity to get it wrong and not make an effective interface, but that is true whether you're talking about a unified interface or a multi-window one. It's pretty clear, all the way down to the interface of the Finder, that we're shifting towards a single-window-per-need design philosophy (if you don't believe me, use the "Find…" option in OS X 10.5, or "Create Burn Folder", or try out iChat with "Collect Chats into Single Window" turned on and tell me it's not a better way to juggle a dozen conversations).

The key thing to note in what I'm saying is that it is PER DOCUMENT, or PER NEED. The places where I've seen single-window interfaces succeed are where elements that belong together are placed together. A window, in essence, becomes a way to encapsulate the data related to the task or project it was created for. As such, there are going to be times it DOESN'T make sense. Frankly, I'm just glad designers are realizing that there are times that it DOES.