Excellent article today on Roughly Drafted about how the release of the iPhone SDK has suddenly catapulted Apple into a very strong position in the mobile gaming market. I heartily agree, having happily participated in the stampede that brought down Apple’s servers subsequent to the SDK’s release.
The fact that Apple is taking one of the hottest pieces of hardware around and making it so accessible is incredibly significant. In fact, when the SDK was released I had been planning to post a lengthy diatribe about the relative inaccessibility of Wii Ware to amateur developers. Nintendo’s insistence that Wii Ware developers be established companies (no home offices allowed) was a bit of a let-down, but the iPhone SDK more than made up for the disappointment—so far it seems to be everything I hoped for from Wii Ware and more.
Even though the SDK is still in a limited beta (and I’m one of the thousands who got my very own “we’ll be expanding the beta later, hang on for a while” email from Apple), it’s abundantly clear that they’ve gotten a lot right with the release, including the revenue sharing model. The ground is so fertile here that it’s convinced me to start hastily learning Objective-C (happily, C seems much easier to me now than it did eleven years ago, the last time I tried to pick it up...)
Such an exciting time right now. Man!
IGN Wii Editor Matt Casamassina recently posted a video on his blog of 3DV Systems’ Z-Camera technology, which was on show at GDC. It looks pretty impressive--enabling body motion sensing in 3D without the need for an input device. As the comments on Matt’s blog indicate, there are obviously going to be many applications for which you want to be holding something anyway, but the idea that that something could be a cheap plastic toy instead of an electronic device is an intriguing one. Matt hopes Nintendo is considering the tech for Wii 2--what should Nintendo do for Wii 2, anyway?
Raph Koster, president of Areae and designer of Ultima Online and Star Wars: Galaxies, made this and other interesting statements at a private GDC lunch as reported on gamesindustry.biz. Koster argues that Flash’s ubiquity and device-independence puts it in a leadership position among next-gen gaming platforms, and notes that as devices proliferate, “a lot of games… are not going to know what devices they are landing on”.
As it becomes increasingly common for a given experience to run on multiple devices, some fundamental design issues come into play, as I’m discovering while trying to develop works that will accommodate both the mouse and the Wii remote. While in this case I’m talking about multiple control schemes on a single hardware platform (a PC running WiiFlash), the problem is essentially the same as if I were designing a PC experience that could also run on the Wii.
It’s one thing to design a casual game in which the vocabulary of user actions is defined relatively independently from the control scheme, and then figure out how to make that game work with various input devices. It’s something else, however, to design an experience that takes advantage of the unique capabilities of the Wii remote, while still making the interface functional and rewarding for a mouse user. Of course, the fallback position is always to design for the mouse user and then use the IR pointer capability of the remote to emulate the mouse, and this might be perfectly appropriate for especially complex interfaces. A more customized fit between experience and controller, however, will always be desirable.
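To make the “vocabulary of user actions” idea concrete, here’s a minimal sketch of how I think about it: device-specific adapters translate raw input into a shared set of semantic actions, so the experience itself never sees the device. All of the names here (Action, MouseAdapter, WiimoteAdapter) are illustrative inventions for this sketch, not part of WiiFlash or any real API.

```typescript
// Shared action vocabulary, defined independently of any device.
type Action =
  | { kind: "point"; x: number; y: number }   // cursor / IR pointer position
  | { kind: "select" }                         // mouse click / A button
  | { kind: "shake"; intensity: number };      // gesture with no mouse equivalent

type ActionHandler = (a: Action) => void;

// The experience subscribes to actions, not devices.
class Experience {
  log: string[] = [];
  handle: ActionHandler = (a) => {
    if (a.kind === "point") this.log.push(`point ${a.x},${a.y}`);
    else if (a.kind === "select") this.log.push("select");
    else this.log.push(`shake ${a.intensity}`);
  };
}

// Mouse adapter: maps moves and clicks onto the shared vocabulary.
class MouseAdapter {
  constructor(private emit: ActionHandler) {}
  onMouseMove(x: number, y: number) { this.emit({ kind: "point", x, y }); }
  onMouseDown() { this.emit({ kind: "select" }); }
  // No mouse equivalent for "shake": the experience must treat it as optional.
}

// Wii remote adapter: IR pointer maps to "point", the A button to "select",
// and accelerometer spikes to "shake".
class WiimoteAdapter {
  constructor(private emit: ActionHandler) {}
  onIrMove(x: number, y: number) { this.emit({ kind: "point", x, y }); }
  onButtonA() { this.emit({ kind: "select" }); }
  onAccel(magnitude: number) {
    if (magnitude > 2.0) this.emit({ kind: "shake", intensity: magnitude });
  }
}

const exp = new Experience();
const mouse = new MouseAdapter(exp.handle);
const wiimote = new WiimoteAdapter(exp.handle);

mouse.onMouseMove(10, 20);
mouse.onMouseDown();
wiimote.onAccel(3.5);
console.log(exp.log.join("; ")); // → point 10,20; select; shake 3.5
```

The design tension I describe above shows up in the "shake" action: the moment the vocabulary includes something only one device can express, the experience either has to make that action optional or degrade gracefully, which is exactly why a remote-first design is hard to make rewarding for a mouse user.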
Just before Christmas, Chris Hecker posted a transcription of the 1982 print advertisement that introduced Electronic Arts to the world: “Can a computer make you cry?” This piece has become a touchstone among digital experience creators, crystallizing as it does our aspirations to be considered artists in the hope-when-I’m-old-they-give-me-a-lifetime-achievement-award sense. Steven Spielberg himself implicitly invoked the ad at the 2004 opening of the EA Game Lab at USC (an event I captured on my cell phone camera, albeit poorly).
Twenty-five years after the publication of the ad, many gamers are able to recollect a small number of interactive experiences that provide ready and affirmative answers to the question it posed. While many of these remembrances forgo actual weeping, I think it’s reasonable to adopt Janet Murray’s position that the phrase “‘make us cry’ stands for a set of phenomena that do not have to involve actual tears” but more broadly engender heightened emotional engagement. A 2005 study identifies the death of Aeris in Final Fantasy VII as one of the most often cited moments. (I’ve never really gotten into the Final Fantasy series; my pick would be the bridge scene in Ico.)
I think there are a lot of factors that play into whether a particular digital experience will “make us cry.” A lot of it has to do with the propensities of the user in the first place—most of those reporting high emotional engagement with interactive fiction are those who very much desire to experience such engagement, and to demonstrate that their chosen medium is capable of it. That willingness forgives a multitude of sins, which may include poor writing and acting, low resolution, simplistic characterization, and pandering to the wish-fulfillment impulse.
It also frequently forgives the fixed perspective. Cinema is widely understood to have come into its own as a medium once directors began to liberate themselves from proscenium framing (which placed the camera in the position of an audience member for a play) and began to put the camera in artistically optimal positions, using editing to string the various viewpoints together to achieve a particular effect. Games have their own version of the proscenium—a fixed point of view which is adopted throughout the interactive portions of the game. This fixed perspective arose as a matter of necessity: first a technical necessity (it’s much easier to create games in limited computing environments by restricting point of view) and then a design necessity (it’s much easier for people to learn how to interact with a virtual environment when their point of view is fixed).
As computing power has increased, the first necessity has generally fallen away, while the second remains strongly in force. Contemporary games with dynamic or interactive camera systems offer much greater variation in the range of perspectives offered to the user during play (i.e. not during a cut scene), but this variation is motivated almost exclusively by utility, with the goal of providing the optimal viewpoint for the player to carry out their tasks. It’s still rare to find games in which perspective is manipulated with artistic intent during play, mainly because experiences which tie interface strictly to a single avatar enforce restrictions on scale and perspective that are tough to get around. In a Tomb Raider game, if you suddenly cut to an extreme close-up of Lara Croft’s eye for artistic effect, the interface you were using to navigate her through her environment suddenly has no meaning. Either you’ve got to teach the player a new interface right then and there for controlling her eye (which may not even make sense artistically), or you temporarily turn off the interactivity (which leaves you with a cut scene).
The other factor that limits options for perspective in games is the widely-accepted design principle that difficulty must increase over the course of the experience. This is actually quite a curious phenomenon when examined in relation to other media. Does a song become harder to listen to the closer you get to its end? Do books become harder to read? Do movies become harder to watch? This one idea has a profound effect on the artistic potential of the medium, because the assumption that the user must learn a task that becomes progressively harder by default requires that the basic elements used to “stage” that task must remain constant. I can’t get better at controlling Lara Croft if my perspective on her is always radically changing; therefore all the artistic potential bound up in multiple points of view is discarded to satisfy the requirements of the learning curve.
The Nintendo DS and Wii have given us many examples of games which teach interface on the fly (with Wario Ware being the most hyperactive). If handled correctly, users enjoy these shifts; what’s needed is to locate them in interactive environments in which emotion—not difficulty or consistency—is the prime mover of the experience. This might speed the day when games that “made us cry” (or more precisely, engaged our emotions) can be counted on more than one hand by more than just hardcore gamers.
This is so right on, I couldn’t pass it up. In a speech at the GCDC in Germany this afternoon (covered in this article at GamesIndustry.biz), Stormfront Studios President and CEO Don Daglow made some excellent points that deserve to be repeated far and wide.
“If it changes the player’s view of what interactive entertainment is; if you think differently about it; if you have a new perspective after playing the game that you didn’t have before, to me that’s next-gen,” Daglow said in a refutation of conventional wisdom that you can’t create a next-gen experience without dramatic increases in processing power. I couldn’t agree more.
The most significant innovations waiting in the wings for interactive art and entertainment are absolutely not about processing power, better algorithms, or any form of rocket science, though they may be enabled by technological innovation (as with the Wii remote). They are simply smart design, inspired thinking, artistry, and most importantly, perspective—an actual point of view on the world that arises from one’s personal experience.
Another Daglow quote: “We’ve spent a quarter of a century saying ‘the machine is holding me back’… The only problem is that now the machines are so powerful, we’ve lost our excuse.” This became really clear to me in the waning years of the last console generation (PS2, Xbox, GameCube), when I started to get bored with gaming in general. Everything was a retread; new versions of old games with upgraded graphics. I was shocked out of my complacency, however, when the Wii controller was first announced (evidenced by the fact that as soon as I heard the announcement I immediately estimated the dimensions of the remote and built a Duplo version the same size to start imagining what was possible...)
Daglow defends the Wii as a next-gen platform against skeptics who doubt that its less powerful processor qualifies it as such, with a blunt truth that should be remembered and repeated:
“Nobody gets to tell us what we think is next-gen - we get to decide for ourselves.”
Amen to that.