I was recently invited to be a mentor at the Bay Area Video Coalition’s Producers Institute, a week-long intensive workshop in which teams of independent documentary producers are immersed in interactive technologies and techniques and then develop pitches for interactive projects based on their work. At the end of the week the project teams pitch their ideas to potential funders and hopefully get a kick-start on the path to getting their proposals underway. Though I was only able to help out for two days towards the end of the workshop, it was still a pretty amazing gathering to see and be a part of.
The main project I was involved with at the Institute was the forthcoming work from Take Action Games (TAG), the company best known for Darfur Is Dying, a game about the crisis in Sudan which received a lot of media attention and helped to put serious games on the map for many people. I’ve had the pleasure of consulting with TAG team members Susana Ruiz and Huy Truong before, and have found their professional style to be a wonderful mix of a strong vision combined with a genuine excitement about the medium and an openness to new ideas. Looking forward to finding out more about their latest project, In The Balance: The Death Penalty Game, I wasn’t disappointed, as Susana, Huy, and Ashley York are again bringing their talents to bear on a challenging social issue and stretching the boundaries of the medium in the process (the project was recently written up in the Washington Post).
I met with a number of project teams while at BAVC—the whole atmosphere of the gathering had a lot of camaraderie and intensity as the various groups, flush with new information from the Institute’s various speakers and events about leveraging documentary content online, sought to assemble compelling pitches for a host of fascinating projects. For a taste, check out the following video from The Drax Files, whose creator Bernhard Drax was documenting the goings-on. This clip touches briefly on In The Balance during a chat with Tony Walsh, a veteran BAVC mentor and founder of the game development firm Phantom Compass. Drax filed a number of reports from the Institute, so check out The Drax Files if you want to see more.
Excellent article today on Roughly Drafted about how the release of the iPhone SDK has suddenly catapulted Apple into a very strong position in the mobile gaming market. I heartily agree, having happily participated in the stampede that brought down Apple’s servers subsequent to the SDK’s release.
The fact that Apple is taking one of the hottest pieces of hardware around and making it so accessible is incredibly significant. In fact, when the SDK was released I had been planning to post a lengthy diatribe about the relative inaccessibility of WiiWare to amateur developers. Nintendo’s insistence that WiiWare developers be established companies (no home offices allowed) was a bit of a let-down, but the iPhone SDK more than made up for the disappointment—so far it seems to be everything I hoped for from WiiWare and more.
Even though the SDK is still in a limited beta (and I’m one of the thousands who got my very own “we’ll be expanding the beta later, hang on for a while” email from Apple), it’s abundantly clear that they’ve gotten a lot right with the release, including the revenue sharing model. The ground is so fertile here that it’s convinced me to start hastily learning Objective-C (happily, C seems much easier to me now than it did eleven years ago, the last time I tried to pick it up...)
Such an exciting time right now. Man!
IGN Wii Editor Matt Casamassina recently posted a video on his blog of 3DV Systems’ Z-Camera technology, which was showing at GDC. It looks pretty impressive—enabling body motion sensing in 3D without the need for an input device. As the comments on Matt’s blog indicate, there are obviously going to be many applications for which you want to be holding something anyway, but the idea that that something could be a cheap plastic toy instead of an electronic device is an intriguing one. Matt hopes Nintendo is considering the tech for Wii 2—what should Nintendo do for Wii 2, anyway?
Raph Koster, president of Areae and designer of Ultima Online and Star Wars: Galaxies, made this and other interesting statements at a private GDC lunch as reported on gamesindustry.biz. Koster argues that Flash’s ubiquity and device-independence puts it in a leadership position among next-gen gaming platforms, and notes that as devices proliferate, “a lot of games… are not going to know what devices they are landing on”.
As it becomes increasingly common for a given experience to run on multiple devices, some fundamental design issues come into play, as I’m discovering while trying to develop works that will accommodate both the mouse and the Wii remote. While in this case I’m talking about multiple control schemes on a single hardware platform (a PC running WiiFlash), the problem is essentially the same as if I were designing a PC experience that could also run on the Wii.
It’s one thing to design a casual game in which the vocabulary of user actions is defined relatively independently from the control scheme, and then figure out how to make that game work with various input devices. It’s something else, however, to design an experience that takes advantage of the unique capabilities of the Wii remote, while still making the interface functional and rewarding for a mouse user. Of course, the fallback position is always to design for the mouse user and then use the IR pointer capability of the remote to emulate the mouse, and this might be perfectly appropriate for especially complex interfaces. A more customized fit between experience and controller, however, will always be desirable.
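The fallback approach described above can be sketched in code. What follows is a minimal illustration (in TypeScript, with entirely hypothetical names; it does not reflect the actual WiiFlash API) of how mouse and Wii remote pointer input might be normalized behind a common interface, so the experience itself never knows which device it is talking to:

```typescript
// Both devices reduce to a single normalized pointer state.
interface PointerState {
  x: number;        // 0..1, left to right
  y: number;        // 0..1, top to bottom
  pressed: boolean; // mouse button, or the remote's A button
}

interface PointerSource {
  poll(): PointerState;
}

// Mouse adapter: normalizes pixel coordinates against the stage size.
class MouseSource implements PointerSource {
  constructor(
    private getMouse: () => { px: number; py: number; down: boolean },
    private stageW: number,
    private stageH: number
  ) {}
  poll(): PointerState {
    const m = this.getMouse();
    return { x: m.px / this.stageW, y: m.py / this.stageH, pressed: m.down };
  }
}

// Wii remote adapter: here we assume the IR pointer already reports
// normalized coordinates, with y inverted relative to screen space.
class WiimoteSource implements PointerSource {
  constructor(
    private getIR: () => { irX: number; irY: number; a: boolean }
  ) {}
  poll(): PointerState {
    const w = this.getIR();
    return { x: w.irX, y: 1 - w.irY, pressed: w.a };
  }
}

// The experience only ever sees PointerState, never the device.
function update(source: PointerSource): string {
  const p = source.poll();
  return p.pressed
    ? `select at (${p.x.toFixed(2)}, ${p.y.toFixed(2)})`
    : "idle";
}
```

The catch, of course, is that this kind of lowest-common-denominator normalization captures only what the devices share; the remote’s tilt, shake, and distance sensing have no mouse equivalent and would need either dedicated design or graceful fallbacks.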
Just before Christmas, Chris Hecker posted a transcription of the 1982 print advertisement that introduced Electronic Arts to the world: “Can a computer make you cry?” This piece has become a touchstone among digital experience creators, crystallizing as it does our aspirations to be considered artists in the hope-when-I’m-old-they-give-me-a-lifetime-achievement-award sense. Steven Spielberg himself implicitly invoked the ad at the 2004 opening of the EA Game Lab at USC (an event I captured on my cell phone camera, albeit poorly).
Twenty-five years after the publication of the ad, many gamers are able to recollect a small number of interactive experiences that provide ready and affirmative answers to the question it posed. While many of these remembrances forgo actual weeping, I think it’s reasonable to adopt Janet Murray’s position that the phrase “‘make us cry’ stands for a set of phenomena that do not have to involve actual tears” but more broadly engender heightened emotional engagement. A 2005 study identifies the death of Aeris in Final Fantasy VII as one of the most often cited moments. (I’ve never really gotten into the Final Fantasy series; my pick would be the bridge scene in Ico.)
I think there are a lot of factors that play into whether a particular digital experience will “make us cry.” A lot of it has to do with the propensities of the user in the first place—most of those reporting high emotional engagement with interactive fiction are those who very much desire to experience such engagement, and to demonstrate that their chosen medium is capable of it. That willingness forgives a multitude of sins, which may include poor writing and acting, low resolution, simplistic characterization, and pandering to the wish-fulfillment impulse.
It also frequently forgives the fixed perspective. Cinema is widely understood to have come into its own as a medium once directors began to liberate themselves from proscenium framing (which placed the camera in the position of an audience member for a play) and began to put the camera in artistically optimal positions, using editing to string the various viewpoints together to achieve a particular effect. Games have their own version of the proscenium—a fixed point of view which is adopted throughout the interactive portions of the game. This fixed perspective arose as a matter of necessity: first a technical necessity (it’s much easier to create games in limited computing environments by restricting point of view) and then a design necessity (it’s much easier for people to learn how to interact with a virtual environment when their point of view is fixed).
As computing power has increased, the first necessity has generally fallen away, while the second remains strongly in force. Contemporary games with dynamic or interactive camera systems offer much greater variation in the range of perspectives offered to the user during play (i.e. not during a cut scene), but this variation is motivated almost exclusively by utility, with the goal of providing the optimal viewpoint for the player to carry out their tasks. It’s still rare to find games in which perspective is manipulated with artistic intent during play, mainly because experiences which tie interface strictly to a single avatar enforce restrictions on scale and perspective that are tough to get around. In a Tomb Raider game, if you suddenly cut to an extreme close-up of Lara Croft’s eye for artistic effect, the interface you were using to navigate her through her environment suddenly has no meaning. Either you’ve got to teach the player a new interface right then and there for controlling her eye (which may not even make sense artistically), or you temporarily turn off the interactivity (which leaves you with a cut scene).
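One partial workaround, common in games that do vary the camera during play, is to interpret directional input relative to the current viewpoint rather than to the world or the avatar, so that pushing “up” always means “away from the camera” no matter where the camera has moved. A minimal sketch of the idea (illustrative TypeScript, not any particular engine’s API):

```typescript
// Camera-relative movement: the same raw input yields different world
// directions depending on camera yaw, so controls stay legible even as
// the viewpoint shifts. Names and setup are illustrative only.

interface Vec2 { x: number; y: number }

// Rotate the raw input vector by the camera's yaw (in radians) so that
// "up" on the controller always maps to "away from the camera".
function cameraRelative(input: Vec2, cameraYaw: number): Vec2 {
  const cos = Math.cos(cameraYaw);
  const sin = Math.sin(cameraYaw);
  return {
    x: input.x * cos - input.y * sin,
    y: input.x * sin + input.y * cos,
  };
}
```

This keeps basic navigation legible across moderate cuts and camera moves, though it obviously doesn’t solve the extreme case: no input remapping makes “walk forward” meaningful inside a close-up of an eye.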
The other factor that limits options for perspective in games is the widely-accepted design principle that difficulty must increase over the course of the experience. This is actually quite a curious phenomenon when examined in relation to other media. Does a song become harder to listen to the closer you get to its end? Do books become harder to read? Do movies become harder to watch? This one idea has a profound effect on the artistic potential of the medium, because the assumption that the user must learn a task that becomes progressively harder by default requires that the basic elements used to “stage” that task must remain constant. I can’t get better at controlling Lara Croft if my perspective on her is always radically changing; therefore all the artistic potential bound up in multiple points of view is discarded to satisfy the requirements of the learning curve.
The Nintendo DS and Wii have given us many examples of games which teach interface on the fly (with WarioWare being the most hyperactive). If handled correctly, users enjoy these shifts; what’s needed is to locate them in interactive environments in which emotion—not difficulty or consistency—is the prime mover of the experience. This might speed the day when games that “made us cry” (or more precisely, engaged our emotions) can be counted on more than one hand by more than just hardcore gamers.
Here’s a list of links to works cited in my recent talk “Storytelling in the Age of Divided Screens” at Gallaudet University.
I’m very happy to announce the launch of “Timeframing: The Art of Comics on Screens,” a new website that explores what comics have to teach us about creative communication in the age of screen media.
To celebrate the launch of Upgrade Soul, here’s a screen shot of an eleven-year-old prototype I made that sets artwork from Will Eisner’s “The Treasure of Avenue ‘C’” (a story from New York: The Big City) in two dynamically resizable panels.
The last couple of months have seen an uptick in published commentary on Strange Rain, much of it owing to notice the app received at this year’s Modern Language Association conference in Seattle.
Dialogue bubbles huddle together in the Unity authoring environment like backstage theatre performers awaiting their chance to shine in the forthcoming iOS and Android release Upgrade Soul, from Opertoon.