Wired recently featured a piece about new work being done using the Wiimote as an interface for real-world training simulators running in Second Life. Surgery, hazardous chemical handling, nuclear plant operations—all are fair game for WorldWired, a consultancy run by David E. Stone, who calls the Wiimote “one of the most significant technology breakthroughs in the history of computer science.”
Well, obviously I think the Wiimote’s pretty nifty too, but it’s only partly for the reasons given by MIT professor Eric Klopfer in the article: “People know intuitively what to do with it when they pick it up because we use it like devices we are familiar with—bats, rackets, wands, etc.”
So much Wiimote boosterism is about where the remote takes you—the mental models it so seamlessly helps you to adopt. All true, but I would argue that of equal significance is what the remote helps you leave behind. Imagine that the Wiimote comes into widespread usage as an alternative PC input device. Not something you use every day, but something you keep next to your work machine, since you can use it as a PowerPoint or iTunes remote, as well as for those all-too-rare moments when you stumble across a Flash game, online comic or art piece that shows you the delightful message: “If you have a Wiimote, pick it up now.”
Instantly, you’ve left behind the world of work and its input devices, and you’re prepared to experience something unusual—the more unexpected, the better. Even if the piece doesn’t make use of the remote’s motion sensitivity at all, the device itself has still managed to carve out headspace where software art and entertainment no longer have to compete with every other application to remap the meanings of your mouse and keyboard. We’ve got transitional space now; we’ve got a lobby to ease people out of the workaday and into alternate realms.
Calling it a technological breakthrough just doesn’t do it justice, does it?
Pictured at right: the cover of one of my all-time favorite digital experiences, the almost completely non-interactive Shining Flower (aka Hikaru Hana), developed by Maze Inc. and published by The Voyager Company, with concept and illustrations by Kikuko Iwano. My niece (those are her cornrows you see at the bottom of every page) recently returned to me the Power Macintosh 8500/120 I lent her when she went to college, and with it I regained the ability to run Shining Flower, to my delight.
Shining Flower was published in 1993 while I was working at Voyager as an audio commentary editor for the Criterion Collection. I have vague memories of seeing it demoed at one of the monthly open houses Voyager held at their offices on the beach at Santa Monica. Love at first sight; that immediate feeling of creative jealousy you get when you see something you wish you’d made. I bought it.
Shining Flower is beautiful, contemplative, quiet, and makes excellent use of limited resources. It's not pretending to be a movie, or cel animation, or anything other than an 8-bit Director piece (with exemplary use of the lost art of color cycling, I might add). A single character holding a glowing flower makes his/her way through a series of surreal vignettes, on a kind of spiritual journey. Interactivity is limited, essentially, to choosing which several-minute-long sequence you want to watch next.
I remember some grumbling at Voyager about the lack of interaction; at a company that was pioneering the application of interactivity to culturally significant content, why publish this? On the surface it did look like a misstep, but if you caught the spirit behind the piece—the total commitment to expressing something in this medium, approaching it with the same respect afforded to cinema or literature, fully embracing the technology of the day without harboring self-defeating disdain for its limitations—the appeal of the work was undeniable.
Now when I watch it I find myself wanting to write code that makes it all dynamic, semantic, syntactic and syllabic…
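In that spirit, here's a minimal sketch of how the color cycling technique mentioned above works: you animate indexed-color art by rotating palette entries each frame, never touching the pixel data itself. (The palette, image, and function names here are made up for illustration; this is not code from Shining Flower.)

```python
def cycle_palette(palette, start, end):
    """Rotate palette entries in [start, end) by one position."""
    segment = palette[start:end]
    return palette[:start] + segment[-1:] + segment[:-1] + palette[end:]

# A tiny 8-entry palette; entries 4..7 are the "animated" range
# (think of them as the glowing flower's colors).
palette = [(0, 0, 0), (64, 64, 64), (128, 128, 128), (255, 255, 255),
           (255, 0, 0), (255, 128, 0), (255, 255, 0), (255, 255, 128)]

# The pixel data is just a grid of palette indices, and it never changes.
pixels = [[4, 5, 6, 7],
          [5, 6, 7, 4]]

def render(pixels, palette):
    """Resolve palette indices to actual colors."""
    return [[palette[i] for i in row] for row in pixels]

frame0 = render(pixels, palette)
palette = cycle_palette(palette, 4, 8)   # one animation tick
frame1 = render(pixels, palette)
# Same pixel indices, different on-screen colors: animation almost for free,
# which is why the technique was so attractive on 8-bit-color machines.
```

The charm of the trick is the economy: one tiny palette write per frame animates arbitrarily large regions of the screen, which suited both the hardware of the day and the meditative, slowly shifting imagery of a piece like this.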
Enjoy: the “beach” vignette of Shining Flower.
Word comes from across the pond that Swing is going to be shown at tomorrow’s London Flash Platform User Group meeting. At a one-hour session called “Fwii Style” (all these Wii puns remind me of the early days of the Macintosh, when all software had to have “Mac” in the title) Adam Robertson of Dusty Pixels will hold forth on the wonders of Wii and Flash:
Forget your PS3s and 360s, the Wii is officially the coolest console ever, all thanks to its innovative Wiimote controller. And now you can get in on the motion-sensing goodness using Flash.
In this session we’ll take a quick look inside the Wiimote to learn a bit about how it works, then discover how you can use it to control your own Flash projects, both through the official Wii browser (with the Wiicade API) and on your desktop (with FWiidom & WiiFlash). Much arm waving guaranteed.
The followup session, called “Make Things Physical” and taught by Leif Lovgreen, sounds pretty great too:
An introduction to physical interaction. Adobe Flash, the Make Controller Kit from MakingThings and a handful of analogue sensors. This session covers the basics of getting started with analogue input as an interface to Flash.
Expect strange things like ice cubes, food, flashlights and a boxing ball to be natural ingredients in this session.
I'd thought interaction with analog sensors was still beyond Flash's capabilities; glad to hear this barrier's coming down.
Those in the London environs, take note; it sounds like an interesting evening.
One of the things that’s apparent in this first generation of Wii titles is that many developers have underestimated how much attention needs to be given to instructing the user in how to hold and move the controller. Static icons don’t cut it anymore; you’ve got to have animation, and even then it takes some finesse, as simply playing a loop of the controller being waved around can still be confusing if the loop point itself unintentionally conveys some kind of gesture.
The best in-game controller tutorials I've seen to date are in the upcoming title Zack & Wiki. They actually show a little 3D animated guy (upper body only) holding the remote, along with text prompts. Seems like overkill at first, but it's actually great because you not only pick up on controller movement, you also get posture and timing. When necessary, they can also switch to a first-person view of the figure, or even a "disembodied hand" view to aid with object manipulation. Check out some videos of the interface in action; the game looks pretty fun.
I wonder, though: do you think they'll let you customize the guy's skin color? I'm assuming he's not a character in the game but is supposed to represent some kind of abstracted ideal human, which opens up a whole set of issues… many of which Anne Friedberg and I also ran into when picking silhouettes for The Virtual Window Interactive (and which we tried to skirt by letting users create their own). Gestural interfaces are increasingly going to require representations of the human form to explain themselves, so whose form do we represent? Do we need an interactive 3D update to the 1974 AIGA/DOT symbol system?
Scratching the surface. What else belongs here?