Narrative design, the intersection of game mechanics and story, makes it clear that producing either entails the other. Narrative design is game design: all game mechanics inherently tell a story, and all game stories solicit certain mechanics. Among the storytelling tools this reframing introduces, text becomes the most prosaic, a vestigial crutch for writers clinging to that title, and those of us re-conceptualizing narrative within the medium have abstracted our role beyond it to include audio, art, UI/UX, and essentially every sphere of design.
All divisions die in player experience.
It poses no problem, then, that as VR takes off, our old storytelling techniques, especially the now-nauseating text, demand reevaluation. Luckily, and to our field’s credit, its most progressive inclinations already seem to define the new paradigm.
For years narrative designers have fostered “immersion” in games with ambient VO, environmental detail, and narrative-driven mechanics, so now that the word is literal rather than figurative, those methods graduate to the crux of our role. Text is hard to read on current hardware, and we certainly want that to change, but it’s a tool we were minimizing anyway. Narrative dwells in everything, from vandalized statues to buzzards overhead, dissolving the boundaries between specialists to the point where interdisciplinary collaboration is no longer optional. Segregate your departments and your world will show its seams.
Our challenges as narrative designers pale beside those facing UI designers, upon whom VR’s viability depends. In fact, developments there open the doors that narrative will explore, doors we’ll all explore once the cobwebs disappear. As UI designers ask “What new input can we read?”, narrative designers ask “How can our content react? What new contextual triggers can we design around?”
Of course NPCs become more believable by responding to player gaze, but what about biofeedback like heart rate or sudden breath (see Nevermind by Flying Mollusk)? What emotional subtleties can we detect in voice commands? Anything the user does that the system can measure is information to which worlds can react, and this is what we meant all along by “immersion.”
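To make “designing around contextual triggers” concrete, here is a minimal, engine-agnostic sketch of the pattern in Python. Every name in it (PlayerState, Trigger, NarrativeDirector, the thresholds) is hypothetical, invented for illustration rather than drawn from Nevermind or any shipped system:

```python
# A minimal sketch of a contextual-trigger registry. All names are
# hypothetical, invented for illustration; no real engine API is assumed.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class PlayerState:
    gaze_target: str       # id of the object the headset reports the player is looking at
    heart_rate_bpm: float  # from a biofeedback sensor, as in Nevermind
    seconds_on_target: float

@dataclass
class Trigger:
    name: str
    condition: Callable[[PlayerState], bool]
    reaction: Callable[[], None]
    fired: bool = False    # fire once, like most scripted story beats

class NarrativeDirector:
    def __init__(self) -> None:
        self.triggers: List[Trigger] = []

    def add(self, trigger: Trigger) -> None:
        self.triggers.append(trigger)

    def update(self, state: PlayerState) -> None:
        """Called once per frame with whatever the system can measure."""
        for t in self.triggers:
            if not t.fired and t.condition(state):
                t.reaction()
                t.fired = True

# Example: an NPC notices being stared at; the world dims when the
# player's pulse spikes.
director = NarrativeDirector()
director.add(Trigger(
    name="npc_notices_stare",
    condition=lambda s: s.gaze_target == "npc_guard" and s.seconds_on_target > 3.0,
    reaction=lambda: print("Guard: 'Can I help you?'"),
))
director.add(Trigger(
    name="panic_dims_lights",
    condition=lambda s: s.heart_rate_bpm > 110,
    reaction=lambda: print("[lights dim, ambient score swells]"),
))

director.update(PlayerState("npc_guard", 72.0, 3.5))  # fires the stare trigger
director.update(PlayerState("statue", 118.0, 0.0))    # fires the panic trigger
```

The plumbing is beside the point; what matters is that gaze, pulse, and dwell time enter the same pipeline as any button press, which is exactly the input-agnostic reactivity described above.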
Who better to drive it home than those already on the way?
Featured Image: Screenshot from Adr1ft by Three One Zero