Nathan Kracklauer, Abilitie chief research officer, outlines the dangers of excessive realism in simulation design and the negative impact it has on learning outcomes.
To generate learning impact, how closely does a simulated environment need to resemble reality? It’s a question we’ve wrestled with often over the last two decades as we’ve designed learning simulations and serious games for our clients, from simple tabletop exercises to complex business and management simulations.
During that time, the training industry’s go-to example of effective simulation-based learning has been the flight simulator, and it still is. Flight simulators are easy to demonstrate. More importantly, so is their value. No one would dispute that it’s better to train pilots in a highly realistic representation of the life-and-death scenarios they will encounter as they keep us all safe. Who wants a pilot trained by slide presentations alone?
Machines that look like cockpits and react to the pilot’s actions in ways that resemble aircraft behavior have a surprisingly long history. As early as the pre-digital 1930s, the Link Trainer not only simulated the cockpit controls but was also mounted on a platform that replicated in-flight motions. The trend in flight simulator development since then has been toward ever greater verisimilitude in the virtual experience of flight, enabled by technology, digital and otherwise.
While development costs have been significant, they are dwarfed by the safety benefits. Shepherding pilots through the learning-from-mistakes phase of their training without putting life, limb, and multi-million-dollar aircraft on the line easily justifies the cost of even the most expensive simulators on the market.
Given that the flight simulator remains the training industry’s favorite example of simulation-based learning, and given the trend towards ever greater fidelity in that use case, it’s tempting to conclude that a simulation’s learning effectiveness correlates with its fidelity in representing reality. The closer to reality the better, the flight simulator suggests, and how close you get is just a matter of the available technology. But is closer to reality always more effective? Or are there situations where greater fidelity can be counterproductive?
That question takes center stage when we help clients choose between a customized business simulation that tries to closely model a particular business and a proven off-the-shelf solution. Does a customized simulation with greater fidelity to the industry deliver greater learning impact than a more generalized business simulation? In our experience, the answer is often no, for three reasons.
Reason 1: Complexity Obscures Learning Outcomes
The most important reason is that the more detailed your representation of reality, the more complex it becomes. Complexity can overwhelm and confuse, reducing clarity and obscuring learning outcomes.
Consider maps. Maps, like simulations, are representations of reality. Mapping apps often offer satellite and map views, and the satellite view is, strictly speaking, more realistic. It conveys far more detail about what is actually on the ground than the abstract – and inaccurate – colored lines of the map view. But which view do we use to make our way from point A to point B? For most use cases, the map view is more helpful. The satellite view bombards us with mere data, while the map view serves up information that enables decision-making.
In the same way, an excessively detailed simulated environment can overwhelm with sensory inputs. We, as simulation designers, serve our learners best when we “curate” reality to reveal the underlying concepts learners need to discover. In business simulations, those concepts are things like: “Payment terms can drive cash flow and profits in opposite directions,” “Feedback to direct reports needs to be specific and actionable,” and “A strategy that doesn’t rule out some options is not a strategy.” The learner should be able to discover those concepts in reality, too. But in reality – double meaning fully intended – we discover them slowly, if at all. That’s because reality buries those concepts under a thick layer of factoids, trivia, and smartphone notifications. A perfectly realistic simulation would be just as perfectly confusing as the real world.
Much pilot training, too, takes place in lower-fidelity simulators. Some simulators train procedural knowledge without graphics or motion, and some model only specific cockpit instruments. At the heart of effective training, for pilots and business leaders alike, is the careful scaffolding of knowledge – taking what is known and slowly building competencies on top, without leaping too far ahead and leaving the learner behind. In simulation-based learning design, we follow Miles Davis’s dictum: “It’s not the notes you play, it’s the notes you don’t play.”
Reason 2: Complexity Uses Learners’ Time Inefficiently
The complexity that comes from replicating the real world with ever higher levels of fidelity not only obscures the learning outcomes but also makes learning how to use the simulation more difficult and time-consuming. “Complexity” in simulation design means, roughly, more decision points with more decision options, more information to process, and an exponentially growing space of decision outcomes. In practical terms, that increases the effort required of learners to immerse themselves in the simulation and understand its rules. Greater fidelity of representation means more time spent learning the simulation, which means less time learning from the simulation, our actual goal. There is a cost-benefit analysis here, not in terms of the simulation’s design and development costs, but in terms of participants’ limited time for training.
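To make the combinatorial point concrete, here is a back-of-the-envelope sketch (an illustration only, assuming each decision point offers the same number of independent options): the space of possible decision paths grows exponentially with every decision point added to a simulation.

```python
from itertools import product

def outcome_space(decision_points: int, options_per_point: int) -> int:
    """Count the distinct decision paths a learner could take,
    where each path is one choice made at each decision point."""
    paths = product(range(options_per_point), repeat=decision_points)
    # Equivalent to options_per_point ** decision_points.
    return sum(1 for _ in paths)

# A modest sim: 3 decision rounds with 4 options each -> 64 possible paths.
print(outcome_space(3, 4))  # 64
# Add just two more rounds and the space grows sixteen-fold.
print(outcome_space(5, 4))  # 1024
```

Every extra decision point a designer adds for realism multiplies the paths a learner (and a debrief facilitator) must make sense of, which is exactly why fidelity carries a time cost.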
Reason 3: The Uncanny Valley Distracts From Learning Outcomes
Finally, there is the often-cited phenomenon from robotics known as the “Uncanny Valley.” The more human-like a robot’s design, the more comfortable we become with it – but only up to a point: the Uncanny Valley. From there on, the resemblances – or rather, the remaining small but telling differences – become unsettling, even eliciting a form of disgust. The classic examples are the famed Star Wars droids. R2-D2 looks nothing like a human or even a cuddly mammal, yet it became one of the franchise’s most beloved characters. Humanoid and courteous C-3PO, on the other hand… the less said the better.
Researchers disagree about whether the “Uncanny Valley” is a problem specific to the simulation of humanity or an instance of a wider problem that occurs when something meets most, but not quite all, of the criteria for membership in a familiar category. When we encounter something that is “neither fish nor fowl,” it disquiets us, or at the very least, it fixes our attention on the categorization rather than the thing itself.
Similarly, learners asked to immerse themselves in a simulated organizational environment that closely resembles their own reality find themselves focused on the remaining – often inconsequential – differences. Learners carry an additional cognitive load that inhibits the learning process. Ironically, closer resemblance demands greater effort to suspend disbelief than a more distant context.
We’ve learned the dangers of excessive realism in simulation design the hard way. Simulation design is fun, and you can easily get lost in your quest for perfect representation. Here are a few of the “reality filters” we apply when we design and customize business simulations:
- Define clear learning objectives and stick to them: Obvious, but easier said than done. Concretely, it means that every player decision and action must tie to one of the learning objectives. If not, that decision or action has no place in the learning simulation, even if it’s part of the audience’s real-world experience.
- Display information, not data: Facts and figures should be relevant to the players’ decisions and actions. The real world is a mess of sensory inputs and distractions. But it’s unlikely that any red herrings you design into your sim will resemble the real-world ones your learners will encounter, and there’s no sense in preparing them for fake red herrings.
- Use randomness sparingly: The real world is characterized by randomness, but allow true randomness to drive outcomes only if dealing with it is intrinsic to the learning objectives. That doesn’t mean the simulation has to be predictable; it just means that you should painstakingly choreograph the unexpected for the learner.
- Use stories but don’t overplay them: Games from Chutes and Ladders to chess can be completely defined abstractly and mathematically, but they capture our imaginations with a narrative veneer. Still, nobody plays chess because it’s a story of warring kingdoms; the story fades into the background quickly, even as interest in the game itself grows. Stories are important: They motivate the immersion, provide vocabulary, and help explain the rules. For those reasons, they need to be simple and memorable. But once participants are immersed, the story has usually served its purpose. It can’t become a distraction.
We apply these filters to our simulation and game design for learning. We also believe similar filters must guide design choices made for any virtual environment used for learning. The pandemic has driven the vast majority of training into virtual environments, and much of it will remain there.
Meanwhile, we are only beginning to experiment with those virtual environments, their representations of ourselves, and the spaces in which we interact and learn socially. We won’t be at all surprised if the conventions around how people, places, and interactions are represented in the metaverse wind up being decidedly lower-fidelity than anything we can already experience in the world of pure gaming.