Monday, May 20, 2013
The term "content locusts" came up today in a Google+ post by Richard Bartle discussing the direction of "free-to-play" online games.
This term "content locusts" has come into use as shorthand to describe the phenomenon that, when a new computer game (especially a multiplayer online game) is released, a sizable subset of players will begin playing that new game as soon as it's available, try to experience its primary content as rapidly as possible, and then move on to a new game. The notion is that these players are like locusts -- they swarm a new game, buzzsaw through its content, and then fly away (often complaining that the game was "too short" or "too easy").
I remember mentioning a few years back that the Achiever Bartle Type was most closely related to this behavior, mostly because the behavior seems keyed directly to the Achiever motivation that "game" means a challenge to be beaten.
That got me thinking: what was the earliest use of this term?
There are several forms of the basic idea. The earliest mention I could find of "locusts" in the context of computer games was a comment by "Wolfshead" (saved by Google on May 29, 2004) describing player guilds in EQ: "The EQ Devs were caught off guard by the tenacity of the uberguild phenomena. These guilds consumed content like locusts and in many cases actually tested major encounters."
The next mention showing up is by Mike Sellers at Terra Nova on June 13, 2005: "As far as I know instancing has been introduced to reduce the immersion-shattering practice of camping, lining up for spawn points, and seeing popular dungeons or hunting grounds having been essentially clear-cut by roving locust-like bands of players."
The first reference I can find that specifically links content, locusts, and Achievers was my "Will The Real Explorers Please Stand Up?" blog entry (inspired by the Terra Nova discussion of the same name from January through July of 2005): "Achievers tend to become bored quickly -- like locusts, they swarm to a new game, burn through anything resembling "content," then zoom off again to consume the Next Big Game."
According to Google, the first use of the specific term "content locusts" is in the "Time flies when you're having fun" post by Isabelle Parsley (AKA Ysharros) at the Stylish Corpse blog on November 24, 2009: "It takes work to provide a smorgasbord of content that the content locusts can NOM NOM NOM their blind hungry way through, but that the … let’s call them content slugs can enjoy much more slowly and completely."
Finally, the use of the term "content locusts" that ignited its widespread usage appears to have been the "Content Locusts Killed My MMO" article by the very same Isabelle Parsley at MMORPG.com on January 27, 2012: "I like to blame the content locusts for this, at least to a large extent – that small percentage of players whose goal isn’t to experience content but to consume it as fast as possible as they race inexorably through a game."
Following that article, 2012 was littered with uses of the phrase "content locusts." And the design of SWTOR seems to have been directly related to how quickly the term entered general usage -- SWTOR is what most people who used the term were talking about.
Assuming anyone else is intrigued by this kind of linguistic archeology: can you find earlier expressions of this idea?
Monday, March 4, 2013
Congratulations! You've just decided to build a world.
Building a world means imagining and implementing stuff. The "stuff" of a new world consists of places, and of objects set within those places.
Having decided to create a computer game that exists as a world, you now get to decide what kind of world will best suit the sort of game you want to make.
If you want your newly-created world to emphasize action, you'll want most of the objects in your world to express rule-based behaviors like movement and damage status. You'll also want to provide ways for players to manipulate those behaviors, since having "verbs" that allow players to (usually destructively) manipulate objects is what allows them to feel active.
If you want your world to emphasize meaning, then you'll need to make some of your stuff look and (to some extent) act like people, or at least be artifacts created by people and imbued with emotional value. The appearance and characteristics of all places and objects should help to express their inner meanings.
If you want your world to emphasize interaction and problem-solving, then some of the stuff in the world will need to appear to have complex dynamic behaviors. These behaviors can emerge from simple internal behavioral rules, but they need to interact enough to create systems that have patterns but that can't be summed up in a wiki entry by the first person to encounter them.
These goals aren't mutually exclusive. You can have a game that emphasizes action and interaction (Minecraft), or meaning and interaction (Myst), or action and meaning (The Sims). It's even possible to build a world that provides all three of these modes of play. The Elder Scrolls V: Skyrim is arguably a decent current example of this.
The pertinent thing about such multi-modal worlds is that they tend to be BIG. Big worlds are large places, and they are full of stuff... or as game developers like to call it, "content." How you choose to structure the content that defines the world of your game is the point of this article.
THE STRUCTURE OF BIG NEW WORLDS
Most game developers choose to organize a big world either by breadth or by depth.
Breadth is about making a world whose navigable terrain is large relative to the player's character and that contains many objects. Open-world games such as Skyrim and Minecraft tend to feel like enormous places overflowing with objects.
A gameworld built for breadth will offer wide expanses of terrain and numerous "inside" locations with their own terrain. And all that terrain and all those interiors will have objects located on and in them (grass, rocks, plants, animals, furniture, tools, weapons, people, etc.). As a side effect of having to build large amounts of stuff, that stuff will mostly exist either as a static texture map (you can see it but you can't do anything with it), or as a usable item with a single, simple, predetermined effect.
Depth is about making a world whose places and objects have many details. Deep games don't have as many places or objects as in a broad game. But the places that are built are carefully constructed to feel lived-in like a family home in a Spielberg movie. And the pieces of stuff in these places will be tagged with highly relevant information, usually called "lore." The objects in a deep game will also typically be richly dynamic -- they'll have several "verbs," or different but plausible ways for players to interact with them as gameplay activities.
What most developers don't try to do is make a game that has both breadth and depth. They don't try to make a big world that's both very large (in spatial size and object count) and very detailed.
There's nothing that forces this as a design choice. But historically there have been two serious practical constraints: time and money.
Trying to make a big world that contains a lot of content is hard. Trying to make a big world that contains very detailed content is hard. Trying to do both (the thinking goes) doubles or trebles the required development time and money. So most developers of big worlds pick one structural approach and try to do it well.
BREADTH VERSUS DEPTH: THE CHOICE
Making either of these choices means a tradeoff.
Games that do breadth well -- they have physically large worlds filled with stuff -- are often criticized as "shallow." Games that do depth well -- their places and objects are intricate and filled with meaning -- are criticized as inducing claustrophobia by not enabling the feeling of rapid and frequent motion.
Bethesda's post-Morrowind console-focused games (Oblivion, Fallout 3, Skyrim) are known for their breadth. The traversable area of these worlds is enormous compared to the linear environments of most games. But this size means they're often condemned as being shallow. The action and meaning and interaction are almost entirely surface-only -- what you can see is pretty much all there is.
In a couple of ways, that's an unfair criticism. Developers of games that use Bethesda's engine do try to include some depth in their games, in addition to the massive amount of broad content that needs to be created to fill the outdoor spaces and interiors. Objects in rooms, for example, are frequently selected and arranged to tell a kind of micro-story about the person whose place that was. Terminals and books abound, giving some emotional depth to the world through tiny stories.
Also, while those are real exceptions to simply having lots of stuff, the need to fit a very broad gameworld into the constraints of a console imposes limits on depth. It's simply not possible, even if there were time and money enough to do so, to tag every object with information and to allow every object to be usable in multiple ways. Consoles enforce simplicity, which favors zone-loading breadth over information-dense depth. (Obsidian's Project Eternity and Chris Roberts's Star Citizen are two Kickstarted games that are meant to be both big and developed specifically for the PC. It will be interesting to see whether the removal of the console limits allows these games to be both broad and deep.)
Those defenses noted, the reality is that there just aren't many games that do depth well, even as the design emphasis.
THE CASE AGAINST DEPTH
Adventure games used to try for depth. Things in adventure games had stories, and behaviors, that you could discover if you took the time to click on them. Exploring this depth was often a necessity, in fact. The depth was built into the core game design; you could not win the game (without cheats) except by reading and interacting with the details of the world.
But this structural choice -- gameplay through investigating a small but information-rich space -- means less sensation of movement. There is vastly less of a kinesthetic sensation of energetic action in a deep game. Most of the "motion" is in one's mind. Designing that kind of game takes a very different kind of effort than designing a broad but shallow world.
Sometimes this led to simply clicking on pretty pictures, as in Myst. Later adventure games were dismissed as "hunt the pixel" games when they made interaction with objects a gameplay activity requiring a physical challenge, rather than a path toward greater exploratory or narrative depth.
In addition to feeling constrictive for gamers whose main playstyle interest is motion and activity, the work curve in deep games feels more like a sequence of high stairsteps. Every new place that is created in a deep game needs to be constructed with numerous very detailed and active objects. These objects must work on their own, they must contribute to the intended purpose of that particular space, and they have to support the theme of the overall world of the game. That's not a mechanical function that someone can be trained to do -- you need someone who can feel, and who is creative, and who can combine those talents to make moments that other people can feel. So adding even a single new space becomes a major undertaking in a deep game.
Compared to the generally smooth slope of the work to be done for a game with breadth, with many similar objects scattered over a large area, deep content is simply harder to produce.
The practical result of these structural effects is that big worlds tend to be broad (but shallow) because breadth is easier.
A small but deep world, by its nature, requires hand-crafting. When there are only a few places and only a few things in those places, every place and every thing will be seen and assessed on its merits. For the game to feel right, designers and storytellers need to carefully stage all the visible pieces.
A broad world can be filled with content using programmatic tools, then given a relatively simpler hand-crafting pass or two. Large swaths of terrain can be sculpted using terrain generation tools; vegetation can be "planted" automatically according to exposed terrain type; buildings and dungeons can be selected from a few pre-built models; and so on. There's still a lot to do, but the broad strokes on the canvas can be filled in by code that applies generative rules.
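As a toy illustration of that kind of generative filling -- the terrain types, species names, and placement probabilities below are all invented for this sketch, not any engine's actual API -- a few simple rules can populate a map with no hand-editing at all:

```python
import random

# Hypothetical planting rules: which species may appear on each terrain type.
PLANT_RULES = {
    "grassland": ["grass", "wildflower"],
    "forest":    ["oak", "pine", "fern"],
    "rock":      [],  # nothing grows on bare rock
}

def generate_terrain(width, height, seed=42):
    """Assign a terrain type to each tile using seeded randomness."""
    rng = random.Random(seed)
    types = list(PLANT_RULES)
    return [[rng.choice(types) for _ in range(width)] for _ in range(height)]

def plant_vegetation(terrain, seed=42):
    """'Plant' objects automatically according to each tile's terrain type."""
    rng = random.Random(seed)
    placements = []
    for y, row in enumerate(terrain):
        for x, tile in enumerate(row):
            for species in PLANT_RULES[tile]:
                if rng.random() < 0.3:  # sparse, naturalistic scatter
                    placements.append((x, y, species))
    return placements

terrain = generate_terrain(8, 8)
objects = plant_vegetation(terrain)
print(f"{len(objects)} objects placed with zero hand-editing")
```

The hand-crafting pass then only has to touch the tiles that matter, which is exactly the economy the broad approach buys.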
Another important reason why breadth is easier is that depth is about meaning. Programmatically "planting" a tree object in a particular location in an open-world game might have some meaning if someone takes the time to go into that space and tweak the location and type of the tree to give it meaning in that place.
But it's just a tree -- just a nice-looking texture. And there are many thousands of such nice-looking objects to be placed in addition to all the other objects that need to be placed... and trying to give all such things emotional value takes time that just doesn't exist. (And let's be honest, the number of people who enjoy and are good at choosing and placing gameworld objects in ways that express meaning to gamers is probably extremely small.)
It's simply faster and easier to make a big space with lots of things in it that don't have any particular emotional content.
BUILDING DEEP WORLDS ANYWAY
Not everyone has given up on depth as a viable structural option in games, however.
Richard Cobbett recently made a plea for more depth in a Eurogamer article, Saturday Soapbox: Hollow Worlds - Looking for "Look At". In this piece, he laments what today's games miss by not including the "Look At" feature. This ability to learn more about the details of places and objects was once common in text-based and point-and-click adventure games. But games increasingly emphasized action and excitement and motion. As they did, the very idea of stopping the action to learn more about the nature of the things in the gameworld became harder even to imagine.
Assuming it's implemented in any serious way, designing a look-at capability into a game creates the opportunity for more depth than is found in most of today's games. Games with exploratory and narrative depth are worth making. It will be interesting to see if anyone takes up Cobbett's challenge.
Another example of wishing for games that emphasize depth is the "One City Block" concept, as evangelized occasionally by Warren Spector. This is a game deliberately designed to limit the playable space of the gameworld to just a single block of a city. In place of constant action and new sights, the variety and interest in a One City Block game would be found in the people and objects existing in this small patch of reality and the deep connections among them.
A potential example of this kind of design may be found in Gone Home, currently being developed by the small team that created the "Minerva's Den" DLC for BioShock 2. As in the best of the mature adventure games, Gone Home promises to reward players not for using physical dexterity to "beat" the game as quickly as possible (which is a perfectly valid playstyle satisfied by many current games), but for engaging with the deep world of the game at an emotional and intellectual level. This doesn't guarantee it will be a good game -- but it will be a different game than most of what's released today, and a deeper game, and that makes it worth watching.
BREADTH AND DEPTH IN ONE GAME -- CHALLENGES
Finally, is it possible to have both breadth and depth in a single game? Are there any work-organizing processes and technical capabilities by which a gameworld could be created within some reasonable time frame (say a year or two) that is both large and detailed?
One step in a positive direction is procedural content generation (PCG). To have both breadth and depth, developers need help with one of those two forms of content so that they have time to focus on the other. Since it's so much harder to define rules for meaning-filled (depth) content compared to expansive (breadth) content, having lots of both would seem to depend on creating tools that generate lots of good content.
Really large (broad) games already do this. Huge chunks of land can be generated randomly to an arbitrary level of complexity. This can be done by the developers, then hand-tweaked, in order to create a world (such as Skyrim) that is common to all players. Or it can be done dynamically, as in Minecraft -- this approach restricts placement of large, detailed structures, but it requires less data storage if world details are generated only when the player actually approaches that part of the world.
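A minimal sketch of that dynamic approach (the hashing scheme and block types here are my own invention, not Minecraft's actual algorithm): deriving each chunk's seed from the world seed plus its coordinates means unvisited regions cost nothing to store, because revisiting the same coordinates regenerates the identical chunk.

```python
import hashlib
import random

WORLD_SEED = 1337  # hypothetical global seed chosen at world creation
CHUNK_SIZE = 16

def chunk_seed(cx, cy):
    """Derive a stable per-chunk seed from the world seed and coordinates."""
    key = f"{WORLD_SEED}:{cx}:{cy}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:8], "big")

def generate_chunk(cx, cy):
    """Generate a chunk's contents only when the player approaches it.
    The same coordinates always yield the same chunk, so nothing needs
    to be stored for regions no player has visited."""
    rng = random.Random(chunk_seed(cx, cy))
    return [[rng.choice(["stone", "dirt", "tree"])
             for _ in range(CHUNK_SIZE)] for _ in range(CHUNK_SIZE)]

# A visit and a later revisit produce identical terrain:
assert generate_chunk(3, -2) == generate_chunk(3, -2)
```

Only the chunks a player has actually modified need to be saved, which is the storage saving the text describes.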
Random generation of fixed content can help greatly with providing a large amount of physical terrain. Once the first and second passes through the terrain generator are done, developers can then edit this base structure to individually adjust the look of key locations. They can then add objects (many, many objects) for yet more breadth of content. Finally, they can tag objects with meaningful deep content.
This is basically how Bethesda creates its Elder Scrolls and Fallout open-world games. Bethesda comes as close as anyone to the ideal of a game that is both broad and deep... but in an ironic twist, it is the very breadth of these games that points out the shallowness of the characters and objects relative to how many of them there are. If these games were considerably smaller, the many small details of object placement and NPC behaviors would be better recognized and appreciated.
The downside to this method for trying to have both breadth and depth is that it's still almost as expensive as making a big, deep game purely by hand. Although random terrain generation helps, it's not really enough to reduce the number of people needed to hand-tweak the thousands of places and objects and actors.
Games that dynamically generate content have it even worse. While this approach makes it possible to enjoy spaces that are very large and that have lots of naturalistic "stuff" in them (rocks, trees, etc.), not generating content until the player is ready to experience it makes it impossible for the developer to hand-edit that content to add depth.
BREADTH AND DEPTH IN ONE GAME -- POSSIBILITIES
Handling both of these cases seems to drive at one question: is it possible to programmatically create depth?
Programmatically creating breadth is valuable. The more breadth that can be added using automatic systems, the more time is available for adding depth.
But the degree of difficulty is lower for breadth-creation rules. That's not to say it's easy; it's just easier to define rules that determine where and how to plop down places and objects (including people-shaped objects) than it is to imagine and apply the contextually-plausible information that gives those places and things and people emotional value.
As a general rule, anything to do with simulating people and people-related artifacts is hard. Automatically generating a forest is (relatively) easy; it's terrain and plants. There are even third-party tools like SpeedTree that help with this. If you're feeling energetic, you might add animals as well, whose impact on their environment is generally negligible. You can tweak those content elements for aesthetic or simulationist value, but it's still reasonably simple to generate lots of them according to predetermined and encoded rules.
Add people, though, and now you have the task of creatively imagining and representing the effects of human intentions and actions -- for example: roads (what kind? how should they run?), buildings (architectural style? size? purpose? proximity to other buildings? clean or filthy?), tools (what kinds of tools would different cultures use? where should they be placed outside or in a home? does the owner take care of them? does the owner have special feelings toward any of these objects?), and of course people themselves (who are they? what do they want? how do they feel about other characters? what actions are they capable of taking? what actions do they take given specific environmental phenomena?).
In a very broad game, adding people-related depth is an epic undertaking. When you have a cast of thousands, what are the rules that will let all those people behave like unique but still plausible individuals? An army of developers could hand-tweak each NPC, but that's expensive... and what if you want your game to add new characters over the course of play?
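One cheap partial answer -- and this is only a sketch, with invented trait lists, since genuine plausibility is exactly the hard part described above -- is to compose each NPC deterministically from a few trait axes, so thousands of characters come out distinct and stable across sessions without anyone hand-tweaking each one:

```python
import random

# Hypothetical trait axes; a toy stand-in for real generative character rules.
TRAITS = {
    "temperament": ["gruff", "cheerful", "anxious", "stoic"],
    "occupation":  ["smith", "farmer", "guard", "merchant"],
    "secret":      ["owes a debt", "misses home", "hides a past", "wants out"],
}

# Surface behavior keyed to an inner trait, so variation is visible in play.
GREETINGS = {
    "gruff":    "What do you want?",
    "cheerful": "Well met, traveler!",
    "anxious":  "Oh! You startled me.",
    "stoic":    "Speak your business.",
}

def generate_npc(npc_id, world_seed=7):
    """Deterministically combine traits: the same NPC id always yields the
    same character, and new characters can be minted during play for free."""
    rng = random.Random(f"{world_seed}:{npc_id}")
    npc = {axis: rng.choice(options) for axis, options in TRAITS.items()}
    npc["greeting"] = GREETINGS[npc["temperament"]]
    return npc
```

The combinatorics give cheap variety; what this sketch cannot supply is the contextual judgment -- why this smith, in this village, should feel this way -- which is the depth problem restated.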
Some developers are working with the physical aspect of supplying meaningful depth to human-related content. Miguel Cepero, for example, is already doing some very interesting work with procedural generation of architecture. I expect commenters can provide plenty of examples of other efforts along these lines.
And the good folks at Storybricks are still working on ways to allow interactive emotional connections and behaviors to be dynamically generated among NPCs. This remains very much a work in progress, but it's one of the steps in the direction of games that can be both broad and deep.
For now, though, the problem remains: how do you encode the creativity and aesthetics of game developers as generative rules so that the people and things in a game can dynamically produce their own depth?
I may come back to this in a future blog post. For now... suggestions and ruminations are welcome.
Thursday, February 14, 2013
This is from May 9, 1999.
ALL MY CYBORG FRIENDS
(sung more or less to the beat of Hank Williams, Jr.'s "All My Rowdy Friends")
Well, I woke up this morning with a pain in my head
All the crew on Citadel Station are dead
The messages they left make it plain to see
That whatever got them is gonna come after me
I'm heading down the hallways, picking up clues
Trying to survive while I assimilate the news
That on Citadel Station all the exits are blocked
And it looks like I'm gonna get my system shocked, uh-oh
We got humanoid mutants in the Medical bay
And the cyborgs in Research are headed my way
There's a crazy computer yelling for all she's worth
She keeps calling me "insect" and threatening Earth
But cyberspace is full of security codes
And the Reactor's linked to Shodan's processing nodes
If I find that the door to the armory's locked
I'll either open it up or get my system shocked, uh-huh
-- bridge --
Maintenance, Storage, and the Flight Deck too
Are crawling with drones looking for you-know-who
They're in Exec, Engineering and Security
And what's in the Groves needs some DDT
Security robots are all over the place
And Edward Diego keeps getting in my face
So my Mark Three is loaded with the hammer cocked
If it gets in my way, it gets its system shocked, that's right
(up a semitone)
Now it's forty years later and I'm back on the scene
As a psionic, hardware hacking, space marine
It's irrational to peer through the looking-glass
Instead of taking names and kicking cyborg ass
It's time to get moving, to get on with the show
And those security cameras are the first things to go
I'm on the Rickenbacker and I'm ready to rock
I had to come back again... to get my system shocked
Saturday, January 12, 2013
Computer games, which can actually show you what created worlds look and sound like, are also fun to experience.
But roleplaying games on a computer? There's a problem. And it's a big one. And not many game developers seem much interested in trying to fix that problem, with serious consequences for gamers who want to play interactive stories where their choices matter.
So what's this problem I somehow think I can see that others can't?
THE CREATIVE FREEDOM OF TABLETOP ROLEPLAYING
Consider the following experience of playing a table-top roleplaying game (RPG).
You and your friend are roleplaying as a warrior and a thief, respectively. The character you're playing is a barbarian warrior who has been able to learn some magic spells. Your friend is playing as a thief who's remarkably good with a short bow.
Your two characters enter a dungeon that your other friend, the Game Master (GM), has prepared for you. Before you got together, the Game Master spent time mapping out all the rooms and secrets and filling them appropriately with enemies and puzzles and traps and wonderful loot. And the whole dungeon is just one segment of an intricately designed campaign that will involve you and your friends in a deep and engaging story.
As your barbarian and your friend's thief enter the first room of the dungeon, your Game Master notices that you (the barbarian) are walking a few steps ahead of your nimble thief friend. He rolls a die, and tells you that you've inadvertently stepped on a trap -- and the room is now filling with smoke.
Being a relatively clever warrior, you announce to the GM that you are lying on the floor to see if the air is clear there. The GM wasn't really expecting that, but he rolls a die and tells you that, yes, you can see the mail-shod feet of what appear to be five enemies striding toward you.
This removes the element of surprise from the bad guys, and you and your friend are able to prepare for battle.
Out of the smoke, enemies attack the two of you. Despite your preparations, the dice are against you tonight and you both take a lot of damage very quickly. Your barbarian character, and the thief character of your friend, are probably about to die. It won't be the end of the world; you'll just roll up new characters. But it would be less fun than keeping your current characters.
You look at your friend, he looks at you... and both your characters turn as one, bolt from the smoky room, run out of the dungeon, and flee at maximum speed to the village down the road.
THE GAME MASTER RESPONDS
The GM looks a little puzzled that you've completely ducked the adventure he had designed for you. But he shrugs and lets you keep running. You fall over yourselves getting into the tavern, where the GM tells you that the locals and traveling patrons stare at you for a moment, then shake their heads and mutter "some heroes" into their mugs.
After a bit of quick dice-rolling, your Game Master friend tells you that you stick around the village for a day or so, with nothing much happening. The next night, though, while you're back in the tavern trying to drink away the memory of your brush with non-existence, a well-dressed stranger introduces himself politely, mentions that he couldn't help but notice that you both must have come from the nearby ruins -- "We get a lot of that here," he chuckles -- and suggests that he might have some easier work for you... if you're interested. Nothing dangerous, just a bit of guard duty for some merchants.
You and your friend accept. It turns out to be a set-up; you were hired because word of your flight from the dungeon got around and the stranger needed a couple of patsies who would run from a fight. Then he'd steal the merchants' goods for himself and blame it on you. Your characters are no longer welcome here, and you'll need to choose whether to fight to restore your good names or seek adventure elsewhere.
What's important to notice here is that the Game Master was able to adapt quickly and effectively to your surprising choice to bail on the dungeon adventure that he had set up for you. Your choices mattered, and had consequences, but because of the GM's creativity your actions in that individual story could still be worked into the overall narrative.
In a later adventure, you would learn that the GM used your unexpected actions (and the consequences) to deepen the emotional theme of the overarching story. That would never have happened without your choice of actions, even if they weren't what had been plotted out to start with.
THE COMPUTER AS GAME MASTER
Now consider an example of what it's like to play a Computer RolePlaying Game (CRPG), and in particular one of the variants of CRPGs known as a Massively Multiplayer Online RolePlaying Game (MMORPG) that are played with lots of other people over the Internet.
You and your friend are roleplaying as a warrior and a thief, respectively. You're a level 38 Warrior, so you have the exact same warrior-specific skills defined by the developers for every person in the game who is playing a level 38 Warrior. Your friend is a level 33 Thief, with the same abilities as everyone else playing a level 33 Thief.
Your characters enter a dungeon that the developers have coded for the many thousands of people playing the game. You can't all play at the same time, of course; this dungeon like all the others is "instanced" so that you and your friend are the only players there. The dungeon is laid out as rooms filled with enemies and loot, and it's in a part of the map that's pre-defined as having a degree of challenge appropriate for mid-40-level characters.
You walk into the first room of the dungeon. You enter the range (hard-coded at 10 meters) at which one of the enemies standing there is programmed to detect a player character. All of the enemies in the room immediately attack you. You die. Then they attack your friend, and he dies.
You both respawn outside the entrance to the dungeon. You go back in. The dungeon is exactly the same, with the same enemies in the same room. You both die again.
You both respawn outside the entrance to the dungeon. You go back in, and get a little further in, then both die again. You repeat this several times, getting better at it with each run-through, until you finally reach the end of the dungeon and collect the nice loot.
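That hard-coded detection range boils down to a distance check run against every enemy, every tick -- roughly like this sketch (names and coordinates invented for illustration), which is why every run of the room plays out identically:

```python
import math

AGGRO_RADIUS = 10.0  # meters; fixed for every enemy, as in the scenario

def enemies_that_detect(player_pos, enemy_positions):
    """Return the enemies whose fixed detection radius the player has
    entered. Every enemy runs the same check with the same constant,
    so the encounter has exactly one possible opening move."""
    px, py = player_pos
    return [e for e in enemy_positions
            if math.hypot(e[0] - px, e[1] - py) <= AGGRO_RADIUS]

# Stepping one meter inside the radius wakes an enemy; the distant one waits:
assert enemies_that_detect((0, 0), [(9, 0), (15, 0)]) == [(9, 0)]
```

Contrast that with the tabletop GM, who decided on the fly what the ambushers could perceive through the smoke.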
MORE OF THE SAME
Several days later, after you've both improved your characters a few times so that they're level 40-ish, you go back to this dungeon.
Everything is still the same. The same enemies are standing in roughly the same locations; the same kind of loot drops when you kill them (though more quickly this time).
You go through this dungeon a few more times, grinding without much risk through the now-easy enemies to collect loot and earn experience points for leveling up your characters again.
The next night you go into a different prebuilt dungeon with somewhat more powerful enemies and slightly better loot. You grind through this dungeon, too. If there's a "story" to the elements of either dungeon, they aren't connected to each other in any way. Nor are the stories of either dungeon related thematically to the challenges that deliver the main story line, which you can only do in order. Also, you can't advance your character abilities past certain levels until you've successfully completed these developer-designed core challenges.
Other people you know played the same dungeons and challenges, and had the same kind of stories to tell about them. You like that this common experience gives you something to share with all those other players. But you also find yourself wishing you had some stories to tell that were uniquely your own.
You've made thousands of choices. But they all resulted in your character experiencing the same story as everyone else who played your kind of character. Despite playing an interactive game, where the computer is programmed to detect your actions and change the world of the game according to encoded rules, your choices ultimately didn't seem to matter. And you wonder why.
If you're playing a unique character in an interesting world -- which is the promise a roleplaying game makes -- shouldn't what your character does have more effect on your personal story as it helps define the story of that world?
THE CENTRAL PROBLEM DEFINED
The obvious difference between these two scenarios is that there is no human Game Master in roleplaying games that are presented completely by a computer.
I believe this is the central problem in all computer-based roleplaying games developed so far. Every feature in a computer RPG is an attempt to program a computer to do some of what a human Game Master (GM) can do. And they're all unsatisfying compared to tabletop RPGs because there is no computer program yet written that can do what a human being can do.
There is no computer-based roleplaying game that can perceive what each individual player chooses to do and why they make that choice, that can understand the kind of game that each player wants to play, and that can respond to those unique desires by using their choices to help tell a good story for each player.
The computer game has not been written that can adapt any or all of its existing content elements -- or create new features on the fly -- to satisfy the play interests of one or more human gamers while dynamically weaving their choices and consequences into a larger story unique to that gameworld.
People still enjoy roleplaying games, though. And it would be great to be able to play these games even when a human Game Master's not around. Computers are the obvious solution to this wish. Let them handle the generation and presentation of gameplay features and events, and especially let computers do the tedious calculation chores for table-driven outcomes.
So computers have been programmed to do those things. They can generate pseudorandom numbers, and render pictures of places and objects. They can process inputs according to simple rules and select outcomes from lists or calculations. They can even cause objects within the gameworld to perform specific behaviors in response to a small set of player-activated trigger events.
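Those "tedious calculation chores for table-driven outcomes" are exactly what computers do well. A minimal sketch, with an invented outcome table (the names and thresholds are illustrative, not from any actual game):

```python
import random

# Hypothetical outcome table: (minimum d20 roll, result), highest first.
ENCOUNTER_TABLE = [
    (18, "critical hit"),
    (11, "hit"),
    (2, "miss"),
    (1, "fumble"),
]

def roll_outcome(table, modifier=0):
    """Roll a d20, apply a modifier, and return the first matching entry."""
    roll = random.randint(1, 20) + modifier
    for threshold, result in table:
        if roll >= threshold:
            return result
    return table[-1][1]  # nothing matched: worst result

print(roll_outcome(ENCOUNTER_TABLE, modifier=2))
```

This is the level at which computers have served RPGs so far: mechanical resolution, not storytelling.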
As a result, there have been computer roleplaying games for years now. Some of them are a lot of fun. But none of them would claim to be as perceptive and understanding and creatively effective as a human Game Master, or as deeply satisfying as a game whose character-focused content is tailored on the fly to what a player enjoys by a responsive human GM.
Computers remain bad at portraying interesting characters who demonstrate human-like behaviors and communications. Despite the welcome recent interest in procedural generation, computers are still not good at dynamically generating new content that responds plausibly to a wide range of human player inputs. And they are almost completely hopeless at figuring out how to weave individual events into a logically and emotionally engaging high-level story.
The result is that for all the effort spent thus far, something wonderful has been lost in the conversion of roleplaying games from the tabletop to the computer. From the expressive richness of worlds that adapt to all your unique actions, we get play experiences that are simplified to one-size-fits-all movies with limited interactivity so that a computer program can deliver a story that has been predetermined by the developer.
MASS EFFECT AND A TASTE OF MEANINGFUL CHOICES
There is reason for hope that this can improve. Some efforts have been made to allow more player choice, and to support consequences for those choices.
Unfortunately, there's some evidence that if you choose to make this kind of roleplaying game, you have to be prepared to give up the need to tightly control the full story. If the choices of the players don't really determine how the story ends, players will recognize that the limited interactive freedom you gave them is just an illusion.
I think this is precisely what we saw in the response of many gamers to the end of the last game in the Mass Effect trilogy.
Players who were unhappy that the ending(s) of the final game didn't reflect the roleplaying choices of their character were both right and wrong. They were right that the player agency EA/BioWare implicitly promised -- by letting them develop the character of their Shepard through their gameplay choices -- wasn't delivered in the epic ending of the story. They were right to want and expect their choices to be respected, because EA/BioWare designed Mass Effect to solicit choices almost continuously throughout all three games.
But players were wrong to think that Mass Effect, which was designed from Day One as a world and a story created entirely by EA/BioWare, would ever permit an epic story to have a less-than-epic ending. What happens to the galaxy, and to Shepard, was built into the Mass Effect story from the very start because it was conceived as a developer-directed story.
The presence of choice-permitting gameplay mechanics (some of which were severely curbed in Mass Effect 2's RPG system, which should have been a heads-up) was never going to shift any of the core storytelling to the player. Mass Effect was EA/BioWare's story, and the conclusion to it had to be -- was always going to be -- dictated by the game's developer, not the player.
The relative absence of some form of adaptive/creative human storyteller in Mass Effect, either a real person or a computer simulacrum, meant that players were going to get a single predetermined core story designed for a mass audience. Perversely, it was EA/BioWare's attempt to allow a small amount of human storytelling adaptability in these games that caused the ruckus: having only the appearance of meaningful choice turned out to be insufficient.
The convention in computer-based roleplaying games has been that these games must tell an epic core story. This renders these games into books or movies that happen to offer some moment-to-moment interactivity. Mass Effect seemed to promise more; it broke with the games-as-interactive-novels convention by implying that the player's choices of how Shepard behaved in emotionally-weighted moments would change the story.
And ultimately the story of the world was affected. Multiple endings were keyed to Shepard's overall nature. But for most players, the story of their Shepard failed no matter what choices they had made within the Mass Effect CRPGs... and that was the story that mattered.
Players who care about story in a roleplaying game want their choices to matter for the character into which they've invested so many roleplaying hours. Without that personal responsiveness, computer-based roleplaying games will remain interactive novels. And even if they can't explain why, players will continue to feel puzzled and disappointed that some vital and unique benefit of roleplaying games is missing from the computerized version, and that some part of the promise of interactivity is being wasted.
I think they are exactly right to feel that way. The central problem of computer roleplaying games so far is that they don't have the ability to respond creatively to the vast range of possibilities of what characters ought to be able to do in a gameworld.
STEPS TOWARD SOLVING THE CENTRAL PROBLEM
So let's say you're willing to accept my theory that this lack of a responsive, interactive storyteller is the real reason today's computer-based roleplaying games are unsatisfying. If that's the problem, what's the answer?
I don't know exactly. (If I did, I'd be doing that.) But I do feel pretty confident that there are a couple of design approaches that could improve CRPGs. They might be separate, and surely will be at first, or they might work together, but there are two paths I can see that lead toward better CRPGs:
1. Catalog and evaluate what human GMs do, encode many more of those abilities as rules that computers can process, and design CRPGs that can apply those rules.
2. Accept that sentient human beings can be really good at providing creative and adaptive storyplay opportunities, and design CRPGs that give some players powerful tools to act as GMs for other players.
Let's consider that second possibility first: how can a CRPG be designed so that real people can act as GMs for other players?
ALTERNATIVE 1: ENABLE HUMAN GAME MASTERS
Obviously this isn't really optimal for a single-player CRPG.
Players may create and share content as objects, as Spore demonstrated. (And why hasn't any game developer followed up on the massive success of this aspect of Spore, which saw millions of creatures created before the game itself was even released?) But a single-player game has no one other than the player to actively guide the dynamic generation of immediate action or strategic story progression.
Integrating a person as a kind of GM is a more natural possibility for games that are designed from the ground up to be multiplayer games. Instead of spending so much time and effort trying (unsuccessfully) to replicate various human abilities to create and organize events, why not design the game to support human GMs doing what they do so well?
A hint that this is possible can already be seen in the Artemis Starship Bridge Simulator.
This is a game that allows several players, using personal computers linked in a local network, to play as the different roles (such as Chief Engineer, Weapons Officer, Communications Officer) aboard a Star Trek-like starship. The player in the Captain's role doesn't drive a computer, but instead gives direction to the other players, suggesting where to go and how to respond to the various pre-written encounters.
Currently there's little opportunity for the Captain in an Artemis simulation to generate playable content on the fly as a Game Master does in a tabletop game. Perhaps future versions will include some story management features that the Captain can select as an encounter progresses. Even now, though, this is still a useful step toward allowing an imaginative individual to dynamically guide gameplay for other players.
GAME MASTERS IN MMORPGS
There's no reason why something like this couldn't also be done in an online roleplaying game like a MMORPG. Several online games already offer players the ability to script missions for other players as pre-constructed stories. Why not extend this with features that allow the mission creator to select and insert choices and consequences while a mission is being performed?
In a game like that, something closer to the example given at the top of this article might be possible. The story creator, faced with an unexpected choice by the players of his story, could pop up a screen allowing pregenerated NPCs to be selected. A slightly randomized Generic Person could be dropped into the bar, and the storyteller could either write text or actually speak to the players in character over voice chat.
While doing this, the storyteller could bring up a drop-down list of planned alternative adventures, and activate one that seemed like something the players would find more fun. He could then guide the players to this new play opportunity. To the players, all this would appear narratively seamless -- they would not know (nor would they need to know unless they wanted to) that this flow of events wasn't exactly what the storyteller had in mind for them from the start... just as a good GM in a tabletop RPG can do.
By designing a MMORPG to have real-time features for paging story elements (including NPCs, objects, and text/voice information) into and out of the local gameworld, players would enjoy an entertainment experience that is much more personalized to their interests.
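The kind of real-time paging described above could be sketched as a simple session API. This is purely a hypothetical illustration; every name here is invented, not drawn from any existing MMORPG toolset:

```python
# Hypothetical sketch of a live-GM toolkit: the storyteller pages story
# elements (NPCs, objects, text/voice beats) into and out of a running
# session. All names are invented for illustration.
class LiveSession:
    def __init__(self):
        self.active_elements = []

    def page_in(self, element):
        """Insert an NPC, object, or story beat into the live gameworld."""
        self.active_elements.append(element)
        return f"{element} is now live in the gameworld"

    def page_out(self, element):
        """Remove an element the players have moved past."""
        self.active_elements.remove(element)

session = LiveSession()
print(session.page_in("Generic Bartender (slightly randomized)"))
session.page_in("alternate adventure: the smuggler's job")
print(session.active_elements)
```

The design work, of course, is in everything this sketch omits: making the insertion seamless to players, and giving the storyteller good selection tools under time pressure.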
Naturally there are practical questions about such a design. There would have to be enough people willing to be "virtual GMs" for all the players who just want to be entertained. The numeric progression mechanics would have to be controlled so that standings couldn't be manipulated by storytellers building "Monty Haul" missions for their friends or customers. The experience with current MMORPGs that allow static mission building suggests that these concerns can be overcome... but someone will have to try extending this to dynamic storytelling for us to find out whether it scales.
ALTERNATIVE 2: BUILDING BETTER NPCS
But maybe it's not completely necessary to require a live human Game Master dreaming up content on the fly for a computer-based RPG to enjoy better stories. What about that first option -- what if computer code could be written that does a much better job of allowing stories to modify themselves to adapt in an enjoyable way to the unpredictable choices of one or many players?
The Storybricks team has talked about this possibility in a developer diary at Massively.com. [Note: I'm part of the Storybricks team and helped edit the source of this article.] To summarize it: why not design NPCs to have emotional states, and allow them access to actions that can detect and alter the emotional states of other characters?
The notion here is that "story" is about what people do for emotional reasons. By building emotion-altering mechanics that the characters of a game can use, the game itself can invent new stories for players to experience. As NPCs interact with player characters and with each other, player actions begin to have ripple effects -- friendships and antagonisms, alliances and enmities, come into existence and then change because of what players choose to do.
This effect is different than a game with a GM. It's not under one person's direct control. No one knows where the story will go. That's both scary and liberating!
This approach is also limited strategically for now. Although the current technology (like that of Storybricks) is getting better at the low-level story beats, weaving those individual moments into a thematically coherent saga still requires a human storyteller. Dynamically changing a story to achieve a unified artistic vision from the sum of many individual parts still requires the human touch... but there's no reason to think we can't start to come closer to that capability in computer RPGs as well.
CRPGS BEYOND THE CENTRAL PROBLEM
Computer-based roleplaying games have reached a capability plateau. The graphics are excellent. The worlds are detailed. The core mechanics have been refined and polished. Now it's time to allow all that in-game activity to mean something, to let player choices really matter by making the story more of a collaboration between the developer and each player and the gameworld itself.
Giving creative players tools to adapt stories on the fly for other players is one way to get there. Building emotional intelligence into NPCs and letting them generate satisfying story opportunities is another way. There may be (probably are) other ways. What's important is to start.
It won't always go smoothly. The first steps are likely to be hard, as Mass Effect demonstrated. But it's the right direction to go. Computer-based roleplaying games need more emotional plausibility.
Computer RPGs are good enough now at letting characters fight each other. For this genre to survive and thrive, though, it needs to mature by letting gamers, acting through their characters, express more of the kinds of things we can do that make us human.
When game designers finally acknowledge and start to directly solve the central problem of computer-based roleplaying games -- the absence of some form of creative, adaptive mind interactively creating stories with individual players -- these games will begin to express their real potential as a distinctive and worthwhile entertainment experience.
Wednesday, December 5, 2012
Some games are about exploration. In these games, the pleasure of discovering and understanding is the central form of play. The mechanics of the game all encourage and reward exploratory play.
Other games are about something else, such as excitement or accumulating stuff, and just happen to have some exploration in them. Most games fall into this category.
I bring this up because I'm reading more comments these days calling some game that doesn't emphasize exploration an "exploration game." It's not. Just having some exploration content in it does not make it an exploration game.
SOMETIMES TERMINOLOGY MATTERS
It's good that some developers are willing to wedge a little exploration content into their games. I appreciate it when someone offers content that respects the Rational/Explorer/Simulationist playstyle.
It's also understandable (if a bit sad) that, for the gamers who are starved for exploration-specific content, a game with even a little exploration in it can be considered an "exploration game."
But it isn't, not really. To say that a thing is a member of some distinctive group when it functionally is not is to destroy the meaning of the words used to name that group. That makes it harder to communicate usefully.
In the case of a game, the descriptive term "exploration game" loses its value for both marketing and critical discussion when it's applied to games that aren't actually about exploration play. It's misleading to gamers.
More importantly from the design perspective of this blog, saying that a game is an exploration game just because it can occasionally sort of be played that way muddles the mental model that new game designers form of what an "exploration game" contains. If you grow up thinking that a game about exciting non-stop action and loot collecting counts as an exploration game just because there's some optional terrain to view or variation in loot drops, what are the odds that you will really understand what Explorer gamers actually want?
INTERRUPTIONS INHIBIT EXPLORATION
In particular, I've come to think that action-focused games that have as a primary play mechanic some kind of ticking clock or other (often mobile) threat that disrupts planned play behaviors are almost certainly not exploration games, and shouldn't be called that.
In short: mechanics that interrupt exploration actively oppose exploration play.
If you're a game designer, it doesn't matter how many goodies you hide, or how big the world is, or if you provide the occasional alternate track off the Direct Path To Fastest Victory. If your game persistently interrupts the perception and thinking process of the player with some kind of threat, then it's not an exploration game. That game is explicitly telling players that exploration is not your highest priority for them.
Interrupt threats are great for action games. Unexpected survival challenges generate excitement and the immediate requirement to do things. Doom 3, for example, was all about interrupt threats. Something similar is true of Minecraft in Survival mode -- getting jumped by a giant spider in the dark tends to distract one from sightseeing.
That's why I play Minecraft in Peaceful mode. Even without the ability to save at will, Peaceful mode at least allows exploration of the generated gameworld without having to worry constantly about being interrupted by a mobile survival threat.
On the other hand, Peaceful mode is not an option in Doom 3. Doom 3 was all about interrupt threats. Doom 3 was not an exploration game.
THE GAME DESIGN FUNCTION OF INTERRUPTS
Is the primary design intention of your game to deliver an action-filled play experience? If so, then by all means implement some speed-related interruption threats: a limited amount of time in which to observe and plan, enemies that come looking for you after a certain time, or ever-decreasing amounts of time in which to plan your next move.
Those are great for ramping up tension. You might even choose to deliberately prevent players from quicksaving all of their game state, or give them only one save slot. (Both of which are precisely what the PC version of Far Cry 3 does.)
All these are valid design choices for making a high-intensity action game...
...and every one of them directly opposes exploratory play. Survival interruptions significantly increase the difficulty of obtaining knowledge of the gameworld. Restrictions on saving game state significantly increase the risk of losing some of that knowledge. Interruptive mechanics and save restrictions penalize exploration. They send the unmistakable message that exploring the gameworld is not what you're supposed to be doing. Anyone who tries to get that kind of enjoyment out of such a game is clearly not playing it the right way.
It's only when exploration is designed to matter most that the description "exploration game" is appropriate.
Survival interruptions and save restrictions very clearly tell the player that the point of a game is not exploration, that things to discover are only there to add some "content" or to provide a brief contrast to the action moments. If exploration actually mattered as primary gameplay, then the designer wouldn't emphasize gameplay elements that constantly interfere with perception and pattern recognition or that make it difficult to accumulate and organize knowledge.
ACTION AND EXPLORATION? YES!
An obvious question at this point is whether a game that has lots of combat can ever be considered a true exploration game. My feeling is that combat, by its nature, tends to be interruptive, and so almost always works against exploration. But it's entirely possible to have an exploration game with some combat in it.
System Shock 2 is a good example of this. Unlike many games with combat, the combat system of System Shock 2 was systemically deep enough to be something interesting to explore in its own right. And it's telling that one of the few complaints about SS2 was the "respawning enemies." As in Doom 3, this worked against being able to concentrate on perceiving and understanding the patterns of the gameworld. When those interruptions didn't occur, System Shock 2 was extremely rewarding to explore, not in spite of having a strong combat element but partly because its combat element was a richly designed system.
A true exploration game asks the player to observe, and think, and understand. Not in an "oh god, oh god, we're all gonna die if you don't get it right instantly!" kind of way, but in a contemplative, creative, and strategic way. Success in a true exploration game, and the rewards for that success, go not to the player with the most well-developed fast-twitch muscles, nor to the most tactically adaptable player, nor even to the player who can bring out the best in other people. Success in a good exploration game -- most likely designed by someone who understands and values the joy of discovering interesting things -- goes to the player who sees the patterns in complex systems, and who can conceive long-range plans for applying available forces to those patterns to create new, more desirable configurations.
Players just don't get to do that when you're putting them in life-termination scenarios every few seconds. Games that are about interruptive excitement are not about exploration.
THE TRUE PLEASURE OF EXPLORATION GAMES
Games that are about exploration have mechanics and content that promote the different but equally valid kind of satisfaction that comes from realizing the governing pattern in a complex system, or from devising a new system that is both functional and elegant.
Not everybody likes that kind of gameplay. Great! There's plenty of room for different kinds of games that offer different kinds of fun. And there are plenty of games available that emphasize action and token-collection.
But being designed to offer a primary gameplay experience of something other than adrenaline-pumping excitement does not make a game flawed or bad. Not delivering eyeball kicks every ten seconds does not make a game buggy or broken.
That's a game that could, if its mechanics emphasize the pleasure of discovery and understanding, be a game that deserves to be called an "exploration game."
Saturday, November 24, 2012
There's a subset of developers who seem to think so. They like the idea that players should be able to express behaviors and create objects in a gameworld that they (the developers) never thought of.
But these appear to be a distinct minority. Most games are deliberately designed and developed to prevent any truly creative play. In particular, the number of in-game effects that characters and objects can demonstrate is cut back as much as possible.
Why take such pains? Why are most developers so determined to strictly limit player verbs or possible system interactions if player creativity is such a great thing?
There are several not entirely bad reasons why. Unfortunately for the game industry, I believe the combination of these justifications winds up producing a large majority of games that are so tightly controlled they nearly play themselves.
One problem with allowing player creativity is rude content.
If you let players do things that modify the gameworld, particularly if they can interact with other players in any way, they are guaranteed to spell out naughty words, erect enormous genitalia, and build penisauruses. (Google "Sporn" for NSFW examples of how gamers immediately used Spore's creativity tools.)
Developers can accept this if they're OK with a mature rating for their game, but creativity tools make it tough to sell a multiplayer game that's kid-safe.
ANYTHING UNPLANNED IS A BUG
Another problem is that emergent behaviors can look to some gamers like bugs.
That doesn't mean they are actual bugs, defined for games as behavior that opposes the intended play experience. Just because it was unintended doesn't mean it opposes the desired play experience.
The developers of Dishonored, for example, were surprised to see their playtesters possess a target while plummeting from a great height, thus avoiding deceleration trauma. It wasn't intended -- it emerged from the interaction of systems -- but it made sense within the play experience Arkane had in mind. So it wasn't a bug, it was a feature... and it got to stay in the game. That appears to be a rare exception to standard practice, though.
NO CRAFT IN CRAFTING
Crafting in MMORPGs is not creative. Crafting -- making objects -- in MMORPGs has nothing to do with "craft" or being "crafty"; it's about mass-producing widgets to win economic competition play. That's a perfectly valid kind of play. But it isn't creative.
An argument might be made that some creativity is needed to sell a lot of stuff. But that's not related to crafting as a process of imagining new kinds of objects that meet specific purposes and elegantly bringing them into existence within a gameworld. That's "craftsmanship," and it's what a crafting system worthy of the name would be... but that's not what crafting in MMORPGs ever actually is.
A truly creative crafting system would allow the internal economy of a gameworld to grow through the invention of new IP. Wouldn't that be an interesting way to counter mudflation?
To be fair, a creative crafting system would probably far outshine the rest of most MMORPGs. Part of the crafting system in the late Star Wars Galaxies (SWG) MMORPG was highly regarded, but in an odd way it was so much fun that it didn't ever really fit into a Star Wars game.
So what might a MMORPG (i.e., not Second Life) with a truly creativity-encouraging crafting system look like? In what kind of gameworld would the ability for players to imagine and implement entirely new kinds of things be appropriate?
CLASSES VERSUS SKILLS
Yet another reason to deprecate player creativity is game balance. Especially in multiplayer games, developers not unreasonably want to try to keep the playing field level for players using (marginally) different playstyles.
A common way this gets expressed is by organizing character skills in level-controlled classes. It's more interesting to key character abilities to skills, and let players pick and choose the skills they want. But this (developers have decided) allows the emergence of character ability combinations that may be either unexpectedly "overpowered" or too "weak" to compete effectively with players of similar skill levels.
This perspective that "interacting systems allow emergent effects that interfere with the intended play experience and therefore must be minimized" explains (as one example) why Sony Online Entertainment completely deleted the extensive individual skills system of the original Star Wars Galaxies and replaced it with a few static classes with specific abilities at developer-determined levels, just like pretty much every other MMORPG out there.
The New Gameplay Experience was well-regarded by some of SWG's new players. But many long-time players felt that the original SWG's unique skills-based ability model was much more creatively satisfying. When it was changed so radically to a class-based model, eliminating their ability to express themselves in a detailed way through their character's abilities, they left the game.
EVE Online also allows skill selection, but in practice most people wind up with the same skills. So is it possible any longer to offer a major MMORPG that encodes player abilities in mix-and-match skills, rather than a small set of classes in which my Level 80 Rogue is functionally identical to your Level 80 Rogue?
CODING TO THE TEST CASES
One more reason why emergence gets locked down in games starts, ironically, with sensibly trying to use more mature software development practices.
Test case driven software development is the process of documenting what your code is supposed to do through well-defined requirements, then writing test cases that describe how to find out whether the software you actually write meets those requirements.
That's often a Good Thing. It helps to ensure that what you deliver will be what your customers are expecting. But there is a dark side to this process, as there can be for any process: if your organization starts getting top-heavy, with a lot of layers between the people running things and those doing the actual game development, the process eventually tends to become the deliverable. Reality becomes whatever the process says it is. Process is easier to measure than the meaning of some development action: "How many lines of code did you write today?"
The practical result of enforcing the "everything must have a test case" process is that every feature must have a test case. That's actually pretty handy for testing to a well-defined set of expectations.
Unfortunately, the all-too-common corollary is: if we didn't write a test case for it, you're not allowed to have that feature. At that point, the process has become your deliverable, and your game is very unlikely to tolerate any creativity from its players. It might be a good game by some standard. But it probably won't be memorable.
Still, a process for reliably catching real bugs is valuable. So how can the desire to allow some creativity and the need to deliver measurably high quality coexist?
EPIC STORY MUST BE TOLD AS-IS
Finally, there is the problem of the Epic Story.
Emergent gameplay invites exploratory creativity. But broadly emergent gameplay interferes with a carefully-crafted narrative. The more epic and detailed the story -- which translates to more development money spent on that content -- the less freedom you can permit players to go do their own wacky things, because then they might not see that expensive content. The Witcher 2 fought this somewhat, but it's emphatically the exception.
Is there a middle ground between developer story and player freedom? Or is there a way to design a game so that both of these can be expressed strongly?
To sum up: from the perspective of many game developers, especially in the AAA realm, "emergent" automatically equals "bug" in all cases. A mindset that only the developers know how the game is meant to be played, rather than a respect for what players themselves enjoy doing, is leading many developers to design against creativity. The idea of actually increasing the number of systems or permitted system interactions seems to be something that just will not be permitted.
The result is that player creativity in these games is so constrained as to be nonexistent. You're just mashing buttons until you solve each challenge, in proper order, in the one way the developers intended.
Is there any sign that this might be changing, perhaps as the success of some indie games demonstrates that there is a real desire for games that encourage player creativity?
Monday, November 12, 2012
For roleplaying games such as the recently (and very successfully) Kickstarted Project Eternity by Obsidian Entertainment, the conversation can be informed and thoughtful. Not all of the ideas suggested by enthusiastic armchair designers will be right for a particular game, but the level of discussion is frequently very high.
However, in the years I've observed such forums, a conversational glitch inevitably appears. It doesn't take long before even very knowledgeable commenters begin to argue in favor of gameplay systems that replicate real-world physical effects.
THE DESERT OF THE REAL
They might be asking for armor that has weight and thus restricts physically weak characters from equipping it at all. Or maybe it's for weapons that require specialized training, so that a character must have obtained a particular skill to use certain weapons. Sometimes there's a request for a detailed list of damage types, or for complex formulas for calculating damage amounts, or that environmental conditions like rain or snow should reduce movement rates or make it harder to hit opponents.
What all these and similar design ideas share (other than an enthusiasm for functionally rich environments) is an unspoken assumption that the RPG in question needs more realism.
Later on I'll go into where I think this assumption comes from. For now, I'd like to consider why I think there's a better approach when trying to contribute to a game's design -- instead of realism, the better metric is plausibility.
The difference between realism and plausibility is a little subtle, but it's not just semantic. Realism is about selecting certain physical aspects of our real world and simulating them within the constructed reality of the game world; plausibility is about designing systems that specifically feel appropriate to that particular game world. Plausibility is better than realism in designing a game with a created world -- what Tolkien called a "secondary reality" -- because realism crosses the boundary of the magic circle separating the real world from the logic of the constructed world while plausible features are 100% contained within the lore of the created world.
To put it another way, plausibility is a better goal than realism because designing a game-complete set of internally consistent systems delivers more fun than achieving limited consistency with real-world qualities and processes. Making this distinction is crucial when it comes to designing actual game systems. Every plausible feature makes the invented world better; the same isn't true of all realistic features imported into the gameworld. Being realistic doesn't necessarily improve the world of the game.
Despite this, a design idea based on realism often sounds reasonable at first. We're accustomed to objects having physical properties like size and weight, for example, as well as dynamic properties such as destructibility and combustibility. So when objects are to be implemented in a game world, it's natural to assume that those objects should be implemented to express those physical characteristics.
But there are practical and creative reasons not to make that assumption.
WHY NOT REALISM?
Creating simulations of physical systems -- which is the goal of realism -- requires money and time to get those systems substantially right relative to how the real world works. Not only must the specific system be researched, designed and tested, but all the combinations of simulated systems that interface with each other must be tested -- all the emergent behaviors have to seem realistic, too.
Trying to meet a standard of operation substantially similar to something that exists outside the imaginary world of the game is just harder than creating a system that only needs to be consistent with other internal game systems.
Maybe worst of all, there are so many real-world physical processes that it's impossible to mimic even a fraction of them in a game. Something will always be left out. And the more you've worked to faithfully model many processes, the more obvious it will be that some process is "missing." This will therefore be the very first thing that any reviewer or player notices. "This game claims to be realistic, but I was able to mix oil and water. Broken! Unplayable! False advertising!"
This isn't to say that all simulation of physical processes is hard/expensive, or that there can never be a good justification for including certain processes (gravity, for example). Depending on your game, it might be justifiable to license a physics simulation library such as Havok.
But the value of implementing any feature always has to be assessed by comparing the likely cost to the prospective benefits. For many games (especially those being made on a tight budget), realism should almost always be secondary to plausibility because realism costs more without necessarily delivering more fun for more players.
SOME GAMES ARE MORE REAL THAN OTHERS
Knowing when to apply either realism or plausibility as the design standard depends on understanding what kind of game you're making. If it's core to your gameplay, such as throwing objects in a 3D space, then the value of simulating some realistic effects like gravity increases because the benefits of those effects are high for that particular game. Otherwise, you're better off implementing only a plausible solution or excluding that effect entirely.
Let's say you're making a 3D tennis game. The core of tennis is how the ball behaves, and the fun comes from how players are able to affect that behavior. So it makes sense for your game design to model, in a reasonably realistic way, the motion of a ball in an Earth-normal gravity field (a parabola), as well as how the angle and speed of a racquet's elastic collision with a ball alters the ball's movement. If it's meant to be a serious sports simulation, you might also choose to model some carefully selected material properties (clay court versus grass) and weather conditions (humidity, rain).
But you probably wouldn't want to try to simulate things like the Earth's curvature, or solar radiation, or spectator psychology. They don't have enough impact on the core fun to justify the cost to simulate them. And for a simple social console game, ball movement and racquet impact are probably the only things that need limited realism. The better standard for everything else that has to be implemented as the world of the game is plausibility. If it doesn't help the overall game feel logically and emotionally real in and of itself, then it doesn't meet the standard and probably should not be implemented.
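To make the "just enough realism" idea concrete, here is a minimal Python sketch of the two things a tennis game actually needs: parabolic flight under gravity and a crude elastic racquet impact. The gravity constant is real; the reflection-based racquet model and the restitution value are simplifying assumptions invented for this illustration, not a claim about how any shipping tennis game works.

```python
# A sketch of "just enough" realism for a tennis game: simulate only
# the ball's parabolic flight and a simple elastic racquet impact.
# No drag, no spin, no court materials -- those are plausibility calls.

import math

G = 9.81  # m/s^2, Earth-normal gravity

def ball_position(x0, y0, vx, vy, t):
    """Position of the ball at time t under gravity alone (no drag)."""
    return (x0 + vx * t, y0 + vy * t - 0.5 * G * t * t)

def racquet_hit(vx, vy, racquet_angle_deg, racquet_speed, restitution=0.8):
    """Crude elastic-collision model: reflect the ball's velocity about
    the racquet face and add the racquet's own push along its normal."""
    angle = math.radians(racquet_angle_deg)
    nx, ny = math.cos(angle), math.sin(angle)  # racquet face normal
    dot = vx * nx + vy * ny
    rvx = (vx - 2 * dot * nx) * restitution + racquet_speed * nx
    rvy = (vy - 2 * dot * ny) * restitution + racquet_speed * ny
    return rvx, rvy

# A ball struck at 20 m/s horizontally from 1 m up, a quarter-second later:
x, y = ball_position(0.0, 1.0, 20.0, 0.0, 0.25)
```

Everything else in the tennis world -- crowd noise, lighting, scoreboard -- can be handled by far cheaper, merely plausible means.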
A ROLEPLAYING GAME NEEDS ITS OWN REALITY
This is even more true for a character-based game, which requires a world in which those characters have a history and relationships and actions they can take with consequences that make sense. In designing a 3D computer role-playing game, the urge to add realistic qualities and processes to the world of that game can be very hard to resist.
Action-oriented Achiever gamers are usually OK with abstracted systems; the fun for them comes in winning through following the rules, whatever those rules might be. But for some gamers -- I'm looking at you, Idealist/Narrativists and my fellow Rational/Explorers -- the emotional meanings and logical patterns of those rules matter a great deal.
Characters who behave like real people, and dynamic world-systems that behave like real-world systems, make the game more fun for us. We find pleasure in treating the gameworld like it's a real place inhabited by real people, not just a collection of arbitrary rules to be beaten. For us, games (and 3D worlds with NPCs in particular) are more fun when they offer believable Thinking and Feeling content in addition to Doing and Having content.
Providing that kind of content for Narrativist gamers is non-trivial. "Realistic" NPC AI is hard because, as humans, we know intimately what sensible behavior looks like. So while there are often calls for NPCs to seem "smarter," meaning that they're more emotionally or logically realistic as people, it's tough for game developers to sell to publishers the value of the time (i.e., money) that would be required to add realistic AI as another feature. Developers of RPGs usually have a lot of systems to design and code and test. So working on AI systems that give NPCs the appearance of a realistic level of behavioral depth, in addition to all the other features, is very hard to justify. (The Storybricks technology is intended to help with building more plausible NPCs. But that's the whole point of that technology.)
Another argument against realism in a gameworld is complexity. Most developers prefer to build simple, highly constrained, testable cause/effect functions rather than the complex, interacting systems that can produce the surprising emergent behaviors found in the real world. Explorers find that hard to accept. Explorers aren't just mappers of physical terrain; they are discoverers of dynamic systems -- they love studying and tinkering with moving parts, all interacting as part of a logically coherent whole.
Explorers also tend to know a little about a lot of such systems in the real world, from geologic weathering to macroeconomics to the history of arms and armor and beyond. So it's natural for them to want to apply that knowledge of real systems to game systems. Since you're building a world anyway (they reason), you might as well just add this one little dynamic system that will make the game feel totally right.
Now multiply that by a hundred, or a thousand. And then make all those systems play nicely together, and cohere to support the core vision for the intended play experience. "Hard to do well" is an understatement.
A CREATED WORLD DOESN'T NEED OUR WORLD'S REALISM
That's the practical reason why, for most systems in a worldy game, plausibility will usually be the better standard. If the game is meant to emphasize melee combat, for example, then having specific damage types caused by particular weapons and mitigated by certain armors might sound good. Moderate simulation of damage delivery and effects might be justifiable. Those action features will, if they're paced well and reward active play, satisfy most gamers.
But a role-playing game must emphasize character relationships in an invented society, where personal interactions with characters and the lore of the world are core gameplay and combat is just one form of "interaction" among several. For that kind of game, the better choice is probably to limit the design of that game's combat system to what feels plausible -- get hit, lose some health -- and to abstract away the details.
Plausible systems are especially desirable in roleplaying games because they meet the needs of Explorers and Narrativists. Intellectually and emotionally plausible elements of the game feel right. They satisfy our expectations for how things and people should appear to behave in that created world.
A plausible combat system deals and mitigates damage; it can but doesn't need to distinguish between damage types. A plausible "weather" system has day/night cycles and maybe localized rain; it doesn't require the modeling of cold fronts and ocean temperatures and terrain elevation. A plausible economy has faucets and drains, and prices generally determined by supply and demand; it doesn't have to be a closed system. Plausibility ensures that every feature fits the world of the game without doing more than is necessary.
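As an illustration of how little code a plausible (rather than realistic) combat system can require, here is a hedged Python sketch of the "get hit, lose some health" abstraction: one abstract damage number, one abstract mitigation number, no damage types or hit locations. All class names and stat values are invented for the example.

```python
# A "plausible, not realistic" combat exchange: damage minus flat
# mitigation, clamped at zero. No damage types, no hit locations --
# the system only needs to be consistent with the game's other rules.

from dataclasses import dataclass

@dataclass
class Character:
    name: str
    health: int
    attack: int  # abstract damage dealt per hit
    armor: int   # abstract flat mitigation

def resolve_hit(attacker: Character, defender: Character) -> int:
    """Get hit, lose some health; returns the damage actually dealt."""
    damage = max(attacker.attack - defender.armor, 0)
    defender.health -= damage
    return damage

hero = Character("Hero", health=30, attack=7, armor=2)
orc = Character("Orc", health=20, attack=5, armor=1)

resolve_hit(hero, orc)  # orc takes 7 - 1 = 6 damage
resolve_hit(orc, hero)  # hero takes 5 - 2 = 3 damage
```

A realistic version of the same exchange would need damage types, armor coverage, wound states, and all their interactions tested against one another; this version needs one function.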
This is the best of both worlds. Making plausibility the standard gives players lots of different kinds of things to do, and it keeps those systems simple enough to be worth building and feasible for mere mortals to implement.
So, to gamers enthusiastic about adding realistic interactive capabilities to a brand-new gameworld, I say: whoa, there! Before you ask the designers to add this one additional little thing that would help make the game more "realistic," stop and think about that idea from a game designer's perspective.
Developers can't add every great idea, even if they're still in the design phase. But the chances of seeing some feature suggestion implemented improve if you can explain how it makes the unique world of that game feel more plausible at minimal development cost.
Tuesday, October 30, 2012
"[T]housands at his bidding speed,
And post o'er land and ocean without rest;
They also serve who only stand and wait."
-- John Milton, "On His Blindness"
There's a belief I've seen expressed by a number of developers that can be paraphrased as: "All players want to be The Hero. Every gamer expects to be the all-powerful savior, prime mover of all action, star of the show, center of all attention. Therefore every game that personifies the player as a character in a world must be designed to allow the player to be the hero. And that goes for multiplayer games, too."
Not to pick on Emily Short, who is a respected creator of Interactive Fiction games, but an example of this perspective can be found in a talk she gave at GDC Online in 2012, summarized on Gamasutra by Frank Cifaldi as Making Everyone Feel Like a Star in a Multiplayer Game: "Even in a multiplayer game, every player has to feel as if they are playing out their own personal, unique story. They cannot feel as if they are in a supporting role, or their investment in the narrative will fall apart."
It's a nice theory. Is it true?
I'm Ready For My Close-Up
For some gamers, it is true. They do want to be the hero, and they do expect any and every personification game (where you play as a character) to cater to that desire. Character-based games, they feel, are essentially power fantasies where the world is supposed to revolve around them. Although they are unlikely to say it this way, these gamers expect every feature in a game to be about letting them express dominance, either "physical" (as through combat in a three-dimensional game world) or emotional (as in much interactive fiction).
Even if a desire to follow some version of Campbell's "Hero's Journey" isn't baked into people, gamers today have grown up with games in which you are the hero who saves the world. So many games follow this pattern that it would be surprising if many gamers have not come to expect it as a natural, even required, element of all personification games.
The problem is that this expectation of epic centrality is demonstrably not true for all gamers. Despite the pattern, not every gamer wants to be the hero. There's evidence of a meaningful minority of gamers who are happiest when a game gives them ways to help other players succeed. These gamers truly do not want to be the star -- they prefer a supporting role.
The Cleric as Un-Hero
Consider the four archetypal classes: warrior, wizard, rogue, cleric. Warriors deal melee damage and are pack mules; wizards cast ranged spells and know lots of lore; rogues backstab, detect traps and steal shiny things. All of these are heroic in their own way -- their gameplay content is about acting for themselves. The actions the game is designed to allow them to take are all focused on their own self-enhancement.
But clerics, while they can sometimes do divinely-inspired damage, are mostly about healing the wounds or diseases of other characters, along with protecting ("buffing") other characters. That's been the traditional functional definition of the cleric role in roleplaying games dating back to Dungeons & Dragons (and probably before that). A modern addition is some form of "crowd-control" feature, but the function is the same: providing support to the actively heroic characters.
That style of play is not about indulging power fantasies. The game actions that a cleric is built to perform aren't centered on the person playing the character, but on other players. So why do developers even bother implementing a cleric role if character-based games are supposed to be all about letting the player feel like a hero?
The World Needs a Healer
One explanation is that the healer role is included simply as a matter of utility. If a game is pretty much all about killing (as most computer games are), then to make it interesting there needs to be some risk of being injured yourself. If there's no way to heal your own injuries, then you need someone else to do the healing. And in a typical fantasy setting, that character is the cleric. In a modern setting, this role is often called a "medic," as in Valve's Team Fortress 2 multiplayer game, but it's pretty much the same other-focused functionality.
But of course it's not a hard design requirement in any constructed computer game to have some other person heal your character. It's simple enough to provide the healing function through potions or stimpaks that magically undo character damage. And yet game developers keep implementing character class roles whose abilities are focused on helping other characters.
Perhaps developers do this because enough gamers like playing clerics to justify moving those abilities to a separate class role. But that raises the question: if a roleplaying game offers a role whose primary function is to support other players, why are there so many gamers who are happy to fill that role? If everyone really expects to be the star, who are all these people looking to be part of the supporting cast?
The Craft of Helping
In fact, clerics aren't the only source of supportive abilities in roleplaying games, particularly in the massively multiplayer online variety (MMORPGs). A popular alternative activity in these games is "crafting," which involves creating objects that are usable and useful inside the game world.
Although there is pleasure in the crafting of new things as such (a reason for crafting that, from my Explorer perspective, is almost never emphasized), most crafting in MMORPGs is there to provide useful objects for other players. Often these are specific to combat gameplay -- weapons or ammunition -- but crafting can also be defined as a source of tools such as fishing poles or resource detectors.
Either way, in a game where usable objects can be looted from defeated enemies, implementing crafting gameplay ensures that combat players don't have to hope and wait for certain items to drop as loot. Crafting also allows some players to serve a useful role in a game without forcing them to participate in direct combat gameplay. This allows more people to play the game (and pay for the privilege) than would have been the case in a combat-only game.
One of the best-known descriptions of this playstyle preference is the article posted to Stratics in 2001 by Lloyd Sommerer (as "Sie Ming"): I Want to Bake Bread. In this plea for game developer understanding, Lloyd ably points out the kinds of supportive behaviors that some gamers enjoy providing, and wonders why developers don't seem interested in the benefits a game can obtain from including features that attract gamers like these.
It's still a good question.
Supportive Play is Good Gameplay
To sum up: some people enjoy helping other people, but few games reward that playstyle. That's a missed opportunity, both in terms of revenue and of including people in your gaming community who are genuinely helpful. If they don't play the hero, that's OK, and smart developers will create gameplay for them instead of trying to force them into the hero's boots (which won't work).
The people who come to a computer game wanting to play a healer, or a maker of things, are there specifically because they want to play a character-filled game that does not force them into the spotlight. Being able to play a supporting role satisfies a deeply held need of some people to be of service to others. These helpful souls are not only content to let others have the limelight, they actually prefer it that way. Their pleasure comes from helping others succeed.
That kind of character is in direct contradiction to how pretty much every personification game is designed. Whether you like it or not, you're forced to be the star, to make all the big decisions for yourself and maybe others, too.
But by assuming that every game has to be designed that way, developers are telling many would-be gamers that their playstyle interests aren't wanted. That's a shame both artistically and commercially.
Games don't need to be only about support roles. You could create a game where the player can only be a healer or a crafter, and those might be fun -- but it's not necessary to go that far.
A game that offers the option of rewarding players for being supportive, for helping out NPCs or other players in ways that don't involve saving the world or being put on a stage for it, would be one that more people would find enjoyable. It would be more fun for more people, and would bring more cooperative play to gameworlds that are often harshly contentious.
As game design goals go, that's not a bad one.
Thursday, October 18, 2012
That raises an interesting general question. What makes a new game in a series welcomed, tolerated, or reviled by fans of previous entries?
Assuming the later games are of the same or better quality as the first, what makes one game "a welcome update to a series getting stale" and another "a betrayal of everything that fans of this series have come to expect?"
Bearing in mind that love and hate for particular games are often highly subjective responses (to put it politely), I think it's possible to make some broad but useful observations. Here are some suggested categories of gamer sentiment regarding sequels:
A GOOD THING MADE BETTER
- Wing Commander 1 & 2 => Wing Commander 3 & 4
- System Shock => System Shock 2
- Thief => Thief 2: The Metal Age
- Uncharted: Drake's Fortune => Uncharted 2: Among Thieves
- TES IV: Oblivion => TES V: Skyrim
NOT THE SAME, BUT SURPRISINGLY GOOD
- Deus Ex => Deus Ex: Human Revolution
- UFO: Enemy Unknown (X-COM) => XCOM: Enemy Unknown by Firaxis
THEY DUMBED IT DOWN
- TES III: Morrowind => TES IV: Oblivion
- Deus Ex => Deus Ex: Invisible War
- System Shock 1/2 => BioShock
- Mass Effect => Mass Effect 2
YOU KILLED MY PUPPY
- Fallout 1/2 => Fallout 3
- UFO: Enemy Unknown (X-COM) => XCOM by 2K Games [tentative]
- Resident Evil 1-4 => Resident Evil 6
All of these are debatable in their details. I don't agree personally with all of them, and some of them may not even be accurate in an objective sense. But they do, I think, accurately reflect how these games are assessed by gamers generally. So for now, let's assume that you're willing to accept most of the category assignments I've proposed here. Some sequels are loved, some are accepted, and some get mostly bad press.
What do the games in each category have in common with each other? And what sets them apart from the games in the other categories?
One fairly obvious difference is time -- specifically, how much time has passed from one entry in a series to the next. Games that are perceived as improvements on their predecessors tend to be released fairly soon after the prior game, while games made much later tend to be judged more severely. Skepticism probably colors beliefs before a late sequel is released, and nostalgia for a very highly regarded earlier game makes a fair comparison harder for any follow-up. This suggests that it's a good idea to have a design for a fairly similar sequel ready to implement if the initial game in a new franchise takes off.
Slightly less obvious, and related to time, is who makes the follow-up game. A sequel made by the original game's creator (or members of the team that made the original game) is likely to be perceived more positively than a game made by a completely different developer.
(There are exceptions for a few studios. Knights of the Old Republic 2 and Fallout: New Vegas, developed by Obsidian Entertainment, while less appealing to some players of KOTOR and Fallout 3, received higher marks from many gamers. And Eidos's respectful handling of its Deus Ex prequel muted much of the negative discussion of its in-development Thief sequel. At worst, sequels made by studios that fans of the earlier games feel they can trust fall somewhere between positive and mixed reception.)
Changing the display engine or target platform often generates some disapproval. This showed up in particular after 2000, when primary development shifted from the PC to the new generation of game consoles. Deus Ex and The Elder Scrolls are examples of franchises that suffered from this perception; Deus Ex: Invisible War and TES IV: Oblivion were developed first for consoles and then ported to the PC, the platform of the original games, and are frequently given the "dumbed down" criticism by fans of those earlier games.
BioShock, though not a direct sequel to the PC-based System Shock games, also met with some of this criticism, but overcame it by creating a new and strongly-realized setting for the fairly similar game mechanics. BioShock also shows that falling into this category doesn't imply that the later games must be "bad," either artistically or commercially. Gamers lost to the "dumbed down" problem may be replaced by those who gravitate to or grow up using the newer target platforms.
A final factor appears to be whether a sequel makes significant alterations to the primary gameplay mechanics (and often the player visual perspective) associated with a popular franchise. The X-COM and Fallout franchises went through this -- fans of the pausable, tactical third-person format of the earlier games reacted very negatively to the shift to real-time, first-person shooter gameplay of the later games. Fans of Fallout 1/2 can still be found grousing about the change in Fallout 3 despite the later game's evident quality and popularity. Mass Effect 2 was criticized for significantly reducing the number of character skill options from the more RPG-like original Mass Effect. And 2K's as-yet-unreleased first-person shooter take on X-COM (which was recently revealed as having been changed to third-person perspective) generated more negative comment than Firaxis's more faithful recreation.
These effects are understandable, and maybe unavoidable. It's impossible for a sequel to perfectly please every gamer who enjoyed the initial game(s) while at the same time changing to attract new players. Gamers as a group are notorious for wanting "the same, only different." If it's too different, you lose the fans who liked the original game. But if it's too similar, you'll be criticized for "charging for the same game twice."
It's also creatively and financially risky to make too many trips to the same well without perking things up somehow -- consumers of any kind of entertainment will eventually tune out. Finally, from a developer's viewpoint it's just less fun to iterate on a well-known formula than to make a new game that stretches some different developer muscles.
Those realities acknowledged, it's also true (as Simon Ludgate recently pointed out) that if you're going to make a game that purports to be a new entry in a popular series, then your new game's design ought to at least include some core elements from the games that made the series popular. This is both a matter of courtesy and business: it does not pay to antagonize the people who are the biggest (and often most vocal) fans of the franchise you're trying to extend.
Finding the balance point between respecting the past while meeting new modern expectations is hard. But the reward for doing it well is gamer trust that translates directly into future sales.
Otherwise, just call it a "spiritual successor"....