Wednesday, December 5, 2012

Exploration, Interrupted

Some games are about exploration. In these games, the pleasure of discovering and understanding is the central form of play. The mechanics of the game all encourage and reward exploratory play.

Other games are about something else, such as excitement or accumulating stuff, and just happen to have some exploration in them. Most games fall into this category.

I bring this up because I'm reading more comments these days calling some game that doesn't emphasize exploration an "exploration game." It's not. Just having some exploration content in it does not make it an exploration game.


It's good that some developers are willing to wedge a little exploration content into their games. I appreciate it when someone offers content that respects the Rational/Explorer/Simulationist playstyle.

It's also understandable (if a bit sad) that, for the gamers who are starved for exploration-specific content, a game with even a little exploration in it can be considered an "exploration game."

But it isn't, not really. To say that a thing is a member of some distinctive group when it functionally is not is to destroy the meaning of the words used to name that group. That makes it harder to communicate usefully.

In the case of a game, the descriptive term "exploration game" loses its value for both marketing and critical discussion when it's applied to games that aren't actually about exploration play. It's misleading to gamers.

More importantly from the design perspective of this blog, calling a game an "exploration game" just because it can occasionally be played that way narrows the mental model that new game designers are able to form of what an "exploration game" contains. If you grow up thinking that a game about exciting non-stop action and loot collecting counts as an exploration game just because there's some optional terrain to view or variation in loot drops, what are the odds that you will really understand what Explorer gamers actually want?


In particular, I've come to think that action-focused games that have as a primary play mechanic some kind of ticking clock or other (often mobile) threat that disrupts planned play behaviors are almost certainly not exploration games, and shouldn't be called that.

In short: mechanics that interrupt exploration actively oppose exploration play.

If you're a game designer, it doesn't matter how many goodies you hide, or how big the world is, or if you provide the occasional alternate track off the Direct Path To Fastest Victory. If your game persistently interrupts the player's perception and thinking process with some kind of threat, then it's not an exploration game. That game is explicitly telling players that exploration is not meant to be their highest priority.

Interrupt threats are great for action games. Unexpected survival challenges generate excitement and the immediate requirement to do things. Doom 3, for example, was all about interrupt threats. Something similar is true of Minecraft in Survival mode -- getting jumped by a giant spider in the dark tends to distract one from sightseeing.

That's why I play Minecraft in Peaceful mode. Even without the ability to save at will, Peaceful mode at least allows exploration of the generated gameworld without having to worry constantly about being interrupted by a mobile survival threat.

On the other hand, Peaceful mode is not an option in Doom 3, a game built entirely around interrupt threats. Doom 3 was not an exploration game.


Is the primary design intention of your game to deliver an action-filled play experience? If so, then by all means, implement some speed-related interruption threats: a limited amount of time in which to observe and plan, enemies that come looking for you after a certain time, or ever-decreasing amounts of time in which to plan your next move.

Those are great for ramping up tension. You might even choose to deliberately prevent players from quicksaving all of their game state, or give them only one save slot. (Both of which are precisely what the PC version of Far Cry 3 does.)

All these are valid design choices for making a high-intensity action game...

...and every one of them directly opposes exploratory play. Survival interruptions significantly increase the difficulty of obtaining knowledge of the gameworld. Restrictions on saving game state significantly increase the risk of losing some of that knowledge. Interruptive mechanics and save restrictions penalize exploration. They send the unmistakable message that exploring the gameworld is not what you're supposed to be doing. Anyone who tries to get that kind of enjoyment out of such a game is clearly not playing it the right way.

It's only when exploration is designed to matter most that the description "exploration game" is appropriate.

Survival interruptions and save restrictions very clearly tell the player that the point of a game is not exploration, that things to discover are only there to add some "content" or to provide a brief contrast to the action moments. If exploration actually mattered as primary gameplay, then the designer wouldn't emphasize gameplay elements that constantly interfere with perception and pattern recognition or that make it difficult to accumulate and organize knowledge.


An obvious question at this point is whether a game that has lots of combat can ever be considered a true exploration game. My feeling is that combat, by its nature, tends to be interruptive, and so almost always works against exploration. But it's entirely possible to have an exploration game with some combat in it.

System Shock 2 is a good example of this. Unlike many games with combat, the combat system of System Shock 2 was systemically deep enough to be something interesting to explore in its own right. And it's telling that one of the few complaints about SS2 was the "respawning enemies." As in Doom 3, this worked against being able to concentrate on perceiving and understanding the patterns of the gameworld. When those interruptions didn't occur, System Shock 2 was extremely rewarding to explore, not in spite of having a strong combat element but partly because its combat element was a richly designed system.

A true exploration game asks the player to observe, and think, and understand. Not in an "oh god, oh god, we're all gonna die if you don't get it right instantly!" kind of way, but in a contemplative, creative, and strategic way. Success in a true exploration game, and the rewards for that success, go not to the player with the most well-developed fast-twitch muscles, nor to the most tactically adaptable player, nor even to the player who can bring out the best in other people. Success in a good exploration game -- most likely designed by someone who understands and values the joy of discovering interesting things -- goes to the player who sees the patterns in complex systems, and who can conceive long-range plans for applying available forces to those patterns to create new, more desirable configurations.

Players just don't get to do that when you're putting them in life-termination scenarios every few seconds. Games that are about interruptive excitement are not about exploration.


Games that are about exploration have mechanics and content that promote the different but equally valid kind of satisfaction that comes from realizing the governing pattern in a complex system, or from devising a new system that is both functional and elegant.

Not everybody likes that kind of gameplay. Great! There's plenty of room for different kinds of games that offer different kinds of fun. And there are plenty of games available that emphasize action and token-collection.

But being designed to offer a primary gameplay experience of something other than adrenaline-pumping excitement does not make a game flawed or bad. Not delivering eyeball kicks every ten seconds does not make a game buggy or broken.

A game like that could, if its mechanics emphasize the pleasure of discovery and understanding, deserve to be called an "exploration game."

Saturday, November 24, 2012

Player Creativity Considered Harmful

Is player creativity desirable in games?

There's a subset of developers who seem to think so. They like the idea that players should be able to express behaviors and create objects in a gameworld that they (the developers) never thought of.

But these appear to be a distinct minority. Most games are deliberately designed and developed to prevent any truly creative play. In particular, the number of in-game effects that characters and objects can demonstrate is cut back as much as possible.

Why take such pains? Why are most developers so determined to strictly limit player verbs or possible system interactions if player creativity is such a great thing?

There are several not entirely bad reasons why. Unfortunately for the game industry, I believe the combination of these justifications leads to a large majority of games that are so tightly controlled as to nearly play themselves.


One problem with allowing player creativity is rude content.

If you let players do things that modify the gameworld, particularly if they can interact with other players in any way, they are guaranteed to spell out naughty words, erect enormous genitalia, and build penisauruses. (Google "Sporn" for NSFW examples of how gamers immediately used Spore's creativity tools.)

Developers can accept this if they're OK with a mature rating for their game, but creativity tools make it tough to sell a multiplayer game that's kid-safe.


Another problem is that emergent behaviors can look to some gamers like bugs.

That doesn't mean they are actual bugs, defined for games as behavior that opposes the intended play experience. Just because it was unintended doesn't mean it opposes the desired play experience.

The developers of Dishonored, for example, were surprised to see their playtesters possess a target while plummeting from a great height, thus avoiding deceleration trauma. It wasn't intended -- it emerged from the interaction of systems -- but it made sense within the play experience Arkane had in mind. So it wasn't a bug, it was a feature... and it got to stay in the game. That appears to be a rare exception to standard practice, though.


Crafting in MMORPGs is not creative. Crafting -- making objects -- in MMORPGs has nothing to do with "craft" or being "crafty"; it's about mass-producing widgets to win economic competition play. That's a perfectly valid kind of play. But it isn't creative.

An argument might be made that some creativity is needed to sell a lot of stuff. But that's not related to crafting as a process of imagining new kinds of objects that meet specific purposes and elegantly bringing them into existence within a gameworld. That's "craftsmanship," and it's what a crafting system worthy of the name would be... but that's not what crafting in MMORPGs ever actually is.

A truly creative crafting system would allow the internal economy of a gameworld to grow through the invention of new IP. Wouldn't that be an interesting way to counter mudflation?

To be fair, a creative crafting system would probably far outshine the rest of most MMORPGs. The crafting system in the late Star Wars Galaxies (SWG) MMORPG was highly regarded, but in an odd way it was so much fun that it never really fit into a Star Wars game.

So what might a MMORPG (i.e., not Second Life) with a truly creativity-encouraging crafting system look like? In what kind of gameworld would the ability for players to imagine and implement entirely new kinds of things be appropriate?


Yet another reason to deprecate player creativity is game balance. Especially in multiplayer games, developers not unreasonably want to try to keep the playing field level for players using (marginally) different playstyles.

A common way this gets expressed is by organizing character skills in level-controlled classes. It's more interesting to key character abilities to skills, and let players pick and choose the skills they want. But this (developers have decided) allows the emergence of character ability combinations that may be either unexpectedly "overpowered" or too "weak" to compete effectively with players of similar skill levels.

This perspective that "interacting systems allow emergent effects that interfere with the intended play experience and therefore must be minimized" explains (as one example) why Sony Online Entertainment completely deleted the extensive individual skills system of the original Star Wars Galaxies and replaced it with a few static classes with specific abilities at developer-determined levels, just like pretty much every other MMORPG out there.

The New Gameplay Experience was well-regarded by some of SWG's new players. But many long-time players felt that the original SWG's unique skills-based ability model was much more creatively satisfying. When it was changed so radically to a class-based model, eliminating their ability to express themselves in a detailed way through their character's abilities, they left the game.

EVE Online also allows skill selection, but in practice most people wind up with the same skills. So is it possible any longer to offer a major MMORPG that encodes player abilities in mix-and-match skills, rather than a small set of classes in which my Level 80 Rogue is functionally identical to your Level 80 Rogue?


One more reason why emergence gets locked down in games starts, ironically, with sensibly trying to use more mature software development practices.

Test-case-driven software development is the process of documenting what your code is supposed to do through well-defined requirements, then writing test cases that describe how to find out whether the software you actually write meets those requirements.

That's often a Good Thing. It helps to ensure that what you deliver will be what your customers are expecting. But there is a dark side to this process, as there can be for any process: if your organization starts getting top-heavy, with a lot of layers between the people running things and those doing the actual game development, the process eventually tends to become the deliverable. Reality becomes whatever the process says it is. Process is easier to measure than the meaning of some development action: "How many lines of code did you write today?"

The practical result of enforcing the "everything must have a test case" process is that every feature must have a test case. That's actually pretty handy for testing to a well-defined set of expectations.

Unfortunately, the all-too-common corollary is: if we didn't write a test case for it, you're not allowed to have that feature. At that point, the process has become your deliverable, and your game is very unlikely to tolerate any creativity from its players. It might be a good game by some standard. But it probably won't be memorable.

Still, a process for reliably catching real bugs is valuable. So how can the desire to allow some creativity and the need to deliver measurably high quality coexist?
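To make the process concrete, here is a minimal sketch of what test-case-driven development looks like in practice. The feature and all names (craft_potion, the 2-herbs-per-potion rule) are invented for illustration; the point is only that each written requirement gets a matching, checkable test.

```python
def craft_potion(herbs: int) -> int:
    """Requirement: every 2 herbs yield exactly 1 potion; leftover herbs are wasted."""
    return herbs // 2

def test_craft_potion():
    # Each assertion corresponds to one documented requirement.
    assert craft_potion(4) == 2   # 2 herbs -> 1 potion
    assert craft_potion(5) == 2   # a leftover herb produces nothing
    assert craft_potion(0) == 0   # no herbs, no potions

test_craft_potion()
```

The trouble described above begins when this list of assertions is treated as the complete definition of the game: any behavior without a test case, however delightful, is ruled out by default.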


Finally, there is the problem of the Epic Story.

Emergent gameplay invites exploratory creativity. But broadly emergent gameplay interferes with a carefully-crafted narrative. The more epic and detailed the story -- which translates to more development money spent on that content -- the less freedom you can permit players to go do their own wacky things, because then they might not see that expensive content. The Witcher 2 fought this somewhat, but it's emphatically the exception.

Is there a middle ground between developer story and player freedom? Or is there a way to design a game so that both of these can be expressed strongly?

To sum up: from the perspective of many game developers, especially in the AAA realm, "emergent" automatically equals "bug." A mindset that only the developers know how the game is meant to be played, rather than a respect for what players themselves enjoy doing, is leading many developers to design against creativity. Actually increasing the number of systems or permitted system interactions seems to be out of the question.

The result is that player creativity in these games is so constrained as to be nonexistent. You're just mashing buttons until you solve each challenge, in proper order, in the one way the developers intended.

Is there any sign that this might be changing, perhaps as the success of some indie games demonstrates that there is a real desire for games that encourage player creativity?

Monday, November 12, 2012

Plausibility Versus Realism

Every now and then, a forward-thinking, open, courteous, kind, thrifty and generally very attractive group of developers will choose to let gamers see a little of their design thinking for a game in development.

For roleplaying games such as the recently very well Kickstarted Project Eternity by Obsidian Entertainment, the conversation can be informed and thoughtful. Not all of the ideas suggested by enthusiastic armchair designers will be right for a particular game, but the level of discussion is frequently very high.

However, in all the years I've observed such forums, one conversational glitch inevitably appears. It doesn't take long before even very knowledgeable commenters begin to argue in favor of gameplay systems that replicate real-world physical effects.


They might be asking for armor that has weight and thus restricts physically weak characters from equipping it at all. Or maybe it's for weapons that require specialized training, so that a character must have obtained a particular skill to use certain weapons. Sometimes there's a request for a detailed list of damage types, or for complex formulas for calculating damage amounts, or that environmental conditions like rain or snow should reduce movement rates or make it harder to hit opponents.

What all these and similar design ideas share (other than an enthusiasm for functionally rich environments) is an unspoken assumption that the RPG in question needs more realism.

Later on I'll go into where I think this assumption comes from. For now, I'd like to consider why I think there's a better approach when trying to contribute to a game's design -- instead of realism, the better metric is plausibility.


The difference between realism and plausibility is a little subtle, but it's not just semantic. Realism is about selecting certain physical aspects of our real world and simulating them within the constructed reality of the game world; plausibility is about designing systems that feel appropriate to that particular game world. Plausibility is the better standard for a game with a created world -- what Tolkien called a "secondary reality" -- because realism crosses the boundary of the magic circle separating the real world from the logic of the constructed world, while plausible features are contained entirely within the lore of the created world.

To put it another way, plausibility is a better goal than realism because designing a game-complete set of internally consistent systems delivers more fun than achieving limited consistency with real-world qualities and processes. Making this distinction is crucial when it comes to designing actual game systems. Every plausible feature makes the invented world better; the same isn't true of all realistic features imported into the gameworld. Being realistic doesn't necessarily improve the world of the game.

Despite this, a design idea based on realism often sounds reasonable at first. We're accustomed to objects having physical properties like size and weight, for example, as well as dynamic properties such as destructibility and combustibility. So when objects are to be implemented in a game world, it's natural to assume that those objects should be implemented to express those physical characteristics.

But there are practical and creative reasons not to make that assumption.


Creating simulations of physical systems -- which is the goal of realism -- requires money and time to get those systems substantially right relative to how the real world works. Not only must the specific system be researched, designed and tested, but all the combinations of simulated systems that interface with each other must be tested -- all the emergent behaviors have to seem realistic, too.

Trying to meet a standard of operation substantially similar to something that exists outside the imaginary world of the game is just harder than creating a system that only needs to be consistent with other internal game systems.

Maybe worst of all, there are so many real-world physical processes that it's impossible to mimic even a fraction of them in a game. Something will always be left out. And the more you've worked to faithfully model many processes, the more obvious it will be that some process is "missing." This will therefore be the very first thing that any reviewer or player notices. "This game claims to be realistic, but I was able to mix oil and water. Broken! Unplayable! False advertising!"

This isn't to say that all simulation of physical processes is hard/expensive, or that there can never be a good justification for including certain processes (gravity, for example). Depending on your game, it might be justifiable to license a physics simulation library such as Havok.

But the value of implementing any feature always has to be assessed by comparing the likely cost to the prospective benefits. For many games (especially those being made on a tight budget), realism should almost always be secondary to plausibility because realism costs more without necessarily delivering more fun for more players.


Knowing when to apply either realism or plausibility as the design standard depends on understanding what kind of game you're making. If it's core to your gameplay, such as throwing objects in a 3D space, then the value of simulating some realistic effects like gravity increases because the benefits of those effects are high for that particular game. Otherwise, you're better off implementing only a plausible solution or excluding that effect entirely.

Let's say you're making a 3D tennis game. The core of tennis is how the ball behaves, and the fun comes from how players are able to affect that behavior. So it makes sense for your game design to emphasize in a reasonably realistic way the motion of a ball in an Earth-normal gravity field (a parabola), as well as how the angle and speed of a racquet's elastic collision with a ball alters the ball's movement. If it's meant to be a serious sports simulation, you might also choose to model some carefully selected material properties (clay court versus grass) and weather conditions (humidity, rain).
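The "limited realism" the tennis example calls for can be sketched in a few lines. This is a toy illustration, not code from any real tennis game: the ball's flight is plain projectile motion under Earth-normal gravity, and the racquet's elastic collision is abstracted to a single restitution factor.

```python
import math

G = 9.81  # m/s^2, Earth-normal gravity

def ball_position(speed, angle_deg, t):
    """Position (x, y) in meters of a ball hit from the origin, t seconds later."""
    a = math.radians(angle_deg)
    x = speed * math.cos(a) * t
    y = speed * math.sin(a) * t - 0.5 * G * t * t  # the parabola
    return x, y

def racquet_return(incoming_speed, restitution=0.9):
    """Abstract the racquet's elastic collision to one tunable 'pop' factor."""
    return incoming_speed * restitution

x, y = ball_position(30.0, 15.0, 1.0)  # a 30 m/s shot at 15 degrees, after 1 second
```

Everything beyond this (court surface, humidity, spin) is a deliberate design decision about how much more realism the core fun actually needs.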

But you probably wouldn't want to try to simulate things like the Earth's curvature, or solar radiation, or spectator psychology. They don't have enough impact on the core fun to justify the cost to simulate them. And for a simple social console game, ball movement and racquet impact are probably the only things that need limited realism. The better standard for everything else that has to be implemented as the world of the game is plausibility. If it doesn't help the overall game feel logically and emotionally real in and of itself, then it doesn't meet the standard and probably should not be implemented.


This is even more true for a character-based game, which requires a world in which those characters have a history and relationships and actions they can take with consequences that make sense. In designing a 3D computer role-playing game, the urge to add realistic qualities and processes to the world of that game can be very hard to resist.

Action-oriented Achiever gamers are usually OK with abstracted systems; the fun for them comes in winning through following the rules, whatever those rules might be. But for some gamers -- I'm looking at you, Idealist/Narrativists and my fellow Rational/Explorers -- the emotional meanings and logical patterns of those rules matter a great deal.

Characters who behave like real people, and dynamic world-systems that behave like real-world systems, make the game more fun for us. We find pleasure in treating the gameworld like it's a real place inhabited by real people, not just a collection of arbitrary rules to be beaten. For us, games (and 3D worlds with NPCs in particular) are more fun when they offer believable Thinking and Feeling content in addition to Doing and Having content.

Providing that kind of content for Narrativist gamers is non-trivial. "Realistic" NPC AI is hard because, as humans, we know intimately what sensible behavior looks like. So while there are often calls for NPCs to seem "smarter," meaning that they're more emotionally or logically realistic as people, it's tough for game developers to sell to publishers the value of the time (i.e., money) that would be required to add realistic AI as another feature. Developers of RPGs usually have a lot of systems to design and code and test. So working on AI systems that give NPCs the appearance of a realistic level of behavioral depth, in addition to all the other features, is very hard to justify. (The Storybricks technology is intended to help with exactly this problem of building more plausible NPCs.)

Another argument against realism in a gameworld is complexity. Most developers prefer to build simple, highly constrained, testable cause/effect functions rather than the kinds of complex and interacting systems that can produce the kinds of surprising emergent behaviors found in the real world. Explorers find that hard to accept. Explorers aren't just mappers of physical terrain; they are discoverers of dynamic systems -- they love studying and tinkering with moving parts, all interacting as part of a logically coherent whole.

Explorers also tend to know a little about a lot of such systems in the real world, from geologic weathering to macroeconomics to the history of arms and armor and beyond. So it's natural for them to want to apply that knowledge of real systems to game systems. Since you're building a world anyway (they reason), you might as well just add this one little dynamic system that will make the game feel totally right.

Now multiply that by a hundred, or a thousand. And then make all those systems play nicely together, and cohere to support the core vision for the intended play experience. "Hard to do well" is an understatement.


That's the practical reason why, for most systems in a worldy game, plausibility will usually be the better standard. If the game is meant to emphasize melee combat, for example, then having specific damage types caused by particular weapons and mitigated by certain armors might sound good. Moderate simulation of damage delivery and effects might be justifiable. Those action features will, if they're paced well and reward active play, satisfy most gamers.

But a role-playing game must emphasize character relationships in an invented society, where personal interactions with characters and the lore of the world are core gameplay, and combat is just one form of "interaction" among several. For that kind of game, the better choice is probably to limit the design of that game's combat system to what feels plausible -- get hit, lose some health -- and to abstract away the details.

Plausible systems are especially desirable in roleplaying games because they meet the needs of Explorers and Narrativists. Intellectually and emotionally plausible elements of the game feel right. They satisfy our expectations for how things and people should appear to behave in that created world.

A plausible combat system deals and mitigates damage; it can but doesn't need to distinguish between damage types. A plausible "weather" system has day/night cycles and maybe localized rain; it doesn't require the modeling of cold fronts and ocean temperatures and terrain elevation. A plausible economy has faucets and drains, and prices generally determined by supply and demand; it doesn't have to be a closed system. Plausibility ensures that every feature fits the world of the game without doing more than is necessary.
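The economy example above can be made concrete with a toy sketch. All numbers and names here are invented for illustration: currency enters the world through faucets (quest rewards), leaves through drains (vendor fees), and a price simply drifts with supply relative to demand, with no attempt at a closed, fully simulated economy.

```python
def adjust_price(price, supply, demand, rate=0.1):
    """Nudge price up when demand exceeds supply, down when supply exceeds demand."""
    if supply == 0:
        return price * (1 + rate)  # nothing available: price climbs
    return price * (1 + rate * (demand - supply) / max(supply, demand))

money_in_world = 100.0
money_in_world += 25.0   # faucet: a quest reward injects new currency
money_in_world -= 5.0    # drain: a vendor fee removes currency from the world

potion_price = 10.0
potion_price = adjust_price(potion_price, supply=50, demand=100)  # scarce, so price rises
```

That drift rule is plausible rather than realistic: it feels like supply and demand to the player without modeling actual market clearing.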

This is the best of both worlds. Making plausibility the standard gives players lots of different kinds of things to do, and it keeps those systems simple enough to be worth building and feasible for mere mortals to implement.


So, to gamers enthusiastic about adding realistic interactive capabilities to a brand-new gameworld, I say: whoa, there! Before you ask the designers to add this one additional little thing that would help make the game more "realistic," stop and think about that idea from a game designer's perspective.

Developers can't add every great idea, even if they're still in the design phase. But the chances of seeing some feature suggestion implemented improve if you can explain how it makes the unique world of that game feel more plausible at minimal development cost.

Tuesday, October 30, 2012

They Also Serve Who Only Stand and Wait

"[T]housands at his bidding speed,
And post o'er land and ocean without rest;
They also serve who only stand and wait."
-- John Milton, "On His Blindness"

There's a belief I've seen expressed by a number of developers that can be paraphrased as: "All players want to be The Hero. Every gamer expects to be the all-powerful savior, prime mover of all action, star of the show, center of all attention. Therefore every game that personifies the player as a character in a world must be designed to allow the player to be the hero. And that goes for multiplayer games, too."

Not to pick on Emily Short, who is a respected creator of Interactive Fiction games, but an example of this perspective can be found in a talk she gave at GDC Online in 2012, covered on Gamasutra as Making Everyone Feel Like a Star in a Multiplayer Game. As Gamasutra's Frank Cifaldi summarized it: "Even in a multiplayer game, every player has to feel as if they are playing out their own personal, unique story. They cannot feel as if they are in a supporting role, or their investment in the narrative will fall apart."

It's a nice theory. Is it true?

I'm Ready For My Close-Up

For some gamers, it is true. They do want to be the hero, and they do expect any and every personification game (where you play as a character) to cater to that desire. Character-based games, they feel, are essentially power fantasies where the world is supposed to revolve around them. Although they are unlikely to say it this way, these gamers expect every feature in a game to be about letting them express dominance, either "physical" (as through combat in a three-dimensional game world) or emotional (as in much interactive fiction).

Even if a desire to follow some version of Campbell's "Hero's Journey" isn't baked into people, gamers today have grown up with games in which you are the hero who saves the world. So many games follow this pattern that it would be surprising if many gamers have not come to expect it as a natural, even required, element of all personification games.

The problem is that this expectation of epic centrality is demonstrably not true for all gamers. Despite the pattern, not every gamer wants to be the hero. There's evidence of a meaningful minority of gamers who are happiest when a game gives them ways to help other players succeed. These gamers truly do not want to be the star -- they prefer a supporting role.

The Cleric as Un-Hero

Consider the four archetypal classes: warrior, wizard, rogue, cleric. Warriors deal melee damage and are pack mules; wizards cast ranged spells and know lots of lore; rogues backstab, detect traps and steal shiny things. All of these are heroic in their own way -- their gameplay content is about acting for themselves. The actions the game is designed to allow them to take are all focused on their own self-enhancement.

But clerics, while they can sometimes do divinely-inspired damage, are mostly about healing the wounds or diseases of other characters, along with protecting ("buffing") other characters. That's been the traditional functional definition of the cleric role in roleplaying games dating back to Dungeons & Dragons (and probably before that). A modern addition is some form of "crowd-control" feature, but the function is the same: providing support to the actively heroic characters.

That style of play is not about indulging power fantasies. The game actions a cleric is built to perform aren't centered on the person playing the character, but on other players. So why do developers bother implementing a cleric role at all, if character-based games are supposed to be about letting the player feel like a hero?

The World Needs a Healer

One explanation is that the healer role is included simply as a matter of utility. If a game is pretty much all about killing (as most computer games are), then to make it interesting there needs to be some risk of being injured yourself. If there's no way to heal your own injuries, then you need someone else to do the healing. And in a typical fantasy setting, that character is the cleric. In a modern setting, this role is often called a "medic," as in Valve's Team Fortress 2 multiplayer game, but it's pretty much the same other-focused functionality.

But of course it's not a hard design requirement in any constructed computer game to have some other person heal your character. It's simple enough to provide the healing function through potions or stimpaks that magically undo character damage. And yet game developers keep implementing character class roles whose abilities are focused on helping other characters.

Perhaps developers do this because enough gamers like playing clerics to justify breaking those abilities out into a separate class role. But that raises the question: if a roleplaying game offers a role whose primary function is to support other players, why are there so many gamers happy to fill it? If everyone really expects to be the star, who are all these people looking to be part of the supporting cast?

The Craft of Helping

In fact, clerics aren't the only source of supportive abilities in roleplaying games, particularly in the massively multiplayer online variety (MMORPGs). A popular alternative activity in these games is "crafting," which involves creating objects that are usable and useful inside the game world.

Although there is pleasure in the crafting of new things (though, from my Explorer perspective, that reason for crafting is almost never emphasized), most crafting in MMORPGs exists to provide useful objects for other players. Often these are specific to combat gameplay -- weapons or ammunition -- but crafting can also serve as a source of tools such as fishing poles or resource detectors.

Either way, in a game where usable objects can be looted from defeated enemies, implementing crafting gameplay ensures that combat players don't have to hope and wait for certain items to drop as loot. Crafting also allows some players to serve a useful role in a game without forcing them to participate in direct combat gameplay. This allows more people to play the game (and pay for the privilege) than would have been the case in a combat-only game.

One of the best-known descriptions of this playstyle preference is the article posted to Stratics in 2001 by Lloyd Sommerer (as "Sie Ming"): I Want to Bake Bread. In this plea for game developer understanding, Lloyd ably points out the kinds of supportive behaviors that some gamers enjoy providing, and wonders why developers don't seem interested in the benefits a game can obtain from including features that attract gamers like these.

It's still a good question.

Supportive Play is Good Gameplay

To sum up: some people enjoy helping other people, but few games reward that playstyle. That's a missed opportunity, both in revenue and in bringing genuinely helpful people into your gaming community. If they don't want to play the hero, that's OK, and smart developers will create gameplay for them instead of trying to force them into the hero's boots (which won't work).

The people who come to a computer game wanting to play a healer, or a maker of things, are there specifically because they want to play a character-filled game that does not force them into the spotlight. Being able to play a supporting role satisfies a deeply held need of some people to be of service to others. These helpful souls are not only content to let others have the limelight, they actually prefer it that way. Their pleasure comes from helping others succeed.

That kind of player is directly at odds with how pretty much every personification game is designed. Whether you like it or not, you're forced to be the star, to make all the big decisions for yourself and maybe others, too.

But by assuming that every game has to be designed that way, developers are telling many would-be gamers that their playstyle interests aren't wanted. That's a shame both artistically and commercially.

Games don't need to be only about support roles. You could create a game where the player can only be a healer or a crafter, and those might be fun -- but it's not necessary to go that far.

A game that offers the option of rewarding players for being supportive, for helping out NPCs or other players in ways that don't involve saving the world or being put on a stage for it, would be one that more people would find enjoyable. It would be more fun for more people, and would bring more cooperative play to gameworlds that are often harshly contentious.

As game design goals go, that's not a bad one.

Thursday, October 18, 2012

Squeal vs. Squee: How Game Sequels are Received

The recent release of Resident Evil 6 generated a fair bit of discussion about how some players of previous installments of the survival-horror franchise are unhappy with RE6's new, more action-oriented direction.

That raises an interesting general question. What makes a new game in a series welcomed, tolerated, or reviled by fans of previous entries?

Assuming the later games are at least as good as the first, what makes one game "a welcome update to a series getting stale" and another "a betrayal of everything that fans of this series have come to expect?"

Bearing in mind that love and hate for particular games are often highly subjective responses (to put it politely), I think it's possible to make some broad but useful observations. Here are some suggested categories of gamer sentiment regarding sequels:

Loved:

  • Wing Commander 1 & 2 => Wing Commander 3 & 4
  • System Shock => System Shock 2
  • Thief => Thief 2: The Metal Age
  • Uncharted: Drake's Fortune => Uncharted 2: Among Thieves
  • TES IV: Oblivion => TES V: Skyrim

Respected:

  • Deus Ex => Deus Ex: Human Revolution
  • UFO: Enemy Unknown (X-COM) => XCOM: Enemy Unknown by Firaxis

Tolerated:

  • TES III: Morrowind => TES IV: Oblivion
  • Deus Ex => Deus Ex: Invisible War
  • System Shock 1/2 => BioShock
  • Mass Effect => Mass Effect 2

Reviled:

  • Fallout 1/2 => Fallout 3
  • UFO: Enemy Unknown (X-COM) => XCOM by 2K Games [tentative]
  • Resident Evil 1-4 => Resident Evil 6

All of these are debatable in their details. I don't personally agree with all of them, and some may not even be accurate in an objective sense. But they do, I think, accurately reflect how gamers generally assess these games. So for now, let's assume you're willing to accept most of the category assignments I've proposed here. Some sequels are loved, some are accepted, and some get mostly bad press.

What do the games in each category have in common with each other? And what sets them apart from the games in the other categories?

One fairly obvious difference is time -- specifically, how much time has passed from one entry in a series to the next. Games that are perceived as improvements on their predecessors tend to be released fairly soon after the prior game, while games made much later tend to be judged more severely. Skepticism probably colors beliefs before a late sequel is released, and nostalgia for a very highly regarded earlier game makes a fair comparison harder for any follow-up. This suggests that it's a good idea to have a design for a fairly similar sequel ready to implement if the initial game in a new franchise takes off.

Slightly less obvious, and related to time, is who makes the follow-up game. A sequel made by the original game's creator (or members of the team that made the original game) is likely to be perceived more positively than a game made by a completely different developer.

(There are exceptions for a few studios. Knights of the Old Republic 2 and Fallout: New Vegas, developed by Obsidian Entertainment, while less appealing to some players of KOTOR and Fallout 3, received higher marks from many gamers. And Eidos's respectful handling of its Deus Ex prequel muted much of the negative discussion of its in-development Thief sequel. At worst, sequels made by studios that fans of the earlier games feel they can trust fall somewhere between positive and mixed reception.)

Changing the display engine or target platform often generates some disapproval. This showed up in particular after 2000 when primary development shifted from the PC to the new generation of game consoles. Deus Ex and The Elder Scrolls are examples of franchises that suffered from this perception; Deus Ex: Invisible War and TES IV: Oblivion were developed first for consoles then ported to the PC platform of their original games, and are frequently given the "dumbed down" criticism by fans of the earlier games.

BioShock, though not a direct sequel to the PC-based System Shock games, also met with some of this criticism, but overcame it by creating a new and strongly-realized setting for the fairly similar game mechanics. BioShock also shows that falling into this category doesn't imply that the later games must be "bad," either artistically or commercially. Gamers lost to the "dumbed down" problem may be replaced by those who gravitate to or grow up using the newer target platforms.

A final factor appears to be whether a sequel makes significant alterations to the primary gameplay mechanics (and often the player's visual perspective) associated with a popular franchise. The X-COM and Fallout franchises both went through this -- fans of the turn-based, tactical, third-person format of the earlier games reacted very negatively to the shift to real-time, first-person shooter gameplay in the later games. Fans of Fallout 1/2 can still be found grousing about the change in Fallout 3 despite the later game's evident quality and popularity. Mass Effect 2 was criticized for significantly reducing the number of character skill options from the more RPG-like original Mass Effect. And 2K's as-yet-unreleased first-person shooter take on X-COM (recently revealed as having been changed to a third-person perspective) generated more negative comment than Firaxis's more faithful recreation.

These effects are understandable, and maybe unavoidable. It's impossible for a sequel to perfectly please every gamer who enjoyed the initial game(s) while at the same time changing to attract new players. Gamers as a group are notorious for wanting "the same, only different." If it's too different, you lose the fans who liked the original game. But if it's too similar, you'll be criticized for "charging for the same game twice."

It's also creatively and financially risky to make too many trips to the same well without perking things up somehow -- consumers of any kind of entertainment will eventually tune out. Finally, from a developer's viewpoint it's just less fun to iterate on a well-known formula than to make a new game that stretches some different developer muscles.

Those realities acknowledged, it's also true (as Simon Ludgate recently pointed out) that if you're going to make a game that purports to be a new entry in a popular series, then your new game's design ought to at least include some core elements from the games that made the series popular. This is both a matter of courtesy and business: it does not pay to antagonize the people who are the biggest (and often most vocal) fans of the franchise you're trying to extend.

Finding the balance point between respecting the past and meeting modern expectations is hard. But the reward for doing it well is gamer trust, which translates directly into future sales.

Otherwise, just call it a "spiritual successor"....

Thursday, July 5, 2012

Game Design and the Two Cultures of Art and Engineering

A recent article on the computer game development website Gamasutra -- "Fun Is Boring" by Niels Clark -- takes a verbal bat to the recent uptick in arguments that start off as theory and turn into semantic quibbling over the Real Meaning of words like "game" and "fun."

I didn't disagree. But as I kept reading, this not-unreasonable rant seemed to turn into a jeremiad against theories of game design in general.

As the writer of one piece of theory (which I was brazen enough to call a "Unified Model" of personality-based gameplay styles), that bothered me greatly.

I grew up reading and loving science fiction and fantasy, as well as making music of all kinds, and vividly recall being mocked once for the mistake of saying of a remarkable sunset, "That's beautiful." The artistry of meaning has mattered to me.

I also learned to love science and engineering, and the processes by which real things can be efficiently created to accomplish intended purposes. I took to computer programming like a mammal to oxygen, and get paid to manage software development projects. So I also know something about the practicality of production.

And that's why articles that appear to denigrate either art or engineering -- in general, and in computer game design particularly -- seem self-evidently counter-productive to me.

Consider the design of computer games (which is what this blog is about, mostly). Is "fun" an ineffable, Platonic quality that strikes randomly like lightning? Or is it a specifically definable Thing that, with the right planning and execution, can be produced reliably and whose fitness can be measured?

Why are some designers unwilling to accept that making a broadly enjoyable game depends on both artistry and engineering?

A professional computer game development website like Gamasutra is full of how-to articles -- but why have those if making "fun" is random? Why tell aspiring developers to study how games get made? Why bother trying to have or use a vocabulary for expressing the nature of fun at any level if successfully applying that vocabulary is a complete crapshoot? Even if it's not perfect, having some shared language of design increases the likelihood that a particular gameplay mechanic will suit its intended design purpose.

At the same time, it's obvious that engineering isn't enough, either. There are plenty of games that follow sound software development methodologies for both schedule and cost that somehow miss capturing the spark of enjoyability. There is no perfect recipe for fun; if there were, everyone could and would be doing it. (That cake really is a lie.)

Articles and blog comments pushing (or putting down) either the Artist or the Engineer -- as though they're mutually exclusive -- always feel like yet another rehash of C.P. Snow's "Two Cultures" observation. Even game design veteran Raph Koster commented on July 6 (on his blog and reposted to Gamasutra) on the "two cultures" divide in game design.

I'm never going to quite understand the need some people seem to have to dismiss or disparage any style of understanding the world that isn't theirs. All I can see are the counterexamples, where both art and engineering are respected as equally necessary to bring into existence a complex new thing that engenders joy.

A Pixar movie, to take one good example, is both a real thing and a joyful thing. It's a product that got made according to a schedule with budgets, and that resolved a massive number of functional/technical considerations. It's also a glorious exploration of human feeling that's fun for many people. Something like that doesn't happen despite engineering or artistry. It happens because both creative modes are applied. Both are necessary, but neither is sufficient.

So why is there so much resistance to believing the same is true for computer games? Why can't we talk about the theory of making games as well as the practice, while at the same time acknowledging that a truly enjoyable gameplay concept whose creators care about its expression is required for all the process and theory to mean anything?

Artistry is expressed in conceiving ideas for experiences that different kinds of people can find satisfying. Engineering is about turning ideas into reality efficiently enough to make such creative projects achievable.

Why does anyone think that favoring one over the other is necessary?

Saturday, May 5, 2012

The Uncheatable Puzzle Game

Puzzle games have become notorious for having the solution to every puzzle posted online. If you never have to think about a puzzle, but can just look up every answer, is that "cheating?"

Well, what's the point of playing a single-player puzzle game? If it's the pleasure of perceiving elegant (or at least correct) solutions to a series of puzzles, then looking up answers online is, in a way, cheating yourself of that pleasurable experience.

If on the other hand the point of playing a game is to win -- to conquer and get past challenges -- then looking up the answers to puzzles is not cheating at all; it's simply being efficient in reaching the "win" state.

The point here is that different people are going to approach games in different ways, and it's to be expected that they'll respond to any particular game based on their preferred style of play. Someone who enjoys exploring a complex problem-space will spend hours tinkering with just one puzzle in SpaceChem, while someone who enjoys simple and pleasantly repetitive activities will turn to YouTube a few times before deciding that it's a terrible game, not even really a game at all. (Of course it is a game; it's just not their preferred kind of game.)

So the first thing in designing a puzzle game is to understand that even if you design it for people who enjoy thinking about how to solve puzzles, it's going to be played by people who just want to win. And those latter folks are going to share the answers with the world to prove they won. This is why things like "thottbot" came into existence.

This means the puzzle game designer has a choice: build the game knowing that people will share the answers to every puzzle and hope that the people who enjoy solving puzzles won't be put off by that, or design the game's puzzles in such a way that the solution to each one is unique to that player at that moment.

People have been trying that first approach for a long time now. It doesn't work too well for keeping computer games interesting when it's so easy to share and read solutions.

The second approach is harder. It cuts way back on the number of puzzle types that you can imagine (or borrow from Martin Gardner's "Mathematical Recreations" column in Scientific American). It also has the downside -- a significant one for a commercial game -- that it excludes the "I just want to win" gamer almost entirely. They're not into puzzle games to start with; thinking and imagining and perceiving are not their preferred forms of fun, and when those are the only way to win they won't play.

If you as a game designer are OK with that, then how do you make a puzzle whose solution is unique to each player?

A starting point can be found in SpaceChem. Rather than defining a single perfect solution to each challenge, SpaceChem's "convert these inputs to those outputs" puzzle style allows multiple ways to accomplish each goal. This means the player can either accept the first solution that meets the requirements, or keep thinking and looking for ways to solve the challenge in the fewest symbols or the fewest cycles.

In a way, this design creates two games in one. One game is about finding any solution. This works for the "just want to win" players, who, as soon as they've beaten the current puzzle, can move on to the next one until the whole game is finished. The other game is one of optimization, where the goal is to figure out an optimal solution. The "perception is fun" gamers can still enjoy this kind of puzzle even after the "win" gamer has moved on.

(In a way, this non-binary, "multiple solutions to challenges" design approach is also the theme behind the Looking Glass school of gameplay, as found in games like System Shock and Thief and Deus Ex. I believe this is why these games appeal to the thoughtful/imaginative player as well as the dextrous action gamer.)

SpaceChem's design isn't quite the "unique solution for each gamer" kind of puzzle, though. "Win" gamers can still brute-force solutions, and they'll still upload the most efficient solutions (in both cycles and symbols) for every challenge to YouTube.

Something close enough to unique might be a puzzle design where the puzzles are procedurally generated, and where the number of possible puzzles is something like a hundred million or more. A game might consist of a set of 30 puzzles randomly selected from the full set of possible puzzles. Even working together, gamers will not figure out all the solutions and post them online. Of course, in this case you have to be sure that no one can figure out the procedural generation algorithm! (Good luck with that.)
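To make the idea concrete, here's a minimal sketch of dealing each player a private draw from a puzzle space too large to catalogue online. The puzzle format (a "reach the target using these numbers" challenge), the function names, and the 10^8 figure are all illustrative assumptions, not a proposal for any specific game:

```python
# Sketch: deterministic per-player puzzle dealing from a huge puzzle space.
import random

def generate_puzzle(seed):
    """Deterministically derive one puzzle from a seed."""
    rng = random.Random(seed)
    numbers = [rng.randint(1, 9) for _ in range(6)]   # pieces to work with
    target = rng.randint(10, 999)                     # goal to reach
    return {"numbers": numbers, "target": target}

def deal_puzzles(player_id, count=30, space=10**8):
    """Each player gets their own 30 seeds out of ~10^8 possibilities."""
    rng = random.Random(player_id)
    return [generate_puzzle(rng.randrange(space)) for _ in range(count)]

puzzles = deal_puzzles("player-42")
print(len(puzzles))  # 30
```

The same player ID always yields the same 30 puzzles, so a player can resume their game, but two players will almost never share a full set, so a posted walkthrough helps almost nobody else.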

Finally, there's the puzzle design where each puzzle really is unique to each player so that the solutions can't be looked up but must be solved personally. The "elapsed time since the start of play" puzzle in Fez comes close to this form. Taken to its final form, it might be possible to design a game where the kind of puzzle you get in one round depends on some elements of the puzzles presented in previous rounds.

Even for this kind of game, the solutions to the first few puzzles would still be posted online. (Actually, that might not be a bad thing as it would help new players ease into the game.) But the further into the game, the less likely it would be that someone else would have gotten the same puzzle, solved it, and posted the solution. In a way, this is really a variation on the "massive solution space" approach, but it does have the virtue of including the player's actions in the generation algorithm.
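The "player's actions feed the generation algorithm" idea can be sketched as a seed chain: derive the seed for each round from the player's previous answer, so two players' games diverge as soon as their solutions differ. The hashing scheme and names here are hypothetical:

```python
# Sketch: chaining puzzle seeds off the player's own previous solutions.
import hashlib

def next_seed(prev_seed, player_solution):
    """Fold the player's answer into the seed for the next round."""
    data = f"{prev_seed}:{player_solution}".encode()
    return int.from_bytes(hashlib.sha256(data).digest()[:8], "big")

seed = 12345                          # shared starting seed for round one
seed_a = next_seed(seed, "solution-A")
seed_b = next_seed(seed, "solution-B")
print(seed_a != seed_b)               # different answers, different round two
```

With this structure, only players who answered every prior round identically would ever see the same puzzle, which is exactly why only the first few rounds' solutions would be worth posting.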

There are almost certainly other ways of designing a puzzle game so that each puzzle has more than one correct solution. The overall point is that if you really want puzzles that remain as much fun for the perceptive gamer as for the persistent player, then you need to find a puzzle design that emphasizes perceptiveness and creativity over persistence.

Wednesday, April 4, 2012

Programming the DCPU-16 Simulated Computer in Notch's New 0x10c Game

Notch (creator of the wonderful computer game Minecraft) has released the specs for the DCPU-16, a simulated computer that will be programmable by players of his recently announced space game 0x10c.

The DCPU-16 is interesting in a number of ways. ("Interesting" in the sense of "abstract symbol-systems that can represent other things are interesting." This may not apply to you.)

In capability it's stronger than the PDP-8 but not as advanced as the first commercial microprocessor, the Intel 4004. (Not completely unreasonable since the story of the game is that it uses 1988-era technology in some ways, although the 4004 was actually released in the early '70s.) The simulated CPU supports indexed addressing in all registers, rather than having separate accumulator and index registers as the old CPUs did, which is nice.

On the other hand, currently there's no support for interrupts, code/data share memory with the stack, and there's no support I can see yet for indirect addressing (as the Motorola 6809 and even the PDP-8 had). There's also only one conditional instruction, which just makes coding more tedious. A macro facility may help there, however.

Naturally, some industrious souls have already written simple emulators for this imaginary processor. :)
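The core of such an emulator really is tiny. The sketch below assumes the spec 1.1 instruction word layout (bbbbbbaaaaaaoooo: 4-bit opcode, then 6-bit a and b operands) and implements only SET and ADD with register and inline-literal operands; memory operands, the stack, and overflow handling are deliberately left out. The two encoded example words were hand-assembled for this illustration:

```python
# Toy interpreter for a DCPU-16-style instruction set (SET and ADD only).
class DCPU16:
    def __init__(self, program):
        self.reg = [0] * 8          # A, B, C, X, Y, Z, I, J
        self.mem = list(program) + [0] * (0x10000 - len(program))
        self.pc = 0

    def _load(self, v):
        if v <= 0x07:               # value codes 0x00-0x07: register contents
            return self.reg[v]
        if 0x20 <= v <= 0x3f:       # value codes 0x20-0x3f: literal 0..31
            return v - 0x20
        raise NotImplementedError(f"operand {v:#x} not in this sketch")

    def step(self):
        word = self.mem[self.pc]; self.pc += 1
        op = word & 0xF             # lowest 4 bits: opcode
        a = (word >> 4) & 0x3F      # first operand (destination)
        b = (word >> 10) & 0x3F     # second operand (source)
        if op == 0x1:               # SET a, b
            result = self._load(b)
        elif op == 0x2:             # ADD a, b (16-bit wrap, no overflow reg)
            result = (self._load(a) + self._load(b)) & 0xFFFF
        else:
            raise NotImplementedError(f"opcode {op:#x} not in this sketch")
        if a <= 0x07:               # only register destinations supported here
            self.reg[a] = result

# SET A, 10 then ADD A, 5 -> register A ends at 15
cpu = DCPU16([0xA801, 0x9402])
cpu.step(); cpu.step()
print(hex(cpu.reg[0]))  # 0xf
```

Even this stub is enough to single-step hand-assembled programs, which is presumably why hobbyist emulators appeared within days of the spec.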

Other than some promised "engineering," the rest of the game (fly through space, shoot people) sounds pretty conventional. The ability to do things to ships through clever programming of a simple computer, though -- that elevates this new game to something potentially remarkable.

Monday, February 27, 2012

Joining the Team

Since writing the previous entries on the Storybricks system, I've been invited to join the Storybricks team as Community Manager.

Since I'm now affiliated with Storybricks in a direct way, I wanted to be sure to point out here that this site is my personal game design blogging site. All of the opinions expressed here are my own and don't represent any official positions of the Storybricks organization.

From time to time I may have some thoughts on game design or the game industry that I feel I need to unleash. :) When that happens, again, those will just be my personal views, and they should not be considered to reflect any official statement from or regarding Storybricks.

That said, I certainly encourage you to join us in the Storybricks forums!