Tuesday, November 9, 2010

My Favorite PC Games

A little light reading this time, as compared to the "heavy" stuff I usually write.

Everybody has their favorite bits of entertainment, right? Some people like TV shows; my dad is a huge fan of the "Indian" stories by Joseph Altsheler... and I have a short list of PC games that I consider among the best pieces of entertainment ever created.



Adventure (Crowther & Woods, 1976) — originally written for the PDP-10, but a version was offered as a launch title for the original IBM PC and might have even helped sell a few boxes.

Zork I (Infocom, 1980) — Again adapted from a PDP-10 original, but it still helped garner attention for the IBM PC as a gaming system.

Leisure Suit Larry in the Land of the Lounge Lizards (Sierra On-Line, 1987) — Yes, the PC went there.

Ultima Underworld: The Stygian Abyss (Origin, 1992) — A good story, cool weapons to find, an open world, remarkable level design, quest-giving NPCs, a clever magic system, context-aware music, and even an invented language… and textured full 3D. Wow.

Sim City (Maxis, 1989) — The first great graphical simulation game, since ported to darn near every device on the planet.

Wing Commander (Origin, 1990) — Die, Kilrathi scum! A near-perfect blend of flight sim, arcade dogfighter, and space opera.

Civilization (MicroProse, 1991) — If turn-based games never stop being made and played, Civ will be the reason why. How many hours of work have been lost to people still saying, as the sun rose the next morning, “OK, just one more turn…”?

Darklands (MicroProse, 1992) — Combined a near-simulation of medieval weaponry with an implementation of magic through the invocation of saints, and switched from an overworld view when traveling to an isometric view of frozen fields or stone dungeons for combat. Incredibly addictive.

Master of Orion (MicroProse, 1993) — Made more strategic than Civilization by shrinking the playing field and increasing the importance of the technology tree through its effect on planetary development and player-designed starships, MoO defined what Alan Emrich called the new “4X genre”: explore, expand, exploit, exterminate.

DOOM (id, 1993) — How many PCs did this game sell? How many people upgraded from 386DX-based PCs to the 486 because they saw how smoothly the game played? Still the benchmark game for validating the PC as a viable gaming platform.

System Shock (Blue Sky/Looking Glass, 1994) — Perhaps exceeded only by Deus Ex, System Shock fused sensationally clever systems design with solid open-world gameplay and memorable levels, and topped it all off with SHODAN, arguably one of the greatest villains in all of computer gaming.

Jedi Knight: Dark Forces II (LucasArts, 1997) — Jedi Knight improved on the original Dark Forces with clever Force-based puzzles to solve, highly varied lightsaber battles, and some of the best level designs ever seen in any computer game. In particular, “The Falling Ship” must be counted as one of the greatest levels of all time.

Age of Empires (Ensemble, 1997) — Not quite a real-time strategy game like Warcraft, and not quite a turn-based historical strategy game like Civilization, Age of Empires took the best parts of both genres and created something new and wonderful from them.

Baldur’s Gate (BioWare, 1998) — Introduced BioWare’s Infinity Engine for party-based isometric RPG play, and built a phenomenally good AD&D-based game with it, mating an excellent story with solid fantasy gameplay and including some legendary characters (“Go for the eyes, Boo! Go for the eyes!!”).

Half-Life (Valve, 1998) — With its highly intelligent level and tactical design, thoughtful user interface, brilliant scripted sequences, interesting enemy AI, and funny NPC barks, Half-Life permanently raised the bar for what a first-person shooter could be. Valve’s openness to player modding of its games, starting with Half-Life, also makes this an important PC game.

The Sims (Maxis, 2000) — Subversive brilliance: The Sims takes playing with virtual dollhouses (and our willingness to torment little computer people) and uses it to criticize materialist consumer culture, yet it still manages to be fun just for the crazy things you can do with and to the characters in the game.

The Elder Scrolls IV: Oblivion (Bethesda Softworks, 2006) — Despite the limited number of NPC voice actors, Oblivion signaled a great leap forward for open-world first-person RPGs with high-quality visuals, a large number of quests, and a huge world to explore.

Portal (Valve, 2007) — Portal’s wonderful momentum-based first-person physics puzzles would have made for a good game. The inspired insanity of GLaDOS elevated this good game to an all-time classic.

The Witcher (CD Projekt RED, 2007) — Took the bar for intelligent and mature RPGs and kicked it a mile down the road. The mark of greatness is how many other things are compared to you, and by that measure The Witcher is one of the modern great games.

Minecraft (Mojang Specifications, 2010) — No, it’s not too early to include the alpha version of this game in the list of all-time great PC games. It’s already proof that there absolutely is a good market for exploration-oriented games, as well as a great success story for indie game development.

Runners-up: Wolfenstein 3D, MegaTraveller: The Zhodani Conspiracy, Half-Life 2 & Half-Life 2: Episode 2, Sid Meier’s Alpha Centauri, Star Wars: TIE Fighter, Giants: Citizen Kabuto, Knights of the Old Republic, The Operative: No One Lives Forever, Redneck Rampage.

Sunday, September 19, 2010

Who Owns That Computer Game?

The latest hot debate in the computer game industry has been the question of how the publishers of a game can make money on used game sales.

GameStop, and now Best Buy, have been making some money on reselling "used" games. But the publisher of a game (and eventually the developer) sees not a dime of this secondary market. To try to offset this, some games are now being designed with an online component that must be activated with a special one-use code that comes with the original game. Once a player uses that code for that copy of the game, it's no longer available. This means that only the original purchaser of a game can get full value for their money -- purchasers of resold games that have this feature are stuck with whatever offline content is provided.

Naturally some gamers have objected to this, arguing that they've bought the game and are entitled to all the features of that game.

At this point many of the discussions on the question of reselling computer games eventually sink to the fundamental legal question: when you go to a store (whether it's a retail store such as Best Buy or a digital download system such as Valve's "Steam") and plunk down cash for a game, what is it that you're really getting for your money?

I believe considering that question reveals a severe disconnect between what a lot of people believe the answer is and what it actually is -- legally, ethically, economically, and philosophically.

What a lot of people today seem to think is that when they pay for a game, they're actually buying the game -- that is, that some kind of transfer of ownership of property takes place when they hand over their money. And most sellers promote this by talking about "buying games" from them.

But the reality is that no consumer who pays to play a computer game is actually buying that game. As the End User License Agreement in most commercial games states, all that any player is doing is paying for a license to play the game -- no transfer of ownership of any kind takes place.

To understand this, we need to look at where the notion of the "license" comes from and what it's for. (Note: I'm not a lawyer, but I believe the following to be correct from a layman's perspective.)

The concept of the usage license is defined through a series of beliefs and legal articles encoded in our legal/economic system:

1. Created things, whether tangible/moveable objects or organized ones and zeros, are in our economic system considered to be the personal/private property of the entity who created those things. That applies to software developers (game creators) like anyone else.

2. The fact of ownership confers a couple of fundamental rights: the right to sell an owned thing, and -- importantly -- the right to allow others to use a thing without ceding ownership of that thing.

3. The right to sell a thing is pretty clear, but the important bit is that when you sell something, you give up all claims over how that property can be transferred or used by its new owner. Again, that's a requirement for a workable definition of "ownership."

4. The right to allow others to use a thing without giving up ownership of a thing is also hugely important for a functional economic system because (sort of like the concept of capital lending by banks) it dramatically expands the productive use of created things. People can create more things than they can personally use, but those things still get some use because the law says other people can be allowed to use things belonging to someone else without transferring the actual ownership of those things.

5. This concept of use-without-ownership is important enough to get its own specific and accepted legal formalisms: leasing and licensing, of which the latter is what we care about in terms of computer games. As the End User License Agreement for every game says (if unclearly), when you pay to play a game, you are not "buying" the game itself -- ownership of the code doesn't change hands -- you are instead paying the owner of property for the limited use of that property.

NOTE: There is a link to massively multiplayer online roleplaying games (MMORPGs) here. Real-money trading (RMT) of in-game items is prohibited because of this notion that you don't own the virtual objects in a gameworld. The game's owner licenses you to use the ones and zeros defining those intangible goods. You don't own them, therefore you're not permitted to "sell" them outside the game for real money to other players.

6. So the confusion is about licensing versus ownership -- people believing they have rights to dispose of some piece of software that they don't actually have. When you agree to license the use of some piece of property from that property's owner, you agree to whatever usage restrictions the property owner may impose... in the case of games, up to and including that you don't get to use the online component for free. (Of course these restrictions should be reasonable, but if you agree to an unreasonable restriction that was your decision, and judges have so held.)

For that matter -- despite the fact that publishers have chosen not to enjoin game retailers from the practice -- it would seem that even "reselling" games is generally not permitted, since the person who paid to play a game never owned that game in the first place. However, this brings us to the notion of selling the medium that the game is stored on (such as a CD-ROM or DVD-ROM). Just as books can be resold even if the content belongs to the author or publisher, can't game software be resold?

7. There are usually two objections to the reasoning that licensed games aren't owned by the player and thus can't be resold. One is that ones and zeros are somehow different from tangible property, that the fact of ones and zeros being relatively easy to clone somehow makes them less a form of property and therefore less deserving of the legal protections for the assertion of ownership rights over property. So far, however, I haven't seen anyone make a principled defense of this argument, which really is nothing more than the "because I wanted it" excuse for theft.

8. The other objection is the slightly more sophisticated "first sale" argument. This appears to be based on people reading Wikipedia (edited by individuals who occasionally exclude facts that don't support their preferred beliefs on some subject) and seeing that one judge ruled that an agreement to let someone use a piece of software (through a license) was exactly the same as selling a physical object to someone, which caused the "first sale" doctrine to apply. This (supposedly) set a precedent that software licenses don't exist (regardless of EULAs), and that once you pay for a piece of software you own it and can do whatever you want with it. (Hence "used game" sales.)

In reality, this "first sale" doctrine says that when you sell a copy of an owned object to someone, you can't dictate what the new owner will do with that copy. That's fine... but the judge then magically decided that licensing a piece of software for someone else's use actually constituted a sale. At a stroke, this one judge (or perhaps judges in several states; the Wikipedia entry is strangely vague on this) bizarrely chose to arbitrarily discard the entire concept of for-use licensing that doesn't convey property ownership. On balance, I think that's clearly a bogus ruling; even if you buy the Wikipedia entry there are still plenty of states where established licensing law -- applying to software like any other kind of intangible property -- still happily applies.

On balance, then, I think the problem here is one of understanding. Gamers need to understand that they don't own games -- they're paying for the opportunity to play a game, just like they'd rent an apartment or a car without actually owning either.

At the same time, End User License Agreements ought to be much clearer, with the point made in plain English: "You don't own this software -- what you're paying for is the opportunity to use our computer software to play a computer game. That means you can't resell this software as a 'used game' because you never owned it in the first place."

Whichever publisher does that first will probably get to enjoy a legal fight, since it would be a direct blow to resale revenue from GameStop and Best Buy. But it would at least help to better define how -- or whether -- the "first sale" doctrine applies to computer game software.

Friday, August 20, 2010

On the Varieties of Evil Villains

[Note: a spoiler for the game Dragon Age: Awakening follows.]

One of the design questions that developers of most computer games usually need to address is how to explain the bad guys. You generally play some kind of hero up against the forces of evil -- well, what makes them evil? Why do they oppose whatever it is you want, and how good are they at their job?

Stupid Evil

In nearly all the computer games I can think of, the choice seems to come down to "Stupid Evil" versus "Misunderstood Evil." For action games, it's almost always Stupid Evil. When Stupid Evil is personified, there's simply some generic bad guy. He doesn't need explanations; he's just eeeeeeevil. This is an offhanded justification for the existence of waves of equally stupid enemies for the player's character to cut down like wheat before a scythe.

Wolfenstein 3D and DOOM, which focused on action over story, took this approach. In a Stupid Evil game, the enemies are intended to be disposable challenges with no moral/ethical component; the point is fun through action.

Misunderstood Evil

Some games do try to set the action within a story, where the opposition has a reason for trying to do whatever it does. But the opposition is almost never driven by a truly malign intelligence -- it's most often painted as Misunderstood Evil, as someone who does horrible things only for nearly-plausible reasons. The darkspawn in Dragon Age: Origins were driven by Stupid Evil; they were simply monsters. In Dragon Age: Awakening we are offered an explanation for the waves of enemy beings we've been slaughtering, but it turns out to be a Misunderstood Evil, a good intention gone wrong.

Sometimes the Misunderstood Evil is even deliberately painted as being no more than an alternative lifestyle, in a moral equivalency that says everyone is equally bad. This was the direction Blizzard went when it adapted the Stupid Evil Horde of the Warcraft real-time strategy games to the massively multiplayer online format. In World of Warcraft, the Horde are depicted as ethically no more good or evil than the Alliance.

To some extent, the Imperial faction in the online game Star Wars Galaxies was given the same Misunderstood Evil treatment. Rather than letting players be consciously evil as the Empire was clearly portrayed in the films, the developers felt it was necessary to allow Imperial players to justify their evil actions as not really evil. "Sorry about that whole blowing-up-Alderaan thing, just a misunderstanding, really."

Smart Evil

Every now and then, though, there's a "Smart Evil." This is a villain (such as GLaDOS in the game Portal) who really does hate you and who actively, intelligently and unapologetically wants to do you harm. These are the truly memorable baddies because they don't make any excuses for choosing to knowingly commit acts of evil. Like Lucifer in Paradise Lost, Smart Evil enemies are more interesting than Stupid Evil or Misunderstood Evil (and possibly even more interesting than Good) because they present a clear alternative to the Good that seems like the choice any rational being would make. We want to know *why* they choose to oppose us, why they hate us so...

...and that search for understanding is the beating heart of a great story.

Why aren't there more games that offer the challenge of Smart Evil?

Friday, July 23, 2010

The Prisoner's Dilemma and Multiplayer Game Design

One of the most fascinating things about massively multiplayer online roleplaying games (MMORPGs) is that, although these games are for the most part designed to promote competitive behavior, cooperation among players frequently emerges.

These games do usually provide some mechanism for four or five players to work together as a temporary "pick-up group," or for up to 100-200 players to form a somewhat longer-lasting organization as a "guild" or "corp." But these forms of cooperative play are always subsidiary to competitive behavior -- ultimately winning means doing better than the other guy.

And yet, in these games there can be remarkable examples of cooperative behavior, which seem to emerge despite the rules of the game that clearly favor looking out mostly for oneself. How does this happen? What are the features of these gameworlds that enable islands of cooperation to emerge out of a sea of advantage-taking?

Understanding this means taking a look at a simple game that allows two players to choose whether to cooperate with each other or to "defect" and take advantage of the other player. From this simple game -- with some tweaks -- it's possible to see how people behave, and from that behavior identify the factors that allow cooperation to become a viable strategy.

What follows is an essay on this game -- the "Prisoner's Dilemma" -- that I wrote in 1998. The principles haven't changed, though, so I thought I'd add that essay to this blog since it offers some useful insights into certain core game design concepts that are interesting to think about.

We begin with the surprising (if you think about it) observation: sometimes people choose to cooperate with each other.

THE EMERGENCE OF COOPERATION

Why do we cooperate with one another?

Don't we do better personally when we take advantage of someone who tries to cooperate with us?

How can we justify cooperating when others don't?

It would seem that cooperation is for suckers. And yet there are examples of cooperation all around us: no single person could build a skyscraper, or fight a war, or agree to an international treaty. For all of these things (and many others) to happen, sufficient numbers of self-interested individual human beings must agree to cooperate even when cheating is easy. But why does this happen?

In the late 1970s a researcher named Robert Axelrod was studying the question of how cooperation can emerge in an uncooperative world. Given that in many real-world situations the payoff for taking advantage of others is greater than the reward for cooperation, how can the observed prevalence of cooperative behavior be explained?

THE PRISONER'S DILEMMA

Axelrod began by considering the "Prisoner's Dilemma" experiment. This is a kind of thought game which examines the rewards and punishments for either cooperating or not. The story usually given runs like this:

Suppose you and an accomplice in some crime (a man you barely know) are arrested. The chief detective visits you. He says, "We know you and your pal did it. And you're both going to jail for it. But we always like to make sure, so you've got a choice. You can tell us what your pal did, or you can keep quiet. (And by the way, we're giving him this same choice you're getting.)"

"If you give us evidence against him, and he keeps quiet about you, then you get one year, and he spends six behind bars. If you keep quiet and he gives us the goods on you, you stay for six and he's out in one. If you both keep quiet, you both get three years; if you both turn each other in, you both get five years. So what'll it be?"

You must select one of two options -- you can either cooperate with your fellow prisoner (by keeping quiet) or defect (by providing evidence). You have no way of passing information between you, and you don't know him well enough to predict what he'll choose based on his personality. So which will you choose?

The table below describes what is called the "payoff matrix" of this classic formulation of the Prisoner's Dilemma:

Classic Prisoner's Dilemma

                      Accomplice Cooperates    Accomplice Defects
You Cooperate         R = -3                   S = -6
You Defect            T = -1                   P = -5
Note: In this table the results are the payoffs to you. They are categorized as follows:

T: Temptation for defecting when the other party cooperates

S: "Sucker's payoff" for cooperating when the other party defects

R: Reward to both players for both cooperating

P: Punishment to both players for both defecting


Looking at the table in a purely rational way, it would seem that your best choice is to defect. The two possible results of your defection mean a chance for jail time of either 1 year (if your compatriot cooperates by keeping quiet about you) or 5 years (if he talks), for an average risk of 3 years. On the other hand, if you cooperate with your accomplice, you get 3 years if he cooperates too and 6 years if he defects, for an average risk of 4.5 years. Notice, too, that defection is better no matter what your accomplice chooses: 1 year beats 3 if he cooperates, and 5 years beats 6 if he defects.
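That arithmetic is easy to check directly. Here's a minimal Python sketch (the payoff values are simply the jail terms from the story above, written as negative numbers):

```python
# Payoffs to you in the classic Prisoner's Dilemma story above.
# "C" = cooperate (keep quiet), "D" = defect (give evidence).
PAYOFF = {
    ("C", "C"): -3,  # R: reward for mutual cooperation
    ("C", "D"): -6,  # S: sucker's payoff
    ("D", "C"): -1,  # T: temptation to defect
    ("D", "D"): -5,  # P: punishment for mutual defection
}

for my_move in ("C", "D"):
    outcomes = [PAYOFF[(my_move, theirs)] for theirs in ("C", "D")]
    print(my_move, outcomes, "average:", sum(outcomes) / len(outcomes))

# C [-3, -6] average: -4.5
# D [-1, -5] average: -3.0
# Defection wins on average, and also dominates outright: it is the better
# reply whether the accomplice cooperates (-1 > -3) or defects (-5 > -6).
```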

What's more, you have to assume that your accomplice is capable of thinking about this just like you did. Since he -- just like you -- is likely to conclude that defection is less risky, he'll probably defect... in which case you have even less incentive to cooperate.

And so you defect. And so does your accomplice. And so you both come away worse off than if you had cooperated with each other.

It appears that as long as the temptation to defect is greater than the reward for cooperating, there's no reason to cooperate. Yet it's clear that in the real world we do cooperate. So could there be some other factor at work, the addition of which to the Prisoner's Dilemma might make its outcomes more realistic?

THE ITERATED PRISONER'S DILEMMA

This is where Robert Axelrod enters the story. He considered a possibility others had suggested: What if instead of a single chance to cooperate or defect, you and an ally had numerous opportunities on an ongoing basis? Would that affect your choice on any single interaction?

Suppose you arrange to sell to a fence some stolen goods you regularly receive. To protect you both, it is agreed that while you are leaving the goods in one place, he will leave the payment in another place. At each exchange, each of you will have to decide whether to cooperate by leaving the goods or the money, or to defect by picking up the goods or the money without leaving anything of your own. Furthermore, each of you knows that this arrangement will continue until some unspecified time; neither of you knows when or if at some future date the exchanges will cease.

Assume that the payoff values remain the same as in the basic Prisoner's Dilemma. Does your strategy of defection in the earlier one-shot Prisoner's Dilemma change in an environment of repeated interactions with the same individual?

In 1979, Axelrod devised an ingenious way of testing this possibility (known as the "iterated" Prisoner's Dilemma). He contacted a number of persons in various fields -- mathematicians, experts in conflict resolution, philosophers -- explained the payoffs, and asked each of them to come up with a strategy for a player in an iterated Prisoner's Dilemma tournament that could be encoded in a computer program.

No limitation was placed on strategy length. One strategy might be as simple as "always defect." Others might take into account their memory of what the other player had done on previous turns. "Always cooperate but with a random ten percent chance each encounter of defecting" would be still another strategy, and so on.

Axelrod collected 14 such strategies and encoded each of them in the form of a computer program. (He also added one strategy of his own, RANDOM, which randomly chose cooperation or defection on each turn.) He then began to pit each of the 15 strategies against every other strategy over 200 iterations. This would determine if any one strategy would prove to do well against all other strategies (as measured by average payoffs to that strategy).

The winner was the shortest strategy submitted: four lines of BASIC code from Anatol Rapoport, a professor of psychology and mathematics at the University of Toronto in Ontario, Canada. In its entirety it consisted of the following: cooperate on the first turn, then on every subsequent turn do whatever the other player did on its previous turn. This strategy was dubbed "Tit for Tat".
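To make this concrete, here's a minimal Python sketch of an iterated match in the spirit of Axelrod's tournament. The strategy functions are my own illustrative implementations; the per-move payoffs (T=5, R=3, P=1, S=0) are the standard values Axelrod used:

```python
# Per-move payoffs (points to each player): Axelrod's standard values.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Cooperate on the first move, then echo the opponent's previous move.
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play_match(strategy_a, strategy_b, rounds=200):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play_match(tit_for_tat, always_defect))  # (199, 204)
print(play_match(tit_for_tat, tit_for_tat))    # (600, 600)
```

Note that Tit for Tat "loses" its head-to-head match against Always Defect (199 points to 204) but earns far more when paired with a fellow cooperator -- a pattern that, as we'll see, decided the tournaments.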

THE "ECOLOGICAL" PRISONER'S DILEMMA

After deriving some preliminary conclusions about this result, Axelrod tried an even more interesting innovation. In this new round, for which Axelrod publicly requested submissions from any source, there were 62 entrants plus one (RANDOM, from Axelrod) for a total of 63. All these strategies were then pitted against one another in a giant free-for-all tournament.

The winner was Tit for Tat, submitted again by Rapoport. (But, oddly, by no one else.) Again, it had the highest average score of payoffs.

Axelrod scored the results of the tournament as a 63x63 matrix which showed how each strategy had fared against every other strategy. An analysis of the strategies played revealed that there were six strategies that best represented all the others. Since the 63x63 matrix showed how each strategy played against all others, Axelrod was able to calculate the results of six hypothetical "replays" in which one of the six representative strategies was initially dominant.

Tit for Tat scored first in five of the six replays, and scored second in the sixth.

Then came the cleverest innovation yet. Suppose, Axelrod's notion had it, we performed a hypothetical replay in which all strategies were pitted against each other, and in each turn the "loser" was replaced by a copy of the "winning" strategy, thus altering the population of players? Each strategy's score -- already known from the 63x63 matrix -- could be treated as a measure of "fitness" against other strategies in a kind of "ecological" tournament.
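Here's a minimal Python sketch of that ecological replay, assuming a precomputed per-move score matrix. (The three-strategy matrix below is an invented stand-in for illustration; Axelrod worked from his actual 63x63 results.)

```python
import numpy as np

def ecological_tournament(score_matrix, generations=1000):
    """Each generation, a strategy's share of the population grows in
    proportion to its average payoff against the current population mix
    (a replicator-style stand-in for Axelrod's ecological replay)."""
    n = score_matrix.shape[0]
    population = np.full(n, 1.0 / n)         # start with equal shares
    for _ in range(generations):
        fitness = score_matrix @ population  # average payoff vs. current mix
        population = population * fitness
        population /= population.sum()       # renormalize to shares
    return population

# Invented stand-in: approximate long-run per-move scores for
# Tit for Tat, Always Defect, and Always Cooperate (T=5, R=3, P=1, S=0).
scores = np.array([[3.0, 1.0, 3.0],    # TFT vs (TFT, ALLD, ALLC)
                   [1.0, 1.0, 5.0],    # ALLD
                   [3.0, 0.0, 3.0]])   # ALLC
print(ecological_tournament(scores))
# ALLD feeds on the exploitable ALLC players at first, then starves and
# dwindles once Tit for Tat dominates -- echoing Axelrod's result.
```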

The results left no doubt. The lowest-ranked strategies, which tended to be "not-nice" (in other words, which tried to defect occasionally to see what they could get away with), were extinct within 200 rounds. Only one not-nice strategy (which had been ranked eighth in the original 63x63 competition) lasted past 400 rounds, but by then the population of surviving strategies consisted only of those which replied to defections with immediate retaliation. Because the not-nice strategy had no more strategies which could be taken advantage of, it began a precipitous decline. By the thousandth round, it too was virtually eliminated.

And the winning strategy? Once again it was Tit for Tat, which was not only the most prevalent strategy at the end of 1000 rounds, but the strategy with the highest rate of growth. Tit for Tat was not merely successful, it was robust -- it did well in all kinds of environments.

Why did Tit for Tat do so well? How could such a simple strategy perform so capably in such a broad mix of more complex strategies? More to the essential point, how could Tit for Tat do so well even when surrounded by strategies which depended on defecting and so would supposedly tend to earn better payoffs? It appeared that a strategy which cooperated by default was able to not only survive but actually thrive amidst a sea of defectors.

In other words, cooperation evolved over time in a world dominated by uncooperative players. If this simulation bore any relation to the real world of humans, there could be some important lessons in it for us.

HOW COOPERATION WORKS

What actual, emotion-driven human beings do with individual choices of cooperation or defection is, of course, unpredictable. But in general, rational players will tend to make similar choices. This allows those interested in human behavior to work out some mathematical predictions of behavior.

In this case, it turns out that a careful choice of the four payoffs for cooperation or defection results in being able to say that there exists a rational choice of cooperation, even in the midst of a majority of defectors. Specifically, Axelrod found that if just four conditions were met, cooperation can be the most rational choice -- even in a population consisting almost entirely of defectors. These are the "world" conditions that affect whether a gameworld such as a MMORPG will tend to encourage cooperative behavior to emerge or not.

First, the players must be able to recognize one another. Anonymity, because it decreases the penalty for defection in an environment of iterated interactions, tends to work against the evolution of a population of cooperators. This has a direct impact on MMORPGs, in which players are somewhat recognizable by the names they choose for their characters, but because they can play multiple characters on multiple servers, players are for the most part anonymous to each other. (This is why Blizzard's recent attempt to switch to its "Real ID" system on its game forums had a chance of promoting more cooperative behavior there, instead of the flaming and hyperemotional verbal abuse -- "defection" behavior -- that characterizes game forums currently. It's unfortunate that Blizzard's concept was shouted down and not given a chance to be implemented; it would have been useful to see whether changing the rules of the "world" to minimize anonymity would, as suggested here, have encouraged more cooperative discussion.)

Second, the total number of potential opportunities for interaction must be unknown to each player. It is the uncertainty preventing players from calculating just how much defection they can get away with that decreases the long-term reward for defection. If you don't know when your final interaction will be, and thus can't plan to defect on that turn, you must take into your calculations the fact that what you do on this interaction will affect the response to you on the next interaction. This creates an incentive to cooperate.

Third, the payoff for mutual cooperation (that is, both players cooperate with each other) in each interaction must be greater than the average payoff of a cooperation-defection pairing (the temptation payoff to the defector plus the sucker's payoff, divided by 2). In mathematical terms, this is the condition in which R > (T + S) / 2. With the classic payoffs above, that's -3 > (-1 + -6) / 2 = -3.5, so the condition holds; it's what prevents two players from profiting by simply taking turns exploiting each other.

Fourth and last, there must be a certain minimum proportion of cooperating players in the population... and here was one of the greatest surprises. Axelrod calculated that -- amazingly -- if all the preceding conditions are met, and there is a high probability that players which have interacted before will do so again (specifically, ninety percent), then cooperation can eventually evolve to include the entire population if only five percent of the total initial population consists of cooperators. It's reasonable to expect that there must be enough cooperators so that they can create a sort of island of trust in a sea of defection. What's surprising is that so few are necessary.
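That five percent figure can be sanity-checked with a short back-of-the-envelope calculation (a simplified sketch, not Axelrod's full derivation), using his standard per-move payoffs of T=5, R=3, P=1, S=0 and his ninety percent probability w that two players who have met will meet again:

```python
T, R, P, S = 5, 3, 1, 0  # Axelrod's standard tournament payoffs
w = 0.9                  # probability that any two players meet again

# Expected total payoffs over an indefinitely repeated game:
v_tft_vs_tft = R / (1 - w)            # endless mutual cooperation: 30
v_tft_vs_alld = S + w * P / (1 - w)   # suckered once, then mutual defection: 9
v_alld_vs_alld = P / (1 - w)          # endless mutual defection: 10

# A Tit for Tat newcomer meets fellow cluster members with probability p
# and native defectors otherwise. The cluster can invade once:
#   p * 30 + (1 - p) * 9 > 10   =>   p > 1/21, i.e. roughly 5 percent.
p = 1 / 21
print(p * v_tft_vs_tft + (1 - p) * v_tft_vs_alld)  # 10.0, the break-even point
print(round(100 * p, 1), "percent")                # 4.8
```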

HOW TIT FOR TAT WORKS

Axelrod concluded that Tit for Tat succeeded not by trying to do the absolute best for itself in every transaction, but by trying to maximize the sum of its own and the other player's reward in all transactions combined. In other words, Tit for Tat did well for itself because the effect of its strategy was to allow every player with whom it interacted to do well.

An intriguing aspect of this is found in the raw scores of the various Prisoner's Dilemma tournaments. Looking at the numbers, it quickly becomes obvious that in individual encounters Tit for Tat never did better than strategies which were more "aggressive" (i.e., defected more often) or -- interestingly -- strategies which were more "forgiving" (i.e., didn't always respond immediately to a defection with a defection of its own). In individual transactions, Tit for Tat's numbers were solidly middle-of-the-road.

But over iterated transactions the consequences of defection began to outweigh the benefits. As more players started to resemble Tit for Tat, which always retaliated immediately to a defection but was always open to cooperation, the long-term payoff for defection dropped. Soon there were no players who could be taken advantage of by a defecting strategy. Meanwhile, the Tit for Tat-like cooperating strategies were busy cooperating. Their long-term payoffs were never outstanding... just better than those of the defectors.

THE PRINCIPLES OF TIT FOR TAT

Axelrod distilled several principles from his observation of how well Tit for Tat did against various defecting and cooperating players. Not only do these explain how Tit for Tat did better than even other cooperating players, they have useful implications for real world human interactions.

Be Nice
Don't be the first to defect. Assume cooperativeness on the part of others. If you go into an interaction assuming that you're going to get ripped off, then you might as well try to take advantage of the other person. But if instead the other person turns out to have been willing to cooperate with you, you've just missed a chance for both of you to do well.

Be Forgiving
Don't overreact. When taken advantage of, retaliate once, then stop. Meeting one defection with a harsh response can create a series of echoing mutual defections that prevent cooperation from ever occurring.

Be Provocable
When a defection occurs, always respond in kind. Don't be too forgiving. In the instructions for the second tournament, Axelrod included the two lessons ("be nice" and "be forgiving") that he had drawn from the first tournament. Several of those who submitted second-tournament strategies concluded that being forgiving was essential to the evolution of cooperation, so their strategies tended to let a few defections slide. In effect, these strategies tried to elicit cooperation by allowing not-nice players to take advantage of them without penalty. But the actual result was to encourage not-nice strategies to keep defecting: a lesser penalty made defection more profitable and cooperation correspondingly less valuable. A better choice is to always defect when provoked.

Be Clear
Respond in kind immediately. Strategies that tried to be clever tended to appear unresponsive, which elicited defection. (If your attempts to cooperate are ignored, then you might as well defect to get as much as you can while you can.) Cooperation should meet with immediate cooperation, and a defection should be met with an immediate defection.

THE IMPLICATIONS OF TIT FOR TAT

What if anything does this mean for actual human interactions? There is a strong suggestion that the behaviors that elicit cooperation in this restricted world of the Prisoner's Dilemma do indeed carry over to our real world.

One finding particularly worthy of note was the evidence that too much forgiveness actually works against the evolution of cooperation. The notion of "tolerance" so trendy today turns out to be an invitation to defection, rather than the means to a better society as its proponents claim. While being "nice" is necessary to evoke cooperation in others, it's not enough. Bad behavior requires a proportionate response, or the result will be more bad behavior.

This applies as well to criminal justice. There is a vocal minority today calling for a reduced emphasis on incarceration as societal retribution, and a commensurate greater attention given to rehabilitation. Without disputing the goodness of the impulse, the success of Tit for Tat suggests that it's a bad idea. If an individual member of a society defects (commits a crime), that defection should provoke an immediate retaliation from society. Not an overreaction, but some equivalent reaction nonetheless appears necessary in order to elicit future cooperation from that individual, and to demonstrate to other players the value of cooperation and the price of defection.

The ancient policy of lex talionis -- "an eye for an eye, and a tooth for a tooth" -- may be the wisest policy after all.

THE FUTURE OF COOPERATION

For cooperation to evolve, there have to be enough cooperators who interact with one another on a sufficiently regular basis. Such "islands of cooperation," once established, can grow... but too small an island will sink beneath the waves of defectors.

One critical factor, not addressed by any other commentator on Axelrod's work that I've seen, concerns being able to recognize other players. The Tit for Tat strategy depends on remembering what another player did on the immediately previous turn. But if the other player is anonymous, or is encountered only once, it's impossible to associate a history with that player. This leads either to cooperating with an unknown (and possibly being taken advantage of repeatedly) or defecting from lack of trust (and possibly missing an opportunity to create an environment of cooperation).

This takes on added relevance today. Not only are the streets and highways filled with persons whom we'll never see again -- and who thus have no qualms about defecting (in other words, driving like jerks) -- we are spending more time surfing the Web as anonymous entities than we once did sitting in the back yard talking with our neighbors. Our contacts with other players in the game of trust/don't-trust are more likely to be brief encounters with strangers: ephemeral and anonymous. Under such conditions, not only is it unlikely that new clusters of cooperative behavior will evolve, but even the maintenance of what cooperation there is becomes difficult. Trust breaks down.

How long can such a state of affairs last?

Can we find a way to balance legitimate privacy interests with the guaranteed recognition required for cooperation to emerge? Or are anonymity, and the Hobbesian everyone-out-for-himself world it imposes, inevitable?



". . . perhaps the chief thesis of the book on The Fatal Conceit . . . is that the basic morals of property and honesty, which created our civilization and the modern numbers of mankind, was the outcome of a process of selective evolution, in the course of which always those practices prevailed, which allowed the groups which adopted them to multiply most rapidly (mostly at their periphery among people who already profited from them without yet having fully adopted them)."
-- letter from Friedrich A. Hayek to Julian Simon, Nov. 6, 1981.

Friday, June 25, 2010

Game Development and National Tax Policy

There's been a lot of yelling lately about the UK coalition parties' campaign-trail promise of a moderate reduction in the rate at which game producers are taxed, and the elimination of that promised tax break from the national budget the Conservative/Liberal Democrat government has actually proposed.

Let's leave aside for a moment the issue of politicians breaking promises made while campaigning. It's worth stopping for a second to consider the language that's being used in the news stories that gamers are reading to inform themselves about this issue of tax breaks for game development.

As a representative example, here are a couple of quotes from the story as written by the Computer and Video Games website:

"[P]ulling the tax breaks from the budget saves the country £190 million [$283.54 million]."

"Although the body [TIGA] admits that the planned tax break would cost £192 million, it claims over £400m would be recouped in tax receipts."

Both of these statements are misleading; they describe tax policy in terms of "costs" and "savings" that have no connection to the normal meanings of these words.

Reducing the rate at which businesses will be taxed in the future is not a "cost" because the government hasn't taken that money yet and therefore doesn't have it to spend.

And choosing not to reduce a tax rate does not "save" money. All it does is continue to extract money from producers at the existing rate. There is no "savings" in the normal (non-government weenie) sense of preserving money that would otherwise have been spent because, as noted above, no existing money is spent if the rate of future taxation is reduced.

More importantly, this deliberate misuse of language (which is definitely not restricted to the UK government) to portray reducing national taxation as "costs" and preserving existing tax rates as "savings" flows entirely from the assumption that all money belongs to the government to begin with. Only if all money is considered the government's money is it a "cost" to reduce the rate at which government takes that money from the producers who earn it through their labor, or a "savings" to continue taking the existing amount of money from businesses and individuals.

That assumption needs to be questioned.

Seen from the perspective that money belongs to the people and corporations who work together to earn it, a reduction in the rate at which the income of UK games producers is taken by the government would mean several things: a future UK government would have slightly less money available to spend; UK games producers would have more money available to them for investment in game development and publishing projects; and -- importantly -- investment in more games production than otherwise would have happened (because lower tax rates mean more money available for new projects) would potentially result in the government receiving *more* money in tax receipts even at a lower tax rate (though perhaps not as much as £400m as TIGA speculates).

But consider: if a reduction (not "elimination"!) of taxes for games producers could actually help generate slightly higher tax receipts to the government through the increase in business activity prompted by the games producers having more money to invest in new projects, then why not apply that logic across the board? Why make a special deal with games producers, which the government could then turn around and threaten to take away? Instead, why not reduce taxation on all producers to enable revenue-generating capital investment throughout the private sector of the national economy?

And by all accounts, that's precisely what the new budget from the UK coalition government proposes... at least for businesses. As Gamasutra reported in its own story on this issue: "The new budget also raises the value-added tax to 20 percent, makes cuts to National Insurance, and reduces the corporations tax." This is more of a shifting of revenue sources than an actual revenue-generating budget, since the likely benefits of reducing the corporate tax rate will be offset somewhat by hiking the VAT that increases prices everywhere.

Still, it's a step in a better direction than just mindlessly raising tax rates, which fails to maximize taxable new capital because it promotes government spending that is less efficient than private sector investment. Gamers don't need to be upset by the coalition government reneging on its promise of a tax break for the game industry specifically -- *all* industries, including game development, will be getting a break if this budget is enacted. (Gamers and everyone else can certainly be ticked off by politicians breaking promises, but that's an old and separate problem from economic policy.)

It's just a shame that so much of the reporting simply parrots the government spin that reducing tax rates to let people and businesses keep more of the money they work to earn is a "cost," and choosing not to reduce any tax rate constitutes a "savings." Reporters ought to be more careful that the language they use isn't unthinkingly promoting a government's self-interested agenda, and news consumers need to hold journalists to that reasonable standard.

Thursday, May 27, 2010

The Hero No One Recognizes


There's a design question that's been nagging at me for a few years now that I recalled today. Maybe this is a good time to drag it out into the light for a good review.

Although generally an enjoyable game, The Elder Scrolls IV: Oblivion had a few quirks. (Not unexpectedly for such a large gameworld with so many non-player characters and quests for the player to follow.) One of these quirks had to do with the various factions that your character could join.

In Oblivion as it originally shipped, there were two public factions, the Fighters Guild (physical combat) and the Mages Guild (magical combat); two secret factions, the Thieves Guild (stealing stuff) and the Dark Brotherhood (assassination); and the Imperial Arena (gladiatorial-style combat). Your character was able to join each of these organizations and, by successfully completing various quests for the members and leader of each, rise in rank within it.

What I found exceedingly peculiar when I stopped to think about it was that the separate plotlines for rising in rank in these organizations allowed your character to take over as leader or undisputed champion in each one.

In other words, your one character could, by completing every factional plotline, be simultaneously the Archmage of the Mages Guild, the Master of the Fighters Guild, the Gray Fox of the Thieves Guild, Listener of the Dark Brotherhood of assassins, and the Grand Champion of the Imperial Arena.

I didn't play Morrowind, the predecessor to Oblivion, but I understand that it placed some restrictions on what you could do in one faction based on your relationship with another. I assume those restrictions were omitted from Oblivion simply to let the player experience all the factional content, and I understand that from a business perspective... but it just doesn't make any sense from a world-y perspective that a single person (your character or anyone else) would be permitted to control all the resources and personnel of these incredibly powerful organizations.

And this becomes even more problematic if you complete the main questline in the Shivering Isles expansion. Not only do you retain all your factional leadership roles, you become the incarnation of the Daedric god Sheogorath!

This is just too much to swallow. I understand it wouldn't have been much fun to make the reward for mastering a faction to become smothered by bureaucracy, constant second-guessing by underlings, and a never-ending stream of tedious management decisions to make. Even so, why didn't anyone even seem to notice my remarkable public accomplishments? It remains terribly strange to me that no NPC of any station ever expressed a single word of concern, wonder, admiration, fear, or anything else while speaking to someone (my character) who controlled so many of the threads of power in the Empire. How could one person be allowed to be head of all those groups, rivaling or even exceeding the Emperor of Cyrodiil in power, without anyone caring or even noticing?

I should add this didn't "ruin" the game for me. It was just a bit of dialogue programming that Bethesda didn't have time to do.

Even so, it did make the otherwise well-defined gameworld of Oblivion feel less like a plausible world.

Addressing this objection takes us into two related subjects: (NPC) knowledge representation and knowledge application. In other words, how can we define what characters in the gameworld know, and how can we enable them to act in plausible ways on that knowledge?
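As a taste of what that might involve, here's a purely hypothetical Python sketch: world-visible titles become "facts," and an NPC's greeting consults whichever facts that NPC has heard. (All names here are invented; this is not how Oblivion actually works internally.)

```python
# Hypothetical sketch of NPC knowledge representation for the problem above:
# public deeds become "facts" that any NPC's dialogue can consult.

PUBLIC_TITLES = {"Archmage", "Master of the Fighters Guild", "Grand Champion"}

class NPC:
    def __init__(self, name, known_facts=None):
        self.name = name
        self.known_facts = set(known_facts or [])  # what this NPC has heard

    def greet(self, player_titles):
        # React to whichever of the player's public deeds this NPC knows of.
        recognized = self.known_facts & player_titles & PUBLIC_TITLES
        if recognized:
            return f"{self.name}: You honor us, {', '.join(sorted(recognized))}!"
        return f"{self.name}: Good day, stranger."

guard = NPC("Gate Guard", known_facts={"Grand Champion"})
print(guard.greet({"Archmage", "Grand Champion", "Listener"}))
# Gate Guard: You honor us, Grand Champion!
# Secret-faction ranks like "Listener" stay unrecognized, as they should.
```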

I'll take that up in a future blog post.

Monday, April 26, 2010

Interactive Fiction: Character Versus Player

Interactive fiction writer and designer Emily Short recently considered multiple-choice interactive stories, where -- instead of a completely free-form interactive mode in which the game parser tries to figure out what the player asked for -- the game supplies a limited set of predetermined choices for the player to select from, each leading to a different predetermined event within a particular narrative that defines the story the player experiences.

Her concern is that this type of game doesn't allow the kind of emergent storytelling possible in "open-world" roleplaying games (RPGs) such as Fable 2 or Fallout 3. On the other hand, she notes that RPGs aren't very good at focusing game events on a particular well-paced and engaging story, and she attributes this to the smaller "granularity" of events in an RPG which are harder to tie together into a coherent dramatic narrative.

I think that's a rather good way of analyzing these two approaches to storytelling in story-driven games. I do look at it in a slightly different way, however. The incompatibility I see between character stats and story-relevant possibilities is that each approach puts dramatic choice in a completely different person's hands.

Emphasizing character stats means that attributes of the character, such as intelligence or lockpicking skills, must to some degree condition or even determine story choices, taking gameplay out of the player's hands. Too much of that and you get a simulation that plays like a movie -- you lose the "interactive" part of interactive fiction.

But emphasizing on-the-spot decision-making by the player can make the game about the player, rather than about a character existing inside the secondary world of the fiction and acting in a dramatically appropriate way for that character in that world. The fiction has to constrain choice in some way or it's impossible to tell a coherent and world-appropriate story.

Two approaches to synthesizing these models of play in interactive fiction might be described as The Middle Way and Some Of Each.

In The Middle Way you'd try to pick some in-between spot between the player and the character -- a medium granularity. That's probably relatively simple to implement technically; I'm just not sure how satisfying it would be as story-based gameplay. The character's nature would sort of matter, and the player's imagination would sort of matter, but neither could be strongly activated. I suspect there are already some games like this....

Alternatively, it might be possible to develop gameplay where you have both small choices (determined by the character's nature) that add up over time and drama-important choices (actively selected by the player) that form the core of a particular story. This sounds a bit to me like a system in which the player decides "what" to do and the character's nature (as encoded in RPG-style statistics) has an impact on "how" each choice is expressed.
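Here's a tiny hypothetical Python sketch of that "what versus how" split (the choices, traits, and outcome text are all invented for illustration):

```python
# Hypothetical sketch: the player picks WHAT to do; the character's
# nature (RPG-style stats) picks HOW that choice is expressed.

EXPRESSIONS = {
    # (player_choice, dominant_trait) -> how the scene actually plays out
    ("confront_baron", "roguish"):   "You pick the lock and ambush the Baron in his study.",
    ("confront_baron", "honorable"): "You stride into the great hall and challenge the Baron openly.",
    ("spare_prisoner", "roguish"):   "You free the prisoner -- after lifting his purse.",
    ("spare_prisoner", "honorable"): "You cut the ropes and return the prisoner's sword.",
}

def express(player_choice, character_traits):
    # The character's highest-rated trait colors the expression of the choice.
    dominant = max(character_traits, key=character_traits.get)
    return EXPRESSIONS[(player_choice, dominant)]

rogue = {"roguish": 8, "honorable": 3}
print(express("confront_baron", rogue))
# You pick the lock and ambush the Baron in his study.
```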

I like the sound of that second approach; it feels to me like it might have a "best of both worlds" quality. But I can see a couple of potential gotchas. One is technical: for each major choice you'd have to code multiple ways it could be expressed based on each one of the relevant qualities of a character's nature. That could wind up being pretty cost-intensive, even if the payoff might be significant.

The other possible problem is how players might feel about such a system that takes some choice away from them. If for example I chose to create a character with a roguish nature, should I be unhappy if, when I make a particular choice at a dramatic opportunity, my character twists my choice in a roguish way with consequences I might not have preferred? Or would that help my choice feel even more satisfying than those common in today's story-based games where my "character" is little more than an empty vehicle in which I-the-player ride?

Still, I think the Some Of Each approach probably holds the most promise for interactive fiction that is both satisfying as drama and enjoyable as gameplay. I'll have to think some more about this.

Saturday, March 27, 2010

Living in the Living World Game

Back in June of 2008, I published a blog post describing a game design concept that fascinated me: the Living World game.

Since then I've remained fascinated by the idea of a game that models social dynamics at both a very high and a personal level, and that runs constantly like an online game (simulating changes while the game is turned off). I would still love to see a game consciously developed as a gameworld, where the fun comes not from following some game developer's linear story to a predetermined conclusion but from being a part of and exploring a truly enormous and dynamic open world.

One of the possibilities in my original work-up of this concept was that players would create their own characters and, in typical role-playing game fashion, mold and "level up" those characters. Lately, however, I've been thinking it could be much more interesting to eliminate level-based character advancement entirely.

Rather than the standard RPG model of creating a new character out of thin air and then leveling up that character through progressively more difficult challenges, a Living World might not need to copy that model. Instead, the primary gameplay system really ought to be designed to highlight the scope and dynamic features that make the Living World unique.

And what comes most strongly to mind is not giving the player a new character at all, but instead making gameplay about "jumping into" existing NPCs. (Credit Where Credit Is Due Dept.: this line of thinking was inspired in part by some comments made by Owain abArawn to the Gamasutra version of the original Living World essay.)

In this model (which, yes, now that I think about it does somewhat resemble the notion behind "Quantum Leap"), players starting the game would first experience the gameworld from a disembodied perspective. They would be shown the Living World from a great distance; the camera would then zoom in to various regions and then to individual NPCs going about their lives. The player would be shown how to take control of two different NPCs as a tutorial. After returning to the big picture view of the world (to impress on the player the scope of the Living World), they would then be free to begin exploring the world by inhabiting the NPCs of their choice.

This is where the core gameplay of the Living World would be found. Every NPC would be defined to have a particular role. (Role definition would be part of the toolset for building the content of the Living World.) Players would be able to observe or inspect an NPC to see the role he or she has, and then choose whether or not to inhabit that NPC.

Inhabiting an NPC immediately gives the player access to all the skills -- defined as gameplay activities -- that are associated with the role to which that NPC was assigned. So if you want to fight monsters, inhabit a Ranger; if you want to chase thieves, jump into a City Guard NPC; if you want to practice crafting, inhabit a Blacksmith or Baker; if you're looking for economic gameplay, inhabit a Merchant; and if you feel like trying your hand at the very difficult game of city or kingdom management, you would be able to inhabit a town's Mayor or even the King of an entire nation.

As noted above, in each case the kinds of gameplay available to you would depend on the role of the character you choose to inhabit. These gameplay activities would need to be predefined by the developer as selectable actions which use and/or affect objects inside the gameworld. The "Ranger" role, for example, might be defined to optimize skills (gameplay actions) such as Shortbow, Shortsword, Tracking, Herbal Medicine, and Stealth. Meanwhile, the "City Guard" role could be defined as having special training in Shortsword, Tracking, Negotiation, Perception, and the Guard badge which allows that character to summon other Guards. Other roles would have their own appropriate skill optimizations.
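A minimal sketch of that role system might look like this (the Ranger and City Guard skill lists come from the examples above; the Blacksmith skills and the class structure are my own invention):

```python
# Sketch of the role system described above: inhabiting an NPC grants
# the player exactly the skill set defined for that NPC's role.

ROLES = {
    "Ranger":     {"Shortbow", "Shortsword", "Tracking", "Herbal Medicine", "Stealth"},
    "City Guard": {"Shortsword", "Tracking", "Negotiation", "Perception",
                   "Guard Badge"},  # the badge lets this character summon other Guards
    "Blacksmith": {"Smithing", "Appraisal", "Haggling"},  # illustrative skills
}

class NPC:
    def __init__(self, name, role):
        self.name = name
        self.role = role
        self.inhabited = False

class Player:
    def __init__(self):
        self.body = None

    def inhabit(self, npc):
        # Leave the current body (if any) and take over the new one.
        if self.body:
            self.body.inhabited = False
        npc.inhabited = True
        self.body = npc

    @property
    def skills(self):
        # Skills come entirely from the inhabited NPC's role.
        return ROLES[self.body.role] if self.body else set()

player = Player()
player.inhabit(NPC("Weyland", "Ranger"))
print(sorted(player.skills))  # the full Ranger skill set, available immediately
```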

In addition to having particular skill optimizations, roles would also be keyed to preconstructed gameplay content. In other words, the role of the NPC you choose determines the kind of prescripted gameplay content offered to you. If you choose to inhabit a Ranger, not only would you be free to explore the gameworld while doing Ranger-y things, but the act of inhabiting that NPC would also activate any number of world events (either automatically or at the player's discretion) in which Ranger skills could be particularly useful -- say, a monster invasion, or finding a lost child for some villagers. The same would be true for every role. Even highly mundane roles such as Baker would have gameplay events (e.g., baking challenges) scripted for them that would be fun for someone who voluntarily chose to be a Baker. (Note that this design integrates with the "epic storylines" feature of a Living World game.)
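
One way to sketch the activation of role-keyed content: inhabiting an NPC looks up that role's pool of prescripted events and either fires one or leaves the pool available for the player to trigger. The event names and the trigger policy below are invented for illustration:

    import random

    # Hypothetical pools of prescripted world events, keyed by role.
    ROLE_EVENTS = {
        "Ranger": ["monster_invasion", "find_lost_child"],
        "Baker":  ["baking_challenge"],
    }

    def start_event(event_id: str) -> None:
        print(f"Activating world event: {event_id}")  # stand-in for the engine

    def on_inhabit(role_name: str, auto_trigger: bool = True) -> list[str]:
        """Activate role-keyed content when a player inhabits an NPC.

        Whether an event fires automatically or waits for the player's
        discretion would be a per-event design decision.
        """
        pool = ROLE_EVENTS.get(role_name, [])
        if auto_trigger and pool:
            start_event(random.choice(pool))
        return pool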

The assignment of NPCs to particular roles would need to be integrated into the social dynamics of the gameworld. Because the Living World models the birth, growth, and death of individual NPCs (possibly only at a statistical level except where the player has traveled or currently exists), the mechanism of children maturing into adults would need to be fitted into the role system. For example, a village experiencing dynamic growth would steer its children toward high-priority roles as openings occur (due to death, injury, retirement, or possession by the player), then toward useful roles, then toward expansion-oriented or support roles as the group's resources and rules permit. A village or social group that can't or doesn't care to continue, on the other hand, would probably find most of its children emigrating to cities or taking on "solo/loner" roles. In other words, the Living World needs to assign roles to juveniles dynamically, based on whether the social group is trying to maintain and enhance itself or to break up in favor of forming new groups elsewhere.
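
That priority scheme could be sketched as a simple tiered assignment: a failing group sends its children away, while a growing one fills vacancies in priority order. The tiers, the growth threshold, and the role names below are all invented for illustration:

    # Hypothetical tiers of roles, in the priority order described above.
    HIGH_PRIORITY = ["Blacksmith", "Healer", "City Guard"]
    USEFUL        = ["Baker", "Merchant"]
    EXPANSION     = ["Ranger", "Scout", "Trader"]

    def assign_role(village_growth: float, open_roles: set[str]) -> str:
        """Pick a role for a child coming of age in a given social group."""
        if village_growth < 0.0:            # the group is failing: break it up
            return "Emigrant"               # leaves for a city, or goes solo
        for tier in (HIGH_PRIORITY, USEFUL, EXPANSION):
            for role in tier:
                if role in open_roles:      # fill vacancies in priority order
                    return role
        return "Laborer"                    # generic support role as a fallback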

One other note on the gameplay design approach of "inhabiting" NPCs: Because the Living World would simulate not just large-scale social movements but personal social structures as well, in many cases the NPC whom the player chooses to inhabit would be part of a social group -- a member of the city guards, or the blacksmith for a village, or a husband or wife who is the parent to some children. This raises the question: should those NPCs realize when someone who is part of their social group is "inhabited" by an entity with the unique power of possessing people's bodies and minds?

I very much like the idea that these NPCs would definitely know when someone in their group is inhabited by the player, and that they would be able to react toward the player according to their beliefs and fears. It seems reasonable that stories of this kind of possession, occurring for thousands of years, would be part of the legends and histories of all the peoples of the Living World. Would you be considered a god? or a demon? or perhaps just a very talented sorcerer? If you choose to inhabit an NPC who, in addition to her gameplay role, is a wife and mother, how might her husband and children react to you when they realize that she no longer exists as herself and that you might do anything at all to her, from taking her across the wilderness to getting her killed? These relationships, I think, would also be an excellent opportunity for prescripted gameplay activities to be activated, with the twist that they could carry more than the usual amount of emotional resonance -- would you choose to risk the life of an NPC whose children are begging you not to take her away?
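
If NPCs always know when one of their own is inhabited, the design question becomes how their beliefs map onto reactions. A toy sketch of that mapping, with the belief categories and reactions invented for illustration:

    # Hypothetical mapping from an NPC's beliefs about possession to a reaction.
    BELIEF_REACTIONS = {
        "god":      "revere",    # the possessor is divine: obey and worship
        "demon":    "flee",      # the possessor is evil: panic, summon help
        "sorcerer": "bargain",   # a mortal with power: negotiate or plead
    }

    def react_to_possession(belief: str, is_family: bool) -> str:
        """How an NPC responds on realizing that a group member is inhabited."""
        reaction = BELIEF_REACTIONS.get(belief, "avoid")
        if is_family and reaction != "revere":
            return "plead"       # family members beg the player to spare her
        return reaction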

I'll be thinking more about this approach to character-based gameplay within the Living World concept. For now, the more I think about it, the better I like it as a system that supports and improves the overall design of the Living World game.

Thursday, February 25, 2010

Dear Producers: You Are Not Designers

Why do game designers allow producers to dictate design choices?

A little background: I'm a systems designer by inclination and a project manager (outside the game development industry) by professional experience. So I appreciate the value of both roles in creating properly functional systems on a budget and a schedule.

But that experience is exactly why I'm annoyed every time I discover that some producer -- or worse, an executive -- is dictating design choices for a game. People acting in multiple roles tend to do none of them sufficiently well, and people who aren't designers by choice are probably going to make more bad design decisions than people who deliberately chose design as their craft.

So why do so many producers seem unable to stop themselves from dabbling in game design? Why is this allowed to happen so frequently in game development?

When a developer blog/chat/interview reveals that it's the producer who is determining core design elements, that suggests two things to me. First, the producer is probably neglecting actual production-related tasks in favor of fiddling with the design. That's a bad sign for delivering a good product on time. Who's monitoring and managing the development process while the Producer is arguing with the Lead Designer over whether the game's hero needs a sidekick character?

And second, why the heck did they hire a Lead Designer if they're not going to let that person do the job? The title Lead Designer needs to mean something. That person needs to have the authority to specify the high-level systems of a game, and that creative authority needs to be protected from producer encroachment. If the producers or other suits are so concerned about the creative direction of a game that they feel they need to do the designer's job for them, then the correct action is to fire that designer and hire a new one; interfering can only dilute the vision for the game. When the designer does not have the power to enforce a consistent creative vision -- when non-designers can impose their preferences solely because the org chart says they have the power to do so -- the result will be a game that plays like it was designed by a committee... because it was.

Frankly, I think most producers are not equipped to be designers, and they should not try to fill both roles. A good producer is a valuable person, but that value is diminished when they're not doing their job (producing) because they're trying to do someone else's (designing). If you're a producer, but what you really want is to design, then demonstrate the courage of your convictions: step down as Producer and ask to be hired as the Lead Designer or Creative Director or its equivalent.

Finally, I note that this applies to both internal producers and publisher producers. Personally, I wouldn't want to sign a deal with a publisher that didn't include some text saying that final authority for creative decisions rests with the developer. (Naturally such a contract should also include a provision allowing the publisher to back out of the deal if they feel strongly enough that the creative direction is just too wrong.) I don't see any good in producers dictating design choices, whether those producers are part of the development studio or represent the publisher.

This does not mean that producers (or any other member of the development team) shouldn't be able to offer design ideas. Designers aren't perfect; sometimes it's helpful to hear what others think. What I'm arguing against here is allowing the Lead Producer or Senior Vice President in Charge of Whatever to dictate design choices simply because they sign the team's checks.

Bottom line: Power in a corporate hierarchy does not imply design competence. Let the designers do the designing, and let producers stick to producing.

Of course I recognize that this is wishful thinking on my part, and that producers of whatever ilk will continue to abuse their power by overriding the creative work of the people who supposedly were hired on the strength of their design abilities.

Doesn't mean I can't complain about it as a sub-optimal business process....

Thursday, February 18, 2010

MMORPGs: The Evolutionary Dead End

Brian "Psychochild" Green, in a thoughtful post (MMOs Change Over Time) on his blog, asks the question: Do you enjoy your favorite MMORPG more or less because of the changes that have been applied to it?

Unhappily, my MMORPG experiences since EQ have led me to question the premise itself: as a gamer, I’m just not interested in playing any of these games any more, because my perception is that they have ceased to change in any meaningful way.

I find it terrifically frustrating to consider the fact that these games, as forms of virtual worlds, could be about anything... and yet the best that their designers today can do is to copy the mechanics that have become conventions of the genre. I recently helped beta test an online game based on a well-known IP, and I was shocked to see that many of the mechanics, far from being designed fresh to fit the IP, were not merely copied from existing MMORPGs -- they were actually called by exactly the same names: root, buff, aggro. But this game’s designers are hardly alone in believing that these arbitrary mechanics are non-negotiable requirements that simply have to be copied wholesale into the core design; seemingly every other studio in the genre believes it too.

Even at the next level up, every MMORPG designer seems obsessively fixated on delivering only one kind of entertainment experience: kill mobs and take their stuff. Ask today’s typical MMORPG player to define “MMORPG,” and that’s how they’ll describe the whole genre: combat and loot.

Change? What change?

When I see game after game aping their predecessors (while ads proclaim them to be “revolutionary”), and then think about the possibilities of MMORPG play that are bounded only by human imagination... yes. It’s infuriating.

Why are so many designers willing to put up with such limits to creative expression?

Why are so many gamers willing to tolerate such an unnecessary lack of choice in entertainment experiences?

From my perspective, the problem with MMORPGs is not that there is too much change -- it’s that the genre has already gone into creative rigor mortis long before its time. Whatever changes we perceive are merely various stages of decay and rot.

At this point I’m about ready to declare that monolithic MMORPGs are the shambling dead, and that social games on networks like Facebook will soon rule the Earth as our new overlords.

Is there any cause to think I’m wrong in that forecast? Is there any hope for the MMORPG?

Tuesday, January 26, 2010

Let's Pwn Homer, Too!

Electronic Arts, in the wisdom of its executives, has chosen to fund the conversion of Dante's Inferno, an epic allegorical poem concerning the search for divine love in an impure world, into a God-of-War-like slasher game.

At first I thought this was completely nuts, being both derivative as a game and artistically rude. But no; clearly I Just Don't Get It. What's wrong with strip-mining the classics for some completely irrelevant element that seems roughly similar to a mindless hack-'em-up game someone else has already made money from?

Following the lead of some very highly paid executives at a game publishing corporation, I have been inspired to look for similar development opportunities. Here are some ideas off the top of my head for giving a few more pieces of Classical and Western verse the chainsaws and blood and Adrenaline-Pumping Action! their authors no doubt always intended them to have.

  • The Road Not Taken: Fire and Ice Edition
  • The Rime of the Ancient Mariner: Albatross Hunter 3D
  • Metamorphoses: Titan vs. Titan Brawling
  • The Raven: Lenore's Revenge
  • The Song of Hiawatha: Extreme Canoeing
  • Elegy Written in a Country Church-Yard: Zombie Apocalypse 2

More to come on this subject, no doubt....