Friday, May 23, 2008
Paul Graham has suggested that the great goal of a programming language should be succinctness.
On reflection, I would argue that if being succinct were truly the ne plus ultra of programming, we'd all be using APL. Alternatively, we'd be using Lisp... but despite Paul Graham's obvious predilection for Lisp, it never became the language of choice for commercial software development. So why not?
As someone who (after earning a CS degree) has toiled in the trenches as a working programmer for a couple of decades, and who's also been a project manager for some of those years, what I've come to believe is simple if unexciting: terse languages are harder for normal human beings to maintain than simpler (and therefore more verbose) languages, and a commercially effective programming language must yield programs that are simple enough for normal human beings to understand and maintain.
The first part of this argument speaks to the people who actually use computer languages in commercial environments to build functional software systems. Although universities attract the people who are comfortable enough with abstractions and theory that they can get used to content-rich procedural languages, businesses hire practical programmers who are most comfortable with simple imperative statements. Those are the folks who are available for hire to crank out code.
Not surprisingly, then, programming languages that clearly spell out what a computer should do become most popular among those who must not only write programs but who also need those programs to be understood and modified later by other typical programmers. Languages like Lisp and Prolog are too high-bandwidth; they are so succinct that they pack too much information into the typical page for the average programmer to grasp at a glance. This is also the problem with baroque Unix-flavored scripting languages like Perl -- a program that does anything useful is too dense for the normal business programmer to understand without spending a lot of time groveling over it.
The second part of this argument concerns the value of apparent verbosity. Why does English retain articles like "the" and "an"? We don't need them -- other human languages get by without them -- but we still use them. I'd submit that these bits of syntactic sugar survive because they have value in understanding speech. They give our brains time to comprehend the nouns and verbs we hear before the next batch comes at us. Similarly, I would not be surprised if more "verbose" languages like C and Java survive (and even thrive) in commercial environments versus more succinct languages like Lisp because the former are designed with enough filler to let the average programmer understand one page's worth of the typical program.
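As a contrived illustration of the density trade-off (the data and names here are my own inventions, not drawn from any particular codebase), here is the same computation written twice in Python -- once in a dense, "succinct" style and once with the connective tissue spelled out:

```python
# Invented sample data: per-region sales records as (amount, ok) pairs.
groups = {
    "east": [(120, True), (80, False)],
    "west": [(50, True), (70, True)],
}

# Succinct style: one dense line packing a lot of information per glance.
totals = {region: sum(amt for amt, ok in sales if ok)
          for region, sales in groups.items()}

# Verbose style: the same computation spelled out step by step.
totals_verbose = {}
for region, sales in groups.items():
    subtotal = 0
    for amt, ok in sales:
        if ok:
            subtotal += amt
    totals_verbose[region] = subtotal

assert totals == totals_verbose == {"east": 120, "west": 120}
```

Both versions are correct; the question raised above is which one the average maintenance programmer can grasp at a glance a year after it was written.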
That's not to say "more filler equals better language." There's a limit beyond which you wind up fighting with the formalisms instead of coding useful commands (see COBOL). Instead, what I think this means is that there is a proper balance point between succinctness and clutter. A commercially effective programming language will include some simple connective elements in addition to the verbs and nouns so that the average person who gets hired as a programmer isn't overwhelmed by pure content.
In summary, then, I can't agree that succinctness is a proper goal for a commercial programming language. I'd never claim that BASIC or C or Java are perfect languages; I've had my complaints about all of them from time to time. But I would say that those languages have enjoyed their times of popularity for business software development in part because they are not succinct, because they put just enough of the right content on a display page to allow the average working programmer to understand what that piece of code is supposed to be doing.
If there's a word that describes that quality, perhaps it's "clarity." That doesn't quite capture the nuances of "balance between content and syntactic separators" I'm looking for, but it's close.
So as an alternative viewpoint, I'd have to say that in a business environment I'll take clarity of code over pure succinctness any day. I can get more work done with a programming language designed to generate programs that the average programmer can understand than I can with a language whose code is so elegant that only the highly-trained expert can grok it.
In a conversation between Gamasutra and new Infogrames president Phil Harrison, the following exchange got my attention:
Gamasutra: I know a lot of developers with whom I've spoken have indicated that, as far as they're aware, most players don't actually finish their games, and most people who sit down to watch a TV show or a movie always will finish it.
Phil Harrison: Exactly, and that was a key motivation for this game, was the fact that they did some research, and I forget the exact numbers, but it was something like only eight percent play to the end of the game.

If this is true, then I think it tells us something important about where computer games are headed.
The old paradigm was the "big" game: lots of levels, lots of NPCs and mobs, lots of stuff with which to interact, lots of story and dialogue. Even the early computer games followed this model (although the term "lots" is relative), in that the game was conceived as being something so big that you couldn't finish it in a single sitting. There'd be so many hours of content to explore -- 30 hours, 60 hours, even 100 hours -- that you'd have to stop several times on your journey to the end of the game.
What occurs to me is that the audience for home computer games has changed. In the Elder Days, the few folks playing computer games enjoyed the big game that took a long time to complete. These folks were already "core gamers" who were invested in playing all the way through any game they bought; the extent of the journey made finishing the game feel satisfying.
But it seems that things have changed. Old-school gamers who expect to finish a 100-hour game may now be a minority (perhaps an extreme minority of gamers); the majority may now look at computer games as just another quickie entertainment form -- not as a memorable experience, but just another way to kill some time. They either don't have lots of free time, or they simply aren't interested in investing large chunks of the free time they do have into just one game. This would explain why so few gamers may finish today's computer games.
Which leads to the question that Phil Harrison and probably other publishers are asking themselves: if most gamers never finish such big games, why make big games? Why invest millions of dollars' worth of development time and effort into an endgame that most players will never see?
If that's so, then we can expect to see other publishers follow Phil Harrison's lead and start funding only the games that appear better aligned with today's casual and short-attention-span gamer's lifestyle. Examples of this would be shorter games like Portal, and episodic games like Sam & Max. In other words, it's likely that we'll be seeing fewer epic-sized games like Oblivion or GTA IV.
Let's say this analysis is more or less accurate. That raises several questions.
First of all, if this is a real phenomenon, how does it relate to books? If people today really hated long narratives, who would read anymore? Is there any data that people today are no longer finishing books that they've bought? (I almost don't want to know the answer to that one in case it's actually "yes.") Is there any correlation between the kinds of people who buy and read big thick novels and those who play epic computer games through to completion?
Secondly, where does this leave MMORPGs? The typical online role-playing game is, by nature, a big game. If players play characters, those characters need a world in which to act... but building a world implies creating an environment that's big enough to feel like a well-realized place in which characters can participate in stories. Furthermore, since virtually all current MMORPGs use the character class/level convention, content must be created for each class, yet most players will only ever experience the content for one or two classes. In other words, the individual player never sees most of the class-specific content of any MMORPG.
Some of these concerns have been addressed by the recent emphasis in MMORPGs on quests/missions. By designing the typical quest to be something that can be completed in an hour or two, MMORPGs give players a convenient stopping point between gameplay sessions. However, this doesn't address the point that there's no "end" in a persistent-world game. As with epic single-player games, the end for most players of an MMORPG is simply to abandon the game.
So what happens to MMORPGs in a game development milieu where only the shorter/simpler games get development money?
Finally, a personal reaction: If we're looking at far fewer "deep" games in favor of shorter, simpler games, where does that leave me? I still love the deep, complex, highly immersive games; I enjoy investing myself in the games that feel like richly detailed and fully realized worlds. To me, the Looking Glass-style games -- Ultima Underworld, System Shock, Thief, Deus Ex, BioShock -- those are the games that really show off the unique art form that is computer games.
I really enjoyed Portal. At heart, though, I'm still a fan of games built as large worlds with extended narratives. A bag of chips, an apple, a box of popcorn -- those things are OK, but what's wrong with occasionally wanting something more substantial?
When substantial games that immerse the player's heart and brain are no longer being made in favor of cheap-to-produce game-y action games that can be played to completion in an hour or two, what's my incentive to care about computer games any more?
Thursday, May 22, 2008
Gamasutra published an article today by Matt Matthews which suggested that "artificial scarcity" would benefit the game industry.
The gist of this idea was that game publishers should copy Disney, who make their products available for only a relatively short time before yanking them from the market for 7-10 years. Something similar to this for games, the author says, would allow publishers to increase their prices since there'd be fewer games to choose from. And the benefit to consumers for this reduction in choice would be fewer "bad" games crowding out the good stuff.
While I applaud the author's willingness to suggest improvements to a system, there are several questions worth asking about this particular idea.
First and foremost, how does this proposal benefit consumers? What's the incentive for consumers to participate in a scheme that's clearly more about artificially inflating prices for the publisher's benefit than about satisfying consumer demand?
To this criticism that unnaturally restricting consumer activity would be bad for business, the author says, "The numbers behind the current digital markets are nebulous, to say the least. All we know are that prices are generally low and the truly high-quality titles get lost among the mediocre-to-poor." Really? Leaving aside the question of whether the prices of games (digitally distributed or otherwise) are actually "low," are gamers really unaware of which games are good and which are not? If that assumption is false, what's left of the argument in favor of artificial scarcity?
On balance, this proposal strikes me as another in the line of short-sighted tamperings that, like wage and price controls, create more problems than they solve because they're never as efficient over the long run as free markets.
Hands off, please. Let publishers provide good information about their products, and then let consumers and publishers negotiate freely on price.
Monday, May 12, 2008
The good news is that BioWare has announced that Mass Effect for the PC will not, after all, require online verification every ten days.
The bad news is that all the other DRM elements are still being forced on PC gamers who would otherwise be willing to pay to play this game.
BioWare noted that the change to the every-ten-days requirement was being made (at least in part) to help out military personnel who, after activating the game online, would not have Internet access and would otherwise not be able to play the game after ten days. If we take that at face value, then BioWare deserves a word of thanks.
But that still leaves open the question of why BioWare (or perhaps EA) shows no sign of altering its decision to require online activation of a single-player game.
Their stated reason is that it will deter piracy. They also suggest that this change better serves players by no longer requiring them to keep the game DVD in the disk drive while playing.
I'll leave the question of piracy alone for now. I'm certainly not going to parrot the chorus of assertions that "people will just pirate the game anyway" or even that "putting copy protection on a game makes it more likely to be pirated" as some people claim, confusing "pirated" with "stolen."
What about that other claim -- that forcing players of a single-player game to verify online is somehow better for them than requiring the DVD to be in the drive?
Here, I think some speculation might be fun. What if this is part of a strategic decision by EA to move away from retail sales of PC games?
That's not necessarily a bad thing for PC gamers. Big games are expensive to make, and part of the cost is retailer mark-up. If you can move to digital distribution as your primary sales medium, your costs should drop. Theoretically, that could allow producers to lower the purchase price of their games. Practically, of course, that won't happen; the vastly more likely outcome is that unit prices would remain the same and publishers would pocket the difference. But even that's not all bad for gamers, since it increases the likelihood of publishers staying in business, which means more and better games available.
EA may also see moving into digital distribution as a long-term competitive necessity. If that's "the future," EA will not want to leave it solely in the hands of Valve and/or Sony.
But if this is the real reason for pushing online activation on all their games, including single-player games like Mass Effect and Spore, then why not say so? If I can see that this is a business decision on EA's part, surely the professional MBAs over at Valve and Sony can figure it out, too. No competitive advantage would be lost by acknowledging a business strategy that everyone can see being implemented.
Furthermore, gamers are becoming increasingly sophisticated consumers. They know when they're being spun. So why do it? Why throw out some tepid rationalization for requiring online activation of single-player games like "it's easier than requiring a DVD" (when that's simply not true for many gamers) instead of an honest statement that "we're moving to digital distribution for all our games to reduce costs"?
If I'm correct in any meaningful percentage of this speculation, then it's another example of the Reality Distortion Field at work in the upper echelons of EA (and perhaps BioWare as well).
Here's another way to say it: Never, ever allow Marketing to run your company. Like fire, Marketing is a useful servant but a dangerous master... and they are never more dangerous than when your customers are knowledgeable about your industry and passionate about your products.
Friday, May 9, 2008
First BioShock came out with the SecuROM Digital Rights Management (DRM) copy-protection software that not only required online activation and reactivation, but limited the number of installations to two (originally). The problem I had with this is the same I've had with Steam: it assumes that all consumers are criminals and insists that they submit to having invasive software placed on their PCs if they want to play the game. (A single-player game, I might add.) I despise that attitude; it is lazy (because it fails to make the effort of deterring the actual criminals) and insulting.
Now comes word that EA, publisher of the PC version of Mass Effect and of Spore, will be using SecuROM on those games and all their other major games to come.
Here's what Derek French of BioWare had to say when asked about this for Mass Effect for PC:
Mass Effect uses SecuROM and requires an online activation for the first time that you play it. Each copy of Mass Effect comes with a CD Key which is used for this activation and for registration here at the BioWare Community. Mass Effect does not require the DVD to be in the drive in order to play [as BioShock does], it is only for installation.

After the first activation, SecuROM requires that it re-check with the server within ten days (in case the CD Key has become public/warez'd and gets banned). Just so that the 10 day thing doesn't become abrupt, SecuROM tries its first re-check with 5 days remaining in the 10 day window. If it can't contact the server before the 10 days are up, nothing bad happens and the game still runs. After 10 days a re-check is required before the game can run.

[Derek later added:]

For clarity, though, an internet connection is not required to install, just to activate the first time, and every 10 days after. You can be completely connectionless for 9 days and encounter no problems playing Mass Effect. And you don't need the disk in the drive to play.

My view of this was nicely summed up by a subsequent reply in that thread:

Seriously, if I wanted to play an online game, I'd buy an online game. Once is okay to activate. Two checks... well, I can deal. But constant checks for as long as I own and play the game, every 10 days? That's gotten a tad excessive. Sure, I have an always-on net connection but what happens if I don't play for 11 days and the moment I want to play my connection is down? Are you saying I'm not going to be able to play my perfectly legitimate purchased copy of the game, even the retail version, until I get permission? That's the kind of idiocy that annoys customers.

Matt Martin over at GamesIndustry.biz obviously gets it as well. So why is it so hard for game publishers to grasp the concept that making their games less friendly to valid purchasers than to pirates is stupid?
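Taken at face value, the policy Derek describes boils down to a simple rule. Here's a minimal sketch of it (the function and constant names are mine, and this is only a model of the behavior as quoted, not SecuROM's actual logic):

```python
from datetime import date, timedelta

RECHECK_WINDOW = timedelta(days=10)  # hard limit per the quoted description
EARLY_RETRY = timedelta(days=5)      # SecuROM reportedly starts retrying here

def should_attempt_recheck(last_check: date, today: date) -> bool:
    """SecuROM starts trying its re-check once 5 of the 10 days have passed."""
    return today - last_check >= EARLY_RETRY

def can_play(last_check: date, today: date, server_reachable: bool) -> bool:
    """Within 10 days of the last successful check the game always runs;
    past that, a successful server contact is required before play."""
    if today - last_check <= RECHECK_WINDOW:
        return True
    return server_reachable

# Day 9 offline: fine. Day 11 offline: locked out of a purchased game.
assert can_play(date(2008, 5, 1), date(2008, 5, 10), server_reachable=False)
assert not can_play(date(2008, 5, 1), date(2008, 5, 12), server_reachable=False)
```

Which is exactly the complaint: the lockout branch only ever fires against the paying customer whose connection happens to be down.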
I'd much rather have to put the DVD in the drive when I play than be forced to do more than one [validation]. At least I can guarantee that I'll always have the DVD, but there's no way I'll believe any guarantees that my net connection will be there and/or EA's servers won't mess it up.
I have no patience with or sympathy for those who pirate copyrighted materials. So it's freakin' irritating to me that the single-player games I'm most looking forward to playing are going to include copy-protection schemes that assume I will mass-copy and distribute their product on the street unless prevented from doing so. I'm not opposed to systems that counter pirates; I'm opposed to systems that treat everyone -- including me -- like a pirate.
If I were the paranoid type, I might wonder whether this is the latest tactic by console makers to try to eliminate the PC as a gaming platform, in this case by getting game publishers to make playing PC games so obnoxious that gamers just give up on them....
Thursday, May 8, 2008
In the beginner's mind there are many possibilities, but in the expert's mind there are few.
-- Shunryu Suzuki (quoted in Little Zen Companion, by David Schiller, 1994)

About a year after I started working as a project manager (after numerous years toiling as a programmer), I'd come up with several ideas for making the software development process a little more efficient. Since then, of course, I've learned that only highly formalized and rigidly documented processes produced by a big committee and blessed by the new groups of auditors required to evaluate compliance with the new rules are acceptable... but hey, those naive early days were creative.
I was reminded of one of those ideas today when I read an interview on Gamasutra with Aaron Loeb, COO of Planet Moon Studios, developers of (among other games) the wonderful Giants: Citizen Kabuto. In this interview, Loeb describes Planet Moon's experience with the Scrum model of agile software development. It's something a group where I work is using, and I'm always interested in improvements to my own development process that I won't be allowed to implement, so I spent some time today reading about Scrum.
One of the ideas in the Scrum process is that, instead of a lead or manager assigning tasks to the developers in a group, the developers themselves are expected to select tasks from the pile of current tasks (the "sprint backlog"). This reminded me of the idea I'd had back in my project managing youth... but with a twist.
My notion was to apply economic theory to software development to maximize the efficiency of the process. Specifically, at the start of a new release cycle I'd provide descriptions (and any requirements) for all known tasks, sorted by priority. Over the next couple of days, each programmer would then bid on each task (in priority order) with an estimate of the number of hours they thought it would take them to implement that task. At the end of the bidding period, the low bidder for each task would be assigned that task. Once a developer's assigned hours reach the number of hours allotted in the schedule for that release, that developer would exit the bidding process and the task would be assigned to the developer with the lowest remaining estimate for that task, and so on until one developer remains. That developer would then be assigned tasks from the pile in priority order until their workload (in estimated hours per task) reaches the limit for that release.
Bearing in mind that these would be estimates only, and estimates are likely to be off somewhat, this should -- theoretically -- result in the maximum amount of prioritized work being performed in the available time.
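To make the mechanics concrete, here's a rough sketch of the auction step in Python. All the names and numbers are invented, and where the description above is ambiguous (tie-breaking, for instance) I've picked one plausible reading -- treat this as a thought experiment, not a tested process:

```python
from typing import Dict, List

def assign_tasks(
    tasks: List[str],                   # task ids, already sorted by priority
    bids: Dict[str, Dict[str, float]],  # bids[task][dev] = estimated hours
    capacity: Dict[str, float],         # schedule hours available per developer
) -> Dict[str, str]:
    """Greedy version of the bidding scheme: each task goes to the lowest
    bidder who still has enough hours left in the release schedule. Once only
    one developer has capacity, tasks naturally fall to that developer in
    priority order until their hours run out."""
    assignments: Dict[str, str] = {}
    remaining = dict(capacity)
    for task in tasks:
        for hours, dev in sorted((est, dev) for dev, est in bids[task].items()):
            if hours <= remaining.get(dev, 0.0):
                assignments[task] = dev
                remaining[dev] -= hours
                break
        # If no developer can fit the task, it's left unassigned -- one of
        # the gotchas: defer it, or have a lead step in and rearrange.
    return assignments

# Invented example: two developers, an 8-hour release budget each.
tasks = ["login-fix", "report-export", "ui-polish"]
bids = {
    "login-fix":     {"alice": 4, "bob": 6},
    "report-export": {"alice": 8, "bob": 5},
    "ui-polish":     {"alice": 3, "bob": 4},
}
print(assign_tasks(tasks, bids, {"alice": 8, "bob": 8}))
# → {'login-fix': 'alice', 'report-export': 'bob', 'ui-polish': 'alice'}
```

Note how the sketch already surfaces the structural quirk mentioned above: the low bidder (alice here) accumulates the most tasks.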
Of course there are some gotchas in the process described above. Some special-case provisions would have to be made; for example, what if, for the next prioritized task, every developer estimates that the task will require more time than anyone has remaining for the next release? Either the task will have to be deferred to a subsequent release (which may not be appropriate), or a lead will have to unassign one or more of someone's existing tasks and replace them with the new, larger task (which requires a lead/manager to interfere with an otherwise automated process). Another structural issue with this process is that very efficient developers would tend to be assigned a large number of small tasks, while the less efficient developers would get stuck working a few big tasks -- not necessarily optimal for group dynamics over the long term of a project.
But there's a more fundamental issue to be resolved. Namely, what's the incentive to be the low bidder? There's no penalty for doing so, since everyone gets loaded up with roughly the same amount of work (in estimated hours). But that also means there's no incentive to figure out the most efficient way to code an enhancement or bugfix. So perhaps the developer with the lowest average number of hours per task in each release should get some kind of bonus or goodie. Some might feel that this would generate an inappropriately competitive group environment, but you can hardly have an economic solution without providing some kind of tangible advantage for one behavior over another.
Of course, a manager who can't reward one developer and not another according to documented results isn't likely to appreciate an agile software development environment in the first place. An environment where everyone has to be treated exactly the same regardless of results is probably not a place where process efficiency is a goal.
For the more agile workplace, I still wonder whether this economic approach could be tweaked into functionality, and whether it would actually work as I imagine it might.
Wednesday, May 7, 2008
So it looks like Grand Theft Auto IV may have become not only the best-selling game "OF ALL TIME" but also -- if Strauss Zelnick is to be believed (according to a Gamasutra story) -- the earner of the most first-weekend revenue of any entertainment product ever. So I acknowledge that my earlier skepticism regarding GTA IV's sales numbers was wrong.
But why was I wrong? Why has GTA IV sold like this?
Is it really the content? Are so many normal people really so eager to play a game where you shoot lots of people in the head and steal various vehicles? (I'm not even getting into the whole "drunk driving" thing, which is probably overblown.) Is pretending to do this kind of stuff really so attractive that it gave GTA IV the greatest launch of any entertainment product ever?
I have to think there's something else going on here, that it's not really the content of the game that's responsible for its outstanding sales. I don't see how the content alone can account for such extreme sales numbers.
One possibility is the blitz advertising. It's thought that GTA IV took $100 million to make; if so, it wouldn't surprise me to learn that a meaningful chunk of that was devoted to advertising. (And this might be a good place to observe that there doesn't seem to have been any cannibalization of GTA IV sales by Iron Man or vice versa, as some claimed to fear.) Maybe a significant number of people found the GTA IV video ads compelling, or just figured anything advertised that heavily had to be a "blockbuster."
Or perhaps GTA IV is the beneficiary of the Hula Hoop Effect, as I think has been the case for World of Warcraft. If a bunch of your friends say they're doing it, you're likely to do it too in order not to be perceived as uncool. I suspect this is the major reason behind most high-volume sales experiences, including that of Grand Theft Auto IV.
Finally, there were the suspiciously hyperenthusiastic reviews of the game by pretty much all the known game reviewers, led by the completely over-the-top coverage by G4 TV's X Play. As someone who was closely following the development of Star Trek Online by the former Perpetual Entertainment, I haven't forgotten the admission by the former PR firm Kohnke Communications in their complaint against Perpetual that they were "successful in creating pre-release 'buzz' around Gods & Heroes, and in convincing reviewers to write positive reviews about the game."
Kohnke C. later insisted that "reviews" was a typo, and should have read "previews"... but what's the real difference? Either way, it highlights the reality that there are PR firms out there who are pushing game reviewers to express positive impressions of a game, regardless of the actual quality of that game. That, of course, is what a PR firm is paid to do... but reminding people that this kind of spin goes on in the gaming press does not create confidence in the objectivity of reviewers -- just the opposite.
So who did the PR for Grand Theft Auto IV? Looks like they earned their money.
No, that doesn't mean I think all the gushing reviews of GTA IV were biased by some insider deals. But I do have to wonder; again, are people -- including game reviewers -- really so hugely interested in dealing out urban mayhem? I earnestly hope there's something more behind their glowing reviews than that.
Sigh. My less-than-enthusiastic response to this game is probably generating thoughts of "but dude, you haven't even played it!" Not that I'd mind comparisons to some elements of Fox News, but the uninformed Mass Effect criticism is not something I want to copy. So I guess if I want to continue questioning GTA IV's wondrousness, I'll have to reward Rockstar's promotion of simulated inner-city violence with some cash of my own.
Bleh. We'll see.