Sunday, November 11, 2012

Releasing on a Fixed Schedule

This is a continuation of "Firefox and Version Numbering", which I wrote a year ago and promptly forgot about. This draft was dated 2/20/11. I've been making edits to it over the last two days. Hopefully it still makes some sort of sense.

Second system syndrome is a software development pathology that often happens when a piece of software is developed in two phases. The first release of the software includes everything that's relatively easy and quick to implement. The second version includes the rest. Unfortunately, some of the features in "the rest" turn out to be infeasible. This can lead to the second version never being completed. Time goes by, the seasons change, but the software release is always a year away.

The best way to develop software is to admit from the start that there will be many iterations and that not all features will make any particular release. First you make a priority list of the features you want, in order of importance. You then implement from this list, revising it as new information becomes available, and release your software on a fixed schedule.

The fixed schedule is useful for making sure you ship something. It changes the question from "What are we going to do?" to "What can we do by the next release?". This is subtly important because in software there are things that will take an extremely long time to do, but nothing is really impossible. Estimating how long something will take becomes harder the longer the time span and the more complex the problem. If a new feature is really complex then you can find yourself on a project that will take years to complete. What's more, complexity feeds on itself, causing schedule slips and increasing the amount of confusion about how to fix problems. Software can become a hellish tar-pit of intense pressure and slipping schedules.

Making the commitment to release software on a fixed schedule turns the problem on its head. Instead of shipping a product late because one single feature is horribly behind schedule, you focus on trying to accomplish the most important things you can before the next release. This essentially guarantees that you'll always have the most critical features in any given amount of development time. If a feature is behind schedule it will miss the release date and get automatically pushed back to the next one.



If you release on a fixed schedule, make sure that you're aggressive about taking features out of the release if they aren't ready. There needs to be a clear and enforced policy that if a feature isn't ready on time then it's not going to be in the release. This means that developers have to ensure the feature they are working on doesn't break the codebase. Invasive new features need to be written in such a way that they can be turned off or otherwise disabled if need be. Frankly, clearly separating new development is a good practice in all cases since it helps QA test new features or bug fixes without a half-finished feature getting in the way. Modern revision control tools like Git and Mercurial can be a help here with their branching features.
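To make the "turned off or otherwise disabled" part concrete, here's a minimal feature-flag sketch in Java. Everything in it (the FeatureFlags class, the flag names, the hard-coded set) is invented for illustration; a real project would read the flags from configuration or use an existing flag library.

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;

    // A minimal feature-flag holder. A real project would load the enabled set
    // from a config file or a flag service rather than hard-coding it here.
    public class FeatureFlags {
        private final Set<String> enabled;

        public FeatureFlags(Set<String> enabled) {
            this.enabled = enabled;
        }

        public boolean isEnabled(String feature) {
            return enabled.contains(feature);
        }

        public static void main(String[] args) {
            // The half-finished feature ships dark: its code is in the build,
            // but the release keeps using the proven path until the flag flips.
            Set<String> enabled = new HashSet<>(Arrays.asList("fast-search"));
            FeatureFlags flags = new FeatureFlags(enabled);

            if (flags.isEnabled("new-pricing-engine")) {
                System.out.println("new pricing engine");
            } else {
                System.out.println("legacy pricing path");
            }
        }
    }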

The most common complaint when trying to implement rapid releases is that some feature is vitally important and it's only a little behind, so the release should be delayed just a little bit. This is not only the thin end of the wedge but things are rarely only "a little delayed". What I've seen is that the release gets delayed to include this "vital" feature, but several others are added in since there's now time to do them. Next *those* features get delayed slightly, so more features are added to fill in the gap. Then those get delayed... It can turn into Zeno's project management. The release is always just a little bit in the future, and the goal of a release every month turns into a single release in one year. What you're actually doing is delaying every feature for the sake of a single feature. It's not a nice thing to do to all your customers whose features are actually shippable.

Being strict about the cutoff date stops feature creep, allows customers to get features sooner, and increases software quality.

My favourite benefit is that development teams no longer rush to meet an unrealistic deadline by skimping on testing. If development is running late they can take their time and do it properly, because their feature can hop on the next release. It also removes the temptation to commit the sins of self-deception; passing off unfinished software as merely buggy, for example. The effect of that one is an endless QA cycle as developers use QA as a sort of to-do list generator. "I'm done! And on time too! Oh, there's a bug? Ok I'll fix it. At least I was officially 'done' on time.". Yeah right.

If you release often enough, say every four months, you don't need to create maintenance releases for existing branches. Any bugs can be fixed on the trunk because customers will have that code in good time. Additionally, it's less likely that you will introduce a regression because less has changed since the last release. If there's a really critical problem you can still ship a maintenance release but it's rare you'll need to do this in practice.

When it comes to testing, automated unit tests and regression tests become more important in rapid release software. Since the codebase is always changing, it's important not to constantly break things every release. Automated unit tests and regression tests are a best practice for avoiding unmaintainable software in any case; rapid releases just make the consequences of unmaintainability more dire.
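As a toy illustration of the kind of test I mean, here's roughly what a regression test looks like with JUnit. The PriceCalculator class and its numbers are made up purely so the example is self-contained; the point is that tests like this run automatically on every change.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // Hypothetical class under test, included only so the example is
    // self-contained.
    class PriceCalculator {
        int totalCents(int unitCents, int quantity) {
            return unitCents * quantity;
        }
    }

    // A regression test pins down expected behaviour so that, run on every
    // commit, it flags a breakage the moment it appears instead of weeks
    // later during release QA.
    public class PriceCalculatorTest {
        @Test
        public void zeroQuantityCostsNothing() {
            assertEquals(0, new PriceCalculator().totalCents(499, 0));
        }

        @Test
        public void totalScalesWithQuantity() {
            assertEquals(1497, new PriceCalculator().totalCents(499, 3));
        }
    }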

There are overheads associated with creating a new release of the software. Most of these should be automated anyway. Those that can't be, like documentation updates and manual QA feature testing, should be easier with short release cycles since less has changed since the last version of the software. This means less to update and test.

[Image: https://officeimg.vo.msecnd.net/en-us/images/MH900443454.jpg]
Version 4 is out? Yeah, whatever, everyone knows that version 3.4 is the best. Nice laptop by the way.


Another potentially annoying aspect of releasing on a schedule is that it's hard to make a fuss about the new version of the software because major features are done incrementally. When I used a rapid release cycle on the Myster project this turned out not to be a problem. What happened was that we were releasing so often that people would visit the website regularly to see if there had been a new release. I released on a monthly basis and the public realized that Myster was constantly being updated and improved. A majority of our user base upgraded every time we released a new version. And we didn't have an auto-update system! Having a rapid release cycle communicates to customers that you care about issues and new features and can deliver fixes quickly. It also creates a constant background buzz. We found ourselves on the front pages of many a news web site every time we released, which was once a month.

Releasing often and on a fixed schedule does mean that your marketing team has to think of the product more as a continuous stream than as a single specific version. It doesn't stop you from selling the features of the new version, but it does mean you should direct people to the latest version and not develop a brand around a specific release. If anything, I'd consider creating a brand around a specific version of a piece of software a marketing anti-pattern. It means you have to compete against your own software's older version every time you release. How silly is that?

Remember, only you can help prevent second system syndrome.

Saturday, November 10, 2012

Balancing Starcraft II - Making an E-Sport

It's no big secret that I'm a big Starcraft II fan. Apart from Portal and the odd session of Angry Birds it's the only video game I play. For those not in the know, Starcraft II is a real time strategy game. Think of it as competitive SimCity building but with marines. The best way to play Starcraft II is over the network with friends. However, it's really hard to design a real time strategy game that's balanced and fun. In fact, it's taken Blizzard five tries to get to this point.

[Image: Warcraft logo]

Blizzard's first attempt was the original Warcraft way back in 1994. It featured two races: orcs and humans. You could play either side and each side had different units and abilities. Well, by "different" I mean mostly different graphics. The actual abilities of the two sides were really quite similar. The game's units were also of vastly different power, meaning that games always degenerated into a rush to some big unit, which you'd then produce as many of as possible. It was a fun game but a bit simplistic.

[Image: Warcraft II: Tides of Darkness]

Blizzard's next attempt was Warcraft II: Tides of Darkness. This game was also a huge amount of fun. In all the initial head-to-head games I played it felt really balanced. Unfortunately there was this unbalancing orc spell called Bloodlust that would allow you to obliterate your human opponent. In the end, my friends and I reverted to simply playing against the computer on custom maps, which was also an insane amount of fun.

[Image: The box art of StarCraft]

The original Starcraft was a big win for head-to-head play. The expansion pack called "Brood War" was even better. This was the game that created the e-sport phenomenon in South Korea. Not only were all 3 sides (!) balanced but the game had depth to it. You could play forever and keep getting better. There were two big problems though. Finding a person to play against online was hit and miss. Because there was so much depth you'd either play against someone who was clearly better or against someone who was clearly worse. There was also the problem that the super-balanced head-to-head play meant that playing against a real person was very, very intense. So intense, in fact, that we often just played against the computer. That could be intense but not to the point where you had to take a break after each game :-O.

[Image: Warcraft III]

Warcraft III came next and it was a serious attempt to create a good head-to-head playing experience. The biggest improvement over Starcraft was the opponent matching system. The system would keep track of who won against who and try to automatically match you with someone of your skill level.

Warcraft III was also less intense than Starcraft. Warcraft III was built so you focused more on managing your troops and less on building and maintaining your bases. It focused more on the generaling and less on the SimCity-building aspect. Blizzard's idea was that the troop control was the fun part and the base building a distraction. It isn't. The real fun in Starcraft, and the earlier games, was managing both the SimCity aspect and the troops at the same time! To be honest it's actually more multi-dimensional than that. You have to balance your technology and upgrades with the quantity and composition of your army, while balancing troop production against how quickly your mineral production expands, and then balance that against how many troop production buildings, and of what type, you want to build. Oh, yes, and on top of that you have to be the general in the field and tell your troops what to do.

[Image: StarCraft II box art]

Thanks to the enormous success of Starcraft and its use in e-sports, Blizzard, for the first time, made a game that was focused primarily on making that experience awesome. They also took the opponent-matching ladder system from Warcraft III, made a huge number of improvements and stuck it into Starcraft II. The whole package is a work of art.

So that brings me to the point of this post. I have recently come across a talk by one of the people involved in designing the Starcraft II online gaming experience. In this talk he relates how difficult it was to build a game that would work as an e-sport while looking good and being fun to play for multiple levels of players. I found it fascinating.

Thursday, November 8, 2012

Light vs heavy mutex

I discovered a nice post at Preshing on Programming that discusses light vs heavy mutexes. Mutexes are what allow you to create critical sections which, in turn, allow you to write programs that run safely on multiple processors. That got me thinking about how Java's "synchronized" keyword is implemented. Using synchronized is the default way of creating critical sections in Java. It used to be really slow but has gotten much, much faster recently. Apparently, synchronized is implemented using a combination of light and heavy locks as well as other techniques that make it even lighter than a light mutex.
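As a baseline, here's what the plain synchronized version of a shared counter looks like. This is a generic sketch of my own, not code from the linked post:

    // A shared counter guarded by the object's intrinsic lock. Under the hood
    // the JVM typically uses a cheap "thin" lock and only inflates it to a
    // heavyweight OS mutex when threads actually contend for it.
    public class SynchronizedCounter {
        private int count = 0;

        public synchronized void increment() {
            count++;
        }

        public synchronized int get() {
            return count;
        }
    }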



Even given that so much work has been done on it, I've still had performance issues with it. Using things like AtomicInteger, with its compare-and-set, is still much faster, assuming that you can use it (it's not always possible).
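For comparison, here's a rough sketch of the same counter using AtomicInteger; incrementAndGet() is built on an atomic compare-and-set rather than a lock, which is why there's nothing to inflate:

    import java.util.concurrent.atomic.AtomicInteger;

    // The lock-free version of the counter. incrementAndGet() retries an
    // atomic compare-and-set instead of taking a mutex, which is why it tends
    // to be faster when it's applicable.
    public class AtomicCounter {
        private final AtomicInteger count = new AtomicInteger(0);

        public void increment() {
            count.incrementAndGet();
        }

        public int get() {
            return count.get();
        }
    }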