If I had to name the most useful thing to plan for during the development of an MMO, it would be failure. Unfortunately, this requires the management team to develop multiple personalities, because the only way to have enough CS/QA/billing support/hardware/content is to plan for stupendous success. But if we want to actually get to our launch day with our sanity and budget intact, we need to assume catastrophic failure will occur at regular intervals.
Outsourcing will take twice as long as we thought.
Smart managers assume there will be a failure of communication. This holds true whether the outsourcing is domestic or foreign. If you have never managed an outsourced project before, you will not know how to provide instructions and notes for your offsite team in a way that makes the best use of their time. If your outsourcers are from another culture, you are going to misunderstand and be misunderstood. Whatever they turn in will need to be refined. So when we are told (by the outsourcer or our own boss) that it will take six weeks to deliver X, and not to worry our pretty little heads about margins because that six weeks already has a cushion… we put twelve weeks on our own calendar.
Third-party products and tools will not work out of the box.
When we use a component we did not build with our own little hands, it doesn’t work with our game without massaging it, buying it dinner, calling its last boyfriend and asking for advice… I’m going to stop with this analogy right now. The point is, if it’s advertised as “plug and play,” we need to allow several days of work time and multiple test runs. Ask yourself: if this component really is plug and play, why does everyone with any kind of budget build their own proprietary version? Better yet, ask someone who ended up building a proprietary version.
Sure, I’ve met programmer types who say “Because I wanted my version to sing, dance, and make me a sandwich,” even though the sandwich part was strictly for convenience and all they really needed was singing and dancing. Have pity, for they see not the forest for this one perfectly proportioned and elegantly designed tree. It’s no way to live.
Mostly, I’ve met people who’ve said “because when it broke, I couldn’t fix it.”
When a third-party tool breaks, you have to put in a service call. You are at the mercy of the provider, the provider’s sense of priority, the provider’s holiday schedule. Your own progress stops dead. Therefore, we assume that for every “plug and play” tool we use, we will face several occasions where it doesn’t matter how ready we are to move forward: we’re going to need some help first.
The smaller our team, the more the Bus Syndrome will affect us.
Small teams rock. I vastly prefer a small team (or, while on a large team, a small management structure). We get more done with fewer meetings, decisions are simple, on-the-fly adjustments are a piece of cake, and there’s none of the hyper-specialization that eventually atrophies our collective brain.
But if someone on the team gets hit by a bus, we are so, so screwed.
It doesn’t even need to be a bus. Someone’s wife might need surgery. Someone’s parent could die. A real doozy of a virus can sweep through the team, and not all at once, either. When our lead artist finally staggers back to work, our entire design team might be home wishing for death or at least a cleaner bathroom floor.
A small team keeps their programmers in a locked closet and sprays anyone who comes near them with disinfectant. Just in case.
At any rate, project managers with any experience build lost-manpower time into the schedule to allow for people to get sick, and the more true believers on the team, the more sick time. This is because, in a cruel twist of fate, the more dedicated the team, the more likely it is that someone will come in to work with a fever and the sniffles “because I just have to finish this one thing” and get plague germs on everything.
CTRL ALT DELETE.
At least one element is just not going to work, and we’re not going to know about it until we get into beta testing. There’s no way to know what it is or test for it. Good planning and experienced personnel can reduce the number of these things. Good luck can avoid a few more. But there is going to be something that costs weeks of time.
A full flush and restart where everything is tossed is a luxury only major studios indulge in, and usually only if they’re spending someone else’s money. But every game team runs into something that forces them to go back to zero: correct a foundation-level mistake, change a primary assumption, whatever.
How much time to allow for these things varies. But let’s say nothing goes wrong. If that happened (HA!), we’d finish the game and have weeks and weeks of time left before launch.
And if that happened, we’d then have, oh, about half of the time requested by the QA department for testing.