I just started a new job this week. It's funny, but the more things change, the more they stay the same.
Just about every company I've gone to work for attempts to develop software using the same antiquated methods. It's as if nothing has been learned in the past 30 years about how to develop more software, of higher quality, more quickly.
It's not that the people I work with are stupid. Most of the time, far from it. It's just that they've never been exposed to a different way of doing things. Or, if they have been exposed, it's in dribs and drabs of techniques attempted only academically - read in a book, applied out of context in an environment where they add little value, and then determined (amazingly!) to add little value.
One of the reasons I think this is so is that most people evaluate the product of software development at a fairly gross level. In other words - did we turn out some software that did something? If we did, then we must be doing it right, right? Because so many software efforts don't even turn out anything that works.
Unfortunately, this is more a statement on the dismal state of the software development field than it is about the efficiency and effectiveness of the development methods employed.
Assessing questions such as "could we have produced more working features with the same level of effort," and "could we have produced this same functionality with fewer defects, or in a shorter period of time," requires some of the very techniques that are shunned by these organizations: estimation, productivity measures, and quality measures such as defect velocity and density.
And since these techniques carry a certain overhead (and since anything that adds overhead must, by definition, take away from the time there is to develop working code), most organizations either fail to see their value or choose not to take the risk of introducing them. After all - if it ain't broke, don't fix it, right?
Unfortunately, this utilitarian wisdom ignores a couple of realities that eventually come back to bite organizations in this mindset. Two that I have seen firsthand are growth and competition.
Let's make the optimistic assumption that the company is growing. Techniques that work with a 5-person team do not work with a 15- to 50-person team. Organizations that haven't gone through certain growth plateaus make the mistake of assuming that they can just keep doing what they're doing now, but do it "bigger." But it isn't just a matter of scale - larger organizations require fundamental differences in communication, data tracking, management, and planning.
It would be like the police assuming that "hey, it only takes a couple of cops to manage a small unruly bunch of individuals...so handling a riot just takes more cops, right?" Well, yes...but if all you do is add more cops, each doing their own thing, you end up with an even worse mob run amok (now an aggravated mob run amok). It takes more cops, yes, but organized into police lines, supported with barriers and special equipment, with strategies for containing and channeling the crowds, and rules for who to arrest and who to ignore. I use this example not because we can all relate to police tactics, but because we have all seen the result of mob rule. The behavior of a mob is fundamentally different from the behavior of a small group, and must be managed differently. Otherwise you get riots run amok.
The second reality ignored by this attitude is competition. It may be true that you are turning out working software. It may be true that it is working well enough for customers to buy it and use it. But what happens when a competitor moves into your niche? Now, not only must you produce working product, but you must produce more of it, faster, and better, than your competitors. Whoever produces the most of the best quality will win. And here is where you need to ask the hard questions - can we produce more than we are? Can we produce higher quality than we are? How?
Anyone who has spent much time in the software industry can vouch for one fact - it is an extremely competitive industry. It doesn't take decades to unseat an established large company leader (as it does in some durable-goods industries, such as autos or appliances). In software, he who executes a good idea best, wins.
To be a winner, be your own strongest competition. Strive to get better in all dimensions - before someone else beats you to it. The casualty rate of software companies is higher than in just about any other industry (I think maybe the restaurant business has as high a failure rate).
So why do software companies never seem to learn from history? I have a lot of ideas about this, none of which I think I can prove. It has to do with how software is taught (or, more specifically, how people learn to be programmers); with the differences between hard goods and soft goods, and how laypeople assess relative quality (I can kick the tires - it's harder to kick the bits); with how the category "software" covers everything from your thermostat controller, to sales management, to MySpace pages (any engineering discipline that claimed you could build a bridge the same way you build a circuit board would be doomed by overgeneralization); and perhaps a few others.
Which, from a personal perspective, is all OK. I know how to build software - it's why people hire me. So I suppose I benefit from the state of the industry.
Which, come to think of it, is probably the main reason it doesn't change. Like politics, or our screwed up healthcare system, there are too many vested interests in the way it is now.