My professional approach to software development began in 1990, when I was offered a job at SAS Institute as a consultant, developer, and trainer. A few years later I was part of an R&D facility in Copenhagen, creating Business Intelligence software specifically for the European markets.
The Agile leap came in the year 2000, when our unit became a subdivision of a much larger US-based division focused solely on Business Intelligence. We would gradually lose the domain-knowledge advantage that had made us important to SAS Institute, so we had to create value another way. Our aim was to be the best engineering unit within the company. That meant superior service to customers, internal as well as external, never missing a deadline, and having the best software quality of all the engineering teams. That required serious process focus.
After a while we presented a plan to our team. One of the key elements was that we wanted to start doing formal reviews, since these were claimed to be one of the most cost-effective ways to improve quality. The plan was met with resistance from the development team, especially from one developer who told us that he had just read about a new thing called eXtreme Programming. “Why don’t we try this instead?” he asked. “XP claims that review is done all the time, as part of the pair programming practice.”
I have to admit that I, as a manager, was skeptical at first when I learned about the concepts in XP. Two people doing one person’s work? That cannot be productive! Collective code ownership? Doesn’t that mean that nobody takes responsibility?
But our developers were very much in favor, and I could definitely see the value of thorough automated testing, as well as of XP’s pragmatic and disciplined approach to design and planning, so we decided to give it a try.
We did not change completely overnight but took a gradual approach, adopting only a few practices at a time and always making sure to evaluate carefully and adjust as needed.
One of the defining moments came after we had completed a number of iterations and felt we were doing pretty well. The team had created a utility to measure our test coverage, and the results were depressing. We thought we were following the XP paradigm of “test everything that can break,” yet in reality we had tests for only around 30% of the classes in our software. After analyzing the problem, it became clear that because of the competitive relationship we had with another group, we had emphasized making slick demos at the end of each iteration rather than creating production-quality software. Good motivation had made us do the wrong things, an interesting insight!
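The team’s actual coverage utility is not described here, but the basic idea of class-level coverage can be illustrated with a minimal sketch. All names below are hypothetical; real projects would use a proper coverage tool instead of matching class names to test names.

```python
# Hypothetical sketch of class-level "test coverage":
# what fraction of production classes have a matching TestX class?

def class_test_coverage(class_names, test_class_names):
    """Return the fraction of classes that have a TestX counterpart."""
    test_set = set(test_class_names)
    tested = {name for name in class_names if f"Test{name}" in test_set}
    return len(tested) / len(class_names) if class_names else 0.0

# Illustrative data only: 3 of 10 classes have tests, i.e. 30% coverage.
classes = ["Parser", "Report", "Exporter", "Chart", "Scheduler",
           "Session", "Cache", "Login", "Query", "Menu"]
tests = ["TestParser", "TestReport", "TestExporter"]

print(class_test_coverage(classes, tests))  # -> 0.3
```

Even such a crude metric makes the gap visible; the point is not the tooling but that the team measured, got a number they did not like, and acted on it.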
As a team we decided that if we were to take ourselves seriously, we needed to fix this problem so that it would not come back. We stopped working on new features in the current iteration and let our counterparts know that we could not deliver as promised at the end of the month. We then spent the time writing the missing tests. Much of the code was not testable, so we had to rewrite it in order to make it so. But after that, we had reached a new level in our practices. The experience instilled test discipline in everybody, and we never faced that problem again. It became as natural as making sure a program compiles to never declare anything finished without an accompanying suite of tests.