
February 24, 2009

Does software quality matter?


In the last couple of weeks, I have followed an interesting debate between Bob Martin and Joel Spolsky/Jeff Atwood (listen to the Hanselminutes episode) on the issue of SOLID principles. The SOLID acronym stands for five principles:

  • Single Responsibility Principle – this principle is based on the notion of cohesion from Tom DeMarco's work on structured analysis and design, and mandates that a class should have one, and only one, reason to change.
  • Open Closed Principle – this principle comes from Bertrand Meyer's book Object-Oriented Software Construction and says that a class should be open for extension but closed for modification.
  • Liskov Substitution Principle – introduced by Barbara Liskov, this principle says that derived classes must be substitutable for their base classes.
  • Dependency Inversion Principle – states that you should depend on abstractions, not on concrete implementations (see the sketch after this list).
  • Interface Segregation Principle – states that you should make fine-grained interfaces that are client specific.
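
To make the Dependency Inversion Principle concrete, here is a minimal Java sketch; the ReportGenerator and ReportStore names are hypothetical, invented only for illustration:

```java
// Hypothetical illustration of the Dependency Inversion Principle:
// the high-level ReportGenerator depends on the ReportStore
// abstraction, never on a concrete storage class.
interface ReportStore {
    void save(String name, String contents);
}

// One concrete implementation; a DatabaseReportStore could be
// substituted without touching ReportGenerator.
class FileReportStore implements ReportStore {
    @Override
    public void save(String name, String contents) {
        try (java.io.PrintWriter out = new java.io.PrintWriter(name + ".txt")) {
            out.print(contents);
        } catch (java.io.FileNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}

public class ReportGenerator {
    private final ReportStore store; // the abstraction is injected

    public ReportGenerator(ReportStore store) {
        this.store = store;
    }

    public void generate() {
        store.save("monthly-report", "...report body...");
    }

    public static void main(String[] args) {
        new ReportGenerator(new FileReportStore()).generate();
    }
}
```

Because ReportGenerator only knows the ReportStore abstraction, a database-backed or in-memory store can be dropped in without changing the high-level class; that is the inversion.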

I learned these principles many years ago and attended Bob Martin's session at the Best Practices 2004 conference, where he talked about them. These principles sound good, though in reality they should be treated as broad guidelines rather than laws. The heat of the debate came from the way these principles have been evangelized by Bob Martin, who insists on applying them all the time. Over the years, I have seen similar debates between Cedric Beust and Bob Martin over the use of test-driven development, and between Bob Martin and Jim Coplien on the topic of TDD and architecture. Overall, I find that the issues from these debates boil down to the following:

Usable software vs. high-quality but unused software

One of the controversial points Jeff Atwood raised was that quality does not matter if no one is using your application. I find there is a lot of truth to that. In fact, this is the route most startups take when developing a new idea and trying something new. For example, everyone blamed Twitter for choosing Rails when it had scalability and reliability issues. However, Twitter would not have existed if it had been written in the most scalable and reliable language or framework but had taken two more years, or if the application itself hadn't been as useful. I tend to follow the age-old advice: first make it work, then make it right, then make it fast. There is a reason rapid-prototyping frameworks such as Rails, Django, and the Zend Framework are popular: they allow you to focus on business functionality and reduce time to market. So I agree that the first goal of software should be to solve real problems or add value. Nevertheless, if the first implementation is horrible, it takes a herculean effort to make it right, and some companies, like Friendster, never recover.

Customer experience vs. internal design

One of the earliest pieces of advice I got on software development was to write the manual before writing the code. It forces you to solve the customer's business problem rather than start from a top-down architecture, and it is similar in spirit to behavior-driven development. I find that the most useful software is developed bottom up and driven by its users. Kent Beck often says that you can't hide a bad architecture with a good GUI. Nevertheless, for an average user, usability matters a lot. I remember that back in the early 90s, IBM's OS/2 was a superior operating system to Windows but largely lost the market due to usability (and marketing) issues. The same could be said of why people think Macs are better than PCs. Rails is another good example: it became popular because you could whip up a web app in ten minutes, despite the fact that its code has been plagued with maintenance issues from its monolithic architecture and tricks like chains of alias methods. Other examples include WordPress and Drupal, both written in PHP, which lead the blogging and content-management spaces due to their usability rather than the quality of their code. Again, as your software crosses some threshold of users, it may have to be redesigned; e.g., Rails recently announced that it will merge with another framework, Merb, in version 3.0 because Merb was designed with a micro-kernel and pluggable architecture. This also reminds me of the merger of Struts and WebWork, which turned out to be a failure. Joel Spolsky cautions against software rewrites in his blog and book, and I have also blogged about software rewrites earlier. In the end, you want to extend your application incrementally using the Strangler Fig model (sketched below), which is not an easy thing to do. Ultimately, people matter more than technology, processes, or best practices in software development, as good people can ship good software regardless of the language or tools they use.
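
Since the Strangler Fig model is easier to see than to describe, here is a hedged Java sketch; the billing classes and the set of migrated customers are entirely made up:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Hypothetical Strangler Fig facade: callers talk only to
// BillingFacade, which routes migrated traffic to the new code path
// and everything else to the legacy implementation, so the old code
// can be retired one slice at a time.
interface BillingService {
    String invoice(String customerId);
}

class LegacyBillingService implements BillingService {
    public String invoice(String customerId) {
        return "legacy invoice for " + customerId;
    }
}

class NewBillingService implements BillingService {
    public String invoice(String customerId) {
        return "new invoice for " + customerId;
    }
}

public class BillingFacade implements BillingService {
    private final BillingService legacy = new LegacyBillingService();
    private final BillingService replacement = new NewBillingService();
    // Customers already moved to the new implementation.
    private final Set<String> migrated =
            new HashSet<>(Arrays.asList("acme", "globex"));

    public String invoice(String customerId) {
        return migrated.contains(customerId)
                ? replacement.invoice(customerId)
                : legacy.invoice(customerId);
    }
}
```

The facade stays in place until the migrated set covers everyone, at which point the legacy class can simply be deleted.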

Evolutionary architecture vs. upfront architecture

This came up in the debate between Jim Coplien and Bob Martin, where Jim took the position of more upfront design and architecture and Bob argued for evolutionary architecture. I have a lot of respect for Jim Coplien; I still have the copy of Advanced C++ I bought back in '92, and it introduced me to the concepts of abstraction and the handle/body and envelope/letter idioms, which are a sort of DIP (see the sketch below). In the interview with Bob Martin, Jim Coplien raised a lot of good points about how YAGNI and test-driven, bottom-up design can lead to architectural meltdown. Though I believe good software is developed bottom up, I also believe some architecture and domain analysis is beneficial. I am not necessarily suggesting BDUF (big design up front) or BRUF (big requirements up front), but iteration-zero style architecture and domain analysis when solving a complex problem. For example, domain-driven design by Eric Evans and responsibility-driven design by Rebecca Wirfs-Brock require working closely with domain experts to analyze the business problems and capture essential knowledge that may not be obvious. Any investment in proper domain analysis simplifies the rest of development and makes your application more extensible. A number of agile methodologies, such as feature-driven development and DSDM, encourage some upfront architecture and domain analysis, which I find beneficial.
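
For readers who haven't seen the idiom, here is a loose Java rendition of handle/body; Coplien's originals are in C++, and these class names are mine:

```java
// A loose Java rendition of the handle/body (envelope/letter) idiom:
// Quantity is the stable "envelope" clients hold, and it delegates to
// an interchangeable "letter" implementation underneath.
interface QuantityBody {
    double value();
}

class IntegerBody implements QuantityBody {
    private final int v;
    IntegerBody(int v) { this.v = v; }
    public double value() { return v; }
}

class RealBody implements QuantityBody {
    private final double v;
    RealBody(double v) { this.v = v; }
    public double value() { return v; }
}

public class Quantity {
    private QuantityBody body; // the hidden letter

    public Quantity(int v) { body = new IntegerBody(v); }

    // The envelope can swap its body at runtime without clients
    // noticing -- the same separation of interface from
    // implementation that DIP asks for.
    public void becomeReal(double v) { body = new RealBody(v); }

    public double value() { return body.value(); }
}
```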

Extensibility and maintenance

Once your product is a hit and loved by users, your next task becomes extending it and adding more features. At this stage, all the -ilities, such as scalability, performance, security, and extensibility, become more important. Each team can decide which practices and principles are appropriate and follow them. Agile methodologies encourage collective ownership and pair programming, which can spread knowledge and skills, though there are alternatives such as code reviews and mentoring. I think having a technical lead who ensures overall quality and keeps the bar high for the rest of the developers can help with extensibility and maintenance.

Test-driven development

Bob Martin has been an adamant proponent of test-driven development with his three laws of TDD. I blogged about the original debate between Cedric Beust and Bob Martin back in June 2006 and showed how Bob Martin's position was not pragmatic. This reaction has been echoed by Joel, Jeff, Cedric, and Jim, who agree that 100% coverage is unrealistic, and lately more and more people are joining this group; I recently watched an interview with Luke Francl, who expressed similar sentiments. In an earlier blog post, I wrote about the various types of testing required for writing reliable, enterprise-level software. One of the selling points of unit testing has been the ability to refactor with confidence, but I have also found that too many unit tests stifle refactoring because they require changing both the code and the tests. Testing only public interfaces, or testing at a slightly higher level without depending on internal implementation, can produce reliable software that is also less prone to breakage during refactoring (see the sketch below). No one disputes the value of testing, but it needs to be practical.
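
To make that concrete, here is a hedged JUnit 4 style sketch; the Cart class and its methods are hypothetical. The test pins down observable behavior only, so the internal representation can be refactored freely without breaking it:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;
import java.util.ArrayList;
import java.util.List;

// Hypothetical class under test.
class Cart {
    // Internal detail: could be refactored into a Map of quantities
    // without breaking the test below.
    private final List<Double> prices = new ArrayList<>();

    void add(double price) { prices.add(price); }

    double total() {
        double sum = 0;
        for (double p : prices) sum += p;
        return sum;
    }
}

public class CartTest {
    @Test
    public void totalSumsAddedItems() {
        Cart cart = new Cart();
        cart.add(9.99);
        cart.add(5.01);
        // Assert only on public behavior; no reflection into private
        // fields, no assumptions about the internal list.
        assertEquals(15.00, cart.total(), 0.0001);
    }
}
```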

Dependency management

A number of the SOLID principles, and extended principles such as the reuse and dependency principles, mandate removing or minimizing dependencies between packages. Those principles were conceived in a C++ environment that historically had problems with managing dependencies. Again, the problem comes from a religious attitude toward these principles. For example, I have worked at companies where different teams shared their code as Java jar files, and it created dependency hell. I have used JDepend in the past to reduce such dependencies, but it is not needed in every situation. I am currently building a common platform that will be used by dozens of teams, and I intentionally removed or minimized static dependencies and instead exposed services (see the sketch below).
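
As a rough sketch of what I mean (the endpoint, URL, and response format here are invented for illustration), consumers depend on a small network contract instead of compiling against another team's jar:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Instead of linking against another team's jar, depend on a small
// service contract and call it over the network. The URL scheme and
// plain-text response are made up for this example.
public class UserDirectoryClient {
    private final String baseUrl;

    public UserDirectoryClient(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    /** Fetches a user's display name from the directory service. */
    public String displayName(String userId) throws Exception {
        URL url = new URL(baseUrl + "/users/" + userId + "/name");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(2000); // fail fast if the service is down
        conn.setReadTimeout(2000);
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            return in.readLine(); // hypothetical one-line response
        } finally {
            conn.disconnect();
        }
    }
}
```

The consuming team now only needs the contract to stay stable; the platform team can redeploy its implementation without forcing anyone to rebuild or relink.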

Conclusion

Unfortunately, in the software industry new buzzwords appear every few years, often used to foster consulting or sell books. There are lots of books on principles, best practices, and the like, and the entire agile movement has turned into prescriptive methodologies as opposed to being pragmatic and flexible based on the environment. People like Uncle Bob apply these principles in draconian fashion, always stressing 100% test coverage or always keeping interfaces separate from implementations (DIP). Such advice may bring in more money from consulting work or publications, but it is disingenuous for practical use. In the end, people matter more in software development than any particular technology or principle, and you need good judgment to pick the practices and solutions that suit the problem at hand.
