What Are You Optimising For?

Very few software designs/architectures are completely and objectively bad. Most designs optimise for something. Here’s a [probably incomplete] list of things that you could optimise for in software architecture:

  • Developer Time Now
    When we say we’re doing something “quick and dirty”, or that we are taking on “Technical Debt”, we are optimising for developer time now.
  • Developer Time Later
    This is what people traditionally mean by “good” code. A developer wants to write code that will be easy to work with several years from now. This is based on the perception that most of a developer’s working hours are spent maintaining an existing system.
  • Product Iterations
    In simple terms this means building something that is highly flexible and is likely to meet a broad range of future (as yet unknown) requirements. By making the system flexible we reduce the number of iterations we are likely to need before the customer is happy with the product.
  • Testing Time
    While optimising for Product Iterations means making a flexible product, optimising for testing time is almost the exact opposite. The ideal system to test is one with a single button labelled “Do {Something}” which, when clicked, does exactly what it says it will. The more options, configuration, flexibility and programmability you introduce to a system the more time it will take to test to a reasonable level of satisfaction.
  • Backwards Compatibility
    This means designing a system to accommodate existing users who potentially don’t want to have to think about the new features, or who perhaps have lots of existing data in a structure that is not conducive to the new features you want to introduce. By optimising for backwards compatibility, we’re building something in a way we wouldn’t normally like, but that will make it easier to roll out.
  • Deployment Risk
    In these days of hosted services and sandboxed apps this is becoming less of an issue. But to anyone who’s had to roll out an upgrade to a user community where you don’t know exactly what data they have or what environment it’s running in, this will be familiar. Depending on the context you might make users go through long-winded upgrade procedures, or pass up on framework upgrades and new technologies, just to be sure that your next version will not fall over when it hits real-world user machines.
  • Deployment Effort
    Sometimes the solution you want to build will be a nightmare to setup in production. It may require lots of servers, or perhaps a combination of services for caching, queuing, persistence, messaging and load balancing. In some cases, you may want to spend extra development effort building automated installation tools or even take a performance hit so that rolling it out is easier.
  • Support Time
    Sometimes it’s preferable to take extra developer and testing time to introduce automated installations, lots of logging and diagnostics, remote crash reporting, user friendly failure messages with troubleshooting guides, extra documentation, lots of context sensitive sensible defaults or reduced flexibility. All so that it’s less work to support once it goes out the door.
  • End User Learning Curve
    There’s often a market for an entry-level version of an app that maybe doesn’t have all the power of the competitor products but which is easy to pick up and use. Alternatively, you could trade development and testing time and build an easy-to-use product that has a lot of flexibility behind the scenes for the power users.
  • End User Usage Effort
    If you’ve ever used VIM, you’ll understand what I’m getting at. VIM sacrifices End User Learning Curve in order to optimise for End User Usage Effort. Sometimes developer and tester time is spent building hundreds of pieces of special-case business logic that will make the system do exactly what the user needs exactly when they need it.
  • Performance
    This often means sacrificing Developer Time Now/Later in order to make the system run faster. For example, using an off-the-shelf ORM tool will reduce the code you need to write and maintain, but to get that extra performance people often resort to hand-crafting the database queries they need.
  • Scale
    The ability to handle millions of users or terabytes of data often comes at the expense of developer time, deployment effort or single-user performance. If you really need to handle that scale then you’ll have to sacrifice something else on the list to do it.
  • Budget
    If you have very deep pockets then most other problems can be made to go away. If you’re willing to build your own data centre with 10,000 servers in it and an army of people to keep your site running, then you can defer having to think about writing scalable code or deployment effort. This is sometimes a logical approach. It is not always true, though: a baby takes 9 months no matter how many women you have.
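The ORM trade-off mentioned under Performance can be made concrete. Here’s a minimal sketch (the tables and data are invented for illustration, using plain sqlite3 rather than a real ORM): the first function does the simple, ORM-style thing of one query per entity (the classic N+1 pattern), while the second hand-crafts a single joined query. Both produce the same answer; the second takes more SQL to write and maintain but makes one round trip instead of N+1.

```python
import sqlite3

# Toy schema and data, invented for this example.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
INSERT INTO orders VALUES (1, 1, 10.0), (2, 1, 15.0), (3, 2, 7.5);
""")

def totals_orm_style(conn):
    """ORM-ish: trivial per-entity queries, quick to write, N+1 round trips."""
    result = {}
    for cid, name in conn.execute("SELECT id, name FROM customers"):
        (total,) = conn.execute(
            "SELECT COALESCE(SUM(total), 0) FROM orders WHERE customer_id = ?",
            (cid,),
        ).fetchone()
        result[name] = total
    return result

def totals_hand_crafted(conn):
    """Hand-crafted: one join, more SQL to maintain, a single round trip."""
    return dict(conn.execute(
        "SELECT c.name, COALESCE(SUM(o.total), 0) "
        "FROM customers c LEFT JOIN orders o ON o.customer_id = c.id "
        "GROUP BY c.id"
    ))

print(totals_orm_style(conn))     # {'Ada': 25.0, 'Grace': 7.5}
print(totals_hand_crafted(conn))  # {'Ada': 25.0, 'Grace': 7.5}
```

Neither version is “right” in the abstract: the first optimises for Developer Time Now, the second for Performance.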

I find that conversations between engineers about the best architecture to use are always easier and less emotive when people are clear about what they’re optimising for, why, and what they’re willing to sacrifice in the process. It’s helpful to keep asking: what are you optimising for?