When teams develop software, they typically use tools from other vendors to support their chosen process. Those tools capture data during development that can be used for reports and analysis, giving some insight into the team’s capability. We can answer questions like “how long did this bug take to close?” or “how long after this work item was created was it marked as completed?”.
The statistic most commonly analyzed by agile teams is “team velocity”, which measures how much your team can get done in one iteration (sprint). Managers love this statistic because it helps them gauge how efficient a team is, and it can be used to produce rough estimates of when a feature might become available.
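To make the statistic concrete, here is a minimal sketch of how velocity is usually computed and used for forecasting. The story-point numbers are purely illustrative:

```python
import math

# Story points completed in each of the last four sprints (illustrative).
completed_points = [21, 18, 25, 22]

# Velocity: average points completed per sprint.
velocity = sum(completed_points) / len(completed_points)

# Rough forecast: sprints needed to deliver a 60-point set of features.
remaining_points = 60
sprints_needed = math.ceil(remaining_points / velocity)

print(f"velocity = {velocity}, forecast: {sprints_needed} sprints")
```

Note that this forecast assumes the team, the sprint length, and the definition of “done” all stay constant, which is exactly the assumption the rest of this post questions.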
However, there is a metric far more important to your business, and to measure it correctly we need to redefine, or at least clarify, a regularly misunderstood word in development processes: “done”. Too many teams I encounter work like this:
- Business stakeholder has an idea
- Idea is placed in product backlog
- Idea is pulled off backlog (at some future iteration/sprint) and scheduled for completion
- Developer considers the task “done” and reports this in a standup meeting
- Developer starts work on the next task
- Tester finds bugs 2 weeks later
- Developer stops the current task, switches back to the old one, and fixes the bugs
- Months from now, someone does a production deployment that includes the feature, and users (as well as business stakeholders, unfortunately) see it for the first time
The time that elapses between the first and last step above is known as cycle time. This is an important statistic because it measures how long it takes to go from an idea to that idea being available to users. Only when the last step is completed is a feature truly “done”, yet because most processes lack embedded quality and deployment verification, a team or individual’s efficiency is often measured by omitting everything after the fourth step above.
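Measuring cycle time is mostly a matter of recording two timestamps per work item. Here is a minimal sketch, assuming each item records when the idea was captured and when it reached production; the field names and dates are illustrative, not from any particular tracker’s API:

```python
from datetime import datetime

# Each work item records idea capture ("created") and production
# availability ("deployed"). Dates are illustrative.
items = [
    {"id": 1, "created": datetime(2013, 1, 7), "deployed": datetime(2013, 3, 1)},
    {"id": 2, "created": datetime(2013, 1, 14), "deployed": datetime(2013, 2, 4)},
]

def cycle_time_days(item):
    # Cycle time spans the whole process: idea -> available to users.
    return (item["deployed"] - item["created"]).days

average = sum(cycle_time_days(i) for i in items) / len(items)
print(f"average cycle time: {average:.1f} days")
```

The key point is which timestamps you pick: measuring from “development started” to “developer says done” produces a flattering but misleading number, because it omits analysis, testing, and deployment.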
It doesn’t matter that your team has developed 20 new features if they aren’t available to users, and until they have sufficient acceptance tests they can’t be made available without significant disruption to ongoing work. This is the lean manufacturing problem of inventory sitting on a shelf: it cost something to create and store, but it isn’t being used. We can optimize cycle time by measuring and improving every part of the process between the start and end of a cycle.
Reducing cycle time is a key tenet of continuous delivery, which seeks to automate and gate every phase of your development process, with the goal of improving an organization’s efficiency at delivering quality features to its customers. There are many ways to improve cycle time; I’ll start by talking about analysis and acceptance.
Analyze and accept during the sprint
Many development teams attempt to do requirements analysis on features while they are still on the backlog, before they have been added to a sprint. This is a mistake for a couple of reasons:
- It spends effort on a feature that has not been scheduled for implementation. The backlog exists to defer work until the last responsible moment, reducing waste and acknowledging that up-front design (waterfall) doesn’t work.
- It encourages managers to cram as much into a sprint as possible, on the assumption that all developers need to do is “write the code”, and it omits the cost of analysis when measuring overall efficiency.
In reality, a feature should be added to the backlog and prioritized there without any effort attached to it. When the item rises high enough on the list to be scheduled into a sprint, it is assigned to a developer, who works with a business analyst or tester during the sprint to write acceptance tests for the feature. These acceptance tests should ultimately be automated, but a tester should be able to describe in plain English what constitutes sufficient acceptance. Developers write the tests first, then write code to pass them, using test-driven development approaches.
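The flow above can be sketched in miniature. This is a hedged example, not a prescription: the `login` function and its behavior are entirely hypothetical, standing in for whatever feature the sprint delivers. The plain-English criterion comes first, the automated tests encode it, and the implementation is written to make them pass:

```python
# Acceptance criterion, written first in plain English by the tester:
#   "A registered user who logs in with a valid password reaches
#    their dashboard; an invalid password produces an error."

def login(users, username, password):
    # Hypothetical implementation, written AFTER the tests below,
    # with just enough behavior to satisfy the acceptance criterion.
    if users.get(username) == password:
        return "dashboard"
    return "error: invalid credentials"

def test_valid_login_reaches_dashboard():
    users = {"alice": "s3cret"}
    assert login(users, "alice", "s3cret") == "dashboard"

def test_invalid_password_shows_error():
    users = {"alice": "s3cret"}
    assert login(users, "alice", "wrong").startswith("error")
```

In practice these tests would run in your automated suite (pytest, for example, discovers `test_*` functions like these), so acceptance can be re-verified on every build rather than by a manual pass weeks later.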
Teams new to this approach will often schedule too much into one sprint. This is a learning experience: over time you will get better at scheduling smaller units of work and at describing features at a granularity a single developer can complete. During this adjustment period, expect that once analysis and acceptance are done, features added to a sprint will often turn out to be too large to finish, and will need to be split into smaller tasks on the backlog – scheduling only the ones that can be developed AND acceptance tested before the end of the current sprint.
This may seem like a trivial process nuance, but the goal is to continually deliver new features to your users as quickly and with as few defects as possible. That can only be done if the acceptance criteria for the feature are clear and there is a repeatable means of verifying them. Automated acceptance is a must here; manual testing means a longer cycle time.
Once you adopt this definition of “done”, you can examine all the pieces of your process that make up cycle time and optimize them. Managers and development leads love to suggest ways developers can be more efficient, but they rarely look for process improvements in business analysis, testing, and deployment. These are often more costly to cycle time than development itself, where opportunities for optimization tend to be limited by the skill of your people.
I’ll go into more detail about individual practices within your software delivery process that can reduce cycle time in future posts.