If you are a developer who writes code (yes, some don’t), you’ve inevitably been boxed into the “refactoring justification corner”. At some point you realize that a task you’ve been assigned affects more than just the code you thought it did, and that you’ve got a deeper design change to deal with.

Earlier in my career, when this would happen I was at product companies, and we would just work overtime, get help from another resource, or deliver late. When it started happening more often, we’d include “refactoring time” in our estimates. Both approaches were insufficient, and led to management viewing refactoring as “you didn’t do it right the first time”, and to us feeling like we were doing something wrong. This is a manufacturing-economy mindset, with fixed effort and materials, that doesn’t account for the reality of software projects. But we also had things to learn.

I see refactoring as falling into two distinct categories, and which type you are encountering has a big impact on your options for reacting when it pops up.

Functional refactoring

The first of these, functional refactoring, occurs when code you originally thought didn’t need to be touched creeps into the picture to complete functional requirements. Basically, if you don’t do this refactoring, you can’t make the feature work. The tension here is stronger when you work directly for the company making the product: as a dedicated resource you are usually regarded as an expert and held directly accountable for the effect of your work on the company as a whole. You made a rough estimate, got into the work, and found the effect on the design is bigger than you originally envisioned. Leaders who are uneducated in the realities of the trade see this as you not doing your job correctly.

Since I started consulting five years ago, I have been lucky to work for an employer (and still do) who understands that this is simply the nature of the beast, and we have processes in place to deal with it. When you aren’t intimately familiar with a codebase, or even with a set of classes in a codebase you work in all the time, the rough estimate is just that: rough. As a development or project manager, part of your job is to instill in your culture the understanding that building software is not like building a house: we use materials that are unproven, try to meet requirements whose goals conflict with one another, rely on personnel whose skills are evaluated subjectively, and encounter architectural “works of art” at times. Our statements of work require clients to acknowledge the possibility that complexity may change during the engagement.

When this happens on a consulting engagement, we ask ourselves: can I do the extra work without disrupting the estimates for my next tasks? If so, we just do it. If not, we schedule time to meet with the client and explain the situation. At this point, we offer an estimate for the additional work and give them the chance to either pay for the change or opt not to do it. On large projects, we will occasionally give clients this work without additional fees, but it only happens once or twice, regardless of the project’s size. Otherwise we get into a situation where many small changes add up to one big chunk of unpaid work.

At a product company, your process needs to align with the realities of the trade in much the same way. Personnel should know that software development is one of the most unpredictable jobs in the world, and that they must be prepared to allow extra time for tasks that turn out to have a greater cost. To do this properly, the organization must embed this into its culture, and developers have to feel safe communicating it without being reprimanded. If development leads say it’s OK to report discovered extra effort, but then ridicule their developers every time they do, they lose the developers’ respect and will have a hard time keeping their trust through future mandates or cultural changes.

The bottom line is that the business should be able to make fact-based decisions about what to pursue without expecting heroics to save them when unplanned complexity occurs. If a task that was estimated at 1 week blows up into something that takes 4, divide the new work into smaller units and throw what can’t get done that iteration back onto the backlog. If the business can’t afford the overall effort at all, assign the developer a new task and move the remaining work to the bottom of the backlog. Agile and Scrum processes allow businesses to react quickly to market and technical changes – they do not predict the future or prevent development teams from encountering unknown complexity.

Cross-cutting refactoring

The second type of refactoring we encounter relates to nonfunctional requirements or patterns.

Before you start iteration one of your project, your business analysts or customer stakeholders should have requirements for the nonfunctional aspects of the system. These include things like maximum response time (pages should load in under 2 seconds), capacity (the system should support 1,000 requests per second to page x without causing other performance requirements to be exceeded), auditing (all changes to data should record who made the change, when, and what was changed), and archiving strategy (when old data is purged). Testers should help developers create automated acceptance criteria at the beginning of the project that run during later phases of your build process to ensure these are being met. You’ll need a separate environment that is a clone of production to measure these accurately.
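To make the idea of automated acceptance criteria concrete, here is a minimal sketch in Python. The `assert_completes_within` helper and the `render_page` stand-in are hypothetical, not from this article; in a real build pipeline the stand-in would be an actual request against your production-clone environment.

```python
import time


def assert_completes_within(fn, max_seconds, *args, **kwargs):
    """Run fn and fail the build if it exceeds the agreed nonfunctional budget."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed = time.perf_counter() - start
    if elapsed > max_seconds:
        raise AssertionError(
            f"{fn.__name__} took {elapsed:.3f}s, budget is {max_seconds}s"
        )
    return result


def render_page():
    # Stand-in for a real HTTP request to the page under test.
    time.sleep(0.05)
    return "ok"


# Acceptance criterion from the requirements: pages load in under 2 seconds.
assert_completes_within(render_page, max_seconds=2.0)
```

Because the budget lives in one check that runs on every build, a capacity or response-time regression surfaces immediately instead of during a late refactoring scramble.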

Cross-cutting refactoring can also occur when patterns are not established prior to starting the project (or as part of the first few iterations). Things as crucial as your validation approach, error-handling approach, data-access strategy, dependency-injection integration points, and security model should be established before any other features start getting built.
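As one possible shape for such a pattern, here is a sketch of a centralized validation approach in Python. The `validate` entry point and the `required`/`max_length` rules are illustrative assumptions, not the article’s implementation; the point is that every form declares rules against one shared contract, so a later change to how errors are collected touches one function instead of 50 forms.

```python
from dataclasses import dataclass, field


@dataclass
class ValidationResult:
    errors: list = field(default_factory=list)

    @property
    def is_valid(self):
        return not self.errors


def required(field_name):
    """Rule: the field must be present and non-empty."""
    def rule(data, result):
        if not data.get(field_name):
            result.errors.append(f"{field_name} is required")
    return rule


def max_length(field_name, limit):
    """Rule: the field must not exceed the given length."""
    def rule(data, result):
        value = data.get(field_name) or ""
        if len(value) > limit:
            result.errors.append(f"{field_name} exceeds {limit} characters")
    return rule


def validate(data, rules):
    """Single entry point every form goes through; change it once, all forms follow."""
    result = ValidationResult()
    for rule in rules:
        rule(data, result)
    return result


# A form declares its rules instead of hand-rolling its own checks.
signup_rules = [required("email"), max_length("email", 254)]
result = validate({"email": ""}, signup_rules)
```

Establishing a seam like this early is what keeps the “50 existing forms” scenario from becoming a cross-cutting rewrite later.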

The reason for this early priority on patterns and nonfunctional requirements is that refactoring to meet cross-cutting requirements is among the most expensive kinds to encounter, because it typically impacts most code assets in one or more layers or silos of your system’s architecture. If you’ve already built 50 forms and then change (or finally settle on) your validation approach, you’ve got 50 existing assets that have to be massaged into following the pattern. The forms may have been implemented in a way that was simple enough for the initial requirements but is not sufficient for the new cross-cutting ones. If you establish these cross-cutting requirements up front, the pattern is available to follow from the beginning of any task that touches it, reducing the waste of existing incompatible implementations.

As a development lead or manager, it is your responsibility to ensure that time is spent identifying cross-cutting patterns as early as possible in the project. Leverage your business analysts for the nonfunctional requirements, and leverage your developers to identify the patterns. If you don’t, it is no fault of your developers’ that, as you introduce these into the backlog, “visible progress” on the project may slow to a standstill while they are implemented.

It is for this precise reason that nonfunctional requirements, and the establishment of patterns, need to be backlog items that can be prioritized in their own right. This gives the business the power to decide whether it is more important to accept credit card payments (functional) or to allow 1,000 simultaneous requests (nonfunctional). Refactoring is an important tool to use as necessary when you know which kind you’re dealing with and why it has occurred. The better you get at understanding its causes, the more comprehensively you can plan for a smooth delivery cycle as the iterations of your project progress.