Over the past 11 years, .NET technology, fueled by Microsoft’s ability to deliver sophisticated development tools, has arguably ruled the enterprise business software landscape. IntelliSense, drag-and-drop UI design, XML configuration, dependency injection, unit testing, IDE extensibility APIs for third-party controls, continuous integration, and more attempt to ease the adoption of Agile and Scrum processes as the Visual Studio IDE supports more and more of these features through wizards and templates. .NET started as a better version of Java, and as such inherited many of Java’s powerful capabilities, but also the limitations of that development and deployment approach.
Visual Designers for the 20%
However, taking part in this new shift requires a different mindset. What if you had to do all of your user interface development without graphically previewing it first? A dirty little secret in many Microsoft shops is that we rarely use the UI designers anyway. Clients and customers are always asking for features that negate the productivity enhancements touted by RAD design and force us into the code to do more sophisticated things. I’ll argue that this is due to an inferior separation of concerns in ASP.NET, and not simply because you’re doing something more complicated. If your framework requires you to break patterns to do something complex, how good a framework is it? When a development tool only really shines for the minority of projects, you’re on the losing end of the 80/20 rule. When you design tools that focus on letting developers visually design things, you continue to treat UI assets as single units that encapsulate their behavior, presentation, and data. Modern frameworks that separate these concerns make it difficult (if not impossible) to visually represent things as they appear at runtime, but the tradeoff is an increase in productivity due to patterns that decouple responsibilities.
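To make the decoupling concrete, here is a minimal sketch in Ruby using only the standard library’s ERB. The names (`Invoice`, `render`) are hypothetical and purely illustrative; the point is that the data object knows nothing about its presentation, and the template knows nothing about behavior:

```ruby
require "erb"

# Plain data object -- knows nothing about how it is displayed.
Invoice = Struct.new(:number, :total)

# Presentation lives in a template, entirely separate from the data.
TEMPLATE = ERB.new("Invoice <%= invoice.number %>: $<%= format('%.2f', invoice.total) %>")

def render(invoice)
  TEMPLATE.result_with_hash(invoice: invoice)
end

puts render(Invoice.new("A-1001", 42.5))  # Invoice A-1001: $42.50
```

Because neither piece embeds the other, either can change (or be tested) independently, which is exactly what a designer-centric, code-behind model makes hard.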
Interpretation vs. Compilation
What if you don’t compile your project to test it out? There are legitimate applications that require compilation due to performance constraints. But if quality is the concern, the efficiency enhancements afforded by these frameworks coupled with a disciplined automated testing approach negate this concern.
Document what you hope to design vs. what’s built
What if you don’t create requirements documents, but rather rapidly implement and write tests to serve as the documentation for what the system currently does? We already know from years of Scrum and Agile debates that documenting a system up front more often than not results in bad designs, slipped deadlines, and stale documentation. Most customers and clients are not system analysts, and as such can’t be expected to communicate all of their needs on the first try. A picture is worth a thousand words, and we’ve all been in the meeting where the customer advocate is shown what was built from their design and realizes things are missing that they didn’t communicate. Doesn’t it make sense to use a development process that encourages and adapts to this situation instead of fighting it?
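As a sketch of what “tests as documentation” looks like, here is a small Minitest example (Minitest ships with Ruby). The `Cart` class and its behaviors are invented for illustration; what matters is that the test names read as statements of what the system currently does:

```ruby
require "minitest/autorun"

# Hypothetical domain object -- the names are illustrative only.
class Cart
  def initialize
    @items = []
  end

  def add(price)
    @items << price
  end

  def total
    @items.sum(0.0)
  end
end

# Each test name documents a current behavior of the system.
class CartBehavior < Minitest::Test
  def test_a_new_cart_totals_zero
    assert_equal 0.0, Cart.new.total
  end

  def test_total_is_the_sum_of_item_prices
    cart = Cart.new
    cart.add(9.99)
    cart.add(5.01)
    assert_in_delta 15.0, cart.total
  end
end
```

When a stakeholder changes their mind, the test suite is updated alongside the code, so the “documentation” can never silently go stale the way an up-front spec does.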
Contrary to popular belief, pulling this off requires being a better developer, one who follows patterns even more rigorously than before. Developers also need to communicate with stakeholders more, and incrementally deliver *tested* features. Improving a developer’s ability to communicate has all kinds of other benefits as well, such as the ability to clarify requests, think outside the box, and generally be more pleasant to work with.
If the thought of letting your tests provide your documentation sounds crazy, tell that to Sara Ford.
Get better at learning your framework instead of fighting it
We’ve all been in the code review where someone implemented an API call that’s already in the .NET Framework. If we’re honest with ourselves as developers, we really don’t keep much of the .NET technology stack in our heads; we just know how to use Google well. If we were able to reduce the number of patterns and APIs used in our solutions, we could retain that knowledge and know the best way to leverage the framework to do what we need instead of fighting it. ASP.NET MVC and Rails both exhibit this, and I’ll argue Rails does a better job. ASP.NET MVC won’t complain if you make the mistake of throwing a bunch of logic in your view, whereas in Rails you really have to fight the framework to instantiate classes in a view. As DHH says, “constraints are liberating” (start at 3:50 in).
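The constraint Rails pushes you toward is extracting decisions into helpers so the view stays declarative. Here is a minimal sketch using plain ERB from Ruby’s standard library; `due_label` and `render_status` are hypothetical names standing in for a Rails view helper and the framework’s render step:

```ruby
require "erb"

# Hypothetical helper: the branching logic lives in Ruby, not the template.
def due_label(days_overdue)
  days_overdue > 0 ? "OVERDUE by #{days_overdue} days" : "on time"
end

# The template only calls the helper; there is no logic to test in the view.
VIEW = ERB.new("Status: <%= due_label(days) %>")

def render_status(days)
  VIEW.result(binding)  # binding exposes `days` and `due_label` to the template
end

puts render_status(3)  # Status: OVERDUE by 3 days
puts render_status(0)  # Status: on time
```

Because the logic is an ordinary method, it can be unit tested directly; the view degenerates to a substitution, which is the “liberating constraint” in practice.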
If you could challenge yourself with one technical hurdle this year, would you rather learn another API? Perhaps a way to interface with a new business system? Or would you rather experience a shift in your approach to development that has long lasting, measurable effects on your ability to deliver rapid value and makes you more attractive to established companies and startups? To do so requires several things.
- Approach these new patterns and capabilities without attempting to compare them to existing methods. As humans we love to do this, but often we get caught up in analysis paralysis or think we’ve “got it” when we grasp just one of the many innovations in these newer frameworks.
- Do not declare competence with these frameworks until we’ve actually grasped them in their entirety. Learning Rails without understanding caching, convention-based test mocking, or the plugin architecture is like learning C# but ignoring generics and lambda expressions.
- Don’t try to figure out how to shim legacy .NET patterns into these frameworks. You wouldn’t expose a low-level communication protocol through a web service or REST API where clients are expected to allocate byte arrays, so why would you try to host a third-party ASP.NET control in MVC or access a database using T-SQL from an MVC view? Sure, you can do it, but you’re missing the point, and that is to embrace new patterns and learn to abstract the old way of doing things. We’ve been abstracting with .NET for years; now let’s see if we can do it when legacy .NET patterns are what we’re abstracting.