Archive for the ‘Agile Principles’ Category.

Technical debt: the good, the bad, and the okay

Reviewing the community wisdom on technical debt, I am struck by the variety of definitions of the term, which are not merely variations on a theme—they are entirely contradictory concepts. I’m starting to think that we should abandon the term technical debt altogether in favor of more accurate labels.

The debt metaphor is appealing because it offers a way to explain at the executive level that development may be building up software liabilities along with software assets. However, I worry that the metaphor may also be offering some undeserved normalcy to process problems: Businesses, even profitable ones, are used to taking out lines of credit to support operations. Even more insidiously, arguments in favor of good kinds of “technical debt” are easily misused to defend bad kinds, when they really have little in common.

My point here is not merely to offer a taxonomy of technical debt, as others have attempted already (Martin Fowler; Steve McConnell). I don’t think these should be seen as variations on a common theme, but rather as distinct situations with distinct considerations. One definition leads to something that I think is always good; another leads to something that I think is always bad; and yet another leads to something that I think is okay if you’re careful.

The good: releasing early

As originally coined, technical debt referred to being willing to release software early even with an imperfect understanding of the domain, or with an architecture insufficient for the eventual aims of the product (Ward Cunningham; Bob Martin). We know that we’ll have to refactor the domain model as our understanding matures, or else live with gradually slowing development as our domain model grows increasingly out of sync with reality. But we gain a tremendous advantage in the process: We get feedback earlier, which ultimately means we will better understand both the domain and the actual needs of users.

Ward described this situation using a debt metaphor: The benefit we gain by releasing the software early is like taking out a loan that will have to be repaid by refactoring the domain model later, and we will be paying interest in the form of gradually slowing development until that refactoring happens. Ward’s intention was primarily to explain the refactoring he was doing on a project, not so much to justify releasing early.

The debt metaphor was clever, but I don’t think it actually fits. If we don’t release early, does that mean we’re not going into debt? We still have an imperfect understanding of the domain; we still have only a rudimentary architecture. We still have to refactor as our understanding matures in order to avoid slowing development. We have all the negative aspects of debt (the obligation to repay, the interest until we do) with none of the positive: the up-front benefit of early feedback.

Releasing early mitigates our existing liabilities rather than adding to them, and therefore it should not be described as a kind of technical debt. It’s almost always a good idea, but it does take discipline to make it successful: We have to actually listen to the feedback and be willing to change course based on it, and we have to keep the code clean and well-tested at all times so that we can change course as needed.

The bad: making a mess

Most people seem to define technical debt in terms of messy or poorly-tested code. This seems to be pretty universally acknowledged as bad, but it is often regarded as a necessary evil: In order to release on time (never mind early!), we have to cut corners, even though we know it will cost us later in slowed development and lots of bug-fixing time. In practice, of course, we all know how this turns out: The cut corners lie in wait as traps for future development, and the bugs pile up to an unmanageable degree.

If this is debt, it most resembles credit-card debt. It’s easy to take on lots of debt, and the interest is steep and compounds quickly. A credit card can be safe if you actually pay it off in full every month—in technical terms, refactor your code and make sure your tests are solid after every task or (at most) after every story. Otherwise the balance grows quickly and gets out of hand sooner than many of us would like to admit.

But it’s actually worse than that. Most of us have stories of nasty bugs hidden in messy code that took weeks (or longer) to figure out and fix. Debt can pile up, but at least it’s well-behaved; you know what the interest rate is. The problem we’re talking about now is actually a kind of unbounded risk: Any day, a new requirement can come in or a critical bug can be hit that lands in the middle of a poorly-designed and poorly-tested part of the code, potentially requiring a wide-reaching rewrite. Steve Freeman has argued that this is more like selling an unhedged call option: It can be exercised at any moment, putting you at the mercy of the market.

I would further argue that code that is poorly factored or tested should be considered work in process, and subject to the team’s work-in-process limits. A story simply isn’t done until the code is clean and demonstrably correct. I define a defect as anything short of being done: If the code is messy, doesn’t have tests, or simply doesn’t do what the customer actually wanted, those are all examples of defects.

Ideally we would catch such defects before calling the story done in the first place; if so, we don’t take on additional work until that existing work is done. If we declare a story to be done and then discover later that it really wasn’t done, either by someone reporting a bug or by developers finding messy or untested code, we should effectively add the affected story back into our current work-in-process and fix the defect before taking on additional work. Looked at this way, prioritization is meaningless for defects: The story was already prioritized some time ago; the defect is merely a new task on an old story, which is by definition higher priority than any current or future story. (Just be careful to weed out feature requests masquerading as bug reports!)
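
To make that concrete, here is a minimal sketch (in TypeScript, with names invented for this post) of the rule that a defect sends its story straight back into work-in-process, bypassing prioritization and blocking new work once the WIP limit is reached:

```typescript
// Hypothetical model: a defect is a new task on an old story, so the story
// re-enters work-in-process ahead of anything not yet started.

type Status = "not-started" | "in-process" | "done";

interface Story {
  id: string;
  status: Status;
}

class Board {
  constructor(private stories: Story[], private wipLimit: number) {}

  private wipCount(): number {
    return this.stories.filter((s) => s.status === "in-process").length;
  }

  // New work may only start while we are under the WIP limit.
  startNext(): Story | undefined {
    if (this.wipCount() >= this.wipLimit) return undefined;
    const next = this.stories.find((s) => s.status === "not-started");
    if (next) next.status = "in-process";
    return next;
  }

  // A defect report means the story was never really done: it goes straight
  // back into work-in-process, with no re-prioritization step.
  reportDefect(storyId: string): void {
    const story = this.stories.find((s) => s.id === storyId);
    if (story && story.status === "done") story.status = "in-process";
  }
}
```

There is nothing clever in the sketch; the point is that reopening a story consumes WIP capacity, so the defect naturally gets fixed before anything new begins.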

When faced with time pressure, cutting corners amounts to taking on debt or risk for the company without accountability. Instead, we have many options for explicitly cutting away pieces of work by splitting stories and prioritizing the pieces independently (Bill Wake). Each piece that actually gets prioritized and implemented needs to have clean code and tests to be considered done; the beauty of splitting stories is that you can have piece #1 done and piece #2 not started, instead of having both #1 and #2 started but not done, with a similar amount of effort.

The okay: running an experiment

William Pietri has offered the only interpretation of technical debt that I think actually fits the debt metaphor: there is an actual benefit now, an actual promise to pay the debt at a definite point in the future, and no unbounded risk. The idea is that it’s okay to have some messy or untested code running as an experiment as long as it is actively tracked and all stakeholders agree that the code will be deleted at the end of the experiment. If the experiment gives promising results, it will generate new ideas for stories to be prioritized and implemented properly.

The benefit of this approach is very rapid feedback during the experiment. It truly counts as a valid debt because the business agrees to it, and agrees to pay it back by allowing the code to be deleted when the experiment is completed. Since the developers know the experimental code is short-lived, they naturally keep it isolated from other code so that it is easy to remove later; there is no chance that some unexpected new requirement or reported bug will incur any risk beyond simply ending the experiment and deleting the experimental code early. An active experiment counts as work in process as well, so the team should agree on a limit on the number of active experiments at any one time.
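
To make the isolation concrete, here is a minimal sketch (TypeScript, with file and function names invented for this post) of one way to keep an experiment deletable: all the experimental logic lives in a single module, and production code touches it through a single flagged call site.

```typescript
// experiment/recommendations.ts (hypothetical module; stakeholders have agreed
// it will be deleted when the experiment ends). Nothing else imports it.

export const RECOMMENDATIONS_EXPERIMENT = true;

export function experimentalRecommendations(orderIds: string[]): string[] {
  // Quick-and-dirty logic is tolerable here precisely because this whole file
  // is scheduled for deletion.
  return orderIds.slice(0, 3).map((id) => `related-to-${id}`);
}
```

```typescript
// checkout.ts (production code): the experiment's only call site.
import {
  RECOMMENDATIONS_EXPERIMENT,
  experimentalRecommendations,
} from "./experiment/recommendations";

export function recommendationsFor(orderIds: string[]): string[] {
  // Ending the experiment means deleting this branch and the import above,
  // then deleting the experiment module itself.
  return RECOMMENDATIONS_EXPERIMENT ? experimentalRecommendations(orderIds) : [];
}
```

Because nothing else depends on the experimental module, a new requirement or bug that lands nearby never forces a wide-reaching rewrite; the worst case is ending the experiment a little early.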

Conclusion

I’m sure there are other definitions of technical debt out there, but these seem to be the big ones. Releasing early was the original definition, and I don’t think it represents debt at all. It’s simply a good idea. Making a mess is the most common definition, and while it’s a little like a debt, it’s even more like the unbounded risk of an unhedged option. It’s always a bad idea. Running experiments is a new but important definition, and actually fits the meaning of debt. It’s okay as long as you track it properly.

Reviewing a draft of this post, William Pietri offered a refinement of the debt metaphor for messy, untested code: “Instead of thinking of it as an unhedged call, one could also talk of loan sharks. If one has substantial gambling debts to a capricious mobster, then one comes to expect erratic demands for extra payments and menacing threats.” I like it: Imagine your programmers making shady backroom deals to get their work done under schedule pressure, and then being hassled by thugs when it comes time to work with that code again.

What do you think? Are there other metaphors that might work better than debt? Do you agree or disagree with my good/bad/okay evaluations?

Avoiding the knowledge transfer bottleneck

In software development there are many ways to transfer the knowledge about how to build a product to the people who do the actual building. Production can be severely hampered, however, if that knowledge is being produced more rapidly than it can be consumed. This is the knowledge transfer bottleneck.

I recently hosted a workshop that let participants experience three different ways of transferring knowledge in a production environment. The product, in this case, was a paper airplane of unusual design. The idea was to try different ways of transferring the knowledge about how to build the airplane from the “chief designer” (me) to the production workers, and to compare the relative productivity of the different methods, which were:

  • Documentation – The workers were given written instructions (22 steps’ worth) for building the airplane.
  • Reverse Engineering – The workers were given a completed airplane which they could study in order to reproduce the steps required to build it.
  • Mentoring – The “chief designer” built an airplane step by step and the workers replicated each step as it was performed.

The experiment was conducted in two phases. In the first phase, all 8 participants used the Documentation method. In the second phase, one team of 4 tried Reverse Engineering, while the other team of 4 tried Mentoring.

The results were interesting. Using the Documentation method, only one person out of a total of 8 even came close to building the airplane in the 5-minute period allotted.

Using the Reverse Engineering method, 1 person out of a total of 4 produced a completed airplane in 5 minutes.

Using the Mentoring method, each of 4 team members produced a completed airplane, and in less than the 5 minutes available.

The knowledge transfer bottleneck in software

In a software development effort, knowledge transfer takes place all the time, and it’s easy to imagine a software developer in the “chief designer” role described in the exercise above.

Let’s say I’m a developer who has discovered, and written the code to implement, a technique for binding some data to the controls in a user interface, and that this technique forms a pattern that my fellow developers want to know about. If you were one of my fellow developers, would you rather I (a) gave you a document I had written about the technique, (b) told you where the code was and suggested you figure it out for yourself, or (c) paired with you to implement the pattern for a new set of data?
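
Purely for illustration (this is my own minimal TypeScript sketch, not the actual code from any project), the data-binding technique in question might boil down to a tiny observable plus a helper that keeps a model field and an input control in sync:

```typescript
// A minimal two-way binding sketch: an observable value plus a helper that
// wires it to a text input. Names and API are invented for this example.

type Listener<T> = (value: T) => void;

class Observable<T> {
  private listeners: Listener<T>[] = [];
  constructor(private value: T) {}

  get(): T {
    return this.value;
  }

  set(value: T): void {
    this.value = value;
    this.listeners.forEach((listen) => listen(value));
  }

  subscribe(listener: Listener<T>): void {
    this.listeners.push(listener);
    listener(this.value); // push the current value immediately
  }
}

// Bind a string observable to an <input>: model changes update the control,
// and user edits update the model.
function bindToInput(model: Observable<string>, input: HTMLInputElement): void {
  model.subscribe((value) => {
    if (input.value !== value) input.value = value;
  });
  input.addEventListener("input", () => model.set(input.value));
}

// Usage: const name = new Observable(""); bindToInput(name, someInputElement);
```

A document can describe something like this, and reading the code can reveal it, but pairing on binding the next set of data is what transfers the judgment calls behind it.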

Now, certainly, pairing with you takes more of my time, and might seem less efficient from my viewpoint. After all, I could be off designing the next pattern, and the one after that. But the productivity of the team as a whole, rather than my personal productivity, is what’s important. And mentoring helps increase the team’s productivity by avoiding knowledge transfer bottlenecks.

Fad-proofing your Agile adoption

As Agile methods become more popular, they risk turning into a pointless fad. Here are my tips for avoiding that trap on your own projects.

Agile is the new black

A friend of mine is a founder at a startup that’s very Agile: weekly iterations, releasing 2-10 times a month, heavy test coverage, and lots of pair programming. He met for coffee with a friend of a friend, an executive in charge of building his company’s first significant piece of software. Part of the conversation went like this:

My friend: So, how are you developing your software?

Executive: We’re doing Agile!

My friend: That’s great! We started out two years ago with a typical Extreme Programming set of practices, but since then we’ve made a number of significant changes, including [list of new and modified practices]. And we’re still continuously improving. How about you?

Executive: We’re doing Agile!

My friend: [Sighs, shakes his head sadly.] Nice weather we’re having, isn’t it?

In my experience, this is becoming more and more common: somebody hears that Agile is what all the cool kids are doing, so they take a very shallow understanding of what that means and run with it. This rarely works out well, and always puts me in mind of somebody who goes to a buffet restaurant and makes a meal out of the desserts. It’s theoretically appealing, but the long-term costs are fearsome.

Fad-proofing your Agile adoption

Agile methods have become popular because there’s real value to them. However, just jumping on the bandwagon won’t do you any good. Take the time to think through what you’re trying to achieve. Do an Agile adoption poorly and people may become cynical enough that you won’t have another chance. Here are 10 things you can do to help:

  1. Focus on what works – Your team isn’t there to create process or make software. You’re there to deliver sustainable long-term value via software. Use that as your primary ruler to measure success.
  2. Don’t expect miracles – There are no silver bullets! Agile methods don’t fix your problems for you; they mainly make the problems obvious quickly, so you can solve them yourselves.
  3. Give yourself room – Thinking that “doing Agile” must be quick and easy, many teams don’t allow enough time to learn how to make the practices work well for them. Allow for an initial productivity hit, one that will get paid back richly down the road.
  4. Don’t get suckered – Now that “Agile” is a big buzzword, there’s an equally big incentive for consultants and authors to sell you Agile lite, the software process equivalent of cotton candy. Don’t fall for it!
  5. Don’t just “do Agile” – I’ll tell you a secret: there is no “Agile” that you can do. There is a collection of Agile methods; you can do any one of them. There are a whole host of practices to adopt. But unless you know enough to tell them apart, you don’t know enough to do any of them well.
  6. Get help from people who have done it – As a coach, I’m probably a little biased, but I think everybody who tries an Agile approach should involve somebody who’s done it before. Whether that’s a key player, a manager, or a coach doesn’t matter much. But it’s hard to learn Agile methods from books, let alone a couple of articles on the web.
  7. Track and pay down your debt – Part of an Agile adoption is raised standards. That leaves most people with a lot of cleanup to do. Make a list of all your code debt and testing debt, and have a plan for paying it off.
  8. Push power down – Agile methods do best in a context where teams are given broad latitude to solve problems on their own. Focus on supporting rather than directing your Agile teams.
  9. Improve continuously – An Agile method isn’t something like a turbocharger that you bolt on once and then use forever. If you aren’t having regular retrospectives and continually improving your process, you’re not Agile.
  10. Use your ignorance for good – One of the biggest mistakes people make is thinking that even though they’ve never used an Agile approach, they know enough to make big changes to it from day 1. Instead, accept that you’re trying something truly new to you, and act like it.

Feedback wanted

Have any war stories about the dangers of being fashionably Agile? How about tips on doing it right? I’d love your comments, and other readers would, too.

P.S. Hat tip to my old pal Christopher DeJong for the phrase “Agile is the new black”.

Mindful engineering

I’ve been pondering lately what was special about the teams I’ve been on that have felt the most effective to me. It wasn’t just a certain set of practices, though those were helpful. The essence, I think, was a certain openness to ourselves and to each other, a willingness to really look at what we’re doing and how what we’re doing affects the project, ourselves, and each other.

Retrospectives are a specific practice that provides regular opportunities for this kind of openness. But even more, when I’ve really felt productive there’s been a pervading attitude on the team that it’s always okay to take a moment to explore some interaction or event at a deeper, more personal level.

One day, on a team of four, we were programming in two pairs. At some point my partner and I noticed that the other pair had been reduced to a solo. We asked, “Where’s your partner?” and he replied, “I think he got upset with me.” My partner said, “Well, I hope we can talk about that later.” And the really amazing thing is: We did. The missing partner returned, described what had frustrated him, and we were all able to talk constructively about how to deal with that kind of frustration.

Another day, on another team, also while pairing, I noticed my partner withdraw and get quiet. I wasn’t quite sure why, though I was aware that we had been debating some technical point. She started to go on with the task, but I stopped to ask what was up, and she said, “I’m just not getting much ‘Yes, and…’ from you right now.” We had both learned the “Yes, and…” principle from improv classes (which I highly recommend), and it was an easy way to remind ourselves to respect and build upon each other’s ideas rather than only pursuing our own. This quick check-in in the middle of a task cut short what could have been a long period of frustration, leading instead to a productive, collaborative session.

As I said, this isn’t about specific practices as much as it is about a thorough willingness to talk to each other openly and to respect each other’s thoughts and feelings. Obviously, I think that Agile practices, especially retrospectives and pair programming (when done well), are a good way to foster this kind of openness. In fact, what first drew me into the Extreme Programming community was the openness to learning that it demonstrated.

In my short career up to that point, I felt that I wasn’t really learning much—I was already a pretty good programmer, but the really tricky problems on my projects were less about programming and more about communicating with each other and with the project’s stakeholders and about finding ways to keep everyone sane in the face of growing complexity and business uncertainty. These were the real challenges, but no one was really addressing them—until I found the XP community, who were meeting every month to talk about exactly these issues, and about what they were trying and what they were learning about them.

Now, of course, this kind of concern for how we work together is not the exclusive domain of Agile communities, though some of us often talk that way. The CMM community is likewise dedicated to trying new things and talking candidly about how they’re working, albeit with a more formal approach. The Space Shuttle software project case study described in the CMM book is actually quite an inspiring story of worker empowerment and process experimentation and improvement over time.

Why do many in the Agile community criticize the CMM, lumping it together with the straw man process known as “Waterfall”? I think that Waterfall is an example of formality in a process without authentic openness and introspection. When a manager looks at an undisciplined project and sees it spinning out of control, the most common reaction is to impose some control through mandated formality. The CMM may be used as a framework for that formality, even though the authors of the CMM themselves strongly warn against using it to impose formality without honest assessment and openness to continuous improvement. (They even have nice things to say about XP.)

I’ve been exploring the practice of mindfulness lately, and I think it nicely captures the kind of openness and introspection I’m talking about here. Mindfulness can be practiced through activities such as meditation, improvisation, journaling, and reflective listening, and has documented benefits for compassion, empathy, concentration, and even happiness itself.

I can easily see the appropriate level of formality varying based on the nature of the project, but this quality of mindfulness seems essential to any endeavor. Agile methods tend to be moderate to low on the formality scale, and—when they work—high on the mindfulness scale.

How are you mindful of your work? How are you not?

Take small steps and work close together

I think most Agile advice boils down to two things:

  • Take small steps.
  • Work close together.

When I’m struggling, I try to see if there’s any possible way to take smaller steps, or to work closer together with others on the project.

Take small steps to get to done early and often

In software development we work at many different scales, from high-level business goals to specific user actions all the way down to lines of code. Agile methods emphasize getting to done early and often at all of these levels:

  • We take one business goal at a time and make sure it’s met before tackling the next one (exemplified by the “sprint goals” of Scrum and “minimum marketable features” as described in the book Software by Numbers).
  • We implement one user-level feature in an end-to-end slice (from persistence to user interface) before going on to the next (the “user stories” of Extreme Programming).
  • We write and test a few lines of code at a time (the “test-driven development” of Extreme Programming).

At every level, the point is to get one thing “really done” before starting on the next thing.
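
To illustrate the smallest of those scales, here is a sketch of a single test-driven step in TypeScript (the function and test are invented, and I use Node’s built-in assert module only to keep the example self-contained): write one failing assertion, write just enough code to make it pass, clean up, and repeat.

```typescript
import { strictEqual } from "node:assert";

// Step 1: one small failing test that pins down the next bit of behavior.
function testSlugReplacesSpaces() {
  strictEqual(slug("Agile Principles"), "agile-principles");
}

// Step 2: just enough code to make that one test pass.
function slug(title: string): string {
  return title.toLowerCase().replace(/\s+/g, "-");
}

// Step 3: run the test, refactor if needed, and only then move on to the next
// small step (say, trimming punctuation) with its own failing test.
testSlugReplacesSpaces();
console.log("ok");
```

Each pass through that loop leaves the code “really done” for the behavior covered so far, which is exactly the property we want at the story and business-goal levels too.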

How do we know how small our steps should be? Experience and intuition are a starting point. If things are flowing along nicely it’s tempting to take slightly larger steps. But as soon as troubles start cropping up, a good bet is to take smaller steps: Breaking work down into smaller pieces while maintaining, or even strengthening, our definition of “done” helps to get things done cleanly and consistently.

On a recent project, the original plan was over-ambitious, allowing only a few months for what was turning out to be a couple years of work with the available resources. So we got together with the project champion and identified some specific short-term business goals: First, we needed to make sure that the first big client, on the verge of signing the contract, would get what they were promised. Second, we wanted to make a marketing announcement about the general availability of the system.

It turned out that meeting both of these goals required much less than the total vision for the software. In that case, they were both driven more by the content that we were going to be publishing through the system than by the end-user software functionality. We went through all the user stories and put a big red dot on each story card that was essential for the first goal, and a red circle on each story card that was essential for the second goal. Taking our velocity into account, it turned out that the “red dots” fit comfortably into the time available before launching with the client, and it was quite satisfying to have such clarity about what to focus on and what not to focus on for those few iterations.

Work close together to enhance communication

On every software project, we need to communicate with the customer about what to do and what’s been done, and we need to communicate with our teammates about how to do what needs to be done. Agile methods suggest frequent and lightweight communication by finding ways to work closer together, rather than staying far apart and communicating through impersonal means.

Barriers to communication can be both physical and process-related. Physical barriers can be anything from cubicle walls and long office hallways to separations of hundreds or thousands of miles. Being separated by thousands of miles leads to both time zone challenges and cultural differences.

Extreme Programming (XP) recommends that the customer and the entire team work together in the same physical location. When this seems impractical for a project, it is often tempting to abandon XP or Agile methods altogether. It is also possible to miss the point, by working nearby but avoiding contact with each other. However, keeping in mind the advice to work closer together, we can be creative: We can do video conferencing, we can pair program remotely using screen-sharing tools, and we can at least visit each other from time to time, especially at the start of the project. Meeting each other face-to-face is an extremely powerful way to cut through cultural differences and interpersonal misunderstandings.

One example of a process-related barrier is individual task assignments, since if I’m on the hook to finish a task myself I will be less inclined to take a break to help you. Agile methods usually recommend whole-team estimating and planning, so that the whole team is interested in working close together to get all the tasks done. If one person is stuck, the whole team feels stuck and is quick to rally around the issue.

Regarding another process-related barrier, Agile methods have a reputation for throwing away documentation. But documentation really isn’t the point.

I was once in a meeting with a product manager talking about the requirements documents he liked to write, where at one point he leaned in close, pointed toward the QA group (not represented in the meeting) and whispered, “Those guys don’t even read the spec!” He clearly thought that was the QA group’s problem, because his spec document was clear and complete — he had put a lot of effort into it.

Agile methods do not recommend, “Throw away the requirements documentation.” Agile methods do recommend, “If the requirements documentation is causing pain, find ways to work closer together with the intended audience.” If the QA group isn’t reading the spec, maybe that’s because it’s not helping them do their job. Maybe it’s just not communicating effectively.
