Simple Planning for Startups

Recently on Hacker News, a non-technical startup founder was struggling to figure out whether his team was productive. The CTO had left unexpectedly, and his developers were learning Node and Angular as they went. Opinions differed: some thought the developers were just padding their resumes; others thought they were doing their best. What’s a founder to do?

What kills me is that this sort of mystery about progress is not in any way necessary. The planning methods from, e.g., Extreme Programming, are 15 years old. So that nobody else has to suffer like this, here’s my quick intro to a basic version:

The first time

To start out, everybody should sit down at a table with a large stack of 3×5 index cards, all of one color that isn’t white (I like blue or green), and n+1 sharpies (one per person, plus a spare).

Talk about the product. As you go, every time something buildable is mentioned, write it on an index card with a sharpie. 3-7 words is usually enough. The cards aren’t documentation; they’re just tokens representing conversations. Every card should describe business value that is visible to all stakeholders. No “set up the IDE”, no “learn Node”, just business value cards. They can be very small. My first card is often, “Visitor sees home page with logo.”

After 30-60 minutes, put the cards in strict linear order of business desire, ignoring technical dependencies. Now take a bunch of cards, write “RELEASE” on them, and insert them into the order every time something shippable is necessary, including investor demos, user tests, private betas, and the like.

Next, look at the top 10-20 cards. Ask the developers if each card will take a week or more. No wizardry needed, just gut-level estimates. If the answer is yes, tear up the card and replace it with cards that are all smaller than a week of effort. Feel free to reorder them; often splitting a card leads people to discover things that can be left for later.

Congrats! Now you have a plan that is enough for developers to get started. Make a permanent home for the cards on a table or wall with three columns: backlog, working, done.

As you work

When people start on a card, move it from “backlog” to “working”. When everybody agrees that it is completely done and the business value has been demonstrated, move it to “done”. Work hard to minimize the number of things in “working”. Some tips (a minimal code sketch of such a board follows the list):

  • Definitely don’t have more “working” things than developers. The more people collaborate, the more resilient your team will be.
  • Don’t move things from working back to the backlog.
  • Try to pull only from the top of the backlog. That encourages broad understanding and collaboration. It’s ok to say, “Oh, I don’t understand how to do the back-end stuff on this card. Will somebody help me with that?”
  • If something ends up too big, look for ways to split it. Find something useful that can be finished quickly, create new cards for the rest, and put them in the backlog.
  • Be firm about “done”. It should mean “released” or “will go out at the next scheduled release”. It must not mean “mostly done”, “still buggy”, or “I’m tired of working on this”.
  • Bugs in things that are already done are written up as new cards and prioritized like everything else.
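
To make the mechanics concrete, here is a minimal sketch of such a board as code. Everything here is illustrative; the class, its methods, and the WIP limit are my own inventions, not part of the method:

```python
# A minimal sketch of the three-column card board described above.
# All names and the WIP limit are illustrative assumptions.

class Board:
    def __init__(self, wip_limit):
        # One list per column; cards are just short strings.
        self.columns = {"backlog": [], "working": [], "done": []}
        self.wip_limit = wip_limit  # e.g. the number of developers

    def add_card(self, card):
        self.columns["backlog"].append(card)

    def start(self, card):
        # Pull only from the top of the backlog, and respect the limit.
        if self.columns["backlog"][:1] != [card]:
            raise ValueError("pull from the top of the backlog")
        if len(self.columns["working"]) >= self.wip_limit:
            raise ValueError("finish something before starting more")
        self.columns["working"].append(self.columns["backlog"].pop(0))

    def finish(self, card):
        # "Done" means released, or going out in the next scheduled release.
        self.columns["working"].remove(card)
        self.columns["done"].append(card)

board = Board(wip_limit=3)  # say, three developers
board.add_card("Visitor sees home page with logo")
board.start("Visitor sees home page with logo")
board.finish("Visitor sees home page with logo")
```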

If you’re used to working with specifications, it can feel weird to grab a card with only a few words: you don’t know quite what to do with it. But that’s exactly how it should go. Startups move too fast to be endlessly writing and rewriting specifications. When you grab a card off the top of the backlog, you’re now holding the most important thing not yet in progress. It’s time to go and discuss what the card really means.

That conversation will have a lot in common with the one you’ll have before the card moves from “working” to “done”. It’s much easier to agree that the card is done if the conversation starting it happened just a few days before. That recency keeps the focus on whether what’s built is what the startup needs, not on conformance to something written down weeks ago in a meeting nobody remembers very well.

Each week, each day

Each week on Monday morning, everybody should sit together, look at the backlog, and take an hour or less to discuss everything that will likely get worked on. Collectively make a guess about where you’ll be by Friday afternoon.

Each day take 5-15 minutes for everybody to quickly chat about status. Make sure the board is up to date. If cards are sticking too long in the “working” column, chat briefly about why. If somebody needs something from someone else, make sure it gets discussed here. Anything that threatens to make the meeting run long should be scheduled as a separate discussion.

At the end of the week, sit down together again, review how it went, and count the cards completed. (This is also a good time to talk a bit about process. What’s working? What’s not? Does the board correctly represent the state of things? Should you be shooting for smaller cards or more frequent releases? Do people feel productive? What could you try changing next week to see if you can make things better?)

Why this works

After a few weeks, the business stakeholder can look at the number of cards completed each week, look at the backlog, and make reasonable guesses as to how long things will take. And after a few weeks of seeing the team deliver business value each week, they’ll have a good notion of how effective their team is.
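
The arithmetic behind those guesses is simple enough to sketch in code. The numbers below are invented purely for illustration:

```python
from statistics import median

# Invented example data: cards finished in each of the past five weeks.
cards_done_per_week = [4, 6, 5, 3, 5]
backlog_size = 40  # cards currently waiting in the backlog

# The median weekly throughput is a reasonable, outlier-resistant guess.
weekly_rate = median(cards_done_per_week)
weeks_remaining = backlog_size / weekly_rate
print(f"Roughly {weeks_remaining:.0f} weeks of work in the backlog")  # -> 8
```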

Conversely, if the team really is in trouble, it should be obvious. Common warning signs:

  • Cards sticking in “working”.
  • Repeated failures when trying to move something from “working” to “done”.
  • Too many items in “working” at once.
  • A week goes by with no business value delivered.
  • Lots of fighting, blaming, or excuse-making.

The nice thing about working collaboratively like this is that failures are quickly apparent to everybody. If the team starts out the week thinking that they’ll get the top 5 cards done and they end up getting 0, nobody has to yell at anybody. Everybody feels the failure. (At this point, some teams will dump the process, but that’s a mistake. The main value in processes like this is exposing problems. Work through the pain.) They’ll be eager to do better next week.

There’s more (but start now regardless)

This simple version is enough to get people going. I’ve built entire products this way, so I know it works. But the key to success is improving your process every week. As you notice an issue cropping up repeatedly, try to find a process fix for it. Experiment and see if the change makes things better. (If you have a choice between removing something and adding something, favor removal.)

To make next week a little better, you have to get started this week. You can read endlessly trying to figure out a perfect process. But there is no perfect process, just the right one for you and your team at the moment. The real trick to a perfect working rhythm is to get started with some simple process now.

Please do ask questions in the comments, but if the question is, “What happens if we have a problem with X,” I encourage you to wait until you actually have the problem. If you ever have the problem, having a habit of regular small process improvements will mean you will probably know the answer better than any outsider.

Technical debt: the good, the bad, and the okay

Reviewing the community wisdom on technical debt, I am struck by the variety of definitions of the term, which are not merely variations on a theme—they are entirely contradictory concepts. I’m starting to think that we should abandon the term technical debt altogether in favor of more accurate labels.

The debt metaphor is appealing because it offers a way to explain at the executive level that development may be building up software liabilities along with software assets. However, I worry that the metaphor may also be offering some undeserved normalcy to process problems: Businesses, even profitable ones, are used to taking out lines of credit to support operations. Even more insidiously, arguments in favor of good kinds of “technical debt” are easily misused to defend bad kinds, when they really have little in common.

My point here is not merely to offer a taxonomy of technical debt, as others have attempted already (Martin Fowler; Steve McConnell). I don’t think these should be seen as variations on a common theme, but rather as distinct situations with distinct considerations. One definition leads to something that I think is always good; another leads to something that I think is always bad; and yet another leads to something that I think is okay if you’re careful.

The good: releasing early

As originally coined, technical debt referred to being willing to release software early even with an imperfect understanding of the domain or even with an architecture insufficient for the eventual aims of the product (Ward Cunningham; Bob Martin). We know that we’ll have to refactor the domain model as our understanding matures, or else live with gradually slowing development as our domain model grows increasingly out of sync with reality. But we gain a tremendous advantage in the process: We get feedback earlier, which ultimately means we will better understand both the domain and the actual needs of users.

Ward described this situation using a debt metaphor: The benefit we gain by releasing the software early is like taking out a loan that will have to be repaid by refactoring the domain model later, and we will be paying interest in the form of gradually slowing development until that refactoring happens. Ward’s intention was primarily to explain the refactoring he was doing on a project, not so much to justify releasing early.

The debt metaphor was clever, but I don’t think it actually fits. If we don’t release early, does that mean we’re not going into debt? We still have an imperfect understanding of the domain; we still have only a rudimentary architecture. We still have to refactor as our understanding matures in order to avoid slowing development. We have all the negative aspects of debt—the obligation to repay, the interest until we do—with none of the positive aspect—the up-front benefit of early feedback.

Releasing early mitigates our existing liabilities; it doesn’t add to them. Therefore it should not be described as a kind of technical debt. It’s almost always a good idea, but it does take discipline to make it successful: We have to actually listen to the feedback and be willing to change course based on it, and we have to keep the code clean and well-tested at all times so that we can change course as needed.

The bad: making a mess

Most people seem to define technical debt in terms of messy or poorly-tested code. This seems to be pretty universally acknowledged as bad, but it is often regarded as a necessary evil: In order to release on time (never mind early!), we have to cut corners, even though we know it will cost us later in slowed development and lots of bug-fixing time. In practice, of course, we all know how this turns out: The cut corners lie in wait as traps for future development, and the bugs pile up to an unmanageable degree.

If this is debt, it most resembles credit-card debt. It’s easy to take on lots of debt, and the interest is steep and compounds quickly. A credit card can be safe if you actually pay it off in full every month—in technical terms, refactor your code and make sure your tests are solid after every task or (at most) after every story. Otherwise the balance grows quickly and gets out of hand sooner than many of us would like to admit.

But it’s actually worse than that. Most of us have stories of nasty bugs hidden in messy code that took weeks (or longer) to figure out and fix. Debt can pile up, but at least it’s well-behaved; you know what the interest rate is. The problem we’re talking about now is actually a kind of unbounded risk: Any day, a new requirement can come in or a critical bug can be hit that lands in the middle of a poorly-designed and poorly-tested part of the code, potentially requiring a wide-reaching rewrite. Steve Freeman has argued that this is more like selling an unhedged call option: It can be exercised at any moment, putting you at the mercy of the market.

I would further argue that code that is poorly factored or tested should be considered work in process, and subject to the team’s work-in-process limits. A story simply isn’t done until the code is clean and demonstrably correct. I define a defect as anything short of being done: If the code is messy, doesn’t have tests, or simply doesn’t do what the customer actually wanted, those are all examples of defects.

Ideally we would catch such defects before calling the story done in the first place; if so, we don’t take on additional work until that existing work is done. If we declare a story to be done and then discover later that it really wasn’t done, either by someone reporting a bug or by developers finding messy or untested code, we should effectively add the affected story back into our current work-in-process and fix the defect before taking on additional work. Looked at this way, prioritization is meaningless for defects: The story was already prioritized some time ago; the defect is merely a new task on an old story, which is by definition higher priority than any current or future story. (Just be careful to weed out feature requests masquerading as bug reports!)
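
Here is a minimal sketch of that rule in code, with invented story names. The point is only that a defect reopens its original story and blocks new work until it is fixed:

```python
# Illustrative sketch: defects reopen their original story rather than
# being prioritized as new work. All names here are invented.

backlog = ["story-3", "story-4"]  # prioritized, not yet started
in_process = []                   # stories currently being worked on
done = ["story-1", "story-2"]     # stories we believed were done

def report_defect(story):
    # The story wasn't really done: move it back into work in process.
    done.remove(story)
    in_process.append(story)

def pull_next_story():
    # Don't take on new work while a reopened story is in process.
    if in_process:
        raise RuntimeError("finish the reopened story before starting new work")
    in_process.append(backlog.pop(0))

report_defect("story-1")  # a bug report arrives for story-1
# pull_next_story() would now raise until story-1 is truly done again
```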

When faced with time pressure, cutting corners amounts to taking on debt or risk for the company without accountability. Instead, we have many options for explicitly cutting away pieces of work by splitting stories and prioritizing the pieces independently (Bill Wake). Each piece that actually gets prioritized and implemented needs to have clean code and tests to be considered done; the beauty of splitting stories is that you can have piece #1 done and piece #2 not started, instead of having both #1 and #2 started but not done, with a similar amount of effort.

The okay: running an experiment

William Pietri has offered the only interpretation of technical debt that I think actually fits the debt metaphor, having an actual benefit now and an actual promise to pay the debt at a definite point in the future while avoiding unbounded risk. The idea is that it’s okay to have some messy or untested code running as an experiment as long as it is actively tracked and all stakeholders agree that the code will be deleted at the end of the experiment. If the experiment gave promising results, it will generate new ideas for stories to be prioritized and implemented properly.

The benefit of this approach is very rapid feedback during the experiment. It truly counts as a valid debt because the business agrees to it, and agrees to pay it back by allowing the code to be deleted when the experiment is completed. Since the developers know the experimental code is short-lived, it must naturally be isolated from other code so that it is easy to remove later; there is no chance that some unexpected new requirement or reported bug will incur any risk beyond simply ending the experiment and deleting the experimental code early. An active experiment counts as work in process as well, so the team should agree on a limit for the number of active experiments at any one time.
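
Here is a minimal sketch of how a team might track such debts explicitly. The field names, the dates, and the limit of two active experiments are assumptions for illustration:

```python
from datetime import date

# Illustrative registry: each experiment is an agreed, time-boxed debt.
MAX_ACTIVE_EXPERIMENTS = 2  # assumed limit, agreed on by the team

experiments = [
    {"name": "one-click signup", "delete_by": date(2011, 7, 1)},
]

def start_experiment(name, delete_by):
    # Experiments count as work in process, so respect the agreed limit.
    if len(experiments) >= MAX_ACTIVE_EXPERIMENTS:
        raise RuntimeError("experiment limit reached; conclude one first")
    # Stakeholders agree up front that this code goes away on this date.
    experiments.append({"name": name, "delete_by": delete_by})

def overdue(today):
    # Surface any experiment whose agreed deletion date has passed.
    return [e["name"] for e in experiments if e["delete_by"] < today]

start_experiment("new pricing page", date(2011, 7, 15))
print(overdue(date(2011, 7, 10)))  # -> ['one-click signup']
```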

Conclusion

I’m sure there are other definitions of technical debt out there, but these seem to be the big ones. Releasing early was the original definition, and I don’t think it represents debt at all. It’s simply a good idea. Making a mess is the most common definition, and while it’s a little like a debt, it’s even more like the unbounded risk of an unhedged option. It’s always a bad idea. Running experiments is a new but important definition, and actually fits the meaning of debt. It’s okay as long as you track it properly.

Reviewing a draft of this post, William Pietri offered a refinement of the debt metaphor for messy, untested code: “Instead of thinking of it as an unhedged call, one could also talk of loan sharks. If one has substantial gambling debts to a capricious mobster, then one comes to expect erratic demands for extra payments and menacing threats.” I like it: Imagine your programmers making shady backroom deals to get their work done under schedule pressure, and then being hassled by thugs when it comes time to work with that code again.

What do you think? Are there other metaphors that might work better than debt? Do you agree or disagree with my good/bad/okay evaluations?

Agile’s Second Chasm (and how we fell in)

Some years back, I remember the excitement in the Agile community when the general public started taking us seriously. The common reaction went from “Get away from me, crazy people,” to “That’s interesting; tell me more.” It felt like the car we had been pushing uphill had finally reached the crest.

At the time, it was very exciting. In retrospect, perhaps that was the beginning of our doom: the car passed the crest, picked up speed, and careened out of control. Now the state of the Agile world regularly embarrasses me, and I think I’ve figured out why.

The first chasm

Geoffrey Moore’s seminal book “Crossing the Chasm” is about a gap between a product’s early adopters and the rest of a market. Writing in 1991, he looked at why disruptive technology products fail, and concluded that there was a chasm between the early market for a product and everyone else. Some companies fall in; some cross it to success.

[Figure: the Crossing the Chasm technology adoption curve]

The essence of this is that the early market, roughly a sixth of the total market, is different. They are comfortable with changing their behavior, and are eager to do it to get a jump on the competition. In contrast, early mainstream purchasers “want evolution, not revolution. They want technology to enhance, not overthrow, the established ways of doing business.” Later mainstream purchasers don’t want to change at all; they do it because they have to. For a physical technology product, that mainly shapes marketing and product accessories like support and documentation. But what does that mean for something as ethereal as a development process or a set of values?

The second chasm

You can’t buy an idea. Instead, people buy things like books, talks, classes, and consulting services. By and large, those buyers don’t measure any actual benefit. When wondering whether they got their money’s worth, they go by feeling and appearance.

For early adopters, that can be enough. As people who seek sincerely and vigorously to solve a problem or gain an advantage, they are strongly motivated to find ideas of true value and to make them work. But for the later adopters, their motivation for change isn’t strong. For many, it’s merely to go along with everybody else, or to appear as if they are.

Because those later adopters are numerically greater, they’ll be the majority of the market. And unlike early adopters, who tend to be relatively self-sufficient, they will consume more services, making them an even bigger influence. They’ll also tend to be the least sophisticated consumers, the least able to tell the difference between the product they want and the product they need.

Putting that together, it suggests that when people want an idea, the most money can be made by selling products that mainly serve to make later adopters look or feel like they’re getting somewhere. Any actual work or discomfort beyond that necessary to improve appearances would be a negative. Bad products and bad ideas could drive out the good ones, resulting in a pure fad. The common perception of the idea would come to be defined by the majority’s behavior and results, burying the original notion.

That’s the second chasm: an idea that provides strong benefits to early adopters gets watered down to near-uselessness by mainstream consumers and too-accommodating vendors.

Agile fell in

From what I can see, the Agile movement has fallen in the second chasm. That doesn’t mean that there aren’t bright spots, or people doing good work. You can, after all, find old hippies (and some new ones) running perfectly fine natural food stores and cafes. But like the endless supply of grubby, weed-smoking panhandlers that clutter San Francisco’s Haight-Ashbury district, I think there is a vast army of supposedly Agile teams and companies that have adopted the look and the lingo while totally missing the point.

Originally, I based this notion on the surprising number of people I talked to who were involved in some sort of Agile adoption that was basically bunk: Iterations that weren’t iterating. Testing that wasn’t testing. Top-down power relationships. Cargo cult Agile. Fixed deadlines and fixed scopes. Teams that aren’t teams. Project managers who pretend to be Scrum Masters. Oceans of Scrumbut.

That could have just been me, of course. But over the past year or so I’ve talked to a number of Agilists that I met in the 2000-2003 timeframe. They have similar concerns. “This isn’t what we meant,” is something I’ve heard a few times, and I feel the same way. Few these days get the incredible improvements we early adopters did. Instead, they get little or no benefit. The activity of the Agile adoption provides a smokescreen, and the expected performance gains become a stick to beat employees with.

Honestly, this breaks my heart. These days, every time an acquaintance starts off saying, “So now we’re ‘doing Agile’ at work…” I cringe. I used to be proud to be associated with the Agile movement. But although I still love many of the people involved, I think what happened is a shame. Looking back, I see things I wish I had done differently, things I intend to do differently next time. Which makes me wonder:

  • Have other early Agilists noticed this?
  • If you had it to do over again, what would you do differently?
  • To what extent do you think the problem is solvable?

I’d love to hear what other people think, be that via email, on Twitter, or in the comments below.

Against Managerial Time Travel

I just realized that two things that frequently bug me about software projects are really two sides of the same problem: managers who think they are living in a fantasy/science fiction novel. (I’m pretty sure it’s sci-fi, for reasons I’ll explain below.)

See the future!

Johanna Rothman’s recent post “But I Need to Know When the Project Will Be Done” is a great example of the first. A manager wants to know when a project will be done. It’s for a hazily defined feature set, one that will be changed by future feedback. Often, this is asked of teams with little history together, working with novel technologies, and with some staff still to come. But somehow they want a fixed date. It’s like they’re asking for this:

[Figure: a mock equation for computing the project’s completion date]

Do they believe that developers are psychic, able to part the mists of time with the raw power of their minds? Could they think that the team, having already sold their souls for power over bits, can just ask their dark minions about the future? In other words, is it the stuff of fantasy?

Or is it that they think we nerds, among our other magic machines, have one that lets us travel through time? Is it a science fiction dream they’re living?

Change the past!

I think it has to be the latter, because of their desire to change the past. Because what else can you make of project post-mortems that seek to identify every mistake made, and every person who made it? Of bringing up again and again people’s past mistakes, without investing at all in preventing future problems. Of playing out endless “If only you had done X” scenarios. What other purpose could all their blame-seeking behavior serve? After all, it hurts morale, reduces innovation, and eventually drives your best workers out the door.

The most reasonable explanation for this is that they are just saving up issues for when they finally get their time machine, so they can go back and fix all those niggling things that went wrong the first time. Like so many time-travel mad scientists, their obsession with perfecting the past makes them heedless of the future consequences of their actions.

Flip it and reverse it

Of course, people doing this (which certainly includes me, sometimes) have it exactly backwards. What managers (and everybody, but especially managers) should practice is seeing the past and changing the future. And I think the best way to do that is focusing on the present: the people we work with, the systems that guide us, and our current realities. Our powerful imaginations may make the past seem mutable and the future fixed, but they aren’t. When we act as if it were otherwise, we live a fiction.

The Lean Startup Kanban board

In my last post, “Responsible Technical Debt: Not an Oxymoron,” I said I’d propose a way to responsibly handle technical debt in a Lean Startup context, where a substantial number of your units of work are experimental, and unlikely to be kept as first written. That way follows. I should explain that although I haven’t seen it in practice, it’s one I’ll be trying soon in my very own startup, so I’ve given it a fair bit of thought. My thanks to the many Lean Startup and Kanban people who have inspired this.

Standard Kanban boards

The core notion of a software development Kanban board is that you represent visually the state of various units of work. These units of work are pretty small, generally a few person-days. A basic one looks something like this:

[Figure: a basic Kanban board]

Stories flow in from the backlog on the left. After they are worked on (D-E) and reviewed (F), they are released and put in the file of finished stories, rarely to be looked at again. There are generally limits, so that a team will only have a certain number of stories in each stage at one time. Because the code could be around for years, we need to make sure to write it in a sustainable way, and the Agile movement, especially the Extreme Programming end of it, has discovered and promoted a lot of excellent techniques for making code that lasts.

This flow makes a lot of sense when you are using Agile techniques to solve a known problem, and have pretty high confidence that the things you build will last indefinitely.

Lean Startups are different

However, in software startups, and especially when following the Lean Startup approach, you are solving an unknown problem. The goal, especially in the early stages, isn’t to build software; it’s to discover needs and wants that people will pay you to satisfy.

The standard startup approach, typified by dot-bomb Webvan, is known in the Lean Startup community as the “field of dreams” approach. You build a complete product in the hope that when you finish, your customers will come. The Lean Startup approach instead requires that you identify your hypotheses about your customers, your market, and your products, and test those hypotheses as early as possible, partly by trying a lot of things out and seeing how they work.

Another way to look at this comes from Eric Ries:

[Figure: Eric Ries’s Lean Startup build-measure-learn loop]

Anything that makes that loop longer is worse. Technical debt can surely do that. But if you write something to last the ages and then remove it next week, that can also slow down your feedback loop.

My answer is to explicitly accept there will be two kinds of code in the system. There will be sustainable code, the kind that Agilists have spent the last 15 years learning how to create properly. And there will be temporary code, code that we explicitly plan to throw away. I still think the third kind of code, half-assed code, has no place in a serious software project. But there’s a risk, one many of us have lived: what if the temporary code sneaks into the permanent side, putting the project on the road to a technical debt bankruptcy?

Lean Startup Kanban boards

My solution to this is to expand the Kanban board to explicitly represent the flow of experiments. Below, experiments are represented in blue:

[Figure: a Lean Startup Kanban board, with experiments shown in blue]

For the early part of their lifecycle, experiments and regular features share a pretty similar path. They hang around in a backlog until they are put on deck (C). They get worked on by the team as part of the regular flow (E), and accepted by product managers (G). But then, after release, instead of being put in the released-and-forgotten box, those stories are tracked as active experiments (H-M).

When the experiment is concluded, the feature might be accepted permanently, in which case the code will need to be cleaned up and the rest of the system refactored to accommodate it. More likely, the code will be removed, possibly because the feature turned out to be a bad idea, at least in its current form.

Or possibly it was a good idea, but not good enough in its current form to keep. Often the first approach to a feature (or to its implementation) isn’t the right one; it can be better to remove the experiment and its code in favor of an upcoming correct implementation. In that case, the experiment will yield a number of new story cards for the backlog.
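
Here is a rough sketch of that expanded flow in code. The stage names and the follow-up cards are my reading of the board above, not a formal specification:

```python
# Rough sketch of the expanded board: experiments get extra stages after
# release instead of landing in the released-and-forgotten box.
# Stage names are illustrative, loosely matching the columns described above.

FEATURE_FLOW = ["backlog", "on deck", "in work", "accepted", "released"]
EXPERIMENT_FLOW = ["backlog", "on deck", "in work", "accepted",
                   "active experiment", "concluded"]

def conclude_experiment(kept):
    """Return the follow-up cards an experiment generates when it ends."""
    if kept:
        # Accepted permanently: clean up the code, refactor around it.
        return ["clean up experimental code", "refactor to accommodate it"]
    # Removed: delete the code; promising ideas become new backlog cards.
    return ["delete experimental code", "write cards for what we learned"]

print(conclude_experiment(kept=False))
```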

I think this approach will mitigate the biggest risk of accepting any temporary code: you forget that it’s temporary, and suddenly you’ve got technical debt in your system and no scheduled time to clean it up. There are other risks, some of which I hope to write about over the coming weeks and months.

Have thoughts on this? Have you tried something similar? I’d love to hear about it. Leave a comment!

Responsible technical debt: not an oxymoron

Kent Beck’s recent talk at the excellent Startup Lessons Learned conference has created some controversy. As somebody who attended the conference and who has been involved in startups for many years, I agree with some things on each side of the argument, and I want to give my take on how to be responsible with technical debt.

The Agile reaction

In the dark days of waterfall, code bases had only two states: being re-written or slowly getting worse. The project would start out with great hope. People would write something beautiful, launch a 1.0, and then spend their time patching it. Inevitably, and despite initial efforts to resist, the code base would go slowly downhill. Gradually increasing technical debt would lead to technical bankruptcy, forcing a big rewrite. Then the agonizing cycle would start again.

One of the key insights of the Agile movement, one crystallized by Martin Fowler’s book Refactoring, was that decay was not inevitable. We could clean things up incrementally as we went, without needing periodic giant rewrites. Another key insight comes via Kent Beck’s Extreme Programming: if we worked iteratively and refactored all the time, then we could start clean and stay clean, forever, ending the cycle of giant rewrites.

As easy as that sounds, it’s actually very hard; every solid Agile practitioner has spent time developing a nose for technical debt, and a hate of it. So when Kent Beck suggested that startups could responsibly create some technical debt, it caused a lot of strong reactions: to some people it sounded like the pope saying that premarital sex and adultery were occasionally fine.

But startups are about learning

For this to make sense, you have to understand two things:

  1. Startups—even software startups—are businesses, not software development projects.
  2. The point of a startup isn’t to create a software product; it’s to discover a sustainable business model.

In a startup, software development isn’t an art for its own sake; what makes sense from a pure engineering perspective may not make sense more broadly. That’s true for any business really, but it’s especially true for a startup. That’s because the main goal of a startup is to discover a valuable business. Discovery requires experimentation, and the amount you can learn is a function of the cost of your experiments.

So suppose I have a new idea for a game. I believe it will be A) fun and B) a moneymaker. I could take a number of months and design it, develop it, polish it, and publish it. Then the market would tell me whether my beliefs were correct. Or I could make a very crude version with 1 level, have some people play it, and see if the core gameplay is enjoyable. That’s my first experiment to examine hypothesis A, fun. If it works, I could put it up on the web, buy a few ads, and see if the game sounds interesting enough to click through and try it. That would be my first test for hypothesis B, making money with the game.

Now if these tests fail, then my code base may be pretty short lived. They say that hard work pays off eventually, while laziness pays off now. The same is true for good development practices. A lot of Agile practices get you long-term sustainability, but they cost you in the short term. For sufficiently small and short-lived pieces of code, worrying about technical debt does not make business sense. Better to invest in more experiments.

Danger Will Robinson!

At this moment, some of my readers are leaping up to say “But! But! But!” That’s what I wanted to do during Kent’s talk, anyhow. Because when businesspeople hear that they can have something quicker, they will often interrupt to say, “Yes! Do that!” As I explain in “The 3 Kinds of Code,” a lot of business/engineering fights arise because engineers are more aware of the long-term costs of short-term thinking. I have seen good teams, even ones otherwise well-versed in Agile techniques, give in to that business pressure again and again, creating giant messes.

So hacking things together, with the resultant technical debt, is extremely dangerous, but sometimes very powerful. What should a responsible team do?

I believe there are good ways to handle this dilemma, and will propose one approach in my next post.

9 Signs of Bad Team Spaces

One of the most popular articles here is 10 Rules For Great Development Spaces, so I thought I’d follow up by explicitly listing signs of bad spaces. These are my top 9, but I’d love to hear what other people have noticed.

  1. People wearing headphones. Part of the reason to sit together is to listen to one another. If team members are wearing headphones, they can’t do that. Don’t blame them, though; figure out what noise or distraction drives people to do that and eliminate the root problem. And if they aren’t part of the team, move them elsewhere.
  2. Stale artifacts on the walls. Every artifact on the wall should be there for a reason. If there are a lot of stale plans, charts, or lists on the walls and whiteboards, that’s a sign of trouble. Immediately prune the junk!
  3. Workspace as information desert. Development teams turn knowledge into software products. Rather than requiring effort to find things out, a good workspace requires effort to avoid knowing what’s going on. Bare walls often indicate low collaboration or high confusion.
  4. Minimal interaction. If team members sit near one another and never talk, it’s often a sign of an underlying problem. I’ve seen this caused by bad relationships, code silos, excess division of labor, too-long release cycles, excess schedule pressure, and plain old shyness.
  5. Furniture as barrier. Furniture should help you work, not get in the way. Barriers are great to reduce noise and chaos at team boundaries, but true teams should be able to share space and collaborate effectively.
  6. Sad or ugly spaces. You will likely spend more waking hours in this room than any other. Shouldn’t it be nice?
  7. Seating by job description. Agile approaches require cross-functional teams to make whole products. If people are grouped by job description, that is at least a barrier to collaboration, and often a sign of unhelpful silos. Group people by project instead.
  8. Space and furniture as status markers. In some companies, being distant from the action is a sign of status. On Agile teams, that’s a mistake. Instead of using rooms and desks to indicate hierarchy, give people the tools they need to do their jobs.
  9. No laughter, no fun. This is a big one for me. Every really productive team I’ve visited enjoys the work and their fellow team members. Money can make people show up, but it’s joy that gets the best results.

That’s my list. What’s yours?

Proactive versus reactive

Somebody recently asked me if Agile approaches aren’t essentially reactive, with Waterfall being proactive. It’s a good question, and I’m sure there are a lot of interesting answers. Here’s my take.

Proactively reactive

In terms of product produced, you can be more proactive in an Agile context than you can in a Waterfall context. With Waterfall methods, you just have to hope your designs and plans are correct, reacting massively after each large release. With Agile approaches you can continually test your assumptions and hypotheses, allowing you to eliminate bad ideas early and invest more resources in areas that have been proven to deliver more long-term value.

Some Agile shops may end up stuck in purely reactive cycles, with no long-term plans. (I don’t see that much.) But the good ones are continuously updating their plans based on the new information that you can only gain if you can release frequently. It has been said that Waterfall is plan-driven, while Agile methods are planning-driven. Having tried both, I think frequent plan improvement is much more proactive.

The Agile conversation

Another way to look at it is that Agile approaches proactively take advantage of people’s reactive skills by creating situations where their reactions will be maximally useful.

Conversations are in some sense essentially reactive: you say something, I respond to it, you respond in turn. But we can proactively decide to have a conversation and expect to get something useful, perhaps novel, out of it.

Agile approaches have conversations with markets and user communities, using new releases to advance the discussion. We release something and say, “How about this?” People respond: they love X, they hate Y, and how did you miss Z! We release another thing and say, “Is this better for you?” And so the conversation goes, week after week.

Defining developer productivity

Can you measure the productivity of an individual developer or team of developers? Do you really want to? If the answers to these questions are “yes”, the first thing you’re going to want is a clear definition of productivity.

(I was inspired to write this article by a thread in the scrumdevelopment Yahoo! group titled “Executives Tracking Individual Developer Productivity?” The originator of the thread wrote about a real issue in his company: the CEO wanted to measure the productivity of the company’s developers, based on the principle that “you can’t manage what you can’t measure.”)

Productivity good!

We can probably agree that productivity (in the abstract) is a good thing, but can we agree about what it means concretely? For starters, what are the “units” of productivity for individual people or teams?

For example, we can compare the gas mileage of vehicles to each other in units of “miles per gallon”, and we would probably agree that vehicles with higher numbers are more productive.

What units could we use to compare individuals or teams in a similar fashion? One commonly-employed notion from our past, and one that we have thankfully left behind, is that developer productivity could be measured in lines of code produced in a given period of time. But if that’s behind us, what are we left with?

Productivity defined

The business novel The Goal defines productivity as follows:

“Productivity is the act of bringing a company closer to its goal. Every action that brings a company closer to its goal is productive. Every action that does not bring a company closer to its goal is not productive.”

Sounds simple enough. Now all we have to do is to figure out what the company’s goal is, and we’re there.

Do you know what your company’s goal is? Not its mission statement, but its goal? Is your company in business for a profit, or do you work for a non-profit concern? Most of us, I believe, work for companies interested in making a profit. And if that is so, then most companies’ goals might very well be as simple as making a certain amount of profit per year.

But we’ll probably want to measure the productivity of our developers more frequently than once per annum, so we might express the productivity of an individual or team in terms of “profits earned per iteration”.

The thing is, can we actually measure the contribution to profits made by an individual developer or even a team of developers? Offhand I’d say that this is a difficult proposition, at best.

However, I contend that if we cannot correlate the productivity of an individual or team with profits, then there really isn’t much point in trying to assess that productivity in the first place.

Looking where the light is better

There’s an old joke about a man who is searching the area around a corner streetlight. A second man comes along and asks him what he’s doing, and the first man replies, “I’m looking for a quarter that I dropped.” The second man joins in the search, but neither man is able to spot the lost coin. Finally the second man thinks to ask, “Are you sure you dropped it here?” The first man replies, “No, I dropped it in the alley, but the light’s better here.”

In a for-profit company, attempting to measure the productivity of a developer in any terms other than contribution to profits amounts to looking where the light is better. In other words, don’t measure something just because it’s convenient to measure.

The myth of “undesigned”

Some teams talk as if design is something you can add to software later, saying, “Oh, that interface we built hasn’t been designed yet.” That way of thinking is based on a fundamental error.

The problem

Many real-world teams treat certain kinds of design, including visual design, user interface design, and interaction design, as quantities that you can add later. Often this is done with the best of intentions.

For example, the team’s designer may be unavailable, so the rest of the team will get to work on a story, building an obviously rough interface. If, by the time they’re done, the designer still isn’t available, the product owner might accept the story as complete, making a mental note to come back later, perhaps much later.

When somebody external comments on the ugly interface, the developers might say, “Oh, that hasn’t been designed yet.” They’re wrong.

Software is nothing but design

People often talk about software with an implied analogy to industrial production. One group of people makes up some blueprints, and then an entirely different group of people makes physical objects. That second group is judged not by the utility of the objects, but by conformance to the design. This can sometimes be a useful analogy, but in an important way it’s entirely false.

In truth, the creation of software is 100% design. The computer does all the work of making things happen in the real world. The software is just the blueprint the computer uses to decide what to do. In the same way that a good manufacturer is one that follows the instructions well, a good computer is one that executes the software faithfully and reliably. All the humans participating in the creation of software are involved in a joint design activity.

“Not designed” is badly designed

So if software is pure design, then what’s going on with the ugly interface? The notion that it is somehow “not designed” is wrong. It’s just badly designed. Why does that matter? Because good design isn’t something you can spray on later like a new coat of paint.

Programmers already know that for the kinds of design they appreciate. Good developers try hard to avoid spewing out reams of confusing, badly organized code that they hope to clean up later. They know that’s wasteful, and likely to hide tricky problems. Tactically, they may choose to leave certain things messy for a short period. But experienced developers do that judiciously, painfully aware of how easily a controlled low-quality situation can turn into an uncontrolled one.

Teams should have the same attitude about every kind of design that matters for their project. Each story should be well designed from every perspective before it is declared complete. That sounds like it could be a lot of work, but it needn’t be. As with the software architecture, other sorts of design can be approached incrementally and iteratively.

The only hard part is making sure that you have all key contributors working together. The easy way to do that? Put them all in a room, and have them release something every week. They’ll figure it out.
