Web 2.0 and Agility – Brothers in Arms?

I recently attended AgileCoachCamp and convened a discussion on whether Agile values, principles and practices would be applicable to Web 2.0 projects. Being relatively new to Web 2.0 from a developer perspective, I suspected that the answer would be something like “Duh, of course Declan!” But I was really interested in exploring this intersection and I was not disappointed…

Mark Scrimshire led the discussions by suggesting we adopt Tim O’Reilly’s Web 2.0 definition. We summarized it with:

  • Harness collective intelligence
  • Web as a platform
  • Data (it’s all about the data)
  • End of the release cycle (i.e. continuous release)
  • Lightweight programming and business models
  • Software above a single device
  • Rich user experience

And Mike threw in his own equation …

Web 2.0 = (You + Me) to the power of Us

Responding to Change

It turns out that Flickr updates its web site software as frequently as every 30 minutes. They release to a limited audience, collect feedback, and use those metrics to decide whether to roll out the changes to a wider audience.
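
To make that idea concrete, here is a minimal sketch in Python of a percentage-based rollout, assuming each user has a stable id that can be hashed into a bucket. The function name, feature key and percentages are made up for illustration – this is not Flickr’s actual mechanism.

    import hashlib

    def is_enabled(feature, user_id, rollout_percent):
        """Deterministically place a user into one of 100 buckets and enable
        the feature for the first `rollout_percent` of them (illustrative sketch)."""
        digest = hashlib.sha1(f"{feature}:{user_id}".encode()).hexdigest()
        return int(digest, 16) % 100 < rollout_percent

    # Release the new page to 5% of users, watch the feedback and metrics,
    # then raise the percentage if the numbers hold up.
    if is_enabled("new-photo-page", "user-42", rollout_percent=5):
        print("serve the new version")
    else:
        print("serve the current version")

Hashing rather than random sampling keeps each user in the same group across requests, so the feedback you collect stays consistent.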

I know a lot of agile teams struggle with getting timely end-user feedback because of organizational and technical obstacles. Web 2.0 uses the web itself to amplify end-user feedback.

Test Driven Development

I have always thought that the core agile practice is TDD, so I was interested in whether Web 2.0 technologies pose any unique TDD challenges. There is certainly good TDD tooling available for Web 2.0 work, such as RSpec for Ruby and the various xUnit frameworks. There could be a weak spot with some of the RIA products such as Flex and AIR, although there is TDD support for Silverlight. And the architectural separation of data behind REST interfaces seems to make REST applications inherently testable, since the REST calls can be mocked out in tests.
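
As a rough illustration of that last point, here is a small sketch in Python with unittest of driving a hypothetical REST wrapper from a test by mocking out the HTTP call. The function and field names are invented for the example.

    import unittest
    from unittest.mock import Mock

    def fetch_photo_title(photo_id, http_get):
        """Hypothetical wrapper: pull the title field from a REST photo resource."""
        response = http_get(f"/photos/{photo_id}")
        return response["title"]

    class FetchPhotoTitleTest(unittest.TestCase):
        def test_returns_title_from_rest_response(self):
            # Mock out the REST interface so the test never touches the network.
            http_get = Mock(return_value={"id": 7, "title": "Sunset"})
            self.assertEqual(fetch_photo_title(7, http_get), "Sunset")
            http_get.assert_called_once_with("/photos/7")

    if __name__ == "__main__":
        unittest.main()

Because the test exercises the wrapper against a canned response, it runs fast and deterministically – exactly what TDD needs.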

So Web 2.0 seems to provide a suitable platform for test driven development. I wonder how prevalent TDD is in Web 2.0?

Hacking vs Sustainability

One misconception I constantly run up against is that agility is a licence to hack. It is hard to appreciate the discipline required to honestly do agile development until you try it. And early in a project it is often faster and easier to hack something together. But you pay for this many times over as the technical debt accumulates and the cost of change rises and rises.

It seems that time-to-market pressure and rapid technological change might push companies to make rash decisions that impact their longer term sustainability.

Brothers in Arms

The panel seemed to agree that agility is tailor made for Web 2.0 projects. They truly can be brothers in arms.

I have a nagging fear that some in the Web 2.0 space regard agile practices as too much process. If true, the irony would be truly breathtaking!

This is a topic I will dig into more deeply – I will be working with a Web 2.0 startup in the next few weeks and hope to learn much more …

Taking Stock of Your Software Inventory

Do you have inventory piling up between your development team and QA?

This inventory of unconfirmed functionality limits your team’s agility on a few fronts:

  • your software is not always potentially shippable
  • deferred ROI
  • delayed feedback from QA

One of the main goals of your agile team is to build potentially shippable releases every iteration. When QA testing is done after the development team releases, your team can no longer achieve this goal.

This deferred testing also creates an inventory of unconfirmed features that needs to be managed. And just as in any manufacturing system, inventory is wasteful and costs your company time and money. Every day that your software is not shipped decreases the ROI.

The longer your software remains untested the more costly any corrective action becomes. The cost arises from many factors including the ramp-up time for the developers, context-switching time, overhead with issue tracking and so on.

Finally, feedback from good testers is invaluable. They think differently from developers and tend to have a more outward-looking customer perspective. If you get this feedback earlier in the process you will see a reduction in defects and less rework.

If you find your team in this situation, here are a few things you might consider doing:

Track Metrics

Select metrics, such as defect counts or time to close, that you feel best highlight the obstacles your team faces. Chart these metrics in big visible charts.

Consider translating technical metrics into bottom-line financial data. For example, you could use daily expected revenue of the product to calculate the revenue loss due to each day of delay.
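
For instance, the arithmetic can be as simple as the following sketch; the revenue figure and the number of days are made up purely for illustration.

    # Assumed numbers, for illustration only.
    daily_expected_revenue = 10_000   # dollars per day once the feature ships
    days_waiting_on_qa = 12           # age of the unconfirmed-feature inventory

    cost_of_delay = daily_expected_revenue * days_waiting_on_qa
    print(f"Revenue deferred so far: ${cost_of_delay:,}")  # -> $120,000

Numbers like these tend to get management attention far faster than defect counts alone.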

Get Developers and Testers Working Together

There has been much talk about getting quality assurance infused throughout the product cycle. Agile teams are tailor-made for this transition. Get a tester on your team and allow her to contribute throughout the iteration. Aside from traditional testing, the tester can bullet-proof the story tests, suggest testing tasks, pair with developers on testing and so on.

This inter-personal contact will do more to promote quality within your team than any formal QA process ever will.

Stagger Iteration Testing

If you must maintain a separate system test phase, consider having the QA team test the previous iteration’s release. This way, the QA team has a stable release for testing and feedback is delayed by at most one iteration.

Over time, you may be able to move additional QA activities into the development iteration.

Is Software Development Just a Game?

I am reading Alistair Cockburn’s book Agile Software Development: The Cooperative Game, 2nd Edition.

I definitely buy into the game analogy – it provides an opportunity for deeper insight than the more traditional analogies to engineering processes, house building and the like. Alistair settles on rock climbing as his analogy of choice. But for me, a team sport like soccer, hockey or basketball hits closer to home.

One thing I have to say right off the bat is that the word “game” as an analogy bugs me. The name suggests a lack of seriousness. It reminds me of how the writers of the agile manifesto selected “agile” rather than “lightweight” (see pp. 382–383). I think we need to play the name game and come up with a more serious name than “game”.

Alistair suggests that software development is a goal seeking, finite, cooperative game. Let’s deconstruct this with a team sport analogy:

Goal seeking:  A team sport always has the ultimate goal – to win. But the analogy is stronger. Teams can have other objectives like improving our rebounding, winning the next quarter, making the playoffs and so on. Good software teams should have similar objectives like improving our test coverage, completing all the stories in the next iteration, and getting out a functional release by the end of the quarter. Notice how all these objectives can be SMART.

Finite:  Almost all team sports have well defined limits including playing conditions, duration, team size and so on. Except possibly for cricket, which still baffles me.

Cooperative:  Winning teams require personal inter-dependence. You need to trust your teammates and be able to count on them. When I think of really great team players like Sidney Crosby or Steve Nash what amazes me is their total awareness of where everyone is and where they are going.

But the main reason I like a sports team analogy is that you can quickly use it to point out the futility of detailed plan-driven software projects.

Imagine you are a coach setting out to win game 7 of the Stanley Cup finals and planning each line change, each play, every face-off, fight and so on in advance. I think you would be having some lonely summer rounds of golf. As a winning coach you would make sure every player embraces the strategy that you feel can win. You count on the training, experience, skill and judgment of your team to adapt to whatever the other team throws at you.

What does QA Mean on Agile Teams?

I have struggled with getting traditional QA professionals to work effectively with agile teams. Testers are frustrated because there is scant coverage of “tester” practices in much of the agile literature. For instance, the XP bill of rights covers the rights of the customers and the developers. Testers don’t even get a mention.

I think this stems from an emphasis within agile processes on test-driven design and automated acceptance tests. Done well, these practices result in much higher software quality. Agile developers then question the value of “monkey” manual tests, which often cover the same ground as the developer- and customer-facing tests that are already passing. Now, to be fair, there certainly is information out there on the role of agile testers, such as work done by Lisa Crispin, Janet Gregory, Brian Marick, Jonathan Kohl and Michael Bolton to name a few.

And there are groups and forums out there such as the agile-testers Yahoo group. Michael Bolton recently posted that perhaps the confusion arises from what we mean by QA:

One important step is to stop thinking about testers as “QA” — unless the A stands for “assistance” or even “analysis”, but not “assurance”. We’re testers. Management (in one sense) and the whole team (in another) provide the “assurance” bit, but since we don’t have control over schedule, budget, staffing, project scope, contractual obligations, customer relations, etc., etc., we can’t assure anything. We should be here to help — not to slow the project down, but to aid in the discovery of things that will threaten value.

I think Michael has it right. In a private email he provided a great distinction between a developer’s and a tester’s perspective:

In the developer’s world, the job is to create value. In the tester’s world, the job is to defend value by looking and seeing where it might be missing. The developer’s job is to discover how it can work. The tester’s job is to discover how it might fail — and by disproving a failure hypothesis, getting one more piece of evidence but never certainty that it will work under these sets of assumptions — ‘cos the assumptions are never sure things.

I couldn’t agree more. I see the role of a tester then as supporting quality infusion into agile teams.

Delivering Better Software

Hi, I am a software developer and agile coach. And this is the start of my blog on delivering better software.

Delivering working software is crucial. Many teams worry about their methodology or eliciting better requirements or designing for higher quality and so on. These are all important but they are secondary to actually delivering systems that meet stakeholder needs. So the focus for great teams must be to deliver, deliver, deliver.

Better Software is about making software better today than it was yesterday. Better software is made with small incremental changes that compound over time. It takes great teams to make better software. So another focus for great teams must be on continual learning, reflection and improvement.