I attended a user group meeting last week where a participant asked: “How do I keep my stand-ups from going so long?” He cited stand-ups running more than 30 minutes for a team of 10 people.
I didn’t get a chance to talk to him, but here is my advice.
First, discuss the topic with the team. Ensure that there is agreement that long stand-ups are negatively impacting the team. It may be (though it’s unlikely) that the content of the long stand-ups is important to everyone.
Next, determine the root cause. I find a couple of causes that can be addressed in different ways.
Long-winded people offering irrelevant input or input that is relevant to only one other person: Record the contributions of each of the long-winded folks and review with them. Identify specific content that is not relevant to the team.
You’ll often find paycheck rationalization offerings (e.g. “I had my one-on-one with my manager yesterday”).
Or you may find someone simply regurgitating their calendar (managers are likely to be the offenders here).
Or it could be politically driven public thrashings that are more appropriate to share in private (“David – you seem to be checking code in without any tests. Remember, we agreed that we would write tests”).
I’ve seen stand-ups where multiple team members will regurgitate events in which the whole team participated. If the whole team attended the iteration planning meeting, there is no need to mention this in the stand-up.
Start timing folks. If anyone talks more than 2 minutes, make them stand on one leg, or ask them to extend their arm holding a heavy book while talking. Or use a timer (obnoxious alarms are best).
Conversations that are not relevant to the whole team: Institute a “parking lot” flip chart or white board where topics for further conversation can be captured for discussion after stand-up. Ask the whole team to help identify potential parking lot items when they occur; add them to the parking lot when identified and move on. Ensure that those follow-on conversations occur (else you run the risk that folks will continue to insist on in-stand-up dialog).
Use a speaking token and ask the team to be rigid about not talking when they don’t have the token. As conversations occur, the token passing will make it obvious that a conversation is occurring, which should help folks to self-identify opportunities to use the parking lot.
Explain to the team that the stand-up is not the only opportunity for conversation during the day.
Use a laser pointer to have folks point out the relevant stories/tasks on the physical card wall as they speak. They will be less likely to pontificate on irrelevant details if they have no card to point to.
I’ve attended stand-ups of over 40 people that have taken less than 10 minutes. That’s less than 15 seconds per person. Granted, these were teams that were pairing, so oftentimes the contribution of the second of the pair was of the form “ditto Joe”. Still – if a team of 40+ can get it done in 10 minutes, there’s no reason why your “2-pizza team” cannot.
Thursday, October 06, 2011
Thursday, June 30, 2011
Prioritizing vs. Sequencing the Product Backlog
A primary tenet of agile software development is doing the highest business-value work earlier. The idea is that you achieve a minimal marketable feature set as early as possible so that a) you can issue releases earlier and b) if the money runs out, you have something more valuable than if you didn't sequence your work in that manner.
Another, less frequently cited agile tenet is to do the riskiest work earlier. The idea here is that you avoid late surprises when risky work turns out to be expensive. Better to discover this expense earlier.
When most folks talk about the backlog order, they refer exclusively to business priority.
I think ordering or sequencing the backlog must take more than just business priority into account. Yes, business priority is important, but so are a whole host of other factors, such as early exposure of risk. Balancing these factors is part of the art of project management.
Factors to incorporate into sequencing decisions:
- Business Priority
- Dependency Order: Despite our best efforts to decompose the backlog into independent stories, the fact is that the tension among the INVEST criteria sometimes causes us to define stories that depend on other stories.
- Mixture: The Rock/Pebble/Sand metaphor is helpful here. Consider a bucket at the beach. If you fill it only with rocks, you have a great deal of wasted space in the bucket, due to the space between the rocks. Though it may be unable to accommodate another rock, you may be able to slip in a handful of pebbles, to fill the spaces. Following that, you may be able to slip in some sand, to fill the spaces that the pebbles were unable to occupy. So it goes with an iteration. You don't want to fill your iteration bucket with only large stories, because you're losing the opportunity to slip in smaller stories. For example - if a developer finishes a story at 3pm on a Friday afternoon, you'd probably rather have him knock off a small story in the next couple of hours rather than start a large story.
- Crowding: Many advise that agile development teams define iteration themes. This is a good concept in theory, as it allows the team to focus on accomplishing a larger goal in the iteration. The risk is that your whole development team ends up working in the same parts of the code base, which can cause merge conflicts as the team commits code to the repository. Consider this source-code crowding problem when sequencing the work.
- Risk: As mentioned above, the earlier you schedule the risky elements of the project, the more insight you have into your completion date. One element of risk is embedded in non-functional requirements. For example - if you have performance or scalability requirements that are risky, it's better to implement the stories that are sensitive to those requirements earlier.
- User Feedback: If you have elements of the software for which user feedback is critical to making the right decisions, schedule this work earlier. If you delay these features towards the end, the need to change the system in response to the feedback becomes a schedule surprise. Worse, if you decide not to incorporate the feedback in order to make your date, your users are not just unhappy; they'll feel that their input has been ignored.
- Exercise the architecture: Scheduling the highest business value work first may push some elements of the architecture into later in the development cycle. For example, perhaps the "happy path" of execution is deemed the highest business value; this might delay consideration of exception handling to the end of the release. First-pass implementations within an architecture are always riskier and can introduce schedule surprises. It is wise to exercise all elements of the architecture as early as possible.
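The rock/pebble/sand idea in the Mixture bullet can be sketched as a simple greedy fill: commit the largest stories that fit, then backfill leftover capacity with smaller ones. A minimal sketch - the story sizes and iteration capacity are made-up numbers, and largest-first greedy is just one illustrative heuristic:

```python
# Greedy "rocks, then pebbles, then sand" fill of an iteration.
# Story point values and the capacity are hypothetical examples.

def fill_iteration(story_points, capacity):
    """Pick stories largest-first, backfilling leftover capacity
    with smaller ones, until nothing else fits."""
    chosen = []
    remaining = capacity
    for points in sorted(story_points, reverse=True):
        if points <= remaining:
            chosen.append(points)
            remaining -= points
    return chosen, remaining

# A backlog slice of large (rocks), medium (pebbles), small (sand) stories.
stories = [8, 8, 5, 3, 3, 2, 1, 1]
chosen, slack = fill_iteration(stories, capacity=20)
print(chosen, slack)  # [8, 8, 3, 1] 0
```

The point is not the algorithm but the mix: a bucket holding only the 8-point rocks would waste 4 points of capacity here, while the pebbles and sand fill it completely.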
As I mentioned, these factors are often competing. The context of your project defines which of these dimensions are more or less important. Yes, do the highest business value work earlier, but don't forget to consider these other factors as well.
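One lightweight way to balance these competing factors is a weighted score per story: rate each story on each dimension, weight the dimensions for your project's context, and order by the combined score. Everything below - the dimensions chosen, the weights, the story names, and the ratings - is an illustrative assumption, not a prescription:

```python
# Sequence a backlog by a weighted combination of factors.
# The factor weights and per-story ratings are hypothetical.

WEIGHTS = {"business_value": 0.5, "risk": 0.3, "user_feedback": 0.2}

backlog = [
    {"name": "checkout happy path", "business_value": 9, "risk": 3, "user_feedback": 5},
    {"name": "payment error handling", "business_value": 5, "risk": 8, "user_feedback": 2},
    {"name": "new onboarding flow", "business_value": 6, "risk": 4, "user_feedback": 9},
]

def score(story):
    # Higher score = schedule earlier (so risky work surfaces sooner).
    return sum(WEIGHTS[factor] * story[factor] for factor in WEIGHTS)

for story in sorted(backlog, key=score, reverse=True):
    print(f"{score(story):.1f}  {story['name']}")
```

In practice the scoring need not be numeric at all; the exercise of arguing over the weights with stakeholders is often more valuable than the resulting numbers.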
Tuesday, June 28, 2011
Feedback Manifesto
I have come to value
Verbal, constructive feedback over written evaluations
Measuring output over measuring input
Frank feedback from colleagues over speculative management judgment
Real-time, frequent feedback over periodic high-ceremony assessments
Though the things on the right are commonplace and often dictated by antiquated HR policies, I value the things on the left more. Much more.
Principles
Giving feedback
My highest priority in giving feedback is to help my colleagues improve - to benefit them individually and the organization collectively. I always preface my feedback with this sentiment.
I understand that not all recipients are comfortable with feedback. I choose the time and place of delivery to respect this sensitivity.
I always ask the recipient if he/she is willing and receptive to feedback at that time/place and graciously accept "not now" for an answer.
My feedback focuses on behavior and outcomes - not the person.
When providing critical feedback, I consider the constraints and challenges in play at the time of the performance for which I am providing feedback.
I acknowledge intelligent risk-taking as a necessary component of creativity and delivery of value and incorporate my appreciation for it in my feedback.
I ask for feedback on my delivery in order to continually improve my ability to give constructive, valuable feedback.
Receiving feedback
I welcome critical feedback about my performance as a gift, and express my appreciation accordingly, regardless of whether I agree.
If I am not in a good place to receive feedback, I respectfully request an opportunity to reschedule.
I refrain from defensiveness or questioning the motives of the person giving me feedback in order that I can absorb the essence of the feedback.
Monday, June 20, 2011
The case against iteration based re-estimation
Many agile practitioners recommend re-estimating stories at the beginning of each iteration. I disagree with this practice.
For one thing, I believe it's a waste of time. Any value that you might get (which I doubt - see below) from the practice is lost on the time spent.
It's worse than that though. By re-estimating the iteration's stories, you are almost always estimating with a greater level of detail than what you had originally. With this increased level of detail, in my experience, estimates tend to grow.
Why is this a big deal?
Let's try an example.
I come in to my iteration planning meeting with 30 points worth of stories from the backlog. The team commits to those stories, but in re-estimating, the 30 points inflates to 40. In fact, this always seems to happen, as the team gets a little nervous about hitting their historical velocity and they know management is paying attention. Let's assume the team gets them all done. This increases the observed velocity by a third (40 points is a third more than 30). Now, let's say I have 120 more points left in the product backlog to get to the minimal marketable feature set for release. How many more iterations are left?
If you said 3 more iterations (i.e. 40 points per iteration gets you to 120), you are ignoring your team's tendency to inflate estimates. Assuming your estimate inflation rate is consistent (a third), you really don't have 120 points remaining, you have 160 points, or 4 more iterations remaining. Or, calculated another way, if you consider only the initial estimates to calculate your velocity (30), then you can determine that you have 4 iterations of 30 remaining. In both cases, you end up correctly predicting 4 more iterations. Then again - if you use the initial estimates, what value did your re-estimation from 30 to 40 provide you? I say none.
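The arithmetic in this example is easy to check. Under the assumption of a consistent one-third inflation rate, both consistent ways of counting agree on four remaining iterations, while the naive mixed reading gives three:

```python
# Check the worked example: re-estimation inflates 30 planned points to 40.
initial_per_iteration = 30    # points, as originally estimated in the backlog
inflated_per_iteration = 40   # points, after in-IPM re-estimation
backlog_initial_points = 120  # remaining backlog, in original estimates

# Naive reading: divide the un-inflated backlog by the inflated velocity.
# This mixes the two scales and under-predicts.
naive = backlog_initial_points / inflated_per_iteration  # 3.0 iterations

# Consistent reading 1: inflate the backlog by the same ratio, then divide
# by the inflated velocity (120 * 40/30 = 160 points; 160 / 40 = 4).
inflated_backlog = backlog_initial_points * inflated_per_iteration / initial_per_iteration
answer_1 = inflated_backlog / inflated_per_iteration

# Consistent reading 2: ignore re-estimation entirely (120 / 30 = 4).
answer_2 = backlog_initial_points / initial_per_iteration

print(naive, answer_1, answer_2)  # 3.0 4.0 4.0
```

Either consistent reading lands on four iterations, which is the point: the re-estimation step added no predictive value.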
If you regularly re-estimate at iteration planning meetings, make a note of the original vs. the updated estimates. See if they grow. Consider what impact this is having on the accuracy of your release planning.
OK, I can hear you now: "My team's estimates don't inflate... some go up; some go down." I haven't seen this, but let's say it's true for your team. Let's revisit the example above with this assumption. You go into the iteration planning meeting with 30 points and walk out with 29. Your velocity is not materially impacted. You are still on track with (roughly) 3 remaining iterations. So the question is this: what value did that re-estimation provide? I say none.
When *do* you re-estimate then?
I believe in updating estimates when information arises from experience that pertains to some shared aspect of a subset of stories. For example, let's say that your retrospectives have shown that every time you have a story that hits a certain database, it ends up being much more effort than expected. In a case like this, it makes sense to revisit those database stories to ensure that this knowledge is incorporated into those estimates. I call this aspect-oriented re-estimation (adapted from the term "aspect-oriented programming").
Tuesday, January 26, 2010
No applause, just throw money
I have an aversion to the applause that occurs in some iteration showcases (aka sprint reviews) on agile teams. The showcases are the meetings at the end of an iteration where the team shows working software to project stakeholders. Unfortunately, many teams end up displaying PowerPoint presentations instead or in addition, but that's a topic for another post.
I once witnessed a team that had 2-week iterations and held a 2 hour review at the end of each iteration. Each team member presented what he had worked on and accomplished over the course of the iteration. First of all, two hours was too long for this team for a 2-week iteration. They simply did not produce enough demonstrable working software to warrant a two-hour meeting. Worse, they often resorted to presenting partially completed stories - another problem, but again, fodder for a future post.
Having each team member speak and present his accomplishments might have been a good thing for that particular team. It can provide a similar social pressure that stand-up meetings provide - nobody wants to stand up and say they didn't accomplish anything, so each team member strives to deliver demonstrable value. The problem I had with this particular team was that the norms established that everyone applaud each person after his little piece of the presentation. Not only was the applause time-consuming, it was gratuitous. Folks in the meeting applauded simply because they were expected to, not because of any remarkable achievement.
In some cases, the team member didn't really accomplish anything of note on his own and the applause was simply out of place. It is as if my dog takes a dump in the middle of the carpet during a party and the crowd rushes up to pet and praise him, and offer him doggie treats. Clearly, this confuses the dog.
Though applauding every presenter is certainly overkill, I argue that applause at the end of an iteration showcase should be eliminated (or better, never started in the first place). In any case, it should never be de rigueur, and should never be instigated by an outsider who may have no clue as to whether the team really accomplished something special.
Applauding showcases for mediocre or even poor iterations can have a deleterious effect on motivation. Consider this possible thought going through a team member's head: "They applauded for this crap? They have no clue what we're doing".
We have a habit of rewarding performance. If my kids come home with straight-A report cards, I heap praise on them. If I see a C, I begin my own inspect-and-adapt cycle at home. But in report cards, the grades are clear, and are mostly objective. That is - the "A" reflects a large number of assignments, tests, and projects, and the teacher assessing the outcome has a substantial pool of similar production to compare against. Outcomes of iterations/sprints, on the other hand, are hardly objective, and are not comparable to other teams' output. It is truly only the team members who know, in their collective heart of hearts, whether they really deserve applause or not.
So my suggestion: omit the applause until the code gets into production and the customers' rave reviews start flowing. That's when the party should start.
Saturday, November 21, 2009
Squirrel Agile
A client shared this term a few weeks ago that I really liked: Squirrel Agile. Thanks Steve.
I'm sure you've seen a squirrel trying to cross a street. The squirrel starts off on one side of the street, looks, darts out, then sees something scary and retreats. Sometimes he retreats all the way... sometimes he just stops in his tracks. When he starts again, he may continue trying to cross, or may high-tail it back to the original side of the street.
In agile adoption, we sometimes see fits and starts... and retreats - just like the squirrel.
One aspect of agile adoption - self-directed teams - seems to me to suffer a great deal from this behavior. Management agrees to self-directed teams in principle, but as soon as a manager fears loss of control, or loses confidence in the team's ability to deliver, the agile squirrel darts back towards the original side of the street. The command-and-control tendencies return.
Other fits and starts occur when you start taking shortcuts in your approach. "We don't need to do a showcase this iteration; we don't have much to show". Or "We're 98% complete with this story - let's take credit for the story in our burn-up, since we'll finish it quickly at the beginning of the next sprint." I'm sure you can come up with other examples.
These behaviors mirror the squirrel's fear. These short-cuts and adaptations are typically not to improve effectiveness or efficiency, but reactions to fear of judgment. My suggestion: rather than darting back and forth as you cross the road, take a deliberate approach with courage.
agileshout
I just recently discovered this StackOverflow-style site focused on Agile issues: http://agileshout.com. It seems to be low-traffic at the moment, but perhaps my agile friends can find some value there.
I'm not quite sure why it's necessary to have a separate site from StackOverflow, since the tagging mechanism in StackOverflow permits differentiation of topics (e.g. "agile") and cross tagging of topics (e.g. "agile" and "BI") that might not otherwise find a specific home on a specific site.
Wednesday, November 18, 2009
Task breakdown - To do or not to do
I've had this conversation with agile folks over the years. It's about task breakdown. This is not entirely fair, but I'll say it anyway: I consider it one of the "thou shalt" approaches of agile by the numbers... or by the tools.
Assume a master story list with estimates based on points.
The iteration planning meeting (IPM) looks like this:
Foreach Story in Candidate list:
As the iteration progresses, we see this:
Foreach Day in iteration:
Task breakdown smells (and yes, I've seen them all):
I remember my elevated caffeine intake as a developer at interminable IPM's where I just wanted to get on with the work.
I remember debating minutia about whether a task belongs or not, and whether it's 8 hours (by Kurman) or 16 hours (by someone else). Yes - all those issues that the point-level estimating deigns to abstract by using relative estimates can rear their ugly heads in hour-based estimating. (By the way - if your solution is to assign the tasks at IPM time, you'll suffer from other problems).
I remember being the first to pick up a story as a developer and finding a better approach that implied a totally different task breakdown. Yes, I used the new approach, and yes I had to fix the accounting (XPlanner) to reflect my new approach. (Umm... on second thought - I just left the existing task list in place and fudged the actual hours into the old list. You might think this was wrong. I knew, however, in my heart of hearts, that the micro-accounting really didn't matter).
I've also worked with teams that have eliminated task breakdown/estimates and felt freed by the defenestration of the bureaucracy.
This tends to become a heated topic of conversation. In the end, the best answer, I think, is to let the team choose the approach that's right for them. Mandating task breakdown or mandating against it is almost always wrong. Drive the questioning to determine if task breakdown adds value or not, and try it both ways. If you have an agile management tool that requires tasks in order to do reporting, and you want to avoid defining tasks, just create one task per story that says "Do it". (In parallel, find a different tool that provides more flexibility).
scrum terminology decoder:
iteration = sprint
iteration planning meeting or IPM = sprint planning meeting
iteration manager = scrum master
iteration showcase = sprint review = sprint demo
master story list = product backlog (or possibly release backlog)
Assume a master story list with estimates based on points.
The iteration planning meeting (IPM) looks like this:
Foreach Story in Candidate list:
- Product owner: Describe the story
- Team: break the story down into tasks
- Team: estimate the effort for each task in hours
- Iteration Manager: Increment the task hours counter by the amount of the task hours for this new story
- Iteration Manager: Measure the task hours against the team's ideal capacity and report how full
- Team: If iteration is full (based on ideal/actual capacity) leave foreach
As the iteration progresses, we see this:
Foreach Day in iteration:
- Team member: update the remaining task hours for each of his/her tasks
- Iteration manager: udpate/publish burn-down
- Iteration manager: interrogate team members whose tasks are moving "slowly"
- Intra-story progress can be measured by the iteration manager who can address slowness ("John: you reported 2 hours left yesterday morning and today you're still not done... what's up?"
- The aggregate burn-down should show you how close you are to your target on a daily basis
- Tasks become the center of the reporting universe within the sprint and so progress against task completion may mask poor progress against story completion (e.g. due to undiscovered tasks)
- Team "feels" progress based on completed tasks, rather than on the real objective: completed stories
- Weird reward structures get created ("John - yesterday you said you had 12 hours remaining on the task, yet you finished it. Great job!")
- Weird negative feedback is inferred ("John - yesterday you said you had 2 hours left and you're not done yet": inference - you're not working hard enough)
- A daily reporting requirement implies distrust of the team to raise issues or problems: "If I don't keep an eye on the task level reporting, I can't hold them responsible on a daily basis"
- Estimating iteration capacity based on ideal task hours may conflict with iteration capacity planning based on historical velocity. What happens if my hour capacity is reached in the IPM, yet my booked story point total is below my historical velocity? (I'll save the tendency for re-estimation for another blog entry). Reminds me of the old adage - experienced sailors never go to sea with two compasses. They go with one or with three, because if the two disagree, you have no idea which one is right.
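That last point - two disagreeing compasses - is easy to make concrete. A hypothetical sketch (the function and the numbers are invented for illustration, not taken from any tool):

```python
# Two "compasses" for whether an iteration is full. They can disagree.
def compare_compasses(booked_hours, ideal_capacity_hours,
                      booked_points, historical_velocity):
    full_by_hours = booked_hours >= ideal_capacity_hours
    full_by_velocity = booked_points >= historical_velocity
    return full_by_hours, full_by_velocity

# e.g. 240 booked task hours fill a 240-hour capacity, yet only
# 18 booked points are planned against a historical velocity of 25:
# compare_compasses(240, 240, 18, 25) evaluates to (True, False) -
# full by one compass, underbooked by the other.
```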
When might task breakdown make sense?
- if the team feels that the benefit outweighs the cost
- if a developer or pair needs to break the story down into tasks in order to understand the work to be done, go for it (but don't worry about tracking all the details in a tool)
- Perhaps if your team is not mature enough to understand how to break down a story into tasks, and so you must spoon feed them with tasks (even then, I think it better to have them pair with experienced developers to learn how to become self-sufficient.)
- If the tasks to implement a story can be parallelized (different developers or pairs can be working on different tasks for the same story in parallel)
- If your team has "silo dysfunction" that requires choosing stories that don't overload a skill or domain area on the team
Example: let's say you have a project with equal parts C++, Java, and Fortran code, and you have C++, Java, and Fortran programmers who can't span technologies. In order to keep from overloading one technology group, you must choose stories that balance the work across those silos. Sometimes, the only way to create this balance is to task out the work across technologies, to ensure you're not overloading one camp. (By the way - removal of this dysfunction over time is recommended.)

My feeling: Using the whole team's time in the IPM doing task breakdown and estimating tasks is usually wasteful. I recommend that task breakdown be undertaken, if necessary, at the last responsible moment... when the story moves from "ready" to "in development", rather than at the IPM. The developer or pair picking up the story should do the breakdown.
Task breakdown smells (and yes, I've seen them all):
- The tasks look like this:
Design the code
Write the code
Write the unit tests
QA the code
Why is this an issue? You're probably doing mini-waterfalls, not simple, evolutionary design
- The "remaining work" for a given task is reduced by 8 hours (or perhaps 6) each day
Why is this an issue? Team members are not providing real data, they're telling the iteration manager what he/she wants to hear
- The hour-based iteration burn-down (or, as I prefer, burn-up) is practically a straight line.
Why is this an issue? In reality, many things take more or less time than imagined. Straight lines are an indicator of "cooking the books"
- The iteration manager asks people throughout the day... "How many hours do you have left on this task? When will you be done?"
Why is this an issue? It's command and control leadership and dilutes the power of the self-directed team.
- Iteration Planning Meetings (IPMs) take more than an hour and are more about numbers than about story understanding
Why is this an issue? You spend more time taking swags at hour estimates than you do actually thinking about the functionality to deliver
- The iteration manager applauds the team for accomplishing "400 hours" when their calculated capacity was only "350".
Why is this an issue? It's focused on task hour accounting and doesn't imply anything about how successful the team was at completing stories.

I've worked on projects with both approaches - task breakdown with hour-based burn charts and no-breakdown with point-based burn charts. My suggestion is to avoid speculation about the superiority of doing it the only way you've ever done it and at least give each approach a fair shot.
I remember my elevated caffeine intake as a developer at interminable IPMs where I just wanted to get on with the work.
I remember debating minutiae about whether a task belongs or not, and whether it's 8 hours (by Kurman) or 16 hours (by someone else). Yes - all those issues that point-level estimating aims to abstract away by using relative estimates can rear their ugly heads in hour-based estimating. (By the way - if your solution is to assign the tasks at IPM time, you'll suffer from other problems).
I remember being the first to pick up a story as a developer and finding a better approach that implied a totally different task breakdown. Yes, I used the new approach, and yes I had to fix the accounting (XPlanner) to reflect my new approach. (Umm... on second thought - I just left the existing task list in place and fudged the actual hours into the old list. You might think this was wrong. I knew, however, in my heart of hearts, that the micro-accounting really didn't matter).
I've also worked with teams that have eliminated task breakdown/estimates and felt freed by the defenestration of the bureaucracy.
This tends to become a heated topic of conversation. In the end, the best answer, I think, is to let the team choose the approach that's right for them. Mandating task breakdown or mandating against it is almost always wrong. Drive the questioning to determine if task breakdown adds value or not, and try it both ways. If you have an agile management tool that requires tasks in order to do reporting, and you want to avoid defining tasks, just create one task per story that says "Do it". (In parallel, find a different tool that provides more flexibility).
scrum terminology decoder:
iteration = sprint
iteration planning meeting or IPM = sprint planning meeting
iteration manager = scrum master
iteration showcase = sprint review = sprint demo
master story list = product backlog (or possibly release backlog)
Saturday, April 18, 2009
XP Game redux
I assisted in another XP game with legos last Tuesday - led by my ThoughtWorks colleague Conrad Benham. Several other ThoughtWorks folks helped to facilitate. We did this at the Atlanta Agile User Group meeting. Kris Kemper captured some of the activity in his blog post. Another participant captured his perspective here. I particularly like his comment "I was stunned at how many real world challenges our team of 5 mirrored". I'm always pleasantly surprised at how effective these games are at conveying agile concepts.
I facilitated one of these last year at a Florida .NET user group meeting which was captured in the Apress website in their content area that brings attention to various user groups.
If you have a desire to introduce agile concepts to an audience in a fun setting, you can't go wrong with an XP Game with Legos.
Tuesday, February 17, 2009
Iron Chef Retrospectives
I like cooking analogies to software. Lachlan Heasman recently posted a comparison between Iron Chef and scrum iterations. One missing component was the retrospective. It occurred to me that the chefs must do some sort of retrospective and that it would be interesting to see on TV. So I penned the following to Food Network this morning:
I'm a software engineer and a die-hard Food Network fan. I often use cooking analogies when coaching other software professionals on how to improve their approach to software development. Mise en place is one of my favorites.
I recently encountered a comparison of the Iron Chef competition to software development. One of the components of our approach in software is to use "retrospectives" to look back upon the iteration of software development we have completed to improve how we do things. We revisit what went well and what didn't go well, and adapt our approach to learn from our mistakes.
It occurred to me that the Iron Chef competitors must do "retrospectives" as well. I can imagine Cat Cora sitting around a table with her chefs over shots of Ouzo dissecting the completed competition and finding reasons to celebrate and opportunities for improvement.
I think it would be fantastic to have a postscript TV show to film the two teams in their analysis of the competition. You could intersperse clips from the actual competition when chefs refer to certain activities. It would also be interesting to hear what the chefs *really* think about the comments from the judges and how they might change what they did based on those comments.
Friday, February 06, 2009
Iteration Length
What's the right iteration length?
In Scrum, the recommendation is to start with thirty calendar days. From the Schwaber/Beedle book, section 3.6.3, Sprint Mechanics:
"Sprints last for 30 calendar days" and "thirty days is an excellent compromise between competing pressures". Though "Adjustments can be made to the duration after everyone has more experience with Scrum".
My first reaction to the number 30 is that this is a nonstarter. What happens when your sprint boundary occurs on a weekend? How do you schedule sprint transition meetings? I know of no calendaring program that schedules meetings "every 30 days". Monthly? Yes. Every n weeks? Yes. But not every 30 days. The organizations I know that have implemented scrum by the book have scheduled n-week sprints, in order to maintain a consistency of scheduling (e.g. Wednesday afternoons are for sprint reviews).
Many novice scrum practitioners resort to quoting "the book" when arguing for 30-day sprints, by which they mean quad-weekly sprints (which I will hereafter refer to as monthly for terseness). Suggestions to shorten sprints (e.g. to two weeks) yield objections along the lines of "with twice the meetings, we'll have less time to get work done. We already spend two days in our sprint transition meetings".
First of all, if the team is spending two days on monthly sprint transition activities, something is very seriously wrong. Really.
Sprint reviews (or showcases in the more generally accepted agile lingo) are about showing working software. They should not be about burn-ups and burn-downs and justifying the team's existence. You should never find yourself in a showcase/sprint review talking about the percentage completion of a story. Show the working stuff or sit down.
If you follow this pattern, the showcase/sprint review length should be proportional to the length of the iteration. If your iteration length is cut in half, your showcase should take half the time. After all, you're only showcasing half the functionality! (OK... maybe it's not exactly half. I would argue that two week sprints are more productive than monthly.)
Iteration planning should be a no-brainer. Preparation for the sprint planning meeting is on the business analysts, product owner, scrum master, project manager... everything should be pretty well laid out in advance. It should take no longer than an hour to do an IPM (iteration planning meeting). Unless, of course, you feel that task breakdown is necessary during the IPM (another Scrum approach with which I generally disagree). In any case, the IPM length should be proportional to the length of the iteration.
-begin sarcasm- How 'bout a 3-week iteration? Wouldn't that be a reasonable compromise? -end sarcasm-
Sprintly Retrospectives are another issue. Can you halve the time spent in retrospectives if you halve the iteration length? Probably not. But this, I feel, is not the right question. Are you getting value out of your retrospectives? If not, figure out how to get value. You don't necessarily need all the typical pomp and circumstance of a full-blown retrospective to impart the "adapt" portion of the "Inspect and Adapt" mantra. Too many teams, in my experience, adhere to the scheduling and process of retrospectives without digging deeply into different ways of adapting the team's approach to being effective and efficient.
More advanced teams can consider whether iteration ceremony is even necessary. A more lean approach would be to consider the flow of the work and to simply feed work into the development "machine" at a rate at which it consumes the work. "Inspect and adapt" would be continuous; retrospectives would focus on specific issues or areas of concern (rather than specific time periods across which many different kinds of issues may require attention). Classic iteration planning would yield to on-demand prioritization and estimation exercises - delayed to the last responsible moment. Reviews of completed functionality would be scheduled on demand for individual stories and perhaps with more ceremony when a critical mass of functionality has been completed.
What's the right iteration length for you? My recommendation is 12 days, 6 hours, and 24 minutes. (Oops... forgot the sarcasm tag)
Friday, December 12, 2008
Test Lookup
Kris Kemper has a nice blog entry on finding tests related to code you are changing that lines up well with a concept I've been pondering today.
I have a colleague who asked me to look over a rather large C# software system, made up of many (>20) .NET projects. In the best agile way, he is relying on his elegant structure and well-named/concise tests to serve as the documentation for anyone who might try to understand the code.
Being self-aware, my colleague understands that he is too close to the subject to be able to determine "understandability" of the code by others. So he asked me - an outsider - to take a look. I think he was also hoping that if I - as a supposedly post-technical project manager - could understand it, then he could brag that his code was so understandable that "even a PM can understand it". (Reminds me of that old Life cereal commercial where Mikey, who never likes anything, likes the cereal and his siblings declare "Even Mikey likes it!")
So I picked a class at random. Nice short methods, but reading the code didn't give me as much insight as I hoped. Then I thought... ah, the tests. I'll read the tests. So I right-clicked, searched for usages, and navigated to the location that had "unittests" in the name. Then I thought - wouldn't it be nice if I could right-click on a method name, or a class name, and - instead of asking for usages - I could ask for tests.
That got me to thinking - what would constitute a test? A unit test that simply invokes a method or instantiates a class may just be using it for setup or for supporting scaffolding. But then wouldn't that be a smell? Shouldn't that irrelevant setup junk be in a setup, or injected or...
Anyway, Kris gets at many of these points... my main extending thought is simply - wouldn't it be nice if the tools could help us navigate to tests and sniff out those smells (rather than forcing them out by removing the method, as Kris's technique suggests)?
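For what it's worth, a crude version of that "navigate to tests" idea can be scripted outside the IDE. This sketch just walks a source tree and greps files whose paths look test-ish for references to a member name - the naming convention here is an assumption for illustration, not how any real IDE or tool does it:

```python
import os
import re

def find_tests_referencing(root, member_name):
    """Naively list 'test' files that mention member_name.

    "Test file" here just means the path contains "test"
    (case-insensitive) - a stand-in for real project conventions.
    """
    pattern = re.compile(r"\b" + re.escape(member_name) + r"\b")
    hits = []
    for dirpath, _dirs, filenames in os.walk(root):
        for filename in filenames:
            path = os.path.join(dirpath, filename)
            if "test" not in path.lower():
                continue
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    if pattern.search(f.read()):
                        hits.append(path)
            except OSError:
                continue  # unreadable file - skip it
    return hits
```

Of course this can't distinguish a real test of the member from incidental setup usage - which is exactly the smell discussed above.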
Monday, December 08, 2008
Halftime
In American football, the game is divided into four 15-minute quarters. There's a short break between the first/second quarters and then between the third/fourth quarters, but the middle transition is longer. It's called halftime.
During halftime, teams return to their locker rooms for respite, but more importantly, coaches use it as an opportunity to adjust the game plan and motivate the team.
Much is made of the halftime break in football. I think the concept is equally useful in ... agile software development.
I introduced halftime on a couple of agile software development teams to step back and look at iteration (or sprint) goals at the midpoint of the iteration, to see where we stand, and adjust our approach. It has been a good opportunity to refocus, adjust sprint content, and address issues.
In a recent halftime, I pointed out that the scope for the iteration was 63 points, and that we had burned up 20. I used the analogy that our opponents had 63 points and we had 20, and that we needed to figure out how to make up the difference. We adjusted our approach, removed some scope, and went on.
If you're stuck in a project where the iterations are too long (e.g. four weeks), suggest introducing a short halftime (30 minutes) to refocus the team and adjust. Even if you're in 2-week sprints, doing a checkpoint/halftime at mid-iteration can be useful. It's also a good segue into shorter iterations.
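The halftime check itself is just a gap computation. Trivial, but here it is with the numbers from the example above (the function name is mine):

```python
def halftime_gap(iteration_scope_points, burned_up_points):
    """How many points the team must "make up" in the second half."""
    return iteration_scope_points - burned_up_points

# The example above: 63 points of scope, 20 burned up at halftime,
# leaving a 43-point gap to close (or to cut).
```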
Saturday, October 04, 2008
Agile adoption by geography
I wonder if there is a geographical distribution to the adoption of agile philosophies. To find out, I used Dice's job listings as a proxy to determine the percentage of postings, by city, that matched the word "agile".
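Dice doesn't expose this as an API that I know of - I just ran searches city by city and compared counts. The arithmetic, sketched with invented example numbers:

```python
def agile_percentage(counts):
    """counts maps city -> (postings matching "agile", total postings).

    Returns city -> percentage, rounded to one decimal place.
    """
    return {
        city: round(100.0 * agile / total, 1)
        for city, (agile, total) in counts.items()
        if total > 0  # skip cities with no postings at all
    }

# Invented example counts, not the real Dice data:
# agile_percentage({"Salt Lake City": (48, 400), "Atlanta": (9, 300)})
# -> {"Salt Lake City": 12.0, "Atlanta": 3.0}
```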
Surprising to me, Salt Lake City seemed to take the honors at 12%. Next tier: Richmond, Seattle, Portland (OR), San Francisco, Denver, Austin, and Raleigh, all between 5% and 9%. The remainder can be seen on the graph. It's probably a bit too small to see, but if you display it individually, you should be able to clearly see the list of cities and the rough placing of the cities within the distribution.
Results:
Thursday, June 19, 2008
Interesting question on LinkedIn:
What is the difference between an approach and a methodology?
Is it true that an approach is just an idea and is less proven, while a methodology would normally start as an approach but is eventually time-tested and proven? What takes an approach to convert into a methodology?
Here was my response:
I think of an approach as the foundation, or underlying principles that guide you in how you get your work done. I'm an agile software development proponent, and I strongly support the approach/philosophy expressed in the agile manifesto (http://www.agilemanifesto.org).
Methodologies, to me, prescribe more detailed steps to take to accomplish goals. Scrum is, in essence, an agile approach (the mantra is "inspect and adapt"), but it gets applied in many instances as a methodology ("Thou shalt provide a burn-down chart").
Methodologies excuse people from thinking about underlying principles. If you follow the rules for making a burndown chart, your methodology has been followed, but if you are avoiding opportunities to gain greater insight by taking a different tack, you are avoiding the approach/principles.
An analogy to cooking - using a methodology is like following a recipe, with your measuring spoons/cups at hand, precise temperature measurements, etc. Using an approach/principle requires more understanding of the ideas... the fact that sautéing onions releases liquid gives you some information that allows you to adjust other aspects of your cooking (like don't try to brown your meat while sweating your onions, because the released liquid will cause your meat to steam instead of brown).
In sum, I would say an approach requires more thinking and adapting, while a methodology provides more training wheels to give you procedures that you may not be able to link to the underlying approach/principles. I think that beginners require methodologies (like beginning cooks require recipes) while experienced folks can apply fundamental principles based on sound judgment (like experienced cooks simply press down on the steak to assess doneness instead of measuring the time).
Tuesday, June 17, 2008
XP Game in Fort Lauderdale
I facilitated a lego XP game for the Agile SIG of the Florida .Net user group at the Microsoft campus in Fort Lauderdale tonight - great fun! Thanks to two of my colleagues from Bayview for playing the customer roles: Howard Sims (project manager/iteration manager) and Samir Patel (development manager).
Dave Noderer blogged in real time and took a video.
It's fun to watch adults get so engaged with legos.
Sunday, May 11, 2008
The continuum
Is your software development approach agile or not?
Are you pregnant?
These are different questions. The latter question can be answered in a straightforward fashion... either you are or you aren't.
Using an agile methodology is a different question. Or an agile approach. Or an agile philosophy. Are you agile... yes or no?
The answer is almost always somewhere in between. You can use agile techniques - pairing, continuous integration, TDD, refactoring, user stories (following the INVEST principle), standups, abstract estimation techniques (e.g. points), iterations, velocity measurement, etc. - in any degree. But what combination makes you "agile"?
I think the answer is never yes or no, but the degree to which you espouse, promote, and enact agile approaches.
I think the continuum concept applies much more broadly than folks presume. Is your team "self-directed"? That's not a yes or no answer either.
Is being agile a good thing? Yes. Consider any course of human activity... being agile is better than being less agile. I'm open to contrasting opinions, but I can't imagine an argument espousing being less able to adapt to change as preferable. The chunked-up question is... how useful are agile techniques in getting software delivered? I argue that the answer is "very". Why? The first answer lies in the fact that you NEED to be able to adapt to changes in your environment. Guess I'll save the remainder of the argument for a follow-up.
Tuesday, May 06, 2008
The "Z" Word
Zealot: A fanatically committed person.
Hmmm... good or bad?
It suggests an unfavorable hue to me. I've seen this word used before to describe folks' commitment to a particular technology platform: he's a .NET zealot, or a Ruby zealot, or a Zope zealot.
In the case of technology (most cases?) zealots prefer their approach/solution to all(?) others. For instance, how could you turn back from having written significant code in Ruby to doing plain old Java? Those who aren't writing in Ruby have simply not yet been shown "the way".
I consider myself fairly pragmatic and reasoned, but I seem to have acquired the dreaded tag at work. My zealotry is in an agile approach to software development. (English zealots - note the use of the article "an" vs. the article "the" - it's an important distinction).
I'm measuring whether I should shrug the label off, welcome it with open arms, or fight it. Actually ... that's not quite accurate. I'm pretty sure shrugging it off will not be my chosen response.
At some level, I feel it's like being called a "rationality" zealot... or a "damned common sense practitioner". I'm a "Sunrise Zealot", damn it! I find the agile manifesto chock-full of common sense. (OK, here I see it... manifesto => zealotry... maybe if Kaczynski and Marx hadn't also used the term "manifesto", we'd be in better shape.)
Some software development professionals unabashedly promote a "waterfall" approach using a document-centric "SDLC" lifecycle. Templates are created to capture every known aspect of the project. Stakeholders are forced to "sign off" on documents, which form a contract-like agreement on what will be delivered. Of course, a change request process is included, which affords the opportunity to create change documents, which are then signed off.
Agile is not binary... agile is a continuum. There are agile development techniques and agile project management approaches. You can use any of them - or not. The use of one technique does not an agile project make.
Agile, chunked up (to me, anyway), is about focusing on the delivery of quality software over and above delivering service to a process. The customer is not a color copier and a list of signatories to a spec. The customer is the user of the software: the organization that benefits from working, functional, valuable software. No organization prefers reams of documents over working software. And those who argue that voluminous documentation leads more reliably to working, functional, valuable software should start measuring the extent to which those documents are implemented as stated. After all, not many approaches refine their processes to eliminate waste (à la Lean).
One should question the value of accepted process (using the Five Whys, for example) in order to assess the usefulness of documents and other artifacts to ensure they stand the test of reason.
OK, my writing therapy is done. I hereby embrace the label "Agile Zealot". I also accept the following additional labels: "Fatherhood Fanatic", "Beer Lover", and "Education Enthusiast".
I do like the alliteration angle though... may I please be an "Agile Aficionado" instead? Ah, but perhaps that doesn't quite capture the intensity of my rapture.
Friday, March 07, 2008
Sausage Making
Otto von Bismarck, the "Iron Chancellor" of Germany in the 1800s, is said to have made this observation:
"Laws are like sausages, it is better not to see them being made."
The comment suggests that the making of sausage is a messy business. It’s better if you just avoid thinking about what goes into making the sausage and simply enjoy the results. Similarly, the making of laws is a messy business that can be unappetizing.
I use this reference quite a bit on projects. The extension to this in the software development world is:
"Software Releases are like sausages; it is better not to see them being made."
Software development sausage making includes our development approach (e.g. points, burnups, self-directedness) and, in some cases, our technology choices. I have, at times, made the mistake of opening up the "sausage making" process for stakeholders to assess, critique, and, well... watch. Sometimes this is appropriate. For example, when getting into detailed conversations about cost/benefit analysis on technology purchases, or determining whether those choices are in line with the corporate IT strategy, it makes sense to discuss them. Too often, though, we make the mistake of getting caught up in discussing the sausage making with stakeholders in cases where, frankly (sausagely?), they might be better off not knowing how the sausage is made.
This is a classic mistake that technologists make. We get so enamored of our technology and our process, that we think everyone else must be interested in how we do our work.
We should focus stakeholder reviews more on the array of sausages we produce, rather than how they are made. Don't show iteration burnups, or discuss the nature of a "point" in the estimation process. Apply an adapter/interface on the information to convey only that which is appropriate to the audience. Consider this a sausage casing, if you will, that abstracts the detailed content regarding the inside of the sausage. So, rather than discuss with them that
"This sausage is almost done – it needs some fennel, and a little more pork fat, and a touch of lard."
we should be saying something like
"This sausage is almost done; we are 85% confident that it will be available for consumption in the mid-April timeframe and 98% confident that it will be ready in time for the Memorial Day picnic in May. If you want to increase the probability that it is delivered in time for April, we can eliminate one of the ten sausages we have slated for April and refocus on this one."
It is in these conversations that we provide our stakeholders with information that is suited to their digestive profile.
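To make the casing metaphor concrete, here is a minimal Python sketch of that adapter idea. Everything in it is invented for illustration (the class names, the point counts, the velocity): an internal IterationStats record holds the sausage-making details, while a StakeholderView adapter exposes only a delivery forecast.

```python
from dataclasses import dataclass


@dataclass
class IterationStats:
    """The sausage making: internal team metrics (illustrative numbers)."""
    points_remaining: int
    velocity_per_iteration: int
    iteration_length_weeks: int


class StakeholderView:
    """The sausage casing: adapts internal metrics into audience-appropriate info."""

    def __init__(self, stats: IterationStats):
        self._stats = stats

    def weeks_to_done(self) -> int:
        s = self._stats
        # Ceiling division: partial iterations still cost a full iteration.
        iterations = -(-s.points_remaining // s.velocity_per_iteration)
        return iterations * s.iteration_length_weeks

    def summary(self) -> str:
        # No points, burnups, or velocity leak through the casing.
        return f"On current pace, delivery in about {self.weeks_to_done()} weeks."


view = StakeholderView(IterationStats(points_remaining=45,
                                      velocity_per_iteration=20,
                                      iteration_length_weeks=2))
print(view.summary())  # → "On current pace, delivery in about 6 weeks."
```

The stakeholder gets a date-shaped answer; the points and velocity stay inside the casing, available to the team but never presented as the product.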
Monday, February 18, 2008
Agile versus Discipline (sic)
I recently gave a presentation, "Introduction to Agile", at a .NET CodeCamp in South Florida. One of the attendees commented that Barry Boehm had written a book called something like "Agile vs. Discipline".
My first reaction was that this must be a mistake. No informed software expert could possibly posit agility against discipline. I looked it up after the conference. This attendee was indeed accurate. I found Boehm's article: "Rebalancing Your Organization's Agility and Discipline" here: http://www.agileprojectmgt.com/docs/Boehm20.pdf.
Wow.
Wow.
First paragraph of the abstract says: “we realize that ‘disciplined’ is not the opposite of ‘agile’ but it is our working label here for methods relying more on explicit documented knowledge than on tacit interpersonal knowledge”.
That's like titling an article "Rebalancing Gun Control and Murder" and mentioning in a footnote that not all murders are caused by guns, but that "murder" is our working label for lack of gun control. Wherever you come down on that topic, you can see the illusion being prepared.
Table 1 shows that the home grounds of “disciplined method” include “stable, low-change; project/organization focused”. How many of you, dear readers, work in such environments? Any?
Table 1 also conveys that "quantitative control" is the home ground of "disciplined" methodologies. Does this mean that we agilists just shoot from the hip and decry measurement? Not in the least. What's your unit test code coverage, Mr. Discipline? Do you measure it? I do.
Primary goals of the "disciplined" include "High Assurance". Indeed. Excessive documentation, analysis paralysis, and long waits before users see working code provide "high assurance"? That's your "disciplined method" for you. Documents in lieu of working code? Is that what assures highly?
I would prefer Mr. Boehm’s table of “home grounds” to relabel the columns “Agile” and “Disciplined” with these terms: “Planning” vs. “Plans”. Here’s the trick. Agile relies on the act of planning and the discipline of responding to change. His so-called “disciplined” methods rely on plans that usually remain static and get modified by committee.
Mr. Boehm’s spider chart shows dimensions for attributes of a project to show whether it is more or less amenable to agile or “disciplined” methods. I’ll take them one at a time:
- Personnel. Essentially, Mr. Boehm says you need smarter people to do agile. This is probably true. I would argue, however, that you'd rather pay one person $90K to do the work of two people at $60K each: you save $30K outright, and the reduced communication cost between people adds economies beyond the raw numbers. This is true in any environment you care to consider. The best programmers, it is said, are ten times more productive than the average. Why not pay twice the going rate for someone who's only, say, five times as good?
- Dynamism (% requirements change per month). If your requirements don't change much, agile is not right for you. What a silly statement. They probably meant to say that if your requirements don't change much, you can hire cheaper help who can't adapt to change. Agile works well in dynamic (i.e. most) environments. You might be able to get by in the rare "stable" environment with excessive documentation and a long delay between idea and working code.
- Culture (%thriving on chaos vs. order). I love this statement: “[disciplined devotee] thrives in a culture where people feel comfortable and empowered by having their roles defined by clear policies and procedures”. When’s the last time you heard someone say “I am particularly empowered when somebody tells me exactly what to do and when to do it”? I’d much rather have a team of thinking, adaptable, empowered developers than drones who simply follow the process. Am I way out there?
- Size. This is the big knock on agile… that it doesn’t scale. I’ve seen it scale… well… fail to scale. I’ve yet to see a successful large-scale agile project up close and in person. But this should not be interpreted to mean that I discount the possibility. I believe it can be done. I just think that it requires enlightened leadership. That’s where the dearth lies.
- Criticality. Really, this is plain stupid. Sorry, Barry, but this dimension really irks me. When's the last time you worked as a leaf-node member of a team on a project, in either a waterfall... umm, I mean, disciplined... approach or an agile approach? This dimension alone conveys to me that you have lost touch with software development reality. Agile is not the antithesis of discipline. You probably think that agile means "no documentation", like many ignorant folks out there. Fortunately, they're not writing about it; alas, you are.
One of the paragraphs urges readers to "assess the likely changes in your organization's profile over the next 5 years". Please. If anyone has visibility beyond the next year, my hat is off to you. Write it down, revisit it in a year, and see how wrong you were. Five years? No freakin' way. Your vision of organizational stability warrants inclusion in the fantasy walk of fame.
Mr. Boehm has a bullet implying that dependability is a hallmark of non-agile methods. It reads: "key future trends to consider include the increased concern with software dependability and need for discipline". The authors have crossed a line here. They're no longer using "discipline" as a catch-all for waterfall methods; they're now using discipline as an inarguable virtue. Again, agility is all about discipline. They are clearly out of touch.
Here's a phrase that disgusts me: "Examples of potential anomalies are: Operating with agile, fix-it-later developers with a growing, increasingly enterprise-integrated and dependability-oriented user base". Really. "Fix-it-later developers" implies that agile developers are hackers who let bugs fester, and that, somehow, agile development conflicts with a user base that values dependability.
Unbelievable. Really. Mr. Boehm (and coauthor): Poke your head up into the real world. Join an agile project. Don’t just read about it and regurgitate uninformed opinions.
In sum, I am disappointed and amazed at the observations in this article, which seem to be drawn mostly from uninformed, anti-agile pabulum.