How’s that sprint going? ZenHub reporting and GitHub data will tell you
My experience has been that no one needs a burndown chart to know how a project is going, and to the extent that you do, it usually points to some mismanagement and an attempt to fix it by intensifying project management rigor. Especially at early-stage startups, where requirements change rapidly throughout the project, I think this stuff represents very real overhead that just isn't that useful. In larger companies, where tools like this are typically used for reporting up a chain, it can be slightly more useful for getting a holistic view of the organization, but again, I'm just not sure it's worth the effort, and I really question resting a significant amount of decision-making power on this over qualitative data.
To be clear: I'm not throwing all project management methodology to the wind, but this promises to give you a fancy chart that will "identify bottlenecks in your development process," and I'm sorry, but if you need a graph to know where those are, there are probably larger problems in your organization. Identifying bottlenecks is easy; fixing them usually involves much higher-level management functions like recruiting and training.
IMO the problem with most of these approaches is that it's incredibly hard to measure certain important things, and making decisions purely based on what you can measure is risky if you're not aware of what you're leaving out of those decisions.
Consider the team that spends 2 months implementing a feature twice or more because one product person specs out one thing, their VP boss sees it a month later and says "no, redo it this other way," and then a month after that another VP sees the result and says "no, you have to go back to the first version!" But maybe they hit their sprint velocity targets every sprint along the way.
Consider the team working on the "elegant," "clever" codebase that solved yesterday's problem extremely neatly but wasn't extensible at all to the new feature, requiring a ton of effort to work around the old assumptions in the code. This team could similarly be "crushing it" according to their story points and burndown.
Consider the team with high SLAs and a lot of old domain knowledge that now gets a critical new project while still having 70% of their time sucked up by being the experts on the old system, and is never given the priority to operationalize/automate some of that old work. How often do you get those wasted hours measured in a way that convinces someone to give the team the help they need?
This isn't just a non-technical thing. I've seen (and been guilty of) devs lobbying for rewrites of messy legacy systems, only to end up months later with a system no more extensible than the one before, without any improvement in the ability to deliver business results rapidly. I think it comes down to the company environment in the end: if you're in a place where you get good results from trusting each other, you can find beneficial trade-offs and good-faith "here's how refactoring this is gonna help you out later, product person; and in exchange for prioritizing it we'll squeeze in this extra high-priority short-term thing too" dealings. But if not, it's a prisoner's dilemma sort of situation.
As an Agile coach, I agree to an extent. Metrics can be, and often are, weaponized by management to exert control over a project.
I wish more organizations saw Agile metrics, or any metrics for that matter, as conversation starters for the team to discuss how things are going and how/when to adjust course. It's a challenge to change habits of making binary decisions based entirely on metrics.
I have to agree as well.
Our team has switched from a burndown chart to a "Confidence Check" at each standup. On the count of 3, we each hold up fingers rating the chance to complete the sprint out of 5. It's been a good way to see if the blockers are likely to be overcome and see if anything was missed etc. Our combined gut feel is a lot better than the burndown chart for reporting and for starting conversations.
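For what it's worth, the mechanics of that check are simple enough to sketch in a few lines. This is purely a hypothetical illustration of the voting practice described above (the thresholds are my own assumptions, not part of the original comment):

```python
# Hypothetical sketch of a standup "Confidence Check": each person
# votes 0-5 on the chance of completing the sprint. A low average or
# a wide spread of votes is flagged as a conversation starter.

def confidence_check(votes, low_threshold=3.0, spread_threshold=2):
    """Summarize one round of finger votes (0-5 per person)."""
    avg = sum(votes) / len(votes)
    spread = max(votes) - min(votes)
    flags = []
    if avg < low_threshold:
        flags.append("low confidence: discuss blockers")
    if spread >= spread_threshold:
        flags.append("wide spread: someone may know something the others don't")
    return avg, flags

avg, flags = confidence_check([4, 4, 2, 5, 3])
```

The spread check matters as much as the average: a 4.0 average hiding a lone "1" vote is exactly the kind of thing the round of fingers is meant to surface.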
That feels like a way more reasonable effort to signal-value ratio to me. Thanks for sharing.
ZenHub cofounder here - some really good points in this response, and it didn't sound like you were throwing project management methodology to the wind at all! For a lot of teams we work with, simply getting these insights is half the battle. A team shouldn't expect a tool to solve their problems, in the vast majority of cases it comes down to people and processes. We hope that these reports can drive the types of insights that are catalysts for those larger changes.
Thanks for this feature though. Looks awesome to me! It's nice to see cycle time for other people in the org who don't use estimates. The fact that we're tracking changes in pipeline movement makes it really easy for everyone to use.
I do think now about how, at the end of the day, I normally like to put the next issue I'm going to work on into "In Progress" before I leave so it's ready for the next morning. Now I wonder if, for metric tracking, I should stop doing that.
Also, I wonder what happens if you put something into In Progress, then get a different priority and take it out. Does the timer reset? Does it track the total time it was in the pipeline?
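I don't know how ZenHub actually handles this, but tools generally pick one of two policies: reset the clock each time an issue re-enters a pipeline, or accumulate total time across entries. A hypothetical sketch of the accumulating version (class and method names are my own, not ZenHub's):

```python
import time

class PipelineTimer:
    """Hypothetical cycle-time tracker that accumulates the total time
    an issue spends in a pipeline across multiple entries (no reset)."""

    def __init__(self):
        self.total = 0.0
        self.entered_at = None

    def enter(self, now=None):
        self.entered_at = now if now is not None else time.time()

    def leave(self, now=None):
        now = now if now is not None else time.time()
        self.total += now - self.entered_at
        self.entered_at = None

timer = PipelineTimer()
timer.enter(now=0)    # moved to In Progress at the end of the day
timer.leave(now=100)  # pulled back out when priorities changed
timer.enter(now=500)  # picked up again later
timer.leave(now=650)
# Under the accumulate policy, total is 100 + 150 = 250
```

Either policy changes what the "move it to In Progress before I leave" habit does to your stats, which is why it's worth asking.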
My experience has been that no one needs a burndown chart to know how a project is going
Indeed, you use a burndown chart to know how the current sprint is going. Don't wait until the end of the sprint to know you're late, detect problems early.
I think this stuff represents very real overhead
Updating a burndown chart should take a grand total of 20 seconds. What takes time is the discussion you have when you're late. Yes, solving problems takes more visible time than burying them :)
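The mechanics really are that lightweight: a burndown chart is just remaining work per day plotted against an ideal straight-line pace. A minimal sketch, with made-up numbers purely for illustration:

```python
# Minimal burndown sketch: remaining story points per day vs. an
# ideal constant-pace line. The per-day completion data is hypothetical.

def burndown(total_points, completed_per_day):
    days = len(completed_per_day)
    remaining = total_points
    actual = []
    for done in completed_per_day:
        remaining -= done
        actual.append(remaining)
    # Ideal line: burn the same fraction of the work each day.
    ideal = [total_points - total_points * (d + 1) / days for d in range(days)]
    return actual, ideal

actual, ideal = burndown(20, [4, 2, 0, 6, 8])
# A persistent gap between actual and ideal is the early-warning signal.
```

All the effort is in producing `completed_per_day` honestly and in the conversation the gap triggers, not in the chart itself.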
In larger companies, where stuff like this is typically used as reporting up a chain
It's a terrible tool for reporting up the chain. There's no reason to report the details of a sprint up the chain.
Identifying bottlenecks is easy
It sometimes is (usually when the situation is dire), but most of the time it really isn't. I don't know if the burndown chart is the best tool for identifying bottlenecks, though; at best it shows that there are problems that might be bottlenecks. It's just a very lightweight tool to make problems immediately visible. If you find a burndown chart to be too much overhead... I don't know what tools you'll ever use.
> Indeed, you use a burndown chart to know how the current sprint is going. Don't wait until the end of the sprint to know you're late, detect problems early.
I don't know how this is different from what I said and my point is that I don't think they are a good tool to detect problems. In most cases I would consider burn down graphs to be trailing indicators of problems, not leading, which is why I consider them more appropriate for reporting up the chain than down.
> Updating a burndown chart should be a grand total of 20sec. What takes time is the discussion you have when you're late. Yes, solving problems takes more visible time than burying them :)
Right, my point is not about checking off a ticket and letting an automatic burndown chart update. My point is about the kind of project management where you have an intense focus on the how instead of the what. It's very inappropriate for early stage startups because of how quickly requirements change. It also represents a communication overhead.
> It's a terrible tool for reporting up the chain. No reason to report the details of the sprint up the chain
We can disagree about whether or not it has value for reporting up the chain, but why wouldn't the status of a sprint be something you'd want reported up the chain?
> It sometimes is (usually when the situation is dire), but most of the time really not. I don't know if the burndown chart is the best tool to identify bottlenecks though, at best it shows that there is some problems that might be bottlenecks. It's just a very lightweight tool so make problems immediately visible. If you find a burndown chart to be too much overhead... I don't know what tools you'll ever use
We're getting into semantics; it depends on how you identify bottlenecks. My experience has been that it typically looks something like "a bunch of tickets are stuck in design, so design is a bottleneck," which isn't really helpful when the core issue is, for instance, that you don't have enough designers. That's typically a reality most people already know and understand, so I don't think it's super valuable here, nor do I think a different project management or organizational system fixes it. I'm interested in what you consider to be a bottleneck and how you would identify it.
I feel like the use of metrics is often just a crutch to avoid real management/leadership decisions. Sprints are meaningless overhead unless they are coupled to important deliveries. Often they aren't.
In my real world experience software doesn't get delivered within sprints, it gets delivered when it's ready. And decisions about what to deliver aren't made by the sprint team.
Anyone can pad story points enough to look like a sprint superhero. The key focus should be on what you're building, whether you're building it the right way, and what you can do to deliver it faster/better. Sometimes the right answer is to stop holding meetings, get out of the way, and let the team focus on the problem without interruption for a while.