Why I'm not a big fan of Scrum (okigiveup.net)
I've been doing Scrum-driven development for a few years now, after working much more independently for most of my career. I understand why managers like it, because it at least lends some structure and predictability to what is an inherently unpredictable enterprise.
But the author's criticisms of the incentives of Scrum are on point, I think. Because the stories are always articulated in terms of user-facing features, they encourage developers to hack things together in the most expedient way possible and completely fail to capture the need to address cross-cutting concerns, serious consideration of architecture, and refactoring.
This is how you can get two years into a project and have managers and clients who think that things are going well when the actual code is an increasingly unmaintainable rat's nest. Good devs confronted with this kind of mess will eventually burn out on sticking their necks out defending necessary but opaque refactoring tasks and move on to greener pastures.
I am Product in a large corporation, and it's certainly a fine balance. In a previous life, I was a developer, and I dealt with similar issues. I have a lot of respect and empathy for the Tech team. Therefore, I asked my Tech team to inform me when things are getting out of hand -- that's the responsible Product thing to do. I instituted that technical debt is part of our KPIs. If it's not captured, it's not actionable. A few examples:
- Documentation: required when a method's cyclomatic complexity is high or the method is just plain long (we defined what we consider "long"). Every customer-facing method has a working usage example (%), etc.
- Test Coverage: % unit tests (defined by modules) -- higher is not necessarily better, but it gives us a sense of where we stand. % of test automation: manual vs. automated.
- Refactoring delays: # of TODO comments, # of compiler and static code analysis warnings (e.g., “this method is deprecated”) --> that is a sure sign that things will break in the future and an investment is needed.
There's some development work involved in capturing these things, but it gives Tech the visibility and power to inject some maintenance work into normal sprint development. I'd love to hear what other people are doing.
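To make the "capture" part concrete, here is a minimal sketch of the kind of script that could feed the TODO-count metric, assuming a Python codebase under src/; the markers, path, and report format are placeholders, not anything from the comment above:

```python
# Minimal sketch of a "capture the debt" script: counts TODO/FIXME/HACK
# comments per top-level module so the trend can be reported each sprint.
# Assumes a Python codebase under ./src; markers and path are examples only.
import re
from collections import Counter
from pathlib import Path

DEBT_MARKERS = re.compile(r"#\s*(TODO|FIXME|HACK)\b", re.IGNORECASE)

def debt_by_module(root: str = "src") -> Counter:
    counts: Counter = Counter()
    for path in Path(root).rglob("*.py"):
        # use the top-level package (or file) name as the "module"
        module = path.relative_to(root).parts[0]
        with path.open(encoding="utf-8", errors="ignore") as f:
            counts[module] += sum(bool(DEBT_MARKERS.search(line)) for line in f)
    return counts

if __name__ == "__main__":
    for module, count in debt_by_module().most_common():
        print(f"{module:30} {count:4d} debt markers")
```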
> technical debt is part of our KPIs. If it's not captured, it's not actionable
This is where I suspect manager-types and developers have a vigorous divergence in values.
Professionals routinely encounter situations where something is wrong and needs to be "actioned" but its wrongness isn't effectively measured by any metric (other than the opinions of the experienced people looking at it).
There is a certain species of manager who has taken Taylorism a bit too far and says that anything not reflected in the KPIs is not real and not getting acted upon. I hope this isn't what you're expressing here, but oh man is that mindset frustrating.
On the other hand, at the end of the road - or the rewrite - something should have been gained in terms of money, risk or time.
Being shitty is not reason enough, but it's almost always possible to reason and quantify and weigh the cost versus the benefits.
Senior staff in pretty much all organisations are given some latitude to perform tasks that they believe will be beneficial, without needing to produce a specific cost-benefit analysis.
We have one-on-one catch-ups with staff, we have all-hands meetings, we visit clients, read books and articles, attend industry functions, issue press releases, meet with potential investors, etc.
There's value in all those things, but very rarely do we need to account for the fact that we spend time on them.
Senior engineering staff need that same freedom - it should be totally appropriate for them to make that call that "I'm accountable for the technical quality of this code, and I've decided that this needs to be done".
That requires some sense of business acumen - there needs to be an appropriate balance of technical investment to business investment - but that's part of the role of being a senior dev / tech lead / architect / whatever-your-organisation-calls-them.
I agree.
I am not saying that you always need a detailed balance sheet, just that it's good to reason about the benefits.
It's especially hard to quantify risks - that's where you really need the expertise.
What I wanted to point out is that there seem to be quite a few developers who actually do not have a sense of business acumen, and are willing to spend a lot of money on low-priority tasks, just for their own personal pleasure.
Of course something should be gained, but that doesn't mean you can always tell how much would be or was.
There is no mature "actuarial science" of software development. The market can provide concrete pricing for new feature development; engineers can't tell you how many dollars of tech debt you're in or give three significant digits on the probability of a major outage tomorrow.
That doesn't make money/risk/time costs which are difficult to measure any less real and it doesn't make them smaller than the ones which are easy KPIs.
Nor can you necessarily look at the movement of measurable KPIs and say "man, that rewrite was a waste of money." Who's to say things wouldn't have been worse without it?
> it's almost always possible to reason and quantify and weigh the cost versus the benefits
I don't think so. As a pretty good analogy, can you quantify the benefit of replacing knob-and-tube wiring in an old house? Now consider that the house across the street keeps its old wiring for the next twenty years without anything bad happening.
The problem with raising issues with a framework like Scrum is that despite many shared references, most people are in environments which apply Scrum in a slightly different way.
We talk about organisations applying Scrum, but what we're really referring to are groups of people, and depending on those people, their mindset, their history and their current role requirements - they will use the Scrum model in their own idiosyncratic way.
Take the point about technical debt and running so fast with an Agile development workflow that you never get to refactor code or properly document etc.
Even if you just took that in reference to one specific company, the value of, cost of, and importance of these things could be different at different times.
When a startup is running to get MVP out to get customer feedback, it's usually much more efficient to set the basic expectation that the first version of your product will get thrown away once you've learned all the important aspects of what you need to deliver. With that in mind - startup development is a completely different beast than when your product is established and you have paying customers with expectations of up-to-date documentation etc.
Some managers do not have good people skills and manage by just holding developers to account based on the expectations set on their Scrum planning day. Discussions about whether refactoring should be done yet may not even be something they want to know or talk about, and they rely on engineers to factor that into their sprint estimates.
Depending on whether the company is being driven by sales/demo opportunities, or feature roll-out timescales etc. then the call about whether to do something 'quick' or 'right' may drop either way.
For developers it's important to understand the dynamics that are driving the business and what their role is (sometimes you have to 'JFDI'). If you have a good engineering team with a strong leader then these things shouldn't be that visible to anyone else. It's also important to be able to meet the expectations you set. If you keep delivering late then you'll also struggle to get people to let you do more than the minimum required at the time.
For the business as a whole it's all about the big picture and understanding the decisions you make and what their impact (short and medium term) is.
If you want quick releases and don't have the resources for that to allow for good documentation and/or scalable code, then you need to understand that technical debt is building up and at some point it will need to be paid.
> despite many shared references, most people are in environments which apply Scrum in a slightly different way
Exactly. In fact, the linked text is a prime example: it professes to address the "standard Scrum as described in the official guide", and then goes on to condemn the notion of story points. Now, I happen to agree with those story-points criticisms, but guess what: that official guide says nothing about story points at all! :o
In my view the only way to handle this is to make constant refactoring part of the work without telling management. If you ask for permission for refactoring you will almost always get a "No".
This is how we ended up with the "Shadow Sprint" at a previous workplace. A whole bunch of utterly essential engineering work was left out of the sprint process, so many of us would just go ahead and do what needed to be done anyway while working on whatever tickets we'd picked up from the "real" sprint.
Utterly dysfunctional of course, but if it's do-or-die, I'd prefer to 'do'.
How did you handle regression testing of the refactored functionality? That's our biggest hurdle. QC is swamped with "regular" sprint work, so there's no way they manage any additional load during the sprint.
We had pretty good (but not amazing) unit and integration tests in many of those systems, and a willingness to manually test the systems where we didn't have tests.
If we had to lean on a QA department to get this stuff done, I'm not convinced we would have been able to actually deliver that project.
Only refactor things in sections of code that you're changing anyway. That way QA already needs to test those sections of code, and gets no extra work.
We tend to bundle that work up and include it in tickets with the full acknowledgement of the rest of the team.
"Oh, since we're adding new email features, we'll need to clean up some of the old email code. That will add additional complexity to this task, so we will estimate it higher."
It works, allows us to still track the impact on velocity, keeps everyone informed, and makes technical debt clear and trackable.
I believe this is the correct approach. You're not hiding the complexity of the work, and you're also not compromising on quality. This leaves only the task of saying "no" when asked if you, "for now, could just..."
This is how it should work in my view. You just need managers that are disciplined and don't tell you to not clean up old code because of deadline pressure.
This is exactly what is supposed to happen in a Scrum system. If someone is not doing this they're not doing Scrum - just something they called Scrum to pacify people
This is mentioned in the article as possible only for small changes. When the refactoring involves more systemic problems, it becomes impossible to make the change in the course of making other client-facing changes.
I'll add that some subsystems are a mess but stable. It's difficult to make refactorings with other changes when there aren't other changes to make. I see this in particular with old but stable subsystems that will need to be ported to new environments someday, but it's never a good day to lay the groundwork for that inevitable change.
> I see this in particular with old but stable subsystems that will need to be ported to new environments someday, but it's never a good day to lay the groundwork for that inevitable change.
I think it's correct to defer that work until you need it. You may end up never porting that subsystem, in which case refactoring the currently-stable implementation is wasted effort. As and when porting becomes an actual business requirement it can be prioritized appropriately.
> You may end up never porting that subsystem, in which case refactoring the currently-stable implementation is wasted effort.
I understand this approach, and it works many times. But when the porting is delayed until the last possible minute, it's more likely that hacks are put in because the requirement turned into a hard deadline. Instead of defining a sensible OS abstraction layer, the developers might find and replace "Windows XP" with "Windows 7".
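For illustration only (the interface and platform classes below are invented, not something from the comment), here is roughly what "a sensible OS abstraction layer" could look like, so that a port means adding one adapter instead of touching every call site:

```python
# Hypothetical sketch of a thin OS abstraction layer, as opposed to
# find-and-replacing "Windows XP" with "Windows 7" throughout the code.
from abc import ABC, abstractmethod

class Platform(ABC):
    @abstractmethod
    def app_data_dir(self) -> str: ...
    @abstractmethod
    def default_font(self) -> str: ...

class WindowsXPPlatform(Platform):
    def app_data_dir(self) -> str:
        return r"C:\Documents and Settings\All Users\Application Data"
    def default_font(self) -> str:
        return "Tahoma"

class Windows7Platform(Platform):
    def app_data_dir(self) -> str:
        return r"C:\ProgramData"
    def default_font(self) -> str:
        return "Segoe UI"

def make_platform(os_name: str) -> Platform:
    # The rest of the code base only talks to Platform, so porting to a new
    # OS means adding one class here instead of editing every call site.
    return {"xp": WindowsXPPlatform, "win7": Windows7Platform}[os_name]()
```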
> I understand this approach, and it works many times. But when the porting is delayed until the last possible minute, it's more likely that hacks are put in because the requirement turned into a hard deadline. Instead of defining a sensible OS abstraction layer, the developers might find and replace "Windows XP" with "Windows 7".
Which may well be the right choice for the business at that point.
More generally, it's not like doing it now makes it faster than doing it later: you have to put the same amount of total work in either way. In fact you have to put more work in if you do it now, because you will have to ensure that any subsequent changes don't break the porting. If you have other tasks that are a higher priority than porting, you should do them first, almost by definition. Of course if the port is your highest priority then that is what you should be working on (again almost by definition). Porting should be left as late as possible, but no later.
> Which may well be the right choice for the business at that point.
I think it may be important to raise a few points for consideration here:
- what may be a good choice for the business short-term can also be a very bad choice for the business long-term
- many - personally, I believe most - businesses don't care about the products they make or the services they provide; they care about the money they can make via those products/services; ergo, the quality doesn't matter beyond the point at which the customer has paid what they were expected to pay
I don't bring this up as criticism, but only to point out that there are two completely different worldviews competing here. One, shared by many developers, is that the product is what matters. The other, shared by the "business types", is that the profit generated by the product is what matters.
I also feel that part of one becoming a professional developer is a shift from thinking about quality of work to thinking about its money-making potential. Which I personally consider a poison to the mind, and it makes me hate working in companies. But those are just my personal feelings.
You misunderstand. If more than a small minority of your code cares what OS it's running on (beyond the obvious exceptions), you have a large amount of technical debt. Cleaning that up, either by refactoring or replacing the offending code, is in the business's best interest because it keeps costs down and keeps more options available to the business.
The 'business people' are not qualified by themselves to say that the cost-benefit analysis of the refactoring is worth the effort. Because while they may have a good idea of the customer benefits, they generally have little insight into what the future costs of technical debt are. You see this obviously demonstrated by 'business people' who are shocked that there is work to do on five-year-old code that has been working fine for years. Any reasonably experienced developer, and many junior developers besides, can tell you that code rots if it's not maintained properly.
> Cleaning that up, either by refactoring or replacing the offending code, is in the business's best interest because it keeps costs down and keeps more options available to the business.
IME that's very rarely the highest-value thing you can be doing for the business. It's not worth paying for flexibility that you're never going to use, and it's not like it's going to take longer to refactor later than it would to do it now.
> The 'business people' are not qualified by themselves to say that the cost-benefit analysis of the refactoring is worth the effort.
Yes they are. Cost-benefit analysis is their job. It's your job to give them an accurate picture of the costs and the benefits.
> Because while they may have a good idea of the customer benefits, they generally have little insight into what the future costs of technical debt are. You see this obviously demonstrated by 'business people' who are shocked that there is work to do on five-year-old code that has been working fine for years. Any reasonably experienced developer, any many junior developers besides, can tell you that code rots if it's not maintained properly.
Yes and no. If you don't need to make changes to a given system then it's fine for it to "rot". Presumably there is a business reason they want to make changes, in which case bringing the code up to a point where you can make those changes is part of the cost of that business-level deliverable.
"Yes and no. If you don't need to make changes to a given system then it's fine for it to "rot". Presumably there is a business reason they want to make changes, in which case bringing the code up to a point where you can make those changes is part of the cost of that business-level deliverable."
Except at that point, there's likely a hard (usually arbitrary) deadline, meaning you don't have the time. So you end up hacking stuff up again, and the tech debt doesn't get addressed. Your best people end up getting frustrated that they're not being listened to, and that they constantly have to explain this stuff to the business people, and eventually they leave.
> it's not like it's going to take longer to refactor later than it would to do it now.
It does, when you have to build on top of something you will have to refactor later. I think the miscommunication here is happening because the person you are replying to is assuming that case, and you are assuming the case where the code to be refactored is isolated from the rest of the system.
"More generally, it's not like doing it now makes it faster than doing it later"
No, but "faster" isn't necessarily the point. Doing it correctly leads to it being "faster" because you don't have to constantly go back and fix bugs because you used ugly hacks to get stuff done in the short timeframe you had because the company decided to wait until the last minute.
In a non-broken organization, technical leads (or equivalent) have the same level of authority to guide development as feature-driven personnel, and the necessity of features vs. quality can be triaged in broad daylight.
If the organization is not sane, that's another thing of course (and often the case, sadly).
True, but don't forget that non-broken organizations are very rare :-)
Do you know of one? I've been looking. For 25 years.
All organizations are broken, but some are more broken than others.
There are several in the Helsinki area that I'm aware of that have been, at some point at least, sane in this regard (and many broken ones as well, of course).
This is a typical way that agile and related approaches become a barrier between implementers and management.
The fault is almost all on the management side because, ipso facto, they're the ones making the decisions that drive this outcome.
In the real world, there are outcomes that are much worse than this. But this does have the effect of neutralizing the value of scrum: Management gets what implementers decide they get, and scrum is just the way middle management is told the story that they pass up the chain.
TBH this is what I've always assumed it should be from day one with Scrum, after training with Ken Schwaber etc. The backlog is not a Todo list.
It's the only way that seemed sane without twisting user stories into weird refactoring or architecture stories and keeping the team from writing garbage code while waiting for these "tech stories" to clean it up.
Why is management getting involved?
The PO should try to balance this with feature implementation, and the team + SM should make it clear when it's needed. This seems to work fine in an environment based on collaboration between PO, SM and dev team, but will probably fail in an adversarial one.
I have rarely seen a PO as the single contact to the "business". Most of the time there are multiple project managers, product managers and line managers who have a say and put pressure on the dev team.
Because managers have to report to other managers and they don't want to report that the devs are working on something that already has been implemented and "works".
Indeed. That's sometimes the case with other stuff like playability too. I've added simple animations which delighted users from time to time.
If I had asked beforehand to invest time for this, the response would be "let's postpone this (forever)".
Can't do that for deep refactoring. You'll need QA involved and the boss will never sign off on that because, well, he's a boss not a proper manager.
I don't like scrum myself, but let me try to defend it, because I think the problem for most people who don't have a good experience with scrum is a misunderstanding of the underlying principles behind it.
Let's just remember that Scrum is just a tool that is trying to replicate some practices of the Toyota Production System (TPS), and an important principle of the TPS is continuous improvement, which Scrum has through retrospectives. Another principle is quality built in.
Now, how to avoid having a rat's nest after 2 years?
First of all, you have to remove velocity as a goal for your team. It's easy to game, and it de-incentivizes finding a _predictable_ velocity, which is the goal of that tool.
The goal of velocity is to know the "cost" of building a healthy system.
i.e.: Your team ships an arbitrary 10 points per sprint by doing what they think is right. The same team could ship an arbitrary 20 points per sprint. 10 is the _true_ cost of building a healthy system. There is not a lot more to do for a good team, honestly, at that point. Management can't say anything, because you bring them predictability, and that's mainly what they want. They might think you're slow, but hey, all of this is relative.
Suddenly, what does that mean when your team only ships 5 arbitrary points? That they had to push extra. That there was an unexpected problem. That things were on fire and you had to stop working on features entirely.
Basically, that there is something to retro on, find the root cause and anticipate for next sprints, and use to go back to the healthy level of points.
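As a rough sketch of how that signal could be tracked (the window, threshold, and numbers below are arbitrary assumptions, not any Scrum prescription):

```python
# Sketch: flag sprints whose completed points fall well below the team's
# recent norm, as a prompt for the retrospective. Numbers are made up.
from statistics import mean, pstdev

def sprints_to_investigate(history: list[int], window: int = 6, k: float = 1.5) -> list[int]:
    """Return indices of sprints that dipped more than k standard deviations
    below the mean of the preceding `window` sprints."""
    flagged = []
    for i in range(window, len(history)):
        recent = history[i - window:i]
        mu, sigma = mean(recent), pstdev(recent)
        if sigma and history[i] < mu - k * sigma:
            flagged.append(i)
    return flagged

velocities = [10, 9, 11, 10, 10, 9, 5, 10]   # completed points per sprint
print(sprints_to_investigate(velocities))     # -> [6]  (the 5-point sprint)
```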
I used this at several places. It demands trust, but it always pays off.
With some teams I worked with, we set up a technical prioritization planning meeting every week, where we basically decided what was going to be prioritized for that week. Each team was picking 2-3 things to do on top of the features.
You don't tell anyone. You do it.
I think the problem for most people who don't have a good experience with scrum is a misunderstanding of the underlying principles behind it
Unfortunately this has become a cliché whenever someone criticizes scrum, Agile in general, etc.: it's never the fault of the idea/process, it's the fault of the manager/team for not understanding it properly or not doing it right.
Even if these philosophies and processes are so amazingly great when properly understood and implemented, the fact that hardly anyone seems to be able to properly understand and implement them would be a fatal flaw.
Agreed, that's why I specified I don't really like Scrum. Mainly because it's hard to actually implement without all the flaws if you don't try to understand the underlying principles. Unfortunately, Scrum books and coaches rarely go through these.
> Mainly because it's hard to actually implement without all the flaws if you don't try to understand the underlying principles. Unfortunately, Scrum books and coaches rarely go through these.
If you are implementing Scrum, I don't see how you do it without the Scrum Guide (which defines Scrum), which is both quite short and does, in fact, go through the underlying principles.
> it's never the fault of the idea/process, it's the fault of the manager/team for not understanding it properly or not doing it right.
Well, almost every time I see Scrum criticized, the described process is not Scrum and merely cribs ideas from it without understanding the purpose or how things inter-relate.
So it's a fair criticism.
> Suddenly, what does that mean when your team only ships 5 arbitrary points?
In my experience, it means some developers failed to game the system that particular time.
But you can be sure there were many other problematic moments in previous sprints where the developer could hack together an ugly mess of code to avoid the shame of failing in front of everyone in a meeting and dragging the team's points down.
Because the unwritten thing about points is that they are used to shame people. In public. Sometimes not explicitly, but the feeling is there. It's always there.
Yes.
That's why, if you have to work with and/or manage a scrum team, making velocity not the focus of the sprint is step 1 for a sane process. We count points, but it's not a contract. We try to reach the points, but it's not a contract.
In my parent comment:
_First of all, you have to remove velocity as a goal for your team. It's easy to game, and it de-incentivizes finding a _predictable_ velocity, which is the goal of that tool._
Scrum really is shame driven development in my experience
Your team ships an arbitrary 10 points per sprint by doing what they think is right. The same team could ship an arbitrary 20 points per sprint. 10 is the _true_ cost of building a healthy system. There is not a lot more to do for a good team, honestly, at that point. Management can't say anything
This sort of thing is only useful for managers who don't understand what is going on. Since they are unable to look at code and understand it, 'points' give them a semi-opaque substitute.
If management trusts the team, it is not necessary.
I guess if you understand what is going on as a manager, then you might prefer the 10 points team?
I know I do.
If you understand what is going on, you will get rid of points altogether, and look at the git log from time to time.
If you are a leader, and not just a manager, you will help your programmers improve their skill, so over time you spend less and less time managing them. Your programmers will appreciate it because with greater self-management comes greater happiness and job satisfaction.
The points are (also) an estimating mechanism. They let you estimate a new task in points (effort measurement) instead of hours (time measurement), and then use history to predict a likely timeframe based on your typical rate.
The extra layer of indirection helps to account for uncertainties in the task, imprecision in the estimate, and chaos (in the scientific sense) in how long individual tasks take relative to aggregated historical metrics.
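A minimal sketch of that indirection, assuming all you have is a backlog sized in points and the team's own velocity history (all numbers invented):

```python
# Sketch of the "extra layer of indirection": estimate work in points, then
# translate points into a time range using the team's own velocity history
# rather than anyone's gut feel about hours. Numbers are invented.
def sprints_needed(backlog_points: int, velocity_history: list[int]) -> tuple[float, float]:
    """Optimistic and pessimistic sprint counts from best/worst observed velocity."""
    best, worst = max(velocity_history), min(velocity_history)
    return backlog_points / best, backlog_points / worst

history = [12, 8, 10, 9, 11]        # completed points per past sprint
print(sprints_needed(60, history))  # -> (5.0, 7.5) sprints
```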
That's surely worth something, but it's easier to think of estimates as a 90% outer bound: "I am 90% sure that the task will be completed by date X." Uncertainty can be incorporated, and you don't need to learn a new conversion factor from time to points.
That doesn't work when you can't predict what other tasks might interrupt you between now and date X. That's why estimating in units of effort is much more reliable than units of time.
The drawback to units of effort is that estimates eventually bubble up to non-technical people who don't discern the difference between effort and time, and for most projects there are going to be some time-based constraints on the schedule.
That doesn't work when you can't predict what other tasks might interrupt you between now and date X
You can still take a probabilistic approach to tasks that might interrupt you.
Totally. Some teams want the points though, so I let them have it. But I just set the principles I explained above so the points have an actual meaning.
I leave my current role in just under two weeks for the very reason you describe. I'm tired of being the "difficult dev" that everyone has grown to hate. Let them launch. Let it fail. Let's see who the arsehole is then.
You, because the moment you leave you become that-guy-who-we-can-safely-blame.
Mostly joking but plenty of places do work like that.
Yes, they'll blame him. It won't make sense for the most part, but it'll still happen. Probably largely things like "if only tomelders had worked with us more instead of being 'difficult', we wouldn't be in this mess", even if the 'mess' is precisely what he was advocating people work on before he left...
I will always blame whoever most recently left for everything. The rain, the chair being broken, the mouldy cream cheese in the fridge.
No jokes. It's Brad's fault.
It will indeed be his fault. He set them up for failure and left, they'll say.
Isn't there a name for this pattern?
I swear I remember reading a piece on how having a team member leave can be good for cohesion and efficiency, because everyone is free to vent about old, bad decisions without targeting anyone still at the company. I think it was part of a story about a team clearing their technical debt by convincing a reluctant manager that they needed to undo the last guy's mistakes.
"Blame Canada". PM: "We can't blame Canada for missing our targets 4 sprints in a row."
And in a way it's saner than if you blamed over and over some people who still work 5m from you, even if they deserve it; and we all have stories about that kind of people.
You often need to be more subtle with them (x) than blatantly calling out their bullshit every time...
(x): I mean to try to influence them so they progress and your life gets better, not to shit on them
It's a long time since I've had to deal with "strict" Scrummers. But I do remember being utterly baffled as to the insistence on user features being ready by the end of a sprint, even for quite complex, technical components.
Why can't we make some sub-component this sprint then the UI bit the next?
I tried various ways of reframing it such that the developer of the UI be the "user" but it didn't wash.
Because that two weeks you spent writing excellent code is mostly useless unless you have a way to get feedback on it. As a customer, you've given me no value.
What if you finally hook up UI to the sub-component and the customer/stakeholder decides they don't like any of it? You could have known about it earlier.
The problem with this is that it imposes a membrane on the process that is only permeable to work that contains a shippable piece of user-facing software. It presupposes that all work can be divided into such pieces. And since that's not true, it leads to contorted stories to squeeze necessary work through that membrane.
I agree that sprint lengths can be arbitrary and counter-productive to developing good software.
I think it's important to understand that the sprint "membrane" is not designed to help developers. It's designed as a compromise between developers and their managers.
Developers want to work uninterrupted and perfect their work before releasing it, and managers want working software quickly and like to interrupt developers and change courses frequently. Sprints are an attempt to find a middle ground, but it's not always ideal.
> It presupposes that all work can be divided into such pieces.
No, it states the reasonable conclusion that all valuable features can be broken down into such pieces (because otherwise it adds no value to the product).
There are also Scrum tasks around Research, Tech Debt, etc. that are perfectly fine to create and work on, but they have an effect on your overall time to build new features and that's ok: it needs to be done, so it'll get prioritized accordingly.
You italicized "valuable features" and I understand why, but I think the trouble spot is actually the word "all"; your parenthetical about adding no value merely begs the question. I'd argue that, indeed, some valuable features can be organically broken down into two-week user-facing deliverables, but certainly not all and probably not even most. There's nothing magical about being able to partition a deliverable into a sprint's time frame that, by such distinction alone, makes it valuable versus valueless.
Systems like scrum try to wrangle into a manageable bolus a process that -- if we're to make good software -- necessarily includes creativity, inspiration, and the traveling of paths yet unseen. It's like writing a novel and having two-week deliverables like "complete the arc of the Alice character", versus "write approximately 100 pages". It's not valueless to write 100 pages, despite not finishing the Alice section, and perhaps specifically because we discover that Alice's emerging story turns out to intersect perfectly with what we want to do with the Bob arc later on.
So there's writing and engineering and lines of code versus plotting and architecture and inspiration. We need all of it, right? Does everything that's not a "valuable feature" have to be shunted into the cul de sac of a research spike, doomed to be frowned upon by the management for whom the system otherwise provides the sheen of predictable velocity? Does the critical work of "dreaming" of what to do at both macro and micro levels become a casualty of the banal necessity of marching equal-sized boluses through the development tract?
I'm making a florid point, but to me this feels like the essential tension.
> but certainly not all and probably not even most.
I've yet to find any that can't, and I've yet to hear of any either. Usually that's a problem with the people trying to break things down not being used to modeling things differently - not the process. It's pretty common.
I'd love to hear some examples tbh.
> There's nothing magical about being able to partition a deliverable into a sprint's time frame that, by such distinction alone, makes it valuable versus valueless.
Of course not! The point is not that the time-boxing makes it valuable, I hope I didn't imply that; it's that all features that have value should be specific enough to be broken down. If something is too vague, it's not a valuable feature, because at that point it's just pie-in-the-sky spitballing. That's what requirements and grooming are for - to identify what needs to be broken down, expanded on, or specced out better by the Product Owner.
> It's like writing a novel and having two-week deliverables like "complete the arc of the Alice character", versus "write approximately 100 pages".
That would be awful Scrum process. Also, the Product Owner is the Sprint Team in a novel, but let's pretend they are two separate people for this: here's how that should work in a Scrum system...
- Story: "complete the arc of the Alice character"
- Feedback: Too vague. What IS the arc? What perspective should it be from? Lots of questions, needs to be broken down.
Next, after the product owner has worked out that we're going to hit these beats in a three-act structure:
- Story: "complete Alice's arc for Act One, with exposition introducing her and the other characters, ending on the inciting incident for the main plot"
- Feedback: Better, but this is really 3 things - the introduction of Alice, the intro of the other characters, and the inciting incident. You should split those up.
Next, the PO has split this up into those three Stories and presents the first one...
- Story: "We need to introduce Alice so the audience can start to get to know her."
- Feedback: Great, this seems low complexity and simple. We have requirements for her backstory? Ok, great, let's work with that. Seems like a Complexity 2 Story.
Then, when Sprint Planning...
- Scrum Master (not PO): "Ok so we were planning on getting this Alice introduction done this sprint, we still ok with that?"
- Everyone: Yup!
- Scrum Master: "Ok, let's break this down into tasks of things we want to cover then."
And then that part of the story gets written. Obviously it's an odd metaphor, but that's kind of how many (not all, at all) professional authors break down a lot of their writing process anyhow. Some are more freeform of course, but many plan a lot too.
The point is - that just because you have a planning step doesn't remove the creative process, it helps you plan better for work. That's all.
Scrum doesn't magically change how you code or what you code, it's just a planning and change management tool that emphasizes incremental steps.
That's why I sometimes say at my workplace that we should make our projects in Flash. Faster to make an interactive UI this way and get the approval of the management/customer, no time wasted on useless things like having the program actually work in an efficient, useful and secure way.
I knew some folks in the 90's that prototyped in Shockwave. They did UI and animated use cases. I was impressed by how quickly people got their blob-type-people animated use cases done - they were basically little cartoons. Seemed like way too much work, but I guess it worked for them.
I've learned the hard way just how effective such prototyping tools can be if you only care about... prototypes. Or the visual stuff. Seeing a designer whip up a running example in 15 minutes in Construct2 that was equivalent to what me and two of my friends had spent the last 8 hours coding taught me to respect those tools, at least in particular use cases.
And my point is, if we're focusing only on short-term client-recognizable value, we may as well just make shiny prototypes. Who cares about the pesky internals anyway.
To be fair to them, they were hella good C++ programmers and used Shockwave to prototype and lock down the business requirements. You can do an amazing amount of specification like that. It was basically CRC cards put to animation as people and things interacting.
They provided quite a bit of short and long term value by being better at giving clients an understanding of what they were actually getting. They cared about the internals and making sure their clients understood the logic the internals would use.
If I was skilled in some animation tool like they were I would do the same.
...Flash?
Yeah, that thingie in which you draw stuff, sprinkle it with a script or two, and with a few clicks you get something that can be run or embedded in a webpage. The thing HTML 5 supposedly replaced.
> As a customer, you've given me no value.
Erm. But it's you who is the customer in this scenario, not him? Or am I missing something?
He's saying that eventually when your feature surfaces as a UI the customer may not like it.
To which I have 2 responses:
1. not all features need a UI to be useful
2. this also demonstrates the infantilising nature of scrum, where no developer can be trusted to think deeply, talk to stakeholders and otherwise do the right thing in a fully-rounded way, but must just follow the exact instructions expressed
I mentioned UI because of the parent comment (about hooking UI up to a sub-component).
The core idea I was trying to get across is that until a feature is working and in front of a customer (or stakeholder), it's essentially in limbo because you don't know if you've built what they wanted. Maybe there was a miscommunication, maybe they find the feature confusing, maybe they've changed their mind. The goal is to get feedback as soon as reasonably possible.
eg: A stakeholder (or customer) requests feature X, and everyone agrees it's a good idea and we should work on it right away. The dev team could spend 2-4 weeks writing excellent behind-the-scenes code that's not hooked up to anything, or you could spend 2-4 weeks on holiday. Either way, you've given the stakeholder the same thing: no new feature.
If you're confident that you know exactly what it is you want to build then you don't need Agile, scrum or sprints. Scrum isn't supposed to be waterfall with arbitrary reviews every 2 weeks.
But he said:
> As a customer, you've given me no value.
Meaning:
> You, being a customer, have given me no value.
This is what I don't understand.
He said
> As a customer, you've given me no value.
Meaning:
> [From my hypothetical perspective] as a customer, you've given me no value.
Or more concisely:
> [Speaking] as a customer...
I'm not sure this is how the grammar works...
Customer is a poor term. It should be stakeholders. Sometimes the stakeholders aren't "customers" per se, they could be other parts of the organisation for example. I'd consider my CTO a stakeholder and I'm pretty sure he's interested in the value of fixes for a security audit or working database backups even though those don't necessarily have a nice demonstrable UI.
> I tried various ways of reframing it such that the developer of the UI be the "user" but it didn't wash.
Are you surprised? The developer isn't the user of the UI, product wise, so no wonder it didn't get very far.
The trick in this case is to break the story down. So the original story isn't do-able in one sprint? Ok, so what is actually the MVP of that Story? What's the first block that builds the overall feature? Take a 5 point Story and make it three 2 point Stories or something. That's what the refinement step is for in Scrum.
Unless you're doing abnormally short sprints there should be some part of that Story that can be abstracted into a smaller Story that fits into a sprint.
> Why can't we make some sub-component this sprint then the UI bit the next?
Because that's how you get bad UI. The user-facing design needs to drive the API interface, not the other way around.
I disagree. It varies from situation to situation but I would argue in my domain at least (healthcare) this is how you get bad data models
"The user-facing design needs to drive the API interface, not the other way around."
I don't see how this need would nullify the ability to modularize code.
Modularize by functionality, not by layer. Start with a simple end-to-end path and grow outwards, rather than trying to go top-down or bottom-up, and don't split into distinct layers until you're actually deriving value from doing so. Writing code when you don't have the use case yet is always a bad idea.
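A toy sketch of what "start with a simple end-to-end path" can mean in practice - one vertical slice with no separate layers yet; the feature, names, and schema are made up for illustration:

```python
# Sketch of a first vertical slice: validate, persist, and return what the
# UI needs in one place, and only extract service/repository layers later,
# when a second caller actually needs them. Everything here is hypothetical.
import sqlite3

def register_user(db: sqlite3.Connection, email: str) -> dict:
    """End-to-end path for one feature; split into layers only when it pays off."""
    if "@" not in email:
        raise ValueError("invalid email")
    db.execute("INSERT INTO users (email) VALUES (?)", (email,))
    db.commit()
    return {"email": email, "status": "registered"}

# Usage (in-memory database just for the example)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT)")
print(register_user(conn, "alice@example.com"))
```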
Yep. And then at the third sprint into this path you realize that the way the backend has been built during the first two sprints makes it impossible to deliver some essential or just very desirable feature that nobody took into consideration because it was outside the scope of the first two sprints. Seen it happen multiple times.
That's the opposite of my experience. It's always the extra layer that was added for some planned feature that we never actually implemented in the end that gets in the way.
I disagree. This is how you end up with messy code that needs constant refactoring. Making time to come up with a composable design shouldn't be that difficult. It usually isn't.
All code should be constantly refactored. This way your code gets refactored as the design changes (which... it will) rather than having to shoehorn an architecture that worked great originally into a set or requirements that have moved on.
Not true. If you have the design ready before the implementation starts (as you should, unless it is very simple) then you are free to implement things separately. Just have the interface well specified.
Sounds like you've never worked in a shop where they think "agile" == "no need to plan ahead"
This is sad, but my point still holds. Perhaps being honest with oneself about the real tasks that need to be done will help. Planning the architecture/feature, doing a proof of concept, etc. are all valid tasks even if they don't bring immediate business value. If you claim that you can only do business features without doing any exploratory work, then you are effectively claiming to have an oracle giving you perfect solutions out of the blue. In which case you should stop whatever you are trying to do and start selling the services of your oracle.
Speaking from a UI developer perspective, this never works :) But YMMV, I suppose…
We always do a brainstorming session before implementing any serious feature / change. Once we have the initial design, business gets involved (if we can get their attention...). The result usually works without needing any drastic changes, which is still better than no planning at all. It does require involvement from a number of parties though.
Design is better when it's informed by implementation, IME. The only complete specification is working code; if you accept something lower-fidelity then it's very easy to miss ambiguities.
Even code has bugs, so it's not perfect. Your design can incorporate the high-level algorithm to be implemented, ins/outs, or whatever other high-level constraints are most applicable to your domain.
But if you design things ahead of time, then you're not "agile"!
See my other comment regarding oracle: https://news.ycombinator.com/item?id=12249897
In your church, maybe.
In a country where logic reigns, it depends.
Imaginary countries don't count.
;)
Depends what you're building. Which leads us to the worst (meta-)aspect of Agile/Scrum these days, which is that it's the industry's current favourite hammer and so it gets used to bash every single problem. Now, hammers are quite versatile, but you have to know when to bash, when to use the claw, when to lever or just nudge things instead of swinging.
The moment you start getting dogmatic about your process is the moment your decisions start being driven by something other than your actual needs at hand. And that's the moment when you start to produce a bad product.
Author of the post here, thanks for your comment. As you said, the fact that you have to stick your neck out and fight for code quality is very annoying, and after some time one just stops doing it and starts going along with "add one more line of crap to get things working and collect the points".
"Lean" is supposed to address quality and habitability more. 2 of the 7 tenets of lean focus on it:
Build Quality In
Optimize the Whole
I've never practiced it and my only exposure has been reading some of the book "Lean Architecture". But it certainly seems like an improvement over Scrum which seems blind to some of the most critical things in developing complex innovative systems. In fact I'm convinced that Scrum was designed to work with basic information systems where a feature consists of essentially adding a new data entry form or report for a database (e.g. a lot of web stuff)
Remember that "Lean" (SCRUM is really a subset of those practices) is at it's core a manufacturing process. Manufacturing isn't engineering. The author of the original post captured this very well:
> ...one can claim that this is not the job of Scrum, which is a software management methodology, and not a software engineering methodology, that it's only concerned with organizing the teams' time and workload
A manufacturing process can only drive quality if both the primitives and end system have gone through some sort of engineering process. Web developers love this stuff because they have components, frameworks and clients where the engineering details are taken care of in MANY scenarios.
Using a contrived scenario involving LEGO -- you cannot expect kids and parents to design new LEGO parts. The components are designed and engineered to work together in a completely different context. Carrying this further, if you want kids to assemble a consistent, specific artifact (Say a Star Wars X-Wing fighter), someone at LEGO needs to design that and produce documentation. Designing on the fly with sprints ("ok, kids, today figure out how to make a wing") isn't going to be a productive exercise.
Most SCRUM projects that I have personally seen fail are projects where "agile" is a codeword for "we didn't think this through, so we'll figure it out as we go". Then the "sprints" become a real joke as the team repeatedly runs into a wall.
That book appears to be very interesting, I'll definitely have a look. Thanks for the tip.
I feel much the same way as much of what you've said in this post. I'm curious what you think of things like the #NoEstimates camp that removes the ambiguity and "gut check" nature of things like points or even time estimates?
I read the first chapter of the No Estimates book, and I have to admit that I found it weak on arguments and poorly written. I think there is real value in doing software estimations if you take them seriously, even using story points, but there must be better ways of using those estimates. One really interesting way of doing this is the Monte Carlo estimate method I linked to in the blog post. Would love to try that sometime and see how it works out.
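For anyone curious, a rough sketch of the Monte Carlo idea - this is a toy version under my own assumptions, not necessarily the exact method linked in the post: resample historical sprint velocities to simulate completion times, then read off a confidence level.

```python
# Rough sketch of Monte Carlo forecasting from story points: repeatedly
# sample past sprint velocities to simulate how many sprints a backlog
# might take, then report cumulative probabilities. Numbers are invented.
import random

def forecast_sprints(backlog_points: int, velocity_history: list[int],
                     simulations: int = 10_000) -> dict[int, float]:
    """Map 'number of sprints' -> probability of finishing within that many."""
    outcomes = []
    for _ in range(simulations):
        remaining, sprints = backlog_points, 0
        while remaining > 0:
            remaining -= random.choice(velocity_history)  # resample history
            sprints += 1
        outcomes.append(sprints)
    return {n: sum(o <= n for o in outcomes) / simulations
            for n in sorted(set(outcomes))}

history = [12, 8, 10, 6, 11, 9]   # completed points per past sprint
for sprints, prob in forecast_sprints(80, history).items():
    print(f"{prob:5.0%} chance of finishing within {sprints} sprints")
```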
The notion that the business can make decisions in the presence of uncertainty is the basis of No Estimates. As a participant in another accelerator, our funding sources would find that laughable at best and toss us out of the cohort at worst.
This sounds like more of an issue with technical debt management. I'm much more of a proponent of Kanban, whenever possible, so in this case if a user story requires a refactor or architectural change then that's just fine. Also regular code reviews to eliminate hacking and sanity checking automated tests shouldn't be made optional, they should be actively encouraged by the team leads and dev manager.
Those clear cases are not the problem, since they are... clear. There is a lot of need for refactoring that arises only slowly and not through any one feature. It's more like the boiling frog thing (which, by the way, is a bogus story, but I still use it because everybody understands the point). So a problem is that you can never really justify the refactoring effort using one specific new feature. It's like partial vs. full cost accounting: sometimes you just have things you can't break down but that still need to be done (paid). Which makes this a real problem for organizations that don't have the accounting set up for it. In my experience, especially large firms may have teams that get their budget "per feature", so even the best manager can't solve that problem - and the next (management) layer above doesn't care, because your little software team is too small to get them to laboriously change the accounting process and the SAP system.
> if a user story requires a refactor or architectural change then that's just fine
Kanban is the software development equivalent of the Concentration (Memory Match) card game.
https://en.wikipedia.org/wiki/Concentration_(game)
As though a useful, usable product design (architecture) could be divined thru piecemeal revelation.
One amendment I always wanted to make was to have a "technical debt dial". You could stick it on the wall next to the scrum board.
The debt dial should be from 0% (spend all the time refactoring) to 100% (spend zero time refactoring, get it out at all costs).
Management should have complete control over the dial. Developers should have control over what kind of refactoring they do (ideally the retro should have a question "what debt/tooling issues caused you the most pain this sprint?" and there should be a parallel track for debt/tooling stories).
I often have the impression that the term "technical debt" is just a euphemism to avoid admitting that someone in the team has produced poorly thought-out and poorly written code.
I'm not sure I ever found myself in the position of writing bad code just for the sake of speed - I surely wrote tons of bad code because I didn't know how to do it better, or because I didn't have the requirements clear from the start, or because of bad design and planning. But to get a feature out quicker, no. If anything, it seems to me that writing bad code requires more time than writing clean and elegant code.
In the end, "technical debt" becomes a way to shift the blame from your own (the team's) inadequacy at planning, designing and developing, towards supposed time constraints that always lie outside of the team's responsibility.
I have to disagree with this comment. Technical debt is sometimes the result of just one lone cowboy coder, but even then there's some responsibility across the team because that means his or her code passed all reviews, i.e., nobody took ownership for the overall team's code quality and vetoed the bad code.
Time constraints can certainly be relevant too. It's not always an artificial shift of blame. For instance, time constraints could be why the code passed review to begin with.
And some programmers certainly do write worse code when they have to do it quickly. Typically, the code itself doesn't look all that bad in a vacuum, but it presents problems months down the line when a new feature needs to be added or an existing one changed in a non-trivial way. There was little forethought in its design.
Or, let's just look at the ways you said you might write bad code:
> I surely wrote tons of bad code because I didn't know how to do it better, or because I didn't have the requirements clear from the start, or because of bad design and planning
In other words, these things can cause you to write bad code:
1. You just didn't know better.
2. Unclear requirements at the start.
3. Bad design and planning.
In (1), you might realize after writing some code that you didn't quite know what you were doing and you should refactor it, but you're now under pressure from management to just get it out. Oops, no time to refactor. Now code that you know is bad is going into production, and it'll bite you in six months.
For (2), the reason the requirements weren't clear is because not enough time was spent by management/product owners/designers on clarifying said requirements.
For (3), it's the same thing -- bad planning is often the result of time constraints (notably, time constraints which may not be visible to you as a rank and file programmer).
but it presents problems months down the line when a new feature needs to be added or an existing one changed in a non-trivial way.
I think I have yet to see any software to which this doesn't apply sooner or later. While the whole point of "technical debt" is that it is something you're supposed to knowingly acquire because of time constraints. You're basically saying "we knew it was wrong, but they forced us to do it that way". While to me, most of the times, the truth is that you really didn't know. Yes, you coded in a hurry, but there is always a time constraint of some kind so that's no excuse. Somebody else in the same time would have done a better job.
As for my points: if I realize I didn't know what I was doing, I always refactor the code. Committing code that you know to be conceptually wrong is just sloppy. And if I am under pressure from management, it is because I spent time developing without understanding what I was doing; a better developer would have gotten it right on the first try, and there would be no technical debt.
If the managers/ product owners didn't produce clear requirements, it's not a technical debt, it's a sloppy job on their part.
If the planning was wrong, that's a sloppy job on the part of who had to do it. Responsibilities should be found and action should be taken. Saying "ah yes you know, we were under pressure so we (kind of naturally) accumulated this technical debt" is just a way to save everybody's face.
> And if I am under pressure from management, it is because I spent time developing without understanding what I was doing; a better developer would have gotten it right on the first try, and there would be no technical debt.
While this reasoning is probably not technically wrong, I'm not sure if it's relevant to the real world.
You can always make an argument of form "there exists a developer who could have got this feature right on the first try, with very little time spent." This simply does not matter when you do not happen to have that developer on your team right at the moment. The nature of the work is that you will work on a variety of things, and you won't necessarily be the best in the world at every individual thing. You're inevitably going to encounter work that's challenging enough for you where you don't get it perfectly right on your first try.
> If the managers/ product owners didn't produce clear requirements, it's not a technical debt, it's a sloppy job on their part.
It's a sloppy job on their part, induced by time pressure, which produces technical debt. I feel like you're just playing with definitions here to avoid admitting that technical debt can come from poorly managed time pressure.
> if I realize I didn't know what I was doing, I always refactor the code.
All that tells me is that you've never been under a lot of time pressure. That's not a bad thing. It most likely means the management at your companies have been competent. But it doesn't mean that technical debt does not exist or cannot be induced from time pressure in other companies.
I guess that what I'm trying to say - and it got clearer to me while replying to other comments in this thread - is that it feels like we're using the term "technical debt" as a way to avoid talking about personal skills or lack thereof, and to avoid admitting our own or other people's faults. Saying that we need one more week of work on something because of technical debt due to exceptional circumstances is one thing; saying that it is because one of my colleagues didn't do his or her job properly is different. Remember that the business might have no clue how long or difficult some tasks are and judge them only by the amount of work required. So a series of bad design decisions and consequent technical debt can give the impression of a very hard task on which everybody is working skillfully, while in fact it's an easy task with somebody that doesn't know how to do his job.
As for your other points: maybe one of my team members produces consistently more technical debt than the others. Is it still technical debt? Managers or designers can be the source of time pressure for those further down the chain, because bad planning or bad decisions, made in the absence of time pressure, can force others to work under pressure. Again, "technical debt" masks the real problem. I might not have worked in extremely high pressure environments, but I've surely worked in teams where we were leaving the office at ten or eleven pm every night for months on end just because demented design decisions had been made by the much respected solution architect.
>As for your other points: maybe one of my team members produces consistently more technical debt than the others. Is it still technical debt?
???
Why wouldn't it be?
Creating lots of extra technical debt, in fact, is a defining feature of poorer developers.
>>I often have the impression that the term "technical debt" is just a euphemism to avoid admitting that someone in the team has produced poorly thought-out and poorly written code
>Creating lots of extra technical debt, in fact, is a defining feature of poorer developers.
I think we agree then. It's just that "technical debt" makes it sound, to me at least, inevitable and impersonal, while it is possible (at least for substantial amounts of it) to ascribe it to specific people and to avoid it by hiring better people.
>And if I am under pressure from management, it is because I spent time developing without understanding what I was doing; a better developer would have gotten it right on the first try, and there would be no technical debt.
This is possibly the most wrongheaded comment you've made.
1) There is no such thing as "no technical debt". It asymptotically trends to zero but never, ever gets there. If you think that you or anybody else is the kind of developer who magically creates debt-free code all the time then you're deluded.
2) The "right first try" argument is wrong. You shouldn't even try to get it right first try - that's the whole point of red/green/refactor. You're supposed to get it working and then clean it up because prematurely 'cleaning up code' is an inefficient way to work.
It's not called red/green/refactor-if-you're-too-shit-to-get-it-right-first-try.
Ok for your first point, although I'd argue that technical debt is something that usually asks to be repaid within months. Two-year-old technical debt is just improvable software; that is, it hasn't yet shown problems serious enough to call for a refactor under unchanged requirements.
As for your second point, what should I do? Try to get it wrong? To get it working how? We're not talking about premature optimization here, we're talking about understanding the requirements, understanding the tools, understanding the big picture, understanding your time constraints, and doing the best job you can.
Simplest kind of technical debt:
1. I use a particular technique/abstraction/whatever to solve the problem, it solves the problem well.
2. Over time we solve other problems elsewhere. As we go, the bigger picture becomes clearer and we pick more suitable techniques/abstractions/whatevers as we go.
3. Eventually we have to solve a problem that interacts with the original problem, and the newer techniques/abstractions/whatevers don't work cleanly with the original one. So we have to make a choice:
a) Hack something together that solves the current problem without us having to touch much of the older code.
b) Rewrite the old code to match the newer technique/abstractions/whatevers.
c) Put down tools and thoroughly evaluate whether there's an even better technique/abstraction/whatever that solves the old problems and the new ones.
We all know that (c) would give us the best code, but it's also likely to mean we never get anything done, because every new problem means reevaluating everything. (b) happens more often, but in reality we usually end up doing (a) due to various pressures.
At no point has bad code been written, but there's technical debt nonetheless.
Over time we become better at adopting patterns and architectures that allow for clearly defined boundaries and a reduced cost of making mistakes. You still get technical debt (because just about anything you want to change can be considered technical debt), but it doesn't tend to cripple your ability to get things done.
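To make the option (a) case concrete, here's a minimal Python sketch (the order/total names are hypothetical, not from any real codebase): the old code solved its problem fine with plain dicts, the newer code settled on typed models, and the "hack" is a thin adapter at the seam. No line of it is bad code, but every new feature crossing that boundary now pays a small conversion tax.

    from dataclasses import dataclass

    # Older code: solved its problem fine at the time using plain dicts.
    def legacy_total(order: dict) -> float:
        return sum(item["price"] * item["qty"] for item in order["items"])

    # Newer code has since settled on typed models.
    @dataclass
    class LineItem:
        price: float
        qty: int

    @dataclass
    class Order:
        items: list

    # Option (a): a thin adapter so new code can call old code without touching it.
    def total(order: Order) -> float:
        legacy_shape = {"items": [{"price": i.price, "qty": i.qty} for i in order.items]}
        return legacy_total(legacy_shape)

    print(total(Order(items=[LineItem(price=9.99, qty=2)])))  # 19.98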
>Ok for your first point, although I'd argue that technical debt is something that usually asks to be repaid within months. Two-year-old technical debt is just improvable software; that is, it hasn't yet shown problems serious enough to call for a refactor under unchanged requirements.
I've worked on five year old technical debt. It meant that bugs were far more common and fixes/new features took 10-15x as much effort as they would have otherwise.
It wasn't that it didn't 'call' for a refactor - it's that the team didn't respond to the problems by refactoring. They tried the following instead: heavy manual regression testing before release (once in two years), waterfalling, longer and longer feature/code freezes, keeping multiple branches around for different customers.
Managerial response was to hire additional mediocre developers, making the problem worse, but it wasn't as if hiring better developers would have made development immediately quicker and less risky. Paying that debt down to a reasonable level was impossible with mediocre developers and would have taken ~36 months with good developers (also working on bugs/features).
>As for your second point, what should I do? Try to get it wrong? To get it working how?
You should do red->green->refactor.
After writing a failing test, your only priority should be to make the test pass. Not elegant. Just passing. Once it's passing, then make it elegant.
The reasons for this are twofold:
1) You're solving fewer problems at the same time. Something you want to avoid as much as possible as a developer is to have to juggle 40 different competing problems at the same time.
2) Refactoring-driven architectural decisions are ~95% of the time better decisions than those made during up-front design.
>We're not talking about premature optimization here
It's a closely related problem but it's not identical.
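To make the red->green->refactor loop above concrete, here's a tiny sketch (a hypothetical slugify function, pytest-style test): red is the failing test, green is the ugliest thing that passes, and the cleanup only happens once the bar is green.

    import re

    # red: the failing test comes first (pytest will collect this)
    def test_slugify():
        assert slugify("Hello, World!") == "hello-world"

    # green (first pass): the crudest thing that made the test pass
    # def slugify(title):
    #     return title.lower().replace(",", "").replace("!", "").replace(" ", "-")

    # refactor: same behaviour, cleaner shape, done only after the test was green
    def slugify(title):
        return "-".join(re.findall(r"[a-z0-9]+", title.lower()))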
What value comes of characterising it as "inadequacy"? I mean you may enjoy the self-flagellation but does it actually help plan or work effectively?
To my mind the "debt" metaphor captures important intuitions: it accumulates interest, slowly at first but rapidly if you have too much of it, it can look like it isn't a problem until it is, taking on more is often an easy way out of your current situation in the short term.
>I often have the impression that the term "technical debt" is just a euphemism to avoid admitting that someone in the team has produced poorly thought-out and poorly written code.
It's absolutely not that. Technical debt is a natural by-product of working even with the best coders. There's nobody out there who doesn't create it.
Better coders just produce it more slowly and clean it up more often.
>I'm not sure I ever found myself in the position of writing bad code just for the sake of speed
I could literally spend all of my time making code nicer and none at all developing features/fixing bugs. It's always a trade off between speed and quality.
Of course. Some developers (or product owners, architects, managers) generate technical debt more slowly, some others generate it faster. Some generate a substantial amount of it for a task in which others would generate very little, in the same time frame. Then why don't we call it lack of skill? Sounds the same to me.
Because the same developer with the same skills can often ramp up technical debt to get a feature out in half an hour instead of a day, and there aren't any developers who haven't felt the pressure to do exactly that.
Ramping up technical debt isn't always about speed, either. It's sometimes about risk - it's often less risky in the short term to copy and paste a block of code than it is to change a block of code and risk breaking something else.
True. The amount of technical debt produced is a function of the time constraints and the person's skills. In turn, the time constraints can depend on the skills of other people in the organization at planning, designing, figuring out requirements, managing the team and the process, etc. Saying "ah sorry, we'll have to work one more week/month on this because, you know, technical debt" is sweeping all these possible issues under one big carpet.
I don't see why. Technical debt is just a measure of how much crap there is in the code. It doesn't preclude having a discussion about how much that is to do with skills and how much that is to do with pressure/time constraints/existing technical debt.
The point of the dial is just to make the trade off between quality and speed that individual developers are making every day both explicit and management's responsibility.
It means that if the dial is turned up to 100%, management have no excuse for asking the question "why is our product a pile of crap?". It also means that if the dial was at 60% for a year and a half, the developers have no excuse for why the product is still riddled with technical debt. Skills problems are distinguished from time constraints, and managerial pressure comes with a cost attached.
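If you wanted to make the dial literal, a minimal sketch of the bookkeeping might look like this (story points and a management-owned percentage, both hypothetical here):

    def plan_sprint(velocity_points, debt_dial):
        """Split expected sprint capacity between feature work and debt paydown.
        debt_dial is a management-owned value in [0.0, 1.0]."""
        debt_points = round(velocity_points * debt_dial)
        return {"feature_points": velocity_points - debt_points,
                "debt_points": debt_points}

    # e.g. a 40-point team with the dial at 20% reserves 8 points for cleanup
    print(plan_sprint(40, 0.20))  # {'feature_points': 32, 'debt_points': 8}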
I disagree. Even among those who routinely write good code (all of us in our own minds), the schedule tends to pressure you to solve the problem in front of you, the one in the story, without checking whether anyone else has solved similar problems already. What tends to happen is you write your own solution instead of finding the similar solution and generalizing it. Then, after a year, you have a dozen related tasks performed in slightly different ways. If the underlying data structure (for example) changes, you now need to alter each of the dozen different implementations.
This sounds like a bad organization of the team, bad process or bad design. Clearly team members don't communicate enough, or groups of similar features have not been foreseen during the design phase, and the same code has been rewritten again and again by different people. This is not contingent "technical debt", this is a serious problem that needs to be addressed with changes in the process.
Organization of the team and process? You mean the scrum process? Indeed, my point. Of course one can argue that no true scrum process would have such problems...
Yes, totally agree with you. But then, why "technical debt"? No, there is something wrong here and this process needs to be changed. Somebody made the wrong decisions, at some level, and there is a very specific issue to be found, analyzed and solved.
And yes of course, as we all know scrum is by definition successful, and all the teams that fail are not following the true faith.
I see technical debt as being the code you haven't written yet. You just ignore the errors from misaligned data instead of writing the data sanitizer. You monitor and hand-reboot processes that leak instead of finding the leak/automating the reboot. And so on.
I think the "speed" part is that once you realize it is bad code cause you didn't know how to do it better, you don't have the time to scrap it and start over.
When I read this I assumed you meant a dial that represents the current level of technical debt, rather than a dial that represents how much technical debt you're allowed to accrue.
The other interpretation might be a neat solution. Allow developers to indicate publicly what proportion of the time their last tasks actually took they would have taken had there been no technical debt. It would handle the communication from developer into business-speak pretty effectively.
At the very least you need both dials! If you have only the one that management sets, saying how much time to spend refactoring, it will always be at 0. (Okay, they don't literally have to be dials, but there has to be some communication of the kind you describe, so management has some idea how much time is being lost.)
>At the very least you need both dials! If you have only the one that management sets, saying how much time to spend refactoring, it will always be at 0.
Management decisions are usually CYA based, and leaving the dial set at 0 both exposes them and gives developers a get out of jail free card.
i.e. leaving it at zero is suicide. That's the whole point of making it a trackable dial.
Probably what would happen in most cases is it would fluctuate between 30% during average times and 0% during crunch times.
That seems utterly wrong to me.
I'd expect any manager worth his salt to put that at 10% or 20% (basically anywhere NOT 0) and use that number as a reminder to devs that part of the job description includes MAINTAINING the system in proper condition, not just piling new stuff on top of the existing random stuff.
Same as a car needs regular maintenance, a dev project needs regular maintenance.
One question I have is how does one actually objectively measure technical debt? I mean by anything more than a guess or intuition. Does every singleton/global variable count? Does every comment with a HACK count? What other factors contribute to technical debt, and by how much?
I've pondered this problem for a while and I think I narrowed it down to the following:
* Tight coupling (this would include global variables, among many other things)
* Lack of code cohesion
* Code duplication
* Code that doesn't fail fast (e.g. weak typing).
* Variable/class/method naming that is either not sufficiently disambiguated or is wrong.
* Lack of tools to run and debug code
* Lack of test coverage
Whenever I go looking for code to clean up this is what I keep an eye out for.
I'm pretty sure that each of these could be measured empirically somehow (I've read papers to that effect for a few), but we're not quite there yet in terms of tooling, or even in agreement over what technical debt actually is. Give it 5-10 years.
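In the meantime, even a crude counter over a repository gives a team something concrete to argue about. A sketch in Python (the markers and thresholds are arbitrary assumptions, nothing like a real measure):

    import re
    from pathlib import Path

    MARKERS = re.compile(r"\b(TODO|FIXME|HACK|XXX)\b")

    def debt_signals(root="."):
        """Cheap proxies only: marker comments and overall file size.
        None of this proves debt; it just points at places worth a look."""
        report = {}
        for path in Path(root).rglob("*.py"):
            text = path.read_text(errors="ignore")
            markers = len(MARKERS.findall(text))
            lines = text.count("\n") + 1
            if markers or lines > 500:
                report[str(path)] = {"markers": markers, "lines": lines}
        return report

    if __name__ == "__main__":
        for name, stats in sorted(debt_signals(".").items()):
            print(name, stats)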
There is no objective measure. Yesterday's good code is today's technical debt.
But at the same time there is definitely some underlying real quantity. Different developers may not agree on every single aspect, but they'll agree about the difference between a good codebase and a bad one, and it really does take longer to make changes to the bad ones.
So what can you do?
I believe the technical debt could be measured as:
The time it takes to change code that is already written.
For instance, if you want feature N+1, but to do feature N+1 you need to change feature N, then the amount of time you spend changing feature N is the technical debt.
So when you are estimating, you could say "we need to refrob the whozzit to make it compatible with foo 2.0", and that work could be captured as technical debt.
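One low-tech way to capture that at estimation time is to record the rework portion of each story separately, so the "changing feature N" hours are visible rather than buried in the total. A sketch, with hypothetical numbers:

    from dataclasses import dataclass

    @dataclass
    class StoryEstimate:
        name: str
        new_work_hours: float   # building the new thing itself
        rework_hours: float     # changing already-written code to make room for it

        @property
        def debt_interest(self):
            # the N+1 vs N idea: only the rework portion counts as debt being paid
            return self.rework_hours

    story = StoryEstimate("refrob the whozzit for foo 2.0",
                          new_work_hours=6, rework_hours=10)
    print(f"{story.name}: {story.debt_interest}h of the estimate is technical debt")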
I do believe that this N+1 - N captures the idea well.
One issue though: if the requirements have changed, that is not a good metric.
i.e., we did that theme in blue for iPhone; now people want it to be green AND on Windows. It's not necessarily technical debt, it's cancelling everything we've done to make something else entirely (even though it may seem superficially similar).
If the requirements and/or features for N+1 ignore or counter the requirements/features from N, it's not necessarily technical debt; it might be that you've just got no idea what you're coding and you're running in circles.
Agreed. One could argue that at time t+1, the product that meets the market need is the N+1 version. Therefore, even if feature N developed at time t was perfect for that time, it is still technical debt at time t+1. I can think of a lot of technical innovations that were the perfect fit for the market at their time but became technical debt later.
I think the point to take away is that software is just a tool, an artifact, and at the end of the day it just matters that it does what it is "supposed to do". However, civilization is dynamic, so we are constantly seeking to optimize an ever-changing potential function. So perhaps it is not the programmer that is running in circles, perhaps it is the market. So for the manager accounting for the technical debt, they must accept the change as a cost of doing business.
That would mean technical debt depends on what features you want to implement, i.e., how much code has to change to support them. That still makes it rather subjective and hand-wavy.
If the technical debt is in areas that you will very rarely go back into, then it could perhaps be considered low-interest, compared to technical debt in high-churn code paths.
Right, so the crux of the issue is that to understand how much technical debt you have, you must understand what the market wants and what features you need to meet the demand. So I think it is extremely subjective. You could say "our product is perfect, it should never change". In doing so, you have accepted that all processes and information that create the product are also perfect, and therefore your technical debt is zero. You must pay nothing to achieve exactly where you want to be.
You could think of technical debt as the energy lost due to friction. It is the energy lost changing from one point to another in your domain space. Perhaps "technical friction" would be a better concept for software. However, that is even more hand-wavy and harder to measure ;-)
In my experience, you struggle to move that dial from 100%.
In my experience, once it becomes a tracked metric that can be used to destroy a manager's career you can bet the dial will start to move.
Management is just going to leave it at 0% the whole time. Once in a great while, if you're lucky, you might get 5-10%
For most companies static 100% image would do for this dial.
Only if it's untracked. If it's tracked and the responsibility of management it'll go down.
I've been in situations like this, and when I've brought up techdebt issues, I get "YAGNI". I've tried to frame technical debt issues in terms of how it will impact future feature requests, performance, etc, and YAGNI is what normally comes back. Until... they actually NI. Then it's a hair on fire crisis. Fortunately have not been in that type of situation for years, but I know they still happen.
For those that didn't know - "You aren't gonna need it" (acronym: YAGNI) is a principle of extreme programming (XP) that states a programmer should not add functionality until deemed necessary.
Sorry - thanks - was in a bit of a rush and forgot to add that.
"YAGNI" is... in general... not that useful when used by people who've never worked on a particular type of project, because, almost by definition, they don't what what they will and won't need.
And... defining "needs" is its own set of headaches. Needed by whom? I'll tell you what, we need to ensure we have a logging system in place that can alert folks, and we'll need the ability to view logs in production, and share access to those. Years ago I got "YAGNI" back on that, and months later... weird bugs that no one could reproduce, and the minimal logging in place was only accessible by one guy who was out of town for a week. But hey... we got those "rounded corners" to work in IE5 and IE6 with only an extra week of work - yay...
YAGNI is a good principle, but as with most ideas, you get problems when you introduce various types of people in to the mix. Someone who's never done a project type X should not be the one making YAGNI decisions when other people on the team have done multiple "project type X" before, and are trying to introduce basic requirements.
> the stories are always articulated in terms of user facing features
Reminder: the "User" doesn't have to be an external person to the team. It can be internal for tools, or a tech debt task for internal issues like the ones you mention.
Thinking Scrum is just about end/paying User stories is mistake #2 I see when people do Scrum (#1 is the classic "We're going to not actually do Scrum but call it Scrum" that leads to a lot of "Scrum doesn't work" articles itself).
> Because the stories are always articulated in terms of user facing features they encourage developers to hack things together in the most expedient way possible and completely fail to capture the need to address cross cutting concerns, serious consideration of architecture, and refactoring.
This is one of our biggest challenges. Much of the work we're doing isn't confined just to a user feature, so square-peg-round-hole syndrome affects us a lot.
This was my favorite point out of them all:
> What about contributing to open-source software? Reading the code of an important external dependency, such as the web framework your team uses, and working on bugs or feature requests to get a better understanding was not part of any Scrum backlog I've ever seen.
Working at various startups, I have developed a methodology of contributing back to open-source projects we use without accounting for it in sprints or the ticketing system. It involves getting to the office an hour early, over-estimating on my other tasks so I have time for something extra, and a sprinkle of office politics.
And yet, it is some of the most valuable work I have done. Not for me, but for the companies I've worked for.
To get an in-depth understanding of that one Django or Express.js feature you use, or even better, to find and fix a bug that affected the business or may have affected it in the future, just gives you that much more of an edge over your competitors. Say goodbye to that nasty workaround you had to use to get around the bug--now it Just Works exactly how you need it to!
What's more, it's attractive to engineering candidates when you get to tell them the story of when you fixed a big bug in Socket.io.
The best engineering managers I have had have been receptive to the idea that this type of research and/or contribution to other projects should be considered "work" that provides value to the business.
Fixing a bug in OS software and consequently avoiding a workaround should count as double points in Scrum
Biggest mistake in scrum is to give points for bug fixes, even if they're not yours.
We now use Kanban on our team, after using Scrum for a couple of years. Pointing is useful as a process to discover unexpected hurdles or uncover hidden knowledge from coworkers, and as a guideline. By not having sprints, you just focus on your current ticket, and not on artificial deadlines or points. Tickets are done when they are done, and managers don't have expectations of completeness that are disconnected from reality. We've added priority swim lanes to our process. We have a prioritization meeting with PMs every week to assign priority to tickets, order items in the high and medium priority lanes, and review blocked items to see if anything can be nudged back to a working column. The process is clear, the expectations are fluid, and the work gets done in the time it needs to get done.
There's certainly a lot to like about that approach. But my experience is that it makes it very easy to end up with tickets that end up taking months (because there's no longer a natural point at which to stop and take stock), and/or accepting tickets that don't really have clearly defined acceptance criteria, which risks working on something that won't actually turn out to be useful. I don't like the artificial 2-week cadence of Scrum but I think I might still prefer it to not having one at all.
When I worked on a team like this, the rule was that when you came free and are looking for work, before you started any new stories you looked at the work in progress to see if you can help out to expedite any of it. The idea is that it's everyone's responsibility to try to minimize the work in progress (Lean).
We also had the expectation that one story should generally take no more than about two weeks. Before starting a story, if you think going in that it will be too big, then you try to limit the scope or defer parts of it into new stories until you're confident it can be done in two weeks.
Once a week we tracked how long stories were in the "In progress" column, and once one had been up there for three or four weeks, people started asking how they could help wrap it up. I think the longest I remember a story being in progress was about 6-7 weeks, and that was real uncomfortable for us. Typical times were 1-3 weeks.
So we had a weekly cadence for demos, and product owners liked that they would see steady, regular progress rather than being inundated with sudden large dumps at sprint boundaries.
Two weeks! Any story that takes more than 2 days is generally broken down into smaller stories. A two-week project is more like a small epic...
Lots of teams have different expectations about how granular a user story should be, and some of this has to do with the project. How fast can you design, develop, test and deploy a meaningful amount of new content?
One explanation for the longer time is that we had a process where for each user story we'd write a short, informal design document and send it to the team and stakeholders for review. For non-trivial stories we'd then meet to discuss them and come to a consensus about how to implement them. This probably added about a day or two to a story's duration, but it meant that we had a solid system at all times. It also often served a similar purpose to a retrospective, or produced topics for a retrospective, because these meetings would surface any technical debt and other impediments to progress. It also served as knowledge transfer and helped the team converge on design principles and expectations.
So these meetings had a cost, but we felt it was essential for practicing agile design. We didn't find this made us slow to react. On the contrary, because this kept our technical debt low, and our software well designed and well understood by the whole team, it meant we could pivot on a dime.
I think there's a real risk in going too fast. Maximum speed should not be the goal of a software development process, and I don't think any business really wants that. The two primary goals should be: 1) to make predictable, steady progress over long periods of time, and 2) the ability to change priorities as quickly as possible as new information emerges. If you have to sacrifice some speed to get there, it may be worthwhile.
It's up to ticket creators to pipe up if their request hasn't been worked on in ages. But we recently reviewed the whole board with the PMs, and that was pretty good for getting some things out of limbo and closing others that never got that important.
Add WIP limits to your columns. That means no cases can be added without others being completed first.
That doesn't help. If you have n people you need to allow n tasks in progress. The problem is one person being on the same task for 6 months.
Honestly I don't think that's an issue. If your lead developer is keeping on top of the kanban board, he will notice it and address it.
Most Kanban boards allow you to mark items if they go past a certain time.
Manage by exception. If most cases just take a few days, set up some kind of system to mark out the outlying cases. Then you can investigate and address these exceptional cases if required.
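A sketch of that kind of exception flagging, with hypothetical tickets and an assumed 14-day threshold:

    from datetime import date, timedelta

    # hypothetical board snapshot: (ticket, date it entered "In progress")
    in_progress = [
        ("PAY-101", date(2016, 3, 1)),
        ("PAY-117", date(2016, 4, 18)),
        ("PAY-120", date(2016, 4, 25)),
    ]

    STALE_AFTER = timedelta(days=14)

    def stale_tickets(board, today):
        """Flag anything that has sat in the column past the threshold,
        so only the exceptions need a closer look."""
        return [(ticket, (today - started).days) for ticket, started in board
                if today - started > STALE_AFTER]

    for ticket, age in stale_tickets(in_progress, today=date(2016, 5, 2)):
        print(f"{ticket} has been in progress for {age} days")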
> if your lead developer is keeping on top of the kanban board, he will notice it and address it.
Do you have some extra process that makes this happen? I don't remember anything in the version of kanban I saw that implied the lead developer should be doing this.
> Manage by exception. If most cases just take a few days, set up some kind of system to mark out the outlying cases. Then you can investigate and address these exceptional cases if required.
In the project I'm thinking of it wasn't a single exception; rather the length of a typical task gradually crept up (from below 2 weeks at the point when we switched from scrum) until it was normal to have tasks lasting multiple months.
Then your leadership is not doing it right. Cases should take days at most; if they take more than that, they should be broken up.
The whole point of Kanban is to have everything visible and known. If you're not looking at the Kanban board and taking action based on things that look wrong, what's the point?
> Then your leadership is not doing it right. Cases should take days at most; if they take more than that, they should be broken up.
It's easy - and useless - to make this about people. The point is, this is a problem that we didn't have under Scrum, with the same people.
> The whole point of Kanban is to have everything visible and known. If you're not looking at the Kanban board and taking action based on things that look wrong, what's the point?
At what point does it look wrong? If the idea is to have everything visible why does Kanban generally include fewer stats than other processes? I was always told the "whole point" was limiting work in progress.
It's a problem under scrum as well. Just because the sprint is ending doesn't mean the case will finish. It just gets dragged into the next sprint.
That's not supposed to happen, and the process says that in that case you have to at the very least re-estimate. If a ticket is really taking forever, at some point the estimate for the time to finish it will be more than a sprint, at which point you're obliged to split it up.
I'm considering a similar approach with my team. Any specific resources or books or tips you found especially helpful?
We use a similar approach in my company and the key here is to borrow the parts that work for your organization from Scrum and blend them with a much simpler Kanban approach. Don't follow a methodology blindly just because someone wrote a book about it. Keep evaluating & tweaking the process until you think it's at a point where it's working well for your organization.
I disagree with most of the major criticisms here. I think it is a valid description of an experience using scrum half-heartedly, but not an argument against its purpose or value.
Take points, for example: the argument is based on the premise that teams are obsessed with points. What if the teams use points as a framework to discuss complexity? I have worked in scrum groups where, if something was given a large point value, it would be questioned and "split" to break the job into component tasks that could be done by separate people simultaneously (or some now, some later).
Long meeting times with the wrong people in the meeting? That's a managerial issue and has nothing to do with scrum.
Writing stories is the main art of scrum, and without good stories it is pointless. A story can be written to encourage developers to improve the quality of a codebase, share knowledge with another team, or take time to learn themselves. A really good story would encourage these things and also deliver customer value.
Any system for managing software development hinges on good communication. SCRUM's main advantage to me is that it provides continuous opportunities for face to face communication. If you fail to take advantage or engage with those opportunities then it won't work, but I bet another "framework" wouldn't either.
> A story can be written to encourage developers to improve the quality of a codebase, share knowledge with another team, or take time to learn themselves. A really good story would encourage these things and also deliver customer value.
Show that to people unfamiliar with the Scrum church and they will say that sentence is meaningless. And well, it is, in an absolute way. A story is just that: a story. Maybe it can help put your son to sleep, but I doubt you need to invent a brand new vocabulary to discuss software architecture and technical debt, and I doubt you can do anything brilliant by only using a few examples in a field that is even more driven by pure logic than maths.
We are grown-ups. I don't want to work anywhere where I'm told stories and must get some points done by the end of the week. And this is not anecdotal: Scrum is a derivative of a manufacturing framework. I'm doing engineering.
Perhaps, but this could be viewed as the "no true Scotsman" thing. My main objections to Scrum are: a) you need to shoehorn things into user stories that are not naturally expressed that way (as a user I would like all relevant data in the database associated with a standard ontology?); b) in a special domain (healthcare) it requires good developers with some level of domain expertise, and I find this rare; c) (and this is a management issue) people think you can remove important aspects of the methodology (e.g. colocation of resources) and have it still work.
I feel like specialized domains in general require devs with domain expertise.
Maybe waterfall with unusually good specifications can use cog-like devs, but the few times I've seen that done it didn't turn out very well.
Agreed
This is my experience also. Points are complexity and high complexity tasks are most definitely to be addressed by breaking down stories. Dependencies are also easily addressed by setting a Definition of Ready. Don't accept a story if the dependencies aren't met (be that APIs, environments, designs, whatever). Simple.
Planning/grooming we do in one hour. You need good story writers, is all. I once had a planning session last 12 hours because the product management was so awful. If your team sucks, no process will save you.
Exactly. A good agile, and by extension SCRUM, process allows the developers to influence how things get done. If the way stories are written are leading the team to write unmaintainable code, change the way the stories are written. Or talk about why the problem exists in the first place and take action to fix it.
Again, it all comes down to communication. If nobody wants to talk to each other, SCRUM is not going to help.
As with all of these posts, they're extremely lacking in alternatives. Of course SCRUM won't work for every team, or even every project for the same team. But it provides a way for the team to evaluate and adjust how to best accomplish their goals in a straightforward and (if done right) low friction way.
I think it's taken for granted how much effort is alleviated by adopting SCRUM. Just think how difficult it would be to develop an alternative process for every team, every project, and every new person to join one of those teams or projects. Everyone has their own way of doing things; standardizing on a few universal goals ain't a bad thing.
> Long meeting times with the wrong people in the meeting? That's a managerial issue and has nothing to do with scrum.
If scrum tends to lead to long meeting times with the wrong people, then scrum certainly should include safeguards against that. Otherwise we need an "x+scrum" framework which includes these safeguards. I don't think scrum should fix every problem "in management", but "meeting management" should be a core scrum issue to deal with.
> Long meeting times with the wrong people in the meeting? That's a managerial issue and has nothing to do with scrum.
Even with two-week sprints, the majority of your last day is entirely meetings, between the review and the retrospective. In my experience one-month sprints are more common, especially in large enterprises and government, and you will literally have 7 hours of meetings on that final day if you follow the Scrum handbook.
Long meetings are built into Scrum.
Quoting a nice idea from the article: "One way to achieve this might be putting work items through what I would call an algebra of complexity, i.e. an analysis of the sources of complexity in a work item and how they combine to create delays. The team could then study the backlog to locate the compositions that cause the most work and stress, and solve these knots to improve the codebase. The backlog would then resemble a network of equations, instead of a list of items, where solving one equation would simplify the others by replacing unknowns with more precise values."
I've never been on a team that does pure Scrum - even ones which intended to do so always ended up with what we termed "Scrum-ish": taking the ideas from the base methodology, but adding a whole level of 'house rules' aiming to patch up obvious gaps. For instance, always making time for one refactoring or technical debt task per sprint, tracking accuracy of estimated time versus actual time elapsed (deeply uncomfortable but very useful!), the hundreds of different rules around trying to make standups shorter and more useful.
I am a big fan of well-run retrospectives, though: they can be a really nice way to feel empowered as a developer, especially when you have one retrospective identifying that Thing A keeps causing everyone pain, and the next retrospective having everyone say 'Hey, Thing A is so much better now!' Never realized they weren't 'meant' to be about technical matters, though: in our Scrum-ish teams, they were always open for all topics, and I think that's a very good idea.
Of course, the fun thing about Scrum-ish teams is now you have a whole new level of debate that can happen: "We're failing because we're not doing Scrum rigorously enough!" vs "We're failing because we're doing Scrum too rigorously, and what we need is more house rules!" ;)
Retrospectives for us consist of the dev team sitting in a room getting lectured by the PM on why we consistently fail to close all of our stories, and asking for our opinion on what new processes can be introduced to fix it.
Of course, feedback is solicited, but it is an unspoken rule that criticism of project management is verboten. However, criticism of self and others on the dev team is absolutely allowed and encouraged, and so the brown-nosers amongst the team use the opportunity to make themselves known.
This is familiar to me. However, I'm not sure you'll like my "fix". The PM should never be outside the discussion. The moment they become a separate entity, an unaccountable entity, dev can become more and more pathological. It is for us to stand up to that, communicate the difficulties in task estimation, keep a strict paper trail on when specs get shifted/scope creeps so that we have a well backed response when this discussion happens.
I would even argue that in the same way the "brown nosers" are trying to make some positive impact for themselves, as risky as it is, you can do better by fighting bad product management. If you feel the pain, your team does, and likely, your manager might as well. (if it's also a separate entity from the PMs.) The loyalty and trust you can build by defending your devs and being a force for good can be absolutely invaluable as your career goes on.
By and by, although I acknowledge the risk, if you shape your rebuttals well, you can find yourself bringing PMs to your side (a recent "spirited" discussion in which a PM was refusing to institute KPIs to track their features ended with their other PM peers questioning their resistance and backing the eng push for better telemetry, because when it came to "how can we justify how well we're serving clients if we _don't know_" this speaks even across lines.)
I'm skeptical about retrospectives.
We've certainly done them, identified problem points and then solved them (and it does feel good to do that..) but it doesn't actually seem to make things better.
Let's compare it to, say, personal estimation:
When you estimate, execute and reflect, you can tangibly improve your estimation process.
You can quantitatively observe an improvement in estimations on tasks when people go through this process.
Previously: estimated 20 hours for (task). Took 10 hours. Repeat... soon, your estimates are for 10 hours, and you're quantitatively, objectively able to make consistently better estimates.
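A minimal sketch of that feedback loop, with made-up numbers:

    def calibration_factor(history):
        """history: list of (estimated_hours, actual_hours) for finished tasks.
        Returns the average actual-to-estimate ratio, used to scale the next guess."""
        ratios = [actual / estimate for estimate, actual in history if estimate > 0]
        return sum(ratios) / len(ratios)

    history = [(20, 10), (16, 9), (12, 7)]   # consistently over-estimating
    factor = calibration_factor(history)
    print(f"adjusted next estimate: {20 * factor:.1f}h")  # roughly 11h instead of 20h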
Retrospectives in my experience don't do that.
You can sit through 50 retrospectives and in each one identify a problem area and then fix it, and yes, that does feel good, but objectively, when I reflect on the defect rate as a result of the process, I feel like retrospectives make zero impact on the rate at which technical debt accumulates.
There's something missing in the way they work; all you (well, all we, I suppose, this being my personal experience...) ever do is find things that are wrong and fix them. Objectively when you look at it, there's no closing of the loop where the defect rate drops.
There's no process improvement that generates fewer problems in the future... all it ever is is band-aiding to prevent technical debt spiraling totally out of control and devastating the project.
There must be a better way, where you somehow measure how technical debt was created and work to incrementally prevent it happening... but I've never seen that actually happen in practice.
> tracking accuracy of estimated time versus actual time elapsed (deeply uncomfortable but very useful!)
It should only be uncomfortable if the team means 'commitments' when they say 'estimates'.
This is part of the reason stories are generally pointed with (more or less) triangular numbers. So that teams stop fretting about missing estimates by an order of magnitude or less.
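For illustration, a sketch of snapping raw guesses onto a coarse scale (the exact scale values below are an assumption; teams vary):

    import bisect

    # a coarse, roughly triangular scale; the gaps matter more than the exact numbers
    SCALE = [1, 3, 6, 10, 15, 21]

    def snap(raw_points):
        """Round a raw guess up to the next value on the scale, so small misses
        disappear into the bucket instead of becoming arguments."""
        idx = bisect.bisect_left(SCALE, raw_points)
        return SCALE[min(idx, len(SCALE) - 1)]

    print(snap(4.5))  # 6
    print(snap(11))   # 15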
The retrospective is also my favorite thing about Scrum (or my favorite Scrum ritual, let's say). As you said, when it works, the team feels the rush of having solved a significant problem together. This is the reason I think it should be done more often, and more thoroughly. (I'm the author of the post, btw)
> Quoting a nice idea from the article: "One way to achieve this might be putting work items through what I would call an algebra of complexity, i.e. an analysis of the sources of complexity in a work item and how they combine to create delays. The team could then study the backlog to locate the compositions that cause the most work and stress, and solve these knots to improve the codebase. The backlog would then resemble a network of equations, instead of a list of items, where solving one equation would simplify the others by replacing unknowns with more precise values."
Does this even mean anything? How would it work in practice?
I was going to post the same thing. I found it interesting that the parent comment thought that quote was one of the highlights of the article. To me the quote is far too vague to be converted into anything executable.
My impression is that he means something like identifying a part of the codebase where, anytime a story requires working in it, it sucks. I remember we had some old UI object written in some arcane javascript and everyone felt like dying when they had to work in it. But in my experience, the pointing process naturally included that as a consideration. I believe my brain already processes that algebra of complexity in its native operations when I estimate a point value. I'm not sure that attempting to materialize this "algebra" into some system of equations would be very helpful.
I think the argument for scrum is something like "Our big organization has complicated politics and scrum is one way of dealing with those politics." However, I often wish the management would directly address the internal politics that undermine productivity. I realize that it is awkward and uncomfortable for managers to talk honestly about the differences they have with other managers and other teams, but that is also their profession. That is, working through any relationship that undermines productivity is the job of a manager.
I wrote about this here: "The Agile process of software development is often perverted by sick politics"
http://www.smashcompany.com/business/the-agile-process-of-so...
> What I have a hard time understanding is why the ancient, simple communication form of text is given second seat. The truth of the matter is that, especially under the constraint of distributed teams, it's difficult to beat text.
That's a really good point. I'd be excited to see what a team could do if each developer wrote a one-page memo about what he did and what he was going to do once per iteration, and a few sentences for each day. Throw 'em in a log, and they might even aid the retrospective.
What a fantastic history of a project that would be...
Maybe we could just go back to .plan files, like Carmack.
Not to take away from either point, and I largely agree, but written word is our 'mode 2' - a lot can be missed without talking face to face.
Yes, the issue with standups/scrum ceremonies is when people, for whatever reason, choose to ignore all the 'other' info they are receiving.
The real key for Scrum for me has to be not only teams that want to write good code, but teams that WANT to get better at working together. That takes the whole team. If the team aren't bought in to this, then I'm not sure which project management tool will work for them, but it sure isn't scrum!
Scrum proponents (a label I would tentatively apply to myself) would tell you that 'you're doing it wrong', but unfortunately a point-by-point reply to this article would detract from the general problem here: Scrum is intended to be the straightest line towards measuring your real progress on a project, and not much else.
If you're working on a project where it is important that you have as accurate an idea as is realistic of the size of the project, or more specifically your progress through that project, then I can't see how a methodology could be any simpler.
If having a good idea of the size of your project over time and your progress through that project are not very important from a management perspective, the Scrum artefacts will seem like, and will probably in fact be, needless overhead.
Scrum is not opinionated about the actual development methodology so claims about how it affects the code that is written are themselves a bad smell IMO.
> Scrum is not opinionated about the actual development methodology so claims about how it affects the code that is written are themselves a bad smell IMO.
Scrum is actually part of the problem, IMO. I've seen many teams turn scrum into a hammer and treat all future problems as nails.
Example problem: The foobar story has failed for the third sprint in a row.
Likely discussed in retrospective (plausibly good ideas, mind you):
- We need to break down stories more before we estimate them.
- Or we need to stop underestimating foobar stories.
- Or we need to focus on unblocking subtasks related to foobar stories.
Probably unconsidered:
- The foobar code is a mess and needs to be refactored.
- Or the foobar subsystem is too coupled to the Fizzbuzz subsystem.
- Or we need some developer tools to increase productivity in the foobar ecosystem.
Since scrum is methodology-oriented, methodology is the first tool teams reach for when a problem is encountered. And I see this even after team leads make it explicitly OK to discuss technical subjects in retrospectives.
I'm not a psychologist, so I can't describe why this phenomenon happens, but I see it regularly.
All of the items you listed under unconsidered should be brought up by the dev team. If the dev team is uncomfortable bringing them up, then that's probably a sign of friction between the dev team and management, which is really common.
I've routinely brought up all of the unconsidered comments in retrospectives. Retros are all about making sprints better, and talking about technical problems is integral to that.
>Scrum is not opinionated about the actual development methodology so claims about how it affects the code that is written are themselves a bad smell IMO.
Pretty much every kind of deadline driven development ramps up technical debt. Scrum certainly isn't the worst in this respect (developers make their own deadlines, and conscientious ones will build the time in), but the emphasis on commitment and the pressure to deliver at the end of the sprint puts pressure on developers to cut corners.
The worst part though, is that the product owner is usually non-technical and will deprioritize stories to clean up technical debt as a result.
IMO for any kind of development methodology to work it must have an opinion on technical debt. Scrum doesn't.
Sprints are meant to be based on the previous sprint's velocity, so any commitment should get smaller and smaller until you can meet it without forcing it.
If pressure is ramping up and quality is going down, the sprints aren't serving their purpose.
One of the few defining characteristics of scrum is that the developers define how much they can achieve, and this estimation is improved over time. If this is not happening there is something else wrong with the culture and Scrum is being used as a scapegoat.
A few defining characteristics of scrum that lead to overly optimistic predictions:
* The prediction is made in a meeting while your head is "out of the code".
* The prediction is made in a group setting, rendering the decisions more easily subject to peer pressure and groupthink.
* The prediction is made up to 2/4 weeks in advance of actually doing the work.
* The prediction is made without any risk of overshoot attached. Risk is a critical metric, which scrum conceals.
And the main defining characteristic of scrum that leads to pressure, after all of that unwarranted optimism:
* The prediction is designated as a commitment.
It sounds as though you're objecting to being required to give any estimate at all.
I don't know how you manage to read it that way. He seems to be saying he would like to be in a situation where he has the means to give a good estimate, but scrum forbids it and forces him to give random and biased estimations.
Can we infer that he would like to give his estimates:
* while he is actually writing the code (so not up front)
* not in a group setting but as an individual, so either one person estimating the whole thing or each person giving different estimates
* (third point same as first, don't want to estimate up front)
* must incorporate what is often called 'contingency' (which is actually what the whole point of measuring velocity is for!)
* and the final point - he doesn't want to have to commit to it
how can you _not_ read this into it?
Assume each person giving different estimates for their own work, but not up front - ongoing as code is written.
How is that the same as not being "required to give any estimate at all"?
> he doesn't want to have to commit to it
Why not? An estimate is an estimate, not a commitment. Committing to an estimate makes it a commitment, not an estimate.
I might expect a dice roll to be 3.5, I'm not committing to the next roll being 3.5 - analysis should inform policy, in this case expectations informing stated commitments, but the two are not the same.
Furthermore, this bullet point actually takes the quote out of context - He specifically doesn't want to commit to the estimate produced under the previous conditions, not that he won't commit to any estimate. The difference is choosing to commit to an estimate you have high confidence in, versus any estimate given automatically being a commitment (where estimates may be required on demand).
It is totally reasonable for stakeholders to want to track your progress through a project. If you have a good way of doing that then great, you should use it.
Scrum people believe that scrum is the simplest way of measuring that. But at some stage you have to estimate the constituent parts of the project in order to get an idea of its size, and for those estimates to be useful in tracking your progress you have to do it in advance.
I repeat, however: if you don't need to do this then that's fantastic! Many of us do, however, and some of us choose to use scrum to do that, and some of us have had a great deal of success with it.
(edit: I worry that this sounds condescending. I am just trying to keep the tone friendly)
> for those estimates to be useful in tracking your progress you have to do it in advance
In advance of what? The only constraint on a useful estimate is that it comes before the task is finished; it needn't be considered credible at the earliest possible time.
Also, your response doesn't really address my post..
(I went to bed so didn't take long to reply before)
I am clearly not expressing myself well. I am talking about a situation where some stakeholders are expecting a complete picture of roughly how large the project is and would like to be able to track how far your team is through this project on a regular basis.
I am putting scrum forward as a methodology for, in as short a time as possible, measuring the size of that project in a meaningful way: merely breaking it up into pieces as small as possible, attaching numbers to those pieces intended to measure the size of each piece relative to the others, and then over time discovering how long it takes to complete a piece of a given size.
> Assume each person giving different estimates for their own work, but not up front - ongoing as code is written.
The situation I outlined above (the time when scrum helps out) requires you to have a stab at estimating all the constituent parts of the project at the beginning of the project.
> an estimate is an estimate, not a commitment. Committing to an estimate makes it a commitment, not an estimate.
True, but the point of estimating in scrum is to assign relative sizes to the pieces of work, not a number of hours, so this isn't a commitment to finish at a specific time but just a way of saying 'I think this is one of the larger pieces of work in this project.' The person I was replying to sounds like they are on a bad team/project where people use their estimates to blame/finger point, and they are ascribing this to scrum as if the team wouldn't be doing this otherwise.
And in case you suggest that estimating without ascribing a time value is not meaningful, it is used to track how far you are through the project, and over time you refine what the finishing date will be given the emerging velocity.
> I might expect a dice roll to be 3.5, I'm not committing to the next roll being 3.5 - analysis should inform policy, in this case expectations informing stated commitments, but the two are not the same.
The analysis comes in discovering the velocity. The expectations evolve over time. But knowing your velocity is of limited use if you don't have an estimate of the overall size of the project.
> The difference is choosing to commit to an estimate you have high confidence in
This is the method for getting confidence in your estimate. You have an overall number of 'points' in the project and you learn how many points you can tackle on average every X weeks.
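The arithmetic behind that is deliberately simple. A sketch with hypothetical backlog and velocity numbers:

    def projected_sprints(remaining_points, recent_velocities):
        """Estimate how many sprints remain, given points still in the backlog
        and the last few observed velocities (points completed per sprint)."""
        velocity = sum(recent_velocities) / len(recent_velocities)
        return remaining_points / velocity

    # e.g. 180 points left, last four sprints delivered 28, 31, 25, 30 points
    print(f"{projected_sprints(180, [28, 31, 25, 30]):.1f} sprints remaining")  # ~6.3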
>The person I was replying to sounds like they are on a bad team/project where people use their estimates to blame/finger point, and they are ascribing this to scrum as if the team wouldn't be doing this otherwise.
Every time you try and infer what I'm "really" saying or what "really" happened to me you get it completely wrong. Next time you do that just assume that you're wrong, it'll save us both time.
The blame/finger pointing on my projects wasn't really external (although in a different environment it certainly could have been). Developers themselves felt bad about missing their 'commitments'. The pressure/blame was largely self-inflicted.
Despite feeling bad, the predictions were still consistently optimistic and still consistently wrong due to the environment the predictions were made in. It was a bug in the scrum process that caused this, but the team and management (and you, apparently) would rather assign blame to anything other than a bug in their methodology.
>The analysis comes in discovering the velocity.
Velocity isn't a useful metric.
>This is the method for getting confidence in your estimate.
Except it doesn't work. It didn't work for us and it probably doesn't work for anybody else.
Confidence in estimates means treating risk and uncertainty as if it is real rather than sweeping it under the carpet, like it is in scrum.
Confidence means a prediction process that doesn't make developers feel guilty about being wrong, like it does with scrum 'commitments'.
Confidence means a prediction process that doesn't intentionally subject developers to groupthink and peer pressure by immediately putting them on the spot like scrum planning pt 2 does.
Confidence means that your estimation process itself should be mutable. Under scrum it is fixed and not subject to review (if you change it you're doing "Scrum-but" and that's a sin, according to scrum trainers).
Most of all, confidence means that you should be able to inject technical debt cleanup stories into the sprint that derisk future changes. Scrum says that's only allowed if the PO says it's allowed. The PO is not responsible for missed commitments though, so it's not their problem.
>* while he is actually writing the code
Yes. I can take time out to answer email. I can take time out to make estimates as soon as I get an estimate request. Doesn't have to be done in a meeting.
>(so not up front)
What the fuck is the point of an estimate that's not made in advance???
>not in a group setting but as an individual, so either one person estimating the whole thing or each person giving different estimates
The latter. Is that a problem?
>(third point same as first, dont want to estimate up front)
"Not up front" is not the same thing as "not 4 weeks in advance". I'd do it as soon as the PM needed it to do prioritization.
>must incorporate what is often called 'contingency'
If you think risk and contingency are the same thing you're an idiot. Risk is that story A (e.g. upgrading dependencies) might take 0 hours or might take 4 weeks, while story B (updating translations) is going to take 1.5 hours and really only 1.5 hours.
Contingency is (for example) "let's make sure we have 4 weeks spare before doing story A".
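To put rough numbers on that distinction (all figures made up for illustration): risk is the spread of possible outcomes for a story, contingency is the buffer you add on top of the expected total to absorb that spread. A quick simulation in Python:

    # Illustrative only: risk = spread of outcomes, contingency = buffer on top.
    import random

    def story_a_hours():
        # risky: anywhere from trivial to four 40-hour weeks
        return random.uniform(0, 4 * 40)

    def story_b_hours():
        # predictable: essentially always 1.5 hours
        return 1.5

    totals = sorted(story_a_hours() + story_b_hours() for _ in range(100_000))
    expected = sum(totals) / len(totals)
    p90 = totals[int(0.9 * len(totals))]

    print(f"expected total: {expected:.0f}h")
    print(f"90th percentile: {p90:.0f}h -> contingency of ~{p90 - expected:.0f}h on top")

The expected value alone tells you almost nothing about how much slack you need; the spread does.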
>(which is actually what the whole point of measuring velocity is for!)
No, velocity is about measuring how fast you're doing stories.
>and the final point - he doesn't want to have to commit to it
Yeah, because as soon as you start assigning blame for missing feature deadlines the technical debt dial gets ramped up to 11 and predictions become an exercise not in being accurate but in CYA.
An estimate of how long something is going to take can be wrong for many reasons that aren't the developers' fault - bugs in libraries, technical debt in dependencies, technical debt they weren't aware of and didn't create, team members disappearing, etc.
If you want developers to commit to things make sure it's things that they have full control over.
The tone of this post is uncivil, e.g. "If you think ... you're an idiot."
(replying here because I guess we've reached the maximum depth)
I am assuming here that you want to be able to measure your progress through the project (as I mentioned, this is the only thing scrum does for you). Both of you seem to be suggesting (don't insult me if I'm wrong) that this isn't the highest priority.
And no, velocity is there to make the whole system self-adjusting. If I put 3 points against a story, we use velocity to discover over time how long those 3 points take. This self-adjusts to incorporate contingency.
If you disagree with this then we simply disagree on what velocity is about. It doesn't make us enemies; we don't need to get super pissed off at each other.
I've seen the "you're doing it wrong" argument so many times (I applied it myself a few times).
Scrum is complex and not always possible to follow exactly, so this is to be expected, but it makes me wonder: how many successful projects out there are actually following the true Scrum methodology?
My guess is that there are a few more than with classic waterfall, but I still seem to see far more failure than success stories.
The very idea of a one-size-fits-all process is unrealistic IMO. Something will always be customised in practice.
Regarding success stories, it might be that process doesn't play such a critical role as long as solid engineering techniques are used and the team is competent.
If your team is competent and solid engineering techniques are being used, you already have a well working process. Forcing any methodology on this will likely result in a deterioration.
All those methodologies are for the less stellar programming teams, to get consistent results from them (and, to a lesser degree, to make good and bad programmers work well alongside each other). Because you can't always get the best programmers.
If Scrum would only work well with good programmers, it would be next to useless.
Successful big waterfall engineering projects where waterfall is actually applied exist. Want to construct a bridge or a rocket, design a microprocessor? You are not going to do that with "stories".
It remains to be seen whether big Scrum engineering projects where Scrum is actually applied even exist. I can't think of one off the top of my head. I'm not even sure Scrum is well enough defined for us to judge whether it is being correctly applied or not. And whether such projects are successful is yet another question.
In the end it does not matter much. A theoretical vision that nobody ever uses is of almost no interest if you are concerned with real-world efficiency.
> Successful big waterfall engineering projects where waterfall is actually applied exist.
You are engaging in equivocation.
> Want to construct a bridge or a rocket, design a microprocessor? You are not going to do that with "stories".
Nor are you going to use the software development methodology described as the waterfall method (you may use a physical engineering methodology that was among the inspirations for that software development methodology, but those are distinctly different things, with different specific practices, and different domains.)
> I'm not even sure Scrum is that well defined for us to be able to judge if is correctly applied or not.
Scrum is exquisitely well-defined -- as to what it involves, what it specifically excludes, and what it is neutral about -- in the Scrum Guide. (There's a lot of confusion between Agile, a broad approach which is not a specific methodology, and Scrum, a very specifically defined -- though by itself fairly incomplete, in that any implementation of Scrum needs lots of decisions on the things to which Scrum is neutral -- methodology.)
Ok, I maybe went a little far with the bridge, but today a microprocessor is far more similar to software than to a bridge (at least in some of the design phases, and now even in some maintenance phases). A modern rocket also contains tons of software. And waterfall is similar enough to (at least non-software -- but in my thesis also software) engineering to consider a direct equivalent for the bridge. Only, much as a description of a method is often not enough to see how it is properly used, the mythical "waterfall" where one phase begins strictly after the other never happens; there are all kinds of loopbacks -- even for the bridge -- and obviously if you try to remove the loopbacks things will get fucked up, but why would you try to do that? In real-world conversations, "waterfall" is used to designate software being developed with proper general engineering practices.
Scrum's origin is partly in manufacturing. There are some common points between some aspects of software dev and manufacturing, especially when the software being developed can be iterated very quickly (and very few when it can't), but at least in the real world (and maybe even in the theory) Scrum is also what is mainly used to interact with other stakeholders. And given how that communication is performed, and its content, it might be better than complete chaos when nobody is actually able to do the work they are supposed to do (the PM being limited to having vague ideas, the lack of a truly competent tech lead doing actual tech-lead work, lack of vision from management, and so on) and only very vague general ideas of what the software -- or more generally the whole product -- should do are ever emitted.
As soon as "serious" stuff starts to be involved, you need real, boring engineering, with functional analysis, requirements engineering, modeling, systematic testing or even partial proofs, etc. You need it structuring communication between teams and day-to-day work. And in such a context I don't expect Scrum, or anything Agile, to add any kind of value.
Now the theory of Agile and Scrum has evolved, in response to criticism, to the point where we are told that it actually does not cover the things that matter. That is bullshit retro-justification, now that the world is fucked up trying to make sense of how to use it. Here is the Agile manifesto:
> We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:
> Individuals and interactions over processes and tools
> Working software over comprehensive documentation
> Customer collaboration over contract negotiation
> Responding to change over following a plan
> That is, while there is value in the items on the right, we value the items on the left more.
Engineering is mainly about "processes and tools"; of course "individuals and interactions" are also needed, but there is no need to oppose them (although I am not sure what the point of "individuals" is here; the authors might as well have said "oh, and by the way, be nice").
"comprehensive documentation" is critical in all kinds of domains, and now that software is everywhere it just makes no sense to declare a "preference" for "working software" over "comprehensive documentation". It is, again, even dangerous to oppose them.
Customer collaboration over contract negotiation; again, highly dependent on the field and specific project if this is something where it makes sense to even have a "preference" or not.
"Following a plan" is what you do about how you organize your work when you use Scrum. There is no problem in studying the impact of a change any time if proper engineering practices are used. Obviously, the cost can vary depending on various factors.
My conclusion about Agile and Scrum is that if you prefer all of that (the four Agile preferences, and the Scrum theater), you should seek projects that are suitable for the Agile preferences and so poorly defined that Scrum is a plus. On my side, I'm just not seeking to work on chaotic projects -- on the contrary, I try to bring logical and more systematic practice where I feel that chaos reigns -- and I'm neutral about the Agile preferences; I prefer to choose projects on other criteria (mostly intrinsic interest).
> Engineering is mainly about "processes and tools"
And Agile does not avoid processes and tools; it recognizes that processes and tools must be specifically fit to the particular team and context of work. (Scrum, particularly, is a baseline set of processes and tools designed to serve as a framework for common contexts of software work -- it's intentionally incomplete, to avoid specifying so much that its scope of applicability would narrow.)
> "individuals and interactions" are also needed, but there is no need to oppose them
The need to oppose them comes from the authors' concrete experiences in the software world before writing the manifesto, where very frequently canned (often consultant-pushed) processes and tools were being adopted by management in shops without considering the dynamics of the existing team and the particular work being done. (One of the sad ironies of the Agile movement is that the "Agile" banner itself has become a tool for the same kind of thing.)
> "comprehensive documentation" is critical in all kind of domains
Yes, it is; the preference stated in the manifesto is, again, the result of concrete experience where projects were quite often focused on producing mandated documentary artifacts because there was a checklist and that was how "control" was exercised, but the documents required and delivered were often irrelevant to (and not consumed by, or updated to reflect changes resulting from, the process of) delivering working software.
> Customer collaboration over contract negotiation; again, highly dependent on the field and specific project if this is something where it makes sense to even have a "preference" or not.
This is intended specifically in the context of developing specific software requirements (and, really, it's more about the dev team pushing the customer to engage rather than provide hands-off requirements.)
The Agile Manifesto really deals with concrete problems encountered particularly in enterprise software contracting (but bad practices from the enterprise world were, at the time, being exported to the rest of software development, so it is not limited to the enterprise world.)
> "Following a plan" is what you do about how you organize your work when you use Scrum.
Scrum, like most methodologies that attempt to implement agile values, focuses quite a lot on managing potential rapid change within the plan.
Well, I've got "concrete experiences" in the software world after the manifesto, where this has been interpreted as "fuck processes and fuck tools" (except those of Scrum, regardless of their applicability -- which does not cover the majority of projects, far from it), letting idiotic work continue to be done, now that we have a noun for it. This is not better than the previous situation. Honestly if some management is stupid enough to force badly suited processes and tools instead of letting (competent) teams choose better ones, I doubt they will suddenly see the light by reading the Agile manifesto. And again, in too many actual implementations, working software is not really an output of Agile processes... except now you don't even have documentation anymore. Actually, to get non-trivial working software, good documentation is essential. You don't solve anything by declaring that you prefer "working software", especially when you are trying to fix a situation where the documentation is mandatory but poor. And guess what, the "client" also wants "working software"...
Scrum is what you do when you try to do software engineering without actually doing software engineering. It's insanely meta, and as explained in other comments, the improvements you get from its loop are too often meta (we should evaluate more accurately). I prefer to stick to the real thing, and to core engineering practices. Scrum attempts to fix situations where core engineering practices are misunderstood and used as constraints instead of as something essential to the development of a good product; but it is vain to try to fix such a situation by engaging key people even less in core engineering practices, and more in mundane discussions where the real problems are never addressed.
> Well, I've got "concrete experiences" in the software world after the manifesto, where this has been interpreted as "fuck processes and fuck tools"
Oh, yeah, that's definitely a problem. I don't think the Agile Manifesto is bad at all, but I think that, ironically, in application it suffers from the same problem it sought to address -- people looking for simple answers that can be applied without deep knowledge of context. The Agile Manifesto and the Agile software movement were themselves a strong reaction against that, but unfortunately they (and tools from within that movement, like Scrum) get applied by exactly the same process the Manifesto was a reaction against (focusing on particular ways that process had manifested, prior to the Manifesto, in software development.)
> Honestly if some management is stupid enough to force badly suited processes and tools instead of letting (competent) teams choose better ones, I doubt they will suddenly see the light by reading the Agile manifesto.
Absolutely; the real audience of the Agile Manifesto is software development practitioners who have influence with management, and it's not really "new knowledge" so much as a concrete distillation of experience. The fundamental problem with Agile, I think, isn't that its ideas are bad; it's that the real problem it deals with isn't a problem of processes and tools, or even of the meta-level approach to processes and tools, but a problem with the institutional organization and leadership of large entities that happen to be doing software projects, and with how that manifests in software projects.
The agile movement has produced some new tools that can be applied effectively in, largely, the areas that didn't really have the worst cases of the problems that motivated the movement -- because its helped motivate and inspire a lot of efforts by people with decent engineering backgrounds at finding new ways of working.
But the kinds of organizations that were worst afflicted by the problems that the Manifesto set out to address are still the most afflicted by those problems, and what they've gotten out of it is a lot of new processes and tools that consultants will sell them, their management will blindly adopt without understanding the conditions which makes them useful, and thus they find all kinds of new ways to fail.
> Scrum is what you do when you try to do software engineering without actually doing software engineering.
Scrum is largely orthogonal to software engineering (presumably, people using scrum in a software project will be doing software engineering within Scrum, but Scrum is not about software engineering.)
> It's insanely meta, and as explained in other comments, the improvements you get from its loop are too often meta (we should evaluate more accurately).
Scrum is designed to be very meta, true. And, yes, if you mistake Scrum for a complete process rather than a process framework, you aren't going to get much out of it beyond omphaloskepsis. (I'm actually not convinced that Scrum is particularly valuable, even as a framework, as anything more than a well-known starting point to develop an appropriate, context-specific work model.)
I agree that Scrum is dead simple, but that doesn't mean it delivers sensible estimates, or allows you to get somewhere with less effort than some other methodology. You might end up doing more (and worse) work because Scrum is trying to be too simple and linear, which I argue is the case in the post. But it's simple, I definitely agree.
Regarding development: my main point is that Scrum leans on agile practices such as XP (testing, CI, etc.), but it also eats up the time necessary to do those things well. The time Scrum takes out of the devs' working hours would be much better spent on those.
>>> Scrum is intended to be the straightest line towards measuring your real progress on a project, and not much else
There's slightly more to it than that: it also encodes an assumption that you're working with a single fairly-tightly-integrated group (with synchronisation points at least daily). It's possible that this helps with estimation and scheduling -- it's a lot less clear that it helps get the best outcome in other respects.
I agree, it is often not the best approach. But many situations demand a well-defined approach to estimation, and although the OP tried to preempt this, he didn't provide an alternative.
I reckon most experienced coders can cope with estimation when it's justified (i.e. "can we realistically get this done before <specific, real and externally-imposed, deadline>? And if not, is there a useful subset we can manage?"). The bigger problems come when estimation isn't about keeping promises, but rather a part of some form of scientific management aimed at "getting velocity up".
There's also something of an uncertainty principle here -- more precision of estimation is possible, at the expense of increased expected timescales (partly due to padding, partly due to picking lower-risk approaches).
If it's being used to 'get velocity up' instead of to measure velocity, then it's not being done right.
I personally think estimating projects is one of the most difficult things about this industry, especially if we're talking about delivering many calendar-months' worth of effort for a team, unless it's just a variant on some other project[s] the team is well experienced with.
Oh, and I have attended some of those expensive one-week Scrum courses and saw the darker side of that community - it definitely has a cult following that gives it a bad name, but I've been to similar conventions around design patterns, object-oriented and (to a lesser degree) functional programming, so I think the community problem is not particular to Scrum.
Those with the loudest voices simply have the loudest voices, be they right or wrong.
The problem is communities.
Interesting. Do you have an alternative in mind?
"Scrum is intended to be the straightest line towards measuring your real progress on a project, and not much else."
More like wandering in the desert, hoping you find the promised land.
Been thru scrum master training 3 times, been on many "agile" teams. I've never heard this rationalization. Rather, a common justification for "agile" was you always have a working product. Which might be nice if things worked out that way.
Also, PMI style critical path worked just fine for figuring out that "straight line".
Scrum and "agile" democratized project management, empowering every poseur to claim expertise and ability. Whereas PMI required real effort to learn and master, Scrum flavored "self help" books can be flipped thru before you finish your coffee and then safely stored in plain sight on a book shelf, never to be touched again, allowing said poseur to claim the daily mutant chaotic dysfunctional mismanagement that they've always done is now "agile".
If you're objecting to people who treat scrum (or any project management tool) as a one-stop-shop that will cure all ills I agree with you, but nobody here is saying that.
If you are objecting to defining the scope as small tasks and measuring your progress through that over time, then continually re-evaluating this scope as requirements change, then I think you are not working in an environment that would benefit from this kind of tool.
It's just a pragmatic set of guidelines, and objecting to it with such ridiculous vitriol makes you sound as foolish as the people I think you're objecting to.
My goal is to ship products that people will buy and use. Scrum and "agile" has only been an impediment.
"with such ridiculous vitriol"
Emperor, little boy, no clothes. It's thankless work.
In opposition, defenders of Scrum et al. use the No True Scotsman fallacy: those of us who have tried and failed are just morons.
Considering that the failure rate of PMI-led software projects is even higher than that of agile projects, I really wouldn't hold it up as the way to go.
PMI (critical path) != waterfall. But then that's also said of "agile", which too often devolve to waterfall.
Project management is risk mitigation. In my experience, most "agile" projects have been risk amplifiers. Ironic.
I think Scrum is successful in organizations where there is a lot of finger pointing and cynicism, and the engineering team is happy to measure progress in "sprints" in order to deal with upstream requests that they fundamentally don't respect, and possibly a product-vision vacuum in which decisions are routinely made and reversed, making careful and judicious architecture impossible.
In such an environment, developers prefer to drastically over test their code, and to undertake work in manageable sprints that let management claim success and understanding even when neither exist.
If you already have a very strong product market fit, and you need to hire developers whose judgement you don't trust, or if there are extrinsic sources of timeline pressure (like investors or non-technical management who think developers are lazy... essentially anyone other than users or customers), then Scrum is perfect for your organization.
The other constituency that seems to love Scrum is product managers who either have no vision for the product or no control of the vision, and are essentially being asked to be cat herders and manage engineers without earning their respect or having any authority over them.
Re: daily standups. A lot of what the OP criticizes standups for is what I like about them. The content is only half of it. In my experience, it can be very easy for a team to cease feeling like one and instead become a loosely-coupled collection of independent contractors. 15-20 minutes a day of forced synchronous communication goes a long way to making you feel like you're actually on a team and therefore act like it, and is well worth the minimal time. This is especially true if the team isn't all in the same physical space. It can be disruptive, so scheduling it at the right time is important.
Of course, if you already have meeting overload, standup is going to feel like the worst.
I also like it when after the standup updates, most of us stick around and shoot the shit for 10 minutes. Good way to build camaraderie.
In my opinion, the good thing about Scrum is that you can tweak the rules to fit your needs, aka, "Scrum in name only".
The daily standup, IMO, should be only to remove impediments, and if you have none, then a sentence or two will suffice. I see the DS as the most useful meeting, as you are aware of what your workmates are doing.
And if Scrum is still a pain in the back, then you have Kanban, which is sort of Scrum without the straitjackets.
This kind of tweaking is frowned upon and you will be chastised for doing it.
Yes you can do it (and probably should when you understand the purpose of each Scrummy practice), but don't expect to be praised by Scrummers.
I've had people on my team frown at my tweaking.
I'm extremely happy to discuss the reasons I want to change something, but all too often the only counter-argument is "that isn't Scrum". I'm 100% fine with that, but I'm not going to do something if the only reason for it is religion.
That's nonsense, in my experience. Most people doing Scrum-like development in my experience are much more interested in a development process that works well and produces good results than they are about over-attachment to process.
This is also what I see the best-led companies do, not caring about what is Scrum and what it is not, and taking the pragmatist path. One thing I really regret at my current job is missing the Kanban workshop. It definitely appears to be much nicer.
If developers talk within their team, they should constantly be aware of what their workmates are doing.
If communication is a problem, having a daily standup just pretends the problem doesn't exist, rather than solve it.
> The daily standup, IMO, should be only to remove impediments, and if you have none, then a sentence or two will suffice.
100% this. The only thing a synchronous team-wide meeting is useful for is revealing a significant issue and getting a prompt and definite acknowledgement from the team. And then, if it's a priority, some help with the issue. But given the proper tool, even that feature can be made asynchronous.
IMHO, unfortunately standups tend to become status report meetings just for the sake of it (the article talks about control, and I think it nails it). I rarely see anyone bring up a blocker. It's just "yesterday I did this, today I'll do this. Next!".
It gets worse if the team is actually 2-4 different teams with not much overlap (because companies tend to adopt these agile methodologies without much thought and it just keeps growing and including more people because.. it's nice, right?). Then you're ignoring (or not having a clue) about 90% of the meeting, and it's _daily_.
"Because the assumption is that refactoring will be a few hours' work, or even shorter if it's renaming a class here and replacing a file there. These are only the most superficial cases of refactoring, however..."
That always bothers me, too. The changes you can make continuously are hardly worth bothering to worry about, because they're so small and quick and decent developers will do them automatically. It's the changes that take several days or a week that really make the difference in the long term.
Exactly my point. The changes that are risky (difficult to do, might take more time than planned) are those that will make the greatest difference in all code quality aspects. And these are the ones that are pretty much impossible to integrate into Scrum.
I think people don't understand the reasons behind the practices, and given flexibility they tend to choose completely counterproductive things to do.
For example, the only purpose of points and velocity is to give the product owner an idea of how much time will have to pass before a thing even has a chance of being done. It tells him that you, for sure, won't get this thing and that thing in the next two weeks.
The only utility of points for the developers is when doing planning poker. If something gets a lot of points then this means people are not sure how to do it and the thing needs to be discussed and broken down.
After that, points have no use for the team, and the team shouldn't even be aware of its velocity or how much it has burned so far this sprint. Those are tools just for the product owner and scrum master.
If the product owner is satisfied with "it's going to be ready when it's going to be ready", then points can be forgotten after planning poker and you don't even need to calculate velocity. It's still worth doing planning poker for the sake of the team discussion, and so the product owner can deprioritize tasks that are hard but not crucial.
But turning points and velocity into a performance measure is the dumbest thing you can do, because then the measure gets gamed and loses all utility. Same thing with estimates: if a manager is going to negotiate estimates, you might as well not make them at all, because they no longer carry any real information and become just a reflection of the manager's hopes, slightly skewed towards the developers' hopes (which were already overly optimistic).
And velocity is not comparable between teams, because different teams will normalize their complexity points differently. The number of complexity points that any type of task is given is supposed to be emergent within the team, because this makes a team's optimism/pessimism bias moot. The reason to use complexity points and not time estimates is also exactly this - to remove preconceived anchoring.
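As a toy illustration of why raw velocity isn't comparable across teams (all numbers invented): two teams can size the same backlog on completely different point scales and still arrive at the same projection.

    # Illustrative only: point scales are team-local, projections still agree.
    backlog_team_a = 100   # team A sizes the backlog at 100 of *their* points
    backlog_team_b = 300   # team B sizes the same backlog at 300 of *their* points

    velocity_a = 10        # team A's points per sprint
    velocity_b = 30        # team B's points per sprint

    print(backlog_team_a / velocity_a)   # 10.0 sprints
    print(backlog_team_b / velocity_b)   # 10.0 sprints -- comparing "10 vs 30 velocity" is meaningless

Which is also why turning the raw number into a cross-team target destroys the only thing it was good for.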
I wish I could highlight this a million times over. The idea of story points (which I have mixed feelings about regardless, but are not as bad as the author of the article mentions) is to produce a useful measure, which becomes instantly useless once it becomes a target. Once you start saying that you have to meet a certain velocity, you've gone off the rails.
The author mentions this, but I'd like to insist on it a bit more:
Scrum is a system of managing people, not a software development methodology.
It's about transforming programmers into cogs and gently forcing them to obey certain rituals every day, until they slowly give up their individual creativity and initiative and become good 'team players' .
And when someone says 'team player', I hear 'you belong to us now'.
I disliked scrum since the moment it was decreed upon our team.
And that's because pretty soon our team became obsessed with points and respecting the religion rather than doing the actual work.
The final drop came when I refactored some code, made it twice as fast using half as much memory (the proverbial 'much better'), only to have to fight the team to accept the changes, because that wasn't in the backlog.
Methodology is only good when it helps you achieve your goals easier and faster, safer, etc.
But when the methodology becomes the goal, then your job changes into satisfying the methodology, rather than being the best at what you like and enjoy.
And this is the exact status quo that larger organisations love - people focused on small irrelevant tasks, while the 'grand scheme' is determined by management.
Not for me. I use certain parts of it in my work today (develop in sprints, demo at the end of sprint, planning, backlog and current tasks), but if I see "we use scrum" in the job description, then I'm not your man.
For our dev team Scrum works fine, the key is to use it just as a framework, and not following every rule to the letter. For example:
- Obsession with points
We don't have that obsession; sometimes we don't even assign story points, just hour estimates
- Meeting extravaganza
Again, for example remote people don't need to attend all meetings, sometimes we just clarify the work items outside the meetings.
- The sprint has its own backlog, which can be changed only in agreement with the team and the product owner.
We also don't have this... if I've finished my tasks, I just move a new item from the backlog into the sprint and work on it.
- Refactoring, reading code, researching a topic in detail are all seen as "not working on actual story points, which is what you are paid to do".
This has some truth to it... regarding research, we do this outside the Scrum.
I'm a designer and my experience with small scrum teams has more or less been the same - a lot of the issues are handled by developers who are given some agency to self-manage and who are half-decent at doing so. I've only worked in one environment that was true scrum, and I don't know if it was the process or the organization — but it felt so overly boxed-in.
The last point has been the biggest one for me. Across multiple organizations working in the scrum process, I've occasionally felt like just getting some time to talk to developers about future exploratory work is like pulling teeth, because the scrum process relies so heavily on tangible tasks. I think that's a time-management thing that has to be established with teams ahead of time -- like "hey, 80% of your week is these tasks, but you have 20% set aside for research/discussion".
One can do this 80%/20% split. Not sure if it's mentioned in some official guide or it developed through trial and error.
Usually the 20% is reserved for bug fixing tasks which can't be estimated and therefore can't be included in the Scrum framework.
One of my projects has a "technical backlog", which is actually just a Jira task with multiple subtasks that gets dragged from sprint to sprint. As you can imagine, this doesn't work so great from a process/tool perspective, but it gets the job done (i.e. major refactorings can be and have been performed).
The thing with Scrum is that a lot depends on the PO and SM. If they're open minded and experienced you can get something decent. If they're dogmatic and pray at the altar of Jira, prepare for suffering.
Using time instead of points breaks the whole concept of sprint-based estimation which Scrum tries so very hard to defend. Basically Scrum claims that software developers can't estimate based on time so it forces them to use an abstract number instead. Over time this would lead to being able to estimate a project in sprint-sized blocks.
This is why sprints have a fixed duration, the sprint backlog should not be changed, stories are awarded the full points or 0 during review, commitments to finish the sprint, etc.
Sounds like you are doing "scrum but" aka your own methodology.
However, the article is criticizing pure, unadulterated scrum.
The article is also criticizing their particular version of scrum. For example "collecting points" or obsessing about them is not unadulterated scrum; the word points or planning poker are not even mentioned in the scrum guide. Any good developer, scrum master or book on scrum will tell you that story points are not a performance metric and they are only to be used by the PO and just for planning ahead.
There is little to criticise about pure unadulterated Scrum because there is surprisingly little prescribed and what little there is they openly tell you that you can change it in the retrospective. The real problem is that management, agile coaches (and unfortunately sometimes also some developers) often want to religiously apply every little thing they ever read in a book and seemed like a good idea. I think that solutions to all the problems the author mentioned are possible without deviating significantly or at all from scrum -- should that be important for whatever reason. The only prerequisite is that everyone involved is open to change.
"I just move new item from backlog to sprint and work on it." - This breaks traditional scrum estimating.
Yes and No. Scrum estimating is done with 2 independent metrics: Velocity and Task Hourly Estimate.
Obviously, adding something to the sprint is going to hide the fact that you fucked up at sprint planning. However, as the author said, very few people actually care about task estimates (and often teams don't even care about breaking stories into tasks at all, treating story == task). In any case, unless you are using post-its and manual tracking, any tool will track the estimate for you; the information is there if you want to process it.
Velocity is supposed to change from sprint to sprint, getting better as you get better at assigning story points. You can lose story points when you fuck up a story, and you can gain extra when you pull additional stuff from the backlog. Once velocity is more or less stable, it is supposed to give an idea of a release date.
However, considering that you don't break down big stories and epics into smaller stories until you are about ready to implement them, the uncertainty on the release date is more like 6 months vs 2 years vs 10 years. Management makes a big deal of it, but really that's how you budget most of the time anyway: "I need 3 FTE for 6 months or 3 FTE for the year, to be re-evaluated budget cycle by budget cycle".
Most of the OP's problems actually started when Scrum Master became a job. I remember, something like 10 years ago, being all excited about how Jira could make very precise reports of velocity and estimates, and my very experienced coach looked at me as if she had seen the Demon. Scrum is about keeping a supertanker on track in a tempest, and those amazing reports give the illusion of a nimble sports car racing on a track.
This talk about agile by one of the founders was posted on Hacker News a while back, it is good, and I think complements this post:
https://news.ycombinator.com/item?id=11548334
To summarise, the issue Dave Thomas has with agile is that it has become a prescriptive framework, 'Agile' (with a capital 'A'), rather than what it originally started out to be, which is a means of being agile (small 'a').
I really liked this talk, and agree with both him and the OP of this thread that some parts of the 'Scrum' as a prescriptive framework may not work for you.
I find it a bit strange that the OP singled out the retrospective as a negative - for me this is the best part of 'Scrum', as long as you're not too prescriptive about it.
Really the retrospective is a chance to improve yourselves as a team each sprint, and it's a chance for people to get stuff off their chests. It shouldn't be just about scrum; it can be anything - adopting new approaches to development (e.g. TDD), discussing the state of the code, anything. That's what makes them (for me anyway, and I think(!) for my teams) enjoyable and useful meetings.
An effective retrospective requires creating a safe environment for sharing ideas, and an SM who can foster such an environment.
I am guessing that such an environment is quite rare, hence the perceived lack of utility of the retrospective.
I can say from experience that this is very hard to do. I expect that I will have to deal with a brewing technical conflict in the team soon, and I'm not sure what the best way to handle it is. I've also seen what happens when such conflicts are swept under the carpet, and it's not nice.
>> The daily standup deserves a blog post of its own. This religious ritual has become a staple of every team in the world. Ten minutes of staring into the void, talking about what you did while no one else listens, because they were in the middle of something five minutes ago and will go back to it in another five minutes, and waiting for everyone else to finish.
I've had this happen many times while I was leading the project - it bugs me. I've ended up reducing daily standups and replacing them with walking around (physically and virtually) and chatting with people, getting similar results. Of course, "similar results" here means that I heard what someone had to say but nobody else did - so a key to this walking-around approach is to tell each person what other people are doing in order to connect the dots.
On the surface this might seem terribly inefficient, but there are a lot of positives, biggest of which is building good relationships with team members and learning about all the other little things going on in life.
On the down side, for people who are only part time on my team (not uncommon in my bigco setting) there is extra scheduling effort and discipline required to make these chats happen frequently.
One of the most essential parts of Scrum is that it aligns understanding across all the stakeholders: business owner, customer, designer, developer, etc. Hence the meetings, prioritization, backlog, and daily standup/daily write-up as development-process artifacts, to focus on the priorities that create the most business value under flexible scope, fixed time, and fixed money. Scope, time, and money are the three variables of a development process.
In the end I think the author is proposing a process with flexible scope, flexible time, and flexible money, which is often an internal project without direct business value. In that case the author should really just break off into a side team without a business owner, or even without a product owner, and simply use Kanban, instead of sticking with Scrum. I believe every process has its shortcomings, but instead of pointing out the wrongs, I'd appreciate it more if the author described the situation/stage/type of the project, why Scrum doesn't work in that case, and how a different approach could optimize the outcomes.
Except most of the 'stakeholders' don't give a fig for what the Engineers value. Like technical debt, robustness of the solutions, relieving bottlenecks both in code and in development.
So Engineers get herded into little incremental projects that management can swallow as having 'business value'. And the herd marches toward the cliff as the code base wanders around the solution space but never gets fundamentally sound. Anyway, that's my jaded (from experience) view.
No process will solve the lack of understanding. Also, this goes both ways since engineers frequently don't give a fig about business priorities. Sometimes your refactor isn't really that valuable and is costing the company money.
If you have a culture of trust between departments, you should be able to have honest conversations.
That's exactly the call that gets made again and again, by managers. They rarely value code quality; so easy to dismiss with "that's just a refactor". Converse all you want. Business folk are not going to believe that Engineering isn't foolish and just "costing the company money".
That's the culture difference between working in a cost center and working in a profit center. If you just tell business our code is messy and we want to make it nicer, they will tell you to fuck off. Tell them you need to reduce operational risk and maintenance costs. You need to meet each other halfway.
Keep in mind that Scrum was designed by consultants working for organizations that don't understand software development. So they designed a process that can make sense to people outside our industry. In order for it to go well, though, the person running the Scrum process needs to know how shit really works. This is rarely the case.
> Keep in mind that Scrum was designed by consultants working for organizations that don't understand software development.
Scrum was "designed" by Ken Schwaber, a software project manager at the time looking for a better way to control software development processes.
Schwaber discovered he could use an empirical process, rather than a defined one, to control software projects. He created Scrum following these principles.
Good point, thanks. I was thinking of XP coming out of Chrysler.
I'll stand by the last two sentences of that comment, though.
> the person running the Scrum process needs to know how shit really works. This is rarely the case.
I agree; while it's a very simple process, people who don't get it tend to complicate it. They focus on the process instead of the result.
Most failed scrum teams I've seen out there skip the retrospective meeting, which is in my opinion, the key of the whole thing: that's the meeting where team members usually realize they "own" the process and can steer it any way they desire.
managers like it. control. results. evaluation. they get to treat programmers like laborers. programmers don't really have anything to gain with it. what surprises me, is that sometimes programmers ask for it. it illustrates how strong the desire is to follow the latest trend.
>>> what surprises me, is that sometimes programmers ask for it. it illustrates how strong the desire is to follow the latest trend.
While I'm far from a Scrum fan, it's not necessarily the worst thing in the world. For instance, it's probably better than "just do what the project manager says", especially if he tends towards micromanagement. And double-especially for people who thrive in group discussions but have trouble managing 1:1s.
I suspect the trades also look different depending on level of experience. Scrum, at least in some incarnations, tends to be quite egalitarian, so I can see it appealing to more junior devs in environments where they feel the seniors get to do all the interesting stuff.
If you have a management team that is very weak when it comes to planning, agile does help a lot by allowing the world of actual work estimates to enter the equation; within the sprint structure you can broach the question 'what gets bumped to do this?' If a management team has an endless leash, they just stack a thousand different arbitrary pieces in a pile, call it a release, and set a deadline. Using agile enforces a workflow for short-term planning where one didn't exist.
Maybe it's me, but the real issue with professional scrum masters is that it's rare to find any that practice agility as a way of life.
The majority of professionals I've met appear to lack any ability to improvise; they literally just follow the book on what to do, without any critical analysis of how their situation fits the defaults provided.
Basically, my issue with scrum is its community.
Superficially, the Google term you're looking for is "cargo cult".
But how to fix a cargo cult depends on the root cause, and there are many:
1) Authoritarianism, my boss said read this book, this book said XYZ, therefore XYZ is true. How could the book possibly be wrong? Or for book, insert rockstar ninja consultant or training class or whatever.
2) Religious belief, sure none of this makes logical sense but if you have blind faith and are not one of those apostates, it'll work just fine. Just relax and perform the ritual.
3) Deep game, where we actually run on anarchy or waterfall but someone needs XYZ on their resume so superficially we've skinned it as XYZ. Dig enough and you'll find what really runs the place is something else.
4) Blame games: a large part of management is how blame is distributed, not just how work is distributed, and when a subgroup who think they're agile-ing to distribute work comes into contact with a group using agile mostly to distribute blame, to stall for time, or just to goof off on company time, it's kinda like matter meeting antimatter.
I like how Scrum promotes transparency and trains each individual to become self-managed (or at least getting better at it). But it often feels like it's being hyped up, with many people getting obsessed over its practices/rituals.
Estimating work is a nice idea, but using 'story points' for it just feels like a bad API decision - I have never seen a new Scrum team member (including me) get it on the first try.
And maybe it's just me, but having a full time, dedicated Scrum Master, who serves solely as a facilitator (without having any actual role in product delivery) is a nightmare, especially for teams that are used to Scrum practices.
"I have never seen a new Scrum team member (including me) getting it at 1st try." - I've never seen one get an accurate time based estimate either. Even experienced ones.
That's one of the reasons that Scrum doesn't use time-based estimates.
I am a SM and a dev and it's not at all a comfortable position to be in, because it's often making demands on my time and there are conflicts of interest (but not as bad as SM/PO or SM/manager).
The SM is a major piece of the Scrum puzzle, and a lot of practices lose their value if the SM is not paying attention.
This over-reliance on the SM is a minus for Scrum IMO.
do we see bank analysts playing 'planning poker' when it comes to putting together pitch decks or research analyses? no. and they'd transfer out of any group where a manager tried to implement such a practice.
if a software engineer is more than a couple years out of college, i don't think she/he should put up with it either.
Good to know I'm not the only one who feels this way. When I was battling my way through calc III or setting up 1000-node monte carlo network simulations for my master's degree, I never expected to be treated like an auto-assembly-line worker, but here I am, punching a virtual clock (er, "timesheet") day after day, trying to come up with excuses for my existence in yet another daily standup.
I am just baffled by the assumption that such practices only exist to piss off engineers.
I do a bit of planning poker with my team every couple of weeks, despite being a pretty experienced engineer, and it's great. It's been good at exposing assumptions or concerns that members of the team have, or about confirming that everybody is on the same page wrt. what the complexity of a particular task is likely to be.
I reckon that more groups should be doing this, and not fewer.
> do we see bank analysts playing 'planning poker' when it comes to putting together pitch decks or research analyses? no. and they'd transfer out of any group where a manager tried to implement such a practice.
More fool them if so. I think pg said something along the lines of "unprofessional is what you say when you don't like something but don't have any real criticism".
These are just games designed to avoid some estimation biases. The problem with biases is that one still can be caught by them even if they're aware that they exist, so such games might have some value.
An alternative would be to explain what the biases are and address them in each estimation round. But who does that really?
We've settled on playing games, which can seem childish at times.
However, most of these biases are slightly recursive: it always takes 20% longer than expected, even when you take into account that it takes 20% longer.
Probably because of Parkinson's Law.
> do we see bank analysts playing 'planning poker' when it comes to putting together pitch decks or research analyses? no.
Most industries (don't know about bank analytics in particular) are more personality driven and/or more repeatable. There are few industries that:
- make entirely new things on a regular basis
- are mysterious to laypeople
- cannot ship 50% of a solution and get 50% of the value
I don't think those are all that true of software in general. Most software isn't particularly new: how many "Yet Another CRUD app" projects are there? And with software you can ship a percentage of the features and add more later, unlike (say) hardware manufacturing, civil engineering, chemical engineering or architecture.
Maybe software is reasonably different in that there are a lot of management level people with little technical knowledge of how the work gets done? I don't know how that compares to other similarly technical industries.
OK, medicine in general, obstetrics in particular. Or perhaps dentistry. Certainly the whole pathology sector.
John Cleese (Monty Python) in "Meetings, Bloody Meetings"
https://www.youtube.com/watch?v=ZWYnVt-umSA
You need to rent this video and show it to the team. (and rent "More Bloody Meetings").
Scrum is a management technique, not related to programming. In fact, it is a "micro-management" technique. However, those who lead don't seem to know how to conduct meetings. I don't see Scrum training that even hints at the fundamentals of management, or even the fundamentals of meetings.
So I've never done Scrum-with-a-capital-S, but I've worked in environments that practice by-the-book Extreme Programming, as well as other flavors of Agile.
I agree with a lot of what the author is saying. They're describing a process that appears to be relatively broken, and identifying some good reasons why they're broken.
What's missing is flexibility. As per e.g. the Agile Manifesto (http://www.agilemanifesto.org), the entire point of frameworks like Scrum is to provide a loose set of processes that you adjust and tweak to your specific team. If you're blindly following the process, and slavishly adhering to things like how precisely your daily standup MUST work, you're following the letter but not the spirit. You're being "Agile" with a capital A, but your team isn't actually being "agile" (the adjective), which is actually the important bit.
The correct response to "our team is unhealthily obsessed with story points, and yet doesn't get value out of them" (or standups, or retros, or whatever) isn't to say "story points are broken". It's to say "this bit of process isn't helping us right now. What can we do to modify the process to provide more value to our team?"
No methodology can save us from managers that don't understand what they're managing. For various reasons this is more common in software development than other fields.
I find points far more useful than time based estimates. The trick is to use points like categories and not try to treat them as 1 point = N of something.
During my training the analogy was to categories of dogs based on size. A Chihuahua is maybe a 1 and a Great Dane is maybe an 8 (and there are seven categories of dog size between them). But a Malamute might also be an 8: even though it's shorter than a Great Dane, it weighs about the same because it is more thickly muscled. Neither of those dogs is equal to 8 Chihuahuas.
The point being that if you categorize stories like this, treating point values as categories rather than as 1 point = N of X, then that makes pointing a lot easier and a lot faster as your team builds up a methodology (what categories your team uses and what they mean is up to you, it doesn't matter as long as you are consistent.)
This leads to meetings taking less time, and leads to a more consistent velocity than if you do something like 1 point = 2 hours or 1 point = something else rigid.
If Scrum doesn't work for your team, by all means abandon it. But, to me, it sounds like his team is using it badly. It sounds like they are being really rigid, and it sounds like a few adjustments could lead to a lot more happiness. But maybe not.
It's worth mentioning a somewhat dated post "Thoughts on Scrum" [0] by Jethro Villegas [1] who works on Mozilla Firefox and earlier managed engineering for Flash Player.
I agree that there are individuals and companies who have built such rigid structure and expectation into their Scrum method that it can no longer be considered an agile approach, missing two of the key values of agile approaches:
- Individuals and interactions over processes and tools
- Responding to change over following a plan
Now, I cannot say that Scrum in the abstract is a bad approach to attempting to be more agile. Many of us have employed some form of it with varying degrees of success, but the expectations on software developers from a business perspective are more rigid, for business-development and traceability purposes, which can, on occasion, omit consideration of resource demands and complexities.
Often, I believe, the underlying cause is a lack of understanding of the purpose of the agile manifesto, and poor implementation of a method at the top of the developer and management ladder.
In the post, the author cites a meeting of three hours or more where he feels there are too many people present. This would be an item for the retrospective, feeding the next iteration, but he only says "Blergh" about the needed feedback.
He states that the standup is more ritualistic, but the point is to ensure each member understands where the sprint stands, to inspire collaboration, and really to take ownership of the code. It could be that the team size is too big, or that the team really has not taken ownership of their code.
When all is said and done, you have to buy into the Scrum process, the process has to be flexible, and the stakeholders have to not believe it is a magical development method that fixes all the issues of the software development process.
His complaint about the standups interrupting productive work rings a bell with me though. I see no reason why a daily status update over the general channel on slack doesn't serve the same purpose as a standup meeting. In fact, make it so there's a loose timeframe, say 30 minutes, and you wouldn't be interrupting people, nor forcing them to stare at a wall.
My team hasn't tried anything like that, but I think it could really be useful. It encourages better communication throughout the day as well, as it gets people using Slack. Plus, it gives managers the ability to keep track of what people say each day.
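For anyone wanting to try it, here is a minimal sketch of the idea, assuming a hypothetical Slack incoming-webhook URL (the channel, prompt wording, and scheduling are placeholders you'd adapt):

    # Post a daily async-standup prompt to a channel via a Slack incoming webhook.
    # The webhook URL below is a placeholder -- create a real one in your workspace first.
    import requests

    WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # hypothetical

    def post_standup_prompt():
        message = (
            "Daily async standup -- reply in-channel within the next 30 minutes:\n"
            "1. What did you finish yesterday?\n"
            "2. What are you working on today?\n"
            "3. Any blockers?"
        )
        resp = requests.post(WEBHOOK_URL, json={"text": message}, timeout=10)
        resp.raise_for_status()

    if __name__ == "__main__":
        post_standup_prompt()   # run from cron or CI at your chosen standup time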
For me, the daily status meeting at my current job has been the only time I can guarantee the team lead will be available to answer questions, as he tends to get pulled into meetings (or is busy marking up tasks or working on code reviews) most of the rest of the day.
At this place, where there are lots of different codebases of varying quality, and where the approach to how features are implemented has shifted over time, I tend to have questions most days.
I have worked at companies where we were only really working on one thing and I was familiar enough with it that I really didn't have to ask any questions (they also didn't care much about how it was implemented, just that it worked), and daily status meetings were pretty much a waste of our time, eventually becoming more like once every other month.
> I see no reason why a daily status update over the general channel on slack doesn't serve the same purpose as a standup meeting.
I haven't used slack in anger. When you need help, how do you tell the difference between a teammate being busy, unavailable, or just too distracted to respond? Part of the point of the in-person standup is to remove ambiguities about this at least once a day.
We've tried that using Slack (and also IRC in previous projects, perhaps even email at some point). It works well, and I don't remember when I did a face-to-face daily standup the last time.
In the commentary for a previous story, another user mentioned that they have a "today-I-will" room in Slack. I've been itching for a chance to bring that up with my management.
"the team size is too big" - this is a very good point. Towards the upper limit of the recommended Scrum team size (3-9) they can get really unwieldy. I've been to a standup with an even larger team, which included a couple of design and marketing folks, and it was unbearable. The recommendation's there for a good reason.
I want to make bad, difficult to maintain code, I want to spend more money doing it, and want to extract all of the enjoyment out of programming for those on the project so that innovation is stifled. Would Scrum be a good choice?
One of the first things I was taught when I did my first Scrum Master course was that scrum is just guidelines. Teams are self-organizing and need to do what works for them.
Most of the criticisms in this piece can be resolved by team members adapting processes so they work for the team.
I've also encountered most of those problems in non-agile, non-scrum teams as well. Instead of obsessing over points, it's hours.
Meeting hell can happen anywhere. Good employees will tell their managers when meetings become an impediment. Good managers work to fix it (and pro-actively try to prevent it).
I always find it odd when somebody complains that sprint goals need team buy-in to change. Why is that a bad thing? Adjust and move on. Sometimes things don't go as planned, sometimes they go well. Hopefully, it all averages out at the end.
Anyways, I guess I view scrum (or any other methodology) as a loose set of guidelines, not strict rules that must be enforced at all costs. If management is forcing the rules, even when they are becoming impediments, that's a sign of a management problem that is unlikely to be cured by a change in methodology.
Exactly true. I hate to say "that's why all of these articles are dumb", but really - is adapting something to your people that hard or unexpected?
Scrum/Agile-English Dictionary http://reddragdiva.dreamwidth.org/594955.html
But with our new Silver-Bullet Development Methodology™, it's a completely new world where none of your decades of experience apply! This changes the nature of development forever!
p.s.: we sell certifications!
(repeat for a new Silver Bullet roughly each decade)
>The daily standup deserves a blog post of its own
I found this to be a really good article about standups: http://www.yegor256.com/2015/01/08/morning-standup-meetings....
Scrum is the worst possible development methodology, except for all the others.
There's a lot of development methodologies and if you actually think this you probably haven't seen many of the others.
It regularly seems to me that people think that there's Agile/Scrum and there's Waterfall and that's it. The first year engineering design course (and the first year programming course) in my department cover a number of different approaches.
That's a great idea if you don't have any customers or constraints.
In the real world, there are externalities that have an impact and require some estimation. Maybe we have to provide new support materials to customers, or finish a contract with a third-party data provider, or change our infrastructure. It's exceptionally hard to do some of these things without being able to make commitments of some kind. Estimates allow us to create a guideline, and subsequently alter either the thing we are going to deliver (i.e. dropping lower-priority features) or the time we are going to deliver (i.e. missing the deadline).
If I hired a builder, and he told me that he refused to estimate the amount of money or time it would take to construct my property, you can be sure I'd move on. I have no idea why we'd consider it appropriate for an engineer to do this.
>If I hired a builder, and he told me that he refused to estimate the amount of money or time it would take to construct my property, you can be sure I'd move on. I have no idea why we'd consider it appropriate for an engineer to do this.
When every builder (i.e. engineering team) anyone ever hired everywhere blew through their estimate ~70% of the time, perhaps it's time to reconsider that stance. See the Standish Group's annual CHAOS Report for an example of software project success statistics, e.g.: https://www.infoq.com/articles/standish-chaos-2015
Estimates are just mutually agreed upon lies that engineering teams tell themselves.
This is exactly why waterfall works for building a building, and why it doesn't work for building software: You can directly observe progress.
Some of the better aspects of Agile are aimed at providing an analogous level of obviousness: E.g. don't move on until a story is completely implemented. This means you see schedule risk as it happens. This also avoids the way waterfall projects used to die, like the one I saw at Lotus while consulting on Mac porting, back when using conventional project management tools for software was a progressive management practice: a Gantt chart with hundreds of lines corresponding to tasks. Each progress bar partly filled, most of them 70-80%. Yet the project was dead because it was obvious it would not be completed in a relevant time-frame.
It is far too easy to game completion tracking in big, complex projects if you don't decompose them into small sequential projects. BUT, even though each smaller project has schedule risk that is reasonable, when you add up the schedule risk across all the sub-projects, if you are honest, it will be large enough that an estimate is going to have low reliability.
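To make the risk-aggregation point concrete, here's a toy Monte Carlo sketch (my own made-up numbers, not anything from the article): if each task's duration is right-skewed, a total built from "typical" per-task estimates will reliably understate both the median and the tail of the real schedule.

    # Toy illustration of aggregated schedule risk with made-up lognormal tasks.
    import random
    import statistics

    N_TASKS = 20      # hypothetical number of stories / sub-projects
    N_RUNS = 10_000   # simulation runs

    def task_duration() -> float:
        # Median ~2.7 "days", but with a long right tail (the occasional blow-up).
        return random.lognormvariate(mu=1.0, sigma=0.6)

    totals = sorted(sum(task_duration() for _ in range(N_TASKS)) for _ in range(N_RUNS))
    per_task_median = statistics.median(task_duration() for _ in range(N_RUNS))

    print(f"sum of per-task 'typical' estimates: {N_TASKS * per_task_median:6.1f} days")
    print(f"median total:                        {statistics.median(totals):6.1f} days")
    print(f"90th percentile total:               {totals[int(0.9 * N_RUNS)]:6.1f} days")

Each individual estimate looks reasonable; the aggregate is still unreliable, which is the point.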
If you could count the joists and studs and shingles needed for a computer project, sure you could make good estimates. But more often there's no architect drawing at all. Just some desired behaviors. "How long to build a house that will bring joy and success to the family that lives there?" You can understand when the builder hesitates to name a price, given a spec like that.
> The use of story points appears to be one of the defining features of Scrum.
Practice and experience may have led to this conclusion, but it is good to know that the (original, Scrum-defining) Scrum Guide, http://www.scrumguides.org/scrum-guide.html, doesn't even mention or define "story points". Obviously, there is planning and estimation involved in Scrum, and story points can be a metric used for "amount of work"... but it's not a defining feature.
Just read the summary which I highly agree with and will quote here:
> So, in summary, Scrum
* wastes too much of the developers' time for management
* does not lead to good quality code
* is a control freak which does not leave room for new ideas and innovation.
There are some valid criticisms here, though on balance I think the author enumerates costs without considering benefits. But the biggest complaint is a result of a common misconception: that scrum is somehow a replacement for software design. It is not, nor is it intended to be. The author seems to have a glimmer of understanding that this is the case:
Scrum, however, is much closer to the problem solving approach, where analysis (breaking down a problem, and reassembling the solution) is the organizational tool. In order for an interpretive community to emerge, an organization needs ambiguity, open-ended conversations, and alternative perceptions. All of this, Scrum leaves to something else, whatever it is.
But then goes on as though Scrum precludes any kind of thoughtful design or experimental innovation. Nonsense. Nothing in Scrum suggests that a team mindlessly go through sprint after sprint without ever pausing to have design sessions or build prototypes. And there is scant mention of the product owner at all; it's as though the author believes that software is designed by developers, for developers.
Scrum is a way to implement a software design, flexibly and with the understanding that the design will change along the way. How you decide what to implement, how you arrive at that initial design, is up to you. If your organization doesn't understand how to create and manage the abstractions that make up a software application to begin with, then Scrum isn't going to save you.
As usual, criticism is strongly shaped by experience. My experience is also limited to three or four workplaces, but I have never, ever, seen this "race for points", and definitely not points being "awarded" at review time. The points don't mean anything by themselves; they are an abstract measure of effort whose value fluctuates wildly, influenced by team dynamics, individual productivity, the type of work being done, etc. Failing to understand that is starting off on the wrong foot.
This article is, rightly so given the author, written from the perspective of an individual contributor on a Scrum team.
First, the author refers to Scrum as a methodology. It is not. Scrum is a framework, and being based in agile principles, it is much more a way of thinking and working. It should not be used as a purely do-this-then-that prescriptive approach to getting software built. As we can now see, there are many organizations that tout being "agile" when their true behaviors and output are merely agile window dressing. Also known as "fragile" instead of "agile."
I disagree that Scrum is not useful to many organizations and teams. In particular, it helps organizations that basically lack any process (trust me, I've been in a number of them), where it's kind of a free-for-all, or where whichever executive screams the loudest gets what they want when they want it.
What is missing here is why Scrum can prove useful, especially in terms of velocity. A team's velocity can serve as an underlying basis for projecting how long things will take. Like it or not, internal "users" or "stakeholders" in an organization, as well as many external ones, expect to get some idea of when things are going to start happening and when things are going to be done happening, for particular features or commitments. As expected, this involves being able to at least intelligently (and based on historical data) make reasonable predictions about dates.
Entire books have been written about software estimation and no framework gets it right.
After working in Scrum for over a year, I find myself in violent agreement with every point in the article.
For me, Scrum has always been nothing more than an “Agile Bootstrap”. The core idea within it is very simple: try something and examine the results. Stick with what works and try alternatives for what doesn’t. Repeat. And, above all, learn.
So Scrum starts you off with a list of practices to get going with. The whole litany of backlogs, story points, sprints, daily stand ups and retrospectives is nothing more than a vocabulary - a pattern language for stakeholders. It’s got pros and cons - a good way to bring people together but it needs to be recognised as a crutch and ditched quickly before the team comes to treat it as an article of faith to be defended at all costs.
And this is where so many agile coaches fail in my (limited) experience. When I was undergoing scrum training, it was hugely disappointing to hear colleagues asking questions of the trainer along the lines of “Can a scrum master multi-task among teams like an anaesthetist in a hospital?”. Or “What does Scrum advocate when a production incident requires someone in the team to drop out and assist?”. Or “Can we have scrums of scrums?”.
The answer in every case is the same: try something - anything. See how it goes and decide whether to apply that approach again in future.
In my experience, people matter more than methodologies. If you have good people on your team, product and dev, then you can make any methodology work. If you have bad people, no methodology will work. Advocating for one methodology or the other is far less important than hiring the right people.
That said I prefer Agile to Waterfall by a lot, and Agile developed in reaction to Waterfall so I think it was a success.
It may be that Agile developed in reaction to Waterfall, but that just says to me that the creators didn't look at all the other methodologies out there.
Scrum definitely sucks, but it sucks less than most of the other options.
Kind of like democracy.
If you don't like meetings, planning, or giving estimates, you probably don't like working on software for a living, because I simply don't know how to build software without meeting with other people, coming up with an idea, and telling the people giving me money when I hope to be done.
Devs and managers are attracted to Scrum for two different reasons:
- Devs like Scrum because of its agility and the thought that they will have more control over how the product and codebase evolve, and the freedom to architect and develop the way they see fit.
- Managers like it because it gives them oversight of how the team is performing and the possibility of a more accurate estimate than a rule of thumb.
In my opinion, Scrum stops working when devs think it means complete freedom and control over the product, and/or managers think it is a magical spell to increase team productivity. Another reason Scrum goes wrong is the religious belief that every rule of Scrum needs to be applied as-is so the magic can happen. Scrum is not a recipe but a guide that needs to be adapted to each team, organization, and context. Trying to follow the rules religiously will always fail.
> First of all, what are story points? Are they measures of time it takes to complete a story? If yes, then why are they not in terms of time? Are they measures of complexity?
Story points are a measure of "relative amounts of effort". A 2-point story takes roughly twice the effort to implement as a 1-point one.
> Scrum meetings (aka rituals) have been among the most miserable hours of my life...
Would you rather have many meetings almost every day, or a single big ol' meeting every few weeks so you can focus on coding without interruptions?
> The review meeting causes utterly unnecessary anxiety (Oh my god, will my feature work?)
It should not cause anxiety because it MUST work. The review meeting is to show stakeholders a "potentially shippable product increment". Everything you show on the review meeting should have been extensively tested already.
> Why estimate stories that you are going to break down anyway?
You only break down stories if they are too big. You are not supposed to work out every detail in a planning meeting.
> but in Scrum, the retro is explicitly supposed to be about the Scrum process itself, not about the codebase
The retro is not about the scrum process but about how it is working for the team. All these issues you have already complained about should be reserved for this meeting where the whole team can decide which adjustments to make.
> The main goal of Scrum is to minimize risk and make sure the developers do not deviate from the plan. I will come back to "Scrum controlmania" later.
It is regrettable that, like all the other "why Scrum sucks" posts I've seen on HN, it all boils down to a bad scrum master not managing expectations or taking the time to explain where the process really comes from.
For some engineering tasks, the work and the planning are almost the same thing. Once you've explored the problem, the code is the small part.
When asked to estimate, the answer is "I'll let you know when I'm getting into it". Then you're made to do a 'spike', which is indistinguishable from doing the task. Except do it in a day now. So the Engineer thrashes around trying to figure it all out in a day, and comes up with some number. The planning phase is now over, so they're supposed to just execute, regardless of what (bogus) number they found. So they start. At the end of the sprint, they've done less than if they'd just started the task the first day. They get 'measured' as a low performer. Of course it's the process that's performing badly, not the Engineer. They get frustrated, resentful, and stop cooperating with the scrum master.
I've seen this so many times at so many places it's just exhausting. No amount of discussion can convince the evangelical Scrum experts that something is wrong with the process.
In agile, the estimates should be decided by consensus of the whole team. Requiring a single engineer to come up with an estimate is a smell.
Here is a quick 4 step planning meeting process based on planning poker:
1. Pick a story from the backlog and explain in a couple of minutes what you think it would take to implement it.
2. Team members pick an estimate of the effort.
3. If there is consensus, note estimate on story, pick next, goto 1.
4. If there is no consensus on estimate, have highest and lowest explain why they picked their estimate, goto 2.
If you are spending more than 5 minutes on a story and there is no estimate, push it down and move on. It's not ready for development. The whole planning meeting should not take more than an hour.
This process allows everybody involved in development to get an idea of the overall project and chip in with their experience to influence estimates (this might require refactoring class X, it might be hard to test, it will conflict with Y and Z, etc.).
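To make the loop concrete, here's a rough Python sketch of that 4-step process (the Story class, the collect_votes hook, and the round limit are all hypothetical; real teams do this with cards or a poker tool, not a script):

    # A rough sketch of the planning-poker loop described above.
    from dataclasses import dataclass, field

    MAX_ROUNDS = 3  # after this, push the story back down the backlog

    @dataclass
    class Story:
        title: str
        estimate: int | None = None
        votes: list[int] = field(default_factory=list)

    def estimate_story(story: Story, collect_votes) -> bool:
        """Run up to MAX_ROUNDS voting rounds; return True if consensus is reached."""
        for _ in range(MAX_ROUNDS):
            votes = collect_votes(story)      # one estimate per team member
            story.votes = votes
            if len(set(votes)) == 1:          # consensus: everyone picked the same value
                story.estimate = votes[0]
                return True
            # No consensus: highest and lowest voters explain, then everyone re-votes.
            print(f"{story.title}: spread {min(votes)}-{max(votes)}, discuss and re-vote")
        return False  # not ready for development

collect_votes would be whatever mechanism gathers one estimate per person simultaneously, so nobody anchors on anyone else's number.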
And that's where the 'incremental tasks with no architecture' part comes in. When the scheduler needs to be rewritten to support some robustness requirement, or when the server feeds need to be made redundant, maybe only one team member has any real idea what it takes. Coding isn't always about putting up another web widget or making another database entry.
That's the point: while there might be just 1 team member with a clear understanding of the effort, everybody gets to pick an estimate based on their own knowledge of the subject matter, even if they know nothing.
The exercise of just enumerating, in a couple of minutes, all the steps to complete such a story gives everybody, including the one who is going to do the job, a clear idea of the effort, or of whether they need more information before proceeding.
Btw, while Scrum is great for software development, it is not ideal for maintenance and/or infrastructure management, where a defined process such as waterfall might work better. Trouble tickets and bug fixes should be kept outside the sprint unless they become stories.
Again, for a simple story where everything is known, sure. But some stories are all about figuring out how and what. The actual doing may be simple. The Scrum process fails.
I guess there are many ways to use points, but what was described in this article isn't the way we use them. Points are great for a group of people grooming a story: they are an estimate of a feature's difficulty. The way we do it is this: we have a story that describes what feature will be built; as a group we talk about it, and then all at the same time (so we aren't persuaded by others) we rate the story with points (level of difficulty). If all of us developers rate it at the same points, we assign that estimate; if we have different opinions, we discuss why we think it's easier or harder. This helps whoever wrote the story foresee issues or learn of other, simpler solutions. Works great if you ask me, and that's how we were taught to do it.
"As any developer will tell you, software development is a marathon, not a series of sprints."
I would argue with that.
We use Kanban among our team. Our favourite tool is www.kanbanery.com. It helps us with our everyday work and lets us track our progress. Before we started working on a kanban board, we always had a problem kicking off our huge projects. Now we convert plans & ideas into doable tasks, so we can easily kick off a project. We love working with Kanbanery because it has many features that make our work easier, like priority & estimation markers to show how important and complex a particular task is. In case of bottlenecks, instead of sending a long email with an explanation, you can use the blocker attribute to let your team know what's going on.
So all in all I still think that Scrum & Kanban can make your life easier.
The aha moment for my team, which changed us from a mindset of dealing with Scrum to enjoying it, was when we moved from aiming at a number of story points to complete to committing to the stories that we could individually complete in a sprint, without concern for the associated story points.
All of a sudden, story points were dissociated from time. They just became a number: a combination of complexity and understanding.
A lesser, but still important, shift came with our changes to backlog grooming, when part of that grooming began to include planning poker to estimate points. This spread the chore out, and our sprint planning is now approximately half an hour as the team goes down the prioritized list and commits to stories.
Is each developer just working on as many stories as he can in a sprint, picking from a prioritized list? Because otherwise, how do you estimate what you can complete in a sprint? Don't you commit beforehand to delivering a certain number of stories? And doesn't that translate to a certain number of story points?
We get the prioritized list, we review our other commitments in the upcoming sprint, and starting from the top of the list we go down and developers pick which ones they will work on. Since we've groomed these stories, we have an idea of complexity, and one or more developers may have already taken the time to identify exactly what needs to be done.
We don't commit beforehand to delivering a specific number of stories, or a specific number of story points. That would never work. Rather, we focus on what individuals believe they can achieve. Our velocity has become pretty consistent and our delivery of story points is close to 100%.
I managed a small team of engineers and synchronous daily standups just broke down with remote teams and timezones.
It also just felt unproductive - it became something we did to feel like we had "best practices" when in reality it was a waste of everyone's time.
I got the sense that everyone would essentially forget what they said/heard at the end of the meeting (or was simply tuned out).
I ended up launching a tool to solve this problem: (https://jell.com).
We're a lot happier showing a "todo list" with each other and still have a place to write out the challenges/progress we're making - all while respecting each others time.
You don't need to be remote or across timezones for this to happen. I can't help but feel that standups are a huge waste of time. If it is a thing that really needs to be done, why not do it asynchronously in a slack channel when you show up, rather than interrupting everybody and wasting a collective work-day across the team every day in lost flow time, by doing it synchronously at some always-inconvenient time?
The state of our industry: being a "fan" of something or "not a big fan" of something.
You should not be a fan of a tool, but use the right tool for the job. If Scrum is the right tool, use Scrum. If Kanban is the right tool, use Kanban. If Six Sigma is the right tool, use Six Sigma. If waterfall is the right tool, use waterfall.
Second, if you can't use a tool, don't use it (I've seen companies using Scrum with excellent requirements engineering, rearchitecture/refactoring, headroom, ...)
Been doing scrum forever. Even before it existed. And yet I never had to use points, ever.
Scrum Meeting:
- Follow the list of topics for today
... (may include)
... Show progress and new features
... Get feedback
... Discuss blocking points and resources needs
... Get update on supplier status (if buying/selling anything to 3rd parties)
- Decide what are the priorities for this week
- Schedule the meeting for the next week (send the list of topics you'll talk about)
That's overly trivial. It mostly comes down to scheduling a meeting every 1-2 weeks, showing/saying the current status, and repeating...
No need to call that scrum and put fancy names on it.
I've been noticing more and more that I code around the stories rather than following best practices from the beginning. When I am adding a DB connection and we don't have Liquibase or Flyway in place, I can't add it, because that is a ticket for next week and I need to get this done now. I write code that sucks so that I don't break from my sprint and possibly have to work on something else first.
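For anyone unfamiliar with what "having Liquibase or Flyway in place" buys you, here's a toy stand-in in Python (my own illustration of the versioned-migration idea, not the API of either tool): each migration runs exactly once, in order, and the applied versions are recorded in the database itself.

    # Toy versioned-migration runner; a stand-in for what Liquibase/Flyway provide.
    import sqlite3

    MIGRATIONS = {
        1: "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)",
        2: "ALTER TABLE users ADD COLUMN email TEXT",
    }

    def migrate(conn: sqlite3.Connection) -> None:
        conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
        current = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()[0] or 0
        for version in sorted(v for v in MIGRATIONS if v > current):
            conn.execute(MIGRATIONS[version])  # apply the schema change
            conn.execute("INSERT INTO schema_version (version) VALUES (?)", (version,))
            conn.commit()

    if __name__ == "__main__":
        migrate(sqlite3.connect("app.db"))

It's exactly the kind of plumbing that customer-facing stories never capture.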
I struggle to believe that people follow a management practice that is broken enough that it encourages doing things the wrong way and slavishly adhering to a procedure. Wait, never mind... I just saw my Dilbert day calendar...
I'm not sure I understand the first complaint; nowhere in the official guide are the words "points" or "story points" mentioned. I've worked on at least two Scrum teams that didn't use story points. All the stories were broken down until they were of approximately the same effort, and then we planned sprints based on the number of PBIs.
I've almost always used story points and actually find them useful. As for the author's complaint, for me the key is this:
> First of all, what are story points? Are they measures of time it takes to complete a story? If yes, then why are they not in terms of time?
I don't think I've ever read the official scrum guide, but ever since I started using scrum (around 2004) the concept of story points was clear: They were "perfect days", i.e. days where you don't have any distractions and nothing goes wrong and everything works first try.
As a fairly experienced developer, I can easily estimate how many of those perfect days it will take me to do a task (at least if it's a technology I know). When estimating, the whole team estimates all tasks until there's a consensus (and you are right: ideally, all the stories should require similar effort).
After that, the "perfect day" thing is forgotten, the estimates have no unit and the velocity starts playing it's role.
And here's exactly why I like points: The link between real days and points is lost, it doesn't matter. All that matters is that you completed X points in a specific sprint. Chances are that you'll complete X points in the next sprint. Iterate through a few sprints and you'll have a pretty good idea of what will be finished when.
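For what it's worth, the forecasting step is almost embarrassingly simple once you have a few sprints of history. A minimal sketch, with made-up numbers:

    # Minimal velocity-based forecast; the numbers are purely illustrative.
    import statistics

    recent_velocities = [21, 18, 24, 20, 22]  # points completed in recent sprints
    remaining_points = 130                    # estimates left in the backlog

    velocity = statistics.mean(recent_velocities)
    print(f"average velocity: {velocity:.1f} points/sprint")
    print(f"projected sprints remaining: {remaining_points / velocity:.1f}")

Whether a point was ever a "perfect day" stops mattering; only the sprint-over-sprint consistency does.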
It's a project management methodology, not a software development methodology. Some of these gripes seem to be not accusations of the method itself but of its application by a particular organization.
Interesting to see he thinks two weeks is the default sprint length.
I thought 1 week was the default and it certainly works for my team. The various scrum meetings are short and snappy and have high value.
What's the alternative? Give me a credible one and I will change my mind about Scrum. It's not great, but I need an alternative, one that is better enough to justify abandoning Scrum.
I feel like there's a standard blog post that can be posted in response to opinion pieces like this, and it would be titled "x is not a panacea".
You'd love this https://medium.com/modern-agile
This is so spot on. Use the parts that work, modify to suit the teams needs. I've seen this done to a massively productive and happy team.
Everything I Need To Know About Agile I Learned From Reading "Extreme Programming Explained: Embrace Change, 2nd Edition (The XP Series)", by Kent Beck and Cynthia Andres.
Oh, and from working side-by-side with the best practitioners in the business at ThoughtWorks for three years. But the book is sufficient.
Alright, I know this has already dropped off the front page, but I figured I had something to offer to the conversation...
First, as background, I'm a software engineer, not a "scrum coach" or anything, but I've been on a Scrum team for nine years and nine months. (I know, right? Our first Scrum project was Nov 2006. The set of people has fluctuated over the years but it's still a pretty tight ecosystem.) Just this morning we were requested to make videos about our team(s) to explain why we work so well, so this is pretty apropos.
Second, I really did read the article through and thought it was well thought-out. Notably, I felt they came from a different mindset than mine, a different workplace, and I am a fan of Scrum, so here are my feedback points--I hope they're considered positive, helpful and constructive:
TL;DR: I feel like the author is not empowered in his workplace. Time to upgrade your scrum team mindset.
* Remember that a scrum team is self-organizing and self-managing. Specifically, I'm replying to "The daily standup is in my opinion a manifestation of a significant but unspoken component of Scrum: Control". That statement implies you don't know or don't practice this, and that is likely the actual source of your problems with Scrum.
* You mention disliking pointing stories because they never match up with the points on tasks. Points on stories are done because you haven't created tasks yet, so you can't use task hours to add up to a story, and you need to figure out relative complexity before you start.
* Getting points (RE: "I don't get the exact response that was expected, so no points for you.") - Here, the team decides whether it gets points, not a user in a review. That sounds weird, but so does a user saying "No points for you! (soup nazi voice)" in a review. Stick with me here: If you get feedback that says you need to do significant rework, it's clear there was a misunderstanding between your product owner and the business. Make a new backlog item, point it, prioritize it, move on. (If this happens more than once consider how far apart your vision is from your user and their vision of the project!)
* Demoability as a requirement in Scrum (specifically responding to "How can you demo that your code base has become more habitable?") - Business folk understand the value of "plumbing" (pipes in your house aren't visible, but they sure are handy when you want to take a poo). They don't like showing up to meetings to talk about plumbing, though, so either skip that meeting entirely or tell them what will be possible when said plumbing is complete. Point is, don't die on the hill of demoability just to say "See! Scrum sucks! I can't demo all this plumbing!"
* Meetings every two weeks - If they're painful, you're doing something wrong. If it's not ready, it doesn't go in the demo. Don't kill yourself for a demo. It may be a "sprint" but you're still actually in a marathon, so just pace yourself well.
* You mentioned disconnected users. If you have bored users, get better about grouping the meeting. Split it into two meetings, if need be. We don't; we say "Hey, finance folk, you'll want to pay attention the first 15 minutes of the meeting then you can go. You're welcome to stay, but you'll see app features that won't affect you"
* Daily Standup - For high-performing teams, the daily stand up is training wheels. Anything you learn in a daily standup should make you mad. ("Why did you wait til the daily stand up to tell me you [finished X|were blocked|needed this info|are ready for me to test]!"). Do them if you need to. If you do need them, ask yourself why. e.g. who isn't a communicator? Who would've blown you off were they not in this required meeting? Those guys are your blockade for more than just a daily scrum meeting.
* Sprint durations - You make it sound like you don't have a choice. Your team chooses the duration of the sprint.
* "I find the idea that you should get somewhere by sprinting repeatedly [and the rigidness of items in a sprint] rather weird." The rigidness is there to protect you from outside forces, not prevent your team from getting the job done. e.g. It's there to prevent the VP of Marketing from showing up and saying (psst, hey, could you add a blue link on the homepage?" .. "Oh, that link is wrong, make it into a popout." ... "oh, that popout should be a flash video" ... then suddenly you're missing your deadlines to the VP of Finance. It's there to give you a defense mechanism against folks above you trying to work-around their peers for your valuable time.
* "The Scrum coach will find fifty ways of attacking each and every one of these topics, but all of them will be in the form of one more thing. One more meeting, one more document, one more backlog, one more item in the definition of done. " I don't think so, man. It's all about taking the training wheels off, not adding brakes. Maybe you have people working against you. Working against being in a team. In order to protect you from them you're all being laden with extra BS. For us, every painful thing that slowed us down was cut. Blocades were fired. The world got good.
* "Every story in scrum has to end in customer value. [...] why even bother with refactoring?" The team can be a customer. "As a software engineer, I must refactor my [blah] to facilitate [blah]." No need to jump through artificial end-user-centric hoops. When/if you mention it outside the team, just hand-wave them and call it "plumbing." Everyone understands plumbing (as in, pipes in your house you can't see but appreciate every time you flush.) You still get points for work done, you still get value, you're still being a responsible engineer keeping their house clean.
* "What is the job of a software developer? Writing code? I don't think so. I think it's inventing and customizing machine-executable abstractions" Not to be trite, but I'd say the job of a software engineer is to offer solutions to business problems. Usually this is by way of "I've got a hammer so let me hit that hangnail for you", but really, that's all we are, is problem solvers... but... shrug
Re: Ideas for Alternatives:
* "One way to achieve this might be putting work items through what I would call an algebra of complexity, i.e. an analysis of the sources of complexity in a work item and how they combine to create delays. The team could then study the backlog to locate the compositions that cause the most work and stress, and solve these knots to improve the codebase. The backlog would then resemble a network of equations, instead of a list of items, where solving one equation would simplify the others by replacing unknowns with more precise values." I'd love to see some practical examples of this. It sounds more complicated than the simple off-the-hip-shot estimates we get with points, but the idea has promise.
* "The other proposal I would have is to get rid of the review, planning and stand-up meetings." You should be doing these judiciously anyway, not dogmatically. Free yourself of the chains of dogma and just do these when they have value. The thing is that they ARE good training wheels. If you're not in a high-performing team and you skip straight to "just do it when you need it" then you never get into practice, you never get used to them, you ... never do them. They do have value, and you should do them when you can demonstrate value in them. So... Ascend when you're ready to.
It's a dev methodology not a religion :/
Right; religion only makes you go to confession once a week.
The most important agile practice, imho, is far and away the Retrospective. A good retrospective process will fix all other problems.
In the post, you sort of dismiss it out of hand because it is supposed to be a discussion limited to scrum, but that is an arbitrary and self-imposed restriction. You are doing it wrong. Remove that restriction and the doors will open up.
The retrospective is about creating time for the team to review what works and what doesn't about the software development process in general (no need to limit to scrum).
I've seen retrospective discussions veer into company culture, the need for faster hardware, testing processes, etc. Anything related to making the software, the software development process, or the team better should be on the table.
A good retrospective enables the team to use its sprints as experiments to try different things, to evolve the practices to better fit the needs of the team and organization. If stand-ups aren't working, go a sprint without them, or try doing them differently -- whatever change would address the weakness identified by the team -- then in the next retrospective reflect on whether it actually improved things. Rinse and repeat. Done properly, a good retrospective will enable your team to evolve and get better and better with every sprint.
Points can be frustratingly fuzzy, but they serve a valuable purpose. They give a measure of team productivity (e.g. velocity), even if imperfect. Yes, points can be gamed, but a team should learn quickly that gaming only serves to fool themselves. Because they can be gamed, I'm not a fan of using points as an externally visible "vanity metric". They should only be used or shared within the team; they should not show up in performance reviews or presentations to management. But points can be very useful as an internal metric for the team to measure whether or not tweaks to the process made a positive impact. You need some way to objectively measure the success or failure of your process experiments.
Of course, any metrics used are themselves certainly deserving of scrutiny and fine-tuning, as poor metrics lead to poor decisions. To me, the benefit of points (or t-shirt size estimates) over something like hours is that in software development, hour-precision estimates give a false sense of accuracy. It requires more effort to provide more precise estimates, yet they won't be any more accurate (given the inherent non-repetitive nature of software development). Thus, the use of a coarse-grained measure like points serves to reinforce the notion that the estimates are inherently imprecise and that we don't want to waste effort on higher-precision estimates.
All that said, whether or not to use points or any measure of team productivity at all is a team decision. If the measure isn't serving the goal of improving the software and the software dev process, then change it or get rid of it. That's the beauty of the retrospective -- it explicitly encourages this sort of process-hacking and fine-tuning.
Embrace the Retrospective!
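As a small illustration of using points purely as an internal yardstick for those experiments, something like the following is enough (the numbers are made up; the point is only comparing before vs. after a process change the retrospective agreed to try):

    # Rough sketch: compare velocity before and after a retrospective-driven change.
    import statistics

    before = [18, 22, 17, 21, 19]  # points per sprint before the experiment
    after = [23, 24, 21, 25, 22]   # points per sprint after the experiment

    def summarize(label: str, velocities: list[int]) -> None:
        print(f"{label}: mean {statistics.mean(velocities):.1f} points/sprint, "
              f"stdev {statistics.stdev(velocities):.1f}")

    summarize("before", before)
    summarize("after", after)

If the numbers never leave the team, there's no incentive to game them.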
mandatory link to alternative: http://programming-motherfucker.com/
I got my first programming job, while still in college (as part of a co-op/internship program) in 1992, for the U.S. department of defense. As you can imagine, everything was BDUF there. It was very frustrating and the most annoying part was the massive disconnect between what the users expected, which was that they could suggest a change and see it on their desktop an hour later, and the reality of change and release management, regression testing, unexpected side effects, code entropy, etc. etc. Some time around 1999, I started to hear about "extreme programming" (XP), and when I looked into it, I breathed a huge sigh of relief, because here were people whose opinions people seemed to be paying attention to making the same observations that I was, but far more eloquently than I ever could, in a way that seemed to be resonating with users and project managers. I guess I'm too much of a "glass is half full" type, though, because it didn't take too long before XP gave way to "agile methodologies" which became "scrum" which is really BDUF with micromanaging daily standups.
After reading this complaint-based article, my only question for the author is: what's your fix/suggestion? Otherwise, I don't get the point of writing this article.
Describing drawbacks and problems is value added. Theoretical physicists regularly know about problems for decades before solutions are found. Precisely describing the problem can get the intelligent diaspora organized around finding improvements to the situation.
Yes, thank you. The attitude of 'you may not describe a problem without already having prepared specific, actionable solutions to it' is limiting and only really appropriate in situations where time comes at a very high premium generally, i.e. meetings and such, or the value of actually solving the problem is too low wrt time spent coming up with solutions.
In almost every other situation, writing articles on the internet included, simply describing perceived problems so that you and others may consider solutions to them immediately or at a later time is extremely valuable, and is a necessary (but insufficient) condition of progress being made.
Did you just compare a project management methodology with theoretical physics? Sounds to me like you're comparing apples with oranges. Sorry, I don't follow you.
He did just compare problem solving with problem solving. Sweeping problems under the carpet does not get them solved.
I came up with an example of where problem descriptions were useful.
A more relevant example would be The Mythical Man-Month. In many ways, agile and scrum are iterations on solving that problem: project-based knowledge work is hard to estimate and challenging to scale in certain ways.
He makes suggestions at the end, in the section "Ideas for Alternatives", after the sentence "The default answer to any substantial criticism is "What is your alternative?""
In summary, he suggests eliminating the review, planning and stand-up meetings (replacing them with asynchronous & as-required meetings), increasing the frequency of retrospectives, and spending time analyzing the backlog to see what the underlying causes are and tackling those.
Honestly, if you can get away with it Kanban + NoEstimates + Continual Releases.
Scrum becomes useful when you are required to plan (or attempt to plan) schedules for certain features far in advance.
Deconstruct Scrum and perhaps something like RUP/DAD (Scrum doesn't tackle the whole lifecycle) into components and understand what the purpose of each component is.
Start from Kanban and add stuff until you get a process that matches the criteria of the organisation.
This requires a lot of skill though, so many teams are stuck with one-size-fits-all tools like Scrum, especially now that it's become a management buzzword. And it does a not-too-bad job of delivering something of passable quality not too late. :)
Instead of inventing my own system, I would go with the (rather new) GROWS Method[1], which is designed to be adapted to suit your needs (but only after a certain level of experience is gained).
GROWS splits its components into stages, through which your team progresses as they become experienced with the various practices. The stages are:
Stage 1 is very simple and easy, and is about getting your team set up on good practices (e.g. source control). At this stage, you should be following it rigidly.
Stage 2 is about making sure the right people are working on the right things at the right time. A rigid but simple agile-development system.
Stage 3 is about applying judgement and critical thinking to stage 2, adding release planning, retrospectives and other plan-and-feedback practices
Stage 4 is the stage where your team is experienced enough to tune and alter the practices so that they work best for your unique situation
Stage 5 is when you're ready to replicate your team's practices for other teams or environments.
I like it because it acknowledges that no one-size-fits-all system can be perfect for everyone and provides a framework that allows itself to be changed, but does so around a "skills model" to prevent premature change.
"This requires a lot of skill though"
Exactly. Which, in my experience, most teams do not have. All too often have I been in a team practicing some form of agile, but they don't like X and would prefer Y and so they change the framework to suit their needs and then it fails and they blame the framework, saying it doesn't work, when in reality what happened is they changed and broke it, because they are not (yet!) experienced enough in that framework to actually tweak it to their needs. GROWS is designed with this in mind.
[1] growsmethod.com
I have been at two different orgs, one using Scrum and one using Kanban. Working on the team that used Kanban felt like a sunny afternoon in a park: you lose awareness of time and just want to lie back and work on a few things for a long time, until suddenly your manager tells you there is a timeline that nobody knew about until the CEO kept asking (yep, a startup).
A Kanban board should be customized by the team to help them achieve their goals.
E.g., if timelines are important, a due date should probably be added to the tasks/post-its.
Assuming some tasks are more urgent than others, the todo column should be sorted, or tasks could be assigned priorities.
Adding a limit to the done column which triggers an event like a retrospective/celebration can help avoid the feeling of an endless grind.
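A rough sketch of those customizations as data structures (the names and the done-column limit are hypothetical; any board tool, or even post-its, can express the same thing):

    # Toy Kanban board: prioritized todo, due dates, and a done-column limit
    # that triggers a retrospective/celebration.
    from dataclasses import dataclass, field
    from datetime import date

    DONE_LIMIT = 10  # when reached, hold the retrospective and clear the column

    @dataclass(order=True)
    class Task:
        priority: int
        title: str = field(compare=False)
        due: date | None = field(default=None, compare=False)

    @dataclass
    class Board:
        todo: list[Task] = field(default_factory=list)
        doing: list[Task] = field(default_factory=list)
        done: list[Task] = field(default_factory=list)

        def add(self, task: Task) -> None:
            self.todo.append(task)
            self.todo.sort()  # keep todo ordered by priority

        def finish(self, task: Task) -> None:
            self.doing.remove(task)
            self.done.append(task)
            if len(self.done) >= DONE_LIMIT:
                print("Done column is full: time for a retrospective/celebration!")
                self.done.clear()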