Code quality: measuring the business impact of unhealthy code
Good read. I would especially like to know more about the unpredictability of red/unhealthy code. The difference in implementation time between healthy code and red code is astonishing.
This is indeed something that the academic research paper (see https://arxiv.org/abs/2203.04374) identifies as Future Work. There could be a code familiarity component that explains the large risks and variations.
From the paper: "Could it be that changes done by the main author are faster and more predictable than corresponding changes made by a minor contributor? We suspect that author experience could be a factor that impacts the predictability of changes to low quality code."
If that turns out to be the case, then it would highlight organizational dimensions of low quality code: key personnel dependencies and on-boarding challenges.
Interesting observation, so true.
Thanks @john-shaffer for sharing! Probably the best findings I've seen that actually try to put numbers on some DORA metrics.
I'm not John, but I'm one of the authors of the research paper. Indeed, one goal with this work is to complement DORA’s delivery metrics with similar correlations between code quality and its business impact.
Hopefully, this work can help developers when communicating with the product/business side. The sad reality is that without quantifiable metrics, short-term targets will win over the long-term maintainability of the codebase. Only a minority of companies actively manage technical debt.
This appears to be selling a static analysis tool that rates code on a 1 to 10 scale on a few axes where the assessment criteria are opaque. So on the face of it that's not something developers are going to like.
@john-shaffer if this is your tool can you provide a link to how the various metrics are computed?
edit: FAQ doesn't include the above. It does say the tool requires write access to GitHub repos it analyses.
Disclaimer: I work at CodeScene.
Here's a link to some docs about how code health is calculated: https://codescene.io/docs/guides/technical/code-health.html
In simple terms, we try to measure the cognitive complexity of source code.
A more in-depth description of one of the factors: https://codescene.com/blog/bumpy-road-code-complexity-in-con...
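To give a rough feel for what a cognitive-complexity-style measure looks like, here is a minimal sketch that scores branching constructs weighted by nesting depth. This is purely illustrative and is not CodeScene's actual Code Health algorithm; the node set and weights are my own assumptions.

```python
# Illustrative cognitive-complexity-style score for Python source.
# Assumption: each branching construct costs 1, plus 1 per level of
# nesting -- so deeply nested logic scores higher than flat logic.
# This is NOT CodeScene's implementation, just a sketch of the idea.
import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try)

def cognitive_score(source: str) -> int:
    """Return a nesting-weighted count of branching constructs."""
    tree = ast.parse(source)
    score = 0

    def walk(node: ast.AST, depth: int) -> None:
        nonlocal score
        for child in ast.iter_child_nodes(node):
            if isinstance(child, BRANCH_NODES):
                score += 1 + depth          # base cost + nesting penalty
                walk(child, depth + 1)      # children are one level deeper
            else:
                walk(child, depth)

    walk(tree, 0)
    return score

flat = "if a:\n    x = 1\nif b:\n    x = 2\n"
nested = "if a:\n    if b:\n        x = 2\n"
print(cognitive_score(flat))    # two branches at depth 0 -> 2
print(cognitive_score(nested))  # 1 (depth 0) + 2 (depth 1) -> 3
```

The point of the nesting penalty is that two sequential `if` statements read more easily than one `if` inside another, even though both contain two branches.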
Links appreciated, thanks. Based on those I'm not the target market but I wish you well with the product.
I have no affiliation, but I tried the free trial of their tool and I found that the Code Health metrics made a lot of sense. The files that it pointed out are definitely overly complex and more difficult to change than they need to be. But it gave perfect scores to some files with complex jobs that are better-designed. I recommend just giving it a spin on a codebase that you are very familiar with.
This is also valuable information when planning the implementation of new features, since it lets you address code quality and risks up front.
Nice to read – this sheds new light on why and how to tackle technical debt.
Yes, this empowers development teams to have data-driven tech debt conversations with management.
I wish that I had this type of data myself back in the day. Would have been so much easier to push back on wishful deadlines.
Good to read!