Basic Micro-Optimizations (openmymind.net)
I like the basic premise, which in effect is just "write good code" and, if all else is equal, write efficient code.
Premature optimisation is a problem only when you spend too long on it, or when it reduces readability, robustness, testability, etc. If it's equally easy to write but better, why not? There's no excuse for routinely writing bad code, and "premature optimisation is the root of all evil" is all too often used as an excuse for sloppiness.
However, I possibly disagree with the string formatting example. If it's a function that gets called a lot and does a special-case string conversion like this, fine, go ahead and optimise it. But that's a real optimisation, not what the author has termed a micro-optimisation, which I take to be something you should just do routinely (like ++i instead of i++ in C++).
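For what it's worth, a minimal C++ sketch of the ++i point: for plain ints the two compile identically, but for heavier iterator types the postfix form materialises a throwaway copy.

    #include <map>
    #include <string>

    int count_entries(const std::map<std::string, int>& m) {
        int n = 0;
        for (auto it = m.begin(); it != m.end(); ++it)  // prefix ++: no temporary
            ++n;                                        // iterator copy per step
        return n;
    }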
If you have lots of string conversions throughout your code then the chances are most of them are going to be sprintfs or whatever is the most flexible tool in the language you are using. In these cases, you should just stick with what is idiomatic within the context of that project. It makes reading the code later a lot faster when everything is similar. It also tends to make it easier to change when your simple special cases need to become more complicated.
For example, I have in previous projects standardised on regular expressions for almost all string comparisons, even in situations where a simple substring compare would be much more efficient. However, since 90% of the codebase is using regular expressions to do complex comparisons, it just makes life easier if they are used everywhere unless there's a really, really good reason to do things differently. It reduces cognitive load when reading the code if it follows a similar style throughout.
It also makes maintenance easier when you use the most flexible tool at your disposal everywhere instead of special-casing. Let's say you expect the first character of your string to be an "a" and you do it with substr(foo,1)=="a". Later, you need to make it case-insensitive because of a bug. With regex you just add an "i" flag, but with the special case, you need a tolowercase call. No biggie, but the next day you need to support unicode accents. Uh oh...
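To make that progression concrete, a small C++ sketch (the function and its exact semantics are my illustration, not from the article):

    #include <regex>
    #include <string>

    bool starts_with_a(const std::string& foo) {
        // Special-cased version: fast, but case-sensitivity is baked in.
        return foo.substr(0, 1) == "a";
        // Regex version: going case-insensitive is a one-flag change...
        //   return std::regex_search(foo, std::regex("^a", std::regex::icase));
        // ...though note neither version recognises accented forms like "á"
        // without further work, which is exactly the "uh oh" above.
    }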
If you have a large codebase where string processing is all done in the same way, then when you get a bug like not recognising "á" as "a", at least you will find that all parts of your system behave consistently. Fixing the problem should require roughly the same fix everywhere, and the test cases can all be the same. Going back to the author's example, there's no guarantee that sprintf("%d",x) and itoa(x) will produce the same output on all platforms, so it's possible this change, although it should be functionally identical, might in reality introduce new edge cases that you need to test for.
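Concretely (and hedged: itoa is a non-standard extension, so its availability and exact behaviour are platform assumptions):

    #include <cstddef>
    #include <cstdio>

    // sprintf/snprintf with "%d" is standard; itoa(x, buf, 10) is a common
    // but non-standard extension whose signature and output vary by platform.
    void to_decimal(int x, char* buf, std::size_t n) {
        std::snprintf(buf, n, "%d", x);   // the portable, standard baseline
        // itoa(x, buf, 10);              // only where the platform provides it,
        //                                // and worth a test that its output
        //                                // matches %d before swapping it in
    }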
If you've got special cases everywhere then you're going to get different sorts of bugs in different parts of your system which can lead to issues being much harder to trace, harder to test and harder to fix.
TL;DR: Optimise for readability first. Then optimise for performance. Allocate the time you have wisely. Homogeneity is a reasonable substitute for DRY; special-casing common patterns is usually bad and can introduce bugs.
I'm all for these types of optimizations, so long as they don't come with the price of readability. The more complex an optimization is, the more likely it'll cost more in the long run (developer time) than it saves (hardware resources).
Hardware, memory, and CPUs are still much cheaper than developer time.
That may be the case if you're just doing in-house software.
If you're writing software that others will be running, the calculation's a bit more complicated. You've got to consider your customers, and how much a performance change impacts their purchasing decision, and use that to estimate your revenue impact, and then compare that to the cost of developer time. And what comes out the other end of that is that developer time is often much cheaper than dissatisfied customers.
The case is similar on the Web, where tiny, barely noticeable changes in application responsiveness can have a huge impact on conversion rate. Or worse yet, they can make the difference between your servers falling over or ticking along smoothly when that oh-so-important Cyber Monday traffic surge comes along.
Not always cheaper. Depends on where you work.
I like the term micro-optimization. I wouldn't use them regularly in my code, maybe just in a tight loop or something, but they aren't too invasive and can actually flag the code for future maintainers.
In fact, I'd consider adding a little "// micro-optimized" comment at the end of one of these to communicate to future developers that this mildly odd line is a useful but non-essential little optimization. So feel free to change it, but you'll probably want to read up a little or do a quick benchmark before you do that.
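Something like this, say (the reserve() call is just my illustrative example of such a line, not one from the article):

    #include <vector>

    std::vector<int> doubled(const std::vector<int>& input) {
        std::vector<int> out;
        out.reserve(input.size());  // micro-optimized: preallocates to skip
                                    // regrowth; safe to remove, benchmark first
        for (int x : input)
            out.push_back(x * 2);
        return out;
    }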
The ones described in this article shouldn't necessarily be confined to tight loops if you're looking for maximum benefit, because their real gain isn't saving CPU cycles, it's reducing load on the memory manager. There's definitely still a diminishing-returns situation, but the returns may not diminish as quickly, since a memory allocation in one spot in the code can impact performance virtually anywhere else when you're working in a garbage-collected language.
Regarding growing an array one element at a time: obviously be very careful with that. In C, things to look out for are malloc overhead and memory alignment issues if you are not realloc'ing in even sizes. Also see Mozilla's recent post on this issue [0] (a sketch of the usual fix follows the link).
[0] https://blog.mozilla.org/nnethercote/2014/11/04/please-grow-...
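The usual fix is geometric growth, sketched minimally below in C-style C++ (the names, the doubling factor, and the pared-down error handling are all illustrative): n appends then cost O(n) total copying instead of O(n^2).

    #include <cstddef>
    #include <cstdlib>

    typedef struct {
        int*   data;
        size_t len, cap;
    } IntVec;

    // Push one element, doubling capacity when full. Returns 0 on success,
    // -1 if realloc fails (in which case the old buffer is left intact).
    int intvec_push(IntVec* v, int x) {
        if (v->len == v->cap) {
            size_t new_cap = v->cap ? v->cap * 2 : 8;  // double, don't add 1
            int* p = (int*)realloc(v->data, new_cap * sizeof *v->data);
            if (!p) return -1;
            v->data = p;
            v->cap = new_cap;
        }
        v->data[v->len++] = x;
        return 0;
    }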
Also, you can 'foam' the heap if you keep growing lots of allocations: old freed fragments are never big enough to satisfy a new, larger allocation, and your heap memory grows without bound.
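A caricature of the pattern (modern allocators with size classes and coalescing blunt this considerably, so treat the sizes and mechanics here as illustrative only):

    #include <cstddef>
    #include <cstdlib>
    #include <vector>

    int main() {
        std::vector<void*> pins;
        char* buf = nullptr;
        for (size_t n = 1; n <= 100000; ++n) {
            buf = (char*)realloc(buf, n);  // grow by one byte each iteration
            pins.push_back(malloc(16));    // small live allocation lands near buf,
                                           // so the next grow can't extend in
                                           // place; buf moves, leaving an n-byte
                                           // hole no later (larger) grow ever fits
        }
        for (void* p : pins) free(p);      // (allocation failures unchecked: sketch)
        free(buf);
    }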
Interesting, I've never heard of this. Do you have any links to resources? I tried Googling, but to no avail: "c heap" looks like "cheap", and "memory" and "foam" are all about mattresses...
try 'heap fragmentation'
The author is just giving what I would call "optimizations," or maybe more plainly, "good code."
To me, the term "premature optimization" means optimizing something before measuring whether your optimization has any effect.
Premature optimisation is... ah, forget it ;).
They say the same thing about apathy.