CMake Part 1 – The Dark Arts (blog.feabhas.com)

CMake is legitimately the worst software I've ever used.
cargo > Bazel > autotools > "the IDE" > handwritten Makefiles >>>>>> build.sh >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> CMake
I used to agree, but I eventually needed to add support in CMake for a particular IDE and got far too familiar with it. It's easy to debug and the syntax could be worse. Whoever came up with the list representation needs to be arrested though.
It might be Stockholm syndrome, but I'd rank it worse than Meson and better than Makefiles. Autotools can go die in a fire.
Cargo is wonderful, but having something that user-friendly and well designed in C would feel incongruous with the rest of the ecosystem.
> Cargo is wonderful, but having something that user-friendly and well designed in C would feel incongruous with the rest of the ecosystem.
You really did say this.
Cargo does get some flak from the C and C++ crowd. It is definitely not perfect, but which build system is?
The coolest thing about Cargo is that it can be, and is, significantly improved over time. The dependency resolver was recently reworked to fix some long-standing issues in a backward-compatible way, and it's astounding that they managed to ship that.
Do I like Cargo? No. Do I like it better than CMake, Meson, Bazel, autotools and makefiles? Hell yeah, by far. I'd rather get shot than go back to CMake or autotools.
Crazy, I'd definitely use CMake above autotools (does not even work natively on windows LOL), handwritten makefiles (not supporting filenames with spaces in 2021 LOL), "the IDE" (do you even do cross-platform?).
I'd use Bazel but afaik it mostly uses a rebuild-the-world approach... I'm already getting flak from Linux distro maintainers because I use some vendored header-only libraries and not system ones, so I don't see how that is even supposed to work.
Bazel does the opposite of rebuild-the-world. That's more of a CMake thing (rm -rf build && mkdir build && cd build && cmake ..). Bazel won't even run tests if the code the tests depend on hasn't changed. A lot of thought went into Bazel. It caches everything it possibly can.
> I'm already getting flak from Linux distros maintainers because I use some vendored header-only libraries and not system ones so I don't see how that is even supposed to work.
That's where you use autotools. It has been around long enough that every package manager knows how to deal with it. For most C/C++ programs, autotools is all you need. You should learn it because it's not going away.
"the IDE" (like VS project files) is definitely not cross-platform (although it can be), but not everything needs to be cross platform. For game devs building on DirectX, for example, it's completely pointless to support other platforms when the runtime depends on a single platform.
> Bazel does the opposite of rebuild-the-world.
by "rebuild the world" I don't mean "do a clean rebuild" (which frankly isn't a problem in 2021 with CMake + Ninja, the last time I had to rebuild is when the compiler changed from clang-11 to clang-12) but "to build a bazel-based software, all its dependencies must be built with bazel too", e.g. it's harder if you want to link against system-provided Qt, ffmpeg, ...
> That's where you use autotools. It has been around long enough that every package manager knows how to deal with it.
and yet it still sucks on windows if you want to use cl.exe (or if you want Xcode / VS solutions, which is regularly my case).
I think I had the best experience with Meson/Ninja so far. I am also interested in using Nix for building. As for Cargo, I did not like how it recompiled all dependencies when I changed a warning flag on my project. I also found it unusable because it provided no way to check for the hash or signature of the dependencies that it downloads.
I don't think that I have ever been able to successfully compile a project that uses CMake. Its code is horrifying too, for example cmake-3.21.0-rc3/Modules/CheckFunctionExists.c contains
    #ifdef CHECK_FUNCTION_EXISTS
    #  ifdef __cplusplus
    extern "C"
    #  endif
    char CHECK_FUNCTION_EXISTS(void);
    #  ifdef __CLASSIC_C__
    int main() {
      int ac;
      char* av[];
    #  else
    int main(int ac, char* av[]) {
    #  endif
      CHECK_FUNCTION_EXISTS();
      if (ac > 1000) {
        return *av[0];
      }
      return 0;
    }
    #else  /* CHECK_FUNCTION_EXISTS */
    #  error "CHECK_FUNCTION_EXISTS has to specify the function"
    #endif /* CHECK_FUNCTION_EXISTS */

> I think I had the best experience with Meson/Ninja so far.

CMake lets you use Ninja as the backend if that's your cup of tea. You can even set it as the default generator by setting the CMAKE_GENERATOR environment variable to Ninja. (I have no Meson experience, so I can't compare them.)
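For reference, with a reasonably recent CMake (the environment variable needs 3.15+, the -S/-B flags 3.13+), that looks like:

    export CMAKE_GENERATOR=Ninja      # default generator for new build trees
    cmake -S . -B build               # or one-off: cmake -G Ninja -S . -B build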
> I don't think that I have ever been able to successfully compile a project that uses CMake.
That's quite the statement. In practice, I've found

    cmake -H. -Bbuild && cmake --build build

to work about 90% of the time. Far more luck than I've had with autotools.
> Its code is horrifying too, for example:
1) I'm sure I could find some horrific code in Meson too if I went digging.

2) The alternative to this is you having to write something equivalent yourself, meaning that I don't need to do stuff like [0] in my own code to detect features; my build system handles it for me.

3) CMake supports more platforms and targets than I've ever seen in my life, and likely supports more compilers than are necessary. That's a blessing and a curse, but it means that if I write a simple program to run on some crufty microcontroller with a bastardised gcc toolchain from the 90s, it's fairly likely that CMake supports it out of the box. Code like that is the price to pay for that level of support.
[0] https://github.com/boostorg/beast/blob/b7344b0d501f23f763a76...
You missed the point. "__CLASSIC_C__" is not a thing (why don't they use __STDC__? I don't know; they don't seem to know either) and the syntax they use inside that ifdef is... not what people mean by classic C. It has been there for years and multiple people have pointed it out, but they do not seem to care. The funny thing is that they do know how to use the pre-standard C argument syntax (as in https://gitlab.kitware.com/cmake/cmake/-/blob/master/Modules...), it's just that they do not want to fix it in that specific file for some reason.
As for

    if (ac > 1000) {
      return *av[0];
    }
    return 0;

I am not really sure what to say. And then for CHECK_FUNCTION_EXISTS();, there are a few rare compilers that do not throw an error at compile time if said function does not exist.
Also, I have been told that cmake-generated Makefiles invoke cmake itself, so you can't really generate portable Makefiles with it. In addition to that, I have been told that cmake takes ages to compile.
> meaning that in my code I don't need to do stuff like [0] in my code to detect features
I find that much better honestly.
> You missed the point. "__CLASSIC_C__" is not a thing (why they don't use __STDC__?
I don't know; I'm not going to defend it. I'm also not going to do a line-by-line review of the file you picked; as I said, I'm sure I can find awful code in Bazel, Meson, etc.
> Also, I have been told that cmake-generated Makefiles invoke cmake itself, so you can't really generate portable Makefiles with it.
CMake generates a Makefile target that will re-run cmake if your CMake files change. If you're building with CMake, you distribute the CMakeLists.txt and treat the Makefiles, Ninja files etc. as build intermediates.
> In addition to that, I have been told that cmake takes ages to compile.
Do you compile your own make regularly? I've compiled cmake once or twice and it's not quick, but it's definitely doable (maybe 5 or so minutes?)
> I find that much better honestly.
The reason to use a build tool is to avoid hacks like that in user code. I would rather have cmake or meson or whatever my meta build tool is handle and test that logic, so I can focus on my library or application code.
> as I said I'm sure I can find awful code in bazel, meson, etc.
Oh, you have proof that person X murdered someone? I am sure I can find awful stuff that person Y and Z did!
[back in 2014] OpenSSL has heartbleed? Well, I am sure that I can find issues in libsodium if I looked.
> Do you compile your own make regularly?
No, my users however might need to once they have to deal with a cmake project.
> but it's definitely doable (maybe 5 or so minutes?)
I was told that it takes hours, though I might be misremembering.
> so I can focus on my library or application code.
This kind of thing does not really distract you from anything. Adding something like that takes as long as it does to add a cmake check. Then the person who is compiling has to do -DENABLE_FEATURE=1.
> Oh, you have proof that person X murdered someone? I am sure I can find awful stuff that person Y and Z did!
No - that's not what I'm saying at all. I'm saying that if you're holding X to a standard, you should hold Y and Z to the same standard.
> No, my users however might need to once they have to deal with a cmake project.
cmake is available from the package manager on basically every system imaginable, or as a binary (or source) download for a whole host of platforms. It's also widely used, so chances are a user is going to have it installed.
> I was told that it takes hours, though I might be misremembering.
If you're going to dogmatically claim that cmake is inferior, then you should at least verify that the grounds of your claims are true. I ran

    git clone https://github.com/Kitware/CMake.git && cd cmake && cmake -H. -Bbuild && cmake --build build

in under 90 seconds. Might have even been faster if I used Ninja. I actually don't know how to compile make from source on Windows. I had a look, and apparently I need to ftp the source from a GNU mirror?

> This kind of thing does not really distract you from anything. Adding something like that takes as long as it does to add a cmake check. Then the person who is compiling has to do -DENABLE_FEATURE=1.
This isn't unique to cmake, but a meta-build system does more than just let you add defines.
> And then for CHECK_FUNCTION_EXISTS();, there are a few rare compilers that do not throw an error at compile-time if said function does not exist.

check_function_exists() verifies that the symbol can be linked against rather than that it compiles. That's why it gives it a bogus declaration of char CHECK_FUNCTION_EXISTS().
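For context, a minimal sketch of how that check is driven from a CMakeLists.txt (the function and variable names here are just examples):

    include(CheckFunctionExists)
    # Compiles and links the CheckFunctionExists.c stub with
    # -DCHECK_FUNCTION_EXISTS=strnlen; success means the symbol linked
    check_function_exists(strnlen HAVE_STRNLEN)
    if(HAVE_STRNLEN)
      add_compile_definitions(HAVE_STRNLEN)
    endif()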
Funny enough I was trying to build a library yesterday that used check_function_exists() to detect the presence of some library functions. The project was set up to output a static library so check_function_exists() returned true for all the missing functions since it linked the test program without issue. https://gitlab.kitware.com/cmake/cmake/-/issues/18121
> As for Cargo, [...] I also found it unusable because it provided no way to check for the hash or signature of the dependencies that it downloads.
Afaik Cargo does it out of the box, based on Cargo.lock.
UPDATE: This doc page seems to confirm that: <https://doc.rust-lang.org/cargo/guide/cargo-toml-vs-cargo-lo...>
I did not know about rev, thanks. Though, since there is no way to specify a signature you are forced to use potentially outdated packages.
"cargo update" will update the packages along with Cargo.lock content. As for not updating them without a manual trigger, I consider it a feature, but I guess it's a matter of opinion.
> "cargo update" will update the packages along with Cargo.lock content
Disregarding the rev attribute?
This is the entire point of this command. It will update the rev attribute. What else would it be for?
> I am also interested in using Nix for building
Two reasons to avoid this:
- Performance, an item in the store for each object file seems very appealing but would be horrifically slow for anything large.
- Ironically, it would make it tricky to build a Nix package for the result, because recursive Nix is not a thing.
handwritten makefiles have a certain kind of deep, pure beauty if you are in the right mindset
If you have a project that you don't want to embed in another project, that doesn't need to build on Windows, that doesn't have any spaces in filenames, and you don't mind manually ordering dependencies wrt the quirks of your toolchain.
I don't see how make is at a disadvantage relative to other tools when it comes to embedding. No build tool is easily embedded in a project that uses a different build tool.
And I've used make on Windows for years and years, so not sure why that is a problem?
My understanding is that recursive make is considered an antipattern (and I'm not aware of any other way to embed makefiles in other makefiles). CMake's add_subdirectory is first-class support for nesting other projects, and the FindX.cmake pattern means you can consume prebuilt libraries inside CMake, which I don't think make supports?
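A rough sketch of both, with hypothetical target names:

    # Nest a vendored project: builds third_party/somelib's own CMakeLists.txt
    add_subdirectory(third_party/somelib)

    # Consume a prebuilt system library via its find module and imported target
    find_package(ZLIB REQUIRED)
    target_link_libraries(myapp PRIVATE somelib ZLIB::ZLIB)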
You are technically right about make on Windows, but in practice makefiles on Windows tend to be written for GCC/clang and require manual translation to work with MSVC. There's also no IDE support for Visual Studio, which CMake gives you out of the box.
Handwritten makefiles can be the best option on the table in scenarios that don't involve a lot of boilerplate code or any platform checks at all.

Once you stray out of that niche, everyone is better served by just automating all the boilerplate generation and compiler checks.
And that's what CMake does.
Maybe it wasn't your intention, but your comment is the most scathing critique I've ever read of cmake.
Boilerplate code and "compiler checks" are strongly negative anti-patterns. Maybe the worst in programming. That cmake makes it easy to do these awful things just shows how evil it is!
People: write simple, portable code!
> People: write simple, portable code!
That isn't always how it works in practice, though. It's useful for your build system (or meta-build system) to be able to check for, say, C++17 support, and cause the build to fail early if this is missing.
Similarly, you can use a tool like CMake to detect libraries. If a library is missing, you might want the build (or, rather the 'configure' stage) to instantly fail, or you might want to build with certain features removed to cope with the library being missing. CMake supports both, as does autotools.
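Both behaviours are a few lines in CMake; a rough sketch (target and library names are just examples):

    add_executable(app main.cpp)
    target_compile_features(app PRIVATE cxx_std_17)  # fail early without C++17

    find_package(Threads REQUIRED)   # hard requirement: configure stops if missing
    target_link_libraries(app PRIVATE Threads::Threads)

    find_package(PNG)                # soft requirement: feature compiled out if missing
    if(PNG_FOUND)
      target_link_libraries(app PRIVATE PNG::PNG)
      target_compile_definitions(app PRIVATE HAVE_PNG)
    endif()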
I agree with everyone who says CMake's scripting language is atrocious and that it's often miserable to work with, [0][1] but that's because CMake specifically is terrible. The problem it's solving is a legitimate one.
An example: you can write a desktop application with two different platform-specific front-ends, and have CMake compile the appropriate one given the target platform. This is nicer than relying on the two platform-specific build systems directly.
> Similarly, you can use a tool like CMake to detect libraries. If a library is missing, you might want the build (or, rather the 'configure' stage) to instantly fail, or you might want to build with certain features removed to cope with the library being missing. CMake supports both, as does autotools.
The gp enriquto had a rigid stance on so-called "optional" libraries. If the library is not there, the build should fail and that's it.
I don't know if my previous reply to him explaining the benefits of building even with missing libraries was satisfactory: https://news.ycombinator.com/item?id=25913562
Hey! I remember that interaction (and sorry for my crass language, I was having a hard time by then).
Your explanation was certainly satisfactory. Now I understand a bit the motivation of people who want to compile different programs depending on what happens to be installed at a particular moment in their computers. I still think that it is "an exceptionally bad idea which could only have originated in California" ;)
> People: write simple, portable code!
that does not work as soon as your users are on both windows and a unix-like platform and you are making a non-trivial app (for instance, an app with networking, audio, and video support).
A simple example: how do you share a GPU texture handle across multiple processes, portably, with a single codebase that works across Windows / Mac / Linux graphics APIs?
Trying to do that is a massive waste of time. It's legitimately easier to maintain separate codebases for each of the different platforms. The code that doesn't need to change can be a library. Or you use some super high level abstraction like Unity, Qt, JVM, Electron, React Native. Portability != cross-platform.
Yeah, now throw game consoles, embedded systems or classical mainframes into that mix as well.
> People: write simple, portable code!
How do you memory map a file, or spawn a subprocess simply and portably? Every way that I've seen it done that doesn't hand the logic off to the build system is an unmaintainable error prone mess.
I'm not sure you understood what I've said at all.
CMake eliminates boilerplate code and compiler checks. They do not exist, at all. With CMake you state that you have a C++11/14/17/20 project, it builds N static/shared libs and M executables, you set dependencies, and you're done.
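Concretely, a minimal sketch of that declarative style (project and target names hypothetical):

    cmake_minimum_required(VERSION 3.16)
    project(widgets CXX)
    set(CMAKE_CXX_STANDARD 17)

    add_library(widgetcore src/core.cpp)   # static or shared, chosen via BUILD_SHARED_LIBS
    add_executable(widgetd src/main.cpp)
    target_link_libraries(widgetd PRIVATE widgetcore)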
Boilerplate and compiler checks do exist in Makefile projects, because Makefiles only define the DAG for the build and don't perform any sanity checks at all. So if you have to include dependencies or use specific compilers, then you have to manually check each and every single thing yourself, because Makefiles do not handle that at all.
Think about it: why did the entire industry adopt makefile generators such as CMake instead of just using a standard tool like make, which just works and is ubiquitous?
And no, relying on tools and a layer of abstraction to eliminate all janitorial work is not evil or awful. Checking if a lib you depend on already exists in the system is not evil or superfluous. Checking if the compiler you're using supports a specific version of C++ is not evil or superfluous. Do you expect things to just work when you aren't even aware of which compiler you're going to use? Do you want to spend time looking at weird compiler error dumps just because your build machine happens to have a different version of, say, Boost installed?
The main problem with CMake is that some people seem totally oblivious to the problem domain, and to what and how much work it takes to get stuff to work reliably given very basic use cases, such as upgrading a compiler version, say VS. Think about it: how exactly do you think simple, portable code is done? Do you expect code to compile on different platforms by magic?
> Checking if the compiler you're using supports a specific version of C++ is not evil or superfluous.
It is both evil and superfluous. It is evil because you should be writing portable code and not depending on compiler specifics. It is superfluous because if you do not check, the compilation will still fail, which is precisely the expected behaviour.
> How exactly do you think simple, portable code is done?
By writing it carefully and testing it on different systems. You test with -Wall -Wextra -Werror -pedantic on all systems but you distribute the Makefile without these compiler flags.
> Do you expect code to compile on different platforms by magic?
No, of course. At first you will have a few linuxisms or macOS-isms, that you will gradually remove through a few rounds of multi-platform testing (which is free and easy to do nowadays).
> It is both evil and superfluous. It is evil because you should be writing portable code and do not depend on compiler specificities. It is superfluous because if you do not check, the compilation will still fail, which is precisely the expected behaviour.
But now I get 2 dozen error reports, because there are a lot of users who run builds just because they're on Linux and that's what $BLOGPOST said to do to get the latest version of some software, but who have no clue about software development.
> No, of course. At first you will have a few linuxisms or macOS-isms, that you will gradually remove through a few rounds of multi-platform testing (which is free and easy to do nowadays).
So how do you handle the fact that you need to link against Ws2_32 on Windows, or pass "-lwebsocket.js -s PROXY_POSIX_SOCKETS=1 -s USE_PTHREADS=1 -s PROXY_TO_PTHREAD=1" on Emscripten, if you want to use sockets? Don't use sockets because they're not portable?
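In CMake that's a handful of declarative lines; a rough sketch, assuming the Emscripten toolchain file (which sets the EMSCRIPTEN variable) and a hypothetical app target:

    if(WIN32)
      target_link_libraries(app PRIVATE ws2_32)
    elseif(EMSCRIPTEN)
      target_link_libraries(app PRIVATE websocket.js)
      target_link_options(app PRIVATE
        "SHELL:-s PROXY_POSIX_SOCKETS=1"
        "SHELL:-s USE_PTHREADS=1"
        "SHELL:-s PROXY_TO_PTHREAD=1")
    endif()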
I disagree, to the point that I honestly suspect you're trolling, especially taking into account how you decided to rank "the IDE" between autotools and handwritten makefiles, which makes absolutely no sense at all whichever way you look at it.
It's also dumbfounding how Bazel is ranked so high when it doesn't even support integrating system libraries as part of its happy path.
The main reason why CMake, with all its flaws, is the undisputed build system for C and C++ projects is the uncomfortable fact that all the alternatives are awful in comparison, even and especially in very basic happy-path scenarios such as putting together a lib whose users can pick whether it's static or shared and install it in the system folder or anywhere else, without even bothering with which compiler toolchain they're using. In CMake, anyone can get a complete project up and running with less than a dozen lines of code, and that project will assuredly work on multiple OSes as-is.
You'd be hard pressed to find another C++ build system that comes close to doing the same.
Have you tried `Xmake` or `Meson`?
"CMake, you say? Yeah, CMake is like smashing your face on high quality pavement. You can admire the stonework while the blood pours down your face."
Pieter Hintjens, http://hintjens.com/blog:79
That's certainly a tweet-sized sentiment!
I'll point out that the article was written "2430 days ago" and uses a CMakeLists.txt example that looks like it.
The article then goes on to announce Yet Another Build System that doesn't seem to have gotten any traction.
Zproject is not a build system. It's more like a package.json-esque thing for C. We used it on a past project and it was capable of generating autotools (or CMake) build recipes, Debian or RPM packaging, FFI bindings and more.
But you're right it got almost zero traction outside of zeromq.
This. I'm stunned this kit was ever created with the shape it has, but even more stunned a second person agreed to use it.
It completely blows my mind that this demoware has made inroads anywhere.
This is a pretty nice article. CMake is a pretty nice tool and this is a good article for its good parts.
One thing I *hate* with build systems is having to enumerate all of my files. I have a file system. It knows about the files. If I organize my code appropriately (say a lib, inc, and prog directory) the organization says how to build the source code.
I often make my build system support finding all the files in directories, and enumerating them and using them for the build targets.
Sometimes that scheme bites me when projects get very large, since it can take a while to `find` all the files, but those projects suck to enumerate all the files in, too.
The main reason why not enumerating all the files in the build system is a bad idea is that the build system won't know to rerun itself and re-scan dependencies after you add a file. If you do enumerate the files, you need to change a build-system file to add an entry for the new source file, and that tells the build system to rerun itself and re-scan dependencies. And in the grand scheme of things, adding a file to the build system is a triviality compared to actually writing the code in it.
And, in general, adding a source file can automatically insert a record into the build system if you follow a convention.
Couldn't a build system use a hash of the accumulated files as a cache key and rebuild its internal state when that changes?
I'm not seeing a big downside, but maybe I'm missing something obvious.
> CMake is a pretty nice tool
It's pretty powerful. It'd be pretty nice if it didn't have such a god awful DSL.
Frankly, since the inception of modern CMake over a decade ago, the only reason anyone has to stray out of CMake's DSL happy path, comprised of all the tried and true declarative bits, is that either
a) they have to do a very niche/specialized/extremely custom extension to CMake, or
b) they have no idea what they are doing.
More often than not, b) is the case.
You and I discussed exactly this 3 months back. I won't restate my responses here, but for anyone curious: https://news.ycombinator.com/item?id=26722717
They recently added a CONFIGURE_DEPENDS flag to file(GLOB...), which will automatically rebuild the file list when new files are added.
...which comes with a cost (extra reconfiguration of the project), but it's worth it for many projects.
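A sketch of the flag in use (source layout hypothetical):

    # Re-globbed at build time; CMake reconfigures itself when the match list changes
    file(GLOB_RECURSE app_sources CONFIGURE_DEPENDS "src/*.cpp")
    add_executable(app ${app_sources})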
I use GNUstep on Linux and I generate my main GNUmakefile. They have a preamble file I use that is only generated if it does not exist and it allows me to keep all my settings. The biggest drawback is that I haven’t been able to keep private headers private so everything is public. Would only really be a problem if I was publishing a framework for others to use. I love this system. Makes it very nice to have everything organized in folders as needed and then BAM just run cd/generate/make and done.
How do you differentiate which headers should go where in your script?
I've been burnt by that convention pretty regularly:
(1) build automatically scans and adds all files in a directory
(2) I write a quick script foo.py in the directory to check something
(3) Boom, binary contains foo.py
I try to manually enumerate all files whenever I touch a build file.
A better solution is to build from a clean checkout; then you won't have foo.py and you can be sure you haven't missed anything.
That would work only if you always build after committing all your changes, which is IMHO another anti-pattern.
Er, but OP tars up and ships after every change?
Get out of here with your antipatterns.
TFA talks about CMake being widely used in embedded. This seems to be a difference between Europe and the US. It feels like every single embedded project I've encountered in Europe in the last decade uses CMake, and I've never seen an embedded project in the US that uses it.
It's the single biggest difference I've noticed. C++ is certainly more popular in Europe, but you see plenty of C projects in Europe and plenty of C++ projects in the US.
It is debatable how embedded that is. In any case, the official build system for the Android NDK is CMake (ndk-build is kept around for backwards-compatibility purposes), so is no one doing US projects stuffing Android into places that don't look like phones?

Or GUIs on medical devices using Qt?

Cause there are some, so I would expect US companies to also have a go at it.
What do embedded projects in US use instead?
Some flavor of make (gnu and nmake are the two most common I've seen) usually wrapped with scripts written in perl/python/bash/.bat
Plain ol' makefiles?
Can you not use CMake to generate those makefiles?
I had my first real exposure to CMake earlier this year.
It demos beautifully, but quickly becomes an outrageous collection of side quests to find the secret key.
What collection of hidden methods, global constants, environmental variables and insane incantations must I assemble to cross compile this software?
None. The answer is, None.
The best I was able to find was: get the whole artifice running on your actual workstation, then get it (and an entirely different toolchain, including IDEs?!!) up and running on each target platform, dust off your sneakers to go sit in front of another computer, fire up an IDE and find its build button.
I know it's not, but CMake somehow manages to feel like a solution created by hand wringing, cat petting, volcano living, mustachioed, cigar-smoking proprietary OS and IDE vendors.
OTOH, zig cc leaves me with a single tear of joy and wonder sliding down my cheek like a framed Velvet Elvis.
Update: Also, premake isn't terrible.
> [Modern CMake] has added to the confusion over using CMake because there are many resources on the web that refer to the legacy style of CMake.
Has anyone else found this to be the case? All the resources I've seen in the last five years or so have been pretty consistent about encouraging modern CMake style, which in my mind encompasses:
- declare targets and set properties
- generator expressions
- support the default workflow
- use find_package to import targets
I do see some misinformation from time to time about using commands like `include_directories` when `target_include_directories` is clearly the better style now, but I guess I don't consider "good style" and "modern CMake" to be the same thing anymore.
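The difference in a nutshell (mylib is a placeholder):

    # Legacy: adds the path for every target in this directory and below
    include_directories(include)

    # Modern: scoped to one target; PUBLIC also propagates it to consumers of mylib
    target_include_directories(mylib PUBLIC include)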
Some time ago, I - a seasoned C++ and Python developer - became part of a team redesigning the build and deployment aspects of a complete embedded code landscape, covering a lot of development activities, projects, whole product lines, etc.
I think most people who recommend alternatives completely underestimate the degree and extent of specific compiler, tool, library, IDE, and general native-build-environment knowledge that CMake has swallowed and incorporated over its years of existence, and continues to do so, including all the quirks and particularities of the endless number of components, software artifacts and tools it handles. This is the whole reason for its success, and the one thing no swift, elegant, new-paradigm player can surpass short-, mid- or, in some cases, even long-term.
Most of the time you can tell the real experience of someone judging CMake simply by the kind of troubles they have with it. Admittedly, the syntax is ugly and often inconsistent. Consider a list 'alist': depending on context, you can or must reference it as either alist, ${alist} or "${alist}" - terrible, true. The most complex data type is the aforementioned list, often in its barely bearable nested variant. Math is cumbersome, there's no Unicode support for string manipulations like positions or length calculations, and the list goes on seemingly ad infinitum.
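A sketch of the referencing quirk:

    set(alist a b c)             # stored as the single string "a;b;c"
    list(LENGTH alist len)       # list() subcommands take the *name*: alist
    foreach(item ${alist})       # ${alist} splats into three separate arguments
      message(STATUS "${item}")
    endforeach()
    message(STATUS "${alist}")   # quoted: one argument, prints "a;b;c"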
But you can learn this rather quickly, and after writing CMake code for a few weeks you become accustomed to it. In the meantime, other levels of annoyance begin to appear. For example, the allowed context for generator expressions is inconsistent, incomplete and sometimes, from a CMake writer's point of view, almost artificially limited. Take add_custom_command: it allows GEs in its DEPENDS and COMMAND sections, but not as arguments to OUTPUT. But wait, starting with CMake 3.20 it does, but:
" Arguments to OUTPUT may use a restricted set of generator expressions. Target-dependent expressions are not permitted. "
Unfortunately, those are often exactly what the developer is looking for. The reason here, as in many cases, is the deeply ingrained two-pass configure-then-generate nature of CMake.
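A sketch of where the boundary sits (gen_table is a hypothetical code generator):

    # Allowed since 3.20: $<CONFIG> is not target-dependent
    add_custom_command(
      OUTPUT  ${CMAKE_BINARY_DIR}/$<CONFIG>/table.c
      COMMAND gen_table --config $<CONFIG> -o ${CMAKE_BINARY_DIR}/$<CONFIG>/table.c
      DEPENDS tables.def)

    # Still rejected in OUTPUT: target-dependent expressions such as $<TARGET_FILE:app>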
For any real project, state quickly becomes important. Diving through many levels of subdirectories and maintaining/conveying information between the associated CMake projects becomes far from trivial in no time. And no, cache variables are not the solution.
Another issue is the interaction with higher-level languages in CMake code. Most people quickly end up with execute_process(COMMAND ${MY_PYTHON} ...) in order to handle more challenging topics. The problem is the lack of any smooth way to communicate the results back, short of workarounds like temp files and the like.
Also, any dependency requirements not covered by the standard cases, whether they affect rebuilds, reconfigures or regenerations, require deeper knowledge of CMake's actual dependency-resolution mechanisms. Often only inspecting its time-stamping bookkeeping or exploiting its trace / graphviz / file-api output will help here (neglecting source-code inspection, which is rather rarely required).
In principle, the task requires a fully-fledged programming language, but one containing all of CMake's accumulated knowledge about its subjects.
Not an advertisement, but for any beginner I can only recommend Craig's book:

https://crascit.com/professional-cmake/

It continuously integrates changes from new CMake versions, and he is also always helpful in CMake's own Discourse forum and GitLab issue tracker.
Great write-up. I learnt a lot.
I wish all the CMake haters invested a fraction of their energy in putting together a tool that they feel is superior to CMake, or at least a better alternative.
CMake has been around for a few decades, and perhaps a dozen alternative makefile generators and higher-level build systems have popped up in that time, but still each and every single one of them has failed miserably to gain any traction.
Why is that, given that CMake is indeed far from perfect?
The truth of the matter is that CMake is, by far, the best build system/makefile generator for C and C++ projects that there is right now, and this has been the case for a couple of decades. Not only does it work reliably, but it is also extremely easy to set up and use in the happy path. With CMake anyone can easily create a project that includes multiple libraries and executables consuming system libraries, in a matter of minutes, right on their first try, as a "hello world" onboarding project, and that project will work on multiple platforms and on any CI/CD pipeline.
I would very much prefer to see a fraction of the energy wasted in hating CMake being channeled into making a cmake alternative. But for some reason, all we see is hate.
Why is that?
Build systems are harder, more complicated and messier than users of build systems understand.

Say a group of smart engineers starts designing the perfect build system.
Usually there is some sort of design constraint added for correctness that works for, say, 99% of projects, which seems like a good tradeoff. Until it turns out that openssl is in that other 1%.
Or maybe the build system assumes everything can build from source, which maybe works for 90% of cases, forgetting that proprietary vendors often ship prebuilt binaries.
Or maybe the build system is written in <actual scripting language>, which means <actual scripting language>, written in C, now needs a different build system. Also, <legacy OS> doesn't have a compatible version of <actual scripting language> available.
Or maybe the build system requires a lot of boilerplate to support the typical project structure of <large organization>. Instead of having verbose build recipes in hundreds, thousands, or tens of thousands of projects, <large organization> just sticks with its legacy custom build scripts.
The fact of the matter is, CMake is pretty good. It lets you do basically anything you need to. It has basically no dependencies to build and use, so it works anywhere. And it has extensibility that's actually fairly rare in build systems.
Anyway, my theory is folks see the downsides of some tradeoffs CMake made (awkward basic DSL) without appreciating the upsides (available everywhere), especially because they don't realize "works on my project and box" doesn't cut it for maintaining projects in the C and C++ ecosystem. I'll be bold enough to predict that the build systems of Go and Rust will be just as complicated if they ever need to start supporting things like juggling BLAS versions.
Oh, there are tons of CMake replacements. There's a ?make for every letter of the alphabet. The problem is that C users can't agree on which one is the best. Whichever build system you pick, it will have detractors who think it's the worst and refuse to use it. I think CMake survives, because it got early traction from not ignoring Visual Studio like every one else, and survives by being the least objectionable (which doesn't mean it's good).
But if you think cmake can easily consume libraries that work on multiple platforms, please help me use libpng in MozJPEG's cmake, because it's a fucking stupid nightmare:
    -- Could NOT find ZLIB (missing: ZLIB_LIBRARY) (found version "1.2.8")

> -- Could NOT find ZLIB (missing: ZLIB_LIBRARY) (found version "1.2.8")

That likely means that the headers were found but not the .so. For instance, maybe you have a stale zlib.h in /usr/local, but no zlib-dev installed (thus no libz.so).
You can use cmake's --debug-find first to have more info, and if that's not enough, --trace / --trace-expand; for instance, the person who wrote the FindZLIB.cmake used in that case could have done something like:

    find_library(ZLIB_LIBRARY z)
    if(ZLIB_LIBRARY)
      if(NOT /* logic to detect if the library has a specific symbol, for instance gzopen */)
        unset(ZLIB_LIBRARY)
      endif()
    endif()

which would result in the above error message. The main problem being that the person who wrote that script did not add a small log output to indicate why a given .so was not considered valid.
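(For what it's worth, CMake ships a module for exactly that symbol probe; a rough sketch of how the find script could have used it:)

    include(CheckLibraryExists)
    # Try to link gzopen from libz; sets HAVE_GZOPEN in the cache
    check_library_exists(z gzopen "" HAVE_GZOPEN)
    if(NOT HAVE_GZOPEN)
      message(STATUS "libz was found but gzopen did not link; discarding it")
      unset(ZLIB_LIBRARY CACHE)
    endif()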