Spec-ulation – Rich Hickey [video]

youtube.com

283 points by Moocar 9 years ago · 73 comments

programnature 9 years ago

This is a very dense talk, one of Rich's best ever IMHO.

The first point is about how we talk about 'change' in software: he centers it around what things 'provide' and 'require'.

Breaking changes are changes that cause code to require more or provide less. Never do that, never need to do that. Good changes are in the realm of providing more or requiring less.

There is a detailed discussion of the different 'levels' - from functions to packages, artifacts, and runtimes - which he views as multiple instances of the same problem. Even though we now have spec, there's a lot of work to leverage it across all those different layers to make specific statements about what is provided and required.
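The provide/require framing above can be sketched as a tiny compatibility check. This is a hedged illustration, not code from the talk; `classify_change` and the set-based model are my own simplification.

```python
# Classify an API change using the provide/require framing: a change is
# "growth" if the new version provides at least everything the old one did
# and requires no more than it did; otherwise it is "breaking".

def classify_change(old_provides, old_requires, new_provides, new_requires):
    """Return 'growth' or 'breaking' for a change between two API snapshots."""
    provides_less = not set(old_provides) <= set(new_provides)
    requires_more = not set(new_requires) <= set(old_requires)
    return "breaking" if (provides_less or requires_more) else "growth"

# Adding a function and relaxing a requirement: growth.
print(classify_change({"get", "put"}, {"url"}, {"get", "put", "delete"}, set()))
# Removing a function: breaking.
print(classify_change({"get", "put"}, {"url"}, {"get"}, {"url"}))
```

Functions, packages, artifacts, and runtimes would each get their own notion of what "provides" and "requires" means, which is the multi-level work the comment alludes to.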

  • mshenfield 9 years ago

    I found value in dissecting the different levels of change. For the sake of sanity though, we should do breaking changes. Breaking changes exist because we have limited capacity as individuals and an industry to maintain software. This is especially true for infrastructure that is supported by (limited) corporate sponsorship and volunteers. Breaking changes limit our window of focus to two or three snapshots of code, instead of having our window of focus grow without bound. Our limited capacity can still be effective as a library changes over time.

    The most important point of this talk is here: "You cannot ignore [compatibility] and have something that is going to endure, and people are going to value" [0]. Breaking changes provide a benefit for library developers, but it is usually damage done to end users. As consumers we should weigh the cost of keeping up with breaking changes against the quality of a tool, and the extra capacity its developers are likely to have.

    [0] https://youtu.be/oyLBGkS5ICk?t=4177

    • michaelfeathers 9 years ago

      Agreed. Breaking changes can lead to alienation of user base, but I think there's a danger in lulling people into expecting that kind of constancy in software. It creates dependency of another kind. Maybe the trick is to vary features at some rate, getting users used to change and bringing them along.

      In retail it used to be the case that you could go to the same store a month later and see the same shirt to buy. The Sears catalog [1] presented that sort of constancy for consumers. Today there's a lot of flux. Some of it actually engineered to prevent people from delaying purchasing decisions. In software we can and do introduce breaking changes for ease of maintenance, and that can be ok as long as people are used to it. It's making the choice to have a living ecosystem.

      [1] http://www.searsarchives.com/catalogs/history.htm

    • solussd 9 years ago

      Additionally there are safer, usually reasonable, ways to deal with what would otherwise be breaking changes. Give the changed functionality a different name, create a new namespace/module without the removed functionality, or create a new library if you have introduced something fundamentally different (e.g., w.r.t. how you interact with it). That way your users can choose to refactor their code to use the change, rather than discover their expectations no longer match reality when they upgrade.

    • blueprint 9 years ago

      Who says you have to maintain old code? We're talking about simply not deleting it and establishing a discrete semantic for the new version, as truthfully a new version is new content, which demands a new name to accurately and precisely describe it. If it didn't, it would be like saying different content doesn't produce a different hash.

      • mshenfield 9 years ago

        You're right, there is no obligation to maintain it. I think that misses the point though. The value in keeping the code is to allow the end user to continue to enjoy improvements in parts of the library that don't have breaking changes without upgrading those that do. You could continue to have security patches installed, for example. That value is much lower when you don't do basic maintenance: implementing bug fixes and security patches.

        • blueprint 9 years ago

          Unless I'm missing something… the answer to that problem is to (a) factor the code sufficiently to then (b) create an abstraction (interface) that backs out the concrete implementation to the specifically desired version/functionality.

      • wtetzner 9 years ago

        Except that naming things is one of the hard problems. I don't see why a major version bump can't be considered a different library.

        I guess you can use version numbers in the name instead, since this talk is specifically targeting maven artifacts.

        • blueprint 9 years ago

          Hickey's other talk says hard is relative, and I happen to agree, especially when it comes to naming. The question is to what degree of exactness you can confirm what exists (in problems). That is a function of your degree of truthfulness. So it's "hard" only in the sense it's hard to approach 100% truthfulness. However, I have observed that one doesn't need 100%, one needs to be beyond a certain threshold of effective sufficiency. And according to human history, special, rare individuals are born who do exceed that threshold.

  • michaelfeathers 9 years ago

    That's Postel's Law.

DEADB17 9 years ago

I don't understand why he says semantic versioning does not work. In my experience (with NPM, not maven) it is very useful, adding meaning of intent by convention:

Given a version number MAJOR.MINOR.PATCH, increment the: MAJOR version when you make incompatible API changes, MINOR version when you add functionality in a backwards-compatible manner, and PATCH version when you make backwards-compatible bug fixes.

I got the impression that the issue was maven not being able to handle multiple versions of the same package/artifact, not in the convention.
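The MAJOR.MINOR.PATCH convention quoted above is easy to mechanize. A minimal sketch (not npm's or maven's actual resolver; the classification strings are my own):

```python
# Parse a SemVer string and classify what an upgrade nominally promises.

def parse(version):
    major, minor, patch = (int(part) for part in version.split("."))
    return (major, minor, patch)

def upgrade_kind(installed, candidate):
    old, new = parse(installed), parse(candidate)
    if new[0] != old[0]:
        return "major: review before upgrading"
    if new[1] != old[1]:
        return "minor: new backwards-compatible features"
    return "patch: backwards-compatible fixes"

print(upgrade_kind("1.2.8", "1.2.9"))  # patch: backwards-compatible fixes
print(upgrade_kind("1.2.9", "1.3.0"))  # minor: new backwards-compatible features
print(upgrade_kind("1.3.0", "2.0.0"))  # major: review before upgrading
```

The dispute in the thread below is exactly about how much weight those three labels can bear.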

  • sheepmullet 9 years ago

    Like Rich mentioned, from the point of view of a library consumer it's:

    PATCH: Don't care
    MINOR: Don't care
    MAJOR: You're screwed

    MAJOR is simply not granular enough and MINOR and PATCH are pointless.

    Sometimes when I update to a new major version of a dependency it all just works. Other times I've got to spend weeks fixing up all the little problems.

    Did you break one thing I didn't even use? Update the MAJOR version. Did you completely change the library, requiring all consumers to rewrite? Update the MAJOR version.

    • DEADB17 9 years ago

      Being screwed would be the case if the consumer of the artifact is forced to upgrade. i.e.: if versions cannot coexist in a code base.

      Otherwise I think the information that they convey is useful:

      PATCH: improvement or correction that does not affect the consumers expectation (safe improvement)

      MINOR: additional features that may be useful directly to the consumer or its transitive dependencies (safe to upgrade)

      MAJOR: No longer safe to upgrade automatically. The consumer may need to investigate further or stay with the previous MAJOR.

      In any case it is useful information being conveyed. The consumer decides how to act on it.

      • sheepmullet 9 years ago

        > Being screwed would be the case if the consumer of the artifact is forced to upgrade. i.e.: if versions cannot coexist in a code base.

        In theory multiple versions can co-exist in a codebase in js land.

        In practice though the vast majority of js libs don't compose well with different versions.

        At best I've found it can work as long as you consider them separate and unrelated libraries (basically the version in js land is part of the namespace).

        Edit: I definitely don't think it's as big of an issue in js land as in Java land because of transitive dependency handling.

    • jjnoakes 9 years ago

      Still don't see the big problem. If the major version is updated and it doesn't affect you, you have a 10 second job to do. If it does, you have a bigger job to do (or don't update).

      What's the big deal?

      • sheepmullet 9 years ago

        How do you know if it's a 10 second job or a bigger job?

        • jjnoakes 9 years ago

          Read the release notes?

          Read the code diff?

          Try it?

          • sheepmullet 9 years ago

            So lots of manual work. And you still don't see the issue?

            Wouldn't it be good if there was some kind of automated way to know?

            • klibertp 9 years ago

              But without semver you'd need to do this manual work even more often! Semver makes you do less manual labor, because you know that PATCH and MINOR don't require your attention. You don't know such a thing in many other versioning schemes.

              Would it be better to eliminate even more manual labor? Yes, of course. But then is semver bad because it reduces manual labor?

              • prayerslayer 9 years ago

                I get your overall point that it's better than nothing, but you'll have to admit semver makes promises that just don't hold up in reality:

                > because you know that PATCH and MINOR don't require your attention

                :)

                In 99 % of cases, they don't. But you're never completely sure.

            • jjnoakes 9 years ago

              Semver doesn't preclude automating anything, does it?

  • MrBuddyCasino 9 years ago

    The JVM can't handle multiple versions of the same jar, as there is only one classpath.

    A depends on B and C, which in turn depend on incompatible versions of D.

    Congrats, you're screwed, unless you're running in an OSGi container.

    The only reason this is seldom a problem in Java land is that many popular core Java libs, including the whole frickin standard library and all the official extended specs like servlet, maintain backwards compatibility, and the successful core libs do too (Guava, commons-whatever).

    They do what he preaches. Bam, successful platform!

    • taeric 9 years ago

      Guava is actually a bad actor in this regard, to the point that it can sometimes teach people that "coding fearlessly" is a wonderful thing.

      It certainly can be. Especially in monolith code bases where you can fix everything you broke.

      As a platform, though, it is very frustrating.

  • auganov 9 years ago

    His core opposition is to introducing breaking changes under the same namespace. SemVer is the leading scheme that sanctifies such practice.

    • DEADB17 9 years ago

      Wouldn't changing the MAJOR version be equivalent to changing the namespace without altering the human memorable identifier of the package?

      Understanding that changes in MINOR.PATCH are backwards compatible, is the difference between NAME MAJOR.MINOR.PATCH and NEW_NAME MINOR.PATCH significant? They look to me as just two different conventions.

      • taeric 9 years ago

        No. Because I don't have a way to keep the old and the new.

        That is, the point is that you didn't change my use of the old. You just changed what I actually use. It may work. It may not.

        This is especially egregious when I had to bump versions to get some new functions, and the old ones just happen to have changed.

        • DEADB17 9 years ago

          I don't know about maven, but in NPM you can keep both. If you take away the limits of a particular implementation, I think that semantic versioning is a useful convention for the producer of an artifact to convey intent to the consumers.

          • taeric 9 years ago

            That is the point. It doesn't convey intent. Outside of the "major versions could break." Which is somewhat worthless.

            Consider: you are using a library that exposes ten objects and functions. It is on version 1.2.8. It upgrades to 1.2.9. What do you do? You take the upgrade, usually no questions asked.

            It upgrades to 1.3.0, but only to add an eleventh function. What do you do? Probably take it, because you don't want to be behind.

            It upgrades to 2.0; the reason is "things have changed". However, they kept the same function names. You think you can make the upgrade fine, because, well, they have the same names. However, you can't know, because somebody thought it wise to reverse the arguments of some functions. Which, thankfully, is a compile-time failure. What else changed, though?

          • DEADB17 9 years ago

            Sorry @taeric, I can't reply to your post directly.

            I do think that the intent of the producer is being communicated (unsafe to upgrade, safe to upgrade with new features, safe + automatic improvements).

            I'm not disagreeing that "spec" adding more metadata to have better granularity and potentially reducing the amount of manual work is a good thing. But in the absence of it, "semantic versioning" is an improvement over safe and unsafe versions being indistinguishable.

            • taeric 9 years ago

              No worries. In the future, you can almost always get a reply button by clicking directly on a post. (Click on the "time since post" to get to the direct link.)

              I think I see the point. Yes, he is using hyperbole. However, I have found it is more accurate than not. In particular, the point that many projects feel a lot more cavalier about doing breaking changes.

    • jjnoakes 9 years ago

      Perhaps, but he repeatedly claimed that the minor and patch numbers conveyed no meaning, while dismissing the semver spec as a manifesto.

      But if he read and understood it, he'd know those were important numbers. Maybe moreso than the major version.

      Perhaps he should have argued his actual stance more, instead of the strawman stance. That put me off.

      • sheepmullet 9 years ago

        From the point of view of a library consumer why should they care about the patch or minor versions at all?

        Isn't later = better?

        • jjnoakes 9 years ago

          Because if you start relying on something new or fixed in x.2.z of your dependency you want to make sure anyone using your code isn't using x.1.y.

          • sheepmullet 9 years ago

            And doesn't automatic dependency resolution make this a non-issue for your consumer?

            Edit: I.e. If you declare your own dependencies then tooling should ensure anyone who uses your code uses the same dependencies.

            It doesn't work this way in Java world due to technical limitations, but it can in JS world

            • jjnoakes 9 years ago

              Consumers may want to use a different version.

              Perhaps they want a newer one (bug fix, security fix).

              Perhaps they want an older one (since another dependency was tested against an older version of the dep in question).

              Semver gives you a way to decide what ranges of versions should be safe to move between in order to satisfy all of those occasionally conflicting requirements.

              • sheepmullet 9 years ago

                Explicitly limiting your consumers to a specific version of another library is a breaking change. You have introduced a very specific dependency and are requiring the consumer to honour it.

                By semver rules you should be updating the major version?

                • jjnoakes 9 years ago

                  What?

                  If my library requires X version 1.2 or higher, how is it a breaking change if I don't work with version 2.0 or 1.0 or anything except 1.2 through 1.99999?

                  That's the whole point of software versioning, no matter what you call it (renaming things, semver, git hashes, anything). At some point you require something of someone else, and you can only use the versions of that other library which provide what you require (or more). Semver is just a way to lock those requirements into a machine-readable number scheme.
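The "1.2 through 1.99999" range jjnoakes describes can be sketched roughly like an npm caret range. This is an assumption-laden toy, not npm's implementation (real caret ranges also treat 0.x versions specially):

```python
# Check whether an installed version satisfies a caret-style requirement
# like "^1.2.0": same MAJOR, and at least the declared MINOR.PATCH.

def satisfies_caret(required, actual):
    req = tuple(int(p) for p in required.split("."))
    act = tuple(int(p) for p in actual.split("."))
    return act[0] == req[0] and act[1:] >= req[1:]

print(satisfies_caret("1.2.0", "1.3.5"))  # True: newer minor, same major
print(satisfies_caret("1.2.0", "1.1.9"))  # False: lacks the 1.2 features
print(satisfies_caret("1.2.0", "2.0.0"))  # False: incompatible major
```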

            • prodigal_erik 9 years ago

              That limitation is healthy. If versions 1.1 and 1.2 of a class exist and I'm foolish and determined enough to use both in the same process (via multiple classloaders), Java will still ensure that I can't accidentally give a 1.1 instance to a callee expecting a 1.2 instance, or vice versa. Version mismatches at call sites fail quickly and loudly with classloading exceptions.

              I think a hell of a lot of NPM packages only appear to work by accident, and over time they'll fail because of sloppiness about this.
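A rough Python analogue of the classloader behaviour described above (this is not JVM code; the names and the loader stand-in are hypothetical, and it only shows the shape of the failure):

```python
# Two same-named classes loaded independently are distinct types, so an
# instance of one fails a type check against the other - loudly, like a
# classloading/ClassCastException failure rather than a silent mix-up.

def load_widget_class(version):
    # Stand-in for a classloader: each call builds a fresh 'Widget' type.
    return type("Widget", (), {"version": version})

WidgetV1 = load_widget_class("1.1")
WidgetV2 = load_widget_class("1.2")

def takes_v2(widget):
    if not isinstance(widget, WidgetV2):
        raise TypeError("expected Widget 1.2, got a different Widget class")
    return widget.version

print(takes_v2(WidgetV2()))   # 1.2
try:
    takes_v2(WidgetV1())      # fails fast at the call site
except TypeError as exc:
    print(exc)
```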

gniquil 9 years ago

Haven't watched through the entire video, but the first part, analyzing dependencies bottom-up and claiming versioning doesn't work, really reminds me of GraphQL. We've been doing traditional REST with /v1/..., /v2/.... This sucks in so many different ways. GraphQL's solution of providing the contract at the lowest level and just evolving the "graph" feels a bit like what he is talking about in this video. And note that Facebook not "providing less" or "requiring more" in their API is how they kept their API compatible with older clients. This talk is very interesting! (Note: I could be spewing BS here as I've not finished the video.)

  • opvasger 9 years ago

    exactly my thought, having dug into GraphQL recently and seen a number of talks about it.

    Getting over the annoyance of looking at some initial functionality that is misplaced can be quite hard... It's really tempting to just get rid of those as you're bumping up the MAJOR.

anonfunction 9 years ago

Rich Hickey gave one of my favorite talks that I recommend to all programmers no matter which language they code in:

https://www.infoq.com/presentations/Simple-Made-Easy

taeric 9 years ago

The whole section on "just change the name" fits interestingly in with the way that the kernel is developed. They have been rather strict on this for a long time. A function released to the wild stays. Changes to it require a new function.

tbatchelli 9 years ago

This is a worthwhile talk, albeit a bit too long. In this talk RH provides a framework to think about libraries, how they change over time and how we should be communicating those changes. He proposes a way forward in which the changes are communicated in a way that can be dealt with programmatically, a much better framework than versioning schemes (semver gets a beating in this talk!).

  • adwhit 9 years ago

    Video is a high-bandwidth way of delivering information at a low bandwidth. But try watching tech videos at 1.5 speed, it makes them much more engaging. Also works for ponderous anime.

    • faitswulff 9 years ago

      Hah! Ponderous anime. I remember watching some shows at 4x speed. I'm not sure whether it's anime that introduced me to sped up video, but subtitles are a great help when watching videos at faster than 1x speed.

    • achikin 9 years ago

      That's why I prefer youtube over the other services - you can speed it up. I'm not a native speaker, so 1.25 is the fastest I can understand at the moment.

    • asymmetric 9 years ago

      Got any examples of ponderous anime? Lain? Akira? GITS?

atemerev 9 years ago

I can't get any useful information from videos, it is so slowwwww. Even on 2x (the fastest I can watch), it is still 3 to 5 times slower than I read. Also, I can control my reading, skip the minutiae, or reread important parts or look up some reference — but I can't do that with videos or audiobooks or podcasts. I wonder how people even find it convenient.

Unfortunately, Rich Hickey really loves videos for some reason, so everything he says is lost to me.

(Yes, there are transcripts, but those are linear and unstructured.)

qwtel 9 years ago

I've revisited this post so many times, I know the number by now: http://hintjens.com/blog:85, "The End of Software Versions". I think it is related to, and similar to what Rich is saying.

j-pb 9 years ago

> Logic systems don't have "And nothing else will ever be true!".

Uuh. Closed world assumption? Everything that is not true is false. Most logic systems do have this. Prolog's cut operator (not a logic system, I know) even turns this statement into an operation.

I feel like Rich really gets it wrong this time. His request to support all the abominations that you have ever written and keep them compatible with recent changes, might work if you have people pay for using your library and a company behind it. But doesn't fly if you maintain these things out of goodwill in your anyhow limited spare time.

The best example of this style going horribly wrong are the linux kernel file system modules. Different api versions all in use at the same time by the same code with no clear documentation on what to use when.

It's also ironic that the examples he uses to make his point, namely Unix APIs, Java and HTML, are horrible to work with precisely because they either never developed good APIs (looking at you, Unix poll), or, like browsers, have become so bloated that nobody wants to touch them with a ten foot pole. One of the reasons it takes so long for browser standards to be adopted is that they have to be integrated with all the cruft that has been accumulating there for almost three decades now.

"Man browsers are so reliable and bug free and it's great that the new standards like flexbox get widespread adoption quickly, but I just wish the website I made for my dog in 1992 was supported better." -no one ever.

Combinatorial complexity is not your friend.

I'd rather have people throw away stuff in a new major release, maybe give me some time to update like sqlite or python do, and then have me migrate to a new system where they have less maintenance cost and I profit from more consistency and reliability.

I think that Joe Armstrong has a better take on this. https://www.youtube.com/watch?v=lKXe3HUG2l4

Also, even though I'm a fulltime Clojure dev, I would take Elm's semantic versioning, which is guaranteed through static type analysis, any time over spec's "we probably grew it in a consistent way" handwaving.

  • freshhawk 9 years ago

    > His request to support all the abominations that you have ever written and keep them compatible with recent changes, might work if you have people pay for using your library and a company behind it

    This is generally his focus, on big professional software. I'm also a Clojure dev and I'm on the other side of the fence on this one as well so sometimes I'm disappointed in the enterprise focus but I knew what I was getting into and it is still worth it. Am I crazy to think that maybe other lisps would have done better if they had demanded less purity?

    Same with the examples of Unix APIs, Java and HTML. Sure, they are all bloated and horrible to work with. They are also massively, insanely successful. I think they are great examples because at that scale it's impressive that they work at all.

    This is part of the pragmatism that makes Clojure great, they generally stay away from trying to solve the problem of huge projects being unwieldy and ugly and painful and instead they accept it as a given and work on tools to mitigate the problem. For a lot of people backwards compatibility isn't a choice, it's a requirement set in stone. Even though it always causes everyone to pull their hair out in frustration.

    One day maybe one of these other research languages or experiments will find an answer to this, and prove that it works at scale. I will celebrate more than most.

    • DigitalJack 9 years ago

      The primary advantage of Clojure, to my mind, is the lovely data structures and the interaction with them.

      Common Lisp and Scheme can implement them, and through reader macros interact with them in probably the same ways, but it would always be second or third class.

      Second big deal for clojure is the ecosystem.

      I'd love a native language like clojure that was driven by a language specification.

    • kazinator 9 years ago

      Almost everything is mutable in Common Lisp: lexical variables, objects, global function bindings. Code compiles to native and you can deliver a single binary executable with no dependencies.

    • junke 9 years ago

      > Am I crazy to think that maybe other lisps would have done better if they had demanded less purity?

      People associate Common Lisp with many different things, but not purity.

  • joe-user 9 years ago

    > His request to support all the abominations that you have ever written and keep them compatible with recent changes, might work if you have people pay for using your library and a company behind it. But doesn't fly if you maintain these things out of goodwill in your anyhow limited spare time.

    It might actually be less effort to follow his method. You may need to create a new namespace or create a new function, but then you don't need to put out breaking change notices, handle user issues due to breaking changes, etc.

    > "Man browsers are so reliable and bug free and it's great that the new standards like flexbox get widespread adoption quickly, but I just wish the website I made for my dog in 1992 was supported better." -no one ever.

    It's not about better support for your 1992 website, it's about it still being accessible at all. Perhaps you've never had to deal with a file in a legacy format (ahem, Word) that had value to you, but was unrecognized in newer versions of software, but I can assure you that it's thoroughly frustrating.

    > Also, even though I'm a fulltime Clojure dev, I would take Elm's semantic versioning, which is guaranteed through static type analysis, any time over spec's "we probably grew it in a consistent way" handwaving.

    An Elm function `foo : Int -> Int` that used to increment and now acts as the identity function is merely statically-typed "we probably grew it in a consistent way" hand-waving, which may be worse than the alternative given the amount of trust people put into types.

  • pron 9 years ago

    > But doesn't fly if you maintain these things out of goodwill in your anyhow limited spare time.

    What percentage of the total software produced and used fits that description, and should discussions of general good practices address this anomaly? Obviously, if you work outside the normal industry practices, then industry best practices don't apply to you. I don't think you should take what he says too literally, as meaning that every line of code you write must obey this with no exceptions.

    > that nobody want to touch them with a ten foot pole.

    If by nobody you mean almost everyone. Those happen to be the most successful software platforms in history, and most new software outside embedded (with the big exception of Windows technologies, maybe) uses at least one of those platforms to this day.

    > One of the reasons why it takes so long for browser standards to be adopted is that they have to be integrated with all the cruft that is accumulating there for almost three decades now.

    So what? Look, large software gets rewritten or replaced (i.e. people will use something else) every 20 years on average. If your tools and practices are not meant to be maintained for 20 years, then one of the following must be true: 1. they are not meant for large, serious software; 2. you are not aware of the realities of software production and use; or 3. you are aware, but believe that, unlike many who tried before, you will actually succeed in changing them. Given the current reality of software, backward compatibility is just more important to more people than agility.

    > Combinatorial complexity is not your friend.

    Every additional requirement results in increased complexity, and backward compatibility is just one more, one that seems necessary for really successful software (especially on the server side).

    > "we probably grew it in a consistent way" handwaving.

    Why do you think it's handwaving? The spec is the spec, and if you conform with the spec then that's exactly what you want.

    • j-pb 9 years ago

      When stating that nobody wants to touch browsers with a ten foot pole, I meant that nobody wants to contribute to browser development unless they are paid very very well.

ozten 9 years ago

As I understand it, one of the original problems Go was designed to solve was long compile times at Google.

This talk is fascinating and points out a possible solution space for real world problems in giant codebases and build infrastructures.

What if 80% of your dependency graph can be identified as dead code paths at build time and not require you to actually take dependencies on all that dead code?
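The dead-code idea above amounts to a reachability computation over the dependency graph. A toy sketch (the graph and function names are hypothetical, not from any real build tool):

```python
# Compute which definitions are reachable from the entry points; everything
# else is dead and need not be shipped or depended on.
from collections import deque

def reachable(graph, roots):
    seen, queue = set(roots), deque(roots)
    while queue:
        node = queue.popleft()
        for dep in graph.get(node, ()):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

calls = {
    "main": ["parse", "render"],
    "parse": ["tokenize"],
    "render": [],
    "tokenize": [],
    "legacy_export": ["render"],  # never called from main
    "old_parser": [],
}
live = reachable(calls, {"main"})
print(sorted(set(calls) - live))  # ['legacy_export', 'old_parser']
```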

  • _halgari 9 years ago

    Sadly, Go then went and shot themselves in the foot. Originally (I assume this hasn't changed), you specified dependencies via git URLs. Sounds great, but then they went and said every dependency always uses the head of master. So now you're in the horrific situation of your deps suddenly changing whenever your build decides to pull the latest from the git repos.

    (Note: I don't use go, but this was the situation about 3 years ago when I last looked).

    • anonfunction 9 years ago

      I've always found this to be an overlooked, or at least misguided, design decision. IMO it would be very helpful to be able to specify a commit, tag or branch to pull down.

      The Go team's solution to this problem was allowing vendored dependencies. Basically you can put the dependencies inside your project in a directory called vendor which will be used instead of whatever is in your $GOPATH/src/. This allows you to pin deps to a specific version and not need to pull deps from the internet at all.

    • NightMKoder 9 years ago

      One of the things Rich argues in this talk is that semantic versioning is fundamentally broken. In fact, any versioning scheme that allows removal is broken. In some sense this model encourages Rich's ideas better - since you can't version directly, you either make a new library or maintain backwards compatibility.

      Of course the current 'go get' model has a lot of downsides, e.g. non-deterministic builds. I still think it's worth considering building on, rather than trying to fix semver. All that's really missing is a stable monotonic identifier - something as simple as git tree height may be enough.

      • _halgari 9 years ago

        Absolutely, I think the "perfect world" would be adopting some of the ideas Rich proposes but adding (git url + sha) as the dependency management system.

        Of course, for that to work well you'd also need to protect yourself from people deleting their GitHub repos or rewriting git history.

eternalban 9 years ago

I was surprised by this. (Rich is rigorously perceptive and characteristically peels conflations apart.) Here, he seems to be conflating linker responsibilities with (unit-module) dependency management concerns.

If deployment-unit-A depends on du-B, but system-language-level dependencies have additional subtleties, then that is fine, since someone like RH could be looking into stripping unreachable/dead code at link time, and I'd be completely surprised if that hasn't already been looked at extensively.

Yes, you would sacrifice some transfer bandwidth/time on artifact acquisition, but keep conceptual models local and their implementations simpler. Or, looking at it from another point of view, OP should consider that a 'dependency' is maximally the provider of objects in the context of its namespace.

greenyouse 9 years ago

When I was listening to the part about breakage vs accretion, I kept thinking back to the Nix pkg manager from NixOS. It does a pretty good job of separating out the artifacts by indexing them by version number and a hash of the username (i.e. artifact-name = sha($user) + $project-name + $semver). There have to be other examples of pkg managers that do this kind of thing too...
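A sketch of the content-addressed naming described above. The real Nix store-path recipe hashes the full build inputs, not just a username, so this only shows the shape of the idea:

```python
# Hash part of the inputs into the artifact name itself, so distinct
# versions get distinct names and can coexist side by side.
import hashlib

def store_name(user, project, version):
    digest = hashlib.sha256(user.encode()).hexdigest()[:12]
    return f"{digest}-{project}-{version}"

print(store_name("alice", "helloworld", "1.2.0"))
print(store_name("alice", "helloworld", "2.0.0"))  # a different name: both installable
```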

sgentle 9 years ago

I think there is a missing piece to this conversation, which is about the definition of namespaces. It's easy to conflate "problems with semver" and "problems with semver + the way my package manager handles versioning".

In npm, or with any package manager that allows multiple independent versions to coexist, the version becomes part of the namespace. Multiple different versions of a package can be installed at the same time and referenced from each other. It's no problem if you depend on B1.0 and C1.0, and they depend on D1.0 and D2.0 respectively; you just get all four installed.
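That B/C/D scenario can be sketched in a few lines; the registry and algorithm here are a toy stand-in, not npm's implementation:

```python
# Because (name, version) is the package's identity, B's D@1.0 and
# C's D@2.0 simply both get installed - no conflict to resolve.
registry = {
    ("B", "1.0"): [("D", "1.0")],
    ("C", "1.0"): [("D", "2.0")],
    ("D", "1.0"): [],
    ("D", "2.0"): [],
}

def install(deps, installed=None):
    installed = set() if installed is None else installed
    for pkg in deps:
        if pkg not in installed:
            installed.add(pkg)
            install(registry[pkg], installed)
    return installed

tree = install([("B", "1.0"), ("C", "1.0")])
print(sorted(tree))  # all four packages, including both versions of D
```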

In Rich Hickey's vision, we would move the compatibility contract into the package name, so if we want to make a new version 2 of helloworld that does not obey the contract of version 1, we call it helloworld2. In practical terms, this is no different from the above, it's just that we add the version number to the namespace through the package name rather than the package version.

Not surprisingly, this latter strategy is already used by most systems where versions aren't part of the namespace. Debian has python2 and python3. Python has urllib and urllib2 (and urllib3). Unix has mmap and mmap2. It works fine, but it's a bit of a hack.

If your version is part of the package namespace, semver works fine and solves the problem it was intended to solve. If, like in Java, every version has to share the same name, then I agree that semver doesn't really buy you that much. After all, it doesn't help you much to know that your unresolvable dependency tree is semantic.

It's not a coincidence that npm handles transitive dependencies in the way it does; it's the whole point. I'm sure it makes npm much more unwieldy to allow multiple different package versions, but having used it, it's exactly one of those "solves a problem you didn't realise you had" moments.

With that said, I definitely agree that more careful API design would allow far fewer backwards-incompatible (ie semver-major) changes, and more tightly defining compatibility (like with this requires/provides concept) is a good way to do that. Many web APIs already say things like "we will return an object with at least these fields, and maybe more if we feel like it", which I think is a good example to follow.
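The "at least these fields" contract mentioned above is sometimes called the tolerant-reader pattern. A minimal sketch (field names here are hypothetical):

```python
# The consumer checks only for the fields it requires and ignores extras,
# so the producer can accrete new fields without breaking anyone.
REQUIRED = {"id", "name"}

def read_user(payload):
    missing = REQUIRED - payload.keys()
    if missing:
        raise ValueError(f"response missing required fields: {sorted(missing)}")
    return {key: payload[key] for key in sorted(REQUIRED)}

# A field added in a later API version is simply ignored.
print(read_user({"id": 7, "name": "ada", "avatar_url": "..."}))  # {'id': 7, 'name': 'ada'}
```

This is the requires/provides idea in miniature: the reader states what it requires, the producer only ever grows what it provides.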

As far as changing package names, there's also an interesting question, not about how computers deal with those package names, but how we do. After all, why helloworld and helloworld2? Why not give it another name entirely? Part of that is because you want to make the connection between these two packages explicit. But if you change your package dramatically, perhaps that connection is not doing a service to your users.

One of the biggest complaints about major rewrites is "why does it have to be the new way? I liked it the old way!" If you genuinely believe the new way is better, why not let it compete on an even playing field under its own name? (And, if you know your new package can't compete without backwards compatibility, maybe don't make a new package.)
