Go: Don't Change the Libraries in 1.18
A good call, and probably the most reasonable decision in their situation.
OTOH this removes much of the point of using generics, and makes working with the stdlib from type-parametric code more painful.
Still great to see things improving. It took a mere 11 years.
There is a path to upgrading library functions to generics; it will come in a later release. It looked like those functions would get a default generic type of the empty interface. See the other issues linked from the original.
It took that long because you do not NEED generics.
Well, no language ever needs features. We could all write software in C or even assembly, but we don’t because abstractions are nice. Generics are an abstraction.
Take C# for example. The `System.Collections.Generic` namespace is full of generic collections (surprise!) that allow more type safe code. If I have a `List`, I can’t guarantee there isn’t something I don’t want in there (that could cause a runtime exception I don’t catch). But if I have a `List<IFeature>`, I know that everything in the list implements `IFeature` (barring compiler bugs and unsafe code).
To be fair, Go has typed maps and slices. You just can’t implement your own generic collections, or generic operations over collections. Those two will get you decently far.
But that’s all go has. It doesn’t have typed Sets, typed ordered maps, etc.
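For concreteness, here is a minimal sketch (assuming Go 1.18 syntax) of the kind of generic operation over collections the parent comment means; Map here is just an illustrative helper, not a stdlib function:

    package main

    import (
        "fmt"
        "strconv"
    )

    // Map applies f to every element of in and returns the results.
    // Before type parameters, this needed interface{}, reflection,
    // or a separate copy per element type.
    func Map[T, U any](in []T, f func(T) U) []U {
        out := make([]U, 0, len(in))
        for _, v := range in {
            out = append(out, f(v))
        }
        return out
    }

    func main() {
        // Type arguments are inferred at the call site.
        fmt.Println(Map([]int{1, 2, 3}, strconv.Itoa)) // [1 2 3], now as strings
    }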
I don't know how people were looking at these Java 2.0-esque kludges and still arguing generics were superfluous but I'm glad it's landed: https://github.com/elliotchance/pie
Abstractions are not always nice.
One of the nice things about go is it doesn’t have many abstractions and most of them are carefully thought out. I don’t want to have to inhabit somebody else’s abstractions all day at work, I want the language to get out of the way, which go does quite well IMO.
"and most of them are carefully thought out."
Is this Stockholm syndrome or something? For example: JSON.
Also, the time abstraction is completely bonkers.
If you want to multiply 500 milliseconds by a user-given value (say I want some number of half-seconds), you must first cast the user value to milliseconds and then multiply, so you are multiplying 500 milliseconds by (say) 4 milliseconds to obtain 2000 milliseconds.
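Concretely, the complaint is about code along these lines (a minimal sketch; the variable names are made up):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        halves := 4 // user-supplied count of half-seconds, a plain int

        // 500*time.Millisecond is a time.Duration, and Go won't multiply a
        // Duration by an int, so the count has to be converted to a Duration
        // too, which reads like "milliseconds times milliseconds".
        d := 500 * time.Millisecond * time.Duration(halves)
        fmt.Println(d) // 2s
    }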
And don't get me started on goroutines/channels. You are supposed to "share state by communicating" and not "communicate by sharing state". That's great. But at some level, you must bootstrap knowledge about the state of the channel, which is fundamentally shared state by the low-level nature of the channel. When you've worked in systems which have really thought out carefully what it means to have a no-shared-state system, the go system looks like it's been put together by either amateurs or fools.
There’s certainly lots to criticise in Go, as in any language, that’s why I said ‘most’, though I’d say it was peak HN hubris to say it was put together by ‘amateurs or fools’.
JSON is not related to Go, but perhaps you have some problem with the JSON parser? Works fine for me… Personally I don't find the time constants a problem at all. I'm not keen on the time parsing layout, but that's relatively minor. I don't have strong opinions about channels, and goroutines, used sparingly, I've found to be a nice balance of utility and simplicity for creating threads, though I recognise I'm not really qualified to argue about them.
It's not hubris if you've seen something else to compare it to. I'm not claiming to have made something better, I'm claiming to have used more well-thought-out systems. I know Rob Pike isn't an "amateur" in the strictest sense (he might be, at "designing a pl"). I'm not convinced he isn't a fool.
There seem to be a number of people defending the need to convert both multiplicands to milliseconds. Are people overlooking the fact that if you multiply two values with unit ms, your result has a unit of ms^2? The fact that you get ms out of the multiplication in Go is stupid and incorrect.
> you must first cast the user value to milliseconds
Incorrect. You must cast the user value to a time.Duration, since Go is a type-safe language, and like most type-safe [1] languages, its numerical operations generally require their operands to be of the same type.
You might be thinking of multiplying a user-provided value by a constant (of type time.Duration) defined by the time package, like time.Millisecond or time.Hour. I’m under the impression this is a very intentional choice to require user-provided values to be explicitly annotated with their units. Some implementations/variants of durations use nanoseconds, some milliseconds, some seconds, and requiring this assumption to be explicit in the code helps avoid critical bugs like the Mars Climate Orbiter failure. [2]
The time package (and other stdlib packages) definitely has some warts, especially the time format parsing, but I’ve always appreciated the approach taken for durations.
[1] We could get into more advanced type inference here like Rust’s From/TryFrom traits, but the debate between simplicity vs. expressiveness in Go has been retreaded here tens of thousands of times and I doubt either of us has anything new to say on the topic.
> Incorrect. You must cast the user value to a time.Duration, since Go is a type-safe language, and like most type-safe [1] languages, its numerical operations generally require their operands to be of the same type.
This is cargo-cult type-safety; stop a moment to consider the dimensions implied by the types.
“Units” in the last bit was maybe the wrong word.
To be clear: time.Duration is a type, and is the only part of this discussion where “type safety” is a factor.
time.Millisecond is a constant time.Duration whose value represents the duration of a millisecond. time.Millisecond et al are not a unit as in chemistry class, so you’re not supposed to get a time.Millisecond^2 by multiplying them. Just like in any other typed language, T * T -> T, not T^2 (where T = time.Duration), since it’s a type and not a unit.
time.Millisecond et al instead make you explicitly annotate the scale of user-provided durations. When you write

    d := time.Second * time.Duration(input)

you are assigning to d a value of type time.Duration equal to the duration of one second multiplied by the input. You are free to not use the constants defined by the time package and instead create time.Durations from the literal number of nanoseconds in the duration, if you so choose:

    d := time.Duration(input) * 1000000000

But since the time package already defines convenient constants for standard durations, those will be used instead and be far more explicit and readable.

This has nothing to do with dimensional analysis of units, of the sort you do in physical sciences. I don't know of any general-purpose languages that include dimensional analysis in the language or their standard library's datetime package, but I'd be curious if you know of one (that isn't specifically geared towards the sciences) - seems like the sort of thing Ada might include? In any case, I don't think a language focused on keeping a simple feature set like Go should be a trailblazer here. Durations are easy.
It's still bonkers to make you cast a non-duration input to time.Duration. This causes cognitive confusion, because you are effectively labelling the multiplicand as something that it isn't. In every typed PL the types carry semantic meaning, if nothing else, so if you can't see why this is a real problem, I can only say: Stockholm syndrome.
"numerical operations generally require their operands to be of the same type"
The correct decision would have been to make the * operator be allowed to operate on a time.Duration and an integer, just as you are uncontroversially allowed to operate * on a float and an integer -- to refute your statement and go even stronger I don't know of ANY pls that require * operands be the same type.
However, that is not what go chose. And we are talking about the choices go made. This is very much NOT well thought out and very ill-considered, especially since "the right thing" is so easy.
> I don't know of ANY pls that require * operands be the same type.
Haskell is an example, where the type of * is:

    (*) :: Num a => a -> a -> a

which says that the arguments must be numeric, and of the same type. I think this is the sensible choice from a strongly-typed perspective, and some operation which allows one to multiply a time value should be a separate thing.

But you can define your own * operator, separate to the one from Num, with any types you like, can't you? You might have to hide the one from the Prelude to use it.
> to refute your statement and go even stronger I don't know of ANY pls that require * operands be the same type.
This really comes down to a lack of experience on your part. Haskell requires both arguments to be of the same type and only in the case where it can reasonably infer from a literal that it could be coerced will it do so. OCaml, as another example, requires an entirely different multiplication operator for floats.
Requiring the same type for both arguments is not as rare a position as you've made it out to be, and not a showstopper in any case either.
> this causes cognitive confusion, because you are effectively labelling the multiplicand as something that it isn't
You make a fair point, but at that point the debate is about other choices made in the Go language disallowing implicit type conversions.
> The correct decision would have been to make the * operator be allowed to operate on a time.Duration and an integer, just as you are uncontroversially allowed to operate * on a float and an integer
What should the resulting type of the multiplication operator be when applied to a float and an integer? Should the type be different if the operands are swapped? Is it acceptable for the multiplication operator to not be commutative, given that we seem to be demanding a great deal of rigor from our type system? Should we only allow this implicit conversion if type inference is not being used?
> To refute your statement and go even stronger I don't know of ANY pls that require * operands be the same type
Rust, for one, but will test out some more when I’m not on a bus: https://play.rust-lang.org/?version=stable&mode=debug&editio...
> cannot multiply `i32` by `f32`
> Rust, for one, but will test out some more when I’m not on a bus
You can impl Mul for your own types. The operands' types don't need to match.
When I don’t like how something was done in a Go codebase, I’m generally SOL, since it was repeated so many times that there aren’t enough hours in the day to change them all.
When extremely motivated, I’ll script manipulation of the AST, but that’s a pretty extreme thing to have to do.
Scripting manipulation of the AST is something that tooling can make much nicer. That might not exist (yet) for Go, but I think there is a lot of value in making the language easy to understand for tooling. Compare for example the quality of IDEs for Java vs C++.
But this is of course orthogonal to generics, as you can make generics friendly to tooling as well, see Java.
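For what it's worth, the standard library already makes this kind of AST scripting fairly approachable in Go; a toy sketch that renames an identifier in an in-memory source string:

    package main

    import (
        "go/ast"
        "go/parser"
        "go/printer"
        "go/token"
        "os"
    )

    func main() {
        src := "package p\n\nfunc Foo() { Foo() }\n"
        fset := token.NewFileSet()
        file, err := parser.ParseFile(fset, "example.go", src, 0)
        if err != nil {
            panic(err)
        }
        // Walk the AST and rename every identifier named "Foo" to "Bar".
        ast.Inspect(file, func(n ast.Node) bool {
            if id, ok := n.(*ast.Ident); ok && id.Name == "Foo" {
                id.Name = "Bar"
            }
            return true
        })
        // Print the rewritten tree back out as Go source.
        printer.Fprint(os.Stdout, fset, file)
    }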
Abstractions add a great deal of "default", potentially very complex, behaviour. This is great when starting a greenfield application. Even in extreme cases like RoR they work and quickly get a whole lot of things running. Fantastic. My newfangled thingamabob does something!
And then the application grew and you have the guys who have to watch it and keep it online when it's business critical. They HATE abstractions. Because 7 abstractions used throughout the application means 2^7 = 128 possible combinations, 127 of which the developer hasn't thought through, at least ten of them a ticking time bomb. Out of those 128 cases, the developer has actually thought about one and verified maybe five more to be reasonable.
An easy case to show what's happening is the suggestion every new developer makes: "I'll just have a thread per connection, that's easy." And yes, it's very easy to get it running. It doesn't block during dev, it handles multiple connections and generally does the job. And it's absolutely guaranteed to crash your server for 10 different reasons in production. And yet, every new developer will (and should) do it.
There are just two camps of developers in the world; they don't agree, and this won't seriously change. Learn how the "other camp" thinks and you'll do better.
I think you're confusing abstraction for framework.
Go, like every language, has plenty of footguns and head-scratching behaviors as well. Don't confuse your familiarity and personal preferences for universal properties of the language.
> Still great to see things improving. It took a mere 11 years.
It took that much time because a little clique of people outside the Go team had way too much influence in the Go community. Let's see if they dump the language like they threatened to as a result of adding generics. Of course they won't.
Generics are there if one wants them, and they aren't like Java's, but more like Ada's. Ada got a lot of things right decades ago, including the way tasks work, from which goroutines should have taken a bit more inspiration.
Congrats to the Go team anyhow.
I suspect a lot of people outside the go-team made comments, suggestions, and proposals. But honestly, thinking back damn few of them resulted in changes to the language.
If you'll recall back in the day there were several different vendoring approaches, but ultimately the go-team proposed and implemented their preferred solution.
Similarly there have been a million generics & error-handling suggestions but none of them were introduced. It's basically lots of distracting discussions, and it doesn't feel so much like a community project that is actually seeking outside discussion and ideas. (No shame in that, but the pretense is disappointing).
Personally I'm waiting for the fuzzing-support to land in 1.18. Fuzz testing is basically magical, and amazing in reporting problems even in code with "high coverage". The generics might be nice, but off-hand I don't see that I'll be needing them in the immediate future in any of my personal projects - but I fuzz-test the hell out of a lot of my projects (which largely revolve around interpreters and compilers).
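For anyone who hasn't tried it yet, a minimal sketch of what the native fuzzing landing in 1.18 looks like, here pointed at the stdlib's go/parser as an arbitrary target (this goes in a *_test.go file):

    package parser_test

    import (
        "go/parser"
        "testing"
    )

    // Run with: go test -fuzz=FuzzParseExpr
    func FuzzParseExpr(f *testing.F) {
        // Seed corpus of known-valid expressions; the fuzzer mutates these.
        f.Add("1 + 2")
        f.Add("f(x)[0]")
        f.Fuzz(func(t *testing.T, input string) {
            // The only property asserted here is "doesn't panic or hang";
            // errors for invalid input are expected and ignored.
            _, _ = parser.ParseExpr(input)
        })
    }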
I actually think this is one of the reasons I like go so much, and now write code in it almost exclusively -- it isn't really a community project, but rather a labour of love from a small number of genuine experts.
If it actually were a community project, I think we would have more of the "x := make(someType) vs. x := someType{}" TMTOWTDI that you find in other mature languages, and go would be weaker for it.
TMTOWTDI = there's more than one way to do it
Those community efforts may not have been adopted, but that doesn't mean their existence hasn't informed the Go team's solution.
I can only repeat the top comment from the "8 years of Go" thread [1]: the team set a different set of priorities and was hugely successful in implementing them, which also brought a ton of popularity. Golang is not my cup of tea, but I very much see the large and underserved niche it filled.
Rob was opposing generics from the start. People were pointing out the need more than a decade ago, but he was firmly saying no. Now he left the team.
> Rob was opposing generics from the start.
That's simply not true.
> he was firmly saying no.
I'm sure you cannot find a single instance where he firmly says no to generics.
He wrote in the FAQ that Go might get generics one day and that they are continuing to think about it.
Here he is arguing in favor of generics: https://www.reddit.com/r/golang/comments/jditu9/what_do_gene...
And he was the one who invited Phil Wadler to help with the generics design. From the Featherweight Go paper: "Rob Pike wrote Wadler to ask: Would you be interested in helping us get polymorphism right (and/or figuring out what “right” means) for some future version of Go?" https://arxiv.org/pdf/2005.11710.pdf
He's left the team? He seems pretty active even in this recent github thread.
Yes, he is no longer on the core team.
https://www.reddit.com/r/golang/comments/ksx4q1/what_happene...
Go doesn't/didn't have generics? Why?
It was commonly stated (or claimed) that they didn't know how to add generics without sacrificing either compilation speed or runtime performance [1]. To my knowledge they ultimately didn't choose a particular implementation strategy and instead chose a design that allows multiple strategies as needed.
It also seemed like the original intent was to support generics and metaprogramming through code generation and AST manipulation, the argument being that code generation and AST manipulation are the most general way to build complexity and specialization from a simple set of primitives. I personally don't think that generics and generation are mutually exclusive, but if they were, I'd side with retaining simplicity over the new feature.
Because features in software don’t exist until someone adds them.
this is a really underrated comment. there's always a lot of entitlement when it comes to software. why doesn't x have y?
...because x doesn't have y yet....this stuff doesn't build itself, and it certainly doesn't get built overnight.
It’s pretty overrated really (even if grey). Mature projects and PMs treat submitted code as liabilities to be maintained, not free benefits. And every project is at the whims of its maintainer, who can absolutely reject any contribution they wish.
To suggest generics weren't here sooner because no one wanted to make the pull request is just dumb.
if someone's off on some tangent implementing a major feature without coordinating with the project maintainers and it subsequently gets rejected because it doesn't fit the constraints that they've stated for the feature...that's on them.
the go project is pretty upfront with how they go about deciding what will/wont get into the project, what process to follow, etc.
Posing it as "why doesn't go have generics" is bound to be reductionist, because it's too coarse of a question, and any real implementation winds up having a lot of nuance.
the question winds up just sounding entitled and petulant though, so if someone can't be bothered to ask a well informed question about why go doesn't have generics yet, the best answer really is "because it hasn't been added.", tautological as it may be.
What will likely happen is that the users will design generic libraries themselves. There will likely be a couple, and eventually they will converge on the most useful features. Then the Go team can just get inspiration from that I guess.
I'm excited about the prospect of having iterators. It will enable a different, more consistent programming style.
I'm also hoping for immutable collections, even though the lack of specialization will make it more difficult to implement them efficiently. They would enable a more robust way to build concurrent systems.
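Purely as speculation, one shape generic iterators could take with 1.18 type parameters (not an actual or proposed Go API):

    package iterx

    // Iterator is one possible shape for a generic iterator; Next returns
    // the next element and true, or the zero value and false when done.
    type Iterator[T any] interface {
        Next() (T, bool)
    }

    // Collect drains an iterator into a slice.
    func Collect[T any](it Iterator[T]) []T {
        var out []T
        for v, ok := it.Next(); ok; v, ok = it.Next() {
            out = append(out, v)
        }
        return out
    }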
> What will likely happen is that the users will design generic libraries themselves. There will likely be a couple, and eventually they will converge on the most useful features. Then the Go team can just get inspiration from that I guess.
Most likely yes, or as stated in the ticket, they'll take the existing proposal and implement it in golang.org/x/, where they've had some success fleshing out the design of new packages before incorporating into the standard library. It's worked out well, as early adopters can adopt and generally have a painless transition once included in the standard library.
I agree with iterators and immutable collections -- it's been painful working with trees in go, so hopefully that gets a bit easier now.
Honestly, I'm excited to see what will come of generic functions for channels. Being able to write a generic Dup(in chan T, out ...chan T), or a CtxRecv(context.Context, chan T) (T, error), would cut down on some boilerplate select statements.
+1 for channels, a simple task like duplicating a channel is surprisingly tricky, verbose and error prone
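A minimal sketch of the CtxRecv helper mentioned a couple of comments up, assuming 1.18 type parameters; the name, signature, and error choice are all illustrative:

    package chanutil

    import (
        "context"
        "errors"
    )

    // ErrClosed reports that the source channel was closed before a value arrived.
    var ErrClosed = errors.New("chanutil: channel closed")

    // CtxRecv receives one value from ch, or gives up when ctx is cancelled,
    // replacing the select boilerplate that today has to be rewritten per type.
    func CtxRecv[T any](ctx context.Context, ch <-chan T) (T, error) {
        var zero T
        select {
        case v, ok := <-ch:
            if !ok {
                return zero, ErrClosed
            }
            return v, nil
        case <-ctx.Done():
            return zero, ctx.Err()
        }
    }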
Or, if history is a guide, there will be a few really nice generic libraries written and a sizable minority of developers will adopt them as pseudo standard to fill the void. Then the Go team will do what they want, fracture the community, and alienate those who expected your described path to be the one taken.
I think they've been worse about this w.r.t. tooling (dep comes to mind). The closest thing I can think of along this path for the standard library would be how they've handled errors, deciding to go their own way instead of adopting Dave Cheney's pkg/errors[0].
They do definitely seem to have some NIH syndrome at times, but as time has progressed I can't say I haven't come to appreciate the decisions they've made that seemed controversial at the time.
I'm very glad they looked farther afield than pkg/errors as per https://news.ycombinator.com/item?id=28284119 .
I worked around that by adding a "dev" build tag that includes stack traces, and without for all other builds (as to differentiate between local and production environments).
Wait, isn't Wrap and other pkg/errors proposals used in the current stdlib? I thought they officially accepted that package as the default errors package.
No, they didn't adopt pkg/errors into the stdlib. pkg/errors did get updated to work nicely with the changes that they did incorporate
Interesting, is there any discussion available to understand their decision?
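For reference, the wrapping mechanism the stdlib ended up with (fmt.Errorf with %w plus errors.Is/errors.As), which pkg/errors was later updated to interoperate with:

    package main

    import (
        "errors"
        "fmt"
    )

    var ErrNotFound = errors.New("not found")

    func load(id string) error {
        // %w wraps ErrNotFound so callers can still detect it after annotation.
        return fmt.Errorf("load %q: %w", id, ErrNotFound)
    }

    func main() {
        err := load("users/42")
        fmt.Println(errors.Is(err, ErrNotFound)) // true, despite the wrapping
    }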
> What will likely happen is that the users will design generic libraries themselves
the Go team has already made the libraries, they're just publishing them in the /x/ namespace instead of the stdlib.
/x/ has everything that's not under the Go compatibility promise, among other things from the Go project.
Hm, shouldn't they first update the (most important/interesting) libraries in a fork before stabilizing generics?
I mean, designing a feature in a clean room is one thing, but using it in the standard library would be a good way to know if they messed up the design in some way.
Yes that would be a good idea. However, they could already have a generic collection lib in the same place (branch/repo) where the generics feature is created.
This is what Rob proposes in this issue.
> I propose we still design, build, test, and use new libraries for slices, maps, channels, and so on, but start by putting them in the golang/x/exp repository.
Also think this is a good call.
Though the second comment has merit too, I think it would be good to have some common abstraction right away, like we had io.Writer, io.Reader, etc. so everybody doesn't define their own, as it'll take time to crawl out of that.
Although in practice it did work out well with error wrapping which was first in libraries and then the stdlib defined an interface for them, which resulted in all libraries adopting that.
> Similarly, if constraints isn’t part of 1.18, there will be a lot of independent redefinitions of orderable types. I don’t think we’re going to learn from experience much that could change that package now.
tl;dr: We're shipping generics in 1.18, it's a huge release so let's wait a little bit until they stabilise and they're used in production before changing our stdlib.
Good call in my book, and I'm extremely excited to have generics finally. I thought they were going to ship with 2.0, but the sooner the better!
It's cool to hate on Go, it's taken over the system programming space for a reason, like it or not, and after working full time on Elixir it's hard not to think in map/reduce and other generic constructs, which were unreasonably verbose before in Go. Now the haters will have to focus on the "if err != nil" statement to pile on the language — though to be fair I'll expect some ergonomic improvement on that aspect as well, eventually.
With generics, Go will be my new Python, but with a decent dependency and deployment story, and I'll just need Zig for my low-level manual memory management needs. What's Rust?
EDIT: indeed being a bit cheeky with HN's favourite language isn't well received in this place.
> It's cool to hate on Go,
criticism =/= hate
Nobody "hates" on Go for the sake of it. If some people were not critical of that language, generics would have never been added at first place. I'm glad the Go team acknowledged the flaw instead of the gaslighting that has been going on for years in the go community from a few go users.
Calling it "gaslighting" is exactly what a hater would do. It's not very constructive.
Generics were a pain point, but people have been crapping on Go on this forum while people out there are using it to build stuff. It's not a perfect language, but if you read any HN thread about it it's like it's impossible for people to go past the lack of generics or the existence of nil.
It is a good use of the word "gaslighting", which I think can be defined as "an effort to convince someone that they do not know a thing that they do know". In this case, a lot of people know from lots of experience that writing type safe generic code is extremely useful, but the go community spent a long time arguing that people could not have had that experience because writing generic code isn't actually useful. This isn't really my impression of what the go team was saying, which I interpreted as more like "parametric polymorphism is useful but we aren't sure it is a good fit with the simple language design we are seeking to maintain". That's totally reasonable, everything is trade offs, but that's not the prevailing pushback you would get from the go community, you'd instead hear a message more like "if you'd rather write generic code instead of copy pasting constantly, you're a bad programmer who doesn't get it", which is not helpful and yes, I think gaslighting is a reasonable description.
The existence of nil is also a design flaw. But you're right that no language is perfect and neither of these flaws keeps people from building tons of useful stuff with the language.
I think there is definitely gaslighting in the other direction too, which is what you're highlighting: lots of people know from experience that Go is a super useful language, but then people come off as saying "your experience does not exist, it is not a useful language because of these big design flaws".
Correct me if I’m wrong, but it sounds like you’re not really claiming people say “your experience does not exist” or “you’re a bad programmer who doesn’t get it.” It sounds like this is a bit of exaggeration for effect?
I suspect you’re talking about something I’ve seen, which is people will say that your experience in other languages isn’t directly applicable to writing code in Go because it’s a different language and ecosystem.
This seems like something newcomers need to hear sometimes, and it’s also somewhat irritating if in your case it doesn’t seem like that kind of issue. When generic advice for beginners gets misdirected then it can seem insulting, but I usually just try to remember that they don’t know me and beginner mistakes are worth checking for. I can choose not to be insulted.
Getting people to actually hear what you’re saying when they’re pattern-matching on common questions can be difficult. I think we should still assume good faith, though, and “gaslighting” implies malice.
Those are summaries of responses I've gotten when discussing this a long time ago (before the language team decided to figure out how to incorporate generics). I'm not sure the exact actual wording, but that's very much how it came off to me, that there's no way to be both smart and experienced and simultaneously convinced that generics are an extremely useful language feature. These debates don't get so heated now, because even the go designers themselves conceded the point long ago.
Your more nuanced version of the argument is also wrong though. Experience from other languages is always applicable. Even for beginners, it is obvious and correct to ask "wouldn't we save a lot of repetition if we could write a generic version of this function instead of tediously rewriting the same thing for a bunch of types?". The correct and very reasonable response is "yes, but the designers of this language chose not to support that functionality in an effort to maintain simplicity". The right answer is not "no, this language is unique such that that technique is not applicable to it". That's gaslighting, and much worse when directed toward beginners who will just conclude that they must be wrong, when they aren't.
I agree with your last paragraph in general, but in this case that's not what was going on. Saying that the language would not actually benefit from polymorphism was not true, it was just a rationalization for leaving out a useful feature. It's the difference between "yeah that would be useful but we don't think the complexity it brings is worth it" vs. "that is not useful in this language for reasons you aren't wise enough to understand".
I started coding in 1986, naturally I used plenty of languages without generics to build stuff, that doesn't mean it still makes sense in the 21st century to design strongly typed programming languages without generics.
Some people crapping on Go in this forum are also people who have been using it to build stuff, though. It's inaccurate to assume these are distinct groups.
> it's taken over the system programming space for a reason
I'm curious what you define as systems programming, because Go certainly isn't a systems programming language by the classic definition.
Go has a complicated runtime and GC. It's really not in the same category as C/C++/Rust, but more like Java/C# , just without a JIT.
The only domain where Go is a go-to language is the Kubernetes ecosystem. It's also decently popular for networking heavy applications / microservices / server applications, because the runtime is well suited for those domains.
Go is basically a systems programming language if you consider """the cloud""" a system, a big part of the ecosystem is built on it. It is basically between C/C++/Rust and Java/C#, which is "good enough".
In that case JavaScript is a systems programming language if you consider the web a system.
Golang having GC and a heavy runtime make it unsuitable for systems programming IMO.
It isn't unsuitable per se, but it definitely lacks the knobs of something better like D, C# or Nim.
https://www.packtpub.com/product/creative-diy-microcontrolle...
Not exactly; the web runs on browsers and web engines that are written in C++, and servers that are mostly C and/or C++. On the other hand, "the cloud" runs (partially) on k8s and friends, which are written in Go (which runs on servers, written in C and/or C++).
We’ll soon have Java and C# without a JIT once they are done working on their AOT compilers.
The C# one seems to be further along, though, due to having supported the Mono/Xamarin AOT runtime for a long time, and it's really easy to set up: just a package reference in your .csproj. It also has the benefit of coming with a modern, usable language.
Lack of null safety is still a major shortcoming in my book compared to Rust/Swift/Typescript.
I never had a real issue with null safety in 3 years of using Go regularly. Maybe due to my long experience with Java, which might have sharpened my eyes for null safety.
Lack of enums and pattern matching is IMO the bigger issue. I miss that regularly.
null safety: Can you provide us a reference to a working definition of the term?
For example in Typescript, you can specify whether a parameter is allowed to be null in the signature of a function. If you do, the compiler will make sure that you don't accidentally pass a null value to that function.

    function foo(x: number) { return x + 1; }
    let y = null;
    foo(y); // compiler will flag this

In Go:

    func foo(x int) int { return x + 1 }
    var y *int = nil
    foo(y) // compiler will flag this

That wasn't the greatest example, as Go doesn't really have any way to represent it. If it did, it might be something like this, the idea being to allow variables to hold and pass pointers to structs like now, but ensure they are never nil:

    type MyStruct struct { Name string }
    var-not-nil myStruct = &MyStruct{"Hello"} // this is a "not-nil" pointer variable
    myStruct = nil                            // compiler would catch this

Now add an error to foo and have x=0 as a valid return value, or even worse, having to deal differently with different errors.
null/nil can only occur in scoped use cases (Option<Foo>, Foo | null, Foo?) and so the compiler can warn you when something could be missing and you don't handle it without being incredibly noisy.
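As an aside, 1.18 type parameters at least make the "scoped optional" shape expressible in Go, though unlike Rust or TypeScript the compiler won't force callers to check it before use (a sketch, not a proposal):

    package option

    // Option holds either a value or nothing; callers must check ok themselves,
    // since Go has no compiler-enforced exhaustiveness here.
    type Option[T any] struct {
        value T
        ok    bool
    }

    func Some[T any](v T) Option[T] { return Option[T]{value: v, ok: true} }
    func None[T any]() Option[T]    { return Option[T]{} }

    // Get returns the value and whether it is present.
    func (o Option[T]) Get() (T, bool) { return o.value, o.ok }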
At least now we can write a generic set that supports all the things you want a set to do, instead of using map[T]interface{} and a lot of for loops.
> map[T]interface{}
If the semantics desired are merely those of a simple set, why instantiate storage for the value side of the map? An interface{} consumes 16 bytes. Instantiating a struct{}, i.e. with no fields inside, consumes 0 bytes.
Ah yeah, that's right, it was map[T]struct{} - it's been a bit since I had to do it.
I knew it was something with curly brackets instead of the map[T]bool I started using.
Because when a language doesn’t supply common features out of the box, users implement them for themselves, and inevitably a lot of them do a bad job of it :)
More reasons to have a generic Set type in the standard library. That way you know the implementation is correct every time.
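A minimal sketch of such a set built on the map[T]struct{} idiom discussed above (illustrative API, not a stdlib proposal):

    package set

    // Set is a generic set built on the zero-byte map[T]struct{} idiom.
    type Set[T comparable] map[T]struct{}

    // New builds a set from the given items.
    func New[T comparable](items ...T) Set[T] {
        s := make(Set[T], len(items))
        for _, it := range items {
            s[it] = struct{}{}
        }
        return s
    }

    func (s Set[T]) Add(v T)           { s[v] = struct{}{} }
    func (s Set[T]) Contains(v T) bool { _, ok := s[v]; return ok }
    func (s Set[T]) Len() int          { return len(s) }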
Go has taken over Kubernetes-related stuff, that is all.
Although I like it being used for systems programming stuff like TinyGo on embedded or the F-Secure TamaGo unikernel, that is hardly taking over a domain where C and C++ still rule, and will keep ruling for decades to come, despite the increasing usage of Rust in the domain.
That's because you can put out there why you like a language and people will upvote you. Dump on other people's choice in a systems language and you will likely suffer some down votes. Cheeky has a price.
Brad Fitzpatrick stated in one of his talks that a version of Go that comes with generics would eventually be called 2.0. Maybe they'll do the version jump if there are backward-incompatible changes to the library in 1.19.
When was that?
IIRC that was the general feeling then, but the impression has changed in the meantime?
No need to disrespect Rust. I like both. And certainly they both have their proper use cases.
JavaScript/TypeScript changes so much that a codebase will look very different from one year to the next. JS/TS has come a long way and needed to make substantial change, but I appreciate the slow moving, methodical nature in which the Go team moves the language and environment forward.
The negative is that some issues are really hard to fix, because you don’t want to break the backwards compatibility guarantee.
For example, there is a bug in Go's built-in HTML templating: it mishandles JavaScript backticks.
If you do
<script> var string = `http://google.com` </script>
in an HTML template, it will interpret // as a comment and return
<script>var string = `http:</script>
This is now really hard to fix. It would mean rewriting the JS parser from scratch, but that is a giant change (currently the JS parser is really simple, and backticks are hard to do properly without reimplementing it all from scratch); the more reasonable choice would be to just ban backticks in HTML templates, but that would break backward compat.
So there is basically an unfixable bug sitting in go html templates.
Can't they add an option or something that specifies how backticks are handled, with the default being the current behaviour (so nothing breaks), so that any new code, or existing code that knows about the issue, can set it to a better value?
If it’s bad enough, why not release a v2 of the library?
The standard library seems to only be versioned with the language. An external v2 could work, but I’m not sure there’s any precedent for a v2 of a standard library package.
> I’m not sure there’s any precedent for a v2 of a standard library package.
urllib/urllib2 in Python is one example. There are others; it's not really unprecedented.
There is no precedent in Go. It's been discussed a few times here and there, but there are no real plans AFAIK.
> the more reasonable choice would be to just ban backticks in HTML templates, but that would break backward compat.
Given script tags are allowed, this seems like the least reasonable choice. What other arbitrary JS features should be disallowed because the parser isn’t spec compliant?
Why can't they just make a new version of the library for fixes?
> JavaScript/TypeScript changes so much that a codebase will look very different from one year to the next.
Only if you want it to change. Our codebase is plain JS/TS, we don't use many recent features, and it's fine.
Are we just ceding Javascript to Microsoft at this point due to the ubiquity of Typescript?
If anything, TypeScript is quite deferential to TC-39. Microsoft is part of the standardization process, but certainly not the main/only driver. I don’t know where you got the impression that this is a MS issue, but TS changes because JS changes not the other way around.
TypeScript does not output obfuscated JS. The output is completely human-readable, even if certain things don't match the source exactly. TS explicitly tries to stay in line with JS, syntax-wise.
It’s already mainstream to minify your JS anyway as part of a long build process. JS transparency is used by very few people.
No. Typescript is really strict about being only type annotations for JavaScript (except `enum`), so I'm not sure why you'd think that makes them control JavaScript.
It wouldn't be a loss if browsers could natively understand Typescript annotations, and even take advantage of them for the JIT.
Looking forward to the day, actually.
I was really looking forward to getting some basic generic standard library funcs. Guess what I'm getting now is a ton of third-party libraries, each with a slightly different approach, slightly different bugs, and update cycles that will nicely inflate the amount of tech debt in existing codebases.
I guess from the standpoint of a language designer it makes life a bit easier to not do anything and just cherry-pick the winners of the various generic attempts the community will create, but as an app developer I find this disturbing. I also question the sudden rush to release. Generics took years to come to fruition; another year for a decent stdlib won't really hurt anyone, and those who really, really want it could enable it through build constraints.
C++17 introduced std::optional and std::variant, but they're not used by the standard library in 2021, as far as I've seen (unlike Rust, which is better off for using them). It feels like a missed opportunity to produce more ergonomic APIs (though the inability to produce an optional<T&> is limiting, and std::variant is inefficient at compile time and possibly at runtime, and apparently can't be fixed because of ABI). I'm not a Go user, but I dislike multiple competing approaches to problems without clear guidance or fully embracing newer approaches.