Go 1.21 Release Candidate
go.dev

This is at least the biggest release since 1.18 with generics, possibly bigger. I’m excited because the changes demonstrate a transition from the traditional Go philosophy of almost fanatical minimalism to a more utilitarian approach.
Loop variable capture is a foot-gun that in the last six years has cost me about 10-20 hours of my life. So happy to see that go. (Next on my list of foot-guns would be the default infinite network timeouts — in other words, your code works perfectly for 1-N months and then suddenly breaks in production. I always set timeouts now; there’s basically no downside)
Interesting to see them changing course on some fundamental decisions made very early on. The slices *Func() functions use cmp() int instead of less() bool, which is a huge win in my book. Less was the elegant yet bizarre choice — it often needs to be called twice, and isn’t as composable as cmp.
The slog package is much closer to the ecosystem consensus for logging. It’s very close to Uber’s zap, which we’re using now. The original log package was so minimal as to be basically useless. I wonder why they’re adding this now.
I’ve already written most of what’s in the slices and maps packages, but it’ll be nice to have blessed versions of those that have gone through much more API design rigor. I’ll be able to delete several hundred lines across our codebase.
What’s next? An http server that doesn’t force you to write huge amounts of boilerplate? Syntactic sugar for if err != nil? A blessed version of testify/assert? Maybe not, but I’m happy about these new additions.
> I always set timeouts now; there’s basically no downside
Beware of a naive http.Client{Timeout: ...} when downloading large payloads. I've always set http.Client.Timeout since day one with Go due to prior experience, but was bitten once when writing an updater downloading large binaries, since the Timeout is for the entire request start to finish. In those scenarios what you actually want is a connect timeout, TLS handshake timeout, read timeout, etc.
https://blog.cloudflare.com/the-complete-guide-to-golang-net... does a good job explaining how to set proper timeouts, except there's a small problem: it constructs an http.Transport from scratch; you should probably clone http.DefaultTransport and modify the dialer and various timeouts from there instead.
In general, setting timeouts beyond the entire-request timeout is pretty involved and not very well documented. I wish that could be improved.
> An http server that doesn’t force you to write huge amounts of boilerplate?
I just started my first Go tutorials this week. One of them was go.dev's Writing Web Applications [0]. I was actually struck by the lack of boilerplate (compared to frameworks I've used in Java/Python/etc.) involved.
I get that it's a toy example, but do you know of any better write-ups on what a production Go web server in industry looks like?
I don't think there necessarily is a default production webserver setup. People use different routers or frameworks, or go bare bones because they can.
You asked for an example, and here is one. This is my side project "ntfy", which runs a web app and API and handles hundreds of thousands of requests a day and thousands of constantly active socket connections. It uses no router framework, and it has a modified (enhanced) version of http.HandlerFunc that can return errors. It also implements an errHTTP error type that allows handler functions to return specific HTTP error codes with log context and an error message.
It is far from the most elegant, but to me Go is not about elegance, it's about getting things done.
https://github.com/binwiederhier/ntfy/blob/main/server/serve...
The server runs on https://ntfy.sh, so you can try it out live.
> This is at least the biggest release since 1.18 with generics, possibly bigger.
I sort of see what you're saying, but then again, the addition of a couple of small generic packages (slices, maps, cmp) and one larger package (log/slog) isn't exactly a huge amount of new surface area. Definitely not as big a qualitative change as generics themselves, which I think added about 30% more content to the Go spec.
> The slog package ... I wonder why they’re adding this now.
Because it's very useful to a ton of people, especially in the server/service world where Go is heavily used. To avoid a 3rd party dependency. To provide a common structured logging "backend" interface. See more at https://go.googlesource.com/proposal/+/master/design/56345-s...
I agree we can be enthusiastic, but the Go team is still spending a lot of time getting APIs right, finding solutions that fit well together, and so on. I don't think it's the downward spiral of "let's pull in everything" we've seen in P̶y̶t̶h̶o̶n̶ some other languages.
> This is at least the biggest release since 1.18 with generics, possibly bigger.
> the changes demonstrate a transition from the traditional go philosophy of almost fanatical minimalism, to a more utilitarian approach.
This change demonstrates that to stay a mainstream programming language, you can't preach minimalism; you have to adopt a utilitarian approach.
Even Java and Python have default network timeouts. I don't know why that's the case though.
https://ashishb.net/all/infinite-network-timeouts-in-java-an...
The popular Python “requests” HTTP library doesn’t have a default timeout. There’s a 2015 GitHub issue asking for a default timeout, or even an opt-in environment variable to avoid breaking API compatibility. There are a lot of comments on the issue, but no commitment to implement or to close it as “won’t fix”.
Typo: I meant no default network timeouts :/
As far as I can tell, the network timeout (specifically SetDeadline, SetReadDeadline, and SetWriteDeadline) is handled by the Go runtime, not by the OS. Given how complex real-world systems are, I wouldn't hold my breath on that one.
You might be interested in this discussion https://github.com/golang/go/discussions/6022 about http serve mux.
You seem to have linked to an issue about error messages in text/template; did you intend to link to something else?
Probably meant to link this. https://github.com/golang/go/discussions/60227 It’s an interesting discussion about making changes to the default router.
It is interesting to see them add things like the "clear" function for maps and slices after suggesting to simply loop and delete each key one at a time for so long. Is this a result of the generics work that makes implementation easier vs. the extra work of making a new "magic" function (like "make", etc.)?
That `clear` on a slice sets all values to their type's zero value is going to be extremely confusing, especially coming from other languages (Rust, C#, C++, Java, ...) where the same-named function is used on list-ish types to set their length to zero.
Doubly-so when `clear` on a map actually seems to follow the convention of removing all contained elements.
Sure, although as a Go user, the behavior described is exactly what I’d expect. These new functions are no different from functions that you could write yourself.
I'm a go user, and think it's dumb that:

    clear(f)
    fmt.Println(len(f))

will have different results if f is a slice and a map.

I guess, but that seems expected to me at this point, and consistent within the semantics of how slices and maps work (and other values).
Maps are kind of like

    type map *struct{ len int; ... }

Slices are kind of like

    type slice struct{ len int; ... }

We get a lot of convenience by having the pointers auto-dereferenced, but the cost is that the semantics are still different and there are no syntactic markers to remind us of the fact.

I don't think any language has really given us something that is completely intuitive here. Python's semantics with the list type are a constant surprise to newcomers. C++’s semantics surprise newcomers. Rust's semantics surprise newcomers. Surprises all around. The best you can hope for is something that is internally consistent.
The slice in Go is more or less equivalent to &[] in Rust or std::span in C++. The whole idea of passing a pointer by value is key to understanding the semantics of most modern programming languages. Like, is Java pass-by-value or pass-by-reference? You can argue the point, but whatever label you decide is appropriate for Java, it’s useful to think of Java as passing pointers by value. Same with Python, Rust, Go, etc. This is not intuitive for people who are new to programming.
> The slice in Go is more or less equivalent to &[] in Rust or std::span in C++.
Not really, because they are mutable, they can mutate the underlying memory, and they can re-allocate. They are a weird mix of &mut []/Vec or std::{span,vector}.
In contrast, a Rust &[] may mutate the underlying storage (if it's an &mut []), but it cannot spin up new storage on its own and start a new life without a backing structure – and I'm not utterly familiar with std::span, but I would wager the semantics are close.
Go slices can, which is why they are always tricky, especially for beginners. Not only does = not really do what is intuitively expected, and not only will every beginner be bitten in the ass by forgetting the `x =` in `x = append(x, y)`, but it is impossible, when calling a function that expects a slice, to know whether the function only wants a view on some memory or actually expects to modify it; a capital difference that is very clear in the Rust and C++ type systems.
To be honest I kinda hate the reply to “X is like Y” comment when someone says “X is not like Y because of difference Z”. It's just… so pedantic. The whole reason we say “X is like Y” instead of “X is the same as Y” is because X is not the same as Y. I’m just really tired of seeing this response on HN over and over. I was pretty damn explicit when I said “more or less” and you’re here to argue about whether it is legal for me to say “more or less” in this context. I mean, geez, what a drag.
If you talk about how Go slices are tricky for beginners, but you cite C++ as some kind of gold standard against which Go should be compared, then I think you’ve lost the plot—C++’s type system is a complete and utter trash fire for people who are new to programming. Rust, as well, is very difficult for people to get into. Even the Python semantics for lists get people tripped up all the time.
I bring this up because there is no language that gets things right for beginners and still provides the tools which professional programmers expect to have. And if you want to pick an example of a language that is particularly bad for beginners, C++ is it. C++ is shit for beginners. Complete shit. I bring up the Python example because it’s something I’m always explaining to people who are learning Python — Python is ok, but slicing in Python creates new arrays containing a copy of the slice's contents, and you have to know why these two lines behave differently:

    a = [[]] * 5
    b = [[] for _ in range(5)]

The nuances of how references and values work are something that you have to work through, and then you have to come to terms with the conventions for the particular language you are using. IMO, Go’s slices are fine… you really just have to be careful about aliasing a slice you don’t own, but then again, that’s true for languages like C++, Python, Java, and C# as well. Rust is the only one that’s really different here.
> The whole reason we say “X is like Y” instead of “X is the same as Y” is because X is not the same as Y
Being able to change the underlying data is a pretty big difference. Technically, their only solid common point is that they address contiguous spaces in memory.
> you cite C++ as some kind of gold standard
I never did; I highlighted the difference between immutable views vs. whatever Go slices are.
> I think you’ve lost the plot
No need for the snark there.
> Technically, their only solid common point is that they address contiguous spaces in memory.
That’s a very major thing to have in common. Definitely not a minor detail, for sure.
When you're comparing two cars, the fact that both have four wheels and drive on a road is not that compelling.
In this scenario, we are comparing two cars, a bicycle, a jet ski, and three types of airplane. Yes, the cars are similar, within that context. Many languages, like Python and Java, do not have an array slice type. And the similarities between C++, Rust, and Go are relevant—the length is a property of the slice itself, and since the slice is passed by value, it is not modified by a function that accepts a slice as an argument, even if the objects the slice point to are modified by that function.
If you see a different context, then you misinterpreted what I wrote.
It is easy—trivial, even—to imagine scenarios where a particular “X is like Y” does not make sense. What you should do, as a reader, is try and understand what the writer means, rather than try to figure out some way to interpret a comment so that it is wrong, in your view.
The easy way out—saying “X is not like Y because of difference Z”—does not meaningfully contribute to the discussion.
> The slice in Go is more or less equivalent to &[] in Rust or std::span in C++.
My understanding is, to use the Rust/C++ term, slices in Go are owned, but they are not in Rust or C++. That is, they're a pointer + length in the latter two, but a pointer, length, and capacity in Go.
The type in Go does not carry ownership information. It is not a useful distinction to say that slices are "owned" in Go.
Types in C++ don’t carry ownership information inherently either, but they’re still thought of in these terms. I know Go doesn’t often use these terms, which is why I clarified.
I think the distinction is useful specifically because it explains why Go slices work differently than in at least those two languages.
Sure, there may be a right way to say this.
I have a particular axe to grind when it comes to the word “ownership” of objects in programming. In C++ and Rust there is a very natural sense of ownership in that the owner of an object is who may deallocate the object, and that ownership may be shared with std::shared_ptr<T> in C++ or Rc<T> / Arc<T> in Rust. Ownership is such a useful concept in these languages because it is generally true that somebody must deallocate the object, and it must happen safely.
As a very natural consequence, people who spend long hours working in C++, Rust, C, or other similar languages start to associate, very closely, the notions of ownership and correctness. And indeed, ownership is broadly useful outside C++, Rust, and C. Even in a garbage-collected language like Java or Go, it is generally useful to have clear ownership. You don't modify objects that you don't own, or use objects outside their scope.
But occasionally, you come across a piece of code where ownership gets in the way. Perhaps some garbage-collected algorithm that transforms data with pointers going all over the place. It probably sounds like a mess, but that is not necessarily true either—it can be perfectly good, correct, readable code.
So while ownership is a useful concept for talking about specific pieces of Go code, or specific pieces of Java code, it is not applicable to all Go or Java code, and that’s fine. It’s kind of like talking about code in terms of functions—nearly every language on the planet makes heavy use of functions (or some equivalent), but it’s also true that code does not have to be organized in functions, and you will occasionally see code that does not use functions.
Every language has sharp edges, but Go's whole MO is to avoid rabid footguns at the expense of verbosity (IMO). The for-shadow issue that's fixed in this release is a great example of Go deciding to do the intuitive thing rather than the "correct" thing, because that's how people work.
I don't think the implementation details matter to a user of a map or a slice (or an array for that matter) - they're language builtins (as opposed to span, vector and map in c++ which are library types).
In my experience, go has tons of footguns that come because of the verbosity. Rather than having clear abstractions that handle edge cases for you, you get to reimplement these things yourself every single time.
Case in point, clear. Or "typed nils". Or accidentally swallowing errors because you had to handle them manually. Or reimplementing higher-level job control on top of channels every single time.
>Or reimplementing higher-level job control on top of channels every single time.
Can you please explain this?
Maybe generics have fixed this, I threw in the towel on golang before they released them.
But as an example, if you wanted to have any sort of higher-level management of goroutines (for example, a bounded number of background workers) you get to rewrite or copy-paste that code every place you want to accomplish that. A library couldn't exist to abstract away the idea of a pool of background workers because it can't know in advance what types you want to send over your channels.
Again, I wouldn't be surprised if post-generics there's a library now to do this for you. But for years if you wanted anything higher level than raw channels, you're basically on your own.
Just saw your reply now. Thank you.
> Python's semantics with the list type are a constant surprise to newcomers.
Care to elaborate?
The builtin clear() will handle cases like deleting NaN from a map.
Go slices are passed by value so there's no way for clear() to resize the underlying array without reassignment.
I suppose it could have been x = clear(x) or clear(&x), but certainly if you understand Go semantics then seeing any function call do Foo(slice) already signals that the call can't modify the length since there's no return value.
This is a great example of why I dislike Go. It is not obvious that a slice is passed by value while a map is not or why. Therefore every action on it feels a bit weird because of that, and now you have functions like "clear" that take a very non-obvious action. Personally, I'd rather have pass-by-value return an error and only allow pass-by-reference (better: they should have had maps and slices be pointers). I'm not sure I'd ever use a function that set every value to its zero type.
I agree the semantics seem weird, I've occasionally wanted the equivalent of x = clear(x) but I can't think of a time when I've wanted to set all the values to the zero value.
The bug doesn't seem to discuss use cases for it either. The most I could find is: https://github.com/golang/go/issues/56351#issuecomment-13326...
Which boils down to "doing what clear(slice) does cannot be implemented efficiently today" but I'm not sure how having an efficient way to do something folks don't want is useful?
There's already a memory clearing optimization in the compiler: https://github.com/golang/go/issues/19266
So yeah I'm not sure under what situations folks will use clear(slice).
You often do it in Go to avoid pointing at something needlessly which would delay or even keep that something from being garbage collected.
That's actually a great explanation of why it's not easy to implement the clear function the way it makes sense for slices. However, this is a built-in, not a normal function, so they could make it do whatever they like, including doing the intuitive and desired thing, no? It seems to me that they've just created another "loop variable gotcha" type situation...
C#’s array/span.Clear() does exactly the same - it zeroes them out.
It sounds like they're inheriting the naming from the calloc function, which allocates and then zeroes the memory. It lines up with the Go devs' backgrounds.
From the spec [1], it was because the loop doesn't work to clear a map with a NaN key.
> It is interesting to see them add things like the "clear" function for maps and slices after suggesting to simply loop and delete each key one at a time for so long.
Slowly walking back dogmatic positions is just how the Go team works.
I say this as a person that wrote Go full time for a handful of years.
No, it's because a use case was discovered that the for loop approach can't handle: NaN keys.
In my experience, that's exactly how this plays out every single time.
Dev: Can we have a function to clear a map?

Go: No, it's easy enough to write the 5 lines of code to just do it yourself every time.

Dev: Okay, I don't see why I should have to write those 5 lines every time but fine. Isn't looping over everything going to be slower than just… having a function that can empty the internals?

Go: We've implemented a compiler optimization to detect this and rewrite it to the faster code it would have been if we were to implement it.

Dev: Isn't that… way harder than just writing the method? Anyway, I noticed this solution doesn't actually always work because of this edge case.

Go: Just handle the edge case every time then.

Dev: That's the point. I can't.

And around and around we go.

You're distorting the real story to fit your bias. Once the NaN edge case was discovered, work to deal with that edge case was started.
Can you link to any unit of work that solves this edge case? Because all I can find are bugs and issues created 2015/2016/2017 that were closed and unresolved.
This isn't remotely close to the first or even the tenth time I've seen this exact pattern play out. Finally there's some straw that forces the golang team to backpedal on a dogmatic position, but along the way there's dozens of comical defenses of the current state of things.
So you say, with no actual reference. Maybe what you refer to happened in your head, and not in the real world.
1. Generics
2. Clear
Are the two already covered not enough?
I would like to see some content from the Go team on generics or clear that fits your claim. Yes, there are many in the community who speak the way you suggest, but you don’t seem to know much about the Go team's POV.
https://news.ycombinator.com/item?id=23033183
A pretty apt write up.
But I accept there’s probably not an amount of evidence to change your belief on Go’s dogmatism. And that’s okay! You like a language. That’s great!
From what I can tell, the issue here is that Rob Pike thinks the label "generics" is inaccurate? Seems like a far cry from what you have accused them of. I think there's not only a lack of evidence to convince me, I think your claim is just straight up unsubstantiated. I think an unbiased, responsible observer would have to conclude similarly to me.
> I think an unbiased, responsible observer would have to conclude similarly to me.
Unsurprising conclusion. Enjoy the day!
I used to really like Go. Now that I don't work with it, I find that the further I go on without it, and with using other tools, the less and less I'd want to go back.
I would argue Go's inability to manage NaN keys is irrelevant to the desire for "clear", in that I would argue that the NaN keys issue should be fixed _regardless_ of clear.
One aspect of this is that it was formerly impossible to delete NaNs from a map[float64]T, unless you had the nan already.
Even with the NaN, the NaN wasn't equal to itself, so it still wouldn't delete. Really, they just should have forbidden float64 key'd maps, but too late for that, I guess.
It might also be that they’ve worked their way down the priority list and are getting to these features that are largely just to tidy up code.
Clearing a container is usually a much simpler and faster operation than looping through all of its elements and removing them individually. That's not a question of tidying something up.
There were compiler optimizations for clearing by iterating. I haven’t looked at the code, but I suspect this won’t be much more efficient than iterating was with the optimizations.
I expect both will result in the same code; the only difference is that the clear built-in can handle maps with NaN keys.
> after suggesting to simply loop and delete each key one at a time for so long
Those were always bad alternatives to a real design problem, they just didn't have a good alternative to offer at the time.
Huh, I'm glad to see generic Min/Max functions, but the fact that they're built-ins is a little odd to me. I would have expected them to put a generic math library into the stdlib instead. The fact the stdlib math package only works with float64s has always struck me as a poorly thought out decision.
Making them builtins allows them to work as you’d expect with cases like

    func clamp(x float64) float64 { return max(0, min(1, x)) }

With ordinary functions, the arguments are assigned types too soon, and you get integer types for 0 and 1 in the above code. In C++ you might make the types explicit:

    template<typename T> T clamp(T x) { return std::max<T>(0, std::min<T>(1, x)); }

That’s not meant to be exactly the way you’d write these functions, but just a little bit of sample code to show how the typing is different.

That doesn't seem to be true, unless I'm misunderstanding something. https://go.dev/play/p/ymM0tD3aGYg?v=gotip
Obviously these sample functions don't take into account all the intricacies of float min/max functions.
You can

    const x = min(a, b)

assuming a and b are const.

I can't think of a use case for that. If all the inputs are consts, then you know the values and can just assign x to be the lesser of a or b. Am I missing something here?
Consts are build-time constants; they are not necessarily the same value across all build configurations.
That’s a good point.
> I can't think of a use case for that.
It helps to document intent.
Probably not so useful for min, but it can be more useful for more complex functions.
The proposal[0] gives a rationale. Using builtin functions lets them be variadic without allocating a slice.
While I suspect open coding may make the optimization a little easier, there's no reason it couldn't optimize out a slice and fixed 2-3 iteration loop with the same result.
The proposal's real conclusion was "the decision cannot be resolved by empirical data or technical arguments."
What sort of use case do you see for non-float64 math operations in a Go application?
Well, the obvious ones are of course Min and Max functions, which is resolved with this. Other ones I commonly find myself wanting to use with integers would be math.Abs and math.Pow I guess. Otherwise they are mostly functions useful with floats, so ultimately I understand the logic, though even in that case, it would be nice if they were usable with float32s as well without casting back and forth.
Personally I try to avoid using floats for calculations if I can (unless it's obviously warranted), I've encountered far too many foot guns from using them, though honestly the same can be said about integers in some situations too. I wish there was a package like math/big that was more accessible, I find the current interface for it pretty abysmal.
Float32 is pretty popular in graphics.
I'm a bit surprised that the slog package was added to the stdlib, but it does seem to use the API that I think is the most ergonomic across libraries I saw in Go (specifically, varargs for key values, and the ability to create subloggers using .With), so I guess it's nice most of the community will standardize around it.
If all goes well, you won't have different libraries using different loggers anymore, in some not too distant future, which should improve easy composability.
I literally just updated all of my golang logging to use zerolog so I could get severity levels in my logs. Bad timing on my part! I guess I'll re-do it all with slog; I prefer stdlib packages to third-party packages.
Tomorrow we will see a blog post titled “Why I always use 3rd-party dependencies instead of the stdlib”
ideally an slog handler is built for zerolog, so that you can use slog BE and keep the zerolog FE
I have mixed feelings about it. if nothing else, the name..."slog" isn't exactly the word i want repeating to myself as I'm working.
A little honesty is a good thing
You can alias it when importing it then.
I think it's pretty exciting. Don't have to keep using regex or weird parsing to get the key values from logs that one wants.
Also a bit surprised how fast it was added to the stdlib, but perhaps there was a lot more consensus on the api compared to other golang proposals.
I wonder if it's possible to use slog in 1.20 already, is there a back-port?
I'm changing logging on the service right now and it just makes sense to use it now, but entire service can't move to pre-release version of go.
Indeed it's in "golang.org/x/exp/slog"
https://pkg.go.dev/golang.org/x/exp/slog#hdr-Levels seems to fall into the same trap that drives me _starkraving_ about the 18,000 different(!) golang logging packages: there doesn't seem to be a sane way of influencing the log level at runtime if the author doesn't have the foresight/compassion to add a command-line flag or env-var to influence it. A similar complaint about the "haha I log json except this other dep that I import who logs using uber/zap so pffft if you try to parse stdout as json"
That bugs me too. I consider it a red flag for a library to log to anything except a `log.Logger` passed in from the caller. Now I'll expand that to include a `slog.Logger` as well. If the library is logging directly to stderr or stdout, that is a sign that it probably has other design issues as well.
Put the logger in the context
Isn't it considered bad practice?
yes, in general
the context stores request-scoped data, whether or not the logger is a request-scoped value is a grey area
and to reply to sibling comment, opentelemetry is basically a house of antipatterns, definitely do not look to it for guidance
> opentelemetry is basically a house of antipatterns
"Look on My Works Ye Mighty and Despair!"
https://github.com/open-telemetry/opentelemetry-collector/tr... -> https://github.com/open-telemetry/opentelemetry-collector-re... ... and then a reasonable person trying to load that mess into their head may ask 'err, what's the difference between go.opentelemetry.io/collector and github.com/open-telemetry/opentelemetry-collector-contrib?'
    $ curl -fsS go.opentelemetry.io/collector | grep go-import
    <meta name="go-import" content="go.opentelemetry.io/collector git https://github.com/open-telemetry/opentelemetry-collector">

Oh, I see. Thanks.

I don't enjoy the otel APIs, but they are implicitly scoped; contexts are a natural place to store them.
"The context stores request-scoped data" might be another Go-team dogma due for course correction RSN.
huh? there's no dogma involved here, it's just an observation of the properties of the type
a context is created with each request, and destroyed at the end of it
and values stored in a context are accessible only through un-typed, runtime-fallible methods -- not something you want to lean on, if you can avoid it
In practical terms there are pros & cons I guess, but in general doesn't loading a Context with session variables make code more concise and easier to understand? DB connections, loggers, and the like. If you really want to pass Contexts around in all your API signatures, then at least try to make the most of it.
Are pitfalls ever actually encountered ?
if you pass a logger to a foo as a parameter to the foo constructor, then missing a logger is a compile-time error
if you pass a logger to a foo as a parameter in the request context, then missing a logger is a run-time error
Fer sher. But a passed-by-Context logger could be used (for example) to override a library package's default (stdlib?) logger.
But what is the SOP / Best Practice here ? Do many libraries have some sort of SetLogger(..) initialization call, so that loggers don't clutter the API ? Or are error returns info-(over-)loaded ?
it's pretty straightforward, everything that logs takes a logger as a dependency during construction
It's already idiomatic for OpenTelemetry, and otel has use cases that overlap slog.
The new experimental fix for loop variable capture [0] is huge; this is the biggest footgun with the language in my experience.
I hope the experimental fix makes it into the next version of Go by default.
Wow, that’s a blast from the past. Those code examples look exactly like the var to let changes in ES2015…
Nice, my push for actually using the sha256 instructions on amd64 finally got released. 3x-4x increase in hash speed on most x86 which is really nice for content addressable storage use cases like handling container images.
Got a link to the PR? Curious to see how this is implemented.
It looks like https://go-review.googlesource.com/c/go/+/408795
Seems like these changes only benefit you if you have an Intel processor. If you have an AMD or Arm processor, you won’t see any difference.
Also, interesting to see assembly again after many years. Haven’t touched that since college during a compilers and assembly course.
Edit: never mind, amd has implemented these “sha-ni” instructions since “Zen” [1]
And adding that they already had support for ARM sha256 instructions https://github.com/golang/go/blob/master/src/crypto/sha256/s...
Huh, that is interesting how they do that. They are enabling SHA instruction support based on CPUID and without respect to the value of GOAMD64. I did not realize Go was doing that.
AFAIK, the sha256 extension isn't a part of any of the x86_64 microarchitecture levels, so a cpuid check is most appropriate here at the moment.
Fair point. But what surprised me was the way HasAVX2 is getting set. It is set on the hardware that has AVX2, even if you set GOAMD64=v1.
Yup, that's standard, including in other ecosystems. It's what I do in ripgrep for example when your target is just standard `x86_64` (v1). GNU libc does it as well. And I believe Go has been doing it for quite some time. (The specific one I'm aware of is bytes.Index.)
This was especially important back before the days of v1/v2/v3/etc of x86_64, since Linux distros distributed binaries compiled for the lowest common denominator. So the only way you got fast SIMD instructions (beyond SSE2) was with a CPUID check and some compiler features that let you build target specific functions. (And I'm not sure what the status is of Linux distros shipping v1/v2/v3/etc binaries.)
In case anyone else was wondering what this is about, here's some useful background https://github.com/golang/go/issues/45453
I enjoy Go so much. It is almost perfect language for getting things done, but I still can't understand some design choices.
Does anyone know why Go uses env variables (like GOOS and GOARCH) instead of command line arguments?
Env vars make it easier to automate in CI. The actual script to build for each os/arch is the same but only the vars change. It's convenient. You can always prefix the command with the env vars on the same line if you want a one-liner.
It could make it easier for build systems to be multi platform. You don’t have to keep track of custom args and add them to every call, you can just set the environment once.
For a language focused on building networking services across different architectures and platforms, if it were any different, that'd be the first thing to hate in Go.
I guess so you can configure them on e.g. a build server instead of tweak your build command, but then, neither is particularly portable.
I assume so that it can be sticky across invocations and is easy enough to debug using `go env`
Worth noting that the release announcement was written by Eli Bendersky, of https://eli.thegreenplace.net/ fame. It's a fantastic technical blog with literally decades of content.
These new packages, like slices and maps, were a long time coming. So glad it's finally here.
I cannot even begin to tell you how many different itemInSlice functions I've written over the years.
We've had `slices` and `maps` in the `exp` tree for awhile; I think they're pretty widely used already.
tbh whenever I write one of these it makes me wonder if I can accomplish the same logic in a better way.
Overall, a release more for engineering than language. Even the new APIs are mainly optimizations, and optimizations are netting ~10% (pretty good for a mature toolset).
The WASI preview shows Google is committing engineering resources to WASM, which could grow the community a touch.
FWIW the WASI support is 99% a community contribution, so unfortunately it's not much of an indicator of Google's commitment.
A good first step for better WASM support, however it's currently incompatible with tinygo's WASM target.
For example, I'm working on a custom WASM host (non-browser) and have a tinygo WASM package with import bindings like this:
//go:wasm-module rex
//export wait_for_event
func wait_for_event(timeout_usec uint32, o_event *uint32) bool
Both these comment directives are tinygo-specific, and now Go has added its own third, different directive. When I add Go's desired `//go:wasmimport rex wait_for_event` directive, it complains about the types `*uint32` and `bool` being unsupported. Tinygo supports these types just fine and does what is expected (converting them to uint32). On the surface I understand why Go complains about it, but it's such a trivial conversion that the compiler could convert them to `uint32` values without requiring the developer to use unsafe pointer conversions and other tricks.
Hopefully I can find a way to keep both tinygo and Go 1.21rc2 happy with the same codebase going forward and be able to switch between them to evaluate their different strengths and weaknesses.
The type conversion will improve in new releases. FYI, recent TinyGo releases support go:wasmimport too. The desire is definitely to allow users to use either, or at least easily migrate. Thank you for trying it out!
I wonder if the new stdlib logger is featured enough to get rid of logrus/zerolog.
I'm wondering the same. Anyone already played for some time with the pkg?
There’s been an emphasis in slog on Handler composition over directly implementing a ton of features. Personally I love it - there are things I’ve needed, that slog can do, that few other loggers make easy/possible.
Zerolog will still be relevant for raw performance (slog is close to zap on perf - doesn’t win benchmarks, doesn’t look out of place either), fewer really need it but some really do.
I've been using it for a few weeks now. Overall pretty happy with it. Has good default API, and can be basically arbitrarily extended as needed. We even have a custom handler implementation that allows us to assert on specific logs being emitted in our stress/fuzz testing.
Release candidate
Seems like a really substantial release to me. The new built in functions min, max, and clear are a bit surprising, even having followed the discussions around them. The perf improvements seem pretty great, I’m sure those will get much love here.
Personally, I’m most excited about log/slog and the experimental fix to loop variable shadowing. I’ve never worked in a language with a sane logging ecosystem, so I think slog will be a bit personally revolutionary. And the loop fix will allow me to delete a whole region of my brain. Pretty nice.
Am I reading it correctly that `clear` does different things for maps and slices? Why doesn't it remove all the items from the slice like it does with the map, or set the values in the map to the zero value like it does for slices? That seems like an easy thing to get tripped up on
You can't "remove all items from the slice"; you can only change the length to 0: "slice[:0]".
That _is_ removing all the items from it; my point is that if you pass a map with `n` entries to clear, you end up with a map with 0 entries. If you do the same with a slice with `n` elements, I'd imagine most people would expect to end up with a slice with 0 elements, but instead you have a slice with `n` copies of the zero value.
But it's not "removing items", at least not for all meanings of the word "removing". You can see this with something like:
    s := []string{"hello", "world", "foo", "bar"}
    fmt.Println(s) // [hello world foo bar]
    s = s[:0]
    fmt.Println(s) // []
    s = append(s, "XXX")
    s = s[:2]
    fmt.Println(s) // [XXX world]

Which will print back "XXX world" because it's using the same array, and nothing was ever "deleted": only the slice's length was updated. This is why "delete(slice, n)" doesn't work and it only operates on maps.
I suppose clear(slice) could allocate a new array, but that's not the same behaviour as clear(map) either, and doesn't really represent the common understanding of "clearing a slice". The only behaviour I can think of that vaguely matches what "clearing a slice" means is what it does now.
Okay, yeah, that definitely isn't what I expected. It's pretty wild to me that `s = s[:2]` will ever work fine if `len(s) == 1`; I would have assumed that it would always be the same regardless of how the slice was created. Playing around with it, it seems like this means that if you pass a subslice to a function, that function can get access to things from the entire slice, including the portions that weren't in the slice passed in[1]!
I think I understand now why `clear` can't work on slices the way I think it should, but only because slices themselves don't work the way I feel even stronger that they should.
Slices in Go are a tad counter-intuitive, I agree, but the approach does make sense I think. It allows you to use "dynamic sized arrays" for most cases like you would in Python and not worry too much about the mechanics, at the price of some reduced performance, but in cases where this kind of performance does matter it allows you to be precise about allocations and array sizes. So you kind of get the best of both.
Anyhow, this explains it in detail, if you're not already familiar with it: https://go.dev/blog/slices-intro
clear couldn't allocate a new array unless it was s = clear(s) like append. Maybe that would have been better semantics though.
> The new built in functions min, max, and clear are a bit surprising, even having followed the discussions around them.
Was that discussion pre-generics?
Most of the functions and libraries introduced in Go 1.21 are stuff people already put in community libraries (lodash being probably the most popular, despite its utterly nonsensical name not relating to anything it does), so it is just potentially cutting extra dependencies for many projects.
No, it was a recent discussion, here: https://github.com/golang/go/issues/59488
You mean samber/lo? What is nonsensical about the name?
As a non-developer who has only gone as far as "hello world" in Go, I'm baffled by the idea that the log/slog thing is new - that seems like an absolutely basic language feature. TBH I'd say the same about min/max, but could forgive those being absent since Go isn't known for being numerically-focused...
> As a non-developer who has only gone as far as "hello world" in Go, I'm baffled by the idea that the log/slog thing is new - that seems like an absolutely basic language feature.
Then you'd be even more surprised to learn that the vast majority of languages do not have a standard logging library in core.
Most have one or a few common libraries that the community developed instead, but they are not in the stdlib, and if the stdlib has one it's usually a very simple one (Go had a standard logger that was too simple, for example).
I have evidently been spoiled by Python and its abundance of batteries.
Python does not include a structured logging package as part of the stdlib as far as I know. What package are you thinking of that does what slog does?
Just the standard "logging" - might not meet the definition of "structured logging", but at a glance it seems about as featureful as what is being added to Go right now.
Python has no equivalent of logger.With or other k/v pairs, which is what makes it structured logging and why it's interesting at all. Go has had unstructured logging since its early days.
I don't really follow what the benefit of the k/v thing is relative to just passing in a suitable string. I'd just assumed that the automation of "debug", "info" etc was what made it structured.
There's been a "log" package since forever, but slog adds structured logging with fields and some other things. I don't think many standard libraries have that built in?
> that seems like an absolutely basic language feature
Most languages have no logging "system" built in at all. Honestly it's really quite rare.
Most languages include unstructured logging libraries in the standard library, including Go. Structured logging is usually provided by third party libraries.
The only other one I know of would be C# with Microsoft.Extensions.Logging. It's so ubiquitous that 3rd party libraries work with its abstractions. slog is a really good thing for Go.
Please add "RC1" to post title. Currently it's misleading than a stable release is here.
I love the new stdlib additions- I've been pulling in these dependencies since generics arrived and it'll be nice to have them built-in.
An extremely generic release.
> New slices package for common operations on slices of any element type. This includes sorting functions that are generally faster and more ergonomic than the sort package.
> New maps package for common operations on maps of any key or element type.
> New cmp package with new utilities for comparing ordered values.
Pun intended? =D
GOOS=wasip1 is pretty cool IMO (disclaimer: I work on wazero.io)
hiyo
Glad to see the crypto performance improvements, I noticed a major regression in the performance of some auth code with 1.20
This is a big release. Lots of new packages. The language is changing
It is a big release, and the number of new stdlib packages (4) is relatively high for a Go release. That said, apart from the addition of some minor builtins (min, max, clear), the language isn't changing. That happened back in 1.18 with the introduction of generics.
Go releases feels it’s has changed massively when reading the release notes but when coding it’s just like every other day
Why is map copy "dst, src" vs "src, dst"?
Copy operations in Go are normally destination first, source second. This includes builtins like copy() and library functions like io.Copy(). Making it "src, dest" would make this one case the opposite of all the others.
Note that the order mimics variable assignment. You copy an integer with:
    var src, dest int
    dest = src // dest first, src second

I appreciate the consistency.

Thanks, "mimics variable assignment" is a good way to remember it
To match the semantics of assignment: dst = src. Multiple languages model operations like this in assignment order.
Really glad to see some of these new packages (sort, map, etc) making use of generics. Should reduce the need for a lot of helper functions.
Also really excited to see loop capture variables finally getting sorted out. It is a constant pain point with new devs, and I have no good answer when they ask "but WHY is it like this?"
More information about loop capture here for those interested https://github.com/golang/go/discussions/56010
> "but WHY is it like this?"
Because, historically, it's been like that all over, it's not just Go. For example, Python has the same loop variable reuse.
Probably comes from a time when compilers were a lot simpler, and all local variables were allocated stack space for the whole duration of the function call.
The new built in functions for slice manipulation are a welcome addition!
Nice - but hang on a second, I thought you cannot shadow language keywords in Go. So projects bumping to 1.21 in the future should be aware that you will run into compile time errors all of a sudden… doesn’t that actually break the compatibility promise?
max := something()
https://go.dev/doc/go1compat

You can shadow any builtin function.
https://go.dev/play/p/pG3Qi8G4dS5

    package main

    func main() {
        arr := make([]int, 0, 10)
        make := 1
        arr = append(arr, make)
        len := func(arr []int) int { return -1 }
        println(len(arr)) // Output: -1
    }

Builtins aren't keywords.
A function is not a keyword.
I am honestly surprised nobody mentioned the intention of Go team to make multipath TCP the default in later releases
Can you elaborate on why this is surprising for those who don’t fully understand the differences?
I don't think Multipath TCP has been tested in enough environments to become the default yet. It's compatible with TCP, yes, but it's mostly useful for e.g. mobile devices that have multiple links like Wi-Fi and 4G, and it lets users maintain a TCP connection to a service even when moving across networks. Go seems to be server-oriented first, and there are some potential downsides to multipath TCP in a datacenter environment (e.g. potentially higher CPU usage).
"In a future Go release we may enable Multipath TCP by default on systems that support it."
This could be five years from now. Or maybe never.
From what I heard, the reason for not defaulting is that it's not yet accepted across different platforms, especially Windows, and most who'll need this are data centers. Five years is too long, since the Linux kernel has already accepted MPTCP.
This is great, but why do I get the sense that Golang's development is so slow? Ex:
Java: We added structured concurrency and virtual threads!
Golang: We added a min function!
Most of the standard lib still doesn't properly support generics, and at this pace, it will be another 5 years at least before it does.
Touché. When I noticed how happy I was that they added a min function, Stockholm syndrome came to mind.
Tbh I don’t see most of the standard lib benefitting from generics. For example, json.Unmarshal wouldn’t be dramatically better with generics — in practice, I rarely see runtime errors where I passed the wrong kind of thing to that function.
I personally love the slow pace of go development. I love that I don’t need to refactor my code every year to take advantage of whatever new hotness they just added. The downside is that stuff that’s annoying now will be annoying forever (like those times when you want a more expressive type system), but I’m willing to live with that.
Because great care was taken for the 1.0 release to be a complete design. Most language changes since then have just been fixes. That's why Go 1.0 code is basically the same as Go 1.21 code.
> New built-in functions: min, max and clear.
What a mistake.. reserved keywords are words I can no longer use for myself...
Zig does it better by requiring a @ prefix for most of their builtins.
min and max are predeclared identifiers in the universe block, not reserved words. https://go.dev/ref/spec#Predeclared_identifiers
You can continue to declare your own entities with these names.
They're not reserved keywords. Existing/package defined min/max functions would take precedence. They have the same semantics as `append`
You refactored your code and think you're calling your own ``min`` function, but no, it'll call the builtin one, without warning you.
I don't like this design..
The compiler will tell you if the types aren't compatible, and this is only for primitive comparable types. What `min()` implementation could you have that even does something different?
A heap?
Not too familiar with go toolchains, but I bet there's a linter that will warn you about shadowing builtins.
Yep: "variable append has same name as predeclared identifier (predeclared) go-golangci-lint"
if you have so few tests and so little code review that this matters then I am not sure what to suggest
Qt uses qMin and qMax, it may have been nice if go went with like gMin and gMax
For anyone that misses their Objective-C days.
Wait is this now heap allocating a value in every iteration of every loop? I hope that allocation is optimized out in every case where there isn't a closure over the loop variable?
They discuss this here: https://github.com/golang/go/wiki/LoopvarExperiment#will-the...
In fact, you can enable warnings/logs that indicate whether code that is affected by the loopvar experiment results in a stack-allocated or heap-allocated loop variable: https://github.com/golang/go/wiki/LoopvarExperiment#can-i-se...
I imagine that the current workarounds for this issue also end up with heap-allocated variables in many cases.
Generally it is optimized out.
The fine details resemble the analysis of correctness - all the evidence shows people expect per-iteration semantics with considerable frequency, and don’t rely on per-loop semantics with measurable frequency. But it’s impossible to completely automate that assessment. Likewise, it’s impossible to automatically detect code that will spuriously allocate because of the semantic transition.
Regardless of how the compiler is optimising this, I 100% agree that the old behaviour is unexpected and it’s caught me at least once. Really happy to see this (until recently) unexpected change.
I don't actually use Go, but I have used many other languages where it is like the old behavior. I learned once that I have to build the closure correctly to get the value I want and know now to do it. Don't have any statistics on whether I made that mistake again, but anecdotally I can't remember a case where I have. In their analysis they have found a lot of cases with that mistake, though. So I guess fair enough.
However, I wonder what it will mean if someone who mostly writes Go will now use another language? Will they be more prone to make that mistake?
It's hardly standard behaviour. I mean in Java for example there didn't used to be value types, so everything was a pointer and the effect of this would be the same as the new behaviour in Go.
The only lesson to be learned here is that languages are different. But I think the new Go behaviour is more ergonomic.
In Java you can only close over final variables, so you can't close over the loop variable at all. (Unless that changed since last time I used Java, which - granted - was a long time ago.)
The problem being fixed doesn’t affect only closures, but the body of the for loop itself. So for example taking the address of the loop variable would unexpectedly return the same value for the duration of the loop.
Of course it'll be optimized. It's just semantics that's changed. Compiler will make sure to copy variable value to new address.
I didn't see this optimization when I read the overview. I also hope that the compiler is smart enough to avoid this.
The way to view it is "unless there is syntactic sharing, it is a for loop, same as before". The compiler uses a syntactic test (with little knowledge of control flow or value use) to exclude loops from the change. This excludes most loops.
After the change, escape analysis figures out if the changed iteration variable actually needs heap allocation; in an internal sample of code that was actually buggy (i.e., biased, guaranteed to have at least one loop like this) for 5/6 of the loops escape analysis decided that heap allocation wasn't needed.
The reason this optimization isn't part of the language change proposal is that escape analysis is "behind the curtain"; ignoring performance, a program should behave the same with or without it, and it is removing heap allocations all over the place already. Escape analysis is also extremely difficult to explain exactly, so you would not want it in the spec, and "make escape analysis better" (that is, change it) is one of the prominent items in the bag of things to do for Go.
I really hope Go gets something like MERN for Node.js or Django for Python, so I can use it as a ready-to-go backend framework. There are gin, echo, etc., they're just not as widely adopted as MERN or Django.
In some of my use cases I need to make sure the source code is fully protected; neither Node nor Django can do that well. Go would be perfect as it is compiled, however there is nothing like MERN or Django in Go (yet). Another option would be Java, but I do not know Java.
Can we get arenas yet?
Arenas are available as an experiment. See (e.g.) https://www.reddit.com/r/golang/comments/ztaxhu/docs_for_the....
Was there a push to get it released today?
No, release candidate 2 was released today; Go 1.21 is not yet released.
Based upon the date as .21 on the 21st.
So I guess with new builtin functions we will be breaking backwards compatibility?
If you already have your own functions or variables named max, min, or clear in-scope, they will shadow the new built-in functions and your code will continue to use your own version of the functions. No breakage to existing identifiers that match the new function names.
(This is the same behavior as the append built-in function today, for example. These things in Go are _not_ reserved keywords, they are simply global functions that can be overridden at other scopes.)
You’re right. Backwards unfriendly is maybe a better way to say it.
min and max are common variable names so depending on the version of go and the scope you should expect min and max to mean different things.
No reason these functions couldn’t have been part of the stdlib.
Let's be honest, it's a terrible choice.
In what way? Overall as a language, identifier shadowing is a feature of the language in nested scopes. Are you saying built-in identifiers (that aren't language keywords) should be treated specially and work differently than user-declared identifiers?
It's terrible, IMO, because every package with a generic word as its name is now a variable name I can't use. A simple example which I find unreasonable:

    package main

    import (
        "fmt"
        "path/filepath"
    )

    func main() {
        filepath := filepath.Dir("./")
        // filepath is now a string -- can't use the filepath package anymore
        fmt.Println(filepath)
    }

Now I have to make up variable names because `filepath` will shadow the package. How is this sensible in any shape? Zig just does this better by having @ in front of builtins.

you're complaining that the nomenclature for packages is not differentiated in a way that allows user code to have variable names with the same name as package names
you can still allow this, of course, by aliasing the package import
but needing to do this is "terrible"
is that correct?
Wrote a blog explicitly asking for some of these changes last year: https://www.lremes.com/posts/golang/
Nice to see they're going in a good direction.