Golang vs. C# (.NET 5.0) at Benchmarks Game

benchmarksgame-team.pages.debian.net

101 points by iamdual 4 years ago · 113 comments

throwaway894345 4 years ago

I can't imagine a better setup for a language flame war :). I really like debating languages, so I hope it doesn't go that direction.

One of the standard caveats with this particular benchmark game, with respect to Go, is that idiomatic optimizations are prohibited. To use the btree example: Go's memory management is low-latency and non-moving, so allocations are expensive. Any Go programmer writing a performance-sensitive btree implementation would pre-allocate the nodes in a single allocation (an absolutely idiomatic and trivial optimization), but the benchmark game requires that the nodes be allocated one at a time. In other words, the C# version is idiomatic, but the Go version is expressly contrived to be slower; not a very useful comparison.
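For the curious, a minimal sketch of what that single-allocation idea looks like in Go (the names and the depth are illustrative, not taken from any benchmark submission):

```go
package main

import "fmt"

// Node is a binary-tree node like the one in the benchmark.
type Node struct {
	left, right *Node
}

// buildPooled builds a perfect binary tree of the given depth, drawing
// every node from one up-front allocation. A perfect tree of depth d
// has 2^(d+1)-1 nodes, so the pool size is known in advance.
func buildPooled(depth int) *Node {
	pool := make([]Node, (1<<(depth+1))-1) // the single allocation
	next := 0
	var build func(d int) *Node
	build = func(d int) *Node {
		n := &pool[next] // hand out the next pre-allocated slot
		next++
		if d > 0 {
			n.left = build(d - 1)
			n.right = build(d - 1)
		}
		return n
	}
	return build(depth)
}

// count walks the tree so we can check its shape.
func count(n *Node) int {
	if n == nil {
		return 0
	}
	return 1 + count(n.left) + count(n.right)
}

func main() {
	fmt.Println(count(buildPooled(10))) // 2^11-1 = 2047 nodes, one allocation
}
```

The benchmark rules instead require each node to come from its own allocation, which is the pattern the comment above argues Go's allocator is worst at.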

Mad respect for .Net though; it's really impressive, I like the direction it's going, I'm glad it exists, etc.

  • lalaithion 4 years ago

    The point of the btree example is to test how good programming languages are at allocating tree-like structures that can't be preplanned. It's a valid argument that this is a rare real-world requirement, but it's not contrived to be slower.

    • throwaway894345 4 years ago

      > The point of the btree example is to test how good programming languages are at allocating tree-like structures that can't be preplanned.

      Forcing allocations for every node isn't justified by a desire to demonstrate dynamically sized binary trees. A naive dynamically-sized tree would just keep a list of node buffers and allocate a new node buffer every time the previous one fills up (perhaps with subsequent buffers doubling in size). The benchmark is, by all appearances, contrived to be slower.
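      A sketch of that buffer-list idea, with each new buffer doubling in size (all names and sizes are illustrative; this is not code from any submission):

      ```go
      package main

      import "fmt"

      // node is a stand-in for a dynamically allocated tree node.
      type node struct {
      	left, right *node
      }

      // chunkArena hands out nodes from a list of buffers, allocating a new
      // buffer whenever the current one fills up, with each buffer double the
      // size of the previous one -- the naive scheme described above.
      type chunkArena struct {
      	chunks [][]node
      	used   int // slots used in the newest chunk
      }

      func (a *chunkArena) alloc() *node {
      	if len(a.chunks) == 0 || a.used == len(a.chunks[len(a.chunks)-1]) {
      		size := 64 // starting buffer size (arbitrary)
      		if n := len(a.chunks); n > 0 {
      			size = 2 * len(a.chunks[n-1]) // double each time
      		}
      		a.chunks = append(a.chunks, make([]node, size))
      		a.used = 0
      	}
      	chunk := a.chunks[len(a.chunks)-1]
      	n := &chunk[a.used]
      	a.used++
      	return n
      }

      func main() {
      	var a chunkArena
      	for i := 0; i < 1000; i++ {
      		a.alloc()
      	}
      	// 64+128+256+512 = 960 slots, so 1000 nodes need a fifth buffer.
      	fmt.Println(len(a.chunks)) // 5 buffer allocations instead of 1000
      }
      ```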

      • igouy 4 years ago

        > … a desire to demonstrate dynamically sized binary trees…

        Cart before horse — the binary trees are justified by a desire to demonstrate memory allocation.

        http://hboehm.info/gc/gc_bench/

        • throwaway894345 4 years ago

          1. That’s plainly not the case here since other languages are allowed to use custom allocators

          2. Why use a binary tree benchmark in the first place if you’re going to limit the implementation to certain naive implementations (and again, only for one language)? Why not just measure allocations outright or at least call the benchmark “allocator performance”?

          3. Showing allocation performance doesn’t help anyone understand the actual performance of the language, which is transparently what everyone uses these benchmarks for. If they wanted a general idea for language performance they would allow trivial, idiomatic optimizations. A benchmark that shows allocation performance is worthless, and a suite of benchmarks that includes a benchmark for allocation performance but not GC latency is worse than worthless: it’s misleading because latency is the more important concern and it’s what these bump allocators trade in order to get their fast allocation performance.

          • lalaithion 4 years ago

            Looking at the fastest Java, Haskell, Racket, OCaml, JavaScript, C#... they're all doing per-node allocation using the standard allocator, and all beating Go. The limit is not just for Go. I don't know why you think that Go is the only one being disadvantaged here.

          • igouy 4 years ago

            1. Please be specific.

            2. Again, not limited to only one language.

            3. You are allowed an opinion.

            • throwaway894345 4 years ago

              1. Rust, C++, C

              2. It is in practice, but regardless of the size of the cohort there’s no compelling reason for these limitations.

              3. And you’re entitled to ignore reason. It cuts both ways.

              • igouy 4 years ago

                1. Which program?

                2. Again, not limited to only one language. You are allowed an opinion about what is or is not compelling.

                3. As before.

                • throwaway894345 4 years ago

                  1. btree; see the Rust version which uses a bump allocator for example

                  2. Doesn't matter whether it's exactly one language.

                  > You are allowed an opinion about what is or is not compelling.

                  It's not a matter of opinion. The definitional purpose of benchmarks is to indicate something about reality; if you contrive rules that cause the benchmarks to deviate from reality, they lose their utility as benchmarks. I've demonstrated that the rules are contrived (i.e., they prohibit real-world, idiomatic optimizations), so I think we can say as a matter of fact that these benchmarks aren't useful.

                  Of course, no one can force anyone else to see reason (but I don't have any interest in talking with unreasonable people).

                  • igouy 4 years ago

                    1. bumpalo: Star 586, Fork 52 — a library, not an implement-your-own custom allocator.

                    2. You have repeatedly claimed "only for one language".

                    > I think we can say as a matter of fact…

                    Apparently that is your opinion.

                    • throwaway894345 4 years ago

                      1. See all of the other arguments in this thread about "contrived rules"

                      > You have repeatedly claimed "only for one language".

                      How many languages are in practice prevented from using pre-allocation? How big is the cohort? Does it matter if it's exactly one or if it's two or three? Why are you fixating on this relatively irrelevant point rather than the more substantial point that has been reiterated a dozen times?

                      > Apparently that is your opinion.

                      In the same sense that "the sky is blue" is merely my opinion.

                      • igouy 4 years ago

                        > How many languages are in practice prevented from using pre-allocation?

                        How many provide GC?

                        The substantial point is that you wish special treatment for Go lang.

                        > … "the sky is blue" is merely my opinion…

                        When all can see the vibrant orange red sunset.

                        • throwaway894345 4 years ago

                          Not special treatment, just rules that allow for idiomatic programs. Of course I’ve said as much a dozen times now and you won’t engage with it, so I don’t expect you to now. ¯\_(ツ)_/¯

  • _ph_ 4 years ago

    That is one reason I don't consider the language benchmarks to be really relevant: a lot of the benchmarks are tainted by the exact rules of the competition.

    • throwaway894345 4 years ago

      And these rules are particularly bizarre. Rust, C, and C++ are all allowed to use custom allocators while Java and C# have GCs which are optimized for this particular micro benchmark but not for real world applications (although I hear Java’s GCs are making good headway on latency lately, and clearly all of the GCs are suitable for general application development). So it’s really just Go which is forbidden from an idiomatic optimization as far as I can tell.

      • Jweb_Guru 4 years ago

        Go optimizes ridiculously insanely for latency because it's driven by Hacker News articles about GC latency, not because it's better for "real world applications." Its atrocious throughput is entirely a consequence of that decision and is one of the very few useful things that the benchmark games do actually demonstrate. Java's unwillingness to provide an allocator with such an extreme tradeoff has proven a pretty good idea, and now they are able to provide only moderately worse latency than Go for "real world applications" with far better throughput.

      • igouy 4 years ago

        > … allowed to use custom allocators…

        No. They are allowed a library memory pool.

        As-it-says: 'Please don't implement your own custom "arena" or "memory pool" or "free list" - they will not be accepted.'

  • abledon 4 years ago

    reminds me of the 'im tired of being a hipster' post on frontpage recently.... "go learn 'unhip' tech/languages and live your life"

  • igouy 4 years ago

    > absolutely idiomatic and trivial optimization

    Which is not accepted for the C# programs either.

    sync.Pool is accepted —

    https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

    > the Go version is expressly contrived to be slower

    The requirements were contrived in April 2008.

    afaict Go initial release was March 2012.
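    For readers who haven't used it, the sync.Pool pattern looks roughly like this (a minimal sketch; the buffer size and the use case are made up):

    ```go
    package main

    import (
    	"fmt"
    	"sync"
    )

    // bufPool reuses 1 KiB scratch buffers instead of allocating a fresh
    // one on every call. Sizes and names here are purely illustrative.
    var bufPool = sync.Pool{
    	New: func() interface{} { return make([]byte, 1024) },
    }

    // process copies its input into a pooled buffer and reports how many
    // bytes were copied.
    func process(data string) int {
    	buf := bufPool.Get().([]byte)
    	defer bufPool.Put(buf) // hand the buffer back for reuse
    	return copy(buf, data)
    }

    func main() {
    	fmt.Println(process("hello")) // 5 bytes staged in a reused buffer
    }
    ```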

    • throwaway894345 4 years ago

      > Which is not accepted for the C# programs either.

      Because C# doesn't benefit from this kind of optimization. Its GC is generational, which means it has very fast allocations at the cost of higher latency. In most applications, lower latency is more important than faster allocation (not least of all because these batch-allocating optimizations are nearly trivial), but these benchmarks don't reflect that at all.

      > The requirements were contrived in April 2008. afaict Go initial release was March 2012.

      Contrived = "the rules artificially prohibit idiomatic optimizations". It doesn't require that the maintainers have a prejudice against Go (although as you point out, the maintainers have had a decade to revisit their rules).

      • igouy 4 years ago

        > Because C# doesn't benefit…

        C# does provide a memory pool implementation.

        • throwaway894345 4 years ago

          So what about C, Rust, and C++? They’re all allowed to use bespoke pools and custom allocators. The fastest Rust implementation imports an allocator crate. No doubt you can lawyer the rules to make sure Go still appears slow, but in reality this benchmark doesn’t tell you anything about the language’s general performance because while Go’s allocator is slower, idiomatic Go allocates much less frequently than other languages, but this benchmark prohibits idiomatic optimizations. Moreover, there isn’t a benchmark that shows gc latency, which is the flip side of the allocator coin. So if you really want to die on the hill of “worthless but well-lawyered benchmark rules” be my guest.

          • igouy 4 years ago

            What-about-ism.

            https://golang.org/doc/faq#garbage_collection

            > They’re all allowed to use bespoke pools and custom allocators.

            No. They are allowed a library memory pool.

            As-it-says: 'Please don't implement your own custom "arena" or "memory pool" or "free list" - they will not be accepted.'

            > … while Go’s allocator is slower…

            So that tiny tiny program shows it's slower because it's slower.

            • throwaway894345 4 years ago

              > What-about-ism.

              Not sure what you're referring to here.

              > No. They are allowed a library memory pool.

              Yes, this is a contrived rule. In reality, a Go developer would write the extra ~dozen lines (all of the heavy lifting done by the builtin slice implementation) and call it a day.

              > So that tiny tiny program shows it's slower because it's slower.

              Tautology. It's slower because the contrived rules preclude idiomatic optimizations.

              My point is that these contrived benchmarks don't indicate anything about the relative performance of these languages, but you keep responding with some variation of "but Go is slower in these benchmarks!", which everyone already agrees with. So unless you're going to actually address the point, I don't see the point in continuing on. It feels like you're hell-bent on using this thread for your personal programming language holy war, which is uninteresting to me (see again my first post) and against the site rules.

              • igouy 4 years ago

                > … a Go developer would write the extra ~dozen lines…

                And a C# developer could write a program that would avoid GC, and a Java developer…, and…

                It feels like you're hell-bent on using this thread for your personal programming language holy war…

                • throwaway894345 4 years ago

                  > And a C# developer could write a program that would avoid GC, and a Java developer…, and…

                  Are those idiomatic? If so, then they should be permitted to apply those optimizations. Again, the whole point of benchmarks is to indicate real-world use.

                  > It feels like you're hell-bent on using this thread for your personal programming language holy war…

                  I've reiterated my substantial point over and over again ("contrived rules don't indicate real world use, which is the definitional purpose of benchmarks") and you still haven't addressed it. But in any case, perhaps if both of us think the other is waging a holy war, it's an indicator that the thread has run its course.

Thaxll 4 years ago

When you see how much effort it takes for C# and Java to optimize the runtime, you realize there are a lot of people working on that. C# is fast, but you can see that it uses between 2 and 32 times the memory that Go needs.

Overall you can see how fast Go is: it has had little optimization compared to C# and it's as fast. Compare this: https://benchmarksgame-team.pages.debian.net/benchmarksgame/... with the overly complicated C# version: https://benchmarksgame-team.pages.debian.net/benchmarksgame/... ( avx, Intrinsics etc ... )

  • JamesSwift 4 years ago

    I definitely have run into this, even when using 'server mode' in asp.net core. I was never able to figure out why the C# version of my POC was using so much memory, but the golang rewrite ended up using a very predictable, minimal amount of memory in comparison.

    • Mattish 4 years ago

      Server mode is much less likely to incur GC. Were you causing enough memory usage to force your app to actually free memory?

      It will intentionally use more memory for the sake of throughput, which is why the .NET programs in this post all set that flag; it's a _speed_ benchmark primarily.

      • JamesSwift 4 years ago

        I can't say for certain, but I'm pretty sure I manually GC'ed as a test and it didn't seem to help. It's been a while, and the C# version ended up being a temporary approach once I saw how much less memory the golang version used.

        They performed almost equivalently for RPS I believe.

    • azth 4 years ago

      My guess is that golang's GC is optimized for latency at the expense of throughput, limiting the max memory size.

      • xh-dude 4 years ago

        It is, and it’s not directly tunable … the opinion is ‘we’re IO-bound, not compute-bound’.

        I spent a chunk of time recently (10s of hours) doing dumb C# vs Go benchmarks - files and networking, nothing worth taking seriously - and usually the part about being IO-bound was true. C# is really impressive and was just a little slower with the best async solutions I could come up with. The machinery for async has overhead, and so do goroutines and channels … the first-pass, not very performant code was just a little faster and IMHO clearer in Go (but I’m much better with Go /shrug).

      • merb 4 years ago

        nah, he just compared two different things. his golang hello world probably had basically nothing in it, while his dotnet version used the "Microsoft.NET.Sdk.Web" SDK, which pulls in the shared framework and loads a ton of stuff. BUT even after that the memory usage might be bigger. however, does it really matter? I mean, dotnet is not a big memory hog. it's pretty lightweight for what it is. compare it to java and the golang number would be insane.

        • JamesSwift 4 years ago

          I tried to do the most minimal, idiomatic design for both. I didn't save the C# code (I think it probably was just asp.net core and newtonsoft.json), but here is the go version [1].

          It basically just loads a JSON file into memory then allows you to query the data with 2 API endpoints.

          [1] - https://github.com/J-Swift/GamesDbMirror-go

          • merb 4 years ago

            well, asp.net core is not really minimal or idiomatic. it pulls in a whole framework; your code doesn't do a lot of the things that asp.net core would do. sadly, since nancyfx died there aren't that many c# http frameworks that are as lightweight as the golang ones. asp.net core is more like java spring, btw. nowadays most c# http frameworks call `<FrameworkReference Include="Microsoft.AspNetCore.App" />`, which is really really big compared to just Microsoft.NETCore.App. most often the default aspnetcore also configures a "secure" application (working cors, etc.)

  • joelfolksy 4 years ago

    "( avx, Intrinsics, etc ... )"

    I have to give you credit for trying to apply the Rule of Three to a single criticism.

    Of course, I don't really understand how the fact that someone took the time to vectorize the C# submission is supposed to be a mark against C#...

  • agumonkey 4 years ago

    and afaik, .net has record types unlike the jvm (yet) which means java is even worse

    • kaba0 4 years ago

      I think you mean structs (value types; what will be called primitive types in Java). Records are not too interesting from a performance POV (and Java has them, and I think they actually predate C#’s), though Java will likely be able to optimize serialization/deserialization of records better.

    • ternaryoperator 4 years ago

      Java recently added record types.

Guillaume86 4 years ago

The focus on performance since Core is really nice to see, dotnet 6 is continuing the trend as well: https://devblogs.microsoft.com/dotnet/performance-improvemen...

keewee7 4 years ago

I prefer C#. But here the Go code actually looks like normal production code while the C# examples look like something made by a low-level optimization wizard.

  • notdang 4 years ago

    For C# they went too far, it has this comment in one of the files: > Using GOTO which currently causes dotnet to generate better machine code

  • alkonaut 4 years ago

    I looked at the k-nucleotide, n-body, etc and saw nothing out of the ordinary in terms of C#.

    There wasn’t even a lot of “modern C#” low level optimization like ref-structs/spans and similar. It actually looks like there is quite a bit of performance from C#8, 9, 10 left on the table.

    • coder543 4 years ago

      Are we looking at the same file?[0]

      Tons of very "interesting" attributes like this:

          // prevent inlining into main to decrease JIT time to generate main
          [SkipLocalsInit][MethodImpl(NoInlining)]
      
          [SkipLocalsInit][StructLayout(LayoutKind.Explicit, Pack = 32)]
      
          [SkipLocalsInit][MethodImpl(AggressiveOptimization | NoInlining)]
      
          [FieldOffset(32)]
      
      
      and tons of "unchecked" blocks.

      Not to mention that the entire file is using explicit vectorization, which I consider to be a very high degree of optimization -- tons of software never bothers to implement explicit vectorization, and does just fine.

      If all of this is "nothing out of the ordinary", then "ordinary" C# has changed a lot since I last spent much time with it.

      [0]: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

      • alkonaut 4 years ago

        Agree - those attributes are definitely “low level” and not idiomatic in most situations. I must have missed them skimming that file.

        For the particular case of n-body you could argue you are already in a pretty extreme HPC world, and doing it without vectors would basically be a toy calculator. The problem then, of course, is that C# isn’t really ever idiomatic for that. The question (as always) becomes what to compare: typical, or pushed to the limit.

  • matttproud 4 years ago

    If what's in the Benchmarks Game is normal Go code to you, then I feel pretty bad for you in terms of what your colleagues are exposing you to on a day-to-day basis quality-wise.

scanr 4 years ago

I enjoy using both languages. The significant performance difference between the two for me is compilation speed. Size of binaries produced is also important if you’re shipping them around.

flyinglizard 4 years ago

I say modern .NET is a marvel of features, development tools, interoperability, performance and even ships with its own cloud environment (Azure).

Unfortunately, developers who don’t know better judge it by its historical association with Windows rather than how powerful it is today.

  • bob1029 4 years ago

    > Unfortunately developers who don’t know better judge it by its historical association with Windows rather than how powerful it is today.

    Some of us actually got to experience the entire journey from the old to new world first-hand. We started out as a .NET 3.5 Framework solution (windows only), and are now looking at a .NET 6 upgrade (any platform). Over the course of 7+ years, we went through all of the following frameworks:

    3.5 => 4.0 => 4.5 => 4.6.2 => [netcore convert]

    2.0 => 2.2 => 3.0 => 3.1 => 5.0 => ...

    Some of the transitions were a little painful, but the same fundamental product survived the entire trip.

    I don't know of many other development ecosystems where you can get away with something like this. If we didn't have the stability this ecosystem has to offer, we would not be in business today.

    • alkonaut 4 years ago

      I’m on a project I created in .NET 1 and then migrated to 2.0 (for generics) early on. It’s moved from VSS via svn to git. It has gone from VS 2002 on CDs to VS2019/2022. It has over 100k commits. I still work full time on it today. I share your experience with the migrations. Never an issue. The new sdk (“core”) style project system was probably the largest blessing of this whole journey.

    • EMM_386 4 years ago

      I actually created a .Net Enterprise app that went on to become an industry leading application.

      At the time I was hired, it was to modernize an Access application used internally, to try to sell it as a product.

      .Net was still in beta at the time (this was early in 2001). I figured I might as well go with the flow and try it out.

      It was a crazy ride, and I've since left the company ... but we went through every version from beta through 4.8 before I left.

      I'm now using .Net 5 in Azure to power my new company's REST APIs.

    • kaba0 4 years ago

      Well, without starting a flamewar (.NET is an impressive runtime), the JVM is better at backwards compatibility, I think.

      • bob1029 4 years ago

        I can totally see this being a thing based on my understanding of that ecosystem. Java has always been the "runs anywhere" technology.

        In the space we work in, a vast majority of systems are written for either Java or .Net

  • umvi 4 years ago

    Can you now get the full .NET development experience in a Linux-only environment? I haven't used it in a while, but from using Unity (game engine) it seemed like C# on Linux was a bit crippled

    • moonchrome 4 years ago

      Yes, use Rider and .net core - everything works on non-windows platforms (working on .net from osx right now, deploying to linux)

      Avoid vscode for C# development; unlike TS/JS (where it's top of the line), its support for C#, even on core, is toy level.

      • lostmsu 4 years ago

        What do you think is lacking in C# extension for VS Code vs TypeScript? I would expect a better experience with C# (in terms of tooling) because the language is typed.

        • moonchrome 4 years ago

          It just doesn't work nearly as well - even on simple .NET core solutions created from CLI intellisense chokes up, refactoring doesn't work, it's nowhere near the quality level of TS.

          • foepys 4 years ago

            Even TS support is pretty lackluster compared to Rider's and VS' C# support.

    • cyral 4 years ago

      Yes, JetBrains rider is IMO superior to VisualStudio these days, and I run a huge amount of C# code in production on Linux. It was definitely crippled in the past but it's a first class citizen in the .NET ecosystem now.

      • metadata 4 years ago

        Rider is incredibly fast and superior to VS in all aspects but one: its debugger is terribly broken. VS will break on the line of my code that crashes; Rider will crash somewhere completely unrelated. My code is very much async, so that might be what kills it. I ended up doing everything in Rider, but debugging in VS.

        • Akronymus 4 years ago

          Rider also doesn't allow you to move the current instruction to somewhere else on getting an exception IME.

      • AlfeG 4 years ago

        I actually use both. The debugging story in Rider is a bit flawed compared to VS, but with a very large codebase Rider is a bit better in performance.

    • adwww 4 years ago

      Also relevant... can you learn it, build it and deploy it for free?

      I've always thought the whole ecosystem looked really productive, and the code I've had to review occasionally looked well structured and readable. But when I was starting out MSDN cost a fortune and I've never really considered trying to learn it.

    • na85 4 years ago

      Especially with lsp-mode, C# on Linux (or Mac) is a great experience.

      https://0x85.org/csharp-emacs.html

    • jakearmitage 4 years ago

      I use Sublime and it works well.

  • caeril 4 years ago

    Not really. And the allure of Go over .NET has very little to do with performance.

    Most applications I develop these days are web services. I write most things today in Go, but I used to work quite a bit with C#/Asp.NET applications.

    Here's what I do to build a Go web application:

      - go build .
    
    What I get out of it is a single, statically-linked, self-contained ELF binary for which deployment is as simple as scp, if I want to. It will run on any x64 linux box, without dependencies, since it contains its own webserver. I don't need to dump it into IIS to make sure the build works.

    Here's what I used to do with .NET:

      - Open VS, wait about 30 seconds for it to finally start working
      - Set the target, Rebuild All
      - Publish to file, wait
      - Eventually get a packaging error, predicated on some obscure tools dependency issue somewhere inside my 98KB .csproj file, which I have to fix by closing down VS, manually editing in Notepad, and re-opening VS
      - Finally get a working build+publish, ok, let's take a look
      - Oh, very nice, the publish directory weighs in at nearly a QUARTER GIGABYTE, and contains about 60 dll dependencies and a ton of entirely useless descriptor files.
      - Well, okay, let's at least get this deployed to the test server to make sure it still plays nice with IIS, xcopy this over.
      - Oh, IIS doesn't like this at all, now let's spend the next couple hours tracking down this insane .net framework dependency hell.
      - Screw it, where's my whiskey?
    • sbelskie 4 years ago

      dotnet publish -r linux-x64

      Seems pretty simple. Though the binary size is definitely not on par with go.

  • throwaway894345 4 years ago

    > even ships with its own cloud environment (Azure)

    Can you elaborate on this? What does it mean that ".Net ships with Azure"?

    • lambda_dn 4 years ago

      Microsoft created .NET, Azure, and a lot of the tooling, so the integration is tight. Now that it's multi-platform, and with Rider (a multiplatform .NET IDE) being as good as, if not better than, Visual Studio, there is nothing stopping it.

      • moonchrome 4 years ago

        Azure integration is not that great actually. For example, .NET 5 support for Azure Functions is a huge breaking change with random issues, missing features, and terrible IDE support (you have to launch it through the CLI and attach to a PID printed out by the host process to debug a function, whereas you could just debug 3.1 functions directly) - it's beta quality.

        I find this a common theme with Azure support and .NET

        • teh_klev 4 years ago

          I agree, the functions story with .NET 5 is a right pain in the arse at the moment. I understand why it's happened, but I want my .NET 5 durable functions!

        • lambda_dn 4 years ago

          Azure Functions are one small part of Azure. I'm talking about Azure DevOps, for example, which is deeply integrated into the tooling, including source control/product backlogs/pipelines/testing etc.

      • throwaway894345 4 years ago

        What does it mean "the integration is tight"? Are we talking about good Azure libraries, or is there some runtime integration (e.g., a .Net orchestrator i.e., Kubernetes but for .Net applications rather than containers)? Or does it just mean that Azure is better than other clouds at running .Net because it's a first-class citizen where it's usually second-tier on AWS/GCP?

      • adwww 4 years ago

        Alternative take, the Azure documentation is shit if you are not using dot net... but pretty good if that's your toolchain.

  • snotrockets 4 years ago

    C# was always the better Java, F# the better Scala.

    But for many years the per-seat cost was higher, which scared many away.

    • kaba0 4 years ago

      In what way do you consider F# better than scala? Especially with the recently released, revamped Scala 3?

melling 4 years ago

I was always under the impression that Go never had a great optimizing compiler. It was never a primary focus given the limited developer resources.

I couldn’t find a direct C# to Rust comparison but Rust trying to compete with C++ means performance is a goal, if that’s what you are after.

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

  • throwaway894345 4 years ago

    I don't think it was "limited developer resources" so much as a desire to preserve blazing-fast compile times. The very rough rule-of-thumb that I've heard is that optimizations must pay for themselves (a compiler which is itself compiled with the given optimization must not be slower than the previous version).

    • igouy 4 years ago

      "He used the compiler's self-compilation speed as a measure of the compiler's quality. Considering that Wirth's compilers were written in the languages they compiled, and that compilers are substantial and non-trivial pieces of software in their own right, this introduced a highly practical benchmark that directly contested a compiler's complexity against its performance."

      p44 "Oberon — The Overlooked Jewel" Michael Franz, in "The School of Niklaus Wirth".

      https://www.google.com/books/edition/The_School_of_Niklaus_W...

  • gwp 4 years ago

    > I was always under the impression that Go never had a great optimizing compiler. It was never a primary focus given the limited developer resources.

    It's an intentional choice; that's why it compiles code so fast. Also because of that, it's a lot simpler than, say, GCC. Before Go, the Plan 9 C compiler was designed in a similar manner too (I think the Go compiler was forked from it).

    I think the simplicity aspect is even more important than the compiling speed. It's easier and cleaner to keep the compiler simple and write optimized assembly code by hand when it's needed. That way, the compiler doesn't get so messy (fewer bugs, easier to maintain...) and the written program is of better quality (humans can produce better code than compilers).

    • kaba0 4 years ago

      How often have you met compiler bugs? Also, humans absolutely can’t produce better code than compilers, at least not reliably. All those minor improvements that compilers routinely do does add up. And noone wants to write arcane hacks that make code unmaintainable and may not even provide a performance benefit (CPUs are finicky beasts, sometimes the seemingly worth option is faster)

      I very much want my computer to work as hard as it can to make my code more performant for free. What I would like to see more of is separate debug and release build modes, where the former compiles as fast as it can without optimizations, while the latter can take as long as it wants and produce the most optimal binary it can. Zig does that, for example.
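      Something close to that split already exists in both toolchains via flags. A rough sketch (the output names `app-release`/`app-debug` and `main.zig` are just placeholders; the flags are the standard `go` and `zig` toolchain spellings):

      ```shell
      # Go: the default build is the optimized one
      go build -o app-release .

      # Approximate a "debug" build: -N disables optimizations, -l disables inlining
      go build -gcflags="all=-N -l" -o app-debug .

      # Zig makes the split explicit with optimization modes
      zig build-exe main.zig -O Debug        # fast compile, safety checks on
      zig build-exe main.zig -O ReleaseFast  # slower compile, fastest binary
      ```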

  • the_duke 4 years ago

    There are both GCC [1] and LLVM [2] backends for Go, but I don't think they see much usage compared to the default.

    [1] https://golang.org/doc/install/gccgo

    [2] https://go.googlesource.com/gollvm/

booleandilemma 4 years ago

I'm not really concerned with such small differences in speed, to be honest. The thing that I look for is: how productive am I when using language X?

  • Koshkin 4 years ago

    ... and nothing can compare with C# (under Visual Studio) in this regard, it seems. Not the most efficient at run time, but a good, knowledgeable developer's productivity is insane.

    • Salgat 4 years ago

      Especially with the .NET Standard Library. It has damn near everything already out of the box with 1st class support, and if not there's almost certainly a nuget for it. C# is an extremely productive language to develop for.

nedsma 4 years ago

It's 2021 and .NET developers still argue about which coding style they should adopt and how they should enforce it. It's totally mind-numbing that team members are split between implicit and explicit variable naming. By contrast, in Go you just write code, because those nuances should not matter.

  • codenesium 4 years ago

    That nuance does matter which is why we are still having the discussion. Teams can pick which style they want but mixing is unnecessary overhead and when you're talking about 10s or 100s of solutions I'd like the style to match across the board.

  • Salgat 4 years ago

    That's more of a failure of management. Companies like Google outline style guides for company-wide usage. It might not be "the best" but it's consistent and the company enforces that.

matttproud 4 years ago

I've never understood why folks treat the Benchmarks Game results as indicative or representative of anything useful. The code specimens they use are often unpolished or unidiomatic, without even commenting on whether they could be made to perform better through Byzantine, careful by-hand optimization.

Why does their web site have no contact information or link to where the source code for the project can be checked out, contributed to, or amended?

  • igouy 4 years ago

    > I've never understood why…

    Perhaps they don't read the website text?

    > … no contact nor link…

    Search works.

    • matttproud 4 years ago

      I ran multiple search queries. I wouldn't be so dumb as to post a comment like this here without having done my homework. The best I found after trying numerous keyword permutations was https://salsa.debian.org/benchmarksgame-team/benchmarksgame, but this did not appear to contain all of the benchmarks' source, just the source embedded in HTML, which is specious at best. This repository looks mostly like frontend HTML and chrome, not a SUT, an executor, or even the sub-test code.

      At the very least, I couldn't realistically re-run some of the example benchmarks from the source embedded in the HTML, because they did not include vendoring/version information for external packages they depend on. That made me doubt the provenance of https://salsa.debian.org/benchmarksgame-team/benchmarksgame.

      • igouy 4 years ago

        > This repository looks mostly like frontend HTML…

        It is mostly frontend HTML — the benchmarks game project deliverable is a (now static) website.

        > I couldn't realistically re-run some of the example benchmarks from the source embedded in the HTML, because…

        People have taken the simplest-thing-that-will-work approach: select and copy the program source code from the website, paste it into a text editor and save; then build & run, adapting the commands shown on every program page.
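        For a Go entry, that workflow might look something like this (the file name `binarytrees.go` and the workload argument are illustrative; the actual build command is the one shown on each program's page):

        ```shell
        # 1. Copy the program source from the website and save it,
        #    e.g. as binarytrees.go

        # 2. Build it, adapting the command shown on the program page
        go build -o binarytrees binarytrees.go

        # 3. Run it with the workload size the benchmarks game uses
        ./binarytrees 21
        ```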

      • igouy 4 years ago

        Perhaps these other projects are a better match with what you expect to see —

        https://programming-language-benchmarks.vercel.app/

        https://github.com/kostya/benchmarks

      • igouy 4 years ago

        > … just the source embedded in HTML…

        "Where can I get the program source code? — zip'd program source code"

        line 11 ? in the README

        > … vendoring/version information for external packages they depend on…

        If the programs don't build/run with the latest GA external packages, they will be shown as "Make Error" "Bad Output" "Failed" until someone updates them.

        • matttproud 4 years ago

          It would do the project’s web site a big service to actually link prominently to this repository, if this site is in fact canonical.

          A zipped source code archive inside a Git repository is very surprising (violation of principle of least astonishment). No wonder I couldn’t find the benchmark programs’ source; they aren’t even indexed for code search due to residing in a zip file.

          • igouy 4 years ago

            And, of course, there was a time when the website did "link prominently to this repository" — back before July 2020, when "new source-code was usually measured and shown on the website within a few days."

      • igouy 4 years ago

        > … made me doubt the provenance…

        To address that concern, there's now a link in the homepage banner.

fulafel 4 years ago

Please fix the title (the original title spells Go correctly too).

popotamonga 4 years ago

Is there one similar for c# vs scala?

  • igouy 4 years ago

    Many years ago — but from version-to-version too many of the Scala programs suffered bad bitrot, failed and were not updated.

    • kaba0 4 years ago

      Scala code is backwards compatible, only the class files are not - so at most dependencies could have become stale. With a recompile, the programs should work just fine.

      • igouy 4 years ago

        Requirements changed; and neither the original program contributors nor anyone else on the Scala mailing list wished to update the programs.

        (The scala mailing list archive doesn't seem to go back that far.)
