Welcome to C# 10
devblogs.microsoft.com

Oh wow. I do not like that "global using". It harkens to the auto-loading issues I've had with Rails. "Where was this defined? I dunno! It probably works here, though!!1"
I generally don't like a random file impacting several other files. Extension methods are ... tolerated and ... "fine" but I still feel unpleasant using them.
File-scoped namespaces seem like someone's really, really tired of having nested folders, and they seem actively unnecessary.
I like natural lambda types.
Good update on parameterless structs. I assumed that's how they worked already. I haven't used C# in 2 years, but you could do that with classes back when, so I assumed it would be the same with structs.
Constant interpolated strings are nice.
Extended property patterns are fine, just probably not for me.
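For reference, constant interpolated strings are exactly what they sound like; a minimal example (the names are made up):

    // C# 10: interpolation is now allowed in constants,
    // as long as every placeholder is itself a constant string.
    const string Name = "world";
    const string Greeting = $"Hello, {Name}!";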
> Oh wow. I do not like that "global using". It harkens to the auto-loading issues I've had with Rails. "Where was this defined? I dunno! It probably works here, though!!1"
It's been a feature of Visual Basic .NET for a very long time (from recall: at least 2008); I'd hope Microsoft heavily queried user feedback before implementing it.
https://docs.microsoft.com/en-us/visualstudio/ide/how-to-add...
File-scoped namespaces are nice so that every class isn't already sitting at one level of indentation. I don't see what they have to do with nested folders.
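To illustrate, the only thing that changes is the indentation; here are two versions of the same (made-up) file side by side:

    // Before C# 10: block-scoped namespace, everything indented one level
    namespace MyApp.Services
    {
        public class GreetingService
        {
            public string Greet(string name) => $"Hello, {name}!";
        }
    }

    // C# 10: file-scoped namespace (only one per file), no extra indentation
    namespace MyApp.Services;

    public class GreetingService
    {
        public string Greet(string name) => $"Hello, {name}!";
    }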
To be fair, global usings aren't fundamentally different from assembly references (which have always been per-compilation rather than per-file).
With respect to parameterless struct constructors: the reason C# historically didn't have them is that there are many corner cases where they aren't invoked by the CLR, basically any place where you can't do "new" directly on the struct itself. For example, when you create an array of structs, its elements do not have the constructor run for them. So the C# designers originally decided it would be less confusing overall if structs were always default-init, in all contexts, which means no parameterless constructors. I'm not sure what prompted the change of mind.
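A quick sketch of the corner case (the struct is made up; the behavior follows the C# 10 rules):

    var a = new Point();      // constructor runs: a.X == 42
    var b = default(Point);   // default-init: b.X == 0
    var c = new Point[3];     // elements are default-init too: c[0].X == 0

    struct Point
    {
        public int X;

        // C# 10: parameterless constructors on structs are now allowed
        public Point() => X = 42;
    }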
I do not like that "global using" as well. I wonder why they added it.
Looks like they added global usings to support their implicit usings functionality. Essentially they want to save people from having to put using System, System.Linq, System.Collections.Generic, and others[0] at the top of nearly every C# file.
I'm of two minds:
- I think they're an anti-pattern, because it creates a global scope that can get messy/annoying.
- It makes a ton of sense for the implicit usings functionality, and I'm tired of needing to add the basic SDK usings to every source file.
So the ideal is to enable .NET's implicit usings, then use an analyzer to "ban" adding more global usings directly in your solution/project. Best of both worlds that way. Alternatively, just make them a no-pass item for code reviews.
[0] https://docs.microsoft.com/en-us/dotnet/core/compatibility/s...
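For the curious, the setup is small; a sketch, assuming a project file named MyApp.csproj (the GlobalUsings.cs file name is just a convention, not required by the compiler):

    <!-- MyApp.csproj: opt in to the SDK's implicit usings -->
    <PropertyGroup>
      <ImplicitUsings>enable</ImplicitUsings>
    </PropertyGroup>

    // GlobalUsings.cs: keep any additional global usings in one place
    global using System.Text.Json;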
It's literally the same as Rust's prelude system, and I don't see anyone complaining nearly as much about that.
Rust doesn't have anywhere near the following of C#, so there probably aren't enough opinionated people in the community to mention it on an HN post
> Essentially they want to save people having to put using System, System.Linq, System.Collections.Generic, and others[0] at the top of nearly every C# file.
Interesting that this has never bothered me, going all the way back to .NET 1.0. All tools (including raw VS without add-ins) can add those usings automatically, and they do.
But for me it's still a nice-to-have feature, provided VS is able to show where a given global using is defined.
This is what I’ve settled on. Implicit usings for the SDK are great but otherwise I don’t want this in any code base I work on.
Because most files in C# start with:
    using System;
    using System.Collections.Generic;
and a few more lines like that. Think of this part of the BCL as the prelude in Haskell.
It's an easy way to do config without a full importer and the complexity of passing config around / looking it up.
I like it. This goes way back. They give the example of a GlobalUsings file.
That's a cheap / easy way to do a config file (at least one use case).
Doesn't really bother me, so long as the global usings are only kept to a single file. IntelliSense will tell you the full namespace; then you can just see if you've got that namespace in the file you're in or in your dedicated global usings file.
> so long as the global usings are only kept to a single file
Here's where the mess starts.
It seems far too easy to sneak a global using statement into random files and then have the entire codebase polluted by it.
I feel like this is too foot-gun'y, myself.
Yeah, there should definitely be a way to have the compiler enforce global usings in only a specific file.
I thought it was generally considered good practice to put extension methods into static classes in their own file. So like `FooExtensions.cs` if you are writing extensions for the `Foo` class.
I would consider this the same: Project should have a `GlobalUsings.cs` file.
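i.e. something like this, where Foo and its extension are placeholders:

    // Foo.cs: stand-in type for illustration
    public record Foo(string Name);

    // FooExtensions.cs: one static class of extensions, one file
    public static class FooExtensions
    {
        public static bool IsNamed(this Foo foo, string name) => foo.Name == name;
    }

    // GlobalUsings.cs: likewise, all global usings in one known place
    global using System.Collections.Concurrent;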
Does C# have any linters available that could enforce such a convention?
C# has a framework in-place to facilitate workspace-specific linters, so even if one doesn't exist, it could be written very easily (and premade ones are sure to appear quickly).
https://docs.microsoft.com/en-us/dotnet/csharp/roslyn-sdk/tu...
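A minimal sketch of what such an analyzer could look like, assuming the convention is a single GlobalUsings.cs file (the diagnostic ID, message, and file name are all made up for illustration):

    using System;
    using System.Collections.Immutable;
    using System.IO;
    using Microsoft.CodeAnalysis;
    using Microsoft.CodeAnalysis.CSharp;
    using Microsoft.CodeAnalysis.CSharp.Syntax;
    using Microsoft.CodeAnalysis.Diagnostics;

    [DiagnosticAnalyzer(LanguageNames.CSharp)]
    public sealed class GlobalUsingLocationAnalyzer : DiagnosticAnalyzer
    {
        private static readonly DiagnosticDescriptor Rule = new(
            id: "GU0001",
            title: "Global using outside GlobalUsings.cs",
            messageFormat: "Move this global using directive into GlobalUsings.cs",
            category: "Maintainability",
            defaultSeverity: DiagnosticSeverity.Warning,
            isEnabledByDefault: true);

        public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics
            => ImmutableArray.Create(Rule);

        public override void Initialize(AnalysisContext context)
        {
            context.ConfigureGeneratedCodeAnalysis(GeneratedCodeAnalysisFlags.None);
            context.EnableConcurrentExecution();
            context.RegisterSyntaxNodeAction(AnalyzeUsing, SyntaxKind.UsingDirective);
        }

        private static void AnalyzeUsing(SyntaxNodeAnalysisContext context)
        {
            var usingDirective = (UsingDirectiveSyntax)context.Node;

            // Only `global using ...;` directives are interesting.
            if (usingDirective.GlobalKeyword.IsKind(SyntaxKind.None))
                return;

            // Allow them only in the dedicated file.
            var fileName = Path.GetFileName(usingDirective.SyntaxTree.FilePath);
            if (!string.Equals(fileName, "GlobalUsings.cs", StringComparison.OrdinalIgnoreCase))
                context.ReportDiagnostic(Diagnostic.Create(Rule, usingDirective.GetLocation()));
        }
    }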
C# is becoming the C++ of managed languages. I wish they would stick to a smaller set of more powerful features… like an earlier C# with hygienic macros or something.
They really need to start marking stuff with [Obsolete()] more aggressively. C# has some now outmoded data structures and even keywords.
I'd love to see a C#/.NET Standard release that was dedicated to getting rid of things. It may be less exciting in the short term, but it is well past due.
> I wish they would stick to a smaller set of more powerful features
That's basically F#. One can program using just OOP in F#, and it is much cleaner and more concise.
Does that make Java the C of managed languages?
The great thing about C#/Kotlin throwing the kitchen sink at stuff is that the good stuff eventually makes its way into Java.
(For me at least) life is too short to wait for the best features to trickle down. I mean, most of the interesting Java 17 stuff was surpassed by OCaml in 1995! That's a 26-year lag!
I find Java's features to be very well balanced, and only make it into the language after they've been vetted and tested in the wild by other languages.
e.g. see their take on concurrency by means of Project Loom. No need for async/await or for providing separate APIs for sync vs. async operations. It has records and sealed types and pattern matching, and is getting destructuring soon.
I much prefer the Haskell / F# approach of using do-notation to allow the user to build their own syntax.
I don’t think I can build an Async<Either<E, T>> expression with Loom, for example.
.NET async interops nicely with any other language that can do basic callbacks (even C!). How does that work in Loom?
For VM languages it's no problem; all those languages support threads or can use them.
For non-VM interop, it will block the carrier thread if a native call causes any sort of blocking.
In Java, calls external to the VM are almost non-existent, so it's not much of an issue. Likely you'd do those sorts of calls on regular kernel threads instead of Loom's virtual threads.
To clarify, by interop here I don't just mean calling foreign functions. I mean calling async foreign functions. When async is explicit, it's easy to handle it on ABI level - it's just a bunch of callbacks (or abstractions wrapping them, like tasks/futures).
For example, suppose you're writing a Windows desktop app, and you decide to do so in Java. Modern Windows APIs are async. How would you asynchronously invoke such an API? In C#, you'd just use await.
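To make the "just a bunch of callbacks" point concrete, here's a rough sketch of bridging a callback-style native API into awaitable C# (the native function, its signature, and the library name are all invented for illustration):

    using System;
    using System.Runtime.InteropServices;
    using System.Threading.Tasks;

    static class NativeAsyncInterop
    {
        // Hypothetical native API that reports completion via a callback.
        [UnmanagedFunctionPointer(CallingConvention.Cdecl)]
        delegate void CompletionCallback(int result);

        [DllImport("native_io")] // made-up library name
        static extern void read_async(CompletionCallback onDone);

        static CompletionCallback? _keepAlive; // keep the delegate alive for the native side

        // Wrap the callback so C# callers can just `await ReadAsync()`.
        public static Task<int> ReadAsync()
        {
            var tcs = new TaskCompletionSource<int>();
            _keepAlive = result => tcs.SetResult(result);
            read_async(_keepAlive);
            return tcs.Task;
        }
    }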
Definitely not as easy, but Go managed to get it to work fairly well with some extra overhead. I would rather the language be designed for pure Java first and C interop second.
This approach results in more closed ecosystems, though, where everything has to be re-implemented to work well, instead of reusing existing libraries (that have been polished for decades in some cases).
Also, what about OS APIs? Those are always going to be at the bottom of the stack, and they are increasingly async themselves.
I agree in a sense, but I would do it another way: either change to the Rust edition equivalent, or simply drop old stuff and keep the newer bits.
There is a Rust edition equivalent in C#…kindof. The csproj file has a property to specify which version of the language you want to use. So if you set it to 9, the compiler won’t allow features from 10 (supposedly; I haven’t tested it).
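For reference, it's a one-liner in the project file (the file name is assumed):

    <!-- MyApp.csproj: pin the language version -->
    <PropertyGroup>
      <LangVersion>9.0</LangVersion>
    </PropertyGroup>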
I'm aware of <LangVersion> in the csproj, but it's an opt-in system: you opt into the features of the version you want. It never disables anything from prior versions.
What do you mean “disabling” things from the prior versions?
This release in particular feels like they are scratching around to find new features to add.
Dumb questions: why does it seem like languages always continually add features?
Can a language not become "feature complete", while still improving over time?
We don't know how to make languages. We're in the Kepler stage of software development. Everyone has a different 'cosmology' to explain what's going on; we don't yet have the technology to understand what's going on; we don't have the math yet to explain what's going on; and we're just in the very beginning stages of even being able to take measurements of anything worthwhile.
"Here's a feature." Now, will it make code bases better? Will it make them worse? Do we even have a way to quantify better or worse?
What it looks like to me is that general-purpose languages are all slowly migrating toward looking a lot like ML with some sort of existential mechanism: that is, a static type system with generics, lambdas, algebraic data types, and pattern matching. The existential part is typically expressed with interfaces, but it looks like there are a few options floating around.
Meanwhile, low-level programming language designers are all going crazy trying to find a way to replace C / C++: Rust, Odin, Zig, Jai (if it ever actually gets released), etc. That probably won't look like ML, or at least it will need to have some other stuff to handle the domain without driving developers crazy.
I'm sure other domains will slowly figure out that they can cheat the triumvirate of engineering (fast, cheap, good) by developing languages that suit their domain.
But I suspect we're looking at 50-100 years before we really start to see any progress that lets us have "feature complete" languages.
I wonder if there are any parallels between programming languages and spoken languages.
Every year, new words are constantly added to official dictionaries ... while old words continually fall out of favor/use.
And concepts in one language (e.g. "English" or "Rust") then get adopted/imported into another language (e.g. "French" or "Go").
Words in natural-language dictionaries are more akin to functions in libraries. OTOH, natural-language syntax doesn't change nearly as fast.
But OP is right - we haven't really been doing programming all that long in the grand scheme of things, and there are still too many unsettled questions.
Also, in many cases, we came up with desirable concepts long ago, but using them wasn't feasible due to performance overhead until recently.
There's also domain-specific language / slang.
I think this analogy is a good one. I've always held that software development has a crafty side in addition to the raw computer science. And I think you nailed it that this is where that comes in. There's multiple ways to write any program. So the only "complete" language is one that allows you to express anything and everything exactly the way you want to.
For instance, one could argue that using process forking vs threads vs fibers vs async / await are all just different connotations of the same denotation of "doing multiple things in parallel".
Spoken languages are constantly evolving, mostly at the keyword / reserved-word level, and sometimes, when interacting with neighboring languages through necessity, you'll get pidgins and creoles (think of them as external DSLs). But because spoken languages are so fluid, there is a massive amount of ambiguity over time. It's not like we can read Middle English very well now (we often need scholars), and another problem with human languages is the entity-knowledge problem, with idioms oftentimes being computationally "AI-complete" (requiring general AI to comprehend semantically).
In this respect, x86 is the English of the computing world, more so than even C, Java, etc., because those are different cultures. And as someone who went through ESL, I can't express how much waste and cruft is in the language; it makes me think of Perl or Ruby (note that Larry Wall is a linguist and wrote Perl to be more like a human language; we can now see why that was not a great idea).
Also, seriously: human languages are entirely defined by their user base (language-pedantic folks have very little influence empirically; consider the flawed academic rule against ending English sentences with a preposition, a bad port of a rule from Latin), and given the trends of each programming language community, I'm doubtful that giving language control to users is what keeps a language ecosystem alive and thriving. In fact, human languages are essentially a secondary map of imperialism, genocide, and suffering across all of human history, more so than of trade and integration with consent.
I think F#, Clojure, and Elixir are languages that show it's possible to stop adding major features to.
Scheme as well. Notably, these languages are (mostly) capable of letting developers add new models of computing or software design on top of the base language in a way that appears natural as a user.
If you want OO in Scheme, you can do it (and various models of OO at that). If you want a concurrent model, you can do it. If you want a relational programming model, you can have it.
Try doing the same with, for example, C. You can accomplish it, but you have to jump through hoops or rely on OS libraries or other things. And it will rarely, if ever, feel "natural" within the language.
I don't think F# did that by choice. The reference compiler implementation is simply too complicated, which makes it hard to add new features. It is, ironically, also written in F#.
From what I have seen, Don Syme is a very pragmatic language designer who doesn't want to include features just to add them, and he holds .NET interoperability extremely high on the priority list, which further limits adding more functional-programming features.
I imagine the compiler implementation is complicated, and I would guess it is due to the .NET interoperability.
You can just check out Roslyn code and compare it with FSC code. Trust me, compiler code quality/complexity is the main cause.
There is Awk, a fascinating mini-language almost unchanged for decades, and therefore very portable. You can learn it once and be sure you know it all.
I like it. Nothing too crazy (well, except maybe the return types and attributes on lambdas, which could make for some ugly code); mostly quality-of-life improvements and stuff you expected to work in C# 9.
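For reference, the lambda changes being referred to look roughly like this (the lambda bodies are made up; the middle line is the example from the announcement post):

    using System;

    // C# 10: natural types, explicit return types, and attributes on lambdas
    var parse = (string s) => int.Parse(s);          // inferred as Func<string, int>
    var choose = object (bool b) => b ? 1 : "two";   // explicit return type required here
    var validate = [Obsolete] (string s) => s.Length > 0;  // any method-level attribute works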
The most interesting feature here (though included only as a preview) is static abstract members on interfaces, which will make things like generic math possible.
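A rough sketch of what that could enable, based on the shape of the preview (the interface and names here are made up, not the real generic math API):

    using System.Collections.Generic;

    // Preview: interfaces can declare static abstract members,
    // including operators, so generic code can finally do arithmetic.
    interface IAddable<T> where T : IAddable<T>
    {
        static abstract T Zero { get; }
        static abstract T operator +(T left, T right);
    }

    static class Sums
    {
        public static T Sum<T>(IEnumerable<T> values) where T : IAddable<T>
        {
            var total = T.Zero;
            foreach (var v in values)
                total += v;
            return total;
        }
    }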
This is a great ability actually.
That said, I think I'd prefer the addition of structural inheritance to the existing nominal inheritance. Also, algebraic types.
Not all of these changes sound good to me. Some sound like very niche cases that I now have to make coding-guideline decisions about, because there's always going to be that one developer in our company who wants to show off, even when it's not the right tool for the job.
AWS Lambda with Graviton, here I come...
Is "natural types" a common term used in programming language theory? I haven't heard of it before. I tried to search it but found nothing.
Ugh. So with the new method groups feature, adding a new method overload can break working code even if that code never calls the new overload.
That was already the case. This does not change that issue.
Using the example of Console.Read from the article, in C# 9, you could do `Func<int> read = Console.Read;`. Now, if someone adds an overload for the Read method to Console, that C# 9 code will break.
In C# 10, that doesn't change. What changes is that we don't have to specify `Func<int>`. We can just use `var`.
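Concretely, the failure mode being discussed (a sketch):

    using System;

    // With exactly one applicable Read method, the method group
    // has a natural type, so var works:
    var read = Console.Read;   // inferred as Func<int>

    // If an overload of Console.Read were ever added, the group would
    // no longer have a single natural type and this line would stop compiling.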
You wouldn't even have to get lambdas involved for this problem to surface. Consider something like this:

    void Foo(double x) { ... }
    Foo(123);

This works, but now I add an overload:

    void Foo(decimal x) { ... }

and the above call is now ambiguous. Note that this example goes all the way back to C# 1.0! Method overloading (and how it interacts with other language features) is probably the single most complicated part of C# today, for good reasons.
That's right. Additionally, this worked as far back as C# 4 or 5, I think?
Changing the code changes the code.
In any popular language, if you have some method whose single argument is being implicitly upcast by a caller, and you then add a more specific overload in that argument's inheritance hierarchy, the caller will now be calling the new method.
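An illustration of that point in C# (the types and methods are made up):

    using System;

    class Animal { }
    class Dog : Animal { }

    static class Handlers
    {
        public static void Handle(Animal a) => Console.WriteLine("Animal overload");

        // Added later: silently captures existing call sites that pass a Dog.
        public static void Handle(Dog d) => Console.WriteLine("Dog overload");
    }

    // Handlers.Handle(new Dog());
    // prints "Animal overload" before the second overload exists,
    // "Dog overload" after: the caller's behavior changed without any edits.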