Towards a more powerful and simpler C++ with Herb Sutter
Jonathan Blow (cult/indie game developer/studio owner) has a lot of ideas about evolving C++ (or rather, building a new language to replace it) specifically in a game industry context. This perspective is interesting because it's running at a different angle from the "memory-safe, better guarantees" project that Rust is working on.
https://www.youtube.com/watch?v=TH9VCN6UkyQ
Just as the scripting languages took a big bite out of C/C++ usage starting in the 90s, I think we're seeing another couple of use-cases peel off from C/C++:
1) Stuff that is performance sensitive but security sensitive as well (Rust).
2) Stuff that is soooo performance sensitive that Rust and C++ are actually too bloated, but which isn't really security sensitive, so the guarantees that Rust/Modern C++ offer aren't worth it (Jai, Blow's language).
Category 1 is clearly real, but Category 2 might be too small to sustain itself (though Blow makes an economic argument that it would be worth it).
Are we just going to keep peeling back C/C++ users from the pack with more specifically-useful languages until there are none left except for those maintaining legacy code? Or are there use-cases where C++ will continue to make sense?
I guess a lot of this depends on whether or not we can meaningfully "modernize" C++ through the standards process without simply bolting on a lot more features that add to the bloat. I wouldn't wager much that this trick can be pulled off.
I went back and forth over email with Jonathan Blow a while back and my opinion is that C++ is moving in a direction where it addresses most of his concerns about its suitability for game programming, though he doesn't agree. I wrote a few blog posts elaborating on why, starting with: http://blog.mattnewport.com/why-c17-is-the-new-programming-l...
I should revisit the topic in light of the latest progress with C++ and to correct some errors in those posts but I continue to believe that C++ is getting better as a language for games.
There is a somewhat legitimate argument that C++ is too complex, but I don't believe you can make a convincing case that this justifies creating a whole new language (which will inevitably come with its own quirks, idiosyncrasies and complexities) rather than engaging with the development of C++ itself.
Given the C++ commitment to backwards compatibility, I think the case for a new language in order to reduce complexity is very straightforward. A minimal case would be a C++ subset I guess.
But the backwards compatibility is taken so seriously because it is so valuable, much more valuable in practice in my experience than the supposed benefits of creating a new incompatible language with fewer features.
Interesting; I think it's fairly clear by now that engaging with C++ is not going to lead to reduced complexity at all. When was the last time it removed support for a feature? That's not the direction the language is taking, because it's more interested in preserving existing code bases than in improving them (the latter being a huge effort, truly massive, so I understand the decision and don't mean it as a slight in the least).
Major influential figures in the C++ community (notably Bjarne Stroustrup and Herb Sutter) are very explicit about having a goal of simplifying everyday usage of C++ (make simple things simple). Watch any of Bjarne's talks over the last few years to see this is one of his main focuses. It's also recognized that backwards compatibility / not breaking existing code are very important however. Yes, that's a hard set of requirements to meet but I find it bizarre when I keep seeing people state that the C++ standards committee and the community in general don't care about simplicity and ease of use when those are literally the major topics of keynotes at every C++ conference of the last few years.
Complexity of the language is a much discussed topic. In the trivial sense it is true that the language will inevitably get "more complex" as new features are added but backwards compatibility is largely maintained. I don't believe that is a very relevant metric for usability however. Higher level features are generally added to languages to make them simpler to use but they also make that language more complex by some metrics. You can argue that assembly language is "simpler" than C++ because it lacks "complex" higher level abstractions but I'd rather write code in C++98 than assembly most of the time and I'd rather write code in C++17 than any previous version of C++ because it continues to get more usable and simple things get simpler.
Complexity doesn't come from abstraction, but rather from the lack of simultaneously short and precise descriptions of how things work. C++ isn't complex because “it's flexible enough to support several programming styles” or whatever nonsense. It's complex because its features are all bolted on, rather than parts of a coherent design from the ground up.
C++ has a standard which contains extremely precise descriptions of how things work which is more than can be said for many languages. How many other languages are there that have three major compiler and standard library implementations with almost no common code yet which all manage to be largely compatible (able to compile the same code and agree on the meaning)?
Some of the complexity of C++ comes from it doing hard things, some is in part a consequence of heroic efforts at backwards compatibility. There are areas of the language that most people stay away from or use in very constrained ways like multiple inheritance that are unlikely to be deprecated for backwards compatibility reasons but in my many years of professional C++ development multiple inheritance has never caused practical difficulties for me precisely because everybody stays away from it except for pure abstract interfaces.
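As an aside, the constrained use of multiple inheritance described above — inheriting only from pure abstract interfaces — can be sketched roughly like this (class names are mine, purely illustrative):

```cpp
#include <string>

// Pure abstract interfaces: no data members, no implementation.
struct Drawable {
    virtual std::string draw() const = 0;
    virtual ~Drawable() = default;
};

struct Serializable {
    virtual std::string serialize() const = 0;
    virtual ~Serializable() = default;
};

// Inheriting from several pure interfaces is the one form of multiple
// inheritance most C++ codebases accept: the bases carry no state, so
// the classic "diamond" data-duplication problems cannot arise.
class Sprite : public Drawable, public Serializable {
public:
    std::string draw() const override { return "sprite"; }
    std::string serialize() const override { return "{sprite}"; }
};
```

Because the base classes carry no data, there is no shared state for the inheritance paths to collide over.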
>> but rather from the lack of simultaneously short and precise descriptions of how things work.
> C++ has a standard which contains extremely precise descriptions of how things work which is more than can be said for many languages.
(0) Most programming languages don't set the bar very high.
(1) You missed the “short” part. The C++ standard is already pretty long, and it isn't even written in a form that makes it possible to prove things about C++ programs by consulting the standard.
> How many other languages are there that have three major compiler and standard library implementations with almost no common code yet which all manage to be largely compatible (able to compile the same code and agree on the meaning)?
I can think of at least four: Standard ML, Common Lisp, C, Java.
> Some of the complexity of C++ comes from it doing hard things,
In unintelligent ways. For example:
(0) Macro, pardon my French, template expansion as a generic programming tool is super dumb. Better alternatives were known in the 70's.
(1) C++ classes are very poor abstract data types: You need arcane language design hacks like `friend` to abstract over two or more types at once.
(2) C++ classes are also very poor object constructors: The language has no type for “all objects that have methods foo, bar, qux” (à la OCaml or Go).
> some is in part a consequence of heroic efforts at backwards compatibility.
Most of it.
> (0) Macro, pardon my French, template expansion as a generic programming tool is super dumb. Better alternatives were known in the 70's.
And yet C++ has better support for generic programming than most mainstream languages and it continues to improve. What's a language you think does generic programming 'right'?
> (1) C++ classes are very poor abstract data types: You need arcane language design hacks like `friend` to abstract over two or more types at once.
I don't really understand this comment. C++ has good support for static / compile time ADTs (the STL is very much built around the idea) and also supports 'runtime' ADTs through interfaces. In neither case is friend needed to abstract over types.
> (2) C++ classes are also very poor object constructors: The language has no type for “all objects that have methods foo, bar, qux” (à la OCaml or Go).
Static polymorphism in C++ currently relies on duck typing but this is what the Concepts TS is addressing. A major benefit of that will be improved error messages and a better generic programming experience.
> What's a language you think does generic programming 'right'?
The most obvious examples are ML and its derivatives.
> C++ has good support for static / compile time ADTs (the STL is very much built around the idea) and also supports 'runtime' ADTs through interfaces. In neither case is friend needed to abstract over types.
The STL is built around ADTs in spite of the language's lack of support. If C++ had an actual notion of abstract types, then templates could be type-checked against ADT specifications (so-called “concepts”), rather than having to wait until instantiation time.
> In neither case is friend needed to abstract over types.
You can't encapsulate a single abstraction providing two or more ADTs without using `friend`. That's a fact.
> Static polymorphism in C++ currently relies on duck typing but this is what the Concepts TS is addressing.
This is a reply to the wrong thing. Concepts are ADT specifications, not object specifications.
> The most obvious examples are ML and its derivatives.
I'm reasonably familiar with F# but I know that's strayed quite far from its ML roots and its generics are constrained to some degree by what the CLI supports. For the kinds of generic programming that C++ does well it's not clear to me where the ML style is superior. Can you give some specifics?
> If C++ had an actual notion of abstract types, then templates could be type-checked against ADT specifications (so-called “concepts”), rather than having to wait until instantiation time.
Yep, that's why everyone wants Concepts to be standardized.
> You can't encapsulate a single abstraction providing two or more ADTs without using `friend`. That's a fact.
I'm still not clear what the issue is here. A C++ class can implement multiple interfaces for dynamic polymorphism without using friend and can implement more than one 'concept' for static polymorphism without using friend. Can you explain what you mean in more detail?
> This is a reply to the wrong thing. Concepts are ADT specifications, not object specifications.
Are you talking about something like Go interfaces? For my use cases static polymorphism is generally preferable to dynamic polymorphism and in those cases where dynamic polymorphism is desirable C++ style interfaces are often sufficient. When the need arises for something like a Go interface in C++ people usually use type erasure but implementing that currently requires a bit more boilerplate than would be ideal (though there are libraries that help). That's something that could be solved in the future by something like Herb Sutter's metaclass proposal. Perhaps I'm still not understanding your point though.
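To make the type-erasure technique mentioned here concrete, a minimal hand-rolled sketch might look like this (all names are illustrative; real code would use a library or, someday, metaclass-generated scaffolding):

```cpp
#include <memory>

// Type erasure: wrap any object that has `int area() const` behind a
// uniform runtime interface, without the wrapped type deriving from anything.
class AnyShape {
    // Internal interface — the only place a vtable appears.
    struct Concept {
        virtual int area() const = 0;
        virtual ~Concept() = default;
    };
    template <typename T>
    struct Model : Concept {
        T value;
        explicit Model(T v) : value(std::move(v)) {}
        int area() const override { return value.area(); }
    };
    std::unique_ptr<Concept> self_;
public:
    template <typename T>
    AnyShape(T value) : self_(std::make_unique<Model<T>>(std::move(value))) {}
    int area() const { return self_->area(); }
};

// Two unrelated types; neither knows about AnyShape.
struct Circle { int r; int area() const { return 3 * r * r; } };
struct Box    { int w, h; int area() const { return w * h; } };
```

The `Concept`/`Model` pair is exactly the boilerplate the comment refers to — it has to be written per interface today.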
> Can you give some specifics?
Concepts are what ML and Haskell have been calling “signatures” and “type classes”, respectively, for ages. Unlike concepts, which only exist in the collective minds of C++ programmers, signatures and type classes are actual features of existing type systems, so, for instance, the type checker can make sure that you aren't trying to use a nonexistent (member) function - without ever attempting to instantiate your generic code.
It is very unfortunate that F# got rid of this important feature.
> Yep, that's why everyone wants Concepts to be standardized.
What I'm telling you is other languages have had this very same feature for well over two decades.
> I'm still not clear what the issue is here.
Here's a thought exercise: Design an API for manipulating graphs, nodes and edges. The concrete representation of these three types must be hidden from the user by language-enforced mechanisms. Using `friend` is not allowed. Using the pimpl pattern is not allowed. Bypassing type safety is not allowed.
The fundamental problem that you'll run into is that C++'s access levels work with one type at a time. ML modules don't have this problem, because a single signature can specify multiple abstract types. An implementation can have access to the representations of all three types (graph, node, edge), while at the same time hiding these representations from all clients.
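A minimal C++ sketch of the problem being described — access control stopping at class boundaries, with `friend` as the only escape hatch (names are illustrative):

```cpp
// Graph needs to read and write Node internals, but `private` is
// enforced per class, not per module.
class Node {
    int id_;                 // hidden representation
    friend class Graph;      // the escape hatch the exercise forbids
public:
    explicit Node(int id) : id_(id) {}
};

class Graph {
public:
    // Without the friend declaration above, this does not compile:
    // Node::id_ is private to Node, and C++ has no way to say
    // "private to the graph/node/edge module as a whole".
    static int node_id(const Node& n) { return n.id_; }
};
```

An ML signature, by contrast, can declare several types abstract in one place and give a single implementation access to all of their representations at once.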
> Are you talking about something like Go interfaces?
Yes.
> For my use cases static polymorphism is generally preferable to dynamic polymorphism and in those cases where dynamic polymorphism is desirable C++ style interfaces are often sufficient.
I agree: static polymorphism is preferable whenever possible. It is easier to reason about both for language users (who care about correctness) and compiler writers (who care about optimizations). However, sometimes you want dynamic polymorphism, and that's what objects (in the object-orientation sense, which you can think of as “thingies that have vtables”) are for.
I think the general consensus of the C++ community is that we need Concepts or something like it. I don't think anyone is really claiming no other language offers something similar. I'd love to learn more Haskell but for my primary domain (games and VR) it's not a terribly practical option. This gets back to my original point - yes Concepts will make C++ more complex (in the sense of adding features) but I think it will make the language better / simpler to use in practice and I look forward to it being standardized and widely available so I can use it.
> The concrete representation of these three types must be hidden from the user by language-enforced mechanisms.
This seems to be more an issue of encapsulation than support for ADTs. There's value in hiding concrete representations (ABI compatibility) but C++ works well for my use cases most of the time with concrete representations visible (and this helps with performance which is important in my domain).
As a language user I don't think it's just compiler writers who care about performance :)
I do see the value in something like Go interfaces although it's not a problem I encounter that frequently in my domain. Type erasure is a handy technique in those situations and I think better language support to eliminate some of the boilerplate is desirable. More complexity :)
> I'd love to learn more Haskell but for my primary domain (games and VR) it's not a terribly practical option.
Oh, sure, Haskell has lots of defects. (Chiefly among those, being lazy.) I only said that it has something that's essentially concepts, except it has been designed, implemented and used since ages ago.
> This seems to be more an issue of encapsulation than support for ADTs.
The whole point to ADTs is that clients don't get to manipulate the internal representation! What exactly is that, if not encapsulation?
> most of the time with concrete representations visible (and this helps with performance which is important in my domain).
ADTs are about hiding the representation from abstraction clients (other programmers), not from the compiler, of course! In fact, a compiler writer could use ADTs solely for type-checking purposes, and from then onwards proceed as if ADTs didn't exist. So I don't see how using (proper) ADTs must have any adverse effect on performance.
> I do see the value in something like Go interfaces although it's not a problem I encounter that frequently in my domain.
I have to agree, I don't use objects with dynamically dispatched methods much either. My original point was just that C++ classes are neither good ADTs nor good object builders. They're at an uncomfortable point at the middle, with the disadvantages of both, and the advantages of neither.
> I only said that it has something that's essentially concepts, except it has been designed, implemented and used since ages ago.
I'm not sure why this is relevant to the topic at hand though, other than historical interest. What's relevant is that something like concepts are a useful thing for a language to have and C++ will be a better / more usable language with them, even if it means adding 'complexity'.
> The whole point to ADTs is that clients don't get to manipulate the internal representation! What exactly is that, if not encapsulation?
You said "The concrete representation of these three types must be hidden from the user" and mentioned the pimpl pattern which led me to think you were talking about ABI issues. In C++ generally private members are not accessible but they are visible (in headers) and affect object size and layout. That can be a problem for build times and for versioning / binary compatibility but it also allows for private functions to be inlined and avoids pointer indirections and simplifies certain other optimizations (devirtualization for example).
C++ does not currently have a language level concept of modules (and I'm not sure the modules proposal working its way through standardization addresses your issue here) or anything like the C# internal access level. There are patterns to structure your code to enable implementation hiding for collaborating classes in a 'module' but they don't tend to be very widely used due to lack of first class language support. In my own experience I haven't found this to be a huge issue but maybe I just don't know what I'm missing.
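For reference, the pimpl pattern discussed above looks roughly like this — the representation hidden behind a pointer, at the cost of an extra allocation and indirection (a minimal sketch, with both "files" shown inline):

```cpp
#include <memory>

// widget.h — clients see only this part. The representation lives in
// the .cpp file, so it can change without breaking client code layout.
class Widget {
public:
    Widget();
    ~Widget();                 // must be defined where Impl is complete
    int value() const;
private:
    struct Impl;               // declared, never defined in the header
    std::unique_ptr<Impl> impl_;
};

// widget.cpp — the hidden representation and out-of-line definitions.
struct Widget::Impl { int value = 42; };
Widget::Widget() : impl_(std::make_unique<Impl>()) {}
Widget::~Widget() = default;   // Impl is complete here, so this compiles
int Widget::value() const { return impl_->value; }
```

This buys implementation hiding and stable binary layout, but every access goes through a pointer — the performance trade-off mentioned above.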
> Here's a thought exercise: Design an API for manipulating graphs, nodes and edges. The concrete representation of these three types must be hidden from the user by language-enforced mechanisms. Using `friend` is not allowed. Using the pimpl pattern is not allowed. Bypassing type safety is not allowed.
Perhaps I am not understanding your example, but...
I'd have a Node class, an Edge class, and a Graph class. None would be friends of any of the others. Many of the API functions would take a Graph (plus other parameters), but you might like them to be able to operate on an Edge or even a Node as well. But the way I think you handle this is by having a conversion operator (which is also a constructor). That is, if I have a Graph constructor that takes an Edge parameter, and one that takes a Node parameter, now I can use an Edge or a Node as a parameter to a function that takes a Graph.
Note that this does not require Graph to know about the internal details of Edge or Node. Also, it does not bypass type safety. It does sometimes return a different type than you passed in, but I'd argue that you want that: If you call addEdgeToGraph you expect to get a Graph back, and you will, even if you passed in a Node in place of the Graph.
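A minimal sketch of the converting-constructor approach described above (types and members are illustrative):

```cpp
// Each class keeps its representation private; none is a friend of the others.
class Node {
    int id_;
public:
    explicit Node(int id) : id_(id) {}
    int id() const { return id_; }   // public interface only
};

class Graph {
    int node_count_;
public:
    Graph() : node_count_(0) {}
    // Converting constructor: a Node can be used wherever a Graph is
    // expected, using only Node's public interface.
    Graph(const Node&) : node_count_(1) {}
    int node_count() const { return node_count_; }
};

// An API function written against Graph also accepts a Node,
// via the implicit conversion.
int count_nodes(const Graph& g) { return g.node_count(); }
```

Note that, as the comment says, the function still returns/operates on a Graph even when handed a Node.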
Simplifying everyday usage is an entirely different thing from simplifying the language. I'm not denying the improvement in usage, just that you should expect the complexity of the language to continually increase, because that's what the C++ community needs.
My question then is why should I care about "increased complexity" by this definition more than about "simpler usage"? Adding well thought out language features that simplify my every day usage is a good thing and the sense in which it increases complexity is not a sense that particularly concerns me. I think C++ is generally making good choices about what features to add. Generic complaints about new features tautologically "increasing complexity" are not interesting. Specific concerns about particular features not carrying their weight in terms of simplifying usage are interesting (and a big part of what the C++ standards process is engaged in).
Agreed. You have to choose between "large scale removal of features from the language" vs "30+ years of back-compat". You can't have both. A huge strength of C++ is its legacy and the maintainers would be foolish to throw that away in a C++ 2.0 movement.
Instead, they add new features that lets new code be written in new ways without requiring you to toss out your old code. Your old C code is full of mallocs and frees. Your old code still works when you partially update it using the newest features. But, once C++ added new and delete, you rarely ever needed to type malloc or free any more unless you were overloading new and delete. Your old C++ code is full of news and deletes, but new language features added in C++11 made unique_ptr and shared_ptr possible. And, now you rarely need to type new or delete unless you are making your own unique_ptr/shared_ptr variant.
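The progression described here, sketched side by side with the same toy resource (illustrative; note that std::make_unique arrived in C++14):

```cpp
#include <cstdlib>
#include <memory>

// C era: manual pairing of malloc/free.
int c_style() {
    int* p = static_cast<int*>(std::malloc(sizeof(int)));
    *p = 1;
    int v = *p;
    std::free(p);
    return v;
}

// Classic C++: new/delete, still manually paired.
int cpp98_style() {
    int* p = new int(2);
    int v = *p;
    delete p;
    return v;
}

// Modern C++: ownership expressed in the type; no explicit delete.
int modern_style() {
    auto p = std::make_unique<int>(3);
    return *p;   // freed automatically when p goes out of scope
}
```

The old styles still compile, which is exactly the point: each era's code keeps working while new code gets written the new way.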
Why would you care? Well, the complexity makes tooling difficult, the engineers expensive, and arguably ongoing development could be slower depending on how many C++ features they have to deal with in one code unit. Are these strikes against the language itself? No, of course not, but one can see why e.g. using Go instead might afford more flexibility if the distinguishing C++ features aren't necessary. The language complexity is still something to consider even if C++ is sufficient.
I am by no means arguing that C++ committees are making any mistakes.
I don't think we disagree on much. Jonathan Blow is of the opinion that what games programmers need is a new language designed from scratch, in part because he believes the most viable (and dominant) language used for games, C++, is irredeemably complex. I recognize that there are complexities to C++ but I'm of the opinion that it's been getting better and will continue to do so for games development and that the new features in practice are making the language simpler to use even if technically they are making it "more complex" in the sense of having more features. The tooling is also getting better, despite the complexity, and Clang has played a big part in that. The case for switching to a different language is not compelling to me, especially to a completely new language rather than something with some track record.
C++17 removed trigraphs. C++17 removed dynamic exception specifications, which had been deprecated since C++11. C++17 removed operator++ for bool, which had been deprecated since C++98. C++17 removed the "register" storage class.
C++11 removed the original meaning of "auto".
> When was the last time it removed support for a feature?
Why should it? Adding features is enough for simplifying stuff.
eg take the following code; it leverages three new features: auto, range-based for, and braced initialization:
std::vector<int> v = {1, 3, 12, 17, 20};
for (auto& val : v) { val++; }
How would it look in C++03? Two possibilities:
std::vector<int> v;
v.push_back(1);
v.push_back(3);
...
v.push_back(20);
for (std::vector<int>::iterator it = v.begin(); it != v.end(); ++it) {
    (*it)++;
}
Or:
int v[] = {1, 3, 12, 17, 20};
for (int i = 0; i < (int)(sizeof(v) / sizeof(int)); i++) {
    v[i]++;
}
As an ex-gamedev I don't buy that #2 is too bloated for C++; you just have to be smart about what features you pick.
Really though, for what he wants to do you want a flexible framework (scripting language) backed by a fast engine (native). Jai sounds pretty interesting but I don't know if Blow has the interest in building an ecosystem around it or just using it for his own projects.
FWIW my ideal use case is Lua + Rust. I've done it on a few projects so far and really love the combo of flex + stability.
Agreed
The only case I see for "C++ is too bloated" is for embedded apps on limited hardware and even then
But then of course people make something that's 10 inheritance levels deep and (ab)uses templates and then suddenly "C++ is slow". Write better code
C++ has done a pretty good job of maintaining the ideal that "you don't pay for what you don't use". Your C++ is too slow? Think carefully about which features you're using.
If you choose features appropriately (given that speed is your top concern), and you still find that another language is faster, I'd be quite surprised.
I don't think this is specifically a C++ problem, I think it's a problem for any language which tries to be generalist and encompass many use cases without finding a well-defined niche. Namely, the compromises that a language makes in order to appease wildly divergent use cases will make it less than optimal at many of them, and if demand is high enough for any one of those use cases, then a different language that is custom-tailored for that use case will start to find a foothold.
I think of it like bicycling: one could use the same bike for commuting, road racing, mountain biking, and BMX, and you'd certainly save room in your garage, but most people do not need to do all four of those activities and will instead invest in bikes that make tradeoffs to excel in the use cases that they actually care about.
C++ design principles are already a good match for game development however. Specifically its intent to leave no room for a lower level language (except assembly) and to provide zero cost abstractions. There are lots of things C++ doesn't try to be and there are lots of languages more popular in particular domains as a result. C++ dominates game development (and particularly game engine development) because it is the best available match to the needs of that domain. It continues to try and evolve to match those needs better (that's why there's a game-development-focused study group, SG14, on the standards committee; show me another language that takes the needs of game developers that seriously).
> 2) stuff that is soooo performance sensitive that Rust and C++ are actually too bloated, but which isn't really security sensitive so the guarantees that Rust/Modern C++ offer aren't worth it (Jai, Blow'
If you expect Jai to be faster than C++ I think you will end up disappointed. There isn't much reason there should be any more performance disparity than there is between Clang and a different C++ compiler. The only language that I think could really be called faster than C++ is ISPC.
I'm suspicious about your so-called (runtime) slow bloat. Learning/understanding bloat, sure; compile-time bloat, sure. But execution-wise? I would like examples, specifically some that could not be dodged in a trivial manner.
From the Rust perspective, there should be no time when Rust is too “bloated” for performance. That’s clearly an ideal but if there are specifics I’d love to hear about them.
We endeavor to not ever leave performance on the table.
The interview covers proposed metaprogramming features in upcoming versions of C++. In particular, it demonstrates metaclass as a way for users to define new kinds of types, instead of relying solely on class/struct/union/enum.
For example, Java has interfaces, in which methods are declared but not defined. The metaclass proposal gives a demonstration of what an interface in C++ could look like:
interface Shape {
int area() const;
void scale_by(double factor);
};
Instead of changing the compiler to allow for a new interface keyword, we can create a metaclass:
// the dollar sign ($) prefix indicates reflection and metaprogramming
$class interface {
// the constexpr indicates compile-time execution
constexpr {
// raise an error if there are data members
compiler.require($interface.variables().empty(),
"interfaces may not contain data");
// loop over all functions
for (auto f : $interface.functions()) {
// raise an error if move/copy functions are present
compiler.require(!f.is_copy() && !f.is_move(),
"interfaces may not copy or move");
// function must be public
if (!f.has_access())
f.make_public();
compiler.require(f.is_public(),
"interface functions must be public");
// function must be virtual
f.make_pure_virtual();
}
}
// add a destructor
virtual ~interface() noexcept { }
};
Thus I can create a new kind of type directly in my code. This can be part of a library for downstream users without ever changing the compiler. See the full proposal here:
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2017/p070...
That is not a step toward simpler code. Under that proposal, people looking at your code will have to look up definitions of basic things that ought to be keywords---like "interface"---in order to reason about the code.
> Under that proposal, people looking at your code will have to look up definitions of basic things that ought to be keywords---like "interface"---in order to reason about the code.
That's a good thing: now you can refer to 20 lines of code instead of 15 pages of standardese to understand what happens.
I can learn the 15 pages of standardese and it'll apply uniformly to every program. The 20 lines will contain some subtle surprise at the worst possible time.
> I can learn the 15 pages of standardese and it'll apply uniformly to every program.
That's assuming the compiler implementors had the same interpretation as you. With code, there is much less room for interpretation.
Not really a problem, if a "basic thing" is common enough you just put it into a library. Maybe even the C++ standard library.
The "interface" definition should probably qualify for that.
Are they really going to add $ to C++? Unbelievable. What was wrong with "reflexpr"? It is way more C++'ish than "$".
Update: From Herb's blog:
"Also, a vocal minority in the committee strongly want a syntax that does not include the $ character (not just for $class, but also for $expr reflection) because they have large code bases that use $ in source code that is not C++ code but is processed to emit C++; removing $ is an easy change at any time and we’ll just follow what the committee decides for reflection syntax (in fact, the alternative syntax I showed in the endnote above removes the need to write $). So further work is needed on those items, but fortunately none of it affects the core model."
https://herbsutter.com/2017/07/26/metaclasses-thoughts-on-ge...
P.S. I really hope that vocal minority would win ;)
But overall the proposal is what I have been talking about for a long time.
It's still up in the air I think - you can see from his blog[1] that others on the committee prefer a syntax more like
meta::type interface(const meta::type source) { /* … basically same code … */ };
[1] https://herbsutter.com/2017/07/26/metaclasses-thoughts-on-ge...
Thank you, I didn't know that. This syntax makes much more sense, considering C++ style programming.
I am very interested in having Reflection in C++. But tbh $ makes it really awkward.
The best way to add new language primitives without breaking anyone is to pick a syntax that currently fails to compile.
`reflexpr interface {...}` could be declaring and initializing a global.
`$class`, on the other hand, doesn't compile. One could also do `virtual class` or something, I guess, since `virtual class` is a combination of reserved keywords.
You are correct. But there is another side too. $ makes the language way uglier than it is now, and C++ is an ugly language already.
Why does it make C++ more ugly? Do you find PHP ugly too?
Next proposal will be about the new C++ logo, which will be, yes, a camel!
Its so much simpler :)
I agree, but $ doesn't make the language that much uglier than it is now, does it?
I don't see why, unless you have something particularly against the $ symbol? It seems to fit with existing syntax quite well, e.g. & gives the address of a thing, $ gives the reflection of a thing. Extending that to define a metaclass seems pretty natural.
I sort of want a file-by-file language upgrade like Objective-C seems to have. For instance, if you add something like “nullable” to an Objective-C header, then the compiler will require similar directives throughout the file; otherwise, it doesn’t.
C++ needs a new strict set of rules that (ideally for individual files, to start) prohibits some set of older/deprecated features from even compiling. That way, you know where the language is going and you adapt.
This sounds to me like "we will make it simpler by adding more features (that are presumably simpler to reason about)." The problem with C++ (and the reason that it is too complex) is that it has too many features. This proposal will do nothing to eliminate all of the cruft, the real source of complexity. That would require actually removing features, backwards-compatibility be damned!
Looking at this post https://news.ycombinator.com/item?id=15613848, I wonder if it could help to simplify C++ by moving some of the existing features into libraries, which you would have to include for backwards compatibility, but could do without if you didn't need backwards compatibility.
Backwards compatibility in C++ means "your existing code still compiles and does the same thing". Can you give an example of a situation where that could be achieved with your proposal?
Well, my thought was that you'd just have different profiles, like is already used:
-std=c++20-full
or:
-std=c++20-light
Perhaps c++20-light could never become the default, since it would break backwards compatibility, but you could always set the flag. I dunno. It was just a thought I had when reading about the new feature, not something I've thought through.
C++ has many problems. A too complicated language is the least important of those because the newer features are significantly easier to use and read.
Headers suck. Build systems suck. Package management sucks. Compile times suck.
Precisely the things that are not part of the language are those that suck the most.
I rarely see anyone present well thought out specifics of what should be removed from the language when making these types of claims. There are some complex areas (two phase name lookup springs to mind) that might be done differently if designed anew but I haven't seen too many good examples of things that could be "easily" removed, backwards compatibility be damned, that I have found to be actual problems in practice. The best examples are usually legacies of C.
Yeah Metaclasses in Python are obviously so powerful and make programs so easy to read, so let's just go ahead and add those as well.
Now we only write
struct Point {
int x;
int y;
};
to get the oh-so-needed
class Point {
private:
int x;
int y;
public:
Point() =default;
~Point() noexcept =default;
Point(const Point&) =default;
Point& operator=(const Point&) =default;
Point(Point&&) =default;
Point& operator=(Point&&) =default;
};
Genius! Almost like 1972, where we wrote the former and that was just fine!
You can still write the former and it's just fine.
Except, I need to think about the latter?
Not for a Plain Old Data (POD) type. You need to think about the latter if you're manually managing memory or OS resources in your class (which should be rare) or you're trying to optimize performance when you have members that can be moved more cheaply than copied (usually because they manage memory) which should also usually be rare and the result of identifying a performance issue through profiling.