Thoughts on Modern C++ and Game Dev

elbeno.com

150 points by lebek 7 years ago · 142 comments

arandr0x 7 years ago

I'm not in games, but my industry is adjacent and I have the same gripes with C++ (which the author characterized very well). And build and debug times definitely are an issue regardless of the performance of the hardware.

I find reading C++ "standards" papers onerous and feel like they're written in a way that's deliberately inaccessible. I don't much like the idea of going to CppCon -- even if my company funded it, which maybe they would, I feel like I'd be marginalized for not using template metaprogramming, not knowing the new hotness by heart, and generally being a proponent of C-with-classes. So much of the C++ "standards" work just feels like it's led by academics who think the concerns of working programmers like me are beneath them.

Is there a way I can "get involved" and does my voice have any value?

  • mattnewport 7 years ago

    If you want to get involved and for your voice to have value then you have to educate yourself on the subject. Unfortunately too many of my former colleagues in the games industry (I'm now in a "games adjacent" industry too) fail to do this before complaining about C++ and have the same attitude of "it's not fair that I should have to know what I'm talking about before anyone will listen to my complaining seriously".

    Videos of all CppCon talks from the last several years are freely available on YouTube, and if you took the time to watch them you'd see that many of them are by working programmers rather than academics, quite a few of them in the games industry. You'd also learn that the committee is quite focused on simplifying the use of the language and on finding more usable ways to get the benefits of template metaprogramming. You would also find explanations of many features that are more accessible than standards papers, and occasional explanations of why standardese is the way it is - nobody, even the most academic speakers, claims to find the standard the most accessible way to learn about a new feature.

    • foldr 7 years ago

      >"it's not fair that I should have to know what I'm talking about before anyone will listen to my complaining seriously".

      In a way it isn't fair. Most users of most programming languages don't know the language very well. So the views of these programmers are important.

      • mattnewport 7 years ago

        I've never really bought this argument. If you want to contribute to theoretical physics and be taken seriously, it's expected that you have a solid grounding in physics. This principle is fairly widely applied. There are people on the standards committee who represent their companies, and part of their role is to speak for the needs of the 'regular' programmers at their companies who may not be as well informed on the language - Titus Winters from Google takes that part of his role seriously. Bjarne Stroustrup teaches C++ to many students and one of his big focuses is on simplifying the language for beginners. To do a good job of simplifying something, though, you actually need to understand the problem space very well. Simple ain't easy.

        If someone wants to represent the 'average' programmer then the best way to do it would be to educate themselves on the language and the current state of standardization and then participate as an advocate for those programmers.

        • foldr 7 years ago

          I wasn't talking about participating directly in the standards committee (and neither was the OP).

          It seems that the people who are supposed to be speaking for the regular programmers aren't doing a particularly good job.

          • insulanus 7 years ago

            > It seems that the people who are supposed to be speaking for the regular programmers aren't doing a particularly good job.

            I agree with this. The people at CppCon are very smart, very informed, and very, very pro-C++. It's silly to think that such a person (people on the committee, no less) would be a good spokesperson for your everyday programmer.

            However, that's certainly not the main problem keeping C++ complexity high. Even though the committee genuinely wants improvement in this area, there is too much fear of breaking legacy code, and too many early design mistakes to overcome on the path back to a simpler C++.

          • mattnewport 7 years ago

            It does seem that it seems that way. As someone who's pretty familiar with the current state of C++ and who also has a long history in the games industry, however, I don't believe it actually is that way. I think the problem is more one of perception than reality, but many of the complaints come from people who don't know what they don't know, and I'm not sure of the best way to fix that problem. The resources for them to educate themselves are freely available, but a certain subset of them seem weirdly, proudly stubborn about maintaining their ignorance.

            I'm not personally that interested in making it my mission to help educate people who don't want to be educated but perhaps it would be a valuable way for someone to get involved. People complaining on the basis of wrong information or misunderstandings aren't likely to be taken terribly seriously though.

            • foldr 7 years ago

              You're very quick to call people ignorant and make uncharitable assumptions about their motivations. If your attitude is the common one, then it does seem that the opinions of regular C++ programmers are not valued by the people involved in the standards process.

              • mattnewport 7 years ago

                I'm quick to call people ignorant who demonstrate their ignorance. It's not even an insult - I'm ignorant about lots of things. There's nothing wrong with being ignorant about something if it's not important or relevant to you. However, if you want to get involved in technical discussions about a topic that is of importance or relevance to you then I believe it is your responsibility to take advantage of the resources available to you to learn about that topic so you're not ignorant any more. In my experience the C++ community is quite helpful and generous about helping people who want to learn but you're going to have to do a certain amount of the work yourself.

                I'm not part of the standards committee nor do I represent the C++ community as a whole. I personally am not particularly interested in trying to educate people who don't want to be educated. That doesn't say anything about whether I or anyone else in the C++ community is interested in the opinions of regular C++ programmers.

                • foldr 7 years ago

                  Arandr0x hasn't demonstrated his/her ignorance. I'm not sure what you hope to achieve by harping on the ignorance of unnamed persons who aren't participating in this discussion.

    • arandr0x 7 years ago

      I've read some of the drafts and the final spec of the APIs I care most about, for example, the concurrency API (which I like a lot... I'm not solely complaining!). Do I need to watch every talk on every C++ feature before anybody in the C++ community will want to talk to me? Because I'm paid to actually write code. (I mean normally. Today my boss isn't back from vacation so you get to read my rants on HN.)

      Here are the C++ subjects of interest to me:

      * geometric primitives. Not having basic geometry by now is insane.

      * concurrency. I think the API as it stands is good, but some of the APIs around atomics (mostly the difference between compare_exchange_weak and compare_exchange_strong) are not clear; I have had to explain those to coworkers before. Why can't we have something called test_and_set, like on atomic_flag, that's an OK default? Also, while I perfectly understand why that's the case, atomics have deleted copy constructors, and that tends to generate compiler errors. I don't get why compilers can't just generate the non-copying constructors that all my coworkers keep having to write by hand, potentially introducing errors (see the sketch after this list).

      * filesystem API. Very excited for that, but sort of worried it will not work on every platform, and especially that it'll work terribly on Windows. I have to write Windows/Linux/iOS/Android C++, so a bunch of my gripes are "the standard is inconsistently supported", and I suppose that's not the job of the standards committee to enforce... but maybe they could stop inventing new stuff for Microsoft to screw up...

      * a few other misc things. The fact that X const& and const X& are both valid leads to every project having its own "standard" of const placement, which isn't great to read, etc. Not major stuff, but things that are of concern to working programmers.
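
      To illustrate the atomics gripe, here is a minimal sketch of the boilerplate I mean (Stats is a made-up name; the hand-written copy operations are exactly what my coworkers keep retyping):

        #include <atomic>

        // std::atomic<T> deletes its copy constructor, so any struct
        // holding one stops being copyable until you write this by hand.
        struct Stats {
            std::atomic<int> hits{0};

            Stats() = default;
            Stats(const Stats& other) : hits(other.hits.load()) {}
            Stats& operator=(const Stats& other) {
                hits.store(other.hits.load());
                return *this;
            }
        };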

      I'm not really complaining, I love C++ and being a C++ programmer and you couldn't pay me enough to go integrate idiosyncratic web frameworks. However I think you must admit that the C++ community, beyond the language itself, is a little exclusive and not super friendly to people who are not as knowledgeable as others. I have a little time to learn some of this stuff, and I would be happy to, but I will never be an expert like most of the standard committee is. Based on your message I think that makes me and my opinion not welcome? If so that's OK, but, um, there's a lot more of me than there are of them in this industry.

      (edited for formatting)

      • chris_wot 7 years ago

        On the X const& and const X& issue, I wish that everyone would use X const& and never const X& - it would make the rule that "the thing on the left is the subject of the const" much easier.
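
        For example (reading right to left, const applies to the thing on its left):

          int const a = 1;          // a const int: const binds to the int on its left
          int const* p = &a;        // pointer to const int
          int const* const q = &a;  // const pointer to const int
          const int b = 2;          // legal, but the one case where const has nothing on its left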

      • mattnewport 7 years ago

        The standards committee is a volunteer effort. Some of the most active participants have employers who consider their involvement as part of their job responsibilities so they are at least partly being sponsored by their employers but plenty of people involved in the standardization process and attending conferences are not paid by anyone for their participation and are in fact paying the costs of attending out of their own pocket. Hardly anyone in the C++ community is directly paid to sit around learning more about the language. If you wanted a career where you didn't have to put in learning on your own time to keep your skills current I think you picked the wrong one.

        It's a fairly involved process to get a new library standardized. Functionality that's not included in the standard library may be missing because it's hard to get agreement on what is required, because there isn't anyone sufficiently motivated to drive it through standardization or because the committee just hasn't got round to it yet.

        I don't consider this particularly problematic in many cases. Some functionality probably shouldn't be part of the standard IMO. For example, there's been some effort to standardize a 2D graphics library and I'm of the opinion that should not be part of the C++ standard. I'm inclined to think the same thing for geometric primitives, for similar reasons. Certain types of library are better left to competing open source libraries for now I believe.

        Implementations are not the domain of the standards committee. It's rather inconsistent, however, to demand relatively niche new library functionality for geometric primitives while at the same time wanting the committee to stop adding things to the standard faster than Microsoft can keep up.

        Concurrency is complicated and there are a lot of pitfalls which the standard library has gone to great lengths to avoid. It sounds like you're misunderstanding the best way to use atomics but without knowing more about your use case I'm not clear exactly how.

        Coding standards are not really the domain of the standard and some things that have multiple possible ways to write them have to be maintained for backwards compatibility. If you want to standardize things like const placement however, tools for automatic formatting and transformation of C++ are getting better all the time thanks to clang/llvm.

        C++ occupies a niche which necessitates it being a bit less beginner friendly than some languages and a bit more demanding on users putting in the effort to learn the language. That's what makes it fairly uniquely suited to certain use cases. I don't see that as a bad thing. I don't really know what you're asking for, to be honest. My advice to anybody who uses C++ as a major part of their career is to invest a decent amount of time into learning it better, though, just as I'd advise them to spend time learning more in any domain that is a big part of their professional life.

        • arandr0x 7 years ago

          Thank you for replying in depth, I didn't expect it. I actually agree with you re: 2D graphics (I was aware of it already); I just think geometry is different because it has wider applications. Your viewpoint is consistent though.

          Again, I think concurrency is one of the parts the standard library does best. (And I think part of this is that it has primitives like async and future that are higher level and much easier to explain. I happen to write lock-free data structures often enough that finding blog posts at the right level to explain the atomics API to my code reviewers is a need/bother for me, but there are plenty of concurrency use cases where I don't need atomics. I also read that there was a plan to allow parallel execution for most of the algorithms in the standard library; I don't know if that's in MSVC yet, but that's an example of the things I like.)
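
          As an example of the kind of higher-level use that's easy to explain, a toy sketch (parallel_sum is my own name, nothing standard):

            #include <future>
            #include <numeric>
            #include <vector>

            // Sum the two halves of a vector concurrently: one half runs on
            // a worker thread via std::async and is joined through a future.
            double parallel_sum(const std::vector<double>& v) {
                auto mid = v.begin() + v.size() / 2;
                auto lo = std::async(std::launch::async,
                    [&v, mid] { return std::accumulate(v.begin(), mid, 0.0); });
                double hi = std::accumulate(mid, v.end(), 0.0);
                return lo.get() + hi;
            }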

          I don't blame the standards committee for implementors. In general, it must be terribly difficult to be on the standards committee and have to deal with stuff implementors have already done, stuff they won't do, and stuff they are telling you they are doing but will not do right in the end. OTOH, as an end user, you have to understand that I have to deal with implementors, and they affect how I think of a given C++ feature, some of which look awesome in the text but are impractical because of implementor differences. I'm not alone: every C++ dev I know has a subset of C++ features they have deemed "practical", based on things like implementation performance, compiler error messages, how many characters the function names have, etc., and part of the problem is that it's not the same subset for all C++ devs. Sometimes I wonder if there shouldn't be a subset of the standard (and I don't mean the C subset!) that's earmarked as beginner-safe, where compilers could enforce the, well, beginner-safeness of code that is meant to be foolproof... but I suppose your angle is that code meant to be foolproof should either never allow a beginner within 50 feet of it or be written in Java (or Rust?). Which, I get.

          I would like to clear up a misunderstanding that I think you had: I do not spend zero time learning about C++. I'm not super confident and I think by HN standards I am not very experienced in C++ (or anything else), I'm definitely not at the level where people at CppCon would want me in the room, but people at my workplace do come to me for help with that stuff a bunch of the time. I tend to volunteer to explain newer or more complicated parts, etc. I'm a mid-level dev generally and C++ is my main language, and I'm not significantly more ignorant than my coworkers about it. If that is below the threshold at which you will consider someone worthy of talking about C++ on the Internet, maybe that's why C++ doesn't have that many beginner-friendly communities around.

          • mattnewport 7 years ago

            I have no threshold for when someone is worthy to talk about C++ on the Internet. What I don't particularly like is people complaining about things they don't seem to have made much effort to understand. I'm happy to answer questions for people who genuinely want to learn about C++ (my top 5% ranking for C++ answers on Stack Overflow attests to that I think).

            There's never been a better time to learn C++ - there's a wealth of free resources online from conference videos to blogs to podcasts like CppCast. C++ standardization is more active than ever and the language is getting better with each update to the standard and there is a real effort to also make it simpler which I believe is mostly succeeding. Implementations have also improved a lot in recent years. It's just the pace of change is such that there is work to do to effectively communicate that fact to people who don't pay much attention to the development of the language.

            I think if you want a long term career in this industry you need to dedicate at least ~5 hours / week of your own time to professional development on an ongoing basis. If you work with C++ on a daily basis then spending some of that time working through CppCon videos (I like to watch at 1.25x speed) and other free online resources is a good idea IMO. I'd also advocate writing a bunch of small programs from scratch to explore unfamiliar features or libraries. Working through programming interview type problems can be good practice, or just try things out to satisfy your own curiosity. I'd say getting in the habit of writing a small program to experiment with something new was the biggest factor in increasing my comfort level and understanding of the language.

            • arandr0x 7 years ago

              > I think if you want a long term career in this industry you need to dedicate at least ~5 hours / week of your own time to professional development on an ongoing basis. If you work with C++ on a daily basis then spending some of that time working through CppCon videos (I like to watch at 1.25x speed) and other free online resources is a good idea IMO.

              Then I know what I'm doing tonight. I like the computational geometry aspect much better than the programming language part. The thing with C++ is very often the stuff you don't know sneaks up on you at the worst time when starting on a new library, or trying to reuse code more complex than you're used to. It's strange to think of it as something to study, but 5h/week is something I could do instead of more overtime.

              >What I don't particularly like is people complaining about things they don't seem to have made much effort to understand.

                The main thing I was complaining about was that C++ takes (relative to e.g. Python) a ton of work to understand accurately, but also to explain properly to other people. You must know that from answering on StackOverflow too. (There are features of C++ I don't use because I/my coworkers don't yet know them, and studying will help me with that, but I can't complain about them yet!) StackOverflow is actually great for C++ though, so thank you for answering on it. Usually there's always an answer with a detailed explanation, even for strange code samples. Often it's through reading other people's questions that I can understand how a construct can be useful in practice.

              • mattnewport 7 years ago

                In my experience C++ may seem more complex to understand initially than something like Python but once you invest some time to really understand it then it is much less mysterious / less of a black box than an interpreted or JITed language, particularly when dealing with anything performance sensitive, because nothing is hidden from you under layers of abstraction that persist at runtime yet are relatively opaque to your debugging or profiling tools.

                One of the nice features about C++ IMO is that there is very little "magic" in the standard library. Unlike some languages, all the features of the standard library can be implemented with standard language facilities that are available to you for use in your own code. Some of the more complex areas of the language however are there primarily to support library authors writing very general code and you don't need to fully grok them to be an effective user of the language. Learning about some of them over time however will likely pay dividends for your own code even if you are not writing widely used libraries.

  • dagenix 7 years ago

    I find reading medical "standards" papers onerous and feel like they're written in a way that's deliberately inaccessible. I don't much like the idea of going to Medical Conferences -- even if my company funded it, which maybe they would, I feel like I'd be marginalized for not knowing basic medical techniques, not knowing the new hotness by heart, and generally being a proponent of bleeding with leeches. I just feel like so much of the Medical "standards" work feels like it's led by academics who think the concerns of people that don't know medicine like me are beneath them.

    Is there a way I can "get involved" and does my voice have any value?

    • arandr0x 7 years ago

      You're being flippant, but your paragraph is the reason we have pharmaceutical sales reps, and the reason they are so well paid.

      • jplayer01 7 years ago

        Except the information to inform himself is everywhere. All the CppCon talks are on YouTube. There are tons of great resources for learning modern C++, or learning about what's going on in the C++ world - there are plenty of teachers, among them even Stroustrup. If he isn't interested in putting in anything more than zero effort into defeating his own ignorance, why should anybody else care about him?

        • arandr0x 7 years ago

          I've since taken the advice in the other threads and watched more CppCon talks -- I think my problem is that I used to google new features and find talks on them, and they were features that were too advanced for my understanding, and that made me think every talk was like that. I've watched a few talks now that are both exactly relevant to what I do and completely understandable. So I'm glad that in this specific case ranting on the Internet gave me new motivation to try harder to find talks that were for me. I don't comment on any other website than here, and now I know why. I have never had an in-person conversation about C++ that taught me more than an evening of these talks. I still don't feel I'm ever going to be able to do anything about the times when C++ is frustrating to my workflow, but at least I have more tools to sidestep the frustrating parts. I wanted to give an update so people would know that the advice was actually really good, and that the talks, even though they're an hour long, are very dense in information.

          That said, you (and several of the other posters) are being awfully dismissive of the notion that there are people who work in your language but don't feel confident with it. I may have major imposter syndrome or something, but before this weekend I thought that if I didn't get a talk at first, it meant I was too stupid for all talks. I don't have mentors at my company because most senior developers have subjects of expertise other than C++, which is just a tool to us. My city has several Javascript meetups but nothing on C++. And I think the response to my saying, essentially, "I use C++ every single day but, relative to the C++ community, I don't think I'm one of them" (being told "you're not good enough/disciplined enough/confident enough to be one of us") shows that.

          Most of programming is slowly waking up to the idea that being exclusive and giving people tough love instead of guidance is starving companies out of potentially good developers. I know my coworkers, and most of them would give up on a learning resource that they felt was too advanced, and would be turned off from any effort if all anyone was telling them was "try harder". (Even though in this case trying harder is 100% the solution.)

          In most other fields, including medicine, there are high-EQ people hired to fix exactly that, by bridging the gap between where the practitioners currently are, and where the industry wants them to be, usually in a way that's either very high touch or very structured. And in those other fields, if most practitioners aren't reached by the information, or are consistently misapplying it, the industry guilds put the blame on the way their information is being transmitted, not on the practitioners.

  • doctorRetro 7 years ago

    Upvoted. And I just want to say I agree completely. Your comments re: "not knowing the new hotness" and "led by academics who think the concerns of working programmers like me are beneath them" really strike a chord with me. I think this is a problem in all of programming these days and it has really soured me on the industry.

  • JdeBP 7 years ago

    I was in an IST 5/-/21 committee meeting last month, and sat next to committee members who were talking about the same things as you are. None of them came from academia.

    The truth is that C++ standardization is not full of "academics", and people involved voice these same concerns.

    One of my fellow committee members, Guy Davidson, is in the games industry, and the subject of the 2D graphics proposal has been a regular topic at our meetings.

    * http://cppcast.com/2018/07/guy-davidson/

    * http://jdebp.uk./FGA/cxxpanel.html

    * http://cxxpanel.org.uk/

    • arandr0x 7 years ago

      I'm not a fan of the 2D graphics proposal, some for reasons specific to it (I think it's too much like the HTML5 Canvas API), some for more philosophical reasons, like 2D graphics being a big subject that may not be possible to standardize in a way that pleases enough people. It's true that the existence of the proposal demonstrates the committee has realized graphics programmers have needs! The discussion around it also does a lot to put the kind of problems we have to work with on the map.

      Anyway, in my OP: I read the blog post, and at the end he was telling people who have any industry-specific concern with the way they use C++, and the way C++ is changing, to get involved. I was just saying that, as a C++ end-user, even one who cares enough to write this, there is no readily accessible way to do that, because I don't work with anyone who's already on the committee, nor at Microsoft. So it feels a bit like an empty rebuttal on his side. Writing this here is basically the longest discussion I've ever had in my whole career with people who really know deep things about C++ and the process it's made by. I'm glad for it, but I'm clearly not the demographic the original blogger was addressing! Even though I definitely feel those same issues he outlines.

  • Clanan 7 years ago

    Off-topic but can you recommend any resources off-hand for improving in your preferred C++? I share your approach but have a heck of a time finding quality books and such. (Email in profile if preferred.)

    • arandr0x 7 years ago

      I enjoy Yossi Kreinin's blog [1] and The Old New Thing [2]. I don't read programming language books, which I usually find to have little good industry knowledge and to be too detail oriented and easily out of date. Although K&R (the one about C) is a great book that's exactly the right size and the examples from the Gang of Four book are actually very on point.

      Most often I learn code by reading code. OpenSceneGraph is an example of an open source project that has modern-ish C++ but no deliberately abstract misdirection. Open source usually does not have super clean code, but it's a good way to sample the variety of structures you can find in a project. Of course the classics (sqlite and the Linux kernel) are much too C-like for what I'm getting at, but they are still full of lessons about how to organize modules and APIs, how many arguments to pass and where, where to park I/O code, that sort of thing.

      I posted it here for the benefit of HN readers who may have the same question, but I'll write you an email too so you can share what you want to improve and such. There aren't many C++ programmers left in the HN-reading, not-just-punching-a-clock demographic, and I kind of miss talking to people who get that. Like the prototypical game dev in the article, I go to one conference a year, and it's not about C++.

      [1]: https://yosefk.com/blog/

      [2]: https://blogs.msdn.microsoft.com/oldnewthing/

  • summerlight 7 years ago

    It's nice to hear this kind of concern; don't be afraid! After all, C++ committee members are just people like you, hopefully nicer than you'd expect.

    > I find reading C++ "standards" papers onerous and feel like they're written in a way that's deliberately inaccessible.

    Sadly, it's somewhat true; standard wording is typically not readable for laypeople because its primary purpose is unambiguous specification, not education. Generally, I find that it's a good idea to avoid "wording for ~~" proposals if I haven't followed that specific line of proposals from the beginning.

    But many proposals are still fairly readable technical papers; for instance, Herb Sutter's proposals are generally easy to follow. (ex. https://wg21.link/P0709)

    > I feel like I'd be marginalized for not using template metaprogramming, not knowing the new hotness by heart, and generally being a proponent of C-with-classes.

    https://www.youtube.com/watch?v=rX0ItVEVjHc

    Don't worry. Mike Acton is not known to be a strong proponent of "Modern C++", but his session[1] is one of the most popular CppCon videos on YouTube. Even if you don't like templates, people will generally respect you.

    > I just feel like so much of the C++ "standards" work feels like it's led by academics who think the concerns of working programmers like me are beneath them.

    http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/n479...

    You can find the list of the participants for the last meeting. Most of them are just engineers, and even Bjarne is now working for Morgan Stanley (I think most of the "designed by an Ivory Tower" concerns can generally be credited to Bjarne having been a professor before). They're just writing C++ code as their daily job like you (and very likely suffering from C++ as well). That's why they're writing proposals to improve the language.

    Some tangential story: with a few exceptions, PL academics are generally not working on languages like C++ because it typically doesn't align well with their interests. Usually they tend to use more elegant, academia-friendly languages like Haskell or ML. Or even fancier languages like Coq, Agda, or Idris, depending on the topic. Or they design their own languages. For formal verification research, maybe C or Java. But C++ is typically considered a complex, inelegant beast for research.

deng 7 years ago

So, the solution to bad debug performance is essentially YAGNI? I'm afraid that isn't a very convincing argument. If your code is several orders of magnitude slower in debug mode, then this is a problem. Simply downplaying this with arguments like "single-step debugging is a last resort" or "just write better tests" won't make this problem vanish. Just like exploding compile times are not solved with "just buy Incredibuild".

But his argument fits well with C++'s history of finding exceedingly complex solutions for simple problems. Want to have efficient matrix calculation? Well, who needs native support for matrices when you can do the same with expression templates and static polymorphism/CRTP (see: Eigen library).

The last section of the article says you either do nothing or you get involved. I'm afraid it is missing the obvious third option: switch to another language which actually supports your use case.

  • neutronicus 7 years ago

    > But his argument fits well with C++'s history of finding exceedingly complex solutions for simple problems. Want to have efficient matrix calculation? Well, who needs native support for matrices when you can do the same with expression templates and static polymorphism/CRTP (see: Eigen library).

    I have to defend C++ here - "native matrices" is under-specified. In practice, "Matrix" is one of the leakiest abstractions in programming and you have to care about representation and choice of algorithm pretty much from the get-go, and IMO C++ is actually the best available option for managing that complexity, especially when you're solving large systems in parallel (and it's worth pointing out that one of the front-running open-source libs in this space is written in C++[1]).

    [1] https://github.com/trilinos/Trilinos

    • deng 7 years ago

      For many projects you won't need that complexity. You just want to directly map basic matrix operations to the usual BLAS/LAPACK calls. Fortran 90+ does that job well, for instance, and performance will usually be better than a C++ library (yes, I've tested against Eigen and Armadillo, although that was years ago). Combine that with the enormous compile times and the absolutely ridiculous error messages for even the simplest syntax errors, and I never looked back. Fortran may have a bad rep, but the newer iterations are actually pretty good for that kind of stuff.
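
      For instance, "directly mapping" just means something like this (a sketch in C++ via CBLAS; assumes an OpenBLAS-style cblas.h is available):

        #include <cblas.h>

        // C = A * B for row-major double matrices, mapped straight to dgemm.
        void matmul(const double* A, const double* B, double* C,
                    int m, int k, int n) {
            cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                        m, n, k,
                        1.0, A, k,   // alpha, A, lda
                        B, n,        // B, ldb
                        0.0, C, n);  // beta, C, ldc
        }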

      • curlypaul924 7 years ago

        I'm confused. Are you suggesting that Fortran is a good environment for game development?

  • vlovich123 7 years ago

    I'm curious if this comment stays true once you build debug with -Og since that's an actual optimization level now that ensures the code remains debuggable while still applying a meaningful number of optimizations. Most people turn off all optimizations in debug builds but that's silly for 99% of problems unless you're tracking down a potential compiler bug.

    • pm215 7 years ago

      QEMU briefly used -Og for its debug build setting, but switched back to -O0, because in practice using -Og results in a lot more situations where gdb just says "<optimised out>" rather than being able to tell you the values of variables, arguments in stack backtraces, and so on. If -Og really was "optimize where possible without breaking the debug illusion", that would be great, but in my experience it absolutely was not, and now I'm pretty wary of going back and trying it again when -O0 works just fine for me for debug...

      • vlovich123 7 years ago

        What version of GCC/GDB were they using? I've been using GCC/GDB 7 for embedded development and haven't really seen a problem with it. It's also almost a hard requirement, because otherwise the generated code is much larger and much slower, which impacts the runtime behaviour to a significant extent.

        • pm215 7 years ago

          This would have been gcc 5.4.0, as shipped by Ubuntu 16.04. Certainly it's possible that newer gcc do better, but from my point of view if they started out with something that breaks the debug illusion it indicates that their definition of the feature is wildly different from mine. Once bitten, twice shy.

  • cheez 7 years ago

    For what it's worth, I only use a debugger when I have a crash.

jayd16 7 years ago

I don't want to weigh in on the rest of the content but the characterization of the game industry is pretty accurate in my experience.

I would expand more on the first bullet point of why game devs don't test. Tests are anti-agile and game development is extremely agile. Usually you don't know what kind of game you're making until you're done.

  • oselhn 7 years ago

    That's not true. From my experience, unit tests are great for agile. They allow you to create "trusted" modules which you can move around and rework much more easily (you can also treat your tests as executable documentation). Without tests you can't safely make any change to existing code, especially if you are changing code written by someone else. You have to risk it and then spend considerable time in the debugger if it breaks something unrelated to your feature.

    • coldtea 7 years ago

      >That's not true. From my experience unit tests are great for agile.

      Not when you're constantly prototyping, which is what game dev essentially is for the most part of the process...

      • lifthrasiir 7 years ago

        > which is what game dev essentially is for the most part of the process...

        Except that it isn't. You don't maintain the same velocity of changes throughout the process, even for indie games. And there are always portions of the game amenable to tests.

        • jayd16 7 years ago

          It's not like tests are impossible. It's more that TDD is almost impossible.

        • coldtea 7 years ago

          Changes until the last minute to crucial gameplay elements are not uncommon.

        • omg_ketchup 7 years ago

          Sure, menus and networking. Maybe loot distribution, level generation, etc.

          Not so much actual gameplay though. That does constantly get tweaked.

  • tokyodude 7 years ago

    I'm not sure what most teams do nowadays, but I went to GDC in Köln and saw the Croteam talk. Over 10 or so years, by programmers just adding a little here and there as it occurred to them, they had built a pretty cool testing system.

    First they had made it so that if someone was playing the game and saw a bug, they could press the "file a bug" key, type in a description, and the game would save out enough info to bring someone back to that point in the game, same camera, and possibly other state. From the bug database they could click a link that would launch the game back into that state, let someone verify the fix, and mark it as fixed.

    They also had a waypoint system for bots to play through the puzzles (this was The Talos Principle they were talking about). If the bots ever got stuck, as in didn't make it to the next waypoint within some time limit, the bots would file a bug using the system above.

    https://www.gdcvault.com/play/1022784/Fast-Iteration-Tools-i...

    As another interesting idea, apparently the creator of Thumper built a URL system: people could press a button which would copy a URL to the clipboard, then paste that URL into Slack (or email/chat/etc.) to launch the game in a particular state and pass it to other users on the team.

    • corysama 7 years ago

      Similarly, on a game I worked on we had a “controller monkey” that would spam all possible player actions (move, jump, attack, special powers) rapidly and randomly. We then had the testers record multiple paths exploring through each level. Every few seconds the monkey-controlled player character would be teleported a bit further down the path to ensure progress and coverage. Dozens of people would set the monkey to run all night when they went home for the day. In the morning we would have a dozen fresh crash dumps.
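
      In spirit it was something like this (a hypothetical reconstruction, not our actual code; Action, Vec3, and the path handling are stand-ins):

        #include <cstddef>
        #include <cstdlib>
        #include <vector>

        enum class Action { Move, Jump, Attack, Special, Count };
        struct Vec3 { float x, y, z; };

        // Each frame: pick a random action to spam. Every ~5 seconds at
        // 60fps, teleport the player to the next recorded waypoint so the
        // run keeps making progress through the level.
        Action monkey_frame(int frame, std::size_t& waypoint,
                            const std::vector<Vec3>& path, Vec3& player_pos) {
            if (frame % 300 == 0 && waypoint + 1 < path.size())
                player_pos = path[++waypoint];
            return static_cast<Action>(std::rand() % static_cast<int>(Action::Count));
        }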

      • pfranz 7 years ago

        I feel like when people just say "tests" there's a lot of conflating. Usually, on a site like this or on a blog post, "tests" are referring to automated unit-testing or talking about TDD...which is probably most rare and most fought against in game dev (from what I've seen from an adjacent industry).

        Testing, in general, is pretty essential to writing code that does what you want. Test code is just automating what you'd be doing manually and it's a lot faster to have the code do it than for me to do it 10 times. Even if that's printing out a value or showing it in a debugger. Testing frameworks or libraries to fuzz out problems and harden code are quite common. Game dev often has a lot of manual play testers. Most engines and dev consoles have a lot of tools to either just record the screen or save state when problems are seen.

        Heck, most "cheat codes" were added to jump around the game to test for bugs. That's technically "test code."

        • tokyodude 7 years ago

          > Testing, in general, is pretty essential to writing code that does what you want.

          Shipped 17 games, several of them AAA, with no automated testing. Not saying that was good, only that it happened, and so it's at least some evidence that automated testing is not essential.

          The problem with automated testing in games is most of it will be content-specific, and the content changes multiple times a day. A product like, say, GTA5 has 20-30 programmers and 300 artists and game designers. Those 300 artists and game designers are adding new content and/or changing old content constantly. Often they also use a scripting language or a blueprint-like visual language to set up in-game logic like "open blue door if player has blue key". There's just too much to test. Every day 1000s of tests would have to be re-written because some boxes, doors, etc. were moved 0.5 units to the left.

          There isn't zero value in automated testing (see above), but my experience with testing on a large project, Chrome, was that it really slowed down my velocity. Easily by much more than 50%. Of course Chrome is a platform and absolutely needs the tests IMO. I think a game engine would benefit from lots of automated tests. For the game itself, though, it gets harder to figure out where the balance is between automated testing and manual testing.

          As others pointed out as well, big teams, like an OS team or a browser team, often have dedicated staff to set up and maintain a testing infrastructure. Game teams rarely have this. Maybe they should, but few games are given the budget.

  • honkycat 7 years ago

    I agree with testing being less of a priority in game development than in other industries, but I feel that it's because code quality tends to take a back seat in gamedev.

    Tests are NOT anti-agile, that is just dumb. I feel like a bunch of Hacker News hipsters read an article about TDD 3 years ago and then said "Yep, that's my opinion! Tests are bad." Despite every major software company requiring unit tests for their production code-bases. (Hint: it's because they did the research and found tests beneficial.) Tests enable agility.

    Let's just get this out of the way now: tests are not about catching bugs. Tests are about allowing you to safely refactor your code without breaking previously declared behavior.

    Testing enables you to iterate and refactor code without constantly releasing new regressions. Testing IS code quality. If you lack tests you lack a core piece of code quality.

  • de_watcher 7 years ago

    A sufficiently big game in an established genre with in-house engine and expansions has several levels of automatic tests.

    • pferde 7 years ago

      In my experience, such games only have manual testing, done by the people who bought the game.

      • de_watcher 7 years ago

        My experience is from working on one of the games with automatic tests.

        From the playing experience: yes, there are way more games with internal manual and external community testing.

        • SeanBoocock 7 years ago

          Yeah at a certain level game productions will usually have automatic “smoke” tests for general build stability and I’ve worked on one that had automatic feature tests with replayed input. These were generally useful for catching obvious crashes and regressions, but the overhead only makes sense for a certain level of production. Could also see, and have heard of, more rigorous functional testing of things like a procedural generation pipeline that are otherwise harder to get sufficient coverage of manually.

  • otikik 7 years ago

    I think the reason is that tests have a very obvious up-front cost, while the time they save is distributed in the future, in a non-immediately-obvious way. I still think that they end up saving time, with some exceptions like UI code, which is more easily tested "by hand".

    Game project managers are infamous for not being great planners, so it wouldn't surprise me if they dismissed automated tests as "a waste of time" or "something that we can't do now because we don't have time now" (so we end up wasting more time in the end, having to do death marches, etc.)

    • midnightclubbed 7 years ago

      In defense of game project managers (a phrase I may never have said before): their planning is often fine, but hampered by changes to game design/direction/requirements. Unless you're churning out a copy-cat derivative game, design requirements have to be reasonably loose as you find what does and doesn't 'work'. Finding the fun is not easily managed.

    • blktiger 7 years ago

      Modern games have only increased the benefits a game studio can gain from testing as well. Games are now moving into service territory which only increases the amount of time spent maintaining the game while continuing to add to it.

      • midnightclubbed 7 years ago

        This. Determining when the effort should be applied is the tricky part. Games are still hit-driven and get cancelled/re-purposed during development. You can spend a lot of QA engineering time developing systems to test functionality that never ships (case in point would be Fortnite: the original shipped game did not need to be tested against the current 100-player game instances and huge load, but they could have spent a bunch of time testing AI systems that are no longer any part of the game).

        • blktiger 7 years ago

          I don't disagree that it's important to understand when something is purely a proof of concept vs something that will stick around, in order to evaluate the costs. However, the AI systems are still in Fortnite (and they even used those systems for the Halloween event). The major money-making part of Fortnite has been the battle royale mode, though. If they open up the main game to be free-to-play similar to the battle royale mode, those systems will probably end up being used quite a bit.

    • justinhj 7 years ago

      Or perhaps game projects are extremely difficult to manage? Do you think that if it were just a matter of competence, these companies would put billion-dollar revenue on the line by not hiring the best they can find?

  • rafaelvasco 7 years ago

    I mostly agree. Tests can be useful in some specific games and specific cases, but in general they're much harder to do in gamedev compared to other areas. Maybe one case for gamedev would be to test the calculations of character damage in an RPG based on several factors. But that is an isolated case.

overgard 7 years ago

A few thoughts:

* None of the problems that have been commented on are unique to the games industry at all. Slow debug builds suck for all C++ developers and weird template meta-programming is confusing for practically everyone.

* He makes these broad hand-wavey statements like "individuals don't feel pain from slow compile times", or "big companies can just can throw processor power at it" to which I would say, BS. Fast iteration in C++ is really hard because of the delay and it's a big problem for everyone.

* "Participate more" -- isn't that exactly what people are doing on twitter? Not everyone can go to CppCon.

  • mattnewport 7 years ago

    I think slow debug performance is a bigger problem for games than many other applications. It's annoying for everyone but the nature of games as interactive experiences means you often have to play a game to reproduce a bug easily and often with a full production level where the bug was reported, not with simpler test content. If you can't maintain a playable frame rate in debug builds this can be a problem.

    This problem was worst in my experience in the Xbox 360 / PS3 generation, because the in-order processors handled debug builds very poorly and were different enough from a PC that it was common to have to debug on target rather than on a PC build on a much more powerful development machine. It's less of an issue with current-generation consoles that are basically PCs, as they don't suffer as badly with debug performance and many issues can be debugged on a PC build on a more powerful system. It may be more of an issue for mobile still.

    Fortunately, many of the newer features of C++17 and C++20 help both with improving debug performance and with simplifying / reducing the need for "weird template meta-programming". Several also help with compile times, and modules in particular are quite focused on tackling the biggest root cause of slow compiles in C++.

  • andrewmcwatters 7 years ago

    My question is, where are all these "big companies" who can throw more processor power at these problems? Because frankly, every major company I've been at uses the same commodity or cloud hardware everyone else does, so I just don't see it. It's a moot point.

    Rarely do I see workstation-grade hardware in the wild, and when I have, they're build slaves that are incredibly anti-agile.

    • mattnewport 7 years ago

      EA, I know, gives developers very powerful workstation-class machines; I used to work there and have friends who still do, and if anything it sounds like they've got even more powerful on a relative basis since I left.

youdontknowtho 7 years ago

Whenever the response to a twitter argument is "get involved and make the change you want to see happen" you can expect absolutely nothing to change.

Part of this is just people complaining on a platform that overvalues short, pithy complaints.

pulsarpietro 7 years ago

I am a bit puzzled by this:

"Before about the early 90s, we didn’t trust C compilers, so we wrote in assembly."

There were a lot of games released during the '80s; were they really all written in assembly?

https://www.myabandonware.com/browse/year/

I don't have experience in the game industry at all, I must add.

  • vardump 7 years ago

    Heavier things, like graphics processing or sound mixing, were typically written in assembler.

    Some things, like texture mapping, you could only write in assembler, because you'd need to use the x86 lower/upper halves of a word (like the AL and AH registers) due to register pressure. Spilling to the stack could have caused a 50%+ slowdown.

    In the 486 era you needed assembler to work around quirks like AGI stalls.

    On the Pentium, the reason for assembler was to use the FPU efficiently in parallel with normal code (per-pixel FPU divide for perspective correction). Of course you also needed to carefully hand-optimize for the Pentium U and V pipes. If you did it correctly, you could execute up to 2 instructions per clock. If not, you'd lose up to half of the performance (or even more if you messed up register dependency chains, which were a bit weird sometimes).

    One also needs to remember compilers in the nineties were not very amazing at optimization. You could run circles around them by using assembler.

    Mind you, I still need to write some things in assembler even on modern x86. But it's pretty little nowadays. SIMD stuff (SSE/AVX) you can mostly do in "almost assembler" with instruction intrinsics, but without needing to worry about instruction scheduling and so on.
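
    To give a flavour of that "almost assembler" style, a minimal sketch (assumes n is a multiple of 4):

      #include <immintrin.h>

      // Add two float arrays four lanes at a time with SSE intrinsics.
      // You choose the instructions; the compiler handles register
      // allocation and instruction scheduling.
      void add_arrays(const float* a, const float* b, float* out, int n) {
          for (int i = 0; i < n; i += 4) {
              __m128 va = _mm_loadu_ps(a + i);
              __m128 vb = _mm_loadu_ps(b + i);
              _mm_storeu_ps(out + i, _mm_add_ps(va, vb));
          }
      }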

    • coldtea 7 years ago

      >486 era you needed assembler to work around quirks like AGI stalls.

      Plus, nobody had a 486 in the 80s (it was released in 1989). People would be lucky to have a 286, but usually just some home computer (Apple II, Spectrum, Commodore 64, Atari ST, Amiga 500, Amstrad CPC, etc).

      • vardump 7 years ago

        Oh right, I missed that it was about the eighties.

        Yeah, back then assembler was even more pervasive. It was the only way to write something publishable on 8-bit systems. Well, there were some action games (like C64 Beach Head), trivia games, and adventure games written in BASIC.

        You spent a lot of time on 8-bitters getting code and asset size down, so that it'd even fit on the machine in the first place. Forget about luxuries like division and multiply; the most advanced math those things could do was little more than adding (add, sub, and, or, xor) two 8-bit numbers together. Even shifts and rotates could only handle 1 bit left or right.

        CPU clocks were measured in low single-digit MHz. On top of that, 8-bitters were very inefficient: each instruction would take 2-8 clock cycles (6510) or 4-23 cycles (Z80).

        On PAL C64 you have 19656 clock cycles per 50Hz screen frame minus 25 bad lines. So you could realistically expect to execute only 5-8k instructions. If you used all the frame time for just copying memory, you'd be able to transfer just about 2 kB. Just scrolling character RAM (ignoring color RAM) took half of the available raster time (yes, I know about VSP tricks, but it wasn't known in the eighties).

        16-bit systems allowed some C, but most Amiga and Atari ST games were written in assembler. I'd guess the same is true for the 286 era, but I'm not sure.

        • aerique 7 years ago

          I'm not sure this is correct, but I remember reading or hearing rumours that Psygnosis' Barbarian (for the Amiga and Atari ST) was written in C and that it was so slow because of that.

          What I mean is: we perceived it as being slow because of the rumours. Just typical nerd attitude that we still see now.

          • vardump 7 years ago

            Sure, some games were written completely in C. I believe a significant number were hybrids: performance-sensitive parts in assembler and other code in C.

            Some games were prototyped in C and optimized afterwards.

            For example, Amiga Turrican 2 required a 33 MHz 68030 CPU during the development phase. Of course, the final version ran fine on a 7 MHz 68000 Amiga.

    • CoolGuySteve 7 years ago

      Last time I used SSE intrinsics, which was GCC 4.9 I think, I had a lot of trouble with register usage. It looked like it was compiling down to use only one SSE register for everything instead of parallelizing across them.

      I tried the same algorithm in godbolt with some clang versions and it was slightly better, using two or three registers, but not by much. So I had to break it into inline assembly.

      I wonder if GCC has improved since then.

      • vardump 7 years ago

        > It looked like it was compiling down to use only one SSE register for everything instead of parralelizing across them.

        Yeah, that's a common problem and leads to nasty dependency stalls. MSVC is horrible in the same way, at least 2015. Haven't tried newer versions yet. Intel's ICC seems to generate good code most of the time.

      • exDM69 7 years ago

        > I wonder if GCC has improved since then.

        Yes, it has. I've written a lot of SIMD code and spent a good amount of time reading the compiler assembly output and there has been huge improvement over the last decade.

        GCC register allocation wasn't great, then it got better with x86 SSE but still sucked at ARM NEON, and now it seems to be decent with both.

        Clang was better at SIMD code before GCC was. It was equally good with SSE and NEON.

        In my experience, compilers are much better than humans at instruction scheduling. Especially when using portable vector extensions, you don't have to write the same code twice and then tweak the scheduling for every architecture separately.

        • vardump 7 years ago

          > In my experience, compilers are much better than humans at instruction scheduling.

          It'd be more accurate to say they're much better than humans when the heuristics (or whatever they use) work. Sometimes the compiler messes up badly.

          The workflow is often to compile and then examine disassembly to see whether the compiler managed to generate something sensible or not.

          Another issue is that the compiler's pattern matching sometimes fails and doesn't generate the right SIMD instruction, even when the data is SIMD-width aligned. For example, recently I saw ICC not generating a horizontal add in the most basic scenario imaginable. *shrug*
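
          For reference, the explicit form I expected it to emit, written out with SSE3 intrinsics (a sketch):

            #include <immintrin.h>

            // Horizontal add: sum the four float lanes of one SSE register.
            float hsum(__m128 v) {
                __m128 h = _mm_hadd_ps(v, v);  // (v0+v1, v2+v3, v0+v1, v2+v3)
                h = _mm_hadd_ps(h, h);         // total in every lane
                return _mm_cvtss_f32(h);       // extract lane 0
            }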

          • AnIdiotOnTheNet 7 years ago

            Things like this make me question the wisdom of ever using higher-level languages. We took the path of abstracting our description of what we want to happen away from processor instructions, with the idea that we could write code that would then compile on multiple architectures without changes. But the reality is that we still often need to special-case things even without performance considerations, and the farther we abstract, the more performance seems to be impacted and the more often we seem to end up jumping through abstraction hoops rather than getting things done.

            The minimalist in me wonders if maybe just using some kind of macro system on top of assembler plus a bytecode VM with the ability to drop to native instructions wouldn't ultimately be better.

    • pulsarpietro 7 years ago

      It must have been bloody hard and interesting work, back then.

      • vardump 7 years ago

        Things are much harder nowadays due to complexity: from almost-impossible-to-understand CPU cores, to massive amounts of third-party code, to modern requirements (IoT, ouch!).

        Debugging a predictable single-thread, single-core system was also child's play compared to distributed, networked beasts each running on lots of cores and thousands of threads.

        Nineties problems were contained in a small box. Oh, and there was no internet like today, so you needed to order books and magazines, and use BBSes and Usenet. Even then, a lot of it was reinventing the wheel again and again.

        Modern problems are sometimes nearly uncontained (think software like web browsers, etc.).

        • enjo 7 years ago

          Just jumping in to agree here. I've always thought of it this way:

          In the 90's the code I wrote was more difficult. Today coding is much easier (better languages, tooling, etc.). However, the systems I build are many times more complex.

          In the 90's we hired the best programmers; today we hire the people who are best at managing ambiguity and complexity.

          All of this is very hand-wavy as there are certainly still disciplines where pure programming skill is most important, but those seem to be fewer and fewer every day.

        • pulsarpietro 7 years ago

          Anyway, I envy you guys; now things tend to be too high-level and you lose that holistic view of the computing machine.

          Back then you felt pretty skilled, I'm sure; now I often feel anybody could do my job. Unless you are working for the big 4 or similar, many jobs don't give you that excitement.

      • richardjdare 7 years ago

        Back then assembly language was much more ergonomic than it is now, and the machines were simpler. Many of my favourite games in the 80s were made by teenagers programming in their bedrooms. Check out this 68000 assembly tutorial video from Scoopex, the Amiga demo scene group: https://youtu.be/bqT1jsPyUGw

        He gets a simple graphical effect going on the Amiga in only a few lines of assembly. Doing the same thing using DirectX in c++ would take you all day!

        • pjc50 7 years ago

          > much more ergonomic than it is now,

          This is an interesting line of argument - in what way, and what could be done to improve the ergonomics?

          > He gets a simple graphical effect going on the Amiga in only a few lines of assembly. Doing the same thing using DirectX in c++ would take you all day!

          This is absolutely true, but in something like ShaderToy you can go back to producing complex pixel-bashing effects with a huge amount of processing power.

          It's just that the external Tower of Babel from boot to usability has got a lot larger.

          • richardjdare 7 years ago

            My notion of ergonomic mostly comes from programming the Amiga in 68000 and then moving to the PC and being horrified by x86!

            In 68k you had 8 32-bit data registers (d0-d7) and 8 address registers (a0-a7).

            If you wanted to access bytes or 16 or 32 bits you could do so like this:

              move.w #123,d0          ; move 16-bit number into d0
              move.b #123,d1          ; move byte into d1

              move.l #SOME_ADDRESS,a0 ; set address reg a0 to point to a memory location
              move.b d1,(a0)          ; move contents of d1 to the location a0 points to
            
            Nice and easy to work with and remember.

            On x86, thanks to its long and convoluted history, you have all kinds of doubled-up registers that you have to refer to by different names depending on what you are doing, and tons of historical cruft.

            Beyond the CPU, the old home computers had no historical cruft, and it was very easy to talk to the hardware or system firmware; usually you'd just be getting and setting data at fixed memory locations. I can read an Amiga mouse click in one line of 68k; I've no idea how you'd do it on a modern PC, or even in Java! Modern systems just aren't as integrated, for better and worse.

            Assembly language was also part of mainstream programming back then. You'd learn BASIC, then go straight to assembly if you wanted to do anything serious. So there were computer magazine articles on assembly, and children's books [1]. My first assembler, Devpac, came from a magazine coverdisk with a tutorial from Bullfrog, Peter Molyneux's old game company [2].

            So there were a whole range of cultural and technical reasons for assembly language being much more of a human-useable technology back in the day.

            >It's just that the external Tower of Babel from boot to usability has got a lot larger.

            Yes, I agree. I kinda miss being able to see the ground, which is probably why I find retro programming so appealing.

            [1] https://archive.org/details/machine-code-for-beginners [2] https://archive.org/details/amigaformat39/page/n61

          • MrRadar 7 years ago

            > This is an interesting line of argument - in what way, and what could be done to improve the ergonomics?

            Due to the enormous complexity of modern CPUs I'm not sure there's anything that could be done. With the 486 and contemporary (and earlier) uarches you could largely expect the CPU to execute exactly what you wrote so understanding the performance impact of any given bit of assembly was pretty straightforward. Then CPUs started adding features like superscalar, speculative, and out-of-order execution, branch prediction, deep pipelines, register renaming, and multi-level caching that massively complicate modeling the performance of any given code.

            For example you may need to explicitly clear an architectural register before reusing it for a new calculation to avoid creating a false dependency in the uarch which would prevent the CPU from executing the calculations in parallel. Knowing when this is necessary can be hard and the rules are usually different between different uarches, even within the same uarch family.

            Good assembly programmers who are aware of all this complexity can still beat compilers, but they certainly can't do that at the scale of code that compilers routinely generate. Thankfully compilers are generally "good enough" these days, and assembly only needs to be hand-written for the very hot inner loops of performance-critical code, or for cryptographic code where the exact performance characteristics could leak information if they're not handled correctly.

        • vardump 7 years ago

          > Back then assembly language was much more ergonomic than it is now, and the machines were simpler.

          In a way that's true. I also enjoyed writing assembler back then, especially on 68k.

          However, to really extract the last cycle you often ended up generating a block of code at runtime (a kind of very basic JIT).

          Sometimes self-modifying code provided the final oomph to get something running fast enough. The reasons varied: sometimes it was running out of registers, sometimes dynamically changing the performed operation without a branch.

          Too bad that all breaks down on later CPUs with instruction prefetching and caching...

        • slezyr 7 years ago

          You should use SDL for this, not DirectX. You don't need a complex GPU API to paint a line if you're concerned about complexity.
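
          For scale, the SDL2 version of "open a window and paint a line" is on the order of twenty lines. A minimal sketch (error checking omitted, so treat it as an illustration rather than production code):

            #include <SDL.h>

            int main(int, char**) {
                SDL_Init(SDL_INIT_VIDEO);
                SDL_Window* win = SDL_CreateWindow("line",
                    SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 640, 480, 0);
                SDL_Renderer* ren = SDL_CreateRenderer(win, -1, 0);

                bool running = true;
                while (running) {
                    SDL_Event e;
                    while (SDL_PollEvent(&e))
                        if (e.type == SDL_QUIT) running = false;
                    SDL_SetRenderDrawColor(ren, 0, 0, 0, 255);   // clear to black
                    SDL_RenderClear(ren);
                    SDL_SetRenderDrawColor(ren, 255, 255, 255, 255);
                    SDL_RenderDrawLine(ren, 0, 0, 639, 479);     // the "effect"
                    SDL_RenderPresent(ren);
                }
                SDL_DestroyRenderer(ren);
                SDL_DestroyWindow(win);
                SDL_Quit();
                return 0;
            }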

          • zrobotics 7 years ago

            Have you looked through the SDL source, though? Sure, I can get a window open and paint lines very easily, I only have to write ~50-150 LOC. However, I've silently added thousands of lines (and at least one dll) to my project. The library only hides some of the complexity (which I love about it), but the complexity still exists.

          • AnIdiotOnTheNet 7 years ago

            That's just piling on even more abstraction! You may as well use a hex editor and create a single line BMP, then "run" it with any image viewer.

            • slezyr 7 years ago

              "more abstraction"? It's just one to be cross platform, you can't use OpenGL without it.

              No need to go full shit ape, and creating 3D images in BMP is an excellent exercise you should try:

              https://github.com/ssloy/tinyrenderer
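
              And if you want the "no libraries at all" flavour, the image-file-by-hand exercise fits on a page. A minimal 24-bit uncompressed BMP writer (a sketch of the standard header layout; the helper names are mine, not from the linked repo):

                #include <cstdint>
                #include <cstdio>

                // Write v to f as 'bytes' little-endian bytes.
                static void put(std::FILE* f, std::uint32_t v, int bytes) {
                    for (int i = 0; i < bytes; ++i)
                        std::fputc((v >> (8 * i)) & 0xFF, f);
                }

                // rgb holds w*h pixels, top-down, 3 bytes each (R, G, B).
                void write_bmp(const char* path, const std::uint8_t* rgb,
                               int w, int h) {
                    const int pad = (4 - (w * 3) % 4) % 4;  // rows align to 4
                    const std::uint32_t data = (w * 3 + pad) * h;
                    std::FILE* f = std::fopen(path, "wb");
                    std::fputc('B', f); std::fputc('M', f); // file header, 14 bytes
                    put(f, 14 + 40 + data, 4);              // total file size
                    put(f, 0, 4);                           // reserved
                    put(f, 14 + 40, 4);                     // offset to pixel data
                    put(f, 40, 4);                          // BITMAPINFOHEADER size
                    put(f, w, 4); put(f, h, 4);             // +height = bottom-up
                    put(f, 1, 2); put(f, 24, 2);            // planes, bits per pixel
                    put(f, 0, 4); put(f, data, 4);          // BI_RGB, image size
                    put(f, 0, 4); put(f, 0, 4);             // x/y pixels per meter
                    put(f, 0, 4); put(f, 0, 4);             // palette counts
                    for (int y = h - 1; y >= 0; --y) {      // bottom row first
                        for (int x = 0; x < w; ++x) {
                            const std::uint8_t* px = rgb + 3 * (y * w + x);
                            std::fputc(px[2], f);           // BMP wants B, G, R
                            std::fputc(px[1], f);
                            std::fputc(px[0], f);
                        }
                        for (int i = 0; i < pad; ++i) std::fputc(0, f);
                    }
                    std::fclose(f);
                }

              Fill a buffer from your renderer, call write_bmp, and any image viewer becomes your display.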

              • AnIdiotOnTheNet 7 years ago

                My point is, if you're allowing arbitrarily deep abstraction you can make almost any task trivial by just using something that basically already does what you're trying to do. The point the parent was making was how, with almost no abstraction at all, you could do graphics on an Amiga in a few lines, but to do the equivalent on a modern system at the lowest reasonable level of abstraction takes a lot more effort. Comparing that to a giant abstraction layer is missing the point.

                > you can't use OpenGL without it.

                Yes you can. And you can't pretend that coding once against SDL will absolve you of having to deal with platform issues; it just helps a lot.

                > and creation of 3D images in bmp is excellent exercise you should try.

                I'm a hobbyist game dev who almost exclusively uses software rendering (albeit to a framebuffer that gets pasted onto the screen with OpenGL as the path of least resistance). I've also written image libraries. None of this has anything to do with the parent comment.

  • simonh 7 years ago

    I can confirm the trust issue with C compilers. I did my CS degree in the 80s and every C compiler I used had severe bugs. These weren't on Unix, to be fair; they were for PCs or for programming embedded systems, but they were awful. I remember one (Aztec C?) crashing out because one of my statements had two lines of whitespace before it instead of one (or maybe it was an extra space in a blank line? I forget), which was fine elsewhere in the program but just caused a problem in that particular context. I first used GCC in the 90s and it was heaven.

    • mark-r 7 years ago

      In the '80s I used a cross-compiler that ran on DOS and produced code for 68K. We probably ran into a compiler bug monthly on average. We always got great turnaround on our bug reports though - I think it was a one-man shop.

  • flohofwoe 7 years ago

    8-bit CPUs' instruction sets were really aimed at assembly programming rather than at being compile targets for "high-level languages" like C, and coding tricks were required (not optional) that are not possible in C, like self-modifying code or exact cycle-timing of instruction sequences (e.g. to fit into a video scanline). Some of the more obscure languages were not that bad (e.g. Forth), but most game programming on 8-bit machines was definitely done in carefully hand-crafted assembly.

    This only changed slowly with 16-bit machines like the Amiga or Atari ST: they had more memory, the Motorola 68000 instruction set was better suited to compiled languages, and the custom chips (like the copper and blitter) freed the CPU from many graphics tasks. Yet even on those machines the critical parts were usually written in assembly.

  • coldtea 7 years ago

    >There were a lot of games released during the 80's, were they really all written in assembly?

    Most of them, yes. Here's a well known game:

    http://fabiensanglard.net/prince_of_persia/index.php

  • pavlov 7 years ago

    Anything with realtime graphics was probably written in assembly. There were game companies whose entire business was porting games between platforms, essentially rewriting the code for each machine.

    Slower-moving adventure and RPG games might be in a higher-level language. (IIRC the original Wizardry was written in a VM-based Pascal for Apple II?)

    Companies that specialized in adventure games would have their own interpreter and VM — Infocom's ZIL, Sierra's AGI, Lucasfilm's SCUMM. Game developers would write code in a scripting language against that company-standard VM.

    Amateur games might be written in BASIC because every computer under the sun shipped with a BASIC interpreter back then.

    C wasn't a practical option because decent compilers didn't exist for most non-Unix systems until the end of the 1980s — or if they did, they'd cost an arm and a leg. (I think the retail price for Microsoft's C compiler for DOS was several thousand dollars.)

  • georgeecollins 7 years ago

    In the mid 90's teams I led wrote code in assembly for things that were very performance intensive, in particular 3D rendering without 3D cards. There was also a need to write in assembly for the PS2, which had a very complicated and idiosyncratic architecture. In both cases you were really desperate for every bit of performance you could get and you didn't necessarily trust compilers to make optimal code. Since then compilers have improved and CPUs are much smarter about predicting what memory they will need.

    Also, Microsoft Visual C++ became good, but not before version 4. I remember watching a team at Activision literally take more than an hour to compile their game. Perhaps they were doing something wrong, but similar teams had much less of a problem once 4.0 came out. You cannot imagine what a drag that is on a team's productivity and creativity.

  • gaius 7 years ago

    With a good macro assembler like DevPac on the ST or Amiga, writing assembly was actually not that far off a contemporary high level language. It helped that the 68k had a very programmer-friendly instruction set.

  • codemusings 7 years ago

    I can't recall if I read it or whether it was mentioned in that mocap video diary on YouTube, but Prince of Persia is an example that comes to mind.

    EDIT: Found it. Of course Fabien wrote about it: http://fabiensanglard.net/prince_of_persia/

  • sehugg 7 years ago

    They weren't all in assembly. Lots of games were published in BASIC or Pascal, or even for custom interpreters like Z-Code. But most arcade games and cartridge titles were coded in assembly back then.

CoolGuySteve 7 years ago

Is Visual Studio really the best debugger?

Every time I use it I get really frustrated by the difficulty of entering complex instructions. The GUI is more discoverable, but I find myself missing gdb's functions and parser.

However, one thing in gdb that’s become steadily worse is the ability to evaluate STL’s operator[] and the like in optimized code, with the debugger frequently whining about inlining. It’s pretty horrible having to decipher the _m_data or whatever of various implementations.

I’m actually not sure if gcc is not compiling the inlines into the object code (I thought it was required by the standard) or if gdb just can’t find them.

  • berkut 7 years ago

    Unfortunately, yes (and I'm a very strong Linux proponent these days), in terms of what it can display and work out automatically for locals and watches. It's the one thing I miss from Windows (and I haven't really used it in 10 years).

    I'm also of the opinion that GDB is getting worse in terms of what it shows (or more often won't show) these days, especially with regard to C++11 and later. Maybe it's just out-of-date Python pretty-printers, but on multiple recent Linux distro machines it won't even show the contents of std::string without diving inside the structure or using expressions.

  • pjmlp 7 years ago

    Yes, given the graphical tooling for multi-core, GPGPU, data visualization, edit-and-continue, mixing assembly with source (even on .NET), interaction with GUI components in WPF/UWP apps, ...

    • CoolGuySteve 7 years ago

      That kind of gets to the heart of what I’m saying. These graphical tools are great as long as they do what you need. But they are less composable and customizable than an expression parser.

      Like in the article: he mentions not being able to see custom data types, but my .gdbinit has a few pretty-printers in it for exactly that purpose.

      And when you do get something customized in MSVC, like a specific PGO build, it tends to be tightly coupled to that project. It's less easy to cut and paste into another project, since the primary interface is really a dozen little text fields modifying XML somewhere.

      • pjmlp 7 years ago

        VS supports displaying custom types.

        Regarding gdb, during the mid-90's I got by calling it from XEmacs, until I discovered DDD.

        I got spoiled by Borland debuggers; typing n, s, l, p all the time and drawing structures on paper gets tiring after a while.

      • coffeeaddicted 7 years ago

        You can evaluate expressions inside the "QuickWatch" dialog in VS.

    • jcelerier 7 years ago

      At least for the Qt ecosystem, apps like Heaptrack, Hotspot, Gammaray are in my opinion miles better than what VS provides.

  • uglycoyote 7 years ago

    I can't speak to the comparison with gdb, but as a game developer I have had the misfortune of working with a lot of second rate debuggers that were the only option because they were part of the toolset for that particular game console. Most of these are pieces of software that the console maker would contract out to a smaller company to build custom and provide as part of the development kit, and they would be riddled with bugs, poorly supported, often poorly translated from Japanese (I recently worked with a debugger which had "Go" and "Come" as a couple of the primary menu items, I think those were supposed to mean start and stop), and lacking in important features.

    Basically it was such a pain to debug on the actual game console platform that we made a PC build of the game just so that we could have the pleasure of debugging in Visual Studio and avoid having to debug on the console, even though we were not planning to ship on PC. Maintaining a separate PC rendering engine and other platform-specific libraries came at a fair expense, but it was always worth it in terms of being able to solve problems faster. We had other motivations as well: on some game platforms the tools for linking and building an executable "ROM" could be quite slow, so having a PC build saved quite a bit of turnaround time.

    I have always had a very high opinion of Visual Studio's debugger because it has a few features that are invaluable that other debuggers lack:

    - "Immediate Mode", a.k.a., the ability to run C++ functions while stopped at a breakpoint. We use this a lot for writing functions which log out a bunch of useful information which would otherwise be tricky or time consuming to find via the debugger's usual interface (watch window, or whatever). In particular, we encode all our strings in the game as 32-byte hashes but in debug modes we keep a u32-->string lookup table around, and calling functions in the immediate window is invaluable for getting identifying information from out of those string-hashes.

    - Data Breakpoints, i.e. stop when a particular memory address is written to. Critical for debugging "memory stomp" bugs such as array-overflow issues or writes to dangling pointers.

    - Custom "visualization". Visual Studio's debugger has an XML language called .natvis which allows you to define a custom way to pretty-print specific types. Basically the equivalent of defining a custom "ToString" in a language like C#. The .natvis language is kind of annoying to use, but without it certain data types would be a real pain to look at in the debugger. (e.g. rather than using pointers everywhere, we have a safer Handle type which is essentially a lookup into a table that has a pointer. These indirections would be a pain to follow manually in the debugger, but .natvis allows us to present the target object in the debugger automatically, so that it is as easy as following a normal pointer)

    - Automation. There's a fairly rich automation API ("ENVDTE") which allows you to drive the debugger from outside tools. I'm currently using this with Python to provide a convenient way for non-technical people (e.g. QA testers) to bundle up and send all of the relevant details from a crash to other members of the team. (e.g. callstack with all the locals, and contents of log)
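
    That immediate-window trick is easy to picture. A minimal sketch of the hash-to-string side (all names here are hypothetical, not the actual codebase's):

      #include <cstdint>
      #include <string>
      #include <unordered_map>

      // Debug-only reverse lookup, populated wherever strings get hashed.
      static std::unordered_map<std::uint32_t, std::string> g_hashToString;

      // Deliberately plain signature so it's easy to invoke from the
      // Immediate window at a breakpoint: DebugLookupString(0x1234ABCD)
      const char* DebugLookupString(std::uint32_t hash) {
          auto it = g_hashToString.find(hash);
          return it != g_hashToString.end() ? it->second.c_str()
                                            : "<unknown hash>";
      }

    The usual caveat applies: keep the table and the function out of shipping builds.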

arka2147483647 7 years ago

>Epilogue (... snip ...)

> 1. Do nothing (...) You can deal with that by imposing rules on what is and isn’t allowed in your codebase, (...)

This is what everybody is already doing in gamedev.

> 2. Get involved (...) C++ committee participation is open to everyone. (...)

Most game dev studios are small or medium-sized companies, and don't really have the time to waste on committee meetings...

  • Jyaif 7 years ago

    > Most game dev studios are small or medium-sized companies, and don't really have the time to waste on committee meetings...

    Irrelevant. What counts is where the C++ game devs are, and that's in the big companies. And participating in the design of a language is not a waste of time...

    • SomeHacker44 7 years ago

      It's a matter of perspective what constitutes a waste of time.

      If you are under pressure to ship something now/soon, and may not exist as a company in the next standards cycle, then it is probably a waste of time for that company.

pmarin 7 years ago

>Stop going to GDC as your one conference per year and start going to CppCon.

In fact the most popular CppCon video on YouTube is from Mike Acton: "Data Oriented Design and C++".

shmerl 7 years ago

Game developers should start using more Rust.

  • uglycoyote 7 years ago

    For console development (Sony, Nintendo, Xbox), anything but C++ has never seemed like an option, because all of the development tools and libraries provided for working on those consoles are C++-centric.

    But I'm curious if there are any console development companies that are successfully using Rust or other languages which perhaps can link with C++ libraries?

    We use C# in our studio for tools, and are able to link it with our game C++ so that we can run some of the game's subsystems within the tools (e.g. the animation engine), but shipping the game with C# code is not an option for several reasons, performance being the most important. We also need to build our game for the console using Clang/LLVM, and I suspect it's not possible to write C# which interfaces with C++ using LLVM, only with Microsoft's compiler.

  • rafaelvasco 7 years ago

    Yeah, it's an awesome language. It has its downsides (language-enforced memory micromanagement is a good thing but can get annoying sometimes), but it's one of the best we have now. For now I'll stay with my beloved C#.

    • shmerl 7 years ago

      C# doesn't sound like a good option for game development though (except maybe for scripting in various engines).

      The field is now dominated by C++ for a good reason: games require tight performance control. So Rust is a valid candidate for fixing C++'s issues; C#, not really.

    • satsuma 7 years ago

      +1 for C#, got to use it in an introductory Unity class and fell in love with it.

edoo 7 years ago

Most industry game dev is also done on top of C++ engines and libraries. I can see Go being used in the near future as the big engines offer bindings, but I bet in 5-10 years the average startup is using something like C#. It is slow as beans, but eventually CPU speed will make it much more reasonable for real use. The Unity engine is a good example: it has a weird, easy, powerful, super-bloated paradigm.

  • jokoon 7 years ago

    You just mentioned two garbage-collected languages on a game dev topic. I don't think they're adequate. Remember how the article was very insistent about being able to control memory and CPU resources; those are among the few reasons C++ is not dead.

    Rust? I don't see it either.

    • walkingolof 7 years ago

      Garbage collection is not by default a bad thing, even in game development; it's a method of memory management, just like reference counting or manual memory management.

      If you are allocating memory in your render loop, you're likely doing it wrong. For allocation outside of that critical path, well, choose your poison.

      GC will consume more memory, but it's likely to be faster when it comes to allocating and deallocating large amounts of small blocks.

      Reference counting, if you do it properly (to avoid thread starvation), is costly. Cheap atomic reference counting involves a calculated risk that you may have threads hanging.

      Manual memory management, well, we all know the cost of that :)

      That being said, many, many games today are developed with Unity, which uses C# as its primary programming language.
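
      The "don't allocate in your render loop" point holds under any of those schemes. A minimal C++ sketch of the usual pattern (names and sizes are illustrative):

        #include <vector>

        // Per-frame scratch storage: pay for the allocation once,
        // then reuse the capacity so the hot loop never hits the heap.
        struct FrameScratch {
            std::vector<float> verts;
            FrameScratch() { verts.reserve(1 << 16); }  // one-time cost
            void beginFrame() { verts.clear(); }        // keeps capacity
        };

      In a GC language the same discipline means pooling objects so the collector has nothing fresh to chase mid-frame.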

      • HippoBaro 7 years ago

        Sure, you can. But then you need to manage when the GC triggers, because if you don't, you'll drop frames. And the real choice is between C++, where there is (a little) extra work getting RAII correct, and GC languages, where you keep preventing GC until a point where it's okay to freeze the universe.

      • jokoon 7 years ago

        You can allocate stuff on the stack instead, or keep things in a game state.

    • deng 7 years ago

      His point is that many studios nowadays use an established engine and do not develop in the C++-from-scratch style. Especially smaller studios don't have the capacity for this, and also depend much more on cross-platform availability to generate more sales. So they use Unity or the likes of GameMaker Studio and whatever binding is available (C#/Boo/JS for Unity; GameMaker Studio has its own language). Or they even turn to something more exotic, like Haxe/Heaps.

    • andrewmcwatters 7 years ago

      A big part of this is ecosystem. You have so many game libraries and so much software written in C and C++. To many gamedevs, Rust is just a systems-language Go; it doesn't bring anything significant to the table compared to just using a limited subset of C++.

      • edoo 7 years ago

        The big game mills don't care about languages; they care about the bottom line. If you can churn out the same quality product with more junior engineers because the language is less difficult to master, it is going to happen rather soon.

    • edoo 7 years ago

      I see it in the sense of writing your game logic in Go using bindings for Unreal or one of the big engines; the really performance-critical stuff will probably always be in C++. The popular game Rust (not the language) uses Unity, and the overall engine is pretty bad, but it works and made them a fortune. By its nature it is incredibly extensible: you can mod it by dropping source files with hooks right into a directory, without compiling anything. The ease of that kind of stuff draws people like crazy.

  • andrewmcwatters 7 years ago

    You might want to stretch that estimate out. I'm running an Intel Core i5-3550 from 2012, and I foresee no reason to upgrade in the next 4 years. The current i5 on userbenchmark.com's front page is the 9600K, which it says is ~53% faster than my 3550 - seven years later.

    CPU performance is barely going anywhere. Developers should instead try to figure out how to do more with less growth.

    GPUs are also overpriced, and playing older games and comparing them to new ones doesn't show great payoff. As far as I'm concerned, we've plateaued. Maybe going from a GTX 760 to a 1060 would give me a few more frames, but frankly, more often than not, the games are programmed like utter shit.

    • HippoBaro 7 years ago

      This. When you think about it, it's an exciting time to be a dev. We need to be clever at stuff and can't just expect next-generation CPUs to make coffee for us.

mark-r 7 years ago

Maybe the QA folks are worse off in game dev companies, but they seem to be second-class citizens everywhere else too. Which is a pity, since as stated they are worth their weight in gold. A developer has a mindset of "how do I make this work"; a QA person has a mindset of "how do I make this break" - they are completely complementary.
