Why C++ for Unreal 4

forums.unrealengine.com

209 points by zschoche 12 years ago · 177 comments

flohofwoe 12 years ago

We've also been there, done that (about 10 years ago though). We had a very powerful scripting approach integrated into our game engine which gave direct access to gameplay systems, in order to let our level and game designers build scripted behaviour into the game. In the end we ended up with a terribly huge mess of script code (I think it was about a third of the actual C/C++ code), and the majority of the per-frame performance budget was lost somewhere in this scripted mess. The game sometimes suddenly crawled to a halt when some crazy scripting construct was called, and we had a lot of trouble getting stuff into a shippable state until the gold-master milestone (this is the game: http://www.metacritic.com/game/pc/project-nomads).

The main problem with scripting layers is that you are basically handing programming tasks over to team members whose job is not to solve programming tasks, and thus getting a lot of beginner code-quality and performance problems which are almost impossible to debug and profile (unless you have a few top-notch coders on the game- and level-design teams).

And then there will definitely be those "creative workarounds" to get something done which neither the engine nor the scripting layer was designed for, which make for entertaining horror stories the programmers tell the new hires when they inevitably ask why your engine doesn't have scripting ;)

A better approach is to give the level designers simple, pre-programmed, combinable high-level building blocks (AI behaviours, actions, triggers, etc), and let them customize a level (as in game area) with this. But never build the entire game logic with such an approach! With this, stuff can still be fucked up, but at least the performance-sensitive stuff can be implemented by the programming team, and it's much easier to debug and maintain.
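A minimal sketch of what such building blocks can look like (all names here are hypothetical, just to show the shape): designers combine pre-built triggers and actions declaratively, while the performance-sensitive implementations stay in engine code.

```python
# Pre-programmed, combinable building blocks: a designer wires up
# triggers and actions without writing free-form game logic.
class Trigger:
    def __init__(self, condition, actions):
        self.condition = condition
        self.actions = actions

    def update(self, world):
        if self.condition(world):
            for action in self.actions:
                action(world)

# Building blocks implemented (and profiled) by the programming team:
def player_in_zone(zone):
    return lambda world: world['player_pos'] in zone

def spawn(enemy):
    return lambda world: world['enemies'].append(enemy)

# What the designer authors: data-like composition, not code.
level = [Trigger(player_in_zone({(1, 1), (1, 2)}), [spawn('orc')])]

world = {'player_pos': (1, 1), 'enemies': []}
for trigger in level:
    trigger.update(world)
print(world['enemies'])  # ['orc']
```

The point of the split is that designers only ever combine vetted blocks, so the hot paths stay in code the programming team can debug and maintain.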

[edit: typos]

  • greggman 12 years ago

    The biggest advantage of scripting for me has been

    1. Co-routines

    Co-routines (co-operative multi-tasking?) mean you can do stuff like

        while (isWalking()) {
          advance();
          yield();
        }
        
    
    This is effectively 'yield' from Python, C#, etc. You can implement this in C++ by swapping stacks and calling setjmp, but there are usually issues.
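    The same pattern can be sketched with a plain Python generator stepped once per frame (the names here are made up for illustration):

```python
# The isWalking/advance/yield loop as a Python generator, resumed
# once per "frame" by the engine loop below.
def walk_behavior(steps):
    walked = 0
    while walked < steps:   # isWalking()
        walked += 1         # advance()
        yield               # hand control back until the next frame

actor = walk_behavior(3)
frames = 0
for _ in actor:             # the engine resumes the actor each frame
    frames += 1
print(frames)  # 3
```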

    2. Iteration time

    You can usually swap script code live. This project aims to fix that for C++, though I'm a little skeptical it can stay robust:

    https://github.com/RuntimeCompiledCPlusPlus/RuntimeCompiledC...

    • crucialfelix 12 years ago

      The slightly obscure music programming language SuperCollider [1] added co-routines about 10 years ago and they became one of my beloved techniques. Was very glad to see them come to python and soon to mainstream javascript.

      Boost has a c++ implementation but it looks quite different:

      http://www.boost.org/doc/libs/1_55_0/libs/coroutine/doc/html...

      [1] http://supercollider.github.io

      edit: Python's new asyncio stuff looks very interesting:

      https://docs.python.org/3.4/library/asyncio-task.html

      • munificent 12 years ago

        > Was very glad to see them come to python and soon to mainstream javascript.

        I really need to get around to writing a blog post to explain this in detail since this misapprehension is endemic. Python and JavaScript do not have coroutines, they have generators. Lua has actual coroutines.

        The latter is dramatically more expressive than what you can do with what Python, JavaScript, and C# offer. This mistake drives me crazy because it means people don't know what they're missing.

        Here's a quick example. Let's say we've got a little Python class for binary trees:

            class Tree:
              def __init__(self, left, data, right):
                self.left = left
                self.data = data
                self.right = right
        
        We'll add a method to do an in-order traversal. It takes a callback and invokes the callback for every data value in the tree, like so:

            def walk(self, callback):
              """Traverse the tree in order, invoking `callback` on each node."""
              if self.left:
                self.left.walk(callback)
        
              callback(self.data)
        
              if self.right:
                self.right.walk(callback)
        
        We can create a little tree and then print the data items in order like so:

            tree = Tree(Tree(Tree(None, 1, None), 2, Tree(None, 3, None)), 4, None)
        
            tree.walk(print)
        
        (This works in Python 3; in Python 2, you'll have to make a little fn for print.) Swell, right?

        Later, we decide we want to iterate over the items in a tree. Easy-peasy, Python has generators! We can just make a function that takes a Tree and returns a generator. We already have a method to walk the nodes, so we just need to call that and then yield the items, like so:

            def iterateTree(tree):
              def callback(data):
                yield data
        
              tree.walk(callback)
        
        Then you can just use it like so:

            for x in iterateTree(tree):
              print(x)
        
        Perfect, right?

        Actually, no. This doesn't work at all. You can't yield from the callback passed to walk. That's because walk() itself doesn't know that the callback is a generator.

        This is the problem with generators: they divide all functions into two categories: regular functions and generators. You run a regular function by calling it. You run a generator by iterating over it. The caller must use it in the correct way.
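        You can see the split directly in a couple of lines: calling a function whose body contains `yield` runs none of its code, it just builds a generator object, which is exactly why the callback passed to `walk()` above silently does nothing:

```python
# Calling a function that contains `yield` doesn't run its body;
# it just constructs a generator object. walk() then discards it.
def callback(data):
    yield data

result = callback(42)
print(type(result).__name__)  # generator -- no body code has run yet
```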

        At its simplest level it means you have to be careful when refactoring. If you have a generator function that gets too big and you want to split it up, you have to remember that the functions you split out are also special generator functions if they contain a yield. You have to remember to flatten it when you "invoke it".

        It's more than just annoying though: it means it's impossible to write code that works generically with both kinds of functions. In other words, all of your higher-order functions like map, filter, etc. now only work with some of your functions. (Or, I suppose, you could explicitly implement them to support both but that's more work and I don't think most languages do.)

        In languages like Lua, the above code just works. You can yield from anywhere in the callstack and the entire stack is suspended. It's fantastic.
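        For contrast, here is roughly what Python forces you to write instead: the traversal itself has to become a generator, with `yield from` at every recursive call (the method name `walk_gen` is just for illustration):

```python
# The callback-based walk() can't be reused; the traversal itself
# must be rewritten as a generator, and every recursive call must
# go through `yield from`.
class Tree:
    def __init__(self, left, data, right):
        self.left = left
        self.data = data
        self.right = right

    def walk_gen(self):
        if self.left:
            yield from self.left.walk_gen()
        yield self.data
        if self.right:
            yield from self.right.walk_gen()

tree = Tree(Tree(Tree(None, 1, None), 2, Tree(None, 3, None)), 4, None)
print(list(tree.walk_gen()))  # [1, 2, 3, 4]
```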

        (If I can be forgiven a bit of self-promotion, I'll note that my programming language Wren[1] can not only express full coroutines like Lua, but also supports symmetric coroutines, which can express some things Lua cannot. They are roughly the equivalent of tail-call elimination for coroutines.)

        [1]: https://github.com/munificent/wren

        • crucialfelix 12 years ago

          Great write-up - make that a rough draft for your post. I was going to edit my comment earlier to point out that a coroutine is not a generator.

          I realized this limitation one day while trying to do it in python. You cannot just yield another stream.

          SuperCollider has proper co-routines: http://danielnouri.org/docs/SuperColliderHelp/Core/Kernel/Ro...

          And the pattern library is built entirely around embedding in streams and yielding other streams; it uses this for very interesting numeric music patterns.

          Python 3.4 also now has co-routines: https://docs.python.org/3.4/library/asyncio-task.html

          Esp. this is interesting:

          result = yield from future – suspends the coroutine until the future is done, then returns the future’s result, or raises an exception, which will be propagated. (If the future is cancelled, it will raise a CancelledError exception.) Note that tasks are futures, and everything said about futures also applies to tasks.

          result = yield from coroutine – wait for another coroutine to produce a result (or raise an exception, which will be propagated). The coroutine expression must be a call to another coroutine.

          JavaScript does/will have simple generators.

        • anon4 12 years ago

          Yield and generators (i.e. save stack; return value to caller; receive value from caller) are really a language feature for writing a runtime for a different language with coroutines. Or if you're willing to write your program in a way that looks like it was generated by a source-to-source transformation tool, you can write your coroutines on top of them. The most basic construct is something like this:

              import functools

              CurrentCoroutine = None


              def run(main, arg):
                global CurrentCoroutine
                CurrentCoroutine = main
                while CurrentCoroutine is not None:
                  CurrentCoroutine, arg = CurrentCoroutine.send(arg)


              def corodecorator(coro):
                @functools.wraps(coro)
                def init():
                  c = coro()
                  next(c)  # prime the generator up to its first yield
                  return c
                return init
          
          
          And this is pretty much it. A simple example for two coroutines that pass control to each other would be:

            @corodecorator
            def coro1():
              # yield nothing on first call to receive args
              arg = yield None
              friend = arg[0]
              while True:
                print('coro1')
                arg = yield friend, (CurrentCoroutine,)
                friend = arg[0]
          
          
            @corodecorator
            def coro2():
              arg = yield None
              friend = arg[0]
              while True:
                print('coro2')
                arg = yield friend, (CurrentCoroutine,)
                friend = arg[0]
          
          
              run(coro1(), (coro2(),))  # alternates printing 'coro1'/'coro2' forever
          
          
          You can do the same with JavaScript and events, but it requires a much higher degree of masochism.

        • kansface 12 years ago

          Py3000 added support for the "pass a generator" construct that you care about: http://simeonvisser.com/posts/python-3-using-yield-from-in-g....

          • munificent 12 years ago

            As far as I can tell, that's still just a little local syntactic sugar for making composing generators nicer. There's still a fundamental distinction between generators and other functions.

            Think of it like this. Let's say you have a chunk of code that you want to refactor out. The usual way to do that is to pull it into a separate method and then call it from the place where the code used to be inline.

            In languages with coroutines, you can just do that, regardless of what's in that chunk of code. In Python, you have to think, "Oh, does this chunk of code contain a yield?" If so, you need to do a "yield from" the function you pulled out instead of a regular call.

            It forces you to constantly be cognizant of and design around the split between normal code and generators.
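            A tiny sketch of that refactoring tax: a helper pulled out of a generator is itself a generator, and a plain call to it would silently discard its values:

```python
def helper():                 # a chunk pulled out of a big generator
    yield 2
    yield 3

def big_generator():
    yield 1
    yield from helper()       # a plain helper() call here would yield nothing

print(list(big_generator()))  # [1, 2, 3]
```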

    • VikingCoder 12 years ago

      I've been using threads to do co-operative multi-tasking, for a while now.

      Every place that I'm tempted to write an event-driven finite state machine, or something similar, I spawn a thread instead. I get to write synchronous code, which feels much more natural to me.

      For instance my actor, running in a thread, calls a function like advance(). That drops data into an object, and wakes up the main thread, and blocks.

      The next time the main thread wants to give processing time to the actor, it describes the world into the same shared object, wakes up the actor's thread, and blocks.

      Doing a switch like this dozens or even hundreds of times per second seems to work pretty well, especially if the main thread only gives execution to the actor thread when it needs to - inputs have changed, etc.

      For my use cases, it radically simplifies my code, and I have a small number of different inputs to handle, so it has been scaling well.
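      A minimal sketch of that handoff in Python (all names hypothetical): two events and a shared slot ensure only one thread is ever doing work at a time.

```python
import threading

# The actor runs in its own thread; a pair of events plus a shared
# slot pass control back and forth, so main and actor alternate.
class Actor:
    def __init__(self):
        self.world = None            # shared object written by the main thread
        self.actor_turn = threading.Event()
        self.main_turn = threading.Event()
        self.thread = threading.Thread(target=self.run, daemon=True)
        self.log = []

    def run(self):
        for _ in range(3):
            self.actor_turn.wait()   # block until main hands us the world
            self.actor_turn.clear()
            self.log.append(('advance', self.world))
            self.main_turn.set()     # wake main, then loop back and block

    def step(self, world):           # called from the main thread
        self.world = world
        self.actor_turn.set()        # wake the actor...
        self.main_turn.wait()        # ...and block until it yields back
        self.main_turn.clear()

actor = Actor()
actor.thread.start()
for frame in range(3):               # "dozens of switches per second" in a real loop
    actor.step(frame)
print(actor.log)  # [('advance', 0), ('advance', 1), ('advance', 2)]
```

As the parent notes, this needs one lock (or event pair) per actor/communication object, so it scales only to a modest number of actors.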

      • srean 12 years ago

        That is precisely the beauty of coroutines: your code can look like threaded code, but you get the performance of an event loop. Put another way, with coroutines you get most of the advantages of an event loop but very few of its disadvantages. (You get concurrency only, not parallelism; well, this is almost true.) For cooperative multi-tasking, threads are unnecessarily resource-hungry and wasteful. They hoard resources although most of the time they are doing nothing but waiting. This is usually fine if you think yours is the only application that should be running on the hardware at that time, but typically you want to share the hardware with others.

        Remember not everybody has the luxury of working on a system where you can spawn 10,000 threads without breaking a sweat.

        However, this territory of literally hundreds of thousands of programmed agents participating in a game does not seem to be very populated. Perhaps part of the reason is that very few languages had efficient (this rules out stack copying), scalable, and portable support for coroutines. This is starting to change, but not as fast as I would like.

        I think C is to blame for the long under-appreciated status of coroutines. It is one abstraction that C left out, although the VM C had as its execution model (the PDP) had excellent support for coroutines at the instruction level. C exported pretty much every abstraction of the underlying instruction set, but not coroutines.

        EDIT: @VikingCoder Replying here as HN wouldn't allow me to respond until some time had passed. Yes, I have looked at asio, although I've just scratched the surface. It looks very interesting; as far as I know they are not threads though (which is a good thing), they use macro and template metaprogramming trickery to turn producer-consumers into one big switch-case. If you're interested in coroutines and seamless interaction with C++ I can recommend http://felix-lang.org

        • munificent 12 years ago

          > C exported pretty much every abstraction of the underlying instruction set, but not coroutines.

          My hunch is that the designers of C would have said "goto" and "switch" cover the use case where you have a bunch of peer chunks of code that you want to freely bounce between.

          Remember, at the time function calls were considered expensive, so not supporting full coroutines across function-call boundaries may not have been on their minds as much.

          • srean 12 years ago

            Indeed, and thanks for the excellent commentary on coroutines, awaiting the blog post. I think another fact played into their decision: coroutines do not specify how they are to be scheduled, that leaves room for arbitrary policies. Not that C shied away from keeping things undefined.

        • VikingCoder 12 years ago

          Have you experimented with Boost Context and Boost Coroutine?

        • boulderdash 12 years ago

          @srean - can you provide a link to the PDP support for coroutines? I would like to read more.

          • srean 12 years ago

            By the time I got within touching distance of any computer, the era of the PDPs was long over. I have learned from John Skaller that they could exchange control between two stack frames in a single assembly instruction. "Exchange Jump" is what I think it was called. You will have to search the assembly manual for the PDP-11 for more. Wikipedia has some details http://en.wikipedia.org/wiki/Coroutine#Implementations_in_as... But I am sure there are HN readers who can speak with way more authority and exhaustiveness than the Wikipedia page and can probably find you a PDP-11 manual. I think you will find this thread interesting http://permalink.gmane.org/gmane.org.user-groups.linux.tolug...

            Quoting the most interesting bits from that thread, (although I urge you to read the original):

              Of the many styles of subroutine calls on the PDP-10, JSP ac,addr is the fastest,
              as it's the only one that doesn't require a memory store.
            
              Its ISP is something like:
            
                    ac = PC
                    PC = effective address [addr in the usual case]
            
              The subroutine return, of course, is:
            
                    JRST (ac)
            
              Here, the effective address is the contents of the register.
            
              The coroutine instruction combined the two:
            
                    JSP ac,(ac)
            
              This essentially exchanged the PC with ac.

            • boulderdash 12 years ago

              Hi srean - thanks for that recap. I just did some more digging on this and tried to understand the assembly versions of coroutines. They were very spartan. It was just: POP the next address from the stack into TEMP, PUSH the current PC, then set the PC to TEMP. Notice that there isn't any linking or parameter passing.

              Overall, it has been fun reading on all the variants of this idea.

              side note: in my first job, there were a few PDP-11s in the lab that I was responsible for. We never turned them on though.

              Also, the PDP 10, which you mention above, was one of the most revered machines by hackers.

      • anon4 12 years ago

        Don't you still need to carefully use locks everywhere? For me the one reason to use coroutines instead of regular threads is that coroutines are cooperative multitasking, rather than preemptive. Which is what I want for when I have a bunch of concurrent but not parallel processes working on shared data.

        • VikingCoder 12 years ago

          I have a main thread, an actor thread, and a single object that they use to communicate. So yes, I have one lock, around the single object they use to communicate.

          I'd need one lock per actor thread and its communication object.

          I say again, this works in my problem domain, and probably wouldn't work in other domains.

    • forrestthewoods 12 years ago

      UE4 has hot reloading of C++ code. Some of their demos were pretty impressive. However I'd have to use it for a long period of time on a "real" project to see how reliable it truly is.

  • midnightclubbed 12 years ago

    By removing the power from your design team you are creating more work for the software engineers and removing tools from the creative team.

    An ideal scripting language:

    - Allows designers to build complex gameplay elements, define complex ai behaviors, create gameflow with minimal work from the software engineers.

    - Handles memory allocation/destruction behind the scenes

    - Does not crash the game when an error occurs

    - Handles multithreading and/or events

    - Does not allow designers to shoot themselves in the foot

    - Has a simple and clean syntax

    - Allows software engineers to expose their APIs to it easily

    - Can be reloaded on the fly

    If you control the feature set of the scripting language then it becomes a crucial tool for rapid development of your game - empowering the creative team to make their game and allowing the software engineers to concentrate on all the other stuff that needs their attention.

    In my opinion visual scripting and component systems can be a useful addition to an engine, but I have always seen value in having a scripting language layer.

    (I have shipped multiple high profile console games with more lines of script code than of game code and design teams matching the size of the programming teams)

    • flohofwoe 12 years ago

      Did you use an existing scripting language, or roll your own? How did you handle debugging and profiling, and did you limit access to control constructs like loops, variables, etc.? Did the scripting system have some sort of per-frame budget/quota? Curiously interested...

      • midnightclubbed 12 years ago

        Rolled our own - one I helped write, one I used. Both followed a stripped down C-like syntax (minus pointers). Both had conditional statements, loops, variables, simple structs, arrays (range checked), float/int/bool.

        On the projects I worked on script performance wasn't too much of an issue, the scripts were used to control state and flow, not to do any number-crunching.

        If you control what is exposed to the scripting layer, then engineering will know when design asks for access to something that should be implemented outside of scripts. Which, in my experience, seemed to minimize performance issues.

        One concern when switching over to a development system such as Unity or Unreal 4 is that your gameplay 'scripts' have access to the entire engine. It seems very easy at that point for your game to turn into unintentional spaghetti.

  • bch 12 years ago

    > The main problem with scripting layers is that you are basically handing programming tasks over to team members whose job is not to solve programming tasks, and thus getting a lot of beginner code-quality and performance problems which are almost impossible to debug and profile

    I disagree that it has to follow that scripting -> bad code. I think of the C <--> scripting integration that I do as a Judo secret weapon: I get to easily fling "high-performance" C around in a super-dynamic way (REPL, easy-to-use high-level abstractions). I think the problem you're describing is real (the possibility to abuse/misuse scripting power), but dismissing the notion of scripting plus C/C++ because of the possibility of abuse seems like throwing out the baby with the bathwater. It's better addressed with training and culture.

    • flohofwoe 12 years ago

      Yes, I actually agree; it's more an organizational problem of who does what in a team. For fast prototyping, or when working in a small team of experts, it will be an advantage. There are only very few people who are gifted enough to be both a great artist and a pragmatic programmer who cares about code quality and performance. I think the right middle ground is that the programming team provides recombinable building blocks, which are low-level enough that combining them makes sense, but high-level enough that they are still easily controllable and maintainable. The UE4 blueprints sound like this.

  • robrenaud 12 years ago

    > The main problem with scripting layers is that you are basically handing programming tasks over to team members whose job is not to solve programming tasks, and thus getting a lot of beginner code-quality and performance problems which are almost impossible to debug and profile (unless you have a few top-notch coders in the game- and level-design teams).

    It sounds like having a real software engineer do code reviews and/or rewrite the poor scripts before they get committed would be a solution to this problem. Presumably giving level designers the ability to script things is good for the gameplay.

octo_t 12 years ago

The phrase

> 'What starts out as a sandbox full of toys eventually grows into a desert of complexity and duplication.'

is beautiful, and it's a pattern I've seen multiple times before. It's not feature creep per se, but something a bit more insidious in software development.

  • simias 12 years ago

    That's a good lesson for people wanting to create new languages, too. One of the reasons C++ was successful in the first place is that it needs almost no glue to interface with C code. Contrast that with all the languages providing a more or less cumbersome FFI.

    Rust for instance looks very promising, but you still have to go through the tedious task of redeclaring all the prototypes of the C functions before you call them; it cannot directly parse C headers (as far as I know). That makes writing hybrid code (for instance incrementally porting code from C to Rust) much more difficult and error-prone than it needs to be.

    • kibwen 12 years ago

      Actually, one of the first tools to appear in the Rust ecosystem was a port of the "bindgen" program written for the Clay language, which has been solving the problem of parsing C headers for years now:

      https://github.com/crabtw/rust-bindgen/

      There are also long-term plans for adopting this into the compiler itself:

      https://github.com/mozilla/rust/issues/2124

    • pjmlp 12 years ago

      Another one is that all successful systems programming languages have an OS vendor shipping them in their OS SDK.

      That is the only way to make people adopt them. Otherwise they become just another language to do business applications.

  • squidi 12 years ago

    I love the irony of the contrast between this article and https://news.ycombinator.com/item?id=7584285. Not that either are wrong, but it demonstrates the constant flux of the software world.

    • jeremiep 12 years ago

      There's a world of difference between a web server handling each request within ~1-30 ms and a game simulating an entire world made of thousands of entities in under ~16 ms.

      It is very possible to write games in high-level languages, but you will lose at least half the compute power of the machine by doing so. Note that unless you're writing a Gears of War, you don't really need such performance, and productivity wins once again.

      • Guthur 12 years ago

        "It is very possible to write games in high-level languages, but you will lose at least half the compute power of the machine by doing so."

        Sounds definitive; interesting, considering that in some cases languages like OCaml outperform C++.

        C++ does not mystically provide good performance. Knowledge of algorithms and appropriate data structures is far more beneficial.

        Does the coder know how to come up with something like http://en.wikipedia.org/wiki/Fast_inverse_square_root, or the cost of a hash map versus an indexed array lookup? These will win you far more than any language choice.

        • jeremiep 12 years ago

          I completely agree about knowledge of algorithms and data structures being far more important to programming than the choice of language.

          Many languages can outperform C++ in some cases, and it would not be my choice for a next-gen game engine either (D would be).

          The thing about going native, however, is that you can also easily control the memory layout of these data structures to lower cache-miss rates. You can write highly performant code in all the cases where it's needed. For a game engine, that's very likely to be most of the sub-systems updating the game objects and interfacing with the hardware. Writing native code makes it pretty straightforward once you learn to structure your data for cache locality and prefetch the memory when you know it's going to be needed. Video game engines are chock-full of use cases for this.

          I'm mostly a functional programmer now and I do love referential transparency. It's perfect for reasoning about the logic of our programs and has completely changed how I view software. But the tradeoff is that we lose the ability to easily reason about the execution speed of our programs without deep knowledge of how they get compiled, which is usually dependent on the compiler vendor.

          For real-time applications crunching tens of thousands of objects 60 times a second running sometimes on sub-par hardware written by armies of programmers straight out of Java-School, this makes C++ a no-brainer.

  • ksk 12 years ago

    Any specific examples? It would be interesting to go through the commit logs.

xedarius 12 years ago

I developed two titles with the Unreal Engine, and whilst UnrealScript initially seems like an advantage, it very quickly becomes problematic. My favorite problem was the mutual dependency of the C++ code on the script and the script on the C++: if you're not careful, you can end up completely unable to do a build.

As much effort as they put into the IDE, it would always play second fiddle to Visual Studio. When I left there was no way to remote-debug UnrealScript on the target device (this may not be the case now).

I know that all of the guys I worked with in the studio would welcome a pure C++ approach. The only real losers here are mod makers, who will face a higher entrance bar.

  • tinco 12 years ago

    The modmakers don't suffer too dearly; Blueprint can do almost anything a modmaker could want, and in the future perhaps we can drop the almost.

    They didn't really drop the high-level language; they just dropped the idea of a high-level general-purpose language, favouring a strict DSL.

    There's still a bit of interop, but you have to be very explicit about it. In C++ you use macros to tell the compiler what's accessible to Blueprint, and to access Blueprint from C++ you have to make weird queries that are very obviously highly dynamic and poorly performing.

    So cool stuff all around :)

  • RobotCaleb 12 years ago

    We ended up prototyping most things in UC then converting most of it to native code for performance reasons.

    The reason builds were sometimes a bit tricky was due to chicken and egg syndrome because the script compilation wouldn't just compile scripts, but also modify headers to support the new objects created in script.

    Aside from the performance issues, the main drawback (to me) with UC was definitely how tightly coupled to native code it was.

  • CmonDev 12 years ago

    They could also use C# instead of UnrealScript on top of C++. That way you would be able to leverage Visual Studio while getting one of the best languages. It would also make the Unity3D crowd feel welcome once they need something better.

    PS: I like Blueprint, though.

    • kevingadd 12 years ago

      To second what scott_s said, you'd have a huge interop layer. The fact that C# has a closer match to C++ types and primitives doesn't mean you have no interop. Unity has a huge surface area between the C++ parts of the engine and the C# parts of the engine; a ton of work goes into maintaining that, and it complicates porting to new target platforms like the web (can't just feed everything through emscripten).

      • azakai 12 years ago

        Yes, C# and other .NET languages are nice, but the open source implementation (Mono) ends up being difficult to port to novel platforms due to a combination of technical and legal issues.

    • scott_s 12 years ago

      They would still need an interoperability layer, which was one of the main reasons for dropping UnrealScript.

    • kcbanner 12 years ago

      I feel like .NET is a bad choice for this.

HeXetic 12 years ago

It's important to note that what is being talked about in this post is not "why we wrote the Unreal engine in C++", because it already was in C++. Many games, older Unreals included, had a separation between "code" and "scripting", where stuff like animations, weapon firing, etc. was written in scripts, in the belief that this would be easier to update as required vs. C or C++ code.

Doom 3 and previous Unreal engines had scripting languages; even the first moddable FPS engine, Quake, had a 'scripting language' of sorts -- Quake C, a sort of subset of C. id Software turned back to pure code with the Quake 4 engine, however, recognizing that the overhead of script-vs-code and the limitations of scripts outweigh any gains from being "easier to edit".

  • angersock 12 years ago

    Brief rant:

    Quake 1 ran a virtual machine, which QuakeC compiled down to. Quake 2 ran native code via DLLs. Quake 3 ran either VM code or native code--depending on how clever you wanted to be, you might need to break into the native code.

    There wasn't some "mistake" about using scripts-vs-code, because they would actually compile down to executable bytecode. This made it much easier to load mods over the network if you needed to, and to port their tech to different architectures.

    idTech 1-3 internally were a lot more like VR operating systems than scriptable game engines.

    > and the limitations of scripts, outweighs any gains from being "easier to edit"

    Wrong, wrong, and wrong. The mod scene flourished back in the day specifically because it was so easy to bodge together mods in these friendlier environments--especially in the Unreal series.

    The place where scripting falls apart is in modern AAA games where way too much stuff is expected of/exposed to designers, and then you end up with gigantic sprawling piles of poor performance. A friend worked with a licensee of the Unreal engine for a few games, and their script dispatch switch went on for...well, let's just say that many good programmers lost many good hours in those mines.

    Scripting is a perfectly good tool, and one that makes sense until you start doing crazy AAA stuff with it.

    • HeXetic 12 years ago

      As a Doom 3 modder myself, I experienced considerable headaches juggling the interplay between scripts and code, especially since so much stuff (like firing a gun or moving around) spanned the two systems so inelegantly.

      • angersock 12 years ago

        I think it was handled somewhat better in older engines.

        Doom3 seemed a bit of an odd duck.

  • hyp0 12 years ago

    I would guess the idea was to enable non-coders (esp. artists, level designers etc) to handle many of their needs themselves - for faster iteration.

    But maybe UnrealScript wasn't simple enough in practice to do that?

    Or perhaps, giving people a template function, and a few functions, is just as easy/hard as a separate scripting language.

    Generally, scripting languages are a really great idea: consider all the bash scripts in unix. An imperfect mismatch with the underlying language, yes; but worth it.

    • GeneralMayhem 12 years ago

      It sounds like the problem was that UnrealScript was initially too simple for the applications it was pushed to, which led to more and more of the underlying system being exposed through the interop API.

CyberShadow 12 years ago

There was a talk at last year's D conference about how Remedy Games have used D as their "scripting" language:

http://dconf.org/talks/evans_1.html

I wonder if some of the same points would apply here. The short version of the talk is that D compiles much faster than C++, has limited C++ link compatibility (e.g. classes, but not templates), and overall has nicer syntax / language features than using C++ directly. Metaprogramming / compile-time introspection allow automatically serializing/deserializing data to allow updating data structures without restarting the engine.

  • pjmlp 12 years ago

    Yeah C++ compile times will be a pain until a module system gets done.

    This is why I follow with high interest the work clang guys are doing for the committee.

mccr8 12 years ago

These issues are all very similar to the difficulties with interaction between JS and C++ in web browsers. A lot of engineering and specification effort has been expended in browsers to improve these problems, on things like WebIDL [1], codegenned bindings[2], JITs that understand some of the behavior of the underlying C++ operations [3] and so forth, but for a game engine where you aren't running potentially malicious code I can see that it would make a lot more sense to just tell people to use C++ rather than expend that effort.

[1] http://www.w3.org/TR/WebIDL/

[2] http://jstenback.wordpress.com/2012/04/11/new-dom-bindings/

[3] https://bugzilla.mozilla.org/show_bug.cgi?id=938294

beefsack 12 years ago

It will be interesting to see if any major game engines pop up using Rust as the core language, or even Go. C++ has been the king of highly optimised game engines for so long, I can't help but feel it has become so entrenched in the industry that it will take something monumental to disrupt it.

  • kibwen 12 years ago

    There already exists a subcommunity devoted to exploring Rust's applicability to game development and computer graphics. The most active person in this department is probably Brendan Zabarauskas: http://voyager3.tumblr.com/post/82419271783/i-found-this-anc...

    I also know of a few game development studios with R&D departments who have had an eye on Rust in the past, though I bet it will be a long time before a major studio is willing to make the risk that writing an engine in Rust would entail.

  • forrestthewoods 12 years ago

    Can Rust code be compiled and debugged on PC, OS X, Linux, iOS, Android, PS4, and XB1? If the answer to any of those platforms is no then the answer to your question is no.

    • steveklabnik 12 years ago

      iOS, PS4, and XB1 are missing.

      I know it's not your point, just providing some answers in case someone is wondering.

      • forrestthewoods 12 years ago

        I was making a point and asking a real question because I didn't know. I checked wikipedia and couldn't get a clear answer. So thanks!

        Sticking with C/C++ also means you'll be able to more easily port your game to any platform of the future. It's just a safe bet that any future platform (either hardware or software) will support C/C++. Especially for games. Anything else is a dice roll. Sometimes a roll worth taking, but still a roll.

        • bjz_ 12 years ago

          I think it will most likely be smaller devs and indies who use Rust for game dev initially. Once demand grows, hopefully there will be better cross-platform support in the future. Thankfully Rust emits LLVM IR, so that will make expanding to more platforms like iOS, PS4 and Emscripten much easier. XB1 would still be an issue... not much you can do about that, though.

          For me, Mac/Win/Linux/Android/iOS support is more than enough, and I think it will be enough to bootstrap the language into some level of industry acceptance. It depends on the project really, and how much the developer values console support over the benefits that a modern systems language like Rust provides.

  • lobster_johnson 12 years ago

    I have high hopes for Nimrod [1] in that regard. Unlike Rust and Go, Nimrod manages to be almost as fast as C/C++ without sacrificing a pleasant syntax; Nimrod code often looks entirely like Python. For example, here's the example from the Nimrod home page:

        # compute average line length
        var count = 0
        var sum = 0
    
        for line in stdin.lines:
          count += 1
          sum += line.len
    
        echo "Average line length: ",
          if count > 0: sum / count else: 0
    
    Type inference ensures that this uses efficient types internally, and is compiled to something very close to the efficiency of C. Here [2] is the generated C code, minus line tracing and stack trace frame generation. (Nimrod does things like bounds checking and overflow checking; without them, the program obviously becomes faster; the AddInt() function, for example, is replaced with a simple "+=".)

    [1] http://nimrod-lang.org

    [2] https://gist.github.com/atombender/f50e47c573f865d000ec

    • bjz_ 12 years ago

      Nimrod is a great language, but it has different goals to Rust. You get better expressiveness, and cleaner code, but you don't get the huge benefits of Rust's static type system. It depends on which you value more - I think there is a place for them both though.

      • lobster_johnson 12 years ago

        Nimrod's static type system looks comparable to Rust's; are you referring to Rust's safety guarantees?

        If so, I believe Nimrod's support for immutability gets you pretty far, but I have not looked very deeply at it. For example, it implements an explicit "IO taint" mechanism reminiscent of Haskell.

  • Narishma 12 years ago

    I don't think it will happen with Rust until it becomes more stable with no breaking changes between releases.

    As for Go, it's not very well suited to this domain. Or at least no better suited than Java or C#.

    • bjz_ 12 years ago

      > I don't think it will happen with Rust until it becomes more stable

      Indeed. Most of the trail blazing work is being done by professional game devs in their spare time, hobbyists, indies and students who are willing to endure the pains of early adoption for the incredible gains that Rust gives them. It will only be post 1.0 however that the bigger players will be able to even consider using Rust. It's too risky to bet an entire company on – and that's coming from somebody who is willing to bet their indie project on it. ;)

  • bjz_ 12 years ago

    I don't foresee big, established C++ code bases like the Unreal Engine being re-written from scratch, but once Rust reaches 1.0 it will definitely be a very compelling choice for new projects. It will start with the side-projects and indie projects, and gradually work its way up to larger and larger projects as the language proves itself. It's not something that will happen overnight.

  • mattgreenrocks 12 years ago

    What's the appeal of Rust for gamedev?

    • bjz_ 12 years ago

      - Fine grained control over allocation whilst maintaining memory safety (stack, heap, RC, GC, or roll your own)

      - No null pointers (with an Option type that compiles down to a nullable pointer)

      - Data race free concurrency

      - Zero cost abstractions

      - RAII and destructors

      - No exceptions

      - A modern, highly expressive type system

      - Generics that throw type errors at the call site, not deep in a template expansion

      - True immutability (not `const`)

      - An excellent C FFI

      - Compiles to native code

      - You don't pay for what you don't use

      - Safe by default, but you can circumvent the safety checks if you know what you are doing, and those places are clearly marked and easy to audit. Inline assembly is supported.

      - Most safety is enforced statically, so you don't pay for it at run time.
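
      To make the Option point above concrete, here is a minimal Rust sketch (variable names are mine, purely illustrative). The size assertion shows that the "compiles down to a nullable pointer" claim is real: `None` is represented by the forbidden null value, so the option costs nothing extra at run time.

      ```rust
      use std::mem::size_of;

      fn main() {
          // Option<Box<T>> is the same size as Box<T>: the null value
          // that Box may never hold is reused to encode the None case.
          assert_eq!(size_of::<Option<Box<i32>>>(), size_of::<Box<i32>>());

          let maybe: Option<Box<i32>> = Some(Box::new(42));
          // Unlike a raw nullable pointer, the compiler forces you to
          // handle both cases before you can touch the value.
          match maybe {
              Some(v) => assert_eq!(*v, 42),
              None => unreachable!(),
          }
      }
      ```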

      • mattgreenrocks 12 years ago

        I was being a bit flippant when I posted, but I'm really impressed with the list. I need to look into seeing where I can help on the compiler or runtime.

        No exceptions is my only objection, but I know they're dubious for a systems language.

        • bjz_ 12 years ago

          No worries, it was a good question! I would highly recommend hopping on the IRC if you'd like some more information or have a chat. The community is very active and friendly. You can see the list of channels here: http://static.rust-lang.org/doc/master/index.html#external-r...

          Regarding exceptions: whilst they can be very useful, unfortunately a significant number of large, performance sensitive C++ projects outlaw them due to overhead and safety concerns (the semantics can become quite hairy when mixed with destructors). The Rust developers felt that it was easier to forgo them entirely.

          • deathanatos 12 years ago

            > due to overhead

            My understanding is that the old exception code, called "SJLJ" (short for setjmp/longjmp, which is what it used), was slow; I think each try/catch required runtime hooks.

            The newer compilers generate something called "DWARF" (table-driven, "zero-cost" unwinding); resources on it are unfortunately scarce, but my understanding is that you don't pay anything in speed for an exception until you throw one. (You do however pay a bit of disk/memory for the tables describing where try/catch handlers are, I think.)

            > safety concerns (the semantics can become quite hairy when mixed with destructors)

            I'm assuming that you shouldn't throw in a destructor.¹

            This argument, to me, always needs more information attached to it, because by itself it's meaningless. Assuming the alternative is returning either the result or an error code, you run into exactly the same semantic issues; you're just handling them manually now. Is that better, and how?

            In the manual case, if I have some code that returns an error code that I can't handle, I need to propagate that error up to a stack frame that can. Thus, I begin to manually unwind the stack, during which I destruct things. If we're assuming destructors can throw², then I can potentially run into the problem of having two errors: now what do I do?

            C++ isn't the only language here: C#, Python, and Java share the problem of "What do you do in the face of multiple exceptions requiring propagation up the stack?", though I think C++ is the only one that solves it by terminating the program. I believe C# and Python just drop the original exception, and I have no idea what Java does. Honestly, if things are that effed up, terminate doesn't sound that bad to me. In practice in C++, most destructors can't/don't throw. (Files are about the hairiest thing, since flushing a file to disk on close can fail: C++'s file classes will ignore failures there, which doesn't exactly sit well with me. You can always flush it manually before closing, but of course, if you do this during exception propagation and throw on failure, you risk termination due to two exceptions.)

            Even C has this, in that if you're propagating an integer error code up the stack, and something goes wrong in a cleanup, you've got this problem. In C, you're forced to choose, of course, including the choice of "ignore the problem entirely".

            That said, I'll add the answer for Rust here. (I've never used Rust, so correct me if I'm wrong. I'm going to abstract away the Rust-specific types, however.) Rust, for a function returning T but that might fail, returns either an Option<T> or a Result<T>-ish object, which is basically (T or ErrorObject). Rust has strong typing, so if there's an error, you can't ignore it directly, because you can't get at the result. And if you try, it terminates the "task". Strong typing is the winner here. (This reminds me why I need to look into Rust.)

            ¹It's not illegal to do so, but since destructors get called while an exception unwinds the stack, you can potentially run into an exception causing an exception. Two exceptions in C++ result in a termination of the program.

            ²If we're not, then exceptions are perfectly safe.
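
            A minimal Rust sketch of that Result-based propagation (`parse_and_double` is an invented helper, not anything from this thread):

            ```rust
            use std::num::ParseIntError;

            // Each fallible step returns a Result; the `?` operator
            // propagates the error up the stack automatically instead
            // of letting it be silently dropped mid-unwind.
            fn parse_and_double(s: &str) -> Result<i32, ParseIntError> {
                let n: i32 = s.trim().parse()?;
                Ok(n * 2)
            }

            fn main() {
                assert_eq!(parse_and_double("21"), Ok(42));
                // The error can't be ignored: the only way to reach the
                // i32 is to handle the Err case (or explicitly unwrap).
                assert!(parse_and_double("oops").is_err());
            }
            ```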

            • bjz_ 12 years ago

              Yeah, I will admit I am not an expert in exceptions, so I might have been incorrect in my response. I'm sure there would be better people to talk to in #rust. Alternatively you could ask on the mailing list or /r/rust.

    • gnuvince 12 years ago

      A safer language with higher-level, low-cost abstractions.

      • zerooneinfinity 12 years ago

        Honestly, I've made over 15 games in my career and safety with C++ really just isn't an issue with decent developers. The line between what's a programmer for games and what's a designer is narrowing, most designers are competent programmers. Furthermore, there's some great tools out there to help prevent things like memory leaks. Combine that with good company practice, like code reviews, and it becomes a non-issue.

        • gnuvince 12 years ago

          I guess, then, that you weren't part of the Battlefield 4 team [1]. I've discussed the issue of the "no decent programmer" fallacy in the past; yes, in theory if programmers were careful and alert, they could create flawless software, yet this never happens in practice because humans are prone to errors (i.e. not understanding a subtlety of the language or library, thinking that a validation is done at a different level of abstraction, failing to imagine what could be an error scenario and how it could occur, etc.). Languages like Rust offer the same capabilities as C or C++, while eliminating entire classes of bug sources.

          [1] http://en.wikipedia.org/wiki/Battlefield_4#Technical_issues_...

          • maccard 12 years ago

            If BF4 was written in C#, or Java (or Rust or Go?), I'm sure it would still have just as many bugs. One of people's biggest complaints is the kill shots that you don't see, but that's a design choice (client-side hit detection).

            • steveklabnik 12 years ago

              Of course, it's impossible to speculate without seeing the codebase, but considering that Rust makes several classes of C++ bugs impossible at compile time, I'd be hard-pressed to imagine that a Rust version wouldn't be less buggy.

              • loup-vaillant 12 years ago

                If the safer type system gives the devs an unwarranted sense of security, they might write less tests, be less careful in their design, or wait longer between audits and other sanity checks.

                If on the other hand the devs understand which classes of bugs aren't ruled out in Rust, then sure, you will end up with fewer bugs.

                • bjz_ 12 years ago

                  Rust's type system eliminates the need to test for whole classes of bugs, because they are statically checked for at compile time. This means that tests can be more focused on logic errors rather than standard book keeping. If you look at the example set by the rust repository itself (https://github.com/mozilla/rust/), it is heavily tested and every single PR (https://github.com/mozilla/rust/pulls) is reviewed before merging. This discipline definitely filters down into third party libraries.

              • maccard 12 years ago

                Less prone to certain kinds of bugs, sure. But logical errors, not necessarily.

        • pcwalton 12 years ago

          > Furthermore, there's some great tools out there to help prevent things like memory leaks. Combine that with good company practice, like code reviews, and it becomes a non-issue.

          The security track record of applications written in C++ disagrees with you.

          • zanny 12 years ago

            We are talking about new engines written in these languages, though, not multi-decade old codebases still using inline assembler, goto, and pointer arithmetic.

            Modern C++ is really safe if you use the subset that involves automatic storage duration, well bounded arrays, etc and use all the warning flags of your compiler, run static analysis, have a robust test framework, etc.

            • pcwalton 12 years ago

              No, modern C++ is not even close to memory safe. This is my favorite meme to destroy over and over on HN. :)

              Consider iterator invalidation, null pointer dereference (which is undefined behavior, not a segfault -- and you can't get away from pointers because of "this" and move semantics), dangling references, destruction of the unique owner of the "this" pointer, use after move, etc. etc.

              • JoeAltmaier 12 years ago

                Extraordinary claim; please elaborate. I'm working on hundreds of thousands of lines of C++ code with a medium-sized team; memory issues are almost non-existent because of the disciplines described above.

                • pcwalton 12 years ago

                  I've described this many times in the past, but here are a few things that modern C++ does nothing to protect against:

                  * Iterator invalidation: if you destroy the contents of a container that you're iterating over, undefined behavior. This has resulted in actual security bugs in Firefox.

                      std::vector<MyObject> v;
                      v.push_back(MyObject());
                      for (auto& x : v) {
                          v.clear();    // invalidates the iterators behind the range-for
                          x.whatever(); // UB: x now dangles
                      }
                  
                  * "this" pointer invalidation: if you call a method on an object that a unique_ptr or shared_ptr holds the only reference to, there are ways for the object to cause the smart pointer holding onto it to let go of it, causing the "this" pointer to go dangling. The simplest way is to have the object be stored in a global variable and to have the method overwrite the contents of that global. std::enable_shared_from_this can fix it, but only if you use it everywhere and use shared_ptr for all your objects that you plan to call methods on. (Nobody does this in practice because the overhead, both syntactic and at runtime, is far too high, and it doesn't help for the STL classes, which don't do this.)

                      class Foo;
                  
                      std::unique_ptr<Foo> inst;
                  
                      class Foo {
                      public:
                          virtual void f();
                          void kaboom() {
                              inst = nullptr; // if this == inst.get(), *this is destroyed here
                              f();            // UB: virtual call on a dead object
                          }
                      };
                  
                  * Dangling references: similar to the above, but with arbitrary references. (To see this, refactor the code above into a static method with an explicit reference parameter: observe that the problem remains.) No references in C++ are actually safe.

                  * Use after move: obvious. Undefined behavior.

                  * Null pointer dereference: contrary to popular belief, null pointer dereference is undefined behavior, not a segfault. This means that the compiler is free to, for example, make you fall off the end of the function if you dereference a null pointer. In practice compilers don't do this, because people dereference null pointers all the time, but they do assume that pointers that have been successfully dereferenced once cannot be null and remove those null checks. The latter optimization has caused at least one vulnerability in the Linux kernel.

                  Why does use after free matter? See the page here: https://www.owasp.org/index.php/Using_freed_memory

                  In particular, note this: "If the newly allocated data chances to hold a class, in C++ for example, various function pointers may be scattered within the heap data. If one of these function pointers is overwritten with an address to valid shellcode, execution of arbitrary code can be achieved." This happens a lot—not all use-after-free is exploitable, of course, but it happened often enough that all browsers had to start hacking in special allocators to try to reduce the possibility of exploitation of use-after-frees (search for "frame poisoning").

                  Obligatory disclaimer: these are small code samples. Of course nobody would write exactly these code examples in practice. But we do see these issues in practice a lot when the programs get big and the call chains get deep and suddenly you discover that it's possible to call function foo() in one module from function bar() in another module and foo() stomps all over the container that bar() was iterating over. At this point claiming that C++ is memory safe is the extraordinary claim; C++ is neither memory safe in theory (as these examples show) nor in practice (as the litany of memory safety problems in C++ apps shows).

                  • zanny 12 years ago

                    A lot of this just looks to be lacking const correctness. If you declared most of the mutable types const (and the use cases for a non-const unique_ptr are few) you can avoid most of these issues.

                    I think it is a valid criticism of the language that non-primitive types aren't implicitly const, though. But you could never implement that without colossal backwards-compatibility breakage. Which I guess is fine, since you could just keep a codebase one -std= behind until you fixed it.

                    > Use after move: obvious. Undefined behavior.

                    This I don't have an answer to though. I've always disliked how this isn't a compiler error.

                    • pcwalton 12 years ago

                      You can return out references and still get dangling pointers with const values. For example, you can return an iterator outside the scope it lives in and dereference that iterator for undefined behavior (use-after-free, possibly exploitable as above).

                      Besides, isn't "C++ is memory safe if you don't use mutation" (even if it were true—which it isn't) an extremely uninteresting statement? That's a very crippled subset of the language.

                    • bjz_ 12 years ago

                      > If you declared most of the mutable types const (and the use cases for a non-const unique_ptr are few) you can avoid most of these issues

                      Mutability in Rust is perfectly safe because of the static checks built into the type system – the compiler will catch you if you screw things up.

                      > you could never implement that without colossal backwards compatibility breakage

                      I cannot express how important immutability as default is. This prevents the issues that C++ has with folks forgetting to mark things as const. There is also lint that warns when locals are unnecessarily marked as mutable, which can catch some logic errors (I say that from experience).

                      Also note that I said 'immutability' not 'const'. Immutability is a far stronger invariant than const, and therefore is much safer. It could also lead to better compile-time optimisations in the future. I'm sure you know this, but just in case:

                      - const: you can't mutate it, but others possibly can

                      - immutable: nobody can mutate it
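
                      A small illustrative sketch of immutable-by-default (variable names are mine):

                      ```rust
                      fn main() {
                          let x = 5;
                          // `x += 1;` here would be a compile-time error:
                          // bindings are immutable unless marked `mut`.
                          let mut y = x;
                          y += 1; // mutation must be opted into
                          assert_eq!(y, 6);

                          // While this immutable borrow of `x` is alive, the
                          // compiler guarantees nobody mutates it: a stronger
                          // promise than C++ const, which restricts one alias.
                          let r = &x;
                          assert_eq!(*r, 5);
                      }
                      ```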

                  • JoeAltmaier 12 years ago

                    Right; the STL sucks. So it's work, but you can make ref-safe containers, even thread-safe ones. We do that; we do audio rendering with audio-chain editing on the fly, with no memory issues. It takes care, more care than other languages. But it's far from unsolvable.

                    • scott_s 12 years ago

                      And the philosophy of Rust is, what if we encoded that "care" into the language itself? That, to me, is a clear win. It is, to me, good systems language design: codifying decades of hard earned "best practices" into the language semantics itself.

                    • dbaupp 12 years ago

                      Of course it's possible to write correct C++ code, just like it's possible to write correct assembly code. The point is the extra care required: every piece of code needs to be very carefully authored to ensure it's correct, to avoid the myriad pitfalls.

                      • JoeAltmaier 12 years ago

                        Or you can just trust the language. And if it's not right, or not the way you plan to use it, what then? You're stuck unless the language also permits you to roll your own.

                        • dbaupp 12 years ago

                          Rust does allow you to implement low-level things in itself, by giving an escape hatch into C/C++-like unsafe code (i.e. risk-of-incorrectness is purely opt-in, rather than always-there).

                          Examples of things efficiently implemented entirely in the standard library in pure Rust (well, with some calls into the operating system/libc): Vec, the std::vector equivalent. Rc, reference counted pointers (statically restricted to a single thread). Arc, thread-safe reference counted pointers. Mutex. Concurrent queues. Hashmap.

                  • detrino 12 years ago

                    Use after move by itself is not undefined behaviour.

                    • pcwalton 12 years ago

                      It is for most of the important types that people move; e.g. unique_ptr (results in null dereference).

              • steveklabnik 12 years ago

                You should write something up with actual code samples. I would also love to actually demonstrate this to people.

              • bjz_ 12 years ago

                I agree with Steve here - in this instance a catalogue of code examples would be a great deal more compelling than natural language explanations.

          • angersock 12 years ago

            They're talking about games in C++, not random C++ apps. Oranges to apples.

          • CamperBob2 12 years ago

            Selection bias much? The 500,000 C++ applications that have never blown up in somebody's face aren't discussed on Hacker News.

            • dbaupp 12 years ago

              Maybe most C++ applications are low-value as attack targets, so no-one has bothered to find all the corner cases that make them blow up.

              The fact that applications like browsers and operating systems (which are known to be high value targets) have a lot of effort & resources put into security but still have attack vectors makes the "C++ is secure" position fairly indefensible.

            • bjz_ 12 years ago

              pcwalton mainly works on web browser development (Servo), which whilst sharing some goals with game development, also differs in some respects. Although online security is more and more important in games these days, the real appeal of Rust in respect to game development is in providing an alternative to the 'death by a thousand cuts' that can plague large C++ projects.

              I've posted a list of the things I consider the most relevant to game development: https://news.ycombinator.com/item?id=7587413 Any one or two of them alone wouldn't really be a compelling enough reason to switch, but put together they form a very compelling value proposition.

        • zerooneinfinity 12 years ago

          It seems like it's hard to say if Rust really would have eliminated those bugs, the reports are vague and the ones that aren't, e.g. fixed framerate issues, would be an issue either way. My argument isn't solely get good developers and be done with it. It's a combination of things, and one of the most important things is getting good practices in place. I don't know if EA did this but having a good auto-test system in place probably would have caught those crash bugs and prevented the server issue, for example.

          • bjz_ 12 years ago

            > one of the most important things is getting good practices in place

            That is really important, but still, wouldn't it be better if you could encode at least some of those good practices into the language itself, rather than relying on humans to be constantly on their game? I'm certainly not perfect, so I would rather my sloppiness be caught earlier rather than having it come back to bite me in the future. See: http://thecodelesscode.com/case/116

pjmlp 12 years ago

The PVS-Studio guys have published a static analysis of the code quality:

http://www.viva64.com/en/b/0249/

archagon 12 years ago

On the other hand, I feel that tools like Unity are successful precisely because they let you tinker with your game in a WYSIWYG kind of way. It's really liberating to be able to work inside such a tight feedback loop, and it's a little weird to see a modern game engine distancing itself from that approach.

(But maybe I'm missing something. What exactly is the role of the Unreal Editor in UE4? Is it mostly for things like graphics and sound?)

EDIT: OK, so apparently UE4 has something called Blueprints. I'm still not exactly sure what they are, but people in the thread are saying that they're superior to C# in Unity, and that they can even allow you to make a game without knowing how to program. So why is Tim Sweeney saying that C++ is replacing UnrealScript for gameplay code?

  • admiun 12 years ago

    There's actually an interesting demo of the blueprints feature in the toolset:

    https://www.youtube.com/watch?v=9hwhH7upYFE#t=384

  • k00pa 12 years ago

    You can use Blueprints to create a game, and you can also use C++.

    You can also mix Blueprints and C++.

    At this point Unity really only has simplicity going for it, because of its lack of features, and because some people are too scared of C++.

  • Guvante 12 years ago

    > So why is Tim Sweeney saying that C++ is replacing UnrealScript for gameplay code?

    Blueprints are a weird entity, not really code, but definitely similar.

    I think the major difference is that they are heavily event based and probably have severe performance restrictions placed on them. If you want a real time-slice you need to do it in C++.

  • CmonDev 12 years ago

    Can you make new Blueprint building blocks using Blueprint itself? Also it is still programming, just a visual kind of it.

nly 12 years ago

Maybe UnrealScript was just too damn complex. Having not used it, and only Googled it, my first reaction was "this looks just like C++ anyway". What language features does it have that are specifically catered to games? It doesn't seem very DSL-y.

  • barrkel 12 years ago

    It had synchronous-style code for handling animation logic, without needing hundreds of threads.

    It also had some clever ideas about replicating state across the network. You want to run the simulation locally to reduce latency, but you also need it to run elsewhere to have a consistent source of truth. So some state would be calculated by the local simulation, but be updated when packets of truth arrive. Member variables could be annotated according to how they were replicated, IIRC.

  • jeremiep 12 years ago

    Time and State are first-class constructs in UnrealScript; they are not in C++.

    It's also possible to recompile UnrealScript files without recompiling the whole C++ program which took quite a while. (Unreal4 allows hot reloading C++ so this is no longer an issue.)

  • snarfy 12 years ago

    There were a few constructs for dealing with scripts running on the client vs. the server, and data replication across the network was also built into the language. This is where it differed from being just a scripting language.

    It was confusing at first. Besides the official docs I had to look at their script source and old UT2 tutorials to figure it out.

    I haven't seen the new engine, but I imagine simply doing net.replicate(&player_info) would have been more straightforward than dealing with all of the UnrealScript language constructs for it.

pjmlp 12 years ago

The whole C# vs C++ discussion going on the forum shows how little current generations understand of compiler design and language implementations, oh well...

  • berkut 12 years ago

    Exactly.

    What's most distressing is people (as usual) are completely ignoring the garbage collection overhead, which is mostly where the advantage of complete control lies: micro-managing your memory allocation, e.g. using slab allocators, memory pools, pre-allocation, etc.

    C# code in theory (ignoring things like intrinsics support and inline asm) can be as fast as C++ for tight loops, but in my experience (writing 2D/3D software for the VFX industry), 85%+ of the time if you profile something, it'll be the memory allocation that's killing performance.

    • Pxtl 12 years ago

      C# has support for stack-allocated "struct" objects that avoid the GC. They have their limitations and gotchas, being somewhere between simple C structs and C# classes, but they exist.

      GC-based languages run games on many, many platforms. The problem, imho, is that you have to leave 90% of the language features on the shelf when you're doing your main loops in order to avoid triggering the GC.

      The gaming industry is practically begging for a language like Rust.

      • pjmlp 12 years ago

        I am also looking forward to the .NET Native C# compiler release.

        Mostly as a way for young generations to finally grasp GC/memory safe != VM, as they seem to have been brainwashed since Java became widespread.

      • ajanuary 12 years ago

        I feel like I should point out that the stack is an implementation detail [1]

        Though in this context it's relevant to point out that, while pretty much any implementation will stack-allocate them, it's not strictly accurate to say C# has stack-allocated objects.

        [1] http://blogs.msdn.com/b/ericlippert/archive/2009/04/27/the-s...

        • Pxtl 12 years ago

          Regardless, C# has a linguistic feature that provides objects with stack-like performance and copy-by-value semantics familiar to C developers.

      • Guvante 12 years ago

        There are heavy rumors going around that a new version of C# with a more well-defined memory model is in the works.

    • determinant 12 years ago

      It doesn't have to be an all or nothing proposition with C# or C++ anymore if Windows is your target environment. You can write the portions you want to write in C# and mix it in with C++/CLI. Yeah, I know, a lot of people think C++/CLI is ugly, but in the .NET world, it is a very clean glue language if you can get past the strangeness of having two different type systems (native and managed.)

      You can essentially choose and manage how you want to deal with memory by virtue of how you choose between native and managed types, as well as control the behavior of the garbage collector itself.

    • pjmlp 12 years ago

      Note that Unreal uses a C++ GC.

      • m_mueller 12 years ago

        I'm neither an Unreal nor a C++ expert, but I assume that this kind of GC still allows exact control over when collection happens, since it's a library feature rather than a language feature. In that case it should still be better for game engine purposes, since the collection can, for example, be hidden behind GPU rendering time.

      • k00pa 12 years ago

        Totally different from C#'s GC.

        The GC in Unreal Engine is used to determine when an entity should be removed from the game (e.g. if you have multiple entities pointing at each other).

        Memory allocation still works the way it normally does in C++.

        • pjmlp 12 years ago

          Well, since C++11, C++ also has a standard minimal GC API, although only VC++ supports it currently.

          Thanks for the clarification, as I only knew there was a GC from gamedev articles, forums, without much details.

XorNot 12 years ago

I remember back when Descent 3 was being developed; the devs must've run straight into this problem, since pretty late in the development cycle they suddenly dropped their scripting language for pure C++ libraries for scripting levels.

danso 12 years ago

> Developers seeking to take advantage of the engine's native C++ features end up dividing their code unnaturally between the script world and the C++ world, with significant development time lost in this Interop Hell.

Replace "C++" with "JavaScript/client-side processing" and "script" with "server-side scripting" and I feel like this adequately describes web-development.

  • matthewmacleod 12 years ago

    Completely disagree. Server-side and client side are two completely different environments — different architecture, different security concerns, different user interfaces. Indeed, there's pretty much nothing shared.

    Attempts to hide this have thus far all been leaky abstractions, and we're still in the research phase (e.g. Meteor). I'm not convinced that it will be possible to create a coherent web environment which abstracts the server-client boundary effectively.

    Note that this doesn't preclude e.g. using Javascript as a server-side language. That's not related.

  • pjmlp 12 years ago

    You also need to add the dark magic tribal dance of making CSS/HTML/JavaScript work in a coherent way across all target browsers in mobile, desktop, TV, settop boxes and whatever else...

    At least that is how it feels for guys like myself that lack designer skills.

    • darylteo 12 years ago

      I've yet to meet a designer who can do the above. It's very much black magic regardless of who you are.

  • CmonDev 12 years ago

    JavaScript only survived on the client because we have to support legacy code across different platforms. There is no reason to pick the same crappy language for the server side as well, unless you happen to be more familiar with it than with modern languages.

    • m_mueller 12 years ago

      There is at least one reason I can think of: Code sharing with mobile client implementations.

  • dkersten 12 years ago

    Replace "C++" with "server-side processing" and "script" with "client-side scripting" and I feel like this adequately describes web-development.

    FTFY

    On a serious note:

    I've recently started using Clojure on the backend and ClojureScript on the frontend and while its not quite 100% of the way there, its close and quite pleasant to work with.

syncsynchalt 12 years ago

Inner Platform Effect: "the tendency of software architects to create a system so customizable as to become a replica, and often a poor replica, of the software development platform they are using" - http://en.wikipedia.org/wiki/Inner_platform

Seems an appropriate term here.

leoc 12 years ago

The upshot is likely that a community-created Lua or JS binding will gain a significant userbase.

  • TillE 12 years ago

    That's fairly likely, and there are several of those projects underway in the forums. Unfortunately the engine is filled with weird macros and their own reimplementation of STL-like types, so any full binding is going to require quite a lot of wrapping.

    I think the more effective solution for most developers will be to keep all the engine-adjacent code in C++, but integrate a scripting engine of your choice just with your game logic.

  • CyberShadow 12 years ago

    Note that you'll need a language capable of AOT compilation if you want to target consoles which only run signed code (or use an interpreter, which will be rather slow). JIT is not an option on those systems.

    • dkersten 12 years ago

      The LuaJIT interpreter is surprisingly fast. Still not as fast as the JIT or an AOT compiled language though...

      • NickPollard 12 years ago

        His point though is that JITs (like LuaJIT) are often not allowed on consoles for security reasons - they don't allow running of unsigned code. So LuaJIT might not be an option.

        • dkersten 12 years ago

          As myrmidon said, LuaJIT ships with a very fast interpreter.

          What I was trying to say is that, while it's not as fast as a JIT or AOT compilation, it is still extremely fast (many times faster than the reference Lua implementation, apparently).

        • myrmidon 12 years ago

          LuaJIT contains a pretty fast interpreter, thus it makes sense to use it on consoles (with the JIT turned off by compile-time-flags) instead of the Lua reference implementation.

  • CmonDev 12 years ago

    "JS" - they want a better language, not worse one.

kayoone 12 years ago

Unity runs well with a scripting approach, and for everybody for whom that's not enough, you can still get the full source license. I think Unity did it quite cleverly in that the engine itself is C/C++ and all the interfacing with the engine is done through C#/Mono.

  • thefreeman 12 years ago

    Unity caters to new game developers though. I'm not saying you can't make real games with it, but I'd be surprised if it were used for AAA titles, and definitely not as much as Unreal Engine.

    Also, I found this comment from later in the thread really interesting, about how easy it is for hackers to abuse the reversibility of managed (.NET) code. Granted, this could just be due to poor design on the developer's part, but giving hackers that kind of insight into the game's design cannot be helping anything.

    So with that, This could be a unity issue, mono issue, or just the game developers issue. For background reasons, I am a local memory hacker, tho the reason I am here on UE is im teaching myself how to develop games not for hacking purposes. The game im going to talk about is the only game I know that is MP only and uses the unity engine. This game has one big problem when it comes to hackers, what we do is simply edit the .net dll's to manipulate the game, no hooking, no debugging, no working out functions in assembly, it also meant that we could reverse the source to pretty much 100% usable source - this resulted in the end user being able to change things like the user id to stop them being able to be banned. Un-Ban able hackers? its destroying this game. [1]

    [1] - https://forums.unrealengine.com/showthread.php?2574-Why-C-fo...

  • ajanuary 12 years ago

    Unreal Engine 1 - 3 had a similar approach. The linked post describes why they no longer feel that approach is beneficial for them.

    [Edit] Is Unity's approach drastically different in a way that solves them?

daenz 12 years ago

This was refreshing. I'm struggling with the same problem...I've embedded Lua in my C++ engine for high-level scripting. Unfortunately, as my scenes became more and more complex, I found myself struggling with representing the inheritance hierarchies in Lua, as well as things like object ownership/gc (resorting to passing around shared_ptrs in Lua userdatas). And for each new data type, I had to write the same old C++ boilerplate to make it available in Lua the way I needed to. The complexity is getting to be too much.

I think this article is going to push me to strip out the embedded Lua from the engine and use plain old C++ as well. Great read!

  • djur 12 years ago

    Embedded scripting languages seem to work best on what I'd call a 90/10 model. Either your game is written in the scripting language inside a thin compiled shell (to provide access to system routines and the occasional optimization), or your game is written almost entirely in the host language and just configured using scripts. So: 90% Lua, 10% C++ or vice versa, but not 50/50 or the like.

    In general, it seems to be a red flag if a single thread of logic runs back and forth between host and scripting contexts.

    • yoklov 12 years ago

      Alternatively, write most of it in C++ and allow it to be extended with components written in script, ECS style.

      Or something else.

      Honestly, having too much script is very much a thing to try and avoid. Perf (GC, etc.), long-term maintainability (usually no static types in script), and tooling (frequently no debuggers or profilers, or the ones that exist are low quality) are all reasons for this.

      If you're going to have 90% lua, you probably should be writing the game in lua anyway...

golergka 12 years ago

Right now I sit through the second week of non-stop iOS crash logs of our Unity game, which are opaque, unclear and mysterious for the most part, and I can't agree more.

Guthur 12 years ago

From my limited experience of UnrealScript, it was a pretty rough implementation of a language; it really didn't seem to provide any real benefits.

What I would have liked to see was better support for interactive development. When I was playing with it I still had the edit/save/compile/run loop to see my changes, that's not what I want. I use Common Lisp extensively and appreciate the power of a REPL, it's available in many languages now. This brings me to the next point of providing powerful reflection support so that I can easily explore the state of the application, as well as better debugging tools.

UnrealScript was just not a very good language implementation in my opinion.

So in my opinion removing it and exposing the C++ is a good thing. But not for the same reasons that most are touting, I don't want to code in C++. But now it should be a lot easier to build a reasonable Lisp on top of Unreal Engine 4, which is what I really want.

anoplus 12 years ago

A (programming) language gets very powerful simply by being accepted as a standard. Take English as an example.

jokoon 12 years ago

finally.

Now, to make the gameplay programmer and level designer's job easier, you still have to build well made and documented building blocks.

It's either that or hire people who actually know how to speak a statically typed language. Might be good news for the job market; I always found it weird to have people making games who were not really competent in programming. It always baffled me.

I guess you can still force people who are unable to write good C++ to write C++ anyway and hire somebody else to valgrind everything. In the end, using a statically typed language is more a requirement for performance, clarity and consistency than a lack of flexibility.

Setting the bar high or demanding discipline if you prefer. Computers are stupid, so you need to be precise when you work with them.

  • Guvante 12 years ago

    Their blueprint system is phenomenally powerful without including any actual code.

    Heck it even allows live preview of how it is executing.

chris_wot 12 years ago

Looking at the comments around the post, the number of people who don't really understand pointers but who seem to be Unreal developers is a little surprising!

  • mattnewport 12 years ago

    But then, the number of people who don't really understand pointers but who seem to be C/C++ developers is a little surprising...

Aardwolf 12 years ago

Can you script actors and such with C++ from within UnrealEd then?

  • pjmlp 12 years ago

    As far as I understand, yes with the blueprints graphical system.

thomasahle 12 years ago

The idea:

> It is ... more dangerous than UnrealScript, C#, and JavaScript. But that is another way of saying that it's more powerful.

is why we can't have nice things.

hyp0 12 years ago

Am I cynical? The recent changes in Unreal 4 make me think the company is in extremely serious trouble. It's because they aren't addressing the key reason people buy a graphics engine (viz. graphics), but all this ancillary stuff. While it is important, it's off center...

But maybe they are just trying to fend off Unity (open source, a more coherent experience). Usually when companies do this, it's too late. I've no idea if that's the case here.

  • k00pa 12 years ago

    Haha.

    Note how many games have come out using UE3, think about the royalties coming in.

    Also, right now they have just about the best game engine; only hobbyists and some indies would choose Unity, mostly because it's easier.

    UE4 will definitely get several high profile games soon from big AAA companies.

    I would say that Unity is in deep trouble right now :P

    • hyp0 12 years ago

      note that disruption comes from below, not from big AAA titles...

      • untog 12 years ago

        note that 90% of the profit comes from big AAA titles, not from below.

        ... I get what you're saying, but I think the Unreal Engine is going to continue to be a cash cow for a good while yet.

        • hyp0 12 years ago

          Digital had bumper profits just before they went under - it's typical, as they go upmarket, to get greater profits. But I'll accept that you get what I'm saying. :)

          Another data point: id was killing it (and, to us, Carmack remains undisputed). But they completely lost out the next generation, to Unreal, because of vehicles. I don't think that'll happen here, just disclaimin' past performance is no guarantee of future success.

          Though, to be fair, the Quake engine underlay Call of Duty, the most successful franchise (I believe), and it really showed in the framerate. Note: no vehicles. And yes, past tense.

          • k00pa 12 years ago

            They didn't even open the latest id Tech to the public... They didn't even go to the game.

            It has nothing to do with vehicles.

            • hyp0 12 years ago

              It was an example of losing dominance, not to do with open.

              Re: vehicles. I'm going by what Carmack said in an interview.

              • k00pa 12 years ago

                That sounds really weird, do you have a source for that interview?

                • hyp0 12 years ago

                  I agree it sounds weird!

                  He was talking about the time around Unreal Tournament 2004; in that generation, it was a big thing (kinda like waving grass was at one time).

                  Sorry, I don't recall which interview (and it would be hard to google unless there's a transcript). It might have been one of his keynotes, perhaps the one with Rage on an iPhone. I'm pretty sure it was a long one (at least 1.5 hours). It was one of the big popular videos on HN/r/programming (not an esoteric one).

  • 10098 12 years ago

    Last I checked Unity was a closed-source proprietary engine that I had to pay money for.

    I'd say they should start worrying about Unity when something like Unreal Tournament 2004 (a decade old game btw) comes out built with Unity.

    • zanny 12 years ago

      Maybe he means the Godot engine? I've heard a lot of newer kickstarters are eyeing it since it went FOSS.
