Programming language ray tracing benchmarks project
Ordered by real time, fastest to slowest, for those like me who got annoyed by scrolling up and down trying to compare:
Rust (1.13.0-nightly) 1m32.392s
Nim (0.14.2) 1m53.320s
C 1m59.116s
Julia (0.4.6) 2m01.166s
Crystal (0.18.7) 2m01.735s
C Double Precision 2m26.546s
Java (1.7.0_111) 2m36.949s
Nim Double Precision (0.14.2) 3m19.547s
OCaml 3m59.597s
Go 1.6 6m44.151s
node.js (6.2.1) 7m59.041s
node.js (5.7.1) 8m49.170s
C# 12m18.463s
PyPy 14m02.406s
Lisp 24m43.216s
Haskell 26m34.955s
Elixir 123m59.025s
Elixir MP 138m48.241s
Luajit 225m58.621s
Python 348m35.965s
Lua 611m38.925s

I rewrote the Go benchmark to be a mechanical translation of C and it performs much better.
So Go is only twice as slow as C, not thrice as slow. This puts it just ahead of Java and just behind Julia.

C (gcc -O3): 23.8s
Julia (julia 1.1.0): 32.8s
Go (alt) (go 1.12): 39.3s
Java (java 1.8.0_60): 44.2s
Go (org) (go 1.12): 64.8s
OCaml (ocaml 4.07.1): 79.1s
JS (node 11.14.0): 137.0s
Pypy (pypy 6.0.0): 139.4s
C# (mono 4.2.1): 187.3s
Rust: DOES NOT COMPILE

Note that the Java impl is creating objects all over the place in the inner loop - madness!
I'm sure a "mechanical translation of [the] C" version would improve things for the Java ver as well. If we removed startup costs (the class file validation, etc) I'd expect it to be on par with C.
The startup costs are fixed (they don't increase linearly with the number of loop iterations) and for such a small program, they could not reasonably explain more than 1s of the 20s gap between C and Java. Also, I don't think it's the "allocating objects in an inner loop", because Java's allocations are super cheap (bump allocations) if the escape analyzer doesn't keep them on the stack in the first place.
That said, after examining this benchmark further, I don't think it's very good since the sequences returned by the random number generators are not controlled for (each implementation uses its own standard library RNG with their own seeds, so the sequences will vary from language to language). This likely causes more loop iterations, but considering the loop termination condition, the theoretical distribution of RNG outputs, and the trivial work done in the loop body, I doubt that the delta in loop iterations can explain any significant portion of the gap. Rather, I think the gap is simply a difference in performance of the RNGs themselves--C and Rust use a poor man's RNG (xorshift) which performs very well for this exercise but is not a good general purpose RNG (and standard library RNGs are optimized for the general case). When I rewrote the Go version, using the xorshift implementation made the most significant impact (15s), although I'm not 100% sure that the output of the RNG isn't just causing it to run the RNG less frequently. I opened up this ticket against the project: https://github.com/niofis/raybench/issues/15.
Julia was astonishing. It's a high level language that's performing almost like C.
Last time I checked, many years back, the spec was changing and the runtime would crash. Guess it has come a long way since.
The other one is Lua. My assumption was that it's one of the lightest and fastest languages around. Looks like "fastest" isn't true in some cases.
Different languages' benchmarks might not be equally well-written / optimized. In particular, I'd expect C and Rust to be very close to each other, and a 20% gap between them is a red flag.
Rules like "code should be simple, as in, easy to read and understand" are also hard to judge, especially near the top of the list where there's a lot of pressure to optimize. Is SIMD easy to understand? What if it's in a library? What if the library was written specifically for this benchmark? Etc. I think https://benchmarksgame-team.pages.debian.net/benchmarksgame/ has to deal with every possible permutation of this debate.
Not necessarily. Because C allows the pointer manipulation, the compiler can in general not make assumptions about pointer aliasing. This prevents some optimizations.
In Rust, the compiler has more information/control over memory layout/lifetime and can therefore make stronger optimizations.
Automatic vectorization is an area where this helps a lot, and raytracing can benefit a lot here. 20% sounds reasonable to me.
Well, generally, perhaps. But any performance oriented C programmer worth his or her salt would be aware of aliasing issues and write code in such a way that it doesn't cause problems for the compiler. Plus, this is a toy benchmark of a few hundred lines so the compiler can do full-program analysis. So the 20% difference is indeed a smell.
Looking at the crb*.c files, structs are passed as pointers and not by value. This makes it harder for the compiler to analyze the data flow which I would bet is part of the reason Rust is faster here.
> pointer aliasing
Unfortunately, due to LLVM bugs, the Rust developers had to disable that optimization, more than once. I don't know whether the "1.13.0-nightly" he used has that optimization enabled or disabled. (See https://github.com/rust-lang/rust/issues/31681 and https://github.com/rust-lang/rust/issues/54878 for the relevant Rust issues.)
That's a good point, I didn't think about the effect on autovectorization. Do you think that's what's happening here? My impression was that getting good vector code out of the compiler usually requires manually tuning things.
shouldn't rust being faster than C be something of a red flag that they aren't quite the same algorithm? Or that the algorithm is sub-optimal?
C and Rust have been trading blows on the language benchmark games for a while now which dictates the algorithm used. From my experience, it's relatively easy to accidentally write fast Rust, but incredibly hard to write fast C.
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
It got me thinking as well, so I ventured and did some experiments on this, and found that the main difference is the algorithm used for the RNG; C's standard library uses a slower one (which is also thread-safe, and butchered OpenMP performance). You can take a look at a more apples-to-apples comparison in the latest update of crb.c, which uses a xor128 RNG; Rust is still a little faster (especially when going multithreaded), but not quite the difference in the README file. I still need to find some time to update it.
fwiw, I looked at some of the quicker c/rust examples, without too much other analysis
and because we have a number of tiny single cpu vm's out there (which would also benefit from a performant language) I gave it a shot there:

crb-vec-omp // I added some #pragma omp to crb-vec
executable size: 18k
time: real 0m3.630s
valgrind:
==17703== HEAP SUMMARY:
==17703==   in use at exit: 7,408 bytes in 15 blocks
==17703==   total heap usage: 20 allocs, 5 frees, 14,790,856 bytes allocated

rsrb_alt_mt.rs
executable size: 426k
time: real 0m1.630s
valgrind:
==7221== HEAP SUMMARY:
==7221==   in use at exit: 43,120 bytes in 216 blocks
==7221==   total heap usage: 256 allocs, 40 frees, 11,113,784 bytes allocated
so rust appears broken on centos 7.5 (no, I'm not going to edit the binary). But that is an insta-deal breaker for us.:

~# time ./rsrb_alt_mt
./rsrb_alt_mt: /lib64/libc.so.6: version `GLIBC_2.18' not found (required by ./rsrb_alt_mt)
real 0m0.002s
user 0m0.002s
sys 0m0.000s

~# time ./crb-vec-omp
real 0m24.234s
user 0m24.160s
sys 0m0.035s

Have you tried passing everything by value? That is, instead of:
bool hit_sphere(const struct sphere* sp, const struct ray* ray, struct hit* hit)

you write:

static bool hit_sphere(struct sphere sp, struct ray ray, struct hit hit)

IME, clang is insanely good at optimizing pass-by-value calls.

fwiw, I took a stab at replacing a bunch of -> with . and ran it:

time ./crb-vec-omp
real 0m0.764s
which is more than twice as fast as the rust example,
but it didn't create the right output...
if you get bored would you mind taking a stab at adding parallel and modifying crb-vec.c https://github.com/niofis/raybench , I definitely think you might be on to something here.
shouldn't rust being faster than C be something of a red flag that they aren't quite the same algorithm? Or that the algorithm is sub-optimal?
The difference isn't much. And Rust is more like FORTRAN. Maybe a bit faster than C, but can't do the gymnastics with pointers that C can.
> can't do the gymnastics with pointers that C can
It can if you write the "unsafe" keyword, but there's a pretty strong community norm around not doing that sort of thing, unless you can encapsulate it inside some sort of safe API. And to be fair to C, I think C can close the gap with Rust/Fortran if you use the "restrict" keyword a lot?
With unsafe, you can do anything that C can.
Without unsafe, there’s significantly more aliasing information, which helps optimizations.
Rust is compiled by LLVM, while C here is compiled by GCC, which is a bit conservative. It's possible to enable the same optimizations for GCC and LLVM, so that their speeds match.
Wasn’t Julia specifically designed to be easy to optimise? It’s not quite like other high-level languages, as its designers thought about performance first.
That's using version 0.4 of Julia too (current is 1.1). The current version has a lot of improved optimization passes that would likely benefit this benchmark.
That's a big jump between OCaml and Go. I'm not familiar with ray tracing, but skimming the source code it mostly looks like it's doing floating point math; it doesn't look like it's using the runtime (no allocations, no virtual function calls, no scheduling, etc), so I'm surprised that Go is performing relatively poorly.
I wonder if the performance gap is attributable to some overhead in Go's function calls? I know Go passes parameters on the stack instead of via registers... Maybe it's due to passing struct copies instead of references (looks like the C version passes references)? Generally poor code generation?
Anyone else have ideas or care to profile?
EDIT: From my 2015 MBP, Go (version 1.12) is indeed quite a lot slower than C, but only if the C is an optimized build (`-O3`):
EDIT2: I re-modified the Go version (https://gist.github.com/weberc2/2aed4f8d3189d09067d564448367...) to pass references and that seems to put it on par with C (or I mistranslated, which is also likely):

tmp $ time ./gorb
real 1m15.128s
user 1m9.366s
sys 0m6.754s

tmp $ clang crb.c
tmp $ time ./a.out
real 1m13.041s
user 1m10.284s
sys 0m0.624s

tmp $ gcc crb.c -o crb -std=c11 -O3 -lm -D_XOPEN_SOURCE=600
tmp $ time ./crb
real 0m22.703s
user 0m22.550s
sys 0m0.073s

tmp $ clang crb.c -o crb -std=c11 -O3 -lm -D_XOPEN_SOURCE=600
tmp $ time ./crb
real 0m22.689s
user 0m22.564s
sys 0m0.060s

$ time ./gorb
real 0m19.282s
user 0m14.467s
sys 0m7.523s

There's a variety of possibilities. Lerc mentions GC as one possibility, which could definitely be the case. Another one that would be high on my "first guess" list is that everything above it has much better optimizers, and raytracing code is one of the places this is really going to show. Go does basically very little optimization, because it prioritizes fast compilation.
(Where Go "wants" to play is that same benchmark, except including compilation time.)
A couple of the things below Go I suspect are bad implementations. I would expect a warmed-up C# to beat Go if both have reasonable (not super-crazy optimized implementations) or at least be at parity, and Luajit may also be a slow implementation. In both cases because ray-traced code is a great place for a JIT to come out and play. EDIT: Oh, I see C# is Mono, and not the Windows implementation. In that case that makes sense.
Oh, and I find it helpful to look at these things logarithmically. I think it matches real-world experiences somewhat better, even though we truly pay for performance in the linear world. From that perspective, it's still only the second largest. The largest is Haskell to Elixir, which is substantially larger. O'Caml->Go is large, but not crazily so; several other differences come close.
There are multiple ”levels” of performance in play here, and which level a language performs on depends on the language, runtime and implementation.
The most naive level is e.g allocating heap objects for vectors, rays etc. On that level the algorithm is probably bounded by pointed chasing, cache misses and GC.
The next level up is an allocation-free loop (at least)
The best level is an optimized and allocation free. If the implementation isn’t allowed to optimize (use SoA instead of AoS, manually vectorize, unroll etc) then the winning languages will be the ones that have sophisticated compilers such as those with LLVM backends.
As an example: The C# example should be on the second level here - but it has a poor implementation (looks like it’s ported from java or written by a java developer) so it’s actually stuck on the first naive level.
Like I responded to Lerc, I don't see any allocs in the hot path here.
Also, as I edited, I updated the Go version to pass by reference and that put it on par with C (and also per my update, I may have mistranslated somehow).
Profiling results:
- initial time: 49s
- replacing default thread-safe RNG with rand.New sliced 6 seconds off. (the default RNG uses mutexes), = 43s
- use float64 instead of float32 and remove many type conversions. Another two seconds off. = 41s.
As others suggested, go still lacks many compile time optimizations and the implementation could be improved.
I did some profiling too--I shaved off 10-15s by using the C xorshift implementation instead of rand.Float32() (which spends a lot of time locking a mutex).
I'm not familiar with go but I seem to recall it is garbage collected.
If so it may be something to do with the creation of new vectors on the heap instead of the stack. The compiler would have to determine the full lifetime of the vector value to be able to bump it to the stack. That's an optimization, and sometimes it's just not possible (but probably is here).
In the C instance no such optimization is necessary. They ask for it on the stack, and if you try to use it after the stack is gone, Bad Things™ happen.
A hypothesis to test would be that languages above the jump are managing to work on the stack and ones below are allocating objects on the heap.
Is it creating vectors in the hot path? I'm not seeing it. Go does some escape analysis (the optimization you're referring to), but it's pretty conservative.
All of the vector operations return new vectors instead of mutating.
A sensible approach for maintainable code but without knowing if a and b are used elsewhere the compiler can't reuse the space they occupy. If C can't figure it out, it can just stick a new one on the stack which doesn't cost too much.

func v3_add (a, b v3) v3 {
    return v3{x: a.x + b.x, y: a.y + b.y, z: a.z + b.z}
}

This is, of course, Rust's bread and butter, which is probably why it takes the top spot.
Those are copied on the stack, not heap allocated, so the GC wouldn't come into play.
Probably because the Go version is different; compare the C vector operations with the Go vector operations. The C version operates on pointers without allocating a new vector, while the Go version allocates a new vector on every op.

EDIT: Look at the C code's assembly: it's generating mostly SIMD instructions and using xmm registers. That's why it's faster. The Go compiler still doesn't have autovectorization, which is why it's so much slower in this case.

EDIT2: It seems the Go version also uses SSE here, which is nice. So the unnecessary allocation from my original post was probably the reason.
I modified the Go version to pass references (see my second edit) and that made up the difference (or I mistranslated).
Ah, you replied to my comment before the edit, about unnecessary allocation in Go vec handling.
I'm 99% sure the Go version doesn't allocate any vectors; afaict, it's passing everything on the stack.
Python gets 20% faster if you use `__slots__` on the Vector class which is created and destroyed millions of times. It's still the second-slowest, but it's a nice improvement :P
I wouldn’t give much weight to those benchmark numbers at this point. Some of those language versions are quite out of date...
The numbers are indeed out of date. I re-ran the tests for some of the C and Nim programs using Nim 0.19.4 and gcc 7.3.0 on Windows 10. Here are the results:
The base C code is faster than the base Nim code. The optimised C code is significantly slower than everything else (!?). The Nim program that uses a threadpool is the fastest of these.

crb-omp     0m22.949s
crb         0m23.240s
crb_opt     0m31.404s
nimrb_pmap  0m8.828s
nimrb_fn    0m22.988s
nimrb       0m26.556s

I couldn't get the CPP versions to compile. I'll do the Rust programs some other time.
If you are using a current version of your C compiler and can consistently reproduce those numbers just by adding something like -O3, you should probably file a bug report. Optimizers can go wrong and sometimes pessimize things a bit, but an almost 50% slowdown from enabling optimizations would be treated as an important bug to fix. (Though people are saying that significant amounts of time in the benchmark are spent in random number generation and output, so maybe try to find out first if the problem lies in one of those.)
Ah, nevermind, (a) even the "non-optimized" C version uses -O3, and (b) the "C" and the "optimized C" programs differ not only in compiler flags but they are actually different source codes. Specifically, the "optimized C" version doesn't use the faster random number generator.
If you fix that, on my machine it's 17.3 seconds for the base C version and 13.4 for the optimized one, i.e., a 22% improvement from turning on the extra optimizations (-march=native and -ffast-math).
And for whatever it's worth, because some people love hating on GCC in favor of Clang, my Clang timings are 19.5 and 16.5 seconds, respectively.
It's surprising that the optimized C is slower; it's much faster on my machine. Like 20s (optimized C) vs 1m20s (unoptimized C).
I would imagine current versions would only give the newer, actively developed languages a boost, like Rust, Nim, Crystal, and Go. Java and C I don't see improving much.
The node version is very old. V8 has improved significantly since v6.
This times performance like this:
$ time ./crb
That means time spent writing the .ppm file is included. In the implementations I browsed, that is about a million print calls, each of which might flush the output buffer, and whose performance may depend on locale.
To benchmark ray tracing I would, instead, just output the sum of the pixel values, or set the exit code depending on that value.
Even though ray tracing is cpu intensive, it also wouldn’t completely surprise me if some of the implementations in less mature languages spent significant time writing that output because their programmers haven’t come around to optimizing such code.
The Go version spends 8 seconds writing data. This is probably par for the course for most implementations.
The haskell version can be made >= 3x faster by making the computations non-lazy, e.g.
-data Vector3 = Vector3 {vx::Float, vy::Float, vz::Float} deriving (Show)
+data Vector3 = Vector3 {vx :: !Float, vy :: !Float, vz :: !Float} deriving (Show)

This seems like a nice improvement, could you send a PR for it?
I don't think the C# time is representative. I suspect Mono is really slow here. I just ran it with VS 2015 in 1 min 24 sec.
If I'm not mistaken, Miguel himself said that Mono was meant for portability (can run on Linux) and not performance. Would be a far better test to use .NET Core as you could still run this test on Linux or any other place where Core runs.
The one exception to what I initially wrote is if Mono was used to compile to native binary (which is what Xamarin apps for iOS do: compile to ARM binary). You'll get very different results if you go that route but that's going to require different compiler options.
For some workloads Mono seems awfully slow. The compiler I'm maintaining at work takes about twice as long on Mono on Windows, and about four times as long on Mono on Linux, compared to .NET. I guess .NET Core would be comparable to or faster than .NET Framework, and similar on both platforms.
The C# implementation looks flawed (uses reference types for vectors etc). Using value types and .NET Core should give a much better result than that. Will try to remember doing a PR.
I've just tried out a bit with .NET Core 2.2.
Baseline of the non-multithreaded variant on my machine: 1m56s
Making Vector3 a struct: 1m3s
Making Vector3 a readonly struct: 1m1s
Making Hit and Ray a struct: 1m26s
Will test more tomorrow, I guess, but the most obvious change already yields a 2× speedup. This was also without any profiling, so I don't even know what I did there.
Tested this on the same machine with Ubuntu for Windows + old .NET Core 1.0. 3 min 20 seconds.
Interesting that Nim is slightly faster than C, considering that it compiles down to C.
That's possibly because of the "Code must follow best practices" restriction.
Oftentimes compile to C is "It's C Jim, but not as we know it"
You can write C as if it is a SSA VM or similar intermediate representation that leaves very little work for the first stages of the compiler.
That's correct; Nim's generated C code is not idiomatic.
I am surprised PyPy has such a huge lead over Python.
$ time python pyrb.py
real 348m35.965s
user 345m51.776s
sys 0m22.880s
$ time pypy pyrb.py
real 14m2.406s
user 13m55.292s
sys 0m1.416s

Raytracing is a pretty great place to apply PyPy: you have a very heavy loop that will hit the JIT.
I've certainly seen speedups like that on stuff like project euler code.
10-100x speedup seems normal for CPython vs well compiled code. I don't have experience with PyPy though, so in this specific case I'm not sure.
> rustc 1.13.0-nightly
That's an ancient version of Rust. Interesting that it is faster than C, though.
To be fair, the README.md seems to be three years old.
It would actually be quite interesting to see a comparison with all of the languages using more recent builds to see which ones are developing their performance.
Yeah, I've meant to update it and add more language implementations, but haven't really got the time. Might as well do so soon as almost all compilers/interpreters have new versions which I suspect have many nice optimizations.
Nim 0.14 is also from more than 3 years ago.
You should see a performance boost in the Haskell implementation by compiling with GHC's LLVM backend[0]. Another Haskell ray tracer ran 30 % faster than the native codegen this way[1].
[0]https://gitlab.haskell.org/ghc/ghc/wikis/commentary/compiler...
[1]http://blog.llvm.org/2010/05/glasgow-haskell-compiler-and-ll...
This is awesome! More good press for Nim.
There is a big variation in performance, some of which I find surprising. Do you know what exactly causes some languages to be so slow (e.g., small objects being created and garbage collected frequently)?
Did some testing and found that it boils down to three things: the RNG algorithm used in the standard library, forced use of double-precision floating point numbers (the case for OCaml and JavaScript), and, like you mentioned, memory management.
EDIT: forgot to mention the obvious things: compiler/interpreter maturity and inherent overhead.
Wonder how D would have placed.
Looking at the Julia implementation fast math wasn't used. In my experience it's usually worth experimenting with turning it on (also of course for the other LLVM based languages), though I understand that this benchmark tries to keep the program correct at all costs.
I looked over the Common Lisp version at https://github.com/niofis/raybench/blob/master/lisprb.lisp and it's… really bad, in a lot of ways.
(declaim (optimize (speed 3) (safety 0) (space 0) (debug 0) (compilation-speed 0)))
Never use `(optimize (safety 0))` in SBCL — it throws safety completely out the window. We're talking C-levels of safety at that point. Buffer overruns, the works. It might buy you 10-20% speed, but it's not worth it. Lisp responsibly, use `(safety 1)`.

(defconstant WIDTH 1280)
People generally name constants in CL with +plus-muffs+. Naming them as uppercase doesn't help because the reader uppercases symbol names by default when it reads. So `(defconstant WIDTH ...)` means you can no longer have a variable named `width` (in the same package).

(defstruct (vec
             (:conc-name v-)
             (:constructor v-new (x y z))
             (:type (vector float)))
  x y z)
Using `:type (vector float)` here is trying to make things faster, but failing. The type designator `float` covers all kinds of floats, e.g. both `single-float`s and `double-float`s in SBCL. So all SBCL knows is that the struct contains some kind of float, and it can't really do much with that information. This means all the vector math functions below have to fall back to generic arithmetic, which is extremely slow. SBCL even warns you about this when it's compiling, thanks to the `(optimize (speed 3))` declaration, but I guess they ignored or didn't understand those warnings.

(defconstant ZERO (v-new 0.0 0.0 0.0))

This will cause problems because if it's ever evaluated more than once it'll try to redefine the constant to a new `vec` instance, which will not be `eql` to the old one. Use `alexandria:define-constant` or just make it a global variable.

All the vector math functions are slow because they have no useful type information to work with:
(disassemble 'v-add)
; disassembly for V-ADD
; Size: 160 bytes. Origin: #x52D799AF
; 9AF: 488B45F8 MOV RAX, [RBP-8] ; no-arg-parsing entry point
; 9B3: 488B5001 MOV RDX, [RAX+1]
; 9B7: 488B45F0 MOV RAX, [RBP-16]
; 9BB: 488B7801 MOV RDI, [RAX+1]
; 9BF: FF1425A8001052 CALL QWORD PTR [#x521000A8] ; GENERIC-+
; 9C6: 488955E8 MOV [RBP-24], RDX
; 9CA: 488B45F8 MOV RAX, [RBP-8]
; 9CE: 488B5009 MOV RDX, [RAX+9]
; 9D2: 488B45F0 MOV RAX, [RBP-16]
; 9D6: 488B7809 MOV RDI, [RAX+9]
; 9DA: FF1425A8001052 CALL QWORD PTR [#x521000A8] ; GENERIC-+
; 9E1: 488BDA MOV RBX, RDX
; 9E4: 488B45F8 MOV RAX, [RBP-8]
; 9E8: 488B5011 MOV RDX, [RAX+17]
; 9EC: 488B45F0 MOV RAX, [RBP-16]
; 9F0: 488B7811 MOV RDI, [RAX+17]
; 9F4: 48895DE0 MOV [RBP-32], RBX
; 9F8: FF1425A8001052 CALL QWORD PTR [#x521000A8] ; GENERIC-+
; 9FF: 488B5DE0 MOV RBX, [RBP-32]
; A03: 49896D40 MOV [R13+64], RBP ; thread.pseudo-atomic-bits
; A07: 498B4520 MOV RAX, [R13+32] ; thread.alloc-region
; A0B: 4C8D5830 LEA R11, [RAX+48]
; A0F: 4D3B5D28 CMP R11, [R13+40]
; A13: 772E JNBE L2
; A15: 4D895D20 MOV [R13+32], R11 ; thread.alloc-region
; A19: L0: C600D9 MOV BYTE PTR [RAX], -39
; A1C: C6400806 MOV BYTE PTR [RAX+8], 6
; A20: 0C0F OR AL, 15
; A22: 49316D40 XOR [R13+64], RBP ; thread.pseudo-atomic-bits
; A26: 7402 JEQ L1
; A28: CC09 BREAK 9 ; pending interrupt trap
; A2A: L1: 488B4DE8 MOV RCX, [RBP-24]
; A2E: 48894801 MOV [RAX+1], RCX
; A32: 48895809 MOV [RAX+9], RBX
; A36: 48895011 MOV [RAX+17], RDX
; A3A: 488BD0 MOV RDX, RAX
; A3D: 488BE5 MOV RSP, RBP
; A40: F8 CLC
; A41: 5D POP RBP
; A42: C3 RET
; A43: L2: 6A30 PUSH 48
; A45: FF142520001052 CALL QWORD PTR [#x52100020] ; ALLOC-TRAMP
; A4C: 58 POP RAX
; A4D: EBCA JMP L0
If they had done the type declarations correctly, it would look more like this:

; disassembly for V-ADD
; Size: 122 bytes. Origin: #x52C33A78
; 78: F30F104A05 MOVSS XMM1, [RDX+5] ; no-arg-parsing entry point
; 7D: F30F105F05 MOVSS XMM3, [RDI+5]
; 82: F30F58D9 ADDSS XMM3, XMM1
; 86: F30F104A0D MOVSS XMM1, [RDX+13]
; 8B: F30F10670D MOVSS XMM4, [RDI+13]
; 90: F30F58E1 ADDSS XMM4, XMM1
; 94: F30F104A15 MOVSS XMM1, [RDX+21]
; 99: F30F105715 MOVSS XMM2, [RDI+21]
; 9E: F30F58D1 ADDSS XMM2, XMM1
; A2: 49896D40 MOV [R13+64], RBP ; thread.pseudo-atomic-bits
; A6: 498B4520 MOV RAX, [R13+32] ; thread.alloc-region
; AA: 4C8D5820 LEA R11, [RAX+32]
; AE: 4D3B5D28 CMP R11, [R13+40]
; B2: 7734 JNBE L2
; B4: 4D895D20 MOV [R13+32], R11 ; thread.alloc-region
; B8: L0: 66C7005903 MOV WORD PTR [RAX], 857
; BD: 0C03 OR AL, 3
; BF: 49316D40 XOR [R13+64], RBP ; thread.pseudo-atomic-bits
; C3: 7402 JEQ L1
; C5: CC09 BREAK 9 ; pending interrupt trap
; C7: L1: C7400103024F50 MOV DWORD PTR [RAX+1], #x504F0203 ; #<SB-KERNEL:LAYOUT for VEC {504F0203}>
; CE: F30F115805 MOVSS [RAX+5], XMM3
; D3: F30F11600D MOVSS [RAX+13], XMM4
; D8: F30F115015 MOVSS [RAX+21], XMM2
; DD: 488BD0 MOV RDX, RAX
; E0: 488BE5 MOV RSP, RBP
; E3: F8 CLC
; E4: 5D POP RBP
; E5: C3 RET
; E6: CC0F BREAK 15 ; Invalid argument count trap
; E8: L2: 6A20 PUSH 32
; EA: E8F1C64CFF CALL #x521001E0 ; ALLOC-TRAMP
; EF: 58 POP RAX
; F0: EBC6 JMP L0
The weirdness continues:

(defstruct (ray
             (:conc-name ray-)
             (:constructor ray-new (origin direction))
             (:type vector))
  origin direction)
The `:conc-name ray-` is useless, that's the default conc-name. And again with the `:type vector`… just make it a normal struct. I was going to guess that they were doing it so they could use vector literals to specify the objects, but then why are they bothering to define a BOA constructor here? And the slots are untyped, which, if you're looking for speed, is not doing you any favors.

I took a few minutes over lunch to add some type declarations to the slots and important functions, inlined the math, cleaned up the broken indentation and naming issues:
https://gist.github.com/sjl/005f27274adacd12ea2fc7f0b7200b80...
The old version runs in 5m12s on my laptop, the new version runs in 58s. So if we unscientifically extrapolate that to their 24m time, it puts it somewhere around 5m in their list. This matches what I usually see from SBCL: for numeric-heavy code generic arithmetic is very slow, and some judicious use of type declarations can get you to within ~5-10x of C. Getting more improvements beyond that can require really bonkers stuff that often isn't worth it.
I did some quick changes to your code (inlining, stack allocating) and got a further ~2x speedup which makes SBCL performance equivalent to Julia.
Yeah I considered trying some dynamic-extent declarations but just didn't care all that much. Can you post your version? I'm curious how far into the declaration weeds you need to go to get that extra 2x.
EDIT: I'm also curious how much using an optimized vector math library (e.g. sb-cga) would buy you instead of hand-rolling your own vector math. It would certainly be easier.
It would be interesting to take that C version and hammer on it a bit for speed.
...and then add SIMD.
The results are more or less in line with what I would have expected, except for SBCL and Luajit, which I would have expected to be much faster.
The Lisp code is terrible performance-wise and written by someone who obviously doesn't know Common Lisp very well.
Look at Steve Losh's comment here for something a lot better. My own (further) improvements put SBCL performance in the same order as Julia.
The most impressive result here is Lua -- not far behind C! LuaJIT is amazing.
Good to see a few languages like Nim and Rust actually beating C for raw performance, too.
Am I reading it wrong or did LuaJIT take 113x as long as C?
It sure looks that slow to me.
Ha, you’re right, I totally misread it! I read it as seconds and not minutes, doh. :)
You should look again. Lua seems to be the slowest of the bunch.
Whoops, yes, you’re right!
Now I’m tempted to try speeding it up, there’s no way it should be behind Python...
I haven’t checked the code, but I’m also hugely surprised Lua did so poorly. Even the default non-JIT interpreter should be way faster than Python.
Sarcasm?
Unfortunately for me, Hanlon’s Razor applies.