JEP 515: Ahead-of-Time Method Profiling

openjdk.org

99 points by cempaka 3 days ago

motoboi - 2 days ago

The most impact will be on the Java standard library, like Streams (cited in the article). Right now, although their behavior is well established and they are mostly used in "factory" mode (no user subclassing or custom implementations of the Stream API), they cannot be shipped with the JVM already compiled.

If you can find a way (and this JEP is one such way) to get the bulk of the Java standard API AOT-compiled, then Java programs will be faster (much faster).
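
As a rough sketch of the kind of code that benefits (the class below is made up, and the command lines follow the training-run workflow of the related AOT cache JEPs, so treat the exact flags as approximate):

    // Plain "factory mode" use of the Stream API: no subclassing, just the
    // java.util.stream machinery that is hot in almost every Java program.
    import java.util.List;

    public class ReportDemo {
        public static void main(String[] args) {
            List<String> words = List.of("aot", "profiles", "leyden", "streams");
            long n = words.stream()
                          .filter(w -> w.length() > 3)
                          .map(String::toUpperCase)
                          .count();
            System.out.println(n);
        }
    }

    // A training run records which methods (including the Stream internals)
    // got hot, then an assembly step bakes those profiles into the AOT cache:
    //   java -XX:AOTMode=record -XX:AOTConfiguration=app.aotconf ReportDemo
    //   java -XX:AOTMode=create -XX:AOTConfiguration=app.aotconf -XX:AOTCache=app.aot
    //   java -XX:AOTCache=app.aot ReportDemo

A production run started with the cache can hand the JIT those profiles immediately, so the hot Stream internals get compiled right away instead of after the interpreter has re-collected the same profile it collects on every other run.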

Also, the JVM is already an engineering marvel (Java JIT code is fast as hell), but this will make Java programs much nimbler.

indolering - 2 days ago

OpenJ9 has had this kind of functionality for a while now (its shared classes cache can persist AOT-compiled code and profiling data across runs). Glad to see the difference between interpreted and compiled languages continue to get fuzzier.

nmstoker - 3 days ago

Would be interesting if the Faster CPython team considered this approach for Python (although maybe they already have?)

rst - 2 days ago

Faint echoes of the very first optimizing compiler, Fortran I, which ran a Monte Carlo simulation of the program's flow graph to detect hot spots so that it could allocate registers to the inner loops first.
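
A toy illustration of that idea (in Java, not anything resembling Fortran I's actual implementation): random walks over a hypothetical three-block flow graph, counting which blocks are visited most.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Random;

    // Hypothetical flow graph: entry -> loopHead <-> loopBody, loopHead -> exit.
    // Repeated random walks, with a 90% chance of taking the loop back edge,
    // estimate each block's relative execution frequency ("hotness").
    public class HotSpotSim {
        public static void main(String[] args) {
            Random rng = new Random(42);
            Map<String, Long> visits = new HashMap<>();
            for (int walk = 0; walk < 10_000; walk++) {
                String block = "entry";
                while (block != null) {
                    visits.merge(block, 1L, Long::sum);
                    block = switch (block) {
                        case "entry"    -> "loopHead";
                        case "loopHead" -> rng.nextDouble() < 0.9 ? "loopBody" : "exit";
                        case "loopBody" -> "loopHead";
                        default         -> null;   // "exit": the walk ends here
                    };
                }
            }
            // The loop blocks dominate the counts, so they get registers first.
            visits.forEach((b, c) -> System.out.println(b + ": " + c));
        }
    }

A real compiler would of course derive the graph and branch estimates from the program itself; the hard-coded probabilities here are just to show the sampling idea.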

tikkabhuna - 2 days ago

Is this similar/the same as Azul Zing’s ReadyNow feature?

mshockwave - 2 days ago

In addition to storing profiles, what about caching some native code, so that we can eliminate the JIT overhead for hot functions?

EDIT: They describe this in the JEP's "Alternatives" section as future work.