ONNX Runtime and CoreML May Silently Convert Your Model to FP16

ym2132.github.io

98 points by Two_hands 4 days ago · 17 comments

nuc1e0n 16 hours ago

My experiences with ONNX have not been pleasant. Conversions of models written in TensorFlow and PyTorch often fail. I recommend using TFLite or ExecuTorch for deployment to edge devices instead.
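
For anyone going the TFLite route, the basic export is short. A minimal sketch, where saved_model_dir is a placeholder for your own TensorFlow SavedModel:

```python
# Minimal TFLite export sketch; "saved_model_dir" is a placeholder path.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional; may quantize weights
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```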

smcleod 4 days ago

This was an interesting read, thanks for sharing. I've recently been building something that uses Parakeet v2/v3 models. I'm using the parakeet-rs package (https://github.com/altunenes/parakeet-rs), which has had a few issues running models with CoreML (unrelated to the linked post), e.g. https://github.com/microsoft/onnxruntime/issues/26355

  • Two_handsOP 4 days ago

    Thank you for reading.

    Also, generally I think CoreML isn't the best. The best solution for ORT would probably be to introduce a pure MPS provider (https://github.com/microsoft/onnxruntime/issues/21271), but given they've already bought into CoreML, the effort may not be worth the reward for the core team. Which is fair enough, as it's a pretty mammoth task.
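
    For context, selecting the CoreML execution provider in ORT looks roughly like this. A minimal sketch, with model.onnx as a placeholder; nodes the CoreML EP can't take silently fall back to the CPU provider:

    ```python
    # Sketch: run an ONNX model through ORT's CoreML EP with CPU fallback.
    import onnxruntime as ort

    session = ort.InferenceSession(
        "model.onnx",  # placeholder path
        providers=["CoreMLExecutionProvider", "CPUExecutionProvider"],
    )
    print(session.get_providers())  # shows which EPs were actually registered
    ```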

    • pzo 4 days ago

      However, one benefit of CoreML is that it's the only way for third parties to execute on the ANE (Apple Neural Engine, aka the NPU). For some models the ANE can execute even faster than the GPU/MPS while consuming even less battery.

      But I agree CoreML in ONNX Runtime is not perfect: most of the time when I tested models there was too much partitioning, and the whole graph ran slower compared to running the same model purely in CoreML format.
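
      One way to see that partitioning is ORT's verbose log, which reports which nodes land on which execution provider. A sketch, with model.onnx again a placeholder:

      ```python
      # Sketch: verbose logging exposes how ORT splits the graph between
      # the CoreML EP and the CPU fallback.
      import onnxruntime as ort

      opts = ort.SessionOptions()
      opts.log_severity_level = 0  # VERBOSE; node-to-EP assignments appear in the log

      session = ort.InferenceSession(
          "model.onnx",  # placeholder path
          sess_options=opts,
          providers=["CoreMLExecutionProvider", "CPUExecutionProvider"],
      )
      ```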

      • Two_handsOP 4 days ago

        To be honest, it's a shame the whole thing is closed up. I guess that's to be expected from Apple, but I reckon CoreML would benefit a lot from at least exposing the internals/allowing users to define new ops.

        Also, the ANE only allows some operators to be run on it, right? There's very little transparency or control over what can and cannot be offloaded to it, which makes using it difficult.

trashtensor 4 days ago

If you double-click the CoreML file on a Mac and open it in Xcode, there is a profiler you can run. The profiler will show you the operations it's using and what the bit depth is.

yousifa 4 days ago

On the CoreML side this is likely because the Neural Engine supports FP16, and offloading some/all layers to the ANE significantly improves inference time and power usage when running models. You can inspect in the Xcode profiler to see what is running on each part of the device, and at what precision.
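
If the FP16 default is the concern, coremltools also lets you pin precision at conversion time. A minimal sketch using a toy traced PyTorch model (the model and paths are illustrative):

```python
# Sketch: force FP32 at CoreML conversion time instead of the FP16 default.
import coremltools as ct
import torch

model = torch.nn.Linear(16, 16).eval()  # toy model for illustration
example = torch.randn(1, 16)
traced = torch.jit.trace(model, example)

mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(shape=example.shape)],
    compute_precision=ct.precision.FLOAT32,  # ML Program default is FLOAT16
)
# Note: the ANE itself is FP16-only, so FP32 ops will run on CPU/GPU instead.
mlmodel.save("model.mlpackage")
```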

  • Two_handsOP 4 days ago

    Yeah, I can see why they let it be that way, but the fact that it's pretty undefined is what bugged me. I suppose it depends on what your goals are: efficiency vs. reproducibility.

    Also, I did run a test of FP16 vs FP32 for a large matmul on the Apple GPU, and the FP16 calculation was 1.28x faster, so it makes sense that they'd go for FP16 as a default.
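
    That comparison is easy to reproduce with PyTorch's MPS backend. A rough sketch, assuming a recent PyTorch build with MPS support; the exact speedup varies by chip and matrix size:

    ```python
    # Rough sketch: time a large matmul in FP32 vs FP16 on the Apple GPU via MPS.
    import time
    import torch

    def bench(dtype, n=4096, iters=50):
        a = torch.randn(n, n, device="mps", dtype=dtype)
        b = torch.randn(n, n, device="mps", dtype=dtype)
        torch.mps.synchronize()      # finish setup before timing
        start = time.perf_counter()
        for _ in range(iters):
            a @ b
        torch.mps.synchronize()      # wait for queued GPU work to complete
        return time.perf_counter() - start

    t32 = bench(torch.float32)
    t16 = bench(torch.float16)
    print(f"FP32: {t32:.3f}s  FP16: {t16:.3f}s  speedup: {t32 / t16:.2f}x")
    ```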
