AI accelerator support


New in v2.6.1.2 (experimental)

compute_backend.py enables hardware acceleration for neural calculations. Switching backends takes a single line in config.ini:

[Compute]

backend = numpy

Options:

  • numpy - default, no extra dependencies
  • onnx - enables hardware AI accelerator support via ONNX Runtime

The `onnx` backend auto-selects DirectML, OpenVINO, or QNN, and falls back to `numpy` if no runtime is present.

An ONNX Runtime package must be installed separately (see the platform list below).
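The fallback behaviour described above can be sketched as follows. This is a hypothetical illustration, not the actual code in compute_backend.py; the function name `select_backend` and the `"onnx:<provider>"` return format are assumptions, though the execution provider names are the real ONNX Runtime identifiers for DirectML, OpenVINO, and QNN.

```python
def select_backend(requested: str = "onnx") -> str:
    """Hypothetical sketch of the auto-selection logic described above."""
    if requested != "onnx":
        return "numpy"
    try:
        import onnxruntime as ort  # any installed ONNX Runtime package
    except ImportError:
        return "numpy"  # no runtime present: fall back to CPU
    providers = ort.get_available_providers()
    # Prefer an accelerator-backed execution provider when one is available.
    for provider in ("DmlExecutionProvider",       # DirectML
                     "OpenVINOExecutionProvider",  # OpenVINO
                     "QNNExecutionProvider"):      # Qualcomm QNN
        if provider in providers:
            return f"onnx:{provider}"
    return "numpy"  # runtime installed but no accelerator provider found
```

With no ONNX Runtime installed, `select_backend()` simply returns `"numpy"`.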

Recommended ONNX Runtime packages by platform:

| Platform | Hardware | Install command |
| --- | --- | --- |
| Windows | NVIDIA, AMD, and Intel GPUs + NPUs (DirectML) | `pip install onnxruntime-directml` |
| Windows | NVIDIA only (maximum CUDA performance) | `pip install onnxruntime-gpu` |
| Windows | Qualcomm 8CX / SQX / Snapdragon (NPU) | `pip install onnxruntime-qnn` |
| macOS | Apple Silicon and Intel Macs | `pip install onnxruntime` |
| Linux | NVIDIA GPUs | `pip install onnxruntime-gpu` |
| Linux | AMD GPUs | Use the ROCm/MIGraphX build (see the ONNX Runtime docs) |
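After installing one of the packages above, you can verify what the runtime exposes with ONNX Runtime's real `get_available_providers()` API. The helper function below is only a convenience wrapper for this article:

```python
import importlib.util

def available_providers():
    """Return the installed ONNX Runtime's execution providers, or None."""
    if importlib.util.find_spec("onnxruntime") is None:
        return None  # no runtime installed: the numpy backend will be used
    import onnxruntime as ort
    return ort.get_available_providers()

providers = available_providers()
print(providers)
```

With `onnxruntime-directml`, for example, the list should include `DmlExecutionProvider` alongside the always-present `CPUExecutionProvider`.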


If no runtime is installed, the default `numpy` backend is used and neural calculations are performed on the CPU.

ONNX support is NEW and EXPERIMENTAL - please report any issues!

The Brain Tool's About tab shows which backend is in use (version 2.6.2.0+).