imgutils.utils.onnxruntime

Overview:

Management of ONNX models.

get_onnx_provider

imgutils.utils.onnxruntime.get_onnx_provider(provider: str | None = None)[source]
Overview:

Get the ONNX runtime execution provider.

Parameters:

provider – The provider for the ONNX runtime. None by default, which auto-detects whether CUDAExecutionProvider is available; if it is, it will be used, otherwise the default CPUExecutionProvider will be used.

Returns:

The provider name as a string (e.g. CUDAExecutionProvider or CPUExecutionProvider).
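The auto-detection described above can be sketched as follows. Note that detect_provider and its available parameter are hypothetical names for illustration, not the library's actual implementation; in real use you would pass onnxruntime.get_available_providers() as the available list.

```python
from typing import Optional, Sequence

def detect_provider(provider: Optional[str] = None,
                    available: Sequence[str] = ('CPUExecutionProvider',)) -> str:
    """Hedged sketch of provider auto-detection (not the library's source).

    In practice, pass available=onnxruntime.get_available_providers().
    """
    if provider is not None:
        # An explicitly requested provider wins over auto-detection.
        return provider
    if 'CUDAExecutionProvider' in available:
        # Prefer CUDA when the runtime reports it as available.
        return 'CUDAExecutionProvider'
    # Fall back to the default CPU provider.
    return 'CPUExecutionProvider'
```

Keeping the available list as a parameter makes the detection logic testable without a GPU or even an onnxruntime installation.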

open_onnx_model

imgutils.utils.onnxruntime.open_onnx_model(ckpt: str, mode: str | None = None) → InferenceSession[source]
Overview:

Open an ONNX model and load it into an ONNX runtime inference session.

Parameters:
  • ckpt – Path to the ONNX model file.

  • mode – Provider for the ONNX runtime. Default is None, which means the provider will be auto-detected; see get_onnx_provider() for details.

Returns:

A loaded ONNX runtime InferenceSession object.

Note

When mode is set to None, the environment variable ONNX_MODE is consulted, which means you can decide which ONNX runtime to use by setting it. For example, on Linux, executing export ONNX_MODE=cpu will ignore any available CUDA device and force model inference to run on the CPU.
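The fallback order in this note can be sketched as below. resolve_mode is a hypothetical helper introduced purely for illustration; it is not part of the imgutils API.

```python
import os
from typing import Optional

def resolve_mode(mode: Optional[str] = None) -> Optional[str]:
    # Hypothetical helper mirroring the note above: an explicit mode
    # argument takes priority; otherwise the ONNX_MODE environment
    # variable (e.g. 'cpu' after `export ONNX_MODE=cpu`) decides.
    if mode is not None:
        return mode
    return os.environ.get('ONNX_MODE')
```

With ONNX_MODE=cpu set in the environment, resolve_mode() returns 'cpu', mirroring how open_onnx_model(ckpt) would run inference on the CPU even when CUDA is present.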