imgutils.utils.onnxruntime
- Overview:
Management of ONNX models.
get_onnx_provider
- imgutils.utils.onnxruntime.get_onnx_provider(provider: str | None = None)[source]
- Overview:
Get the ONNX provider.
- Parameters:
provider – The provider for the ONNX runtime. `None` by default, in which case it will automatically detect whether `CUDAExecutionProvider` is available; if it is, it will be used, otherwise the default `CPUExecutionProvider` will be used.
- Returns:
String of the provider.
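The auto-detection described above can be sketched roughly as follows. This is an illustrative reimplementation under stated assumptions, not the library's actual code; the `detect_provider` name and the `cuda_available` parameter are hypothetical, while `ONNX_MODE` and the provider strings come from the documentation above.

```python
import os


def detect_provider(provider=None, cuda_available=False):
    """Sketch of resolving an ONNX runtime provider string.

    An explicit ``provider`` wins; otherwise the ``ONNX_MODE`` environment
    variable is consulted; failing that, CUDA is preferred when available,
    with CPU as the fallback.  ``cuda_available`` stands in for a real
    device check and is an assumption of this sketch.
    """
    mode = provider or os.environ.get('ONNX_MODE')
    if mode is None:
        # No hint given: prefer CUDA when it is available, else CPU.
        return 'CUDAExecutionProvider' if cuda_available else 'CPUExecutionProvider'
    if mode.lower() == 'cpu':
        return 'CPUExecutionProvider'
    if mode.lower() in ('gpu', 'cuda'):
        return 'CUDAExecutionProvider'
    # Otherwise pass the string through as a provider name.
    return mode
```

For example, `detect_provider('cpu')` yields `'CPUExecutionProvider'` regardless of any CUDA device.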
open_onnx_model
- imgutils.utils.onnxruntime.open_onnx_model(ckpt: str, mode: str | None = None) → InferenceSession [source]
- Overview:
Open an ONNX model file and load it into an ONNX runtime inference session.
- Parameters:
ckpt – ONNX model file.
mode – Provider of the ONNX runtime. Default is `None`, which means the provider will be auto-detected; see `get_onnx_provider()` for more details.
- Returns:
A loaded ONNX runtime object.
Note
When `mode` is set to `None`, it will attempt to read the environment variable `ONNX_MODE`. This means you can decide which ONNX runtime provider to use by setting the environment variable. For example, on Linux, executing `export ONNX_MODE=cpu` will ignore any available CUDA device and force model inference to run on the CPU.
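The override order in the note can be sketched as below. This is a minimal illustration, not the library's implementation; the `resolve_mode` name is hypothetical, while `ONNX_MODE` is the environment variable documented above.

```python
import os


def resolve_mode(mode=None):
    # Mirrors the note above: an explicit ``mode`` argument wins;
    # otherwise the ONNX_MODE environment variable decides (and may
    # still be None if unset).
    return mode if mode is not None else os.environ.get('ONNX_MODE')


# Equivalent of running `export ONNX_MODE=cpu` before the process starts:
os.environ['ONNX_MODE'] = 'cpu'
print(resolve_mode())       # env var used when mode is None
print(resolve_mode('gpu'))  # explicit mode overrides the env var
```

This is why an explicit `mode` argument to `open_onnx_model()` takes precedence, and `ONNX_MODE` only matters when `mode` is left as `None`.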