imgutils.utils.onnxruntime
- Overview:
Management of ONNX models.
get_onnx_provider
- imgutils.utils.onnxruntime.get_onnx_provider(provider: str | None = None)[source]
- Overview:
Get the ONNX runtime provider.
- Parameters:
provider – The provider for ONNX runtime.
None by default; the function will automatically detect whether CUDAExecutionProvider is available. If it is, it will be used; otherwise the default CPUExecutionProvider will be used.
- Returns:
String of the provider.
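The auto-detection described above can be sketched roughly as follows. Note that `pick_provider` and its `available` parameter are hypothetical stand-ins for illustration; the real `get_onnx_provider()` presumably queries the installed `onnxruntime` package for its available providers.

```python
def pick_provider(provider=None, available=("CPUExecutionProvider",)):
    # Hypothetical sketch of the selection logic described above.
    # 'available' stands in for the providers reported by the ONNX runtime.
    if provider:
        # An explicit request such as "gpu"/"cuda" maps to the CUDA provider;
        # anything else falls back to the CPU provider.
        if provider.lower() in ("gpu", "cuda"):
            return "CUDAExecutionProvider"
        return "CPUExecutionProvider"
    # provider is None: prefer CUDA when available, otherwise use CPU.
    if "CUDAExecutionProvider" in available:
        return "CUDAExecutionProvider"
    return "CPUExecutionProvider"
```

For example, `pick_provider()` on a CPU-only setup returns `"CPUExecutionProvider"`, while the same call with CUDA available returns `"CUDAExecutionProvider"`.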
open_onnx_model
- imgutils.utils.onnxruntime.open_onnx_model(ckpt: str, mode: str | None = None) InferenceSession[source]
- Overview:
Open an ONNX model file and load it into an ONNX runtime inference session.
- Parameters:
ckpt – ONNX model file.
mode – Provider of the ONNX runtime. Default is None, which means the provider will be auto-detected; see get_onnx_provider() for more details.
- Returns:
A loaded ONNX runtime object.
Note
When mode is set to None, it will attempt to read the environment variable ONNX_MODE. This means you can decide which ONNX runtime provider to use by setting the environment variable. For example, on Linux, executing export ONNX_MODE=cpu will ignore any available CUDA device and force model inference to run on the CPU.
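The fallback chain in the note above (explicit mode argument first, then the ONNX_MODE environment variable, then auto-detection) can be sketched as follows. `resolve_mode` and its `env` parameter are hypothetical names used only for this illustration.

```python
import os

def resolve_mode(mode=None, env=None):
    # Hypothetical sketch of the precedence described in the note:
    # an explicit 'mode' argument wins; otherwise fall back to the
    # ONNX_MODE environment variable; otherwise None (auto-detect).
    env = os.environ if env is None else env
    if mode is not None:
        return mode
    return env.get("ONNX_MODE", None)
```

With this precedence, `export ONNX_MODE=cpu` only takes effect when the caller leaves mode as None; passing mode explicitly overrides the environment.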