imgutils.utils.onnxruntime
- Overview:
- Management of ONNX models.
get_onnx_provider
- imgutils.utils.onnxruntime.get_onnx_provider(provider: str | None = None)[source]
- Overview:
- Get the ONNX runtime provider.
 - Parameters:
- provider – The provider for ONNX runtime. `None` by default, which will automatically detect whether `CUDAExecutionProvider` is available; if it is, it will be used, otherwise the default `CPUExecutionProvider` will be used.
- Returns:
- The provider name as a string.
 
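The auto-detection described above can be sketched in plain Python. This is a hypothetical re-implementation for illustration only: `pick_onnx_provider` and its `available` parameter are not part of the library, which queries the installed ONNX runtime internally (e.g. via `onnxruntime.get_available_providers()`).

```python
def pick_onnx_provider(provider=None, available=('CPUExecutionProvider',)):
    # Hypothetical sketch of the documented auto-detection; `available`
    # stands in for the list of providers reported by the ONNX runtime.
    if provider is None:
        # Prefer CUDA when the runtime reports it, else fall back to CPU.
        if 'CUDAExecutionProvider' in available:
            return 'CUDAExecutionProvider'
        return 'CPUExecutionProvider'
    return provider  # an explicit provider string is passed through unchanged

print(pick_onnx_provider())  # CPU-only environment -> 'CPUExecutionProvider'
```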
open_onnx_model
- imgutils.utils.onnxruntime.open_onnx_model(ckpt: str, mode: str | None = None) → InferenceSession[source]
- Overview:
- Open an ONNX model file and load it into an ONNX runtime inference session.
 - Parameters:
- ckpt – ONNX model file. 
- mode – Provider of the ONNX runtime. Default is `None`, which means the provider will be auto-detected; see `get_onnx_provider()` for more details.
 
- Returns:
- A loaded ONNX runtime `InferenceSession` object.
 - Note - When `mode` is set to `None`, it will attempt to read the environment variable `ONNX_MODE`. This means you can decide which ONNX runtime provider to use by setting the environment variable. For example, on Linux, executing `export ONNX_MODE=cpu` will ignore any existing CUDA and force model inference to run on the CPU.
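The fallback order in the note can be sketched as follows. `resolve_mode` is a hypothetical helper written for illustration, not part of the library's API:

```python
import os

def resolve_mode(mode=None):
    # Per the note above: an explicit `mode` argument wins; otherwise fall
    # back to the ONNX_MODE environment variable (None if neither is set).
    return mode if mode is not None else os.environ.get('ONNX_MODE')

os.environ['ONNX_MODE'] = 'cpu'
print(resolve_mode())       # -> 'cpu' (taken from the environment)
print(resolve_mode('gpu'))  # -> 'gpu' (explicit argument takes precedence)
```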