The documentation states: "It is available via the torch-ort-infer Python package. This package enables OpenVINO™ Execution Provider for ONNX Runtime by default, accelerating inference on various Intel® CPUs, Intel® integrated GPUs, and Intel® Movidius™ Vision Processing Units - referred to as VPUs."
However, when I try to use it, the dependencies point to installing torch_ort, which requires CUDA as a prerequisite. I don't have an ATI or NVIDIA GPU in this Intel PC, and I want to use the Intel GPU.
How can I omit the CUDA dependencies completely?
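For context, here is a possible workaround I am considering (untested, and assuming torch-ort-infer itself has no hard runtime requirement on CUDA): install a CPU-only PyTorch wheel first, then install torch-ort-infer with pip's `--no-deps` flag so it cannot drag in a CUDA-flavoured dependency chain, and add the OpenVINO™ Execution Provider runtime separately:

```shell
# Install a CPU-only PyTorch build first (no CUDA libraries pulled in).
pip install torch --index-url https://download.pytorch.org/whl/cpu

# Install torch-ort-infer without its declared dependencies, so pip
# does not resolve the CUDA-based torch_ort prerequisite.
pip install --no-deps torch-ort-infer

# OpenVINO Execution Provider package for ONNX Runtime.
pip install onnxruntime-openvino
```

With `--no-deps`, any other transitive dependencies would have to be installed by hand, so this is a sketch of the idea rather than a verified recipe.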