
Why should I be forced to have a CUDA or ROCm machine when wanting to run OpenVINO on Intel? #175

@rmast

Description

This link tells me the inference part of torch-ort supports OpenVINO:
https://github.com/pytorch/ort#-inference
"ONNX Runtime for PyTorch supports PyTorch model inference using ONNX Runtime and Intel® OpenVINO™.

It is available via the torch-ort-infer python package. This package enables OpenVINO™ Execution Provider for ONNX Runtime by default for accelerating inference on various Intel® CPUs, Intel® integrated GPUs, and Intel® Movidius™ Vision Processing Units - referred to as VPU."
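For reference, the intended usage per that README is to install torch-ort-infer (not torch-ort) and wrap the model in ORTInferenceModule, optionally passing OpenVINOProviderOptions to pick the Intel device. A minimal sketch, assuming the class and option names from the README (they may differ by package version):

```python
import torch
from torch_ort import ORTInferenceModule, OpenVINOProviderOptions

# Any eager-mode PyTorch model; a tiny Sequential keeps the sketch self-contained.
model = torch.nn.Sequential(
    torch.nn.Linear(8, 16),
    torch.nn.ReLU(),
    torch.nn.Linear(16, 4),
).eval()

# "GPU" targets the Intel integrated GPU through the OpenVINO Execution
# Provider; per the README, "CPU" and "MYRIAD" select other Intel devices.
provider_options = OpenVINOProviderOptions(backend="GPU", precision="FP16")
model = ORTInferenceModule(model, provider_options=provider_options)

with torch.no_grad():
    out = model(torch.randn(1, 8))  # inference dispatched through OpenVINO
print(out.shape)
```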

However, when I try to use it, the dependencies point to installing torch_ort, which needs CUDA as a prerequisite. I don't have either an AMD (ATI) or an NVIDIA GPU in this Intel PC, and I want to use the Intel GPU.
What can I do to omit the CUDA dependencies completely?
