It is now not uncommon for a Python package to be distributed in a multitude of different "flavors". This happens often with machine learning packages, e.g. onnxruntime has many "flavors": onnxruntime, onnxruntime-gpu, onnxruntime-directml, ... (same for xgboost). All those packages provide the same content but have different implementations. Note that most of the time only one "flavor" can be installed in an environment.
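To illustrate: every flavor installs the same import package (`onnxruntime`), so the distribution name is the only reliable way to tell which flavor is present in an environment. A minimal sketch of such a check (the helper name and the preference order are my own, not from any library):

```python
from importlib.metadata import version, PackageNotFoundError

# Distribution names to probe, taken from the flavors mentioned above.
# Checking GPU-capable flavors first is an arbitrary choice.
FLAVORS = ("onnxruntime-gpu", "onnxruntime-directml", "onnxruntime")

def installed_flavor():
    """Return (distribution_name, version) for the first installed
    onnxruntime flavor, or None if no flavor is present."""
    for name in FLAVORS:
        try:
            return name, version(name)
        except PackageNotFoundError:
            continue
    return None

print(installed_flavor())
```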
Now, let's say I have a package A that depends on onnxruntime, and I want to install this package (A) on a machine that has a (CUDA) GPU. To benefit from the GPU I would have to install onnxruntime-gpu instead of onnxruntime (and not both).
Ideally I would have liked to add an extra gpu to my package A so that if I do
pip install A
the dependency onnxruntime would be installed with it. And if I do
pip install A[gpu]
the dependency onnxruntime-gpu would be installed with it. Sadly, if I do that, both dependencies get installed, which does not work.
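For reference, the layout I had in mind would look like this hypothetical pyproject.toml fragment (which demonstrates the problem: extras can only *add* dependencies, so `pip install A[gpu]` pulls in both):

```toml
[project]
name = "A"
dependencies = ["onnxruntime"]      # always installed

[project.optional-dependencies]
gpu = ["onnxruntime-gpu"]           # A[gpu] installs this *in addition*, not instead
```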
The only workable solution I see is to distribute two "flavors" of the package, A and A-gpu, with two different sets of dependencies (which I don't find ideal). Is there a better option?
Edit - Using extra markers in requirements
I also tried adding a condition on the extra marker in my dependency list, but it does not work for excluding a package from installation when an extra is used.
With
```toml
[project]
dependencies = ["onnxruntime; extra != 'cuda'"]  # (or: extra == '')

[project.optional-dependencies]
cuda = ["onnxruntime-gpu"]
```
pip will always install the onnxruntime package when installing A[cuda], because pip always evaluates the dependencies without any extra at least once.
That means I can't both have a valid default installation (no extra) and exclude onnxruntime when the cuda extra is used.