I would like to suggest a feature that would allow specifying which GPU or GPUs to run on directly within the Ollama Python library.
This feature is crucial in shared server environments with multiple GPUs and multiple users, as it would let each Jupyter notebook run on its assigned GPU without conflicts. Currently, specifying GPU usage in Ollama is somewhat complex. A streamlined way to assign tasks to specific GPUs directly from the Python program would prevent conflicts and optimize workflow. Implementing this feature would significantly improve usability and align Ollama with other machine-learning frameworks.
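For context, a minimal sketch of the workaround that is possible today: run a separate `ollama serve` instance per GPU, pinned with the `CUDA_VISIBLE_DEVICES` and `OLLAMA_HOST` environment variables (both honored by the server), and point each notebook's Python client at the matching port. The helper functions and port scheme below are hypothetical, not part of the Ollama library:

```python
# Sketch of the current multi-GPU workaround, NOT an existing Ollama API:
# one `ollama serve` process per GPU, each on its own port, each seeing
# only one GPU; notebooks then connect to the port for their GPU.
import os

def gpu_server_env(gpu_id: int, base_port: int = 11434) -> dict:
    """Environment for an `ollama serve` process bound to one GPU.

    `gpu_id` and the port scheme are assumptions of this sketch;
    CUDA_VISIBLE_DEVICES and OLLAMA_HOST are real environment variables.
    """
    env = os.environ.copy()
    env["CUDA_VISIBLE_DEVICES"] = str(gpu_id)               # expose only this GPU
    env["OLLAMA_HOST"] = f"127.0.0.1:{base_port + gpu_id}"  # unique port per GPU
    return env

def client_host(gpu_id: int, base_port: int = 11434) -> str:
    """Host URL a notebook would pass to ollama.Client for this GPU."""
    return f"http://127.0.0.1:{base_port + gpu_id}"
```

A notebook would then launch its server with `subprocess.Popen(["ollama", "serve"], env=gpu_server_env(1))` and connect via `ollama.Client(host=client_host(1))`. The proposed feature would replace this per-process plumbing with a single in-library option.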
Thank you for considering this suggestion. I would be happy to discuss further details if needed.