Update xpu-max1100.yml with new config and add some tests #5668
base: master
Conversation
python -c "import torch; import intel_extension_for_pytorch; print('XPU available:', torch.xpu.is_available())"
apt-get update
apt-get install clinfo libaio-dev python3-pip -y
pip install torch==2.1.0.post2 -f https://developer.intel.com/ipex-whl-stable-xpu
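The availability one-liner above can be wrapped in a small guarded check so the job reports a clean result even when torch or IPEX is missing. This is an illustrative sketch, not part of the PR; only `torch.xpu.is_available()` comes from the diff itself:

```python
def xpu_available() -> bool:
    """Return True if an Intel XPU device is usable, False otherwise."""
    try:
        import torch  # may be absent in a bare environment
        import intel_extension_for_pytorch  # noqa: F401  (registers the xpu backend)
    except ImportError:
        return False
    # Older torch builds may lack the xpu submodule entirely.
    return getattr(torch, "xpu", None) is not None and torch.xpu.is_available()

if __name__ == "__main__":
    print("XPU available:", xpu_available())
```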
Why do we need the versions from this index?
Versions from this index are closer to the intel_extension_for_pytorch release, and the wheels can be downloaded stably and quickly.
.github/workflows/xpu-max1100.yml
pip list

- name: Unit tests
  shell: bash
Do we need this line?
Deleted.
.github/workflows/xpu-max1100.yml
pip install oneccl_bind_pt==2.1.300+xpu -f https://developer.intel.com/ipex-whl-stable-xpu
pip install torchvision==0.16.0.post2 -f https://developer.intel.com/ipex-whl-stable-xpu
pip install py-cpuinfo pytest pytest-timeout tabulate tensorboard wandb transformers accelerate comet_ml mup numpy==1.26
python3 setup.py bdist_wheel
Can we replace these with `pip install .[dev,autotuning]` rather than installing these packages and the whl? That way we ensure we have the correct requirements file.
Done. Previously we installed from the whl to avoid some issues; now we can install the standard CI way. Thanks for reviewing, @loadams.
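A hedged sketch of how the install step might look after adopting the suggestion above. The extras names `dev` and `autotuning` come from the review comment and the version pins from the diff; the step name and exact layout of xpu-max1100.yml are assumptions:

```yaml
# Sketch only, not the actual contents of .github/workflows/xpu-max1100.yml
- name: Install deepspeed
  run: |
    pip install torch==2.1.0.post2 -f https://developer.intel.com/ipex-whl-stable-xpu
    pip install oneccl_bind_pt==2.1.300+xpu -f https://developer.intel.com/ipex-whl-stable-xpu
    pip install .[dev,autotuning]
    pip list
```

Installing via extras keeps the CI environment in sync with the project's own requirements files instead of duplicating a package list in the workflow.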
This PR:
1. Changes the container
2. Updates the software versions (aligned with the Docker compiler)
3. Adds some tests