
Halve model loading time for llama demo #4032

Open · swolchok wants to merge 1 commit into main

Conversation

swolchok (Contributor)

Summary:
mmap is not recommended for large sequential workloads -- you end up taking a page fault for every page you touch. Surprisingly, this doesn't seem to hurt reported peak memory usage.

Differential Revision: D58826044
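
For context, here is a minimal sketch of the two loading strategies being compared. This is illustrative POSIX/C++ code, not the actual ExecuTorch data-loader API; the checksum loop stands in for any sequential pass over the weights.

```cpp
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

#include <cstddef>
#include <cstdint>
#include <vector>

// mmap path: setting up the mapping is instant, but the first touch of each
// page during the sequential scan takes a minor page fault.
uint64_t checksum_via_mmap(const char* path) {
  int fd = open(path, O_RDONLY);
  struct stat st {};
  fstat(fd, &st);
  void* map = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
  const auto* p = static_cast<const unsigned char*>(map);
  uint64_t sum = 0;
  for (off_t i = 0; i < st.st_size; ++i) {
    sum += p[i];  // each newly touched page faults here
  }
  munmap(map, st.st_size);
  close(fd);
  return sum;
}

// read path: one up-front copy into a buffer; the kernel can use readahead
// and large transfers, so the scan itself takes no faults against the file.
uint64_t checksum_via_read(const char* path) {
  int fd = open(path, O_RDONLY);
  struct stat st {};
  fstat(fd, &st);
  std::vector<unsigned char> buf(static_cast<size_t>(st.st_size));
  size_t off = 0;
  while (off < buf.size()) {
    ssize_t n = read(fd, buf.data() + off, buf.size() - off);
    if (n <= 0) break;  // error handling elided in this sketch
    off += static_cast<size_t>(n);
  }
  close(fd);
  uint64_t sum = 0;
  for (unsigned char b : buf) sum += b;
  return sum;
}
```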


pytorch-bot bot commented Jun 21, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/4032

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 4d922b4 with merge base 3eec95a:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

facebook-github-bot added the CLA Signed label Jun 21, 2024
facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D58826044

swolchok added a commit to swolchok/executorch that referenced this pull request Jun 22, 2024
swolchok added a commit to swolchok/executorch that referenced this pull request Jun 28, 2024
facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D58826044

Summary:
Pull Request resolved: pytorch#4032

mmap is not recommended for large sequential workloads -- you end up taking a page fault for every page you touch. I originally assumed this would hurt peak memory usage (we read all the weights into memory at once and then pack them, and packing is basically copying), but it doesn't. In retrospect, this makes sense: we operate on one weights tensor at a time, and the individual tensors aren't gigantic; there are just a lot of them.

Reviewed By: larryliu0820

Differential Revision: D58826044
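
As a rough illustration of that last point, a hedged sketch (hypothetical Tensor type, not the actual packing code) of why packing one tensor at a time keeps the peak near the total weights size plus one tensor's worth of transient copy, rather than 2x the total:

```cpp
#include <utility>
#include <vector>

// Hypothetical stand-in for a weights tensor.
struct Tensor {
  std::vector<float> data;
};

// Packing is "basically copying": produce a reordered copy of one tensor.
std::vector<float> pack(const Tensor& t) {
  return t.data;  // stand-in for the real layout transformation
}

// Peak memory ~= sum(all tensors) + max(single tensor), not 2x the sum,
// because each packed copy replaces its source before the next one starts.
void pack_all(std::vector<Tensor>& weights) {
  for (Tensor& t : weights) {
    std::vector<float> packed = pack(t);  // transient copy of one tensor only
    t.data = std::move(packed);           // source storage released here
  }
}
```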

Labels: CLA Signed, fb-exported

3 participants