
ComfyUI support? #68

Open
GamingDaveUk opened this issue Jun 16, 2024 · 7 comments

@GamingDaveUk

Is there a way to use this in ComfyUI?
I'm really impressed by the prompt following in the examples a user shared in a Discord channel.

Also, can LoRAs be created for it? Can it be trained?

@C0nsumption

Interested as well (づ ̄3 ̄)づ

@kijai

kijai commented Jun 16, 2024

Made a wrapper that can run the T2I models:
https://github.com/kijai/ComfyUI-LuminaWrapper

@GamingDaveUk
Author

> Made a wrapper that can run the T2I models:
> https://github.com/kijai/ComfyUI-LuminaWrapper

Very cool, though I am on Windows and slightly reluctant to use pre-built wheels (the llmvision* issue has made me cautious).

*I think that was the name of it.

@kijai

kijai commented Jun 17, 2024

> > Made a wrapper that can run the T2I models:
> > https://github.com/kijai/ComfyUI-LuminaWrapper
>
> Very cool, though I am on Windows and slightly reluctant to use pre-built wheels (the llmvision* issue has made me cautious).
>
> *I think that was the name of it.

I get that, but it's the same issue with the original code. There's a fallback to SDP attention in the code, but it does not work at all. I built flash_attn on my Windows install and it works fine; the build took almost an hour, though...
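
For anyone curious, the fallback being discussed usually looks something like the sketch below (an illustration of the pattern, not the wrapper's actual code): try to import flash_attn, and if it isn't available, drop back to PyTorch's built-in scaled_dot_product_attention.

```python
# Rough sketch of a flash_attn -> SDPA fallback (assumed pattern, not the wrapper's code).
import torch.nn.functional as F

try:
    from flash_attn import flash_attn_func  # only importable after a successful flash-attn build
    HAS_FLASH_ATTN = True
except ImportError:
    HAS_FLASH_ATTN = False

def attention(q, k, v):
    # q, k, v: (batch, seq_len, num_heads, head_dim)
    if HAS_FLASH_ATTN:
        return flash_attn_func(q, k, v)
    # PyTorch SDPA expects (batch, num_heads, seq_len, head_dim), so transpose around the call.
    out = F.scaled_dot_product_attention(
        q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2)
    )
    return out.transpose(1, 2)
```

The SDPA branch is the slower, higher-VRAM path mentioned later in this thread.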

@Excidos

Excidos commented Jun 17, 2024

I'm having a lot of trouble with flash attention on my Windows install; any help would be greatly appreciated :)

@kijai

kijai commented Jun 17, 2024

> I'm having a lot of trouble with flash attention on my Windows install; any help would be greatly appreciated :)

It now works without flash_attn; it's just much slower and uses twice as much VRAM.

@PierrunoYT

I need something to host a demo locally on my 4090.
