
[BUG] does not work with --medvram #115

Open
dnl13 opened this issue Sep 6, 2023 · 0 comments
Labels
bug Something isn't working

Comments


dnl13 commented Sep 6, 2023

Describe the bug
This extension stops working completely for me when --medvram is added to "set COMMANDLINE_ARGS=" in "webui-user.bat".

To Reproduce
Steps to reproduce the behavior:
1. Add --medvram to COMMANDLINE_ARGS in the webui-user.bat file; the extension errors out.
2. Remove --medvram and it works again.
3. The error can be toggled on and off this way.

Expected behavior
The extension should work with --medvram applied.

Screenshots of error
error when --medvram is applied

Batch 1/1
Loading weights [ef76aa2332] from E:\stable-diffusion-webui\models\Stable-diffusion\models\1.5\realisticVisionV51_v51VAE.safetensors
Creating model from config: E:\stable-diffusion-webui\configs\v1-inference.yaml
Applying attention optimization: xformers... done.
*** Error completing request
*** Arguments: ('Huge spectacular Waterfall in ', [[0, 'a dense tropical forest'], [2, 'a Lush jungle'], [3, 'a Thick rainforest'], [5, 'a Verdant canopy']], 'epic perspective,(vegetation overgrowth:1.3)(intricate, ornamentation:1.1),(baroque:1.1), fantasy, (realistic:1) digital painting , (magical,mystical:1.2) , (wide angle shot:1.4), (landscape composed:1.2)(medieval:1.1),(tropical forest:1.4),(river:1.3) volumetric lighting ,epic, style by Alex Horley Wenjun Lin greg rutkowski Ruan Jia (Wayne Barlowe:1.2)', 'frames, border, edges, borderline, text, character, duplicate, error, out of frame, watermark, low quality, ugly, deformed, blur, bad-artist', 5, 8, 35, None, None, 30, 0, 0, 0, 48, 2, 1, -1, 512, 512, 1, 'DDIM', False, 'None', 2, 1, 0, 0, None) {}
    Traceback (most recent call last):
      File "E:\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "E:\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "E:\stable-diffusion-webui\extensions\infinite-zoom-automatic1111-webui\iz_helpers\run.py", line 200, in create_zoom
        result = create_zoom_single(
      File "E:\stable-diffusion-webui\extensions\infinite-zoom-automatic1111-webui\iz_helpers\run.py", line 357, in create_zoom_single
        load_model_from_setting(
      File "E:\stable-diffusion-webui\extensions\infinite-zoom-automatic1111-webui\iz_helpers\helpers.py", line 48, in load_model_from_setting
        modules.sd_models.load_model(checkinfo)
      File "E:\stable-diffusion-webui\modules\sd_models.py", line 649, in load_model
        sd_model.cond_stage_model_empty_prompt = get_empty_cond(sd_model)
      File "E:\stable-diffusion-webui\modules\sd_models.py", line 537, in get_empty_cond
        return sd_model.cond_stage_model([""])
      File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\stable-diffusion-webui\modules\sd_hijack_clip.py", line 234, in forward
        z = self.process_tokens(tokens, multipliers)
      File "E:\stable-diffusion-webui\modules\sd_hijack_clip.py", line 273, in process_tokens
        z = self.encode_with_transformers(tokens)
      File "E:\stable-diffusion-webui\modules\sd_hijack_clip.py", line 326, in encode_with_transformers
        outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
      File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        result = hook(self, args)
      File "E:\stable-diffusion-webui\modules\lowvram.py", line 52, in send_me_to_gpu
        module_in_gpu.to(cpu)
      File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
        return self._apply(convert)
      File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
        module._apply(fn)
      File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
        module._apply(fn)
      File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
        module._apply(fn)
      [Previous line repeated 2 more times]
      File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
        param_applied = fn(param)
      File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
        return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
    NotImplementedError: Cannot copy out of meta tensor; no data!

---
Traceback (most recent call last):
  File "E:\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "E:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1434, in process_api
    data = self.postprocess_data(fn_index, result["prediction"], state)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1297, in postprocess_data
    self.validate_outputs(fn_index, predictions)  # type: ignore
  File "E:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1272, in validate_outputs
    raise ValueError(
ValueError: An event handler (create_zoom) didn't receive enough output values (needed: 5, received: 4).
Wanted outputs:
    [video, gallery, textbox, html, html]
Received outputs:
    [None, "", "", "<div class='error'>NotImplementedError: Cannot copy out of meta tensor; no data!</div><div class='performance'><p class='time'>Time taken: <wbr><span class='measurement'>2.9 sec.</span></p><p class='vram'><abbr title='Active: peak amount of video memory used during generation (excluding cached data)'>A</abbr>: <span class='measurement'>0.27 GB</span>, <wbr><abbr title='Reserved: total amout of video memory allocated by the Torch library '>R</abbr>: <span class='measurement'>0.28 GB</span>, <wbr><abbr title='System: peak amout of video memory allocated by all running programs, out of total capacity'>Sys</abbr>: <span class='measurement'>2.2/8 GB</span> (27.0%)</p></div>"]
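For reference, the NotImplementedError at the root of the first traceback can be reproduced with plain PyTorch and no webui code. My assumption is that --medvram's offloading leaves some module on the "meta" device, which allocates no storage, so the module_in_gpu.to(cpu) call in modules/lowvram.py has nothing to copy:

```python
import torch

# A module created on the "meta" device has shapes but no storage, so any
# attempt to move it to a real device fails with the same error as above.
layer = torch.nn.Linear(4, 4, device="meta")
try:
    layer.to("cpu")
except NotImplementedError as exc:
    print(f"{type(exc).__name__}: {exc}")
```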

without --medvram

Batch 1/1
Loading weights [ef76aa2332] from E:\stable-diffusion-webui\models\Stable-diffusion\models\1.5\realisticVisionV51_v51VAE.safetensors
Creating model from config: E:\stable-diffusion-webui\configs\v1-inference.yaml
Applying attention optimization: xformers... done.
Model loaded in 3.2s (load weights from disk: 0.2s, create model: 0.3s, apply weights to model: 2.4s).
100%|██████████████████████████████████████████████████████████████████████████████████| 35/35 [00:05<00:00,  6.96it/s]
Loading weights [4dafaba867] from E:\stable-diffusion-webui\models\Stable-diffusion\inpaints\realisticVisionV51_v51VAE-inpainting.safetensors
Creating model from config: E:\stable-diffusion-webui\configs\v1-inpainting-inference.yaml
Applying attention optimization: xformers... done.
Model loaded in 2.5s (load weights from disk: 0.2s, create model: 0.3s, apply weights to model: 1.8s).
Outpaint step: 1 / 5 Seed: -1
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 33/33 [00:03<00:00,  9.25it/s]
Outpaint step: 2 / 5 Seed: -1
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 33/33 [00:03<00:00,  9.37it/s]
Outpaint step: 3 / 5 Seed: -1
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 33/33 [00:03<00:00,  9.36it/s]
Outpaint step: 4 / 5 Seed: -1
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 33/33 [00:03<00:00,  9.37it/s]
Outpaint step: 5 / 5 Seed: -1
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 33/33 [00:03<00:00,  9.36it/s]
Video saved in: E:\stable-diffusion-webui\outputs\infinite-zooms\infinite_zoom_1694025954.mp4

Desktop (please complete the following information):

  • OS: Windows 10
  • GPU: RTX 2070 Super 8 GB
  • infinite-zoom-automatic1111-webui version: d6461e7
  • automatic1111 version: v1.6.0  •  python: 3.10.11  •  torch: 2.0.1+cu118  •  xformers: 0.0.20  •  gradio: 3.41.2

Additional context
My complete command line is "set COMMANDLINE_ARGS=--xformers --always-batch-cond-uncond --medvram --medvram-sdxl --no-half-vae --api --autolaunch --update-check --update-all-extensions".

Removing "--medvram" from "set COMMANDLINE_ARGS=" fixes it, but I also need --medvram for other tasks in automatic1111, so it would be nice not to have to switch back and forth.

I did not test it in a fresh install without other extensions, so I can't tell if other extensions are interfering.
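One possible direction for a fix, purely as a hypothesis I have not tested: instead of calling modules.sd_models.load_model() directly in iz_helpers/helpers.py, the extension could go through reload_model_weights() when a low-VRAM mode is active, since that is the checkpoint-switching path webui itself uses. The function name safe_load_model, the cmd_opts flag checks, and the choice of reload_model_weights() are my assumptions, not the extension's actual code; dependencies are passed in so the sketch runs without a webui install:

```python
from types import SimpleNamespace

def safe_load_model(sd_models, cmd_opts, checkpoint_info):
    """Hypothetical replacement for the direct load_model() call in
    iz_helpers/helpers.py (dependencies injected for testability)."""
    # Under --medvram/--lowvram the offloading hooks are installed, so
    # prefer the checkpoint-switching path instead of a fresh load.
    if getattr(cmd_opts, "medvram", False) or getattr(cmd_opts, "lowvram", False):
        return sd_models.reload_model_weights(info=checkpoint_info)
    return sd_models.load_model(checkpoint_info)

# Smoke test with fakes standing in for the webui modules:
fake = SimpleNamespace(
    load_model=lambda info: ("load_model", info),
    reload_model_weights=lambda info=None: ("reload_model_weights", info),
)
print(safe_load_model(fake, SimpleNamespace(medvram=True), "ckpt"))
# → ('reload_model_weights', 'ckpt')
```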

Thank you, and I'm hoping that exit_frame will be merged into the main branch soon.

@dnl13 dnl13 added the bug Something isn't working label Sep 6, 2023