ImportError: attempted relative import beyond top-level package #81

Open · SoftologyPro opened this issue Jun 19, 2024 · 25 comments

@SoftologyPro commented Jun 19, 2024

Trying to get the lumina_music demo working.
I have all the models downloaded locally.
Using the command...
python demo_music.py --ckpt ..\models\Lumina-Music\music_generation --vocoder_ckpt ..\models\Lumina-Music\bigvnat --config_path .\configs\lumina-text2music.yaml --sample_rate 16000
I edited the yaml ckpt path to be
ckpt_path: ../../models/Lumina-T2Music/maa2
That does point to the maa2 directory.
But when it runs I get this error

    from ..util import instantiate_from_config
ImportError: attempted relative import beyond top-level package

Any ideas what I am doing wrong here?

Even if I use an explicit full path to the ckpt, same error.
ckpt_path: "D:/MachineLearning/Lumina-T2X/Lumina-T2X/models/Lumina-T2Music/maa2/maa2.ckpt"

@PommesPeter (Contributor)

Please pull the latest code from this repo.

@SoftologyPro (Author)

I still get the same error.
What format should the ckpt_path be in the yaml file? Is it relative to the demo script?
I point to the other model folders with this command line
python demo_music.py --ckpt ..\models\Lumina-Music\music_generation --vocoder_ckpt ..\models\Lumina-Music\bigvnat --config_path .\configs\lumina-text2music.yaml --sample_rate 16000
And in the yaml file I use
ckpt_path: ../models/Lumina-T2Music/maa2/maa2.ckpt
Do I need to include the ckpt filename?
These all cause the same top-level package error:

        ckpt_path: ../models/Lumina-T2Music/maa2/
        ckpt_path: ../models/Lumina-T2Music/maa2/maa2.ckpt
        ckpt_path: ../models/Lumina-T2Music/maa2
        ckpt_path: "D:\\FullPathHere\\Lumina-T2X\\Lumina-T2X\\models\\Lumina-T2Music\\maa2\\maa2.ckpt"
        ckpt_path: D:\\FullPathHere\\Lumina-T2X\\Lumina-T2X\\models\\Lumina-T2Music\\maa2\\maa2.ckpt
        ckpt_path: D:\\FullPathHere\\Lumina-T2X\\Lumina-T2X\\models\\Lumina-T2Music\\maa2\\
        ckpt_path: D:\\FullPathHere\\Lumina-T2X\\Lumina-T2X\\models\\Lumina-T2Music\\maa2

What format do I need to use in the yaml?

@SoftologyPro (Author) commented Jun 20, 2024

Even if I copy the maa2.ckpt file into the same folder as demo_music.py and change the path in the yaml to
ckpt_path: maa2.ckpt
it still gives the same error.

@PommesPeter (Contributor)

Could you provide the full error logs? We will help you solve this problem.

@PommesPeter (Contributor) commented Jun 20, 2024

Check this line (in models/autoencoder1d.py, reached when running demo_music.py):

- from ..util import instantiate_from_config
+ from models.util import instantiate_from_config
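For context: when demo_music.py is run directly from lumina_music/, Python imports models as a top-level package, so a relative import that climbs above it has nowhere to go. A minimal sketch of the failure and the fix:

    # Layout when running `python demo_music.py` from lumina_music/:
    #   lumina_music/            <- on sys.path, but not itself a package
    #   |-- demo_music.py
    #   `-- models/              <- imported as the top-level package "models"
    #       |-- util.py
    #       `-- autoencoder1d.py
    #
    # Inside models/autoencoder1d.py:
    # from ..util import instantiate_from_config    # ".." climbs above "models",
    #                                                # which IS the top level ->
    #                                                # ImportError: attempted relative
    #                                                # import beyond top-level package
    from models.util import instantiate_from_config  # absolute import resolves,
                                                      # since lumina_music/ is on sys.path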

@PommesPeter (Contributor)

In the YAML file, the ckpt_path is relative to the directory from which you are running the program. For example, if you are running the demo_music.py file from D:\test\lumina_music\demo_music.py, then D:\test\lumina_music will be the base path for the ckpt_path.
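A quick way to see what a relative ckpt_path will actually resolve to (a sketch, using the path value from this thread):

    import os

    # relative paths in the yaml are joined against the current working
    # directory (where you launched python), not the config file's location
    ckpt_path = "../models/Lumina-T2Music/maa2/maa2.ckpt"
    print(os.path.abspath(ckpt_path))
    # run from D:\test\lumina_music -> D:\test\models\Lumina-T2Music\maa2\maa2.ckpt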

@SoftologyPro (Author)

OK, I did a fresh install with a new git clone and all pip installs are inside a newly created venv.

Root directory is D:\Tests\Lumina-T2X\
Clone is under that root D:\Tests\Lumina-T2X\Lumina-T2X\
Models are under D:\Tests\Lumina-T2X\Lumina-T2X\models\

The yaml file is D:\Tests\Lumina-T2X\Lumina-T2X\lumina_music\configs\lumina-text2music.yaml
The yaml is edited and the ckpt_path is changed to
ckpt_path: D:\test\Lumina-T2X\Lumina-T2X\models\Lumina-T2Music

Changing Lumina-T2X\lumina_music\models\autoencoder1d.py
from ..util import instantiate_from_config
to
from models.util import instantiate_from_config
fixes that top-level error, so that is a worthy edit to your script.

I change into the D:\Tests\Lumina-T2X\Lumina-T2X\lumina_music\ directory and run
python demo_music.py --ckpt ..\models\Lumina-T2Music\music_generation --vocoder_ckpt ..\models\Lumina-T2Music\bigvnat --config_path .\configs\lumina-text2music.yaml --sample_rate 16000

Full output:

Creating Model: Lumina-T2A
CFM: Running in eps-prediction mode
D:\Tests\Lumina-T2X\Lumina-T2X\lumina_music\models\diffusion\component.py:9: UserWarning: Cannot import apex RMSNorm, switch to vanilla implementation
  warnings.warn("Cannot import apex RMSNorm, switch to vanilla implementation")
theta 10000.0 rope scaling 1.0 ntk 1.0
-------------------------------- successfully init! --------------------------------
DiffusionWrapper has 197.94 M params.
downsample rates is 2
upsample rates is 2
Process Process-1:
Traceback (most recent call last):
  File "D:\Python\lib\multiprocessing\process.py", line 314, in _bootstrap
    self.run()
  File "D:\Python\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\lumina_music\demo_music.py", line 121, in model_main
    model = load_model_from_config(config, args.ckpt)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\lumina_music\demo_music.py", line 18, in load_model_from_config
    model = instantiate_from_config(config.model)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\lumina_music\models\util.py", line 116, in instantiate_from_config
    return get_obj_from_str(config["target"], reload=reload)(**config.get("params", dict()))
  File "D:\Tests\Lumina-T2X\Lumina-T2X\lumina_music\models\diffusion\ddpm_audio.py", line 998, in __init__
    super(CFM, self).__init__(**kwargs)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\lumina_music\models\diffusion\ddpm_audio.py", line 73, in __init__
    self.instantiate_first_stage(first_stage_config)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\lumina_music\models\diffusion\ddpm_audio.py", line 132, in instantiate_first_stage
    model = instantiate_from_config(config)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\lumina_music\models\util.py", line 116, in instantiate_from_config
    return get_obj_from_str(config["target"], reload=reload)(**config.get("params", dict()))
  File "D:\Tests\Lumina-T2X\Lumina-T2X\lumina_music\models\autoencoder1d.py", line 46, in __init__
    self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\lumina_music\models\autoencoder1d.py", line 49, in init_from_ckpt
    sd = torch.load(path, map_location="cpu")["state_dict"]
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\serialization.py", line 998, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\serialization.py", line 445, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\serialization.py", line 426, in __init__
    super().__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'D:\\test\\Lumina-T2X\\Lumina-T2X\\models\\Lumina-T2Music'

At that point the script seems to hang; normally a Python script would exit back to the command line. I press Ctrl-C to continue and get this:

Traceback (most recent call last):
  File "D:\Tests\Lumina-T2X\Lumina-T2X\lumina_music\demo_music.py", line 392, in <module>
    main()
  File "D:\Tests\Lumina-T2X\Lumina-T2X\lumina_music\demo_music.py", line 386, in main
    mp_barrier.wait()
  File "D:\Python\lib\threading.py", line 668, in wait
    self._wait(timeout)
  File "D:\Python\lib\threading.py", line 703, in _wait
    if not self._cond.wait_for(lambda : self._state != 0, timeout):
  File "D:\Python\lib\multiprocessing\synchronize.py", line 313, in wait_for
    self.wait(waittime)
  File "D:\Python\lib\multiprocessing\synchronize.py", line 261, in wait
    return self._wait_semaphore.acquire(True, timeout)
KeyboardInterrupt
^CTerminate batch job (Y/N)?

Trying alternate paths in the yaml gives similar errors, e.g.

FileNotFoundError: [Errno 2] No such file or directory: 'D:\\test\\Lumina-T2X\\Lumina-T2X\\models\\Lumina-T2Music\\maa2\\'
FileNotFoundError: [Errno 2] No such file or directory: 'D:\\test\\Lumina-T2X\\Lumina-T2X\\models\\Lumina-T2Music\\maa2\\maa2.ckpt'

@SoftologyPro (Author) commented Jun 20, 2024

OK, working. As always, once you post the issue it is fixed a minute later :)
Working ckpt path is
ckpt_path: ../models/Lumina-T2Music/maa2/maa2.ckpt

So the main cause was the `from ..util import instantiate_from_config` line.

@SoftologyPro (Author)

Another issue further on: after getting past the above, the script downloads a few models but then fails with a permission denied. Output from demo_music.py:

Creating Model: Lumina-T2A
CFM: Running in eps-prediction mode
D:\Tests\Lumina-T2X\Lumina-T2X\lumina_music\models\diffusion\component.py:9: UserWarning: Cannot import apex RMSNorm, switch to vanilla implementation
  warnings.warn("Cannot import apex RMSNorm, switch to vanilla implementation")
theta 10000.0 rope scaling 1.0 ntk 1.0
-------------------------------- successfully init! --------------------------------
DiffusionWrapper has 197.94 M params.
downsample rates is 2
upsample rates is 2
AutoencoderKL Restored from ../models/Lumina-T2Music/maa2/maa2.ckpt Done
You are using the default legacy behaviour of the <class 'transformers.models.t5.tokenization_t5.T5Tokenizer'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Loading model from ..\models\Lumina-T2Music\music_generation
Process Process-1:
Traceback (most recent call last):
  File "D:\Python\lib\multiprocessing\process.py", line 314, in _bootstrap
    self.run()
  File "D:\Python\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\lumina_music\demo_music.py", line 121, in model_main
    model = load_model_from_config(config, args.ckpt)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\lumina_music\demo_music.py", line 21, in load_model_from_config
    pl_sd = torch.load(ckpt, map_location="cpu")
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\serialization.py", line 998, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\serialization.py", line 445, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\serialization.py", line 426, in __init__
    super().__init__(open(name, mode))
PermissionError: [Errno 13] Permission denied: '..\\models\\Lumina-T2Music\\music_generation'

Same error even if I run it from an administrator command prompt, and I made sure the directory is not read-only.
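(Admin rights would not help here: per the traceback, torch.load ends at a plain open() call, and --ckpt above points at the music_generation directory rather than a .ckpt file. On Windows, opening a directory raises PermissionError, where Linux would raise IsADirectoryError. A minimal sketch of that behaviour:)

    # Why "Permission denied" despite admin rights: the path is a directory.
    open(r"..\models\Lumina-T2Music\music_generation", "rb")
    # Windows: PermissionError: [Errno 13] Permission denied: '..\\models\\...'
    # Linux:   IsADirectoryError: [Errno 21] Is a directory: '../models/...'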

@PommesPeter (Contributor)

Could I see your yaml file?

@SoftologyPro (Author)

model:
  base_learning_rate: 3.0e-06
  target: models.diffusion.ddpm_audio.CFM
  params:
    linear_start: 0.00085
    linear_end: 0.012
    num_timesteps_cond: 1
    log_every_t: 200
    timesteps: 1000
    first_stage_key: image
    cond_stage_key: caption
    mel_dim: 20
    mel_length: 256
    channels: 0
    cond_stage_trainable: True
    conditioning_key: crossattn
    monitor: val/loss_simple_ema
    scale_by_std: true
    use_ema: false
    scheduler_config:
      target: models.lr_scheduler.LambdaLinearScheduler
      params:
        warm_up_steps:
        - 10000
        cycle_lengths:
        - 10000000000000
        f_start:
        - 1.0e-06
        f_max:
        - 1.0
        f_min:
        - 1.0
    unet_config:
      target: models.diffusion.flag_large_dit.FlagDiTv2
      params:
        in_channels: 20
        context_dim: 1024
        hidden_size: 768
        num_heads: 32
        depth: 16
        max_len: 1000

    first_stage_config:
      target: models.autoencoder1d.AutoencoderKL
      params:
        embed_dim: 20
        monitor: val/rec_loss
        ckpt_path: ../models/Lumina-T2Music/maa2/maa2.ckpt
        ddconfig:
          double_z: true
          in_channels: 80
          out_ch: 80
          z_channels: 20
          kernel_size: 5
          ch: 384
          ch_mult:
          - 1
          - 2
          - 4
          num_res_blocks: 2
          attn_layers:
          - 3
          down_layers:
          - 0
          dropout: 0.0
        lossconfig:
          target: torch.nn.Identity
    cond_stage_config:
        target: models.encoders.modules.FrozenFLANEmbedder

test_dataset:
  target: data.joinaudiodataset_struct_sample_anylen.TestManifest
  params:
    manifest: ./musiccaps_test_16000_struct.tsv
    spec_crop_len: 624

@SoftologyPro (Author)

This is under Windows too, if that makes a difference.
I was able to get your other three image generation Gradio demos working fine.

@PommesPeter (Contributor)

I will try to reproduce on Windows. Let me check.

@PommesPeter (Contributor)

> I will try to reproduce on Windows. Let me check.

We cannot guarantee that it will work properly on Windows; it is best to use Linux. We will release a tutorial for using it on Windows after we have tested it.

@PommesPeter (Contributor) commented Jun 21, 2024

Hi @SoftologyPro,

In your case, you can use an absolute path in the yaml file.

My case: [screenshot]

@SoftologyPro (Author) commented Jun 21, 2024

What format should the full path be, and which file/line needs to be changed?
I tried changing D:\Tests\Lumina-T2X\Lumina-T2X\lumina_music\configs\lumina-text2music.yaml to
ckpt_path: D:\Tests\Lumina-T2X\Lumina-T2X\models\Lumina-T2Music\maa2\maa2.ckpt
and got the same error about permissions:

Creating Model: Lumina-T2A
CFM: Running in eps-prediction mode
D:\Tests\Lumina-T2X\Lumina-T2X\lumina_music\models\diffusion\component.py:9: UserWarning: Cannot import apex RMSNorm, switch to vanilla implementation
  warnings.warn("Cannot import apex RMSNorm, switch to vanilla implementation")
theta 10000.0 rope scaling 1.0 ntk 1.0
-------------------------------- successfully init! --------------------------------
DiffusionWrapper has 197.94 M params.
downsample rates is 2
upsample rates is 2
AutoencoderKL Restored from D:\Tests\Lumina-T2X\Lumina-T2X\models\Lumina-T2Music\maa2\maa2.ckpt Done
You are using the default legacy behaviour of the <class 'transformers.models.t5.tokenization_t5.T5Tokenizer'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Loading model from ..\models\Lumina-T2Music\music_generation
Process Process-1:
Traceback (most recent call last):
  File "D:\Python\lib\multiprocessing\process.py", line 314, in _bootstrap
    self.run()
  File "D:\Python\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\lumina_music\demo_music.py", line 121, in model_main
    model = load_model_from_config(config, args.ckpt)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\lumina_music\demo_music.py", line 21, in load_model_from_config
    pl_sd = torch.load(ckpt, map_location="cpu")
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\serialization.py", line 998, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\serialization.py", line 445, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\serialization.py", line 426, in __init__
    super().__init__(open(name, mode))
PermissionError: [Errno 13] Permission denied: '..\\models\\Lumina-T2Music\\music_generation'

Notice the error still points to the relative path (Loading model from ..\models\Lumina-T2Music\music_generation), so am I editing the wrong yaml config file?

@PommesPeter (Contributor) commented Jun 21, 2024

Modify --ckpt to use an absolute path to the checkpoint file, like C:\Users\xxxxxx\Lumina-T2X\ckpt\music_generation\119.ckpt
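That fix matches the earlier traceback: torch.load was being handed the music_generation directory, and opening a directory is what produced Errno 13 on Windows. A defensive sketch (a hypothetical helper, not code from demo_music.py) that resolves a directory argument to the checkpoint file inside it:

    import os

    def resolve_ckpt(path):
        """If --ckpt points at a directory, return the first .ckpt file inside it."""
        if os.path.isdir(path):
            candidates = sorted(f for f in os.listdir(path) if f.endswith(".ckpt"))
            if not candidates:
                raise FileNotFoundError(f"no .ckpt file found in {path}")
            return os.path.join(path, candidates[0])
        return path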

@SoftologyPro (Author)

OK, that got the UI launching.
But then when I type a prompt and click submit I get all this...

To create a public link, set `share=True` in `launch()`.
> params: {
  "cap": "an uplifting trance melody",
  "num_sampling_steps": 40,
  "cfg_scale": 5,
  "solver": "euler",
  "seed": 100
}
D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torchdyn\numerics\odeint.py:83: UserWarning: Setting tolerances has no effect on fixed-step methods
  warn("Setting tolerances has no effect on fixed-step methods")
D:\Tests\Lumina-T2X\Lumina-T2X\lumina_music\models\diffusion\flag_large_dit.py:298: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:263.)
  F.scaled_dot_product_attention(
Traceback (most recent call last):
  File "D:\Tests\Lumina-T2X\Lumina-T2X\lumina_music\demo_music.py", line 155, in model_main
    samples_ddim = generator.gen_test_sample(cap, num_sampling_steps, cfg_scale, solver)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\lumina_music\demo_music.py", line 81, in gen_test_sample
    sample, _ = self.model.sample_cfg(
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\lumina_music\models\diffusion\ddpm_audio.py", line 1109, in sample_cfg
    eval_points, traj = neural_ode(x0, t_span)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torchdyn\core\neuralde.py", line 94, in forward
    t_eval, sol =  super().forward(x, t_span, save_at, args)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torchdyn\core\problems.py", line 89, in forward
    return self.odeint(x, t_span, save_at, args)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torchdyn\core\problems.py", line 85, in odeint
    return self._autograd_func()(self.vf_params, x, t_span, save_at, args)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\autograd\function.py", line 553, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torchdyn\numerics\sensitivity.py", line 38, in forward
    t_sol, sol = generic_odeint(problem_type, vf, x, t_span, solver, atol, rtol, interpolator, B,
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torchdyn\numerics\sensitivity.py", line 24, in generic_odeint
    return odeint(vf, x, t_span, solver, atol=atol, rtol=rtol, interpolator=interpolator, return_all_eval=return_all_eval,
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torchdyn\numerics\odeint.py", line 85, in odeint
    return _fixed_odeint(f_, x, t_span, solver, save_at=save_at, args=args)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torchdyn\numerics\odeint.py", line 428, in _fixed_odeint
    _, x, _ = solver.step(f, x, t, dt, k1=None, args=args)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torchdyn\numerics\solvers\ode.py", line 69, in step
    if k1 == None: k1 = f(t, x)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torchdyn\core\defunc.py", line 77, in forward
    else: x = self.vf(t, x, args=args)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\lumina_music\models\diffusion\ddpm_audio.py", line 1162, in forward
    e_t_uncond, e_t = self.net.apply_model(x_in, t_in, c_in).chunk(2)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\lumina_music\models\diffusion\ddpm_audio.py", line 466, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\lumina_music\models\diffusion\ddpm.py", line 1604, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\lumina_music\models\diffusion\flag_large_dit.py", line 574, in forward
    x = block(x, mask, context, cap_mask, self.freqs_cis[: x.size(1)], adaln_input=adaln_input)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\lumina_music\models\diffusion\flag_large_dit.py", line 443, in forward
    out = h + gate_mlp.unsqueeze(1) * self.feed_forward(
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\lumina_music\models\diffusion\flag_large_dit.py", line 372, in forward
    return self.w2(self._forward_silu_gating(self.w1(x), self.w3(x)))
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_dynamo\eval_frame.py", line 489, in _fn
    return fn(*args, **kwargs)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_dynamo\eval_frame.py", line 655, in catch_errors
    return callback(frame, cache_entry, hooks, frame_state)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 727, in _convert_frame
    result = inner_convert(frame, cache_entry, hooks, frame_state)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 383, in _convert_frame_assert
    compiled_product = _compile(
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 646, in _compile
    guarded_code = compile_inner(code, one_graph, hooks, transform)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_dynamo\utils.py", line 244, in time_wrapper
    r = func(*args, **kwargs)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 562, in compile_inner
    out_code = transform_code_object(code, transform)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_dynamo\bytecode_transformation.py", line 1033, in transform_code_object
    transformations(instructions, code_options)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 151, in _fn
    return fn(*args, **kwargs)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 527, in transform
    tracer.run()
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 2128, in run
    super().run()
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 818, in run
    and self.step()
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 781, in step
    getattr(self, inst.opname)(inst)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 2243, in RETURN_VALUE
    self.output.compile_subgraph(
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 919, in compile_subgraph
    self.compile_and_call_fx_graph(tx, list(reversed(stack_values)), root)
  File "D:\Python\lib\contextlib.py", line 79, in inner
    return func(*args, **kwds)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 1087, in compile_and_call_fx_graph
    compiled_fn = self.call_user_compiler(gm)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_dynamo\utils.py", line 244, in time_wrapper
    r = func(*args, **kwargs)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 1159, in call_user_compiler
    raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 1140, in call_user_compiler
    compiled_fn = compiler_fn(gm, self.example_inputs())
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_dynamo\repro\after_dynamo.py", line 117, in debug_wrapper
    compiled_gm = compiler_fn(gm, example_inputs)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\__init__.py", line 1668, in __call__
    return compile_fx(model_, inputs_, config_patches=self.config)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_inductor\compile_fx.py", line 1168, in compile_fx
    return aot_autograd(
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_dynamo\backends\common.py", line 55, in compiler_fn
    cg = aot_module_simplified(gm, example_inputs, **kwargs)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_functorch\aot_autograd.py", line 887, in aot_module_simplified
    compiled_fn = create_aot_dispatcher_function(
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_dynamo\utils.py", line 244, in time_wrapper
    r = func(*args, **kwargs)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_functorch\aot_autograd.py", line 600, in create_aot_dispatcher_function
    compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_functorch\_aot_autograd\runtime_wrappers.py", line 425, in aot_wrapper_dedupe
    return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_functorch\_aot_autograd\runtime_wrappers.py", line 630, in aot_wrapper_synthetic_base
    return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_functorch\_aot_autograd\jit_compile_runtime_wrappers.py", line 97, in aot_dispatch_base
    compiled_fw = compiler(fw_module, updated_flat_args)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_dynamo\utils.py", line 244, in time_wrapper
    r = func(*args, **kwargs)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_inductor\compile_fx.py", line 1100, in fw_compiler_base
    return inner_compile(
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_dynamo\repro\after_aot.py", line 83, in debug_wrapper
    inner_compiled_fn = compiler_fn(gm, example_inputs)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_inductor\debug.py", line 305, in inner
    return fn(*args, **kwargs)
  File "D:\Python\lib\contextlib.py", line 79, in inner
    return func(*args, **kwds)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_inductor\compile_fx.py", line 320, in compile_fx_inner
    compiled_graph = fx_codegen_and_compile(
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_inductor\compile_fx.py", line 550, in fx_codegen_and_compile
    compiled_fn = graph.compile_to_fn()
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_inductor\graph.py", line 1116, in compile_to_fn
    return self.compile_to_module().call
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_dynamo\utils.py", line 244, in time_wrapper
    r = func(*args, **kwargs)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_inductor\graph.py", line 1066, in compile_to_module
    self.codegen_with_cpp_wrapper() if self.cpp_wrapper else self.codegen()
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_inductor\graph.py", line 1041, in codegen
    self.scheduler = Scheduler(self.buffers)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_dynamo\utils.py", line 244, in time_wrapper
    r = func(*args, **kwargs)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_inductor\scheduler.py", line 1198, in __init__
    self.nodes = [self.create_scheduler_node(n) for n in nodes]
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_inductor\scheduler.py", line 1198, in <listcomp>
    self.nodes = [self.create_scheduler_node(n) for n in nodes]
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_inductor\scheduler.py", line 1289, in create_scheduler_node
    group_fn = self.get_backend(node.get_device()).group_fn
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_inductor\scheduler.py", line 2154, in get_backend
    self.backends[device] = self.create_backend(device)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\torch\_inductor\scheduler.py", line 2146, in create_backend
    raise RuntimeError(
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
RuntimeError: Cannot find a working triton installation. More information on installing Triton can be found at https://github.com/openai/triton

Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information


You can suppress this exception and fall back to eager by setting:
    import torch._dynamo
    torch._dynamo.config.suppress_errors = True


Traceback (most recent call last):
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\gradio\queueing.py", line 532, in process_events
    response = await route_utils.call_process_api(
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\gradio\route_utils.py", line 276, in call_process_api
    output = await app.get_blocks().process_api(
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\gradio\blocks.py", line 1928, in process_api
    result = await self.call_function(
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\gradio\blocks.py", line 1514, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 2177, in run_sync_in_worker_thread
    return await future
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 859, in run
    result = context.run(func, *args)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\venv\lib\site-packages\gradio\utils.py", line 832, in wrapper
    response = f(*args, **kwargs)
  File "D:\Tests\Lumina-T2X\Lumina-T2X\lumina_music\demo_music.py", line 367, in on_submit
    audio, metadata = result
TypeError: cannot unpack non-iterable ModelFailure object

Maybe I should just give up for now until you guys have a working Windows version.
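Before giving up entirely: the traceback itself offers a fallback. Suppressing dynamo errors (untested on this setup) makes torch.compile fall back to eager execution instead of requiring a working triton install:

    import torch._dynamo

    # run compiled regions in eager mode when the inductor/triton backend fails;
    # slower, but avoids BackendCompilerFailed on Windows
    torch._dynamo.config.suppress_errors = True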

@PommesPeter (Contributor)

Following the error information, you should install triton>=2.2.0 on your Windows machine. I'm trying to install it as well.

@SoftologyPro (Author) commented Jun 21, 2024

That is a problem: pip install triton fails on Windows. I do have a WHL for triton 2.1.0, but that gives other errors.
I will see if I can build a WHL from source for the latest Triton.

@SoftologyPro (Author) commented Jun 21, 2024

Trying to build the latest triton from their GitHub repo (https://github.com/triton-lang/triton) fails too, with a 404 on a required file.

D:\Tests\triton\python>python setup.py sdist bdist_wheel
downloading and extracting https://anaconda.org/nvidia/cuda-nvcc/12.4.99/download/linux-AMD64/cuda-nvcc-12.4.99-0.tar.bz2 ...
Traceback (most recent call last):
  File "D:\Tests\triton\python\setup.py", line 439, in <module>
    download_and_copy(
  File "D:\Tests\triton\python\setup.py", line 268, in download_and_copy
    file = tarfile.open(fileobj=open_url(url), mode="r|*")
  File "D:\Tests\triton\python\setup.py", line 199, in open_url
    return urllib.request.urlopen(request, timeout=300)
  File "D:\Python\lib\urllib\request.py", line 216, in urlopen
    return opener.open(url, data, timeout)
  File "D:\Python\lib\urllib\request.py", line 525, in open
    response = meth(req, response)
  File "D:\Python\lib\urllib\request.py", line 634, in http_response
    response = self.parent.error(
  File "D:\Python\lib\urllib\request.py", line 563, in error
    return self._call_chain(*args)
  File "D:\Python\lib\urllib\request.py", line 496, in _call_chain
    result = func(*args)
  File "D:\Python\lib\urllib\request.py", line 643, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 404: Not Found

This is a no-go for now unless I can find a Windows WHL for Triton >=2.2.0 or someone can compile one for Windows.
I raised this issue (triton-lang/triton#4184) but they tend to ignore Windows.
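Pending a Windows wheel, a guard along these lines (a sketch, not code from this repo) would let callers skip torch.compile wherever triton is missing:

    import torch

    try:
        import triton  # noqa: F401
        HAS_TRITON = True
    except ImportError:
        HAS_TRITON = False

    def maybe_compile(model):
        # only opt in to torch.compile when a triton backend is available
        return torch.compile(model) if HAS_TRITON else model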

@PommesPeter (Contributor)

We recommend using it on Linux.

@SoftologyPro (Author)

Well, of course you do :)
But I am trying to add support for Lumina-T2X in Visions of Chaos, which is strictly Windows-only.

@PommesPeter (Contributor)

We will try our best to solve this problem! We're working on merging our code into diffusers. Does Visions of Chaos support diffusers?

@SoftologyPro (Author)

Yes, I have used Diffusers with some other Text-to-Image scripts in the past.
