
\Lumina-Next-T2I\config.json' is not a valid JSON file. #74

Open
danieldietzel opened this issue Jun 18, 2024 · 3 comments

Comments

@danieldietzel

I tried following the instructions here:

https://huggingface.co/Alpha-VLLM/Lumina-Next-T2I

I installed the repo via the Hugging Face CLI and via GitHub; in both cases I get this error:

[rank0]: OSError: It looks like the config file at '........\Lumina-T2X\Lumina-Next-T2I\config.json' is not a valid JSON file.

Using this command:

lumina_next infer -c "lumina_next_t2i/configs/infer/settings.yaml" "a snowman" "./outputs"

In my settings.yaml I've tried the local path, the Hugging Face path, and the repo download path.
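As a quick sanity check (not part of the original thread), a short Python snippet can show whether a downloaded config.json actually parses as JSON and, if not, where parsing fails. The `check_json` helper is hypothetical; one common failure mode it catches is a Git LFS pointer file downloaded in place of the real JSON:

```python
import json
import os
import tempfile

def check_json(path):
    """Return None if the file parses as JSON, else a short error description."""
    try:
        with open(path, "r", encoding="utf-8") as f:
            json.load(f)
        return None
    except json.JSONDecodeError as e:
        # JSONDecodeError carries the exact position where parsing failed.
        return f"line {e.lineno}, col {e.colno}: {e.msg}"

# Example: a Git LFS pointer file is plain text, so it fails the check.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    f.write("version https://git-lfs.github.com/spec/v1\n")
    pointer = f.name
print(check_json(pointer))  # reports where JSON parsing failed
os.remove(pointer)
```

Running this against the checkpoint directory's config.json would distinguish a truly corrupt file from a path problem (a missing file raises FileNotFoundError instead).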

@PommesPeter
Contributor

Hi @danieldietzel,
Could you provide your YAML config? We will help you check its correctness.

@danieldietzel
Author

danieldietzel commented Jun 18, 2024

Hi @danieldietzel, Could you provide your YAML config? We will help you check its correctness.

Hi Pommes,

I ran through the steps again and realized the second item in this config is supposed to be the LLM checkpoint, not the Lumina model.

https://github.com/Alpha-VLLM/Lumina-T2X/blob/main/lumina_next_t2i/configs/infer/settings.yaml

Now my YAML is:

settings:

  model:
    ckpt: 'C:\models\lumina'
    ckpt_lm: 'C:\models\gemma'
    token: ""

  transport:
    path_type: "Linear" # option: ["Linear", "GVP", "VP"]
    prediction: "velocity" # option: ["velocity", "score", "noise"]
    loss_weight: "velocity" # option: [None, "velocity", "likelihood"]
    sample_eps: 0.1
    train_eps: 0.2

  ode:
    atol: 1e-6 # Absolute tolerance
    rtol: 1e-3 # Relative tolerance
    reverse: false # option: true or false
    likelihood: false # option: true or false

  infer:
    resolution: "1024x1024" # option: ["1024x1024", "512x2048", "2048x512", "(Extrapolation) 1664x1664", "(Extrapolation) 1024x2048", "(Extrapolation) 2048x1024"]
    num_sampling_steps: 60 # range: 1-1000
    cfg_scale: 4. # range: 1-20
    solver: "euler" # option: ["euler", "dopri5", "dopri8"]
    t_shift: 4 # range: 1-20 (int only)
    ntk_scaling: true # option: true or false
    proportional_attn: true # option: true or false
    seed: 0 # range: any number

But I get this:
TypeError: NextDiT.forward_with_cfg() got an unexpected keyword argument 'ntk_factor'
[rank0]: AttributeError: 'NoneType' object has no attribute 'float'

I am on Windows, by the way, if that helps. I had to change all dist.init_process_group("nccl") calls to dist.init_process_group("gloo") to get this far; not sure if that breaks things.
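For context on why that change is needed: the NCCL backend requires Linux and CUDA, while Gloo is the portable fallback and the only option on Windows. A minimal sketch of choosing the backend by platform (the `pick_backend` helper is hypothetical, not part of the Lumina codebase):

```python
import sys

def pick_backend(cuda_available: bool) -> str:
    # NCCL requires Linux + CUDA; Gloo works on Windows and CPU-only setups.
    if sys.platform.startswith("linux") and cuda_available:
        return "nccl"
    return "gloo"

# The hard-coded dist.init_process_group("nccl") calls could then become:
#   dist.init_process_group(pick_backend(torch.cuda.is_available()))
print(pick_backend(cuda_available=False))  # -> gloo
```

Whether the training/inference code behaves identically under Gloo is a separate question, as the maintainer notes below.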

@PommesPeter
Contributor

This may not impact performance, but we have not tested whether it runs correctly on the Gloo backend. You could try running the mini version of Lumina-Next-T2I at https://github.com/Alpha-VLLM/Lumina-T2X/tree/main/lumina_next_t2i_mini
