
chatglm3 fails with jsonl input #476

Open
avinashbhat09 opened this issue Jun 6, 2024 · 7 comments

Comments

avinashbhat09 commented Jun 6, 2024

Context

When I try to run chatglm3, it fails.
Command used: (attached as a screenshot in the original issue)

Error:
[ ERROR ] An exception occurred
[ INFO ] Traceback (most recent call last):
  File "C:\Users\Local_Admin\openvino.genai\llm_bench\python\benchmark.py", line 547, in main
    iter_data_list, pretrain_time = CASE_TO_BENCH[model_args['use_case']](model_path, framework, args.device, model_args, args.num_iters)
  File "C:\Users\Local_Admin\openvino.genai\llm_bench\python\benchmark.py", line 210, in run_text_generation_benchmark
    run_text_generation(input_text, num, model, tokenizer, args, iter_data_list, warmup_md5, prompt_idx, bench_hook, model_precision, proc_id)
  File "C:\Users\Local_Admin\openvino.genai\llm_bench\python\benchmark.py", line 107, in run_text_generation
    result = model.generate(**input_data, max_new_tokens=int(max_gen_tokens), num_beams=args['num_beams'], use_cache=True, eos_token_id=None)
  File "C:\Users\Local_Admin\openvino.genai\llm_bench\python\python-env\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Local_Admin\openvino.genai\llm_bench\python\python-env\lib\site-packages\optimum\intel\openvino\modeling_decoder.py", line 642, in generate
    result = super().generate(
  File "C:\Users\Local_Admin\openvino.genai\llm_bench\python\python-env\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Local_Admin\openvino.genai\llm_bench\python\python-env\lib\site-packages\transformers\generation\utils.py", line 1576, in generate
    result = self._greedy_search(
  File "C:\Users\Local_Admin\openvino.genai\llm_bench\python\utils\hook_greedy_search.py", line 234, in new_greedy_search
    model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
  File "C:\Users\Local_Admin\openvino.genai\llm_bench\python\utils\ov_model_classes.py", line 311, in prepare_inputs_for_generation
    mask_positions.append(seq.index(tmp_mask_token))
ValueError: 130000 is not in list
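The crash comes from Python's `list.index`, which raises ValueError when the element is absent: `prepare_inputs_for_generation` in ov_model_classes.py searches the tokenized prompt for mask token id 130000 and finds none, which can happen when the tokenizer emits different token ids than the model class expects. A minimal sketch of the failure mode with a hypothetical guarded lookup (`find_mask_position` and the sample token ids are illustrative, not from the repository):

```python
# Token id 130000 is taken from the traceback above; the sequences below
# are made-up examples, not real chatglm3 tokenizer output.
MASK_TOKEN_ID = 130000

def find_mask_position(seq):
    """Return the position of the mask token, or None when the prompt was
    tokenized without it -- the situation that raises ValueError upstream."""
    try:
        return seq.index(MASK_TOKEN_ID)  # list.index raises ValueError if absent
    except ValueError:
        return None

# A prompt whose tokenization lacks the expected mask token:
print(find_mask_position([64790, 64792, 30910, 13]))   # None

# A prompt that does contain it:
print(find_mask_position([64790, MASK_TOKEN_ID, 30910]))  # 1
```

The upstream code appends the result of `seq.index(...)` directly, so the exception propagates instead of being handled; whether a guard or a tokenizer fix is the right resolution is not settled in this thread.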

What needs to be done?

Not sure.

Example Pull Requests

No response

Resources

Contact points

NA

Ticket

No response

avinashbhat09 added the good first issue (Good for newcomers) label Jun 6, 2024

avinashbhat09 commented Jun 6, 2024

Console log: chatGLM. error.txt

avinashbhat09 commented

The issue is seen with simple prompts as well (the -p CLI option).

peterchen-intel (Collaborator) commented

@avinashbhat09 Is this the same as CVS-136766?

avinashbhat09 commented

@peterchen-intel: Yes, it looks to be the same issue.

peterchen-intel (Collaborator) commented

@avinashbhat09 Can you please provide a few details:
openvino.genai commit id
Hugging Face model commit id, or how you obtained the IR model
conversion command line

avinashbhat09 commented

Current genai commit id: a5b14c7
I got the model from HF: https://huggingface.co/THUDM/chatglm3-6b
Conversion command line: python convert.py --model_id THUDM/chatglm3-6b --output_dir chatglm3-6b_ov --precision FP16 -c INT4_SYM
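For context on the "jsonl input" in the title: the benchmark can take a prompt file where each line is one JSON object. A minimal sketch of writing and reading such a file (the `{"prompt": ...}` schema and the file name are assumptions for illustration, not confirmed by this thread):

```python
import json
import os
import tempfile

# Hypothetical two-prompt jsonl file: one JSON object per line.
records = [
    {"prompt": "What is OpenVINO?"},
    {"prompt": "Summarize the ChatGLM3 architecture."},
]

path = os.path.join(tempfile.gettempdir(), "prompts.jsonl")
with open(path, "w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")  # jsonl: newline-delimited JSON

# Reading it back the way a benchmark loop might:
with open(path, encoding="utf-8") as f:
    prompts = [json.loads(line)["prompt"] for line in f]

print(prompts)
```

Since the issue reproduces with plain -p prompts too, the jsonl parsing itself is unlikely to be the root cause.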

wgzintel (Contributor) commented Jun 14, 2024

@avinashbhat09
genai commit id: 5562025 (master)
openvino: 4fcfaf24b30 (master)
chatglm3-6b runs successfully on both CPU and dGPU on an RPL i9-14900K:
chatglm3_run_on_dGPU.txt
chatglm3_run_on_CPU.txt
Please pip install requirements.txt in genai (commit id: 5562025).

eaidova removed the good first issue (Good for newcomers) label Jun 24, 2024
eaidova changed the title from "[Good First Issue]: chatglm3 fails with jsonl input" to "chatglm3 fails with jsonl input" Jun 24, 2024
Labels: None yet
Project status: Contributors Needed
No branches or pull requests
4 participants