chatglm3 fails with jsonl input #476
Comments
console log: chatGLM.error.txt
Issue is seen with simple prompts as well (-p on the CLI).
@avinashbhat09 Is this the same as CVS-136766?
@peterchen-intel: Yes, it looks to be the same issue.
@avinashbhat09 Can you please provide more details?
Current genai commit id: a5b14c7
@avinashbhat09 |
Context
When I try to run chatglm3, it fails.
Command used:
error:
[ ERROR ] An exception occurred
[ INFO ] Traceback (most recent call last):
File "C:\Users\Local_Admin\openvino.genai\llm_bench\python\benchmark.py", line 547, in main
iter_data_list, pretrain_time = CASE_TO_BENCH[model_args['use_case']](model_path, framework, args.device, model_args, args.num_iters)
File "C:\Users\Local_Admin\openvino.genai\llm_bench\python\benchmark.py", line 210, in run_text_generation_benchmark
run_text_generation(input_text, num, model, tokenizer, args, iter_data_list, warmup_md5, prompt_idx, bench_hook, model_precision, proc_id)
File "C:\Users\Local_Admin\openvino.genai\llm_bench\python\benchmark.py", line 107, in run_text_generation
result = model.generate(**input_data, max_new_tokens=int(max_gen_tokens), num_beams=args['num_beams'], use_cache=True, eos_token_id=None)
File "C:\Users\Local_Admin\openvino.genai\llm_bench\python\python-env\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Users\Local_Admin\openvino.genai\llm_bench\python\python-env\lib\site-packages\optimum\intel\openvino\modeling_decoder.py", line 642, in generate
result = super().generate(
File "C:\Users\Local_Admin\openvino.genai\llm_bench\python\python-env\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Users\Local_Admin\openvino.genai\llm_bench\python\python-env\lib\site-packages\transformers\generation\utils.py", line 1576, in generate
result = self._greedy_search(
File "C:\Users\Local_Admin\openvino.genai\llm_bench\python\utils\hook_greedy_search.py", line 234, in new_greedy_search
model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
File "C:\Users\Local_Admin\openvino.genai\llm_bench\python\utils\ov_model_classes.py", line 311, in prepare_inputs_for_generation
mask_positions.append(seq.index(tmp_mask_token))
ValueError: 130000 is not in list
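The crash itself is straightforward to reproduce in isolation: `list.index` raises `ValueError` whenever the requested element is absent, so `prepare_inputs_for_generation` fails as soon as the tokenized sequence does not contain the expected mask token id (130000 in this trace). A minimal sketch of the failure mode and a defensive guard (the token ids below are illustrative, and the guard is a hypothetical workaround, not the project's fix):

```python
# Minimal reproduction of the failure seen in prepare_inputs_for_generation:
# list.index raises ValueError when the element is not present.
seq = [64790, 64792, 30910]  # example token ids WITHOUT the mask token
tmp_mask_token = 130000      # mask token id from the traceback

try:
    seq.index(tmp_mask_token)
except ValueError as e:
    print(e)  # -> "130000 is not in list"

# A defensive variant: only record a position if the token is present,
# so generation can proceed (or fail with a clearer message) instead.
mask_positions = []
if tmp_mask_token in seq:
    mask_positions.append(seq.index(tmp_mask_token))
print(mask_positions)  # -> []
```

This suggests the model's tokenizer is not emitting the mask token the `ov_model_classes.py` wrapper expects, e.g. because the tokenizer and the hard-coded token id are out of sync for this chatglm3 checkpoint.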
What needs to be done?
Not sure.
Example Pull Requests
No response
Resources
Contact points
NA
Ticket
No response