When attempting to run the following piece of Java code, I get a rather non-descriptive error message. Any ideas how to debug this?
I compiled `libwhisper.so` from the latest source code. Running `./server` or `./stream` works just fine.
My code (an excerpt):
```java
// [...]
final WhisperCpp whisper = new WhisperCpp();
try {
    whisper.initContext("base.en");
} catch (FileNotFoundException e) {
    System.err.println("failed to load model file");
    return;
}
// [...]
```
Error:
```
whisper_init_from_file_with_params_no_state: loading model from '/home/ferdinand/.cache/whisper/ggml-base.en.bin'
whisper_init_with_params_no_state: use gpu    = 7
whisper_init_with_params_no_state: flash attn = 0
whisper_init_with_params_no_state: gpu_device = 0
whisper_init_with_params_no_state: dtw        = 192
whisper_model_load: loading model
whisper_model_load: n_vocab       = 51864
whisper_model_load: n_audio_ctx   = 1500
whisper_model_load: n_audio_state = 512
whisper_model_load: n_audio_head  = 8
whisper_model_load: n_audio_layer = 6
whisper_model_load: n_text_ctx    = 448
whisper_model_load: n_text_state  = 512
whisper_model_load: n_text_head   = 8
whisper_model_load: n_text_layer  = 6
whisper_model_load: n_mels        = 80
whisper_model_load: ftype         = 1
whisper_model_load: qntvr         = 0
whisper_model_load: type          = 2 (base)
whisper_model_load: adding 1607 extra tokens
whisper_model_load: n_langs       = 99
whisper_backend_init: using CUDA backend
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3050 Laptop GPU, compute capability 8.6, VMM: yes
whisper_model_load: CUDA0 total size = 147.37 MB
whisper_model_load: model size = 147.37 MB
whisper_backend_init: using CUDA backend
whisper_mel_init: n_len = 3001, n_len_org = 1, n_mel = 80
whisper_init_state: kv self size = 18.87 MB
whisper_init_state: kv cross size = 18.87 MB
whisper_init_state: kv pad size = 3.15 MB
terminate called after throwing an instance of 'std::out_of_range'
  what():  map::at
```