Code completion not working #239

Open
andremald opened this issue May 6, 2024 · 15 comments
Labels: help wanted, question

Comments

@andremald

Hi! I am trying to use the tool, but somehow code completion is not working. The chat functionality works just fine, so I am quite sure I configured the connectors properly. Unfortunately, I couldn't find any logging, so I am not even sure whether the completion request is being sent.

I tried the following models: codegemma, codellama and starcoder (always the FIM version).

The path is /api/generate

Though it is also not working in a host VS Code, I normally work inside a dev container. Hence, I updated the hostname to host.docker.internal.

As I said, the chat functionality works just fine. I am wondering what could be the issue with the code completion one?

@rjmacarthy
Owner

rjmacarthy commented May 6, 2024

Hello,

Please confirm all settings used for the FIM completion provider. Also, please enable the debugging information in the extension settings and tick enable logging, then go to Help -> Toggle Developer Tools inside Visual Studio Code to look for any errors.

Many thanks,

rjmacarthy added the help wanted and question labels May 6, 2024
@andremald
Author

andremald commented May 7, 2024

Hi! While debugging I could see that there is a request being sent when I use the chat functionality. However, nothing shows up in the console for code completion, even when I request it with Option + \ (I am a Mac user).

I also found a
"Problem creating default templates "/root/.twinny/templates""

Could that be the issue? No template -> no FIM?

@rjmacarthy
Owner

Hey, that shouldn't be an issue for FIM, as the templates are built in. Please provide all the provider configuration settings as previously requested.

@cold-eye

cold-eye commented May 7, 2024

[screenshot: FIM request logging in the developer console]

@andremald
Author

andremald commented May 7, 2024

Type: FIM
Fim template: codegemma
Provider: ollama
Protocol: http
Model name: codegemma:2b
Hostname: host.docker.internal
Port: 11434
Path: /api/generate
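For reference, a quick way to check that this exact configuration is reachable from inside the dev container is to POST to the generate endpoint directly. A minimal sketch (Node 18+ with built-in fetch; the codegemma-style FIM prompt tokens are illustrative only):

```typescript
// Probe the Ollama generate endpoint using the settings listed above.
// Model name, host, and prompt are assumptions taken from this thread.
async function probeOllama(): Promise<void> {
  const res = await fetch("http://host.docker.internal:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "codegemma:2b",
      prompt: "<|fim_prefix|>def add(a, b):<|fim_suffix|><|fim_middle|>",
      stream: false,
    }),
  });
  console.log(res.status, await res.json());
}

probeOllama().catch(console.error);
```

A connection error or non-200 status here would point at dev-container networking rather than the extension.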

As I mentioned in the previous message, I don't get a request in the console as you do (based on your photo).

EDIT: out of curiosity I did an ls at /root/.twinny/templates and a cat on the FIM *.hbs files (there were two: fim.hbs and fim-system.hbs).

fim-system.hbs is empty.

fim.hbs contains the following

<PRE>{{{prefix}}} <SUF>{{{suffix}}} <MID>

Hope that rings a bell. I would have expected either more templates in the file or more template files.
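As an aside, a minimal sketch of how a Handlebars template like fim.hbs gets filled in (the placeholder code is illustrative; the triple braces keep the code from being HTML-escaped):

```typescript
import Handlebars from "handlebars";

// Illustrative rendering of a fim.hbs-style template; not twinny's actual code.
const fim = Handlebars.compile("<PRE>{{{prefix}}} <SUF>{{{suffix}}} <MID>");

const prompt = fim({
  prefix: "def add(a, b): ",   // code before the cursor
  suffix: " print(add(1, 2))", // code after the cursor
});
console.log(prompt);
// -> <PRE>def add(a, b):  <SUF> print(add(1, 2)) <MID>
```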

EDIT 2: After being stuck on the train I had the chance to 1) check your repo with more care, and 2) debug a bit further.
With regard to 1: just ignore the message about the *.hbs files. I now understand what you meant by "built-in".

With regard to 2: despite the fact that I don't get any logs about the request being sent, like I do when using the chat functionality, I do get the following.

2024-05-07 23:36:04.369 [info] [KeybindingService]: / Soft dispatching keyboard event
2024-05-07 23:36:04.369 [info] [KeybindingService]: \ Keyboard event cannot be dispatched
2024-05-07 23:36:04.369 [info] [KeybindingService]: / Received  keydown event - modifiers: [alt], code: AltRight, keyCode: 18, key: Alt
2024-05-07 23:36:04.370 [info] [KeybindingService]: | Converted keydown event - modifiers: [alt], code: AltRight, keyCode: 6 ('Alt')
2024-05-07 23:36:04.370 [info] [KeybindingService]: \ Keyboard event cannot be dispatched in keydown phase.
2024-05-07 23:36:04.408 [info] [KeybindingService]: / Soft dispatching keyboard event
2024-05-07 23:36:04.408 [info] [KeybindingService]: | Resolving alt+[Backslash]
2024-05-07 23:36:04.408 [info] [KeybindingService]: \ From 1 keybinding entries, matched editor.action.inlineSuggest.trigger, when: editorTextFocus && !editorReadonly, source: user extension rjmacarthy.twinny.
2024-05-07 23:36:04.408 [info] [KeybindingService]: / Received  keydown event - modifiers: [alt], code: Backslash, keyCode: 220, key: «
2024-05-07 23:36:04.408 [info] [KeybindingService]: | Converted keydown event - modifiers: [alt], code: Backslash, keyCode: 93 ('\')
2024-05-07 23:36:04.408 [info] [KeybindingService]: | Resolving alt+[Backslash]
2024-05-07 23:36:04.409 [info] [KeybindingService]: \ From 1 keybinding entries, matched editor.action.inlineSuggest.trigger, when: editorTextFocus && !editorReadonly, source: user extension rjmacarthy.twinny.
2024-05-07 23:36:04.409 [info] [KeybindingService]: + Invoking command editor.action.inlineSuggest.trigger.
2024-05-07 23:36:04.586 [info] [KeybindingService]: + Ignoring single modifier alt due to it being pressed together with other keys.

Note the line 2024-05-07 23:36:04.409 [info] [KeybindingService]: + Invoking command editor.action.inlineSuggest.trigger.

Hope it rings a bell now. I went through your code, and though I am not a TypeScript programmer, I could follow most of it and it looks alright. I am somewhat clueless.
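For context on what that log line shows: the keybinding only invokes VS Code's inline-suggest machinery; whether an HTTP request follows depends on the registered provider actually returning items. A generic sketch of that API (not twinny's actual implementation):

```typescript
import * as vscode from "vscode";

// Generic sketch of the API behind editor.action.inlineSuggest.trigger.
// If the provider bails out early (a config check, a thrown error), VS Code
// shows nothing and no backend request is ever made.
export function activate(context: vscode.ExtensionContext) {
  context.subscriptions.push(
    vscode.languages.registerInlineCompletionItemProvider(
      { pattern: "**" },
      {
        async provideInlineCompletionItems(document, position) {
          const prefix = document.getText(
            new vscode.Range(new vscode.Position(0, 0), position)
          );
          // A real provider would POST the prefix/suffix to the FIM backend
          // here; returning an empty array silently yields no suggestion.
          return prefix.length > 0
            ? [new vscode.InlineCompletionItem("/* suggestion */")]
            : [];
        },
      }
    )
  );
}
```

So "Invoking command editor.action.inlineSuggest.trigger" with no visible result is consistent with the provider returning early or throwing before any request is sent.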

@rjmacarthy
Owner

I would recommend trying codellama:7b-code to see if it works.

@andremald
Author

andremald commented May 8, 2024

Just gave it a try, still nothing:

Settings:
[screenshot: Screenshot 2024-05-08 at 23 28 38]
Edit: obviously with the hostname replaced by localhost

Console after a successful call to the chat API and several "Option + \" presses in a Python file:
[screenshot: Screenshot 2024-05-08 at 23 33 52]

@dishbrains

dishbrains commented May 25, 2024

I have the same problem as you, andremald, i.e. chat is working fine but FIM does nothing.

Anyway, any input on how to fix this would be appreciated. This otherwise great extension is not usable for me like this.

@oregonpillow

Same problem here. All settings correct. 13b or 7b, doesn't matter. Only chat seems to work. I see the robot icon loading when I start coding, but no autocomplete prompts ever show up.

@localbarrage

Failing for me too. I can see the message being received by the provider, but no response and no error. I am using Aphrodite's OpenAI API server. I have tried different providers, yet none give a response.

@localbarrage

My issue might be partly related to there not being an actual supported OpenAI provider. I set up a LiteLLM proxy to forward to my model and I am still not getting any completions.

@hitzhangjie

+1

@jleivo

jleivo commented Jun 14, 2024

Hi.

I have the same issue: chat works, FIM doesn't, no matter what I do in the configuration.
Setup: Ollama on a separate server, coding done within WSL => twinny is in WSL

I was looking at the developer tools, as suggested, and while writing in VS Code I saw this in the developer console:

ERR memory access out of bounds: RuntimeError: memory access out of bounds
at wasm://wasm/000bc226:wasm-function[254]:0x2b979
at Parser.parse (/home/juleivo/.vscode-server/extensions/rjmacarthy.twinny-3.11.39/out/index.js:2:218649)
at t.CompletionProvider.provideInlineCompletionItems (/home/juleivo/.vscode-server/extensions/rjmacarthy.twinny-3.11.39/out/index.js:2:123675)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async B.provideInlineCompletions (/home/juleivo/.vscode-server/bin/dc96b837cf6bb4af9cd736aa3af08cf8279f7685/out/vs/workbench/api/node/extensionHostProcess.js:155:108949)

After receiving this error I went into WSL, to ~/.vscode, and deleted all twinny-related folders. Started VS Code and installed a fresh copy of twinny. Now it works. I had twinny 3.11.10 and 3.11.31 on the host; now I have 3.11.39 and all is good again. I'll repeat this on my work computer later on to see if this fixes the issue there too...
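The stack trace above points at a tree-sitter wasm parser crashing inside provideInlineCompletionItems, which would explain FIM failing silently while chat keeps working. A hypothetical defensive pattern around such a call (illustrative only, not twinny's actual code):

```typescript
// Hypothetical guard around a wasm parser call like the one in the trace.
// A stale or corrupted wasm artifact (e.g. left over from a partial update)
// can throw "memory access out of bounds"; failing soft keeps the editor
// usable instead of letting the completion provider die silently.
function safeParse<T>(parse: (source: string) => T, source: string): T | undefined {
  try {
    return parse(source);
  } catch (err) {
    console.error("completion parser failed; skipping suggestion", err);
    return undefined;
  }
}
```

Deleting the extension folders and reinstalling, as described above, replaces the wasm artifact, which is consistent with the reinstall fixing it.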

@NeoMatrixJR

NeoMatrixJR commented Jun 14, 2024

Same issues here. Ollama is running in Docker on an external server. Chat works, no FIM. I have tried other extensions (Continue at least) and get FIM (not good, but it at least does something), so I know it's not an issue with Ollama.
EDIT:
I dropped back to a 3.10.* version, kept everything as stock as possible, installed the default codellama models, and set it to the IP of my server... now it seems to work. I'll try to tweak it later to see whether it's a plugin version issue, a model issue... ???

@KizzyCode

KizzyCode commented Jun 17, 2024

Same issue. OS is macOS, provider is Ollama with starcoder:3b. Ollama gets the request and does the computation, but whatever is computed does not show up in VS Code...

What's weird is that some time ago the extension worked flawlessly, and I made no manual changes except installing auto-updates.
