
[Request] Allow choosing whether a custom Ollama local model supports vision #2493

Closed
MarsSovereign opened this issue May 14, 2024 · 11 comments
Labels
🌠 Feature Request New feature or request | 特性与建议

Comments

@MarsSovereign

🥰 Description of requirements

Local models run by Ollama, such as llava, do support vision, but models loaded from Ollama in LobeChat are currently treated as not supporting vision by default, and there is no option to enable it manually.

🧐 Solution

When using a local Ollama model, let the user choose whether to enable the vision capability (the same applies to other capabilities, such as plugins).

📝 Additional information

No response
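
For context, vision-capable models served by Ollama, such as llava, accept images through Ollama's chat API as base64-encoded strings attached to a message. Below is a minimal sketch of such a request in TypeScript, assuming a default local Ollama instance at http://localhost:11434 with the llava model already pulled; the image path and prompt are placeholders.

```ts
// Minimal sketch: send an image to a vision-capable Ollama model (e.g. llava).
// Assumes Ollama is running on its default port and `ollama pull llava` has
// already been run; the image path is a placeholder.
import { readFile } from "node:fs/promises";

async function describeImage(path: string): Promise<string> {
  // Ollama expects images as base64-encoded strings on the message object.
  const image = (await readFile(path)).toString("base64");

  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llava",
      stream: false,
      messages: [
        {
          role: "user",
          content: "What is in this picture?",
          images: [image],
        },
      ],
    }),
  });

  const data = (await res.json()) as { message: { content: string } };
  return data.message.content;
}

describeImage("./example.png").then(console.log).catch(console.error);
```

The model itself handles the image on the Ollama side; the request here is only about LobeChat exposing a per-model switch so that the frontend offers image input for such models.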

@MarsSovereign MarsSovereign added the 🌠 Feature Request New feature or request | 特性与建议 label May 14, 2024
@lobehubbot
Member

👀 @MarsSovereign

Thank you for raising an issue. We will look into the matter and get back to you as soon as possible.
Please make sure you have given us as much context as possible.


@Luffyzm3D2Y

I've noticed this too: lobe-chat doesn't seem to support the vision-capable models in Ollama. I downloaded Llava through Ollama, but it can't be used in lobe-chat...

@Luffyzm3D2Y

Luffyzm3D2Y commented Jun 20, 2024

Also, I hope that custom models in Ollama, whether they support vision or not, can all be used in Lobe-chat. I'm not sure whether that is currently supported.

@Luffyzm3D2Y

I've noticed this too: lobe-chat doesn't seem to support the vision-capable models in Ollama. I downloaded Llava through Ollama, but it can't be used in lobe-chat...

You can take a look at issue #1351; this problem went away after updating the settings on the frontend.
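
For reference, the kind of per-model capability switch being requested usually amounts to a boolean flag on the model's metadata. The sketch below is a hypothetical illustration of what such an entry could look like; the type and field names (CustomModelCard, id, displayName, vision) are assumptions for illustration and are not taken from LobeChat's actual configuration schema.

```ts
// Hypothetical sketch of a per-model vision flag for custom Ollama models.
// Field names are illustrative assumptions, not LobeChat's real schema.
interface CustomModelCard {
  id: string;          // model tag as known to Ollama, e.g. "llava:13b"
  displayName: string; // name shown in the model picker
  vision: boolean;     // whether the UI should offer image input for this model
}

const customOllamaModels: CustomModelCard[] = [
  { id: "llava", displayName: "LLaVA 7B", vision: true },
  { id: "qwen2", displayName: "Qwen2 7B", vision: false },
];
```

With a flag like this, the frontend can decide per model whether to show the image-upload control, which is the kind of per-model choice the original request asks for.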

@arvinxx
Contributor

arvinxx commented Jun 20, 2024

This is supported.

@arvinxx arvinxx closed this as completed Jun 20, 2024
@lobehubbot
Member

@MarsSovereign

This issue is closed. If you have any questions, you can comment and reply.
