
Project Introduction

Build Spring Boot AI applications using LangChain's ideas.

🚩 This project does not involve fine-tuning or training models; it only uses LLMs as the technical base to implement the relevant features. Please refer to each model's official documentation for how to use it.

Requirements

  • Java 17+
  • MySQL 5.7
  • Docker
  • Python (only when running a local LLM)

Quick Start

  1. Clone the repository
git clone https://github.com/hkh1012/langchain-springboot.git
  2. Run the SQL script in the file
init-script/db.sql
  3. Install a local vector db (weaviate or milvus) with Docker
docker-compose up -d
  4. Configure the LLM access token, or run a local LLM

In application-dev.properties:

openai.token=sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

or run chatglm2:

python .\openai_api.py
  5. Run the Spring Boot application with the dev profile (a consolidated configuration sketch follows this list)
spring.profiles.active=dev
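
For reference, a minimal application-dev.properties for the OpenAI + weaviate path of this quick start could look like the sketch below. Every key is explained in the sections that follow; the token and host values are placeholders.

openai.token=sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
chain.llm.type=openai
chain.llm.openai.token=${openai.token}
chain.llm.openai.model=gpt-3.5-turbo-16k-0613
chain.vectorization.type=openai
chain.vectorization.openai.token=${openai.token}
chain.vectorization.openai.model=text-embedding-ada-002
chain.vector.store.type=weaviate
chain.vector.store.weaviate.protocol=http
chain.vector.store.weaviate.host=127.0.0.1:8080
chain.vector.store.weaviate.classname=LocalKnowledge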

Advanced configuration

OpenAI API proxy config

This project supports two ways to proxy the OpenAI API.

First: system proxy

proxy.enable=true
proxy.mode=socket
proxy.socket.host=127.0.0.1
proxy.socket.port=26001

Second: base URL

proxy.enable=true
proxy.mode=http
chain.llm.openai.baseurl=http://192.168.1.100:8080/

If you do not need a proxy, set proxy.enable to false:

proxy.enable=false
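
For illustration, the sketch below shows how the two proxy modes typically map onto an HTTP client. It is a minimal sketch assuming an OkHttp-based client, not the project's actual wiring.

// Illustration only (not the project's actual code): what proxy.mode=socket and
// proxy.mode=http usually mean for the underlying HTTP client, assuming OkHttp.
import java.net.InetSocketAddress;
import java.net.Proxy;
import okhttp3.OkHttpClient;

public class ProxyModeExample {

    // proxy.mode=socket: tunnel requests to api.openai.com through a local SOCKS proxy
    // given by proxy.socket.host and proxy.socket.port.
    static OkHttpClient socketProxyClient(String host, int port) {
        return new OkHttpClient.Builder()
                .proxy(new Proxy(Proxy.Type.SOCKS, new InetSocketAddress(host, port)))
                .build();
    }

    // proxy.mode=http: no JVM-level proxy; requests are simply sent to the configured
    // chain.llm.openai.baseurl (for example a reverse proxy) instead of https://api.openai.com/.
    static OkHttpClient plainClient() {
        return new OkHttpClient();
    }
}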

LLMs configuration

This project currently supports three types of LLM: openai, chatglm2 and ERNIE-Bot (文心一言).

openai

If you want to use the OpenAI API as the project's large language model, please refer to the configuration below.

openai.token=sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx            #this config is in the application-dev.properties file
chain.llm.type=openai
chain.llm.openai.token=${openai.token}
chain.llm.openai.model=gpt-3.5-turbo-16k-0613

local chatglm2

If you want to use a local chatglm2 as the project's large language model, please refer to the configuration below.

chain.llm.type=chatglm
chain.llm.chatglm.baseurl=http://127.0.0.1:8000/
chain.llm.chatglm.model=chatglm2-6b
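
A fully local setup with no external API calls could combine chatglm2 with the weaviate t2v-transformer embeddings described below. This is a sketch assembled from the keys documented on this page; hosts are placeholders.

chain.llm.type=chatglm
chain.llm.chatglm.baseurl=http://127.0.0.1:8000/
chain.llm.chatglm.model=chatglm2-6b
chain.vectorization.type=local
chain.vector.store.type=weaviate
chain.vector.store.weaviate.protocol=http
chain.vector.store.weaviate.host=127.0.0.1:8080
chain.vector.store.weaviate.classname=LocalKnowledge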

ERNIE-Bot(文心一言)

If you want to use ERNIE-Bot (文心一言) as the project's large language model, please refer to the configuration below. The optional models include ernie_bot, ernie_bot4 and ernie_bot_turbo.

baidu.appKey=xxxxxxxxxxxxxxxxx                                  #this config is in the application-dev.properties file
baidu.secretKey=xxxxxxxxxxxxxxxxxxxxxxxxx                       #this config is in the application-dev.properties file
chain.llm.type=baidu
chain.llm.baidu.appKey=${baidu.appKey}
chain.llm.baidu.secretKey=${baidu.secretKey}
chain.llm.baidu.model=ernie_bot
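
For readers wiring their own code against these keys, a hypothetical binding class could look like the sketch below. The class and record names are assumptions for illustration, not the project's actual classes.

import org.springframework.boot.context.properties.ConfigurationProperties;

// Hypothetical sketch of how the chain.llm.* keys could be bound with Spring Boot
// constructor binding; it would need to be registered, for example via
// @ConfigurationPropertiesScan or @EnableConfigurationProperties.
@ConfigurationProperties(prefix = "chain.llm")
public record ChainLlmProperties(String type, Openai openai, Chatglm chatglm, Baidu baidu) {

    // chain.llm.openai.token resolves ${openai.token} from application-dev.properties
    public record Openai(String token, String model, String baseurl) {}

    public record Chatglm(String baseurl, String model) {}

    public record Baidu(String appKey, String secretKey, String model) {}
}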

embeddings

This project currently supports three types of embeddings: text-embedding-ada-002 from OpenAI, bge-large-zh from Baidu, and the t2v-transformer module within weaviate.

text-embedding-ada-002 of openai

If you want to use text-embedding-ada-002 as the project's embedding model, please refer to the configuration below.

chain.vectorization.type=openai
chain.vectorization.openai.token=${openai.token}
chain.vectorization.openai.model=text-embedding-ada-002

bge-large-zh of baidu

If you want to use bge-large-zh of Baidu as the project's embedding model, please refer to the configuration below. The optional models include bge-large-zh, bge-large-en and Embedding-V1.

chain.vectorization.type=baidu
chain.vectorization.baidu.model=bge-large-zh

t2v-transformer within weaviate

If you want to use the local t2v-transformer within weaviate as the project's embedding model, please refer to the configuration below. Note that this option assumes weaviate is the configured vector store, since the t2v-transformers module runs inside weaviate.

chain.vectorization.type=local

vector store

This project currently supports two types of vector db: weaviate and milvus.

weaviate

If you want to use weaviate as the project's vector store, please refer to the configuration below.

chain.vector.store.type=weaviate
chain.vector.store.weaviate.protocol=http
chain.vector.store.weaviate.host=192.168.40.229:8080                             # your own weaviate host
chain.vector.store.weaviate.classname=LocalKnowledge

milvus

If you want to use milvus as the project's vector store, please refer to the configuration below.

chain.vector.store.type=milvus
chain.vector.store.milvus.host=192.168.40.229                                    # your own milvus host
chain.vector.store.milvus.port=19530
chain.vector.store.milvus.dimension=1536                                         # must match the dimension of the configured embedding model
chain.vector.store.milvus.collection=LocalKnowledge
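
The dimension value must match the embedding model configured in the embeddings section; for example, text-embedding-ada-002 produces 1536-dimensional vectors. A milvus plus openai-embeddings combination would therefore look roughly like the sketch below (host is a placeholder).

chain.vectorization.type=openai
chain.vectorization.openai.token=${openai.token}
chain.vectorization.openai.model=text-embedding-ada-002
chain.vector.store.type=milvus
chain.vector.store.milvus.host=127.0.0.1
chain.vector.store.milvus.port=19530
chain.vector.store.milvus.dimension=1536
chain.vector.store.milvus.collection=LocalKnowledge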

Use Cases (TODO)