Welcome to JamAI Base – the real-time database that orchestrates Large Language Models (LLMs) for you. Designed to simplify AI integration, JamAI Base offers a Backend as a Service (BaaS) platform with an intuitive, spreadsheet-like interface. Focus on defining your data requirements through natural language prompts, and let us handle the complexities of RAG, LLMOps, conversation histories, and LLM orchestration.
- Interface: Simple, intuitive spreadsheet-like interface.
- Focus: Define data requirements through natural language prompts.
- Foundation: Built on LanceDB, an open-source vector database designed for AI workloads.
- Performance: Serverless design ensures optimal performance and seamless scalability.
- LLM Support: Supports any LLM, including OpenAI GPT-4, Anthropic Claude 3, Mistral AI Mixtral, and Meta Llama 3.
- Capabilities: Leverage state-of-the-art AI capabilities effortlessly.
- Approach: Define the "what" rather than the "how."
- Simplification: Simplifies complex data operations, making them accessible to users with varying levels of technical expertise.
- Effortless RAG: Built-in RAG features, no need to build the RAG pipeline yourself.
- Query Rewriting: Boosts the accuracy and relevance of your search queries.
- Hybrid Search & Reranking: Combines keyword-based search, structured search, and vector search for the best results.
- Structured RAG Content Management: Organizes and manages your structured content seamlessly.
- Adaptive Chunking: Automatically determines the best way to chunk your data.
- BGE M3-Embedding: Leverages multi-lingual, multi-functional, and multi-granular text embeddings for free.
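As a rough illustration of the hybrid-search idea above (and not JamAI Base's actual pipeline), reciprocal rank fusion is one common way to merge a keyword ranking with a vector ranking; the document ids and rankings here are made up:

```python
# Toy hybrid search: fuse a keyword ranking and a vector ranking with
# reciprocal rank fusion (RRF). Illustrative only, not JamAI Base's internals.

def rrf_fuse(rankings, k=60):
    """Merge several ranked lists of document ids into one ranking."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            # Documents near the top of any list get a larger contribution.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["doc3", "doc1", "doc7"]  # e.g. from BM25 / keyword search
vector_hits = ["doc1", "doc5", "doc3"]   # e.g. from embedding similarity
print(rrf_fuse([keyword_hits, vector_hits]))  # → ['doc1', 'doc3', 'doc5', 'doc7']
```

Documents that appear high in both lists (here `doc1`) rise to the top, which is the intuition behind combining keyword and vector search.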
Transform static database tables into dynamic, AI-enhanced entities.
- Dynamic Data Generation: Automatically populate columns with relevant data generated by LLMs.
- Built-in REST API Endpoint: Streamline the process of integrating AI capabilities into applications.
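Conceptually, a generative table maps a row's input columns through an LLM prompt to fill its output columns. The toy sketch below substitutes a plain function for the LLM call; `summarize_stub` and the column names are hypothetical, not the jamaibase API:

```python
# Toy model of a "generative table": each output column has a generator that
# maps the row's existing fields to a new value. A stub function stands in
# for the LLM completion JamAI Base would make.

def summarize_stub(row):
    # Placeholder for an LLM completion over the row's input columns.
    return f"Summary of: {row['text'][:20]}"

output_columns = {"summary": summarize_stub}

def populate(rows):
    for row in rows:
        for col, generate in output_columns.items():
            row[col] = generate(row)
    return rows

rows = [{"text": "JamAI Base orchestrates LLMs over spreadsheet-like tables."}]
print(populate(rows)[0]["summary"])  # → Summary of: JamAI Base orchestra
```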
Facilitate real-time interactions between the application frontend and the LLM backend.
- Real-Time Responsiveness: Provide a responsive AI interaction layer for applications.
- Automated Backend Management: Eliminate the need for manual backend management of user inputs and outputs.
- Complex Workflow Orchestration: Enable the creation of sophisticated LLM workflows.
Act as repositories for structured data and documents, enhancing the LLM’s contextual understanding.
- Rich Contextual Backdrop: Provide a rich contextual backdrop for LLM operations.
- Enhanced Data Retrieval: Support other generative tables by supplying detailed, structured contextual information.
- Efficient Document Management: Enable uploading and synchronization of documents and data.
Simplify the creation and management of intelligent chatbot applications.
- Intelligent Chatbot Development: Simplify the development and operational management of chatbots.
- Context-Aware Interactions: Enhance user engagement through intelligent and context-aware interactions.
- Seamless Integration: Integrate with Retrieval-Augmented Generation (RAG) to utilize content from any Knowledge Table.
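One way to picture a chat table: each conversation turn is a row, so history management falls out of ordinary table operations. The sketch below uses a stub in place of the LLM and is not the actual jamaibase client API:

```python
# Toy chat table: one row per turn, so conversation history is just the
# prior rows. reply_stub stands in for an LLM call that sees that history.

chat_table = []  # each row: {"user": ..., "ai": ...}

def reply_stub(history, user_msg):
    # Placeholder for a context-aware LLM completion.
    return f"(turn {len(history) + 1}) you said: {user_msg}"

def send(user_msg):
    ai_msg = reply_stub(chat_table, user_msg)
    chat_table.append({"user": user_msg, "ai": ai_msg})
    return ai_msg

send("Hello")
print(send("What is JamAI Base?"))  # → (turn 2) you said: What is JamAI Base?
```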
Efficient management and querying of large-scale multi-modal data.
- Optimized Data Handling: Store, manage, query, and retrieve embeddings on large-scale multi-modal data efficiently.
- Scalability: Ensure optimal performance and seamless scalability.
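To make the embedding-retrieval idea concrete, here is a minimal cosine-similarity lookup in plain Python; in a real deployment this work is delegated to LanceDB, and the three-dimensional vectors below are hand-made toys:

```python
# Minimal sketch of embedding storage and nearest-neighbor retrieval.
import math

store = {
    "cat": [1.0, 0.0, 0.1],
    "dog": [0.9, 0.1, 0.0],
    "car": [0.0, 1.0, 0.2],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def query(vec, k=2):
    # Rank stored embeddings by similarity to the query vector.
    return sorted(store, key=lambda key: cosine(store[key], vec), reverse=True)[:k]

print(query([1.0, 0.05, 0.05]))  # → ['cat', 'dog']
```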
Focus on defining "what" you want to achieve rather than "how" to achieve it.
- Simplified Development: Allow users to define relationships and desired outcomes.
- Non-Procedural Approach: Eliminate the need to write procedures.
- Functional Flexibility: Support functional programming through LLMs.
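The "what, not how" idea can be sketched with Python's standard `graphlib`: you declare which columns read from which others, and a topological sort recovers the execution order for you. The column names are hypothetical:

```python
# Declarative sketch: state column dependencies (the "what") and let the
# runtime derive the evaluation order (the "how").
from graphlib import TopologicalSorter

# Column -> columns it reads from.
spec = {
    "summary": {"text"},
    "keywords": {"summary"},
    "title": {"summary", "keywords"},
    "text": set(),
}

order = list(TopologicalSorter(spec).static_order())
print(order)  # → ['text', 'summary', 'keywords', 'title']
```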
- Create a Python (>= 3.10) environment and install `jamaibase`:

  ```shell
  $ mamba create -n jam310 python=3.10 -y
  $ pip install jamaibase
  ```
- Install the `jamaibase` TypeScript/JavaScript SDK:

  ```shell
  $ npm install jamaibase
  ```
Get free LLM tokens on JamAI Base Cloud. Sign up now.
- Clone the repository:

  ```shell
  $ git clone https://github.com/EmbeddedLLM/JamAIBase.git
  $ cd JamAIBase
  ```
- Add your API keys into `.env`:

  ```shell
  OPENAI_API_KEY=your_key
  ```
- Launch the Docker containers by running one of these:

  ```shell
  # CPU-only
  $ docker compose -f docker/compose.cpu.yml up --quiet-pull -d

  # With NVIDIA GPU
  $ docker compose -f docker/compose.nvidia.yml up --quiet-pull -d
  ```
Tip: By default, the frontend and backend are accessible at ports 4000 and 6969. You can change the ports exposed to the host by setting environment variables:

```shell
$ API_PORT=6970 FRONTEND_PORT=4001 docker compose -f docker/compose.cpu.yml up --quiet-pull -d
```
- Try the command below in your terminal, or open your browser and go to `localhost:4000`:

  ```shell
  $ curl localhost:6969/api/v1/models
  ```
Want to try building apps with JamAI Base? We've got some awesome examples to get you started! Check out our example docs for inspiration.
Here are a couple of cool frontend examples:
- Simple Chatbot using NLUX: Build a basic chatbot without any backend setup. It's a great way to dip your toes in!
- Simple Chatbot using NLUX + Express.js: Take it a step further and add some backend power with Express.js.
Let us know if you have any questions – we're here to help! Happy coding! 😊
Join our vibrant developer community for comprehensive documentation, tutorials, and resources:
- Discord: Join our Discord
- GitHub: Star our GitHub repository
We welcome contributions! Please read our Contributing Guide to get started.
This project is released under the Apache 2.0 License; see the LICENSE file for details.