Run state-of-the-art language models locally. Chat with AI using simple slash commands. Zero cloud, zero cost – just pure, home-brewed AI magic.
Instruct and validate structured outputs from LLMs with Ollama.
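The core idea of validating structured LLM output can be sketched without any model at all: parse the JSON the model returns and check it against an expected schema. This is only an illustrative stdlib sketch (the repo itself likely uses Ollama's client plus a schema library such as Pydantic); `raw_response` and `schema` are hypothetical stand-ins.

```python
import json

# Hypothetical JSON text, as a local model served by Ollama might return it.
raw_response = '{"name": "Ada Lovelace", "year": 1815}'

# Minimal schema: required field name -> expected Python type.
schema = {"name": str, "year": int}

def validate(raw: str, schema: dict) -> dict:
    """Parse model output and check every required field and its type."""
    data = json.loads(raw)
    for field, expected_type in schema.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], expected_type):
            raise TypeError(f"{field} should be {expected_type.__name__}")
    return data

record = validate(raw_response, schema)
print(record["name"])  # Ada Lovelace
```

A failed check raises immediately, so malformed model output never reaches downstream code.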
Implemented vector similarity algorithms to understand their inner workings, using local embedding models.
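The most common of these similarity algorithms is cosine similarity, which can be written from scratch in a few lines. A minimal pure-Python sketch (real embedding vectors would come from a local embedding model, not the toy vectors shown here):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings": identical direction -> 1.0, orthogonal -> 0.0.
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0
```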
A constrained generation filter for local LLMs that makes them quote properly from a source document
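The general idea behind such a quoting filter can be sketched at the character level: at each generation step, allow only continuations that keep the quote-so-far an exact substring of the source document. This is an illustrative sketch, not the repo's implementation (a real filter would operate on the model's token vocabulary and mask logits); `SOURCE` and `allowed_next_chars` are hypothetical names.

```python
SOURCE = "The quick brown fox jumps over the lazy dog."

def allowed_next_chars(quote_so_far: str, source: str = SOURCE) -> set:
    """Characters c such that quote_so_far + c is still a substring of source."""
    allowed = set()
    start = 0
    while True:
        i = source.find(quote_so_far, start)
        if i == -1:
            break
        nxt = i + len(quote_so_far)
        if nxt < len(source):
            allowed.add(source[nxt])
        start = i + 1
    return allowed

print(allowed_next_chars("quick "))  # {'b'} — only 'brown' can follow
```

Restricting the sampler to this allowed set at every step guarantees the generated quote appears verbatim in the source.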
Automate the batching and execution of prompts.
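Batching prompts amounts to chunking a list into fixed-size groups and running each group through the model. A minimal sketch with stdlib only; the model call is stubbed out with a print, and `batch` is a hypothetical helper name:

```python
def batch(prompts, batch_size):
    """Yield successive fixed-size batches from a list of prompts."""
    for i in range(0, len(prompts), batch_size):
        yield prompts[i:i + batch_size]

prompts = [f"Summarize document {n}" for n in range(5)]
for group in batch(prompts, 2):
    # In a real pipeline, each group would be sent to the local model here.
    print(group)
```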
Local AI Open Orca For Dummies is a user-friendly guide to running Large Language Models locally. Simplify your AI journey with easy-to-follow instructions and minimal setup. Perfect for developers tired of complex processes!
50-line local LLM assistant in Python with Streamlit and GPT4All
Uchinoko Studio is a web application designed to facilitate real-time voice conversations with AI.
GPT powered rubber duck debugger as CS50 2023 final project.
CrewAI Local LLM is a GitHub repository for a locally hosted large language model (LLM) designed to enable private, offline AI model usage and experimentation.
A comprehensive AI companion leveraging advanced semantic analysis, sentiment detection, and voice processing to provide personalized and context-aware interactions using Autogen, semantic-router, and VoiceProcessingToolkit.