🐢 Open-Source Evaluation & Testing for LLMs and ML models
-
Updated
Jun 26, 2024 - Python
🐢 Open-Source Evaluation & Testing for LLMs and ML models
The fastest && easiest LLM security and privacy guardrails for GenAI apps.
Whispers in the Machine: Confidentiality in LLM-integrated Systems
LLM App templates for Dynamic RAG. Ready to run with Docker,⚡in sync with your data sources.
Agentic LLM Vulnerability Scanner
The Security Toolkit for LLM Interactions
A secure low code honeypot framework, leveraging AI for System Virtualization.
Trained Without My Consent (TraWiC): Detecting Code Inclusion In Language Models Trained on Code
Repository for our paper "Frustratingly Easy Jailbreak of Large Language Models via Output Prefix Attacks". https://www.researchsquare.com/article/rs-4385503/latest
Ultra-fast, low latency LLM prompt injection/jailbreak detection ⛓️
A benchmark for prompt injection detection systems.
SecGPT: An execution isolation architecture for LLM-based systems
Papers and resources related to the security and privacy of LLMs 🤖
安全手册,企业安全实践、攻防与安全研究知识库
[CCS'24] A dataset consists of 15,140 ChatGPT prompts from Reddit, Discord, websites, and open-source datasets (including 1,405 jailbreak prompts).
AI-driven Threat modeling-as-a-Code (TaaC-AI)
AiShields is an open-source Artificial Intelligence Data Input and Output Sanitizer
Formalizing and Benchmarking Prompt Injection Attacks and Defenses
Add a description, image, and links to the llm-security topic page so that developers can more easily learn about it.
To associate your repository with the llm-security topic, visit your repo's landing page and select "manage topics."