🐢 Open-Source Evaluation & Testing for LLMs and ML models
Updated Jun 26, 2024 · Python
A curated list of awesome academic research, books, code of ethics, data sets, institutes, newsletters, principles, podcasts, reports, tools, regulations and standards related to Responsible AI, Trustworthy AI, and Human-Centered AI.
The Python Risk Identification Tool for generative AI (PyRIT) is an open access automation framework to empower security professionals and machine learning engineers to proactively find risks in their generative AI systems.
reweaving artificial intelligence
A Python package to assess and improve fairness of machine learning models.
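A minimal sketch of the kind of group-fairness metric such a package assesses: the demographic parity difference, i.e. the gap in positive-prediction rates between groups. The function name and toy data below are illustrative, not the package's actual API.

```python
def demographic_parity_difference(y_pred, groups):
    """Absolute gap in positive-prediction rates across groups."""
    by_group = {}
    for pred, group in zip(y_pred, groups):
        by_group.setdefault(group, []).append(pred)
    # Selection rate per group: fraction of positive predictions.
    rates = {g: sum(preds) / len(preds) for g, preds in by_group.items()}
    return max(rates.values()) - min(rates.values())

# Group "a" receives positive predictions 75% of the time,
# group "b" only 25% of the time -> disparity of 0.5.
y_pred = [1, 1, 1, 0, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(y_pred, groups))  # -> 0.5
```

A value near 0 indicates similar selection rates across groups; mitigation methods in such packages aim to shrink this gap without sacrificing too much accuracy.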
Concise summaries of key papers in responsible AI.
This framework aims to assist in the documentation of datasets to promote transparency and help dataset creators and consumers make informed decisions about whether specific datasets meet their needs and what limitations they need to consider.
A comprehensive cheat sheet for the AI-900 Azure AI Fundamentals exam covering artificial intelligence workloads, machine learning principles, computer vision, natural language processing (NLP), generative AI, and responsible AI considerations. Includes Azure tools and services with links and logos for visual clarity.
Deliver safe & effective language models
Framework to create formal configurations of constraints.
Zero Trust AI 360
Reading list for adversarial perspective and robustness in deep reinforcement learning.
Package for evaluating the performance of methods that aim to increase fairness, accountability, and/or transparency.
Responsible AI Masterclass (June 2024 Run)
WOUAF: Weight Modulation for User Attribution and Fingerprinting in Text-to-Image Diffusion Models (CVPR 2024)
Credo AI Lens is a comprehensive assessment framework for AI systems. Lens standardizes model and data assessment, and acts as a central gateway to assessments created in the open source community.
Experiments for the paper "Finding patterns in ambiguity", accepted at ReGenAI workshop @ CVPR 2024
Code for paper "Concept Distillation: Leveraging Human-Centered Explanations for Model Improvement", Neurips 2023
moDel Agnostic Language for Exploration and eXplanation
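A hedged sketch of one model-agnostic technique such a toolkit offers: permutation feature importance, which measures the accuracy drop when a single feature column is shuffled. The model, data, and function names here are illustrative stand-ins, not the toolkit's actual API.

```python
import random

def accuracy(model, X, y):
    """Fraction of rows where the model's prediction matches the label."""
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, seed=0):
    """Accuracy drop after shuffling one feature column."""
    rng = random.Random(seed)
    baseline = accuracy(model, X, y)
    column = [row[feature] for row in X]
    rng.shuffle(column)
    X_perm = [row[:feature] + [value] + row[feature + 1:]
              for row, value in zip(X, column)]
    return baseline - accuracy(model, X_perm, y)

# Toy model that depends only on feature 0; feature 1 is ignored.
model = lambda row: int(row[0] > 0.5)
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [1, 0, 1, 0]
print(permutation_importance(model, X, y, feature=0))  # accuracy drops
print(permutation_importance(model, X, y, feature=1))  # -> 0.0, ignored feature
```

Because the method only queries the model through predictions, it works for any black-box model, which is the sense in which such exploration languages are "model agnostic".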