Brevitas: neural network quantization in PyTorch
Updated Jun 28, 2024 - Python
YOLO model QAT and deployment with DeepStream and TensorRT
A more readable and flexible YOLOv5 with additional backbones (GCN, ResNet, ShuffleNet, MobileNet, EfficientNet, HRNet, Swin Transformer, etc.) and modules (CBAM, DCN, and so on), plus TensorRT support
Model Compression Toolkit (MCT) is an open-source project for optimizing neural network models for efficient deployment on constrained hardware. It provides researchers, developers, and engineers with advanced quantization and compression tools for deploying state-of-the-art neural networks.
An automated toolkit for analyzing and modifying PyTorch model structures, including a model compression algorithm library built on automatic structure analysis
This project enables Intel® platform technologies (SGX, QAT) and GPUs on Red Hat OpenShift Container Platform
FakeQuantize with Learned Step Size (LSQ+) as an Observer in PyTorch
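The learned-step-size idea behind LSQ/LSQ+ can be sketched in plain Python: fake quantization rounds a value onto an integer grid and dequantizes it back, while the step size itself receives a gradient via a straight-through estimator. This is a minimal sketch of the LSQ formulation, not the repository's or PyTorch's actual API; the function names and the signed 4-bit range are illustrative assumptions.

```python
def fake_quantize(x, step, qmin=-8, qmax=7):
    """Uniform fake quantization: scale to the integer grid, clamp,
    round, then dequantize. `step` is the (learnable, in LSQ) step size.
    The signed 4-bit range [-8, 7] is an illustrative choice."""
    v = max(qmin, min(qmax, x / step))  # clamp to the representable range
    return round(v) * step              # round-to-nearest, then dequantize

def lsq_step_grad(x, step, qmin=-8, qmax=7):
    """Gradient of the fake-quantized output w.r.t. the step size, per the
    LSQ formulation (round() treated as identity by the straight-through
    estimator): qmin/qmax for clamped values, round(v) - v otherwise."""
    v = x / step
    if v <= qmin:
        return float(qmin)
    if v >= qmax:
        return float(qmax)
    return round(v) - v

# Example: a value inside the representable range
y = fake_quantize(0.33, step=0.1)   # ~0.3 (rounded to the grid)
g = lsq_step_grad(0.33, step=0.1)   # ~-0.3 (drives step toward the data)
```

In training, `g` would be accumulated over a tensor (and typically scaled by a gradient-normalization factor) to update the step size alongside the weights.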
QAT (quantization-aware training) for classification with MQBench
Quantization examples for PTQ and QAT
Training a U-Net based convolutional neural network to automatically identify and delineate areas of qat (khat) agriculture in Sentinel-2 multispectral imagery.
Build an AI model to classify beverages for blind individuals
Quantization of models: post-training quantization (PTQ) and quantization-aware training (QAT)
Official website of the Qat programming language
Server for https://qat.dev, the official site of the Qat programming language