Attention Residual UNet for vein image segmentation in the field of biometric identification
Updated Jun 25, 2024 · Jupyter Notebook
Training-free Post-training Efficient Sub-quadratic Complexity Attention. Implemented with OpenAI Triton.
Explainable Neural Subgraph Matching with Graph Learnable Multi-hop Attention Networks
The Enterprise-Grade Production-Ready Multi-Agent Orchestration Framework. Join our community: https://discord.com/servers/agora-999382051935506503
Implementing a modified Swin Transformer model in PyTorch on CIFAR-10 for image classification
Contrastive-LSH Embedding and Tokenization Technique for Multivariate Time Series Classification
Implementation of the original transformer model described by Vaswani et al. for English-to-German translation
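The core building block of the transformer above is scaled dot-product attention. A minimal NumPy sketch of that formula (softmax(QKᵀ/√d_k)·V, from the original paper; this is an illustrative reimplementation, not code from any repository listed here):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # numerically stable softmax over the key axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# toy example: one query attending over two key/value pairs
Q = np.array([[1.0, 0.0]])
K = np.array([[1.0, 0.0], [0.0, 1.0]])
V = np.array([[1.0, 2.0], [3.0, 4.0]])
out = scaled_dot_product_attention(Q, K, V)  # a convex combination of the rows of V
```

The output is a weighted average of the value rows, with weights determined by query-key similarity; the full transformer stacks this inside multi-head attention with learned projections.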
LSTM-ARIMA with Attention and multiplicative decomposition for sophisticated stock forecasting.
The official PyTorch implementation of the paper "SAITS: Self-Attention-based Imputation for Time Series". A fast and state-of-the-art (SOTA) deep-learning neural network model for efficient time-series imputation (impute multivariate incomplete time series containing NaN missing data/values with machine learning). https://arxiv.org/abs/2202.08516
A simple but complete full-attention transformer with a set of promising experimental features from various papers
Pure C multimodal 3D hybrid GAN using cross-attention, attention, and convolution
Faster alternative to Metal Performance Shaders
Official code for Dual-domain attention in my graduation thesis
CNN and Attention Mechanisms for Parkinson's Diagnosis and Speech Deficit Detection
Tensorflow implementation of a 3D-CNN U-net with Grid Attention and DSV for pancreas segmentation trained on CT-82.
A comprehensive paper list for Vision Transformer/Attention, including papers, code, and related websites
Image restoration aims to recover high-quality image content from a degraded version, with numerous applications in photography, security, medical imaging, and remote sensing. This project implements a model named MirNet for low-light image enhancement.
The original transformer implemented from scratch, with informative comments on each block
Experimental project on building custom LSTM and LSTM with Attention layer for comparison analysis on FTS forecasting (June, 2024)