An optimized implementation of spatiotemporal masked autoencoders
Updated May 2, 2024 - Python
Investigate possibilities for Vision Transformers with multiscale grids
TorchGeo: datasets, transforms, and models for geospatial data
Project for Computer Vision course @ MSc in Artificial Intelligence, UniVR
Change detection on satellite images with masked autoencoders.
An optimized implementation of masked autoencoders (MAEs)
Re-implementation of the method proposed in "DreamDiffusion: Generating High-Quality Images from Brain EEG Signals" by Y. Bai, X. Wang et al. for the Neural Network course exam
Train MAE on Kaggle with 2 GPUs (T4 x2) and log to Weights & Biases (wandb)
The code for the paper "Contrastive Masked Autoencoders for Self-Supervised Video Hashing" (AAAI'23)
Reproducing the MET framework with PyTorch
PyTorch implementation of MADE
Generative modeling and representation learning through reconstruction
R-MAE: Pre-training LiDAR Perception with Masked Autoencoders for Ultra-Efficient 3D Sensing
PyTorch wrapper for Deep Density Estimation Models
Code for "AdPE: Adversarial Positional Embeddings for Pretraining Vision Transformers via MAE+"
Extraction of deep features/representations of bird images using deep learning algorithms.
HSIMAE: A Unified Masked Autoencoder with large-scale pretraining for Hyperspectral Image Classification
Official code for CVPR2024 “VideoMAC: Video Masked Autoencoders Meet ConvNets”
A Vector Quantized Masked AutoEncoder for speech emotion recognition
Codebase for Imperial MSc AI Individual Project - Self-Supervised Learning for Audio Inference
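Most of the repositories above build on the same core idea from masked autoencoders (MAE): split the input into patches, randomly mask a large fraction (typically 75%), and encode only the visible patches. A minimal NumPy sketch of that masking step, for illustration only (the function name, shapes, and return values are assumptions, not taken from any repository listed here):

```python
import numpy as np

def random_masking(patches, mask_ratio=0.75, rng=None):
    """MAE-style random masking: keep a random subset of patches.

    patches: (num_patches, dim) array of flattened patch embeddings.
    Returns (visible, kept_idx, mask), where mask[i] == 1 marks a
    masked (dropped) patch and 0 marks a visible one.
    """
    rng = np.random.default_rng(rng)
    n = patches.shape[0]
    n_keep = int(n * (1 - mask_ratio))
    perm = rng.permutation(n)           # random shuffle of patch indices
    kept_idx = np.sort(perm[:n_keep])   # indices of the visible patches
    mask = np.ones(n, dtype=np.int8)
    mask[kept_idx] = 0                  # mark kept patches as visible
    return patches[kept_idx], kept_idx, mask

# Example: 16 patches of dimension 8, mask 75% -> 4 visible patches
patches = np.arange(16 * 8, dtype=np.float32).reshape(16, 8)
visible, kept_idx, mask = random_masking(patches, mask_ratio=0.75, rng=0)
```

In the full MAE pipeline, only `visible` is fed to the encoder; the decoder later reconstructs the masked patches, which is what makes the pretraining compute-efficient.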