Testing the reproducibility of the paper MixSeq. Under the assumption that macroscopic time series follow a mixture distribution, the authors hypothesise that lower variance within the constituent latent mixture components improves the estimation of the macroscopic time series.
Built a model to create highlights/summaries of a given video. The results of this study show that, with a structural similarity index (SSIM) of 98%, the proposed technique is effective at choosing keyframes from the original video that are both informative and distinctive.
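The keyframe-selection idea above can be sketched as follows. This is a minimal illustration, not the repository's actual method: it uses a simplified global SSIM (statistics over the whole frame rather than a sliding window) and a hypothetical dissimilarity threshold to decide when a frame becomes a new keyframe.

```python
import numpy as np

def ssim(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Simplified global SSIM between two grayscale frames.

    Statistics are computed over the whole frame; production code would
    use a windowed SSIM (e.g. skimage.metrics.structural_similarity).
    """
    x, y = x.astype(np.float64), y.astype(np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2)
    )

def select_keyframes(frames, threshold=0.9):
    """Keep a frame as a keyframe only when its SSIM to the last selected
    keyframe drops below the (hypothetical) threshold, i.e. the scene
    has changed enough to be worth summarising."""
    keyframes = [0]
    for i in range(1, len(frames)):
        if ssim(frames[keyframes[-1]], frames[i]) < threshold:
            keyframes.append(i)
    return keyframes
```

With a sequence of two identical dark frames followed by a bright frame, only the first and third frames are selected, since the second is near-identical to the first.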
A variational autoencoder (VAE) that generates human faces, trained on the CelebA dataset. A VAE is a generative model that learns to represent high-dimensional data (such as images) in a lower-dimensional latent space and then generates new data by sampling from that space.
This repository contains the code, data and scripts used to write the Bachelor Thesis "Latent representations for traditional music analysis and generation".
A variational autoencoder is an autoencoder whose training is regularised to avoid overfitting and to ensure that the latent space has properties that support the generative process.
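The regularisation described above can be made concrete. In a standard VAE, the encoder outputs a mean and log-variance per latent dimension; the loss adds a KL-divergence term pulling each latent distribution toward a standard normal, and sampling uses the reparameterisation trick so gradients flow through the random draw. A minimal sketch of these two pieces (function names are illustrative, not from any specific repository):

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_to_standard_normal(mu, logvar):
    """KL divergence between N(mu, diag(exp(logvar))) and N(0, I):
    the regularisation term of the VAE loss. It is zero exactly when
    the encoder output already matches the standard normal prior."""
    return -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))

def reparameterize(mu, logvar):
    """Reparameterisation trick: z = mu + sigma * eps with eps ~ N(0, I),
    so the sampling step is differentiable with respect to mu and logvar."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps
```

The full training objective would add a reconstruction term (e.g. pixel-wise binary cross-entropy) to this KL penalty; weighting the two terms is what trades reconstruction fidelity against a well-behaved latent space.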
Topics include function approximation, learning dynamics, using learned dynamics in control and planning, handling uncertainty in learned models, learning from demonstration, and model-based and model-free reinforcement learning.