
TriangleGAN

A new gesture-to-gesture translation framework, introduced in the paper "Gesture-to-Gesture Translation in the Wild via Category-Independent Conditional Maps", published at the ACM International Conference on Multimedia (ACM MM), 2019.

1. Dataset Preparation

More details >>>

2. Installation

We provide a user-friendly configuration via the Conda system; you can create a new Conda environment with:

conda env create -f environment.yml
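
After creating and activating the environment, a quick sanity check confirms that PyTorch imports and that CUDA can see your GPU. This is a minimal sketch; it only assumes the environment ships PyTorch, which environment.yml defines:

# Sanity check: run inside the activated Conda environment.
# Verifies the PyTorch install and CUDA visibility before training.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device 0:", torch.cuda.get_device_name(0))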

3. Train/Test

1. Download the datasets and copy them into ./datasets.

2. Modify the scripts to train/test; a sketch of the options these scripts forward appears after the commands:

  • Training
sh ./scripts/train_trianglegan_ntu.sh <gpu_id>
sh ./scripts/train_trianglegan_senz3d.sh <gpu_id>
  • Testing
sh ./scripts/test_trianglegan_ntu.sh <gpu_id>
sh ./scripts/test_trianglegan_senz3d.sh <gpu_id>
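
Since this codebase builds on pytorch-CycleGAN-and-pix2pix (see Acknowledgment below), each script presumably forwards <gpu_id> and a few options to the Python entry point. The following is a hypothetical sketch of that option surface; the flag names copy the pix2pix convention and may differ from the repository's actual interface:

# Hypothetical option surface behind the train/test scripts (a sketch).
# Flag names follow the pytorch-CycleGAN-and-pix2pix convention this code
# is based on; check the scripts themselves for the real arguments.
import argparse

parser = argparse.ArgumentParser(description="TriangleGAN options (sketch)")
parser.add_argument("--dataroot", required=True,
                    help="dataset location, e.g. ./datasets/ntu")
parser.add_argument("--name", default="trianglegan_ntu",
                    help="experiment name; outputs go to ./checkpoints/<name>")
parser.add_argument("--gpu_ids", default="0",
                    help="the <gpu_id> argument passed to the shell scripts")
opt = parser.parse_args()
print(opt)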

3. Pretrained models are saved at ./checkpoints/{model_name}. Check here for all the available TriangleGAN models.
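
Before testing, you can inspect a downloaded checkpoint to confirm it loads. This is a minimal sketch: both the directory trianglegan_ntu and the file name latest_net_G.pth follow the pytorch-CycleGAN-and-pix2pix naming convention and are assumptions, so adjust them to the files actually provided:

# Inspect a pretrained generator checkpoint under ./checkpoints/{model_name}.
# Path and file name are assumptions based on the pix2pix convention.
import torch

state_dict = torch.load("./checkpoints/trianglegan_ntu/latest_net_G.pth",
                        map_location="cpu")
# Print the first few parameter names and shapes as a quick integrity check.
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape))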

4. We also provide an implementation of GestureGAN (ACM MM 2018) [paper]|[code]:

sh ./scripts/train_gesturegan_ntu.sh <gpu_id>
sh ./scripts/train_gesturegan_senz3d.sh <gpu_id>

4. Evaluation

More details >>>

5. Visual Results

More details >>>

Acknowledgment

This code is based on pytorch-CycleGAN-and-pix2pix. Thanks to the contributors of that project.

Related Work

Evaluation codes

We recommend evaluating the performance of the compared models mainly with this repo: GAN-Metrics.
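
As a concrete illustration of the kind of full-reference metric reported in such comparisons, here is a minimal PSNR computation in NumPy. PSNR is used purely as an example; rely on GAN-Metrics for the full evaluation suite:

# Minimal PSNR between a generated image and its ground truth.
# PSNR = 10 * log10(MAX^2 / MSE); higher is better.
import numpy as np

def psnr(generated, target, max_value=255.0):
    mse = np.mean((generated.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)

# Example with two random 256x256 RGB images:
a = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
b = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
print("PSNR: %.2f dB" % psnr(a, b))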

References

If you use our datasets or code, please cite our paper:

@inproceedings{liu2019gesture,
  title={Gesture-to-gesture translation in the wild via category-independent conditional maps},
  author={Liu, Yahui and De Nadai, Marco and Zen, Gloria and Sebe, Nicu and Lepri, Bruno},
  booktitle={Proceedings of the 27th ACM International Conference on Multimedia},
  pages={1916--1924},
  year={2019}
}

If you have any questions, please do not hesitate to contact me (yahui.liu AT unitn.it).