██╗  ██╗███████╗██╗ ██████╗
██║  ██║██╔════╝██║██╔════╝
███████║███████╗██║██║
██╔══██║╚════██║██║██║         Let's kernelize all your life into pieces :)
██║  ██║███████║██║╚██████╗      and beat Covid19 with HSIC
╚═╝  ╚═╝╚══════╝╚═╝ ╚═════╝

██████╗  ██████╗ ████████╗████████╗██╗     ███████╗███╗   ██╗███████╗ ██████╗██╗  ██╗
██╔══██╗██╔═══██╗╚══██╔══╝╚══██╔══╝██║     ██╔════╝████╗  ██║██╔════╝██╔════╝██║ ██╔╝
██████╔╝██║   ██║   ██║      ██║   ██║     █████╗  ██╔██╗ ██║█████╗  ██║     █████╔╝
██╔══██╗██║   ██║   ██║      ██║   ██║     ██╔══╝  ██║╚██╗██║██╔══╝  ██║     ██╔═██╗
██████╔╝╚██████╔╝   ██║      ██║   ███████╗███████╗██║ ╚████║███████╗╚██████╗██║  ██╗
╚═════╝  ╚═════╝    ╚═╝      ╚═╝   ╚══════╝╚══════╝╚═╝  ╚═══╝╚══════╝ ╚═════╝╚═╝  ╚═╝

HSIC-Bottleneck

This is the released repository for our work entitled `The HSIC Bottleneck: Deep Learning without Back-Propagation`. All the experiments in the paper were produced with this repository.
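The method trains hidden layers with Hilbert-Schmidt Independence Criterion (HSIC) objectives (encouraging dependence on the labels while discouraging dependence on the input) instead of back-propagating a global loss. As a minimal sketch of the quantity involved, assuming Gaussian kernels and the biased estimator (the function names below are illustrative, not this repository's API):

import torch

def gaussian_kernel(x, sigma=5.0):
    # x: (m, d) batch of flattened activations -> (m, m) Gram matrix
    dist = torch.cdist(x, x) ** 2
    return torch.exp(-dist / (2 * sigma ** 2))

def hsic(x, y, sigma=5.0):
    # Biased estimator: HSIC = tr(Kx H Ky H) / (m - 1)^2
    m = x.shape[0]
    h = torch.eye(m) - torch.ones(m, m) / m  # centering matrix
    kx, ky = gaussian_kernel(x, sigma), gaussian_kernel(y, sigma)
    return torch.trace(kx @ h @ ky @ h) / (m - 1) ** 2

# Example: dependence between a hidden-activation batch and one-hot labels
z = torch.randn(64, 128)
y = torch.nn.functional.one_hot(torch.randint(0, 10, (64,))).float()
print(hsic(z, y).item())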

Updates

  • 2020/Aug/7

I’m preparing a separate repository to summarize my PhD work, including HSIC-bottleneck, with a better project architecture and API. If you would like to extend our work, please contact us.

  • 2019/Nov/14

Our work has been accepted at the AAAI 2020 conference.

Researcher

Here is the guide to working with HSIC-bottleneck.

Environment

  • pytorch-1.1.0
  • torchvision-0.3.0
  • numpy-1.16.4
  • scipy-1.3.0
  • tqdm-4.33.0
  • yaml-5.1.1

Experimenting

  • Reproducibility: Please refer to the task-oriented scripts guide (link)
  • Configuration: Please refer to the configuration guide to suit your interests (link)

Developing

  • For people who would like to extend our work, we have prepared a simple template for your toy experiments to make your life easier (link)
  • Otherwise, please see the developing guide for your interests (link)

Features

loggers

For every training run, the system keeps almost all of the information in the assets/*/raw folder and creates a symlink under assets/* pointing to the latest experiment. After training, the system automatically generates the experiment figure in that folder.
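The exact files written are defined by the training code; the following is only a rough illustration of the folder-plus-symlink layout described above (directory and function names are assumptions, not the repository's actual code):

import os, time
from pathlib import Path

def make_run_dir(task="hsictrain-mnist", root="assets"):
    # each run gets its own folder under assets/<task>/raw/
    run_id = time.strftime("%Y%m%d-%H%M%S")
    raw_dir = Path(root) / task / "raw" / run_id
    raw_dir.mkdir(parents=True, exist_ok=True)
    # refresh the symlink under assets/<task>/ to point at the newest run
    latest = Path(root) / task / "latest"
    if latest.is_symlink() or latest.exists():
        latest.unlink()
    os.symlink(raw_dir.resolve(), latest)
    return raw_dir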

visualization

Besides the static experiment figures, you can also create animations based on the saved loggers. Check out the script (link) for more information.
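As a rough sketch of the idea, one could animate the saved logs with matplotlib (the log format below, an (epochs, bins) array in a .npy file, is assumed purely for illustration and is not the repository's actual format):

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

frames = np.load("assets/samples/activations.npy")  # assumed shape: (epochs, bins)

fig, ax = plt.subplots()
bars = ax.bar(range(frames.shape[1]), frames[0])
ax.set_ylim(0, frames.max())

def update(i):
    # redraw the bar heights for epoch i
    for rect, h in zip(bars, frames[i]):
        rect.set_height(h)
    ax.set_title(f"epoch {i}")
    return bars

anim = FuncAnimation(fig, update, frames=len(frames), interval=200)
anim.save("activation-animation.gif", writer="pillow")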

needle test

  • Comparison of a 786-to-1 output model trained with backprop (left) and the HSIC-bottleneck (right)

assets/samples/fig8a-needle-1d-dist-backprop.gif
assets/samples/fig8b-needle-1d-dist-hsictrain.gif

unformatted activation distribution

  • Showing the activation distribution for MNIST class 1 (top-left) through class 9 (bottom-right). This is what we call unformatted information, as the images of each class may be activated at a different entry of the output. (In this case, class-1 images gather around entry 5, as seen in the top-left figure.)

assets/samples/fig3-layer2-hsic-solve-actdist-mnist-batch.gif (please note that the above results were not trained with optimal parameters; they are only for animation purposes)
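To make the notion concrete, here is a small sketch of how such a per-class distribution could be computed (the model and data-loader names are placeholders, not this repository's API):

import torch

@torch.no_grad()
def class_activation_counts(model, loader, target_class, n_out=10):
    # fraction of images of one class whose strongest output falls on each entry
    counts = torch.zeros(n_out)
    for images, labels in loader:
        mask = labels == target_class
        if mask.any():
            out = model(images[mask])
            counts += torch.bincount(out.argmax(dim=1), minlength=n_out).float()
    return counts / counts.sum()

# e.g. class-1 images concentrating around entry 5 would show a single peak
# print(class_activation_counts(model, test_loader, target_class=1))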

learning triggers

The entry point of our framework is the run_hsicbt command plus a configuration file (link). You can also pass arguments to override the config file, for instance to perform parameter searches as in the task scripts.
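As a sketch of that config-plus-override pattern (using the yaml dependency listed under Environment; the flag and key names here are illustrative, see the configuration guide for the real ones):

import argparse
import yaml

parser = argparse.ArgumentParser()
parser.add_argument("--config", default="config/general.yaml")   # assumed path
parser.add_argument("--learning-rate", type=float, default=None)
args = parser.parse_args()

with open(args.config) as f:
    config = yaml.safe_load(f)

# command-line values, when given, take precedence over the YAML file,
# which is handy when sweeping a single parameter across runs
if args.learning_rate is not None:
    config["learning_rate"] = args.learning_rate

print(config)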

future work

WIP

Cite

Please cite our work if it is relevant to your research, thanks! We hope you enjoy our work, and please feel free to open issues or email us for further discussion.

@inproceedings{DBLP:conf/aaai/MaLK20,
  author    = {Kurt Wan{-}Duo Ma and
               J. P. Lewis and
               W. Bastiaan Kleijn},
  title     = {The {HSIC} Bottleneck: Deep Learning without Back-Propagation},
  booktitle = {The Thirty-Fourth {AAAI} Conference on Artificial Intelligence, {AAAI}
               2020, The Thirty-Second Innovative Applications of Artificial Intelligence
               Conference, {IAAI} 2020, The Tenth {AAAI} Symposium on Educational
               Advances in Artificial Intelligence, {EAAI} 2020, New York, NY, USA,
               February 7-12, 2020},
  pages     = {5085--5092},
  publisher = {{AAAI} Press},
  year      = {2020},
  url       = {https://aaai.org/ojs/index.php/AAAI/article/view/5950},
  timestamp = {Thu, 04 Jun 2020 16:49:55 +0200},
  biburl    = {https://dblp.org/rec/conf/aaai/MaLK20.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}