# Basic VAE Example
This is an improved implementation of the paper [Auto-Encoding Variational Bayes](http://arxiv.org/abs/1312.6114) by Kingma and Welling.
It uses ReLUs and the Adam optimizer, instead of sigmoids and Adagrad. These changes make the network converge much faster.
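The VAE objective trains the network by adding a closed-form KL-divergence term to the reconstruction loss. A minimal sketch of that term, the diagonal-Gaussian KL from Appendix B of the paper (the function name and pure-Python style here are illustrative, not taken from main.py):

```python
import math

def kl_divergence(mu, logvar):
    # Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian,
    # from Appendix B of Kingma & Welling:
    #   -0.5 * sum(1 + log(sigma^2) - mu^2 - sigma^2)
    # mu and logvar are per-dimension lists for one latent sample.
    return -0.5 * sum(1 + lv - m ** 2 - math.exp(lv)
                      for m, lv in zip(mu, logvar))

# A unit-mean, unit-variance posterior costs 0.5 nats per dimension:
print(kl_divergence([1.0], [0.0]))  # 0.5
```

The PyTorch implementation computes the same expression with tensor ops so it can be backpropagated through alongside the reconstruction term.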
```bash
pip install -r requirements.txt
python main.py
```
The main.py script accepts the following arguments:
```bash
optional arguments:
  --batch-size      input batch size for training (default: 128)
  --epochs          number of epochs to train (default: 10)
  --no-cuda         disables CUDA training
  --seed            random seed (default: 1)
  --log-interval    how many batches to wait before logging training status
```