Use pyenv and poetry to set up a virtualenv and install the dependencies:

```bash
pyenv shell
poetry install
```

Turn the numbered SVGs into usable arrays:

```bash
python create_dataset.py --dataset_dir datasets/naam6/
```
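
The snippet below is a minimal, hypothetical sketch of what such a conversion boils down to, assuming the numbered SVGs store their strokes as `<polyline>` elements and that the result should be the stroke-3 `.npz` files (offset triplets `[dx, dy, pen_lifted]` with `train`/`valid`/`test` splits) that sketch-rnn loads. The file layout, class name and split sizes are made up for illustration; `create_dataset.py` is the authoritative implementation.

```python
# Hypothetical sketch, not the actual create_dataset.py: every drawing becomes
# an (N, 3) array of [dx, dy, pen_lifted] offsets, and each class ends up as one
# .npz with train/valid/test splits, the format sketch-rnn expects.
import glob
import numpy as np
from xml.etree import ElementTree as ET

def svg_to_strokes(path):
    ns = {"svg": "http://www.w3.org/2000/svg"}
    rows, prev = [], (0.0, 0.0)
    for poly in ET.parse(path).getroot().iterfind(".//svg:polyline", ns):
        pts = [tuple(map(float, p.split(","))) for p in poly.get("points").split()]
        for i, (x, y) in enumerate(pts):
            rows.append([x - prev[0], y - prev[1], 1 if i == len(pts) - 1 else 0])
            prev = (x, y)
    return np.array(rows, dtype=np.int16)

def object_array(items):  # ragged sequences need an explicit object array
    arr = np.empty(len(items), dtype=object)
    arr[:] = items
    return arr

drawings = [svg_to_strokes(f) for f in sorted(glob.glob("datasets/naam6/diede/*.svg"))]
np.savez_compressed("datasets/naam6/diede.npz",
                    train=object_array(drawings[:-20]),
                    valid=object_array(drawings[-20:-10]),
                    test=object_array(drawings[-10:]))
```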

Train the model (save often, as we'll use the intermediate steps later on):

```bash
poetry shell
#sketch_rnn_train --log_root=models/naam6 --data_dir=datasets/naam6 --hparams="data_set=[diede.npz,blokletters.npz],dec_model=layer_norm,dec_rnn_size=450,enc_model=layer_norm,enc_rnn_size=300,save_every=50,grad_clip=1.0,use_recurrent_dropout=0,conditional=True,num_steps=5000"
sketch_rnn_train --log_root=models/naam6 --data_dir=datasets/naam6 --hparams="data_set=[diede.npz,blokletters.npz,pleun.npz],dec_model=layer_norm,dec_rnn_size=450,enc_model=layer_norm,enc_rnn_size=300,save_every=50,grad_clip=1.0,use_recurrent_dropout=0,conditional=True,num_steps=5000"
```
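
To check which intermediate checkpoints have piled up (the card generation below samples across them), something like the following can be used. This is a hedged sketch, not part of the repo, and it assumes the trainer writes TensorFlow-style checkpoint files whose names end in `-<step>.index`.

```python
# List the training steps of the checkpoints saved in models/naam6,
# assuming TensorFlow-style files named <prefix>-<step>.index.
import glob
import os
import re

steps = []
for f in glob.glob(os.path.join("models/naam6", "*.index")):
    m = re.search(r"-(\d+)\.index$", os.path.basename(f))
    if m:
        steps.append(int(m.group(1)))
if steps:
    print(f"{len(steps)} checkpoints, steps {min(steps)} .. {max(steps)}")
```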

Generate a card:

```bash
python create_card.py --data_dir datasets/naam4 --model_dir models/naam4 --max_checkpoint_factor .8 --columns 5 --rows 13 --create_grid --last_is_target --last_in_group --target_sample 202 --output_file generated/cards/card-99x190-1.svg
```

Generate an A3 poster:

```bash
python create_card.py --data_dir datasets/naam4 --model_dir models/naam4 --max_checkpoint_factor .8 --columns 15 --rows 28 --last_is_target --last_in_group --target_sample 202 --output_file generated/poster1.svg --width 297mm --height 420mm --page_margin 110
```
Relevant options:

- `max_checkpoint_factor`: the model was trained for more iterations than are needed to generate a nice card (roughly half of the card already looks smooth); lowering this factor uses e.g. only the first 80% (`.8`) of the iterations (see the sketch after this list).
- `split_paths`: drawings that consist of multiple strokes are split over paths, which are in turn split over a given number of groups (see `nr_of_paths`).
- `last_is_target`: the last item (bottom right) is not generated but hand-picked from the dataset (see `target_sample`).
- `last_in_group`: puts the last drawing in a separate group.
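
As a rough illustration of `max_checkpoint_factor` (a hypothetical sketch, not the actual `create_card.py` logic): with `save_every=50` and `num_steps=5000` the trainer leaves checkpoints at steps 50, 100, ..., 5000, and a factor of `.8` restricts the card to the first 80% of them, so the later, already too smooth models are skipped.

```python
# Hypothetical sketch of what --max_checkpoint_factor .8 amounts to:
# only checkpoints from the first 80% of training get used for the card.
all_steps = list(range(50, 5001, 50))  # save_every=50, num_steps=5000
factor = 0.8
usable = [s for s in all_steps if s <= factor * all_steps[-1]]
print(usable[0], usable[-1])  # 50 4000
```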

naam4: strokes & blokletters: 101-360; strokes only: 101-259