Stable Diffusion fine-tune repo for the Paris shopfront experiment (This Place Does Exist)

Experiments with Stable Diffusion

Image variations

This version of Stable Diffusion has been fine tuned to be conditioned on CLIP image embeddings (ViT-L/14) in place of text embeddings: given an input image, it generates variations of that image.
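
As a rough illustration of the conditioning idea, the sketch below encodes a reference image with the standard OpenAI CLIP weights via the transformers library; this is not this repo's internal API, just a demonstration of the embedding the model is conditioned on:

import torch
from PIL import Image
from transformers import CLIPImageProcessor, CLIPVisionModelWithProjection

# Encode a reference image with CLIP ViT-L/14; the projected image embedding
# is what the fine-tuned model is conditioned on instead of a text embedding.
processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-large-patch14")
encoder = CLIPVisionModelWithProjection.from_pretrained("openai/clip-vit-large-patch14")

image = Image.open("input.jpg")  # any reference image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    image_embed = encoder(**inputs).image_embeds  # shape (1, 768)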

  • Get access to a Linux machine with a decent NVIDIA GPU (e.g. on Lambda GPU Cloud)
  • Clone this repo
  • Make sure PyTorch is installed and then install other requirements: pip install -r requirements.txt
  • Download the checkpoint from the Hugging Face hub: lambdalabs/stable-diffusion-image-conditioned
  • Put the checkpoint at models/ldm/stable-diffusion-v1/sd-clip-vit-l14-img-embed_ema_only.ckpt
  • Run scripts/image_variations.py or scripts/gradio_variations.py (a minimal loading sketch follows this list)
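
To load the checkpoint in your own code, the usual CompVis loading pattern looks like the sketch below; the config path is an assumption, so check the configs directory for the image-conditioned config this repo actually uses:

import torch
from omegaconf import OmegaConf
from ldm.util import instantiate_from_config

# The config path below is an assumption; point it at the image-conditioned
# config shipped in configs/.
config = OmegaConf.load("configs/stable-diffusion/v1-inference.yaml")
ckpt = "models/ldm/stable-diffusion-v1/sd-clip-vit-l14-img-embed_ema_only.ckpt"

state_dict = torch.load(ckpt, map_location="cpu")["state_dict"]
model = instantiate_from_config(config.model)
model.load_state_dict(state_dict, strict=False)
model = model.cuda().eval()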

All together:

git clone https://github.com/justinpinkney/stable-diffusion.git
cd stable-diffusion
mkdir -p models/ldm/stable-diffusion-v1
wget https://huggingface.co/lambdalabs/stable-diffusion-image-conditioned/resolve/main/sd-clip-vit-l14-img-embed_ema_only.ckpt -O models/ldm/stable-diffusion-v1/sd-clip-vit-l14-img-embed_ema_only.ckpt
pip install -r requirements.txt
python scripts/gradio_variations.py
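
For scripted sampling instead of the Gradio demo, generation follows the standard CompVis DDIM pattern. The sketch below is not a verbatim excerpt from scripts/image_variations.py; in particular, the input format for get_learned_conditioning and the zero unconditional embedding are assumptions:

import torch
from ldm.models.diffusion.ddim import DDIMSampler

# `model` is the LatentDiffusion model loaded as in the earlier sketch;
# `image_tensor` is a preprocessed reference image batch scaled to [-1, 1].
sampler = DDIMSampler(model)
with torch.no_grad():
    c = model.get_learned_conditioning(image_tensor)  # CLIP image embedding (input format assumed)
    uc = torch.zeros_like(c)  # unconditional embedding for classifier-free guidance (assumption)
    samples, _ = sampler.sample(
        S=50,                            # DDIM steps
        conditioning=c,
        batch_size=c.shape[0],
        shape=[4, 64, 64],               # latent shape for a 512x512 output
        unconditional_guidance_scale=3.0,
        unconditional_conditioning=uc,
        eta=0.0,
    )
    images = model.decode_first_stage(samples)            # latents -> images in [-1, 1]
    images = torch.clamp((images + 1.0) / 2.0, 0.0, 1.0)  # rescale to [0, 1]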

Once the Gradio script is running, you should see a demo UI for generating image variations in your browser.

Trained by Justin Pinkney (@Buntworthy) at Lambda