Stable diffusion fine-tune repo for Paris shopfront experiment (This Place Does Exist)

Experiments with Stable Diffusion

Image variations

Try it out in Colab: Open In Colab

TODO describe in more detail

  • Get access to a Linux machine with a decent NVIDIA GPU (e.g. on Lambda GPU Cloud)
  • Clone this repo
  • Make sure PyTorch is installed, then install the other requirements: pip install -r requirements.txt
  • Get the model from the Hugging Face Hub: lambdalabs/stable-diffusion-image-conditioned
  • Put the model at models/ldm/stable-diffusion-v1/sd-clip-vit-l14-img-embed_ema_only.ckpt
  • Run scripts/image_variations.py or scripts/gradio_variations.py

All together:

# clone the repo and download the image-conditioned checkpoint
git clone https://github.com/justinpinkney/stable-diffusion.git
cd stable-diffusion
mkdir -p models/ldm/stable-diffusion-v1
wget https://huggingface.co/lambdalabs/stable-diffusion-image-conditioned/resolve/main/sd-clip-vit-l14-img-embed_ema_only.ckpt -O models/ldm/stable-diffusion-v1/sd-clip-vit-l14-img-embed_ema_only.ckpt
# install dependencies and launch the Gradio demo
pip install -r requirements.txt
python scripts/gradio_variations.py
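
As the checkpoint name suggests, this model is conditioned on a CLIP ViT-L/14 image embedding rather than a text prompt, which is what makes image variations possible. The repo's scripts compute that conditioning for you; the snippet below is only a rough standalone sketch of what the embedding looks like, assuming the Hugging Face transformers library and a hypothetical local input.jpg:

import torch
from PIL import Image
from transformers import CLIPImageProcessor, CLIPVisionModelWithProjection

# Illustration only (not part of this repo): encode an image with CLIP ViT-L/14
# to get the kind of embedding the image-conditioned checkpoint uses in place
# of text conditioning.
processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-large-patch14")
encoder = CLIPVisionModelWithProjection.from_pretrained("openai/clip-vit-large-patch14")

image = Image.open("input.jpg").convert("RGB")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    image_embed = encoder(**inputs).image_embeds  # shape (1, 768)
print(image_embed.shape)

In the repo's scripts, this kind of embedding stands in for the usual text conditioning when sampling variations.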

Then you should see the Gradio demo interface.

Trained by Justin Pinkney (@Buntworthy) at Lambda