Stable Diffusion fine-tune repo for the Paris shopfront experiment (This Place Does Exist)
# Experiments with Stable Diffusion

This repository extends the original training repository for Stable Diffusion.

Currently it adds:

## Fine-tuning

It makes it easy to fine-tune Stable Diffusion on your own dataset, for example generating new Pokemon from text:

*Sample outputs for the prompts: Girl with a pearl earring, Cute Obama creature, Donald Trump, Boris Johnson, Totoro, Hello Kitty*

For a step-by-step guide, see the Lambda Labs examples repo.
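A fine-tuning run is launched through `main.py`. The sketch below follows the CompVis-style training conventions this repo inherits; the config file name, checkpoint path, and exact flags are assumptions, so check the step-by-step guide for the precise invocation for your setup.

```shell
# Hypothetical launch of a fine-tuning run. The config path and checkpoint
# name are placeholders -- substitute the config prepared for your dataset
# and the base Stable Diffusion checkpoint you downloaded.
python main.py \
    -t \
    --base configs/stable-diffusion/my-dataset.yaml \
    --gpus 0, \
    --scale_lr False \
    --finetune_from models/ldm/stable-diffusion-v1/sd-v1-4-full-ema.ckpt
```

Training logs and checkpoints are written under a timestamped directory in `logs/`, following the usual behavior of this family of training repos.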

## Image variations


For more details on the Image Variation model, see the model card.

- Get access to a Linux machine with a decent NVIDIA GPU (e.g. on Lambda GPU Cloud)
- Clone this repo
- Make sure PyTorch is installed, then install the other requirements: `pip install -r requirements.txt`
- Download the model from the Hugging Face Hub: lambdalabs/stable-diffusion-image-conditioned
- Put the checkpoint at `models/ldm/stable-diffusion-v1/sd-clip-vit-l14-img-embed_ema_only.ckpt`
- Run `scripts/image_variations.py` or `scripts/gradio_variations.py`

All together:

```shell
git clone https://github.com/justinpinkney/stable-diffusion.git
cd stable-diffusion
mkdir -p models/ldm/stable-diffusion-v1
wget https://huggingface.co/lambdalabs/stable-diffusion-image-conditioned/resolve/main/sd-clip-vit-l14-img-embed_ema_only.ckpt -O models/ldm/stable-diffusion-v1/sd-clip-vit-l14-img-embed_ema_only.ckpt
pip install -r requirements.txt
python scripts/gradio_variations.py
```

Then a Gradio demo for generating image variations should launch in your browser.

Trained by Justin Pinkney (@Buntworthy) at Lambda