# Experiments with Stable Diffusion
## Image variations
[![](assets/img-vars.jpg)](https://twitter.com/Buntworthy/status/1561703483316781057)
Try it out in colab: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1JqNbI_kDq_Gth2MIYdsphgNgyGIJxBgB?usp=sharing)
_TODO describe in more detail_
- Get access to a Linux machine with a decent NVIDIA GPU (e.g. on [Lambda GPU Cloud](https://lambdalabs.com/service/gpu-cloud))
- Clone this repo
- Make sure PyTorch is installed and then install other requirements: `pip install -r requirements.txt`
- Download the model checkpoint from the Hugging Face Hub: [lambdalabs/stable-diffusion-image-conditioned](https://huggingface.co/lambdalabs/stable-diffusion-image-conditioned/blob/main/sd-clip-vit-l14-img-embed_ema_only.ckpt)
- Put the model at `models/ldm/stable-diffusion-v1/sd-clip-vit-l14-img-embed_ema_only.ckpt`
- Run `scripts/image_variations.py` or `scripts/gradio_variations.py`
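
The scripts load the checkpoint from the path given above, so it is worth verifying before launching. A minimal sketch (this helper is illustrative, not part of the repo):

```python
from pathlib import Path

# Location the variation scripts expect the checkpoint to live,
# relative to the repo root (see the setup steps above).
CKPT = Path("models/ldm/stable-diffusion-v1") / "sd-clip-vit-l14-img-embed_ema_only.ckpt"

def check_checkpoint(path: Path = CKPT) -> bool:
    """Return True if the image-conditioned checkpoint is in place."""
    return path.is_file()

if __name__ == "__main__":
    if not check_checkpoint():
        print(f"Missing checkpoint: {CKPT} - download it first (see above).")
```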
All together:
```bash
git clone https://github.com/justinpinkney/stable-diffusion.git
cd stable-diffusion
mkdir -p models/ldm/stable-diffusion-v1
wget https://huggingface.co/lambdalabs/stable-diffusion-image-conditioned/resolve/main/sd-clip-vit-l14-img-embed_ema_only.ckpt -O models/ldm/stable-diffusion-v1/sd-clip-vit-l14-img-embed_ema_only.ckpt
pip install -r requirements.txt
python scripts/gradio_variations.py
```
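
What makes this fork different is the conditioning signal: instead of a CLIP text embedding, the UNet receives a CLIP ViT-L/14 *image* embedding of the input picture. A rough sketch of that conditioning vector (the 768-dim shape matches ViT-L/14; whether the released scripts L2-normalize is an assumption of this sketch):

```python
import numpy as np

def normalize_embedding(emb: np.ndarray) -> np.ndarray:
    """L2-normalize an embedding along its last axis, a common way to
    put CLIP embeddings on a consistent scale before conditioning."""
    return emb / np.linalg.norm(emb, axis=-1, keepdims=True)

# Stand-in for a real CLIP ViT-L/14 image embedding (batch of 1, 768-dim).
rng = np.random.default_rng(0)
fake_clip_embedding = rng.standard_normal((1, 768)).astype(np.float32)

cond = normalize_embedding(fake_clip_embedding)
print(cond.shape)  # (1, 768)
```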
Then you should see this:
[![](assets/gradio_variations.jpeg)](https://twitter.com/Buntworthy/status/1565704770056294400)
Trained by [Justin Pinkney](https://www.justinpinkney.com) ([@Buntworthy](https://twitter.com/Buntworthy)) at [Lambda](https://lambdalabs.com/)