# Experiments with Stable Diffusion

## Image variations

[![](assets/img-vars.jpg)](https://twitter.com/Buntworthy/status/1561703483316781057)

_TODO describe in more detail_

- Get access to a Linux machine with a decent NVIDIA GPU (e.g. on [Lambda GPU Cloud](https://lambdalabs.com/service/gpu-cloud))
- Clone this repo
- Make sure PyTorch is installed, then install the other requirements: `pip install -r requirements.txt`
- Download the model from the Hugging Face Hub: [lambdalabs/stable-diffusion-image-conditioned](https://huggingface.co/lambdalabs/stable-diffusion-image-conditioned/blob/main/sd-clip-vit-l14-img-embed_ema_only.ckpt)
- Put the model at `models/ldm/stable-diffusion-v1/sd-clip-vit-l14-img-embed_ema_only.ckpt`
- Run `scripts/image_variations.py` or `scripts/gradio_variations.py`

All together:

```
git clone https://github.com/justinpinkney/stable-diffusion.git
cd stable-diffusion
mkdir -p models/ldm/stable-diffusion-v1
wget https://huggingface.co/lambdalabs/stable-diffusion-image-conditioned/resolve/main/sd-clip-vit-l14-img-embed_ema_only.ckpt -O models/ldm/stable-diffusion-v1/sd-clip-vit-l14-img-embed_ema_only.ckpt
python scripts/gradio_variations.py
```

Trained by [Justin Pinkney](https://www.justinpinkney.com) ([@Buntworthy](https://twitter.com/Buntworthy)) at [Lambda](https://lambdalabs.com/)