From 5b110b020e3b1d0badabd216f68bd7f100969d5e Mon Sep 17 00:00:00 2001
From: AK391 <81195143+AK391@users.noreply.github.com>
Date: Thu, 7 Apr 2022 20:25:21 -0400
Subject: [PATCH] add web demo link

---
 README.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/README.md b/README.md
index 0b06234..ffa0d75 100644
--- a/README.md
+++ b/README.md
@@ -24,6 +24,7 @@
 - More pre-trained LDMs are available:
   - A 1.45B [model](#text-to-image) trained on the [LAION-400M](https://arxiv.org/abs/2111.02114) database.
   - A class-conditional model on ImageNet, achieving a FID of 3.6 when using [classifier-free guidance](https://openreview.net/pdf?id=qw8AKxfYbI) Available via a [colab notebook](https://colab.research.google.com/github/CompVis/latent-diffusion/blob/main/scripts/latent_imagenet_diffusion.ipynb) [![][colab]][colab-cin].
+  - Integrated into [Huggingface Spaces 🤗](https://huggingface.co/spaces) using [Gradio](https://github.com/gradio-app/gradio). Try out the Web Demo: [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/multimodalart/latentdiffusion)

 ## Requirements

 A suitable [conda](https://conda.io/) environment named `ldm` can be created