Stable Diffusion Models are Secretly Good at Visual In-Context Learning
Large language models (LLMs) in natural language processing (NLP) have demonstrated great potential for in-context learning (ICL) — the ability to adapt to various tasks from a small set of example prompts, without explicitly updating the model weights. ICL has recently been explored for computer vision tasks with promising early outcomes. These …