You thought you could get away from it? Never.

The folks at Yandex and Adobe implemented CLIP guidance for a bunch of models that don't use it – https://github.com/quickjkee/modulation-guidance

I made it into a ComfyUI node for Anima – https://github.com/Anzhc/Anima-Mod-Guidance-ComfyUI-Node

For the images above and below I used the CLIP L from here – https://huggingface.co/Anzhc/Noobai11-CLIP-L-and-BigG-Anime-Text-Encoders

Basic CLIP L also works, but your mileage may vary; every CLIP has a different effect. Unfortunately it won't let you use prompt weighting like on SDXL, but from what I tested that was also at least a bit better.

So what are the benefits anyway? From what I tested (left is base Anima, right is with Modulation Guidance):

– Can reduce color leaks (the necktie is not even prompted).
– Improves composition and stability (yes, I picked the funniest example, sue me).
– Less beach. For no reason whatsoever, Anima LOVES to default to an ocean or beach; that effect is reduced with CLIP.
– Less unprompted horny (I know for most of you this is a negative, though). (The afterimages were prompted; I just wanted her to sweep floors…)
– A little bit better (from what I tested) character separation and adherence to character look, though it still largely relies on the base model's understanding in this aspect.
– Can also improve quality in general (subjective).
– Less 1girl bias (the prompt is just `masterpiece, best quality, scenery`).

I primarily tested with tags only. While I did test with some NL, I generally don't have much luck with it on Anima; for me it's unstable and inconsistent, so I'll leave it to you to find out whether CLIP helps there or not.

P.S. All girls in the images are clothed/in bikini; I just censored them to keep it safe. But I really can't emphasize enough how horny Anima is by default…

It's easy to use, and I've included a prepared workflow so you can compare both results for yourself; you can find it in the repo.
To use it, you don't need to write a separate prompt for it every time; generally you just use it as secondary quality tags and wire the negative and base in from your main prompts. Based on the official repo, you can tune it to affect different things, but I haven't tried using it like that, so it's up to you to test.

That's it. Have fun. Till next time.

Also, she's just like me, frfr.

If you're here, here are the links from the top of the post so you don't have to scroll:

Original implementation – https://github.com/quickjkee/modulation-guidance
ComfyUI node for Anima – https://github.com/Anzhc/Anima-Mod-Guidance-ComfyUI-Node (workflows can also be found right in the node repo)
For the images above I used the CLIP L from here – https://huggingface.co/Anzhc/Noobai11-CLIP-L-and-BigG-Anime-Text-Encoders

submitted by /u/Anzhc
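For intuition only: the actual Modulation Guidance method (see the quickjkee repo) injects the CLIP conditioning through the model's modulation layers, which I won't reproduce here. Below is just a minimal, hypothetical sketch of the generic CFG-style blend that this kind of extra guidance term builds on; all names and the exact formula are my own assumptions, not code from either repo.

```python
# Hypothetical sketch: standard classifier-free guidance plus one extra
# guidance term derived from a second (e.g. CLIP-based) conditioning.
# Plain lists stand in for noise-prediction tensors.

def guided_prediction(eps_uncond, eps_cond, eps_extra, cfg_scale, extra_scale):
    """Blend three noise predictions into one guided prediction.

    eps_uncond  - prediction without conditioning
    eps_cond    - prediction with the base text conditioning
    eps_extra   - prediction with the additional (CLIP-style) conditioning
    cfg_scale   - usual CFG weight; extra_scale weights the extra term
    """
    return [
        u + cfg_scale * (c - u) + extra_scale * (e - c)
        for u, c, e in zip(eps_uncond, eps_cond, eps_extra)
    ]

# With extra_scale = 0 this reduces to plain classifier-free guidance.
print(guided_prediction([0.0, 1.0], [1.0, 2.0], [1.5, 2.0], 2.0, 0.5))
# → [2.25, 3.0]
```

With `extra_scale = 0` the extra term vanishes and you get ordinary CFG back, which is why bolting such a term onto an existing sampler is relatively cheap.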