Hey all. I've been working on WaTale, a visual novel app powered by local AI. It combines text, image, and voice models to create fully interactive, branching visual novels entirely on your own hardware. It's a free-to-use, hassle-free, fully bundled solution.

When you rely on the local generation pipeline (Ollama for text; Stable Diffusion 1.5 with LayerDiffuse and ControlNet for images; Kokoro ONNX for TTS), your stories and character data remain completely private. There is also optional support for the Ollama Cloud, Anthropic, and OpenAI APIs if you prefer cloud text models.

The engine handles real-time generation and playback: it renders SD-generated scene backgrounds with depth parallax, full-body transparent character sprites with idle animations, and real-time lip-syncing via face inpainting. You can create custom characters, put yourself in the story, play through generated narratives with integrated minigames, export your stories, or let your characters interact autonomously.

Keep in mind this is an early preview requiring an NVIDIA GPU with at least 4GB of VRAM; you might encounter some bugs and things may break.

I'm looking for feedback of all kinds, especially on the Stable Diffusion implementation. You can see demo footage and download the application directly at watale – com. Let me know what you think or if you have any questions about how it works under the hood.

submitted by /u/Churrucaman
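For anyone curious about the depth-parallax backgrounds mentioned above, here is a minimal sketch of how layer-based parallax is commonly done: each layer gets a normalized depth value and is shifted opposite the pointer, with nearer layers moving more than farther ones. The function name, parameters, and linear depth model below are illustrative assumptions, not WaTale's actual code.

```python
def parallax_offset(pointer_x: float, pointer_y: float, depth: float,
                    strength: float = 30.0) -> tuple[float, float]:
    """Compute a pixel offset for one scene layer.

    pointer_x / pointer_y: pointer position in [-1, 1] relative to
    screen center.
    depth: normalized layer depth, 0.0 = far background (static),
    1.0 = near foreground (moves the most).
    strength: maximum shift in pixels at full depth (assumed value).
    """
    # Shift opposite the pointer so near layers appear to slide past
    # far ones, producing the parallax illusion.
    dx = -pointer_x * depth * strength
    dy = -pointer_y * depth * strength
    return (dx, dy)
```

In practice the per-layer depth can come from a monocular depth estimate of the SD-generated background, quantized into a few slices.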