While experimenting with a video generation model, I had the idea of taking a picture of my room and using it in a ComfyUI workflow. I thought it could be fun.
So, I decided to take a photo with my phone and transfer it to my computer. Apart from the furniture and walls, nothing else appeared in the picture. I selected the image in the workflow and wrote a very short prompt to test: “A guy in the room.” My main goal was to see if the room would maintain its consistency in the generated video.
Once the rendering was complete, I felt the onset of a panic attack. Why? The man generated in the AI video was none other than myself. I jumped up from my chair, completely panicked, and plunged into total confusion as the most extravagant theories raced through my mind.
Once I had calmed down, though still perplexed, I started analyzing the photo I had taken. After a few minutes of investigation, I finally discovered a faint reflection of myself taking the picture.
submitted by /u/Naji128