While experimenting with a video generation model, I had the idea of taking a picture of my room and using it in my ComfyUI workflow. I thought it could be fun.
So I took a photo with my phone and transferred it to my computer. Apart from the furniture and walls, nothing else appeared in the picture. I selected the image in the workflow and wrote a very short test prompt: “A guy in the room.” My main goal was to see whether the room would stay consistent in the generated video.
Once the rendering was complete, I felt the onset of a panic attack. Why? The man generated in the AI video was none other than myself. I jumped up from my chair, completely panicked, and plunged into total confusion as the most extravagant theories raced through my mind.
Once I had calmed down, though still perplexed, I started analyzing the photo I had taken. After a few minutes of investigation, I finally discovered a faint reflection of myself taking the picture.
submitted by /u/Naji128