The stability and movement are impressive, and it has a good understanding of materials such as the sequins and satin gloves. It tends to favor a photorealistic style, a 3D-rendered style, or a generic flat anime look depending on how you prompt it. Sometimes it will cut to a completely different scene, like this: https://satinminions.com/LumaLabs-Combat-Mode-Cut.html
We're fast approaching the point where generative video AI can replace stock footage: clips like "man using computer" or "drone shot of New York City at night" can be completely convincing. And of course memes, where consistency doesn't matter, are great. But I don't think the current model architectures will be able to replicate a specific style or character without dedicated training.
The current target audience seems to be the zero-skill user, where you only have to provide the bare minimum - because that's what gets the most engagement. As a "professional", though, I would much rather see tools that require more advanced inputs - for instance, feed in a complete line animation plus a single colored frame, and have the tool propagate that coloring through the rest of the animation.
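That last idea - carrying colors from one reference frame across a line animation - is basically a motion-correspondence problem. Here's a toy sketch in Python/NumPy that assumes pure translation between frames and uses phase correlation to find the shift (a real tool would need dense optical flow or learned correspondences to handle deformation); `estimate_shift` and `propagate_color` are hypothetical names for illustration, not any actual product's API:

```python
import numpy as np

def estimate_shift(a, b):
    """Estimate the integer (dy, dx) translation taking frame a to frame b
    via phase correlation (normalized cross-power spectrum)."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = a.shape
    # unwrap circular shifts into signed offsets
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

def propagate_color(line_frames, colored_frame0):
    """Carry the colors of frame 0 across a sequence of grayscale line-art
    frames, assuming each frame is a rigid translation of the previous one."""
    out = [colored_frame0]
    for prev_line, next_line in zip(line_frames, line_frames[1:]):
        dy, dx = estimate_shift(prev_line, next_line)
        # phase correlation peaks at the negated shift, so roll by (-dy, -dx)
        out.append(np.roll(out[-1], (-dy, -dx), axis=(0, 1)))
    return out
```

The translation-only assumption is obviously far too weak for real animation, but it shows the shape of the pipeline: estimate motion from the line art, then warp the color layer along with it.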
That is MUCH better, though her face goes from something fairly close to your usual style to something more "conventionally" pretty. Still, eminently fuckable.
Here I used this image:
https://satinminions.com/Alison-Jessica-Rabbit-Side.html