This is a result from RunwayML Gen2 AI image-to-video. It is implemented as a web service, so it costs money, it can be slow, there is content filtering, and you're at their mercy for data privacy and access.
It generally struggles to maintain a consistent form, with characters often melting into someone else if there is significant animation. It also can't replicate a style, which is why I used the Stable Diffusion edit of this character as the image prompt, since it has a more generic photorealistic style.
They have announced Gen3, but it is not available to the public yet. From the results they've shown, I'd say it has potential to replace stock video for corporate ads and video essay filler, but it's not quite there yet for animation production.
Again, I love the look of bewilderment. It's like she was just transformed on the spot and is reeling from the sensation, like "wh-what happened?"