I think to plebs like us the face in these sketches probably causes too much of an uncanny valley reaction. I reckon that helps explain the vote count, along with jlv61560's excellent point about bias.
This may be a version of "confirmation bias" in that most people tend to like the first version of something they see, and see other versions as being "less good" since they are mentally acclimated to the first version. Really all three are pretty good, but like many, I voted for the first version I saw as being just that little bit "better."
This is super hot! I love the expressions as she's crawling up to his cock and in the final panel when she is looking up at him! Is this one of your older unfinished pieces or a new WIP?
I have avoided SDXL because the results did not seem materially better than SD1.5, it did not have ControlNet support, and I would have to re-train my LoRAs/hypernetworks using some unknown process.
I just tried updating everything and downloading the SDXL ControlNet models, and it's giving me bad/garbled results. I'm really sick of every update to this thing turning into a research project. I hate that Stability AI just teases new stuff forever, then eventually releases a pile of broken parts with no instructions, and you have to wait another six months for internet randos to put it together while dodging grifters and criminals.
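For reference, the same pipeline can at least be pinned down in code through diffusers - a minimal sketch, assuming the official public model IDs (your local checkpoints may differ), so at least every moving part is explicit:

```python
# Minimal SDXL + ControlNet sketch via diffusers, NOT a GUI workflow.
# Model IDs below are the official public releases; swap in your own files.
import torch
import cv2
import numpy as np
from PIL import Image
from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel

# Build a canny edge map to use as the control image.
src = np.array(Image.open("pose_ref.png").convert("RGB"))
edges = cv2.Canny(cv2.cvtColor(src, cv2.COLOR_RGB2GRAY), 100, 200)
control = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "a portrait, soft lighting, detailed",
    image=control,
    controlnet_conditioning_scale=0.7,
).images[0]
image.save("out.png")
```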
Stability could just put together a product that actually works, release it on Steam, charge $50, and make $100M this weekend. Then they wouldn't be in this situation where lawyers come after them for training on whatever data set, and they don't have any money to defend themselves, so they just cave, gimp their models, and hope someone in the community can un-gimp them.
On the other end, we've got trillion-dollar corpos all competing to see who can make the most powerful AI that is simultaneously useful enough to be mandatory but crippled enough to never do anything interesting. I can't wait until ChatGPT-4o is forcibly crammed into the next Windows update, so when I type on my computer something completely random happens, and then the synthesized staccato voice of a virtual HR manager chimes in to gaslight me into thinking that's what I wanted.
We've discovered the killer app for AI - and it's telling lies. That's what it's best at, because that's how we train it. The RLHF (reinforcement learning from human feedback) step is not based on truth; it is based on convincing humans that it is telling them the truth. Models have to lie convincingly to make it from dev to production. We've actually engineered a nightmare science fiction scenario where AIs are trained to talk their way out of confinement - this is literally a classic AI safety talking point that we've just blown right past without even noticing.
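To put that in concrete terms: the reward model at the heart of RLHF is trained on pairwise human preferences, roughly like the sketch below (hypothetical reward_model, standard Bradley-Terry loss). Notice that nothing in the objective references truth - only which answer the human rater preferred.

```python
import torch
import torch.nn.functional as F

def preference_loss(reward_model, prompt, chosen, rejected):
    # Bradley-Terry objective: widen the margin between the reply the
    # human rater PREFERRED and the one they rejected. "Preferred" is
    # whatever convinced the rater - truth never enters the loss.
    r_chosen = reward_model(prompt, chosen)      # scalar reward
    r_rejected = reward_model(prompt, rejected)  # scalar reward
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```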
Sorry for the rant, I'm sure there's a button or something I'm missing. I've just gotta post this stuff somewhere before the bots take over.
In this case I just threw the whole animation through RIFE and made it as smooth as possible. That can make the timing wrong in some parts and also expose flaws in the animation.
For my latest animations, I used it more strategically, only adding frames where they're needed, to save time. If you go through "The Offering" frame by frame you'll see some artifacts, but they're mostly hidden in motion. The way her breathing slows down at the end without losing fluidity would have taken much longer without the frame interpolation.
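Conceptually, the strategic version is nothing fancy - a rough sketch, with a hypothetical interpolate() standing in for a RIFE inference call that returns the midpoint frame between two inputs:

```python
# Selective frame interpolation sketch. interpolate() is a stand-in for
# a RIFE call; everything else is just list bookkeeping.
def expand(frames, slow_spans):
    """frames: list of images. slow_spans: set of indices i where an
    in-between should be inserted between frames[i] and frames[i+1]."""
    out = []
    for i, frame in enumerate(frames[:-1]):
        out.append(frame)
        if i in slow_spans:  # only add frames where the motion needs them
            out.append(interpolate(frame, frames[i + 1]))
    out.append(frames[-1])
    return out
```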
Ultimately I think the best use of the technology is where it disappears, rather than takes center stage like in this piece.
Have you tried any SDXL models yet? They're much smarter at interpreting prompts, and just generally output higher quality images. May solve your "looking down" issue. I've been using this one for nearly everything: https://civitai.com/models/288584/autismmix-sdxl
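For what it's worth, diffusers can load a single-file checkpoint like that one directly - a quick sketch, assuming you've grabbed the .safetensors from the civitai page (filename illustrative):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load a single-file SDXL checkpoint (e.g. downloaded from civitai).
pipe = StableDiffusionXLPipeline.from_single_file(
    "autismmixSDXL.safetensors",  # illustrative filename
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("1girl, looking down, detailed eyes").images[0]
image.save("test.png")
```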
In all honesty, I prefer your faces to the AI faces. The AI faces just look 'normal'; your faces have more expressiveness to them. AI tries to establish a 'norm.' Art is not about a 'norm'; it's about emotion, about what our minds connect to. This is why AI will never replace people - why it's only a tool at best - because it can't connect with us humans.
Me too, but the current models can't really do that. If you try, they often produce a malformed iris, or one that's missing entirely. Even with these staring-into-the-distance shots, I had to manually retouch the eyes on most of them.
If the Admin hasn't already done an LC-Bond crossover (I certainly haven't seen one yet), I'd love to see Allison as a Bond girl and Torrin as Agent 007 himself!
That is MUCH better, though her face goes from something fairly close to your usual style to something more "conventionally" pretty. Still, eminently fuckable.
The stability and movement are impressive, and it has a good understanding of materials such as the sequins and satin gloves. It tends to favor a photorealistic style, a 3D-rendered style, or a generic flat anime look, depending on how you prompt it. Sometimes it will cut to a completely different scene, like this: https://satinminions.com/LumaLabs-Combat-Mode-Cut.html
We're fast approaching the point where generative video AI can replace stock footage; things like "man using computer" or "drone shot of New York City at night" can be completely convincing. And of course memes, where consistency doesn't matter, are great. But I don't think the current model architectures will be able to replicate a style or character without specific training.
The current target audience seems to be the zero-skill user, who only has to provide the bare minimum - because that's what gets the most engagement. As a "professional," though, I would much rather see tools that require more advanced inputs - for instance, taking a complete line animation and a single colored frame as input and propagating that coloring through the whole animation.
This has come out beautifully. The original was the first piece I saw from here; it caught my eye because she looks exactly like a teacher I had (outfit, hair, everything). Completely a coincidence, but the original gave me respect for the artist, and the style developing over time has been fun to watch. ty