This is super hot! I love the expressions as she's crawling up to his cock and in the final panel when she is looking up at him! Is this one of your older unfinished pieces or a new WIP?
I have avoided SDXL because the results did not seem materially better than SD1.5, it did not have ControlNet support, and I would have had to re-train my LoRAs/hypernets using some unknown process.
I just tried updating everything and downloading the SDXL ControlNet models and it's giving me bad/garbled results. I'm really sick of every update to this thing turning into a research project. I hate that Stability AI just teases new stuff forever, then eventually releases a pile of broken parts with no instructions, and you have to wait another six months for internet randos to put it together while dodging grifters and criminals.
Stability could just put together a product that actually works, release it on Steam, charge $50, and make $100M this weekend. Then they wouldn't be in this situation where lawyers come after them for training on whatever dataset and they don't have any money to defend themselves, so they just cave, gimp their models, and hope someone in the community can un-gimp them.
On the other end, we've got trillion-dollar corpos all competing to see who can make the most powerful AI that is simultaneously useful enough to be mandatory but crippled enough to never do anything interesting. I can't wait until GPT-4o is forcefully crammed into the next Windows update, so when I type on my computer something completely random happens, and then the synthesized staccato voice of a virtual HR manager chimes in to gaslight me into thinking that's what I wanted.
We've discovered the killer app for AI - and it's telling lies. That's what it's best at, because that's how we train it. The RLHF (reinforcement learning from human feedback) step is not based on truth; it is based on convincing humans that the model is telling them the truth. Models have to lie convincingly to make it from dev to production. We've actually engineered a nightmare science fiction scenario where AIs are trained to talk their way out of confinement - this is literally a classic AI safety talking point that we've just blown right past without even noticing.
Sorry for the rant, I'm sure there's a button or something I'm missing. I've just gotta post this stuff somewhere before the bots take over.
In this case I just threw the whole animation through RIFE and made it as smooth as possible. That can make the timing wrong in some parts and also expose flaws in the animation.
For my latest animations, I used it more strategically to only add frames where they're needed and to save time. If you go through "The Offering" frame by frame you'll see some artifacts, but it's mostly hidden in motion. The way her breathing slows down at the end without losing fluidity would have taken much longer without the frame interpolation.
Ultimately I think the best use of the technology is where it disappears, rather than taking center stage like it does in this piece.
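For anyone wanting to script the selective version: here's a rough sketch that copies every original frame through and only generates in-betweens for hand-picked frame pairs. It assumes the inference_img.py script from the hzwer RIFE repo; flag names and output paths vary between forks, so check the README of whichever one you use.

```python
import shutil
import subprocess
from pathlib import Path

frames = sorted(Path("frames").glob("*.png"))
needs_tween = {12, 13, 30}  # hand-picked: frame i -> i+1 gets a midpoint

out = Path("smoothed")
out.mkdir(exist_ok=True)
n = 0
for i, frame in enumerate(frames):
    shutil.copy(frame, out / f"{n:05d}.png")
    n += 1
    if i in needs_tween and i + 1 < len(frames):
        # Ask RIFE for a single midpoint frame between this pair
        subprocess.run(
            ["python", "inference_img.py",
             "--img", str(frame), str(frames[i + 1]),
             "--exp", "1"],
            check=True,
        )
        # This fork writes results to output/; img1.png is the midpoint
        shutil.copy("output/img1.png", out / f"{n:05d}.png")
        n += 1
```

Passing the originals through untouched keeps the timing intact everywhere except the pairs you flag - whereas interpolating the whole animation smooths everything, including the parts that shouldn't be smooth.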
Have you tried any SDXL models yet? They're much smarter at interpreting prompts, and just generally output higher quality images. May solve your "looking down" issue. I've been using this one for nearly everything: https://civitai.com/models/288584/autismmix-sdxl
In all honesty, I prefer your faces to the AI faces. The AI faces just look 'normal'; your faces have more expressiveness to them. AI tries to establish a 'norm,' but art is not about a 'norm' - it's about emotion, about what our minds connect to. This is why AI will never replace people - why it's only a tool at best - because it can't connect with us humans.
Me too, but the current models can't really do that. If you try, it often has problems where the iris is malformed or missing entirely. Even with these staring-into-the-distance shots, I had to manually retouch the eyes on most of them.
If the Admin hasn't already done an LC-Bond crossover (I certainly haven't seen one yet), I'd love to see Allison as a Bond girl and Torrin as Agent 007 himself!
That is MUCH better, though her face goes from something fairly close to your usual style to something more "conventionally" pretty. Still, eminently fuckable.
The stability and movement are impressive, and it has a good understanding of materials such as the sequins and satin gloves. It tends to favor a photorealistic style, a 3D-rendered style, or a generic flat anime look depending on how you prompt it. Sometimes it will cut to a completely different scene, like this: https://satinminions.com/LumaLabs-Combat-Mode-Cut.html
We're fast approaching the point where generative video AI can replace stock footage; clips like "man using computer" or "drone shot of New York City at night" can be completely convincing. And of course memes, where consistency doesn't matter, are great. But I don't think the current model architectures will be able to replicate a style or character without specific training.
The current target audience seems to be the zero-skill user, where you only have to provide the bare minimum - because that's what gets the most engagement. As a "professional" though, I would much rather see tools that require more advanced inputs - for instance, input a complete line animation plus a single colored frame, and have it propagate that coloring through the animation.
This has come out beautifully. The original was the first piece I saw from here; it caught my eye because she looks exactly like a teacher I had (outfit, hair, everything). Completely a coincidence, but the original gave me respect for the artist, and the style developing over time has been fun to watch. ty
Writing all that makes it seem like some huge process. It's way easier than drawing something. But a breakdown of the drawing process is like:
Draw some lines. Color 'em in.
The main bottleneck for AI is the time it takes to generate images. If my computer was infinitely fast, it would only take minutes. As it is now I usually queue up a batch to run when I'm afk, which isn't often.
The first step is to dial in the prompt and settings. I test different models and denoising strengths to see what looks good. Sometimes it has trouble understanding a pose until you give it the right prompt. In this case it understood it fairly well, but specifying hands_on_hips, short_sleeves, and pencil_skirt helped.
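If you'd rather script the dial-in than click through a UI, it looks roughly like this with the diffusers library (a sketch, not my actual setup - the model name, input file, and strength values are placeholders):

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Placeholder base model; in practice you'd load whatever
# checkpoint you're testing.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

drawing = Image.open("pinup_sketch.png").convert("RGB")
prompt = "1girl, hands_on_hips, short_sleeves, pencil_skirt"

# Sweep denoising strength: low stays close to the drawing,
# high lets the model reinterpret the pose.
for strength in (0.3, 0.45, 0.6):
    image = pipe(prompt=prompt, image=drawing, strength=strength).images[0]
    image.save(f"test_strength_{strength}.png")
```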
Once it starts looking good, I generate a bunch of images with those settings. Then I piece together the good ones in Photoshop and do a little retouching. It's rare for one image to get everything right, so I'll take the face from one, the body from another, etc. The hands are usually messed up and have to have parts redrawn.
Then I export the composite image and upscale it 3x. Then I'll do AI inpainting on key parts to high-res-ify them. In this case, her face, her boobs, and her waist. I tried her skirt a couple times but I ended up keeping the original.
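Scripted, the upscale-and-inpaint step looks something like this (again a diffusers sketch - the paths, crop coordinates, and inpainting model are placeholders, and a dedicated AI upscaler would beat plain Lanczos):

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

# 3x upscale of the composite
composite = Image.open("composite.png").convert("RGB")
big = composite.resize((composite.width * 3, composite.height * 3),
                       Image.LANCZOS)

# Crop a 512x512 window around the face (placeholder coordinates) and
# load a hand-painted mask - white where the model should repaint.
box = (800, 100, 1312, 612)
face = big.crop(box)
mask = Image.open("face_mask_512.png").convert("RGB")

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

detailed = pipe(
    prompt="1girl, face, detailed eyes",
    image=face,
    mask_image=mask,
).images[0]

# Paste the re-generated region back into the upscaled composite
big.paste(detailed, box[:2])
big.save("final.png")
```

Working on a cropped window like this is what makes high-res detail possible - the model only ever sees a native-resolution tile, not the whole 3x image.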
The main thing for this one is that I knew it would work since I had already done the prompt testing part at low resolution on my old video card. In general a simple pinup pose like this is easy since there is lots of training data for it. I have yet to get good results on unusual angles like this https://satinminions.com/Suction-Dildo-Shower-04.html or first person shots.
It is possible to just run an image through and get pretty good results. But if you want a high res output where you struggle to find flaws, more work is required. Here is the raw output for this image: https://satinminions.com/SD-15675-Raw.html