I hope that we get to see a full-color version of this panel in a future issue of Light Chains 'cuz it'd be killer hot to see Allison get fucked so savagely by Torin like this!!
oh god yes! always wanted to see more takes on a tgtf like this. even though you haven't made many of these tgtf animations, yours are among the best anyone has made.
Oh my gosh, did I really just... It is... I can taste it. Feel it on my tongue. Oh my gosh, so warm. So thick. So tasty. Oh my gosh, why do I think it is tasty? I shouldn't feel that... Or the... Pride? Do I feel proud for having sucked him like... a good girl? Why does it feel so... Hot? Why am I so hot? I want more. I need...
What I love about this panel is how mesmerized Allison is by Torrin's cock -- how 110% of her attention is focused on it, how plainly you can see how much she yearns to please him and his cock with her oral skills, and how religiously dedicated she is to fulfilling his desires!
This turned into a minor blog post, don't mind me:
Stable Diffusion works best on single-character scenes in standard poses. Pinups, basically. It's also pretty good at backgrounds.
The issue is that it doesn't reliably handle spatial relations or attribute bindings. So if you prompt it with "a blue ball on top of a red box", it will give you that... sometimes - but you'll also get random combinations of red, blue, ball, and box. It gets worse the more adjectives and relations you add in.
With multiple characters it's almost impossible to get the attributes applied to the right person. If you say "a woman with tan skin and red hair putting bunny ears on a kneeling woman with short blonde hair wearing a red leather bustier", you're gonna get red hair on both of them, because that's a common thing and you have the "red" token in there twice. It's also going to screw up normal features twice as much because there are more things in the scene. Even getting a single character that looks good often requires rolling the dice dozens of times because random bits will be screwed up.
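If you want to see the attribute mixing for yourself, here's a minimal sketch using the Hugging Face diffusers library (assuming a local GPU and the stock SD 1.5 checkpoint; the output file names are just placeholders). Rolling the same prompt across a handful of seeds is the quickest way to watch the colors swap around:

```python
# Minimal sketch: run the same relational prompt across several seeds and
# compare how the colors get bound to the objects. Assumes diffusers, torch,
# and a CUDA GPU with the stock SD 1.5 checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a blue ball on top of a red box"
for seed in range(8):
    generator = torch.Generator("cuda").manual_seed(seed)
    image = pipe(prompt, generator=generator).images[0]
    # Some seeds bind the colors correctly; others swap them or merge the objects.
    image.save(f"ball_on_box_{seed}.png")
```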
I have done some multiple-character scenes by inpainting each part individually, but the problem is you end up with a slightly different style and lighting on each part. Plus it's very time-consuming. The patchwork look is a problem even on single-character composite images; it's easy to accidentally stray into something that looks like a collage or that un-tooned Homer Simpson meme. The recent gold dress pinup kind of strays into that territory.
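For the curious, that per-character inpainting workflow looks roughly like the sketch below (the file names, masks, and prompts are placeholders, not my actual project files). Each masked region gets its own prompt and its own denoising pass, which is exactly where the style and lighting drift creeps in:

```python
# Rough sketch of the per-character inpainting workflow. Each masked region is
# repainted independently with its own prompt, so style/lighting can diverge.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

base = Image.open("scene_base.png").convert("RGB")  # rough full composition
regions = [
    ("mask_left_character.png", "a woman with tan skin and red hair"),
    ("mask_right_character.png", "a kneeling woman with short blonde hair in a red leather bustier"),
]

for mask_path, prompt in regions:
    mask = Image.open(mask_path).convert("L")  # white = area to repaint
    base = pipe(prompt=prompt, image=base, mask_image=mask).images[0]

base.save("scene_composite.png")
```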
DALL-E is better at comprehension, but it's proprietary and too large a model to run on consumer hardware at this time. I'm generally not interested in AI models that I can't run locally, and I'm especially disinterested in corporate-mandated artificial brain damage. Even if you're not trying to make edgy stuff, unfiltered models are just better because they have a more complete understanding of the world. Even Stable Diffusion has fallen into the "trust and safety" trap, and it looks like future developments will have to be underground.
The ControlNet extension, which I used to make the latest Allison pic, is a major improvement, but it still doesn't solve multiple characters or animations. I think the potential is there. I could see something like a segmentation map being added where each segment gets its own prompt. Temporal stability has been shown to be possible in things like Nvidia's StyleGAN and some newer text-to-video models. At some point you will be able to go from a sketch animation to a perfect render. The capability is in the AI model already; it just needs to be activated appropriately, similar to how ChatGPT is an activation layer on top of GPT-3.
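For reference, the ControlNet setup in code looks roughly like this (another diffusers sketch; the model IDs, the input sketch file, and the prompt are placeholders, not the exact settings I used). The line-art guide pins the pose and composition while the prompt fills in the content:

```python
# Sketch of ControlNet conditioning via diffusers. A rough scribble/line-art
# image drives the composition; the text prompt drives the content.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

sketch = Image.open("pose_sketch.png").convert("RGB")  # line-art guide image
image = pipe(
    "a woman with long hair, detailed illustration",
    image=sketch,
    num_inference_steps=30,
).images[0]
image.save("controlnet_render.png")
```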
I've done a lot of AI tinkering instead of drawing lately, and some people don't like it - but I hope everyone can appreciate that this is an existential crisis in art. Lots of people are in the "anger" and "denial" stages of grief. I've had some truly bizarre discussions on other forums where I try to demonstrate SD's ability to generate backgrounds and they start picking apart some 3-pixel-high blob of a bush in the distance because it's not an exact technical drawing. Like, have you ever seen a painting by a person? Bob Ross? The guy just smooshed his brush on the canvas and it looks great. A bunch of people were upset that Netflix made an anime short using AI for the backgrounds - but tons of anime have been using crappily filtered stock photographs and 3D models for backgrounds for decades; AI could only improve this situation. Even big-budget titles frequently use painted-over photographs, because even among artists, very few people can generate an accurate scene entirely from their mind.
A big problem is that a lot of people are walking around without any comprehension of what they're looking at or reading or listening to. They just make value judgements based on surface-level traits that, in the past, have reliably served as proxies for quality. There's a bunch of big words here? Must be written by somebody smart.
What sells this panel is the expression on Allison's face coupled with her body language!! In this scene, she appears to have settled into acceptance of her new, voluptuously feminine form and is eagerly anticipating the sexual conquest her mistress is about to engage in with her, even while playing coy. Her mistress, for her part, seems well aware of Allison's carnal hunger, seeing through her surface shyness.