Me too, but the current models can't really do that. If you try, the iris often comes out malformed or missing entirely. Even with these staring-into-the-distance shots, I had to manually retouch the eyes on most of them.
Have you tried any SDXL models yet? They're much smarter at interpreting prompts, and just generally output higher quality images. May solve your "looking down" issue. I've been using this one for nearly everything: https://civitai.com/models/288584/autismmix-sdxl
I have avoided SDXL because the results did not seem materially better than SD1.5, it did not have ControlNet support, and I would have to re-train my LoRAs/hypernetworks using some unknown process.
I just tried updating everything and downloading the SDXL ControlNet models, and it's giving me bad/garbled results. I'm really sick of every update to this thing turning into a research project. I hate that Stability AI just teases new stuff forever, then eventually releases a pile of broken parts with no instructions, and you have to wait another six months for internet randos to put it together while dodging grifters and criminals.
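(For anyone debugging the same thing: below is roughly the minimal known-good SDXL ControlNet path outside the webui, via the diffusers library. This is just a sketch assuming the standard Hugging Face model IDs - swap in whatever checkpoints you actually downloaded. If this runs clean but the webui doesn't, the broken piece is the webui wiring, not the models.)

import torch
import cv2
import numpy as np
from PIL import Image
from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel

# Standard SDXL canny ControlNet published under the diffusers org on HF
controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Build a canny edge map from any reference image
ref = np.array(Image.open("reference.png").convert("RGB"))
edges = cv2.Canny(ref, 100, 200)
edges = np.stack([edges] * 3, axis=-1)  # 1-channel edges -> 3-channel image
control_image = Image.fromarray(edges)

image = pipe(
    "portrait photo, detailed eyes, looking at camera",
    image=control_image,
    controlnet_conditioning_scale=0.5,
).images[0]
image.save("out.png")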
Stability could just put together a product that actually works, release it on Steam, charge $50, and make $100M this weekend. Then they wouldn't be in this situation where lawyers come after them for training on whatever dataset and they don't have any money to defend themselves, so they just cave, gimp their models, and hope someone in the community can un-gimp them.
On the other end, we've got trillion-dollar corpos all competing to see who can make the most powerful AI that is simultaneously useful enough to be mandatory but crippled enough to never do anything interesting. I can't wait until GPT-4o is forcefully crammed into the next Windows update, so when I type on my computer something completely random happens, and then the synthesized staccato voice of a virtual HR manager chimes in to gaslight me into thinking that's what I wanted.
We've discovered the killer app for AI - and it's telling lies. That's what it's best at, because that's how we train it. The RLHF (reinforcement learning from human feedback) step is not based on truth; it is based on convincing humans that they are being told the truth. The models have to lie convincingly to make it from dev to production. We've actually engineered a nightmare science-fiction scenario where AIs are trained to talk their way out of confinement - this is literally a classic AI safety talking point that we've just blown right past without even noticing.
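To make the mechanism concrete: the reward model behind RLHF is typically trained on pairwise human preferences, roughly the Bradley-Terry setup sketched below (reward_model here is a hypothetical network that scores a completion - this is a sketch of the standard objective, not any lab's actual code). Notice that the only training signal is which answer the rater preferred; there is no term for whether the answer is true.

import torch
import torch.nn.functional as F

def preference_loss(reward_model, chosen, rejected):
    # chosen/rejected: token tensors for the completion the human rater
    # preferred vs. the one they rejected, for the same prompt.
    r_chosen = reward_model(chosen)      # scalar score
    r_rejected = reward_model(rejected)  # scalar score
    # Bradley-Terry: push the preferred completion's score above the
    # rejected one's. Nothing here measures factual accuracy.
    return -F.logsigmoid(r_chosen - r_rejected).mean()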
Sorry for the rant, I'm sure there's a button or something I'm missing. I've just gotta post this stuff somewhere before the bots take over.