General Category > Work in Progress/Tests

Juraj's Renderings thread


Juraj:
I hope no one will laugh at me, but I keep coming back to this as inspiration: https://cdn.profoto.com/cdn/05219b3/contentassets/ca88cb91c4274a8d8dfc428c204fb963/002profoto-b1-phaseone-richard-thompson-02_cf003440-600x450.jpg?width=2840&quality=75&format=jpg
(The full photoshoot can be found somewhere on the PhaseOne page.)

I originally disliked the set, especially how unrealistic and over-retouched the images were. But then the photographer posted a defense, saying the painterly style was intentional, and I kept looking at it until I eventually liked it.
It's not that I succeeded in making my renderings painterly, that's hardly close to my style, but they do veer a little bit in that direction.

So I do a bit heavier retouch sometimes, and it's very manual. Old-school dodge & burn to highlight some edges and shapes. The images become a little bit uncanny, but also a little bit more impactful.
It's a constant balancing of trade-offs, but I want them to stand out.

dj_buckley:

--- Quote from: TomG on 2024-01-12, 13:09:06 ---upscale it through a 4x upscaler to 1496x2672
--- End quote ---

Hey Tom, are you referring to an external upscaler like Topaz here, or an AI upscaler inside Stable Diffusion? Just wrapping my head around all this stuff.

dj_buckley:

--- Quote from: TomG on 2024-01-12, 13:09:06 ---I will leave Juraj to comment on his workflow :) I have tested something similar this last week though, using ComfyUI, and I can say that trying to do "face improvement" at 512x512 does not work, it just uglifies things :) For me I had to take a crop of my test image which ended up being 374 x 669, upscale it through a 4x upscaler to 1496x2672, then pass that through the Realistic model with 0.49 denoising, and then it improved clothes and faces. Trying to do that on the original crop with no upscaling just made things worse - so in the way I had things set up, a) you are not limited to 512 at all and b) 512 makes things worse.

This may be dependent on GPU memory though.

For interest, this was on a 4080 laptop GPU with 12GB memory, with similar performance seen on a 3070 Ti desktop. It was about 60 seconds to process the crop (that is to upscale it with a face sensitive upscaler, and then feed it through the model, combined in that time)

--- End quote ---

Also, Tom (sorry to hijack the thread a bit here, feel free to move this into a new thread): I'm assuming that your crop was of the whole person, so in effect the face was much, much smaller than the 374 x 669 crop. The upscale then allowed the 'face' to fill the 512 marquee better in the resulting 1496x2672 image, giving SD more starting fidelity to work with?

TomG:
In the workflow I tested, I was using upscaling inside ComfyUI (so, Stable Diffusion run locally). I did indeed crop the whole person, plus some surrounding area, since it was a crude rectangle mask; I drew the more detailed mask at the end, when overlaying the result back into the image :) I wanted to improve the skin and clothing overall as well as the all-important face. Hope this helps!

romullus:

--- Quote from: dj_buckley on 2024-05-01, 12:15:39 ---(sorry to hijack the thread a bit here feel free to move into a new thread)

--- End quote ---

I think it would indeed be better if someone started a new all-things-AI topic, as such questions will only increase in the near future.

And at the risk of completely derailing Juraj's thread, may I ask you, Tom, why you chose ComfyUI instead of a more "traditional" UI? For the past few weeks I've been completely immersed in this topic, which is new to me. Lately I've been thinking about switching to ComfyUI, since that looks like where all the power and flexibility is, but I'm afraid the node system might be an overwhelming experience when I don't know the basics well enough yet. Did you choose Comfy from the beginning, or did you switch to it from some other UI?
