Author Topic: Juraj's Renderings thread  (Read 497401 times)

2024-04-10, 17:11:42
Reply #720

Juraj

  • Active Users
  • **
  • Posts: 4763
    • View Profile
    • studio website
Great as always Juraj!
What's your workload on the rugs/carpets if I may ask? Each single one of them is looking very convincing.
Hair&Fur for hairy ones or Ornatrix? High quality displacement map + nice shader setup for the short hair type?
Cheers!

Hi,

3 carpets :- ):

1) Black interior: Disp + Sheen + Slight AI overlay
2) "Children's" Messy room: Corona Scatter with B&W Map for rotation only (zero AI)
3) Big Classical room/Main image: Native Max Hair system + CoronaFur shader with Triplanared (or Real-Worlded in z-axis? Not sure which I used, but it's the same in the end) color map. (zero AI)

The best solution is the Corona Scatter, but that requires patience for modelling the right kind of strand, which I only had once. I've seen a few studios in the past two years going strong in this direction, but patience in commercial projects is limited :- ).
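For illustration only, the "B&W map for rotation" idea behind carpet 2 can be sketched outside of Max. This is a hypothetical toy function, not Corona Scatter's actual API: a grayscale map sampled at each strand's UV position scales that strand's random rotation range.

```python
import random

def rotations_from_map(gray_map, positions, max_deg=360.0):
    """Toy sketch: derive per-strand yaw rotation from a B&W map,
    the way a scatter tool can use a grayscale texture to vary
    instance rotation (white = full rotation range, black = none).

    gray_map  -- 2D list of 0.0-1.0 luminance values
    positions -- list of (u, v) coords in 0.0-1.0 UV space
    """
    h, w = len(gray_map), len(gray_map[0])
    rotations = []
    for u, v in positions:
        # Nearest-neighbour sample of the map at the strand's UV position.
        x = min(int(u * w), w - 1)
        y = min(int(v * h), h - 1)
        weight = gray_map[y][x]
        # The map value scales the random rotation range for this instance.
        rotations.append(random.uniform(0.0, max_deg * weight))
    return rotations
```

A pure black map gives every strand zero rotation; a noise map gives the clumpy, combed-in-places look you want on a messy rug.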
Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!

2024-04-10, 18:57:29
Reply #721

pokoy

  • Active Users
  • **
  • Posts: 1866
    • View Profile
Man that's some impressive work, I wouldn't know how to do some of the materials displayed there. Insanely good and beautiful.

2024-04-11, 02:44:29
Reply #722

Tom

  • Active Users
  • **
  • Posts: 242
    • View Profile
    • www
Very impressive work as usual Juraj, thanks for sharing.

Here is a great rug tutorial by Pikcells:

If I may offer a critique, on the rendering with the woman walking with the coffee cup, I feel like the background is a bit underexposed (the curtain seems brighter than the outside). Apart from that ... perfect job :)

As always, I have great admiration for your glows on the windows. I'm eagerly waiting for Corona to one day produce glows as realistic as yours. Your glows remind me a lot of those by Bertrand Benoit or Jesus Selvera. I imagine you create them in post-production in 32 bits using software like ArionFX?

2024-04-11, 08:16:23
Reply #723

Juraj

  • Active Users
  • **
  • Posts: 4763
    • View Profile
    • studio website
Nope, I paint most of them with a manual brush in PS :- ) No 32-bit post-production for me. The underexposed background was also artistic intent; I don't really care much about realism now that all images look fairly realistic by default. When the background is meant to be unobtrusive, I overexpose it heavily to almost white, for clients who want to focus only on the interior furniture. Conversely, like here, if the mood is important, it's underexposed.
But... it's not done super-well; I could perhaps have spent more time on it, but I liked it enough to keep it with all its imperfections.

Yup, the fluffy carpet in grand white room is based on that Pikcells one!

2024-04-11, 08:28:30
Reply #724

Tom

  • Active Users
  • **
  • Posts: 242
    • View Profile
    • www
Thanks for your comments Juraj. About the background, I've opened your render in PS, turned it into B&W and checked the exposure levels with the eyedropper: I was wrong actually, the exposure levels are correct, my bad. It's funny how it looked underexposed to me when looking at the original render, but it looks correct when turning the render into B&W. For some reason the colours tricked my eyes :-)
I totally get what you say about keeping the imperfections of the images, it makes sense when you work under pressure (it's hard to find imperfections in your images Juraj!)

2024-04-11, 08:37:14
Reply #725

Juraj

  • Active Users
  • **
  • Posts: 4763
    • View Profile
    • studio website
I hope no one will laugh at me, but I keep coming back to this as inspiration: https://cdn.profoto.com/cdn/05219b3/contentassets/ca88cb91c4274a8d8dfc428c204fb963/002profoto-b1-phaseone-richard-thompson-02_cf003440-600x450.jpg?width=2840&quality=75&format=jpg
(Full photoshoot can be found somewhere on PhaseOne page).

I originally disliked the set, esp. how unrealistic the images were; they were over-retouched. But then the photographer posted a defense of it being intentional, as a sort of painterly style, and I kept looking at it until I eventually liked it.
It's not like I succeeded in making my rendering painterly, that's hardly close to my style, but it veers a little bit in that direction.

So I do a bit heavier retouch sometimes, and it's very manual. Old-school dodge & burn to highlight some edges and shapes. The images become a little bit uncanny, but also a little bit more impactful.
Constant balancing of trade-offs, but I want them to stand out.
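As a side note, classic dodge & burn boils down to very simple per-pixel math, which is why it's so controllable by hand. A toy sketch (the amounts below are arbitrary, not anyone's actual settings):

```python
def dodge(v, amount):
    """Brighten a 0-255 value toward white (classic dodge)."""
    return v + (255 - v) * amount

def burn(v, amount):
    """Darken a 0-255 value toward black (classic burn)."""
    return v * (1 - amount)

# Lifting an edge highlight and deepening the shadow next to it:
print(round(dodge(180, 0.25)), round(burn(70, 0.25)))  # 199 52
```

Pushing a highlight up while pulling the adjacent shadow down increases local contrast along an edge, which is exactly the "highlight some edges and shapes" effect described above.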

Today at 10:38:27
Reply #726

dj_buckley

  • Active Users
  • **
  • Posts: 878
    • View Profile
upscale it through a 4x upscaler to 1496x2672

Hey Tom, are you referring to an external upscaler like Topaz here, or an AI upscaler inside Stable Diffusion? Just wrapping my head around all this stuff.

Today at 12:15:39
Reply #727

dj_buckley

  • Active Users
  • **
  • Posts: 878
    • View Profile
I will leave Juraj to comment on his workflow :) I have tested something similar this last week though, using ComfyUI, and I can say that trying to do "face improvement" at 512x512 does not work, it just uglifies things :) I had to take a crop of my test image, which ended up being 374 x 669, upscale it through a 4x upscaler to 1496x2672, then pass that through the Realistic model with 0.49 denoising, and then it improved clothes and faces. Trying to do that on the original crop with no upscaling just made things worse - so in the way I had things set up, a) you are not limited to 512 at all and b) 512 makes things worse.

This may be dependent on GPU memory though.

For interest, this was on a 4080 laptop GPU with 12GB memory, with similar performance seen on a 3070 Ti desktop. Processing the crop took about 60 seconds (that is, upscaling it with a face-sensitive upscaler and then feeding it through the model, combined).

Also Tom (sorry to hijack the thread a bit here, feel free to move this into a new thread), I'm assuming your crop was of the whole person, so in effect the face was much, much smaller than the 374 x 669 crop. The upscale then allowed the face to fill the 512 marquee better in the resulting 1496x2672 image, giving more starting fidelity for SD to work with?
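The intuition in that question can be put into rough numbers. A sketch, where the ~120 px face height is a made-up example (not a measurement from Tom's crop):

```python
def face_coverage(face_px, scale=4, window=512):
    """Fraction of a ~512 px working window that a face spans,
    before and after an integer upscale. Numbers are illustrative:
    SD models are typically trained around 512 px tiles, so a face
    that fills more of that window gives the model more to work with."""
    return face_px / window, (face_px * scale) / window

before, after = face_coverage(120)  # hypothetical ~120 px face in the crop
print(before, after)  # 0.234375 0.9375
```

So a face occupying under a quarter of the working window before the 4x upscale would occupy most of it afterwards, which matches the observed jump in output quality.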

Today at 12:32:56
Reply #728

TomG

  • Administrator
  • Active Users
  • *****
  • Posts: 5472
    • View Profile
In the workflow I tested, I was using upscaling inside ComfyUI (so, Stable Diffusion run locally). I did indeed crop the whole person, plus some surrounding area, since it was a crude rectangle mask; I masked that out when overlaying back into the image, so as it happens I drew the more detailed mask at the end :) I wanted to improve the skin and clothing overall, as well as the all-important face. Hope this helps!
Tom Grimes | chaos-corona.com
Product Manager | contact us

Today at 14:30:36
Reply #729

romullus

  • Global Moderator
  • Active Users
  • ****
  • Posts: 8862
  • Let's move this topic, shall we?
    • View Profile
    • My Models
(sorry to hijack the thread a bit here feel free to move into a new thread)

I think it would indeed be better if someone started a new, all-things-AI topic, as such questions will only increase in the near future.

And at the risk of completely derailing Juraj's thread, may I ask you, Tom, why did you choose ComfyUI instead of a more "traditional" UI? For the past few weeks I've been completely immersed in this topic, which is new to me. Lately I've been thinking about switching to ComfyUI, since it looks like that's where all the power and flexibility is, but I'm afraid it might be an overwhelming experience with the node system when I don't know the basics well enough yet. Did you choose Comfy from the beginning, or did you switch to it from some other UI?
I'm not Corona Team member. Everything i say, is my personal opinion only.
My Models | My Videos | My Pictures

Today at 14:42:09
Reply #730

TomG

  • Administrator
  • Active Users
  • *****
  • Posts: 5472
    • View Profile
My apologies too, to Juraj :O

I started with ComfyUI, and picked it because I like node-based editing vs. how Automatic1111 works. There was also a "just download and run this and everything installs" package for ComfyUI, vs. "Go to GitHub, download this, run some weird command, now go back to GitHub to download this other thing, also be sure to install Python...", which I did not want to mess with ;) Plus there is the Manager you can install into ComfyUI, which makes it easy to install loads of other things, including automatically finding and installing missing nodes or functions used in a workflow. The "drag and drop the PNG output from ComfyUI into ComfyUI and it recreates all the nodes and prompts right there" is something else I really like - no need to save that separately, it is all embedded in the image. And of course this makes it easy to get someone else's ComfyUI setup: just drop their image in there when they provide it for that purpose.
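For the curious, the "it is all embedded in the image" part works because ComfyUI writes its graph into PNG text chunks, commonly under the "workflow" and "prompt" keywords (an assumption worth verifying for your ComfyUI version). A standard-library-only sketch of reading them, with a tiny chunk builder for the demo:

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def read_text_chunks(data):
    """Scan a PNG byte string for tEXt chunks and return them as a dict.
    ComfyUI stores its node graph this way, which is why dropping an
    output PNG back onto ComfyUI can rebuild the whole workflow."""
    assert data.startswith(PNG_SIG), "not a PNG"
    out, pos = {}, len(PNG_SIG)
    while pos + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        if ctype == b"tEXt":
            # tEXt payload is: keyword, NUL separator, Latin-1 text.
            key, _, text = data[pos + 8:pos + 8 + length].partition(b"\x00")
            out[key.decode("latin-1")] = text.decode("latin-1")
        if ctype == b"IEND":
            break
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return out

def text_chunk(key, text):
    """Build a tEXt chunk (length + type + payload + CRC), for the demo."""
    body = key.encode("latin-1") + b"\x00" + text.encode("latin-1")
    return (struct.pack(">I", len(body)) + b"tEXt" + body
            + struct.pack(">I", zlib.crc32(b"tEXt" + body)))

# Demo: a fake minimal "PNG" carrying a workflow payload.
fake_png = PNG_SIG + text_chunk("workflow", '{"nodes": []}')
print(read_text_chunks(fake_png))  # {'workflow': '{"nodes": []}'}
```

The same idea is why sharing a raw ComfyUI output PNG shares the whole setup, while re-saving the image through an editor that strips text chunks loses it.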