Author Topic: Juraj's Renderings thread  (Read 495673 times)

2023-12-13, 06:18:46
Reply #705

Tom

  • Active Users
  • **
  • Posts: 236
    • View Profile
    • www
Hi Juraj,

Very impressed with the result; the characters look incredibly lifelike, aside from some details that AI still struggles with, like hands and feet. But that's a minor point; the leap in quality from 100% AXYZ to the AI/AXYZ hybrid is huge.

Like you, I've always been (and still am) wary of anything related to AI, but, as you mentioned, it's interesting to learn how to use it, as long as it remains a tool in service of our creativity and not the other way around. I would be curious to learn more about your workflow if you're willing. I appreciate your mindset and the fact that you share your passion.

2023-12-13, 09:45:36
Reply #706

James Vella

  • Active Users
  • **
  • Posts: 540
    • View Profile
Quote from: Juraj
Absolutely, here you go :- ). But that's it for now; it needs a bit of mystery!

I use everything, and for the foreground I use 2D people and retouch them myself. But it is a complicated hassle to get right, so I always opt for motion-blurred ones, ideally moving away from or towards the camera :- ).

I think someone already asked about the bag, but that beautiful asset was already modelled in-house to great detail.
The reason AI works well for something like this is that the input is great too.
Fun fact: I tried it on an NDA'd, unpublished Zaha Hadid project, and no algorithm could make anything better out of it. The datasets just had nothing similar. And it really doesn't do well with marble and complex, unique, expensive materials. It's best at boosting common basics.

Thanks for sharing. I had to squint really hard to see the difference in the plants; it's almost like a 5% sharpening. I was watching some people using it to replace the fabric on leather sofas, and the wood framing too, and it just looked incredible. All for the better, I think, since that last 10% of extra detail is not worth the 90% extra time, and if we can achieve it with AI, I'm all for that.

2023-12-13, 09:59:51
Reply #707

Juraj

  • Active Users
  • **
  • Posts: 4761
    • View Profile
    • studio website
Fully agree! Exactly my words. Now we can focus on the creative parts and not struggle with technical drudgery where it's not strictly necessary. It can really become a good tool.
It will be interesting to see how it transforms the field, in both good and bad ways at the same time. But for now I am mildly positive, even cheerful, and less afraid of change.

After all, most of my issues this year came from clients and their financial situation due to the downturn in many markets, hardly from AI or anything like that.

Not sure what and how much I can post here without this becoming an AI playground, but I do have an example of some beautiful wrinkled cotton sheets :- ). I also don't really want to focus on the AI aspect alone; I like what it can do for my images, but they're still my images, my work, not AI's.
But maybe next time I will experiment with heavier use.

Quote from: Tom on 2023-12-13, 06:18:46
Hi Juraj,

Very impressed with the result; the characters look incredibly lifelike, aside from some details that AI still struggles with, like hands and feet. But that's a minor point; the leap in quality from 100% AXYZ to the AI/AXYZ hybrid is huge.

Like you, I've always been (and still am) wary of anything related to AI, but, as you mentioned, it's interesting to learn how to use it, as long as it remains a tool in service of our creativity and not the other way around. I would be curious to learn more about your workflow if you're willing. I appreciate your mindset and the fact that you share your passion.

The motion-blurred hand on the walking woman is a bit of my fumbling in Photoshop. I was already lazy and tired and noticed it at the last minute before exporting for web. I think I will still fix that, as otherwise I found the hands and feet almost excellent this time.
But motion blur & depth of field are definitely among the things that confuse AI, and my source walking woman was motion blurred.

No tutorial for this; apparently there are already too many on YouTube :- )
« Last Edit: 2023-12-13, 10:03:53 by Juraj »
Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!

2023-12-13, 10:29:57
Reply #708

fabio81

  • Active Users
  • **
  • Posts: 444
    • View Profile
Quote from: Juraj
I am taking a short break to finish some actual work instead of playing further with this stuff :- ).
I'll try to answer everything during the holidays, so don't think I am ignoring you if you ask something now.

Hi Juraj,

It would be helpful to understand the workflow for replacing faces and plants. I tried to install Stable Diffusion and Automatic1111; I did some tests, but the maximum resolution is 768x768 and I don't get great quality.
Thank you very much and happy holidays

2023-12-13, 10:51:38
Reply #709

JohnNinos

  • Active Users
  • **
  • Posts: 37
    • View Profile
Fantastic detail on the actual plants. Aside from the AI, some insight into the modelling and texturing side of the plants would be much appreciated, as they really shine :)

2023-12-13, 14:18:24
Reply #710

romullus

  • Global Moderator
  • Active Users
  • ****
  • Posts: 8856
  • Let's move this topic, shall we?
    • View Profile
    • My Models
Quote from: fabio81 on 2023-12-13, 10:29:57
It would be helpful to understand the workflow for replacing faces and plants. I tried to install stable diffusion and Automatic1111, I did some tests but the maximum resolution is 768x768 and I don't get great quality.

I was so impressed by Juraj's result that I decided to give AI a try for the first time, but I didn't even manage to install those tools. Once I saw you need to hassle with installing Python, GitHub repos and whatnot, I understood it's probably not for me. I think I'm completely out of the loop with modern tech. I'll wait until it becomes a one-click solution, but by that time I of course won't be a content creator anymore, just a mere content consumer.
« Last Edit: 2023-12-13, 15:16:52 by romullus »
I'm not Corona Team member. Everything i say, is my personal opinion only.
My Models | My Videos | My Pictures

2023-12-13, 15:09:15
Reply #711

fabio81

  • Active Users
  • **
  • Posts: 444
    • View Profile
I did it, but I don't know what prompts to type to create the face. I put "face woman" but the result isn't good.

2024-01-10, 00:27:32
Reply #712

Buzzz

  • Active Users
  • **
  • Posts: 162
    • View Profile
Quote from: Juraj
I hope anything AI-related isn't banned, but let me know otherwise :- ) Here is a fun exercise I did to fix the 'zombie' faces of CGI people (credit for the name goes to Lasse Rode).
This is otherwise a regular commercial project; you're looking not only at our visualization but mostly at our own interior design for this client. We chose every MillerKnoll and Eames furniture piece.

Anyway, go check it out if it's something that interests you :- ) https://www.behance.net/gallery/186535963/London-Wall-1-5-Office-Visualization-AI-tests

Disclaimer: The images below have AI elements overlaid on top of the Corona visual. I used mainly the A1111 tool to re-do human faces, details on the humans overall, and many small elements like plants, deco and carpets.
The difference is subtle but very nice, with zero change to any silhouette or intended design. It took me a whole day to make one image, as the rendering is 7680px and I wanted all AI elements to match that fidelity. Effectively, for the result to be indistinguishable, where one starts and where one ends: just one smooth image.

I am not one for converting into a hype addict, and I did worry I'd lose my job like everyone else, but for now I believe we can take it and boost our own work with it where it applies. And CGI people are the one thing where it totally applies :- ). I had almost abandoned using them, since they tend to look horrible even motion blurred. Now they're back on the menu.





Hi all,

Juraj, how are you?
Awesome work!
How do you set up the lighting for this type of image?
Are they closed spaces?
Thanks and regards.

2024-01-10, 08:28:52
Reply #713

Juraj

  • Active Users
  • **
  • Posts: 4761
    • View Profile
    • studio website
In this case yes, it's fully closed (the foreground shows the shadowed side towards the camera, but also some light bounce).
But I also have some sets which are open (sometimes even without a ceiling).

This one is just Sun & Sky for the majority of the natural light (coming from the windows), and then there are, I think, two big "soft-boxes" (CoronaLights set to rectangle) on each side (one to the left, one to the right) to create more fill and more direction.

The second scenario is usually lit by an HDRI that imitates a studio setup (like a big industrial hall).
Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!

2024-01-10, 19:27:21
Reply #714

Buzzz

  • Active Users
  • **
  • Posts: 162
    • View Profile
Quote from: Juraj on 2024-01-10, 08:28:52
In this case yes, it's fully closed (the foreground shows shadowed side towards camera but also some light bounce).
But I also have some sets which are open (sometimes even without ceiling).

This case is just Sun&Sky for the majority of natural light (coming from windows), and then there are I think two big "soft-boxes" (some CoronaLight set to rectangle) on each side (to left and to right) to create more fill and more direction.

The second scenario is usually lit by HDRi that imitates some studio setup (like big industrial hall).

Thank you very much!

2024-01-12, 12:59:14
Reply #715

mienda

  • Active Users
  • **
  • Posts: 22
    • View Profile
Hello Juraj,

I am very interested in your system of replacing 3D elements with AI-generated elements. Bravo for your work, because it is very convincing (finally, characters who are not taxidermied!)
I'd like to know your method and how you overcome the limits of Stable Diffusion via Inpaint.
From what I understand, Stable Diffusion in its classic version (under Automatic1111) generates images at 512x512 pixels (which we can upscale later). But it is possible to use more capable checkpoints like SDXL (which generates 1024x1024 pixels).
In the case of your image, which is in very high definition, how were you able to generate elements consistently, particularly for the young woman seated in the foreground? I imagine that you separated the elements of this character to generate them individually (via Inpaint): the face, the hands, the sweater and the shoes, because she occupies a large part of the image and certainly more than a 1024x1024 square with SDXL. Is that the case?
If not, did you downscale your image so as to work comfortably and quickly at 512x512 pixels with checkpoints more limited in resolution, and then upscale your elements to the final definition of your image?
If so, which checkpoints did you use (SDXL or others) to give as much detail and keep the image's overall consistency?

Thank you very much for your clarifications and above all a very happy new year 2024
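One common way to handle the resolution limits mienda describes is to crop around each element (face, hands, etc.), process the crop at the model's native size, and paste the result back. The sketch below computes such a crop box; the padding value, rounding multiple, and function name are illustrative assumptions, not Juraj's actual settings.

```python
# Hypothetical helper for the crop -> inpaint -> paste-back workflow.
# SD U-Nets work on latents downsampled 8x, so sizes that are multiples
# of 64 are a safe choice; the 64px context pad is an arbitrary example.

def inpaint_crop(mask_box, image_size, pad=64, multiple=64):
    """Expand a mask bounding box with context padding and snap its
    width/height up to the nearest multiple, clamped to the image."""
    x0, y0, x1, y1 = mask_box
    W, H = image_size
    # add context around the masked element
    x0, y0 = max(0, x0 - pad), max(0, y0 - pad)
    x1, y1 = min(W, x1 + pad), min(H, y1 + pad)
    # round width/height up to the nearest multiple, clamped to the image
    w = min(W - x0, -(-(x1 - x0) // multiple) * multiple)
    h = min(H - y0, -(-(y1 - y0) // multiple) * multiple)
    return x0, y0, x0 + w, y0 + h

# e.g. a face at (1000, 800)-(1240, 1100) in a 7680x4320 render
print(inpaint_crop((1000, 800, 1240, 1100), (7680, 4320)))  # (936, 736, 1320, 1184)
```

The crop is then resized to the checkpoint's working resolution (512 or 1024 square-ish), inpainted, resized back, and composited over the original.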

2024-01-12, 13:09:06
Reply #716

TomG

  • Administrator
  • Active Users
  • *****
  • Posts: 5468
    • View Profile
I will leave Juraj to comment on his workflow :) I have tested something similar this last week though, using ComfyUI, and I can say that trying to do "face improvement" at 512x512 does not work; it just uglifies things :) For me, I had to take a crop of my test image, which ended up being 374x669, upscale it through a 4x upscaler to 1496x2672, then pass that through the Realistic model with 0.49 denoising, and then it improved clothes and faces. Trying to do that on the original crop with no upscaling just made things worse. So, in the way I had things set up: a) you are not limited to 512 at all, and b) 512 makes things worse.

This may be dependent on GPU memory though.

For interest, this was on a 4080 laptop GPU with 12GB memory, with similar performance seen on a 3070 Ti desktop. It took about 60 seconds to process the crop (that is, to upscale it with a face-sensitive upscaler and then feed it through the model, combined in that time).
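The crop, upscale, then img2img pass described above can be summarised as a small plan. This is only an illustrative sketch of the stages and their sizes (the helper name is made up, not ComfyUI's API); the 4x factor and 0.49 denoising come from the post.

```python
# Hypothetical sketch of the face-improvement pass: crop the figure,
# upscale it, then run img2img at low denoising on the upscaled crop.

def plan_face_fix(crop_w, crop_h, upscale_factor=4, denoise=0.49):
    """Return the sizes each stage of the pass works at."""
    up_w, up_h = crop_w * upscale_factor, crop_h * upscale_factor
    return {
        "crop": (crop_w, crop_h),    # region cut from the full render
        "upscaled": (up_w, up_h),    # what the img2img model actually sees
        "denoise": denoise,          # how strongly the model repaints
    }

plan = plan_face_fix(374, 669)
print(plan["upscaled"])
```

The point is that the model always sees the subject at a comfortably high pixel density; running it on the raw crop gives it too few pixels per face to add believable detail.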
Tom Grimes | chaos-corona.com
Product Manager | contact us

2024-01-12, 13:49:37
Reply #717

TomG

  • Administrator
  • Active Users
  • *****
  • Posts: 5468
    • View Profile
Oh, I wanted to add about the 512 restriction: I think this still applies if you are doing txt2img; some models prefer 512 or 1024 square, as otherwise they will produce double heads or two figures merged together, etc. However, here we are doing img2img, and I also had an OpenPose going into a ControlNet, and we are using lowered denoising (in effect, how much the model changes the image; it's not denoising as we know it :) ), which means the model is constrained in what it can imagine. Not an expert by any means, but I am guessing this is why larger and non-square formats work just fine in this case (GPU RAM permitting).
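The "lowered denoising" knob can be made concrete: in img2img (roughly as the diffusers library implements it), the input image is noised only partway along the schedule, and only the remaining fraction of the diffusion steps is run. A strength near 0 means the model can barely nudge the image, which is why silhouettes survive intact.

```python
# Roughly how img2img "denoising strength" maps to the number of
# diffusion steps actually executed (as in diffusers' img2img pipeline).
# strength=1.0 behaves like txt2img: the full schedule runs.

def img2img_steps(num_inference_steps, strength):
    """Number of diffusion steps actually run for a given strength."""
    return min(int(num_inference_steps * strength), num_inference_steps)

print(img2img_steps(50, 0.49))  # 24
print(img2img_steps(50, 1.0))   # 50
```

So at 0.49 strength, roughly half the schedule runs, starting from a half-noised version of the render rather than from pure noise.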
Tom Grimes | chaos-corona.com
Product Manager | contact us

2024-04-10, 15:58:19
Reply #718

Juraj

  • Active Users
  • **
  • Posts: 4761
    • View Profile
    • studio website
I've posted another project online: https://www.behance.net/gallery/195862101/Grand-Swiss-Beds
Will reformat it for the forum later.


Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!

2024-04-10, 17:05:38
Reply #719

Ink Visual

  • Active Users
  • **
  • Posts: 169
    • View Profile
Great as always Juraj!
What's your workflow on the rugs/carpets, if I may ask? Every single one of them looks very convincing.
Hair&Fur for the hairy ones, or Ornatrix? A high-quality displacement map + nice shader setup for the short-hair type?
Cheers!