Work in Progress/Tests / Re: Juraj's Renderings thread
« on: 2024-01-12, 12:59:14 »
Hello Juraj,
I am very interested in your system of replacing 3D elements with AI-generated elements. Bravo for your work, it is very convincing (finally, characters who aren't taxidermied!)
I would love to know your method and how you overcome the limits of Stable Diffusion via inpainting.
From what I understand, Stable Diffusion in its classic version (under Automatic1111) generates images at 512×512 pixels (which can be upscaled later), but it is possible to use more capable checkpoints like SDXL (which generates at 1024×1024 pixels).
In the case of your image, which is in very high definition, how were you able to generate elements consistently, particularly for the young woman seated in the foreground? I imagine you separated the elements of this character to generate them individually (via inpainting): the face, the hands, the sweater and the shoes, because she occupies a large part of the image and certainly seems to cover more than a 1024×1024-pixel square even with SDXL. Is that the case?
If not, did you downscale your image so as to work comfortably and quickly at 512×512 pixels with the more resolution-limited checkpoints, and then upscale your elements to the final resolution of your image?
If so, which checkpoints did you use (SDXL or others) to get so much detail while keeping the image globally consistent?
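To make the question concrete, here is a minimal sketch (plain Python, with hypothetical names; `inpaint` stands in for whatever Stable Diffusion call is actually used) of the workflow I am imagining: pick a square tile at the model's native resolution around each element, inpaint just that tile, then paste the result back into the full-resolution render. This is only my guess at the approach, not Juraj's actual pipeline.

```python
def crop_window(bbox, image_size, tile=1024):
    """Square inpainting window of side `tile` (the model's native
    resolution: 512 for SD 1.5, 1024 for SDXL), centred on the element's
    bounding box and clamped to the image. Assumes the render is at
    least `tile` pixels in each dimension.

    bbox       -- (left, top, right, bottom) of the element to replace
    image_size -- (width, height) of the full-resolution render
    """
    l, t, r, b = bbox
    w, h = image_size
    cx, cy = (l + r) // 2, (t + b) // 2           # centre of the element
    left = min(max(cx - tile // 2, 0), w - tile)  # clamp to image bounds
    top = min(max(cy - tile // 2, 0), h - tile)
    return (left, top, left + tile, top + tile)

# Workflow sketch: crop the tile, inpaint it at native resolution,
# paste it back into the full render, e.g. with Pillow:
#   box = crop_window(element_bbox, render.size)
#   tile_img = render.crop(box)
#   result = inpaint(tile_img, mask.crop(box), prompt)  # your SD call here
#   render.paste(result, box[:2])
```

The clamping means a face near the edge of the frame still gets a full native-resolution tile, so the model never has to generate outside its trained resolution.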
Thank you very much for your clarifications, and above all a very happy new year 2024!