Author Topic: dubcats secret little hideout  (Read 137648 times)

2018-09-28, 19:15:38
Reply #300

Juraj

  • Active Users
  • **
  • Posts: 4090
    • View Profile
    • studio website
I know about the 0° specular removal; I've mentioned that idea multiple times over many years here. But I still have a hunch that it's more complex than just that, and my issue is with how the shader uses this data. I know that it is energy conserving and so on... but it still doesn't end up looking fully right to me.

I've been overthinking this for way too long though... but I would still like to have a physical sample of something that was also scanned by an actual BSDF scanner like the X-Rite TAC7 or Chaos Group's scanner.
talcikdemovicova.com  Website and blog
be.net/jurajtalcik   Our studio Behance portfolio
Instagram   Our studio Instagram, managed by Veronika

2018-09-28, 19:41:55
Reply #301

karnak

  • Primary Certified Instructor
  • Active Users
  • ***
  • Posts: 65
    • View Profile
That would be extremely cool to have.
Which X-Rite device do you have? I can help you on the software side if you want to read some surfaces.
I know it's not the same as a BSDF scanner, but at least it's something.

edit.

I forgot to say that I get the same values with calibrated polarised photography and with a spectrophotometer with the specular manually removed.

What happens outside F0 is a totally different matter though; it depends on other factors, some of which are well approximated by the BRDF models used inside the shader, while others are not.
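As a side note on what F0 means here: for a dielectric, the normal-incidence (0°) reflectance follows directly from the index of refraction via Fresnel's equations. A minimal sketch (the function name is just illustrative, not from any shader in this thread):

```python
def f0_from_ior(n: float) -> float:
    """Normal-incidence (0 degree) reflectance of a dielectric,
    from Fresnel's equations evaluated at normal incidence in air (IOR 1.0)."""
    return ((n - 1.0) / (n + 1.0)) ** 2
```

For example, glass at n = 1.5 gives the familiar 4% specular value that most PBR workflows use as a dielectric default.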
« Last Edit: 2018-09-29, 08:53:49 by karnak »
Corona Academy (May 2017)

2018-10-01, 13:16:46
Reply #302

Fluss

  • Active Users
  • **
  • Posts: 547
    • View Profile
I forgot to say that I get the same values with calibrated polarised photography and with a spectrophotometer with the specular manually removed.

What happens outside F0 is a totally different matter though; it depends on other factors, some of which are well approximated by the BRDF models used inside the shader, while others are not.

Absolutely. Following the energy conservation law, the close relationship between the diffuse and the specular part is what defines an object's appearance: when specularity increases, the diffuse component drops, and vice versa. And that has to be correlated to the micro-surface topology itself; these tiny details are what really define how a surface looks at a given angle. I guess the BSDFs produced by high-end material scanners sort of translate that surface topology information into a Fresnel curve across the whole range of angles (sampled in x steps, interpolated in between) for both the diffuse and specular components. The microfacet models we are using are way too generic in that regard and have a hard time simulating materials with complex micro-surface details (wood, fabric, etc.), but they're good enough to achieve pretty decent results most of the time. That said, I'm always wondering why we're stuck with that good old Lambertian diffuse model in Corona while almost every other renderer on the market offers a more advanced one.
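The diffuse/specular coupling described above can be sketched with a Schlick Fresnel term, where whatever energy the specular lobe reflects at a given angle is subtracted from the diffuse term (function names are illustrative; real shaders do this per microfacet, not per pixel):

```python
def fresnel_schlick(cos_theta: float, f0: float) -> float:
    """Schlick's approximation of Fresnel reflectance at a viewing angle."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def diffuse_specular_split(cos_theta: float, f0: float, albedo: float):
    """Energy-conserving split: the diffuse term only gets the energy
    the specular lobe did not reflect at this angle."""
    spec = fresnel_schlick(cos_theta, f0)
    diff = albedo * (1.0 - spec)  # diffuse drops as specularity rises
    return diff, spec
```

At normal incidence (cos_theta = 1) the specular term is just F0; at grazing angles it rises toward 1.0 and the diffuse contribution correspondingly falls away.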

I'm missing a macro lens... I'd really like to see how far we can go by scanning small patches of geometry and then making them tileable, in order to use them as a source pattern in FStorm's GeoPattern.
« Last Edit: 2018-10-01, 13:31:02 by Fluss »

2018-10-01, 13:50:19
Reply #303

Juraj

  • Active Users
  • **
  • Posts: 4090
    • View Profile
    • studio website
That said, I'm always wondering why we're stuck with that good old Lambertian diffuse model in Corona while almost every other renderer on the market offers a more advanced one.

I keep asking the Corona devs about this every three months, and have for the past three years. If we could at least get an option for some existing alternative to soften up the look, I would be super grateful, but the Corona shader is super limited and isn't evolving in any way.

It desperately needs better diffuse shading, coating, sheen, etc.


2018-10-01, 16:55:09
Reply #304

Fluss

  • Active Users
  • **
  • Posts: 547
    • View Profile
Some samples coming from the Mura scanner are available at the bottom of the home page:

https://www.muravision.com/

2018-10-01, 17:27:07
Reply #305

Juraj

  • Active Users
  • **
  • Posts: 4090
    • View Profile
    • studio website
So, who's asking them for a quote :-)? I wonder how it compares to the much more advanced TAC7.

I really need to finish my own; it's been sitting sadly half-finished against the wall for the past year and a half...

2018-10-01, 21:07:57
Reply #306

Jpjapers

  • Active Users
  • **
  • Posts: 1352
    • View Profile
That said, I'm always wondering why we're stuck with that good old Lambertian diffuse model in Corona while almost every other renderer on the market offers a more advanced one.

I keep asking the Corona devs about this every three months, and have for the past three years. If we could at least get an option for some existing alternative to soften up the look, I would be super grateful, but the Corona shader is super limited and isn't evolving in any way.

It desperately needs better diffuse shading, coating, sheen, etc.

It would be great to have a much more powerful shader, I agree completely.

Also, has there ever been a thread explaining ACES and why it's better?

2018-10-02, 09:02:00
Reply #307

HVB

  • Active Users
  • **
  • Posts: 27
    • View Profile
So, who's asking them for a quote :-)? I wonder how it compares to the much more advanced TAC7.

I really need to finish my own; it's been sitting sadly half-finished against the wall for the past year and a half...

I asked them a while ago what a Mura desktop scanner costs, and they go for $75k. So not really something for most of us.

2018-10-02, 11:03:55
Reply #308

piotrus3333

  • Active Users
  • **
  • Posts: 18
    • View Profile
Also, has there ever been a thread explaining ACES and why it's better?
ACES' purpose is to help the film industry manage its color input. Imagine the number of image-capture devices used, all with different ways of representing captured scenes; then try matching footage from a 5D, an iPhone, a RED Dragon, and VFX on top of it all. ACES makes that easy.
We have it easy from the start: most renderers that we use output basically the same stuff, HDR color defined by RGB primaries, usually in the form of a half-float EXR.
A long time ago, Vlado posted on the Chaos Group forums about rendering in the ACES colorspace, and if I remember correctly you should be able to switch to ACES in V-Ray Next. Not sure if it's there yet though.

2018-10-02, 14:06:29
Reply #309

Fluss

  • Active Users
  • **
  • Posts: 547
    • View Profile
ACES' purpose is to help the film industry manage its color input. Imagine the number of image-capture devices used, all with different ways of representing captured scenes; then try matching footage from a 5D, an iPhone, a RED Dragon, and VFX on top of it all. ACES makes that easy.

That's true, but it's only one side of the coin. Since ACES is a scene-referred workflow, it also lets you preserve the whole dynamic range of your data along the entire pipeline. What's more, the way the RRT/ODT handles the colorspace conversion gives results much closer to a film response, and that's what really matters for us. You should watch this video, which shows convincing results:

We have it easy from the start: most renderers that we use output basically the same stuff, HDR color defined by sRGB primaries, usually in the form of a half-float EXR.
A long time ago, Vlado posted on the Chaos Group forums about rendering in the ACES colorspace, and if I remember correctly you should be able to switch to ACES in V-Ray Next. Not sure if it's there yet though.

For a proper ACEScg workflow, you need to pre-convert your color textures and lights to the ACEScg colorspace (D60 illuminant). The only issue is that most renderers rely on the 3D software's color management, which is, most of the time, based on the sRGB color space (D65 illuminant). As far as I know, only Maya is color-managed at the moment.

V-Ray Next allows you to convert your textures and colors on the fly with an OCIO node. From what I understand, the renderer does not need any internal changes to handle ACEScg data. Indeed, ACEScg is a linearly encoded color space, and as long as you feed the renderer (which performs linear operations) linear data, and as long as you output that data to a linear file format (EXR), everything should work as intended and you should be able to work in ACEScg in your comp software. Renderers should be color-space agnostic in that way. But we are not able to convert anything related to color temperatures (sun, skies, lights set by illuminant, etc.). That's the only blocker I can see. So if we avoid using those components, I guess we should be able to work in ACEScg with Corona (assuming we pre-converted all the color textures to ACEScg). Vlado said he is actually working on native ACEScg support in V-Ray, but I have not seen anything released so far.
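The texture pre-conversion step described above can be sketched in a few lines. The 3x3 matrix below is the commonly published linear-sRGB (D65) to ACEScg (AP1 primaries, D60, Bradford-adapted) matrix, rounded to four decimals, so treat the exact values as approximate:

```python
# Linear-sRGB (D65) -> ACEScg (AP1 primaries, D60), Bradford-adapted.
SRGB_TO_ACESCG = [
    [0.6131, 0.3395, 0.0474],
    [0.0702, 0.9164, 0.0134],
    [0.0206, 0.1096, 0.8698],
]

def srgb_decode(c: float) -> float:
    """Undo the sRGB transfer function (color textures are display-encoded)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def srgb_texture_to_acescg(rgb):
    """Decode an sRGB triplet to linear, then rotate it into AP1 primaries."""
    lin = [srgb_decode(c) for c in rgb]
    return [sum(m * v for m, v in zip(row, lin)) for row in SRGB_TO_ACESCG]
```

A quick sanity check that the chromatic adaptation is baked in: each matrix row sums to roughly 1.0, so white maps to white.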

Disclaimer: everything discussed here is based on MY interpretation/comprehension of the phenomena involved and might contain some inaccuracies or wrong statements.

2018-10-02, 15:07:04
Reply #310

piotrus3333

  • Active Users
  • **
  • Posts: 18
    • View Profile
What's more, the way the RRT/ODT handles the colorspace conversion gives results much closer to a film response.

ACES will not tonemap a scene-referred image. You will still need to transform it for the display device.

2018-10-02, 15:50:48
Reply #311

Fluss

  • Active Users
  • **
  • Posts: 547
    • View Profile
ACES will not tonemap a scene-referred image.

That's what the RRT is meant for.

You will still need to transform it for the display device.

That's what the ODT is meant for.

What do you mean?
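For anyone wanting to see roughly what the combined RRT + sRGB ODT does per channel: a widely used single-curve approximation is Krzysztof Narkowicz's 2015 fit (an outside reference, not something from this thread; it ignores the hue-related parts of the real transform):

```python
def aces_filmic_fit(x: float) -> float:
    """Narkowicz's curve fit of the ACES RRT + sRGB ODT, applied per channel.
    Maps scene-referred linear values onto a display-referred 0..1 range."""
    a, b, c, d, e = 2.51, 0.03, 2.43, 0.59, 0.14
    y = (x * (a * x + b)) / (x * (c * x + d) + e)
    return min(1.0, max(0.0, y))  # clamp to the display range
```

The curve is close to linear for small values, compresses highlights in a film-like shoulder, and saturates toward 1.0 for very bright scene values — which is the "tonemap" behaviour being debated above.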

2018-10-02, 16:06:03
Reply #312

piotrus3333

  • Active Users
  • **
  • Posts: 18
    • View Profile
I just don't see the ACES space as something better for rendering purposes. I was expressing my opinion as an answer to the "why is it better?" question a few posts ago.

2018-10-02, 16:29:34
Reply #313

Fluss

  • Active Users
  • **
  • Posts: 547
    • View Profile
I just don't see the ACES space as something better for rendering purposes. I was expressing my opinion as an answer to the "why is it better?" question a few posts ago.

No problem with that, everybody has their own opinion :-). But I'd be glad to hear some arguments then!

2018-10-02, 17:30:49
Reply #314

piotrus3333

  • Active Users
  • **
  • Posts: 18
    • View Profile
Assumption 1: we put our textures and lights through the ACES transform. All of that goes to a non-spectral renderer that just sees different numbers (ACES primaries). The math of light transport is the same in both cases. The output is another bunch of numbers describing scene-referred color data; transform it again so it looks good on an 8-bit display. Since all of that started from data described by the limits of an RGB representation, nothing new will appear here.

Assumption 2: since ACES is a wide-gamut color space, it can do more than any RGB space, so you can provide the rendering engine with more data — if you can somehow capture it.

That's just my take on the thing: the input data limit the output, no matter how crazy the math in between.
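Assumption 2 is easy to demonstrate numerically: a saturated AP1 primary has no non-negative sRGB representation. A sketch using the commonly published ACEScg-to-linear-sRGB matrix (values rounded to four decimals, so approximate):

```python
# ACEScg (AP1 primaries, D60) -> linear sRGB (D65), Bradford-adapted.
ACESCG_TO_SRGB = [
    [ 1.7051, -0.6218, -0.0833],
    [-0.1303,  1.1408, -0.0105],
    [-0.0240, -0.1290,  1.1530],
]

def acescg_to_srgb(rgb):
    """Rotate an ACEScg triplet into linear sRGB; values outside 0..1
    indicate chromaticities the sRGB gamut cannot reach."""
    return [sum(m * v for m, v in zip(row, rgb)) for row in ACESCG_TO_SRGB]
```

A pure AP1 green, acescg_to_srgb([0.0, 1.0, 0.0]), comes back with negative red and blue components, i.e. a color sRGB cannot reach — which also illustrates the point above: that extra headroom only matters if the input data (a spectral render, a wide-gamut capture) actually uses it.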