Author Topic: Time to ditch sRGB/Linear as default (?)  (Read 118234 times)

2017-02-18, 23:00:36

Ludvik Koutny

  • VIP
  • Active Users
  • ***
  • Posts: 2557
  • Just another user
    • View Profile
    • My Portfolio
So, lately I've been thinking...

Every single time I do some look development these days, I first set my tone mapping to a response curve that somewhat resembles a digital camera's: a bit of highlight compression, a bit of contrast boost. It's very important to me in order to correctly see HDRI environments, and to be able to correctly replicate materials from photographs in CG, because photographs are usually captured by a camera, which has such a response curve.
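For the curious, a curve like that can be sketched in a few lines. This is purely an illustrative toy, not the actual response curve of any camera, Corona or FStorm (none of which are public): Reinhard-style highlight compression, a bounded S-curve for contrast, and an approximate display gamma.

```python
import numpy as np

def camera_response(x, burn=1.0, contrast=1.2):
    """Map linear radiance to display values with a camera-ish curve (toy)."""
    # Reinhard-style highlight compression: [0, inf) -> [0, 1)
    c = x / (1.0 + burn * x)
    # Bounded S-curve contrast boost with fixed points at 0, 0.5 and 1
    s = c**contrast / (c**contrast + (1.0 - c)**contrast)
    # Approximate sRGB display gamma
    return s ** (1.0 / 2.2)

# Highlights several stops above middle grey roll off gently instead of clipping
print(camera_response(np.array([0.05, 0.18, 1.0, 4.0, 16.0])))
```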

It also seems that FStorm's experiment of defaulting to a camera-ish film response rather than LWF/sRGB has met with great success and overall positive feedback.


Now, there are some valid reasons why most renderers still default to LWF/sRGB, two of the major ones being:

1, As soon as your output stops being linear, you cannot correctly composite individual render elements anymore.

2, If you add some sort of tone mapping and bake it into the final output, you will destructively lose a bit of dynamic range data.
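Reason 1 can be shown with a few lines of toy code: once a non-linear curve is applied, the per-element sum no longer reconstructs the beauty pass. Reinhard is used here only as an illustrative stand-in curve, and the pixel values are made up.

```python
import numpy as np

def tonemap(x):
    # Any non-linear curve breaks additivity; Reinhard as a simple stand-in
    return x / (1.0 + x)

diffuse = np.array([0.4, 1.2])
reflection = np.array([0.3, 2.0])
beauty = diffuse + reflection  # in linear radiance the elements sum exactly

# Tone-mapped elements no longer sum to the tone-mapped beauty pass:
summed_after = tonemap(diffuse) + tonemap(reflection)
print(np.allclose(summed_after, tonemap(beauty)))  # False
```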

Nonetheless, I think that these reasons have become historical these days, because:

1, Compositing of separate render elements has become a rather rare and niche workflow. It was essential back in the day, when rendering was not physically based by default, so things needed to be "made look right". That is not the case anymore. It would be more reasonable if those who use rare workflows had to go the extra step and make their renders linear when they want to do advanced compositing, because the majority of Corona users do not.

It could even be implemented as a one-button solution, simply called "force linear output" or something like that. But there's not much reason to abstain from the joy of having Corona behave like a digital camera by default just because of a few who still use workflows that are now becoming legacy.

2, The main reason people choose not to do tone mapping in the VFB, but in post instead, is arguably that "they can bring back highlights in post". The thing is, if you apply some sort of tone mapping, such as highlight compression, you do it mainly to bring those highlights back in the first place.

If you save a tone-mapped image whose highlights are not completely clamped, just slightly compressed by tone mapping, in at least a 16-bit format, you will still be able to go back in post and adjust, for example, the tonal contrast of the highlights without getting any banding or artifacts. Yes, the gradient won't be as precise as it would be with a linear image, but then neither would footage from a real movie camera.
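To illustrate the point about compressed highlights staying recoverable, here is a toy round trip. The mild Reinhard-style curve and the burn value are made-up assumptions, not Corona's actual operator; the point is that values stored through half-float survive inversion back to roughly the original range.

```python
import numpy as np

BURN = 0.25  # mild, made-up compression strength

def compress(x):
    # Gentle Reinhard-style highlight compression
    return x / (1.0 + BURN * x)

def uncompress(y):
    # Algebraic inverse of compress(), valid while y < 1 / BURN
    return y / (1.0 - BURN * y)

hdr = np.array([0.5, 2.0, 8.0, 32.0])
stored = compress(hdr).astype(np.float16)        # what a 16-bit file would hold
recovered = uncompress(stored.astype(np.float64))
print(recovered)  # close to the original values, within half-float precision
```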

The main idea here is that in compositing you would already start off with something a lot closer to movie camera footage than to linear render footage, so you could skip the entire step of first making it look tonally realistic before proceeding to some creative, moody grading.

Many people praise Corona for being a lot like a point-and-shoot camera rather than a cumbersome technical tool, so I propose pushing Corona even closer to that ideal digital camera behavior. I think it's time to enter a new era of rendering, where renderers become complete simulators of movie/photoshoot sets, simulating most real-world phenomena. Not just simulators of light transport and surface and volume shading, which then get printed onto a pixel-perfect radiometric grid (a digital image), but also of optical effects and the digital film's response to the light that reaches the camera's film back: contrast, glare, subtle blurring and sharpening, possibly even lens flares, and so on. Basically a state where, if you had a near-perfect representation of a real-world scene, with scanned geometry and shaders, you would get an image indistinguishable from an actual photo, without having to work hard for it in Photoshop or other compositing software.

Therefore, I'd like to know your opinion on breaking the old habit of linear being the default, in exchange for the greater good of the future. :)

« Last Edit: 2017-02-18, 23:05:02 by Rawalanche »

2017-02-18, 23:36:08
Reply #1

srikken

  • Active Users
  • **
  • Posts: 39
    • View Profile
Totally agree!   
The whole linear thing is just confusing and a pain in the ass..
Especially since the Corona 1.5 VFB controls were added, most of the images I make won't even see the post-production part, so having even more realism in Corona itself would be awesome!

Coincidentally, Blender Guru posted a video about sRGB in Blender yesterday, about why it sucks. I'm not really at home in the whole technical stuff, so I don't know if it also applies to Corona:


And we need a ''like'' button ;)

2017-02-19, 01:59:59
Reply #2

Ludvik Koutny

  • VIP
  • Active Users
  • ***
  • Posts: 2557
  • Just another user
    • View Profile
    • My Portfolio
Yep, actually I've been thinking about it for a year or so already, but just today I saw that video, and it finally pushed me to post about it. The video actually contains a lot of inaccuracies and sometimes nonsense, but regardless, the overall point he's making aligns with mine :)

2017-02-19, 13:36:20
Reply #3

agentdark45

  • Active Users
  • **
  • Posts: 577
    • View Profile
Great post, I've had similar feelings about this for some time. I think rendering software needs to evolve to suit how people use it day to day. I've been eyeing up FStorm for some time now because of how beautifully photographic its images look out of the box/with minimal post-processing. Daniel Reutersward's images are a clear example of this: https://www.facebook.com/danielreuterswardvisualisation/

Another prime example is some of the stuff from JakubCech on here, and how he talks about emulating a real camera in post: https://forum.corona-renderer.com/index.php/topic,14288.msg91657.html

Regarding the mapping, I would not like to be very specific, as for me it's like the Coca-Cola formula, but I can say that I have been polishing and developing it for a few years now and finally have it in a compact, ready-to-use form. It's based on post-processing the raw 32-bit linear image using software emulation of some of the real photographic process. To put the complicated stuff simply: save in linear 32-bit, then apply some processes (like bleach bypass etc., but precisely); that is the core.
Two years ago I managed to bake it into a LUT and used VFB+ for a long time, but finally Corona comes with LUT support, thank god :)

Jakub

Some renders from him that really blew me away: https://www.behance.net/gallery/23707939/The-Ranch
Vray who?

2017-02-19, 13:50:35
Reply #4

burnin

  • Active Users
  • **
  • Posts: 1604
    • View Profile
Good plan, Rawalanche. The old limitations need to be overcome...

Agree about the video... lots of inaccuracies and nonsense... but that's Andrew ;)

A few resources:
Filmic Blender started here:
Render with a wider dynamic range in cycles to produce photorealistic looking images
Filmic Blender addon by Sobotka


2017-02-19, 14:04:22
Reply #5

Njen

  • Active Users
  • **
  • Posts: 557
    • View Profile
    • Cyan Eyed
1, Compositing of separate render elements has become a rather rare and niche workflow. It was essential back in the day, when rendering was not physically based by default, so things needed to be "made look right". That is not the case anymore.

As a 20-year veteran of the CG/VFX industry, I can categorically state that the entire VFX industry still comps using layers/passes/elements. Every single film you see that has VFX is done this way.

Keeping the data pure (linear) is the only real standard across the entire industry. Not doing so will deeply hurt any inroads Corona wants to make beyond whatever currently small userbase wants what you are requesting.

Please do not change this.

« Last Edit: 2017-02-19, 14:23:38 by Njen »

2017-02-19, 15:48:06
Reply #6

Ludvik Koutny

  • VIP
  • Active Users
  • ***
  • Posts: 2557
  • Just another user
    • View Profile
    • My Portfolio
1, Compositing of separate render elements has become a rather rare and niche workflow. It was essential back in the day, when rendering was not physically based by default, so things needed to be "made look right". That is not the case anymore.

As a 20-year veteran of the CG/VFX industry, I can categorically state that the entire VFX industry still comps using layers/passes/elements. Every single film you see that has VFX is done this way.

Keeping the data pure (linear) is the only real standard across the entire industry. Not doing so will deeply hurt any inroads Corona wants to make beyond whatever currently small userbase wants what you are requesting.

Please do not change this.

I've heard quite the opposite in recent years... And from several independent high profile sources.

It's not about removing an option to render linearly. It's just about linear output not being the default.

I think the major roadblock here is old CG veterans, who often do things just because "that's the way it has always been done", without ever stopping to take some time to think about whether things could be done better.
« Last Edit: 2017-02-19, 15:53:04 by Rawalanche »

2017-02-19, 16:06:45
Reply #7

romullus

  • Global Moderator
  • Active Users
  • ****
  • Posts: 9088
  • Let's move this topic, shall we?
    • View Profile
    • My Models
Can someone explain to dumbass me: are we talking here about changing default tone mapping values, like HC, contrast and curves, or something entirely different, like changing the colour space from wide RGB to something else? I got confused by the original post and this video from Blender Guru.
I'm not Corona Team member. Everything i say, is my personal opinion only.
My Models | My Videos | My Pictures

2017-02-19, 16:20:02
Reply #8

Ludvik Koutny

  • VIP
  • Active Users
  • ***
  • Posts: 2557
  • Just another user
    • View Profile
    • My Portfolio
Just changing the tone mapping defaults to something non-linear, and at the same time adding a button that will set everything to linear with a single click (for when you need to do pass compositing).

2017-02-19, 16:31:12
Reply #9

agentdark45

  • Active Users
  • **
  • Posts: 577
    • View Profile
Just changing the tone mapping defaults to something non-linear, and at the same time adding a button that will set everything to linear with a single click (for when you need to do pass compositing).

This is a good solution and I can't see why anyone would have a problem with it.
Vray who?

2017-02-19, 16:54:36
Reply #10

romullus

  • Global Moderator
  • Active Users
  • ****
  • Posts: 9088
  • Let's move this topic, shall we?
    • View Profile
    • My Models
Ok, I got it. Although I'd prefer presets instead of a button: a few predefined ones, plus the ability to save custom ones.
I'm not Corona Team member. Everything i say, is my personal opinion only.
My Models | My Videos | My Pictures

2017-02-19, 17:44:41
Reply #11

Ludvik Koutny

  • VIP
  • Active Users
  • ***
  • Posts: 2557
  • Just another user
    • View Profile
    • My Portfolio
It would not be buttons, just one button. The point here is not to have a set of presets for everyone to pick from. The point is completely different: fundamentally changing what we perceive as the default image.

Right now, we perceive linear sRGB as the default, the starting line, and we then work with some parameters to bring that sRGB closer to photorealism. We manually have to twist some knobs in order to take a picture which by default is not realistic to our eyes and, using some controls, turn it into an image that our eyes perceive as photorealistic. So why not just skip this process and have renderers output, by default, the same ranges as cameras do? If you take a picture with your camera, you don't tweak it to look more photorealistic, because it already is a photo; it is realistic. You tweak just the mood, using some artistic controls. There's no significant reason why a renderer should not work the same way. Not by having a dropdown where you can pick from numerous response curves, one of which is called photorealistic, but by having it default to a camera, with an option to switch to a very special mode which will make your output less realistic, but composable in post.

This is not just a discussion about some feature's design. This requires some out-of-the-box thinking, some thinking about the future of CG imagery in general. You can't really perceive it properly if your mind stays within the bounds of regular established workflows.

2017-02-19, 18:39:28
Reply #12

Njen

  • Active Users
  • **
  • Posts: 557
    • View Profile
    • Cyan Eyed
I've heard quite the opposite in recent years... And from several independent high profile sources.

I'm not trying to be rude, but I'm going to have to call you out on this. I can't think of one single film where CG has been used straight as-is in the final deliverable.

I think the major roadblock here is old CG veterans, who often do things just because "that's the way it has always been done", without ever stopping to take some time to think about whether things could be done better.

This is incorrect. Linear is the way it is done because of maths: all of the operations to reconstruct the various colour components are simple operations that can be accurately reproduced in any renderer and compositor.

2017-02-19, 18:45:52
Reply #13

Njen

  • Active Users
  • **
  • Posts: 557
    • View Profile
    • Cyan Eyed
We manually have to twist some knobs in order to take a picture which by default is not realistic to our eyes and, using some controls, turn it into an image that our eyes perceive as photorealistic. So why not just skip this process and have renderers output, by default, the same ranges as cameras do? If you take a picture with your camera, you don't tweak it to look more photorealistic, because it already is a photo; it is realistic.

One major thing I've learnt in the VFX industry is that there is no standard for 'photorealism'. As a lighter by trade, many times I've output what I think are 'photoreal' setups, and backed them up with real-world data, only to be told that it doesn't look 'real' by the client (many times the best directors in the industry).

Quite simply, 'photorealism' is purely subjective, a moving target that can never be pinned down.

2017-02-19, 19:01:18
Reply #14

Ludvik Koutny

  • VIP
  • Active Users
  • ***
  • Posts: 2557
  • Just another user
    • View Profile
    • My Portfolio
I think there has been a misunderstanding. I've never claimed that images straight out of the VFB are being used. What I've claimed is that the workflow of rendering Diffuse, Reflection, Refraction, Indirect GI, SSS and Self-Illumination passes, and then compositing them back in post using an ADD operation just to reconstruct what in the end becomes a 1:1 beauty pass, is not used much anymore. Mostly because, now that we have physically based rendering, color correcting separate light path components does not make things look better anymore. If anything, it actually makes them look worse.

People still render out lots of passes and masks, but those can be composited together even without the output being perfectly linear. Non-linear image output only removes the possibility of compositing separate shading components so that they make up an exactly 1:1, pixel-perfect beauty pass.

Precisely as you said, linear has been the standard because compositors needed to reconstruct color components using simple mathematical operations (add and multiply) to get the beauty pass. But that was mainly so that they had separate control over those individual color components. And they needed control over them mostly because they needed to make bad-looking CG pop. Nowadays, thanks to physically based lighting, shading and rendering, unless a very unskilled artist is involved, it's very hard to make CG bad in a way that can be fixed by tweaking separate shading components. If someone sets up a bad material, you can only rarely fix it magically by selecting, for example, the reflection component of the beauty pass and boosting the reflection on a certain object. Yes, it may improve things slightly, but nowhere near as much as actually going back to the 3D scene and fixing the material there, which changes the illumination on the surrounding surfaces based on the new properties of the material.
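To sketch that difference: grading a single element and re-summing is exact in linear, but only approximate once the elements are tone mapped. Reinhard stands in as an illustrative curve, and the pass values are random made-up data.

```python
import numpy as np

rng = np.random.default_rng(0)
diffuse = rng.uniform(0.0, 1.0, (4, 4))     # made-up element data
reflection = rng.uniform(0.0, 1.0, (4, 4))

def tonemap(x):
    return x / (1.0 + x)  # illustrative non-linear curve

# Linear workflow: boost one element, re-sum, then tone map at the end
exact = tonemap(diffuse + 1.5 * reflection)

# Tone-mapped workflow: the same boost on already tone-mapped elements
# yields only an approximation of that result
approx = tonemap(diffuse) + 1.5 * tonemap(reflection)
print(np.max(np.abs(approx - exact)))  # non-zero reconstruction error
```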

As for there being no standard for photorealism: I am talking just about a standard for displaying shading and lighting from the renderer on an average screen. I am talking about this: http://acescentral.com/

And I think that photorealism is far from subjective. Actually, it can be defined quite easily: a computer-generated image/video which the majority of people find indistinguishable from a photographed image/shot video.
« Last Edit: 2017-02-19, 19:06:19 by Rawalanche »