So, lately I've been thinking...
Every time I do look development these days, the first thing I do is set my tone mapping to a response curve that roughly resembles a digital camera's: a bit of highlight compression, a bit of contrast boost. This is essential for me to see HDRI environments correctly, and to faithfully replicate materials from photographs in CG, because photographs are usually captured by a camera, which has exactly such a response curve.
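To make "camera-ish" concrete, here's a purely illustrative sketch in Python/NumPy (not Corona's or any real camera's actual curve; the parameters are made up): highlight compression plus a mild contrast boost, compared to a plain sRGB transfer:

```python
import numpy as np

def camera_like_response(rgb_linear, compression=1.0, contrast=1.1):
    """Toy 'camera-ish' tone curve (illustrative only): a mild contrast
    boost followed by Reinhard-style highlight compression."""
    x = np.asarray(rgb_linear, dtype=np.float64)
    # Contrast boost as a power curve, pivoted so 18% grey stays put.
    pivot = 0.18
    x = pivot * (x / pivot) ** contrast
    # Roll highlights off asymptotically toward 1.0 instead of clipping,
    # similar to the shoulder of a film response curve.
    return x / (1.0 + compression * x)

def srgb_encode(x):
    """Plain linear -> sRGB display transfer (the LWF default), for comparison."""
    x = np.clip(x, 0.0, 1.0)
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * x ** (1 / 2.4) - 0.055)

hdr = np.array([0.05, 0.18, 1.0, 4.0, 16.0])   # linear scene values
print(srgb_encode(hdr))                        # everything above 1.0 clips flat
print(srgb_encode(camera_like_response(hdr)))  # highlights keep separation
```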
It also seems that Fstorm's experiment of defaulting to a camera-like film response rather than LWF/sRGB has been a great success, with overall positive feedback.
Now, there are some valid reasons why most renderers still default to LWF/sRGB, two of the major ones being:
1. As soon as your output stops being linear, you can no longer correctly composite individual render elements (the sketch after this list shows why).
2. If you apply some sort of tone mapping and bake it into the final output, you destructively lose some dynamic range data.
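On the first point, the math is easy to show with a toy example (Python/NumPy, made-up element values; Reinhard as a stand-in for any tone curve) of why compositing only adds up while the data stays linear:

```python
import numpy as np

def tonemap(x):
    # Any non-linear curve will do; Reinhard as a simple stand-in.
    return x / (1.0 + x)

diffuse  = np.array([0.4, 0.2, 0.1])  # made-up linear RGB element values
specular = np.array([0.9, 0.9, 0.8])
beauty   = diffuse + specular         # additive compositing holds in linear

# After tone mapping, the sum of the mapped elements no longer matches the
# mapped beauty, because tonemap(a + b) != tonemap(a) + tonemap(b):
print(tonemap(beauty))                       # ~[0.565, 0.524, 0.474]
print(tonemap(diffuse) + tonemap(specular))  # ~[0.759, 0.640, 0.535]
```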
Nonetheless, I think these reasons have become mostly historical, because:
1. Compositing separate render elements has become a rather rare, niche workflow. It was essential back in the day, when rendering was not physically based by default and things needed to be "made to look right". That is no longer the case. It would be more reasonable if those who use these rare workflows took the extra step of making their renders linear when they want to do advanced compositing, because the majority of Corona users do not.
It could even be implemented as a one-button solution, simply called "force linear output" or something like that. There's not much reason to deny everyone the joy of having Corona behave like a digital camera by default just because a few people still use workflows that are becoming legacy.
2. The main reason people choose not to tone map in the VFB and to do it in post instead is arguably that "they can bring back highlights in post". The thing is, if you apply some sort of tone mapping, such as highlight compression, you do it mainly to bring those highlights back in the first place.
If you save a tone mapped image whose highlights are not completely clamped, just slightly compressed by tone mapping, in at least a 16-bit format, you can still go back in post and adjust, for example, the tonal contrast of the highlights without any banding or artifacts (a sketch of that round trip follows below). Yes, the gradient won't be as precise as it would be in a linear image, but then neither would footage from a real movie camera.
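A quick sketch of that round trip (again Python/NumPy; Reinhard compression stands in for whatever curve the VFB applies, and float16 stands in for a 16-bit half-float file format):

```python
import numpy as np

def compress(x):    # Reinhard-style highlight compression
    return x / (1.0 + x)

def uncompress(y):  # its exact inverse, applied in post
    return y / (1.0 - y)

highlights = np.array([2.0, 8.0, 50.0])           # linear HDR values
stored = compress(highlights).astype(np.float16)  # "saved" in a 16-bit format

recovered = uncompress(stored.astype(np.float64))
print(recovered)  # ~[2.0, 7.98, 50.2]: tiny precision loss, nothing clipped
```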
The main idea here is that in compositing you would already start off with something much closer to movie camera footage than to linear render footage, so you could skip the entire step of first making it look tonally realistic before proceeding to creative, moody grading.
Many people praise Corona for being a lot like a point-and-shoot camera rather than a cumbersome technical tool, so I propose pushing Corona even closer to that ideal digital camera behavior. I think it's time to enter a new era of rendering, where renderers become complete simulators of movie/photoshoot sets, simulating most real-world phenomena: not just light transport and surface and volume shading printed onto a pixel-perfect radiometric grid (a digital image), but also optical effects and the digital film's response to the light reaching the camera back, such as contrast, glare, subtle blurring and sharpening, possibly even lens flares, and so on. Basically a state where, if you had a near-perfect representation of a real-world scene, with scanned geometry and shaders, you would get an image indistinguishable from an actual photo, without having to work hard for it in Photoshop or other compositing software.
Therefore, I'd like to hear your opinions on breaking the old habit of linear-by-default in exchange for the greater good of the future. :)