Hey guys,
I've been hitting my head with a hammer trying to understand a PROPER linear workflow (ending up with a 32-bit file ready for compositing) and the difference in light distribution that it potentially brings. I'm not inexperienced per se, but to say I understand the real difference LWF makes would apparently be far-fetched.
So, my question is: if I export a PNG that's already tone mapped in the VFB (exposure and highlight corrections applied), how does the way light/values behave differ from the same render exported as an OpenEXR with all the VFB settings at default, then graded in PS / Nuke to match the PNG?
Is there even a difference? My tests are inconclusive, especially since LUTs behave differently on 32-bit data than on 8-bit, of course :) LWF to me was always about having extra breathing room in post, and that's pretty much it.
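To make the question concrete, here's a toy numpy sketch of the two routes I mean. The pixel values are made up and I'm using a simple Reinhard curve as a stand-in for whatever the VFB's highlight/exposure mapping actually does, so treat it as an illustration, not any renderer's real math:

```python
import numpy as np

# Made-up linear HDR pixel values (the kind of data an EXR stores)
linear = np.array([0.05, 0.5, 2.0, 8.0])

def reinhard(x):
    # Simple Reinhard tone curve, a stand-in for a VFB highlight rolloff
    return x / (1.0 + x)

# Route A (EXR): apply +1 stop of exposure in linear space, then tone map
exr_route = reinhard(linear * 2.0)

# Route B (baked PNG): tone map first, then try the same +1 stop in post,
# clipped to display range the way an 8-bit image would be
png_route = np.clip(reinhard(linear) * 2.0, 0.0, 1.0)

print(exr_route)
print(png_route)
```

The bright values clip in route B while route A rolls them off smoothly, which is the kind of divergence I'd expect, but I can't tell if that's the whole story.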
I stumbled on the fact that I don't have a clue about this while studying the differences between the tone mapping options of V-Ray, Corona, F-Storm and Octane. Go figure, right?
Answers greatly appreciated! Thanks!