Author Topic: Changing type of camera sensor in Corona to achieve new looks - possible?  (Read 10631 times)

2014-08-15, 07:30:15

kubiak54

  • Active Users
  • **
  • Posts: 28
  • jakubcech.net
    • View Profile
    • jakubcech
Ok, my question (or theory) came up when I was trying to achieve the look / color mapping of Jose Villa's photos. I just love how his colors and overall color mapping look. After spending some time investigating the matter, I found out he shoots film, so he is using a different kind of sensor and a different kind of color mapping (though shooting film is a pretty complex process). Every camera has its own kind of sensor, sensor parameters, and sensor curves that map the incoming real-world data, right? So we have real-world data hitting the sensor, which maps that data (color mapping) according to the light's wavelength, exposure, and all the other incoming parameters.

Now my question is: is there any way we can change our sensor in Corona? I guess some default sensor is just "built into Corona", giving us color-mapped 32-bit output, and we can only apply postproduction, not change that sensor at all. (By color mapping I mean mapping according to some sensitivity to certain wavelengths of light etc., not highlight compression.) A 32-bit EXR is equal to an HDR real-world image, so there is no way to magically change the sensor afterwards, is there? It would be incredible to have some parameters / possibilities to define such a sensor. I am no expert, I love Corona as it is (thank you Keymaster and company) and I am really looking forward to the new BRDFs, but this would be a really interesting area to explore. I am also attaching a picture of what I mean.
I am looking forward to seeing any insights!
jakubcech.net

2014-08-15, 13:16:15
Reply #1

rampally

  • Active Users
  • **
  • Posts: 209
    • View Profile
Interesting... but so far no other program has this type of option... maybe Corona will?

2014-08-15, 17:32:04
Reply #2

Juraj

  • Active Users
  • **
  • Posts: 4813
    • View Profile
    • studio website
Interesting... but so far no other program has this type of option... maybe Corona will?

All of them have it, just in the primitive form of highlight compression algorithms. The CGI equivalent of camera/human-eye tone mapping is exactly that: a conversion from linear data to some curve, which compresses the dynamic range we can perceive at the same instant. Film has an additional color response derived from the film stock.

The filmic color mapping posted on this forum as a script for Fusion/Nuke is as close as one can get to it, but you can always build the curve fully manually, or build it for the three RGB channels separately to simulate film.
Or use a single curve for the whole RGB range and then apply a filmic LUT as an additional filter to get both the tone mapping and the color cast.
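To make that concrete, here is a minimal sketch in Python/NumPy of the manual approach described above: a simple Reinhard-style shoulder applied with a slightly different parameter per RGB channel to stand in for a film-like color response. The curve shape and the per-channel values are made-up placeholders, not any real film stock and not Corona's internal mapping.

Code:
import numpy as np

def filmic_shoulder(x, s):
    # Reinhard-style compression: near-linear in the shadows, rolls off the highlights
    return x / (x + s)

def tone_map_per_channel(linear_rgb, shoulders=(0.9, 1.0, 1.1)):
    # linear_rgb: unclamped linear render data, shape (H, W, 3)
    # shoulders: hypothetical per-channel parameters standing in for a film-like response
    out = np.empty_like(linear_rgb)
    for c, s in enumerate(shoulders):
        out[..., c] = filmic_shoulder(linear_rgb[..., c], s)
    return np.clip(out, 0.0, 1.0)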
Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!

2014-08-15, 18:31:59
Reply #3

JakubCech

  • Active Users
  • **
  • Posts: 126
  • jakubcech.net
    • View Profile
    • jakubcech
Sorry guys, I started the thread with my old login.

Interesting... but so far no other program has this type of option... maybe Corona will?

All of them have it, just in the primitive form of highlight compression algorithms. The CGI equivalent of camera/human-eye tone mapping is exactly that: a conversion from linear data to some curve, which compresses the dynamic range we can perceive at the same instant. Film has an additional color response derived from the film stock.

The filmic color mapping posted on this forum as a script for Fusion/Nuke is as close as one can get to it, but you can always build the curve fully manually, or build it for the three RGB channels separately to simulate film.
Or use a single curve for the whole RGB range and then apply a filmic LUT as an additional filter to get both the tone mapping and the color cast.

This is true, but if it were absolutely true, it would be possible to create any look from a real HDR photograph. Yet one cannot obtain the real film look (or the many different looks of other cameras) from an HDR photograph that easily (although the guys at http://www.gettotallyrad.com/ do a pretty amazing job). Film handles colors (saturation, luminosity etc.) differently when you raise exposure and maps shadows differently, and this drives me to the idea of using a "different sensor" - different spectral response curves and so on. I may be wrong, and a 32-bit output may really be all one needs; I am just looking at how this works in the real world. You can probably get any look out of 32-bit data with fully custom highlight compression, custom RGB curves, custom saturation and so on, but you will still not hit that precise look, and you will probably spend 10 hours tweaking such postpro for every render.

2014-08-15, 18:35:57
Reply #4

maru

  • Corona Team
  • Active Users
  • ****
  • Posts: 13635
  • Marcin
    • View Profile
I'm not an expert either, but I think the kind of color mapping applied in Corona depends on the output format, is usually called sRGB, and the thing you wrote about is post-processing done in some other app (like Photoshop).
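For reference, the sRGB part mentioned here is just the standard transfer curve applied to linear values once they are already mapped into 0-1; a sketch of that curve (the published sRGB formula, nothing Corona-specific):

Code:
import numpy as np

def linear_to_srgb(x):
    # Standard sRGB transfer function: linear segment near black, ~2.4 power above
    x = np.clip(x, 0.0, 1.0)
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * np.power(x, 1.0 / 2.4) - 0.055)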
Marcin Miodek | chaos-corona.com
3D Support Team Lead - Corona | contact us

2014-08-15, 19:00:52
Reply #5

Juraj

  • Active Users
  • **
  • Posts: 4813
    • View Profile
    • studio website
Sorry guys, I started the thread with my old login.

Interesting... but so far no other program has this type of option... maybe Corona will?

All of them have it, just in the primitive form of highlight compression algorithms. The CGI equivalent of camera/human-eye tone mapping is exactly that: a conversion from linear data to some curve, which compresses the dynamic range we can perceive at the same instant. Film has an additional color response derived from the film stock.

The filmic color mapping posted on this forum as a script for Fusion/Nuke is as close as one can get to it, but you can always build the curve fully manually, or build it for the three RGB channels separately to simulate film.
Or use a single curve for the whole RGB range and then apply a filmic LUT as an additional filter to get both the tone mapping and the color cast.

This is true, but if it were absolutely true, it would be possible to create any look from a real HDR photograph. Yet one cannot obtain the real film look (or the many different looks of other cameras) from an HDR photograph that easily (although the guys at http://www.gettotallyrad.com/ do a pretty amazing job). Film handles colors (saturation, luminosity etc.) differently when you raise exposure and maps shadows differently, and this drives me to the idea of using a "different sensor" - different spectral response curves and so on. I may be wrong, and a 32-bit output may really be all one needs; I am just looking at how this works in the real world. You can probably get any look out of 32-bit data with fully custom highlight compression, custom RGB curves, custom saturation and so on, but you will still not hit that precise look, and you will probably spend 10 hours tweaking such postpro for every render.

What is "real HDR photograph" ? Current chips in cameras (like current 35mm from Sony found in A7a, NikonD800) already have quite high dynamic range themselves, close to 14 stops, which really is close to what often film stock could achieve.
There is HDR merged look, which only refers to post-production. What are you refering to ? So from raw files you can't derive infinite look, although 14 stops already let you do a lot. Arri Alexa has that and it's identical to film look in most people's eyes as it gets.
Linear output is just that, 1 stop. But if it's fully unclamped, true linear, then you can have as many stops as you wish. And thus you're not limited anyhow in post and you can achieve any look, given you know how to go about it.

What does it have to do with tweaking 10 hours something ? We're not talking artistic control, but pure curve emulation, you do it once and just apply it everytime to linear result. Linear will be linear always.
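A quick illustration of why a fully unclamped linear render is so flexible: re-exposing it is a single multiplication, one factor of two per stop (a generic sketch; the file name and loader are placeholders):

Code:
import numpy as np

def expose(linear, stops):
    # Each stop doubles (or halves) the radiance, like changing exposure in camera
    return linear * (2.0 ** stops)

# e.g. pull a 32-bit beauty pass down by two stops before grading:
# beauty = load_exr("beauty.exr")   # any float HDR loader
# darker = expose(beauty, -2.0)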

I'm not an expert either, but I think the kind of color mapping applied in Corona depends on the output format, is usually called sRGB, and the thing you wrote about is post-processing done in some other app (like Photoshop).

That's a color space, and it has very little to do with this topic :- )



Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!

2014-08-15, 19:54:30
Reply #6

JakubCech

  • Active Users
  • **
  • Posts: 126
  • jakubcech.net
    • View Profile
    • jakubcech
I'm not an expert either, but I think the kind of color mapping applied in Corona depends on the output format, is usually called sRGB, and the thing you wrote about is post-processing done in some other app (like Photoshop).
Well, color mapping in Corona is just highlight compression. I am not well versed in color spaces though, so I cannot really say whether it is somehow connected to color space; probably not (it's just a matter of color rendition).

Sorry guys, I started the thread with my old login.

Interesting... but so far no other program has this type of option... maybe Corona will?

All of them have it, just in the primitive form of highlight compression algorithms. The CGI equivalent of camera/human-eye tone mapping is exactly that: a conversion from linear data to some curve, which compresses the dynamic range we can perceive at the same instant. Film has an additional color response derived from the film stock.

The filmic color mapping posted on this forum as a script for Fusion/Nuke is as close as one can get to it, but you can always build the curve fully manually, or build it for the three RGB channels separately to simulate film.
Or use a single curve for the whole RGB range and then apply a filmic LUT as an additional filter to get both the tone mapping and the color cast.

This is true, but if it were absolutely true, it would be possible to create any look from a real HDR photograph. Yet one cannot obtain the real film look (or the many different looks of other cameras) from an HDR photograph that easily (although the guys at http://www.gettotallyrad.com/ do a pretty amazing job). Film handles colors (saturation, luminosity etc.) differently when you raise exposure and maps shadows differently, and this drives me to the idea of using a "different sensor" - different spectral response curves and so on. I may be wrong, and a 32-bit output may really be all one needs; I am just looking at how this works in the real world. You can probably get any look out of 32-bit data with fully custom highlight compression, custom RGB curves, custom saturation and so on, but you will still not hit that precise look, and you will probably spend 10 hours tweaking such postpro for every render.

What is "real HDR photograph" ? Current chips in cameras (like current 35mm from Sony found in A7a, NikonD800) already have quite high dynamic range themselves, close to 14 stops, which really is close to what often film stock could achieve.
There is HDR merged look, which only refers to post-production. What are you refering to ? So from raw files you can't derive infinite look, although 14 stops already let you do a lot. Arri Alexa has that and it's identical to film look in most people's eyes as it gets.
Linear output is just that, 1 stop. But if it's fully unclamped, true linear, then you can have as many stops as you wish. And thus you're not limited anyhow in post and you can achieve any look, given you know how to go about it.

What does it have to do with tweaking 10 hours something ? We're not talking artistic control, but pure curve emulation, you do it once and just apply it everytime to linear result. Linear will be linear always.

I'm not an expert either, but I think the kind of color mapping applied in Corona depends on the output format, is usually called sRGB, and the thing you wrote about is post-processing done in some other app (like Photoshop).

That's a color space, and it has very little to do with this topic :- )

I am talking about color rendition. So forget about the HDR image; you can capture as much dynamic range with a high-end digital chip as with film (according to your words), but the color rendition is absolutely different - and that is what I am talking about. I mean, if you could achieve the precise look of any camera chip, there would be no point in buying such a camera; you could just buy one very high dynamic range camera and achieve any look you want - but people are still buying cameras (not everybody) because they like how the color rendition with that chip and equipment looks. And this is exactly what drives me to wonder whether it is connected with some more physical (light wavelength) pre-rendering curve adjustments. So I am wondering.

2014-08-15, 20:08:05
Reply #7

Juraj

  • Active Users
  • **
  • Posts: 4813
    • View Profile
    • studio website
But you can download those color response curves in the form of LUT profiles. Of course, if you apply them directly to the linear result it will not look right. Most LUTs are intended for a specific device, i.e. RED, Canon, etc., and those devices already come with their own curve.

Some renderers do integrate them directly; I am pretty sure Octane does, maybe Thea (as their Colimo product does that to any input).

Theoretically, if you map your dynamic range to look like that of a DSLR (for example the D800) and then apply the response curve of a film stock you like (Kodak, Agfa, Fuji, etc.), you will have your filmic look. Without the artifacts of course, and the lens glare... and the million other things that contribute.
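As a sketch of that last step, here is the "apply a film response on top of a display-ready image" idea in Python/NumPy. The sample points are invented placeholders; real curves would be digitised from a manufacturer's published characteristic curves.

Code:
import numpy as np

# Hypothetical per-channel response samples (input level -> output level)
curve_in = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
curve_out = {
    "r": np.array([0.0, 0.22, 0.55, 0.82, 1.00]),
    "g": np.array([0.0, 0.24, 0.52, 0.80, 1.00]),
    "b": np.array([0.0, 0.20, 0.50, 0.78, 0.98]),
}

def apply_film_response(display_rgb):
    # display_rgb must already be tone-mapped to 0-1 (the "DSLR-like" baseline)
    out = np.empty_like(display_rgb)
    for c, key in enumerate("rgb"):
        out[..., c] = np.interp(display_rgb[..., c], curve_in, curve_out[key])
    return out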

you could just buy one very high dynamic range camera and achieve any look you want

And you can. It seems to work fine for the movie industry.

but people are still buying cameras (not everybody) because they like how the color rendition with that chip and equipment looks.


Are you referring to digital cameras? They look very similar today imho, unless they specifically try to reach the film look, as in the Arri example. And most cameras are starting to use almost the same chips; half of them come from Sony.
Any filmic look in current TV/movies/media etc. is, 90 percent of the time, achieved by post-production grading.
Also, most people (photographers) aren't fans of extensive post-production. If a device has a "look" from the start, it will attract its crowd.


But I know what you want. You want the renderer to instantly capture the image with a look that emulates an existing device, which would make everyone in the CGI world extremely happy.
I also think this is a direction renderers should increasingly integrate, instead of yet more features. It's 2014 and I would like to have my one-button solution rather than myriads of SSS options (a random example that came to mind, don't kill me, SSS guys).
« Last Edit: 2014-08-15, 20:31:37 by Juraj_Talcik »
Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!

2014-08-15, 22:23:26
Reply #8

Adanmq

  • Primary Certified Instructor
  • Active Users
  • ***
  • Posts: 94
    • View Profile
    • 3D Collective
In digital filmmaking, the "look" does not depend only on the camera used. Lighting and post-processing are the most important parts, apart from a lot of physical effects that are really difficult to emulate; you can download sample RAW footage from any camera and compare it with the end result. Even if you have a renderer that emulates an Alexa or RED, if you don't light the scene the way a film production does and make perfect materials, you will never get the same result. But you can get almost anything in post using a 32bpc render.

2014-08-17, 07:05:53
Reply #9

slebed

  • Active Users
  • **
  • Posts: 5
    • View Profile
Remember that the data coming out of an Alexa or other Arri camera is essentially a flat color image. The footage has to be graded in color correction, which is where the 'look' the cinematographer is after gets realized. If you save your renders out as 16-bit half-float EXRs, you'll have the ability to grade your renders to achieve whatever look you require.
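For anyone curious what "flat" means in practice: log footage maps a wide range of stops into 0-1 so that nothing is thrown away before grading. A toy log encode in Python (deliberately generic; not ARRI's actual LogC formula, and the stop counts are arbitrary):

Code:
import numpy as np

def log_encode(linear, mid_grey=0.18, stops_above=6.0, stops_below=8.0):
    # Express each pixel as stops relative to mid grey, then spread that range
    # across 0-1; the result looks flat until a grade or LUT is applied
    stops = np.log2(np.maximum(linear, 1e-8) / mid_grey)
    return np.clip((stops + stops_below) / (stops_above + stops_below), 0.0, 1.0)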

2014-08-18, 23:21:29
Reply #10

CiroC

  • Active Users
  • **
  • Posts: 506
    • View Profile
    • Portfolio
So, let me see if I understood correctly. If I save a render as a 32-bit EXR image, is it possible to use a LUT file to give it a filmic look? Without tweaking gamma?


2014-08-21, 02:09:13
Reply #11

JakubCech

  • Active Users
  • **
  • Posts: 126
  • jakubcech.net
    • View Profile
    • jakubcech
But I know what you want. You want the renderer to instantly capture the image with a look that emulates an existing device, which would make everyone in the CGI world extremely happy.
Yes, very true, that is what I am trying to achieve. I am just saying that all renderings have a very similar color mapping because nobody changes this "sensor", and it would be incredible to explore this area for new ways of color mapping, to find new looks, and to work with pre-prepared LUTs that emulate some look (the V-Ray VFB comes with an option to apply a LUT directly to the output). Maybe it would not be a bad idea to have a topic where people can share their LUTs / new color-mapping workflows for 32-bit output EXRs?

In digital filmmaking, the "look" does not depend only on the camera used. Lighting and post-processing are the most important parts, apart from a lot of physical effects that are really difficult to emulate; you can download sample RAW footage from any camera and compare it with the end result. Even if you have a renderer that emulates an Alexa or RED, if you don't light the scene the way a film production does and make perfect materials, you will never get the same result. But you can get almost anything in post using a 32bpc render.

Definitely agree.

Remember that the data coming out of an Alexa or other Arri camera is essentially a flat color image. The footage has to be graded in color correction, which is where the 'look' the cinematographer is after gets realized. If you save your renders out as 16-bit half-float EXRs, you'll have the ability to grade your renders to achieve whatever look you require.

I find 16-bit clamping an issue: when I convert 32-bit to 16-bit, the image is already clamped, so you only get more color precision but clamped highlights and blacks. So: 32-bit - unclamped, full range, deep color depth; 16-bit - clamped, deep color depth.
Also, Jose Villa has a pretty recognizable look that is achieved by shooting Fuji 400H film in very overexposed conditions (+3 stops or so). I will take a look at whether it is somehow possible to achieve such color mapping from a 32-bit output.

So, let me see if I understood correctly. If I save a render as a 32-bit EXR image, is it possible to use a LUT file to give it a filmic look? Without tweaking gamma?
With a LUT you are just shifting colors in the way the LUT describes. There are many LUTs emulating the film look, so basically, if your raw output is close to what you would get from a real digital camera, you are going to get whatever look the LUT describes (let's say a film look).
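If it helps, this is roughly what "shifting colors the way the LUT describes" looks like in code: a minimal .cube reader and a nearest-neighbour lookup in Python/NumPy. Real tools interpolate trilinearly, and the input must already be display-referred 0-1, not raw linear render data.

Code:
import numpy as np

def load_cube(path):
    # Basic .cube parser: LUT_3D_SIZE header followed by N^3 RGB rows (red varies fastest)
    size, rows = 0, []
    for line in open(path):
        line = line.strip()
        if line.startswith("LUT_3D_SIZE"):
            size = int(line.split()[1])
        elif line and (line[0].isdigit() or line[0] == "-"):
            rows.append([float(v) for v in line.split()])
    return size, np.array(rows).reshape(size, size, size, 3)   # indexed [b, g, r]

def apply_lut(display_rgb, size, table):
    # Nearest-neighbour lookup; display_rgb must be 0-1 (already tone-mapped)
    idx = np.clip(np.round(display_rgb * (size - 1)).astype(int), 0, size - 1)
    return table[idx[..., 2], idx[..., 1], idx[..., 0]]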

2014-08-22, 01:07:06
Reply #12

Juraj

  • Active Users
  • **
  • Posts: 4813
    • View Profile
    • studio website


If you save your renders out as 16-bit half-float EXRs, you'll have the ability to grade your renders to achieve whatever look you require.

I find 16-bit clamping an issue: when I convert 32-bit to 16-bit, the image is already clamped, so you only get more color precision but clamped highlights and blacks. So: 32-bit - unclamped, full range, deep color depth; 16-bit - clamped, deep color depth.
Just a small correction: a 16-bit EXR (half floating point) is still a linear format and gets read as such by apps (i.e. if you open it in any compositor, it will treat it like 32-bit).
It captures about 30 stops, which is more than enough in 'most' cases. In those exceptional cases, it cuts the highlights above that (as Keymaster mentioned, the full intensity of the sun, for example, in Corona).
The benefit on the other side is smaller disk size, and in the case of really large renderings (8k) it also speeds up the grading (it's no fun having a PSD file where each layer amounts to multiple hundreds of megabytes).

16-bit .exr =/= 16-bit gamma 2.2 file (jpeg/tiff/png etc.). In Photoshop, it opens as 32-bit. PS only works linearly in this mode, so as soon as you switch to 16-bit mode, it will try to compress the file, either by HDR merge or, in CC, by the Camera Raw filter.
If you keep it as a "smart" file, you can go back to 32-bit mode with your layers unharmed, although only the layers that work in 32-bit (levels, but not curves) will work, and blending modes behave differently between linear and 2.2.
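The roughly 30-stop figure is easy to verify with NumPy; it is just the ratio between the largest and the smallest normal half-float value:

Code:
import numpy as np

half = np.finfo(np.float16)
largest, smallest = float(half.max), float(half.tiny)   # 65504.0 and ~6.1e-05
print(np.log2(largest / smallest))                      # ~30 stops of range in a half float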
Please follow my new Instagram for latest projects, tips&tricks, short video tutorials and free models
Behance  Probably best updated portfolio of my work
lysfaere.com Please check the new stuff!

2014-08-26, 02:56:20
Reply #13

JakubCech

  • Active Users
  • **
  • Posts: 126
  • jakubcech.net
    • View Profile
    • jakubcech