Author Topic: Human-Based reality - We can see into the light  (Read 817 times)

2021-09-24, 20:40:46

Alex Abarca

  • Active Users
  • **
  • Posts: 373
  • Corona Certified Instructor
    • View Profile
    • www.alexabarcaviz.wordpress.com
Hi Corona team,

Can you add a function to the Modify panel properties that would enable the user to set values/parameters for the "visible directly" option?

In reality, a person can look directly into a light source and see the components of the bulb, for example the housing, filament, gas, and LED chips. For the most part, renderings burn out the light source, and all we see is a burned circle in the image, which takes away from human-based realism.

I know there are methods for doing this, such as using two light sources: 1. one light to illuminate your scene, and 2. another to light up the light housing or the source itself, either through a CoronaLight or a CoronaLightMtl with lower intensity values.

But what if all the properties were encapsulated in one light, under its properties?

2021-09-28, 17:59:18
Reply #1

maru

  • Corona Team
  • Active Users
  • ****
  • Posts: 10956
  • Marcin
    • View Profile
To be honest, I am totally confused by this feature request. I would need to understand it better.

Let's go one by one:

Quote
Can you add a function to the Modify panel properties that would enable the user to set values/parameters for the "visible directly" option?
Add a function to what? What are we modifying? A Corona Light?

Quote
In reality, a person can look directly into a light source and see the components of the bulb, for example the housing, filament, gas, and LED chips.
This mostly depends on how bright the light source is.

Quote
For the most part, renderings burn out the light source, and all we see is a burned circle in the image, which takes away from human-based realism.
This depends on the tone mapping / post-processing.
If you are looking at a light source with your eye - you will most likely see the details, yes.
If you take a photo of the same light using a cheap camera - you will most likely see a white spot.
If you take a photo of the same light source with a better camera - you will most likely see details.
If you take a photo with a pretty bad camera, but save your image to RAW format - you will most likely be able to extract some details from the light source because you captured enough data and you can process your image in a way that will enable you to see the details.
When you are rendering, you can save your image in a 32-bit format and capture waaaay more data than any digital camera can capture. Then you can play around with tone mapping / post-processing settings in the VFB or in a third-party app to extract all the details you'd like.
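The difference between clipping and tone mapping described above can be sketched numerically. This is a minimal illustration, not Corona's actual tone-mapping pipeline: the pixel values are made-up, and simple Reinhard compression stands in for whatever operators the VFB applies.

```python
import numpy as np

# Hypothetical scene-referred (linear, float) pixel values around a bulb:
# the filament is far brighter than the housing and glass near it.
hdr = np.array([0.8, 1.5, 40.0, 250.0])  # housing, glass, glow, filament

# Naive display clipping: everything above 1.0 burns out to white,
# so the glow and the filament become indistinguishable.
clipped = np.clip(hdr, 0.0, 1.0)

# Simple Reinhard tone mapping, L / (1 + L), compresses the highlights
# smoothly, so every bright value stays distinct after mapping.
tone_mapped = hdr / (1.0 + hdr)

print(clipped)      # glow and filament both clip to 1.0
print(tone_mapped)  # all four values remain distinct and below 1.0
```

Because the 32-bit render keeps the full range of `hdr`, the choice between these two outcomes can be made after rendering, in post.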

Quote
I know there are methods for doing this, such as using two light sources: 1. one light to illuminate your scene, and 2. another to light up the light housing or the source itself, either through a CoronaLight or a CoronaLightMtl with lower intensity values.
This is mostly related to the fact that modelling a light bulb filament and making it generate light would be much harder to sample than using a simple disc object. The brightness of the filament vs the brightness of the light emitter should not be the deciding factor here if we want to set up the scene in a physically plausible way. If we want to keep everything as physical as possible, we should make the filament as bright as in real life and the light emitter as bright as in real life, even if we are using this split-object method.
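The "as bright as in real life" point above can be made concrete with a back-of-the-envelope calculation: for the same total lumens, a tiny filament must have a vastly higher luminance than a larger stand-in emitter. All the numbers below are assumed example values, and the formula is the standard one for an ideal diffuse (Lambertian) emitter.

```python
import math

# Hypothetical 800 lm bulb, modelled two ways (all figures are examples):
flux_lm = 800.0

# 1) The bare filament emits the light: a few square millimetres of area.
filament_area_m2 = 3e-6   # assumed value

# 2) A simple disc/sphere stand-in the size of the frosted globe.
globe_area_m2 = 0.01      # assumed value

def lambertian_luminance(flux_lm, area_m2):
    """Luminance (cd/m^2) of an ideal Lambertian emitter:
    exitance M = flux / area, luminance L = M / pi."""
    return flux_lm / (area_m2 * math.pi)

print(lambertian_luminance(flux_lm, filament_area_m2))  # ~8.5e7 cd/m^2
print(lambertian_luminance(flux_lm, globe_area_m2))     # ~2.5e4 cd/m^2
```

The ratio is simply the inverse of the area ratio, which is why a physically set-up filament renders several orders of magnitude brighter per unit area than a large emitter, and why tone mapping, not emitter intensity, decides whether its detail survives in the final image.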

Quote
But what if all the properties were encapsulated in one light, under its properties?
What exact properties do you mean?

Thanks in advance for your replies, and please keep in mind that I am not arguing, just trying to understand the idea better.