Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - Captain Obvious

Pages: [1] 2 3 ... 12
Captain, can you please elaborate on what you mean by different parameters? If you observe it closely (from the attached rar) you can see Reinhard RGB is painting the car white (a nicely adjusted color cast IMO) whereas Luminance Reinhard + Corona leave it somewhat blueish.
MODO's Reinhard RGB produces results identical to Corona's Highlight Compression, but uses a different input parameter to do so.

A tonemapping amount of 25% is equivalent to a highlight compression of about 1.155; 50% is 1.414, 75% is exactly 2, and 95% is about 4.472.

The only difference is that a tonemapping amount of 100% corresponds to a highlight compression of infinity.

The Reinhard RGB tone mapper and Corona's highlight compression are essentially the same operator: they produce identical results, just for different parameter values.
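The listed pairs of values are consistent with the guess that highlight compression equals 1/√(1 − amount). This is an inference from the numbers quoted above, not a documented Corona or MODO formula, so treat it as a sketch:

```python
import math

def compression_from_amount(amount):
    """Map a tonemapping amount (0..1) to the equivalent highlight
    compression value, assuming compression = 1 / sqrt(1 - amount).
    This mapping is inferred from the quoted value pairs, not official."""
    if amount >= 1.0:
        return math.inf  # 100% maps to infinite compression
    return 1.0 / math.sqrt(1.0 - amount)

for a in (0.25, 0.5, 0.75, 0.95):
    print(f"{a:.0%} -> {compression_from_amount(a):.3f}")
```

Running this reproduces the quoted values (≈1.155, 1.414, 2, ≈4.472), and it also matches the 100% → infinity edge case.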

[Max] I need help! / Re: DoF & MB noise
« on: 2014-12-12, 21:40:41 »
One relatively simple way of dealing with stuff like this is to render at double resolution, run denoising there and then scale down. This typically gives much better results, because you get more leverage for denoising without detail loss. When rendering an animation you can use video denoising plugins like NeatVideo or DE:Noise, both of which usually work quite well. You can also denoise via motion vectors, but you'll need Fusion or Nuke or something similar to pull that off.
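The supersample-then-denoise idea can be sketched in a few lines of NumPy. The "denoiser" here is just a box blur standing in for a real tool like NeatVideo or DE:Noise; the point is only the order of operations (denoise at 2x, then average down):

```python
import numpy as np

def denoise_then_downscale(img2x):
    """Toy illustration: denoise at 2x resolution, then average each
    2x2 block down to the target resolution. The 3x3 box blur is a
    stand-in for a real denoiser, not a recommendation."""
    h, w = img2x.shape
    padded = np.pad(img2x, 1, mode='edge')
    # crude "denoise": 3x3 box blur at the high resolution
    blurred = sum(padded[dy:dy + h, dx:dx + w]
                  for dy in range(3) for dx in range(3)) / 9.0
    # downscale: average each 2x2 block
    return blurred.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

noisy = np.random.rand(8, 8)           # pretend 2x-resolution render
small = denoise_then_downscale(noisy)  # 4x4 final image
```

Because the blur runs before the downscale, each output pixel is built from many more high-resolution samples than a straight half-resolution render would have.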

I've been using GGX and GRT lately, and I really like both.

General CG Discussion / Re: VRay 3.0 SP1
« on: 2014-12-11, 19:52:15 »
I remember when I started (1999-2000) with Final Render... it was so complex! And V-Ray was so easy, just a few parameters (compared to FR). But now they are exaggerating with settings: min, max, AA, mathematical calculations to set subdivs correctly, checking render elements to see where the problems are, etc. In fact, new scripts were born to simplify it (SolidRocks). And now V-Ray needs a Quick toolbar to simplify render setup... Hmm... this is why I started to use Corona. I want to spend my time creating images, not tweaking the render engine.
I wrote a tool for modo that fills the same basic need. Instead of balancing loads of different settings it just has a quality slider from 0% to 100%. It sold reasonably well. :-)

General CG Discussion / Re: VRay 3.0 SP1
« on: 2014-12-11, 14:22:39 »
Ah, that makes sense. Sort of.

One thing worth noting is that Corona seems to be much more efficient at dealing with heavy point instancing (like forests and such). It's only really noticeable if you render on the order of millions of instances, but still.

I understand that out-of-core geometry can be tricky, but some kind of bitmap paging system (a native one) could be really beneficial, especially for big, heavy HDRIs. You can use things like tiled OpenEXR files and load/cache only the bits needed, as and when they're needed. That helps a lot.

General CG Discussion / Re: VRay 3.0 SP1
« on: 2014-12-11, 12:00:58 »
I am a bit wary of new methods since their whole Embree and Probabilistic failure. Embree is unusable since it disables (without any threshold) every instance; what do I have, 256 GB of RAM? It's only usable for those who render boxes.
Probabilistic shaves off time... but creates a visually wrong look (often a completely different intensity) and introduces new noise.
Is that a Max-specific issue? The modo version of V-Ray uses Embree by default and that supports instancing. I've never used V-Ray in Softimage or Maya.

General CG Discussion / Re: (un)bias mode
« on: 2014-12-07, 00:33:27 »
Excellent writeup... the only small problem is that the "rolling every 5th die and interpolating" approach is actually probably unbiased (without the rounding).
Yes, that would still actually be unbiased in this particular example. You'd just get a lower quality result.

General CG Discussion / Re: (un)bias mode
« on: 2014-12-05, 15:10:38 »
The word "bias," as it applies to 3D rendering, has more or less lost its meaning. The original definition is basically statistical. Here are some analogies:

Imagine that you're rolling dice. Thousands of them. They're all six-sided and perfectly weighted. Given enough rolls, you will come to the correct average (3.5). However, imagine that you re-roll all ones. The likelihood of getting a one is 1/6th. If you re-roll it, the likelihood of getting another one is also 1/6th, which means the likelihood of rolling an actual one is just 1/36th. The average now becomes ~3.917 instead of 3.5. Your die rolling has become biased because you're selectively re-rolling. This is more or less what happens when you use adaptive sampling (like the adaptive DMC sampler in V-Ray).
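The re-roll bias above is easy to check with a quick Monte Carlo simulation. This is a toy sketch of the analogy, not renderer code:

```python
import random

def roll(reroll_ones=False):
    """One six-sided die roll; optionally re-roll a one a single time."""
    r = random.randint(1, 6)
    if reroll_ones and r == 1:
        r = random.randint(1, 6)  # the selective re-roll is what biases the estimate
    return r

random.seed(0)
n = 200_000
fair_mean = sum(roll() for _ in range(n)) / n
biased_mean = sum(roll(reroll_ones=True) for _ in range(n)) / n
# fair_mean converges to 3.5; biased_mean converges to 141/36 (~3.917)
print(fair_mean, biased_mean)
```

The biased mean lands near 3.917 exactly as worked out above: the probability of a final one drops to 1/36, and that probability mass shifts onto the other faces.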

Imagine that you're rolling dice again. You've got thousands of die rolls to make and write down in a book, but it's taking you much too long. Let's say you've got 10 000 die rolls to make. Instead of actually rolling 10 000 dice, you roll every 5th for a total of 2000 die rolls. For the rolls you didn't make, you just interpolate between the surrounding rolls. You start by rolling a five and write that down on a line, then you skip four rows ahead and roll another die. It lands on a three. The four in-between rows you now interpolate based on the five and the three: the second line would be 4.6, the third would be 4.2, etcetera. Obviously, you can't actually roll a 4.2, so you round these numbers off to the nearest integer (whole number). Rounded off, the in-between rows would be 5, 4, 4, 3. When the data is all entered, you have all 10 000 rows but you only made 2000 rolls.

This is effectively what methods like irradiance caching (and HD Caching in Corona) do. By interpolating between existing data points you generate something that might look correct, but won't be. This is biased because there's no way of being certain how large the error is. If irradiance caching misses small details, it's because the engine didn't know that there were supposed to be details there in the first place.
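The bookkeeping trick above can be sketched directly. Again, this is a toy stand-in for irradiance-cache style interpolation, not actual renderer code:

```python
import random

def interpolated_rolls(total=10_000, step=5):
    """Roll only every step-th die, then linearly interpolate (and
    round) the rows in between, as in the example above."""
    anchors = [random.randint(1, 6) for _ in range(total // step + 1)]
    rows = []
    for i in range(len(anchors) - 1):
        a, b = anchors[i], anchors[i + 1]
        for k in range(step):
            # k == 0 is a real roll; the rest are interpolated guesses
            rows.append(round(a + (b - a) * k / step))
    return rows[:total]

random.seed(1)
rows = interpolated_rolls()  # 10 000 rows from only ~2000 real rolls
```

For anchors of 5 and 3, the in-between rows come out as 5, 4, 4, 3, matching the worked example; the book looks complete, but 80% of the entries were never actually rolled, and there's no way to bound how wrong any individual guess is.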

When people say "unbiased engine" what they usually mean is an engine that uses brute force path tracing with plausible lighting and shading models, capable of generating advanced lighting phenomena (such as complex caustics). Strictly speaking, none of this is related to bias. It is entirely possible to make a render engine that fulfils all those requirements while still having bias, and vice-versa.

If someone says that Corona is "slightly biased," they likely refer to the fact that Corona largely behaves in a very predictable way and lacks the usual problems associated with biased rendering methods (irradiance cache flickering and detail loss, for example). It doesn't require you to mess around much with quality settings. There are no adaptive noise thresholds to set. It behaves more like Octane or Maxwell, while still having the speed of a "more biased" engine like V-Ray.

[Max] I need help! / Re: DOF affecting backplate
« on: 2014-12-04, 15:22:28 »
It's not a bug, as such. But there does need to be an option for it. Whether you want the background affected by DOF depends on the setup. Sometimes you do, sometimes you don't.

The Titan Z is actually two video cards in a single package. It won't help with anything in 3D except GPU rendering (Octane, Redshift, etc.). Corona does not use the GPU for anything, and the Titan Z will not be better for viewports than a single Titan. Just buy a GeForce 970 instead; a Titan Z is a waste of money unless you do GPU rendering (or gaming, I guess).

Sure, that's why you'll want a clearly worded contract for bigger jobs.

The only lawsuits I've ever heard about are ones where clients have refused to pay. That said, with product and arch viz it's often very important not to leak any information or imagery. If you're doing product viz of, I dunno, the new PlayStation or whatever and the images end up on the internet, I can imagine that Sony would consider legal action...

Um, you're welcome?
