Chaos Corona Forum
General Category => General CG Discussion => Topic started by: Juraj on 2015-10-31, 02:31:51
-
Anyone checking on this? I am absolutely impressed.
http://forums.chaosgroup.com/showthread.php?84980-Altus-denoiser/page6
1 min render (need two of them)
(http://forums.chaosgroup.com/attachment.php?attachmentid=26677&d=1446048580)
1 minute Altus pass and voila
(http://forums.chaosgroup.com/attachment.php?attachmentid=26679&d=1446048609)
LeLe's test. 30 second render, 18 second Altus pass:
(http://forums.chaosgroup.com/attachment.php?attachmentid=26705&d=1446221659)
Another of Vlado's scenes:
(http://forums.chaosgroup.com/attachment.php?attachmentid=26681&d=1446049631)
(http://forums.chaosgroup.com/attachment.php?attachmentid=26682&d=1446049643)
(http://cgpress.org/wp-content/uploads/2015/10/altus-vray2.jpg)
-
I saw Vlado doing some tests last Wednesday, amazing stuff! I just love it when Lele joins the threads :)
I feel the renders get that painterly feel, wonder if that ever goes away.
-
I feel the renders get that painterly feel, wonder if that ever goes away.
95 percent of the renders in that thread have only a few passes or 2 AA subdivs. There is literally zero information to hold on to; the fact that they can still get something out of it, at this quality, is pretty bomb.
I also wonder if Altus is adjustable, or if it has to do 100-percent denoising all the time, which can look a bit unnatural even with perfectly preserved AA.
I find the artifacts less severe than even the slightest denoise in Photoshop (which does shit anyway, since denoising noise only creates blurred noise..."wonderful" look)
-
I find the artifacts less severe than even the slightest denoise in Photoshop
I'm with you all the way :)
-
To have that kind of technology built in would be fantastic
-
Impressive stuff.
-
Nice. Someone should do a proper test at 4K resolution at least, with detailed textures, and compare it with a normal render. Either way, for animations it should be a bomb.
-
The software is already out, and it supports Corona, so someone should give it a try http://www.innobright.com/
-
I'm SHOCKED! O_O
-
I'd love to test this thing. I registered and tried to log in via several different browsers, but it just doesn't work here - bummer.
-
Downloaded here, will give it a go later
-
Which render elements do I need from Corona for this?
Albedo
Beauty
World Position
Those are clear.
But what are these?
VIS?
NOR - what kind of normal map?
CAU (Caustics) - where is that in the Elements?
-
The required AOVs are as follows:
RGB: The beauty render
ALB: Albedo, i.e. the unshaded texture AOV
VIS: Visibility of the geometry to the lights; a shadow AOV is suitable for this
NOR: Forward-facing normals, with bump map preservation
POS: World position
CAU: Caustics AOV
That's from the PDF. Not sure how we get the caustics.
-
OK,
yes, I also read the PDF.
But what exactly are these called in the Corona render elements?
-
Just did a small test:
Just 4 passes
GI/AA: 24
MSI: 10
You need to unlock the sampling pattern
Elements: Beauty, CGeometry_NormalsShading, CShading_SourceColor (Albedo?!), CShading_Shadows (VIS?!)
(Not 100% sure if these are the best passes for the AOVs)
Render time: 1:40 min per image
Filtering time: 43 sec
Edit: I tried to process a 4960x3508 image, but there is not enough RAM on my GTX 970 and the CPU mode does not work for me...
-
There are blur artifacts on the white lighting on the ceiling and on the highlights.
Maybe you used the wrong elements?
Are you using the standalone or the Maya version?
From 3ds Max, do I need the command line to launch this application?
Please describe the full workflow for me.
-
There are blur artifacts on the white lighting on the ceiling and on the highlights.
Maybe you used the wrong elements?
Are you using the standalone or the Maya version?
From 3ds Max, do I need the command line to launch this application?
Please describe the full workflow for me.
yeah, there are artefacts, but the input images are reaaally noisy...
I do not know if these Elements are the best to use.
I am using the Altus Standalone via the command line.
in my case the command looks like this:
altus.exe -r 10 -i "text" -o "C:\InnoBright\Altus\bin\output" ^
  -b "C:\InnoBright\Altus\bin\input\b0_.exr" -b "C:\InnoBright\Altus\bin\input\b1_.exr" ^
  -a "C:\InnoBright\Altus\bin\input\b0_CShading_SourceColor.exr" -a "C:\InnoBright\Altus\bin\input\b1_CShading_SourceColor.exr" ^
  -n "C:\InnoBright\Altus\bin\input\b0_CGeometry_NormalsShading.exr" -n "C:\InnoBright\Altus\bin\input\b1_CGeometry_NormalsShading.exr" ^
  -v "C:\InnoBright\Altus\bin\input\b0_CShading_Shadows.exr" -v "C:\InnoBright\Altus\bin\input\b1_CShading_Shadows.exr" ^
  -g
-
Thanks a lot!!!
I will also test this.
Thanks!
-
There are blur artifacts on the white lighting on the ceiling and on the highlights.
Maybe you used the wrong elements?
Or just limitation of the algorithm. These things are not all-powerful - they cannot manufacture new information not present in the image. They can only blur the image and hope for the best ;)
-
Just did a small test:
Just 4 passes
GI/AA: 24
MSI: 10
You need to unlock the sampling pattern
Elements: Beauty, CGeometry_NormalsShading, CShading_SourceColor (Albedo?!), CShading_Shadows (VIS?!)
(Not 100% sure if these are the best passes for the AOVs)
Render time: 1:40 min per image
Filtering time: 43 sec
Edit: I tried to process a 4960x3508 image, but there is not enough RAM on my GTX 970 and the CPU mode does not work for me...
This looks glorious for 4 fucking passes on the first try.
I have some scenes that look almost correct between 100-200 passes (which can be up to 10 hours on my dual Xeon!!), but even 1000+ passes (tried it... yes, a few days) won't clear them completely. A prime candidate for adaptivity, but even then it would take an almost endless time and still be noisy, I believe. This is the kind of thing that can push it further into a clean look.
I don't believe the artifacts will be too noticeable (or noticeable at all) if enough passes are done at a high enough resolution (4K px+), which should still yield enough of a speed-up.
-
yes, it looks good. I am quite curious how it will compare with our own adaptivity solution. I guess we will see in November ;)
-
An adaptivity solution built into Corona is still better than a third-party development,
at least for economic reasons!!!
And what about animations with 300 frames???
After rendering the sequence, do you have to render it a second time???
Plus post-process it?
-
What's up with this vs adaptivity ?
(http://i.kinja-img.com/gawker-media/image/upload/aoz8kgx8pzknypz7z38n.jpg)
-
I can imagine this eventually being the perfect thing for frame-buffer integration. As odd as requiring two renders with different noise patterns is, that can be done internally at the same time, without any overhead, using distributed rendering. And it can be automated internally.
Animation can be batch processed; you can probably already do it using software like Deadline/Shotgun and some scripting skill. Although we don't know what kind of artifacts this brings up in animation, I presume it's quite doable, since Disney has similar tech that is primarily animation oriented. If it's good enough for them, it's good enough for me. I mean, it's in its infancy, so why discount it in favour of something else? It looks pretty damn promising to me even now.
Imho, adaptivity has the potential to speed up renders by perhaps 2x? That's a 50-percent cost saving on an animation budget. This? This can save up to 90 percent of the budget if you compromise on visual quality, which many would gladly do for animation, where costs easily reach thousands of euros for a few minutes of footage.
And I am not even talking about super cheap pre-viz. Even if this tool weren't suitable for final output, it would be a godsend for test renders/animations.
Also, adaptivity primarily helps with non-uniform noise distribution in a scene (stuck noise). But what about simply heavy scenes with pretty uniform noise? Those simply need the sampling done, nothing else. Noise reduction is a pretty universal, renderer-agnostic need and wish of any user. There is definitely a place for a tool like this.
I'm looking forward to seeing where this goes :- )
-
I can imagine this eventually being the perfect thing for frame-buffer integration. As odd as requiring two renders with different noise patterns is, that can be done internally at the same time, without any overhead, using distributed rendering. And it can be automated internally.
Our prototype has this indeed automated.
Generally, the hard part is the same in both cases: detecting where the noise is in the image. Once you have a noise heatmap, the next step is pretty trivial either way - either you throw more samples at the region, or you blur it more (yes, even the most advanced denoising basically just blurs the image with a varying blur radius). This means that if one approach did not work in a particular scene, the other one is unlikely to work either.
We have the filtering done, and now we are trying our luck with adaptivity. We will probably release both in the daily builds soon, but only adaptivity will be turned on by default (or maybe always-on), because it is so much simpler to use. There is no reason not to use it. The same is not true for filtering.
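To make the "varying blur radius" point concrete, here is a minimal NumPy sketch of the idea: given an image and a per-pixel noise heatmap (here just assumed as an input; real denoisers estimate it from the render and its AOVs), blend toward a blurred copy only where the noise estimate is high. This is not Altus's or Corona's actual algorithm, just an illustration of the principle.

```python
import numpy as np

def box_blur(img, radius):
    """Separable box blur of a 2D array using cumulative sums (edge-padded)."""
    if radius == 0:
        return img.copy()
    padded = np.pad(img, radius, mode="edge")
    k = 2 * radius + 1
    # vertical pass: mean of each k-row window
    csum = np.cumsum(padded, axis=0)
    v = (csum[k - 1:] - np.concatenate(
        [np.zeros((1, padded.shape[1])), csum[:-k]], axis=0)) / k
    # horizontal pass: mean of each k-column window
    csum = np.cumsum(v, axis=1)
    return (csum[:, k - 1:] - np.concatenate(
        [np.zeros((v.shape[0], 1)), csum[:, :-k]], axis=1)) / k

def denoise(img, noise_map, max_radius=2, threshold=0.5):
    """Blend between the original and a blurred copy, driven by a per-pixel
    noise estimate: noisy regions get the blurred value, clean regions keep
    the original pixel. An adaptive sampler would use the same noise_map to
    decide where to spend extra samples instead."""
    blurred = box_blur(img, max_radius)
    weight = np.clip(noise_map / threshold, 0.0, 1.0)
    return img * (1.0 - weight) + blurred * weight
```

Note how the heatmap is the expensive, shared ingredient: once you have it, "blur more here" and "sample more here" are both one line of follow-up.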
-
[Adaptivity] There is no reason not to use it. The same is not true for filtering.
Completely agree with both statements :- )
-
One big problem.
What about post-processing???
If you want to process your image with reflection/refraction/wire/indirect illumination channels in Photoshop/Nuke/After Effects and other software, you need CLEAN CHANNELS for that, without noise.
What would you do in that case???
-
Exciting times. We are almost at the point where we can render 'reality' in the same time it took a Polaroid to develop in the '70s.
-
We have the filtering done, and now we are trying our luck with adaptivity. We will probably release both in the daily builds soon, but only adaptivity will be turned on by default (or maybe always-on), because it is so much simpler to use. There is no reason not to use it. The same is not true for filtering.
Excuse me, I may be a bit slow, but what is this filtering you are talking about?
-
Excuse me, I may be a bit slow, but what is this filtering you are talking about?
our internal prototype. Will be made public in a month or two
-
our internal prototype. Will be made public in a month or two
It's not image filtering as in AA, I suppose?
(http://imageshack.com/a/img910/1104/R3wuc9.jpg)
-
Just did a small test:
Just 4 passes
GI/AA: 24
MSI: 10
You need to unlock the sampling pattern
Elements: Beauty, CGeometry_NormalsShading, CShading_SourceColor (Albedo?!), CShading_Shadows (VIS?!)
(Not 100% sure if these are the best passes for the AOVs)
Render time: 1:40 min per image
Filtering time: 43 sec
Edit: I tried to process a 4960x3508 image, but there is not enough RAM on my GTX 970 and the CPU mode does not work for me...
You forgot about the world position pass...
CPU mode works here, but it seems much slower (I just commented out "gpu=" in the cfg file). It took approx. 10+ minutes to process a 3000x3000 px image and required over 7.5 GB of RAM. I guess that's why you couldn't filter the high-res image on a GTX 970.
I think it's too early to judge. But if they come out with a Max+Corona integration sooner than Ondra's integrated solution, and it doesn't cost an arm and a leg, I'd give it a go.
But then again, it's November already :D and once Corona has its own adaptivity and noise filtering...
It would be nice of Ondra to tease us with some numbers, if any early tests have been done, so we know whether it's even worth thinking about this Innobright thing :)
-
You forgot about the world position pass...
CPU mode works here, but it seems much slower (I just commented out "gpu=" in the cfg file). It took approx. 10+ minutes to process a 3000x3000 px image and required over 7.5 GB of RAM. I guess that's why you couldn't filter the high-res image on a GTX 970.
I think it's too early to judge. But if they come out with a Max+Corona integration sooner than Ondra's integrated solution, and it doesn't cost an arm and a leg, I'd give it a go.
But then again, it's November already :D and once Corona has its own adaptivity and noise filtering...
It would be nice of Ondra to tease us with some numbers, if any early tests have been done, so we know whether it's even worth thinking about this Innobright thing :)
I read somewhere that someone had strange results using a world position pass, but I'll give it a try :)
Has anyone figured out the correct command-line syntax to process an animation? The Altus command help describes the cmd-line options, but it's not working for me :/
And yeah, I'm also reaaaaaally excited about Ondra's solution!
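Until the official batch syntax is clear, one workaround is to call the standalone once per frame from a script. A hypothetical Python sketch follows; the b0_/b1_ prefixes, the zero-padded frame suffix, and the directory layout are assumptions modelled on the single-frame command posted earlier in this thread, so adjust them to your own file naming:

```python
import os
import subprocess

def altus_cmd(frame, in_dir, out_dir, exe="altus.exe"):
    """Build the Altus standalone command line for one frame.
    Assumes each AOV was rendered twice (b0_/b1_ prefixes, two renders
    with different noise patterns) and saved with a 4-digit frame suffix."""
    def pair(flag, suffix):
        # every AOV flag is passed twice, once per half-sample render
        args = []
        for prefix in ("b0_", "b1_"):
            args += [flag, os.path.join(in_dir, f"{prefix}{suffix}{frame:04d}.exr")]
        return args

    cmd = [exe, "-r", "10", "-o", os.path.join(out_dir, f"frame{frame:04d}")]
    cmd += pair("-b", "")                           # RGB beauty
    cmd += pair("-a", "CShading_SourceColor.")      # albedo
    cmd += pair("-n", "CGeometry_NormalsShading.")  # normals
    cmd += pair("-v", "CShading_Shadows.")          # visibility
    return cmd

# process a 300-frame sequence one frame at a time:
# for frame in range(300):
#     subprocess.run(altus_cmd(frame, "input", "output"), check=True)
```

A render farm manager like Deadline could run the same per-frame command as a post-render task instead of the loop.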
-
yes, it looks good. I am quite curious how it will compare with our own adaptivity solution. I guess we will see in November ;)
Corona Renderer is always in step with the times... staff & community, it is a futuristic movement!
-
I received a letter from InnoBright:
As long as you can tell your render to generate the AOVs that standalone version needs (as the readme says), Altus denoiser will work. In case of VRay, we provide a Maya python script that allows you generate the AOVs from Maya GUI.
We are working on developing similar script for Corona and hope to release that soon.
Hope that helps.
Thanks,
Raghu
I did some tests with Altus
and found that Altus absolutely does not remove the noise from DOF in Corona,
which is a very big disadvantage.
-
I did some tests with Altus
and found that Altus absolutely does not remove the noise from DOF in Corona,
which is a very big disadvantage.
I was about to ask about DOF and MB. Care to share your results?
-
Here are the tests.
In the test with DOF you can see that the noise does not disappear, especially
on the edges of objects.
-
Hello denisgo22,
Is there a difference of only seconds between one image and the other?
Were the images calculated with the same render time?
thx
-
Came across this yesterday, and I am floored! It is awesome!
Did some tests - and honestly: if there is any way to integrate this into Corona: awesome.
This together with built-in adaptivity... render time would literally disappear, and we all know how much render time sucks.
-
Please check innobright.com/documentation/ for a How to guide for using Altus Standalone with Corona Renderer.
-
You're the most creative spam bot of the day.
-
I tried to use Altus, but the only result is a very small file (less than 100 KB) named "output.exr_flt.exr", and it is a completely black image... (?)
In the command prompt I had no warnings or errors.
What am I doing wrong?
Can someone help me?
Thank you
Bye
Andrea
-
^I'm getting the same problem. The strange thing is, if I open the passes in Photoshop they all look bright white or completely black, but if I look at them through the Max save-file dialog they all look fine. Altus outputs a completely black image (bar the watermarking).