Author Topic: Hall of shame: nVidia  (Read 11677 times)

2014-07-23, 08:54:20

Ludvik Koutny

  • VIP
  • Active Users
  • ***
  • Posts: 2557
  • Just another user
    • View Profile
    • My Portfolio
Hi,

Recently I participated in a discussion about the new nVidia GPU-based GI solution. While most of the mental ray fanboys, including staff, claimed the solution is brute force, it turned out it does some interpolation, and I caught it red-handed. Once I did, and presented several pages of verifiable data, I was banned, all my posts were edited by an admin to imply something different, and the data proving the GPU GI indeed does interpolation was deleted.

http://forum.nvidia-arc.com/showthread.php?12992-Maya-2015-and-mental-ray-GI-GPU-prototype

Here is the link to the thread, so have fun (even though most of the relevant data has already been deleted), and remember to stay away from nVidia and their dirty lies.

BTW: It was clear that the CPU-only Corona easily beats the new GPU GI rendering on both CPU and GPU :D
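For anyone wondering what the fuss is about: a brute-force GI integrator estimates indirect light independently at every shading point, so its error shows up as high-frequency noise, while an interpolating scheme (e.g. an irradiance cache) evaluates only a few points and blends between them, so its error shows up as smooth, low-frequency splotches. Here is a minimal, hypothetical 1D sketch of that difference (the scene function, sample counts, and cache layout are all made up for illustration; this has nothing to do with nVidia's or mental ray's actual code):

```python
import random

def true_irradiance(x):
    # Stand-in for the exact indirect lighting at point x (hypothetical scene).
    return 1.0 + 0.5 * (x * x)

def brute_force(x, samples=64, rng=None):
    # Brute force: estimate irradiance at every point independently with
    # Monte Carlo samples; the error shows up as per-pixel noise.
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(samples):
        total += true_irradiance(x) + rng.gauss(0.0, 0.2)  # noisy sample
    return total / samples

def make_cache(n=5):
    # Interpolation: evaluate irradiance only at a few cache points...
    xs = [i / (n - 1) for i in range(n)]
    return [(x, true_irradiance(x)) for x in xs]

def interpolated(x, cache):
    # ...and linearly blend between them; the error is a smooth,
    # low-frequency bias -- the "splotches" visible in the renders.
    for (x0, v0), (x1, v1) in zip(cache, cache[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return v0 + t * (v1 - v0)
    return cache[-1][1]

cache = make_cache()
x = 0.37
print(brute_force(x), interpolated(x, cache), true_irradiance(x))
```

Run it a few times with different seeds: the brute-force estimate jitters around the true value, while the interpolated one is perfectly repeatable but consistently wrong between cache points. That smooth, structured error is exactly what you can spot in the posted images, and it is why "it has presampling options but is still brute force" does not add up.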
« Last Edit: 2014-07-23, 08:58:50 by Rawalanche »

2014-07-23, 09:29:51
Reply #1

maru

  • Corona Team
  • Active Users
  • ****
  • Posts: 12781
  • Marcin
    • View Profile
Quote
The new GI technique has some options with "presampling" in their names, but it is not about approximating the GI solution. Not like an irradiance cache, or a final gather map, or photon map. (The GI solution is brute force sampling throughout its calculation.) Those cryptic names are about describing the material interaction behavior to the renderer as a model. One way to think of it is like a texture map. A texture map could represent the material well at a chosen resolution, or it might not, if it is too coarse. Another way of thinking about it is as if the BSDF were measured across an object and re-used. Keep in mind these are metaphors I'm using to describe what happens without giving you the implementation details which may adapt to machine architectures over time. We do this because mental ray allows extreme flexibility in shaders. The idea is that we retain this flexibility while taking advantage of where machines are headed, both for CPU and GPU.

It is exciting as we head toward flexibility without compromise in performance. In fact, as we continue in this direction, we are beginning to see great potential for addressing a variety of GI related issues. And even our initial ideas for Unified GI UI control become simpler.
Oh god.

And censoring posts. Really. This is so childish.
Marcin Miodek | chaos-corona.com
3D Support Team Lead - Corona | contact us

2014-07-23, 09:42:24
Reply #2

Ludvik Koutny

  • VIP
  • Active Users
  • ***
  • Posts: 2557
  • Just another user
    • View Profile
    • My Portfolio
Actually, I was banned just because I said he was not being honest and tried to cover the whole thing up with an abstract wall of text nitpicking over terminology. I would expect that from MR fanboys, but not from the staff themselves :D

2014-07-23, 10:34:47
Reply #3

lacilaci

  • Active Users
  • **
  • Posts: 749
    • View Profile
I'm surprised they want to call it brute force. Really, nothing more appealing? Is that enough to sell the idea?
I would go with: "brute force ultra real so much faster than cpu that it will make all your balls and boxes rendered within a split second just get some more GPUS you fool"

You know, if nVidia is to develop mental ray, then it has to have some GPU-based thing going on at some point, no matter how ugly or stupid or wrong. It's gotta sell moar GPUs or gtfo :D

So all they're left with is defending this sh*t, no matter how stupid those arguments get... :D

2014-07-24, 23:37:35
Reply #4

Alex Abarca

  • Active Users
  • **
  • Posts: 422
  • Corona Certified Instructor
    • View Profile
    • Instagram
MR is trash. In the 2013 version it's unable to make an FGM file across animation frames... WHY? Because the FGM file becomes too big (in the gigs) for the render file to even open it. What happens: it just doesn't render the file.

I always have to save down to 2011 with Backburner 2008. I've learned to work my way around the piece of s***, but I do want to step away from it.

2014-07-25, 01:43:59
Reply #5

cecofuli

  • Active Users
  • **
  • Posts: 1577
    • View Profile
    • www.francescolegrenzi.com
I haven't had time to read all 9 pages, but if you look at this image, you can clearly see splotches on the floor.
It reminds me of the good old problem in VRay. I would like to see 2 seconds of camera movement in MR ;-)
But OK, they have to admit it: MR is dead. And not just now, but as of many years ago. I think around 2006 a lot of MR users switched over to VRay.
And when Corona is able to offer:

Adaptivity
Blend shader
Good SSS
Skin shader
Hair
Flicker-free GI animation for animated objects and lights
Volumetric elements
Corona-RT
Texture baking
FumeFX and some other important plug-ins

we can say bye-bye to VRay =)

2014-07-25, 02:00:35
Reply #6

cecofuli

  • Active Users
  • **
  • Posts: 1577
    • View Profile
    • www.francescolegrenzi.com
See my comparison. Even a blind man can see how approximate the GI in MR is!

« Last Edit: 2014-07-25, 02:04:56 by cecofuli »

2014-07-25, 02:13:23
Reply #7

cecofuli

  • Active Users
  • **
  • Posts: 1577
    • View Profile
    • www.francescolegrenzi.com
RAW files. i7 970 @ 3.9 GHz. About 20-25 minutes (PT+HDCache) (I don't remember exactly).
I would like to see the same render in MR ;-)
And on top of the rendering time, we also have to add the human time spent on setup. In Corona, 2 minutes. In MR? Or in VRay?



« Last Edit: 2014-07-25, 02:18:09 by cecofuli »

2014-07-25, 06:26:43
Reply #8

Javadevil

  • Active Users
  • **
  • Posts: 399
    • View Profile


Haha Ludvik, up to your old tricks.
Good to see you keep them on their toes.

It's crazy that they banned you. Nice thread.




cheers

2014-07-25, 10:01:30
Reply #9

Ludvik Koutny

  • VIP
  • Active Users
  • ***
  • Posts: 2557
  • Just another user
    • View Profile
    • My Portfolio
After mailing them, they finally deleted some of my posts and restored others to their unedited state. So the thread is now incomplete, but my posts are no longer edited... I'll give them a point for that, at the very least :)

2014-07-25, 13:50:37
Reply #10

jjaz82

  • Active Users
  • **
  • Posts: 310
    • View Profile
OMG... I really don't understand how those users can consider these artifacts a brute force method...


2014-07-25, 14:08:56
Reply #11

Ondra

  • Administrator
  • Active Users
  • *****
  • Posts: 9048
  • Turning coffee to features since 2009
    • View Profile
The funny thing is that these kinds of threads inevitably end up with 2-3 people talking like:
- But you can also increase the raspberry coefficient to 3, or even 6, if blueberry is not too high
- Actually, the elderberry threshold decreases the raspberry factor, resulting in a better cranberry multiplier.
- Yes, and with the wildberry change in 2.8 this will cause the cranberries to get more samples, resulting in lower times
- I tried it, and it turns out the deterministic blueberry sampler has no effect, but I got a really nice improvement with the gooseberry sampler set to 0.03
- This proves it, our berries give the best results
- Yes, this is the best berry and other berries suck

This is how I actually read all similar fanboy discussions.
Rendering is magic. | How to get minidumps for crashed/frozen 3ds Max | Sorry for short replies, brief responses = more time to develop Corona ;)

2014-07-25, 14:34:43
Reply #12

zzubnik

  • Active Users
  • **
  • Posts: 124
    • View Profile
This is the problem with render engines that have a million settings to alter.

I've been a user of MR for 10+ years, and I love it, but I am disappointed by their disjointed, ragged development process and the half-finished solutions that seem to be standard with MR, especially in 3ds Max. It's compounded by the lazy incompetence at Autodesk.

I've been using the GPU GI in Max 2015, and it really has a lot of problems, and isn't always faster than the CPU.

All of the above is why I prefer Corona as a renderer.


2014-07-25, 15:46:01
Reply #13

jjaz82

  • Active Users
  • **
  • Posts: 310
    • View Profile
The funny thing is that these kinds of threads inevitably end up with 2-3 people talking like:
- But you can also increase the raspberry coefficient to 3, or even 6, if blueberry is not too high
- Actually, the elderberry threshold decreases the raspberry factor, resulting in a better cranberry multiplier.
- Yes, and with the wildberry change in 2.8 this will cause the cranberries to get more samples, resulting in lower times
- I tried it, and it turns out the deterministic blueberry sampler has no effect, but I got a really nice improvement with the gooseberry sampler set to 0.03
- This proves it, our berries give the best results
- Yes, this is the best berry and other berries suck
hahaha true story :D

2014-07-25, 19:11:39
Reply #14

steyin

  • Active Users
  • **
  • Posts: 375
  • BALLS
    • View Profile
    • Instagram Page
The funny thing is that these kinds of threads inevitably end up with 2-3 people talking like:
- But you can also increase the raspberry coefficient to 3, or even 6, if blueberry is not too high
- Actually, the elderberry threshold decreases the raspberry factor, resulting in a better cranberry multiplier.
- Yes, and with the wildberry change in 2.8 this will cause the cranberries to get more samples, resulting in lower times
- I tried it, and it turns out the deterministic blueberry sampler has no effect, but I got a really nice improvement with the gooseberry sampler set to 0.03
- This proves it, our berries give the best results
- Yes, this is the best berry and other berries suck

It's as if the Gummy Bears invented a render engine.

2014-07-27, 06:08:33
Reply #15

Javadevil

  • Active Users
  • **
  • Posts: 399
    • View Profile
OMG... I really don't understand how those users can consider these artifacts a brute force method...


That's nuts, that they cannot see that as interpolation!! They're high on crack :)

2014-07-28, 20:12:32
Reply #16

JeffPatton

  • Active Users
  • **
  • Posts: 80
    • View Profile
    • jeffpatton.net
Ludvík, at times you can come across as rather harsh in forum discussions (IMHO). However, I personally don't think anything of it because: a. I know you're passionate about render engines and about making sure things work as we all would want and/or expect them to. b. You know what you're talking about. c. Forum posts can't convey the proper emotion behind the text... no matter how many smiley emoticons are used.

I'm rather disappointed to see the nVidia folks handle the discussion like that. I think the appropriate route would have been to simply counter your argument with indisputable facts that prove it wrong (show the math, if you will)... and if they can't do that, admit you're right and move on.
Workstation: AMD Threadripper 3970x with 128gb RAM and 2x Titan RTX GPUs (win10 pro)

2014-07-28, 21:45:50
Reply #17

Ludvik Koutny

  • VIP
  • Active Users
  • ***
  • Posts: 2557
  • Just another user
    • View Profile
    • My Portfolio
Ludvík, at times you can come across as rather harsh in forum discussions (IMHO). However, I personally don't think anything of it because: a. I know you're passionate about render engines and about making sure things work as we all would want and/or expect them to. b. You know what you're talking about. c. Forum posts can't convey the proper emotion behind the text... no matter how many smiley emoticons are used.

I'm rather disappointed to see the nVidia folks handle the discussion like that. I think the appropriate route would have been to simply counter your argument with indisputable facts that prove it wrong (show the math, if you will)... and if they can't do that, admit you're right and move on.

Yup... 

I usually (at least) try not to be harsh from the start, but I easily lose it when they start telling me all those old fairy tales all over again. They are again going in the wrong direction (like with importons, BSDFs, irradiance particles, and a very weird placebo multiple importance sampling implementation, all of which are or soon will be discontinued), yet when they are confronted with reality, they plug their ears and go LALALALALA as loud as they can.

Even though I am now mostly involved with Corona, I would still like Mental Ray to get back on its legs, because there are still many things I like about it. Unfortunately, with this kind of attitude, there's less and less hope each year that this will ever happen.

I still remember the area light fiasco, which made you leave Mental Ray for good. And no one even cared that such a valued professional as yourself switched renderers, even though fixing that would probably have been about a day of work for a single person.

Statements like these in that thread were just way too much bait for me not to take:

- "In my tests on the scene I show about, other current path tracers can take up to 14 times the render time of the new GI on the CPU. Nearly 28 times the time of the GI on the GPU."

- "The GPU massively outperforms my CPU. The new GI GPU just seems incredible fast"
« Last Edit: 2014-07-28, 21:49:57 by Rawalanche »

2014-07-29, 12:05:58
Reply #18

pokoy

  • Active Users
  • **
  • Posts: 1870
    • View Profile
I still remember the area light fiasco, which made you leave Mental Ray for good. And no one even cared that such a valued professional as yourself switched renderers, even though fixing that would probably have been about a day of work for a single person.

Funny that you mention this one, because the general notion seemed to be something like 'who needs that, what we have is good enough', even though the request for a fix showed up year after year with each release. It showed that there's a huge gap between devs and users in what's considered 'must-have', 'good enough', and 'not sufficient'. I guess devs need to take user requests seriously even if they seem far-fetched, if only to give the user base the feeling of being heard.