Sure.
Human eyes have both a rather wide dynamic range capacity and adaptiveness. But here is the thing you might not notice about the latter. When you stand in an interior room and look out toward a window, you clearly see what is outside. But did you notice that the very moment your eyes focus on the outside, the interior gets dimmer at the outer edges of your vision? The human eye can make this transition in a fraction of a second, which is why most people will never notice; we barely register what's happening at the fringes of our vision.
If your brain were capable of taking a 'screenshot' of the full field of view your eyes are looking at, you would notice the room isn't exposed as brightly as you would expose it in photography or 3D to make it look nice. The difference would be a good two stops of light.
There is no dynamic range compression that will ever come out that makes it possible to tonemap a single image globally and make up for a big dynamic range difference. This has nothing to do with 3D: I just took a photo of a seashore against a backdrop of mountains backlit by the Sun. The dynamic range difference between the occluded black mountains and the Sun with its reflection on the wet beach was easily beyond 16 stops of light. I shot it as a bracket with a Sony A7R II, a full-frame mirrorless camera with the second-largest dynamic range on the market (the D810/D850 surpass it at base ISO 64). No amount of post-production could fix it to a reasonable point.
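For intuition on what "16 stops" means: a stop is just a doubling of light, so the dynamic range between two luminance levels is the base-2 log of their ratio. A minimal sketch (the luminance values here are made-up illustrative numbers, not measurements from that scene):

```python
import math

def dynamic_range_stops(l_max: float, l_min: float) -> float:
    """Dynamic range between two luminance levels, in stops (factors of 2)."""
    return math.log2(l_max / l_min)

# Hypothetical scene luminances in cd/m^2 (illustrative, not measured):
sun_glint = 300_000.0   # specular reflection of the Sun on wet sand
shadowed_rock = 3.0     # occluded mountain face in shadow

print(round(dynamic_range_stops(sun_glint, shadowed_rock), 1))  # 16.6
```

A typical camera sensor covers roughly 12 to 15 stops in a single exposure, which is why a scene like that forces bracketing in the first place.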
The 'HDR' adaptive tonemapping compressors produce that ugly, uncanny look. Even if they were perfected and got rid of the various artifacts (mixed saturation, halos, etc.), the result would still look uncanny and wouldn't resemble anything we would like.
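The underlying trade-off of any global curve is easy to show. Here is a minimal Reinhard-style global operator (a textbook formula, not any specific product's tonemapper): to fit a huge range into display range it must crush the ratios between bright values, which is exactly the flattening described above:

```python
def reinhard_global(l: float) -> float:
    """Simple global tonemap: maps luminance in [0, inf) into [0, 1)."""
    return l / (1.0 + l)

# Two midtones one stop apart keep most of their 2:1 ratio...
print(reinhard_global(0.2) / reinhard_global(0.1))    # ~1.83

# ...but two highlights one stop apart collapse to nearly the same value:
print(reinhard_global(100.0) / reinhard_global(50.0)) # ~1.01
```

A local/adaptive compressor "fixes" this by using a different curve per region, which is precisely where the halos and mixed saturation come from.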
The only way to overcome this is through trickery. The technique is identical in photography and in 3D: throw more light into the less-illuminated space to equalize the light levels to the point where the disparity in dynamic range isn't as obvious.
I actually use very little dynamic range compression in Corona; in fact, less than HC=1.75 all the time. I don't use it at all for product shots, where I simply lift the midtones through a curve instead. Such brutal compression leads to perceptual flatness, a boring and uncanny look.
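The midtone lift can be sketched as a simple power curve on display-referred values. This is a generic illustration of the idea, not Corona's actual curve tool: it brightens the middle of the range while pinning the black and white points, instead of pulling the highlights down globally the way a compressor does:

```python
def lift_midtones(v: float, gamma: float = 0.8) -> float:
    """Power-curve midtone lift on a display-referred value in [0, 1].
    gamma < 1 brightens midtones; the endpoints 0 and 1 stay fixed."""
    return v ** gamma

print(lift_midtones(0.0))  # 0.0   - black point unchanged
print(lift_midtones(0.5))  # ~0.57 - midtones lifted
print(lift_midtones(1.0))  # 1.0   - white point unchanged
```

Because the endpoints don't move, contrast between shadows and highlights survives, which is the opposite of what heavy highlight compression does.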
I don't personally share the wish of those who think renderers will at some point be able to resemble human vision; I think that wish stems from not understanding how human vision works. VR with adaptive exposure is already life-like; the solution was never in crazy tonemappers.
One more stance of mine: I embrace how the camera captures light. It's more sensitive to it and puts it more into focus. While things aren't as equalized as when seen through the eyes, I can read the light so much more; it's so much more present (a feeling best captured when photographing cathedrals). I am trying to transfer this further into our CGI work: make it more about the light than about the space alone, something I believe I didn't do right before. Small steps only :-)