Someone will correct me if I'm wrong, but I think this is more or less as expected, and it's the same issue with other renderers. In low light there is little contrast between pixels, so the renderer has a much harder time, relatively speaking, interpolating/anti-aliasing between those pixel values. Your brighter image has much more lighting information and detail, and therefore more contrast across the image, so the renderer has less trouble anti-aliasing those areas. In other words, you're giving the renderer more information to work with. Think of it like learning a language: if you have only half the dictionary, it's going to take you longer than if you have the full dictionary, even if you can work out some of what's missing from context. V-Ray works the same way, more or less, and I imagine most other engines do too.
It's the same with real-world photography. It's often desirable to shoot a subject/scene with subtle fill lights and then darken those areas back down in post to get the desired, noise-free look.
This is one of those areas we're all hoping Render Legion will improve with adaptive sampling: the renderer works out which areas of the image need the most time to clean up and prioritises those, instead of wasting samples on areas that are already "clean enough".
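To make the idea concrete, here's a toy sketch of adaptive sampling, not Corona's (or any renderer's) actual implementation. All the names and numbers are made up for illustration: the image is reduced to a handful of "tiles", each sample is a fake noisy measurement, and dark tiles are given high noise relative to their signal, mimicking the low-light case above. The sampler keeps a per-tile noise estimate and spends its remaining budget on whichever tile is still noisiest.

```python
# Toy illustration of adaptive sampling (NOT Corona's real algorithm):
# spend extra samples only on tiles whose noise estimate is still above
# a target threshold, instead of distributing them uniformly.
import random

def render_sample(is_dark):
    # Stand-in for tracing one sample. Dark tiles have a small signal but
    # noise that doesn't shrink with it, so their *relative* noise is much
    # higher -- the low-light situation described above.
    if is_dark:
        return 0.05 + random.gauss(0, 0.05)
    return 0.8 + random.gauss(0, 0.2)

def noise_estimate(samples):
    # Relative standard error of the mean: stays high while a tile is noisy,
    # falls as more samples accumulate.
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return (var / n) ** 0.5 / max(abs(mean), 1e-6)

def adaptive_render(tiles, budget, target=0.05, batch=16):
    # tiles: list of bools (is_dark). Give every tile one initial batch,
    # then keep feeding batches to the noisiest tile until the budget runs
    # out or every tile is "clean enough".
    samples = [[render_sample(d) for _ in range(batch)] for d in tiles]
    budget -= batch * len(tiles)
    while budget >= batch:
        noise = [noise_estimate(s) for s in samples]
        worst = max(range(len(tiles)), key=lambda i: noise[i])
        if noise[worst] <= target:
            break  # all tiles converged; stop early and save the budget
        samples[worst].extend(render_sample(tiles[worst]) for _ in range(batch))
        budget -= batch
    return [len(s) for s in samples]  # samples spent per tile

random.seed(1)
counts = adaptive_render([True, False, False, True], budget=2000)
```

Run it and the dark tiles (indices 0 and 3) end up eating far more of the sample budget than the bright ones, which is exactly the behaviour we want the renderer to automate.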
I'm sure others can chime in with better info on this topic.