Exposure is definitely involved in MSI calculations; I think I asked Ondra about that once. But I don't know exactly how it works, so my guess is that if you underexpose, MSI might cut away more than it would at a neutral/average luminance exposure.
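To make concrete what I mean by MSI "cutting away": roughly, MSI clamps overly bright samples to a ceiling before they contribute to the image. A minimal sketch below, with made-up names and numbers of my own; Corona's actual implementation isn't public, and whether/how exposure scales that ceiling is exactly the part I don't know:

```python
# Hypothetical sketch of an MSI-style clamp (my own names/numbers,
# NOT Corona's actual code). MSI caps a sample's linear intensity.
def clamp_msi(sample, msi=20.0):
    """Clamp a linear HDR sample to the MSI ceiling."""
    return min(sample, msi)

samples = [0.4, 3.0, 150.0]              # linear radiance values
clamped = [clamp_msi(s) for s in samples]
print(clamped)                           # the bright spike gets cut to 20.0
```

The open question is whether the ceiling is applied in raw linear units or relative to the current exposure, which is what would make underexposed renders clip more.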
Adaptivity is affected too, I believe, though again I don't know the details. But I can tell you I never render (severely) under-exposed. Do that, raise the luminance in post, and you'll find your dark parts are noisy. It's actually very similar behavior to modern-day CMOS chips in DSLRs: to get the best quality/dynamic range out of a modern camera, you shoot 1 stop higher :- ) and tone down in post. This way you get clean dark parts. That's what I do with my D800. With Corona I render at roughly what looks to me like a neutral EV.
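A toy model of why "expose higher, tone down in post" gives cleaner shadows: post-production gain scales the signal and the noise alike, so the signal-to-noise ratio is fixed at capture/render time. This is a sketch under an assumed fixed noise floor (`read_noise`); the numbers are illustrative only, not measured from any camera or renderer:

```python
# Toy model: SNR after normalizing brightness in post.
# Assumes a fixed additive noise floor; numbers are illustrative only.
def snr_after_post(exposure_stops, target=100.0, read_noise=2.0):
    captured = target * (2.0 ** exposure_stops)  # value recorded at capture
    gain = target / captured                     # post gain back to target brightness
    # gain multiplies signal and noise alike, so it cancels out of the SNR:
    return (captured * gain) / (read_noise * gain)

print(snr_after_post(-1.0))  # underexpose 1 stop, boost in post  -> 25.0
print(snr_after_post(0.0))   # neutral exposure                   -> 50.0
print(snr_after_post(+1.0))  # overexpose 1 stop, tone down       -> 100.0
```

Boosting an underexposed frame in post multiplies the noise floor right along with the shadows, which is the grain you see; capturing brighter and dividing down shrinks it instead.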
I render with no post in the Corona framebuffer because I use (have used up till now) VFB+ to tonemap. Anyway, I am currently installing the 1.5 daily, so these things might change once again.
I am getting a bit uneasy about promising any write-up of how I actually do post-production, because it's not stable. I change this stuff routinely, like every second month, and I approach each project differently, for no other reason than experimentation and self-doubt.
What I am really after is emulating the photographic workflow 1:1. I don't want brutal dynamic range and tonemapping, the dream of HDR photographers and CGI scientists; I just want to load my image into ACR/Lightroom and pretend I am doing .raw file development. At times I almost hysterically try odd measures to reach this point :- ) If I ever reach something that roughly satisfies me, you'll hear of it :- ).