Man, I am starting to worry about what you are reading :- ) You periodically post some odd conclusions. Where did this one come from?
Some textures benefit from higher bit-depth (16-32 bit per channel) because they can store much finer tonal gradations. 8-bit gives you only 256 values per channel.
16-bit .png or 16/32-bit .exr both have enough capacity to store much more information, so they're the preferred formats for displacement, for example, where 256 steps would cause... well, stepping :- ). Like Minecraft voxels.
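Just as a back-of-the-envelope sketch in Python (the 20 cm displacement range is only an example number, matching the one further down), here is how coarse each step gets with 8-bit vs 16-bit:

```python
# Hypothetical 20 cm displacement range driven by a grayscale texture.
displacement_range_cm = 20.0

step_8bit  = displacement_range_cm / 255      # smallest height change 8-bit can encode
step_16bit = displacement_range_cm / 65535    # same for 16-bit

print(f"8-bit step:  {step_8bit * 10:.3f} mm")   # ~0.78 mm per step -> visible terracing
print(f"16-bit step: {step_16bit * 10:.4f} mm")  # ~0.003 mm per step -> smooth
```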
16-bit .png is by default gamma 2.2, 16/32-bit .exr is linear / gamma 1.0. That's how they're usually treated, but you can still use a 2.2 format and tell the renderer to interpret it as linear (in 3dsMax you do this with "override gamma" in the file-open dialog).
We specify some textures to be interpreted linearly because we want the values to translate directly, without any curve applied, so input matches output. Let's say we use a displacement modifier set to 0.5 as the middle point and 20 cm height. With a linear texture, RGB 128 stays where it is, RGB 0 moves 10 cm down, and RGB 255 moves 10 cm up. If the texture were interpreted as gamma 2.2, RGB 186 would become the middle point, so the texture would move more down than up.
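A minimal Python sketch of that same example, assuming a simple power-curve decode and the modifier settings above (0.5 midpoint, 20 cm height):

```python
# Same 8-bit pixel values, interpreted once as linear (gamma 1.0)
# and once as gamma 2.2.
def displace(pixel_8bit, gamma, height_cm=20.0, midpoint=0.5):
    value = (pixel_8bit / 255.0) ** gamma      # decode to the renderer's working value
    return (value - midpoint) * height_cm      # offset from the surface, in cm

for px in (0, 128, 186, 255):
    print(px,
          round(displace(px, gamma=1.0), 2),   # linear: 128 sits right at the middle
          round(displace(px, gamma=2.2), 2))   # 2.2: 128 drops below, 186 becomes the middle
```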
The textures this commonly applies to are reflection, glossiness/roughness, anisotropy, Fresnel IOR, bump, normal and displacement maps. It isn't strictly necessary per se, we can use them with gamma applied, we'll just get different values, which in the case of glossiness or bump may not even matter, but normal maps will go totally wrong.
One big reason this is often trouble in the workflow is that 3dsMax isn't nearly smart enough to know when to use which. Auto-gamma correctly detects gamma 2.2 formats like jpeg/png/tif... as 2.2 and linear files like .hdr/.exr as 1.0, but it doesn't know we want every format to be read as linear when we plug it in as a normal map. So we have to override it manually.
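If it helps, here's a tiny hypothetical helper (names like pick_input_gamma and the map-role strings are made up, this is not a 3dsMax API) sketching the decision we end up making by hand: pick the input gamma from what the map is used for, not from the file format.

```python
# Data maps should be read raw regardless of format; color maps follow the format.
DATA_MAPS = {"normal", "bump", "displacement", "roughness", "glossiness",
             "anisotropy", "ior"}

def pick_input_gamma(map_role: str, file_format: str) -> float:
    if map_role.lower() in DATA_MAPS:
        return 1.0                               # data maps: no decode, ever
    if file_format.lower() in ("hdr", "exr"):
        return 1.0                               # float formats are already linear
    return 2.2                                   # color maps in 8/16-bit formats: decode

print(pick_input_gamma("normal", "png"))   # 1.0 -> what we have to override to manually
print(pick_input_gamma("diffuse", "png"))  # 2.2 -> auto-gamma's default is fine here
```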
Some software, like Unreal 4, has already made itself smart enough to always interpret normal maps as 1.0, regardless of the format they're stored in :- ).
And as Ondra said, every texture is in fact linearized by the renderer in the background, whether it's interpreted as 1.0 or 2.2. We don't need to linearize anything ourselves.
Bit-depth and gamma are different things and shouldn't be conflated. 32-bit in CGI just happens to be linear (gamma 1.0) and floating point (fractional numbers), but it could just as well be integer and store values on a gamma curve.
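A quick sketch of that independence, again assuming a plain 2.2 power curve: the same linear value can be stored as a gamma-encoded 16-bit integer or as a linear float, and both decode back to the same number.

```python
linear_value = 0.5

stored_int16 = round((linear_value ** (1 / 2.2)) * 65535)   # integer storage + gamma curve
stored_float = linear_value                                 # float storage + linear (like .exr)

decoded_from_int16 = (stored_int16 / 65535) ** 2.2
print(stored_int16, round(decoded_from_int16, 4))   # somewhere around 47800, decodes to ~0.5
print(stored_float)                                 # 0.5, no decode needed
```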