Deep rendering samples points not only in x and y, but also in z: each pixel stores a list of samples at different depths rather than a single flattened value.
For example, suppose you had two objects, BoxA partially obscuring SphereB, and rendered them separately. You would need to know which one was supposed to be in front when merging them in a compositing program such as Nuke. But if you rendered the two objects in a deep format and used a 'deep merge', you wouldn't need to know which object was supposed to be on top, because the per-sample depth data determines how the resulting pixels are arranged.
As another example, if you rendered a volume in deep (setting aside the fact that the file could be several hundred MB) and an object in deep, a 'deep merge' will know where to place the object inside the volume and occlude it appropriately.
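To make the idea concrete, here is a minimal, hypothetical sketch of what a deep merge does for a single pixel. It is not the actual Nuke or OpenEXR implementation; it assumes each deep sample is a simple `(z, r, g, b, alpha)` tuple with premultiplied colour, and that flattening composites the depth-sorted samples front to back with the 'over' operation:

```python
# Hypothetical per-pixel deep merge sketch (not a real library API).
# Each sample: (z, r, g, b, alpha), colour premultiplied by alpha.

def deep_merge(samples_a, samples_b):
    """Interleave two pixels' deep samples by depth.

    Note that neither input needs to be known as 'the front one';
    the z value of each sample decides the ordering."""
    return sorted(samples_a + samples_b, key=lambda s: s[0])

def flatten(samples):
    """Composite depth-sorted samples front to back with 'over'."""
    r = g = b = a = 0.0
    for _, sr, sg, sb, sa in samples:
        r += sr * (1.0 - a)
        g += sg * (1.0 - a)
        b += sb * (1.0 - a)
        a += sa * (1.0 - a)
    return (r, g, b, a)

# In this pixel, BoxA's sample sits in front of SphereB's (smaller z).
box_a    = [(2.0, 0.5, 0.0, 0.0, 0.5)]  # semi-transparent red at z = 2
sphere_b = [(5.0, 0.0, 0.0, 1.0, 1.0)]  # opaque blue at z = 5

merged = deep_merge(box_a, sphere_b)
print(flatten(merged))  # → (0.5, 0.0, 0.5, 1.0)
```

Because the ordering comes from the stored z values, `deep_merge(box_a, sphere_b)` and `deep_merge(sphere_b, box_a)` flatten to exactly the same result, which is the whole point: the merge never needs to be told which image is on top.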