Well, buckets are a bit tricky now.
Basically you first set the number of AA subdivs, which defines how many samples are used per pixel for each refinement step.
Then you set an adaptive threshold, which defines how big a difference between samples needs to be for oversampling to occur.
Then you set the adaptive steps. Each adaptive step re-renders the bucket again, refining it even further using the threshold you have set above. So if you have AA subdivs at 2 and adaptive steps at 2, the bucket will first be rendered subdividing each pixel 2*2 times (4 samples total), and then refined once more, but this time only pixels that fall above the adaptive threshold will be refined, again using the AA subdivs value.
So if you have AA subdivs at 4 and adaptive steps at 4, the bucket will initially be rendered using 16 samples per pixel, and then refined adaptively 3 more times using the same value.
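To make the numbers above concrete, here is a rough Python sketch of how I understand the current per-bucket logic. All names (`Pixel`, `render_bucket`, `noise_estimate`) are made up for illustration, and the noise model is a toy stand-in, not the renderer's actual estimator:

```python
class Pixel:
    """Toy pixel: tracks its sample count; noise falls as samples grow."""
    def __init__(self):
        self.samples = 0

    def accumulate(self, n):
        self.samples += n

    def noise_estimate(self):
        # Stand-in metric: more samples -> lower noise.
        return 1.0 / (1 + self.samples)


def render_bucket(bucket, aa_subdivs, adaptive_steps, threshold):
    """Current behaviour: initial step plus (adaptive_steps - 1)
    refinements, all finished before moving to the next bucket."""
    spp = aa_subdivs * aa_subdivs  # subdivs 4 -> 16 samples per pixel

    # Initial step: every pixel gets the full sample count.
    for px in bucket:
        px.accumulate(spp)

    # Remaining steps: only pixels still above the adaptive threshold
    # are refined, using the same sample count each time.
    for _ in range(adaptive_steps - 1):
        for px in bucket:
            if px.noise_estimate() > threshold:
                px.accumulate(spp)
```

With subdivs 4 and steps 4 this gives 16 samples in the initial step and up to 48 more across the 3 refinements, depending on where the threshold cuts refinement off.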
But there are some big issues. In the current state, bucket rendering is also controlled by passes, which are hard to define in terms of bucket rendering and therefore do not make much sense: you cannot exactly set when bucket rendering will stop, and it is just illogical overall.
Also, all adaptive steps are currently rendered during the first pass, so if you have steps at 4, each bucket is rendered 4 times before rendering continues to the next bucket. That is slow with higher settings, and you have to wait really long to get any visual feedback.
My proposal is to disconnect bucket rendering completely from any pass limitations, so it simply stops once it is done, and to render adaptive steps progressively: the entire image would first be rendered using the initial step, and then refined adaptively as many times as the adaptive steps setting specifies. This would make more sense and, mainly, give faster visual feedback.
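The proposed ordering can be sketched the same way: instead of finishing all steps per bucket, each step sweeps the whole image before the next step starts. Again, all names are hypothetical and the noise estimate is a toy:

```python
class Pixel:
    """Toy pixel: tracks its sample count; noise falls as samples grow."""
    def __init__(self):
        self.samples = 0

    def accumulate(self, n):
        self.samples += n

    def noise_estimate(self):
        return 1.0 / (1 + self.samples)


def render_progressive(buckets, aa_subdivs, adaptive_steps, threshold):
    """Proposed behaviour: sweep the whole image once per adaptive step,
    so a complete (if rough) picture appears after the first sweep."""
    spp = aa_subdivs * aa_subdivs

    # First sweep: initial step over the entire image -> early feedback.
    for bucket in buckets:
        for px in bucket:
            px.accumulate(spp)

    # Later sweeps: one adaptive refinement step across the whole image,
    # touching only pixels still above the threshold.
    for _ in range(adaptive_steps - 1):
        for bucket in buckets:
            for px in bucket:
                if px.noise_estimate() > threshold:
                    px.accumulate(spp)
```

The total work is the same as today; only the order changes, which is what buys the faster feedback.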
I really think this is at least a temporary way to go before adaptive sampling is implemented for progressive rendering, but I am having quite a hard time convincing Keymaster so far :)