Do you think this will work well even for video?
I mean, traditionally film rendering needs to be blurrier than still images.
RED cameras, for example, have an optical low-pass filter in front of the sensor (basically a glass plate that slightly softens the image) to eliminate moiré and other artifacts caused by overly sharp pictures, which in motion tend to flicker and stand out.
Some people have started removing that low-pass plate to get sharper images out of their RED cameras.
If moiré happens, you will not cure it by giving samples a higher radius of influence; you can only make it less obvious. The only way to truly get rid of moiré is to increase the sampling resolution.

And yes, rendering with no image filter works for video with no problem, but it does give a less realistic feel. Video does not, by any stretch of the imagination, need to be blurrier; that's bad info. Aside from compression already eating heavily into sharpness, the likes of V-Ray's Video filter will always be a mystery to me, a relic of past times when renderers output a 768×576 PAL-resolution image that had to be blurred to a pulpy mess to be in tune with TV images.
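To make that concrete, here is a minimal sketch in plain NumPy (every frequency and sample count here is made up for illustration) of why a wider filter only softens the false pattern, while more samples per pixel actually attenuate it:

```python
import numpy as np

# A pure sine "scene" at 0.9 cycles/pixel, well above the 0.5 cycles/pixel
# Nyquist limit of a 1-sample-per-pixel render.
f, n = 0.9, 32
x = np.arange(n)

aliased = np.sin(2 * np.pi * f * x)         # folds back to a false 0.1 cyc/px wave

# Widening the filter just blurs that *false* wave; it cannot unfold it:
kernel = np.ones(5) / 5.0
blurred = np.convolve(aliased, kernel, mode="same")

# 8 samples per pixel resolve the true frequency first; the per-pixel average
# then shrinks it toward flat grey instead of a full-strength fake pattern:
ss = 8
xs = np.arange(n * ss) / ss
resolved = np.sin(2 * np.pi * f * xs).reshape(n, ss).mean(axis=1)

print("1 spp:", np.round(aliased[:6], 2))   # slow, full-amplitude false wave
print("8 spp:", np.round(resolved[:6], 2))  # same pattern at roughly 1/9 amplitude
```

Strictly speaking, even supersampling only attenuates the folded-back frequency rather than deleting it outright, but the difference between a full-strength fake pattern and a faint residue is exactly what you see as the moiré disappearing.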
The thing is, no camera in the world can deliver the sharpness of a renderer's mathematically perfect 1-pixel-to-1-sample output, so we intentionally let samples have a larger-than-1:1 influence when rasterizing, to make the result more realistic and in tune with real images.
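For anyone wondering what "larger-than-1:1 influence" looks like mechanically, here is a rough sketch of the usual splatting approach (the radius, Gaussian falloff, and test values are illustrative choices, not any particular renderer's defaults): each sample contributes to every pixel within the filter radius, and each pixel ends up as the weighted average of the samples that touched it.

```python
import numpy as np

def reconstruct(samples, w, h, radius=1.3):
    """Splat each (x, y, value) sample into every pixel within `radius`,
    Gaussian-weighted, then normalize by the accumulated weight."""
    img = np.zeros((h, w))
    wsum = np.zeros((h, w))
    alpha = 2.0 / radius ** 2                 # illustrative falloff constant
    for sx, sy, value in samples:             # sample positions in pixel coords
        for py in range(max(int(sy - radius), 0), min(int(sy + radius) + 1, h)):
            for px in range(max(int(sx - radius), 0), min(int(sx + radius) + 1, w)):
                d2 = (px + 0.5 - sx) ** 2 + (py + 0.5 - sy) ** 2
                if d2 < radius ** 2:
                    wgt = np.exp(-alpha * d2)
                    img[py, px] += wgt * value
                    wsum[py, px] += wgt
    return img / np.maximum(wsum, 1e-8)

# A single bright sample at a pixel corner influences all four surrounding
# pixels, not just "its own" one:
print(np.round(reconstruct([(1.0, 1.0, 1.0)], 4, 4), 2))
```

With a radius of 0.5 and a box weight this degenerates to the razor-sharp 1:1 case; pushing the radius up trades that mathematical sharpness for the slightly soft, camera-like look.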
All of the options above are valid. However, one thing is debatable: sharpening a mathematically perfect representation of geometry. If you sharpen, you throw out the window all the work that has gone into the subtle differences that make the image seem more real (unless you do it for artistic effect). So before anyone sharpens, take the renderer's image filter out first.
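And if someone does sharpen for artistic effect, what they are usually applying is an unsharp mask, i.e. a blur subtracted back out of the image. Stacking that on top of a reconstruction filter means blurring and then trying to un-blur, which is exactly why pulling the filter first is cleaner. A minimal sketch (the sigma/amount values are arbitrary, and it assumes SciPy is available):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, sigma=1.0, amount=0.6):
    """Classic unsharp mask: boost the detail a blur removes.
    sigma/amount are illustrative values, not recommendations."""
    low = gaussian_filter(img, sigma)
    return img + amount * (img - low)

# On an already-filtered edge this produces over/undershoot halos; it does not
# bring back the sub-pixel detail the render filter averaged away:
edge = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
print(np.round(unsharp_mask(edge), 2))  # dips below 0 before the edge, overshoots 1 after
```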