As far as I understand that product, no.
AI Gigapixel is built on a neural network that upscales images based on their pixels alone. For this task we have something way better in rendering.
In rendering we have our samples. Those samples can be averaged into a different resolution without scaling pixels and without information loss. If you save the individual samples, you can even pull sub-pixel-perfect masks,
like this compositor does here. It can create absolutely perfect coverage masks, because it builds them not from the pixels but from the individual samples.
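A minimal sketch of that idea (everything here is illustrative, not the compositor's actual code): if you keep the individual sub-pixel samples around, per-pixel coverage is just the fraction of samples that hit the object, with no pixel-based edge guessing involved. The disk "scene" and sample counts below are made up.

```python
import random

SAMPLES_PER_PIXEL = 16

def hits_object(x, y):
    # Placeholder scene: the "object" is a disk of radius 0.4 centred at (0.5, 0.5).
    return (x - 0.5) ** 2 + (y - 0.5) ** 2 < 0.4 ** 2

def coverage_mask(width, height, spp=SAMPLES_PER_PIXEL, seed=0):
    rng = random.Random(seed)
    mask = []
    for py in range(height):
        row = []
        for px in range(width):
            hits = 0
            for _ in range(spp):
                # jittered sub-pixel sample position in [0, 1) image space
                x = (px + rng.random()) / width
                y = (py + rng.random()) / height
                hits += hits_object(x, y)
            # fractional, sub-pixel-accurate coverage for this pixel
            row.append(hits / spp)
        mask.append(row)
    return mask

mask = coverage_mask(8, 8)
# Pixels fully inside the disk come out 1.0, pixels fully outside 0.0,
# and edge pixels get fractional coverage straight from the samples.
```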
So no, in rendering that does not make sense.
That being said, a sample distribution chosen for a lower resolution may not be optimal when reapplied at a higher resolution, but it would still be miles better than a pixel-based approximation. Ultimately it is not worth the effort, because especially with IR the problem is not resolution or sample coverage (unless we talk about extreme DOF or motion blur, where I bet AI Gigapixel would fail anyway), but that we need to average more GI rays to reduce noise, which is where the AI denoiser really helps.
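A quick sketch of why "more GI rays" is the real lever: each pixel's GI value is a Monte Carlo average, and its noise (standard deviation) falls off roughly as 1/sqrt(N) with the number of averaged rays N. The uniform "incoming light" distribution below is purely illustrative.

```python
import random
import statistics

def gi_estimate(n_rays, rng):
    # Average n_rays noisy "ray" contributions; the true mean here is 0.5.
    return sum(rng.random() for _ in range(n_rays)) / n_rays

def noise(n_rays, trials=2000, seed=1):
    # Measure the spread of many independent estimates: this is the pixel noise.
    rng = random.Random(seed)
    estimates = [gi_estimate(n_rays, rng) for _ in range(trials)]
    return statistics.pstdev(estimates)

# Quadrupling the ray count roughly halves the noise:
# noise(4) is about twice noise(16).
```

That 1/sqrt(N) behaviour is exactly what no pixel-space upscaler can buy you back, and what the denoiser attacks directly.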
trivia:
Right now you have a "reconstruction filter" set which dictates how the samples are "averaged" into the final image (box, triangle, etc.). This mathematical function describes how multiple samples are combined into one pixel and can be changed in the settings.
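In case it helps, here is a 1D sketch of how such a filter works (function names and sample values are illustrative, not any renderer's actual API): each sample gets a weight based on its distance from the pixel centre, and the pixel is the weighted average.

```python
def box(dx, radius=0.5):
    # Box filter: every sample inside the radius counts equally.
    return 1.0 if abs(dx) <= radius else 0.0

def triangle(dx, radius=1.0):
    # Triangle (tent) filter: weight falls off linearly with distance.
    return max(0.0, radius - abs(dx)) / radius

def reconstruct(samples, filt):
    # samples: list of (offset_from_pixel_centre, value) pairs
    total_w = sum(filt(dx) for dx, _ in samples)
    return sum(filt(dx) * v for dx, v in samples) / total_w

samples = [(-0.4, 1.0), (-0.1, 0.0), (0.2, 0.0), (0.45, 1.0)]
# Box weights all four samples equally; triangle favours the ones
# nearer the pixel centre, so the two filters give different pixels
# from the exact same samples.
```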