> This leads to a natural question that has not yet been discussed in the literature: if we could query a model infinitely, how much memorization could we extract in total?
You would eventually get every possible 50-gram, not because the model memorized all of them but by pure chance: as long as every token has nonzero sampling probability, any fixed 50-gram has nonzero probability per query. It seems pretty obvious to me.
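A toy version of this argument (my own sketch, nothing from the paper): sampling tokens with nonzero probability is just a coupon-collector problem over n-grams, so a long enough stream contains every one of them. Scaled down to 3-grams over a hypothetical 4-token vocabulary so it actually finishes:

```python
import itertools
import random

random.seed(0)
vocab = ["a", "b", "c", "d"]  # hypothetical tiny vocabulary
n = 3                          # 3-grams instead of 50-grams, for tractability
all_ngrams = set(itertools.product(vocab, repeat=n))  # 4^3 = 64 possible 3-grams

seen = set()
window = []
draws = 0
# Sample uniformly at random until every possible n-gram has appeared.
while seen != all_ngrams:
    window.append(random.choice(vocab))
    draws += 1
    if len(window) >= n:
        seen.add(tuple(window[-n:]))

print(f"all {len(all_ngrams)} {n}-grams observed after {draws} draws")
```

Of course, for 50-grams over a real model vocabulary the expected wait blows up to roughly |V|^50 queries, so "query infinitely" is doing a lot of work in that sentence, but the principle holds.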
It also makes me wonder whether there were cases where the model output a 50-gram that matched the training data verbatim even though the model never actually memorized it, for example in a very structured setting like assembly code, where only a limited set of keywords is typically used and collisions seem much more likely.