It depends on a lot of things: the method you use, the quality of the hardware and software, and, if there's an "air gap" (i.e., you re-record through a mic), the recording environment.
You could probably model it as a function where...
y = number of re-recordings
x = quality of the audio
x = x0 - 2y (x0 being whatever quality you started with)
...and I guess it would be a linear function, which is to say that every re-recording knocks another 2 points off the quality. (The 2 is arbitrary and made up for the sake of argument. Strictly speaking, if each copy were literally "2 times worse" than the last, the loss would compound and the curve would be exponential rather than linear.)
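Here's a minimal sketch of that made-up model in Python, with the compounding "times worse" version next to it for comparison. Every constant (the starting quality of 100, the loss of 2 per generation, the halving factor) is arbitrary and only there for illustration:

```python
# Toy generation-loss models; every constant here is made up for illustration.

def linear_quality(n, start=100.0, loss_per_gen=2.0):
    """Linear model: each re-recording knocks a fixed amount off (x = x0 - 2y)."""
    return start - loss_per_gen * n

def compounding_quality(n, start=100.0, factor=0.5):
    """Compounding model: each copy is a fixed multiple worse than the last one."""
    return start * factor ** n

for n in (0, 1, 5, 10):
    print(n, linear_quality(n), compounding_quality(n))
```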
...but then how do you quantify x? You'd have to give it a subjective rating, like "I feel like the 10th recording is 20 times worse than the first", unless you could come up with an algorithm for comparing the deviation from the original recording and invent a scale for it. The problem there would be that you'd need a reference model to calibrate it against, and you're back to square one.
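If you did want something less subjective than a gut rating, one crude possibility is a signal-to-noise-style comparison between the original and the Nth copy. This is only a sketch under big assumptions (the two recordings have the same length and sample rate and are already time-aligned, which an actual mic re-recording wouldn't give you for free), and it still doesn't solve the calibration problem:

```python
import numpy as np

def deviation_db(original, copy):
    """Rough deviation metric: power of the difference relative to the
    original signal, in dB. More negative = closer to the original."""
    original = np.asarray(original, dtype=float)
    copy = np.asarray(copy, dtype=float)
    noise = copy - original                    # whatever the copy got wrong
    signal_power = np.mean(original ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12  # avoid log(0) for identical files
    return 10 * np.log10(noise_power / signal_power)
```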
So, to answer your questions, Froge: at a subjective speed; and nope.