Yikes, the percent output when doing >8192 petabyte parallel transfers with #curl would display wrongly.
Presumably not too many users saw this.
@bagder how do you even test that?
@wolf480pl @bagder Injection of dummy data during tests? Like when we test a network API.
@wolf480pl in theory I could make such a file with truncate, but I don't actually test this. I found it by reading the code and fixed it without a test...
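A sparse file like that can be grown to a huge nominal size without writing any real data. Here is a minimal sketch in Python of the same idea as truncate -s; the filename and size are made up, and it assumes a filesystem that allows files this large:

```python
# Rough sketch: grow a sparse file to a huge nominal size without writing data.
import os

SIZE = 8193 * 1000**5  # just past 8192 petabytes (decimal)

with open("huge-sparse.bin", "wb") as f:
    f.truncate(SIZE)   # punches a hole; needs a filesystem that allows files
                       # this large (ext4 tops out around 16 TiB, XFS goes to 8 EiB)

print(os.path.getsize("huge-sparse.bin"))  # reports the full nominal size
```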
What about server-side code sending HTTP headers with a size indication of 100 petabytes while just sending a single "A" in the output stream in a loop?
If the server needs to handle parallel requests from the same client ... the server code could just fake the chunking.
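A minimal sketch of that fake-server idea, using Python's standard http.server; the port, chunk size, and advertised size are arbitrary, and this is not how curl's own test suite does it:

```python
# Test server that advertises an absurdly large Content-Length and then just
# streams "A" bytes forever, so a client's progress/percent code can be
# exercised without real data.
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

FAKE_SIZE = 100 * 1000**5          # claim 100 petabytes
CHUNK = b"A" * 65536               # what we actually send, over and over

class FakeBigHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "application/octet-stream")
        self.send_header("Content-Length", str(FAKE_SIZE))
        self.end_headers()
        sent = 0
        try:
            while sent < FAKE_SIZE:
                self.wfile.write(CHUNK)
                sent += len(CHUNK)
        except (BrokenPipeError, ConnectionResetError):
            pass                   # client gave up; fine for a test

if __name__ == "__main__":
    # ThreadingHTTPServer handles the parallel requests mentioned above.
    ThreadingHTTPServer(("127.0.0.1", 8080), FakeBigHandler).serve_forever()
```

Pointing a parallel transfer at it with something like `curl -Z -o /dev/null -o /dev/null http://127.0.0.1:8080/a http://127.0.0.1:8080/b` would then drive the combined progress meter without moving real data.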
@dazo @wolf480pl sure, it'll just take a while to complete
Just need to bond "a few" 40 Gbit/s interfaces
@dazo @wolf480pl I can just do localhost transfers. This is on a single core so the limit is rather the CPU. I think my CPU can do almost 20 gigabytes/sec on a single core. It would take 13 years to reach 8192 petabytes.
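The 13-year figure checks out, assuming decimal petabytes and gigabytes:

```python
# Back-of-the-envelope check of the 13-year estimate above.
petabyte = 1000**5
gigabyte = 1000**3
seconds = 8192 * petabyte / (20 * gigabyte)   # 8192 PB at ~20 GB/s on one core
print(seconds / (3600 * 24 * 365))            # ~13 years
```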
If I understand correctly, then it should be enough to transfer only the last 100GiB to see the wrong display output. Shouldn't that be possible to achieve?
@jwalzer @dazo @wolf480pl not quite that easily, but if we really wanted to we could of course sneak in some kind of shortcut in there to make it possible for debugging/testing purposes
I mean, there will be a moment when you have to test the progress bar for zettabytes, won't there?
@jwalzer @dazo @wolf480pl possibly when we fix the progress meter logic to work with data larger than 64-bit sizes