Yikes, the percent output when doing >8192 petabytes parallel transfers with #curl would display wrongly.

Presumably not too many users saw this.
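For illustration only -- this is not curl's actual progress code -- the general failure class looks like this: with signed 64-bit byte counters, the intermediate multiplication in a naive "now * 100 / total" overflows once the counts grow large enough, and the printed percentage turns into nonsense. A minimal C sketch of the problem and one common workaround, scaling the operands down before multiplying:

#include <stdio.h>
#include <stdint.h>

static int naive_percent(int64_t now, int64_t total)
{
  /* "now * 100" is signed overflow (undefined behaviour) for huge counts */
  return (int)(now * 100 / total);
}

static int safer_percent(int64_t now, int64_t total)
{
  /* shrink both values until the multiplication cannot overflow;
     the ratio, and thus the percentage, stays essentially the same */
  while(total > INT64_MAX / 100) {
    now >>= 10;
    total >>= 10;
  }
  return (int)(now * 100 / total);
}

int main(void)
{
  int64_t total = INT64_MAX;   /* the 64-bit ceiling, roughly 8192 PiB */
  int64_t now = total / 2;     /* halfway through the transfer */

  printf("naive: %d\n", naive_percent(now, total));  /* nonsense value */
  printf("safer: %d\n", safer_percent(now, total));  /* 49, i.e. about 50% */
  return 0;
}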

@wolf480pl @bagder Injection of dummy data during tests? Like when we test a network API.

@bortzmeyer @bagder @wolf480pl Could also just run a patched test copy that multiplies the count before display, just to see what happens.

At least, I would think my computer would have issues actually generating 80PB/s of dummy data even if it was all zeros.
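The "multiply the count before display" idea could, hypothetically, be as small as inflating the counters by a constant factor right before they reach the meter, so a tiny real transfer exercises the petabyte-range display path. A made-up sketch, not an actual curl patch:

#include <stdio.h>

#define FAKE_SCALE ((long long)1000 * 1000 * 1000)  /* show each byte as 1 GB */

/* stand-in for the real progress meter */
static void display_progress(long long now, long long total)
{
  printf("%lld of %lld bytes\n", now, total);
}

int main(void)
{
  /* a small real transfer, scaled up to look petabyte-sized on display */
  long long real_now = 123456789, real_total = 987654321;
  display_progress(real_now * FAKE_SCALE, real_total * FAKE_SCALE);
  return 0;
}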

@wolf480pl in theory I could make such a file with truncate, but I didn't actually test this. I found it by reading the code and fixed it without a test...
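For context, truncate(1) only sets the file's apparent length without writing any data, so the result is a sparse file. A rough C equivalent using ftruncate(), with a made-up file name and size -- it also needs a filesystem (e.g. XFS or btrfs) that allows files this large:

#include <fcntl.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
  const char *path = "huge-sparse.bin";                  /* hypothetical name */
  off_t size = (off_t)9000 * 1000 * 1000 * 1000 * 1000;  /* about 9 PB */
  int fd = open(path, O_WRONLY | O_CREAT, 0644);

  /* set the length without allocating blocks -> a sparse file */
  if(fd < 0 || ftruncate(fd, size) != 0) {
    perror("creating sparse file");
    return 1;
  }
  close(fd);
  printf("created %s, apparent size %lld bytes\n", path, (long long)size);
  return 0;
}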

@bagder @wolf480pl

What about server-side code sending HTTP headers with the size indication being 100 petabytes, while just sending a single "A" in the output stream in a loop?

If the server needs to handle parallel requests from the same client ... the server code could just fake the chunking.
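A toy version of that idea might look like the following -- not production code and not anything from curl's test suite: the server claims a 100 petabyte body in Content-Length and then just streams 'A' bytes until the client gives up (port and sizes are arbitrary). A real test against curl's parallel mode would also need to serve several connections concurrently, which this single-threaded sketch does not attempt.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
  int srv = socket(AF_INET, SOCK_STREAM, 0);
  struct sockaddr_in addr = {0};
  int one = 1;
  char buf[4096];
  const char *hdr =
    "HTTP/1.1 200 OK\r\n"
    "Content-Length: 100000000000000000\r\n"   /* 10^17 bytes = 100 PB */
    "\r\n";

  if(srv < 0)
    return 1;
  signal(SIGPIPE, SIG_IGN);            /* keep running when a client disconnects */
  setsockopt(srv, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

  addr.sin_family = AF_INET;
  addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
  addr.sin_port = htons(8080);
  if(bind(srv, (struct sockaddr *)&addr, sizeof(addr)) || listen(srv, 8)) {
    perror("bind/listen");
    return 1;
  }

  memset(buf, 'A', sizeof(buf));
  for(;;) {
    int cli = accept(srv, NULL, NULL);
    if(cli < 0)
      continue;
    /* the request itself is ignored; just answer with the huge claim */
    write(cli, hdr, strlen(hdr));
    while(write(cli, buf, sizeof(buf)) > 0)
      ;                                /* stream 'A's until the client quits */
    close(cli);
  }
}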

@bagder @wolf480pl

Just need to bond "a few" 40 Gbit/s interfaces 😉

@dazo @wolf480pl I can just do localhost transfers. This is on a single core, so the limit is really the CPU. I think my CPU can do almost 20 gigabytes/sec on a single core. It would take 13 years to reach 8192 petabytes
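As a rough sanity check on those numbers (taking a petabyte as 10^15 bytes): 8192 PB is about 8.2 x 10^18 bytes, and at 20 gigabytes/sec that is roughly 4.1 x 10^8 seconds, which is indeed around 13 years.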

@bagder @dazo @wolf480pl

If I understand correctly, it should be enough to transfer only the last 100 GiB to see the wrong display output. Shouldn't that be possible to achieve?

@jwalzer @dazo @wolf480pl not quite that easily, but if we really wanted to we could of course sneak in some kind of shortcut there to make it possible for debugging/testing purposes

@bagder @dazo @wolf480pl

I mean, there will be a moment when you have to test the progress bar for zettabytes, won't there?

@jwalzer @dazo @wolf480pl possibly when we fix the progress meter logic to work with data larger than 64-bit sizes
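Purely speculative, and not anything curl does today: if the counters and the percent arithmetic used a 128-bit type (for example the GCC/Clang extension unsigned __int128), the same calculation would keep working for totals far beyond the 64-bit range, zettabytes included:

#include <stdio.h>

static int percent(unsigned __int128 now, unsigned __int128 total)
{
  /* with 128-bit operands, "now * 100" cannot overflow for realistic sizes */
  return total ? (int)(now * 100 / total) : 0;
}

int main(void)
{
  unsigned __int128 total = (unsigned __int128)1 << 70;  /* 1 ZiB */
  printf("%d\n", percent(total / 4, total));             /* prints 25 */
  return 0;
}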