> "Negative latency" will allow Stadia to predict your next button press/movement/action and do the input for you

woah hold on right there


@aidalgol @stolas how would that work with stuff like multiplayer games tho? Is it supposed to estimate and render every possible event by every player?

@aidalgol @stolas keep in mind, it has to keep doing it for each frame rendered, at 60fps that means around 16ms to get at least the most likely outcomes and push the final rendered image when confirmed
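The back-of-envelope numbers behind that objection can be sketched out (the controller/player counts below are purely illustrative assumptions, not anything about Stadia's internals):

```python
# Frame budget at 60fps, and the combinatorial blow-up of trying to
# speculatively cover every possible input combination per frame.

FPS = 60
frame_budget_ms = 1000 / FPS  # ~16.7 ms to simulate, render, encode, and ship one frame

buttons = 16                   # assume a 16-input controller, each pressed or not
players = 4                    # assume a 4-player match
states_per_player = 2 ** buttons                  # 65,536 input combinations
states_per_frame = states_per_player ** players   # joint states across all players

print(f"{frame_budget_ms:.1f} ms per frame")
print(f"{states_per_frame:.2e} joint input states to cover exhaustively")
```

Even with these modest assumptions that's on the order of 10^19 states per frame, which is why any real system would have to predict a handful of likely paths at most.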

@espectalll @stolas I've no idea, and I think that person's hypothesis is insane and unworkable, because of resource costs and how often I've seen odd timing issues in a *single* instance of single-player games. This whole thing smacks of Google just rubbing "AI" on a problem and seeing whether it makes them money.

@espectalll @stolas Especially as they are claiming it will be *faster* than local hardware. Oh, wow, you've broken physics? This is big!

@aidalgol @espectalll @stolas yeah, the number of permutations required — at 60fps, no less — would make this absurd.

Far more likely and Google-esque: train a model on how existing players behave and predict and prerender one possible path based on that. (Or a handful.)

Also likely and Google-esque: it’s a vaporware moonshot self-flagellation project that won’t leave beta.
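The "train a model on existing players and predict one likely path" idea can be sketched with a toy first-order predictor (entirely hypothetical, just to make the idea concrete):

```python
from collections import Counter, defaultdict

# Toy sketch: learn, from observed play sessions, which input most often
# follows the current one, then predict (and hypothetically prerender)
# that single most likely path.

class NextInputPredictor:
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, inputs):
        """Record consecutive input pairs from one play session."""
        for prev, nxt in zip(inputs, inputs[1:]):
            self.transitions[prev][nxt] += 1

    def predict(self, current):
        """Return the most frequently observed follow-up input, if any."""
        counts = self.transitions.get(current)
        return counts.most_common(1)[0][0] if counts else None

predictor = NextInputPredictor()
predictor.observe(["run", "run", "jump", "shoot", "run", "jump", "shoot"])
print(predictor.predict("jump"))  # → "shoot", the most common follow-up
```

A real model would be far richer than this, but the shape is the same: one predicted path (or a small beam of them), not an exhaustive enumeration.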

@aidalgol @espectalll @stolas these are some interesting ideas, though I don't see it improving things much without introducing some kind of inconsistent controller or movement artifacts

the only thing you'd be able to reliably cut down with this technique is rendering time, right? unless you're sending every single simulated frame for the local device to figure out or something

@espectalll i already hate it when google wants to save my app signing keys *for me*, so this would be no different

@espectalll can't wait to see the spectre equivalent for this

@espectalll I mean, it’s basically the same idea as branch prediction in a CPU, since whether or not you provide some input is essentially a branch.

The core idea isn’t that new, though—multiplayer games already often do some sort of prediction and then adjust state as actual data arrives (so with too much lag you get rubberbandy effects), but Google just decided to throw machine learning in there. It kind of follows from their usual strategy: offer a service, have people use it, collect data on how they use it, build a model based on that data (after a year or two).
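The existing prediction-plus-correction technique mentioned here (usually called client-side prediction with server reconciliation) can be sketched minimally; this is the standard multiplayer pattern, not anything Stadia-specific:

```python
# The client applies inputs locally right away, keeps the ones the server
# hasn't acknowledged yet, and replays them on top of each authoritative
# server snapshot. A large correction at this step is what players see
# as "rubberbanding".

def apply_input(position, move):
    """Toy simulation step: movement along a 1-D axis."""
    return position + move

class PredictingClient:
    def __init__(self):
        self.position = 0
        self.pending = []   # (sequence_number, move) not yet confirmed by server
        self.seq = 0

    def local_input(self, move):
        # Predict: apply the input immediately instead of waiting a round trip.
        self.seq += 1
        self.pending.append((self.seq, move))
        self.position = apply_input(self.position, move)

    def server_snapshot(self, authoritative_pos, last_acked_seq):
        # Reconcile: reset to the server's state, drop acknowledged inputs,
        # and replay the rest on top of it.
        self.pending = [(s, m) for s, m in self.pending if s > last_acked_seq]
        self.position = authoritative_pos
        for _, move in self.pending:
            self.position = apply_input(self.position, move)

client = PredictingClient()
client.local_input(+1)
client.local_input(+1)
client.local_input(+1)                        # predicted position: 3
client.server_snapshot(1, last_acked_seq=1)   # server has only seen input #1
print(client.position)                        # 1 + replayed inputs #2, #3 → 3
```

The "negative latency" pitch would effectively move the prediction step to the server side and feed it a learned model instead of just echoing the player's own inputs.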

@espectalll if Microsoft does something like this for xCloud, they should call it FTT (Faster Than Time), to call back to an old Mixer April Fools joke :p
