On one hand, I want to learn more arcane low-level stuff that almost nobody knows, like how to make a NIC or a switch, or how PCIe works, etc.

On the other hand, I already feel like I have no one to talk to about "simple" things like use-after-free.


tfw you just realized that wizards don't want to be alone in their towers; it's just that none of their friends understand the stuff they're doing

wrt the post upthread, do you feel the same?


No malloc, no use after free!

Problem solved! :-D

@Shamar to be clear:
it's not about a use-after-free in my code, it's not about me not knowing how to deal with it.

It's about finding a fun bug and not being able to share the excitement.


My point is that, if you think about it, even mundane issues like memory allocation might be solved in arcane ways, sometimes removing whole classes of bugs.
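The "no malloc" route can be sketched like this (hypothetical names, C++ just for illustration): every object lives in a fixed static pool, nothing is ever handed back to an allocator, so the use-after-free class is gone by construction. The worst failure modes left are running out of slots, or reusing a slot too early, which is a logic bug rather than undefined behavior.

```cpp
#include <cassert>
#include <cstddef>

// Sketch: a fixed pool of Particle objects, all storage static.
// Nothing is ever malloc'd or free'd, so a use-after-free is
// impossible by construction.
struct Particle { float x, y; bool live; };

constexpr std::size_t kMaxParticles = 64;
static Particle pool[kMaxParticles];   // storage fixed at link time

// Returns a free slot, or nullptr when the pool is exhausted.
Particle* spawn() {
    for (auto& p : pool)
        if (!p.live) { p.live = true; return &p; }
    return nullptr;
}

void kill(Particle* p) { p->live = false; }  // slot goes back to the pool
```

Note that "kill" just marks the slot free; the storage itself never moves or disappears, so a stale pointer still points at a valid (if recycled) Particle rather than at freed memory.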

@Shamar yes, avoiding dynamic memory allocation can be fun. Implementing your own allocator can be more fun. Debugging someone else's custom allocator's interaction with a custom event loop is even more fun.
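A minimal sketch of the "implement your own allocator" option (a hypothetical Arena class, not anyone's actual code): a bump allocator that carves slices out of one buffer and only ever frees everything at once. Per-object free() doesn't exist, so neither does the usual use-after-free; the rule you trade for it is that no pointer may outlive reset().

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// A bump (arena) allocator: hand out aligned slices of one fixed
// buffer, and free *all* of them at once with reset().
class Arena {
    alignas(std::max_align_t) unsigned char buf_[1024];
    std::size_t used_ = 0;
public:
    void* alloc(std::size_t n, std::size_t align = alignof(std::max_align_t)) {
        std::size_t p = (used_ + align - 1) & ~(align - 1);  // round up
        if (p + n > sizeof buf_) return nullptr;             // out of arena
        used_ = p + n;
        return buf_ + p;
    }
    void reset() { used_ = 0; }          // frees every allocation at once
    std::size_t used() const { return used_; }
};
```

An event loop interacting badly with this is exactly the fun described above: a callback stashes a pointer, someone calls reset() between ticks, and the next tick writes through a pointer into recycled arena memory.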

@wolf480pl Fortunately, talking about UaFs is basically my job, but I'm happy to talk about stuff like that here too.
@wolf480pl I dunno, I mean, what is there to say about use-after-free? Other than perhaps reminiscing about that one time when, while trying to write a game in C++ and using the STL for the first time (which I didn't understand all that well), I made in-game robots that somehow shot laser beams into mis-allocated memory in such a way that the moon, which I had modelled as an actual 3D sphere with a texture on it, had its orbit disrupted and literally fell out of the sky.

@zudlig actually that's pretty cool, did you figure out why exactly that happened?

@wolf480pl Well basically I wasn't doing the STL thing right as a result of not reading the documentation much at all, but I don't remember the details.

@zudlig hmm, do you still have the buggy version around?

I know this doesn't make practical sense but I think it would be fun to actually see what part of the code was modifying which part of the moon's memory structures and why that caused it to fall (as opposed to crashing the whole program, or replacing the moon's texture).

@wolf480pl My code for handling the orbits of the "moon" (which wasn't really meant to be a moon exactly, but close enough) and the sun was probably a little over-complicated because I was trying to replicate earth-like seasons and pretty much got there by trial and error, so who knows... but the robot weapon shots fired were frequent enough and random enough that the misbehaving data structure in question was overwriting all kinds of important stuff, and perhaps it even got to the lookup table of values that everything used for sine and cosine.

In any case, among other weird goings-on I suddenly noticed the moon appearing larger and larger as it dropped from the heavens, obviously on a collision course, which was visually quite impressive. Game dev is fun.
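For flavor, a safe reconstruction of the kind of STL mistake that can produce effects like this (purely hypothetical, not the actual game code): keeping a raw pointer into a std::vector and then letting the vector grow. The sketch below only compares addresses and never dereferences the stale pointer, because dereferencing it would be exactly the undefined behavior in question.

```cpp
#include <cstdint>
#include <vector>

// push_back past capacity() reallocates the vector's storage, so any
// previously saved pointer into it now dangles.  Writing through such a
// pointer scribbles over freed memory, which the allocator may already
// have handed to something else (say, a sine/cosine lookup table).
bool reallocation_moved_storage() {
    std::vector<float> shots;
    shots.reserve(4);
    shots.push_back(1.0f);
    // Record the address only; after reallocation, dereferencing a saved
    // float* from here would be undefined behavior.
    auto before = reinterpret_cast<std::uintptr_t>(shots.data());
    while (shots.size() < shots.capacity())
        shots.push_back(0.0f);
    shots.push_back(0.0f);   // exceeds capacity, triggers reallocation
    auto after = reinterpret_cast<std::uintptr_t>(shots.data());
    return before != after;  // any saved pointer is now stale
}
```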

@wolf480pl Tons of jobs in embedded computing, and highly paid too. When the embedded guys have a scrum meeting in my office, they're all talking enthusiastically about stuff that sounds like "reroute power to the main deflector dish" to me.

@p no, it's not about fixing the bug, it's about admiring how cool the bug is, you don't get it

(also, Zig doesn't have ASAN)

@p (or maybe you do get it, I mean you didn't initially)

@wolf480pl I used to write stuff like that and I still have friends who do. It's a shitshow.

the low-level stuff? It is a shitshow, but our whole computing is built on top of it.

And it's not like software, high-level included, isn't a shitshow. Especially the web stuff.

@wolf480pl these are different kinds of crap. And arguably, the web isn't as horrible, for the sole reason that mistakes and errors in webshit can be phased out within a few years. Meanwhile, in the hardware world, they last for decades.

Your PC still has, albeit these days purely in software, the A20 gate that was used to extend the amount of addressable memory 40 years ago, despite no longer needing it and not even having a dedicated keyboard controller anymore.
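The wraparound the A20 gate exists to control is simple enough to sketch (a toy model, not cycle-accurate hardware): an 8086 real-mode address is segment * 16 + offset, which can reach just past 1 MiB, but with only 20 address lines the result silently wrapped to low memory, and some software depended on that. Later CPUs had more address lines, so the gate was bolted on (via the keyboard controller) to force the old wraparound.

```cpp
#include <cstdint>

// Toy model of 8086 real-mode addressing.  With the A20 line masked
// ("gated"), bit 20 of the linear address is dropped, so addresses past
// 1 MiB wrap around to low memory, the original 8086 behavior.
std::uint32_t real_mode_addr(std::uint16_t seg, std::uint16_t off, bool a20_enabled) {
    std::uint32_t linear = static_cast<std::uint32_t>(seg) * 16u + off;
    return a20_enabled ? linear : (linear & 0xFFFFFu);  // keep 20 bits when gated
}
```

So FFFF:0010 reaches byte 0x100000 with A20 enabled, but wraps to byte 0 when the line is masked.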

Also, if you take a look at the Linux kernel, it has an enormous amount of hacks for particular hardware not adhering to standards. ACPI used to be, and probably still is, one of the biggest offenders.

@newt the A20 hack is beautiful.

(I probably say that because I never had to interact with it directly)

In any case, if it's cursed, tell me how it's cursed, so that if I ever need to rebuild civilization from scratch I know not to do it :P

@newt I mean, not to make the same mistake.
You won't convince me to not rebuild the civilization given the opportunity.

@wolf480pl hardware is cursed. Again, it used to be much, much worse.

But anyway, hardware is just software, except printed in silicon. If you wanna play around with really low-level stuff, buy yourself an FPGA and learn VHDL or Verilog.

@newt I had an FPGA course at uni and know some Verilog, but wish I could do more stuff... more high-speed stuff. We only did LEDs, buttons, EPP and VGA. DVI and PS/2 were demoed but not part of any assignment. PCIe wasn't described. DDR, USB and SATA were "it's cursed, don't even ask about it".

@wolf480pl PCIe is probably the easiest, due to having the least legacy baggage and being tolerant to latency. I'd go for it.

@newt got any howtos, or is it "read the spec or use existing IP core"

@wolf480pl nope. But from quick googling, the PCIe protocol is somewhat similar to the TCP/IP stack.
@wolf480pl the biggest obstacle seems to be the fact that most software is proprietary and paid, in this case. Plus you might need a good oscilloscope or a bus analyzer. And those cost a fortune.

@newt also, errors in webshit can't be phased out in a few years, because there is an underfunded government website still using $deprecated_feature.

Conversely, because things keep changing and being phased out all the time, you can't ever finish the thing, because updates to the dependencies will always break it sooner or later.

I'll take hacks set in stone over hacks changing under me every month.

@wolf480pl some really aren't set in stone. Linux used to have a particularly hard time with ACPI some years ago, to the point where my first Linux installs had to be run with acpi=off, because most mobo manufacturers didn't bother following the standards. If it works with Windows, that's good enough.

At least in webshit, there is a rather easy way to debug a problem. Hardware is a gay mess.