interesting article. good one.
[C Is Not a Low-level Language. Your computer is not a fast PDP-11. 2018-04-30 By David Chisnall. At https://queue.acm.org/detail.cfm?id=3212479 ]
esr wrote a follow-up thought with respect to golang. to read.
[Embrace the SICK 2018-05-09 By Eric Raymond. At http://esr.ibiblio.org/?p=7979 ]
for our interest in programming language design, and its bondage to the design
of the hardware it runs on, the interesting thing is
the instruction set architecture (ISA). see
and learn about CISC, RISC, and other design issues.
so now when we see programming language speed comparisons, we can say, in fact technically true, that it's not that #haskell or functional programming is slower than C, but rather, the cpus we use today are designed to run C-like instructions.
because: an algorithm is fundamentally meaningful only relative to a specific machine. Note, a new cpu design (i.e. a new instruction set architecture) is inevitable. That means all software needs to be rewritten. (practically, it needs to be recompiled. the compilers need to be rewritten. it may take years to mature.)
@jeremiah wasn't it a modern advance, to actually use gpu to do general computation? ...
i still don't fully understand how exactly a gpu differs from a cpu, or why a cpu can't just be a gpu, or how exactly one uses a gpu to do general computation. I know the basic principles... but i think one would need to dip into actual experience in designing chips, or in writing specific compilers.
@xahlee On the abstract level, it's another architecture -- but to get specific -- it's an architecture that is specialized to do very specific kinds of jobs. There happen to be lots of those jobs, giving GPUs sort of "mutant powers" relative to a normal CPU, and limitations (they're not meant to be a CPU, of course.)
Read up on CUDA. If you're interested, I'll dig up some other stuff for you.
@jeremiah just read about CUDA, only the 1st paragraph so far. It makes so much sense now!
basically an API to the GPU! ... ah, so, from a programmer's point of view, that's how one uses a gpu for general computation!
@xahlee ... that's how one does it relatively easily. Before that, it was like graphics card + driver hacking, but people were doing it (to my knowledge) as early as 2004. CUDA standard came out in 2007.
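to make the "API to the GPU" idea concrete, here is a minimal sketch in CUDA C, assuming an NVIDIA GPU and the nvcc compiler. the function and variable names (`add`, `ha`, `da`, etc.) are my own illustration, not from any particular codebase. the key CUDA pieces (`__global__`, `cudaMalloc`, `cudaMemcpy`, the `<<<blocks, threads>>>` launch syntax) are the real API.

```cuda
#include <stdio.h>

// GPU kernel: each thread adds ONE pair of elements.
// __global__ marks a function that runs on the GPU but is launched from the CPU.
__global__ void add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element index
    if (i < n) c[i] = a[i] + b[i];
}

int main(void) {
    const int n = 1024;
    size_t bytes = n * sizeof(float);
    float ha[1024], hb[1024], hc[1024];             // host (CPU) arrays
    for (int i = 0; i < n; i++) { ha[i] = i; hb[i] = 2 * i; }

    float *da, *db, *dc;                            // device (GPU) arrays
    cudaMalloc(&da, bytes);
    cudaMalloc(&db, bytes);
    cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // launch n threads in blocks of 256; all run the same kernel in parallel
    add<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("hc[10] = %g\n", hc[10]);                // 10 + 20 = 30
    cudaFree(da); cudaFree(db); cudaFree(dc);
    return 0;
}
```

the kernel launch is the essence: instead of a loop over n elements on one cpu core, you hand the whole array to thousands of gpu threads at once. that's why gpus win on data-parallel jobs and lose on branchy sequential code.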
@xahlee also think of the history of the GPU. First, it meant something that did blitting, sprites, and high-speed ops to offload tedium from 7-100mhz CPUs -- look at the old Amiga's custom graphics chips as the final phase of this evolution.
Then came the "graphics pipeline" -- that was the idea that SGI engineered. That created the OpenGL standard (and hardware protoform) that became the accelerated _3D_ graphics we knew in the late 90s.
Today, a GPU is a super MMU: multicore 3D + video processing + 2D...