> Perceptrons, which I believe are the simplest neural networks, go back to the 1950s. What's changed is that the hardware we can run them on has gotten so much faster, so much more efficient, and so much more powerful, and the datasets we can work with have gotten so much bigger. So now we can solve these problems, and it's kind of awesome what we can do.
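
For context, the perceptron really is that simple: a weighted sum, a threshold, and an error-driven weight update. Here is a minimal sketch of Rosenblatt's learning rule on an AND gate, using NumPy; the learning rate and epoch count are arbitrary illustrative choices, not anything from the quote:

```python
import numpy as np

# Tiny training set: the AND function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w = np.zeros(2)   # weights
b = 0.0           # bias
lr = 0.1          # learning rate

# Perceptron rule: nudge the weights whenever a prediction is
# wrong. Guaranteed to converge on linearly separable data.
for epoch in range(20):
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        w += lr * (target - pred) * xi
        b += lr * (target - pred)

print([1 if xi @ w + b > 0 else 0 for xi in X])  # -> [0, 0, 0, 1]
```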

It's incredible how even now, if you run a normal multi-layer neural network on a CPU machine, it's quite slow. The GPU is a big advancement, and the other thing is that there have been some algorithmic advances on the vision side of deep learning.
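
The speed gap is easy to see for yourself. Here is a rough sketch, assuming PyTorch is installed and a CUDA GPU may or may not be present; the matrix size is an arbitrary stand-in for one wide dense layer:

```python
import time
import torch

# One big dense-layer pass, timed on CPU and (if available) GPU.
x = torch.randn(4096, 4096)
w = torch.randn(4096, 4096)

start = time.perf_counter()
for _ in range(10):
    y = torch.relu(x @ w)
print(f"CPU: {time.perf_counter() - start:.3f}s")

if torch.cuda.is_available():
    xg, wg = x.cuda(), w.cuda()
    torch.cuda.synchronize()           # make the timing honest
    start = time.perf_counter()
    for _ in range(10):
        yg = torch.relu(xg @ wg)
    torch.cuda.synchronize()
    print(f"GPU: {time.perf_counter() - start:.3f}s")
```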

Thomas Edison was really freaking good at making money and keeping the IP for himself, so obviously he's going to promulgate the view that it was a single genius, a loner working super hard in a room, who owns everything that came from it. Of course that's going to be his mission; that's the founder.

Why does pretty much everyone associate deep learning with Google's TensorFlow? It's just a nightmare to use in practice.

Despite the tremendous hype around it, the number of people who are actually using it to build real things that make a difference is probably very low.

You have a bunch of people with physics PhDs who maybe wrote some R code in graduate school. And they suddenly have to compile all these packages with GPU support so they can get CUDA running, and they're just like, "We can't do that."
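
And in practice, the first hurdle is usually just confirming that the GPU toolchain came through at all. A minimal sanity check, assuming the PyTorch stack rather than hand-compiled packages:

```python
import torch

# Quick check before fighting with builds: is CUDA actually wired up?
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```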
