I don’t really have anything to talk about, but I made this blog yesterday and I want to make it more than 24 hours before I abandon it, so I’m forcing myself to write something. It’s not like it matters because nobody is reading this. I just put it up because I pay $10/year for jaimerump.com and I’m doing nothing with it.
I’ve always wanted to learn about AI, but I never had any idea where to start, because I didn’t know anything about the field. Lately, I’ve seen a few job postings about machine learning and natural language processing that mention TensorFlow as a requirement, so I figured that must be a reasonable place to start.
Turns out that was a pretty naive assumption. AI is hellishly complex, and even with a great tool like TensorFlow helping you out, it’s still extremely confusing and difficult to get into. Luckily, Martin Görner did a great talk at Google Next to help people like me break into the field.
I think the first thing you need to know about the field is that nobody has any idea how intelligence works. It’s easy to program computers to be good at a single task, but that’s not really intelligence. A human still has to think of all of the use cases, all of the things that can go wrong, and tell the computer what to do in every one of those cases.
The only true intelligence we know of, as in a system that can adapt and change itself, comes from organic brains. Programmers like to copy structures from nature and from other disciplines, so naturally we implement artificial intelligence with neural nets.
My understanding is that a neural net is like a Plinko board. In Plinko, you drop a chip from the top, it bounces around in the pegs, and eventually lands in one of the slots at the bottom. To some degree, you can predict where the chip will go based on where you drop it from and how it hits the pegs. In a neural network, you drop an input in, it bounces around the neurons, and something usable falls out. You train a neural network by giving it an input and telling it what the output should be, so it can adjust how the neurons bounce things around to produce the right output. It’s weird and arcane and I still don’t get it, but it’s a lot better than my prior understanding of “it’s literally magic.”
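To make the Plinko idea a little more concrete, here’s a toy sketch of that training loop, assuming the simplest possible “board”: a single neuron learning the AND function. This is plain Python, not TensorFlow, and the names (`predict`, the learning rate, the epoch count) are all just illustrative choices. You drop inputs in, compare where the chip landed to where it should have landed, and nudge the pegs (weights) accordingly:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # Squashes any number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

# Training data for AND: each pair of inputs, and the output we
# "tell it" the answer should be.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

# One "peg": two weights and a bias, started at small random values.
w = [random.uniform(-1, 1) for _ in range(2)]
b = 0.0
lr = 0.5  # how hard we nudge the pegs after each drop

for _ in range(5000):
    for inputs, target in data:
        out = sigmoid(w[0] * inputs[0] + w[1] * inputs[1] + b)
        err = target - out
        # Nudge the weights so the next chip lands closer to the right slot.
        grad = err * out * (1 - out)
        w[0] += lr * grad * inputs[0]
        w[1] += lr * grad * inputs[1]
        b += lr * grad

def predict(inputs):
    # Drop a chip in and see which slot it lands closest to.
    return sigmoid(w[0] * inputs[0] + w[1] * inputs[1] + b)

for inputs, target in data:
    print(inputs, round(predict(inputs)), "(wanted", str(target) + ")")
```

Real networks stack thousands of these neurons in layers and use fancier update rules, but the loop is the same shape: drop, compare, nudge, repeat.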
So yeah, AIs are weird arcane plinko boards that you put problems into and answers come out. Kind of like how computers are weird arcane transistors that you put problems into and answers come out.