In 2018, I helped build a system that automatically inspected dishwashers on an assembly line. I taught a computer to recognize when a wire was unseated: the circuit worked, but the clip meant to hold the wire in place wasn't engaged. When the computer thought a wire was unseated, that dishwasher was sent to a technician. Instead of being told exactly what to look for, the computer learned from pictures. I showed it thousands of pictures labeled as seated or unseated, and it learned to classify parts it had never seen.
The computer was more accurate than a human. At least, it normally was.
One day, the technician was overwhelmed by dozens of dishwashers. The line's manager turned off the system. I had to figure out what had gone wrong.
A computer doesn't learn the way a person does. If I showed you a few unseated wires, you could spot the next one. A computer needs thousands of pictures to identify them accurately. It doesn't know what an unseated wire is. It simply memorizes patterns of good and bad parts, then finds which memorized patterns are closest to the part it's looking at. It only memorizes; it doesn't learn.
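To make "it only memorizes" concrete, here is a toy sketch, not the production system: a nearest-neighbor "classifier" that stores labeled examples and labels a new part by whichever stored example it most resembles. The two-number features (clip gap, wire angle) and all the values are invented for illustration.

```python
# Toy illustration: classification by memorization, not understanding.
import math

# Hypothetical features for each memorized picture: (clip gap, wire angle).
memorized = [
    ((0.1, 2.0), "seated"),
    ((0.2, 1.5), "seated"),
    ((3.0, 25.0), "unseated"),
    ((2.5, 30.0), "unseated"),
]

def classify(features):
    """Return the label of the closest memorized example."""
    nearest = min(memorized, key=lambda ex: math.dist(ex[0], features))
    return nearest[1]

print(classify((0.15, 1.8)))  # near the seated examples -> "seated"
print(classify((2.8, 27.0)))  # near the unseated examples -> "unseated"
```

As long as new parts look like the memorized ones, this works beautifully, which is exactly why it was more accurate than a human on a normal day.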
We found that someone was using a black marker to double-check their work. They pushed in the wire, then pressed on it a second time with the tip of the marker, leaving a black mark. The computer had never been trained on pictures with black markings. Those pictures confused it: its tuned patterns for seated and unseated both matched equally well. The computer made its best guess from the patterns it knew, and dozens of perfectly good wires were sent to the technician. Thousands of hours of work across a team of engineers, programmers, and technicians had been foiled by a black marker.
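The failure mode can be sketched in the same toy terms, with invented numbers, not the production model: a feature the training pictures never contained, the darkness of the mark, swamps the small differences the computer actually memorized, so both patterns end up matching almost equally well.

```python
# Toy illustration: an unseen feature drowns out the memorized differences.
import math

# Hypothetical features: (clip gap, wire angle, darkness of marking).
# Every training picture had darkness 0, so the memorized patterns do too.
seated_pattern = (0.1, 2.0, 0.0)
unseated_pattern = (3.0, 25.0, 0.0)

plain_wire = (0.2, 1.8, 0.0)    # a normal seated wire
marked_wire = (0.2, 1.8, 80.0)  # the same wire, with a heavy black mark

for wire in (plain_wire, marked_wire):
    d_seated = math.dist(wire, seated_pattern)
    d_unseated = math.dist(wire, unseated_pattern)
    print(f"distance to seated: {d_seated:.1f}, to unseated: {d_unseated:.1f}")

# The plain wire is vastly closer to the seated pattern. With the mark,
# both distances are large and nearly equal: the computer is guessing.
```

Nothing about the wire changed; only a pattern the computer had never memorized appeared, and its best guess became a coin flip.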
Where am I memorizing patterns, instead of learning to see?