Yet another GPGPU-based AI algorithm implemented in my GL Lua Shell. This time it's a Hopfield network.
The demo works by downsampling all the images to a smaller size, something like 32x32, and greyscaling them. It does this because the weight matrix (the one that appears in the network's Lyapunov energy function) has to be the number of inputs, squared. Thirty-two, squared, squared, is a large number: about a million entries. That's also why I greyscaled them: thirty-two times thirty-two times four channels, squared, is an even bigger number, around seventeen million, and my graphics card doesn't have that much memory. With that said, feel free to optimize the matrix multiplication code to take advantage of the weight matrix being symmetric. I'm sure there are other optimizations out there too.
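To make the sizes concrete, here is a minimal CPU-side sketch (in Python/NumPy, not the shader code the demo actually uses) of Hebbian storage for a Hopfield network: the weight matrix is the sum of outer products of the stored bipolar patterns, with the diagonal zeroed so no unit drives itself.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian storage: W = sum of outer products of bipolar (+1/-1)
    patterns, diagonal zeroed. W is n x n for n input units."""
    n = patterns[0].size
    W = np.zeros((n, n))
    for p in patterns:
        p = p.reshape(-1).astype(float)
        W += np.outer(p, p)
    np.fill_diagonal(W, 0.0)
    return W / len(patterns)

# A 32x32 greyscale image thresholded to {-1, +1} gives n = 1024 units,
# so W has 1024^2 (~1M) entries; keeping all four RGBA channels would
# give n = 32*32*4 = 4096 and ~16.7M entries.
```

Note that `W` comes out symmetric by construction, which is what makes the symmetric-matrix-multiply optimization mentioned above possible.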
I first designed it to converge each channel separately. Writing a shader to do that is pretty easy. However, when you feed it an input pattern and ask it to converge, you run the risk of the separate channels converging on different stored images. A unique effect, albeit not a desired one.
The 'doUpdate' button converges the pattern in initialstate.jpg to whatever the Hopfield memory settles on. Feel free to change initialstate.jpg and add images to the memory. The storage capacity of a Hopfield network scales with the number of units (the classical estimate is roughly 0.14 patterns per unit, so a 32x32 network should hold on the order of a hundred random patterns), so feel free to add away. If this tends to converge to nowhere, try the 'doSingleUpdate' button instead.
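A plausible reason the single-update button behaves better: updating all units at once (which is what a full-screen shader pass naturally does) is not guaranteed to lower the network's energy and can oscillate, while updating one unit at a time provably never raises it. Here is a hedged NumPy sketch of both update styles, my guess at what 'doUpdate' and 'doSingleUpdate' correspond to, not the demo's actual shader code:

```python
import numpy as np

def update_sync(W, s):
    # Synchronous update: every unit recomputed at once
    # (presumably what 'doUpdate' does per pass).
    return np.where(W @ s >= 0, 1, -1)

def update_single(W, s, i):
    # Asynchronous update of one unit i (presumably 'doSingleUpdate');
    # a single-unit flip can never increase the energy below.
    s = s.copy()
    s[i] = 1 if W[i] @ s >= 0 else -1
    return s

def energy(W, s):
    # The Lyapunov function the network descends as it converges.
    return -0.5 * s @ W @ s
```

If the synchronous version gets stuck cycling between two states, sweeping `update_single` over the units in some order will still settle into a local energy minimum.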