On Hopfield Networks. From the general model to a special case | by Arun Jagota | Oct, 2024

Towards Data Science

The 2024 Nobel Prize in Physics went to John Hopfield and Geoffrey Hinton; Hopfield was recognized for his work on the so-called Hopfield Network.

A few decades back, my PhD thesis was on Hopfield-style networks, so I figured this was a good time to write this post.

I’ll focus on Hopfield networks. I’ll begin with the simplest example and illustrate how it works, then bring in the concept of an energy function and what it means to minimize it locally. Next, I’ll cover the special case introduced in my PhD thesis. After that, I’ll return to the more general case to discuss two use cases, associative memories and optimization, and finally revisit the special case to instantiate these applications in a more specific setting.

Let’s begin at the beginning. First off, what is a Hopfield Network?

Consider the picture below.

Interpret it as follows. We have two neurons, each of which can take on a value of 1 (think “firing”) or -1 (think “not firing”). The two neurons are connected by a positive weight, denoting a synapse of positive influence. That is, from the perspective of either neuron, say the first one, it wants the other one to be in the same state as itself, “firing” or “not firing”.
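This tiny network is easy to sketch in code. The snippet below is a minimal illustration, not taken from the article's figure: the weight value `w = 1.0` and the function name `update` are my own choices. Each neuron updates by taking the sign of the weighted input it receives from the other neuron, so a positive weight pulls the two neurons toward the same state.

```python
# A minimal sketch of the two-neuron network described above (illustrative
# weight and function names; the article's figure only shows the structure).

w = 1.0  # positive synaptic weight connecting the two neurons

def update(state, i):
    """Asynchronously update neuron i (0 or 1) in a two-neuron network.

    state is a list of two values, each +1 ("firing") or -1 ("not firing").
    Neuron i takes the sign of the weighted input from the other neuron.
    """
    other = state[1 - i]
    state[i] = 1 if w * other >= 0 else -1
    return state

# If neuron 0 is firing and neuron 1 is not, updating neuron 1
# pulls it into the same state as neuron 0.
state = [1, -1]
update(state, 1)
print(state)  # -> [1, 1]
```

With a positive weight, whichever neuron updates simply copies the other's state, so the two stable configurations are [1, 1] and [-1, -1].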