Building an associative memory with Neo4j

A short journey through neuroscience, neural networks and graph databases

Alberto De Lazzari
Oct 23, 2020 · 8 min read

At GraphConnect 2017, during the keynote, Emil Eifrem said that his favourite graph database is the brain (you can find the keynote here). So I wanted to take up the challenge and use Neo4j to build a brain. You’re probably thinking that I’m completely crazy, but let me explain what I mean.

Neural networks are difficult to understand; they behave like a black box. You train your model, feed the network a certain input and get an answer, but you don’t really know what happens under the hood. The underlying infrastructure of nodes and relationships (or, if you prefer, neurons, dendrites and synapses) is completely opaque. Moreover, there are plenty of tools, frameworks and libraries that let anyone play with machine learning and deep learning algorithms, but all the nuts and bolts are hidden. I find it intriguing to understand how a neural network actually works: the learning algorithm, the mathematics behind it, and the connections with neuroscience and biology.

In my opinion, one of the most distinctive capabilities of the human brain is its associative memory: the ability to learn and remember relationships between unrelated items. So why not build an associative memory model using Neo4j and Cypher?

The network model

I use the Hopfield network model to implement a human memory model. It’s a form of RNN (recurrent neural network) invented by John Hopfield in 1982. The units (neurons) in this network are binary threshold units, meaning their states can take only two values. Each unit is a McCulloch-Pitts neuron, the first computational model of a neuron, proposed by Warren McCulloch (a neuroscientist) and Walter Pitts (a logician) in 1943. It takes inspiration from the biological neuron: the dendrites take inputs from other neurons, the soma processes those inputs, and the output is propagated through the axon (and then to the synapses, the points of contact with other neurons). A McCulloch-Pitts neuron is composed of two parts.

The first one is the sum of all the inputs coming from the other neurons (through the synapses):

The dendrite taking the input from the synapses
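The figure is an equation; reconstructed from the description (the symbols h_i, w_ij and x_j are my notation), it is the weighted sum of the incoming signals, where x_j is the state of the j-th presynaptic neuron and w_ij is the weight of the synapse between j and i:

$$h_i = \sum_{j} w_{ij}\, x_j$$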

The second part is the processing function (let’s say that the function g is the soma):

The soma processes the input according to a threshold

So the unit is a step function. In my examples I will use polar values: the state will be 1 if the input is greater than or equal to the threshold θ, and -1 if it is less than the threshold.
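Written out (my reconstruction, using the notation introduced above and the polar values just described), the activation of unit i is:

$$y_i = g(h_i) = \begin{cases} 1 & \text{if } h_i \ge \theta \\ -1 & \text{if } h_i < \theta \end{cases}$$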

I have the neurons, but the network needs a way to learn. So how will the network learn my input patterns? How will it evolve the weights of the connections between the neurons, and the state of each neuron as well? There are different learning rules, but I chose the Hebbian rule. Hebb’s rule is a postulate proposed by Donald Hebb in 1949 that describes how neuronal activity influences the connections between neurons. Here is Hebb’s principle:

When the axon of a cell A is close enough to excite a cell B and takes part in its activation in a repetitive and persistent way, some type of growth process or metabolic change takes place in one or both cells, such that the efficiency of cell A in activating B increases.

This means that “neurons that fire together wire together”: the connection between two neurons is strengthened when the neurons fire simultaneously. But there’s a drawback to keep in mind. The Hebbian rule works well as long as all the input patterns are orthogonal or uncorrelated. This requirement of orthogonality places serious limitations on the Hebbian learning rule, since in a real use case we can’t expect all the patterns to be orthogonal.

The learning rule for this model is described by the following equation:

Hebbian learning rule for the Hopfield network model
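For N neurons and p stored patterns x^μ, the standard Hebbian prescription for the Hopfield weights (my rendering of the formula in the figure) is:

$$w_{ij} = \frac{1}{N} \sum_{\mu=1}^{p} x_i^{\mu} x_j^{\mu}, \qquad w_{ii} = 0$$

Every stored pattern in which elements i and j agree pushes the weight up, and every pattern in which they disagree pushes it down.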

The data set

My data set is a small homage to Neo4j: I decided to take pictures of some well-known Neo4j people. The images are grayscale and are converted to binary form using a simple threshold function.

A very representative subset of Neo4j guys in their binary form

The original images are 120px wide and 160px high, so the array representing an image has 120x160 = 19,200 elements. In order to keep the neural network small, I scale each dimension down by 50% (60x80 = 4,800).

The network with Neo4j

Let’s put everything together. First I create the network, which is an easy task with Cypher. I have 5 patterns to store in the network and each pattern has 4,800 elements. The network is a single-layer network, so the number of neurons is 4,800.

Generate all the neurons with a simple Cypher query
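The actual query is in the image above; a minimal sketch of what it could look like (the Neuron label and the state property are my assumptions) is:

// Create one Neuron node per pixel of a 60x80 image (4,800 units),
// each starting in the -1 state.
UNWIND range(0, 4799) AS id
CREATE (:Neuron {id: id, state: -1});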

It’s a fully connected graph with no self-loops (no connection from a node to itself). The network model is undirected, so the adjacency matrix is symmetric and there is no need to create a bidirectional connection between two nodes (in Neo4j relationships must have a direction, but that’s not a limitation at all). However, the number of connections is more than 11 million, so it’s better to iterate and batch the creation of the relationships.

Hebbian rule as a Cypher query
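The real query is in the image above; as a rough sketch (assuming each node stores its 5 pattern values in a patterns list property, and using APOC for batching), it might look something like this:

// Connect every pair of neurons once (a.id < b.id avoids self-loops and duplicates)
// and set the Hebbian weight computed from the stored patterns.
CALL apoc.periodic.iterate(
  "MATCH (a:Neuron), (b:Neuron) WHERE a.id < b.id RETURN a, b",
  "CREATE (a)-[s:SYNAPSE]->(b)
   SET s.weight = reduce(w = 0.0, k IN range(0, size(a.patterns) - 1) |
                         w + a.patterns[k] * b.patterns[k]) / 4800.0",
  {batchSize: 10000, parallel: false});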

With two simple Cypher queries I have a single-layer recurrent neural network and its learning rule. We can check whether each node has the expected degree with the following query.

Get the distinct degree of nodes
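A sketch of such a check (SYNAPSE is the relationship type assumed above) could be:

// In a fully connected network of 4,800 units
// every neuron should have degree 4,799.
MATCH (n:Neuron)-[r:SYNAPSE]-()
WITH n, count(r) AS degree
RETURN DISTINCT degree;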

Let’s see if we can get any insights from the graph in terms of network structure. According to the Hebbian rule, for a given pattern μ, if the two elements x_i^μ and x_j^μ are equal, then for the neurons i and j the product x_i^μ · x_j^μ will be positive; otherwise it will be negative. I want to check whether the weights of the connections reflect the values of the patterns. The Cypher query is pretty simple.

Neuron n doesn’t fire together with m
An example of neuron n and its negative connections with its neighbours m
Product of neuron i and neuron j: if negative, i and j are not in sync
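A possible sketch of such a query (neuron id 42 is just an illustrative pick, and the property names are the ones assumed above):

// List the neighbours a given neuron is "out of sync" with:
// a negative weight means the two pixels disagree in most of the stored patterns.
MATCH (n:Neuron {id: 42})-[s:SYNAPSE]-(m:Neuron)
WHERE s.weight < 0
RETURN m.id AS neighbour, s.weight AS weight
ORDER BY weight;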

As you can see, Neo4j lets you investigate the structure of the neural network and explore how the connections change and are updated. Neural networks work like a black box, but having the entire network in a graph database lets us understand what happens under the hood. I’m not talking about the explainability of the result but about the explainability of the algorithm (the rightmost box in the image below; for further details take a look at this blog post).

Three categories of explainability as described by Amy Hodler

Test the associative memory

I want to see if my associative memory is able to retrieve one of the previously stored images, but I want to submit a “corrupted” version of the image and see if the network can associate this corrupted version (an unrelated item) with the original. Let’s take Michael Hunger’s binary image and change it. The first “corrupted” image will be a cropped one, while the second will be a noisy one (20% of the pixels are flipped).

Cropped version of Michael Hunger

I submit the image to the network and iterate until I reach convergence, that is, until I reach a stable state (a local minimum of the energy function of the Hopfield model). After a few iterations the network reaches a stable state: the original Michael Hunger. I repeat the same process with the following image, and again after a few iterations the network reaches a stable state.

Noisy version of Michael Hunger
Stable state for Michael Hunger
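The recall iteration can be sketched in Cypher (with a threshold of 0 and the same assumed labels and properties as above); one synchronous update sweep could be:

// Compute each neuron's local field from its neighbours and stage the new state.
MATCH (n:Neuron)-[s:SYNAPSE]-(m:Neuron)
WITH n, sum(s.weight * m.state) AS localField
SET n.nextState = CASE WHEN localField >= 0 THEN 1 ELSE -1 END;

// Commit the staged states and count how many neurons flipped;
// repeating both queries until this returns 0 means a stable state was reached.
MATCH (n:Neuron)
WITH n, CASE WHEN n.nextState <> n.state THEN 1 ELSE 0 END AS flipped
SET n.state = n.nextState
RETURN sum(flipped) AS changedNeurons;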

Now let’s mess things up: I test the network with a very noisy image where half of the pixels are flipped. Probably no human being would be capable of recalling the original image given the following one, so I’m pretty curious to see the output of the neural network.

Super noisy version of Michael Hunger

Surprisingly, the network is able to recall Michael Hunger again, but looking closer at the image you can see that it is the opposite of the original one (yes, now Michael Hunger’s beard is white instead of black). What is happening is pretty simple: the network has reached a stable state (a valid local minimum), but it’s a spurious state, a state that was never stored.
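One way to see why the inverted pattern shows up: the Hopfield energy

$$E(\mathbf{s}) = -\frac{1}{2} \sum_{i \ne j} w_{ij}\, s_i s_j$$

does not change when every state is flipped (s → -s), so for each stored pattern its negative image is also a local minimum, and an input with half of the pixels flipped is just as close to the inverted attractor as to the original one.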

Conclusions

I just wanted to showcase how easily you can implement a simple neural network with just a few Cypher queries (and a little bit of Python for image processing) and how helpful Neo4j can be when you want to understand the underlying structure of the network. A graph database can help at different levels when you’re struggling with the explainability of your model, especially when it’s a black-box model. Emil Eifrem’s keynote and explainability were just an excuse for an uncommon use of Neo4j.

All the steps are available in a Jupyter notebook at the following link (I will also add an Apache Zeppelin notebook soon).

Written by Alberto De Lazzari

I’m a Graph Database Expert and an Enterprise Architect with strong experience in the Banking, Financial and Insurance industries.
