
graph neural network

1. basic idea

1.1. features

At layer \(l\), we have the following representation of the graph

  • \(V_l\) – a set of vector representations: one per node
  • \(E_l\) – a set of vector representations: one per edge
  • \(U_l\) – one vector to represent the graph as a whole – this is useful if we ever want to predict properties of the entire graph, e.g. "how many cycles does this graph have?"
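The three feature sets above can be sketched as plain arrays; this is a minimal illustration, with all dimensions and variable names (`node_feats`, `edge_feats`, `global_feat`) chosen arbitrarily:

```python
import numpy as np

# A toy graph: 3 nodes, 2 undirected edges (0-1 and 1-2).
edges = [(0, 1), (1, 2)]

# Layer-l representations (feature dimension 4 is arbitrary):
node_feats = np.random.randn(3, 4)   # V_l: one vector per node
edge_feats = np.random.randn(2, 4)   # E_l: one vector per edge
global_feat = np.random.randn(4)     # U_l: one vector for the whole graph
```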

1.2. message passing

1.2.1. nodes only

  • At layer \(l\), for a node \(v\), take the neighboring node representations at layer \(l\), aggregate them (e.g., concatenate or sum), and then feed the result through an update function: usually a small neural net. This gets you the representation for layer \(l+1\)
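A minimal numpy sketch of this nodes-only layer, using sum aggregation and a single linear + ReLU map as a stand-in for the update network (the name `message_pass_nodes` and all weights are illustrative):

```python
import numpy as np

def message_pass_nodes(node_feats, edges, W, b):
    """One nodes-only message-passing layer: sum each node's neighbor
    vectors, then apply a linear + ReLU update (a stand-in for a small
    neural net). Returns the layer-(l+1) node representations."""
    agg = np.zeros_like(node_feats)
    for u, v in edges:                 # undirected: each endpoint sees the other
        agg[u] += node_feats[v]
        agg[v] += node_feats[u]
    return np.maximum(agg @ W + b, 0)  # update function -> V_{l+1}

# Toy usage with arbitrary dimensions.
rng = np.random.default_rng(0)
node_feats = rng.normal(size=(3, 4))
edges = [(0, 1), (1, 2)]
W, b = rng.normal(size=(4, 4)), np.zeros(4)
next_feats = message_pass_nodes(node_feats, edges, W, b)
print(next_feats.shape)  # (3, 4)
```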

1.2.2. nodes and edges

  • Let's expand the procedure above
  • As above, at layer \(l\), construct the representation for an edge \(e\) at layer \(l+1\) from the node representations of its endpoints at layer \(l\)
  • Then, construct the representation for a node \(v\) at layer \(l+1\) from the representations of its incident edges at layer \(l+1\)
  • Note that the ordering of the node and edge representation construction can be flipped
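The edge-then-node ordering can be sketched as follows; this is an illustrative toy (the function name, the concat-then-project edge update, and all weight shapes are assumptions, not a canonical formulation):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

def edge_then_node_update(node_feats, edges, We, Wn):
    """Edge-first variant: build E_{l+1} from endpoint nodes at layer l,
    then build V_{l+1} from the incident edges at layer l+1."""
    # Edge update: concatenate the two endpoint node vectors, then project.
    edge_next = relu(
        np.stack([np.concatenate([node_feats[u], node_feats[v]])
                  for u, v in edges]) @ We
    )
    # Node update: sum the updated incident-edge vectors, then project.
    agg = np.zeros((node_feats.shape[0], edge_next.shape[1]))
    for i, (u, v) in enumerate(edges):
        agg[u] += edge_next[i]
        agg[v] += edge_next[i]
    return relu(agg @ Wn), edge_next

# Toy usage: 3 nodes with 4-dim features, 2 edges, 5-dim edge features.
rng = np.random.default_rng(0)
node_feats = rng.normal(size=(3, 4))
edges = [(0, 1), (1, 2)]
We = rng.normal(size=(8, 5))  # concat of two 4-dim endpoints -> 5-dim edge
Wn = rng.normal(size=(5, 4))  # aggregated 5-dim edges -> 4-dim node
node_next, edge_next = edge_then_node_update(node_feats, edges, We, Wn)
```

Flipping the order just swaps which update runs first and which layer's vectors it reads.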

1.2.3. global representation

  • Let's continue to expand the procedure
  • At each layer, when constructing the node and edge representations, also take the global representation \(U_l\) as input
  • When constructing the global vector at layer \(l+1\), take as input the node and edge representations at \(l+1\)
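A compressed sketch of conditioning on the global vector and then updating it; to keep it short, this toy omits the edge update and pools nodes by mean (every name and weight shape here is illustrative):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

def layer_with_global(node_feats, global_feat, edges, Wn, Wu):
    """One layer where the node update also reads U_l, and U_{l+1}
    is built by pooling the fresh layer-(l+1) node vectors. The edge
    update is omitted from this sketch for brevity."""
    n = node_feats.shape[0]
    agg = np.zeros_like(node_feats)
    for u, v in edges:
        agg[u] += node_feats[v]
        agg[v] += node_feats[u]
    # Node update sees its aggregated neighbors *and* the global vector.
    node_next = relu(
        np.concatenate([agg, np.tile(global_feat, (n, 1))], axis=1) @ Wn)
    # Global update pools the layer-(l+1) node vectors.
    global_next = relu(node_next.mean(axis=0) @ Wu)
    return node_next, global_next

# Toy usage with 4-dim node and global features.
rng = np.random.default_rng(0)
node_feats = rng.normal(size=(3, 4))
global_feat = rng.normal(size=(4,))
edges = [(0, 1), (1, 2)]
Wn = rng.normal(size=(8, 4))  # concat(4-dim agg + 4-dim U_l) -> 4-dim node
Wu = rng.normal(size=(4, 4))  # pooled nodes -> 4-dim U_{l+1}
node_next, global_next = layer_with_global(node_feats, global_feat, edges, Wn, Wu)
```

A readout like `global_next` is what you would feed to a head that predicts whole-graph properties, e.g. cycle counts.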

2. sources

Created: 2024-07-15 Mon 01:27