[Figure] Comparison of extrapolation in ReLU MLPs and GNNs. Left: MLP predictions both within and outside the training data range. Right: a hypothetical GNN architecture with appropriate non-linearities in the input representation. Center: the main hypotheses and the theorems supporting them.

How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks

Neural networks have been successful in a variety of tasks due to their ability to learn and generalize patterns in the input data. One common task is supervised learning of a function that maps inputs to outputs. Feedforward neural networks are a type of neural network that processes fixed-size inputs layer …
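The figure's left panel illustrates the paper's central observation for MLPs: a ReLU network is piecewise linear, so outside the region containing all of its "kinks" its predictions become exactly linear, whatever function it was fit to. A minimal sketch with a hand-picked one-hidden-layer ReLU network (illustrative weights, not taken from the paper) shows this directly:

```python
import numpy as np

# Tiny 1-hidden-layer ReLU MLP with hand-picked (hypothetical) weights:
# f(x) = W2 . relu(W1 * x + b1) + b2
W1 = np.array([1.0, -1.0, 2.0])   # hidden-layer weights
b1 = np.array([0.0, 1.0, -3.0])   # hidden-layer biases -> kinks at x = 0, 1, 1.5
W2 = np.array([0.5, -1.0, 0.3])   # output weights
b2 = 0.2

def f(x):
    """Evaluate the ReLU MLP at a scalar input x."""
    return W2 @ np.maximum(W1 * x + b1, 0.0) + b2

# Beyond the outermost kink (x > 1.5) every ReLU unit is frozen on or off,
# so f is exactly linear there: equal steps in x give equal steps in f(x).
xs = np.array([2.0, 3.0, 4.0, 5.0])
steps = np.diff([f(x) for x in xs])
print(np.allclose(steps, steps[0]))  # constant slope -> linear extrapolation
```

The same check fails inside the training-like region (e.g. between x = 0 and x = 1.5), where the kinks make the function nonlinear; that contrast is exactly what the left panel of the figure depicts.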
