Restricted Boltzmann Machines (RBMs) are a type of generative neural network used in unsupervised learning tasks, such as dimensionality reduction, feature learning, and collaborative filtering. They were introduced by Paul Smolensky in 1986 under the name "Harmonium" and rose to prominence in the mid-2000s, when Geoffrey Hinton and his collaborators developed fast training algorithms for them.
An RBM is a neural network with two layers: a visible layer and a hidden layer. The visible layer represents the input data, while the hidden layer represents a set of latent variables that capture the underlying structure of the data. Every visible node is connected to every hidden node, but there are no connections between nodes within the same layer, so the network forms a bipartite graph. This "restriction" is what distinguishes an RBM from a general Boltzmann machine.
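This bipartite structure can be sketched in a few lines of NumPy. The layer sizes below are illustrative, not taken from the text; the key point is that the only parameters are one weight matrix between the layers and one bias vector per layer, with no within-layer weights. Because hidden units are conditionally independent given the visible layer, they can all be sampled in parallel:

```python
import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden = 6, 3  # illustrative sizes, not from the text

# Parameters of a bipartite RBM: one weight matrix between the two
# layers plus a bias vector for each layer. No visible-visible or
# hidden-hidden weights exist, reflecting the absence of
# within-layer connections.
W = rng.normal(0, 0.1, size=(n_visible, n_hidden))
b_visible = np.zeros(n_visible)
b_hidden = np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_hidden(v):
    """P(h_j = 1 | v) depends only on the visible layer, so all
    hidden units can be sampled independently and in parallel."""
    p = sigmoid(v @ W + b_hidden)
    return p, (rng.random(p.shape) < p).astype(float)

v = rng.integers(0, 2, size=n_visible).astype(float)  # a binary input
p_h, h = sample_hidden(v)
```

A symmetric `sample_visible` function (using `W.T` and `b_visible`) maps hidden states back to the visible layer, which is what makes the model generative.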
RBMs model the data probabilistically. Each node in the visible and hidden layers takes a binary state (1 or 0) with a probability that depends on the states of the nodes in the opposite layer. The network is trained using contrastive divergence, a procedure that approximates the gradient of the log-likelihood and adjusts the connection weights so that the training data become more probable under the model.
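One step of the simplest variant, CD-1, can be sketched as follows. This is a minimal single-example illustration, assuming binary units and the same toy dimensions as above; a practical implementation would use mini-batches and many epochs. The update is the difference between correlations measured on the data (positive phase) and on a one-step reconstruction (negative phase):

```python
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 3  # illustrative sizes
W = rng.normal(0, 0.1, size=(n_visible, n_hidden))
b_v = np.zeros(n_visible)
b_h = np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, lr=0.1):
    """One contrastive-divergence (CD-1) step on a binary vector v0."""
    # Positive phase: hidden probabilities given the data.
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase: one Gibbs step back to a reconstruction.
    p_v1 = sigmoid(h0 @ W.T + b_v)
    v1 = (rng.random(p_v1.shape) < p_v1).astype(float)
    p_h1 = sigmoid(v1 @ W + b_h)
    # Gradient approximation: data correlations minus model correlations.
    dW = np.outer(v0, p_h0) - np.outer(v1, p_h1)
    return lr * dW, lr * (v0 - v1), lr * (p_h0 - p_h1)

v0 = np.array([1.0, 0.0, 1.0, 1.0, 0.0, 0.0])
dW, db_v, db_h = cd1_update(v0)
W += dW
b_v += db_v
b_h += db_h
```

Running CD-k with more Gibbs steps (k > 1) gives a closer approximation to the true likelihood gradient at higher computational cost.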
One of the key advantages of RBMs is that they learn a compressed representation of the input that captures its essential features. This makes them useful for tasks such as image recognition, where the input is high-dimensional and complex. RBMs can also be stacked to form deep belief networks, trained greedily one layer at a time, with each RBM's hidden activations serving as the visible input to the next.
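The stacking idea can be sketched as below. The weight matrices here are random stand-ins for parameters that would come from layer-wise contrastive-divergence training, and the layer widths are illustrative; the point is only that a data vector is mapped upward through successively smaller hidden layers:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Greedy layer-wise stacking sketch: the hidden activations of one
# (already trained) RBM serve as the visible input to the next.
# Random matrices stand in for trained parameters here.
sizes = [8, 4, 2]  # illustrative layer widths
weights = [rng.normal(0, 0.1, size=(a, b)) for a, b in zip(sizes, sizes[1:])]
biases = [np.zeros(b) for b in sizes[1:]]

def propagate_up(v):
    """Map a visible vector through the stack, layer by layer."""
    activations = [v]
    for W, b in zip(weights, biases):
        v = sigmoid(v @ W + b)  # hidden probabilities become next input
        activations.append(v)
    return activations

acts = propagate_up(rng.integers(0, 2, size=8).astype(float))
```

The top activation vector is the compressed representation; for classification, a supervised model is typically trained on it, or the whole stack is fine-tuned with backpropagation.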