A recursive neural network is a kind of deep neural network created by applying the same set of weights recursively over a structured input; by traversing a given structure in topological order, it produces a structured prediction over variable-size input structures, or a scalar prediction on them. Recursive neural networks, sometimes abbreviated as RvNNs, have been successful, for instance, in learning sequence and tree structures in natural language processing, mainly continuous phrase and sentence representations based on word embeddings. RvNNs were first introduced to learn distributed representations of structure, such as logical terms. [1] Models and general frameworks have been developed in further work since the 1990s. [2] [3]
In the simplest architecture, nodes are combined into parents using a weight matrix that is shared across the whole network, and a non-linearity such as tanh. If $c_1$ and $c_2$ are $n$-dimensional vector representations of nodes, their parent will also be an $n$-dimensional vector, calculated as

$$p_{1,2} = \tanh\left(W\,[c_1; c_2]\right)$$

where $W$ is a learned $n \times 2n$ weight matrix and $[c_1; c_2]$ denotes the concatenation of the two child vectors.
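A minimal sketch of this composition in Python (using NumPy; the tree encoding, the dimension n = 4, and the random leaf embeddings are illustrative assumptions, not taken from the cited works):

```python
import numpy as np

n = 4                                       # embedding dimension (illustrative)
rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((n, 2 * n))   # shared n x 2n weight matrix

def compose(c1, c2):
    """Combine two n-dim child vectors into their n-dim parent: p = tanh(W [c1; c2])."""
    return np.tanh(W @ np.concatenate([c1, c2]))

def encode(tree):
    """Recursively encode a binary tree, given as a leaf vector or a
    (left, right) pair, reusing the same weights W at every node."""
    if isinstance(tree, tuple):
        return compose(encode(tree[0]), encode(tree[1]))
    return tree  # leaf: already an n-dimensional embedding

# Example: encode the tree ((a b) c) built from random leaf embeddings.
a, b, c = (rng.standard_normal(n) for _ in range(3))
root = encode(((a, b), c))
```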
This architecture, with a few improvements, has been used for successful parsing of natural scenes and for syntactic parsing of natural language sentences, [4] as well as for recursive autoencoding and generative modeling of 3D shape structures in the form of cuboid abstractions. [5]
Recursive cascade correlation (RecCC) is a constructive neural network approach for tree domains, [2] with pioneering applications to chemistry [6] and an extension to directed acyclic graphs. [7]
A framework for unsupervised RvNNs was introduced in 2004. [8] [9]
Recursive neural tensor networks use a single tensor-based composition function for all nodes in the tree. [10]
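Following the standard recursive neural tensor network formulation (notation as in the simple architecture above; $V^{[1:n]}$ is a third-order tensor whose $i$-th slice $V^{[i]}$ is a $2n \times 2n$ matrix producing the $i$-th component of the bilinear term), the shared composition can be written as

$$p = \tanh\!\left([c_1; c_2]^{\top} V^{[1:n]}\,[c_1; c_2] + W\,[c_1; c_2]\right)$$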
Typically, stochastic gradient descent (SGD) is used to train the network. The gradient is computed using backpropagation through structure (BPTS), a variant of the backpropagation through time algorithm used to train recurrent neural networks.
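As a sketch of how BPTS falls out of reverse-mode automatic differentiation, the fragment below repeats the forward pass above in PyTorch; the toy regression target and learning rate are assumptions made for illustration:

```python
import torch

n = 4
W = (0.1 * torch.randn(n, 2 * n)).requires_grad_()  # shared weights

def encode(tree):
    # Autograd records every application of W along the tree, so
    # loss.backward() performs backpropagation through structure (BPTS),
    # summing the gradient contributions from all nodes.
    if isinstance(tree, tuple):
        return torch.tanh(W @ torch.cat([encode(tree[0]), encode(tree[1])]))
    return tree  # leaf embedding

a, b, c = (torch.randn(n) for _ in range(3))
target = torch.zeros(n)                     # toy regression target (assumed)
loss = torch.nn.functional.mse_loss(encode(((a, b), c)), target)
loss.backward()                             # BPTS via reverse-mode autodiff
with torch.no_grad():                       # one SGD step
    W -= 0.01 * W.grad
    W.grad.zero_()
```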
The universal approximation capability of RvNNs over trees has been proved in the literature. [11] [12]
Recurrent neural networks are recursive artificial neural networks with a particular structure: that of a linear chain. Whereas recursive neural networks operate on any hierarchical structure, combining child representations into parent representations, recurrent neural networks operate on the linear progression of time, combining the hidden representation of the previous time step with the current input to produce the representation for the current time step.
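To make the relationship concrete: applying the same shared-weight composition along a chain yields an ordinary recurrent update. The zero initial state and random input sequence below are illustrative assumptions:

```python
import numpy as np

n = 4
rng = np.random.default_rng(1)
W = 0.1 * rng.standard_normal((n, 2 * n))  # same shared n x 2n matrix as above

def step(h_prev, x_t):
    """Recurrent update h_t = tanh(W [h_prev; x_t]): the recursive
    composition applied to the degenerate 'tree' that is a linear chain."""
    return np.tanh(W @ np.concatenate([h_prev, x_t]))

h = np.zeros(n)                            # initial hidden state (assumed)
for x_t in rng.standard_normal((5, n)):    # a length-5 input sequence
    h = step(h, x_t)                       # h now summarizes the prefix
```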
An efficient approach to implementing recursive neural networks is given by the Tree Echo State Network, [13] within the reservoir computing paradigm.
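A rough sketch of the TreeESN state computation, under simplifying assumptions (node labels as input vectors, an untrained random reservoir, and a spectral scaling constant chosen for illustration); only a linear readout on the resulting states would be trained:

```python
import numpy as np

n_in, n_res = 4, 50                        # label and reservoir sizes (assumed)
rng = np.random.default_rng(2)
W_in = rng.uniform(-0.1, 0.1, (n_res, n_in))           # fixed random input weights
W_hat = rng.standard_normal((n_res, n_res))
W_hat *= 0.4 / np.abs(np.linalg.eigvals(W_hat)).max()  # scale for stability

def state(tree):
    """Reservoir state x(v) = tanh(W_in u(v) + sum over children of W_hat x(child)).
    Neither W_in nor W_hat is ever trained; a task is solved by fitting a
    linear readout (e.g., ridge regression) on root or mean states."""
    label, children = tree                 # tree = (label vector, [subtrees])
    s = W_in @ label
    for child in children:
        s = s + W_hat @ state(child)
    return np.tanh(s)

# Example: state of a three-node tree with random labels.
leaf = lambda: (rng.standard_normal(n_in), [])
x_root = state((rng.standard_normal(n_in), [leaf(), leaf()]))
```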
Extensions to graphs include the graph neural network (GNN), [14] Neural Network for Graphs (NN4G), [15] and, more recently, convolutional neural networks for graphs.