### Full Paper: http://arxiv.org/abs/1410.0510

### Poster: NIPS_Workshop

### Slides: dsnn

### Abstract

Neural networks sequentially build high-level features through their successive layers. We propose here a new neural network model where each layer is associated with a set of candidate mappings. When an input is processed, at each layer, one mapping among these candidates is selected according to a sequential decision process. The resulting model is structured according to a DAG-like architecture, so that a path from the root to a leaf node defines a sequence of transformations. Instead of considering global transformations, as in classical multilayer networks, this model allows us to learn a set of local transformations. It is thus able to process data with different characteristics through specific sequences of such local transformations, increasing the expressive power of this model w.r.t. a classical multilayered network. The learning algorithm is inspired by policy gradient techniques coming from the reinforcement learning domain and is used here instead of the classical back-propagation-based gradient descent techniques. Experiments on different datasets show the relevance of this approach.

### Formalism

Let us consider the input space $\mathcal{X} = \mathbb{R}^X$ and the output space $\mathcal{Y} = \mathbb{R}^Y$, $X$ and $Y$ being respectively the dimension of the input and output spaces. We denote $\{(x_1, y_1), \dots, (x_\ell, y_\ell)\}$ the set of labeled training instances such that $x_i \in \mathcal{X}$ and $y_i \in \mathcal{Y}$. $\mathcal{T}$ will denote the set of testing examples.

The DSNN model has a DAG structure defined as follows:

- Each node is denoted $node_i$ with $i \in [1..N]$, where $N$ is the total number of nodes of the DAG.
- The root node is $node_1$; it does not have any parent node.
- $child(i,k)$ corresponds to the $k$-th child of node $node_i$, and $\#child(i)$ is the number of children of $node_i$; so, in that case, $k$ is a value between $1$ and $\#child(i)$.
- $leaf(i)$ is *true* if node $node_i$ is a leaf of the DAG - i.e. a node without children.
- Each node $node_i$ is associated with a particular representation space $\mathbb{R}^{n_i}$, where $n_i$ is the dimension associated with this space. Nodes play the same role as layers in classical neural networks.
- $n_1 = X$ - i.e. the dimension of the root node is the dimension of the input of the model.
- For any node $node_i$, if $leaf(i)$ is *true* then $n_i = Y$ - i.e. the dimension of the leaf nodes is the output space dimension.
- We consider *mapping functions* $f_{i,j} : \mathbb{R}^{n_i} \rightarrow \mathbb{R}^{n_j}$, which are functions associated with edge $(node_i, node_j)$. $f_{i,j}$ computes a new representation of the input $x$ in node $node_j$ given the representation of $x$ in node $node_i$. The output produced by the model is a sequence of $f$-transformations applied to the input, as in a neural network.
- In addition, each node $node_i$ is also associated with a *selection function* denoted $p_i$, able, given an input $z$ in $\mathbb{R}^{n_i}$, to compute a score for each child of node $node_i$. This function defines a probability distribution over the children nodes of $node_i$ such that, given a vector $z \in \mathbb{R}^{n_i}$: $\sum_{k=1}^{\#child(i)} p_i(child(i,k) \mid z) = 1$. Selection functions aim at selecting which $f$-functions to use by choosing a path in the DAG from the root node to a leaf node.
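The two families of functions combine as follows: an input $x$ induces a distribution over root-to-leaf paths. As a compact restatement of the definitions above (this summary equation is ours, not quoted from the paper), a path $H = (h_0, h_1, \dots, h_D)$ with $h_0$ the root and $leaf(h_D)$ *true* has probability

$$P(H \mid x) = \prod_{t=1}^{D} p_{h_{t-1}}(h_t \mid z_{t-1}), \qquad z_0 = x, \quad z_t = f_{h_{t-1}, h_t}(z_{t-1}),$$

and $z_D \in \mathbb{R}^Y$ is the output produced along that path.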

### Inference in DSNN
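Inference is a single pass from the root to a leaf: at each node, the selection function draws one child, and the corresponding mapping function transforms the current representation. A minimal NumPy sketch of this process (the `Node` class, the linear selection scores, and the `tanh` mappings are illustrative assumptions, not the paper's exact parameterization):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

class Node:
    def __init__(self, dim, children=None):
        self.dim = dim
        self.children = children or []  # list of (mapping weight W, child Node)
        if self.children:
            # linear selection function: one score row per child
            self.S = rng.normal(size=(len(self.children), dim))

    def is_leaf(self):
        return not self.children

def infer(node, z, rng):
    """Follow one root-to-leaf path, applying one mapping per traversed edge."""
    while not node.is_leaf():
        probs = softmax(node.S @ z)               # selection function p_i
        k = rng.choice(len(node.children), p=probs)
        W, child = node.children[k]
        z = np.tanh(W @ z)                        # mapping function f_{i,k}
        node = child
    return z

# toy DAG: root (dim 4) -> two internal nodes (dim 3) -> one shared leaf (dim 2)
leaf = Node(2)
a = Node(3, [(rng.normal(size=(2, 3)), leaf)])
b = Node(3, [(rng.normal(size=(2, 3)), leaf)])
root = Node(4, [(rng.normal(size=(3, 4)), a), (rng.normal(size=(3, 4)), b)])

y = infer(root, rng.normal(size=4), rng)
```

Because the child is sampled, two calls on the same input may follow different paths; a deterministic variant would instead take the argmax of the selection probabilities.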

### Learning in DSNN

The training procedure we propose aims at simultaneously learning both the *mapping functions* and the *selection functions* in order to minimize a given learning loss. Our learning algorithm is based on an extension of **policy gradient techniques** inspired by the **Reinforcement Learning** literature. **See full paper...**
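The policy-gradient idea behind the learning algorithm can be illustrated on a toy selection problem: the gradient of the expected loss $L$ w.r.t. the selection scores is estimated from sampled choices via the REINFORCE identity $\nabla \mathbb{E}[L] = \mathbb{E}[L \, \nabla \log p]$. The sketch below is a generic REINFORCE update, not the paper's exact algorithm (which learns the mapping and selection functions jointly):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

# two-action toy problem: action 1 incurs the lower loss,
# so its probability should grow under the policy gradient
theta = np.zeros(2)            # selection scores (logits)
losses = np.array([1.0, 0.2])  # fixed loss of each action
lr = 0.1

for _ in range(500):
    p = softmax(theta)
    k = rng.choice(2, p=p)
    # REINFORCE estimator: grad log p(k) = e_k - p for a softmax policy
    grad_log = -p
    grad_log[k] += 1.0
    theta -= lr * losses[k] * grad_log  # descend the sampled loss

p = softmax(theta)
```

After training, the probability mass concentrates on the low-loss action; in DSNN the same estimator steers the selection functions toward paths whose transformations yield a low prediction loss.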

### Experiments

Experiments were conducted on UCI datasets, MNIST and Checkerboard. **See full paper...**

### Conclusion

We have proposed a new family of models called *Deep Sequential Neural Networks* (DSNNs), which differ from classical neural networks in that, instead of always applying the same set of transformations, they are able to choose which transformation to apply depending on the input. The learning algorithm is based on the computation of the gradient, which is obtained by an extension of policy-gradient techniques. In their simplest form, DSNNs are equivalent to DNNs. Experiments on different datasets have shown the effectiveness of these models.