Neural network modeler nodes

Use the neural network modeler to create a neural network design flow with the following deep learning nodes.

The deep learning node palette

Input

Image Data
An input layer for image data.
Text Data
An input layer for text data. Accepts a .csv file in which each line contains the text input followed by a class label, separated by a comma, as in the sketch below.
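
The following is a minimal sketch of loading a file in that layout. The file name reviews.csv is hypothetical, and each line is assumed to contain exactly two fields.

```python
# Minimal sketch of loading the expected .csv layout.
# "reviews.csv" is a hypothetical file name; each line is assumed to hold
# exactly two fields: the text input and its class label.
import csv

texts, labels = [], []
with open("reviews.csv", newline="", encoding="utf-8") as f:
    for row in csv.reader(f):
        texts.append(row[0])   # text input
        labels.append(row[1])  # class label
```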

Activation

Absolute Value
The AbsVal layer computes the output as abs(x) for each input element x.
Binomial Normal Log Likelihood
This layer computes the output as log(1 + exp(x)) for each input element x.
Power
The Power layer computes the output as y=(shift + scale * x) ^ power for each input element x.
ReLU
A layer of rectified linear units. f(x) = max(x, 0)
Sigmoid
This layer computes the output as sigmoid(x) = 1 / (1 + exp(-x)) for each input element x. It is often used for predicting targets interpreted as probabilities.
Softmax
A utility layer that computes the softmax function.
Hyperbolic Tangent
The TanH layer computes the output as tanh(x) for each input element x.
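
The following NumPy sketch illustrates the elementwise formulas listed above; the shift, scale, and power values are illustrative, not node defaults.

```python
import numpy as np

x = np.array([-2.0, -0.5, 0.0, 1.5])

absval  = np.abs(x)                    # Absolute Value: abs(x)
bnll    = np.log1p(np.exp(x))          # Binomial Normal Log Likelihood: log(1 + exp(x))
power   = (0.0 + 2.0 * x) ** 2         # Power: (shift + scale * x) ^ power, illustrative values
relu    = np.maximum(x, 0.0)           # ReLU: max(x, 0)
sigmoid = 1.0 / (1.0 + np.exp(-x))     # Sigmoid: 1 / (1 + exp(-x))
tanh    = np.tanh(x)                   # Hyperbolic Tangent: tanh(x)

# Softmax normalizes the whole vector into probabilities that sum to 1.
softmax = np.exp(x - x.max()) / np.exp(x - x.max()).sum()
```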

Convolution

Conv 2D
This layer convolves the input image with a set of learnable (or fixed) filters, each producing one feature map in the output image.
Pool 2D
Performs 2D pooling, which is a form of non-linear down-sampling.
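
The following minimal sketch shows a Conv 2D node followed by a Pool 2D node. Keras is used only for illustration; the 28 x 28 grayscale input and the filter and pool sizes are assumptions, not node defaults.

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),                                   # 28 x 28 grayscale image
    layers.Conv2D(filters=32, kernel_size=(3, 3), activation="relu"),  # 32 filters -> 32 feature maps
    layers.MaxPooling2D(pool_size=(2, 2)),                             # non-linear down-sampling
])
print(model.output_shape)  # (None, 13, 13, 32)
```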

Core

Concat
A utility layer that concatenates its multiple input blobs to one single output blob.
Dense
A fully connected layer.
Dropout
This layer applies dropout to its input: during training, a fraction p of input units is randomly set to 0 at each update, which helps prevent overfitting.
Elementwise Operations
This layer computes elementwise operations, such as product and sum, across multiple inputs.
Flatten
The Flatten layer is a utility layer that flattens an input of shape n x c x h x w to a simple vector output of shape n x (c*h*w).
Reshape
This layer is used to change the dimensions of the input without changing its data. At any dimension, the value 0 means to copy the input dimension as is, and the value -1 means to infer the dimension from the remaining dimensions.
Split
This layer is used to split the output of the previous layer into multiple copies that are fed into different downstream layers simultaneously.
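
The following minimal Keras sketch (illustration only; the input shape and unit counts are assumptions) wires several of the core nodes together: the input is implicitly split into two branches, one flattened and passed through Dense and Dropout, the other reshaped, and the branches are then concatenated.

```python
from tensorflow.keras import layers, models

inp = layers.Input(shape=(4, 4, 8))
flat = layers.Flatten()(inp)                    # Flatten: (n, 4, 4, 8) -> (n, 128)
a = layers.Dense(64, activation="relu")(flat)   # Dense: fully connected layer
a = layers.Dropout(0.5)(a)                      # Dropout: zeroes a fraction p = 0.5 of units during training
b = layers.Reshape((128,))(inp)                 # Reshape: same data, new dimensions
# Split is implicit here: feeding inp to both branches copies it to two downstream layers.
merged = layers.Concatenate()([a, b])           # Concat: join the branches into one output blob
out = layers.Dense(1, activation="sigmoid")(merged)
model = models.Model(inp, out)
```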

Metric

Accuracy
This layer computes the top-k classification accuracy for a one-of-many classification task.
Argmax
This layer computes the indices of the k maximum values across all dimensions. It is used after a classification layer to get the top-k predictions.
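
The following NumPy sketch shows how Argmax and top-k accuracy relate; the scores and labels are made up.

```python
import numpy as np

scores = np.array([[0.1, 0.7, 0.2],   # predicted class scores, one row per sample
                   [0.5, 0.3, 0.2]])
labels = np.array([2, 0])             # true class indices
k = 2

topk = np.argsort(scores, axis=1)[:, ::-1][:, :k]   # Argmax: indices of the k largest scores per sample
topk_accuracy = np.mean([labels[i] in topk[i] for i in range(len(labels))])
print(topk_accuracy)  # 1.0 - both true labels appear in the top 2
```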

Loss

Euclidean Loss
Computes the L2 norm over a 4D input. It is used for real-valued regression tasks.
Hinge Loss
This layer computes a one-vs-all hinge or squared hinge loss. It is mainly used for one-of-many classification tasks.
Infogain Loss
It is used in cases of one-of-many classification. It is a generalization of MultinomialLogisticLossLayer. This layer takes an information gain matrix specifying the value of all label pairs.
Cross-Entropy Loss
Computes the cross-entropy (logistic) loss. It is often used for predicting targets interpreted as probabilities. It is used in cases of one-of-many classification.
Softmax with loss
This layer computes the multinomial logistic loss for a one-of-many classification task. It is conceptually identical to a softmax layer followed by a multinomial logistic loss layer, but provides a numerically stable gradient.
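
The following NumPy sketch shows why the combined node is preferred: the multinomial logistic loss is computed directly from the logits with a log-sum-exp rather than by chaining a softmax and a log. The logits and label are made up.

```python
import numpy as np

logits = np.array([2.0, -1.0, 0.5])   # unnormalized class scores for one sample
label = 0                             # index of the true class

# loss = -log softmax(logits)[label] = log(sum(exp(logits))) - logits[label]
shifted = logits - logits.max()       # shift by the max for numerical stability
loss = np.log(np.exp(shifted).sum()) - shifted[label]
print(loss)  # ~0.241
```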

Normalization

LRN
The local response normalization layer performs a kind of lateral inhibition by normalizing over local input regions. In ACROSS_CHANNELS mode, the local regions extend across nearby channels, but have no spatial extent (i.e., they have shape local_size x 1 x 1). In WITHIN_CHANNEL mode, the local regions extend spatially, but are in separate channels (i.e., they have shape 1 x local_size x local_size). Each input value is divided by (1 + (α/n) ∑ᵢ xᵢ²)^β, where n is the size of each local region, and the sum is taken over the region centered at that value (zero padding is added where necessary).
Mean-Variance Normalization
A utility layer that normalizes the input to have zero mean and/or unit variance.
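
The following NumPy sketch implements the ACROSS_CHANNELS formula above; the local_size, α, and β values are illustrative, not node defaults.

```python
import numpy as np

def lrn_across_channels(x, local_size=5, alpha=1e-4, beta=0.75):
    """x has shape (channels, height, width)."""
    half = local_size // 2
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        lo, hi = max(0, i - half), min(x.shape[0], i + half + 1)  # zero padding at the edges
        sq_sum = (x[lo:hi] ** 2).sum(axis=0)                      # sum of squares over nearby channels
        out[i] = x[i] / (1.0 + (alpha / local_size) * sq_sum) ** beta
    return out

x = np.random.randn(8, 4, 4).astype(np.float32)
print(lrn_across_channels(x).shape)  # (8, 4, 4)
```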

Embedding

Embedding
Turns positive integers (indexes) into dense vectors of fixed size.
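
The following minimal Keras sketch (the vocabulary size, vector size, and token indexes are made up) shows the index-to-vector mapping.

```python
import numpy as np
from tensorflow.keras import layers

embed = layers.Embedding(input_dim=10000, output_dim=64)  # indexes 0..9999 -> 64-dimensional vectors
tokens = np.array([[4, 87, 21, 0]])                       # one sequence of 4 word indexes
print(embed(tokens).shape)                                # (1, 4, 64)
```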

Recurrent

GRU
Gated Recurrent Unit
LSTM
Long Short-Term Memory Unit
Simple RNN
Simple Recurrent Neural Network Unit
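
The following minimal Keras sketch (the sequence length, feature size, and unit counts are assumptions) shows that the three recurrent nodes share the same sequence-shaped input.

```python
from tensorflow.keras import layers, models

inputs = layers.Input(shape=(20, 8))    # 20 timesteps, 8 features per step
gru    = layers.GRU(32)(inputs)         # Gated Recurrent Unit
lstm   = layers.LSTM(32)(inputs)        # Long Short-Term Memory unit
rnn    = layers.SimpleRNN(32)(inputs)   # simple recurrent unit
model  = models.Model(inputs, [gru, lstm, rnn])
print([tuple(t.shape) for t in model.outputs])  # each output is (None, 32)
```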

Optimizer

SGD
The stochastic gradient descent optimization algorithm.
RMSprop
An optimization algorithm that divides the gradient by a running average of its recent magnitude.
Adam
An optimization algorithm that computes adaptive per-parameter learning rates from estimates of the first and second moments of the gradients.
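
The following minimal Keras sketch (the learning rates are illustrative, not node defaults) shows the three optimizers; any one of them is passed to the training configuration.

```python
from tensorflow.keras import optimizers

sgd     = optimizers.SGD(learning_rate=0.01, momentum=0.9)  # stochastic gradient descent with momentum
rmsprop = optimizers.RMSprop(learning_rate=0.001)           # divides updates by a running average of squared gradients
adam    = optimizers.Adam(learning_rate=0.001)              # adaptive moment estimation
# e.g. model.compile(optimizer=adam, loss="categorical_crossentropy", metrics=["accuracy"])
```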

Next steps

Ready to create a neural network design flow? For a real-world example of working with neural networks, see Introducing deep learning and long-short term memory networks.

Check out our content pages for more samples, tutorials, documentation, how-tos, and blog posts.