Neural network modeler nodes
Use the neural network modeler to create a neural network design flow by using the following deep learning nodes.
Creating models by using the flow editor
You can use the flow editor to create a deep learning flow. A deep learning flow is a graphical representation of a neural network design, which you can use to design and run experiments. Using the flow editor, you prepare or shape data, train or deploy a model, or transform data and export it back to a database table or to a file in IBM Cloud Object Storage.
To create a neural network model, add the Modeler flow asset type to your project, then select Neural Network Modeler as the flow type.
To create a flow, start by adding an input data node that connects to a data source containing text or images, then add nodes for transforming and processing the data.
- Data format
- Textual: CSV files with labeled text data
- Image: Image files in a PKL file. For example, a model testing signatures uses images resized to 32×32 pixels and stored as numpy arrays in a pickled format (see the sketch after this list).
- Data size
- Extremely large data sets
- How you can build models
- Create a deep learning flow to design and run experiments without coding
- Tune many hyperparameters
- Standardize the components of a deep learning experiment for easier collaboration
- Get started
- To create a neural network model, click Add to project > Modeler flow, then select Neural Network Modeler as the flow type.
For more information on choosing the right tool for your data and use case, see Choosing a tool.
The deep learning node palette
- Image Data
- The input layer for image data models.
- Text Data
- For text data. Accepts a .csv file where each line contains a text input followed by a class label, separated by a comma.
- Absolute Value
- The AbsVal layer computes the output as `abs(x)` for each input element x.
- Binomial Normal Log Likelihood
- This layer computes the output as `log(1 + exp(x))` for each input element x.
- Power
- The Power layer computes the output as `y = (shift + scale * x) ^ power` for each input element x.
- ReLU
- A layer of rectified linear units. It computes `f(x) = max(x, 0)` for each input element x.
- Softmax
- A utility layer that computes the softmax function.
- Hyperbolic Tangent
- The TanH layer computes the output as `tanh(x)` for each input element x.
- Conv 2D
- This layer convolves the input image with a set of filters (learnable or fixed), each producing one feature map in the output image (see the Keras model sketch after this list).
- Pool 2D
- Performs 2D pooling, which is a form of non-linear down-sampling.
- Concat
- A utility layer that concatenates its multiple input blobs into one single output blob.
- Dense
- A fully connected layer.
- Dropout
- This layer applies Dropout to the input: it randomly sets a fraction `p` of the input units to `0` at each update during training time, which helps prevent overfitting.
- Elementwise Operations
- This layer computes elementwise operations, such as product and sum, along multiple inputs.
- Flatten
- The Flatten layer is a utility layer that flattens an input of shape `n * c * h * w` to a simple vector output of shape `n * (c*h*w)`.
- Reshape
- This layer is used to change the dimensions of the input without changing its data. At any dimension, the value `0` means copy the input dimension as is, and the value `-1` means infer the dimension value from the remaining dimensions.
- Split
- This layer is used to split the output of the previous layer into multiple branches. It creates multiple copies of the input that are fed into different downstream layers simultaneously.
- Accuracy
- This layer computes the top-`k` classification accuracy for a one-of-many classification task.
- ArgMax
- This layer computes the indices of the `k` maximum values across all dimensions. It is used after a classification layer to get the top-`k` predictions.
- Euclidean Loss
- Computes the L2 norm over a 4D input. It is used for real-valued regression tasks.
- Hinge Loss
- This layer computes a one-vs-all hinge or squared hinge loss. It is mainly used for one-of-many classification tasks.
- Infogain Loss
- Used for one-of-many classification. It is a generalization of MultinomialLogisticLossLayer that takes an information gain matrix specifying the value of all label pairs.
- Cross-Entropy Loss
- Computes the cross-entropy (logistic) loss for one-of-many classification. It is often used for predicting targets interpreted as probabilities.
- Softmax with loss
- This layer computes the multinomial logistic loss for a one-of-many classification task. It is conceptually identical to a softmax layer followed by a multinomial logistic loss layer, but provides a numerically stable gradient.
- Local Response Normalization (LRN)
- The local response normalization layer performs a kind of lateral inhibition by normalizing over local input regions. In `ACROSS_CHANNELS` mode, the local regions extend across nearby channels but have no spatial extent (that is, they have shape `local_size x 1 x 1`). In `WITHIN_CHANNEL` mode, the local regions extend spatially but are in separate channels (that is, they have shape `1 x local_size x local_size`). Each input value is divided by `(1 + (α/n) · Σᵢ xᵢ²)^β`, where `n` is the size of each local region, and the sum is taken over the region centered at that value (zero padding is added where necessary). A numeric sketch of this formula follows the list.
- Mean-Variance Normalization
- A utility layer that normalizes the input to have zero mean and/or unit variance.
- Embedding
- Turns positive integers (indexes) into dense vectors of fixed size.
- GRU
- Gated Recurrent Unit
- LSTM
- Long Short-Term Memory Unit
- Simple RNN
- Simple Recurrent Neural Network Unit. A text-model sketch using these recurrent nodes follows the list.
- Optimizer
- The optimization algorithm used to train the network.
Ready to create a neural network design flow? For a real-world example of working with neural networks, see Introducing deep learning and long short-term memory networks.
Check out our content pages for more samples, tutorials, documentation, how-tos, and blog posts.