
Dive into the World of Deep Learning without Programming


Is Deep Learning without Programming Possible?

Before answering this question, let me start with a short introduction to Deep Learning.

Deep learning is a machine learning technique that teaches computers to do what comes naturally to humans: learn by example. Deep learning is a key technology behind driverless cars, enabling them to recognize a stop sign, or to distinguish a pedestrian from a lamppost. It is the key to voice control in consumer devices like phones, tablets, TVs, and hands-free speakers. Deep learning is getting lots of attention lately and for good reason. It’s achieving results that were not possible before.

In deep learning, a computer model learns to perform classification tasks directly from images, text, or sound. Deep learning models can achieve state-of-the-art accuracy, sometimes exceeding human-level performance. Models are trained by using a large set of labelled data and neural network architectures that contain many layers.

So, building a Deep Learning model involves quite a bit of programming with one of the current Deep Learning frameworks (Theano, TensorFlow, PyTorch, CNTK, Caffe2, …) or a meta-API such as Keras, together with a programming language (Python, Java, R, …). It is usually done by advanced users who have been specifically trained.

So, coming back to the question: is Deep Learning without programming possible?

The answer to this question is a big “YES”. Let me tell you how.

It is possible through the easiest Deep Learning platform, “Deep Learning Studio”, created by a company named Deep Cognition.

To know more about Deep Learning Studio you can refer to my previous article here.

And to check out the benefits of using this software over other frameworks, you can watch a time-lapse comparing two situations — coding with the Keras API versus using the desktop version of this software — here.

In this article, I am going to demonstrate how this Deep Learning platform actually works, and how we can use it to:

  • Create a project and load a dataset
  • Build a deep learning classifier
  • Tune the model
  • Check the results
  • Deploy the model as a REST API

 

Step-1: Get Access

Log in at http://deepcognition.ai/login/.

If you haven’t registered yet, sign up and get access to Deep Learning Studio at http://deepcognition.ai/register/.

Step-2: Create Dataset

Select the Datasets option in the menu. There you will find 2 options:

  • MY DATASETS
  • PUBLIC DATASETS

Under PUBLIC DATASETS, one can find common datasets like MNIST, Iris, Titanic, CIFAR-10 and many more, which you can use directly to create DL-based models.

If one wants to upload their own dataset, they can do that under MY DATASETS by clicking on the “upload dataset” option:

  • Click on the upload data icon in Datasets option
  • Select the file
  • Click on START UPLOAD

To upload your own dataset, follow the 3 steps below to load your <1GB zipped dataset into Deep Learning Studio.

  • Set the name of your file to load to “train.csv”
  • Set your file specs to meet the requirements below:

Image datasets: each image should be encoded with a relative path to the image file.

Text datasets: encode the text as a string of semicolon-separated numbers. Pad as needed to maintain a fixed sequence length.

  • Zip the file (train.csv should be at the top level in the zip file)

To load datasets >1GB, use the file browser located in the left navigation bar and follow the instructions:

  1. Open the file browser by clicking the file browser link in the tab.
  2. Set the name of your file to load to “train.csv”.
  3. Set your file specs to meet the requirements below:
    Image datasets: each image should be encoded with a relative path to the image file.
    Text datasets: encode the text as a string of semicolon-separated numbers. Pad as needed to maintain a fixed sequence length.
  4. Create a new folder with a relevant name within the public/dataset folder. Remember this name.
  5. Move your unzipped train.csv file into the folder you created in step 4.
  6. Once the file transfer is complete, simply close the file browser window. Your dataset will be available as (relevant name) under the “Data” tab within a project page.

To see how to create a CSV file for a custom image dataset, one can refer to this link; a minimal sketch of such a file follows.
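Purely as a hypothetical illustration (the file names and labels below are made up), such a train.csv could be generated with pandas:

    # Hypothetical sketch of building a train.csv for an image dataset.
    # The paths and labels are made-up examples; the firm requirements
    # from above are relative image paths and train.csv at the zip root.
    import pandas as pd

    rows = [
        {"image": "images/cat_001.jpg", "label": "cat"},
        {"image": "images/dog_001.jpg", "label": "dog"},
    ]
    pd.DataFrame(rows).to_csv("train.csv", index=False)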

 

Step-3: Create/Open a New Project

Under the Projects option in the navigation bar one can find 2 options:

  • My Projects
  • Sample Projects

In Sample Projects, one can find 5 different Deep Learning projects with optimized solutions for reference; one can work on any of these. To use one, click on the “Copy” icon; this will add that project to your My Projects list.

Under My Projects, one can create a new project by clicking on the + button, give it a name and description, and then open it by clicking on the box-and-arrow icon on the project bar. Existing projects can be opened directly the same way.

Step-4: Select the Dataset and Split It into Training/Validation Sets

  • In the Data tab, one can select an existing dataset, public or private, by clicking on the dataset option.
  • Then divide your dataset between train, validation and test (a code sketch of this split follows this list).

Training Dataset: The sample of data used to fit the model.

Validation Dataset: The sample of data used to provide an unbiased evaluation of a model fit on the training dataset while tuning model hyperparameters. The evaluation becomes more biased as skill on the validation dataset is incorporated into the model configuration.

Test Dataset: The sample of data used to provide an unbiased evaluation of a final model fit on the training dataset.

  • One can select either “One batch at a time” or “Full dataset” to load the dataset into memory. The full-dataset option is preferable on systems with plenty of RAM.
  • Select the shuffle data option to shuffle your dataset.
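For intuition, this three-way split is the same idea one would code by hand; here is a minimal sketch with scikit-learn (the 80/10/10 ratio and the toy data are arbitrary examples of mine, not Deep Learning Studio defaults):

    # Illustrative 80/10/10 train/validation/test split; the ratios and
    # the toy data are arbitrary examples, not DLS defaults.
    import numpy as np
    from sklearn.model_selection import train_test_split

    X, y = np.arange(1000).reshape(-1, 1), np.arange(1000) % 2
    X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.2, shuffle=True)
    X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5)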

Step-5: Visualizing the dataset and feature engineering

In the Data section, one can find a visualization of the selected dataset.

  • One can mark each column as Input, Output or Ignore:

Input: – Column used as a feature to train the model

Output: – Column used as a label for the model

Ignore: – Column not included in the training of the model

 

  • One can select the data type of the column as Numeric, Categorical, Image, Array and Numpy according to the requirement.

  • One can also adjust the advanced options by clicking on the column name
  • For the Numeric, Array and Numpy types one can apply Normalization
  • For the Image type one can apply Pretrained Preprocess, Augmentation, Normalization and Resize functions (rough Keras analogues of these follow the definitions below)
  • Normalization: – It helps your neural net by ensuring that your input data is always within certain numeric boundaries, making it easier for the network to work with the data and to treat data samples equally.
  • Augmentation: – It creates “new” data samples that should ideally be as close as possible to “real” rather than synthetic data points.
  • Resize: – It is used to resize all the images to a specific dimension.
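These options are configured entirely through the GUI. Purely for intuition, here is a minimal sketch of roughly what they correspond to in Keras code — my own illustration, not what Deep Learning Studio runs internally (the "data/" directory layout is a hypothetical example):

    # Rough Keras analogue of the GUI's Normalization, Augmentation and
    # Resize options; a sketch for intuition, not DLS's internal code.
    from keras.preprocessing.image import ImageDataGenerator

    datagen = ImageDataGenerator(
        rescale=1.0 / 255,     # Normalization: scale pixel values into [0, 1]
        rotation_range=15,     # Augmentation: small random rotations
        horizontal_flip=True,  # Augmentation: random horizontal flips
    )

    # Resize: images are resized to a fixed dimension as they are loaded.
    generator = datagen.flow_from_directory("data/", target_size=(224, 224))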

 

 

Step-6: Building a Deep Learning model

For creating a deep learning model, Deep Learning Studio uses a GRAPHICAL EDITOR.

Graphical Editor:

The model is built by dragging and dropping layers onto the editor workspace and defining the connection graph between them. The parameters of each layer can be set by selecting the layer and setting the values in a side panel on the right of the editor screen. Each time a layer is added or a layer parameter is changed, a background process checks that the network is “coherent”; that way, one is warned early, with error detection, if one is side-tracking into building an “impossible” model.

AutoML:

This feature lets one design their first neural network without prior knowledge of deep learning.

For me, it is like using a random forest for non-linear data when I have no idea where to begin.

One can modify the model generated by AutoML and tune its hyperparameters to optimize it.

To create a model using AutoML:

  • Click on the AutoML icon in the graphical editor.
  • Select the input and output content.
  • Click on the Design button.

 

  • It will create an auto-generated deep learning model for your dataset.
  • From the Load AutoML Model option, load your automatically generated model.

 

 

 

Different Layers in the Graphical Editor:

Select the advanced layers option to view all the available layers:

Input/Output Layer

  • Input
  • Output

Core Layers

  • Activation
  • ActivityRegularization
  • Dense
  • Dropout
  • Flatten
  • Highway
  • Lambda
  • Masking
  • MaxoutDense
  • Permute
  • RepeatVector
  • Reshape
  • SpatialDropout1D
  • SpatialDropout2D
  • SpatialDropout3D
  • TimeDistributedDense

Convolution Layers

  • AtrousConvolution1D
  • AtrousConvolution2D
  • Convolution1D
  • Convolution2D
  • Convolution3D
  • Cropping1D
  • Cropping2D
  • Cropping3D
  • Deconvolution2D
  • SeparableConvolution2D
  • UpSampling1D
  • UpSampling2D
  • UpSampling3D
  • ZeroPadding1D
  • ZeroPadding2D
  • ZeroPadding3D

Pooling Layers

  • AvgPooling1D
  • AvgPooling2D
  • AvgPooling3D
  • GlobalAvgPooling1D
  • GlobalAvgPooling2D
  • GlobalAvgPooling3D
  • GlobalMaxPooling1D
  • GlobalMaxPooling2D
  • GlobalMaxPooling3D
  • MaxPooling1D
  • MaxPooling2D
  • MaxPooling3D

Pre-Trained Models

  • InceptionV3
  • ResNet50
  • SqueezeNet
  • VGG16
  • VGG19
  • WideResNet

Local Layers

  • LocallyConnected1D
  • LocallyConnected2D

Recurrent Layers

  • GRU
  • LSTM
  • SimpleRNN

Embeddings Layers

  • Embedding

Convolution Recurrent Layers

  • ConvLSTM2D
  • ConvRecurrent2D

Advanced Activations Layers

  • ELU
  • LeakyReLU
  • ParametricSoftplus
  • PReLU
  • SReLU
  • ThresholdedReLU

Normalization Layers

  • BatchNormalization

Noise Layers

  • GaussianDropout
  • GaussianNoise

Special Functions

  • Merge

These are all the layers presently available in Deep Learning Studio.

 

One can arrange any of these layers in the proper order to perform different tasks and build a deep learning model.

Adjusting Parameters of layers:

  • Select the layer whose parameters you want to adjust.
  • A dialog box will open on the right side, from where you can adjust the parameters.
  • If you want to adjust advanced parameters, select the Show Advanced option and all the advanced parameters will become visible.
  • After adjusting all the parameters, deselect the layer to close the dialog box.

How to build a model:

Step 1) Select the input layer to feed the input into your model.

Step 2) Connect all the subsequent layers in sequential order, based on your deep learning knowledge, to build your model.

Step 3) Adjust the parameters of the layer. Every unique layer has different parameters to adjust. Tune them according to your requirement and algorithm.

Step 4) Connect the output layer at the end of the last layer of your model.

Step 5) If no error is shown, your model is ready to use (a rough Keras equivalent of these steps is sketched below).
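As promised, here is a rough Keras sketch of what these five steps build; the layer sizes and activations are arbitrary examples of my own, not Deep Learning Studio defaults:

    # Rough Keras equivalent of the five drag-and-drop steps above;
    # layer sizes and activations are arbitrary illustrative choices.
    from keras.models import Model
    from keras.layers import Input, Dense, Dropout

    inputs = Input(shape=(8,))                     # Step 1: input layer
    x = Dense(32, activation="relu")(inputs)       # Step 2: subsequent layers
    x = Dropout(0.2)(x)                            # Step 3: tuned layer parameters
    outputs = Dense(2, activation="softmax")(x)    # Step 4: output layer
    model = Model(inputs=inputs, outputs=outputs)  # Step 5: assemble the model
    model.summary()                                # sanity-check the architecture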

For better understanding, I am going to demonstrate a basic deep learning model designed for classification on the Titanic dataset:

  • Dataset Selection

  • Splitting of Data

  • Selecting data types

 

 

  • Model Design
  • Adding the input layer

 

 

  • Adding all the other layers and adjusting their parameters

 

  • Adding output layer

  • Connecting all the layers together

Step-7: Tuning Hyperparameters

To optimize your solution, one needs to adjust the hyperparameters of their model, but hyperparameter tuning is harder for neural networks than for any other machine learning algorithm.

But with Deep Learning Studio this can be done easily and in a very flexible way: in the HyperParameters tab, you can choose from several loss functions and optimizers to tune your model.

You can adjust the following hyperparameters (a code sketch showing how they map onto Keras follows this list):

  • Number of epochs: – It is a measure of the number of times all of the training vectors are used once to update the weights.
  • Batch Size: – It defines the number of samples that are going to be propagated through the network.
  • Loss Function: – It is used to calculate the difference between the output and the target variable. You can select from the different loss functions available according to your model. The loss functions available in Deep Learning Studio are: –
  • Mean_squared_error
  • Mean_absolute_error
  • Mean_absolute_percentage_error
  • Mean_squared_logarithmic_error
  • Squared_hinge
  • Hinge
  • Binary_crossentropy
  • Sparse_categorical_crossentropy
  • Kullback_leibler_divergence
  • Poisson
  • Cosine_proximity
  • Custom_loss_function                   
  • Optimizer: – It helps us to minimize (or maximize) an objective function (another name for the error function E(x)), which is simply a mathematical function of the model’s internal learnable parameters used in computing the target values (Y) from the set of predictors (X) in the model. The different optimizers available in Deep Learning Studio are:
  • Adadelta
  • Adagrad
  • Adam
  • Adamax
  • Nadam
  • RMSprop
  • SGD

  • Learning rate: – It is how quickly a network abandons old beliefs for new ones.
  • Epsilon (fuzz factor): – It is a small floating-point number generally used to avoid mistakes like division by zero.
  • Decay: – It is the learning-rate decay applied over each update; it determines how quickly the learning rate shrinks during training.
  • Rho: – It is a hyperparameter which attenuates the influence of past gradients.
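For readers who know Keras, these settings map roughly onto a compile/fit call like the sketch below. Every value is an arbitrary example of mine, not a Deep Learning Studio default; rho and decay apply only to optimizers such as RMSprop or Adadelta, and the model and data arrays are assumed to have compatible shapes (as in the earlier sketches):

    # Sketch of how the HyperParameters tab maps onto Keras; all values
    # are arbitrary examples, not DLS defaults. Assumes `model`,
    # `X_train`, `y_train`, `X_val`, `y_val` of compatible shapes.
    from keras.optimizers import RMSprop

    optimizer = RMSprop(
        lr=0.001,       # learning rate
        rho=0.9,        # rho (specific to RMSprop/Adadelta)
        epsilon=1e-08,  # fuzz factor
        decay=0.0,      # learning-rate decay per update
    )
    model.compile(optimizer=optimizer,
                  loss="binary_crossentropy",   # loss function
                  metrics=["accuracy"])
    model.fit(X_train, y_train,
              epochs=10,                        # number of epochs
              batch_size=32,                    # batch size
              validation_data=(X_val, y_val))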

Step-8: Training

For Cloud Version only

  • Select any of the available instances and click on “Start Instance”.

Follow these steps for both versions:

  • Edit your run name if you want.
  • If you want to continue old training, you can load previously trained weights using Load Previous Weights.
  • One can save weights at any of the following 3 points:
  • End of Epoch
  • Best Accuracy
  • Lowest Loss
  • In case of more than 1 GPU, one can select the number of GPUs to use to train the model.
  • Now click on the Start Training option to start training the model.
  • To stop the training process midway, one can click on Stop Training.

The Training tab also helps you monitor the training process and plots Loss and Accuracy graphs for you.

Step-9: Results

The results of your training can be found in the Results tab, where all your runs are listed.

One can analyze and delete their previous results here.

To check the configuration of a trained model, select its weights and click on the Configuration tab.

Step-10: Inference

To test the model and visualize its performance at an individual layer, use the Dataset Inference tab:

  • Select the data source.
  • Select the trained weights you want to use.
  • Select the output layer at which you want to run inference.
  • Click on Start Inference.

Visualization of Inference Result

One can also download the trained model from the Download tab (a sketch of reloading it in Keras follows these steps):

  • Select the weights.
  • Click on Download Trained Model.
  • Save the model.
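If the downloaded file is a standard Keras HDF5 model — an assumption on my part; check the format Deep Learning Studio actually exports — reloading it is a one-liner:

    # Assumes the download is a standard Keras HDF5 model; the file
    # name is a hypothetical placeholder.
    from keras.models import load_model

    model = load_model("downloaded_model.h5")
    predictions = model.predict(X_test)  # X_test: your held-out data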

Step-11: Deploy model

Once the model is built, Deep Learning Studio allows deployment of the model as a REST API. In addition to the deployed REST API, a simple form-based web application is also generated and deployed for rapid testing and sharing. Deployed models can be managed from the deployment menu.

  • Click on Deploy.
  • Select the training run.
  • Enter the service name.
  • Now one can deploy the model using 2 methods:
  • Using This Instance
  • Using Remote Instance
  • For a Remote Instance:
  • Enter your SSH server name/IP.
  • Enter the port number.
  • Enter your username and password to access the server.

 

  • Click on the Deploy button to deploy your model (a hypothetical sketch of calling the resulting API appears below).

 

WEBAPP/API

 

 

Deployment Menu
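The exact endpoint and payload depend on your service name and dataset. Purely as a hypothetical sketch — the URL, port and feature names below are made-up placeholders, not the documented contract — calling such a REST API from Python might look like:

    # Hypothetical sketch of calling the deployed REST API; the URL and
    # the JSON payload shape are made-up placeholders.
    import requests

    response = requests.post(
        "http://your-server:8880/your-service-name",    # placeholder URL
        json={"Age": 29, "Sex": "female", "Pclass": 1}  # placeholder features
    )
    print(response.status_code, response.json())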

Jupyter Notebook

DLS provides the possibility to program inside a Jupyter Notebook, or to run an already existing notebook, in the environments provided (Desktop or Cloud).

Pre-configured Environments

Deep Cognition has introduced pre-configured environments for deep learning programmers. This feature frees AI developers from the headache of setting up development environments. This is especially important because many deep learning frameworks and libraries require different versions of packages, and conflicts between these versions often lead to wasted debugging time.

Currently the latest versions of TensorFlow, Keras, Caffe2, Chainer, PyTorch, MXNet, and Caffe are available. These enable developers to get various GitHub AI projects running very quickly. The environments are isolated and support both CPU and GPU computing.

Ultimately, these free up developers’ time from DevOps work and help them focus on real AI model building and optimization work.

Pre-configured environments in Deep Learning Studio give access not only to the terminal but also to a full-fledged web-based IDE based on open-source components from VS Code.
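Inside any of these environments, one can quickly confirm which framework versions are installed, for example:

    # Quick sanity check of installed framework versions inside a
    # pre-configured environment.
    import tensorflow, keras, torch

    print("TensorFlow:", tensorflow.__version__)
    print("Keras:", keras.__version__)
    print("PyTorch:", torch.__version__)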

So, the main purpose of this article has been to give you a complete demonstration of this new software platform, which simplifies and accelerates the process of Deep Learning, creates more AI developers through a drag-and-drop GUI, and allows you to design, train and deploy Deep Learning models with no coding involved.

 


Happy Learning 🙂  


Rajat Gupta

Torture the data, and it will confess to anything. I believe in one thing: you can learn anything if you are dedicated enough towards it. Visit http://rajatgupta.me to know more about me.
LinkedIn: https://in.linkedin.com/in/rajat2712
