
Getting Started With Keras


There are various neural network frameworks, libraries, and APIs available, so it's quite difficult to choose the best one for a project. Still, Keras is among the most popular choices.
This article is intended for readers who are familiar with the basics of deep learning. If you're not there yet, we highly recommend going through an intuitive article about neural networks and deep learning first.

Why Keras?

Because it wraps highly efficient numerical computation libraries such as Theano and TensorFlow. The main advantage of this is that we can get started with neural networks in an easy and fun way.

Use Keras if you need a deep learning library that:

  • Allows for easy and fast prototyping (through user friendliness, modularity, and extensibility).
  • Supports both convolutional networks and recurrent networks, as well as combinations of the two.
  • Runs seamlessly on CPU and GPU.
  • Has well-maintained documentation.
  • Defines models in pure Python, which makes them easy to debug and extend.

There are numerous other reasons to go for Keras. More can be found here 😀

While choosing the framework for one of our NLP projects, we had the option to get our hands dirty with either TensorFlow or Keras. After much research and guidance from working professionals, we found Keras to be easier to learn and interpret for beginners in deep learning.

Let’s begin with installation (quick and easy!)

Install Keras from PyPI (recommended):

sudo pip install keras

Alternatively: install Keras from the GitHub source:

git clone https://github.com/keras-team/keras.git

Then, cd to the Keras folder and run the install command:

cd keras
sudo python setup.py install

Tada…!!

So after installation, the basic steps involved are:

• Load Data

• Define Model

• Compile Model

• Fit Model

• Evaluate Model

• Tie It All Together

Let’s begin by understanding a basic code sample.
We’ll develop a simple sequential network with a single hidden layer in Keras, and walk through the basic functions involved.
Sequential — you can build any single-input, single-output model with this. It’s useful when you just need to stack layers one after the other.

# Importing necessary modules
from keras.models import Sequential
from keras.layers import Dense
# (Keras prints "Using TensorFlow backend." on import)

# Setting up some hyper-parameters
LEARNING_RATE = 1e-3
BATCH_SIZE = 5
NUMBER_EPOCHS = 5

# Function for creating a Keras model
def create_model():
    model = Sequential()
    model.add(Dense(5, input_shape=(10,), kernel_initializer="uniform", activation="relu"))
    model.add(Dense(2, activation='softmax'))

    return model

Here, the Dense layer is a regular densely-connected NN layer. Its first argument, units, defines the number of neurons in the layer.

In Keras, you have to mention the input shape of your data only in the first layer of your model; input_shape describes a single sample and excludes the batch dimension. Input dimensions for all other layers are taken care of by Keras! Alternatively, you can specify the batch dimension as None. Either way, Keras will expect any number of samples with num_inputs features each (10 in the given example).

kernel_initializer is used to initialize the weights.
The output of the first dense layer will be of shape [batch_size, 5].

The output layer has 2 neurons and the softmax activation function. The output of the last layer will be of shape [batch_size, 2].
The above model is a typical example of what you will see for classification problems using neural networks.
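To build intuition for these shapes, here is a minimal NumPy sketch of what the two Dense layers compute. This is an illustration, not Keras internals: the names W1, b1, W2, b2 and the relu/softmax helpers are our own.

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # subtract row max for numerical stability
    return e / e.sum(axis=1, keepdims=True)

batch_size, num_inputs = 5, 10
x = np.random.rand(batch_size, num_inputs)   # a dummy batch of inputs

W1, b1 = np.random.rand(10, 5), np.zeros(5)  # first Dense layer: 10 inputs -> 5 units
W2, b2 = np.random.rand(5, 2), np.zeros(2)   # output Dense layer: 5 inputs -> 2 units

hidden = relu(x @ W1 + b1)                   # shape (5, 5), i.e. [batch_size, 5]
output = softmax(hidden @ W2 + b2)           # shape (5, 2), i.e. [batch_size, 2]

print(hidden.shape, output.shape)
print(output.sum(axis=1))                    # each row sums to 1: a probability pair
```

Because of the softmax, each row of the output is a probability distribution over the two classes.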

The next step is to decide on your optimizer and loss function.

# Creating a Keras model
model = create_model()

# Importing necessary modules
from keras import losses
from keras.optimizers import SGD

# Defining the optimizer
sgd = SGD(lr=1e-3)

# Compiling the model and defining the loss and metric for evaluation.
model.compile(optimizer=sgd, loss='categorical_crossentropy', metrics=['accuracy'])

Stochastic Gradient Descent is the optimizer, and categorical cross-entropy is the loss (why categorical? because our labels are one-hot vectors over two classes; with a single sigmoid output you would use binary cross-entropy instead).
Stochastic Gradient Descent (SGD) is a simple yet very efficient optimization approach: instead of computing the gradient over the full dataset, it updates the weights using the gradient computed on a small batch of samples at a time.
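As a quick illustration, categorical cross-entropy can be sketched in NumPy as follows. The function and the dummy arrays are our own illustrative names, not Keras internals:

```python
import numpy as np

def categorical_crossentropy(y_true, y_pred):
    # Average negative log-likelihood of the true class over the batch.
    eps = 1e-7  # avoid log(0)
    return -np.mean(np.sum(y_true * np.log(y_pred + eps), axis=1))

# One-hot labels for a batch of 3 samples, 2 classes
y_true = np.array([[1, 0], [0, 1], [1, 0]])
# Softmax outputs from the model (each row sums to 1)
y_pred = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])

print(categorical_crossentropy(y_true, y_pred))  # small when predictions match the labels
```

The loss approaches zero as the predicted probability of the correct class approaches 1, which is exactly what training pushes the model toward.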

The advantages of Stochastic Gradient Descent are:
-> Efficiency
-> Ease of implementation (lots of opportunities for code tuning)
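The core SGD update rule (w ← w − learning_rate · gradient) can be sketched in a few lines of plain Python. The toy loss (w − 3)² and its hand-derived gradient below are illustrative choices of ours, not anything from Keras:

```python
LEARNING_RATE = 1e-1  # larger than the article's 1e-3 so this toy example converges quickly

def loss(w):
    return (w - 3) ** 2   # a toy loss, minimized at w = 3

def gradient(w):
    return 2 * (w - 3)    # derivative of the toy loss

w = 0.0
for step in range(100):
    w -= LEARNING_RATE * gradient(w)  # the SGD update rule

print(round(w, 4))  # w has moved close to 3, the minimum of the loss
```

Keras applies this same rule, with the gradient computed by backpropagation on each batch of training data.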

This was just the start of your deep learning journey with Python and Keras. We can experiment a lot with Keras to build wonderful projects. We will be using the framework extensively in upcoming blog posts. Stay tuned!

Keep experimenting, keep coding.

You can find the Keras cheat-sheet on : Cheat-sheet

A must-read book for all Deep learning enthusiasts out there:
François Chollet’s Deep Learning with Python


Happy Learning 🙂


Reshu Singh

Tech enthusiast and a recent coffee addict. You can find me online tweeting about tech and books: https://twitter.com/reshusinghhh
