# Using Sequential module to build a neural network

In this article, I am going to show you how to build the same neural network using the Sequential module in PyTorch. But before that,

what is the Sequential module?

`nn.Sequential` is a module that contains other modules and applies them in sequence to produce its output. Each `Linear` module computes its output from the input using a linear function and holds internal tensors for its weight and bias. (reference: link)
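As a minimal sketch of the idea (the layer sizes here are illustrative, not from the article):

```python
import torch
import torch.nn as nn

# A two-layer network built with nn.Sequential: each contained module is
# applied in order to produce the output.
model = nn.Sequential(
    nn.Linear(8, 8),   # holds internal weight (8x8) and bias (8) tensors
    nn.ReLU(),
    nn.Linear(8, 1),
)

x = torch.randn(4, 8)  # a batch of 4 samples with 8 features each
y = model(x)           # the modules run in sequence
print(y.shape)         # torch.Size([4, 1])
```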

Okay so now, let’s get started:

Step 1…

# How do I use Dataset and DataLoader

Ever wondered why we need Dataset and DataLoader in PyTorch?

We use Dataset and DataLoader to control a very important hyperparameter called “batch size”.

What is batch size?

Batch size refers to the number of data points used to calculate the loss value or update the weights. This becomes very handy in the case of huge datasets.

How is it done?

`Dataset` stores the samples and their corresponding labels, and `DataLoader` wraps an iterable around the `Dataset` to enable easy access to the samples. (refer: link to the tutorial). So we need both.
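A minimal sketch of the two working together (the toy data and class name are illustrative):

```python
import torch
from torch.utils.data import Dataset, DataLoader

# A custom Dataset stores the samples and labels; __len__ and __getitem__
# are the two methods the DataLoader relies on.
class ToyDataset(Dataset):
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __len__(self):
        return len(self.x)
    def __getitem__(self, idx):
        return self.x[idx], self.y[idx]

x = torch.arange(8, dtype=torch.float32).reshape(4, 2)
y = x.sum(dim=1)
ds = ToyDataset(x, y)

# The DataLoader wraps an iterable around the Dataset; batch_size controls
# how many samples each iteration yields.
dl = DataLoader(ds, batch_size=2, shuffle=False)

for xb, yb in dl:
    print(xb.shape)  # each batch holds 2 samples: torch.Size([2, 2])
```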

Let’s take a small use case…

# How to build a basic neural network using PyTorch

In this article, I am going to train a simple neural network to subtract.

Let’s get started:

# Why do we need tensor objects over NumPy arrays for building neural networks?

When calculating the optimal weights, we vary each weight by a small amount and measure the impact on the overall loss value. The key thing to note is that the loss calculation for the update of one weight does not affect the loss calculation for the updates of the other weights in the same iteration. Hence, this process can be optimized by computing the weight updates in parallel across different cores. GPUs have many more cores than CPUs, and tensors can run on GPUs, so we use tensor objects instead of NumPy arrays.
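A minimal sketch of what this looks like in practice: the same tensor code runs on a GPU when one is available, and falls back to the CPU otherwise.

```python
import torch

# Pick the GPU if it is available, otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# The tensor is created directly on that device; all subsequent ops on it
# (e.g. the matrix multiply below) run there in parallel across cores.
a = torch.randn(3, 3, device=device)
b = torch.randn(3, 3, device=device)
c = a @ b
print(c.device)
```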

Now registering…

# Auto gradients of tensor objects

How do PyTorch’s tensor objects use built-in functionality to calculate gradients?

Differentiation and calculating gradients play a crucial role in updating the weights of a neural network. PyTorch takes care of that for us. Here’s an example.

Now for the loss function: in this case, let’s take the sum of the squared values, i.e., the square loss.

loss = Σ xᵢ²

Let’s see how the gradient is calculated using PyTorch’s built-in function. The function used is `backward()`.
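A minimal sketch with the square loss above (the input values are illustrative):

```python
import torch

# requires_grad=True tells autograd to track operations on this tensor.
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)

loss = (x ** 2).sum()  # loss = sum of x_i^2
loss.backward()        # autograd computes d(loss)/dx_i = 2 * x_i

print(x.grad)          # tensor([2., 4., 6.])
```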


# Timeseries forecasting using LSTM

LSTM (long short-term memory) networks are a variant of RNNs (recurrent neural networks), capable of learning long-term dependencies, especially in sequence prediction problems.

Here I am going to use an LSTM to demonstrate how we can apply this technique to do some time series forecasting.
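The core model can be sketched like this (the class name, hidden size, and sequence shapes are illustrative assumptions, not from the article): an `nn.LSTM` reads the sequence, and a linear head predicts the next value from the last time step.

```python
import torch
import torch.nn as nn

class Forecaster(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        # input_size=1: one observation per time step
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):             # x: (batch, seq_len, 1)
        out, _ = self.lstm(x)         # out: (batch, seq_len, hidden)
        return self.head(out[:, -1])  # predict from the last time step

model = Forecaster()
seq = torch.randn(2, 10, 1)  # 2 sequences of length 10
pred = model(seq)
print(pred.shape)            # torch.Size([2, 1])
```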

# Interpreting ACF or Auto-correlation plot

A time series is linearly related to a lagged version of itself.

What is an ACF plot?

A time series is a sequence of measurements of the same variable(s) made over time. Usually, the measurements are made at evenly spaced times — for example, monthly or yearly. The coefficient of correlation between two values in a time series is called the autocorrelation function (ACF). In other words,

> Autocorrelation represents the degree of similarity between a given time series and a lagged version of itself over successive time intervals.

> Autocorrelation measures the relationship between a variable’s current value and its past values.
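A minimal sketch of the quantity behind the plot, computed by hand (the helper function and toy sinusoid are illustrative; `statsmodels`’ `plot_acf` draws the same values graphically):

```python
import numpy as np

def acf(x, k):
    """Lag-k autocorrelation: correlation of the series with itself shifted by k (k >= 1)."""
    x = np.asarray(x, dtype=float)
    xm = x - x.mean()
    return np.sum(xm[k:] * xm[:-k]) / np.sum(xm ** 2)

# A smooth sinusoid: neighboring points are strongly correlated,
# so the lag-1 autocorrelation is close to 1.
series = np.sin(np.linspace(0, 4 * np.pi, 50))
print(round(acf(series, 1), 3))
```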

# Stationarity for Timeseries Analysis

A time series is stationary if it is devoid of a trend or any seasonal effects.

Stationarity is one of the basic building blocks of performing a time series analysis or time series forecasting.

Why do we care so much if the time series is stationary or not?

Because statistical modeling methods assume or require the time series to be stationary to be effective. So, in order to use any statistical modeling technique effectively, it is imperative to first check for the presence of any trend or seasonality.

How do we define stationarity in statistical terms?

The summary statistics of a stationary…

# Regression analysis with PyTorch 