In my previous post (follow link), I talked about building your neural network using the **nn** module offered by PyTorch.

In this article, I am going to show you how you can build the same neural network using the **Sequential** module in PyTorch. But before that,

what is the **Sequential** module?

The **nn.Sequential** module contains other modules and applies them in sequence to produce its output. Each `Linear` module computes output from the input using a linear function and holds internal Tensors for its weight and bias. (reference: link)
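To make this concrete, here is a minimal sketch of a `Sequential` model; the layer sizes (2 inputs, 8 hidden units, 1 output) are illustrative choices, not taken from the original post.

```python
import torch
import torch.nn as nn

# A minimal nn.Sequential model: modules are applied in order.
# Layer sizes (2 -> 8 -> 1) are illustrative only.
model = nn.Sequential(
    nn.Linear(2, 8),   # holds internal weight (8x2) and bias (8) tensors
    nn.ReLU(),
    nn.Linear(8, 1),
)

x = torch.randn(4, 2)   # a batch of 4 samples with 2 features each
out = model(x)
print(out.shape)        # torch.Size([4, 1])
```

Calling `model(x)` simply passes `x` through each contained module in the order they were given.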

Okay so now, let’s get started:

**Step 1…**

Ever wondered why we need `Dataset` and `DataLoader` in PyTorch?

We use `Dataset` and `DataLoader` to take advantage of a very important hyperparameter called “**batch size**”.

*What is **batch size**?*

Batch size refers to the number of data points used to calculate the loss value or update the weights. This becomes very handy in the case of huge datasets.

*How is it done?*

A `Dataset` stores the samples and their corresponding labels, and a `DataLoader` wraps an iterable around the `Dataset` to enable easy access to the samples (refer: link to the tutorial). So we need both.
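A small sketch of how the two fit together, using a made-up toy dataset for the subtraction task described below; the data and class name are mine, purely for illustration.

```python
import torch
from torch.utils.data import Dataset, DataLoader

# A toy Dataset for a subtraction task: label = x1 - x2.
# The samples here are invented for illustration.
class SubtractionDataset(Dataset):
    def __init__(self):
        self.x = torch.tensor([[8., 3.], [5., 1.], [9., 4.], [6., 2.]])
        self.y = self.x[:, 0] - self.x[:, 1]

    def __len__(self):
        return len(self.x)

    def __getitem__(self, idx):
        return self.x[idx], self.y[idx]

# DataLoader wraps an iterable around the Dataset; batch_size controls
# how many samples go into each loss calculation / weight update.
loader = DataLoader(SubtractionDataset(), batch_size=2, shuffle=True)
for xb, yb in loader:
    print(xb.shape, yb.shape)   # torch.Size([2, 2]) torch.Size([2])
```

With `batch_size=2`, the four samples arrive in two mini-batches per epoch instead of all at once.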

Let’s take a small use case…

How to build a basic neural network using PyTorch

Just follow these basic steps while building your neural network.

In this article, I am going to train a simple neural network to subtract.

Let’s get started:

When calculating the optimal weights, we vary each weight by a small amount and measure its impact on the overall loss value. The key thing to note is that the loss calculated for one weight's update does not depend on the updates of the other weights in the same iteration. Hence, this process can be sped up by computing the weight updates in parallel across different cores. GPUs have many more cores than CPUs, and tensors can run on GPUs, so we use tensor objects instead of NumPy arrays.
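A brief sketch of what "tensors can work on GPUs" looks like in code; the matrix sizes are arbitrary, and the same code falls back to the CPU when no GPU is available.

```python
import torch

# Pick the GPU when one is available; otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Create tensors directly on the chosen device (sizes are arbitrary).
a = torch.randn(1000, 1000, device=device)
b = torch.randn(1000, 1000, device=device)

# The matrix multiply runs on whichever device the tensors live on,
# exploiting the GPU's many cores when present.
c = a @ b
print(c.device)
```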


How do PyTorch’s tensor objects use built-in functionality to calculate gradients?

Differentiation and calculating gradients play a crucial role in updating the weights of a neural network. PyTorch takes care of that. Here’s an example.

Now for the loss function: in this case, let’s say it is the sum of squared values, i.e. the square loss.

loss = Σ*x²*

gradient = d(loss)/dx = 2*x*

Let’s see how this gradient is calculated using PyTorch’s built-in function, `backward()`.
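Here is a minimal example of `backward()` computing exactly the gradient derived above; the input values are my own choice for illustration.

```python
import torch

# x requires gradients so autograd can track operations on it.
x = torch.tensor([2.0, 3.0], requires_grad=True)

# Square loss: sum of x^2, as in the formula above.
loss = (x ** 2).sum()

# backward() computes d(loss)/dx and stores it in x.grad.
loss.backward()
print(x.grad)   # tensor([4., 6.]), i.e. 2*x
```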

Coming out of a 10-year-old relationship wasn’t easy. For the first few weeks, I carried an aching feeling inside me, losing interest in all the things I once enjoyed and weeping the whole day, every day. Breaking up because somebody cheats on you is bad, but worse is when you know you both love each other and still cannot be together. You wave each other a dispirited goodbye and wish each other lifelong happiness, but deep inside you are torn apart. Everything from that point starts to appear meaningless; you want to skip every meeting at work…

LSTM (long short-term memory) is a variant of the RNN (recurrent neural network), capable of learning long-term dependencies, especially in sequence prediction problems.

Here I am going to use LSTM to demonstrate how we can use this technique to do some time series forecasting.
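A minimal sketch of what such a forecaster can look like: an LSTM reads a window of past observations and a linear head predicts the next value. The class name, window length, and hidden size are my own illustrative choices, not tuned for the power-consumption dataset.

```python
import torch
import torch.nn as nn

# Sketch of an LSTM forecaster: read a window of past values,
# predict the next one. All sizes here are illustrative.
class LSTMForecaster(nn.Module):
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                 # x: (batch, seq_len, 1)
        out, _ = self.lstm(x)             # out: (batch, seq_len, hidden)
        return self.head(out[:, -1, :])   # predict from the last time step

model = LSTMForecaster()
window = torch.randn(8, 24, 1)   # 8 windows of 24 past observations each
pred = model(window)
print(pred.shape)                # torch.Size([8, 1])
```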

**Data**: https://www.kaggle.com/uciml/electric-power-consumption-data-set

*Time series is linearly related to a lagged version of itself.*

What is an ACF plot?

A **time series** is a sequence of measurements of the same variable(s) made over time. Usually, the measurements are made at evenly spaced times — for example, monthly or yearly. The coefficient of correlation between two values in a time series is called the **autocorrelation function** (**ACF**). In other words,

> *Autocorrelation represents the degree of similarity between a given time series and a lagged version of itself over successive time intervals.*

> *Autocorrelation measures the relationship between a variable’s current value and its past values.*

*…*
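The lag-k autocorrelation can be computed directly from its definition, as in this small sketch; the toy trending series is invented for illustration (a library routine such as `statsmodels`' `acf` would normally be used for the full plot).

```python
import numpy as np

# Lag-k autocorrelation: correlation between the demeaned series
# and a k-step lagged copy of itself, normalized by total variance.
def autocorr(series, lag):
    s = np.asarray(series, dtype=float)
    s = s - s.mean()
    return np.dot(s[:-lag], s[lag:]) / np.dot(s, s)

# A toy monthly series with a clear upward trend (illustrative data).
series = np.arange(48) + np.random.default_rng(0).normal(0, 0.5, 48)
print(autocorr(series, 1))   # close to 1: the series strongly resembles
                             # a one-step lagged version of itself
```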

*Time series is stationary if it’s devoid of a trend or any seasonal effects.*

Stationarity is one of the basic building blocks of performing a time series analysis or time series forecasting.

Why do we care so much if the time series is stationary or not?

Because statistical modeling methods assume or require the time series to be stationary in order to be effective. So, before applying any statistical modeling technique, it is imperative to check for the presence of any trend or seasonality in the series.
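One crude way to see the idea in code: split the series in half and compare its summary statistics. This is a sketch under my own assumptions, not a formal statistical test (a formal check would be something like the augmented Dickey-Fuller test from `statsmodels`); the function name and tolerance are invented for illustration.

```python
import numpy as np

# Crude stationarity check (illustrative, not a formal test):
# a stationary series should have similar mean and variance
# in its first and second halves.
def looks_stationary(series, tol=0.5):
    s = np.asarray(series, dtype=float)
    first, second = np.array_split(s, 2)
    mean_shift = abs(first.mean() - second.mean())
    var_ratio = (max(first.var(), second.var())
                 / max(min(first.var(), second.var()), 1e-12))
    return mean_shift < tol * s.std() and var_ratio < 1 + tol

print(looks_stationary(np.sin(np.arange(200.0))))   # True: no trend
print(looks_stationary(np.arange(200.0)))           # False: strong trend
```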

How do we define stationarity in statistical terms?

The summary statistics of a stationary…

This is probably the thousandth article about implementing regression analysis using PyTorch. So how is it different?

Well, before I answer that, let me describe the series of events that led to this article. When I started learning PyTorch, I was excited, but I had so many whys and why-nots that at one point I was frustrated. So I thought: why not start from scratch, understand the deep learning framework a little better, and then delve into complex concepts like CNN, RNN, LSTM, etc. …