Linear Regression in Machine Learning

Linear regression is used in machine learning to predict the output for new data based on previously seen data. Suppose you have a data set of 100 shoes, each with a size and a price. Now if you want to predict the price of a shoe of size (say) 9.5, one way of making that prediction is linear regression: we train a model on those 100 examples, which gives us a hypothesis, and based on this hypothesis we can predict the price of the new shoe.

Linear regression is one of the Generalized Linear Models (GLMs). In a linear model, the output variable (the dependent variable) is assumed to be a linear combination of the input variables (the independent variables). The output variable takes continuous values, and both the input and output variables are expected to be numeric. The figure below shows a simple representation of a linear regression unit: x1 and x2 are input variables and Y is the output variable. The learning goal is to find the hidden parameters \( \theta \) (also called weights):

\[ Y = \theta_0 + \theta_1x_1 + \theta_2x_2 \]
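As a small illustration of this two-input model (my own sketch, not from the original article, with made-up values for \( \theta \)), the hypothesis is just a dot product of the weights with the inputs:

```python
import numpy as np

# Hypothetical weights [theta_0, theta_1, theta_2]; the values are
# made up purely for illustration.
theta = np.array([1.0, 2.0, 0.5])

# One input example with x1 = 3 and x2 = 4; the leading 1 multiplies theta_0.
x = np.array([1.0, 3.0, 4.0])

# Y = theta_0 + theta_1*x1 + theta_2*x2 = 1 + 2*3 + 0.5*4 = 9.0
print(theta.dot(x))
```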
Here I will discuss the single-variable linear regression model. We have only one input variable (the size of the shoe), and based on this variable we will predict the price of the shoe. The first step in building a linear regression model is to fit the data to a straight line with minimum error; we will use the least-squares approach for this. The equation of the straight line is given by

\[ h_\theta(x) = \theta_0 + \theta_1x \]
Our job is to find the values of \( \theta_0 \) and \( \theta_1 \) that minimize the prediction error, i.e. that minimize the function
\[ J(\theta_0, \theta_1) = \sum_{i = 1}^{m} (h_\theta(x^{(i)}) - y^{(i)})^2 \]
This function is also called the cost function. Here m is the total number of training examples (100 in our example) and y is the output variable (the price in our example). There are various approaches by which we can calculate the values of \( \theta \) that minimize the cost function. One of these approaches is called Gradient Descent. In this method, we decrease the error gradually by calculating the partial derivative of the cost function and subtracting it from an initial guess of \( \theta \). The initial guess may be [0 0] for \( \theta \). The algorithm for gradient descent is given by

Repeat until convergence {
\[ \theta_j := \theta_j - \alpha \frac{\partial J(\theta_0, \theta_1)}{\partial \theta_j} \quad \text{(for } j = 0 \text{ and } j = 1 \text{)} \]
}

where \( \alpha \) is called the learning rate. It controls how fast the algorithm converges to the minimum. If the value of \( \alpha \) is large, the algorithm converges fast; if it is small, it converges slowly. But remember that a large value of \( \alpha \) may sometimes overshoot the minimum, fail to converge, or even diverge.
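To make the update rule concrete, here is a minimal sketch of one update step in Python with NumPy (my own illustration; the function name and the averaging by 1/m, which simply folds the constant from differentiation into \( \alpha \), are assumptions, not the original article's code):

```python
import numpy as np

def step(theta0, theta1, x, y, alpha):
    """One gradient-descent update for h(x) = theta0 + theta1 * x.

    x and y are NumPy arrays of the m training inputs and outputs.
    The constant factor from differentiating the squared error is
    folded into the learning rate via the 1/m averaging.
    """
    m = len(y)
    error = (theta0 + theta1 * x) - y     # h(x^(i)) - y^(i) for all examples
    grad0 = error.sum() / m               # partial derivative w.r.t. theta0
    grad1 = (error * x).sum() / m         # partial derivative w.r.t. theta1
    # simultaneous update of both parameters
    return theta0 - alpha * grad0, theta1 - alpha * grad1

# One update from the initial guess [0 0], on made-up shoe data.
sizes = np.array([7.0, 8.0, 9.0])
prices = np.array([65.0, 70.0, 76.0])
print(step(0.0, 0.0, sizes, prices, alpha=0.01))
```

Repeating this step until \( \theta_0 \) and \( \theta_1 \) stop changing implements the loop above; a complete program is given at the end of this post.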

Gradient Descent

The gradient descent method is a way to find a local minimum of a function. It works as follows: we start with an initial guess of the solution and take the gradient of the function at that point. We then step the solution in the negative direction of the gradient and repeat the process. The algorithm will eventually converge to a point where the gradient is zero (which corresponds to a local minimum). For more detail, check out Wikipedia.
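As a tiny illustration of this general procedure (my own example, not from the article), here is gradient descent minimizing f(x) = (x - 3)^2, whose gradient is 2(x - 3):

```python
def minimize():
    """Find the minimum of f(x) = (x - 3)^2 by gradient descent."""
    x = 0.0          # initial guess
    alpha = 0.1      # learning rate
    for _ in range(100):
        grad = 2 * (x - 3)   # gradient f'(x) at the current point
        x -= alpha * grad    # step in the negative gradient direction
    return x

print(minimize())  # approximately 3.0, where the gradient is zero
```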

Here is source code, written in Python, that creates a model for linear regression.

Note

You need the matplotlib.pyplot and numpy libraries to run this code.
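The original listing does not survive in this copy, so what follows is a minimal sketch in its spirit; the synthetic shoe data, function names, and plotting details are my own assumptions rather than the original author's code:

```python
import matplotlib.pyplot as plt
import numpy as np

def compute_cost(x, y, theta0, theta1):
    """Cost J(theta0, theta1): sum of squared prediction errors."""
    h = theta0 + theta1 * x
    return ((h - y) ** 2).sum()

def fit(x, y, alpha=0.01, num_iters=10000):
    """Fit h(x) = theta0 + theta1 * x by gradient descent from [0 0]."""
    m = len(y)
    theta0, theta1 = 0.0, 0.0
    for _ in range(num_iters):
        error = (theta0 + theta1 * x) - y
        theta0 -= alpha * error.sum() / m
        theta1 -= alpha * (error * x).sum() / m
    return theta0, theta1

# Made-up training data: shoe sizes and prices.
sizes = np.array([6.0, 7.0, 7.5, 8.0, 8.5, 9.0, 10.0, 11.0])
prices = np.array([60.0, 66.0, 69.0, 72.0, 74.0, 78.0, 83.0, 90.0])

theta0, theta1 = fit(sizes, prices)
print("theta:", theta0, theta1)
print("cost:", compute_cost(sizes, prices, theta0, theta1))
print("predicted price for size 9.5:", theta0 + theta1 * 9.5)

# Plot the training data and the fitted line.
plt.scatter(sizes, prices, label="training data")
line_x = np.linspace(sizes.min(), sizes.max(), 100)
plt.plot(line_x, theta0 + theta1 * line_x, "r", label="fitted line")
plt.xlabel("shoe size")
plt.ylabel("price")
plt.legend()
plt.show()
```

If gradient descent converges slowly here, scaling the sizes (for example, subtracting their mean) lets you use a larger \( \alpha \); this is a standard trick when the input ranges differ.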




Source: https://www.programming-techniques.com/?p=43