Examples of using Squared error in English and their translations into Thai
So this is the squared error of the line.
And then, you could imagine the vertical axis to be the squared error.
Or total squared error with the line.
Sometimes, it's called the squared error.
This is the squared error of the line axis.
So this first term right over here, y1 minus mx1 plus b squared, this is all going to be the squared error of the line.
Let me call it the squared error with the line.
So our squared error versus our line, our total squared error, we just computed to be 2.74.
Let me call this the squared error from the average.
So our squared error to the line, the sum of the squared errors to the line from the n points, is going to be equal to-- this term right here is n times the mean of the y squared values.
And the partial derivative of our squared error with respect to b is going to be equal to 0.
And over the next few videos, I want to find the m and b that minimize the squared error of this line right here.
And we want the squared errors between each of the points and the line.
And so what we calculated next was the total error, the squared error, from the means of our y values.
So let me define the squared error against this line as being equal to the sum of these squared errors.
So if you want to know what percentage of the total variation is not described by the regression line, it would just be the squared error of the line, because this is the total variation not described by the regression line.
In the last video, we showed that the squared error between some line, y equals mx plus b and each of these n data points is this expression right over here.
The total variation in y, which is the squared error from the mean of the y's.
So if we take 1 minus the squared error between our data points and the line over the squared error between the y's and the mean y, this actually tells us what percentage of total variation is described by the line.
And what we want to do is minimize this squared error from each of these points to the line.
So at that point, the partial derivative of our squared error with respect to m is going to be equal to 0.
This is all just algebraic manipulation of the squared error between those n points and the line y equals mx plus b.
So this is the error one squared.
Error two squared is y2 minus m x2 plus b, squared.
We got to a formula for the slope and y-intercept of the best fitting regression line when you measure the error by the squared distance to that line.
But what we want to do is minimize the square of the error between each of these points, each of these n points, and the line.
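Taken together, these examples come from a walkthrough of least-squares regression: the squared error between a line y = mx + b and n data points, the m and b that make the partial derivatives of that squared error zero, and the fraction of the total variation in y described by the line. The sketch below is a minimal Python illustration of those quantities; the function names and the sample data are hypothetical and not taken from the source.

```python
# Minimal sketch (hypothetical names and data): squared error of a line,
# the least-squares m and b, and the share of variation the line describes.

def squared_error_of_line(xs, ys, m, b):
    """Sum of (y_i - (m*x_i + b))^2 over all n points."""
    return sum((y - (m * x + b)) ** 2 for x, y in zip(xs, ys))

def best_fit(xs, ys):
    """m and b where the partial derivatives of the squared error are zero."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    mean_xy = sum(x * y for x, y in zip(xs, ys)) / n
    mean_x2 = sum(x * x for x in xs) / n
    m = (mean_x * mean_y - mean_xy) / (mean_x ** 2 - mean_x2)
    b = mean_y - m * mean_x
    return m, b

xs = [1.0, 2.0, 3.0, 4.0]   # hypothetical data points
ys = [1.2, 1.9, 3.2, 3.8]

m, b = best_fit(xs, ys)
se_line = squared_error_of_line(xs, ys, m, b)   # squared error of the line

mean_y = sum(ys) / len(ys)
se_mean = sum((y - mean_y) ** 2 for y in ys)    # squared error from the mean of the y's

r_squared = 1 - se_line / se_mean               # fraction of variation described by the line
print(f"m = {m:.3f}, b = {b:.3f}, SE_line = {se_line:.3f}, r^2 = {r_squared:.3f}")
```

The ratio se_line / se_mean is the percentage of total variation not described by the regression line, so 1 minus that ratio is the percentage that is described, as several of the examples above state.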