```
On 7/19/20 10:54 AM, duncan smith wrote:
> On 19/07/2020 11:19, Dino wrote:
>> Hi, I am looking at someone else's code trying to understand their use
>> of numpy.polyfit.
>>
>> My understanding was that you can use it to fit polynomials, but
>> apparently, the original author has used it for logarithmic and
>> exponential curves as well this way:
>>
>> Logarithmic
>>
>>     def fit_model(self):
>>         self.coefficients = np.polyfit(np.log(self.x), self.y, 1)
>>
>> Exponential
>>
>>     def fit_model(self):
>>         self.coefficients = np.polyfit(self.x, np.log(self.y), 1)
>>
>> Is this an OK use of np.polyfit? Will it yield the expected result?
>>
>> Thanks
> It depends on what you expect the result to be. There's nothing
> inherently wrong with transforming variables before using least squares
> fitting. Whether it gives you the "best" estimates for the coefficients
> is a different issue.
>
> Duncan

Or, to state the result a bit more precisely: the 'least squares' fit is
defined as finding the values of the adjustable parameters that minimize
the error between the provided 'measured' data and the fitted curve,
where the error is defined as the (possibly weighted) sum of the squared
differences between measured and predicted values.

One very useful property of least-squares fitting to a polynomial is
that the 'optimal' coefficients satisfy a closed-form set of linear
equations, so no iterative optimization is needed.

The primary effect of transforming the data before doing the fit is
that the error is now defined in terms of the differences between the
transformed values, not the original values. In many cases that is
still a reasonable way to define your error, so the approach works.

--
Richard Damon

```
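The transform-then-polyfit idea from the thread can be sketched as follows. This is a minimal illustration on synthetic, noise-free data (the variable names and ground-truth parameters are made up for the example), so the recovered coefficients can be checked against known values:

```python
import numpy as np

x = np.linspace(1.0, 10.0, 50)

# Exponential model: y = a * exp(b * x)  =>  log(y) = log(a) + b * x,
# which is linear in x, so a degree-1 polyfit on (x, log(y)) applies.
a_true, b_true = 2.0, 0.3
y = a_true * np.exp(b_true * x)

# np.polyfit returns coefficients highest-degree first: [slope, intercept].
b_fit, log_a_fit = np.polyfit(x, np.log(y), 1)
a_fit = np.exp(log_a_fit)
assert np.isclose(a_fit, a_true) and np.isclose(b_fit, b_true)

# Logarithmic model: y = a + b * log(x), linear in log(x).
a2_true, b2_true = 1.0, 4.0
y2 = a2_true + b2_true * np.log(x)
b2_fit, a2_fit = np.polyfit(np.log(x), y2, 1)
assert np.isclose(a2_fit, a2_true) and np.isclose(b2_fit, b2_true)

# Caveat from the thread: with noisy data, the exponential fit minimizes
# squared error in log(y), not in y itself, so large y values are
# effectively down-weighted compared with a direct nonlinear fit.
```

On exact data both fits recover the true parameters; with noise, the fitted values will differ from what a nonlinear least-squares routine (e.g. `scipy.optimize.curve_fit`) would give, for exactly the reason Duncan and Richard describe.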