class LevenbergMarquardtFitter( IterativeFitter )
Non-linear fitter using the Levenberg-Marquardt method.
Implementation of the Levenberg-Marquardt algorithm to fit the parameters of a non-linear model. It is a gradient fitter which uses partial derivatives to find the downhill gradient. Consequently it ends in the first minimum it finds.
The original C-version stems from Numerical Recipes with some additions of my own. This might be the third or fourth transcription of it.
Author Do Kester.
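To show what "balancing the curvature matrix" by `lamda` means in practice, here is a minimal numpy sketch of one damped step of the Levenberg-Marquardt scheme as described in Numerical Recipes. It is an illustration, not the code of this class; the names `lm_step`, `model`, and `jacobian` are hypothetical.

```python
import numpy as np

def lm_step( params, xdata, ydata, model, jacobian, lamda ):
    r = ydata - model( xdata, params )       # residual vector
    J = jacobian( xdata, params )            # partial derivatives df/dp
    A = J.T @ J                              # curvature matrix (approximate Hessian)
    # damp the diagonal: lamda -> 0 gives a Gauss-Newton step,
    # large lamda a short step along the downhill gradient
    A = A + lamda * np.diag( np.diag( A ) )
    return params + np.linalg.solve( A, J.T @ r )

# straight line y = p0 + p1 * x, with its exact Jacobian
model = lambda x, p : p[0] + p[1] * x
jacobian = lambda x, p : np.stack( [np.ones_like( x ), x], axis=1 )

x = np.linspace( 0.0, 1.0, 20 )
y = 1.0 + 2.0 * x                            # true parameters [1, 2]
p = np.zeros( 2 )
for _ in range( 10 ):
    p = lm_step( p, x, y, model, jacobian, lamda=0.001 )
```

For this linear model the damped step is essentially exact, so a few iterations reach the true parameters; in the full algorithm `lamda` is raised or lowered at each iteration depending on whether chisq went up or down.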
Examples
# assume x and y are 1-d numpy arrays of floats
x = numpy.arange( 100, dtype=float ) / 10
y = numpy.arange( 100, dtype=float ) / 122          # make slope
numpy.random.seed( 12345 )                          # seed the Gaussian random number generator
y += numpy.random.randn( 100 ) * 0.2                # add noise
y[9:12] += numpy.asarray( [5,10,7], dtype=float )   # make some peak
# define a model: GaussModel + background polynomial
gauss = GaussModel( ) # Gaussian
gauss += PolynomialModel( 1 ) # add linear background
gauss.setParameters( numpy.asarray( [1,1,0.1,0,0], dtype=float ) ) # initial parameter guess
print( gauss.getNumberOfParameters( ) ) # 5 ( = 3 for Gauss + 2 for line )
gauss.keepFixed( numpy.asarray( [2] ), numpy.asarray( [0.1], dtype=float ) ) # keep width fixed at 0.1
lmfit = LevenbergMarquardtFitter( x, gauss )
param = lmfit.fit( y )
print( len( param ) ) # 4 ( = 5 - 1 fixed )
stdev = lmfit.getStandardDeviations( ) # stdevs on the parameters
chisq = lmfit.getChiSquared( )
scale = lmfit.getScale( ) # noise scale
yfit = lmfit.getResult( ) # fitted values
yband = lmfit.monteCarloError( ) # 1 sigma confidence region
# for diagnostics ( or just for fun )
lmfit = LevenbergMarquardtFitter( x, gauss )
lmfit.setVerbose( 2 ) # report every 100th iteration
plotter = IterationPlotter( ) # from BayesicFitting
lmfit.setPlotter( plotter, 20 ) # make a plot every 20th iteration
param = lmfit.fit( y )
Notes
In case of problems look at the "Troubles" page in the documentation area.
Limitations
- LMF is not guaranteed to find the global minimum.
- The calculation of the evidence is a Gaussian approximation, which is exact only for linear models with a fixed scale.
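The first limitation can be made concrete with a toy sketch. The crude numerical gradient descent below is only a stand-in for any downhill fitter, not this class, and all names (`chisq`, `descend`, `starts`) are hypothetical. A start far from the true peak never finds it, which is why one commonly restarts from several initial guesses and keeps the fit with the lowest chi-squared.

```python
import numpy as np

# chi-squared of a unit Gauss peak with unknown centre p
def chisq( p, x, y ):
    return np.sum( ( y - np.exp( -( x - p ) ** 2 ) ) ** 2 )

x = np.linspace( -5.0, 5.0, 101 )
y = np.exp( -( x - 2.0 ) ** 2 )            # data: peak centred at x = 2

def descend( p, steps=200, h=1.0e-4, lr=0.02 ):
    # crude numerical gradient descent, standing in for any
    # gradient fitter that ends in the first minimum it finds
    for _ in range( steps ):
        grad = ( chisq( p + h, x, y ) - chisq( p - h, x, y ) ) / ( 2 * h )
        p -= lr * grad
    return p

starts = [ -4.0, 0.0, 4.0 ]                # multi-start initial guesses
fits = [ descend( p ) for p in starts ]    # the start at -4 never reaches the peak
best = min( fits, key=lambda p : chisq( p, x, y ) )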
Attributes
- xdata : array_like
  vector of numbers as input for the model
- model : Model
  the model to be fitted
- lamda : float
  to balance the curvature matrix (see Numerical Recipes)
LevenbergMarquardtFitter( xdata, model, **kwargs )
Create a class, providing xdata and model.
Parameters
- xdata : array_like
  vector of independent input values
- model : Model
  a model function to be fitted
- kwargs : dict
  possibly includes keywords from
  IterativeFitter : maxIter, tolerance, verbose
  BaseFitter : map, keep, fixedScale
fit( data, weights=None, par0=None, keep=None, limits=None, maxiter=None, tolerance=None, verbose=None, plot=False, accuracy=None, callback=None )
Fit the model to the data. It also calculates the Hessian matrix and chisq.
Parameters
- data : array_like
  the data vector to be fitted
- weights : array_like
  weights pertaining to the data
- accuracy : float or array_like
  accuracy of (individual) data
- par0 : array_like
  initial values for the parameters of the model
  default: from model
- keep : dict of {int : float}
  dictionary of indices (int) of parameters to be kept at a fixed value (float)
  The values of `keep` are only valid for this fit
  see also `LevenbergMarquardtFitter( ..., keep=dict )`
- limits : None or list of 2 floats or list of 2 array_like
  None : no limits applied
  [lo,hi] : low and high limits for all values of the parameters
  [la,ha] : arrays of low and high limits for all values of the parameters
- maxiter : int
  maximum number of iterations. default=1000
- tolerance : float
  absolute and relative tolerance. default=0.0001
- verbose : int
  0 : silent
  >0 : more output
  default=1
- plot : bool
  plot the results
- callback : callable
  is called at each iteration as
  val = callback( val )
  where val is the parameter list
Raises
ConvergenceError when the fit stops before the tolerance has been reached.
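The `callback` hook follows the `val = callback( val )` convention above: it receives the current parameter vector and whatever it returns is used for the next iteration. The stand-in loop below merely emulates a fitter invoking it three times (the fitter itself is not called, and `history` is a name of my own), recording the trajectory and clipping the parameters to a range.

```python
import numpy as np

history = []

def callback( val ):
    history.append( np.copy( val ) )     # record the parameter trajectory
    return np.clip( val, -10.0, 10.0 )   # e.g. keep parameters in a sane range

# emulate three iterations of a fitter invoking the callback
val = np.asarray( [0.0, 12.0] )
for _ in range( 3 ):
    val = callback( val )
```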
chiSquaredExtra( data, params, weights=None )
trialfit( params, fi, data, weights, verbose, maxiter )
getParameters( )
Only for debugging; use Model.getParameters( ) otherwise
checkLimits( fitpar, fitindex )
Methods inherited from IterativeFitter
- setParameters( params )
- doPlot( param, force=False )
- fitprolog( ydata, weights=None, accuracy=None, keep=None )
- report( verbose, param, chi, more=None, force=False )
Methods inherited from BaseFitter
- setMinimumScale( scale=0 )
- fitpostscript( ydata, plot=False )
- keepFixed( keep=None )
- insertParameters( fitpar, index=None, into=None )
- modelFit( ydata, weights=None, keep=None )
- limitsFit( ydata, weights=None, keep=None )
- checkNan( ydata, weights=None, accuracy=None )
- getVector( ydata, index=None )
- getHessian( params=None, weights=None, index=None )
- getInverseHessian( params=None, weights=None, index=None )
- getCovarianceMatrix( )
- makeVariance( scale=None )
- normalize( normdfdp, normdata, weight=1.0 )
- getDesign( params=None, xdata=None, index=None )
- chiSquared( ydata, params=None, weights=None )
- getStandardDeviations( )
- monteCarloError( xdata=None, monteCarlo=None)
- getScale( )
- getEvidence( limits=None, noiseLimits=None )
- getLogLikelihood( autoscale=False, var=1.0 )
- getLogZ( limits=None, noiseLimits=None )
- plotResult( xdata=None, ydata=None, model=None, residuals=True,