class ScipyFitter( MaxLikelihoodFitter )
---|
Unified interface to the scipy minimization function minimize, used to fit data to a model.
For documentation see scipy.org -> Docs -> Reference Guide -> Optimization and root finding: scipy.optimize.minimize
Examples
# construct x and y data arrays: a linear slope with noise and a peak
import numpy
from BayesicFitting import GaussModel, PolynomialModel, ConjugateGradientFitter
x = numpy.arange( 100, dtype=float ) / 10
y = numpy.arange( 100, dtype=float ) / 122 # make slope
y += 0.3 * numpy.random.randn( 100 ) # add noise
y[9:12] += numpy.asarray( [5,10,7], dtype=float ) # make some peak
gauss = GaussModel( ) # Gaussian
gauss += PolynomialModel( 1 ) # add linear background
print( gauss.npchain )
cgfit = ConjugateGradientFitter( x, gauss )
param = cgfit.fit( y )
print( len( param ) )
5
stdev = cgfit.stdevs
chisq = cgfit.chisq
scale = cgfit.scale # noise scale
yfit = cgfit.getResult( ) # fitted values
yband = cgfit.monteCarloError( ) # 1 sigma confidence region
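The same data can also be fitted with ScipyFitter itself. A minimal sketch, assuming the arrays and model constructed above; the choice of 'POWELL' is purely illustrative.
from BayesicFitting import ScipyFitter
sfit = ScipyFitter( x, gauss, method='POWELL' ) # explicit choice of minimizer
param = sfit.fit( y ) # same fit interface as above
print( sfit.chisq ) # chisq at the minimum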
Notes
- CGF (the ConjugateGradientFitter) is not guaranteed to find the global minimum.
- CGF does not work with fixed parameters or limits.
Attributes
- gradient : callable gradient( par )
User-provided method to calculate the gradient of chisq.
It can be used to speed up the dot-product calculation, e.g. because of the sparseness of the partial.
Default: dot product of the partial with the residuals.
- tol : float (1.0e-5)
Stop when the norm of the gradient is less than tol.
- norm : float (inf)
Order to use for the norm of the gradient (-np.Inf is min, np.Inf is max).
- maxIter : int (200*len(par))
Maximum number of iterations.
- verbose : bool (False)
If True, print status at convergence.
- debug : bool (False)
Return the result of each iteration in vectors.
- yfit : ndarray (read only)
The result of the model at the optimal parameters.
- ntrans : int (read only)
Number of function calls.
- ngrad : int (read only)
Number of gradient calls.
- vectors : list of ndarray (read only, when debug=True)
List of intermediate vectors.
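As a hedged sketch of how these attributes might be used, assuming the cgfit fitter from the Examples above; whether tol can be assigned before fitting is an assumption here.
cgfit.tol = 1.0e-8 # tighten the gradient-norm stopping criterion (assumption: settable)
param = cgfit.fit( y )
print( cgfit.ntrans, cgfit.ngrad ) # number of function calls and of gradient calls
print( cgfit.yfit[:5] ) # model results at the optimal parameters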
Hidden Attributes
- _Chisq : class
Class to calculate chisq in the methods Chisq.func() and Chisq.dfunc().
Returns
- pars : array_like
the parameters at the minimum of the function (chisq).
ScipyFitter( xdata, model, method=None, gradient=True, hessp=None, **kwargs ) |
---|
Constructor. Create a new fitter, providing the input values and the model.
Parameters
- xdata : array_like
array of independent input values
- model : Model
a model function to be fitted (linear or nonlinear)
- method : None | 'CG' | 'NELDER-MEAD' | 'POWELL' | 'BFGS' | 'NEWTON-CG' | 'L-BFGS-B' | 'TNC' | 'COBYLA' | 'SLSQP' | 'DOGLEG' | 'TRUST-NCG'
The method name is case invariant. (A short usage sketch follows this parameter list.)
None : automatic selection of the method:
'SLSQP' when the problem has constraints,
'L-BFGS-B' when the problem has limits,
'BFGS' otherwise.
'CG' : Conjugate Gradient Method of Polak and Ribiere;
encapsulates scipy.optimize.minimize-cg
'NELDER-MEAD' : Nelder-Mead downhill simplex;
encapsulates scipy.optimize.minimize-neldermead
'POWELL' : Powell's conjugate direction method;
encapsulates scipy.optimize.minimize-powell
'BFGS' : quasi-Newton method of Broyden, Fletcher, Goldfarb, and Shanno;
encapsulates scipy.optimize.minimize-bfgs
'NEWTON-CG' : truncated Newton method;
encapsulates scipy.optimize.minimize-newtoncg
'L-BFGS-B' : limited-memory algorithm for bound-constrained optimization;
encapsulates scipy.optimize.minimize-lbfgsb
'TNC' : truncated Newton method with limits;
encapsulates scipy.optimize.minimize-tnc
'COBYLA' : Constrained Optimization BY Linear Approximation;
encapsulates scipy.optimize.minimize-cobyla
'SLSQP' : Sequential Least Squares Programming;
encapsulates scipy.optimize.minimize-slsqp
'DOGLEG' : dog-leg trust-region algorithm;
encapsulates scipy.optimize.minimize-dogleg
'TRUST-NCG' : Newton conjugate gradient trust-region algorithm;
encapsulates scipy.optimize.minimize-trustncg
- gradient : bool or None or callable gradient( par )
If True (the default), use the gradient calculated from the model.
If False or None, do not use a gradient (a numeric approximation is used instead).
If callable, use that method as the gradient.
- hessp : callable hessp(x, p, *args) or None
Function which computes the Hessian times an arbitrary vector, p.
The Hessian itself is always provided.
- kwargs : dict
Possibly includes keywords from
MaxLikelihoodFitter : errdis, scale, power
IterativeFitter : maxIter, tolerance, verbose
BaseFitter : map, keep, fixedScale
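A short sketch of constructor variants, assuming the x data and gauss model from the Examples above; the particular method choices are illustrative only.
fit1 = ScipyFitter( x, gauss ) # automatic method selection
fit2 = ScipyFitter( x, gauss, method='nelder-mead' ) # case-invariant name; needs no gradient
fit3 = ScipyFitter( x, gauss, method='CG', gradient=False ) # numeric gradient approximation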
fit( data, weights=None, par0=None, keep=None, limits=None, maxiter=None, tolerance=None, constraints=(), verbose=0, accuracy=None, plot=False, callback=None, **options ) |
---|
Parameters
- data : array_like
the data vector to be fitted
- weights : array_like
weights pertaining to the data
- accuracy : float or array_like
accuracy of (individual) data
- par0 : array_like
initial values of the parameters. Default from Model.
- keep : dict of {int:float}
dictionary of indices (int) to be kept at a fixed value (float).
The values of keep are only valid for this fit.
See also ScipyFitter( ..., keep=dict )
- limits : None or list of 2 floats or list of 2 array_like
None : no limits applied
[lo,hi] : low and high limits for all values
[la,ha] : low array and high array limits for the values
(A usage sketch follows after the Raises section.)
- constraints : list of callables
constraint functions cf. All are subject to cf(par) > 0.
- maxiter : int
maximum number of iterations
- tolerance : float
stops when ( |hi-lo| / (|hi|+|lo|) ) < tolerance
- verbose : int
0 : silent
>0 : print output if iter % verbose == 0
- plot : bool
plot the results
- callback : callable
is called at each iteration as val = callback( val ), where val is the minimizable array
- options : dict
options to be passed to the method
Raises
ConvergenceError when the fit stops before the tolerance has been reached.
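A hedged sketch of fit() calls with limits, keep, and constraints, assuming a fresh fitter with automatic method selection (per the constructor docs, limits should select 'L-BFGS-B' and constraints 'SLSQP'); the weights and the constraint function are assumptions for illustration.
sfit2 = ScipyFitter( x, gauss ) # method=None: automatic selection
w = numpy.ones( 100, dtype=float ) # equal weights, for illustration only
param = sfit2.fit( y, weights=w, limits=[-10.0, 10.0], keep={4:0.0} ) # bounds on all parameters; parameter 4 fixed at 0.0 for this fit
con = [lambda par : par[0]] # hypothetical constraint: cf(par) > 0 keeps parameter 0 positive
param = sfit2.fit( y, constraints=con ) # constraints trigger 'SLSQP'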
collectVectors( par ) |
---|
Methods inherited from MaxLikelihoodFitter |
---|
- makeFuncs( data, weights=None, index=None, ret=3 )
- getScale( )
- getLogLikelihood( autoscale=False, var=1.0 )
- normalize( normdfdp, normdata, weight=1.0 )
- testGradient( par, at, data, weights=None )
Methods inherited from IterativeFitter |
---|
- setParameters( params )
- doPlot( param, force=False )
- fitprolog( ydata, weights=None, accuracy=None, keep=None )
- report( verbose, param, chi, more=None, force=False )
Methods inherited from BaseFitter |
---|
- setMinimumScale( scale=0 )
- fitpostscript( ydata, plot=False )
- keepFixed( keep=None )
- insertParameters( fitpar, index=None, into=None )
- modelFit( ydata, weights=None, keep=None )
- limitsFit( ydata, weights=None, keep=None )
- checkNan( ydata, weights=None, accuracy=None )
- getVector( ydata, index=None )
- getHessian( params=None, weights=None, index=None )
- getInverseHessian( params=None, weights=None, index=None )
- getCovarianceMatrix( )
- makeVariance( scale=None )
- getDesign( params=None, xdata=None, index=None )
- chiSquared( ydata, params=None, weights=None )
- getStandardDeviations( )
- monteCarloError( xdata=None, monteCarlo=None )
- getEvidence( limits=None, noiseLimits=None )
- getLogZ( limits=None, noiseLimits=None )
- plotResult( xdata=None, ydata=None, model=None, residuals=True,