BayesicFitting

Model Fitting and Evidence Calculation




class ScipyFitter( MaxLikelihoodFitter )

Unified interface to the Scipy minimization module minimize, to fit data to a model.

For documentation see the SciPy Reference Guide, section "Optimization and root finding": scipy.optimize.minimize
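The call that this class wraps can be sketched directly with scipy. A minimal, illustrative chi-squared minimization; the straight-line model and data below are invented for the sketch and are not the fitter's internals:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative data for a straight-line model y = a*x + b.
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0

def chisq(par):
    # Sum of squared residuals between data and model.
    a, b = par
    return np.sum((y - (a * x + b)) ** 2)

# ScipyFitter ultimately hands a chisq-like function to minimize().
res = minimize(chisq, x0=[0.0, 0.0], method='BFGS')
print(res.x)        # close to [2.0, 1.0]
```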

Examples

import numpy
from BayesicFitting import GaussModel, PolynomialModel, ConjugateGradientFitter

x = numpy.arange( 100, dtype=float ) / 10
y = numpy.arange( 100, dtype=float ) / 122          # make slope
y += 0.3 * numpy.random.randn( 100 )                # add noise
y[9:12] += numpy.asarray( [5,10,7], dtype=float )   # make some peak
gauss = GaussModel( )                               # Gaussian
gauss += PolynomialModel( 1 )                       # add linear background
print( gauss.npchain )                              # 5 : parameters in the chain
cgfit = ConjugateGradientFitter( x, gauss )
param = cgfit.fit( y )
print( len( param ) )
# 5
stdev = cgfit.stdevs
chisq = cgfit.chisq
scale = cgfit.scale                                 # noise scale
yfit  = cgfit.getResult( )                          # fitted values
yband = cgfit.monteCarloError( )                    # 1 sigma confidence region

Notes

  1. The ConjugateGradientFitter (CGF) is not guaranteed to find the global minimum.
  2. CGF does not work with fixed parameters or limits.

Attributes

  • gradient : callable gradient( par )
         User-provided method to calculate the gradient of chisq.
         It can speed up the calculation, e.g. when the partials are sparse.
         Default: the dot product of the partials with the residuals.
  • tol : float (1.0e-5)
         Stop when the norm of the gradient is less than tol.
  • norm : float (inf)
         Order of the norm of the gradient (-np.inf is min, np.inf is max).
  • maxIter : int (200*len(par))
         Maximum number of iterations
  • verbose : bool (False)
         if True print status at convergence
  • debug : bool (False)
         return the result of each iteration in vectors
  • yfit : ndarray (read only)
         the result of the model at the optimal parameters
  • ntrans : int (read only)
         number of function calls
  • ngrad : int (read only)
         number of gradient calls
  • vectors : list of ndarray (read only when debug=True)
         list of intermediate vectors
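The default gradient noted above is formed from the model partials and the residuals: for chisq = Σ (y - f)², the gradient is -2 Jᵀ r, where J is the Jacobian of the model and r the residual vector. A small plain-numpy check of that identity against a finite difference; the straight-line model here is invented for the sketch:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 20)
y = 3.0 * x + 0.5
par = np.array([2.5, 0.1])          # trial parameters [slope, offset]

def model(p):
    return p[0] * x + p[1]

def chisq(p):
    r = y - model(p)
    return np.sum(r * r)

# Analytic gradient: -2 * J^T r, with J the partials of the model.
J = np.stack([x, np.ones_like(x)], axis=1)
r = y - model(par)
grad = -2.0 * J.T @ r

# Central finite-difference check of the same gradient.
eps = 1e-6
num = np.array([(chisq(par + eps * np.eye(2)[i]) -
                 chisq(par - eps * np.eye(2)[i])) / (2 * eps)
                for i in range(2)])
print(np.allclose(grad, num, atol=1e-4))
```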

Hidden Attributes

  • _Chisq : class
         internal class that calculates chisq via the methods _Chisq.func() and _Chisq.dfunc()

Returns

  • pars : array_like
         the parameters at the minimum of the function (chisq).

ScipyFitter( xdata, model, method=None, gradient=True, hessp=None, **kwargs )

Constructor. Create a class, providing inputs and model.

Parameters

  • xdata : array_like
         array of independent input values

  • model : Model
         a model function to be fitted (linear or nonlinear)

  • method : None | 'CG' | 'NELDER-MEAD' | 'POWELL' | 'BFGS' | 'NEWTON-CG' | 'L-BFGS-B' |
               'TNC' | 'COBYLA' | 'SLSQP' | 'DOGLEG' | 'TRUST-NCG'
         The method name is case-insensitive.
         None Automatic selection of the method.
                         'SLSQP' when the problem has constraints
                         'L-BFGS-B' when the problem has limits
                         'BFGS' otherwise
         'CG' Conjugate Gradient Method of Polak and Ribiere
                         encapsulates scipy.optimize.minimize-cg
         'NELDER-MEAD' Nelder Mead downhill simplex
                         encapsulates scipy.optimize.minimize-neldermead
         'POWELL' Powell's conjugate direction method
                         encapsulates scipy.optimize.minimize-powell
         'BFGS' Quasi-Newton method of Broyden, Fletcher, Goldfarb, and Shanno
                         encapsulates scipy.optimize.minimize-bfgs
         'NEWTON-CG' Truncated Newton method
                         encapsulates scipy.optimize.minimize-newtoncg
         'L-BFGS-B' Limited Memory Algorithm for Bound Constrained Optimization
                         encapsulates scipy.optimize.minimize-lbfgsb
         'TNC' Truncated Newton method with limits
                         encapsulates scipy.optimize.minimize-tnc
         'COBYLA' Constrained Optimization BY Linear Approximation
                         encapsulates scipy.optimize.minimize-cobyla
         'SLSQP' Sequential Least Squares
                         encapsulates scipy.optimize.minimize-slsqp
         'DOGLEG' Dog-leg trust-region algorithm
                         encapsulates scipy.optimize.minimize-dogleg
         'TRUST-NCG' Newton conjugate gradient trust-region algorithm
                         encapsulates scipy.optimize.minimize-trustncg

  • gradient : bool or None or callable gradient( par )
         If True (the default), use the gradient calculated from the model.
         If False or None, do not use a gradient (a numeric approximation is used instead).
         If callable, use that method as the gradient.

  • hessp : callable hessp(x, p, *args) or None
         Function that computes the product of the Hessian with an arbitrary vector p.
         The Hessian itself is always provided.

  • kwargs : dict
         Possibly includes keywords from
             MaxLikelihoodFitter : errdis, scale, power
             IterativeFitter : maxIter, tolerance, verbose
             BaseFitter : map, keep, fixedScale
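Outside the fitter, the method and gradient choices correspond to the method and jac arguments of scipy.optimize.minimize. A hedged sketch with an invented quadratic chisq (minimum at (2, -1)):

```python
import numpy as np
from scipy.optimize import minimize

# Invented objective standing in for chisq.
def chisq(p):
    return (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2

def grad(p):
    # Analytic gradient, analogous to the one ScipyFitter
    # derives from the model partials when gradient=True.
    return np.array([2.0 * (p[0] - 2.0), 2.0 * (p[1] + 1.0)])

# gradient=True corresponds to passing an analytic jac to scipy.
res1 = minimize(chisq, x0=[0.0, 0.0], method='BFGS', jac=grad)

# gradient=False/None lets scipy approximate the gradient numerically;
# 'Nelder-Mead' needs no gradient at all.
res2 = minimize(chisq, x0=[0.0, 0.0], method='Nelder-Mead')

print(res1.x, res2.x)       # both near [2.0, -1.0]
```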

fit( data, weights=None, par0=None, keep=None, limits=None, maxiter=None, tolerance=None, constraints=(), verbose=0, accuracy=None, plot=False, callback=None, **options )
Return parameters for the model fitted to the data array.

Parameters

  • ydata : array_like
         the data vector to be fitted

  • weights : array_like
         weights pertaining to the data

  • accuracy : float or array_like
         accuracy of (individual) data

  • par0 : array_like
         initial values of the function. Default from Model.

  • keep : dict of {int:float}
         dictionary of indices (int) to be kept at a fixed value (float).
         The values of keep are only valid for this fit run.
         See also ScipyFitter( ..., keep=dict )

  • limits : None or list of 2 floats or list of 2 array_like
         None : no limits applied
         [lo,hi] : low and high limits for all values
         [la,ha] : low array and high array limits for the values

  • constraints : list of callables
         constraint functions cf. All are subject to cf(par) > 0.

  • maxiter : int
         max number of iterations

  • tolerance : float
         stops when ( |hi-lo| / (|hi|+|lo|) ) < tolerance

  • verbose : int
         0 : silent
         >0 : print output if iter % verbose == 0

  • plot : bool
         Plot the results

  • callback : callable
         is called each iteration as
         val = callback( val )
         where val is the minimizable array

  • options : dict
         options to be passed to the method
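The limits and constraints arguments map onto scipy's bounds and inequality constraints ('L-BFGS-B' and 'SLSQP' are the methods the constructor selects for those cases). A sketch with an invented objective; the numbers are illustrative only:

```python
import numpy as np
from scipy.optimize import minimize

# Invented objective with unconstrained minimum at (2, -1).
def chisq(p):
    return (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2

# limits=[lo,hi] correspond to scipy bounds; the result is
# clipped to the feasible box, here (1.5, -0.5).
res_b = minimize(chisq, x0=[0.0, 0.0], method='L-BFGS-B',
                 bounds=[(0.0, 1.5), (-0.5, 0.5)])
print(res_b.x)

# A constraint function cf with cf(par) > 0 corresponds to a
# scipy inequality constraint; this one is inactive at the optimum.
cons = {'type': 'ineq', 'fun': lambda p: p[0] + p[1] - 0.5}
res_c = minimize(chisq, x0=[0.0, 0.0], method='SLSQP',
                 constraints=[cons])
print(res_c.x)
```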

Raises

     ConvergenceError when the fitter stops before the tolerance has been reached.

collectVectors( par )
Methods inherited from MaxLikelihoodFitter
Methods inherited from IterativeFitter
Methods inherited from BaseFitter