class AmoebaFitter( MaxLikelihoodFitter )
Fitter using the simulated annealing simplex minimum finding algorithm.
See also: AnnealingAmoeba
Author: Do Kester
Examples
import numpy
from BayesicFitting import SineModel, AmoebaFitter, IterationPlotter

# x and y are 1-d float arrays.
x = numpy.arange( 100, dtype=float ) / 10
y = 3.5 * numpy.sin( x + 0.4 )              # make sine
numpy.random.seed( 12345 )                  # seed the random number generator
y += numpy.random.randn( 100 ) * 0.2        # add Gaussian noise
sine = SineModel( )                         # sinusoidal model
lolim = numpy.asarray( [1,-10,-10], dtype=float )
hilim = numpy.asarray( [100,10,10], dtype=float )
sine.setLimits( lolim, hilim ) # set limits on the model parameters
amfit = AmoebaFitter( x, sine )
param = amfit.fit( y, temp=10 )
stdev = amfit.getStandardDeviations( ) # stdevs on the parameters
chisq = amfit.getChiSquared( )
scale = amfit.getScale( ) # noise scale
yfit = amfit.getResult( ) # fitted values
yfit = sine( x ) # fitted values ( same as previous )
yband = amfit.monteCarloError( ) # 1 sigma confidence region
# for diagnostics ( or just for fun )
amfit = AmoebaFitter( x, sine )
amfit.setTemperature( 10 ) # set a temperature to escape local minima
amfit.setVerbose( 10 ) # report every 10th iteration
plotter = IterationPlotter( ) # from BayesicFitting
amfit.setPlotter( plotter, 20 ) # make a plot every 20th iteration
param = amfit.fit( y )
Notes
- AmoebaFitter is not guaranteed to find the global minimum.
- The calculation of the evidence is a Gaussian approximation which is only exact for linear models with a fixed scale.
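Given the first note, one purely illustrative safeguard is to repeat the fit with different random seeds and keep the solution with the lowest chisq; this sketch reuses x, y and sine from the Examples above and is not part of the class API:

best = None
for seed in [ 4567, 12345, 99731 ] :
    amfit = AmoebaFitter( x, sine )
    par = amfit.fit( y, temp=10, seed=seed )        # annealing fit with this seed
    chisq = amfit.getChiSquared( )
    if best is None or chisq < best[0] :
        best = ( chisq, par )                       # keep the lowest-chisq solution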
AmoebaFitter( xdata, model, **kwargs )
Create a new Amoeba class, providing inputs and model.
Parameters
- xdata : array_like
  independent input values
- model : Model
  the model function to be fitted
- kwargs : dict
  Possibly includes keywords from
  MaxLikelihoodFitter : errdis, scale, power
  IterativeFitter : maxIter, tolerance, verbose
  BaseFitter : map, keep, fixedScale
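For illustration only, a minimal construction sketch passing some of these keywords through to the inherited fitters; it reuses the x and sine objects from the Examples above and assumes maxIter and verbose are forwarded unchanged:

amfit = AmoebaFitter( x, sine, maxIter=2000, verbose=1 )   # pass-through keywords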
fit( data, weights=None, par0=None, keep=None, size=None, seed=4567, temp=0, limits=None, maxiter=1000, tolerance=0.0001, cooling=0.95, steps=10, verbose=0, plot=False, accuracy=None, callback=None )
Fit the model to the data. When done, it also calculates the Hessian matrix and chisq.
Parameters
- data : array_like
  the data vector to be fitted
- weights : array_like
  weights pertaining to the data
  The weights are relative weights unless scale is set.
- accuracy : float or array_like
  accuracy of (individual) data
- par0 : array_like
  initial values of the parameters of the model
  default: from model
- keep : dict of {int:float}
  dictionary of indices (int) to be kept at a fixed value (float)
  The values of keep are only valid for this fit
  See also AmoebaFitter( ..., keep=dict )
- size : float or array_like
  step size of the simplex
- seed : int
  for random number generator
- temp : float
  temperature of annealing (0 is no annealing)
- limits : None or list of 2 floats or list of 2 array_like
  None : no limits applied
  [lo,hi] : low and high limits for all values
  [la,ha] : low array and high array limits for the values
- maxiter : int
  max number of iterations
- tolerance : float
  stops when ( |hi-lo| / (|hi|+|lo|) ) < tolerance
- cooling : float
  cooling factor when annealing
- steps : int
  number of cycles in each cooling step
- verbose : int
  0 : silent
  1 : print results to output
  2 : print some info every 100 iterations
  3 : print some info all iterations
- plot : bool
  plot the results
- callback : callable
  is called each iteration as val = callback( val ),
  where val is the minimizable array
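A sketch combining several of these keywords, reusing x, y, sine and the lolim/hilim arrays from the Examples above; the keyword values are illustrative only:

amfit = AmoebaFitter( x, sine )
param = amfit.fit( y, temp=10, cooling=0.9, steps=10, seed=4567,
                   limits=[lolim, hilim], maxiter=2000, tolerance=1e-5, verbose=1 )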
Methods inherited from MaxLikelihoodFitter
- makeFuncs( data, weights=None, index=None, ret=3 )
- getScale( )
- getLogLikelihood( autoscale=False, var=1.0 )
- normalize( normdfdp, normdata, weight=1.0 )
- testGradient( par, at, data, weights=None )
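A short sketch of post-fit diagnostics with two of these methods, continuing the Examples above and calling getLogLikelihood with its default arguments:

scale = amfit.getScale( )             # estimated noise scale of the residuals
logL  = amfit.getLogLikelihood( )     # log-likelihood at the fitted parameters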
Methods inherited from IterativeFitter
- setParameters( params )
- doPlot( param, force=False )
- fitprolog( ydata, weights=None, accuracy=None, keep=None )
- report( verbose, param, chi, more=None, force=False )
Methods inherited from BaseFitter
- setMinimumScale( scale=0 )
- fitpostscript( ydata, plot=False )
- keepFixed( keep=None )
- insertParameters( fitpar, index=None, into=None )
- modelFit( ydata, weights=None, keep=None )
- limitsFit( ydata, weights=None, keep=None )
- checkNan( ydata, weights=None, accuracy=None )
- getVector( ydata, index=None )
- getHessian( params=None, weights=None, index=None )
- getInverseHessian( params=None, weights=None, index=None )
- getCovarianceMatrix( )
- makeVariance( scale=None )
- getDesign( params=None, xdata=None, index=None )
- chiSquared( ydata, params=None, weights=None )
- getStandardDeviations( )
- monteCarloError( xdata=None, monteCarlo=None)
- getEvidence( limits=None, noiseLimits=None )
- getLogZ( limits=None, noiseLimits=None )
- plotResult( xdata=None, ydata=None, model=None, residuals=True,
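Finally, a sketch of typical post-fit calls to some of these BaseFitter methods, again reusing the objects from the Examples; the noiseLimits value is an assumed [low, high] range for the noise scale:

stdev = amfit.getStandardDeviations( )        # standard deviations of the parameters
covar = amfit.getCovarianceMatrix( )          # covariance matrix of the parameters
yband = amfit.monteCarloError( x )            # 1-sigma confidence band at x
evid  = amfit.getEvidence( limits=[lolim, hilim], noiseLimits=[0.01, 1.0] )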