
Module _scipy060optimize


local copy of scipy.optimize


Version: 0.7

Functions
 
rosen(x)
 
rosen_der(x)
 
rosen_hess(x)
 
rosen_hess_prod(x, p)
 
fmin(func, x0, args=(), xtol=1e-4, ftol=1e-4, maxiter=None, maxfun=None, full_output=0, disp=1, retall=0, callback=None)
Minimize a function using the downhill simplex algorithm.
 
line_search(f, myfprime, xk, pk, gfk, old_fval, old_old_fval, args=(), c1=1e-4, c2=0.9, amax=50)
Find alpha that satisfies strong Wolfe conditions.
 
approx_fprime(xk, f, epsilon, *args)
 
check_grad(func, grad, x0, *args)
 
fmin_ncg(f, x0, fprime, fhess_p=None, fhess=None, args=(), avextol=1e-5, epsilon=_epsilon, maxiter=None, full_output=0, disp=1, retall=0, callback=None)
Minimize the function f using the Newton-CG method.
 
fminbound(func, x1, x2, args=(), xtol=1e-5, maxfun=500, full_output=0, disp=1)
Bounded minimization for scalar functions.
 
brent(func, args=(), brack=None, tol=1.48e-8, full_output=0, maxiter=500)
Given a function of one-variable and a possible bracketing interval, return the minimum of the function isolated to a fractional precision of tol.
 
golden(func, args=(), brack=None, tol=_epsilon, full_output=0)
Given a function of one-variable and a possible bracketing interval, return the minimum of the function isolated to a fractional precision of tol.
 
bracket(func, xa=0.0, xb=1.0, args=(), grow_limit=110.0, maxiter=1000)
Given a function and distinct initial points, search in the downhill direction (as defined by the initial points) and return new points xa, xb, xc that bracket the minimum of the function: f(xa) > f(xb) < f(xc).
 
fmin_powell(func, x0, args=(), xtol=1e-4, ftol=1e-4, maxiter=None, maxfun=None, full_output=0, disp=1, retall=0, callback=None, direc=None)
Minimize a function using modified Powell's method.
 
brute(func, ranges, args=(), Ns=20, full_output=0, finish=fmin)
Minimize a function over a given range by brute force.
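
The gradient helpers above have no detailed entries below.  A minimal usage
sketch, assuming the module is importable as mystic._scipy060optimize, that
checks the analytic Rosenbrock gradient against a forward-difference
approximation::

    import numpy as np
    from mystic._scipy060optimize import rosen, rosen_der, approx_fprime, check_grad

    x = np.array([1.1, 0.9])
    eps = np.sqrt(np.finfo(float).eps)       # illustrative finite-difference step
    g_approx = approx_fprime(x, rosen, eps)  # forward-difference gradient of rosen at x
    err = check_grad(rosen, rosen_der, x)    # norm of (analytic - approximate) gradient
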
Function Details

fmin(func, x0, args=(), xtol=1e-4, ftol=1e-4, maxiter=None, maxfun=None, full_output=0, disp=1, retall=0, callback=None)

Minimize a function using the downhill simplex algorithm.

:Parameters:

  func : the Python function or method to be minimized.
  x0 : ndarray - the initial guess.
  args : extra arguments for func.
  callback : an optional user-supplied function to call after each
              iteration.  It is called as callback(xk), where xk is the
              current parameter vector.

:Returns: (xopt, {fopt, iter, funcalls, warnflag}, {allvecs})

  xopt : ndarray 
    minimizer of function
  fopt : number 
    value of function at minimum: fopt = func(xopt)
  iter : number 
    number of iterations
  funcalls : number
    number of function calls
  warnflag : number 
    Integer warning flag:
              1 : 'Maximum number of function evaluations.'
              2 : 'Maximum number of iterations.'
  allvecs : Python list 
    a list of solutions at each iteration

:OtherParameters:

  xtol : number 
    acceptable relative error in xopt for convergence.
  ftol : number 
    acceptable relative error in func(xopt) for convergence.
  maxiter : number 
    the maximum number of iterations to perform.
  maxfun : number 
    the maximum number of function evaluations.
  full_output : number 
    non-zero if fval and warnflag outputs are desired.
  disp : number 
    non-zero to print convergence messages.
  retall : number 
    non-zero to return list of solutions at each iteration

:SeeAlso:

  fmin, fmin_powell, fmin_cg,
         fmin_bfgs, fmin_ncg -- multivariate local optimizers
  leastsq -- nonlinear least squares minimizer

  fmin_l_bfgs_b, fmin_tnc,
         fmin_cobyla -- constrained multivariate optimizers

  anneal, brute -- global optimizers

  fminbound, brent, golden, bracket -- local scalar minimizers

  fsolve -- n-dimensional root-finding

  brentq, brenth, ridder, bisect, newton -- one-dimensional root-finding

  fixed_point -- scalar fixed-point finder
  
Notes

-----------

  Uses a Nelder-Mead simplex algorithm to find the minimum of a function
  of one or more variables.
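
A minimal usage sketch, assuming the module is importable as
mystic._scipy060optimize (the start point and tolerance are illustrative)::

    import numpy as np
    from mystic._scipy060optimize import fmin, rosen

    # Nelder-Mead on the Rosenbrock test function; the true minimum
    # is at [1, 1, ..., 1]
    x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
    xopt = fmin(rosen, x0, xtol=1e-8, disp=0)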
  

line_search(f, myfprime, xk, pk, gfk, old_fval, old_old_fval, args=(), c1=1e-4, c2=0.9, amax=50)


Find alpha that satisfies strong Wolfe conditions.

:Parameters:

f : objective function
myfprime : objective function gradient (can be None)
xk : ndarray -- start point
pk : ndarray -- search direction
gfk : ndarray -- gradient value for x=xk
args : additional arguments for user functions
c1 : number -- parameter for Armijo condition rule
c2 : number -- parameter for curvature condition rule

:Returns:

alpha0 : number -- required alpha (x_new = x0 + alpha * pk)
fc : number of function evaluations
gc : number of gradient evaluations

Notes

--------------------------------

Uses the line search algorithm to enforce strong Wolfe conditions; see
Wright and Nocedal, 'Numerical Optimization', 1999, pg. 59-60.

For the zoom phase it uses an algorithm described in the same reference.
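
A minimal sketch of the assumed call pattern, taking one Wolfe line search
along the steepest-descent direction of the Rosenbrock function (the start
point and the guess for the previous function value are illustrative; the
first element of the returned tuple is taken to be the step length, per the
Returns section above)::

    import numpy as np
    from mystic._scipy060optimize import line_search, rosen, rosen_der

    xk = np.array([-1.0, 1.0])        # current iterate
    gfk = rosen_der(xk)               # gradient at xk
    pk = -gfk                         # steepest-descent search direction
    old_fval = rosen(xk)              # f(xk)
    old_old_fval = old_fval + np.linalg.norm(gfk)  # rough previous value

    result = line_search(rosen, rosen_der, xk, pk, gfk, old_fval, old_old_fval)
    alpha = result[0]                 # step length satisfying strong Wolfe conditions
    x_new = xk + alpha * pk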

fmin_ncg(f, x0, fprime, fhess_p=None, fhess=None, args=(), avextol=1e-5, epsilon=_epsilon, maxiter=None, full_output=0, disp=1, retall=0, callback=None)

Minimize the function f using the Newton-CG method. 

:Parameters:

f -- the Python function or method to be minimized.
x0 : ndarray -- the initial guess for the minimizer.
fprime -- a function to compute the gradient of f: fprime(x, *args)
fhess_p -- a function to compute the Hessian of f times an
           arbitrary vector: fhess_p(x, p, *args)
fhess -- a function to compute the Hessian matrix of f.
args -- extra arguments for f, fprime, fhess_p, and fhess (the same
        set of extra arguments is supplied to all of these functions).

epsilon : number
    if fhess is approximated, use this value for the finite-difference
    step size (can be scalar or vector)
callback -- an optional user-supplied function to call after each
            iteration.  It is called as callback(xk), where xk is the
            current parameter vector.

:Returns: (xopt, {fopt, fcalls, gcalls, hcalls, warnflag},{allvecs})

xopt : ndarray
    the minimizer of f
fopt : number
    the value of the function at xopt: fopt = f(xopt)
fcalls : number
    the number of function calls
gcalls : number
    the number of gradient calls
hcalls : number 
    the number of hessian calls.
warnflag : number
    algorithm warnings:
            1 : 'Maximum number of iterations exceeded.'
allvecs : Python list
    a list of all tried iterates

:OtherParameters:

avextol : number
    Convergence is assumed when the average relative error in
           the minimizer falls below this amount.
maxiter : number
    Maximum number of iterations to allow.
full_output : number
    If non-zero return the optional outputs.
disp : number
    If non-zero print convergence message.
retall : bool
    return a list of results at each iteration if True

:SeeAlso:

  fmin, fmin_powell, fmin_cg,
         fmin_bfgs, fmin_ncg -- multivariate local optimizers
  leastsq -- nonlinear least squares minimizer

  fmin_l_bfgs_b, fmin_tnc,
         fmin_cobyla -- constrained multivariate optimizers

  anneal, brute -- global optimizers

  fminbound, brent, golden, bracket -- local scalar minimizers

  fsolve -- n-dimensional root-finding

  brentq, brenth, ridder, bisect, newton -- one-dimensional root-finding

  fixed_point -- scalar fixed-point finder

Notes

---------------------------------------------

Only one of fhess_p or fhess need be given.  If fhess is provided,
then fhess_p is ignored.  fhess_p must compute the product of the Hessian
with an arbitrary vector; if neither fhess nor fhess_p is provided, that
product is approximated using finite differences on fprime.
See Wright and Nocedal, 'Numerical Optimization', 1999, pg. 140.
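
A minimal usage sketch, supplying the Hessian-vector product through fhess_p
so the full Hessian is never formed (the start point and tolerance are
illustrative)::

    import numpy as np
    from mystic._scipy060optimize import fmin_ncg, rosen, rosen_der, rosen_hess_prod

    x0 = np.array([4.0, -2.5])
    xopt = fmin_ncg(rosen, x0, rosen_der, fhess_p=rosen_hess_prod,
                    avextol=1e-8, disp=0)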

fminbound(func, x1, x2, args=(), xtol=1e-5, maxfun=500, full_output=0, disp=1)

Bounded minimization for scalar functions.

:Parameters:

  func -- the function to be minimized (must accept scalar input and return
          scalar output).
  x1, x2 : number
    the lower and upper bounds of the optimization interval.
  args -- extra arguments to pass to function.
  xtol : number
    the convergence tolerance.
  maxfun : number
    maximum function evaluations.
  full_output : number
    Non-zero to return optional outputs.
  disp : number
    Non-zero to print messages.
          0 : no message printing.
          1 : non-convergence notification messages only.
          2 : print a message on convergence too.
          3 : print iteration results.


:Returns: (xopt, {fval, ierr, numfunc})

  xopt : ndarray
    The minimizer of the function over the interval.
  fval : number
    The function value at the minimum point.
  ierr : number
    An error flag (0 if converged, 1 if maximum number of
          function calls reached).
  numfunc : number
    The number of function calls.

:SeeAlso:

  fmin, fmin_powell, fmin_cg,
         fmin_bfgs, fmin_ncg -- multivariate local optimizers
  leastsq -- nonlinear least squares minimizer

  fmin_l_bfgs_b, fmin_tnc,
         fmin_cobyla -- constrained multivariate optimizers

  anneal, brute -- global optimizers

  fminbound, brent, golden, bracket -- local scalar minimizers

  fsolve -- n-dimensional root-finding

  brentq, brenth, ridder, bisect, newton -- one-dimensional root-finding

  fixed_point -- scalar fixed-point finder

Notes

-------------------------------------------------------

Finds a local minimizer of the scalar function func in the interval
  x1 < xopt < x2 using Brent's method.  (See brent for auto-bracketing).
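
A minimal usage sketch (the quadratic objective and interval are illustrative)::

    from mystic._scipy060optimize import fminbound

    f = lambda x: (x - 2.0)**2 + 1.0          # scalar objective, minimum at x = 2
    xopt = fminbound(f, 0.0, 10.0, xtol=1e-8, disp=0)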

brent(func, args=(), brack=None, tol=1.48e-8, full_output=0, maxiter=500)

Given a function of one-variable and a possible bracketing interval,
return the minimum of the function isolated to a fractional precision of
tol. 

:Parameters:

func -- objective function
args -- additional arguments (if present)
brack -- a triple (a,b,c) where a < b < c and
    func(b) < func(a), func(c).  If brack is a pair (a,c), it is taken as
    a starting interval for a downhill bracket search (see bracket);
    the solution found is not guaranteed to satisfy a <= x <= c.
    
full_output : number
    0 - return only x (default)
    1 - return all output args (xmin, fval, iter, funcalls)
    
:Returns:

xmin : ndarray
    the minimum point
fval : number
    the function value at the minimum
iter : number
    number of iterations
funcalls : number
    number of objective function evaluations
    
:SeeAlso:

  fmin, fmin_powell, fmin_cg,
         fmin_bfgs, fmin_ncg -- multivariate local optimizers
  leastsq -- nonlinear least squares minimizer

  fmin_l_bfgs_b, fmin_tnc,
         fmin_cobyla -- constrained multivariate optimizers

  anneal, brute -- global optimizers

  fminbound, brent, golden, bracket -- local scalar minimizers

  fsolve -- n-dimensional root-finding

  brentq, brenth, ridder, bisect, newton -- one-dimensional root-finding

  fixed_point -- scalar fixed-point finder

Notes

----------------------------

Uses inverse parabolic interpolation when possible to speed up convergence
of the golden section method.
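
A minimal usage sketch, giving a two-point bracket so that a downhill bracket
search is performed first (the objective is illustrative)::

    from mystic._scipy060optimize import brent

    f = lambda x: (x - 1.0)**2                # minimum at x = 1
    xmin, fval, niter, funcalls = brent(f, brack=(0.0, 2.0), full_output=1)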

golden(func, args=(), brack=None, tol=_epsilon, full_output=0)

Given a function of one-variable and a possible bracketing interval,
return the minimum of the function isolated to a fractional precision of
tol. 

:Parameters:

func -- objective function
args -- additional arguments (if present)
brack -- a triple (a,b,c) where a < b < c and
    func(b) < func(a), func(c).  If brack is a pair (a,c), it is taken as
    a starting interval for a downhill bracket search (see bracket);
    the solution found is not guaranteed to satisfy a <= x <= c.
tol : number
    x tolerance stop criterion
full_output : number
    non-zero to return the optional outputs
    
:SeeAlso:

  fmin, fmin_powell, fmin_cg,
         fmin_bfgs, fmin_ncg -- multivariate local optimizers
  leastsq -- nonlinear least squares minimizer

  fmin_l_bfgs_b, fmin_tnc,
         fmin_cobyla -- constrained multivariate optimizers

  anneal, brute -- global optimizers

  fminbound, brent, golden, bracket -- local scalar minimizers

  fsolve -- n-dimensional root-finding

  brentq, brenth, ridder, bisect, newton -- one-dimensional root-finding

  fixed_point -- scalar fixed-point finder

Notes

---------------------------------------

Uses an analog of the bisection method to decrease the bracketed interval.
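
A minimal usage sketch (the objective and bracket are illustrative)::

    from mystic._scipy060optimize import golden

    f = lambda x: (x + 3.0)**2                # minimum at x = -3
    xmin = golden(f, brack=(-10.0, 5.0))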

bracket(func, xa=0.0, xb=1.0, args=(), grow_limit=110.0, maxiter=1000)

Given a function and distinct initial points, search in the downhill
direction (as defined by the initial points) and return new points
xa, xb, xc that bracket the minimum of the function:
f(xa) > f(xb) < f(xc).  The bracket found is not guaranteed to lie
within the initial interval [xa, xb].

:Parameters:

func -- objective function
xa, xb : number
    initial bracketing interval
args -- additional arguments (if present)
grow_limit : number
    maximum grow limit
maxiter : number
    maximum number of iterations

:Returns: xa, xb, xc, fa, fb, fc, funcalls

xa, xb, xc : number
    bracket
fa, fb, fc : number
    objective function values in bracket
funcalls : number
    number of function evaluations
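
A minimal usage sketch (the objective is illustrative)::

    from mystic._scipy060optimize import bracket

    f = lambda x: (x - 5.0)**2                # minimum at x = 5
    xa, xb, xc, fa, fb, fc, funcalls = bracket(f, xa=0.0, xb=1.0)
    # f(xa) > f(xb) < f(xc), so the minimum lies between xa and xc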

fmin_powell(func, x0, args=(), xtol=1e-4, ftol=1e-4, maxiter=None, maxfun=None, full_output=0, disp=1, retall=0, callback=None, direc=None)

Minimize a function using modified Powell's method.

:Parameters:

  func -- the Python function or method to be minimized.
  x0 : ndarray
    the initial guess.
  args -- extra arguments for func
  callback -- an optional user-supplied function to call after each
              iteration.  It is called as callback(xk), where xk is the
              current parameter vector
  direc -- initial direction set

:Returns: (xopt, {fopt, xi, direc, iter, funcalls, warnflag}, {allvecs})

  xopt : ndarray
    minimizer of function

  fopt : number
    value of function at minimum: fopt = func(xopt)
  direc -- current direction set
  iter : number
    number of iterations
  funcalls : number 
    number of function calls
  warnflag : number
    Integer warning flag:
              1 : 'Maximum number of function evaluations.'
              2 : 'Maximum number of iterations.'
  allvecs : Python list
    a list of solutions at each iteration

:OtherParameters:

  xtol : number
    line-search error tolerance.
  ftol : number
    acceptable relative error in func(xopt) for convergence.
  maxiter : number
    the maximum number of iterations to perform.
  maxfun : number
    the maximum number of function evaluations.
  full_output : number
    non-zero if fval and warnflag outputs are desired.
  disp : number
    non-zero to print convergence messages.
  retall : number
    non-zero to return a list of the solution at each iteration

:SeeAlso:

  fmin, fmin_powell, fmin_cg,
         fmin_bfgs, fmin_ncg -- multivariate local optimizers
  leastsq -- nonlinear least squares minimizer

  fmin_l_bfgs_b, fmin_tnc,
         fmin_cobyla -- constrained multivariate optimizers

  anneal, brute -- global optimizers

  fminbound, brent, golden, bracket -- local scalar minimizers

  fsolve -- n-dimensional root-finding

  brentq, brenth, ridder, bisect, newton -- one-dimensional root-finding

  fixed_point -- scalar fixed-point finder
  
Notes

-----------------------

  Uses a modification of Powell's method to find the minimum of a function
  of N variables.
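
A minimal usage sketch (the start point is illustrative)::

    import numpy as np
    from mystic._scipy060optimize import fmin_powell, rosen

    x0 = np.array([1.3, 0.7, 0.8])
    xopt = fmin_powell(rosen, x0, xtol=1e-8, disp=0)   # expected near [1, 1, 1]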
  

brute(func, ranges, args=(), Ns=20, full_output=0, finish=fmin)

Minimize a function over a given range by brute force.

:Parameters:

func -- Function to be optimized
ranges : tuple
    Tuple where each element is a tuple of parameters
    or a slice object to be handed to numpy.mgrid

args -- Extra arguments to the function.
Ns : number
    Default number of grid samples along each axis, used when a range
    does not specify its own number of points
full_output : number
    Non-zero to return the evaluation grid.

:Returns: (x0, fval, {grid, Jout})

x0 : ndarray
    Value of arguments giving the minimum over the grid
fval : number
    Function value at the minimum
grid : tuple
    Tuple with the same length as x0, representing the evaluation grid
Jout : ndarray
    Function values over the grid:  Jout = func(*grid)

:SeeAlso:

  fmin, fmin_powell, fmin_cg,
         fmin_bfgs, fmin_ncg -- multivariate local optimizers
  leastsq -- nonlinear least squares minimizer

  fmin_l_bfgs_b, fmin_tnc,
         fmin_cobyla -- constrained multivariate optimizers

  anneal, brute -- global optimizers

  fminbound, brent, golden, bracket -- local scalar minimizers

  fsolve -- n-dimensional root-finding

  brentq, brenth, ridder, bisect, newton -- one-dimensional root-finding

  fixed_point -- scalar fixed-point finder

Notes

------------------

Find the minimum of a function evaluated on a grid given by the tuple ranges.
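
A minimal usage sketch, assuming (as in scipy.optimize.brute) that a range
given as a (min, max) pair is expanded to Ns grid points and the grid minimum
is then polished by the finish routine, fmin by default (the objective is
illustrative)::

    from mystic._scipy060optimize import brute

    f = lambda z: (z[0] - 1.0)**2 + (z[1] + 2.0)**2    # minimum at (1, -2)
    ranges = ((-3.0, 3.0), (-3.0, 3.0))
    xmin = brute(f, ranges, Ns=25)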