MATLAB Program to Find a Function Minimum

Using an Enhanced Random Search Method

by Namir Shammas

The following program calculates the minimum point of a multi-variable function using an enhanced random search method. The search selects random points within a given range for each variable. In each iteration, the newly selected point and the current best point form a path along which a linear search is performed. This linear search improves the overall efficiency of the algorithm in finding the optimum.
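In outline, each iteration performs the following steps (a condensed sketch of the loop body that appears in the full listing below):

  X = XLo + (XHi - XLo) .* rand(1, N);             % new random trial point within the range
  DeltaX = XBest - X;                              % path from the trial point to the current best
  lambda = linsearch(X, N, 1, DeltaX, myFx);       % linear search along that path
  X = X + lambda * DeltaX;                         % candidate produced by the linear search
  F = feval(myFx, X, N);                           % accept X if F improves on the best value so far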

The function Random_Search2 has the following input parameters:

  1. N - number of variables
  2. XLo - array of lower values
  3. XHi - array of higher values
  4. Eps_Fx - tolerance for difference in successive function values
  5. MaxIter - maximum number of iterations
  6. myFx - name of the optimized function

The function generates the following output:

  1. X - array of optimized variables
  2. BestF - function value at optimum
  3. Iters - number of iterations

Here is a sample session to find the optimum for the following function:

y = 10 + (X(1) - 2)^2 + (X(2) + 5)^2

The above function resides in the file fx1.m. The search for the optimum of the two variables uses the range [-10 -10] to [10 10], a maximum of 10000 iterations, and a function tolerance of 1e-7:

>> [XBest,BestF,Iters]=Random_Search2(2, [-10 -10], [10 10], 1e-7, 10000, 'fx1')

XBest =

    1.9999 -5.0000


BestF =

    10.0000


Iters =

    40

Notice how close the located optimum is to the actual one, [2 -5].
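You can confirm the exact minimum by evaluating fx1 directly at [2 -5], which yields the minimum function value of 10:

>> fx1([2 -5], 2)

ans =

    10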

Here is the MATLAB listing:

function y=fx1(X, N)
% Objective function with minimum value 10 at X = [2 -5]
  y = 10 + (X(1) - 2)^2 + (X(2) + 5)^2;
end

function [XBest,BestF,Iters]=Random_Search2(N,XLo,XHi,Eps_Fx,MaxIter,myFx)
% Function performs multivariate optimization using the
% random search method with directional search.
%
% Input
%
% N - number of variables
% XLo - array of lower values
% XHi - array of higher values
% Eps_Fx - tolerance for difference in successive function values
% MaxIter - maximum number of iterations
% myFx - name of the optimized function
%
% Output
%
% XBest - array of optimized variables
% BestF - function value at optimum
% Iters - number of iterations
%

XBest = XLo + (XHi - XLo) .* rand(1, N);  % random starting point within the search range
BestF = feval(myFx, XBest, N);
LastBestF = 100 * BestF + 100;            % initialize the previous best value away from BestF

Iters = 0;

for i=1:MaxIter
  Iters = Iters + 1;
  X = XLo + (XHi - XLo) .* rand(1, N);    % new random trial point within the search range
  DeltaX = XBest - X;                     % search direction from the trial point toward the current best
  lambda = 1;
  lambda = linsearch(X, N, lambda, DeltaX, myFx);  % linear search along that direction
  X = X + lambda * DeltaX;
  F = feval(myFx, X, N);
  if F < BestF
    XBest = X;
    LastBestF = BestF;
    BestF = F;
  end

  if abs(LastBestF - BestF) < Eps_Fx
    break
  end
end

function y = myFxEx(N, X, DeltaX, lambda, myFx)
% Helper function: evaluates myFx at the point X + lambda * DeltaX
  X = X + lambda * DeltaX;
  y = feval(myFx, X, N);

% end

function lambda = linsearch(X, N, lambda, D, myFx)
% Perform a linear search for lambda along direction D using
% Newton iterations with numerical derivatives
  MaxIt = 100;
  Toler = 0.000001;

  iter = 0;
  bGoOn = true;

  while bGoOn
    iter = iter + 1;
    if iter > MaxIt
      lambda = 0;
      break
    end

    h = 0.01 * (1 + abs(lambda));                % finite-difference step size
    f0 = myFxEx(N, X, D, lambda, myFx);
    fp = myFxEx(N, X, D, lambda+h, myFx);
    fm = myFxEx(N, X, D, lambda-h, myFx);
    deriv1 = (fp - fm) / 2 / h;                  % central-difference first derivative w.r.t. lambda
    deriv2 = (fp - 2 * f0 + fm) / h ^ 2;         % central-difference second derivative w.r.t. lambda
    diff = deriv1 / deriv2;                      % Newton step for lambda
    lambda = lambda - diff;
    if abs(diff) < Toler
      bGoOn = false;
    end
  end

% end
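To minimize a different function, save it in its own M-file using the same two-argument signature as fx1 and pass its name to Random_Search2. For example, a hypothetical fx2.m (not part of the original listing) for a three-variable sum of squares, together with a matching call, might look like this:

function y=fx2(X, N)
  y = sum((X - [1 2 3]).^2);
end

>> [XBest,BestF,Iters]=Random_Search2(3, [-10 -10 -10], [10 10 10], 1e-7, 10000, 'fx2')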

Copyright (c) Namir Shammas. All rights reserved.