
Matlab Optimization for Dummies (Matlab R2012a)

Find the parameters that minimize the least-squares difference between observed values and a user-defined equation.

Author: Mauricio Bedoya (javierma36@hotmail.com)
Version: 01/2013

A good starting point, to avoid problems in Matlab, is to identify which algorithm to use. For this purpose, search the help documentation for "Choosing a Solver". There you will find the Optimization Decision Table.

fminsearch

First, read the documentation of this function. In the command window type:

    >> doc fminsearch
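Before the full examples, here is a minimal sketch of how fminsearch is called. The quadratic function and starting point are my own illustration, not part of the examples below: fminsearch takes a handle to a function of one (possibly vector) variable and an initial guess, and returns the minimizer.

    % Minimal sketch: minimize (x-3)^2 starting from x0 = 0.
    % fminsearch expects a handle to a function of ONE variable
    % (which may be a vector) and an initial guess.
    xmin = fminsearch(@(x) (x - 3)^2, 0)   % returns approximately 3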
Now, let's do several examples.

Example 1

f(x) = a * x^b
x = [1 2 3 4 5];
y = [2 8 18 32 50];   % y = f(x)

Find the values of "a" and "b" that minimize the difference. Here we have 2 parameters, but fminsearch minimizes a function of a single input variable. The trick is to pack the parameters into one vector. According to the decision table, for a least-squares objective with no constraints the lsqcurvefit algorithm is recommended; however, I will proceed with fminsearch.

step 1 "f(x) function"

In an M-file (File/New Script):

    function my_f = f_x(parameters)
    global x
    a = parameters(1);
    b = parameters(2);
    my_f = a * (x.^b);

step 2 "objective function"

Minimize the root of the squared differences between y and f(x). In another M-file:

    function objective = f_objective(parameters)
    global y
    objective = sqrt(sum((y - f_x(parameters)).^2));

step 3 "implement fminsearch"

In the command window:

    global x y
    x = [1 2 3 4 5];
    y = [2 8 18 32 50];
    parameters0 = [2,1];
    options = optimset('LargeScale', 'off', 'MaxIter', 2500, ...
        'MaxFunEvals', 3500, 'Display', 'iter', 'TolFun', 1e-40, 'TolX', 1e-40);
    [x,feval] = fminsearch(@(parameters) f_objective(parameters), parameters0, options)
    a = x(1);
    b = x(2);

Note that the output x overwrites the global data vector x, so if you run the model a second time you could get an error. To avoid this, implement step 3 in a fresh, empty script (M-file), or use a different name for the output.
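A variant that avoids the global variables entirely (my own restructuring of the same example, not part of the original): capture x and y inside anonymous functions, so the fminsearch output cannot clobber the data between runs.

    % Same fit without globals: xdata and ydata are captured by the
    % anonymous functions.
    xdata = [1 2 3 4 5];
    ydata = [2 8 18 32 50];
    f   = @(p, xd) p(1) * xd.^p(2);                    % f(x) = a*x^b
    obj = @(p) sqrt(sum((ydata - f(p, xdata)).^2));    % residual norm
    p = fminsearch(obj, [2 1]);
    a = p(1); b = p(2);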

Example 2
f1(x) = a * (x^b)

f2(x) = b * exp(a) * x
x = [1 2 3 4 5];
y = [2 8 18 32 50];   % y = f1(x) or y = f2(x)

Find the values of "a" and "b" that minimize the difference. In this case we have 2 equations. The first approach is to solve them independently, as in Example 1. The approach that I will develop instead is to add an extra argument to the objective function that selects which f(x) to solve (user defined).

step 1 "f(x) function"

Write both functions in separate M-files.

    function my_f1 = f1_x(params)
    global x
    a = params(1);
    b = params(2);
    my_f1 = a * (x.^b);

In a different M-file write:

    function my_f2 = f2_x(params)
    global x
    a = params(1);
    b = params(2);
    my_f2 = (b * exp(a)) * x;

step 2 "objective function"

Minimize the root of the squared differences between y and f(x). Here the objective function takes 2 arguments and lets the user define which function to solve.

    function objective = f_objective(params, model)
    global y
    switch model
        case 'first'
            objective = sqrt(sum((y - f1_x(params)).^2));
        case 'second'
            objective = sqrt(sum((y - f2_x(params)).^2));
    end

step 3 "implement fminsearch" In a different m.file write. global x y x=[1 2 3 4 5]; y = [2 8 18 32 50]; parameters0 = [2,1]; model = 'second'; options = optimset('LargeScale', 'off', 'MaxIter', 2500, ... 'MaxFunEvals', 3500, 'Display', 'iter', 'TolFun', 1e-40, 'TolX', 1e-40); [x,feval] = fminsearch(@(params) f_objective(params,model),parameters0, options) a = x(1); b = x(2); Notes: The expression @(params) define the values that you want to search in the minimization process. As you can see, all other parameters are defined before fminsearch is called. I recommend to implement everything in m.files. Those m.files that you created, must be defined in the current folder. Other examples can be found in the documentation of the function (type in the command window >>doc fminsearch).

lsqcurvefit

First, read the documentation of this function. In the command window type:

    >> doc lsqcurvefit
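The calling pattern differs from fminsearch: instead of a scalar objective, lsqcurvefit takes the model function itself plus the data, and builds the least-squares objective internally. A minimal sketch (the linear model and data here are my own illustration):

    % Minimal sketch: fit y = m*x + c with lsqcurvefit.
    % The model handle takes (params, xdata) and returns predictions;
    % lsqcurvefit minimizes sum((model(params,xdata) - ydata).^2) itself.
    xdata = 1:5;
    ydata = 2*(1:5) + 1;
    p = lsqcurvefit(@(p, xd) p(1)*xd + p(2), [1 0], xdata, ydata)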

Now, let's do several examples. I'm going to continue with the example provided in the Matlab documentation.

Example 1

x1 = [0 1 2 3 4 5];
f(x1) = [0 6 20 42 72 110];
f(x1) = a*x1^2 + b*x1 + x1^c;

What are the values of "a", "b" and "c" that minimize the least-squares difference?
The first thing we have to do is choose the right algorithm. For this purpose, go to the Optimization Decision Table in the Matlab help documentation. Because the objective function minimizes the least-squares difference, a good starting point is lsqcurvefit.

step 1 "f(x) function"

Write the function in an M-file:

    function my_f = f_1(params, x1)
    a = params(1);
    b = params(2);
    c = params(3);
    my_f = a * x1.^2 + b * x1 + x1.^c;

step 2 "implement lsqcurvefit"

Write in an M-file (no globals are needed here, because f_1 receives the data as an argument):

    x1 = [0 1 2 3 4 5];
    Y = [0 6 20 42 72 110];
    params0 = [2 1 1.5];
    [x,resnorm,residual,exitflag] = lsqcurvefit(@f_1, params0, x1, Y);
    a = x(1);
    b = x(2);
    c = x(3);
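Once the solver returns, resnorm is the sum of squared residuals, so it gives a quick fit check. The comparison below is my own addition, using the outputs of the call above.

    % Quick sanity check of the fit: resnorm should be near zero
    % for a model that reproduces the data exactly.
    fitted = f_1(x, x1);          % note: x now holds the fitted params
    disp([Y; fitted])             % observed vs. fitted values
    fprintf('sum of squared residuals: %g\n', resnorm);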

Example 2

x1 = [0 1 2 3 4 5];
x2 = [6 7 8 9 10 11];
f(x1,x2) = [36 66 108 162 228 306];
f(x1,x2) = a*x1^2 + b*x1*x2 + x2^c;
lsqcurvefit applies here too. However, the function's inputs are xdata and ydata, and here I have two x vectors (x1 and x2). To solve this, organize the x data as a matrix with one row per variable.

step 1 "f(x) function"

Write the function in an M-file:

    function my_f = f_1(params, x)
    a = params(1);
    b = params(2);
    c = params(3);
    my_f = a * x(1,:).^2 + b * x(1,:).*x(2,:) + x(2,:).^c;

step 2 "implement lsqcurvefit" write in an m.file global x1 x2 Y x1 = [0 1 2 3 4 5]; x2 = [6 7 8 9 10 11]; x_data = [x1; x2]; Y = [ 36 66 108 162 228 306]; params0=[2 1 1.5]; [x,resnorm,residual,exitflag] = lsqcurvefit(@f_1,params0,x_data,Y)

fmincon

First, read the documentation of this function. In the command window type:

    >> doc fmincon
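fmincon has a long signature because every constraint type gets its own slot. The toy problem below is my own illustration of the argument order, not part of the example that follows.

    % Minimal sketch: minimize x^2 subject to x >= 1, i.e. -x <= -1.
    % Argument order: fun, x0, A, b (A*x <= b), then optionally
    % Aeq, beq (Aeq*x = beq), lb, ub (bounds), nonlcon, options.
    xmin = fmincon(@(x) x^2, 3, -1, -1)   % returns approximately 1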

Example 1

Max 5 - (x(1) - 2)^2 - 2*(x(2) - 1)^2
constraint: x(1) + 4*x(2) = 3


First, identify the algorithm to use. The objective function is smooth and nonlinear and the constraint is linear. If you go to the Optimization Decision Table, the recommended algorithm is fmincon.

step 1 "objective function"

Write in an M-file:

    function my_f = f_1(params)
    x1 = params(1);
    x2 = params(2);
    my_f = -(5 - (x1 - 2)^2 - 2 * (x2 - 1)^2);   % minus because Matlab always minimizes

step 2 "implement fmincon"

    A = [];
    b = [];
    Ae = [1 4];
    be = 3;
    lb = [];
    ub = [];
    nonlcon = [];
    params0 = [1 1];
    options = optimset('Display','iter');
    [params,fval,exitflag,output,lambda,grad,hessian] = fmincon(@(params) f_1(params), ...
        params0, A, b, Ae, be, lb, ub, nonlcon, options);
    x1 = params(1);
    x2 = params(2);

Example 2
f(x) = -(a - (x - 2).^b)
x = [4 5 6 7 8];
Y = [0 1 2 3 4];

The function is smooth and nonlinear and we want to minimize the least-squares error. In this case we can use fminunc, fminsearch, lsqcurvefit, or lsqnonlin. I will implement lsqcurvefit again.

step 1 "f(x) function"

    function my_f = f_1(params, x)
    a = params(1);
    b = params(2);
    my_f = -(a - (x - 2).^b);

step 2 "implement lsqcurvefit"

    X = [4 5 6 7 8];
    Y = [0 1 2 3 4];
    params0 = [0 0];
    lb = [];
    ub = [];
    options = optimset('Display','iter');
    [params,resnorm,residual,exitflag] = lsqcurvefit(@f_1, params0, X, Y, lb, ub, options);
    a = params(1);
    b = params(2);

step 3 "implement fmincon"

Let's suppose that we have the constraints:

    a*X + b >= 2
    (a + b)*X = 3
    0 <= a <= 3
    0 <= b <= 4

In this case it's appropriate to use fmincon. First, we need to define the objective function.

step 3.1 "objective function"

    function objective = f_objective(params)
    global X Y
    objective = sqrt(sum((f_1(params, X) - Y).^2));   % minimize the least-squares error

step 3.2 "organize constraints to standard form"

fmincon expects linear inequalities as A*params <= b and equalities as Aeq*params = beq, where params = [a; b]. The constraint a*X + b >= 2 must hold at every data point, so each X(i) contributes one inequality row; likewise (a + b)*X = 3 contributes one equality row per data point:

    -X(i)*a - b <= -2        for every i
    X(i)*a + X(i)*b = 3      for every i
    0 <= a <= 3
    0 <= b <= 4

Note that with several distinct values in X the equality rows cannot all hold exactly, so check exitflag and the equality residual in the output.

step 3.3 "implement fmincon"

    global X Y
    X = [4 5 6 7 8];
    Y = [0 1 2 3 4];
    params0 = [1 1];
    A = [-X' -ones(numel(X),1)];
    b = -2 * ones(numel(X),1);
    Aeq = [X' X'];
    beq = 3 * ones(numel(X),1);
    lb = [0,0];
    ub = [3,4];
    nonlcon = [];
    options = optimset('LargeScale', 'off', 'MaxIter', 2500, ...
        'MaxFunEvals', 3500, 'Display', 'iter', 'TolFun', 1e-40, 'TolX', 1e-40);
    [params,fval,exitflag,output,lambda,grad,hessian] = fmincon(@(params) f_objective(params), ...
        params0, A, b, Aeq, beq, lb, ub, nonlcon, options);
    a_ = params(1);
    b_ = params(2);
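After the call returns, it is worth confirming how well the solution respects the constraints, since conflicting equality rows can leave them only approximately satisfied. This check is my own addition, using the matrices defined in step 3.3.

    % Verify the constraints at the returned solution.
    ineq_violation = max(A * params' - b)       % should be <= 0
    eq_residual    = Aeq * params' - beq        % should be near zero
    within_bounds  = all(params >= lb & params <= ub)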
