Teaching Learning Based Optimization (TLBO)

The ModifiedTLBO function below implements the two phases of TLBO for bound-constrained minimization: a teacher phase, in which each student moves towards the current best solution, and a learner phase, in which it moves relative to a randomly assigned partner. It is followed by the Rastrigin fitness function and a driver script.
function [X, FVAL, BestFVALIter, pop] = ModifiedTLBO(FITNESSFCN, lb, ub, T, NPop)
% Teaching Learning Based Optimization (TLBO)
% ModifiedTLBO attempts to solve problems of the following form:
%     min F(X)  subject to:  lb <= X <= ub
%      X
%
% [X, FVAL, BestFVALIter, pop] = ModifiedTLBO(FITNESSFCN, lb, ub, T, NPop)
%   FITNESSFCN   - function handle of the fitness function
%   lb           - lower bounds on X
%   ub           - upper bounds on X
%   T            - number of iterations
%   NPop         - size of the population (class size)
%   X            - minimum of the fitness function determined by ModifiedTLBO
%   FVAL         - value of the fitness function at the minimum (X)
%   BestFVALIter - the best fitness function value in each iteration
%   pop          - the population at the end of the specified number of iterations

% Preallocation to store the best objective function value of every iteration
% and the objective function value of every student
BestFVALIter = NaN(T,1);
obj = NaN(NPop,1);

% Determine the dimension of the problem
D = length(lb);

% Generate the initial population uniformly within the bounds
pop = repmat(lb,NPop,1) + repmat((ub-lb),NPop,1).*rand(NPop,D);

% Evaluate the objective function of the initial population
% (can be vectorized when FITNESSFCN accepts a matrix of row vectors; see below)
for p = 1:NPop
    obj(p) = FITNESSFCN(pop(p,:));
end

for gen = 1:T

    % Partner selection for all students.
    % randperm is used to speed up the partner selection. There is a remote
    % possibility that the ith student has itself as its partner; no experiment
    % in the literature reports a disadvantage of a solution being its own partner.
    Partner = randperm(NPop);

    for i = 1:NPop

        % ---------------- Beginning of the Teacher Phase for the ith student ---------------- %
        mean_stud = mean(pop);

        % Determination of the teacher (current best student)
        [~,ind] = min(obj);
        best_stud = pop(ind,:);

        % Determination of the teaching factor (1 or 2, chosen at random)
        TF = randi([1 2],1,1);

        % Generation of a new solution
        NewSol = pop(i,:) + rand(1,D).*(best_stud - TF*mean_stud);

        % Bounding of the solution
        NewSol = max(min(ub,NewSol),lb);

        % Evaluation of the objective function
        NewSolObj = FITNESSFCN(NewSol);

        % Greedy selection
        if NewSolObj < obj(i)
            pop(i,:) = NewSol;
            obj(i) = NewSolObj;
        end
        % ---------------- End of the Teacher Phase for the ith student ---------------- %

        % ---------------- Beginning of the Learner Phase for the ith student ---------------- %
        % Generation of a new solution: move towards the better of the pair
        if obj(i) < obj(Partner(i))
            NewSol = pop(i,:) + rand(1,D).*(pop(i,:) - pop(Partner(i),:));
        else
            NewSol = pop(i,:) + rand(1,D).*(pop(Partner(i),:) - pop(i,:));
        end

        % Bounding of the solution
        NewSol = max(min(ub,NewSol),lb);

        % Evaluation of the objective function
        NewSolObj = FITNESSFCN(NewSol);

        % Greedy selection
        if NewSolObj < obj(i)
            pop(i,:) = NewSol;
            obj(i) = NewSolObj;
        end
        % ---------------- End of the Learner Phase for the ith student ---------------- %

    end

    % Not part of the algorithm itself: keep track of the best solution
    % found up to the current iteration
    [BestFVALIter(gen),ind] = min(obj);
end

% Extract the best solution
X = pop(ind,:);
FVAL = BestFVALIter(gen);
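The initial-evaluation loop above is marked as vectorizable in the comments. A minimal sketch of that variant, assuming FITNESSFCN accepts an NPop-by-D matrix of row vectors and returns an NPop-by-1 column of values (the Rastrigin function below does):

% Vectorized replacement for the initial-evaluation loop,
% valid only when the fitness handle evaluates every row of a matrix at once
obj = FITNESSFCN(pop);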
Fitness function:

function F = Rastrigin(X)
% Rastrigin test function, evaluated row-wise:
% each row of X is one candidate solution
[ros, ~] = size(X);
F = zeros(ros,1);
for k = 1:ros
    x = X(k,:);
    F(k,1) = sum(x.^2 - 10.*cos(2.*pi.*x) + 10);
end
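The row loop in Rastrigin can also be written as a single vectorized sum over the second dimension. A minimal equivalent sketch (the name RastriginVec is introduced here, not part of the original post):

function F = RastriginVec(X)
% Vectorized Rastrigin: one fitness value per row of X
F = sum(X.^2 - 10.*cos(2.*pi.*X) + 10, 2);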
Main script:

rng(2,'twister')                 % fix the random seed for reproducibility
FITNESSFCN = @Rastrigin;
lb = -5.12*ones(1,2);            % lower bounds of the 2-D Rastrigin problem
ub = 5.12*ones(1,2);             % upper bounds
NPop = 50;                       % class (population) size
T = 90;                          % number of iterations
[X,FVAL,BestFVALIter] = ModifiedTLBO(FITNESSFCN,lb,ub,T,NPop);
disp(['The minimum point is ', num2str(X)])
disp(['The fitness function value at the minimum point is ', num2str(FVAL)])
D = length(lb);                  % problem dimension
% NPop evaluations of the initial population plus two evaluations per
% student per iteration (teacher phase and learner phase)
disp(['The number of fitness function evaluations is ', num2str(NPop + 2*NPop*T)])

plot(1:T,BestFVALIter,'r*')
xlabel('Iteration Number')
ylabel('Value of Fitness function')
grid on
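The solver is not tied to the Rastrigin file; an anonymous fitness handle works as well. A minimal sketch with a 2-D sphere function and illustrative settings (the handle name and the values of 50 iterations and 30 students are assumptions, not from the original post):

% Minimize sum(x.^2) on [-5, 5]^2 with the same solver
sphereFcn = @(x) sum(x.^2, 2);   % row-wise sum, so it also works on a matrix of rows
[Xs, FVALs] = ModifiedTLBO(sphereFcn, -5*ones(1,2), 5*ones(1,2), 50, 30);
disp(['Sphere minimum found at ', num2str(Xs), ' with value ', num2str(FVALs)])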