Teaching Learning Based Optimization (TLBO)

Posted on 2018-4-7 12:22:58
function [X, FVAL, BestFVALIter, pop] = ModifiedTLBO(FITNESSFCN,lb,ub,T,NPop)
% Teaching Learning Based Optimization (TLBO)
% ModifiedTLBO attempts to solve problems of the form:
%         min F(X)  subject to: lb <= X <= ub
%          X
%
%  [X,FVAL,BestFVALIter,pop] = ModifiedTLBO(FITNESSFCN,lb,ub,T,NPop)
%  FITNESSFCN   - function handle of the fitness function
%  lb           - lower bounds on X
%  ub           - upper bounds on X
%  T            - number of iterations
%  NPop         - size of the population (class size)
%  X            - minimum of the fitness function determined by ModifiedTLBO
%  FVAL         - value of the fitness function at the minimum X
%  BestFVALIter - best fitness function value in each iteration
%  pop          - population at the end of the specified number of iterations

% Preallocation to store the best objective function value of every
% iteration and the objective function value of every student
BestFVALIter = NaN(T,1);
obj = NaN(NPop,1);

% Dimension of the problem
D = length(lb);

% Generation of the initial population
pop = repmat(lb,NPop,1) + repmat((ub-lb),NPop,1).*rand(NPop,D);

% Evaluation of the objective function (could be vectorized)
for p = 1:NPop
    obj(p) = FITNESSFCN(pop(p,:));
end

for gen = 1:T

    % Partner selection for all students.
    % randperm is used to speed up the partner selection. There is a
    % remote possibility that the ith student gets itself as its partner;
    % no experiment in the literature reports a disadvantage of a
    % solution having itself as its partner.
    Partner = randperm(NPop);

    for i = 1:NPop

        % ---------------- Beginning of the Teacher Phase for the ith student ---------------- %
        mean_stud = mean(pop);

        % Determination of the teacher (current best student)
        [~,ind] = min(obj);
        best_stud = pop(ind,:);

        % Determination of the teaching factor (1 or 2, with equal probability)
        TF = randi([1 2],1,1);

        % Generation of a new solution
        NewSol = pop(i,:) + rand(1,D).*(best_stud - TF*mean_stud);

        % Bounding of the solution
        NewSol = max(min(ub,NewSol),lb);

        % Evaluation of the objective function
        NewSolObj = FITNESSFCN(NewSol);

        % Greedy selection
        if NewSolObj < obj(i)
            pop(i,:) = NewSol;
            obj(i) = NewSolObj;
        end
        % ---------------- End of the Teacher Phase for the ith student ---------------- %

        % ---------------- Beginning of the Learner Phase for the ith student ---------------- %
        % Generation of a new solution: move towards the better of the
        % ith student and its partner
        if obj(i) < obj(Partner(i))
            NewSol = pop(i,:) + rand(1,D).*(pop(i,:) - pop(Partner(i),:));
        else
            NewSol = pop(i,:) + rand(1,D).*(pop(Partner(i),:) - pop(i,:));
        end

        % Bounding of the solution
        NewSol = max(min(ub,NewSol),lb);

        % Evaluation of the objective function
        NewSolObj = FITNESSFCN(NewSol);

        % Greedy selection
        if NewSolObj < obj(i)
            pop(i,:) = NewSol;
            obj(i) = NewSolObj;
        end
        % ---------------- End of the Learner Phase for the ith student ---------------- %

    end

    % Not part of the algorithm itself: keeps track of the best solution
    % found up to the current iteration
    [BestFVALIter(gen),ind] = min(obj);
end

% Extracting the best solution
X = pop(ind,:);
FVAL = BestFVALIter(gen);
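For readers without MATLAB, the two update rules above (teacher phase: move towards the best student and away from the class mean; learner phase: move towards the better of a student/partner pair, each followed by greedy selection) can be sketched as a hypothetical Python/NumPy port. The function names `tlbo` and `rastrigin` are illustrative, not from the original post:

```python
import numpy as np

def tlbo(fitness, lb, ub, T, n_pop, seed=0):
    """Minimal TLBO sketch: one teacher phase and one learner phase per student per iteration."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    D = lb.size
    pop = lb + (ub - lb) * rng.random((n_pop, D))        # initial population
    obj = np.apply_along_axis(fitness, 1, pop)
    for _ in range(T):
        partner = rng.permutation(n_pop)                 # partner selection via a permutation
        for i in range(n_pop):
            # Teacher phase: pull towards the best student, push away from TF * mean
            mean_stud = pop.mean(axis=0)
            best_stud = pop[obj.argmin()]
            TF = rng.integers(1, 3)                      # teaching factor: 1 or 2
            new = np.clip(pop[i] + rng.random(D) * (best_stud - TF * mean_stud), lb, ub)
            f = fitness(new)
            if f < obj[i]:                               # greedy selection
                pop[i], obj[i] = new, f
            # Learner phase: move towards the better of student i and its partner
            j = partner[i]
            direction = pop[i] - pop[j] if obj[i] < obj[j] else pop[j] - pop[i]
            new = np.clip(pop[i] + rng.random(D) * direction, lb, ub)
            f = fitness(new)
            if f < obj[i]:
                pop[i], obj[i] = new, f
    k = obj.argmin()
    return pop[k], obj[k]

def rastrigin(x):
    return float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10))

x, fval = tlbo(rastrigin, [-5.12] * 2, [5.12] * 2, T=90, n_pop=50)
print(x, fval)
```

With the same budget as the MATLAB run below (NPop = 50, T = 90), this sketch typically drives the 2-D Rastrigin value close to the global minimum of 0.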
Fitness function:
function F = Rastrigin(X)
% Rastrigin function, evaluated row-wise: each row of X is one candidate solution
[rows, ~] = size(X);
F = zeros(rows, 1);
for k = 1:rows
    x = X(k,:);
    F(k,1) = sum(x.^2 - 10.*cos(2.*pi.*x) + 10);
end
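The Rastrigin function has its global minimum F = 0 at the origin and a grid of local minima elsewhere, which is what makes it a standard multimodal benchmark. A quick check of the formula (hypothetical Python/NumPy port of the function above):

```python
import numpy as np

def rastrigin(x):
    # F(x) = sum(x_i^2 - 10*cos(2*pi*x_i) + 10); global minimum 0 at x = 0
    x = np.asarray(x, dtype=float)
    return float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10))

print(rastrigin([0.0, 0.0]))  # 0.0 at the global minimum
print(rastrigin([1.0, 1.0]))  # 2.0, close to a local minimum near (1, 1)
```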
Main script:
rng(2,'twister')   % fixed seed for reproducibility

FITNESSFCN = @Rastrigin;

lb = -5.12*ones(1,2);
ub = 5.12*ones(1,2);

NPop = 50;
T = 90;

[X,FVAL,BestFVALIter] = ModifiedTLBO(FITNESSFCN,lb,ub,T,NPop);

disp(['The minimum point is ', num2str(X)])
disp(['The fitness function value at the minimum point is ', num2str(FVAL)])
disp(['The number of fitness function evaluations is ', num2str(NPop + 2*NPop*T)])

plot(1:T,BestFVALIter,'r*')
xlabel('Iteration Number')
ylabel('Value of Fitness Function')
grid on
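The evaluation count printed at the end follows from the structure of the loop: NPop evaluations for the initial population, then two per student per iteration (one in the teacher phase, one in the learner phase). For the settings above:

```python
NPop, T = 50, 90
# NPop initial evaluations + 2 evaluations (teacher + learner) per student per iteration
evals = NPop + 2 * NPop * T
print(evals)  # 9050
```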




Algorithm QQ: 3283892722
Swarm intelligence algorithms: http://halcom.cn/forum.php?mod=forumdisplay&fid=73