I. A brief introduction to the knapsack problem

1 The knapsack problem
The knapsack problem is an NP-complete problem in combinatorial optimization. It is stated as follows: given a set of items, each with its own weight and value, which items should be chosen so that the total value is as high as possible without exceeding a given weight limit?

2 The 0-1 knapsack problem
Each item i is either packed into the knapsack or left out. We have n items; item i has weight wi and value vi, and we assume all weights and values are non-negative. The maximum weight the knapsack can carry is W. If each item may be taken zero times or once, the problem is called the 0-1 knapsack problem.

Let V(i,j) denote the maximum value that can be packed into a knapsack of capacity j using only the first i items. The dynamic programming recurrence is:

V(i,0) = V(0,j) = 0
V(i,j) = V(i-1,j),                            if j < wi
V(i,j) = max{ V(i-1,j), V(i-1,j-wi) + vi },   if j >= wi

The two cases read as follows: (1) if item i is packed into the knapsack, the value of the items in the knapsack equals the best value of the first i-1 items in a knapsack of capacity j-wi, plus the value vi of item i; (2) if item i is not packed, the value equals the best value of the first i-1 items in a knapsack of capacity j. Take the larger of the two.

The table is filled stage by stage. Step 1: using only the first item, determine the maximum value for every capacity. Step 2: using only the first two items, determine the maximum value for every capacity. ... Step n: V(n,C) is the maximum value attainable with all n items and a knapsack of capacity C.

To recover which items were packed, trace back from V(n,C). If V(n,C) > V(n-1,C), item n was packed and the first n-1 items fill a knapsack of capacity C-wn; otherwise item n was not packed and the first n-1 items fill a knapsack of capacity C. Continue until it is decided whether item 1 was packed. In general: when V(i,j) = V(i-1,j), xi = 0; when V(i,j) > V(i-1,j), xi = 1 and j = j - wi.
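To make the recurrence and the traceback concrete, here is a minimal MATLAB sketch (written for this article, not taken from the reference code; the function name knapsack01 and the variable names are my own):

function [best, xsel] = knapsack01(w, p, cap)
% Dynamic programming for the 0-1 knapsack problem.
% w(i): weight of item i, p(i): value of item i, cap: knapsack capacity.
n = length(w);
V = zeros(n+1, cap+1);       % V(i+1,j+1) holds V(i,j): best value, first i items, capacity j
for i = 1:n
    for j = 0:cap
        V(i+1, j+1) = V(i, j+1);                        % case: item i not packed
        if j >= w(i)                                    % case: item i packed, if it fits
            V(i+1, j+1) = max(V(i+1, j+1), V(i, j-w(i)+1) + p(i));
        end
    end
end
best = V(n+1, cap+1);
xsel = zeros(1, n);          % traceback: recover which items were packed
j = cap;
for i = n:-1:1
    if V(i+1, j+1) > V(i, j+1)   % packing item i improved the value
        xsel(i) = 1;
        j = j - w(i);
    end
end
end

Calling [best, xsel] = knapsack01(C, W, 300) with the item data of Section III (where C holds the weights and W the values, following that code's naming) gives the exact optimum against which the particle swarm's result can be compared.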

II. Introduction to the discrete particle swarm optimization algorithm

1 What is discrete particle swarm optimization?
In the basic Particle Swarm Optimization (PSO) algorithm, the initial particle positions and the velocity updates are continuous-valued; the Discrete Particle Swarm Optimization (DPSO) algorithm updates both position and velocity over discrete values. Typically, after a particle's position is updated, the new position is mapped onto the discrete points. For example, suppose the discrete points are the integers 0 through 9 and each updated position is a random number in the interval (0,1): a value in (0,0.1) is mapped to 0, a value in (0.1,0.2) is mapped to 1, ..., and a value in (0.9,1) is mapped to 9. The initial position values are handled in the same way.
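As a sketch of the mapping just described (my own illustration, assuming the ten-interval example above), a continuous position in (0,1) can be bucketed into an integer in 0..9:

pos = rand(1, 5);                     % continuous positions in (0,1)
discretePos = min(floor(pos*10), 9)   % (0,0.1)->0, (0.1,0.2)->1, ..., (0.9,1)->9
                                      % min() only guards the boundary pos = 1,
                                      % which rand never actually returns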

2 What is discrete binary particle swarm optimization?
The Binary Particle Swarm Optimization (BPSO) algorithm was first proposed by J. Kennedy and R. C. Eberhart in 1997. Whereas PSO mainly optimizes continuous real-valued problems, BPSO targets optimization over discrete, constrained spaces. BPSO builds on the discrete particle swarm optimization algorithm: the position vector is composed of 0 and 1 values, and each velocity component is mapped through a sigmoid function to the probability that the corresponding position bit takes the value 1 (as in the code of Section III). BPSO has strong global search ability but cannot be guaranteed to converge to the global optimum, and as the randomness of the iterative search grows, it lacks local search ability in the later stages.
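A minimal sketch of this core update, the same rule the Section III code applies row by row:

s = 1 ./ (1 + exp(-v));           % sigmoid maps each velocity to a probability in (0,1)
x = double(rand(size(v)) < s);    % each position bit becomes 1 with probability s, else 0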

3 Steps of discrete binary particle swarm optimization

4 Experimental steps (reference: an analysis of discrete binary particle swarm optimization)
Step 1: determine the benchmark functions for the tests.
Step 2: track, over the iterations, the change in a particle's average velocity; the change in the average probability that a particle's bit takes the value 1; the change in the probability that a particle's bit flips; and the change in a particle's distance to the optimal particle (a sketch of these measurements appears after this list).
Step 3: propose an improved particle swarm optimization algorithm in which the improvement is the probability function, and repeat the experiments of Step 2.
Step 4: propose an improved binary PSO algorithm based on a genetic algorithm; run a significance analysis on the minimum mean fitness value and the standard deviation.
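The quantities listed in Step 2 are not computed in the excerpted code; a hypothetical sketch of how they could be measured each iteration, reusing the v, x, and g of Section III (my own helper code, using bsxfun for compatibility with MATLAB 2014a):

s = 1 ./ (1 + exp(-v));                               % per-bit probability of taking 1
avgVel   = mean(abs(v(:)));                           % average velocity magnitude
avgProb1 = mean(s(:));                                % average probability of a bit being 1
flipProb = mean(mean(s .* (1 - x) + (1 - s) .* x));   % probability that a bit changes value
distBest = mean(sum(abs(bsxfun(@minus, x, g)), 2));   % mean Hamming distance to the global best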

III. Part of the source code

%%%%%%%%%% Discrete particle swarm optimization for the 0-1 knapsack problem %%%%%%%%%%
%%%%%%%%%% Initialization %%%%%%%%%%
clear all;    % clear all variables
close all;    % close all figures
clc;          % clear the command window
N = 100;      % number of particles in the swarm
D = 10;       % particle dimension (number of items)
T = 200;      % maximum number of iterations
c1 = 1.5;     % learning factor 1
c2 = 1.5;     % learning factor 2
Wmax = 0.8;   % maximum inertia weight
Wmin = 0.4;   % minimum inertia weight
Vmax = 10;    % maximum velocity
Vmin = -10;   % minimum velocity
V = 300;      % knapsack capacity
C = [95 75 23 73 50 22 6 57 89 98];    % item weights
W = [89 59 19 43 100 72 44 16 7 64];   % item values
afa = 2;      % penalty-function coefficient
%%%%%%%%% Initialize the population (positions and velocities) %%%%%%%%%%
x = round(rand(N,D));                  % random binary positions
v = rand(N,D) * (Vmax-Vmin) + Vmin;    % random velocities in [Vmin,Vmax]
%%%%%%%%%%% Initialize the individual best positions and values %%%%%%%%%%%%
p = x;
pbest = ones(N,1);
for i = 1:N
    pbest(i) = func4(x(i,:),C,W,V,afa);
end
%%%%%%%%%%%% Initialize the global best position and value %%%%%%%%%%%
g = ones(1,D);
gbest = eps;
for i = 1:N
    if (pbest(i) > gbest)
        g = p(i,:);
        gbest = pbest(i);
    end
end
gb = ones(1,T);
%%%%%%% Iterate until the accuracy or the iteration limit is reached %%%%%%%
for i = 1:T
    for j = 1:N
        %%%%%% Update the individual best position and value %%%%%%
        if func4(x(j,:),C,W,V,afa) > pbest(j)
            p(j,:) = x(j,:);
            pbest(j) = func4(x(j,:),C,W,V,afa);
        end
        %%%%%%%%%% Update the global best position and value %%%%%%%%%
        if pbest(j) > gbest
            g = p(j,:);
            gbest = pbest(j);
        end
        %%%%%%%%%% Dynamic inertia weight and velocity update %%%%%%%%%%
        % (the next two lines are reconstructed with the standard PSO
        % update formula; they were lost from the published excerpt)
        w = Wmax - (Wmax - Wmin) * i / T;
        v(j,:) = w*v(j,:) + c1*rand*(p(j,:)-x(j,:)) + c2*rand*(g-x(j,:));
        %%%%%%%%%% Re-randomize velocity components outside [Vmin,Vmax] %%%%%%%%%%
        for ii = 1:D
            if (v(j,ii) > Vmax) || (v(j,ii) < Vmin)
                v(j,ii) = rand * (Vmax-Vmin) + Vmin;
            end
        end
        %%%%%%%%%% Sigmoid mapping and binary position update %%%%%%%%%%
        vx(j,:) = 1 ./ (1 + exp(-v(j,:)));
        for jj = 1:D
            if vx(j,jj) > rand
                x(j,jj) = 1;
            else
                x(j,jj) = 0;
            end
        end
    end
    %%%%%%%%%%%%% Record the global best value of each generation %%%%%%%%%%%%%
    gb(i) = gbest;
end
g             % best individual (item selection vector)
figure
plot(gb)
xlabel('Number of iterations');
ylabel('Fitness value');
title('Fitness evolution curve')
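The fitness function func4 is referenced above but not included in the excerpt. A minimal reconstruction consistent with its call signature, using the penalty-function method that the afa coefficient suggests (this is an assumption, not the original author's code):

function fit = func4(x, C, W, V, afa)
% Hypothetical penalty-based knapsack fitness (not the original func4):
% reward the total value of selected items, penalize capacity violations.
fit = sum(W .* x);               % total value of the selected items
over = sum(C .* x) - V;          % amount by which the load exceeds capacity
if over > 0
    fit = fit - afa * over;      % linear penalty for infeasible solutions
end
end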

IV. Operation result

V. MATLAB version and references

1 MATLAB version: 2014a

2 References
[1] BAO Ziyang, YU Jizhou, YANG Shan. Intelligent Optimization Algorithms and MATLAB Examples (2nd Edition) [M]. Publishing House of Electronics Industry, 2016.
[2] ZHANG Yan, WU Shuigen. MATLAB Optimization Algorithm Source Code [M]. Tsinghua University Press, 2017.