Swarm Intelligence

Lecture notes on nature-inspired learning algorithms
Date
Author @asyncze

Contents

Swarm intelligence basics #

Swarm intelligence refers to any system in which collective behaviour emerges from unsophisticated agents interacting locally with their environment, producing coherent global patterns. A swarm provides a basis for exploring collective or distributed problem solving without centralised control or a global model.

Social colonies are flexible enough to respond to internal perturbations and external challenges, robust enough to complete tasks even if some individuals fail, decentralised (there is no central control in the colony), and self-organised (paths to solutions are emergent).

Self-organisation is the set of dynamic mechanisms by which structures appear at the global level from interactions among lower-level components (such as ants).

Stigmergy and artificial pheromone #

Stigmergy is a mechanism that mediates animal-to-animal interactions: a form of indirect communication mediated by modifications of the environment. A sign observed by an individual triggers a specific response or action, which reinforces or modifies the signal and so influences the actions of others (positive or negative feedback).

There are two forms of stigmergy: sematectonic stigmergy, where communication happens through changes to the physical characteristics of the environment (such as nest building), and sign-based stigmergy, where communication happens through a signalling mechanism (such as pheromone trails).

An artificial pheromone implements an artificial stigmergy: indirect communication mediated by numeric modifications of environment states that are locally accessible to agents. The stigmergic variable encapsulates the information used by artificial ants to communicate indirectly.

Ant colony optimization #

Ants appeared on Earth about 100 million years ago and have an estimated population of 10^16 individuals. They are social, live in colonies of 30 up to millions of individuals, and show collective behaviour (foraging, division of labour, cooperative support, construction). Ants are stimulus-response agents: an individual performs simple, basic actions based on local information, and these simple actions appear to have a large random component.

Problems that involve finding good paths through graphs, such as network routing or the travelling salesman problem, are examples of a good fit for ant colony optimization.

The binary bridge experiment is a simple experiment to study the foraging behaviour of ants: ants deposit a chemical pheromone while walking and are more likely to follow the path with the higher pheromone concentration. The shortest path is selected because ants taking it return to the nest faster, so the pheromone on the shortest path is reinforced (positive feedback).

probability to choose path A: P_A(t + 1) = ((c + n_A(t)) ** a) / (((c + n_A(t)) ** a) + ((c + n_B(t)) ** a)) = 1 - P_B(t + 1)

n_A: number of ants on path A
n_B: number of ants on path B
c: degree of attraction of an unexplored branch
a: bias towards using pheromone deposits in the decision process

The artificial ant decision process algorithm:

  1. generate a random number r in [0, 1]
  2. choose values for c and a
  3. for each path A, calculate P_A(t + 1)
  4. if r <= P_A, follow path A; otherwise follow path B
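
Below is a minimal sketch of this decision process in Python for two branches of equal length, which shows the positive-feedback effect: one branch ends up dominating. The parameter values c = 20 and a = 2 and the function names are illustrative assumptions, not taken from the notes above.

import random

def p_choose_a(n_a, n_b, c=20, a=2):
    """Probability that the next ant chooses path A."""
    return (c + n_a) ** a / ((c + n_a) ** a + (c + n_b) ** a)

def simulate_bridge(n_ants=1000, c=20, a=2, seed=1):
    """Send ants over a two-branch bridge and count choices per branch."""
    random.seed(seed)
    n_a = n_b = 0
    for _ in range(n_ants):
        r = random.random()                  # step 1: random number in [0, 1]
        if r <= p_choose_a(n_a, n_b, c, a):  # steps 3-4: follow path A if r <= P_A
            n_a += 1
        else:
            n_b += 1
    return n_a, n_b

print(simulate_bridge())  # one branch typically ends up with most of the ants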

Simple Ant Colony Optimization (SACO) #

The simple ant colony optimisation method operates on a graph, where (i, j) is the edge from node i to node j, pi_ij(t) is the pheromone concentration associated with that edge at iteration t (pi_ij(0) is assigned a small initial random value), and L_k(t) is the length of the path from source to destination constructed by ant k.

Stopping criteria include reaching a maximum number of iterations, finding an acceptable solution, or most ants following the same path. At each step, ant k at node i moves to node j with probability:

p_k_ij(t) = { pi_ij(t) ** a / sum[pi_iu(t) ** a for u in N_k_i(t)] if j in N_k_i(t), 0 otherwise }

N_k_i(t): set of feasible nodes connected to node i with respect to ant k
a: constant greater than zero
edge: (i, j)

Pheromone evaporates on each edge at rate rho:

pi_ij(t) = (1 - rho) * pi_ij(t)

rho: evaporation rate (a constant between 0 and 1, exclusive)

After evaporation, the pheromone deposited by each ant is added to the edges of its path:

pi_ij(t + 1) = pi_ij(t) + sum[delta(pi_ij(t)) over ants k]

delta(pi_ij(t)) = { Q / f(x_k(t)) if edge in path x_k(t), 0 otherwise }
x_k(t): solution of ant k
f(x_k(t)): quality of solution, e.g. the path length L_k(t)
Q: constant greater than 0
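
The following is a minimal SACO sketch in Python for finding a short path in a small weighted graph. The parameter values (n_ants, n_iter, a, rho, Q) and the example graph are illustrative assumptions, not taken from the notes above.

import random

def saco(graph, source, dest, n_ants=10, n_iter=50, a=1.0, rho=0.1, Q=1.0, seed=0):
    """Simple ant colony optimisation for a short path in a weighted graph.

    graph (dict)            : maps node -> {neighbour: edge length}
    """
    random.seed(seed)
    # pi_ij(0): small random initial pheromone on every edge
    pheromone = {(i, j): random.uniform(0.1, 1.0) for i in graph for j in graph[i]}
    best_path, best_length = None, float("inf")

    for _ in range(n_iter):
        paths = []
        for _ in range(n_ants):
            path, node = [source], source
            # construct a path, only moving to feasible (unvisited) neighbours
            while node != dest:
                feasible = [j for j in graph[node] if j not in path]
                if not feasible:
                    break  # dead end, discard this ant
                weights = [pheromone[(node, j)] ** a for j in feasible]
                node = random.choices(feasible, weights=weights)[0]
                path.append(node)
            if node == dest:
                length = sum(graph[i][j] for i, j in zip(path, path[1:]))
                paths.append((path, length))
                if length < best_length:
                    best_path, best_length = path, length

        # evaporation: pi_ij(t) = (1 - rho) * pi_ij(t)
        for edge in pheromone:
            pheromone[edge] *= (1 - rho)
        # deposit: delta = Q / f(x_k) on every edge of ant k's path
        for path, length in paths:
            for edge in zip(path, path[1:]):
                pheromone[edge] += Q / length

    return best_path, best_length

# hypothetical example graph (edge lengths), shortest path A -> B -> C -> D has length 4
graph = {"A": {"B": 2, "C": 5}, "B": {"C": 1, "D": 4}, "C": {"D": 1}, "D": {}}
print(saco(graph, "A", "D"))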

Particle Swarm Optimisation (PSO) #

Particle swarm optimisation is a numerical, population-based technique for discovering optimal regions of a high-dimensional search space using the collective behaviour of individuals.

Particle swarms extend the ant-swarm behaviour model to an N-dimensional psychosocial space without the constraints of physical laws. The swarm consists of a set of particles, and each particle represents a potential solution to some N-dimensional optimisation problem. A particle's velocity is updated as its previous velocity plus a cognitive component (weighted by c_1) and a social component (weighted by c_2).

Stopping criteria include reaching a maximum number of iterations, finding an acceptable solution, observing no improvement over a number of iterations, or the swarm radius being close to zero.

The global best particle swarm method uses a star topology: the whole swarm is each particle's neighbourhood, and particles are updated based on social information from all particles in the swarm (the best position found by the swarm).

The local best particle swarm method uses a ring topology: a small number of particles form each particle's neighbourhood, and particles are updated based on social information exchanged within that neighbourhood (the best position found within the neighbourhood).

Neighbourhoods can be selected in different ways, for example based on particle indices (which is cheaper and lets information spread independently of where particles sit in the search space) or based on spatial distance; an index-based ring neighbourhood is sketched below.
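
A minimal sketch of selecting a local best with an index-based ring neighbourhood, using the same particle dictionaries as the implementation further below. The function name and neighbourhood size k are illustrative assumptions, and it assumes error_best has already been evaluated for each particle.

def local_best_position(swarm, j, k=1):
    """Best known position within the ring neighbourhood of particle j.

    Neighbours are the k particles on either side of j by index (ring topology),
    so good positions spread more slowly than with the global best (star topology).
    """
    n = len(swarm)
    neighbours = [swarm[(j + offset) % n] for offset in range(-k, k + 1)]
    best = min(neighbours, key=lambda p: p["error_best"])
    return best["position_best"]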

Inertia weight #

The inertia weight is a mechanism to control the exploration and exploitation abilities of the swarm without the need to clamp the velocity.

v_ij(t + 1) = w * v_ij(t) + (c_1 * r_1_j(t)) * (y_ij(t) - x_ij(t)) + (c_2 * r_2_j(t)) * (Y_j(t) - x_ij(t))

w: inertia weight (w > 0)
c_1: cognitive bias
c_2: social bias
y_ij: personal best position of particle i in dimension j
Y_j: global (or neighbourhood) best position in dimension j
r_1_j, r_2_j: random numbers in [0, 1]

Other methods adjust the weight during the run, for example by decreasing it linearly or nonlinearly from a large initial value to a small final value, or by sampling it randomly at each iteration; a linear schedule is sketched below.
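
A minimal sketch of a linearly decreasing inertia weight; the bounds w_max = 0.9 and w_min = 0.4 are commonly used values, not taken from the notes above.

def linear_inertia(t, max_iterations, w_max=0.9, w_min=0.4):
    """Inertia weight decreasing linearly from w_max to w_min over the run,
    favouring exploration early and exploitation later."""
    return w_max - (w_max - w_min) * t / max_iterations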

Implementation in Python #

import random

random.seed(42)

MAX_ITERATIONS = 50 # feel free to change this to stop at convergence

weight = 0.5 # constant inertia weight
c_1 = 1 # cognitive constant
c_2 = 2 # social constant

def generate_swarm(x_0, n_par):
    """Generate swarm of particles

    x_0 (list)              : initial position
    n_par (int)             : number of particles
    """
    dimensions = len(x_0)
    swarm = []
    # generate particles
    for _ in range(0, n_par):
        position = []
        # best position
        position_best = -1
        # particle velocity
        velocity = []
        # particle error (cost)
        error = -1
        # best error (cost)
        error_best = error

        # position and velocity
        for i in range(0, dimensions):
            position.append(x_0[i])
            velocity.append(random.uniform(-1, 1))

        # append particle
        swarm.append({
            "dimensions": dimensions,
            "position": position,
            "position_best": position_best,
            "velocity": velocity,
            "error": error,
            "error_best": error_best
        })

    return swarm

def update_velocity(velocity, position, position_best, global_pos):
    """Update particle velocity

    velocity (float)        : particle velocity
    position (float)        : particle position
    position_best (float)   : best position
    global_pos (float)      : global best position
    """
    # random bias
    r_1 = random.random()
    r_2 = random.random()

    # update velocity
    velocity_cognitive = c_1 * r_1 * (position_best - position)
    velocity_social = c_2 * r_2 * (global_pos - position)
    velocity = weight * velocity + velocity_cognitive + velocity_social

    return velocity

def update_position(position, velocity):
    """Update particle position

    position (float)        : particle position
    velocity (float)        : particle velocity
    """
    position = position + velocity
    return position

def iterate_swarm(f, swarm, bounds=None, global_best=-1, global_pos=-1):
    """Iterate swarm

    f (function)            : cost function
    swarm (list)            : list of particles
    bounds (list)           : list of bounds for each dimension
    global_best (float)     : global best error
    global_pos (float)      : global best position
    """
    for j in range(0, len(swarm)):
        dimensions = swarm[j]["dimensions"]
        position = swarm[j]["position"]
        error_best = swarm[j]["error_best"]

        # evaluate new error (cost)
        error = swarm[j]["error"] = f(position)

        # update local best position if current position gives
        # better local error
        if (error < error_best or error_best == -1):
            swarm[j]["position_best"] = position
            swarm[j]["error_best"] = error
        position_best = swarm[j]["position_best"]
        velocity = swarm[j]["velocity"]

        # update global best if position of current particle gives
        # best global error
        if (error < global_best or global_best == -1):
            global_pos = list(position)
            global_best = float(error)

        # update particle velocity and position
        for i in range(0, dimensions):
            velocity[i] = update_velocity(
                velocity[i],
                position[i],
                position_best[i],
                global_pos[i]
            )
            position[i] = update_position(
                position[i],
                velocity[i]
            )
            # max value for position
            if bounds and (position[i] > bounds[i][1]): position[i] = bounds[i][1]
            # min value for position
            if bounds and (position[i] < bounds[i][0]): position[i] = bounds[i][0]

    # return
    return swarm, round(global_best, 2), [round(pos, 2) for pos in global_pos]

# Example 1
# minimize x^5 - 3x^4 + 5 over [0, 4]
def f(x): return x[0] ** 5 - 3 * x[0] ** 4 + 5

# reset global
global_best = -1
global_pos = -1

swarm = generate_swarm(x_0=[5], n_par=15)
for i in range(MAX_ITERATIONS):
    swarm, global_best, global_pos = iterate_swarm(
        f,
        swarm,
        bounds=[(0, 4)],
        global_best=global_best,
        global_pos=global_pos
    )

# the analytic minimum is at x = 2.4, where f(2.4) is approximately -14.91
assert abs(global_best - (-14.91)) < 0.1
assert abs(global_pos[0] - 2.4) < 0.1

# Example 2
# minimize -(5 + 3x - 4y - x^2 + x y - y^2)
def f(x): return -(5 + 3 * x[0] - 4 * x[1] - x[0] ** 2 + x[0] * x[1] - x[1] ** 2)

# reset global
global_best = -1
global_pos = -1

swarm = generate_swarm(x_0=[5, 5], n_par=15)
for i in range(MAX_ITERATIONS):
    swarm, global_best, global_pos = iterate_swarm(
        f,
        swarm,
        global_best=global_best,
        global_pos=global_pos
    )

# the analytic minimum is at (x, y) = (2/3, -5/3), with value approximately -9.33
assert abs(global_best - (-9.33)) < 0.1
assert abs(global_pos[0] - 0.67) < 0.1
assert abs(global_pos[1] - (-1.67)) < 0.1