A Simple Queueing Network

With recursions it is (nearly) trivial to simulate feedforward networks, that is, networks without feedback or loops. Here is an example based on a program by Margot Jonker.

Here is the code in one file: queueingNetwork.py.

The fundamental algorithm

import numpy as np
from numpy.random import randint
import matplotlib.pyplot as plt


Assume that at the start of period \(n\) the queue length is \(Q_{n-1}\), that \(A_n\) jobs arrive in period \(n\), and that \(S_n\) jobs can be served in period \(n\), including the jobs that arrive during that period. Then the number of jobs \(D_n\) that can depart from the system at the end of period \(n\) is given by

\[D_n = \min\{Q_{n-1}+A_n, S_n\}.\]

As a consequence, the queue length at the end of period \(n\) is then equal to

\[Q_n = Q_{n-1}+A_n - D_n.\]

In a feedforward network the output process of an upstream queue is the input of a downstream queue. Hence, from the number of departures \(D_n\) during period \(n\) we can determine the number of arrivals at the downstream stations.

These observations lead to the following algorithm:

def computeQandD(A, S):
    """Compute queue lengths Q and departures D from arrivals A and service capacities S."""
    N = len(A)
    Q = np.empty(N)
    D = np.zeros_like(A)
    Q[0] = 0  # start with an empty queue
    for n in range(1, N):
        D[n] = min(Q[n-1] + A[n], S[n])
        Q[n] = Q[n-1] + A[n] - D[n]
    return Q, D
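As a quick sanity check, the recursion can be run by hand on a short deterministic input (the function is repeated here, with a fixed integer queue array, so the snippet runs on its own):

```python
import numpy as np

def computeQandD(A, S):
    N = len(A)
    Q = np.zeros(N, dtype=int)
    D = np.zeros_like(A)
    for n in range(1, N):
        D[n] = min(Q[n-1] + A[n], S[n])
        Q[n] = Q[n-1] + A[n] - D[n]
    return Q, D

A = np.array([0, 3, 2, 5])  # arrivals per period
S = np.array([0, 2, 4, 3])  # service capacity per period
Q, D = computeQandD(A, S)
print(Q.tolist())  # [0, 1, 0, 2]
print(D.tolist())  # [0, 2, 3, 3]
```

In period 1, for instance, 3 jobs arrive but only 2 can be served, so \(D_1 = 2\) and one job stays behind, \(Q_1 = 1\).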

An application

Let's consider the following network:

digraph network {
"1" -> "3";
"2" -> "3";
"3" -> "5";
"4" -> "5";
"5" -> "6" [ label = "1/3"];
"5" -> "7" [ label = "2/3"];

Note in particular that the outputs of queues 1 and 2 feed into queue 3, i.e., queue 3 is a join queue, and that a fraction of \(1/3\) of the output of queue 5 is sent to queue 6 and the remaining fraction of \(2/3\) is sent to queue 7, i.e., queue 5 is a fork queue.
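In array terms the join and fork operations are just elementwise arithmetic on the departure streams. A small sketch with made-up departure vectors (the names `d1`, `d2`, `d5` are hypothetical, not part of the program below):

```python
import numpy as np

# hypothetical short departure streams from two upstream queues
d1 = np.array([2, 0, 3])
d2 = np.array([1, 4, 0])

# join: a downstream queue sees the sum of the upstream departure streams
a_join = d1 + d2              # [3, 4, 3]

# fork: split a departure stream in fractions 1/3 and 2/3; integer
# division keeps job counts whole, the remainder goes to the other branch
d5 = np.array([6, 7, 2])
a6 = d5 // 3                  # [2, 2, 0]
a7 = d5 - a6                  # [4, 5, 2]
```

Splitting as `d5 // 3` and `d5 - d5 // 3` guarantees that no jobs are lost or duplicated in the fork, at the price of a slight bias towards the second branch.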

For the sake of the example we take uniformly distributed arrival processes at queues 1, 2, and 4. We also assume that the number of jobs that can be served in a period is uniformly distributed.

def network():
    # station 1; randint's upper bound is exclusive, so these are uniform
    # on {0, ..., 200} and {0, ..., 259}
    A1 = randint(0, 201, N)
    S1 = randint(0, 260, len(A1))
    Q1, D1 = computeQandD(A1, S1)

    # station 2
    A2 = randint(0,201,N)
    S2 = randint(0,260,len(A2))
    Q2, D2 = computeQandD(A2, S2)

    # station 3
    A3 = np.sum([D1, D2], axis=0)  # join the output of queues 1 and 2
    S3 = randint(0,540,len(A3))
    Q3, D3 = computeQandD(A3, S3)

    # station 4
    A4 = randint(0,201,N)
    S4 = randint(0,301,len(A4))
    Q4, D4 = computeQandD(A4, S4)

    # station 5
    A5 = np.sum([D3, D4], axis=0)  # join the output of queues 3 and 4
    S5 = randint(0,700,len(A5))
    Q5, D5 = computeQandD(A5, S5)

    # station 6
    A6 = D5 // 3  # fork 1/3 of station 5; integer division keeps job counts whole
    S6 = randint(0,300,len(A6))
    Q6, D6 = computeQandD(A6, S6)

    # station 7
    A7 = D5-A6    #fork the rest of station 5
    S7 = randint(100,400,len(A7))
    Q7, D7 = computeQandD(A7, S7)

    plt.plot(Q1, label="Q1")
    plt.plot(Q2, label="Q2")
    plt.plot(Q3, label="Q3")
    plt.plot(Q4, label="Q4")
    plt.plot(Q5, label="Q5")
    plt.plot(Q6, label="Q6")
    plt.plot(Q7, label="Q7")


The simulation horizon; with N set, calling network() runs the simulation and plots the queue lengths:

N = 5000
network()

Clearly, queue 5 is the bottleneck. It is interesting to see that the queue length fluctuates so wildly. Quite a number of books claim that it is easy to find the bottleneck in a sequence of machines: just look for the longest queue. This, however, seems easier said than done when queue lengths vary a lot.
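The bottleneck claim can be checked with a back-of-the-envelope load computation. Assuming that a stable queue passes on its arrivals on average, the mean arrival rates propagate through the network, and since randint(low, high) draws uniformly from {low, ..., high - 1} its mean is (low + high - 1)/2:

```python
def mean_uniform(low, high):
    # mean of randint(low, high), which draws from {low, ..., high - 1}
    return (low + high - 1) / 2

lam = {}                                         # mean arrivals per period
lam[1] = lam[2] = lam[4] = mean_uniform(0, 201)  # external arrivals, mean 100
lam[3] = lam[1] + lam[2]                         # join of queues 1 and 2
lam[5] = lam[3] + lam[4]                         # join of queues 3 and 4
lam[6] = lam[5] / 3                              # one third of queue 5's output
lam[7] = lam[5] - lam[6]                         # the remaining two thirds

mu = {1: mean_uniform(0, 260), 2: mean_uniform(0, 260),
      3: mean_uniform(0, 540), 4: mean_uniform(0, 301),
      5: mean_uniform(0, 700), 6: mean_uniform(0, 300),
      7: mean_uniform(100, 400)}                 # mean service capacity per period

rho = {q: lam[q] / mu[q] for q in lam}           # load of each queue
print(max(rho, key=rho.get))  # queue 5 carries the highest load
```

Queue 5 comes out at a load of about 0.86, the highest in the network, which is consistent with what the plots show.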