# Simulation analysis of a single server queue

In this page we simulate the queue length process of a single server by the simplest means possible.

I invite you to use this code for your own experiments: `case1.py`.

## Computing the queue length recursively

Let

- \(a_n\) denote the number of arrivals in week \(n\),
- \(C_n\) the maximal service capacity in week \(n\),
- \(d_n\) the number of customers that depart from the server in week \(n\),
- \(Q_n\) the queue length at the end of week \(n\).

If we assume that all customers arrive at the start of week \(n\), then the following must hold:

\[\begin{split}d_{n} &= \min\{Q_{n-1}+a_n, C_n\}, \\ Q_{n} &= Q_{n-1}+a_n - d_n.\end{split}\]

If, on the other hand, we assume that all customers arrive at the end of week \(n\), then it must be that

\[\begin{split}d_{n} &= \min\{Q_{n-1}, C_n\}, \\ Q_{n} &= Q_{n-1}+a_n - d_n.\end{split}\]

Below we follow the first of these recursions.
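For instance, with \(Q_0 = 0\), \(a_1 = 4\), and \(C_1 = 3\), the first recursion gives

\[\begin{split}d_1 &= \min\{0 + 4, 3\} = 3, \\ Q_1 &= 0 + 4 - 3 = 1.\end{split}\]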

The following function computes the queue length process for given \(a\), \(C\), and \(Q_0\). First we need some libraries for numerical work and making plots. We also need to fix the seed of the random number generator to ensure we always get the same random numbers.

```
import numpy as np
import matplotlib.pyplot as plt

from nickysLibrary import empericalDistribution

np.random.seed(3)  # fix the seed for reproducibility


def computeQ(a, C, Q0=0):  # initial queue length is 0
    N = len(a)
    Q = np.zeros(N, dtype=int)  # stores the values of Q
    d = np.zeros(N, dtype=int)  # stores the values of d
    Q[0] = Q0
    for n in range(1, N):
        d[n] = min(Q[n-1] + a[n], C[n])
        Q[n] = Q[n-1] + a[n] - d[n]
    return Q, d
```

Let's run a simple simulation with some made-up data and see what happens. Note that we set \(a_0 = 0\), as Python (like many other programming languages) starts indexing lists at \(i=0\).

```
a = [0, 4, 7, 3, 1, 5, 0, 0, 0]
C = [3] * len(a)
Q, d = computeQ(a, C)
plt.plot(a, drawstyle='steps-pre', label='a')
plt.plot(d, drawstyle='steps-mid', label='d')
plt.plot(Q, drawstyle='steps-post', label='Q')
plt.legend()
```

## Input-output diagrams

I want to plot the total number of arrivals \(A_n\) up to and including week \(n\) as a function of \(n\), and likewise the total number of departures \(D_n\), i.e.,

\[A_n = \sum_{i=0}^n a_i, \quad D_n = \sum_{i=0}^n d_i.\]

Often, for real systems, these processes can be measured independently, although sometimes with considerable effort. Here we only have the arrivals per week and the potential number of customers served, i.e., \(C_n\), at our disposal. So, how to obtain \(D_n\)? Well, in a sense that is quite easy. Observe that all customers that arrived up to week \(n\) are either still in queue or have left the system. Therefore,

\[D_n = A_n - Q_n.\]

Let's make a plot. I plot the arrivals \(A\) at the start of an interval, the departures \(D\) halfway, and the resulting queue \(Q\) at the end of an interval. This, in my opinion, shows the transitions that occur somewhat more clearly.

```
A = np.cumsum(a)  # total number of arrivals up to and including week n
D = A - Q
plt.plot(A, drawstyle='steps-pre', label='A')
plt.plot(D, drawstyle='steps-mid', label='D')
plt.plot(Q, drawstyle='steps-post', label='Q')
plt.legend()
```

Perhaps we should make this a bit more realistic and generate some more data.

```
a = np.random.poisson(3, size=10)
C = 3. * np.ones_like(a)
Q, d = computeQ(a, C)
A = np.cumsum(a)
D = A - Q
plt.plot(A, drawstyle='steps-pre', label='A')
plt.plot(D, drawstyle='steps-mid', label='D')
plt.plot(Q, drawstyle='steps-post', label='Q')
plt.legend()
```

## Waiting times

To find the waiting times we need to invert the arrival sequence `a` and the departure sequence `d`. To see this, observe that if, for instance, `a[1]=3`, then three customers arrive in time period 1. Thus the arrival times of customers 1, 2 and 3 are all 1. If `a[2]=1`, then the arrival time of customer 4 is 2, and so on. Similar reasoning for `d` leads to the following computation of the waiting times.

```
def waitingTime(At, Dt):
    # At[i] is the number of arrivals in time slot i.
    # Convert this to the arrival time of each customer.
    Ak = np.repeat(np.arange(len(At)), At)
    # Dt[i] is the number of departures in time slot i.
    # Convert this to the departure time of each customer.
    Dk = np.repeat(np.arange(len(Dt)), Dt)
    m = min(len(Ak), len(Dk))
    W = Dk[:m] - Ak[:m]
    return Ak, Dk, W


Ak, Dk, W = waitingTime(a, d)
plt.plot(Ak, label='Ak')
plt.plot(Dk, label='Dk')
plt.plot(W, label='W')
plt.legend()
```
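As a cross-check on these quantities, Little's law says that the average queue length should be close to the arrival rate times the average waiting time. The sketch below is a self-contained rerun of the recursion under assumed example parameters (capacity 4, and \(a_0 = 0\) as in the text):

```python
import numpy as np

np.random.seed(3)

a = np.random.poisson(3, size=10000)
a[0] = 0  # as in the text: no arrivals in week 0
C = np.full(len(a), 4)

# queue length recursion, as in computeQ
Q = np.zeros(len(a), dtype=int)
d = np.zeros(len(a), dtype=int)
for n in range(1, len(a)):
    d[n] = min(Q[n - 1] + a[n], C[n])
    Q[n] = Q[n - 1] + a[n] - d[n]

# per-customer arrival and departure times, as in waitingTime
Ak = np.repeat(np.arange(len(a)), a)
Dk = np.repeat(np.arange(len(d)), d)
m = min(len(Ak), len(Dk))
W = Dk[:m] - Ak[:m]

# Little's law: mean queue length ~ arrival rate * mean waiting time
print(Q.mean(), a.mean() * W.mean())
```

The two printed numbers should nearly coincide, since summing the queue lengths over weeks counts exactly the weeks each customer spends waiting.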

## The effect of the service capacity

Let's run a large simulation.

```
a = np.random.poisson(3, size=20000)
C = 3. * np.ones_like(a)
Q, d = computeQ(a, C)
plt.plot(Q, drawstyle='steps-post', label='Q')
plt.legend()
```

Up to now the service capacity has been equal to the arrival rate, i.e., 3. What is the influence of the service rate?

First we choose a smaller value.

```
a = np.random.poisson(3, size=20000)
C = 2.8 * np.ones_like(a)
Q, d = computeQ(a, C)
plt.plot(Q, drawstyle='steps-post', label='Q')
plt.legend()
```

And now a larger value.

```
a = np.random.poisson(3, size=20000)
C = 3.2 * np.ones_like(a)
Q, d = computeQ(a, C)
plt.plot(Q, drawstyle='steps-post', label='Q')
plt.legend()
```

This is a dramatic difference. When the capacity equals 3 we see erratic behavior; when it is 2.8, i.e., smaller than the arrival rate, the queue grows steadily; and when it is 3.2 the queue remains bounded. This suggests that for queueing systems with insufficient capacity the dominant behavior is given by the drift, i.e., the arrival rate minus the service rate, and that the stochastic fluctuations are less relevant.
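The drift argument can be checked numerically. Below is a sketch that uses a float-valued variant of `computeQ` (so the fractional capacity 2.8 is honored exactly rather than truncated by the integer arrays) and compares the queue's growth rate per week to the drift:

```python
import numpy as np

np.random.seed(3)


def simulateQ(a, C, Q0=0.0):
    # same recursion as computeQ, but with float arrays
    N = len(a)
    Q = np.zeros(N)
    d = np.zeros(N)
    Q[0] = Q0
    for n in range(1, N):
        d[n] = min(Q[n - 1] + a[n], C[n])
        Q[n] = Q[n - 1] + a[n] - d[n]
    return Q, d


a = np.random.poisson(3, size=20000)
Q, d = simulateQ(a, 2.8 * np.ones(len(a)))
drift = a.mean() - 2.8  # arrival rate minus service rate, about 0.2
print(Q[-1] / len(a), drift)  # growth per week vs. drift
```

Once the queue is large the server works at full capacity every week, so the queue grows by almost exactly the drift per week.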

## The empirical distribution function

Now we are going to plot the empirical distribution function of the queue length.
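The function `empericalDistribution` comes from `nickysLibrary`, which may not be available to you. Based on how it is used below, a minimal stand-in could be sketched as follows; the attribute names `x` and `cdf` are assumptions inferred from the plotting calls:

```python
import numpy as np


class empericalDistribution:
    # Minimal stand-in for nickysLibrary.empericalDistribution.
    # Exposes the sorted data in `x` and the empirical CDF in `cdf`,
    # which is all the plotting code relies on.
    def __init__(self, data):
        data = np.sort(np.asarray(data))
        self.x = data
        self.cdf = np.arange(1, len(data) + 1) / len(data)
```

With this, `plt.plot(e.x, e.cdf)` draws the empirical distribution function of the simulated queue lengths.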

An experiment.

```
a = np.random.poisson(3, size=20000)
C = 3 * np.ones_like(a)
Q, d = computeQ(a, C)
e = empericalDistribution(Q)
plt.plot(e.x, e.cdf)
```

This is a curious plot. Why does the distribution function of \(Q\) increase roughly as a straight line? This suggests that small queues are about as probable as large queues. In practice we don't see this behavior all that often.

Some thought reveals that this is due to the service capacity being 3, the same as the arrival rate of the customers. Let's make the capacity a bit larger, say 3.2, and see what happens.

```
a = np.random.poisson(3, size=20000)
C = 3.2 * np.ones_like(a)
Q, d = computeQ(a, C)
e = empericalDistribution(Q)
plt.plot(e.x, e.cdf)
```

Clearly, smaller queues are much more likely now, as the distribution function increases much faster. Moreover, the largest queue length that occurred is much smaller.

## Stochastic service times

Up to now the service capacity has been constant. What is the influence of stochastic service capacity on the distribution of the queue length? Let's compare: first we redo the deterministic case, then we simulate a case with Poisson distributed capacity.

```
a = np.random.poisson(3, size=20000)
mu = 3.2
C = mu * np.ones_like(a)
Q, d = computeQ(a, C)
det = empericalDistribution(Q)

C = np.random.poisson(mu, size=20000)
Q, d = computeQ(a, C)
pois = empericalDistribution(Q)

plt.plot(det.x, det.cdf, label="det service")
plt.plot(pois.x, pois.cdf, label="sto service")
plt.legend()
```

Apparently, the queue length is quite a bit larger when service times are stochastic.

## Characterizing the output process

Up to now we computed the departure process but did not analyze it; this we do here.

The first test is to see whether the average number of departures per week, i.e., the departure rate \(r_d\), is approximately the same as the arrival rate \(r_a\).

```
a = np.random.poisson(3, size=20000)
mu = 3.2
C = mu * np.ones_like(a)
Q, d = computeQ(a, C)
ra = a.mean()
rd = d.mean()
print(ra)
print(rd)
```

```
3.01315
2.99985
```

So, why is \(r_a > r_d\) here? There must be some jobs left behind in the queue:

```
print(Q[-1]) # print the last item of Q
```

```
265
```
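This bookkeeping can be verified directly: every customer that entered is either departed or still in queue, so \(r_a - r_d = Q_N/N\). A self-contained sketch, setting \(a_0 = 0\) since the recursion starts at \(n=1\):

```python
import numpy as np

np.random.seed(3)

a = np.random.poisson(3, size=20000)
a[0] = 0  # the recursion starts at n = 1, so week 0 has no arrivals
C = 3.2 * np.ones_like(a)
Q = np.zeros(len(a), dtype=int)
d = np.zeros(len(a), dtype=int)
for n in range(1, len(a)):
    d[n] = min(Q[n - 1] + a[n], C[n])
    Q[n] = Q[n - 1] + a[n] - d[n]

# conservation: arrivals = departures + customers left in queue
print(a.sum() == d.sum() + Q[-1])  # True
print(a.mean() - d.mean(), Q[-1] / len(a))  # equal by the same identity
```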

The second test is to compute the squared coefficient of variation (SCV) \(c_d^2\) of the output process. There is a formula in Factory Physics that states that

\[c_d^2 = u^2 c_e^2 + (1-u^2) c_a^2,\]

where \(c_e^2\) is the SCV of the service times, \(c_a^2\) the SCV of the interarrival times, and \(u = r_a/r_e\) the utilization. Now there is a slight problem here. In this simulation we count customers per week; we do not keep track of interarrival times or service times. We therefore need a means to convert the coefficient of variation of the number of arrivals in a week to the coefficient of variation of the interarrival times. This point is dealt with explicitly by the following formula (Eq. 8.12 in the third edition of Factory Physics):

\[c_a^2 = \frac{\sigma_n^2}{\mu_n},\]

where \(\sigma_n\) is the standard deviation of the number of customers that arrive in a slot/week, and \(\mu_n\) is the mean number of arrivals in a week. A similar formula holds for \(c_e^2\). We analyze this formula below in more detail.

```
ca2 = a.var()/a.mean()
ce2 = C.var()/C.mean()
cd2 = d.var()/d.mean()
u = a.mean()/C.mean()
cd2approx = u*u*ce2 + (1-u*u)*ca2
print(ca2)
print(ce2)
print(u)
print(cd2approx)
print(cd2)
```

```
0.995595001079
6.16297582204e-32
0.941609375
0.112872382322
0.00015
```

Both estimates of \(c_d^2\) are small. Note that \(c_e^2\) is essentially zero, up to floating-point noise. Why is that? Because the service capacity is constant, i.e., \(C_n = 3.2\) for all \(n\).

## Cycle time analysis

The famous approximation for the average queueing time of the \(G/G/1\) queue is

\[CT_q = \frac{c_a^2 + c_e^2}{2} \, \frac{u}{1-u} \, t_e,\]

where \(t_e = 1/r_e\) is the average service time. The average cycle time is then \(CT = CT_q + t_e\). From Little's law it follows that the average number of customers in the system is

\[W = r_a \, CT.\]

```
ra = a.mean()
te = 1./C.mean()
CTq = (ca2+ce2)/2 * u/(1-u)*te
CT = CTq + te
W = ra*CT
print("W: ", W)
print("Q: ", Q.mean())
```

```
W: 8.50037945801
Q: 185.259
```

So, the average number of customers as estimated by the simulation, i.e., `Q.mean()`, is smaller than the number computed by the formula, i.e., `W`. Can this be because we assume that the customers served in a week include the customers that arrive in that week, i.e., that we use

\[d_{n} = \min\{Q_{n-1}+a_n, C_n\}\]

rather than

\[d_{n} = \min\{Q_{n-1}, C_n\}?\]

So, let's try this second recursion, redo the simulation, and compare.

```
def computeQlower(a, C, Q0=0):  # initial queue length is 0
    N = len(a)
    Q = np.zeros(N)  # stores the values of Q
    d = np.zeros(N)  # stores the values of d
    Q[0] = Q0
    for n in range(1, N):
        d[n] = min(Q[n-1], C[n])  # this line is changed
        Q[n] = Q[n-1] + a[n] - d[n]
    return Q, d


Q, d = computeQlower(a, C)
print("Q: ", Q.mean())
```

```
Q: 9.72648
```

And now it is larger than W.

## Relating different coefficients of variation

To test the validity of our simulations, we need to apply them to simple situations for which (approximate) analytical solutions are available, such as the \(G/G/m\) waiting time formula, and compare the simulation results to the analytical ones. This is, however, not entirely straightforward. The problem is this: in our simulations we simulate the number of jobs arriving per week; we do not simulate the interarrival times between consecutive jobs. But the analytical models use expressions for the squared coefficient of variation of the interarrival times, i.e., \(c_a^2\), not the squared coefficient of variation of the number of arrivals in a week, i.e., \(c_n^2\). Thus, with the simulator we can easily compute \(c_n^2\), but not \(c_a^2\). So we need a way to relate \(c_n^2\) to \(c_a^2\). This is the goal of Eq. 8.12 in the third edition of Factory Physics:

\[c_a^2 = \frac{\sigma_n^2}{\mu_n},\]

where \(\sigma_n\) is the standard deviation of the number of customers that arrive in a slot/week, and \(\mu_n\) is the mean number of arrivals in a week. The goal of this section is to analyze the accuracy of this formula.

Before we run a large-scale simulation, we first try a small case with exponentially distributed interarrival times.

```
T = 7  # a week is seven days long
size = 10  # simulation size
a = np.random.exponential(2, size=size)  # interarrival times with mean 2
print(a)
```

```
[ 0.78749952  1.28698272  1.96068493  1.19477867  1.14581811  0.25678213
  4.13187485  3.42744454  6.10702406  0.48711931]
```

Computing \(c_a^2\) is straightforward:

```
ca2 = a.var()/(a.mean()**2)
print(ca2)
```

```
0.742355496265
```

This is quite a bit different from 1, the value we expect since the interarrival times are exponential, but that is presumably due to the small sample size.
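To see that the deviation from 1 is indeed a small-sample effect, we can repeat the estimate for increasing sample sizes (a quick sketch; for exponential interarrival times \(c_a^2 = 1\)):

```python
import numpy as np

np.random.seed(3)

# the SCV estimate of exponential samples should approach 1
for size in (10, 100, 10000):
    a = np.random.exponential(2, size=size)
    ca2 = a.var() / a.mean() ** 2
    print(size, ca2)
```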

Now we compute \(\sigma_n^2\). First we need to count the number of arrivals per week, so we start by computing the arrival times from `a`:

```
t = a.cumsum()
print(t)
```

```
[ 0.78749952 2.07448224 4.03516717 5.22994584 6.37576395
6.63254609 10.76442094 14.19186547 20.29888953 20.78600885]
```

Next, make a set of bins with bin size T.

```
bins = np.arange(0,t[-1], T)
print(bins)
```

```
[ 0. 7. 14.]
```

To find the bin in which each arrival time falls we use `np.digitize`.

```
c = np.digitize(t, bins)
print(c)
```

```
[1 1 1 1 1 1 2 3 3 3]
```

To count the number of arrivals per bin, we can use `np.bincount`.

```
c = np.bincount(c)
print(c)
```

```
[0 6 1 3]
```

And now, finally,

```
cn2 = c.var()/c.mean()
print(cn2)
```

```
2.1
```

Let's run it all for one large sample.

```
size = 10000  # simulation size
a = np.random.exponential(2, size=size)
ca2 = a.var() / (a.mean() ** 2)
print(ca2)
t = a.cumsum()
bins = np.arange(0, t[-1], T)
c = np.digitize(t, bins)
c = np.bincount(c)
cn2 = c.var() / c.mean()
print(cn2)
```

```
0.997044843132
1.0405321075
```

Pretty good. We need to test this further for more general interarrival distributions, but we postpone that to a later date.
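As a preview of such a test, the same computation can be run for a non-exponential case, e.g. Erlang-2 (gamma with shape 2) interarrival times, for which \(c_a^2 = 1/2\). This is only a sketch; note that it drops the always-empty bin that `np.digitize` reserves for values below 0:

```python
import numpy as np

np.random.seed(3)

T = 7  # bin size: one week
a = np.random.gamma(shape=2, scale=1, size=100000)  # Erlang-2, mean 2
ca2 = a.var() / a.mean() ** 2
t = a.cumsum()
bins = np.arange(0, t[-1], T)
c = np.bincount(np.digitize(t, bins))[1:]  # arrivals per week
cn2 = c.var() / c.mean()
print(ca2, cn2)  # both should be roughly 0.5
```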

## Other extensions, to do

- relate the queue length process to waiting times, using the fact that \(C_n\) need not be constant
- empirical distribution of the departure process
- simulate a tandem queueing network
- a network with blocking
- a semi-open network: orders are queued at the first queue, i.e., the order book, and are admitted by a CONWIP loop to enter the shop floor
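As a head start on the tandem network item, the weekly recursion can simply be chained: the departures of the first station are the arrivals of the second. This is only a sketch; the station capacities of 4 are made-up example values:

```python
import numpy as np

np.random.seed(3)


def computeQ(a, C, Q0=0):
    # same recursion as in the text
    N = len(a)
    Q = np.zeros(N, dtype=int)
    d = np.zeros(N, dtype=int)
    Q[0] = Q0
    for n in range(1, N):
        d[n] = min(Q[n - 1] + a[n], C[n])
        Q[n] = Q[n - 1] + a[n] - d[n]
    return Q, d


a = np.random.poisson(3, size=10000)
C = np.full(len(a), 4)
Q1, d1 = computeQ(a, C)   # station 1
Q2, d2 = computeQ(d1, C)  # station 2 is fed by the departures of station 1
print(Q1.mean(), Q2.mean())
```

Customers are conserved across the chain: every departure from station 1 either departs from station 2 or is still queued there.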