To work with simulations, one needs considerably more code than a definition of the rules, such as the ones above. As in previous chapters, libraries (see Section 8.2) are used to avoid building this machinery from scratch. In principle, this part of the simulation code has two major tasks. The first is choosing values for various parameters and relationships and setting up the world. For system dynamics simulations, this means choosing flow parameters and the levels of stocks. For agent-based simulations, it requires choosing parameter values for each agent (often drawn from distributions), as well as locating the agents in the environment. For microsimulations, it includes choosing the initial data set used to populate the simulation and describing the rules of what takes place in the simulation model. After these setup procedures, the rest of the simulation is fairly simple: one executes the rules for all relevant parts and units, steps the time forward, and executes the rules again. Code Example 6.6 illustrates how these steps may look for an agent-based simulation. We have used the bright blue comments to explicate these three stages in the code; they are not executed by the computer.
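To make the three stages concrete, the following is a minimal sketch of an agent-based simulation in Python. It is not the book's Code Example 6.6; the model (agents taking random walks on a grid) and all names (`setup`, `apply_rules`, `run_simulation`) are illustrative assumptions.

```python
import random

GRID_SIZE = 20   # size of the square environment
N_AGENTS = 10    # number of agents to simulate
N_STEPS = 50     # how many time steps to run

def setup(seed=0):
    """Stage 1: choose parameter values and place agents in the environment."""
    rng = random.Random(seed)
    agents = [{"x": rng.randrange(GRID_SIZE), "y": rng.randrange(GRID_SIZE)}
              for _ in range(N_AGENTS)]
    return rng, agents

def apply_rules(rng, agents):
    """Stage 2: execute the rules for every agent (here, a random step)."""
    for agent in agents:
        agent["x"] = (agent["x"] + rng.choice([-1, 0, 1])) % GRID_SIZE
        agent["y"] = (agent["y"] + rng.choice([-1, 0, 1])) % GRID_SIZE

def run_simulation(seed=0):
    rng, agents = setup(seed)
    for _ in range(N_STEPS):   # Stage 3: step time forward and execute again
        apply_rules(rng, agents)
    return agents

final_state = run_simulation(seed=1)
print(len(final_state))  # prints 10: all agents persist through the run
```

Real agent-based models replace the random-walk rule with substantively motivated behaviour, but the setup/rules/time-step skeleton stays the same.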
As we hinted above, there are many ways of using simulation models. We have already discussed thought experiments, forecasting and policymaking with simulation models. Often the differences lie not in the chosen simulation approach but in understanding how the parameters and rules were developed and how they limit what can be said about the object of simulation. One can use simulations to explore a phenomenon or focus on some of its special aspects to gain more applicable understanding (Doran and Gilbert, 1994). The size of the simulation model also relates to what can be said based on the simulation and to its limits. To illustrate these perspectives, Kliemt (1996) suggests a distinction between thin simulation models, focused on controlled speculation and simplified assumptions, and thick simulation models, which are built on empirical data and inform scholars about specific questions. All these approaches to simulation are important and recognised, and each offers opportunities as well as limitations for scholarship.
We already suggested that many recommend starting simulation models as simple approaches; some have even argued that this encourages more theoretical thinking. These choices naturally depend on the scope of the simulations. While writing this, I am also following news from the COVID-19 pandemic and seeing many simulation models being used for policy purposes; I hope they are not overly simplified. The appropriate level of simplicity thus depends a great deal on the researchers' aims with the simulation. Process-wise, the suggested first step is to build as simple a simulation model as possible that produces outcomes similar to reality (Epstein and Axtell, 1996; Forrester, 1969). That is, do not necessarily start from theoretical conceptualisations of the problem and seek to integrate all of them into a model; instead, choose a few suitable ideas and test whether they give a close-enough approximation. Sometimes these simulations can also benefit from comparing model outcomes to a real-world event. For example, to check whether a military strategy simulation was sufficiently correct, Lappi et al. (2014) compared the simulation results to data from World War II.
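One simple way to compare model outcomes to real-world data is a summary error measure. The sketch below computes a root-mean-square error between a simulated and an observed time series; both series here are invented placeholders (not data from Lappi et al.), and the function name `rmse` is our own.

```python
import math

def rmse(simulated, observed):
    """Root-mean-square error between two equal-length series:
    lower values mean the simulation tracks the observations more closely."""
    squared_errors = [(s - o) ** 2 for s, o in zip(simulated, observed)]
    return math.sqrt(sum(squared_errors) / len(observed))

# Placeholder series: e.g. weekly counts from the model vs. the real event.
simulated = [10, 12, 15, 19, 24]
observed = [11, 12, 14, 20, 23]
print(round(rmse(simulated, observed), 3))  # prints 0.894
```

In practice one would also inspect the full trajectories visually, since a single aggregate number can hide systematic deviations.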
After the initial formulation and simplification stages comes the interpretation stage. Plots and data matrices that show what happened in the simulation are used here. In practice, analysis with simulation models includes running the simulation several times. One reason is to examine different parameters and rules: the code must be changed to reflect these variations and the simulation re-run, and the different results are then visualised to help make comparisons. A second reason to run simulation models several times is their stochastic nature, that is, their use of randomness. In agent-based simulations, for instance, the initial positions of agents are often randomised, so the simulation needs to be run several times to check that results hold across different initial positions. Sometimes one might even average over the different simulation runs to balance the variance between them.
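Repeated runs and averaging can be sketched as follows. The "model" here is a deliberately trivial stand-in (a noisy measurement around the value 5), an assumption of ours used only to show the seed-per-run pattern; in a real study `run_model` would be the full simulation.

```python
import random
import statistics

def run_model(seed):
    """Toy stand-in for one stochastic simulation run:
    a fixed underlying value of 5 plus Gaussian noise."""
    rng = random.Random(seed)
    return 5 + rng.gauss(0, 1)

# Run the same model under many different random seeds...
outcomes = [run_model(seed) for seed in range(100)]

# ...then summarise: the mean smooths out run-to-run randomness,
# while the standard deviation shows how much runs vary.
mean_outcome = statistics.mean(outcomes)
run_to_run_spread = statistics.stdev(outcomes)
print(round(mean_outcome, 1))
```

Fixing a distinct seed per run keeps every run reproducible, which makes it possible to revisit any single outlying run later.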