- This document lists questions about different aspects of system model calibration.
- System Dynamics (SD) is the main modeling paradigm, hence SD’s terminology is used.
- There are three roles dealing with the models:
- Modeler, who develops and implements the model and prepares it for calibration.
- Calibrator, who calibrates the model with different data for different parameters.
- Stakeholder, who requires different features of the model and outcomes from different scenario play-outs.
- There are two main calibration scenarios:
- Modeler and Calibrator are the same person
- Modeler and Calibrator are different people
- Model development and calibration are most likely going to be an iterative process.
- Hence, it is a good idea to keep track (a journal) of the model decisions, the calibration process, problems encountered and their resolutions, etc.
- It is assumed that each model calibration is done either for model development purposes or for scenario play-out studies.
- It is assumed in this document that the model has matured development-wise and model calibration is done in a (more) formal way.
- Stakeholder requires certain scenarios to be investigated.
- Modeler prepares the model for those scenarios.
- Stakeholder and Modeler formulate a calibration request.
- Calibrator uses the specifications from the calibration request to:
- Calibrate the model
- Derive model outcome results
- Provide qualitative model results
- Provide model sensitivity analysis results
- The calibration request should specify:
- Data to be used
- Calibration parameters and their value ranges
- Model outcomes to focus on
- Model properties to focus on
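A calibration request along these lines could be captured as a small machine-readable record. The following Python sketch uses assumed field names and a hypothetical data file path, not an established SystemModeling schema:

```python
# Hypothetical calibration request record; all field names and values
# are illustrative assumptions, not part of SystemModeling.
calibration_request = {
    "data": {
        "file": "infected-counts.csv",   # data to be used (hypothetical path)
        "target_column": "infected",
    },
    "parameters": {                       # calibration parameters and value ranges
        "contact_rate": {"range": [0.05, 0.6], "focus": [0.2, 0.4]},
        "recovery_rate": {"range": [0.01, 0.3]},
    },
    "outcomes": ["peak_infected", "time_of_peak"],  # model outcomes to focus on
    "properties": ["monotone_recovered_stock"],     # model properties to focus on
}

def validate_request(req):
    """Check that the request has the four sections listed above."""
    required = {"data", "parameters", "outcomes", "properties"}
    missing = required - set(req)
    if missing:
        raise ValueError(f"calibration request missing: {sorted(missing)}")
    return True

print(validate_request(calibration_request))  # True
```

Having an explicit record like this makes it easy for the Calibrator to confirm that all four parts of the request were actually received.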
- The list of questions below should help both Modeler and Calibrator to:
- Have a more streamlined process of communication
- Address problems of model application and deployment
- Keep track of model evolution due to iterative model-and-calibrate processes
- The following questions assume that models are developed and run with Mathematica or R.
- Currently, most SystemModeling models are run through Mathematica notebooks.
- Most of the SystemModeling models are System Dynamics models.
- Hence the questions below discuss stocks (e.g. people who are infected) and rates (e.g. number of contacts per person per day).
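Since the questions refer to stocks and rates, here is a minimal, hypothetical SIR-style sketch of how the two notions relate; it is written in Python purely for illustration (the actual SystemModeling models are in Mathematica or R), and all parameter values are assumptions:

```python
# Minimal SIR-style System Dynamics sketch (hypothetical example):
# stocks are population counts, rates are flows between them.

def simulate_sir(days=160, dt=1.0, n=1000.0,
                 contact_rate=0.3, recovery_rate=0.1):
    """Integrate the SIR equations with explicit Euler steps."""
    s, i, r = n - 1.0, 1.0, 0.0   # stocks: susceptible, infected, recovered
    history = [(0.0, s, i, r)]
    for step in range(1, int(days / dt) + 1):
        infections = contact_rate * s * i / n  # rate: new infections per day
        recoveries = recovery_rate * i         # rate: recoveries per day
        s -= infections * dt
        i += (infections - recoveries) * dt
        r += recoveries * dt
        history.append((step * dt, s, i, r))
    return history

hist = simulate_sir()
print(hist[-1])  # final (t, S, I, R) state
```

Note that the flows only move people between stocks, so the total population is conserved at every step.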
- [ ] Was the model changed in order to calibrate it?
- [ ] Were new stocks added?
- [ ] Were new parameters added?
- [ ] Did you do new model-related implementations?
- [ ] Visualization routines
- [ ] Model outcomes statistics routines
- [ ] Model introspection routines
- [ ] Where are the new implementations placed?
- [ ] Notebooks
- [ ] Packages
- [ ] Source version control repositories
- [ ] Did you get a (detailed) calibration request?
- [ ] What is the formulation of the calibration request?
- [ ] How did you get the model code?
- [ ] How did you get the data?
- [ ] Did you get any special calibration instructions?
- [ ] Did you get directions for results, targets, or parameters to focus on?
- [ ] What/which files have the calibration results?
- [ ] Are special representation routines/packages needed to read and interpret the calibration results?
- [ ] With what version of the programming language or system was the model calibrated?
- [ ] On what operating systems was the model calibrated?
- [ ] How long did it take you to make the first run of the model?
- [ ] How long did it take you to run the whole calibration process?
- [ ] When were “victory” or “sufficiently good results” declared?
- [ ] Under what conditions was the calibration process given up?
- [ ] Was data pre-processing needed?
- [ ] Is the calibration data different from the data the model was developed with?
- [ ] Was data feeding documentation adequate?
- [ ] What changes to the data feeding process were necessary?
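As an example of the kind of pre-processing that may be needed before data feeding, here is a hypothetical sketch (the actual process is model- and data-specific): converting cumulative case counts into daily increments while clipping spurious negative corrections to zero.

```python
# Hypothetical pre-processing step: turn a cumulative count series into
# daily increments; negative corrections in the source data are clipped to 0.

def cumulative_to_daily(cumulative):
    daily = []
    previous = 0
    for value in cumulative:
        daily.append(max(value - previous, 0))
        previous = value
    return daily

print(cumulative_to_daily([0, 3, 7, 7, 6, 12]))  # [0, 3, 4, 0, 0, 6]
```

Recording such transformations in the calibration journal makes it possible to answer this question precisely later.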
- [ ] Which calibration parameters were in focus?
- Requested by the stakeholder(s).
- [ ] Which calibration parameters did you use?
- [ ] Did the calibration parameters have:
- [ ] Specified general ranges
- [ ] Prescribed range subsets of interest
- [ ] Did you use all calibration parameters specified in the calibration request?
- [ ] Which calibration parameters were:
- Most important
- Most sensitive
- Most difficult to deal with
- [ ] Did the model dynamics change in the expected way with the calibration parameters?
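One simple way for the Calibrator to respect the specified parameter ranges is a brute-force grid search. The sketch below is hypothetical in every respect: the objective is a stand-in for an actual model-vs-data error, and the parameter names and ranges are assumptions (a real calibration in this setting would more likely use Mathematica's optimization machinery):

```python
import itertools

# Stand-in objective (hypothetical): squared error between a fake model
# summary (peak infected count) and an observed value.
def objective(contact_rate, recovery_rate, observed):
    predicted_peak = 100.0 * contact_rate / recovery_rate
    return (predicted_peak - observed) ** 2

def grid_calibrate(ranges, observed, steps=20):
    """Brute-force search over the ranges given in the calibration request."""
    grids = [
        [lo + (hi - lo) * k / (steps - 1) for k in range(steps)]
        for lo, hi in ranges.values()
    ]
    best = min(
        itertools.product(*grids),
        key=lambda point: objective(*point, observed),
    )
    return dict(zip(ranges, best))

fit = grid_calibrate({"contact_rate": (0.05, 0.6),
                      "recovery_rate": (0.01, 0.3)},
                     observed=250.0)
print(fit)
```

A grid search is rarely the most efficient choice, but it makes the "which ranges were actually searched" question trivially answerable.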
- [ ] Did you use particular algorithm specifications?
- [ ] Did you change the precision and accuracy goals?
- [ ] Decreased
- [ ] Increased
- [ ] Multiple changes
- [ ] Did you observe changes in the modeled system behavior while changing:
- [ ] Computational algorithms
- [ ] Precision or accuracy
- [ ] Order of computations
- [ ] Did the model have units?
- [ ] Did the data have unit tests?
- [ ] Did you run model unit tests?
- [ ] Did the data pass unit tests?
- [ ] Were the pre-calibration unit tests adequate?
- [ ] Did you create new unit tests?
- [ ] Do you propose new unit tests?
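Data unit tests along these lines can be as simple as a few assertions run before feeding the data into the model. The checks below are hypothetical examples (assumed function name, assumed population bound), not tests from the SystemModeling repository:

```python
# Hypothetical data unit tests the Calibrator might run before calibration.

def check_calibration_data(times, values, population=1000.0):
    """Raise AssertionError if the data violates basic model assumptions."""
    assert len(times) == len(values), "times and values must align"
    assert all(t2 > t1 for t1, t2 in zip(times, times[1:])), \
        "time stamps must be strictly increasing"
    assert all(0 <= v <= population for v in values), \
        "stock values must stay within the population"
    return True

print(check_calibration_data([0, 1, 2, 3], [1, 4, 9, 20]))  # True
```

Keeping such checks in a package (rather than a notebook) makes them easy to rerun on every new calibration data set.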
- [ ] What issues were encountered when executing the model?
- [ ] How were the encountered issues resolved?