
Avoiding Mistakes in Population Modeling:
General Mistakes

List of common mistakes in population modeling


Invalid model assumptions

Every model makes assumptions, and many assumptions are not or cannot be validated (if they were validated, they would be parameters, not assumptions). However, there are several types of checks that a user can make to ensure that the model assumptions are consistent with each other, and with the available qualitative information about the species' life history.

It is important to remember that a program such as RAMAS GIS is not a model, but a tool that helps users build their own models. RAMAS GIS has been validated at the program level by checking all subunits and algorithms of the programs, by checking the lower-level algorithms for consistency (e.g., random distributions of vital rates), and by making sure that the program does what is described in the manual and help files. Validation at the model level is the user's responsibility. For tips on how to do this, see the relevant topics in the RAMAS Metapop help files.

These help topics make a number of suggestions. Two specific examples are given below.

  1. Print a summary of your model (select "Export Model Summary" from the File menu to save a file that you can view and print using your browser), and examine the features and parameters of the model.
  2. Examine the structure of the program, which is summarized in the form of pseudo-code in a file distributed with the program (RAMAS Metapop Algorithm.PDF in the program folder). This describes the way parameters are combined into the overall model. It is important that you understand how the program uses each parameter, and not rely on your intuitive understanding based on the parameter's name. The same name or symbol may be used in entirely different contexts by different programs or textbooks.

Also, read about how to check for and correct internal inconsistencies of your model.


Model too complex

Many modelers are tempted to add a lot of complexity to realistically reflect the life history of the species. However, complexity means more parameters to estimate, for which data may not be available. Attempts to include more details than can be justified by the quality of the available data may result in decreased predictive power (Ginzburg and Jensen 2004). In many cases, simple assumptions may be preferable to adding new factors or parameters.

For example, if the number of individuals in older age classes is small, then pooling these data into a "composite age class" (e.g., a 3+ "age" class that includes all individuals that are 3 years old and older) would allow a more precise estimate of the survival rate of these individuals. Of course, this would mean making several assumptions (survival and fecundity do not change after age 3; there is no senescence), which should be compared to what is known about the biology of the species.
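The effect of pooling older ages into a composite class can be sketched outside RAMAS with a small projection matrix. In the hypothetical example below (all vital rates are invented for illustration, not taken from any real dataset), the last class is a "3+" composite: the self-loop in the bottom-right cell keeps surviving 3+ individuals in that class, which is exactly the assumption that survival does not change after age 3.

```python
# Hypothetical vital rates, for illustration only.
# 3-class matrix: top row = fecundities, subdiagonal = survivals,
# bottom-right cell = self-loop survival of the composite "3+" class.
A = [
    [0.0, 1.2, 1.5],
    [0.6, 0.0, 0.0],
    [0.0, 0.7, 0.5],
]

def project(matrix, n, steps):
    """Multiply the abundance vector n by the matrix `steps` times."""
    for _ in range(steps):
        n = [sum(matrix[i][j] * n[j] for j in range(len(n)))
             for i in range(len(matrix))]
    return n

# After enough steps, total abundance grows by a constant factor per
# step (the dominant eigenvalue, i.e. the finite rate of increase).
n = project(A, [100.0, 50.0, 30.0], 20)
growth = sum(project(A, n, 1)) / sum(n)
```

Note that the composite class changes only the matrix structure, not the projection machinery; a full age-classified Leslie matrix would simply have more rows and no self-loop.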

Another example is adding various factors (e.g., inbreeding depression, behavioral traits, Allee effects, etc.) that affect the dynamics of very small populations. Although it is preferable to model each of these factors when data are available, adding them often involves a lot of guesswork and assumptions. When data are not available, it may be better to simply ignore these factors and instead focus on the risk of decline to a threshold greater than zero (rather than the risk of extinction). When the model includes multiple populations, this can be done in two different ways: (1) use an extinction threshold for the whole population (in the Stochasticity dialog), or (2) use a local threshold for each population. See the help files and the manual for how to set these thresholds, and for the difference between the two.
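The idea of estimating a risk of decline to a nonzero threshold, rather than extinction itself, can be illustrated with a minimal stochastic simulation. Everything below (growth rate, variability, threshold, horizon) is a made-up example, not a RAMAS algorithm: the point is only that the probability of crossing a threshold such as 20 individuals is estimable even when no replicate ever reaches zero.

```python
import random

random.seed(1)

def risk_of_decline(n0, mean_log_r, sd_log_r, threshold, years, reps):
    """Fraction of replicates in which abundance ever falls below
    `threshold` within `years` steps, under multiplicative
    (lognormal) growth.  Illustrative sketch only."""
    hits = 0
    for _ in range(reps):
        n = n0
        for _ in range(years):
            n *= random.lognormvariate(mean_log_r, sd_log_r)
            if n < threshold:
                hits += 1
                break
    return hits / reps

# Probability of declining below 20 individuals within 50 years,
# for a population starting at 100 with no average trend.
risk = risk_of_decline(n0=100, mean_log_r=0.0, sd_log_r=0.3,
                       threshold=20, years=50, reps=1000)
```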

Another problem with complex models is that it is more difficult to interpret or understand their results, some of which may be counter-intuitive. In many cases, even when sufficient data exist to build a complex model, it is advisable to start from a very simple version of your model (e.g., scalar, deterministic, density-independent), and add one feature at a time, running the model and examining its results at each step. Alternatively (if you have already built a complex model), you can remove one feature of the model at a time. A process of addition or elimination may let you understand the factors that contribute to the results, and notice problems in your model.

Each feature is removed in a different way:

  1. To remove all variability (all stochastic elements) and run a deterministic simulation, set Replications to 0 (in the General information dialog).
  2. To eliminate only catastrophes, go to the Catastrophes dialog and uncheck each box under "Affects".
  3. To remove random fluctuations in vital rates, go to the Standard deviations matrix dialog and set all elements to zero by using "Autofill" with CV=0.
  4. To remove age/stage structure, go to the Stages dialog and delete all but one of the stages; then go to the Stage Matrix dialog and enter the population growth rate ("finite rate of increase") as the only "stage matrix" element; then go to the Initial abundances dialog to enter the correct value.
  5. To remove density dependence, go to the Density dependence dialog, select "All populations have the same density dependence type", and set this type to "Exponential".
  6. To remove all spatial features, go to the Populations dialog and delete all but one of the populations.
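If you are prototyping a model outside RAMAS, the same stepwise logic can be sketched as a simulation function whose optional arguments switch features on and off. The parameter values below are hypothetical; the sketch only shows the pattern of adding (or removing) one feature at a time, here environmental stochasticity and a simple ceiling-type density dependence.

```python
import random

def simulate(n0, growth, years, sd=0.0, carrying_capacity=None, seed=None):
    """One population trajectory.  sd=0 removes environmental
    stochasticity (a deterministic run); carrying_capacity=None
    removes density dependence (exponential growth)."""
    rng = random.Random(seed)
    n = n0
    for _ in range(years):
        r = rng.normalvariate(growth, sd) if sd > 0 else growth
        n *= r
        if carrying_capacity is not None:
            n = min(n, carrying_capacity)  # ceiling-type density dependence
    return n

det = simulate(100.0, 1.05, 10)                       # simplest version
cap = simulate(100.0, 1.05, 10, carrying_capacity=120)  # add one feature
```

Comparing runs that differ in exactly one feature makes it much easier to attribute a change in the results to that feature.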


Model too simple

Some modelers make the opposite mistake of building a model that is (1) too simple to use all the available data and incorporate all information about the life history of the species, or (2) too simple to properly address the question at hand, or (3) too simple to incorporate all the known threats to the species. The risk in building a very simple model is that it may lead to an underestimate of the risk of decline or extinction (because all the threats may not be modeled) or to an inefficient use of the available information. The question of the appropriate level of complexity (i.e., the trade-off between realism and functionality) depends on:

  1. characteristics of the species under study (e.g., its ecology),
  2. what you know of the species (the availability of data), and
  3. what you want to know or predict (the questions addressed).

For example, if the question to be addressed relates to harvest (hunting, fishing, etc.) of the population, or population management by translocating individuals, the results will probably depend sensitively on which age classes are harvested or translocated. In such cases, a simple model that does not include the population's age structure may not be very useful. Similarly, if different age classes show different physiological sensitivities to a particular toxin, then an ecotoxicological assessment using a structured model would be more appropriate than one that uses a scalar or unstructured model (one that does not have age or stage structure).
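Why a scalar model is inadequate for such questions can be seen in a toy two-class example (all numbers invented for illustration): removing the same number of individuals per year has a very different long-run effect depending on whether juveniles or breeding adults are harvested, a distinction a scalar model cannot express at all.

```python
# Hypothetical two-class model: class 0 = juveniles, class 1 = adults.
A = [[0.0, 1.8],   # only adults reproduce
     [0.5, 0.8]]   # juvenile survival; adult survival

def project_with_harvest(n, years, harvest_class, harvest):
    """Project abundance, removing `harvest` individuals from one
    class each year.  Illustrative sketch only."""
    for _ in range(years):
        n = [A[0][0] * n[0] + A[0][1] * n[1],
             A[1][0] * n[0] + A[1][1] * n[1]]
        n[harvest_class] = max(0.0, n[harvest_class] - harvest)
    return sum(n)

juv_harvest = project_with_harvest([100.0, 100.0], 20, 0, 10)
adult_harvest = project_with_harvest([100.0, 100.0], 20, 1, 10)
# Harvesting adults depresses abundance more, because adults have
# the higher reproductive value in this matrix.
```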

Some modelers are under the impression that data from a regular census of a population (i.e., "count data") can only be used to develop a scalar model. This impression perhaps arises because a population viability analysis using a scalar model is sometimes (confusingly) called a "count-based PVA." However, if the census is not restricted to the total population, but also includes a count of individuals in each age class or stage, then such data can be used to develop an age- or stage-structured model.

An objective way to determine the appropriate level of complexity is to fit statistical models of different complexity to the available data, and compare them using the Akaike Information Criterion (AIC), which balances goodness of fit against the number of estimated parameters. If you are not sure of the appropriate level of complexity, you may want to use multiple models at different levels of complexity, instead of a single model, and consider the range of results you obtain as part of model uncertainty.
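The AIC comparison itself is simple arithmetic: AIC = 2k − 2·ln(L), where k is the number of estimated parameters and ln(L) the maximized log-likelihood, and the model with the lower AIC is preferred. The log-likelihoods below are invented for illustration.

```python
def aic(log_likelihood, k):
    """Akaike Information Criterion: 2k - 2*ln(L).  Lower is better."""
    return 2 * k - 2 * log_likelihood

# Hypothetical fits: the complex model fits slightly better, but its
# extra parameters cost more than the improvement in fit is worth.
simple_aic = aic(log_likelihood=-120.0, k=2)   # e.g. a scalar model
complex_aic = aic(log_likelihood=-118.5, k=6)  # e.g. a stage-structured model
preferred = "simple" if simple_aic < complex_aic else "complex"
```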


Internal inconsistency (Not paying attention to error messages)

A common mistake is to ignore signs that some aspects of the model may be inconsistent with each other. In the case of RAMAS GIS, these signs include a large number and variety of error and warning messages. Some of these messages are encountered while entering model parameters; others appear while running a simulation. An error message indicates a potential mistake in your model, and such warnings are one of the advantages of using a professional program like RAMAS. So, if you see such a message, read about it in the help files to see what it means, to determine whether it requires corrective action, and to learn what options are suggested for correcting it.

Another sign of inconsistency is that a parameter you had entered earlier becomes unavailable for editing (the parameter box turns gray, and you cannot change the parameter). This indicates that the program is ignoring the parameter, usually because of the value of some other parameter (e.g., if the local catastrophe probability is zero, the local multiplier that determines the catastrophe's impact is irrelevant and is ignored). Check each set of input parameters (all items under the Model menu) to see which parameters are not available for editing.

Another way to check for inconsistencies is to examine the various graphs and tables the program uses to summarize the input data. For this, go to the Populations dialog (under the Model menu) and click on the "Display" button, which lets you see a variety of input summaries for each population.

If you are programming your own model (or using the "user-defined density dependence" feature in RAMAS Metapop), internal model inconsistencies may be the most difficult mistakes to notice and correct. Two strategies may help: First, start from a very simple version of your model, and gradually add more complex features, running the model at each step and keeping an eye on any counter-intuitive results. Second, compare your results to those from a model developed by someone else that has similar features.




