List of common mistakes in population modeling
When presenting the results of a modeling effort, it is important to list the assumptions explicitly. Modelers know the assumptions behind their models, and many assumptions are implicit in a model's description; however, those who use the results or evaluate the model may not realize what these assumptions are.
Uncertainty is a prevalent feature of ecological data. It includes natural variability (which can be modeled and translated into risk estimates using a stochastic model), as well as model and parameter uncertainty (which results from measurement error, lack of knowledge, model misspecification, and biases). If data for a particular model parameter are unavailable or uncertain, parameter uncertainty can be incorporated by using ranges (lower and upper bounds instead of point estimates). Uncertainty in the structure of the model can be incorporated by building multiple models (e.g., with different types of density dependence).
There are various methods of propagating such uncertainties in calculations and simulations (Regan et al. 2002; Tucker and Ferson 2003). One of the simplest methods of incorporating model and parameter uncertainty in population models is to build best-case and worst-case models. A best-case (or optimistic) model combines the lower bounds of parameters that have a negative effect on viability (such as variation in survival rate) with the upper bounds of those that have a positive effect (such as average survival rate). A worst-case (or pessimistic) model uses the reverse combination of bounds. Together, the results of these two models give a range of estimates of extinction risk and other assessment endpoints. This allows the users of the PVA results (managers, conservationists) to understand the effect of uncertain input, and to make decisions with full knowledge of the uncertainties.
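As a minimal sketch of this bounding approach (a toy scalar model, not RAMAS; the lognormal growth process and all parameter bounds below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def risk_of_decline(mean_r, sd_r, n0=100.0, threshold=50.0, years=20, reps=2000):
    """Estimate P(population falls to or below `threshold` within `years`)
    under normally distributed variation in the log growth rate."""
    fell = 0
    for _ in range(reps):
        n = n0
        for _ in range(years):
            n *= np.exp(rng.normal(mean_r, sd_r))
            if n <= threshold:
                fell += 1
                break
    return fell / reps

# Hypothetical (lower, upper) bounds for two uncertain parameters:
mean_r_bounds = (0.00, 0.04)  # average growth rate: positive effect on viability
sd_r_bounds = (0.10, 0.25)    # variation in growth rate: negative effect

# Best case: upper bound of the "good" parameter, lower bound of the "bad" one.
best = risk_of_decline(mean_r=mean_r_bounds[1], sd_r=sd_r_bounds[0])
# Worst case: the reverse combination of bounds.
worst = risk_of_decline(mean_r=mean_r_bounds[0], sd_r=sd_r_bounds[1])

print(f"risk of decline: {best:.2f} (best case) to {worst:.2f} (worst case)")
```

The pair of risk estimates brackets the plausible outcomes, which is the range handed to the users of the results.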
The uncertainties can also be used in a sensitivity analysis. When assessing impacts, it is important to prevent uncertainties from obscuring the impact to be detected. For more information, see the "Measurement error and Uncertainty" topic in the RAMAS Metapop help file or manual.
Although deterministic measures such as predicted population size and population growth rate may be useful under certain circumstances, variability is such a pervasive feature of natural populations that ignoring it almost always leads to an incomplete picture of the state of the population and an incorrect prediction of its future. When a model includes variability, deterministic results such as future population size are unreliable and must be accompanied by measures of their variability (such as variances or percentiles). They may also be irrelevant: if a population is subject to variability and has a substantial risk of going extinct in the near future, neither the estimated long-term growth rate nor a point estimate of future population size is relevant. In such cases, probabilistic results such as risk of decline may be more relevant.
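To illustrate how a single deterministic projection conceals the spread of outcomes, here is a minimal stochastic projection (not a RAMAS model; the growth-rate distribution and all parameter values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n0, years, reps = 100.0, 20, 5000
mean_r, sd_r = 0.01, 0.3  # hypothetical mean and SD of the log growth rate

# Deterministic projection: a single point estimate of future size.
deterministic = n0 * np.exp(mean_r * years)

# Stochastic projection: many replicate trajectories of the same model.
final = n0 * np.exp(rng.normal(mean_r, sd_r, size=(reps, years)).sum(axis=1))

p10, p50, p90 = np.percentile(final, [10, 50, 90])
print(f"deterministic projection: {deterministic:.0f}")
print(f"stochastic 10th/50th/90th percentiles: {p10:.0f} / {p50:.0f} / {p90:.0f}")
```

With this much variability the 10th and 90th percentiles differ by more than an order of magnitude, so reporting the deterministic number alone would be misleading.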
Very small populations may be subject to Allee effects, inbreeding depression, social dysfunction, or other effects that are often difficult to model because of lack of information and the uncertainties associated with these factors. Thus, model predictions for very small population sizes are not reliable, and the results are more reliable if risk of decline (rather than total extinction) is used. Accordingly, results should be expressed as the probability that the population size falls to or below a critical level (e.g., one at which social dysfunction or other severe effects set in), say, 20, 50, or 100 individuals. Estimating risks of decline rather than extinction may also make the results more sensitive to the effects of a simulated impact.
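Such risks of decline can be read off simulated trajectories by recording the lowest size each replicate reaches; this sketch uses a hypothetical lognormal growth model with made-up parameter values:

```python
import numpy as np

rng = np.random.default_rng(2)
n0, years, reps = 200.0, 20, 5000
mean_r, sd_r = -0.01, 0.2  # hypothetical log-growth-rate mean and SD

# Replicate trajectories of log population size.
steps = rng.normal(mean_r, sd_r, size=(reps, years))
log_n = np.log(n0) + np.cumsum(steps, axis=1)
min_n = np.exp(log_n.min(axis=1))  # lowest size reached in each replicate

# Risk of falling to or below each candidate critical level.
risks = {t: float(np.mean(min_n <= t)) for t in (20, 50, 100)}
for threshold, risk in risks.items():
    print(f"P(falling to or below {threshold} individuals): {risk:.2f}")
```

By construction the risk increases with the threshold, and the higher thresholds keep the model away from the very small sizes where its assumptions break down.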
When the model includes multiple populations, there are two different types of thresholds you can set: (1) an extinction threshold for the whole population (in the Stochasticity dialog), and (2) a local threshold for each population. See the help files and the manual for how to set these thresholds and for the difference between the two.
Results are usually more reliable if they are relative (e.g., "Which option gives higher viability?") rather than absolute (e.g., "What is the risk of extinction?"). As discussed elsewhere, relative results may not be as sensitive to uncertainties in the data, even in cases where those uncertainties result in large uncertainties in the absolute results (e.g., see Akçakaya & Raphael 1998).
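A sketch of this point: the following compares two hypothetical management options across a range of values of an uncertain parameter (here, the variability of the growth rate). The absolute risks shift considerably, but the ranking of the options does not; all values are made up:

```python
import math

def risk(r, sigma, n0=100.0, threshold=50.0, years=20):
    """P(final population size <= threshold) under lognormal growth:
    log final size is normal with mean r*years and SD sigma*sqrt(years)."""
    z = (math.log(threshold / n0) - r * years) / (sigma * math.sqrt(years))
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

r_A, r_B = 0.02, 0.00  # option A yields a slightly higher growth rate

# sigma is uncertain; sweep its plausible range.
for sigma in (0.1, 0.2, 0.3):
    a, b = risk(r_A, sigma), risk(r_B, sigma)
    print(f"sigma={sigma}: risk A={a:.3f}, risk B={b:.3f}  (A better: {a < b})")
```

The absolute risk of decline under option A varies many-fold across the plausible values of sigma, yet "A is better than B" holds for every value, so the relative conclusion survives the uncertainty.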
Sensitivity analysis identifies important parameters: those that, if known with higher precision, would decrease the uncertainty in model results to the largest extent. The importance of a parameter in determining viability depends both on the range of plausible values for that parameter (its current uncertainty) and on practical limitations (such as cost considerations).
Another use of sensitivity analysis is determining the most effective management action. This is often done by evaluating the sensitivity of the model results to each parameter. However, most management actions change more than one parameter. For example, an effort to increase the survival of newborns affects both the first survival rate and the fecundity in a matrix model based on a pre-reproductive census, and it most likely affects the survival of other age classes as well. In such cases, it is better to evaluate whole-model sensitivity (with respect to management actions) instead of parameter-by-parameter sensitivities (see Akçakaya et al. 1999 and Akçakaya 2000a).
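The difference can be sketched with a hypothetical three-age-class Leslie matrix based on a pre-reproductive census (all rates are made up): perturbing only the first survival rate understates the effect of a management action that also raises newborn survival, which enters the fecundity row.

```python
import numpy as np

def growth_rate(mat):
    """Finite rate of increase (lambda): dominant eigenvalue of the matrix."""
    return float(max(abs(np.linalg.eigvals(mat))))

def build(s0, s1, s2, m2=1.2, m3=2.0):
    """Leslie matrix for a pre-reproductive census: newborn survival s0
    is embedded in the fecundity (first) row."""
    return np.array([
        [0.0, s0 * m2, s0 * m3],  # fecundities = maternity x newborn survival
        [s1,  0.0,     0.0],      # survival from age class 1 to 2
        [0.0, s2,      0.0],      # survival from age class 2 to 3
    ])

base = growth_rate(build(s0=0.5, s1=0.7, s2=0.8))

# Parameter-by-parameter sensitivity: raise only the first survival rate s1.
only_s1 = growth_rate(build(s0=0.5, s1=0.77, s2=0.8))

# Whole-model sensitivity: the same action also raises newborn survival s0,
# which changes every fecundity entry at the same time.
action = growth_rate(build(s0=0.55, s1=0.77, s2=0.8))

print(f"lambda: {base:.3f} (base), {only_s1:.3f} (s1 only), {action:.3f} (whole action)")
```

Evaluating the action as a joint change in all affected entries gives a larger (and more realistic) effect on lambda than the single-parameter perturbation.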