Model-Based Environmental Decision-Making

John Doherty
Watermark Numerical Computing

November 2010

Support for the writing of this document was provided by South Florida Water Management District.

Preface

This document is an abridged form of a much larger document entitled Methodologies and Software for PEST-Based Model Predictive Uncertainty Analysis. The latter document presents theory, philosophy and worked examples of the use of PEST and its utility support software in extracting information from real-world datasets, and in quantifying the uncertainty of predictions made by environmental models using state-of-the-art techniques, many of which are unique to PEST. It also discusses the roles that models can and should play in environmental management, these following from recognition of what models can and cannot achieve in that context, and from the fact that their utility is maximised through the use of PEST and its support software. The present document retains only the philosophy and theory from the original document. The original document, together with all support files, can be downloaded from the following web site:

www.pesthomepage.org

John Doherty
November, 2010

Table of Contents

1. Introduction
   General
   This Document
   Case Studies
2. What will happen if...?
   Introduction
   Making a Decision
   Environmental Management
   Risk
   Hypothesis-Testing
   Reducing Model Augmentations to Uncertainty
   The Scientific Method
   Summary
3. Models, Simulation and Uncertainty
   Expert Knowledge
   What a Model can Provide
   What an Uncalibrated Model can Provide
   Linear Analysis
   Exercises
4. Getting Information out of Data
   History-matching
   Bayes Equation
   Figure 4.1 Schematic representation of Bayesian analysis.
   Calibration
   The Null Space
   Regularisation
      General
      Tikhonov Regularization
      Subspace Regularization
      Hybrid Regularization
      Manual Regularization
      Structural Regularization
   Some Equations
   Structural Noise
   Exercises
5. How Wrong Can a Prediction Be?
Linear Analysis
   Error and Uncertainty
   The Predictive Error Term
   Linear Parameter and Predictive Uncertainty Analysis
      Parameter Uncertainty
      Predictive Uncertainty
   Linear Parameter and Predictive Error Analysis
      Parameter Error
      Predictive Error
   Over-Determined Parameter Estimation
   Derived Quantities
      General
      Parameter Contributions to Predictive Uncertainty and Error Variance
      Data Worth
   Exercises
6. How Wrong can a Prediction Be?
Nonlinear Analysis
   Error and Uncertainty
   Constraints
   Well-Posed Inverse Problems
      General
      Constrained Maximization/Minimization
      Calibration-Constrained Monte Carlo
   Ill-Posed Inverse Problems
      General
      Constrained Predictive Maximization/Minimization
      Null Space Monte Carlo
   Exercises
7. Hypothesis-Testing and Pareto Methods
   Where are we at?
   Where do we go from here?
   The Scientific Method
   The Role of Model Calibration
   Pareto Concepts - Model Calibration
   Figure 7.1 The Pareto front as it applies to the model calibration process.
   Pareto Concepts - Model Prediction
   Figure 7.2 The Pareto front as it applies to model-based hypothesis-testing.
   Pareto Methods - Some Final Words
   Exercises
8. Conclusions
9. References
Appendix 1. PEST Utilities
Appendix 2. PEST Groundwater Data Utilities

1. Introduction

General

It is my hope that this document is more exciting than its title. It has been written with this intent, though in the eyes of many it will no doubt fail in this regard. After all, it is not a detective story, nor even a history of a journey of discovery. Yet, in some respects it is both of these. Part of the aim of this document is to help modellers, and those who make decisions on the basis of modelling, to look at modelling through fresh eyes. It is written against a background of what the author sees as frequent and expensive misuse of what is fundamentally a very useful technology. Sadly, the costs of this misuse can be high. These costs include human resources that are wasted on expensive yet fruitless modelling exercises that do not provide the insights into the future that were promised as justification for their construction. However even greater costs are incurred by making decisions on the basis of model outcomes that are thought to provide robust insights into the future but that, in fact, provide no such thing.

Let us briefly look at the context in which most environmental modelling takes place. A decision of some importance must be made. The outcome of this decision is that someone may invest a great deal of money in a venture that has the potential to cause damage to someone else's investment, with this damage being propagated through environmental pathways. Or damage may be inflicted on the environment itself.
Alternatively, someone may be denied the opportunity to make such an investment because it is judged that the consequences of this investment will be damage inflicted on other parties, or on the environment. The costs of modelling are normally small compared with those associated with the projects whose implementation and design are based on modelling outcomes. This is where the real cost of less-than-optimal modelling is incurred. And that cost may be very large indeed. In recognition of this, the topic of model predictive uncertainty analysis is attracting interest at a rapidly increasing rate.

The subject of model uncertainty has never been absent from the modelling literature. However it is also fair to say that it has never been a headline topic. It has always maintained the interest of some, but has never received the cult status that some other topics seem to have received from time to time. Even a casual inspection of the academic literature reveals large differences in the means through which model predictive uncertainty is explored by different groups of researchers. In some cases there is considerable overlap between these approaches. In other cases different approaches appear to have very little in common. Nowhere does the divide appear to be greater than between approaches taken by groundwater and surface water modellers; even the vocabularies used by these groups are different. And just to add to the confusion, the mathematics of uncertainty often appears complicated. Where this is superimposed on the mathematics of simulation and published as a paper in an academic journal, that paper is sure to be given a wide berth by all except the most learned spectators.

Lately, however, much of the growth in interest in this topic appears to be coming from outside the academic community. It appears to be coming from those who must make decisions, or who must convince others of the benefits of decisions that have already been taken.
Inevitably, a model has formed part of the decision-making process; these days, that is almost inescapable. Almost inevitably, the decision will be challenged by individuals or groups who see themselves as disadvantaged by the decision. In mounting the challenge, the focus of attack will almost certainly be the model. The model must therefore be defensible.

In the author's opinion very few models are defensible - at least not in the way that they are traditionally defended. This is an outcome of the fact that in most cases there is no mathematical reason to expect that environmental models can live up to the expectations that are placed upon them. Nevertheless a model will often survive the onslaughts of its detractors, either because the instrument of attack is an equally indefensible model, or because no better alternative is available for scientifically-based decision-support. In the heat of battle the rhetoric of modelling, rather than the science of modelling, often decides the contest. This is partly due to the fact that there is no metric by which to independently judge the worth of a model, or the superior worth of one model over another. Where metrics are put forward by proponents of one side of an argument they are often spurious, with explicit reliance often being placed on whether one model is better calibrated than another, and implicit reliance often being placed on the aesthetic appeal with which model outcomes are presented.

In recognition of this unsatisfactory state of affairs, battle-hardened decision-makers and managers are rightly turning their attention to the concept of model uncertainty. In part this, like all other human behaviour, has its roots in self-interest; where models are built to support the making of expensive and controversial decisions, the state of defensibility of the primary mechanism for decision-support must be known. So too must the vulnerabilities of the models that are used to attack it.
In part, however, it also arises from a growing feeling, born of witnessing too many occasions on which reality has recklessly gone its own way independently of model predictions, that managers are being sold a lemon when they agree to pay for an expensive model. There is a growing realization that models are not quite the predictive wonders that they are often made out to be.

So who is to blame for this situation? No-one in my opinion. Decision-makers have wanted predictive certainty since decision-makers first existed. In the past they grasped the straws that were available to them at the time - oracles, signs, astrology, and other tools of the mystical trade. Today those straws are models. And modellers have been eager to please managers, or have evolved through natural selection to be eager to please managers, by presenting them with oracles; for career extinction awaits those who do not.

However the times, they are rapidly changing. As an industry, we are on the threshold of a paradigm shift in the way we build and use models. It is the growing recognition, born of painful experience, that model predictions may be seriously wrong that has led us to this point. This presents us with many dilemmas, the first of which is, of course, how to calculate the potential for wrongness that is associated with predictions made by a model. However as soon as this problem is addressed, an even greater dilemma awaits us. It is how to use models as a basis for decision-making when it is openly admitted that they cannot be construed as instruments for divining the future. While it is the intention of this document to address the first of these issues rather than the second, neither of them can be addressed in isolation.
Hence, instead of looking at the issue of uncertainty as yet another layer that must be superimposed on the existing conceptual edifice that has formed the basis for model deployment up until now, this document begins by briefly discussing the place that models occupy in the decision-making process. This forms the context in which they must be used. It is thus the context to which their performance must be tuned, and the context in which their imperfections must be recognized. This is also the context in which uncertainty must be embraced as an inevitable part of looking into the future, and as an inseparable aspect of any decision-making process.

This Document

This document is not meant to serve as a comprehensive review of work that has been carried out to date in the field of model predictive uncertainty analysis, in spite of the fact that reference is made to some of it. Nor does it purport to provide a comprehensive mathematical treatment of model predictive uncertainty analysis, despite the fact that some equations are presented. Instead, it intends to achieve the following.

- Provide an overview of impediments to our ability to predict the environmental future;
- Show that while we cannot be certain about the future, the magnitude of our uncertainty is, to at least some extent, quantifiable;
- Show how uncertainty can be reduced through appropriate use of simulation software in partnership with software that facilitates the flow of information and ideas from environmental data and user expertise to these simulators;
- Illustrate that, while there is a theoretical lower limit to the uncertainty associated with predictions of different types at a specific study site, that limit may be difficult to attain because of model imperfections and practical computing requirements;
- Identify strategies that can be used to approach the theoretical lower limit of uncertainty for a particular prediction at a particular site.
Practical demonstrations of concepts discussed herein will be provided. Many of the following chapters conclude with worked examples based on two simple models used in conjunction with programs of the PEST suite of software. Files for these exercises are supplied. Hence, in addition to the above roles, this document also provides a tutorial on use of PEST-suite programs in analysing model predictive uncertainty.

It is important to point out, however, that an over-riding consideration in writing this document has been that it be easy to read for experts and non-experts alike. As stated above, while some equations are presented, the use of mathematics is kept to a minimum. Where equations are presented, it is not essential that they be understood - only that the principles that underlie them be understood. Similar considerations hold for the practical exercises. A reader of this document may ignore them altogether if he/she wishes, and nevertheless learn much about approaches to model predictive uncertainty analysis that are encapsulated in the PEST suite of software. Alternatively, a reader may wish to increase his/her understanding of PEST-suite software by simply reading the example descriptions and glancing at the files to which these descriptions pertain. Or a reader may wish to follow all of the instructions provided through the examples, thereby acquiring maximum knowledge of the workings of PEST and its ancillary support software in the uncertainty analysis context.

This document is organised as follows. The remainder of this chapter provides a brief description of the two models on which practical examples used throughout this text are based. Chapter 2 attempts to set the context for the discussions that follow by examining what is required of a model when it is used as a basis for environmental decision-making.
It attempts to take a somewhat different view of modelling from that which often explicitly or implicitly accompanies model usage at the present time. In particular, the role of a model, when used in conjunction with appropriate support software, as an instrument through which scientific hypothesis-testing can be implemented is emphasized. An incapacity to reject the hypothesis that a particular management strategy may have unwelcome consequences may constitute grounds for making the decision to implement an alternative management strategy.

Chapter 3 looks at what simulation of environmental processes as they operate at a particular study site can achieve. It makes the point that a model can never promise a prediction that is correct. However what it can aspire to do is guarantee that the correct prediction will lie within the interval of predictive uncertainty that it provides, and that this uncertainty approaches its theoretical lower limit given currently available information. Unfortunately, however, this lower limit may not be attainable because of the imperfect and simplistic nature of models as simulators of environmental processes. Model imperfections result in misdirection of information that is resident in historical system datasets. This, in turn, can lead to an increased potential for model predictive error that must be accounted for when using a model to calculate predictive confidence intervals.

Chapter 4 explores the concept of model calibration. It is demonstrated that calibration does not endow a model with an ability to provide the right answer when it is used to make a prediction of future environmental behaviour. It may, however, reduce the potential for error associated with one or a number of model predictions; alternatively it may not. In either case, calibration provides a pathway through which information can flow from data gathered at a study site to a simulator of environmental processes for that site.
It is demonstrated that calibration also provides insights into model inadequacies, and the effects of these inadequacies on the model's ability to reproduce past and future environmental behaviour. A distinction is made between model predictive uncertainty analysis and model predictive error analysis. While the two are often used interchangeably, the potential for predictive error is usually higher than the innate uncertainty of a prediction; uncertainty is an outcome of information inadequacy, while potential for error includes both uncertainty and the effects on model predictions of a model's flawed capacity to simulate environmental processes.

Chapter 5 discusses linear analysis of model predictive uncertainty and error. It is shown that while linear analysis can only be approximate because the relationship between model outcomes and model parameters is in fact nonlinear, linear analysis can nevertheless provide some useful insights. In particular, a modeller can rapidly assess the extent to which data inadequacy and model imperfections detract from a model's ability to predict future environmental behaviour. In addition to this, the worth of existing or yet-to-be-acquired data can be assessed in terms of its ability (or otherwise) to reduce the uncertainty associated with specific model predictions. Such analysis can therefore provide a sound basis for investment in acquisition of further data at a particular study site.

Chapter 6 treats nonlinear predictive uncertainty analysis. While far more general than linear analysis, as it does not require an assumption of linear model behaviour, this generality sometimes comes with a heavy computational cost.
Nevertheless, through the use of methodologies such as the null space Monte Carlo scheme provided by PEST, the uncertainty associated with model predictions can be explored with reasonable levels of computational efficiency, even where parameter variability is subject to calibration constraints, and even where the number of parameters attributed to a model is made purposefully large in order to preclude underestimation of the extent of predictive variability.

Chapter 7 explores in detail one particular form of nonlinear uncertainty analysis that implements more-or-less directly the type of hypothesis-testing that forms the heart of the scientific method. Furthermore, it does this in a way that makes enough information available to the modeller for him/her to be capable of exercising necessarily subjective (though informed) expert judgement when assessing the likelihood or otherwise of a particular future environmental occurrence. As such this methodology may find a useful role in contexts of collaborative decision-making that are becoming more and more widespread as attempts are made to reconcile the interests of different stakeholder groups when deciding on ways to manage the environment in a manner that maximises its benefit to all.

Chapter 8 provides a short conclusion.

Case Studies

This section of the original document has been omitted.

2. What will happen if...?

Introduction

The purpose of this chapter is to set the scene for chapters that follow. It provides a brief description of the decision-making context in which many models operate. It is only after examining this context that optimal usage of models in this context can be pursued. Before proceeding, however, it must be acknowledged that models are built for many reasons.
Sometimes they are built for research purposes where their primary role is to allow a scientist to experience things that may otherwise be invisible, and to thereby gain a greater understanding of the interplay of the many different, and often competing, processes that determine the environmental future. The focus of the present document is not on this type of modelling. Rather the present focus is on models that underpin environmental management, and on which basis important management decisions must be made. This is not to say that complex process models have no place in decision-making; obviously, the better that a practitioner understands environmental processes as they are presumed to operate at his/her study site, the better is his/her ability to discern good management practice from bad management practice at that site. However in this document the focus is on models whose outcomes provide direct and quantitative inputs to a decision-making process. Such models are therefore built as a means for predicting the future behaviour of a specific system under existing, or yet-to-be-tested, management strategies.

Making a Decision

Making a decision would be easy if the repercussions of that decision were perfectly known. This, of course, entails the existence of some means of looking into the future - as it would unfold under existing management strategies and as it would exist under altered management strategies. Decision-making, as it is implemented in the political and economic world, recognizes that the future cannot be perfectly known. However through careful analysis of all available data, combined with an understanding of the workings of a system, a course of action can often be chosen that maximizes the probability of some good thing happening, or minimizes the probability of some bad thing happening. These are really the same thing when it is considered that failure to maximize a good thing constitutes an opportunity cost that should ideally be minimized.
Unfortunately, environmental decision-making is often seen as a process that has more in common with engineering design than it does with notions of minimization of risk in the context of a system that is poorly understood. Engineering design is often based on the premise that perfect predictions of future system behaviour can in fact be made based on a complete mathematical characterisation of the system. System performance measures can be proposed; design is then targeted at satisfying those measures. Even where a system is well understood however, good design often requires compromise - for example between cost and performance, or between this aspect of performance and that aspect of performance. Where compromises must be made, optimality must be defined, or at least explored. Of the continuum of designs that may allow a system to work to the satisfaction of all concerned, the one that is eventually chosen is that which achieves the highest level of satisfaction from as many points of view as possible, some of which may be conflicting, and many of which may include subjective considerations. This illustrates an important point. Even when used in a laboratory setting where a great deal is known about system processes and about the properties of materials on which they operate, models are rarely used on their own in contexts of engineering design. Mostly they are employed in conjunction with other software that optimizes design, taking into account system knowledge as it is encapsulated in the model. It does this by reconciling conflicting design requirements in the most satisfactory way possible. The model thus forms part of a more complex software environment, part of which functions as a kind of model supervisor which employs the outcomes of many model runs to formulate designs that attempt to achieve optimality, in whatever way this is defined. 
Environmental Management

While environmental management may sometimes be seen as closely related to engineering design, it probably has more in common with civic and economic management (see, for example, Orrell, 2007). This is because it is rarely, if ever, possible to construct a model that can predict the environmental future at a given site, either under management practices which prevail at present, or under those which may be proposed for the future. Reasons for this include the following.

- The complexity of environmental processes is virtually unbounded. Consider for example the plethora of chemical reactions that affect many contaminants, especially those of agricultural origin, as they make their way to and through underground and surface waterways.
- The properties of materials from which environmental systems are built are often poorly known. These properties may vary by several orders of magnitude over short distances (rock and soil permeabilities being a case in point).
- Important components of system geometry can often be inferred only vaguely. This applies particularly to the disposition of geological layering, and of fractures and shear zones that may intersect and offset this layering.
- While it may be possible to mathematically characterise environmental processes that are operative at a point or through a column, mathematical characterisation of these processes as they operate over the larger areas that are the focus of management decisions is often impossible. Though considerable attention has been given to it in the academic literature, the problem of upscaling is far from solved; many questions, and much debate, still attend the manner in which this should be done. Issues include the following. Should point-scale equations be applied to large areas (as they are in many so-called physically-based models)? If so, how should hydraulic properties as they pertain to highly nonlinear processes be averaged over a large area?
If not, how should hydraulic properties be represented in modified equations? In either case, what relevance (if any) do point measurements of system properties have to those used by a large-scale model?

- Measurements of system state from which system properties can be inferred through calibration are often scarce. For example wells in which groundwater head measurements are made often tap shallow layers rather than deep layers, and are often concentrated in some parts of a model domain while being sparse in other parts. Rivers and streams are gauged at only a small number of locations. Water quality measurements in both contexts are often sporadic, and often exhibit a high degree of temporal variability.
- Historical land and water use is often only approximately known. Nevertheless such usage figures are often required by models during their calibration phase.
- Other important system inputs, both historical and present-day, are often only approximately known. This applies in particular to the spatial disposition of rainfall throughout a watershed.
- Contaminant source strengths and locations, both industrial and agricultural, are extremely uncertain.
- Numerical problems often attend the simulation of complex environmental processes. In general, the greater the degree to which a numerical model attempts to be physically based, the greater will its run time be, and the greater will be the degree to which it may fall prey to numerical instability. Both of these reduce a model's ability to be used in conjunction with software that facilitates calibration and uncertainty analysis on the one hand, and system design optimisation on the other hand.

The last of the above points deserves further consideration. In a typical engineering design scenario, system properties are well known.
In the environmental modelling context the areal distribution of system properties must be inferred from scarce point measurements of some of them, and/or back-calculated from historical measurements of system state. Thus solution of an ill-posed inverse problem is a fundamental aspect of environmental model usage. The extreme heterogeneity of most earth systems makes it very unlikely that exact values of system properties can be inferred throughout the system. Furthermore, as the properties and processes represented within a model are often lumped or averaged analogues of their real-world counterparts, the salience of point measurements of system properties to parameters that represent these properties in a model is often questionable. Repercussions of this include the following.

- An environmental model must serve more than one purpose. While being used to make predictions on which environmental management must be based, it must also be used to extract as much information as possible from historical site data. If this is to be done with maximum efficacy, it requires software support. Hence, as stated above, a model must be capable of being deployed in partnership with other software that runs it repeatedly as part of a data extraction process.
- The complexity of real-world systems is such that even large calibration datasets cannot hope to provide unique estimates of all system properties at all places within a model domain. In general, estimates of averaged properties, or combinations of properties, will be available through solution of what may be a complex and ill-posed inverse problem.
- Even estimates of broad-scale system properties will be degraded through: lack of complete knowledge of current and historical system inputs; inadequacies in a model's ability to provide precise simulation of environmental processes; and the fact that measurements of system state, from which system properties are inferred, are accompanied by measurement error.
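The non-uniqueness that makes such inverse problems ill-posed can be illustrated with a small numerical sketch. The toy Jacobian matrix and parameter values below are entirely invented for illustration; the point is general. When a linearised model has more parameters than observations, the singular value decomposition of its Jacobian exposes a null space: combinations of parameters that the calibration data cannot constrain at all.

```python
import numpy as np

# Hypothetical linearised model: 2 observations depend on 4 parameters.
# With fewer observations than parameters the inverse problem is ill-posed.
J = np.array([[1.0, 2.0, 0.5, 1.0],
              [0.5, 1.0, 2.0, 0.0]])

# SVD exposes the null space of J: the trailing rows of Vt (beyond the
# number of nonzero singular values) span parameter combinations to which
# the observations are completely insensitive.
U, s, Vt = np.linalg.svd(J)
null_space = Vt[len(s):]

p_calibrated = np.array([1.0, 1.0, 1.0, 1.0])
p_alternative = p_calibrated + 5.0 * null_space[0]   # very different parameters

# Both parameter sets fit the (noise-free) observations equally well,
# so calibration alone cannot distinguish between them.
print(np.allclose(J @ p_calibrated, J @ p_alternative))   # True
```

Any prediction that is sensitive to a null-space parameter combination therefore remains uncertain no matter how well the model is calibrated; this is the mechanism that the null space Monte Carlo methods discussed later are designed to explore.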
From all of this it is obvious that it is not possible to construct a model that will provide accurate predictions of future system behaviour. Furthermore, the greater the extent to which a prediction is sensitive to non-inferable system properties, the lower will be the reliability of that prediction. In general, non-inferable properties will pertain to system detail, and/or to aspects of system behaviour that are rarely observed - or perhaps may never have been observed, if contemplated changes to a system are likely to take it to places where it has never been before. Unfortunately, these are the very types of prediction that are often of most interest from a management point of view. For example, water quality, including the nature and disposition of surface or subsurface contaminants, is often dependent on process and property details that can only be vaguely measured or inferred. The response of a system to extreme events (for example the height of a flood peak) is, by definition, sensitive to aspects of the system that have rarely been observed in the past, and whose measurement is subject to a high degree of uncertainty. The response of the system to sets of inputs that arise out of development, or to management plans that are put in place to ameliorate the deleterious effects of development, is dependent on aspects of system behaviour that may never have been experienced before. It is obvious, therefore, that an environmental model cannot be employed as an engineering design tool through which the means of achieving some agreed-upon management outcome is optimised, with achievement of the outcome never in doubt and with only the efficiency of its achievement remaining as the chief design consideration. If an environmental model tells us anything, it tells us that the future cannot be exactly known. Use of a model in the decision-making context, and the very design of the decision-making context itself, must be based on recognition of this fact.
Risk

An environmental model cannot predict the future. However this does not render it useless. While it cannot be used to predict what will happen, because of data and model inadequacies, it can often be used to discriminate between what can happen and what cannot happen. As the line between the two is most unlikely to be sharp, it may also be able to provide some indication of diminishing likelihood as the value of a prediction changes.
Many, if not most, environmental decisions are made in order to avoid an unwanted occurrence. Examples of unwanted occurrences include the following.
Where groundwater cleanup is undertaken, a remediation strategy must be such as to ensure that the concentration of a contaminant at management boundaries will not exceed a certain threshold.
Assurances must be provided to the public that the height of an imminent flood will be no greater than a level that is compatible with current evacuation boundaries.
Water allocated to irrigators at the start of a water year must be such as to guarantee that no water deficit occurs during the water year.
Alterations to land management practices must be such as to ensure that the concentrations of agricultural contaminants during periods of high or low flow (depending on the contaminant) do not rise above regulatory thresholds.
Obviously, the likelihood of an unwanted occurrence can be minimized by taking extreme measures to avoid it. However such measures may not be acceptable because the associated costs may be too great. These costs must be balanced against the cost of an unwanted occurrence, together with the risk of its occurrence. Thus if the cost of the occurrence is high, only a low occurrence likelihood is tolerable. Conversely, if its cost is not too great, a higher risk of its occurrence can be tolerated.
In a series of landmark papers (Freeze et al., 1990; Massmann et al., 1991; Sperling et al., 1992; Freeze et al., 1992), Freeze and his co-workers presented theory and examples in support of a suggested methodology for model-based decision analysis. At the heart of their methodology is an objective function defined as:

Φj = Σt=0,T [Bj(t) − Cj(t) − Rj(t)] / (1 + i)^t   (2.1)

where:
Φj = the objective function associated with alternative j in dollars;
Bj(t) = benefits of alternative j in year t in dollars;
Cj(t) = costs of alternative j in year t in dollars;
Rj(t) = risk of alternative j in year t in dollars;
T = time horizon in years;
i = discount rate as a decimal fraction.

Risk is defined through the equation:

R(t) = Pf(t)Cf(t)γ(Cf)   (2.2)

where:
Pf(t) = probability of failure in year t (decimal fraction);
Cf(t) = costs associated with failure in year t (dollars);
γ(Cf) = normalized utility function related to risk aversion (decimal fraction ≥ 1).

The optimal decision is that corresponding to the maximum of the above objective function. Other decision strategies are also discussed in these papers. For example, the maximin decision criterion seeks to identify the least desirable consequence for each alternative under consideration, and then selects the alternative that leads to the best of these least desirable consequences. Alternatively, the minimax regret strategy selects the alternative whose maximum regret is smallest, where regret is defined as the price that must be paid for selecting a non-optimal alternative given perfect knowledge of system properties. The above authors point out that a model's contribution to the objective function of equation (2.1) is expressed only through the risk term. This then defines the role of the model in the decision-making process.
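To make equations (2.1) and (2.2) concrete, the calculation can be sketched in a few lines of Python. All benefit, cost, probability and discount figures below are invented purely for illustration; they do not come from the papers cited above.

```python
# Sketch of the decision objective function of equations (2.1) and (2.2).
# All monetary figures and probabilities are hypothetical illustrations.

def risk(p_fail, cost_fail, gamma=1.0):
    """Equation (2.2): risk = probability of failure x cost of failure x
    normalized utility function (gamma >= 1 expresses risk aversion)."""
    return p_fail * cost_fail * gamma

def objective(benefits, costs, p_fail, cost_fail, i, gamma=1.0):
    """Equation (2.1): discounted sum over the time horizon of
    benefits minus costs minus risk for one management alternative."""
    phi = 0.0
    for t, (b, c, pf) in enumerate(zip(benefits, costs, p_fail)):
        phi += (b - c - risk(pf, cost_fail, gamma)) / (1.0 + i) ** t
    return phi

# Two hypothetical remediation alternatives over a three-year horizon:
# A is expensive but reliable; B is cheap but carries a higher failure risk.
phi_a = objective(benefits=[0, 500, 500], costs=[800, 100, 100],
                  p_fail=[0.05, 0.05, 0.05], cost_fail=2000, i=0.04)
phi_b = objective(benefits=[0, 500, 500], costs=[300, 100, 100],
                  p_fail=[0.20, 0.20, 0.20], cost_fail=2000, i=0.04)

# The preferred alternative is that with the larger objective function.
best = max(("A", phi_a), ("B", phi_b), key=lambda kv: kv[1])
```

With these invented figures, the higher capital cost of alternative A is more than repaid by its lower risk term, illustrating how the model's risk assessment alone can tip the decision.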
Whether decision-making is based on formal analysis such as that presented in the above equation, or on more subjective considerations, the role of an environmental model as a means of assessing the probability of an unwanted occurrence is the same. It is not reasonable to ask a model to predict the future, for it cannot do this. The fact that all predictions of future environmental behaviour can only be made with uncertainty must be embraced. If it is to provide a scientific foundation to the decision-making process, all that can be asked of a model is that it be used to assess probability, and hence risk. Or to put it another way, a model must be used to explore whether, if a certain management action is taken, an unwanted occurrence can be avoided, or whether its occurrence is associated with a probability that is suitably low. In some circumstances it may be possible to extend the role of models in the decision-making process a little beyond this. If a model exposes the possibility of a bad thing happening, then it may also be able to show how that bad thing can happen. This may prompt the design of a suitable monitoring strategy for early detection of the untoward event. Implementation of that strategy may add to the cost of a certain management plan, while reducing the risk of that plan going awry. These terms can be incorporated, formally or informally, into the above decision equation.

Hypothesis-Testing

Two important roles played by environmental modelling when used in a decision-making framework emerge from the above discussion. The first is its role in assessing the probability of a future untoward occurrence, this being a probabilistic description of the confidence with which it can be said that a certain event will or will not happen. The second is its role in extracting information from existing site data, thereby reducing the range of possible predictive occurrences below that which would prevail in the absence of this data.
A model is able to provide these services because, ideally, it carries within it the entirety of our knowledge of a study site. This knowledge is encapsulated in the processes that it simulates, the boundary conditions that it implements, various aspects of system geometry that it features, and the range of parameter values that we allow it to possess. These aspects of model construction and usage can be related to the terms that appear in Bayes equation. Bayes equation can be written as:

P(k|h) = P(h|k)P(k) / P(h)   (2.3)

where P(k) describes the prior probability of model parameters, P(h|k) is a likelihood function calculated from the fit that a model provides to measurements of system state, P(h) is the probability of the measurements themselves, and P(k|h) describes the posterior probability of model parameters.
As the container for all of our expert knowledge, the model can be represented by the P(k) term of Bayes equation. The model's role in extracting information from site data is represented by the P(h|k) likelihood term. Its role in providing risk analysis in support of the decision-making process is represented by the posterior probability term P(k|h). This represents the probability distribution of parameters after all information has been taken into account. The probability distribution of any prediction, and consequently the risk associated with any untoward occurrence, is calculated from this term. This matter will be examined in greater detail in later chapters. For the moment, however, we will look at the role that a model plays in the decision-making process in a slightly more qualitative way. As we shall see, the more qualitative nature of this description does not degrade its validity, as in most cases of model deployment, qualitative assessment of the likelihood or otherwise of a future event is all that is available to us anyway. An environmental model can be considered to be a type of scientific instrument.
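A minimal numerical illustration of equation (2.3) may help fix ideas. The discrete candidate parameter values, the one-line stand-in "model" and the Gaussian likelihood below are all hypothetical; a real application would involve a real simulator and many parameters.

```python
import math

# Hypothetical discrete illustration of Bayes equation (2.3):
# P(k|h) = P(h|k) P(k) / P(h), for a single uncertain parameter k.

ks = [1.0, 2.0, 3.0, 4.0]          # candidate parameter values
prior = [0.1, 0.4, 0.4, 0.1]       # P(k): expert knowledge, sums to 1

def model(k):
    """Stand-in 'model': simulated system state for parameter k."""
    return 2.0 * k                  # purely illustrative relationship

h_obs, sigma = 5.0, 1.0             # one observation and its noise level

# Likelihood P(h|k): Gaussian misfit between model output and observation.
like = [math.exp(-0.5 * ((model(k) - h_obs) / sigma) ** 2) for k in ks]

# Posterior P(k|h): prior times likelihood, normalized by the evidence P(h).
evidence = sum(p * l for p, l in zip(prior, like))
posterior = [p * l / evidence for p, l in zip(prior, like)]
```

Note how the observation sharpens the prior: parameter values whose simulated state sits far from the measurement receive almost no posterior probability, even if expert knowledge did not rule them out.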
As such, it can be used to test the hypothesis that the occurrence of a certain future event is consistent with all that is known about a system. All that is known about a system is encapsulated in the design of a model of that system, in the range of parameters we allow that model to possess, and in the constraints on those ranges that emerge from the necessity that the model reproduce historical system behaviour, all of this being in accordance with Bayes equation. The hypothesis that a future event will occur given a certain management practice can be rejected if, at a certain confidence level, the occurrence of that event is inconsistent with some or all of this knowledge. Inconsistency is exhibited by an inability on the model's part to make this prediction unless it employs parameters that, in our view, are unrealistic (at a certain level of confidence), or that incur an unlikely (at a certain level of confidence) amount of misfit with historical observations of system state when the model is driven by historical stresses.
Using a model to test hypotheses is a conceptually straightforward undertaking. It is also an undertaking that is pivotal to the model's role in decision support. In practice, however, it is an undertaking that is more easily said than done. Reasons for this include the following.
The capacity of a model to encapsulate our knowledge often erodes its capacity to reduce posterior uncertainty through extracting information from historical datasets. Complex physically-based models are often employed because they provide appropriate receptacles for expert knowledge of processes and parameters, for they try to simulate, as well as possible, reality as we know it, based on system properties that we can measure. However such complex models often have (extremely) long run times, and a high penchant for numerical instability. Both of these make their use with high-end parameter estimation software difficult or impossible.
While complex models may allow us to represent the prior probability distribution of certain system properties with some degree of integrity, they make it almost impossible for us to represent other important aspects of prior uncertainty. For example, numerical grids are often designed specifically to accommodate the nuances of what we imagine to be the complex geology of three-dimensional systems, including the disposition of geological layers and faults, and the offsetting of layering and faults by other faults. While the locations of geological features that may be critical to the movement of underground water and contaminants may be only poorly known, making alterations to the disposition of these features to reflect their unknown status is often impossible without expensive and time-consuming redesign of the model grid or mesh. Practical difficulties can also be encountered in varying the geometric and hydraulic descriptors of certain boundary conditions (for example those related to rivers, streams and canals), and in introducing variability to factors affecting recharge processes (for example the location and timing of local ponding, temporal and spatial variations in diffuse vs. macropore recharge, etc.).
Parameters employed by a model cannot be represented with the same level of heterogeneity as that of hydraulic property variability in the real world. A lower limit is placed on model parameter variability by the model grid or element size in the case of discrete-element models. The algorithmic lumping of processes in many land use and surface water models places a lower limit on the spatial and temporal hydraulic property variability that can be represented in those kinds of models; at the same time it makes encapsulation of prior knowledge difficult because of the abstract nature of the parameters employed by these kinds of models when compared with the measurable hydraulic properties that they purport to represent.
Even where a physically-based model allows representation of heterogeneity at a cell-by-cell or element-by-element level, there are practical difficulties associated with this endeavour, especially where these parameters are subjected to adjustment through the model calibration process. Complex process models are often endowed with simplistic parameter fields to facilitate their calibration. However the use of a few parameters that represent average properties over many cells, or even of a moderate number of parameters based on devices such as pilot points, erodes the capacity of the model to represent fine-scale heterogeneity. Model outputs under both historical and predictive conditions may be compromised because of this. Avenues for representation of prior system knowledge acquired through point measurements of system properties are also partially blocked because of this.
No model is perfect. All outputs of all models bear the imprints of model imperfections. When assessing parameter likelihood through history-matching, this must be taken into account, as we will almost certainly need to tolerate a greater level of model-to-measurement misfit than would be suggested on the basis of measurement noise alone. But how much greater? This is something that can only be assessed subjectively. Alternatively, it may be possible to achieve a very good fit between model outputs and historical measurements of system state. But in achieving this fit, some parameters may need to assume unlikely values to compensate for model inadequacies. And if a model parameter has an abstract side to its nature, for reasons discussed above, how is "unlikely" defined for such a parameter? Furthermore, is parameter compensation for model inadequacies always a bad thing? If compensatory behaviour on the part of one parameter enhances a model's ability to replicate the past, may it not also enhance its ability to predict the future?
By keeping parameters strictly realistic, could we be shutting the door on information that would otherwise flow to a model and that would therefore make it a better predictor of (at least some aspects of) future system behaviour? Doherty and Welter (2010) show that the frustrating answer to this question is sometimes "yes" and sometimes "no", and that it will often not be possible to discriminate between the two. Even if every parameter employed by a model could be magically provided with its correct value (notwithstanding the abstract nature of many of them), its predictions would still be flawed because of the imperfect nature of a model's capacity to simulate every nuance of future system behaviour. The extent to which a prediction of decision-making interest is flawed will depend on many things. Generally, its proclivity for error will rise with the extent to which it depends on small-scale or extreme features of system behaviour, as these are the aspects of system behaviour that model imperfections are most likely to affect. These considerations leave us in a position that is not altogether satisfying. Having dispensed with the illusion that a model can be used to predict the future, and replaced it with the assertion that a model can be used to assign confidence levels to various occurrences that are salient to management decisions, we must now conclude that the confidence with which we can assign a confidence level is frustratingly small. Or, to put it another way, the confidence with which we can assert that a future untoward event will not occur is lower than that which exists in theory, because we simply cannot process all of the data that are available to us using the cumbersome scientific instrument that is an environmental model. The widths of predictive confidence intervals on which decision-making is based must therefore be augmented to take into account the bluntness of our scientific instrument.
However the magnitudes of these augmentations are almost impossible to calculate, so that their assessment will necessarily be somewhat subjective. Obviously they must be reduced as much as possible.

Reducing Model Augmentations to Uncertainty

Strictly speaking, the term "model-added uncertainty" has no meaning. A better term may be "penchant for error", or some similar phrase that would almost certainly include the word "error". This would reserve the word "uncertainty" for unambiguous use in describing the purer concepts that appear in Bayes equation, these being characteristics of the system itself (as described by our expert knowledge) and of the information contained in measurements of that system. The fact that our prior knowledge must be housed in a flawed receptacle, and that information must be extracted from historical measurements of system state through a flawed vehicle that compromises this information, could then be described using other terminology. To some extent we will keep the terms "error" and "uncertainty" separate throughout this document, and will calculate them in different ways. However it must not be forgotten that the two become inseparably mixed in practice when we use a numerical model to assess whether, and at what confidence level, the occurrence of an untoward event can be rejected. Meanwhile it must be remembered that, however it is characterized, the penchant for a model prediction to be wrong, even after steps have been taken to minimize its potential for predictive wrongness, is an outcome of two factors. The first is the inherent uncertainty of any prediction that it is required to make, as described by Bayes equation. The second is the fact that information pertinent to the making of that prediction must be extracted from historical data using a flawed instrument, and that predictions of future system behaviour are made with the same flawed instrument.
The situation is made even more complex, however, when it is recognized that a model's flawed status may compromise some predictions but not others. Furthermore, as Doherty and Welter (2010) show, some model predictions may benefit from having some of a model's flaws calibrated out, while the making of other predictions may suffer from such an exercise. Moreover, the prediction-tuning process may be different for different predictions. It follows that a model should not be built and calibrated in isolation from the predictions that are required of it, and that environmental processes at the same site may need to be simulated in different ways to provide optimal bases for different management decisions. Model-building can, in some ways, be seen as an optimisation process.
At one end of a continuum of approaches to environmental modelling are complex, physically-based models that attempt to provide realistic simulation of all aspects of a system's behaviour. This approach is seen by many as that which entails the highest level of scientific integrity, because it purports to provide suitable receptacles for all aspects of expert knowledge. This line of argument leads to the conclusion that a single complex model of a study area can be employed to make a plethora of predictions of many different types with as high a level of integrity as current expert knowledge and hard data allow. Hence it can provide a universal basis for decision-making at a particular study site. In fact, as has been discussed, such a model may provide a worthy repository for expert knowledge (including the uncertainties associated therewith), and thus encapsulate the prior probability term of Bayes equation. As such it can provide a basis for analysis of the uncertainty of any system prediction based solely on expert knowledge (and lack thereof), while ignoring the likelihood term of Bayes equation.
Presumably, such quantification of pre-calibration uncertainty would entail running the model countless times in order to undertake Monte Carlo analysis based on variation of the myriad parameters that such a complex model can employ. As such a model would probably have a high run time, parallelisation of this process would be essential. Unfortunately, a complex model may be a far-from-optimal device for extracting information from an historical dataset, because its inevitably high run time and penchant for numerical instability may render its use in conjunction with high-end inversion software impossible. Hence the likelihood term of Bayes equation may be higher than it needs to be. Thus the capacity for prior predictive uncertainty intervals to be reduced through history-matching may be reduced to almost zero, in spite of the fact that a wealth of information may reside in historical datasets. Predictive confidence intervals will therefore be wider than they need to be. The risk associated with untoward events may therefore be assessed as unnecessarily high, leading to unnecessarily conservative management. An alternative to use of a very complex model is use of a model that attempts to retain as much complexity as possible that is salient to the making of a prediction of interest, while abandoning non-salient complexity in the hope of decreasing model run time and increasing numerical stability. Unfortunately, such a model may not provide an optimal receptacle for expert knowledge. If the prior uncertainty of model parameters is appropriately increased to accommodate their limited capacity to be informed by expert knowledge, this may then inflate predictive uncertainty. However if the historical dataset is rich in information - information that can now be extracted from it - reduction of the likelihood term of Bayes equation may more than compensate for increase of the prior probability term.
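The pre-calibration Monte Carlo analysis described above can be sketched in a few lines. The one-line "simulator" and its log-normal hydraulic conductivity prior are hypothetical stand-ins; in practice the real model would be run, in parallel, for each parameter sample.

```python
import random

# Sketch of pre-calibration (prior) Monte Carlo uncertainty analysis:
# sample parameters from their prior probability distribution, run the
# model once per sample, and collect the prediction of interest.
random.seed(1)

def simulate(log10_K):
    """Hypothetical stand-in simulator: a prediction that depends on
    hydraulic conductivity K (a real model run would go here)."""
    return 10.0 / (10.0 ** log10_K)

# Prior (expert knowledge): log10(K) is normally distributed.
param_samples = [random.gauss(0.0, 0.5) for _ in range(2000)]
predictions = sorted(simulate(s) for s in param_samples)

# Empirical 90% prior predictive uncertainty interval
# (5th to 95th percentile of the sampled predictions).
lo = predictions[int(0.05 * len(predictions))]
hi = predictions[int(0.95 * len(predictions))]
```

The width of the interval so computed reflects expert knowledge alone; it is this prior interval that history-matching may (or may not) be able to narrow.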
Caution must be exercised, of course, as some of the information that is extracted from the historical dataset may be misdirected to parameters that play compensatory roles for model defects. If a critical model prediction is sensitive to parameters that are misinformed in this manner, it may inherit this error. On the other hand if, as Doherty and Welter (2010) show, the prediction depends on the same parameter combinations as do the model outputs used in the history-matching process, information that is resident in historical system measurements is probably directed to prediction-informative repositories; this will occur even though the parameters that constitute these repositories may not have as direct a relationship with the system properties after which they are named as a modeller would like, and hence cannot be as well informed by a modeller's expert knowledge as he/she would wish. However if the information that is resident in expert knowledge is small compared with that residing in the historical dataset, then this may not matter, because more will have been gained in terms of reduction of predictive uncertainty through reduction of the likelihood term than has been lost through raising of the prior probability term. Of course, if a model is too simple then fits between model outputs and field measurements will be poor and the likelihood term will be high. In addition to this, the likelihood term will be difficult or impossible to calculate because of the unknown stochastic characteristics of model-imperfection-induced misfit (often described by the term "structural noise"). Meanwhile the prior probability term will also be high, because the abstract parameters employed by a simple model provide poor receptacles for expert knowledge.
It is apparent from the above considerations that design of a model to underpin environmental decision-making requires conceptual solution of an optimisation problem that has at its heart the interplay between the two terms constituting the right side of Bayes equation. The solution to this problem can only be context-specific. It must entail extraction of maximum prediction-specific information from both user expertise and site data. Furthermore, it is not a foregone conclusion that a strategy that minimizes uncertainty for one prediction constitutes an optimal strategy for minimizing the uncertainty of another prediction. In particular, where a prediction is similar in nature and location to historical measurements of system state, a modelling approach that maximizes transfer of information from the measurement dataset to model parameters (even if some of these parameters must assume surrogate roles in the information-extraction process) will probably be optimal. On the other hand, where a prediction is of a distinctly different kind from those comprising the historical site dataset, it may be sensitive to aspects of the system that are ill-informed by that dataset. In this case the uncertainty of that prediction may have a direct dependence on the prior probability term of Bayes equation. A model will therefore be maximally effective in reducing the uncertainty of that prediction when it provides optimal receptacles for expert knowledge, and has a strong physical basis. It therefore follows that the attempted use of a single model to make all predictions of all kinds within a study area will probably result in a failure to reduce the uncertainty of any one prediction to anything like its theoretical lower limit. It may also make the analysis of posterior predictive uncertainty very difficult, this possibly resulting in underestimation of that uncertainty and therefore a failure to properly assess the probability of occurrence of unwanted events. 
The Scientific Method

Ultimately, those who use models to provide as sound a basis for scientific environmental management as possible should aspire to implement the scientific method. After all, to what other goal should they aspire? As will be discussed in later chapters, at its most basic level implementation of the scientific method comprises the proposal of hypotheses together with subsequent attempts to reject them based on all information at hand. In the environmental management context a hypothesis comprises the conjecture that something bad will happen, this being a management outcome that it is desirable to avoid. The information on which it may, or may not, be possible to reject this hypothesis is composed of both expert knowledge and of information that resides in historical measurements of the state of the system that are available for the study area. The history of scientific achievement is replete with stories of scientists who devised brilliant experiments to test their hypotheses and thereby provide hitherto unavailable insights into the natural world. Most of these experiments were targeted at the testing of individual hypotheses - not at the testing of all hypotheses. Their laboratory instruments were often blunt and cumbersome compared to the phenomenon being explored, particularly where these pertained to the nature of matter and of the atom. However through tuning these instruments to the problem at hand, and through focussing data acquisition and data processing so that it was maximally effective in falsifying, or failing to falsify, the particular hypothesis being tested, great advances in human knowledge were achieved. In many respects, scientific inquiry into the nature of an environmental system is little different. Our best tools are often models.
Despite the fact that their numerical inadequacies can make them blunt instruments, and despite the fact that data availability at a particular site may be scarce, it is nevertheless often possible to undertake incisive, prediction-specific inquiries of that data using a model - inquiries that may yield conclusions on which sound management can be based. This has a far greater chance of happening, however, if the design of the modelling instrument, and the manner in which it is used to extract information from available data, is optimized in relation to a specific management problem, and is thereby deployed with as much skill as the scientist can bring to bear.

Summary

The discussion of this chapter can be summarized as follows.
Environmental decision-making, like all decision-making, rests on an assessment of the risk associated with the happening of unwanted and costly events. It is the task of modelling to assess this risk.
Informed model usage constitutes an implementation of Bayes equation. The end product of informed model usage can only be the definition of a posterior confidence interval associated with a prediction of interest. For maximum relevance to the decision-making process, this must be expressed as a level of confidence that an unwanted event will not occur.
Complex models provide optimal receptacles for expert knowledge. However as scientific instruments they may be far from optimal, as they may impede the flow of information from environmental datasets. Because of this, post-calibration predictive uncertainty may at best be higher than it needs to be, and at worst unquantifiable.
Simple models provide poor receptacles for expert knowledge. Furthermore, history-matching based on these models may require that some of their parameters assume compensatory roles for model inadequacies.
If a prediction resembles observations used in the history-matching process, reductions in predictive uncertainty may nevertheless be achieved through embracing these compensatory roles. If it does not, the uncertainty of a prediction may be increased through use of a simple model, at the same time as it is rendered virtually unquantifiable.
Model construction and deployment may therefore need to be prediction-specific. The idea that a single system simulator can provide the basis for all management decisions at a particular site or study area has no foundation in either common sense or theory.

3. Models, Simulation and Uncertainty

Expert Knowledge

The role of models in the decision-making process was discussed in the previous chapter. There it was pointed out that physically-based models that attempt to simulate as many details as possible of the behaviour of natural systems have advantages and disadvantages. Their main advantage is that they comprise a suitable repository for expert knowledge. However their main disadvantage is that detailed simulation of natural processes, and representation within a model of the heterogeneous nature of the system properties on which natural processes depend, is a computationally demanding exercise. Use of such a model therefore makes it very difficult for a modeller to extract information from historical datasets, thereby devaluing such data. Even when used outside of the calibration context, design rigidities may make it difficult for a modeller to encapsulate his/her expert knowledge in a complex model to the extent that he/she would like.
At this stage it is worth pausing for a moment to ponder the question of what expert knowledge actually is. Expert knowledge is in fact a probabilistic form of knowledge. A hydrogeologist cannot say what the hydraulic properties of the subsurface are at every point within a (necessarily three-dimensional) groundwater flow domain.
Nor can he/she know the disposition of rock boundaries throughout that domain, nor the variation in weathering depths, nor the changes in lithology along the strike of any sedimentary or structural feature, nor the paths of ancient meandering streams, nor the variations in fracture density throughout a model domain. Similarly, a surface water hydrologist cannot know infiltration properties pertaining to all soil types at all locations under all land use conditions throughout a study area at all times of the year. Nor can he/she know with certainty how land uses have changed over what may be a lengthy calibration period.

Conceptually, expert knowledge must be expressed stochastically (i.e. probabilistically), as it is rarely definitive. As has already been stated, probability distributions that arise from expert knowledge are in fact the prior probabilities that feature in Bayes equation. In theory, the use of a complex model allows us to expose this knowledge for what it is - a range of possibilities that hopefully encompass the true state of the system, whatever that may be.

What a Model can Provide

Given that expert knowledge is probabilistic in nature, it immediately follows that so too are predictions made by a model whose parameterization is based on expert knowledge alone. These predictions must therefore be expressed as probability distributions. The better the expert knowledge, the narrower these probability distributions will be.

Bayes equation shows that predictions made by models that have been calibrated must also be probabilistic in nature. Actually, the term "calibration" has no place in Bayes equation. To the extent that it has any meaning in the environmental modelling context at all, that meaning will be examined in the next chapter. Bayes equation shows that what history-matching can achieve is a narrowing of the uncertainty associated with some model parameters, for their propensity to vary is no longer limited by expert knowledge alone.
Their propensity for variability is now also constrained by the necessity for the model to reproduce historical system behaviour as measured at certain points in space and time. Obviously, the less data that is available, and the greater the measurement noise associated with that data, the fewer will be the parameters that are constrained by this data, and the looser will be the constraints that this data exerts on them. Greater amounts of data can lead to tighter constraints on some parameters - but not necessarily on all parameters, and not necessarily on the parameters to which a prediction of interest is most sensitive. Hence (as the synthetic groundwater model that forms one of the practical exercises associated with this document demonstrates), the process of model calibration may, or may not, lead to an enhanced ability on the part of the model to make predictions which are of most interest to us. This will depend entirely on the nature of the predictions we seek, and on the information content of existing site data.

These considerations lead us to the point where we can define what a modelling exercise can aspire to achieve:

- For a prediction of interest, a range of possible values which the prediction may take, all of which are compatible with all that is known of a system; collectively these define the uncertainty range of the prediction.
- A guarantee that the correct prediction lies within the uncertainty limits so defined (at a specified level of confidence).
- A modelling strategy which ensures that the range of predictive uncertainty calculated by a model is no wider than it needs to be, given the prevailing level of expert knowledge and the information content of site data. Thus the probability of occurrence of an untoward event will not be over-estimated.
- An uncertainty assessment strategy which ensures that calculated predictive uncertainty margins are no narrower than they should be through failure to account for all contributors to possible model predictive error, some of these arising from data inadequacy and some arising from model structural defects. Thus the risk associated with the occurrence of an untoward event will not be under-estimated.

These, then, define the aspirations of any modelling exercise. Unfortunately, for reasons already discussed, the meeting of these goals may not be straightforward. Furthermore, compromise will always be required, as attempts to reduce one aspect of uncertainty may lead to inflation of another aspect of uncertainty by an unknown amount.

What an Uncalibrated Model can Provide

Here, and in later sections of this document, the term "uncalibrated model" will refer to a model whose parameters are not constrained by history-matching. Hence it is a model that can represent only the prior probability term of Bayes equation. As such, it is a model that is capable of defining the range of predictive probabilities that exists where no historical measurements of system state are available to constrain the uncertainty arising from limitations in expert knowledge.

Probabilistic analysis is easy for an uncalibrated model. Conceptually it is most readily implemented using a Monte Carlo methodology. Using this methodology, random parameter sets are generated on the basis of a prior parameter probability distribution which expresses expert knowledge (at the same time as it expresses expert ignorance, owing to its probabilistic nature). The model is run on the basis of each of these parameter sets in order to calculate the value of a prediction of interest. By collecting prediction values computed on the basis of all such parameter sets, an empirical probability density function can be built for the prediction.
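The Monte Carlo procedure just described can be sketched in a few lines of Python. The two-parameter "model" and its lognormal prior are illustrative assumptions only; in a real study the function call would be replaced by a run of the simulator itself, and the prior would encode site-specific expert knowledge.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def model_prediction(k):
    """Stand-in for a model run: computes a (hypothetical) prediction
    of interest from a two-element parameter set k."""
    return 100.0 * k[0] / k[1]

# Prior parameter probability distribution encoding expert knowledge
# (and, through its spread, expert ignorance): lognormal, independent.
n_runs = 5000
k_samples = np.exp(rng.normal(loc=[0.0, 1.0], scale=[0.5, 0.8],
                              size=(n_runs, 2)))

# One model run per random parameter set.
predictions = np.array([model_prediction(k) for k in k_samples])

# The collected values form an empirical probability density function,
# from which the risk of an unwelcome outcome can be read directly.
threshold = 80.0
risk = float(np.mean(predictions > threshold))
print(f"P(prediction > {threshold}) is approximately {risk:.3f}")
```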
This probability density function can then be used to define the risk associated with the occurrence of prediction values that are considered to be unwelcome.

In practice, even the uncalibrated model can only approximately attain the goals to which modelling must aspire (assuming that justification for its uncalibrated status lies in the absence of any data to calibrate it against). Reasons for this include the following.

- Even the most complex model can only be an abstract reflection of reality, for not only are the parameters associated with environmental processes uncertain, but the equations that describe these processes are also often uncertain. Many environmental processes, and the system properties on which they rely, are difficult to characterise where they must be averaged over a model element or cell.
- Where a model domain is large (for example the domain of a regional ground or surface water model), the level of parameterization complexity that is required to characterize the degree of system property complexity that exists in the real world is far too great to handle in either probabilistic or deterministic analysis.
- Many important aspects of environmental uncertainty simply cannot be accommodated in uncertainty analysis based on numerical models. As has already been stated, this includes uncertainties in the disposition of geological layering, bedding and faulting.
- Even where it is numerically feasible to run a model many times based on different realisations of system properties, a suitable stochastic descriptor for variability of those properties may not be available. Instead, simplistic (and often over-constraining) assumptions such as multi-normality and stationarity are used as a basis for random parameter set generation.
- It is simply not possible for numerical complexity to mimic the complexity of the real world. The numerical grids employed by groundwater models must be finite if these models are to have finite run times.
Regional rainfall-runoff models must represent complex processes in lumped form despite the intricacies of surface water movement over the hundreds of different land use and soil types that prevail in large watersheds.

It is thus apparent that even the most physically-based uncalibrated model is compromised, as its construction and deployment requires medium to high levels of abstraction. Abstraction obviously comes at a cost. But against what must this cost be debited? Obviously, it must be debited against what a model can promise. The latter was discussed in the previous subsection of this document. The cost is therefore paid through a decrease in quality of the predictive probability distributions that are the only scientifically based outcomes of environmental modelling.

How can model imperfections detract from model-calculated predictive probability distributions? There is no answer to this question that is universally applicable. However, the following considerations are salient.

In some cases model imperfections will create bias, thereby shifting a predictive probability distribution to one side. Certain unwanted environmental occurrences that, using a perfect model, may be assessed as having low but finite predictive likelihood may be considered impossible when their likelihood is assessed using a flawed model. On the other hand, at the other end of the shifted predictive probability distribution, events whose likelihood is in fact very low may be considered to possess a moderate to high likelihood of occurrence.

Simplification and abstraction involve removal of detail. For predictions that are sensitive to detail, certain mechanisms that lead to low, but nevertheless finite, predictive possibilities will become unavailable. Hence there is a risk that, for predictions of these types, model-calculated predictive probability distributions will be narrower than those which in fact prevail.
As both of the above phenomena may lead to under-estimation of risk, certain steps can be taken to ameliorate a model's performance as a risk assessment tool. These steps are based on the assumption that it is better to over-estimate risk than to under-estimate it. They also presume that concluding that an unwanted event cannot occur, when in fact it is entirely possible that it can, must be avoided at all costs. These steps include the following.

- To the extent that predictive bias is introduced through the model construction process, the details of model construction should be such as to bias predictions toward pessimism rather than optimism.
- Some model parameters may need to be endowed with wider prior probability distributions than they would possess based on expert knowledge alone. This applies particularly to parameters whose more extreme values may provide surrogates for missing or defective simulated processes.
- A random predictive noise term may be added to model outcomes of interest in order to endow these noise-enhanced predictions with a wider range of variability than they would otherwise possess.
- Predictive probability distributions computed on the basis of model outcomes may be stretched in order to provide a suitable engineering safety margin.

All of the above strategies are necessarily heuristic and subjective. However, subjectivity must be placed in its proper context. A great deal of expert knowledge is subjective, including definition of prior parameter probability distributions. The art of prediction-specific model abstraction, and/or of definition of an appropriate engineering safety margin to employ when basing important decisions on model-computed predictive probability distributions, is no less a form of expert knowledge, and no greater a form of expert judgement, than that which is required in all other phases of the model construction process.
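The last two of these steps - addition of a predictive noise term, and stretching of the predictive probability distribution - can be sketched as follows. All numbers here are hypothetical; the predictive samples stand in for the outputs of a Monte Carlo analysis of the kind described earlier in this chapter.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Suppose that these are predictive samples from a prior Monte Carlo run.
predictions = rng.normal(loc=50.0, scale=5.0, size=4000)

# Step 1: add a random predictive noise term, standing in for processes
# that the model does not simulate.
noise_sd = 3.0
noisy = predictions + rng.normal(0.0, noise_sd, size=predictions.size)

# Step 2: stretch the distribution about its mean by an engineering
# safety factor.
safety_factor = 1.25
stretched = noisy.mean() + safety_factor * (noisy - noisy.mean())

# The widened distribution errs on the side of over-estimating risk.
threshold = 62.0
print(np.mean(predictions > threshold), np.mean(stretched > threshold))
```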
Linear Analysis

In this subsection some concepts underpinning linear analysis, as it applies to an uncalibrated model, are introduced. In later sections this type of analysis will be expanded to accommodate the imposition of calibration constraints. As discussed in Chapter 1, mathematical presentations provided herein are brief. It is not necessary that the equations presented below be understood - only that the concepts behind them be understood.

Let the vector k represent all parameters used by a model. Notionally, elements of k can include any imperfectly known value that is used by a model, irrespective of whether the quantity to which this value is assigned is a system property, a boundary condition, or an aspect of the model's geometry. Let the covariance matrix of k be denoted as C(k). If the vector k has m elements, then C(k) is an m × m matrix. As a covariance matrix, C(k) provides a summary of the stochasticity of k. Hence it expresses both the pre-calibration knowledge, and the pre-calibration ignorance, of the modeller.

The diagonal terms of C(k) are perhaps the best expression of the modeller's ignorance. The ith diagonal term expresses the variance of the ith element of k. Variance is the square of standard deviation. Hence the diagonal terms of C(k) denote a modeller's inability to say exactly what the value of a particular system property is at a certain point within the model domain. However, the fact that these diagonal elements are of finite magnitude portrays a certain state of knowledge (or conversely, they portray boundaries to the modeller's ignorance).

Off-diagonal terms of C(k) denote statistical interrelatedness between parameters of the same type, or even of different types. Zero-valued off-diagonal terms portray no statistical relationship at all. Thus if the element at row i and column j of C(k) is zero, then ki and kj (these being the ith and jth elements of k) are statistically uncorrelated.
However, if this term is non-zero, it signifies that if one of these parameters is higher than average, the other will tend to be either higher than average (if Ci,j(k) is positive) or lower than average (if Ci,j(k) is negative). This implies expert knowledge. Examples include the following.

- Subsurface hydraulic properties do not show completely random spatial variability. In most cases there is a tendency for some degree of spatial continuity in hydraulic properties to exist. Furthermore, the length over which such statistical interrelatedness prevails may be longer in one direction than in others, this implying anisotropy of hydraulic properties.
- A soil with a high sand content is likely to allow greater infiltration of water than a soil with a high clay content. Its capacity to store water may also be greater, at the same time as its propensity to lose water through drainage may also be enhanced. All of these hydraulic characteristics of a particular soil may be represented by different model parameters; these parameters obviously possess a high degree of statistical correlation.

Let s (a scalar) denote a prediction. Let the m × 1 vector y denote the sensitivity of this prediction to all of the elements of k. In a linear system, the following relationship then applies:

s - s0 = y^t(k - k0)   (3.1)

where the superscript t designates the matrix transpose operation, and s0 and k0 are reference values. For simplicity (and without loss of generality) these reference values will be omitted from future equations (implying that s and k are defined as perturbations from these reference values), so that (3.1) becomes:

s = y^t k   (3.2)

We now introduce a basic matrix identity. Suppose that:

u = Av   (3.3)

where u and v are random vectors (i.e. vectors whose elements are random numbers) and A is a matrix. As v is a random vector it possesses a covariance matrix. Let C(v) denote the covariance matrix of v.
It is easily shown (see, for example, Koch, 1997) that:

C(u) = AC(v)A^t   (3.4)

where C(u) is the covariance matrix of u. If this is applied to equation (3.2), while bearing in mind that the covariance matrix of a scalar (which can be considered to be a 1 × 1 matrix) is the variance (square of standard deviation) of that scalar, the following equation results:

σ²s = y^t C(k) y   (3.5)

where σ²s is the variance of the prediction s. Thus, for a linear model, a statistical characterization of model parameters leads immediately to statistical characterization of predictive uncertainty.

Exercises

This section of the original document has been omitted.

4. Getting Information out of Data

History-matching

The process of adjusting model parameters until a good fit between model outputs and field measurements is obtained is often referred to as "calibration". A model whose parameters have been adjusted in this fashion is often referred to as a "calibrated model". There is a certain sense of finality in that term, some of this inherited from the fact that the word "calibration" is commonly associated with finely-tuned laboratory instruments. Unfortunately, however, when applied to environmental modelling, the term can be misleading, both in its direct and implied sense. History-matching is a vital part of preparing a model for use in decision support. However, the outcomes of the history-matching process must be viewed in a way that is in harmony with:

- a mathematical description of what history-matching can actually achieve, and
- the practicalities of what history-matching can achieve when applied to a model which is a defective simulator of real-world environmental processes.

History-matching is normally implemented through minimizing a so-called "objective function". This can be defined in many ways. A common way is as the sum of weighted squared differences between model outputs and field measurements.
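Such an objective function takes only a few lines to compute. The measurements, model outputs and weights below are hypothetical; here each weight is simply the inverse of the standard deviation of the noise assumed to accompany the corresponding measurement.

```python
import numpy as np

def objective_function(model_outputs, measurements, weights):
    """Sum of weighted squared differences between model outputs
    and their corresponding field measurements."""
    residuals = weights * (measurements - model_outputs)
    return float(np.sum(residuals ** 2))

# Hypothetical calibration dataset (e.g. groundwater heads in metres).
measurements  = np.array([10.2, 11.7, 9.8, 12.4])
model_outputs = np.array([10.0, 11.9, 9.5, 12.8])

# Greater weights for measurements thought to be less noise-contaminated.
weights = 1.0 / np.array([0.1, 0.1, 0.3, 0.5])

phi = objective_function(model_outputs, measurements, weights)
print(f"objective function = {phi:.2f}")
```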
Ideally, greater weights should be given to measurements which are thought to be less contaminated by errors incurred in the actual making of those measurements (often referred to as "noise"). However, in practice, as will be discussed below, weighting strategies may need to be more flexible than this in order to accommodate the fact that model-to-measurement misfit is usually dominated by so-called "structural noise" rather than measurement noise.

Bayes Equation

In the Bayesian context, the fit between model outputs and historical field observations determines the magnitude of the likelihood term, this being the first term on the right of equation 2.3. Parameters which give rise to a better fit result in a greater likelihood function. In Bayes equation "better" is defined in a statistical sense, for model-to-measurement misfit is assumed to be an outcome of the fact that measurements of system state are accompanied by random error. Betterment of fit, and hence increase in parameter likelihood, is calculated using the probability distribution that is associated with this error. Two immediate outcomes of this are as follows.

- Bayes equation makes no inference of parameter uniqueness. Where parameters are many and data is scarce, it is not hard to imagine that many different combinations of parameters will provide the same or similar level of fit. The ranking, in terms of posterior probability, of different parameter sets which yield the same likelihood function must then take place on the basis of the prior probability of those parameter sets.
- The statistics of measurement noise are assumed to be known - or almost known. In practice, some aspects of the statistical distribution of measurement noise (mainly its overall magnitude) can be estimated through the history-matching process, while others (mainly variables that govern its shape) are assumed to be known because their estimation is difficult or impossible.
In doing this it must be realised, however, that assumptions regarding the statistical properties of measurement noise can have a large influence on which parameter sets are construed to be better than others, and hence on the nature of the inferred posterior parameter probability distribution.

From the posterior parameter probability distribution inferred through Bayesian analysis, the posterior distribution of any model prediction can be calculated. Parameters of which field measurements are informative may have a significantly narrower posterior distribution than prior distribution. Likewise, predictions that are sensitive to parameters (and parameter combinations) of which the measurement dataset is informative may also have a significantly narrower posterior probability distribution than prior probability distribution. This is illustrated in Figure 4.1.

Figure 4.1 Schematic representation of Bayesian analysis.

Figure 4.1 attempts to illustrate a fact that has been emphasized in previous sections of this document. When using an environmental model to inquire into future system behaviour, all that can be expected is a probability distribution at best, or a range of predictive possibilities at worst. The latter is narrower than the range of prior predictive possibilities, as all information has been taken into account through supplementing expert knowledge with the information that resides in measurements of system state.

Calibration

Unfortunately, direct manipulation of parameter and predictive probability distributions is a numerically burdensome procedure. To be sure, software is available which can do this. In particular, Markov chain Monte Carlo (MCMC) analysis allows a modeller to define the posterior parameter distribution by directly sampling from it. As such, it may be considered to represent the "purest" way to use a model, as it constitutes a direct implementation of Bayes equation.
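A minimal Metropolis sampler illustrates the idea. The one-parameter linear "model", its Gaussian prior, and the assumed measurement noise below are illustrative only; practical MCMC analysis of environmental models relies on specialized software.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

# Hypothetical one-parameter model and two-measurement calibration dataset.
def model(k):
    return np.array([2.0 * k, 3.0 * k])

h = np.array([4.2, 5.7])            # field measurements
noise_sd = 0.5                      # assumed measurement noise
prior_mean, prior_sd = 1.0, 1.0     # prior from expert knowledge

def log_posterior(k):
    log_prior = -0.5 * ((k - prior_mean) / prior_sd) ** 2
    misfit = h - model(k)           # better fit -> greater likelihood
    log_likelihood = -0.5 * np.sum((misfit / noise_sd) ** 2)
    return log_prior + log_likelihood

# Metropolis random walk; note that every trial value costs a model run.
samples, k, lp = [], prior_mean, log_posterior(prior_mean)
for _ in range(20000):
    k_trial = k + rng.normal(0.0, 0.3)
    lp_trial = log_posterior(k_trial)
    if np.log(rng.uniform()) < lp_trial - lp:
        k, lp = k_trial, lp_trial
    samples.append(k)

posterior = np.array(samples[5000:])   # discard burn-in
print(posterior.mean(), posterior.std())
```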
A problem with the method, however, is that sampling of the posterior parameter distribution requires many model runs. Furthermore, as the number of parameters used in the analysis increases, the number of model runs required to implement the analysis tends to rise dramatically, especially if the inverse problem that defines the history-matching process is characterised by a high- or even moderately-dimensioned null space (see below). In addition, attempts to reduce the number of parameters involved in the analysis through devices such as lumping, fixing, tying and averaging often erode the capacity of MCMC analysis to achieve what it sets out to achieve, for a wide posterior predictive uncertainty distribution is often a direct outcome of the fact that many parameters are ill-informed by measurements of system state.

Other approaches to Bayesian-based history-matching include formulation of equations that directly encapsulate prior and posterior probability distributions, and direct solution for the parameters that govern these distributions. This can be a more fruitful approach than MCMC analysis where model run times are high. However, it requires considerable model-specific programming, and hence specialized software. Furthermore, it often requires assumptions pertaining to the nature, type and size of parameter variability that may further reduce the generality of its application.

For these reasons, and for cultural reasons, most history-matching is undertaken as part of the process of model "calibration". In the environmental modelling context this is an almost mystical term, whose hidden meaning has more strength than its actual meaning. To the non-specialist, the term "calibrated model" has overtones of predictive certainty to which, in the environmental sphere, few if any models can lay claim.
Given that the term has no place in Bayesian analysis, and given that Bayesian analysis alone provides a complete mathematical characterization of what the history-matching process can and should achieve, it is hardly surprising that the term "calibration" has acquired meanings which have little or no scientific basis. Unfortunately, the term is well suited to advertising campaigns that bestow on existing, or yet-to-be-built, models predictive powers for which justification must be sought in wishful thinking rather than in mathematics.

The word "calibration" implies parameter uniqueness, for the calibration process purports to seek one set of parameters which the model will then employ for the making of predictions of future environmental behaviour. Numerically, it is much easier to find a single set of parameters than a suite of parameters on whose basis a posterior probability distribution can be built. Hence the process of model calibration has a physical allure that complements its metaphysical allure.

If a single set of parameters is sought in lieu of a posterior parameter probability distribution, or of a suite of parameters that samples the posterior parameter probability distribution, this raises some serious questions. In particular:

- What properties should the single set of parameters possess?
- How much propensity for error exists when using these parameters to make predictions of future environmental behaviour?

If we are indeed going to select a single set of parameters for predictive model usage (this being the set of parameters which is deemed to "calibrate" the model), it makes sense that these parameters should be as "right" as possible; that is, of all the parameter possibilities that we may employ, the values that we assign to them should be those of minimum error variance.
If the posterior parameter probability distribution is symmetric, the values assigned to parameters which bestow on them their "calibrated" status should thus be their posterior expected values in the statistical sense. If the model provides a correct representation of reality, predictions based on these parameters will then be of minimum error variance.

This, then, defines the goal of the model calibration process. That is, this process must seek parameter values of minimized error variance, so that predictions that depend on these parameters may also be of minimized error variance. It is important to note, however, that minimized error variance does not mean minimal error variance. It only means that calibrated parameter values lie somewhere near the centres of their posterior probability distributions, and that a prediction made on the basis of these parameter values lies somewhere near the centre of its posterior probability distribution. Thus the potential for error incurred by making a prediction on the basis of these parameter values is approximately symmetrical with respect to the prediction itself; it is this central location that minimizes the error variance.

It is important to note that significant reduction of the width of the posterior predictive probability distribution below its prior width may or may not occur through the history-matching process; this depends on the information content of the measurement dataset. The breadth of the posterior predictive probability distribution is a separate issue from that of acquiring, through the history-matching process, an ability to make a prediction which is centrally located with respect to this distribution. Nevertheless, for reasons that have already been outlined, the potential for predictive error must be quantified if model-based decision-making is to have integrity.
Ideally, the post-calibration potential for error in a prediction should equal the inherent posterior uncertainty of that prediction, as calculated from the posterior predictive probability distribution achieved through Bayesian analysis applied to a model which is a perfect simulator of environmental behaviour. In practice, the potential for predictive error will be somewhat greater than this, as an outcome of numerical imperfections that attend both the process of model calibration and that of model simulation. Both of these increase the potential for error in predictions made by a calibrated model; hence both must be taken into account when assessing that potential. This, unfortunately, entails an irremovable element of subjectivity.

The Null Space

The process of calculating a single parameter set with a special set of properties from a set of field measurements is often referred to as "inversion". The problem itself is often referred to as the "inverse problem". Inverse problems are often difficult to solve, the reason being that they are often characterised as being, in mathematical parlance, "ill-posed". Inverse problem ill-posedness arises from the fact that, if a model is endowed with parameterization complexity that reflects the complexity and heterogeneity of reality, then rarely, if ever, can these parameters be estimated uniquely on the basis of measurements of system state alone.

Linear algebra provides a useful vehicle for analysing this problem. As in the previous chapter, let model parameters be represented by the vector k. We will suppose that the elements of k represent system properties at a level of complexity that is salient to the modelling task at hand. Ultimately, salience is determined by the fact that critical model predictions may be sensitive to those parameters.
Let the matrix Z represent the action of the model under calibration conditions, and let the elements of the vector h comprise the set of measurements of system state that make up the calibration dataset. Then the action of the model when supplied with historical system drivers can be written as:

h = Zk + ε   (4.1)

where ε is an (unknown) vector whose elements represent noise associated with the elements of h. For simplicity, let us assume for the moment that measurement noise is zero. Then:

h = Zk   (4.2)

From the above equations it is apparent that each row of the matrix Z contains the sensitivities of a given model output (for which there is a corresponding field measurement) to all of the parameters k. If, because of the level of parameterization complexity represented by k, there are fewer elements of h than of k, then Z is rectangular with its long direction horizontal. If this is the case, it can be shown that there exist non-zero vectors δk for which:

0 = Zδk   (4.3)

By adding (4.2) to (4.3) it is easily seen that if k satisfies (4.2) then so does (k + δk). Hence inference of k from h is nonunique. Nonuniqueness of the inverse problem is the rule rather than the exception. It can readily occur even where there are more observations than parameters, though it is not guaranteed to occur under these conditions, as it is when there are fewer observations than parameters.

If equation (4.3) is satisfied by even one non-zero vector δk, then the matrix Z is said to possess a null space. The number of dimensions of parameter space occupied by the null space is at least equal to the column-over-row surplus of Z (if one exists). However, it is generally much larger than this. It follows that parameter space can be subdivided into two subspaces - a null space and a so-called "solution space", the latter being the orthogonal complement of the former. A unique k can be calculated from the h of equation (4.2) if the search for k is restricted to the solution space of the matrix Z.
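The nonuniqueness expressed by equations (4.2) and (4.3) is easy to demonstrate numerically. The 2 × 4 matrix below is a hypothetical stand-in for Z; with fewer rows than columns it must possess a null space.

```python
import numpy as np

rng = np.random.default_rng(seed=3)

# Hypothetical Z: 2 observations, 4 parameters, so Z is rectangular
# with its long direction horizontal.
Z = np.array([[1.0, 2.0, 0.5, 1.5],
              [0.0, 1.0, 1.0, 2.0]])

# Rows of Vt beyond the rank of Z span its null space.
U, s, Vt = np.linalg.svd(Z)
rank = int(np.sum(s > 1e-10))
null_space = Vt[rank:].T            # columns span the null space

# Any null-space perturbation delta_k leaves model outputs unchanged,
# so k and k + delta_k match the "observations" h equally well.
k = np.array([1.0, 1.0, 1.0, 1.0])
delta_k = null_space @ rng.normal(size=null_space.shape[1])

print(Z @ k)              # h for parameter set k
print(Z @ (k + delta_k))  # identical h for a different parameter set
```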
Unfortunately, however, it is unlikely that the real parameter set k falls within the solution space of Z. In finding k, all that we in fact find is the projection of k onto the solution space. Because the orthogonal complement of the solution space, i.e. the null space, contains inestimable system property detail, it follows intuitively that the projection of reality onto the solution space constitutes a simplified solution to the inverse problem of model calibration. It can, however, be shown that, under the right circumstances, it is also the solution to the inverse problem of minimum error variance. This can be verified by intuition, for as soon as we venture from the solution space into the null space we increase our potential for wrongness, as we run the risk of venturing into the null space in the wrong direction (for example up rather than down in Figure 4.2).

Figure 4.2 Schematic of parameter space showing solution and null spaces. k represents the true parameter set. All that can be estimated through the calibration process is its projection into the solution space.

In practice, in calibrating a real-world model, the null space needs to have more dimensions than those which strictly satisfy (4.3), for it also needs to include parameter sets δk for which Zδk is very small, and not just zero. This is because, as Moore and Doherty (2005) show, attempts to estimate parameter sets δk for which Zδk is nearly zero will lead to a propensity for estimation error that is greater than the pre-calibration uncertainty of these parameter sets.

The partitioning of parameter space into solution and null spaces rarely takes place along neat parameter boundaries. To be sure, some parameters are entirely inestimable and hence lie within the null space. These are parameters to which all model outputs corresponding to field observations are insensitive. In other cases a parameter may lie partly within the solution space and partly within the null space.
This indicates that the calibration dataset provides some information pertaining to that parameter - but that this information must be shared with at least one other parameter. The individual parameters amongst which this information is shared therefore show a high or infinite amount of statistical correlation in the posterior parameter probability distribution. This is because the information contained within the calibration dataset is sufficient only for estimation of a combination of these parameters, rather than all of them individually. Figure 4.3 attempts to illustrate this situation.

Doherty and Hunt (2009) define the direction cosine between an individual parameter and its projection into the solution space as its identifiability. This ranges between 1 for a parameter that lies entirely within the solution space, and 0 for a parameter that lies entirely within the null space.

Figure 4.3. Vectors k1, k2 and k3 point along parameter axes. These are different from the vectors v1, v2 and v3 which define orthogonal axes through which parameter space can be partitioned into solution and null spaces. The cosine of the angle between k1 and its projection onto the solution space is defined as the identifiability of parameter k1. Identifiability is defined in the same way for other parameters.

As stated above, restriction of the search for a solution to the inverse problem of model calibration to parameter combinations that lie within the solution space effectively restricts that search to the simplest set of parameters which allow the model to reproduce historical system behaviour. This accords with the much-repeated precept offered by many calibration sages that the calibration process should pursue the principle of parsimony. It is important to note, however, that parsimony is desirable not as an end in itself, but because it is a means of achieving the only thing that is worth achieving through the calibration process, that is, the solution to the inverse problem that is of minimum error variance.
As we shall see shortly, too much simplicity, or inappropriately defined simplicity, can act as a barrier to achieving this goal.

Singular value decomposition (SVD) provides a means of subdividing parameter space into solution and null spaces. It can be shown that any matrix Z can be decomposed according to the formula:

Z = USVᵗ (4.4)

where the columns of the matrix U are orthogonal unit vectors which span the range space of Z, the columns of V are orthogonal unit vectors which span the domain of Z (parameter space in our case), and S is a diagonal matrix composed of positive or zero diagonal elements (the singular values of Z) arranged from highest to lowest. The columns of V, as they pertain to a three-dimensional parameter space, are depicted in Figures 4.2 and 4.3 as the vectors v1, v2 and v3. Columns of V corresponding to zero-valued singular values span the null space; in practice, columns associated with weakly estimable parameter combinations, for which singular values are near zero, are also assigned to the null space, this providing a safeguard against amplification of parameter error through over-fitting.

Regularisation

General

Model calibration is the search for a unique parameter set. From the above discussion it is apparent that this unique parameter set cannot be the reality parameter set, for the latter contains details that are simply not inferable on the basis of the measurement dataset. Nor are they inferable on the basis of expert knowledge which, as we have seen, is characterized by a prior parameter probability distribution (possibly conditioned by point measurements of system properties) rather than by parameter certainty. We have also seen that while the calibrated parameter field will almost certainly be incorrect, it can nevertheless be optimal in the sense that its potential for wrongness, though it may be considerable, is minimized.
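The decomposition of equation (4.4), and the partitioning of parameter space into solution and null subspaces that it supports, can be sketched numerically as follows. This is an illustrative numpy fragment; the matrix and the truncation index are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.standard_normal((4, 6))          # 4 observations, 6 parameters (invented)

# Z = U S V^t (eq 4.4); rows of Vt are the columns of V
U, s, Vt = np.linalg.svd(Z, full_matrices=True)

truncation = 3                           # arbitrary choice for this example
V1 = Vt[:truncation].T                   # spans the solution space
V2 = Vt[truncation:].T                   # spans the (extended) null space

# The two subspaces are orthogonal complements within parameter space.
assert np.allclose(V1.T @ V2, 0.0)
assert V1.shape[1] + V2.shape[1] == Z.shape[1]
```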
So far in this document we have discussed what this means from both a Bayesian framework (where "optimal" is characterized as leading to predictions of minimized error variance), and from a parameter subspace framework (where "optimal" is characterized as absence of null space components). It can be shown that as measurement noise approaches zero these lead to exactly the same parameter set (Albert, 1972). In practice they can lead to parameter estimates which are slightly different; however, these differences are normally small compared with the potential for error which exists in either.

The process of finding a unique solution to an ill-posed inverse problem, and of thereby achieving a parameter set which is deemed to calibrate a model, is called regularisation. The ways in which regularisation is most commonly implemented are now briefly discussed.

Tikhonov Regularization

In its simplest form, Tikhonov regularization attempts to guide solution of the inverse problem towards parameter estimates which can be considered to be expected values (in the statistical sense) of the posterior parameter probability distribution, and which hence constitute parameter estimates that approach minimum error variance. The modeller must supply a default value for all parameters, and/or default values for relationships between parameters; an example of the latter is a default difference of zero between neighbouring spatial parameters, this implying a default condition of parameter field homogeneity. Collectively these parameter values and/or parameter relationships define expectations (in the statistical sense) of the prior parameter probability distribution. Hence they define parameter estimates of minimum error variance based on expert knowledge alone. When using PEST for solution of the inverse problem of model calibration, preferred parameter values and/or conditions can be supplied through the prior information mechanism, or as more complex nonlinear regularisation observations.
The parameter estimation process is also provided with a suite of measurements of system state. The inverse problem of model calibration is then formulated as a constrained optimisation problem in which a suitable, user-defined value for the target measurement objective function (defined through model-to-measurement misfit) is sought, subject to the constraint that the regularisation objective function (defined through departures of parameters from their default values or preferred condition) is minimized. The target measurement objective function is assigned a value that reflects what is considered to be the ambient level of measurement noise. By seeking (but not exceeding) this level of model-to-measurement fit, subject to the constraint that departures from pre-calibration optimality are minimized, the calibration goal of minimized propensity for parameter and predictive error is formally sought.

Tikhonov regularisation has many attractive features, the most obvious of which is that it provides receptacles for information that is forthcoming both from the user and from the calibration dataset. Its chief disadvantage is that it tends to suffer from numerical instability as parameter optimality is approached. Behind the scenes, a trade-off is implemented between fitting the data on the one hand, and fitting a user's preconceptions as they apply to all parameters involved in the parameter estimation process on the other. Numerically, this trade-off is sometimes difficult to apply. It is made no less difficult by the fact that misfit is often dominated by structural noise whose statistical properties are unknown, but whose variance is much higher than that of measurement noise. Avoidance of over-fitting may thus become a trial-and-error process in which the Tikhonov-based inversion process is repeated with different values assigned to the target measurement objective function.
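The trade-off described above can be illustrated with a minimal sketch. This is not PEST's algorithm: here the Tikhonov constraint matrix is taken as the identity (preferred values k0 for every parameter), and the regularisation weight beta is varied by hand, whereas PEST adjusts the equivalent factor internally so that a target measurement objective function is met. All matrices and values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
Z = rng.standard_normal((8, 12))                 # ill-posed: 12 parameters, 8 data
k_true = rng.standard_normal(12)
h = Z @ k_true + 0.05 * rng.standard_normal(8)   # noisy calibration dataset

k0 = np.zeros(12)                                # preferred values (expert knowledge)

def tikhonov(beta):
    # minimise ||h - Z k||^2 + beta^2 ||k - k0||^2
    A = Z.T @ Z + beta**2 * np.eye(12)
    return np.linalg.solve(A, Z.T @ h + beta**2 * k0)

def phi_meas(k):          # measurement objective function
    return np.sum((h - Z @ k) ** 2)

def phi_reg(k):           # regularisation objective function
    return np.sum((k - k0) ** 2)

# As beta grows, fit to the data worsens while departure from
# preferred parameter values shrinks: the trade-off in action.
k_light, k_heavy = tikhonov(0.01), tikhonov(10.0)
assert phi_meas(k_light) <= phi_meas(k_heavy)
assert phi_reg(k_light) >= phi_reg(k_heavy)
```

Repeating the solve for a sweep of beta values, and stopping when phi_meas reaches the target measurement objective function, mimics (crudely) the constrained formulation described in the text.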
Determination of the relative strengths with which Tikhonov constraints must be applied to different types of parameters can also be problematic. However, PEST provides some help with this process; see PEST documentation of the IREGADJ regularization control variable.

Subspace Regularization

As described above, Tikhonov regularization achieves uniqueness by supplementing information within the calibration dataset with information that is born of expert knowledge. Subspace regularisation (of which the flagship is so-called truncated singular value decomposition) takes the opposite approach. It identifies parameter combinations which are inestimable on the basis of the current calibration dataset, and removes them from the parameter estimation process altogether. These are identified as those that are associated with zero and low singular values, and thus occupy the calibration null space. The solution to the inverse problem is thus comprised entirely of estimable combinations of parameters which, by definition, belong to the calibration solution space. As has been discussed above, these estimable combinations of parameters normally correspond to broad-scale features of a model's parameterization.

When implementing subspace regularisation, a modeller should ensure that the initial parameter values which are provided to the parameter estimation process are in fact preferred parameter values from an expert knowledge point of view. Behind the scenes, it is actually departures from these initial values that are estimated. Because departures which occupy the null space are not estimated, the only departures from initial parameter values that are tolerated are those that are supported by the data. For those that are not supported by the data, the user's initial choice prevails.
If initial parameter values thus embody user expert knowledge, and therefore constitute pre-calibration minimum error variance parameter estimates, the solution to the inverse problem formulated in this way approaches post-calibration minimum error variance status. This is an outcome of the fact that only those parameter combinations whose estimates achieve reduced error variance through the calibration process are adjusted through that process, while those that do not remain unchanged.

Calibration implemented through subspace regularization is unconditionally numerically stable. This is ensured because, by definition, the solution space is comprised only of parameter combinations which are indeed uniquely estimable. The truncated singular value decomposition (SVD) process through which parameter estimation is most easily achieved by this means simply declines to estimate parameter combinations corresponding to singular values which are below a certain threshold, and which hence are not robustly estimable.

Unfortunately, however, truncated SVD as a mechanism for solution of the inverse problem of model calibration suffers from two significant shortcomings. The first is that it is difficult to link the singular value truncation threshold to the expected level of measurement/structural noise. Hence while unconditional numerical stability is achieved, prevention of over-fitting is less easily achieved, as it is not part of the formal definition of the inverse problem as it is for Tikhonov regularisation.
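A minimal sketch of truncated SVD inversion follows. The truncation threshold (here an arbitrary fraction of the largest singular value) is the user's choice, which is exactly the difficulty noted above; everything else in the fragment is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
Z = rng.standard_normal((10, 15))        # 10 observations, 15 parameters
h = Z @ rng.standard_normal(15)          # noise-free data for the example

U, s, Vt = np.linalg.svd(Z, full_matrices=False)
keep = s > 0.5 * s.max()                 # truncation threshold: user's choice

# Truncated SVD estimate: only solution-space combinations are estimated
# (this is G h with G = V1 S1^-1 U1^t, anticipating equation (4.6a))
k_hat = Vt[keep].T @ ((U[:, keep].T @ h) / s[keep])

# The estimate contains no component along the discarded directions:
# departures occupying the (extended) null space remain at zero.
assert np.allclose(Vt[~keep] @ k_hat, 0.0)
```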
Secondly, because expert knowledge is an implicit rather than explicit part of the inversion process (being provided only through parameter initial values rather than in the form of a possibly complex suite of linear and/or nonlinear parameter relationships), the parameter fields that emerge from the truncated SVD process rarely have the same aesthetic appeal as those which emerge from a calibration process that is implemented using Tikhonov regularization.

Hybrid Regularization

Both Tikhonov and subspace regularization have strengths and weaknesses, and these tend to complement each other. Hence, when they are used together, the outcome is an inversion scheme that makes best use of the benefits of each of them at the same time as it mitigates the deleterious effects of their weaknesses. PEST allows a user to estimate parameters using truncated SVD as a solution device for the inverse problem of model calibration, while allowing this problem to be formulated as a constrained optimization problem with constraints supplied as linear or nonlinear parameter relationships in accordance with the Tikhonov approach outlined above. Experience has demonstrated on many occasions that this approach provides pleasing parameter fields at the same time as it maintains numerical stability and prevents over-fitting.

Significant gains in computational efficiency can be had by undertaking singular value decomposition of the global sensitivity matrix only intermittently, and on this basis defining a limited number of "super parameters" equal in number to the dimensionality of the calibration solution space. These super parameters are in fact the projections of the solution to the inverse problem onto the parameter axes which span the calibration solution space. Through estimation of these projections alone, a solution to the inverse problem of model calibration is achieved.
Estimation thus takes place on the basis of an often vastly reduced parameter set, this decreasing the numerical burden of the parameter estimation process enormously. At the same time, Tikhonov constraints are exercised on native parameters, thereby ensuring optimal use of expert knowledge in the parameter estimation process. See PEST's SVD-Assist methodology for further details.

Manual Regularization

Despite the ability that PEST provides for implementing highly parameterized inversion using the techniques discussed above, many models are still regularized manually. That is, prior to initiating the calibration process, the modeller reduces the number of parameters which he/she estimates by fixing certain parameters at pre-calibration preferred values and amalgamating others into a smaller parameter set. Through examining the nature of model-to-measurement misfit achieved through the parameter estimation process based on this reduced parameter set, and by inspecting statistics that are produced as an outcome of the calibration process, the modeller then assesses whether more or less parameter reduction needs to occur, and/or whether parameter reduction needs to take place in different ways.

In some ways, manual regularisation attempts to achieve the same thing as truncated SVD, in that the dimensionality of the inversion problem is reduced to that which is estimable. Meanwhile, though it is not referred to as such, parameters that occupy the null space are assigned fixed values, while parameter combinations that are inestimable (these normally expressing fine parameterization detail) are hidden from the purview of the parameter estimation process through parameter amalgamation. In the spatial parameterization context, parameter amalgamation often takes place through definition of a small number of zones of assumed parameter constancy.
If enough care is taken in implementing manual regularization, something approaching the minimum error variance solution to the inverse problem of model calibration can indeed be achieved. However, in the author's opinion, the disadvantages of this approach tend to outweigh its advantages. In fact, in the author's experience, the main advantage to be gained through implementing this kind of regularisation lies in the ability (welcomed by some modellers) to defer the need to learn the details of a superior approach. Disadvantages of manual regularisation include the following.

In a spatial setting characterized by system property heterogeneity, it is better to define many rather than few parameters throughout the model domain. This provides the parameter estimation process with a license to introduce heterogeneity to the model domain wherever it needs to, to the extent that it needs to, and in the way that it needs to, rather than having to respect pre-defined (and often inappropriate) mechanisms for expression of heterogeneity that may seriously compromise such expression.

A considerable amount of trial and error is often required in determining how many parameters can be estimated on the basis of a given calibration dataset. If too few parameters are estimated, less information is obtained from the calibration dataset than it contains. If an attempt is made to estimate too many parameters, over-fitting occurs. In neither case is parameter optimality achieved.

Proper implementation of either or both of Tikhonov and subspace regularization provides a mathematical guarantee that something approaching parameters of minimized error variance will be achieved through the parameter estimation process. No such guarantee is afforded by manual regularisation.

Less than optimal representation of spatial parameter variability through use of cumbersome and unrealistic parameterization devices, such as zones of piecewise constancy, can introduce structural noise to model outputs.
This erodes the capacity of the parameter estimation process to obtain information from the calibration dataset, thereby incurring an increased propensity for parameter and predictive error. At the same time, the magnitude of this error is difficult to quantify.

Where parameterization complexity complements model process complexity, at the same time as it represents system property heterogeneity that is salient to predictive variability, it is mandatory that regularization be implemented mathematically rather than manually, for the inverse problem of model calibration is otherwise unsolvable. As a by-product of implementing mathematical regularisation, a definition of the null space is achieved (or of something approaching the null space in the case of Tikhonov regularization). This is the (unavoidable) source of most parameter and predictive uncertainty on most occasions. Hence use of mathematical regularization, based on an underlying parameter field comprised of many parameters, provides a far better foundation for post-calibration uncertainty analysis than does manual regularization employing few parameters.

Structural Regularization

Regularization is built into the design of many models by virtue of the lumped nature of their parameters. Implicit in the design of many lumped-parameter models is the fact that they are made to be calibrated against measurement datasets comprised of one or many data types gathered at one or many locations over many years. Model design is often specifically aimed at endowing a model with a parsimonious set of parameters that is uniquely estimable on the basis of commonly-available datasets, at the same time as it provides optimal receptacles for the information content of these datasets.
As such, model design of this type implicitly attempts to achieve a similar outcome to the use of subspace methods in solving the inverse problem of model calibration (or perhaps an even better outcome, as it may provide superior accommodation of the nonlinear relationship between model outputs and model parameters). Calibration of models of this type may therefore indeed achieve parameter values of minimized error variance.

A problem arises, however, where post-calibration predictive uncertainty is explored. While failure to represent parameterization detail comprising the null space may not compromise optimality of the calibration process, it will probably compromise the ability of the uncertainty analysis process to properly define post-calibration predictive variability as it pertains to predictions of system behaviour under extreme conditions. Also, representation of parameters in a lumped manner may detract from a modeller's ability to explore the effects of certain proposed environmental management strategies on future environmental behaviour. For example, it may not be possible to represent land use changes at a farm or sub-regional level in such a parsimonious model, as parameterization density is simply too broad to support definition of proposed changes.

To overcome the latter problem, regional lumped-parameter surface water and land use models are often constructed from many submodels that seek to simulate hydrologic processes that are operative at the land management scale. If this is done, some form of (mathematical or manual) regularisation must then be implemented during the calibration process. With the introduction of a higher parameterization density in this manner also comes the ability to conduct post-calibration parameter and predictive uncertainty analysis with greater integrity.

Some Equations

The following equations are presented for completeness. They are not derived; nor is it necessary that they be understood.
We take equation (4.1) as our starting point. When calibrating a model, a vector k̄ of calibrated parameter values is calculated according to:

k̄ = Gh (4.5)

where G is a matrix that depends on the regularization method employed in the calibration process. If the measurement dataset contains no noise, and if parameters are normalized with respect to their innate variability, then ideally G should be the Moore-Penrose pseudo-inverse of Z, this leading to a parameter set of minimum norm, and hence (if norm is defined in an appropriate way) the parameter set of minimum error variance. Where there is noise in the calibration dataset, use of a generalized inverse of Z (whether this is the Moore-Penrose pseudo-inverse or some other generalized inverse) to calculate k̄ would lead to over-fitting. Hence G must be derived through other means - any one of these requiring definition of some kind of regularisation. Formulas for G differ according to the type of regularisation employed. Ideally, however, they should converge to the same formula as the noise associated with h decreases to zero (as do truncated SVD and properly-designed Tikhonov-based inversion).
For truncated SVD:

G = V₁S₁⁻¹U₁ᵗ (4.6a)

while for Tikhonov regularization:

G = (ZᵗQZ + β²TᵗWT)⁻¹ZᵗQ (4.6b)

where:

Q is a measurement weighting matrix, ideally proportional to C⁻¹(ε), where C(ε) is the covariance matrix of measurement noise;

U, V and S are obtained through singular value decomposition of Q^(1/2)Z, the subscript of 1 on these matrices indicating use of pre-truncation (and hence non-zero) singular values;

T is a matrix which expresses Tikhonov constraints;

W is a weighting matrix for Tikhonov constraints; ideally, if T provides preferred pre-calibration values for the parameters k, then W should be proportional to C⁻¹(k), where C(k) is the pre-calibration covariance matrix of innate parameter variability;

β² is a factor adjusted during the regularized inversion process; it is equivalent to a Lagrange multiplier employed in the constrained optimisation process, described above, through which Tikhonov regularization is implemented.

For manual regularization, G can be expressed as:

G = L(XᵗQX)⁻¹XᵗQ (4.6c)

where:

L is a matrix through which elements of k are computed from a reduced parameter set p; elements of p are few enough for their estimation to formulate a well-posed inverse problem; and

X expresses the means through which the model-generated counterparts to the measurement dataset h are calculated from the reduced parameter set p; it is related to the model matrix Z through the equation:

X = ZL (4.6d)

Substitution of (4.1) into (4.5) leads to the equation:

k̄ = GZk + Gε (4.7a)

That is:

k̄ = Rk + Gε (4.7b)

where:

R = GZ (4.8)

R is the well-known "resolution matrix". Where measurement noise is zero, it expresses the relationship between estimated parameters k̄ and their real-world counterparts k (which are never known). Where regularization is implemented through truncated SVD, R is a projection operator, as indicated in Figure 4.2.
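As a check on the claim that the formulas for G converge as noise-driven regularisation weakens, the following sketch (with Q, T and W all taken as identity matrices for simplicity, and all numbers invented) compares equation (4.6a), applied without extra truncation, with equation (4.6b) for a very small β²:

```python
import numpy as np

rng = np.random.default_rng(3)
Z = rng.standard_normal((5, 8))          # 5 observations, 8 parameters

# Equation (4.6a) with no truncation beyond zero singular values:
# this is exactly the Moore-Penrose pseudo-inverse of Z.
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
G_svd = Vt.T @ np.diag(1.0 / s) @ U.T

# Equation (4.6b) with Q = I, T = I, W = I and a tiny beta^2:
beta2 = 1e-8
G_tik = np.linalg.solve(Z.T @ Z + beta2 * np.eye(8), Z.T)

assert np.allclose(G_svd, np.linalg.pinv(Z))
# As beta^2 -> 0 the Tikhonov G approaches the truncated SVD G.
assert np.allclose(G_tik, G_svd, atol=1e-4)
```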
Irrespective of the regularization methodology employed, each row of the matrix R specifies the averaging relationship through which each element of k̄ is derived from the entirety of elements of k. Thus it defines the parameter simplification process that was required for achievement of a unique solution to the inverse problem of model calibration.

The difference between estimated parameters and their real-world counterparts is parameter error. From (4.7) this can be formulated as:

k̄ − k = −(I − R)k + Gε (4.9)

The first term on the right side of (4.9) is the "cost of uniqueness" discussed extensively by Moore and Doherty (2005; 2006). It is the contribution made to parameter error arising out of the fact that the calibration process can estimate only a simplified form of reality. The second term of equation (4.9) arises from the fact that the estimated parameter set is calculated from a calibration dataset which is contaminated by noise. Where regularisation is implemented using truncated SVD, the first and second terms of equation (4.9) lie within the null and solution spaces respectively, and are orthogonal to each other. This is illustrated in Figure 4.4.

Figure 4.4. The two components of parameter error expressed by the two terms on the right side of equation (4.9).

Unfortunately, parameter error cannot be calculated, for neither the real parameters k nor the noise ε associated with the measurement dataset are known.
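The projection property of R under truncated SVD can be verified directly. This is an illustrative sketch with an invented matrix and an arbitrary truncation threshold:

```python
import numpy as np

rng = np.random.default_rng(4)
Z = rng.standard_normal((6, 10))

U, s, Vt = np.linalg.svd(Z, full_matrices=False)
keep = s > 0.5 * s.max()                 # arbitrary truncation for the example

# G of equation (4.6a), then R = G Z (eq 4.8)
G = Vt[keep].T @ np.diag(1.0 / s[keep]) @ U[:, keep].T
R = G @ Z

# For truncated SVD, R is a projection operator onto the solution space:
assert np.allclose(R @ R, R)                     # idempotent
assert np.allclose(R @ Vt[keep].T, Vt[keep].T)   # fixes solution-space vectors
```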
However, using equation (3.4), the covariance matrix of parameter error can be calculated from the stochastic characterization of pre-calibration parameter variability C(k), and from that of measurement noise C(ε), as:

C(k̄ − k) = (I − R)C(k)(I − R)ᵗ + GC(ε)Gᵗ (4.10)

Equation (4.10) acquires a particularly simple but instructive form when the following conditions are met: regularisation is implemented using truncated SVD; C(k) can be expressed as:

C(k) = σₖ²I (4.11)

where I is the identity matrix; and C(ε) can be expressed as:

C(ε) = σₑ²I (4.12)

In this case (4.10) becomes:

C(k̄ − k) = σₖ²V₂V₂ᵗ + σₑ²V₁S₁⁻²V₁ᵗ (4.13)

where the columns of V₂ contain orthogonal unit vectors which span the calibration null space, and the columns of V₁ contain orthogonal unit vectors which span the calibration solution space; both of these are obtained through partitioning of the V matrix of equation (4.4).

As the number of dimensions of the calibration solution space grows, and the null space therefore shrinks, the first contributor to parameter error variance (i.e. the "cost of uniqueness" term) falls, as less and less simplification is being undertaken in calibrating the model. However, the second term of equation (4.13) rises; furthermore, it rises very fast as the magnitudes of singular values fall, until ultimately it becomes infinite as singular values fall to zero. Total parameter error variance therefore falls and then rises as simplification is at first too great, and at last too little (whereupon over-fitting occurs). Minimized total parameter error variance occurs at an intermediate number of singular values; so too does minimized predictive error variance, as will be discussed in the next chapter.

Parameter and predictive error cannot be known; if they could, an appropriate correction term could simply be applied. Thanks to equations (4.10) and (4.13), however, the propensity for parameter error (and, as we will see shortly, for predictive error as well) can be known.
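The fall-then-rise behaviour of total parameter error variance can be sketched by evaluating the trace of equation (4.13) for an assumed, decaying singular-value spectrum. All numbers below (the spectrum, the variances, the parameter-space dimension) are invented for illustration:

```python
import numpy as np

# Assumed, illustrative inputs: a geometrically decaying singular-value
# spectrum (typical of ill-posed inverse problems) and scalar variances.
s = 3.0 ** -np.arange(12)        # singular values, highest to lowest
sigma_k2 = 1.0                   # C(k)   = sigma_k2 * I  (eq 4.11)
sigma_e2 = 1.0e-4                # C(eps) = sigma_e2 * I  (eq 4.12)
n_par = 20                       # dimension of parameter space

def total_error_variance(n):
    """Trace of equation (4.13) when n singular values are retained."""
    cost_of_uniqueness = sigma_k2 * (n_par - n)            # null-space term
    noise_amplification = sigma_e2 * np.sum(1.0 / s[:n] ** 2)
    return cost_of_uniqueness + noise_amplification

variances = [total_error_variance(n) for n in range(1, len(s) + 1)]
best = int(np.argmin(variances)) + 1   # optimal number of singular values

# Error variance falls, then rises: the minimum is interior, not at an end.
assert 1 < best < len(s)
```

Sweeping the truncation point in this way is, in essence, how an appropriate number of super parameters can be chosen when linear theory applies.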
Furthermore, to the extent that C(k) and C(ε) are known or can be surmised, optimality of solution of the inverse problem of model calibration can be achieved through minimizing the propensity for parameter and predictive error.

We finish this subsection with a nice property of singular value decomposition; this property will be demonstrated using one of the exercises provided later in this chapter. From equations (4.1) and (4.4), with measurement noise ignored:

h = USVᵗk (4.14a)

After removal of zero-valued singular values, this becomes:

h = U₁S₁V₁ᵗk (4.14b)

S₁ is, by definition, a diagonal matrix with non-zero elements; hence it has an inverse. From (4.14b), and the fact that, because the columns of U₁ are orthogonal unit vectors:

U₁ᵗU₁ = I (4.15)

it follows that:

S₁⁻¹U₁ᵗh = V₁ᵗk (4.16)

This formula states that certain linear combinations of parameters are solely and uniquely informed by certain, partnered, linear combinations of observations. The number of such partnerships is equal to the number of diagonal elements of S₁, and hence to the number of singular values employed in the inversion process. The sequence of informative linear combinations of observations (which capture the entire information content of the calibration dataset) forms an orthogonal set of axes in observation space. The corresponding sequence of informed linear combinations of parameters (which comprise a complete set of receptacles for the information content of the calibration dataset) forms an orthogonal set of axes in parameter space; these axes span the calibration solution subspace. The former are given by:

u₁u₁ᵗ, u₂u₂ᵗ, u₃u₃ᵗ, etc. (4.17a)

where uᵢ is the ith column of the U matrix. The latter are given by:

v₁v₁ᵗ, v₂v₂ᵗ, v₃v₃ᵗ, etc. (4.17b)

where vᵢ is the ith column of the V matrix. The former are referred to as "super observations" in PEST parlance, whereas the latter are referred to as "super parameters".

Structural Noise

Unfortunately, models are not perfect simulators of system behaviour.
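The partnership expressed by equation (4.16) is easy to verify numerically. The sketch below uses an invented matrix and noise-free data:

```python
import numpy as np

rng = np.random.default_rng(6)
Z = rng.standard_normal((7, 11))         # 7 observations, 11 parameters
k = rng.standard_normal(11)              # "true" parameters (invented)
h = Z @ k                                # noise-free data (eq 4.14a)

U, s, Vt = np.linalg.svd(Z, full_matrices=False)

# S1^-1 U1^t h recovers V1^t k exactly, partnership by partnership (eq 4.16):
# each "super observation" uniquely informs its partnered "super parameter".
super_obs = (U.T @ h) / s                # S1^-1 U1^t h
super_par = Vt @ k                       # V1^t k
assert np.allclose(super_obs, super_par)
```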
This may manifest itself in an inability to achieve a good fit between model outcomes and field measurements during the calibration process. Alternatively, or as well, it may result in parameters being assigned incorrect values as partial compensation for unrepresented processes, this allowing the model to fit the historical data well in spite of the fact that some of the environmental processes that gave rise to those data are not simulated by the model. In addition to this, structural defects may compromise the model's ability to make a desired prediction with integrity, as not all processes on which the prediction depends are represented in the model. If this prediction is of a different type from data comprising the calibration dataset, the model's inadequacies in this regard will not have been detected during the calibration process.

Doherty and Welter (2010) attempt to introduce some rigor to the manner in which structural noise is accommodated in the model calibration and predictive processes. They begin their analysis by stating that equation (4.1) can be used as the basis for analysis of calibration optimality and predictive uncertainty only if the model (represented by the Z matrix in that equation) is a perfect simulator of environmental reality. In fact, the action of the model is better described by the following equation:

h = Z₁k₁ + Z₂k₂ + ε (4.18)

where Z₁ and k₁ represent the model and the parameters used by the model respectively, while k₂ represents corrections to the model that would allow it to simulate reality perfectly, and Z₂ represents the sensitivity of model outputs to these corrections. Both Z₂ and k₂ are unknown.

Doherty and Welter (2010) draw the following conclusions through mathematical analyses based on this equation.

Model structural inadequacies can express themselves through model-to-measurement misfit under calibration conditions that can greatly exceed that which arises from measurement noise.
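A toy rendering of equation (4.18) makes the compensation mechanism concrete: data are generated with processes (Z₂k₂) that the calibrated model lacks, and least-squares estimation of k₁ then yields compensating, and therefore incorrect, values even though the fit may look acceptable. All matrices and values below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
Z1 = rng.standard_normal((20, 3))      # the (defective) model we calibrate
Z2 = rng.standard_normal((20, 2))      # unrepresented processes
k1_true = np.array([1.0, 2.0, 3.0])    # invented "true" parameters
k2 = np.array([0.5, -0.5])             # invented corrections (eq 4.18)

# "Reality" generates the data; measurement noise omitted for clarity.
h = Z1 @ k1_true + Z2 @ k2

# Least-squares calibration of the defective model alone:
k1_est, *_ = np.linalg.lstsq(Z1, h, rcond=None)

# The estimated parameters differ from the true ones: structural noise
# has been "soaked up" by compensating parameter values.
assert not np.allclose(k1_est, k1_true, atol=1e-2)
```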
However, treating that misfit as an additive term to measurement noise (this being commonly referred to as "structural noise") for the purpose of determining an optimal level of misfit to achieve during the calibration process, and of calculating the propensity for parameter error that arises from that misfit, is fraught with conceptual difficulties. This follows from the fact that the covariance matrix of structural noise is unknown, and probably singular. Unless the singular nature of the covariance matrix of structural noise is taken into account, post-calibration propensity for parameter error may be seriously underestimated.

If historical measurements of system state were contaminated only by measurement noise of non-singular covariance matrix, the potential for error associated with parameter estimates would decrease as the size of the calibration dataset increases (in accordance with Bayes equation). This is not the case, however, where the model-generated counterparts to field measurements are contaminated with noise of structural origin.

Calibration strategies can be devised which mitigate the deleterious effects of model structural defects on estimation of at least some parameters. In particular, it is often possible to formulate a multi-component objective function in which different objective function components inform different combinations of parameters. Through ensuring equal visibility of each of these components in the overall objective function, the damage inflicted by the presence of structural noise on at least some parameter estimates can be greatly reduced. Strategies for formulation of a suitable multi-component objective function include the following: use of inter-layer head differences and temporal head differences when calibrating a groundwater model; and use of log-transformed flows, together with baseflow-filtered flows, as well as event-based or monthly volumes, when calibrating a surface water model.
When a defective model is calibrated against a real-world dataset, it is almost certain that some parameters will be assigned values that compensate for model structural defects. This may or may not increase the error variance of model predictions. For predictions that resemble observations used in the calibration process, parameter compensation may reduce the propensity for model predictive error; for predictions that differ in type and location from those observations, it may increase that propensity. In either case, the link between parameter optimality and predictive optimality is broken.

The fact that some estimated parameters can soak up information that has no other place to go (that is, structural noise) may or may not benefit the calibration process. As stated above, this depends in part on the extent to which predictions required of a model resemble observations used in the calibration process. To the extent that parameter surrogacy is judged to be advantageous to the making of some predictions, it may be worthwhile to ensure that the observations which most resemble predictions of interest are fitted well during the calibration process, by endowing them with weights that are great enough for this to happen. This raises the spectre of prediction-specific calibration.

Doherty and Welter (2010) demonstrate that the presence of model structural defects requires that a modeller make many informed, but necessarily subjective, decisions during both the calibration and predictive phases of model deployment. Their subjective nature arises from the fact that model structural defects, the magnitude and characteristics of the structural noise that they incur, and the degree to which parameters may assume advantageous or disadvantageous compensatory roles during their estimation as an outcome of these defects, are all unknown.
It will often be possible to obtain a very good fit between model outputs and field data when calibrating a flawed model. But when is a good fit in fact too good a fit? Over-fitting is often recognized as such when awkward values are estimated for some parameters. However, these values may have been incorrect long before they were recognized as unrealistic, and hence long before recognition of a model's over-fit status required that the calibration process be repeated with higher levels of parameter simplification, introduced in order to prevent such a good fit between model outcomes and field measurements from being obtained again. In many modelling circumstances, however (as stated above), the surrogate role that some parameters play may actually benefit the predictive process, this depending on the predictions required of the model. In this case, sacrificing goodness of fit in order to ensure that no parameter plays any surrogate role whatsoever may leave much important information that is contained in the calibration dataset untapped. Unfortunately, the best path to choose in any given calibration context is often unclear, for there are no universal rules pertaining to this situation (and little expert guidance available). All that a modeller can do is exercise informed creativity, taking full account of whatever theoretical help is available (including aspects of modelling theory that account for model structural defects). Nevertheless, decisions made by a modeller will necessarily be subjective, and will almost certainly vary from modeller to modeller.

Exercises

This section of the original document has been omitted.

5. How Wrong Can a Prediction Be? Linear Analysis

Error and Uncertainty

Parameter and predictive uncertainty following imposition of the constraints on parameter values that arise from the necessity for a model to reproduce historical system behaviour is described by Bayes equation.
It is repeated here (in a slightly different form from that presented in equation 2.3) for completeness.

P(k|h) = P(h|k)P(k) / ∫P(h|k)P(k)dk  (5.1)

As before, the vector h represents measurements of system state (referred to herein as the calibration dataset) while the vector k represents parameters. The symbol P( ) represents probability. P(k) represents the prior probability of parameters k while P(k|h) is the posterior probability of parameters k; the latter is the probability of parameters k conditional upon the information encapsulated in h. Meanwhile P(h|k) represents the likelihood function, this increasing as model-to-measurement misfit (defined in a way that accounts for the stochastic properties of measurement noise) decreases. The denominator of the right hand side of equation (5.1) is required for normalization, so that the area under the posterior probability distribution is 1.0. In implementation of Bayes equation, the model is employed for calculation of the likelihood term.

In everyday modelling practice direct use of Bayes equation is difficult. This arises from the numerical difficulties involved in handling probability distributions, especially where they do not have analytical formulations. Unfortunately, even if a prior probability distribution can be given an analytical formulation (such as multi-uniform or multi-normal), the nonlinear behaviour of most models will ensure that the posterior parameter probability distribution does not have an analytical formulation.

Another problem that arises in assessing the uncertainty associated with the environmental future is that models are imperfect simulators of environmental system behaviour. An immediate repercussion of this is that a model's calculation of the likelihood function is likely to be in error. This is inescapable in most calibration contexts, where most model-to-measurement misfit on most occasions arises from model imperfections.
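For a model that is linear in a single parameter, with a Gaussian prior and Gaussian measurement noise, equation (5.1) can be evaluated directly on a grid. The following Python sketch is purely illustrative (all numbers are invented, and it is not part of the PEST suite); it normalizes the posterior numerically and checks its variance against the analytic Gaussian result.

```python
import numpy as np

# A minimal numerical sketch of equation (5.1) for a one-parameter linear model
# h = z*k + noise, with Gaussian prior and Gaussian measurement noise.
# All numbers below are invented for illustration.
z = 2.0                       # scalar model sensitivity (the "Z" of the text)
k_true = 1.5
sigma_eps = 0.5               # standard deviation of measurement noise
sigma_k = 2.0                 # prior standard deviation of parameter k
rng = np.random.default_rng(0)
h = z * k_true + rng.normal(0.0, sigma_eps)   # one noisy "field measurement"

# Evaluate prior, likelihood and normalized posterior on a grid of k values.
k = np.linspace(-10.0, 10.0, 20001)
dk = k[1] - k[0]
prior = np.exp(-0.5 * (k / sigma_k) ** 2)
likelihood = np.exp(-0.5 * ((h - z * k) / sigma_eps) ** 2)
posterior = prior * likelihood
posterior /= posterior.sum() * dk             # denominator of (5.1): normalization

# For a linear model the posterior is also Gaussian, with a variance given by
# the scalar form of equation (5.2a).
post_var_analytic = 1.0 / (z**2 / sigma_eps**2 + 1.0 / sigma_k**2)
post_mean = (k * posterior).sum() * dk
post_var_numeric = ((k - post_mean) ** 2 * posterior).sum() * dk
print(post_var_numeric, post_var_analytic)    # the two should agree closely
```

Note how the single measurement shrinks the prior standard deviation of 2.0 to a posterior standard deviation of about 0.25; this is the information transfer that Bayes equation formalizes.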
To a limited extent the nature of the misfit engendered by model imperfections can be assessed through the history-matching process and assimilated into Bayesian characterization of posterior parameter probabilities. However, the effects of model imperfections on predictions of interest cannot be assessed in this way unless those predictions are of identical character, and occur under identical conditions, to those which prevailed when the measurements comprising the calibration dataset were made. This facet of model structural error therefore affects predictive uncertainty through an often unknown term that must be added to any model prediction, and thus requires that the probability distribution of that prediction, as assessed through Bayesian analysis, be modified in an unknown way.

Because of the numerical difficulties involved in working with probability distributions, particularly when these must be assessed using complex models with large run times, everyday model usage normally involves a two-step process of calibration followed by parameter and predictive uncertainty analysis. Ideally, as has already been discussed, the calibration process should yield parameters that are the expected values (in the statistical sense) of real-world parameters; the same should hold for predictions. Pursuit of this goal normally requires a highly parameterized approach to inversion, accompanied by the use of mathematical regularisation to achieve uniqueness of the calibrated parameter field. However, there are numerical and computational limits on the number of parameters which can be included in the calibration process. Hence, in practice, even where calibration is implemented using regularized inversion, many of a model's parameters will of necessity be lumped to at least some extent, while others will be fixed by the user at reasonable values and remain unadjusted through the calibration process.
To the extent that the model lumps what remains unlumped in reality, and that fixed parameters are set at incorrect values by the user, a model acquires defects. These defects are in addition to those which arise from the imperfect nature of the model as a simulator of real-world environmental processes. Ultimately, model defects induce errors in parameters estimated through the calibration process, and in predictions made by the model. Errors in the latter arise both from their dependencies on error-prone parameters, and from model defects as they directly affect those predictions.

In light of the above, it is probably better to use the term "error" rather than "uncertainty" when discussing environmental modelling. Parameters estimated through the calibration process are in error by virtue of the fact that their estimation necessarily involves regularisation (i.e. simplification). The calibration process must then strive to achieve estimates of parameters which are of minimum error variance. As was discussed in the preceding chapter, where calibration is undertaken through the agency of highly parameterized inversion, the resolution matrix that describes the simplification underpinning the quest for parameter uniqueness is available as a by-product of the regularized inversion process. To some extent, then, post-calibration parameter error variance can be calculated, and indeed minimized. The same is not true where regularization is undertaken manually; an inability to calculate and minimize the potential for error arising from the necessity to simplify in order to achieve uniqueness is one of the principal drawbacks of manual regularization.

In practice the situation is more difficult than this. The imperfect nature of models engenders structural noise.
This can add to the propensity for parameter error in two ways:

- by decreasing the dimensionality of the solution space at which parameter error variance is minimized, thereby increasing the first term on the right of equation 4.10; and
- by increasing the model-to-measurement misfit contribution to the potential for parameter error described by the second term on the right of equation 4.10.

To make matters worse, the increased propensity for parameter error incurred by model structural defects is virtually impossible to quantify, though as Doherty and Welter (2010) point out, steps can be taken to reduce it through intelligent formulation of the calibration objective function. However, what is at once disturbing and comforting is that this increased penchant for parameter error may or may not result in an increased propensity for predictive error (in fact the opposite may be the case), this depending entirely on the nature of the prediction and its dependencies on model parameters. Where predictions are similar in character to observations employed in the calibration process, and where the calibration process allows parameters to adopt values that compensate for model imperfections as these affect its ability to simulate those aspects of environmental behaviour that are recorded in measurements comprising the calibration dataset, the accrual of parameter error through the calibration process can be entirely beneficial to the making of those types of predictions. In other cases, however, particularly those where predictions are of a different character to observations comprising the calibration dataset, the accrual of parameter error in this way should be assiduously avoided in pursuit of the goal of minimization of predictive error variance.
However, this variance will probably be inflated beyond its theoretical minimum, as the prediction must then be made by a model whose parameters are less constrained by the calibration dataset than they would otherwise be, this being borne of the necessity to eschew too low a level of calibration misfit in order to avoid any hint of parameter surrogacy. Total predictive error will, of course, include another term, this being that which describes the model's imperfections in relation to this prediction. Neither this term, nor its stochastic distribution, will in general be known in contexts where a prediction is different in character from members of the calibration dataset, for the model's integrity (or lack thereof) in making predictions of this type will not have been explored through the calibration process.

This rather bleak picture does not provide sufficient grounds for giving up on attempts to evaluate model predictive uncertainty. However, it does suggest the following.

As stated above, it is probably better to employ the concept of error rather than uncertainty when working in the context of environmental models. Despite the fact that the two terms are often used interchangeably (including in the present document), the distinction should be borne in mind. The potential for error in a prediction made by an environmental model will always be greater than its inherent uncertainty. The latter is notionally calculable through implementing Bayesian analysis in conjunction with a perfect model. As real-world models are imperfect, their predictions have a potential for error that is greater than the inherent uncertainty of those predictions, this arising out of the necessity to make predictions, and to analyse their potential for wrongness, using an imperfect model.

Uncertain parameters lead to uncertain predictions.
However, errors in parameters that compensate for imperfections in a model's simulation capabilities may or may not increase the potential for error in predictions made by that same model. Some models are explicitly designed for parameter compensation to work in a prediction's favour, this normally requiring that predictions of interest resemble the observations comprising the dataset through which the model is calibrated. Other models are not designed in this way, being specified as "physically based". However, no physically based model is perfect, and hence at least some degree of parameter compensation is unavoidable. This may or may not be a bad thing, as it may be possible to tune a model to make good predictions of one type (while unavoidably de-tuning its ability to make predictions of other types). This raises the spectre of prediction-specific calibration.

It will rarely, if ever, be possible to quantify either predictive uncertainty or predictive error variance with a high level of precision. Thus it will rarely, if ever, be possible to make statements such as "these thresholds mark the 95% confidence interval of this prediction" or "there is only a 5% chance that such an event will occur" in the environmental modelling context. Assessment of model predictive uncertainty/error necessarily involves a high degree of subjectivity.

The Predictive Error Term

As even a complex, physically-based model is an imperfect simulator of environmental reality, any prediction made by such a model will contain a component of error that reflects its imperfections. To some extent it may be possible to "calibrate out" at least some model imperfections in some circumstances. As already discussed, with care this may be legitimate where predictions of interest resemble observations used in the calibration process.
Alternatively, if model-to-measurement misfit cannot legitimately be reduced in this manner through allowing some parameters to soak up structural noise, the resulting level of model-to-measurement misfit provides quantification of the penchant for error in model predictions that are similar in nature to model outputs used in the calibration process. However, in cases where a prediction does not resemble model outputs used in the calibration process, it is not possible to quantify the contribution that model imperfections make to predictive error. In this case, recognition of this contribution to model predictive error can only take the form of a judiciously-chosen "engineering safety margin" added to other components of model predictive error potential that can be at least partially quantified.

It is beyond the scope of this document to discuss the means through which a structurally-induced model predictive error correction term should be formulated in different modelling contexts. Indeed, it is the author's opinion that this is a problem that is yet to be properly addressed, and is therefore deserving of research. For the moment, however, it is recommended that this predictive structural error term be treated (notionally if not actually) as a parameter. There will be cases where this parameter is directly estimable through the calibration process. In other cases it will be subjectively inferable through studying calibration misfit as it pertains to different observation types at different locations. In still other cases its size will be purely a matter of educated guesswork. As a parameter (and an uncertain one at that), the error term which must be added to a model prediction has an initial and estimated value of zero, this being an outcome of the fact that it does not receive information in a formal way through the calibration process, and hence lies entirely in the null space. This parameter can be included in the existing set of parameters comprising the vector k.
Its uncertainty (as inferred through the calibration process or assessed subjectively) is then included in the overall C(k) matrix of parameter uncertainty, along with that of the other parameters on which a model prediction is dependent. If the above strategy is followed for accommodation of model predictive structural noise, no further modifications to any of the linear analysis methodologies presented in the present chapter, or to any of the nonlinear analysis methodologies presented in the following chapter, are required. The model as it applies to the prediction is simply enhanced with this extra parameter, the value of which is added to the model prediction with which it is associated.

In setting up PEST input files for linear and nonlinear analysis, the predictive structural noise parameter can be included in the PEST control file as a parameter which no component of the model actually reads when the model is run under calibration conditions. In this way its uncertainty is propagated through to predictive uncertainty, unaffected by the calibration process. The parameter is only used under predictive conditions, as an additive term to a prediction. Different parameters of this type can be used with different predictions. In certain applications they may be specified as showing statistical correlation with each other, or even with physically-based parameters, this being done through assignment of appropriate values to pertinent elements of the C(k) matrix of innate parameter variability supplied by the user.

Linear Parameter and Predictive Uncertainty Analysis

Parameter Uncertainty

An uncertain variable is characterized by a probability distribution. The width of a probability distribution can be characterized by its variance, this being the square of its standard deviation. An uncertain vector represents a multiplicity of individual uncertain variables. The uncertainty of a vector is characterized by a multi-component probability distribution.
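The mechanics of this device can be sketched with a little linear algebra. In the hypothetical Python example below (all matrices and variances are invented for illustration), a structural-noise parameter with zero sensitivity under calibration conditions, and unit sensitivity to the prediction, is appended to the parameter set; its subjectively assessed variance then passes through the calibration process untouched, and adds directly to predictive uncertainty.

```python
import numpy as np

# Sketch of treating a predictive structural-noise term as an extra parameter,
# as the text recommends. Matrices and variances below are invented.
rng = np.random.default_rng(1)
n_obs, n_par = 8, 3
Z = rng.normal(size=(n_obs, n_par))          # sensitivities under calibration
Ck = np.diag([1.0, 1.0, 1.0])                # prior parameter covariance C(k)
Ce = 0.1 * np.eye(n_obs)                     # measurement-noise covariance C(eps)

# Augment: one extra "parameter" that no model output reads under calibration
# conditions (a zero column in Z), with subjectively assessed variance 0.25.
Z_aug = np.hstack([Z, np.zeros((n_obs, 1))])
Ck_aug = np.diag([1.0, 1.0, 1.0, 0.25])

# Prediction sensitivity vector: the structural term is simply added to the
# prediction, so its sensitivity is 1.
y = np.array([0.5, -1.0, 2.0, 1.0])

# Posterior predictive variance via equation (5.4a), with the extra parameter.
Ck_post = np.linalg.inv(Z_aug.T @ np.linalg.inv(Ce) @ Z_aug + np.linalg.inv(Ck_aug))
var_with = y @ Ck_post @ y

# The same calculation without the structural term.
Ck_post0 = np.linalg.inv(Z.T @ np.linalg.inv(Ce) @ Z + np.linalg.inv(Ck))
var_without = y[:3] @ Ck_post0 @ y[:3]

# The structural term is uninformed by calibration (it lies entirely in the
# null space), so its full variance passes straight through to the prediction.
print(var_with - var_without)   # 0.25, to floating-point precision
```

The zero column in Z_aug is the linear-algebra counterpart of a PEST parameter that no component of the model reads under calibration conditions.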
The width of this distribution in any direction of parameter space, and the degree to which random elements of the vector exhibit statistical inter-relatedness, can be characterized by an m × m covariance matrix, where m is the dimension of the vector (i.e. the number of elements contained in the vector). To date in this document, the covariance matrices C(k) and C(ε) have characterized the prior probability distribution of the vector of parameters k, and the probability distribution of the vector ε of noise associated with measurements comprising the calibration dataset h, respectively. Let C'(k) denote the covariance matrix that is associated with the posterior parameter probability distribution. This is the probability distribution that appears on the left side of Bayes equation, sometimes referred to herein as the post-calibration parameter probability distribution. Suppose also that the following conditions are met:

- model outputs are linear with respect to parameters, so that the action of the model on its parameters can be represented by a matrix (which we denote as Z);
- the prior parameter probability distribution is multi-Gaussian; and
- measurement noise is also characterized by a multi-Gaussian distribution.

It can be shown that under these circumstances C'(k) can be calculated using either of the following two formulas (which are mathematically equivalent).

C'(k) = [ZtC-1(ε)Z + C-1(k)]-1  (5.2a)

C'(k) = C(k) - C(k)Zt[ZC(k)Zt + C(ε)]-1ZC(k)  (5.2b)

Use of the first of these two formulas is more computationally efficient when the number of observations exceeds the number of parameters, whereas use of the second is more efficient when the opposite is the case. This follows from the dimensionality of the matrix which must be inverted in each case.

Predictive Uncertainty

Let s be a model prediction of interest. Let the elements of the vector y denote the sensitivities of this prediction to the elements of the parameter vector k.
As was previously expressed as equation (3.5), the prior variance of predictive uncertainty (i.e. the square of the standard deviation of uncertainty associated with the prediction s) is given by:

σ2s = ytC(k)y  (5.3a)

Obviously, the posterior variance of predictive uncertainty σ'2s is expressed as:

σ'2s = ytC'(k)y  (5.3b)

From (5.2) it then follows that:

σ'2s = yt[ZtC-1(ε)Z + C-1(k)]-1y  (5.4a)

σ'2s = ytC(k)y - ytC(k)Zt[ZC(k)Zt + C(ε)]-1ZC(k)y  (5.4b)

These two equations are equivalent; however (5.4b) is particularly illustrative. The first term on the right of this equation is the prior uncertainty variance. The second term is the amount by which this term is reduced through the history-matching process. The matrix that appears between yt and y in this second term can be shown to be positive semidefinite. Hence history-matching can never lead to an increase in the uncertainty of a prediction. At worst it can lead to zero reduction in prior predictive uncertainty; at best it can lead to a considerable reduction in this uncertainty. It all depends on the information content of the calibration dataset with respect to parameters to which the prediction is sensitive. (Note that, as we shall see, the same does not apply to post-calibration predictive error; it is indeed possible for predictive error to be higher after calibration than before calibration.)

Equation (5.4) can be used to calculate the post-calibration uncertainty of an individual parameter if desired. In this case the y vector is composed of zero-valued elements, except for the element pertaining to the parameter whose uncertainty is desired; this element is assigned a value of 1.0.

Linear Parameter and Predictive Error Analysis

Parameter Error

Equation (4.10) depicts the covariance matrix of post-calibration parameter error. It is repeated below as equation (5.5).
C(k̄ - k) = (I - R)C(k)(I - R)t + GC(ε)Gt  (5.5)

Recall that G is the matrix which describes the means through which the calibrated parameter set k̄ is computed from the observation dataset h, while R is the resolution matrix. Both of these are ultimately functions of the model matrix Z. Hence equations (5.5) and (5.2) involve the same components; they are just manipulated in different ways.

Predictive Error

As before, let a prediction s be calculable from model parameters k through the linear relationship:

s = ytk  (5.6a)

The prediction calculated using the calibrated model is given by:

s̄ = ytk̄  (5.6b)

where k̄ is the calibrated parameter set. Predictive error is given by the difference between these two quantities. That is:

s̄ - s = yt(k̄ - k)  (5.7)

Applying equation (3.4) to equation (5.5), predictive error variance is obtained as:

σ2s̄-s = yt(I - R)C(k)(I - R)ty + ytGC(ε)Gty  (5.8)

Pre-calibration predictive error variance can be formulated as a special case of equation (5.8). In this case both G and R are zero, so that equation (5.8) becomes equation (5.3a). Thus pre-calibration predictive error variance is the same as the pre-calibration variance of predictive uncertainty.

Where a model is a perfect simulator of reality, it can be shown that post-calibration predictive error variance as calculated using equation (5.8) is always greater than post-calibration predictive uncertainty variance, except in the special case where:

- calibration is undertaken using Tikhonov regularization;
- Tikhonov prior information equations specify that the preferred value of each parameter is equal to its pre-calibration value of minimum error variance;
- the weight matrix applied to Tikhonov prior information constraints is equal to C-1(k), where C(k) is the covariance matrix of the prior parameter distribution; and
- the weight matrix applied to the calibration dataset is equal to C-1(ε), where C(ε) is the covariance matrix of measurement noise.
These conditions will rarely be met in real-world modelling practice, for a variety of reasons, including the following.

- Neither C(k) nor C(ε) is perfectly known.
- Numerical stability of the inversion process will normally require that singular value decomposition play some role in calculation of the parameter set which is deemed to calibrate a model; use of Tikhonov inversion alone does not provide the same unequivocal guarantee of numerical stability.
- In all modelling contexts, at least some degree of regularisation is encompassed in the simplifications required to build a model and to furnish it with a useable parameterization scheme; no resolution matrix is available for this type of regularisation, though it may make a significant contribution to the potential for parameter and predictive error.

It follows that predictive error variance will always be greater than predictive uncertainty variance. Nevertheless, the goal of a well-implemented calibration strategy should be to minimize the difference between these two.

For the special case where calibration is undertaken using truncated singular value decomposition, and where the covariance matrices of prior parameter uncertainty and measurement noise are given by equations (4.11) and (4.12), equation (5.8) becomes:

σ2s̄-s = σ2k ytV2V2ty + σ2ε ytV1S1-2V1ty  (5.9)

The first term of equation (5.9) falls as the number of singular values used in the inversion process increases, whereas the second term rises. Where truncation occurs at zero singular values (which is equivalent to not calibrating at all), σ2s̄-s is equal to σ2k yty; this, of course, is the pre-calibration uncertainty of the prediction. Where too many singular values are employed in the inversion process the second term of (5.9) becomes very high, thereby showing the deleterious effects of over-fitting. If predictive error variance is plotted against the number of singular values used in the calibration process, a graph such as the following should result.
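The trade-off embodied in equation (5.9) can be traced numerically. In the Python sketch below (Z, y and the variances are invented, with C(k) = σ2k I and C(ε) = σ2ε I), the null-space term falls and the noise term rises as more singular values are retained, producing the characteristic trade-off curve.

```python
import numpy as np

# Tracing the two terms of equation (5.9) as the number of retained singular
# values grows. Z, y and the variances are invented; C(k) = sigma_k2 * I and
# C(eps) = sigma_e2 * I, as in equations (4.11) and (4.12).
rng = np.random.default_rng(4)
n_obs, n_par = 20, 8
# Scale the columns of Z so that its singular values decay strongly.
Z = rng.normal(size=(n_obs, n_par)) @ np.diag(10.0 ** -np.arange(n_par))
sigma_k2, sigma_e2 = 1.0, 1e-4
y = rng.normal(size=n_par)                 # prediction sensitivities

U, s, Vt = np.linalg.svd(Z, full_matrices=False)
V = Vt.T
total = []
for p in range(n_par + 1):                 # p = number of singular values retained
    V1, V2 = V[:, :p], V[:, p:]
    null_term = sigma_k2 * (y @ V2 @ V2.T @ y)    # falls as p grows
    noise_term = (sigma_e2 * (y @ V1 @ np.diag(s[:p] ** -2.0) @ V1.T @ y)
                  if p else 0.0)                  # rises as p grows
    total.append(null_term + noise_term)

# p = 0 is equivalent to not calibrating at all: error variance equals the
# prior uncertainty of the prediction.
print(abs(total[0] - sigma_k2 * (y @ y)) < 1e-9)  # True
# Retaining every singular value over-fits: error variance blows up.
print(total[-1] > min(total))                     # True
```

Plotting the values in `total` against p reproduces the U-shaped curve discussed in the text, with its minimum at the optimal truncation point.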
(As will be shown later in this chapter, this can be obtained using the PEST PREDVAR1 utility.)

Figure 5.1 Predictive error variance as a function of the number of singular values used in the inversion process.

Moore and Doherty (2005) show that graphs such as that depicted in Figure 5.1 can also be drawn to represent the effects on predictive error variance of varying the strength of application of regularization devices other than singular value decomposition. Irrespective of how a model is calibrated, a modeller is always faced with the decision of how strongly to apply his/her chosen regularisation scheme. The horizontal axis in Figure 5.1 can be considered to be the inverse of the strength of application of regularisation. When regularisation is total, and hence pre-calibration preferred parameter values are perfectly respected, the potential for predictive error is equal to pre-calibration predictive uncertainty. Where parameters are actually adjusted through the calibration process, but where a modeller is too heavy-handed in application of regularisation constraints, the information extracted from the calibration dataset is less than the information that resides in it; the full potential of the calibration process to reduce predictive error variance is therefore not achieved. On the other hand, where too good a fit between model outputs and field measurements is sought, measurement noise is amplified in the estimation of too many parameters, some of which should not be estimated at all because the information content of the calibration dataset with respect to these parameters is just too weak. In this case the potential for error in predictions made by the calibrated model may actually exceed the potential for error in predictions that would have been made if the model had not been calibrated at all! Obviously, optimality of regularisation is achieved at the minimum of the error variance curve of Figure 5.1.
Note that equations (5.8) and (5.9) can be used to calculate the error variance of an individual parameter if desired. In this case the y vector is composed of zero-valued elements, except for the element pertaining to the parameter whose error variance is desired; this element should be given a value of 1.0.

Over-Determined Parameter Estimation

Where the calibration problem is well-posed, parameters can be estimated using the standard equation for Gauss-Marquardt-Levenberg parameter estimation:

k̄ = (ZtQZ)-1ZtQh  (5.10)

It is easily shown that solution of the inverse problem through singular value decomposition leads to the same k̄; no truncation is necessary under these circumstances, however. Parameter and predictive error variance are therefore calculable using equations (5.5) and (5.8) respectively, with R equal to I. For the special case where Q is chosen to be the inverse of C(ε) (the covariance matrix of measurement noise), the formula for the post-calibration covariance matrix of parameter error becomes particularly simple:

C(k̄ - k) = (ZtQZ)-1  (5.11)

Where the noise associated with different measurements is statistically independent (as is often assumed to be the case), Q is diagonal. In this case use of a weighting matrix can be replaced by the use of individual measurement weights. If using PEST, a weighting strategy which assigns to each measurement a weight equal to the inverse of the standard deviation of noise associated with that measurement ensures a Q matrix which is the inverse of C(ε). Where prior information on parameter values is weak, so that C(k) specifies large pre-calibration uncertainties, equation (5.2a) becomes:

C'(k) = (ZtQZ)-1  (5.12)

Hence under these circumstances post-calibration parameter and predictive uncertainty variance become equivalent to post-calibration parameter and predictive error variance.
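A quick numerical check (invented matrices; a sketch only) shows why equation (5.11) holds when Q is the inverse of C(ε): propagating measurement noise through the G matrix implied by equation (5.10) collapses to (ZtQZ)-1.

```python
import numpy as np

# Sketch of equations (5.10) and (5.11) for a well-posed problem. With the
# weight matrix Q chosen as the inverse of C(eps), the covariance of the
# estimated parameters collapses to (Zt Q Z)^-1. Example matrices are invented.
rng = np.random.default_rng(5)
n_obs, n_par = 15, 4
Z = rng.normal(size=(n_obs, n_par))
Ce = np.diag(rng.uniform(0.05, 0.5, n_obs))  # independent measurement noise
Q = np.linalg.inv(Ce)

# G is the matrix implied by (5.10): kbar = G h.
G = np.linalg.inv(Z.T @ Q @ Z) @ Z.T @ Q

# General propagation of measurement noise through G ...
C_kbar_general = G @ Ce @ G.T
# ... and the simplified form of equation (5.11).
C_kbar_simple = np.linalg.inv(Z.T @ Q @ Z)

print(np.allclose(C_kbar_general, C_kbar_simple))   # True
```

The algebra behind the collapse is simply that Q C(ε) = I when Q = C-1(ε), so the inner matrices cancel.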
The covariance matrix appearing in equations (5.11) and (5.12) is recorded by PEST at the bottom of its run record file, and in its matrix (*.mtt) file, when parameter estimation is implemented using the Gauss-Marquardt-Levenberg method and the inverse problem of model calibration is indeed well-posed.

Derived Quantities

General

As already noted, both equations (5.4) and (5.8) ultimately rely on the same matrices; the Z matrix which specifies the action of the model does not appear in (5.8), but is used in calculation of the R and G matrices featured in that equation. Where a model is nonlinear its action, of course, cannot be represented by a matrix. In that case the Jacobian matrix is used in place of Z. The Jacobian matrix mimics Z; its elements are the derivatives of all model outputs used in the calibration process with respect to all parameters adjusted through that process. Ideally, calculation of the Jacobian matrix should be based on calibrated parameter values; presumably these lie somewhere near the centre of the uncertainty/error interval that equations (5.4) and (5.8) seek to explore. The effect of model nonlinearity on estimates of parameter and predictive uncertainty/error is hopefully thereby diminished.

In spite of the approximation that necessarily attends use of a linearity assumption in a nonlinear context, equations (5.4) and (5.8) can provide useful estimates of predictive uncertainty and predictive error variance. As will now be demonstrated, they can also provide estimates of related quantities. While calculation of these related quantities will also be subject to error, it is expected that the effects of the linearity assumption on these calculations will be somewhat diminished, as the principal focus of these calculations is to provide a basis for comparing different quantities, rather than to obtain absolute values.
Where compared quantities are affected in similar ways by an erroneous assumption of model linearity, the relativity of their values may nevertheless be preserved.

Parameter Contributions to Predictive Uncertainty and Error Variance

The first of these derived quantities is referred to as the contribution that a particular parameter, or group of parameters, makes to the uncertainty or error variance of a prediction. This is defined as the fall in predictive uncertainty/error variance that is accrued when perfect knowledge of the parameter, or group of parameters, is gained. It is easily computed using either of equations (5.4) or (5.8) by simply modifying the C(k) matrix to respect acquisition of this perfect knowledge, and calculating the diminution in uncertainty or error variance so accrued. Parameter or parameter group contributions to pre-calibration predictive uncertainty can be computed in similar fashion using equation (5.3a). In general, it is better to use equation (5.4) than equation (5.8) in making calculations of this type. It is sometimes found that parameter contributions to predictive error variance (as distinct from predictive uncertainty variance) can be negative. This is an outcome of the non-Bayesian nature of error as opposed to uncertainty. It can also arise from the discrete nature of singular values in contexts where calibration is implemented using truncated singular value decomposition. An outcome of the latter phenomenon is that the minimum of the predictive error variance curve may shift between singular values when the error variance is calculated on the basis of two very different C(k) matrices - one which reflects the true uncertainty of a particular parameter or parameter group, and another which assumes that the parameter or parameter group is perfectly known. Analysis of contributions to predictive uncertainty need not be restricted to parameters whose values are estimated during the calibration process.
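The contribution calculation just described can be sketched for a linear model as follows. The predictive-variance formula used here is the standard linear-Bayes expression of the kind that equation (5.4) embodies; the function names, and the device of shrinking a parameter's prior variance to near zero to represent "perfect knowledge", are illustrative assumptions.

```python
import numpy as np

def pred_var(y, Z, Ck, Ce):
    """Post-calibration predictive uncertainty variance for a linear model:
    y'Ck y - y'Ck Z' (Z Ck Z' + Ce)^-1 Z Ck y."""
    ZCk = Z @ Ck
    v = ZCk @ y
    S = ZCk @ Z.T + Ce
    return float(y @ Ck @ y - v @ np.linalg.solve(S, v))

def contribution(i, y, Z, Ck, Ce, eps=1e-12):
    """Fall in predictive variance accrued when parameter i becomes
    perfectly known (its prior variance and covariances shrunk to ~zero)."""
    Ck2 = Ck.copy()
    Ck2[i, :] = 0.0
    Ck2[:, i] = 0.0
    Ck2[i, i] = eps
    return pred_var(y, Z, Ck, Ce) - pred_var(y, Z, Ck2, Ce)
```

A small experiment with this sketch also reproduces a point made shortly: a parameter to which a prediction is insensitive can nevertheless make a positive post-calibration contribution when it is only jointly estimable with a parameter to which the prediction is sensitive.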
It is often fruitful to include in this analysis boundary conditions and system stresses which are fixed at assumed values during the calibration process. When building a model, many details of its construction are only poorly known. Hence a litany of reasonable assumptions is made pertaining to various aspects of the model in order to allow model construction to proceed. In normal modelling practice these reasonable assumptions are then retained during both the calibration and predictive phases of model usage, in spite of the fact that they may be erroneous. This can cause errors in model predictions, these arising from:
- the fact that certain parameters may assume erroneous values during the calibration process to compensate for the structural defects that are thus built into the model; and
- the fact that some or many model predictions may be sensitive to features of the model which are subject to error.
Features which are commonly fixed at reasonable values during the model construction, calibration and deployment processes include the following:
- inflow into some model boundary segments;
- head, pressure and concentration values assigned to other boundaries;
- historical pumping rates;
- recharge under historical and present land uses;
- elevations of the beds of rivers and streams;
- many other aspects of model design.
When undertaking linear predictive uncertainty/error analysis based on equations (5.4) and (5.8), these quantities can be awarded parameter status and thereby included in the analysis. Sometimes the unknown quantities themselves can be introduced as parameters into these equations. At other times surrogate parameters will be required that can simulate the effects of unknown quantities while not representing them directly.
For example, a spatial, pilot-point-based multiplier field can be applied to recharge in order to include in the analysis a suite of parameters which represent the fact that the disposition of recharge over a model domain is only imperfectly known. Similarly, seasonal multipliers can be applied to historical rainfall to simulate the effect of limited rain gauge coverage on the integrity of parameters estimated for a rainfall/runoff model, and of predictions which depend on these parameters. In all of these cases, the negative impact on overall model predictive performance of having to make possibly erroneous assumptions can be assessed by computing the contribution that parameters pertinent to these assumptions make to the uncertainty/error variance of critical model predictions. Figure 5.2 shows the outcomes of such an analysis when applied to a regional groundwater model; this is taken from Gallagher and Doherty (2007a). See James et al. (2009) for another example.

Figure 5.2. Pre-calibration (back row) and post-calibration (front row) contributions to the error variance of a prediction of interest made by different parameter and boundary condition types employed by a regional groundwater management model.

It is sometimes found when computing pre- and post-calibration contributions to predictive uncertainty/error variance that the post-calibration contribution of a particular parameter to the uncertainty/error variance of a particular prediction exceeds its pre-calibration contribution. This seemingly contradictory conclusion arises from the definition of contribution to predictive uncertainty/error variance given above. If a prediction is not sensitive to a parameter, then acquisition of perfect knowledge of that parameter does not decrease the uncertainty of that prediction under pre-calibration conditions.
However if that parameter can only be estimated in conjunction with another parameter during the model calibration process - one to which the prediction is indeed sensitive - then acquisition of perfect knowledge of the first parameter supplements the information available through the calibration dataset pertaining to the second parameter. Hence acquisition of perfect post-calibration knowledge of the first parameter reduces the uncertainty/error variance of the prediction. As will be demonstrated shortly, parameter contributions to predictive uncertainty and error variance can be calculated using the PREDUNC4 and PREDVAR4 utilities supplied with PEST.

Data Worth

Suppose that a particular prediction of future environmental behaviour is important to the management of that environment. The worth of a particular item of data in relation to that prediction can be defined as the reduction in uncertainty of that prediction that is accrued through acquisition of that data. An immediate outcome of the ability to compute predictive uncertainty is therefore an ability to compute the utility of data in reducing that uncertainty, and hence the worth of that data. An extremely useful feature of equations (5.4) and (5.8) when used for assessment of data worth in this manner is that neither predictive uncertainty nor predictive error variance depends on the actual values of the measurements that comprise an observation dataset, nor on the actual values of parameters that populate the model. According to these equations, predictive uncertainty and error variance depend only on the stochastic characterization of parameter variability as expressed by C(k), the stochastic characterization of measurement noise as expressed by C(ε), and on the sensitivities of calibration and predictive model outputs to parameters as expressed by Z and y. (For a linear model both Z and y are independent of parameter values.)
It follows that equations (5.4) and (5.8) can be used to calculate the reduction in uncertainty that would be accrued through acquisition of data that has not yet been gathered. All that is needed is the sensitivity to model parameters of the model outputs corresponding to the yet-to-be-acquired measurements. Obviously, these sensitivities can be computed by the model irrespective of whether corresponding field measurements have actually been taken or not. When assessing data worth, use of equation (5.4) is preferable to that of equation (5.8) for the same reasons as those discussed above for assessment of parameter contributions to predictive uncertainty. That is, use of equation (5.8) can be compromised to some extent by the granular nature of singular values, and by the fact that predictive error is less of an intrinsic quality of a system and its data than is predictive uncertainty. When simulating the acquisition of data in order to assess its worth, the user must specify elements of the C(ε) matrix pertinent to the tested data. Where measurement noise is independent from one data element to the next, this requires specification only of the uncertainty associated with individual measurements, as the C(ε) matrix is diagonal under these circumstances. Data can include either measurements of system state, or direct measurements of system properties. The sensitivities of data of the latter type to model parameters (i.e. the rows of the Z matrix corresponding to such data) are zero, except for a 1 in the position corresponding to the parameter whose value is measured. The (normally diagonal) elements of the C(ε) matrix corresponding to these measurements are calculated on the basis of the expected propensity for error in making them. Data worth computation based on equation (5.4) is undertaken using the PEST PREDUNC5 utility; data worth computation based on equation (5.8) is undertaken using the PEST PREDVAR5 utility.
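A minimal sketch of a data worth calculation follows. The names are hypothetical, and `pred_var` implements the standard linear predictive-variance expression underlying equation (5.4); note that, as the text emphasises, the candidate measurement's value is never needed - only its sensitivity row and its noise variance.

```python
import numpy as np

def pred_var(y, Z, Ck, Ce):
    """Linear post-calibration predictive variance:
    y'Ck y - y'Ck Z' (Z Ck Z' + Ce)^-1 Z Ck y."""
    ZCk = Z @ Ck
    v = ZCk @ y
    S = ZCk @ Z.T + Ce
    return float(y @ Ck @ y - v @ np.linalg.solve(S, v))

def data_worth(y, Z, Ck, Ce, z_new, var_new):
    """Reduction in predictive variance accrued by adding a candidate
    measurement with sensitivity row z_new and noise variance var_new
    to the existing (possibly null) dataset."""
    Z2 = np.vstack([Z, z_new])
    n = Ce.shape[0]
    Ce2 = np.zeros((n + 1, n + 1))
    Ce2[:n, :n] = Ce        # noise of existing data
    Ce2[n, n] = var_new     # noise of candidate measurement
    return pred_var(y, Z, Ck, Ce) - pred_var(y, Z2, Ck, Ce2)
```

A direct measurement of a system property is represented by a z_new row that is zero except for a 1 in the position of the measured parameter, exactly as described above.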
These utilities allow the worth of either individual data elements or of suites of data to be assessed. In both cases worth can be assessed in both of the following ways:
- through the reduction in predictive uncertainty/error variance that is accrued if data were added to an existing dataset (including the null dataset); and
- through the rise in predictive uncertainty that is incurred if data were lost.
It is essential that data worth assessment take place in a highly parameterized context, particularly if assessing the worth of acquiring data at different places within a spatial model domain such as that of a groundwater model. If artificial constructs such as zones of piecewise constancy are employed for regularization purposes, it will be found that optimal locations for acquisition of new data invariably lie at the boundaries of such zones. If real-world hydraulic properties are not, in fact, piecewise constant, the disinformation involved in the assumption of piecewise uniformity will create an artificial context for assessment of the worth of new data which does little more than reinforce this assumption. See Fienen et al. (2010) for a discussion of this topic. In most cases, data is of worth to the extent that it reduces the dimensionality of the null space, for the existence of this space is by definition an outcome of data insufficiency. The null space can only be explicitly or implicitly defined where a model is endowed with parameterization complexity that reflects real-world complexity.

Exercises

This section of the original document has been omitted.

6. How Wrong can a Prediction Be? Nonlinear Analysis

Error and Uncertainty

Bayes equation makes it clear that a model's parameters retain uncertainty even after they have been subjected to the history-matching process. That is, a model's parameters are still free to wiggle, even though the model has been calibrated. However their post-calibration variability is subject to constraints.
Obviously one of these constraints is that the parameters remain realistic, thereby respecting expert knowledge. This constraint is embodied in the prior probability distribution of parameters which is an integral part of Bayes equation. However a second constraint is imposed through the history-matching process. This constraint further restricts parameter wiggle room; it requires that as model parameters wiggle, they must wiggle in such a way that model-to-measurement misfit (as embodied in the likelihood term of Bayes equation) does not rise unduly. The same notions are expressed in equations (4.10) and (4.13), which focus on error rather than on uncertainty. Post-calibration variability is again subject to two constraints. If regularisation is achieved using singular value decomposition, these constraints on parameters are orthogonal to each other in parameter space. One set of constraints restricts solution space parameter variability to that which allows the model to retain its calibrated state. However parameter combinations that are orthogonal to this, and hence lie within the calibration null space, are given much greater freedom of movement for, by definition, their variation has no effect on model outputs for which there are complementary field measurements comprising the calibration dataset. Mathematically, for a linear model, their variability has no limits. It is only expert knowledge that imposes limits on their variability. As has been discussed, while Bayesian analysis provides a complete conceptual framework for assessment of parameter and predictive uncertainty, modelling practicalities normally require a two-step process of calibration followed by analysis of the potential for parameter and predictive error. Not only is the latter approach generally more computationally tractable than Bayesian analysis.
It also acknowledges the fact that sources of error in model predictions include not only information deficits in expert knowledge and hard site data; they also arise from the imperfect nature of a model as a simulator of environmental behaviour. Ideally, as has been stated, the potential for error in model predictions of interest should be reduced to its theoretical lower bound through the model construction and calibration processes, this lower bound being the inherent uncertainty of each such prediction. Paradoxically, the role played by parameters of a necessarily defective model during the history-matching process is such that the presence of these defects may either help or hinder the attainment of this minimum. However the ability to quantify the potential for predictive error diminishes as the model's relationship with the reality that it purports to simulate becomes less physically-based, even though the potential for predictive error may actually be reduced. This, unfortunately, is the murky world in which we work as modellers. It is worth repeating here a point that has been made earlier in this document, and which is salient to this chapter and to the next. It is this. There should be no expectation that a model can provide a correct prediction. However one model, or modelling approach, can be assessed as technically superior to another when, for predictions of interest:
- it can guarantee that correct predictions lie within computed error limits; and
- these error limits approach the inherent uncertainty limits of these predictions, and hence approach optimality.
From this it is apparent that quantification and minimization of predictive uncertainty/error limits should ideally be the focus of a modelling enterprise. Because of the numerical and practical difficulties associated with this endeavour, informal approaches must often be taken.
One such approach - that of using a model as a basis for scientific hypothesis-testing - is examined in the following chapter. In this chapter we focus on post-calibration predictive error analysis, keeping in mind the desirability of reducing a model's propensity for predictive error to the level of predictive uncertainty as conceptually provided by Bayes equation.

Constraints

The process of post-calibration predictive error analysis involves exploration of parameter variability subject to two constraints. These are that:
- the model remains calibrated; and
- parameters retain believability.
Loss of calibration status and loss of parameter credibility will, of course, happen by degrees. A reduction in either of these engenders lower probabilities for predictions that are calculated on the basis of these parameters. The range of predictive values associated with finite probabilities defines a predictive confidence region. Ideally a number should be associated with this region. For example, it would be nice to be able to say that there is a 95% probability that the true value of the prediction lies between an upper limit of x and a lower limit of y. Unfortunately, given the imperfect nature of models, it will rarely be possible to say this. Sadly, there is a high degree of uncertainty associated with our assessment of uncertainty. At the heart of the first term of equation (5.8) is the C(k) matrix of innate parameter variability. At the heart of the second term of equation (5.8) is the C(ε) matrix of measurement noise. Each of these is associated with an explicit or implicit probability distribution (implicit in most modelling studies). These are the reference distributions through which diminishing parameter, and hence predictive, probability is assessed as parameters are varied from values that are deemed to calibrate the model to those that are required if a specified predictive value is to occur.
That is to say, if a number is to be associated with any predictive confidence interval, this number must ultimately arise from these two probability distributions collectively, as these provide the metrics through which parameter credibility (or lack thereof) on the one hand, and acceptability of model-to-measurement misfit (or lack thereof) on the other, are assigned numbers from which confidence intervals can be calculated. Unfortunately, the probability distributions which C(k) and C(ε) represent are often complex beyond measure. While simplified geostatistical models are often used to calculate C(k), assumptions such as stationarity make little sense when applied over the disparate and interconnected but non-continuous, fractured and faulted geological materials that comprise the domain of any regional groundwater model, or the patchwork of changing land uses and heterogeneous soil types spread over the uneven topography that comprises the domain of any surface water or land systems model. The situation for C(ε) is no less complicated, for model-to-measurement misfit is an outcome of far more than measurement noise. It owes its origins to model structural defects that are as complex as the natural system of which the model is a necessarily imperfect simulator. The probability distribution of structural noise as it affects different types of measurements made at different locations within a complex model domain is therefore virtually impossible to quantify, and is probably singular. Nevertheless, some attempt must be made to limit the range of predictive possibilities to those that are compatible with expert knowledge as it resides in C(k), and with information available to us through historical measurements of system state, the integrity of which is characterized by C(ε). The situation is depicted in Figure 6.1, which builds on Figure 4.4. We do not know the reality parameter field k. However we know something of its projection onto the parameter solution space.
Our knowledge of this, however, is compromised by the fact that estimation of this projection takes place on the basis of a calibration dataset that is contaminated by measurement and structural noise. Hence there is some wiggle room in our assessment of this projection, the size of this wiggle room being set by the amount and nature of the (measurement and structural) noise associated with this data, that is, by C(ε). Any parameter set that projects onto that part of the solution space that is identified in this way as being feasible, and which can be assessed as having non-zero probability in terms of C(k), is a contender to be the real parameter field. It moves out of contention when either it is deemed to be unrealistic on the basis of C(k), or is deemed to provide a misfit with the calibration dataset which cannot be explained by measurement/structural noise as characterized by C(ε).

Figure 6.1. Post-calibration parameter variability. All of the parameter sets represented by dark arrows can legitimately be used by the model in exploring post-calibration predictive uncertainty, as they are all realistic as assessed in terms of C(k) and they all provide an adequate fit with the calibration dataset as assessed in terms of C(ε).

Well-Posed Inverse Problems

General

The surface water model calibration problem which comprises one of the exercises provided with this document constitutes an over-determined, or well-posed, inverse problem because all parameters are estimable on the basis of the calibration dataset. As has already been discussed, this is an outcome of the fact that regularisation is done:
- manually, through estimating only a few of the many parameters offered by the HSPF model; and
- structurally, through the fact that complex environmental processes are simulated in a lumped and averaged way.
In cases like this only the second term of equation (5.8) is used to calculate post-calibration predictive variability, as the first term is zero.
Linear analysis based on this term was described in the previous chapter. Two methods that are available through the PEST suite for implementing over-determined nonlinear analysis are now described. A third will be discussed in the following chapter. It should be noted, however, that if model run times are small, well-posed inverse problems constitute useful candidates for Markov chain Monte Carlo analysis of parameter and predictive uncertainty. Software to implement this analysis is not presently available through the PEST suite.

Constrained Maximization/Minimization

The process of determining the margin of predictive variability associated with a particular confidence level can be formulated as a constrained maximization/minimization problem. This problem is more easily formulated in the over-determined context than in the under-determined context, as only one set of constraints must then be applied to parameters, namely those that pertain to model-to-measurement misfit. The methodology is described by Cooley and Vecchia (1987), Vecchia and Cooley (1987), Cooley (2004) and Christensen and Cooley (2006). It is implemented by PEST when run in predictive analysis mode. Suppose that the objective function achieved through the calibration process is Φmin, and that weights used in definition of the objective function are correct in a relative sense, in that they properly reflect the propensity of each measurement to be degraded by measurement error (and structural error, to the extent that this is possible). Let s be a prediction of interest. Suppose that we want to determine the 95% confidence interval of this prediction. To do this we must carry out two optimization exercises. First we must maximize the prediction subject to the constraint that the objective function Φ rises no higher than a value which we denote as Φ0.95; then we must minimize the prediction subject to the same constraint.
More generally, suppose that we wish to determine the two-sided 1−α confidence interval of a prediction. Then we must maximize and minimize that prediction subject to the constraint that the objective function rises no higher than Φ0, where Φ0 is given by the first of the following two equations if the so-called simultaneous confidence interval of the prediction is explored, and by the second of the following two equations if the so-called individual confidence interval of the prediction is explored. The latter provides a narrower, and theoretically more correct, statistical bound, though there is a certain level of approximation involved in its usage.

\Phi_0 = \Phi_{min}\left[\frac{m}{n-m}F_\alpha(m,n-m)+1\right]   (6.1)

\Phi_0 = \Phi_{min}\left[\frac{t^2_{\alpha/2}(n-m)}{n-m}+1\right]   (6.2)

In the first of these equations F_\alpha(m,n-m) refers to the F distribution with (m,n−m) degrees of freedom; in the second equation t_{\alpha/2}(n−m) signifies a t distribution with (n−m) degrees of freedom. The constrained maximization/minimization procedure can include the effects of predictive noise if:
- the noise term is added to the prediction as part of the model;
- the noise term is treated as a parameter; and
- the noise term is also treated as an observation whose observed value is zero and whose weight is the inverse of its uncertainty.
These are done internally by PEST if the user requests it (see the example at the end of this chapter). Meanwhile, the objective function threshold associated with a given confidence interval is still given by (6.2) where the individual predictive confidence interval is sought. However equation (6.1) must be replaced by the following equation where the simultaneous predictive confidence interval is sought:

\Phi_0 = \Phi_{min}\left[\frac{m+1}{n-m}F_\alpha(m+1,n-m)+1\right]   (6.3)

Conceptually, the constrained maximization/minimization process is an efficient method for determination of the confidence band of a prediction of interest.
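The two objective-function ceilings can be evaluated directly. The following is a sketch only, assuming the Vecchia and Cooley (1987) forms Φ0 = Φmin[m·Fα(m,n−m)/(n−m) + 1] (simultaneous) and Φ0 = Φmin[t²α/2(n−m)/(n−m) + 1] (individual), with scipy supplying the quantiles; function names are hypothetical.

```python
from scipy.stats import f, t

def phi0_simultaneous(phi_min, m, n, alpha=0.05):
    """Objective-function ceiling for a 100(1-alpha)% simultaneous
    confidence interval: phi_min * (m/(n-m) * F_alpha(m, n-m) + 1)."""
    return phi_min * (m / (n - m) * f.ppf(1.0 - alpha, m, n - m) + 1.0)

def phi0_individual(phi_min, m, n, alpha=0.05):
    """Ceiling for a 100(1-alpha)% individual confidence interval:
    phi_min * (t_{alpha/2}(n-m)**2 / (n-m) + 1)."""
    tval = t.ppf(1.0 - alpha / 2.0, n - m)
    return phi_min * (tval**2 / (n - m) + 1.0)
```

As the text notes, the individual ceiling is the lower of the two, and hence yields the narrower confidence interval.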
As Gallagher and Doherty (2006) demonstrate, where the uncertainty band of only a single prediction must be explored, its model run requirements are much more parsimonious than those of Markov chain Monte Carlo analysis. However in real-world modelling its use incurs some difficulties. These include the following.
- As has already been discussed, the statistical structure of model-to-measurement misfit induced by model imperfections is unknown; furthermore, the greater the extent to which simplifications have been introduced to a model to formulate a well-posed inverse problem, the greater the magnitude of structural noise is likely to be. Unfortunately, the veracity of equations (6.1) to (6.3) is dependent on implementation of a weighting scheme which properly complements the statistical structure of measurement/structural noise in ways already described.
- Unless a prediction is very similar to measurements used in the calibration process, the stochastic character of the predictive noise term will not be known.
- Numerical performance of the constrained predictive maximization/minimization process degrades rapidly with the introduction of even a small amount of model numerical malperformance. Though the deleterious effects of model output granularity on calculation of finite-difference derivatives can be mitigated to some extent through use of appropriate PEST derivative control settings, and through implementation of a line search option as part of the constrained predictive maximization/minimization process, these measures all increase the run-time burden of this process considerably.
Notwithstanding these problems, this procedure can provide a means of rapid assessment of predictive wiggle room in many modelling contexts.

Calibration-Constrained Monte Carlo

Equation (5.11) provides the covariance matrix of post-calibration parameter error for a linear model.
In an over-determined inversion context it also provides the covariance matrix of the posterior parameter probability distribution, as equation (5.12) shows. This matrix is calculated and recorded by PEST whenever it undertakes over-determined parameter estimation. Conceptually, this matrix could be used as a basis for random parameter set generation. If predictive model runs were then undertaken using all such random parameter sets, the post-calibration error/uncertainty distribution of a prediction of interest could thereby be explored. In many circumstances this would provide a more efficient means of post-calibration random parameter set generation for a linear model than that provided by Markov chain Monte Carlo, as successive parameter sets could be very different from each other, and no parameter sets would be rejected (neither of which holds true when sampling the posterior parameter distribution using the Markov chain Monte Carlo methodology). Further model run efficiencies could be gained by using this matrix as a basis for implementation of targeted sampling methodologies such as Latin hypercube sampling. Where a model is nonlinear, the covariance matrix of equations (5.11) and (5.12) does not provide a true descriptor of post-calibration parameter variability. Exploration of parameter and predictive uncertainty through random parameter set generation based on this matrix can therefore only be approximate at best. Nevertheless, the integrity of such a sampling scheme can be improved if random parameter sets that are generated on the basis of this matrix are subjected to re-calibration. If the model is not too nonlinear, the computational effort required for adjustment of parameters in order to reduce model-to-measurement misfit to a suitable threshold, for example the objective function value described by equations (6.1) and (6.2), will not be large.
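The random parameter set generation step described above can be sketched as follows (the function name is hypothetical; for a linear model the sampled sets honour the posterior distribution directly):

```python
import numpy as np

def sample_posterior(k_cal, C_post, n_sets, seed=0):
    """Draw random parameter sets from a multivariate normal centred on
    the calibrated parameter values, with covariance equal to the
    (Z'QZ)^-1 matrix of equations (5.11) and (5.12)."""
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(k_cal, C_post, size=n_sets)
```

For a nonlinear model, each sampled set would then be adjusted (re-calibrated) until the objective function falls below the chosen threshold before being used for a predictive model run.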
In most cases this efficiency can be dramatically increased by re-use of a single Jacobian matrix when undertaking parameter adjustment, as is demonstrated in the worked example discussed later in this chapter.

Ill-Posed Inverse Problems

General

Ill-posed inverse problems are those for which parameters cannot be estimated uniquely because of the existence of a null space. While the existence of a null space may make the inverse problem of model calibration a little more difficult to solve than a well-posed inverse problem, recognition of its presence is often essential to the integrity of model predictive uncertainty analysis. In the past it has often been recommended that when calibrating an environmental model the principle of parameter parsimony should be respected. It was shown earlier in this document that this precept should not be an end in itself, but may be a logical outcome of the pursuit of a calibrated parameter field of minimum error variance. Parsimonization is an inherent part of the model calibration process. If calibration is based on highly-parameterized inversion, parsimonization is achieved mathematically (and hopefully optimally) as part of the inversion process itself. Parsimonious parameterization of a model that simulates complex processes in a heterogeneous environment can constitute a profound obstacle to the integrity of predictive uncertainty analysis, however. If parameterization complexity cannot be estimated on the basis of a given calibration dataset because of a paucity of information within that dataset, this does not make the complexity go away. In fact it makes the need for its inclusion in the uncertainty analysis process even stronger, for to the extent that a prediction is sensitive to parameters, or to parameter combinations, that are inestimable through the calibration process, the uncertainty of that prediction is not diminished through calibration.
It is the author's experience that the uncertainty associated with many predictions in many contexts is dominated by the null-space term. Where the predictions required of a model are similar to measurements used in calibration of that model, a simplified parameter set that emerges from manual and/or structural regularization may provide an adequate basis for post-calibration uncertainty analysis. By definition, in these circumstances the prediction is sensitive mainly to those parameters to which model outputs used in the calibration process are sensitive (provided that conditions that will prevail when a prediction is required are not too different from those which prevailed when calibration was effected). Furthermore, if parameter simplification engenders structural noise, the nature and extent of this noise as it affects model outputs used for calibration (and hence for predictive) purposes can be determined. Alternatively, if model structural defects require that parameters assume compensatory roles to allow a good fit between model outputs and members of the calibration dataset to be obtained, this same compensatory role is likely to have a beneficial effect on the ability of the model to make predictions of the same type at the same locations. However many modelling contexts are very different from this. Models are often built precisely because conditions are about to be changed, or because predictions must be made of a quantity, or at a location, for which little historical hard information is available. It is precisely for this reason that a physically-based model is chosen for environmental simulation, and that a considerable level of numerical complexity may be devoted to the simulation of environmental processes encapsulated in the model. In fact, an important design consideration for many models is that a process should not be excluded from the model if a prediction of interest may be sensitive to it.
Given that the uncertainty associated with the making of that prediction will probably be high, the same logic must apply to the parameterisation that is associated with prediction-salient processes (and the heterogeneity thereof) if the uncertainty associated with the prediction is to be properly explored. As has been stated, it is this uncertainty (and not an illusion of predictive certainty) that must then form the basis for the making of decisions to which the prediction pertains.

Highly-parameterized, nonlinear, post-calibration predictive uncertainty analysis is therefore a topic that must be at the heart of modern model usage. Two methods are discussed below. The first is generally impractical, but is an extension of a methodology that was discussed above for use in the over-determined context. The second is more practical. It can be (and has been) applied in modelling contexts of considerable parameter and process complexity. A third methodology will be discussed in the next chapter.

Constrained Predictive Maximization/Minimization

In principle, and in practice, PEST can be used in predictive analysis mode to undertake constrained predictive maximization/minimization in the highly-parameterized context. In doing this, two constraints must be enforced. The first is on model-to-measurement misfit, while the second is on null-space-projected parameter departures from their calibrated values. The PEST REGPRED utility (REGPRED stands for regularized predictive uncertainty analysis) automates construction of a PEST input dataset that can be used to implement this process. Tonkin et al. (2007) discuss this methodology and demonstrate its use.

While of theoretical interest, this process is unlikely to find much application in real-world modelling for at least the following reasons. In spite of the fact that its efficiency can be increased through the use of predictive super parameters as Tonkin et al. (2007) describe, it is a model-run-intensive numerical procedure.
As for its over-determined counterpart, the integrity and efficiency of the constrained maximization/minimization procedure is easily degraded where model numerical imperfections degrade the integrity of finite-difference derivative calculations. Furthermore, separate constrained maximization/minimization processes must be undertaken to obtain prediction values corresponding to different confidence levels. Thus the attainment of a relationship between prediction value and confidence level requires that an inordinately large number of model runs be carried out.

Null Space Monte Carlo

The null space Monte Carlo (NSMC) procedure is unique to PEST. It provides a mechanism for rapid generation of diverse parameter fields which satisfy both the model-to-measurement misfit and reality constraints required for exploration of post-calibration parameter uncertainty. By making a model prediction using many such parameter sets, the calibration-constrained variability of that prediction can be explored. The method is not Bayesian, for it has its roots in equation (5.5) rather than in Bayes equation. However, Bayesian analysis is very difficult to implement where models are highly parameterized, nonlinear and possess long run times. Strictly speaking, NSMC provides a methodology for exploration of post-calibration parameter and predictive error rather than of post-calibration parameter and predictive uncertainty. However, the outcomes of such an analysis are not expected to be significantly different from those of a Bayesian analysis, and can be acquired with considerably less numerical difficulty.

Another advantage of the NSMC method is that it can be easily adapted to a user's computing circumstances. If certain compromises are made (these being explained below), the numerical efficiency of the method can be greatly increased. This may be necessary where model run times are high and/or where computing resources are limited.
Though the need for such compromises may not always be welcomed, it must be remembered that compromise is a better alternative than doing nothing to explore post-calibration predictive uncertainty for, with the exception of the methodology that is discussed in the following chapter, there is simply no other practical methodology available for use in conjunction with highly parameterized models with long run times.

The NSMC process takes its inspiration from Figure 6.1. It attempts to generate many different parameter fields which have the same solution space projection as that of the parameter field which calibrates the model. (Ideally the latter should have no null space projection at all, for it attempts to provide the simplest means of achieving a desired level of model-to-measurement fit; any attempt to wander off the hyperplane that constitutes the solution space into the null space is likely to increase the potential for predictive error, as there is no guarantee that such a journey into the null space is in the right direction.)

The NSMC process is implemented as follows. Random parameter fields are generated using the prior parameter probability distribution; the covariance matrix of the prior parameter probability distribution is, of course, C(k). The calibration parameter field is subtracted from each of these random parameter fields to yield a collection of random parameter difference fields. In each case the difference field is projected onto the calibration null space. The projected difference field is then added back to the calibration parameter field. If the model were linear, the process would end here, as the new parameter field would be guaranteed to calibrate the model. Because it is not, the model is re-calibrated through adjustment of the new parameter field. However only solution space components of this field are adjusted.
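The generation-and-projection steps just described can be sketched for a linear toy problem. Everything below is hypothetical (a random Jacobian, an identity prior covariance standing in for C(k)); the sketch shows only the core NSMC manipulation: draw random fields from the prior, project their difference from the calibrated field onto the null space, and add the projected difference back.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear setup: 6 parameters, 3 observations, so the
# calibration null space is three-dimensional.
n_par, n_obs, n_real = 6, 3, 200
J = rng.standard_normal((n_obs, n_par))      # Jacobian at calibration
k_cal = rng.standard_normal(n_par)           # calibrated parameter field
C_k = np.eye(n_par)                          # assumed prior covariance C(k)

# Null-space projector built from the SVD of the Jacobian.
_, s, Vt = np.linalg.svd(J)
rank = int(np.sum(s > 1e-10 * s[0]))
P_null = Vt[rank:].T @ Vt[rank:]

# NSMC core: draw random fields from the prior, project their difference
# from the calibrated field onto the null space, and add it back.
L = np.linalg.cholesky(C_k)
k_rand = (L @ rng.standard_normal((n_par, n_real))).T
k_nsmc = k_cal + (k_rand - k_cal) @ P_null

# For a linear model every generated field reproduces the calibration
# outputs exactly; a nonlinear model would now be re-calibrated by
# adjusting solution-space (super-parameter) components only.
print(np.allclose(k_nsmc @ J.T, np.tile(J @ k_cal, (n_real, 1))))   # True
```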
Hence re-calibration relies on adjustment of only a limited number of super parameters rather than all parameters used by the model. Further gains in efficiency are achieved through re-use of the same set of super parameter sensitivities for the first iteration of all random field re-calibration exercises. Note that while the need to re-calibrate in this fashion may be seen as an undesirable consequence of model non-linearity, it does provide an opportunity to introduce necessary variability to solution space parameter components.

The outcome of an NSMC exercise is a suite of parameter fields which can be used for the making of any prediction required of the model. The uncertainty associated with that prediction can thereby be assessed through construction of an empirical probability density function.

Means by which the efficiency of this process can be further increased (with some sacrifice to the integrity of that process) include the following.

- The random parameter field generation process based on C(k) can be centred on the calibrated parameter field rather than on the pre-calibration expected parameter field.
- Parameter variability as encapsulated in C(k) can be reduced through use of a surrogate C(k) matrix with narrower probability intervals when generating random parameter fields. This reduces the extent to which different (null-space-projected) random parameter fields de-calibrate the model, and hence reduces the numerical effort required to achieve model re-calibration.
- The objective function threshold at which a model is deemed to be recalibrated can be made higher than that which would be considered statistically correct on the basis of the stochastic properties of measurement noise as encapsulated in C(ε). Given the unknown stochastic nature of structural noise which is, in most cases, the dominant contributor to calibration misfit, this is unlikely to introduce a significant loss of integrity to the NSMC process.
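The empirical probability density function mentioned above is built directly from the ensemble of prediction values obtained by running the model once per NSMC parameter field. A minimal sketch follows; the prediction ensemble here is synthetic stand-in data, since producing a real ensemble requires the model runs themselves.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for prediction values obtained by running the model once per
# NSMC parameter field (here: 500 synthetic values).
predictions = 10.0 + 2.0 * rng.standard_normal(500)

# Quantiles of the ensemble give calibration-constrained prediction limits.
p05, p50, p95 = np.percentile(predictions, [5, 50, 95])
print(f"median {p50:.2f}, 90% interval [{p05:.2f}, {p95:.2f}]")

# Histogramming the ensemble yields an empirical probability density
# function for the prediction (it integrates to one).
density, edges = np.histogram(predictions, bins=30, density=True)
print(abs(np.sum(density * np.diff(edges)) - 1.0) < 1e-9)   # True
```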
As Tonkin and Doherty (2009) explain, the NSMC process can be made even more sophisticated when implemented in conjunction with groundwater and other spatial models, by allowing calibration-constrained random heterogeneity to be represented on a cell-by-cell or element-by-element basis. To accomplish this, the NSMC process must be slightly modified as follows.

- A spatial parameter field is generated using a cell-by-cell stochastic field generator such as the FIELDGEN utility supplied with the PEST Groundwater Data Utility suite.
- That field is sampled at a discrete number of points. These samples comprise a random parameter set for a model parameterization scheme based on pilot points.
- A smooth field is interpolated between the pilot points; the difference between the smoothed, interpolated field and the stochastic field is calculated.
- The null space projection operation discussed above is applied to the pilot point parameter field.
- When super-parameter re-calibration is applied to the pilot point parameters, the difference field obtained as above is added to the parameter field obtained through spatial interpolation between pilot points.

See Tonkin and Doherty (2009) and the PPSAMP Groundwater Data Utility for further details.

Exercises

This section of the original document has been omitted.

7. Hypothesis-Testing and Pareto Methods

Where are we at?

Conceptually, Bayes equation provides a mechanism for synthesis of information contained in expert knowledge and in measurements of system state. Unfortunately, problems arise in applying Bayes equation in the environmental modelling context. These include the following.

Numerically, working directly with probability distributions is difficult, unless models are linear and/or probability distributions are amenable to simple analytical description.

Encapsulation of expert knowledge in a prior probability distribution is often a difficult matter.
In the environmental modelling context, expert knowledge of complex geological, environmental, land use and climatic systems is often slight. Furthermore, the variables that describe the nature of spatial and temporal heterogeneity as it applies to influential system properties are categorical rather than continuous, and mathematical simplifications such as stationarity or homoscedasticity are inappropriate.

Numerical models are defective simulators of environmental behaviour. This has a number of repercussions. It gives rise to significant structural noise under both calibration and predictive conditions. The stochastic character of this noise (which is needed for definition of the likelihood function in Bayes equation) is unknown. Many parameters that are adjusted through the history-matching process must assume, to at least some degree, surrogate roles to compensate for a model's inadequacies in simulating past system behaviour. Depending on the type of prediction required of a model, these roles may either enhance or detract from its ability to make those predictions. In either case, the link between parameter optimality and predictive optimality is broken.

Analysis of predictive uncertainty in the environmental modelling context will therefore be compromised. In fact, fully quantitative exploration of uncertainty may be impossible, as many aspects of the analysis must be heuristic. This does not detract from the importance of assessing model predictive uncertainty. As has already been stated, such an assessment is fundamental to the making of important decisions - decisions that cannot be avoided and that must be made with as high a level of scientific integrity as possible. However, it does raise the question of how to approach this matter and whether, given its necessarily subjective, but nevertheless numerically intensive, nature, direct application of Bayes equation is the best way to go about it.
A distinction has been made in this document between analysis of the potential for predictive error and analysis of predictive uncertainty. Error is what we, as modellers, carry; our goal is to reduce the potential for error associated with a given prediction to its theoretical lower limit, this being the inherent uncertainty of that prediction given the information that is presently at hand. Working with error rather than uncertainty allows us many conveniences that can help overcome the problems associated with direct application of Bayes equation. In particular, use of Bayes equation to estimate predictive uncertainty is replaced by a two-step process of model calibration followed by post-calibration predictive error analysis.

Despite its convenience, this approach still leaves us with two major problems. As equation (5.8) demonstrates, formal assessment of model predictive error still requires that the user provide assessments of the stochastic character of pre-calibration parameter uncertainty, and of the stochastic character of measurement/structural noise. In equation (5.8) these are represented by the C(k) and C(ε) covariance matrices respectively.

The significance of model structural defects should not be underestimated. Defects arise because no model can provide perfect simulation of all aspects of environmental behaviour that are salient to a particular environmental outcome that we would like to predict. Their existence is not always a cause for concern, however, as construction of the perfect model may not be a fruitful pursuit anyway. As discussed earlier in this document, while a perfect model may, in theory, provide the best mathematical repository for expert knowledge, it may make a very poor tool for extracting vital information from historical measurements of system state. To the extent that a prediction resembles historical measurements of system state, the importance of these measurements to the making of that prediction is increased.
While a simplified model may provide receptacles for this kind of information that are only loosely linked to nameable system properties, at least these receptacles are accessible, thanks to manageable model run times and elimination of problematical numerical behaviour. However, as model abstraction increases with increased model simplification, and as parameters increasingly assume roles that compensate for model inadequacies, their capacity to act as receptacles for information arising from expert knowledge is diminished. In many circumstances this may be a small price to pay for greatly enhanced predictive ability. In other circumstances, particularly those where predictions are of very different types, or must be made under very different circumstances, from those comprising the calibration dataset, even a small level of abstraction may incur significant and unquantifiable model predictive error.

Where do we go from here?

The above brief analysis suggests that there is no single path forward. Nevertheless, the purpose of this chapter is to explain a methodology that, in the author's opinion, has the potential to provide a useful basis for model usage in many decision-making frameworks. Underpinning its use are a number of assumptions that can be summarized as follows.

Models that are used as a basis for environmental decision-making are deployed in environments where data have been gathered for many years. History-matching is thus an important part of the model development process.

The capacity exists to build a model that, though not providing an exact replica of system processes, is nevertheless physically-based to a reasonably high degree. Nevertheless, the model that is ultimately used as a basis for environmental management in any particular context will have many numerical imperfections.
Some of these imperfections will arise from the necessity to make assumptions pertaining to elements of the system that are imperfectly known, for example the disposition of geological layering and the nature of historical system stresses. Others will arise from model simplifications that are adopted to forestall numerical instability and/or excessive run times.

Though a considerable amount of expert knowledge exists for any study area at which an environmental model is employed, this knowledge is frustratingly inadequate - especially as it pertains to the degree and nature of heterogeneity that prevails in subsurface hydraulic properties, the nature and magnitude of historical system stresses, the spatial and temporal variability of present and historical land uses, etc.

Despite the difficulties that they present, these aspects of real-world environmental modelling do not erode the potential for numerical simulation to provide a sound basis for decision support. However, its ability to achieve this potential will depend on the manner in which it is employed, and on the philosophical underpinnings of its use.

The Scientific Method

The basis of the so-called scientific method (whose rigorous exposition is credited to the great philosopher of science Karl Popper) is this: an hypothesis is proposed; that hypothesis is then tested using an appropriate experiment. On the basis of the outcomes of that experiment it may be possible to invalidate the hypothesis by demonstrating its inconsistency with data gathered and processed through the experiment. If the hypothesis cannot be invalidated, then it remains viable. It can never be validated, however. Nevertheless, it may achieve something approaching this status as competing hypotheses are successively invalidated through clever, targeted and incisive experiments that are designed to do so.

As discussed earlier in this document, environmental decision-making is often based on avoidance of an unwanted event.
Its unwanted status may emerge from the high monetary, social or environmental costs associated with its occurrence. Environmental managers are then charged with preventing that occurrence.

The occurrence of an unwanted event can be considered a scientific hypothesis. Even after management practices are proposed whose intent is to prevent occurrence of the event, the hypothesis that it can nevertheless occur maintains its status until it is invalidated through scientific inquiry. The purpose of a modelling exercise in a management setting that is marked by avoidance of this event must be to provide a basis for rejection of the hypothesis that the event can occur if a certain management option is taken. This is achieved through processing all available information with the model - both expert knowledge and hard data arising from measurements of system state. If such processing leads to rejection of the hypothesis that the unwanted event can occur despite the adoption of a proposed management practice, then model-based environmental data analysis has provided the support to the decision-making process that this process requires.

In practice, the situation may not be quite as clear-cut as just outlined. For example, the hypothesis to be rejected may need to be modified to that of event occurrence without early enough warning for preventative action to be taken. Alternatively, while it may not be possible to eliminate the possibility that an untoward event can occur, it may be possible to ascribe such a low probability to its occurrence that society is willing to take the risk in order to receive the benefits of the proposed development that may have created the need to explore the hypothesis in the first place. Notwithstanding these variations, the central premise remains.
A modelling exercise should constitute an incisive numerical experiment that attempts to process data optimally in relation to a particular end - this being rejection of the hypothesis that an unwanted event will occur if a certain management practice is adopted.

So on what grounds can a hypothesis be rejected? It can be rejected if its occurrence is incompatible with all available information. That information is composed of expert knowledge, measurements of system properties, and measurements of system state. Expert knowledge and direct measurements of system properties constitute the prior information term of Bayes equation and the C(k) matrix of innate parameter variability. Measurements of system state constitute the calibration dataset. The errors associated with these measurements, together with model imperfections that impede the flow of information from these measurements, form the basis for calculation of the likelihood term of Bayes equation and the C(ε) covariance matrix of measurement/structural noise.

These concepts provide the philosophical basis for deployment of simulation technology as a scientific instrument through which modellers (as scientists) may be able to differentiate between events which can happen and those that cannot. This means of model usage can be summarized as follows. A model is deployed specifically to test the hypothesis that an unwanted event will occur. This is done by including system states corresponding to that event in the model's calibration dataset, along with historical measurements of system state. The model is then calibrated against this composite dataset. The hypothesis that the proposed state can eventuate can be rejected if:

- the model cannot replicate the occurrence of the event in the future while simultaneously respecting historical system behaviour; or
- the model can accommodate the simultaneous occurrence of the proposed and historical system states only through use of parameters that are unrealistic.
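The rejection logic just described can be illustrated with a deliberately simple linear toy problem; all matrices, numbers and thresholds below are hypothetical. The hypothesised event is appended to the calibration dataset as an extra "observation", the composite dataset is fitted, and the hypothesis is rejected if either the fit to historical data or the realism of the fitted parameters suffers too much.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical linear model y = X @ k with 8 historical observations
# and 2 parameters.
X_hist = rng.standard_normal((8, 2))
k_true = np.array([1.0, -0.5])
y_hist = X_hist @ k_true + 0.01 * rng.standard_normal(8)

# Hypothesised unwanted event: a future model output taking value 50,
# with (hypothetical) sensitivity row x_pred.
x_pred = np.array([0.3, 0.7])
y_event = 50.0

# Calibrate against the composite dataset (historical data + event).
X_c = np.vstack([X_hist, x_pred])
y_c = np.append(y_hist, y_event)
k_fit, *_ = np.linalg.lstsq(X_c, y_c, rcond=None)

# Reject the hypothesis if fitting the event requires either excessive
# misfit to historical data (judged against C(eps)) or unrealistic
# parameters (judged against C(k)). Thresholds here are illustrative.
hist_misfit = np.sqrt(np.mean((X_hist @ k_fit - y_hist) ** 2))
par_departure = np.max(np.abs(k_fit - k_true))
reject = hist_misfit > 0.1 or par_departure > 3.0
print(reject)
```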
Failure to simulate historical conditions is assessed in terms of C(ε). Unacceptability of parameter fields is assessed in terms of C(k). The two probability distributions that these covariance matrices represent are (as always) of pivotal importance. However, their utility in the decision-making context is enhanced if they are used in a way that is slightly different from that embodied in Bayes equation, or even in the error analysis equations presented in previous sections, as the imperfect nature of numerical models as simulators of environmental behaviour can then be better taken into account.

The Role of Model Calibration

Two calibration contexts are thus proposed. In one of these (which may in fact comprise many calibration exercises) the model is used in an hypothesis-testing capacity in order to assess the likelihood or otherwise of an unwanted future event, and to develop management plans which can forestall the occurrence of that event. However, prior to this, the model is calibrated against historical data alone, this constituting model calibration in its traditional sense. The present subsection examines what traditional model calibration should hope to achieve in light of its role as a precursor to model calibration against a dataset which includes an hypothesized event, undertaken in order to assess the likelihood or otherwise of that event's occurrence.

Calibration against an historical dataset is often a process of compromise, and one that involves a high degree of subjectivity. Where a highly-parameterized approach is taken to model calibration, the process of compromise can be given a more scientific foundation, and can assist in creation of a sharper instrument through which hypotheses pertaining to future events can subsequently be tested. Ideally, the degree of parameter complexity ascribed to a model should be commensurate with its process complexity.
Presumably the decision to include representation of certain processes within an overall environmental simulation exercise is based on the fact that the model would lose its relevance to decision-making if these processes were omitted. The same logic dictates that if these processes are affected by heterogeneity of the system properties that govern them, then this heterogeneity may have an effect on model outcomes on which decisions may rest. Failure to represent a propensity for system property heterogeneity in a model-based hypothesis-testing procedure may therefore result in the drawing of erroneous conclusions regarding the possibility or otherwise of hypothesised events.

A problem with the inclusion of many parameters, however, is that their use can result in over-fitting of the model to historical system behaviour. In contrast, use of too few parameters can result in failure to sufficiently fit historical data - and therefore failure to extract all of the information that resides in it. Ideally, the intelligent use of mathematical regularization should provide the perfect compromise between these two extremes. Thus a model is endowed with a parameterization density that is commensurate with the sensitivity of key model outcomes to system property heterogeneity; regularisation ensures that only as much heterogeneity is actually introduced to the model domain as can be supported by the data. Meanwhile, the calibration null space is sufficiently well populated for the range of possibilities associated with a particular model outcome to be assessed, with due recognition paid to the inability of the calibration process to constrain some (or perhaps many) of these outcomes.

In practice, the determination of an appropriate level of regularisation to employ, even when regularisation is mathematically implemented, is often a matter of subjectivity.
If regularisation is applied too strongly, model-to-measurement misfit is increased while the emergence of parameter heterogeneity within the model domain is suppressed. On many occasions a modeller will choose to do exactly this if he/she judges that emergent heterogeneity indicates the adoption by model parameters of surrogate roles that compensate for model inadequacies. Violation of an explicit or implicit C(k) matrix of acceptable parameter variability provides the modeller with the evidence that he/she needs to identify this occurrence, and hence to take remedial action through imposition of stronger regularisation constraints. In doing this, he/she denies him/herself as good a fit between model outcomes and historical measurements of system state as he/she would otherwise like. Thus respect for C(k) takes precedence over respect for C(ε), as a greater level of model-to-measurement misfit is tolerated than would be expected on the basis of measurement noise alone.

On the other hand, a modeller may have a high degree of confidence that his/her model provides accurate simulation of system behaviour. Furthermore, as is often the case, he/she may have little idea of the innate variability of system properties, and of the propensity for local heterogeneity to prevail within certain parts of the model domain. Model calibration may provide a great deal of information in this regard, especially if it is implemented using a strategy based on highly parameterized inversion that provides the calibration process with the flexibility to introduce complexity to the model domain if and where it is needed. In a case such as this, respect for C(ε) may take precedence over respect for C(k), as the modeller may be loath to reject information pertaining to the existence of system property heterogeneity by ascribing the emergence of such heterogeneity to structural noise.

There is thus a tension between C(k) and C(ε).
The highly-parameterized model calibration process plays one against the other as model-to-measurement misfit is traded off against parameter field heterogeneity. This should not be construed as a disadvantage of the highly parameterized approach to model calibration. Rather, it is a distinct advantage, because it endows the modeller with the ability to apply his/her subjective judgment in a manner that is unimpeded by the necessity to employ parsimonious parameterization schemes in order to create a well-posed inverse problem where none actually exists, this being a requirement of older calibration methodologies.

It is thus apparent that an important outcome of the calibration process (an outcome whose importance has been insufficiently recognized to date) is a subjective reconciliation by the modeller of C(k) with C(ε). However, this reconciliation is often implicit rather than explicit because rarely, in real-world modelling practice, is either of these matrices defined - or even needs to be. Rather, these matrices are implicit in the heterogeneity that a modeller is prepared to accept within the model domain on the one hand, and in the model-to-measurement misfit that he/she is prepared to tolerate on the other. As we have seen, whether implicit or explicit, both of these matrices are crucial to the assessment of model predictive uncertainty, and hence to the hypothesis-testing procedure that constitutes the next stage of model deployment.

The task of a properly orchestrated process of model calibration against historical measurements of system state is thus to provide a platform for the hypothesis-testing that will take place thereafter, which, in turn, provides a platform for model-based decision-making. The process of calibrating a model against a historical dataset thus has three important outcomes.
These are:

- a parameter field that can be considered to approach that of minimum error variance;
- through this parameter field, an assessment of the degree of system property variability that may prevail within the model domain, this defining an implicit C(k) matrix; and
- an assessment of the degree of model-to-measurement misfit that accompanies simulation of environmental processes within the particular study area, this defining an implicit C(ε) matrix.

When hypotheses pertaining to future system behaviour are tested, these will be rejected if simulation of their occurrence requires too great a departure from the parameter field that emerged during the calibration process, or from the fit with historical data that was achieved during that process, or both. What constitutes "too great" a departure will probably be the outcome of subjective assessment. However, it will rest heavily on what was learned through the calibration process - this including the estimated parameter field itself, the tolerable level of heterogeneity that may exist in this field, and the tolerable inability of the model to exactly replicate past system behaviour. Introduction of excessive heterogeneity or excessive misfit as a necessary condition for allowing the model to replicate an hypothesised future event provides a basis for deeming that event to be of low likelihood. It is through calibrating the model against historical data alone, as a precursor to testing hypotheses of future system behaviour, that "excessive" now has a metric, albeit probably a subjective one.

It is important to note that recognition of model structural defects is an implicit part of the calibration and hypothesis-testing processes as thus described. Model-to-measurement misfit will almost certainly be greater than measurement noise. Parameter variability, as it is introduced to the calibrated parameter field, may be greater than that which a modeller would ascribe to parameters on the basis of expert knowledge alone.
The former recognises the existence of structural noise; the latter recognizes the fact that parameters may need to assume unusual values that compensate for a model's defects, and that this may actually enhance a model's ability to simulate both past and future system behaviour. Both of these phenomena are unavoidable. Both of them operate under both calibration and predictive conditions. Their recognition and accommodation are an essential aspect of model deployment, and of the identification of future events as unlikely or otherwise.

Pareto Concepts - Model Calibration

The previous discussion shows that in calibrating a model against an historical system dataset there are two competing objectives. These objectives can be formally encapsulated in two different objective functions, as is done when implementing Tikhonov regularisation. One of these is the so-called regularisation objective function. This is zero when parameter values perfectly respect their pre-calibration preferred values or preferred relationships (depending on the way in which the modeller chooses to express his/her preferred parameter condition). The other is the so-called measurement objective function. This is zero when a perfect fit is obtained between field measurements and their model-generated counterparts. Between these two extremes lies a subjectively chosen optimal calibration outcome - this defining the parameter field which is accepted as giving rise to predictions of minimized error variance. At the same time, the metrics for acceptability of model-to-measurement misfit and for acceptable parameter field variability are defined. Both of these metrics will be applied during future model-based hypothesis-testing.

Whenever two or more objectives compete, a curve such as that shown in Figure 7.1 can be drawn.

Figure 7.1 The Pareto front as it applies to the model calibration process.

Conceptually, any point to the right of the curve shown in Figure 7.1 is feasible.
Parameters can readily be generated that introduce too much system property variability into a model domain at the same time as they provide a poor fit to the calibration dataset. Both the measurement and regularisation objective functions associated with such a parameter set will therefore be high. Ideally, point A is unique. There the regularisation objective function is zero, implying total respect for regularisation constraints. The parameter field that corresponds to point A should therefore be that of minimum pre-calibration error variance. However, model-to-measurement misfit associated with this parameter field may be high, and with it the measurement objective function. At point B the opposite occurs; model-to-measurement fit is as good as can be attained using the current model. It is probable that a multiplicity of parameter sets, all of equal likelihood from a pre-calibration point of view, can provide this level of fit; hence point B is not associated with a unique set of parameter values. But even if it were, we would not be too interested in these values, because their departure from preferred pre-calibration values is probably too great, rendering them unrealistic from an expert knowledge point of view.

The curve joining points A and B in Figure 7.1 is referred to as the Pareto front. It defines the locus of points in objective function space (and implicitly in parameter space) for which it is not possible to improve both objective function components simultaneously. Hence a better fit with the calibration dataset can only be achieved through lowering parameter pre-calibration likelihood, and vice versa. As a direct consequence of this definition, points cannot exist to the left of the Pareto front. Hence it defines a barrier in objective function space that cannot be crossed. The Pareto front can also be viewed as implicitly defining the locus of solutions to a set of constrained optimisation problems.
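The trade-off just described can be sketched numerically. The following toy example (an illustration only, not PEST itself; the model matrix X, preferred parameter values k0, noise level and weight schedule are all invented for the purpose) traces a Pareto front for a small linear model by sweeping a trade-off weight mu between the measurement and regularisation objective functions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "model": d = X k + noise, with pre-calibration preferred values k0.
# X, k_true and the noise level are invented purely for illustration.
X = rng.normal(size=(20, 5))
k_true = rng.normal(size=5)
d = X @ k_true + 0.1 * rng.normal(size=20)
k0 = np.zeros(5)

front = []
for mu in np.logspace(-4, 4, 25):  # trade-off weight between the two objectives
    # Minimise phi_m + mu * phi_r: ordinary Tikhonov-regularised least squares.
    k = np.linalg.solve(X.T @ X + mu * np.eye(5), X.T @ d + mu * k0)
    phi_m = float(np.sum((d - X @ k) ** 2))  # measurement objective function
    phi_r = float(np.sum((k - k0) ** 2))     # regularisation objective function
    front.append((phi_m, phi_r))

phi_m_vals, phi_r_vals = zip(*front)
# Large mu drives phi_r toward zero (point A); small mu drives phi_m toward
# its best-fit minimum (point B). The recorded pairs trace the Pareto front.
```

PEST's Pareto mode performs the analogous traversal with the model itself; this sketch merely illustrates the shape of the trade-off between the two objective functions.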
For any measurement objective function, a unique point on the Pareto front can be selected. The regularisation objective function pertaining to that point is the lowest that can be achieved while maintaining the chosen measurement objective function. The parameter set corresponding to that point therefore defines the solution to the same constrained minimisation problem as that which is sought through solution of the inverse problem of model calibration through application of Tikhonov regularisation. When PEST is run in regularisation mode, the user must select a target measurement objective function in advance of the Tikhonov solution process. If he/she does not like the outcomes of the regularised inversion process, because either the parameter field is too lumpy, or the fit between model outcomes and historical measurements of system state is not good enough, the process must be repeated using a different target measurement objective function. Eventually a measurement objective function is selected that represents the best compromise between excessive parameter field lumpiness and excessive model-to-measurement misfit.

Conceptually, the process of choosing an optimum point of compromise between goodness of fit and parameter field lumpiness is the process of travelling along the Pareto front. It follows that if software can be designed to traverse this front, recording parameter sets as it does so, this should provide the modeller with all of the information that he/she needs to choose the optimal point of compromise that constitutes the outcome of a properly-conducted calibration process. Furthermore, if passage along the front can be continuous rather than discrete, the modeller is provided with maximum flexibility in choosing this point. When run in Pareto mode, PEST attempts to provide this outcome.

When using PEST's Pareto capabilities to calibrate a model, measurement and regularisation objective functions are defined in the usual way.
PEST starts with a parameter set for which the regularisation objective function is zero. Weights applied to observations comprising the calibration dataset are then slowly increased. During the ensuing sequence of optimisation iterations PEST crawls along the Pareto front, with the measurement objective function slowly decreasing and the regularisation objective function slowly increasing as it does so. Meanwhile the user inspects the changing nature of model-to-measurement misfit on the one hand, and of parameter field variability on the other hand. Eventually he/she selects a point along the front that he/she deems to express the best compromise between the two. In doing so, as stated above, he/she explicitly chooses the parameter field of minimum error variance. At the same time he/she implicitly selects metrics through which departures from this field will be judged when subsequently testing an hypothesised prediction, both in terms of what extra model-to-measurement misfit he/she will tolerate, and what extra level of heterogeneity he/she is willing to endure in attaining this prediction.

Pareto Concepts - Model Prediction

Not only the process of model calibration, but also the process of model-based hypothesis-testing, can be formulated as that of seeking solutions to a series of constrained minimisation problems. Formulation of the hypothesis-testing process in this way can take place either formally or informally. In either case, through traversal of a Pareto front, an optimal outcome of the hypothesis-testing process can be obtained, possibly aided by a high degree of subjective judgement based on information that becomes available to a modeller through traversal of the Pareto front. Suppose that a model is being used to explore the possibility that an untoward event will occur. Let that event be associated with a value of s₀ for a certain model outcome.
This outcome is then "observed" to occur; hence it can be included in the objective function along with other observations that comprise the calibration dataset, and still other observations and/or prior information equations that encapsulate the preferred system condition as applied through regularisation constraints. Let the hypothesised prediction be assigned to its own observation group, with its own objective function component. This component is calculated as w²(sₘ - s₀)², where w is the weight associated with the prediction, sₘ is the model-calculated prediction, and s₀ is its hypothesised value. This objective function component will diminish as the prediction is approached. Meanwhile the other component of the objective function (comprising measurement and regularisation constraints) will increase as calibration model-to-measurement misfit and/or departures of parameters from their calibrated values increase. At some point these departures may be considered to be too unlikely for the corresponding value of the prediction to have reasonable likelihood. Let the value of the model prediction corresponding to this point be denoted as s′ₘ. Hypotheses that predictive outcomes are closer to s₀ than s′ₘ can then be rejected. As stated earlier, this point may be formally selected on the basis of known or assumed C(k) and C(ε) matrices. In most cases, however, it will be informally selected. In all cases, the previous calibration process, in which model outcomes are matched against historical measurements of system state alone, will have played a large part in determining the explicit or implicit C(k) and C(ε) matrices through which predictive credibility (or lack thereof) is assessed. Figure 7.2 depicts the Pareto front that is applicable to this aspect of model usage.

Figure 7.2 The Pareto front as it applies to model-based hypothesis-testing.

Once again, traversal of the Pareto front commences at point A. At this point the calibration objective function is that achieved through calibration.
Meanwhile the calibration dataset has a new member - this being the prediction whose value is hypothesised. At point A this has a weight of zero. Slowly it is given a greater weight so that traversal of the Pareto front can occur. For a given calibration objective function, each point along the Pareto front represents the minimised prediction objective function for which that calibration objective function is capable of being attained. As such, it represents the maximum or minimum value of the prediction subject to the constraint that the calibration objective function is no higher than that which corresponds to that position on the Pareto front. Hence traversal of the Pareto front constitutes solution of a series of constrained maximisation/minimisation problems that are exactly equivalent to those solved by PEST when it runs in predictive analysis mode. However, because a series of problems is solved rather than an individual one, the modeller is able to associate likelihood or otherwise with a series of predictive outcomes instead of just one. Furthermore, he/she is able to make subjective decisions pertaining to likelihood if he/she judges (as he/she mostly will) that exact mathematical characterisation of pre-calibration parameter variability through an explicit C(k), and of measurement/structural noise through an explicit C(ε), is impossible.

Variations of the above theme are possible. For example, the calibration objective function could be formulated in terms of differences between the parameter fields and model outcomes necessary to achieve a certain prediction and those which were achieved at calibration. The initial calibration objective function would therefore be zero. If desired, parameter departures from their calibrated status could be decomposed into solution space and null space components, with the likelihood of the latter formally assessed using a null-space projected C(k) matrix.
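The weighted-prediction traversal described above can also be sketched for a toy linear problem. Everything in this sketch (the model matrix X, the prediction vector g, the weight schedule, and the hypothesised value s0) is invented for illustration; PEST performs the equivalent traversal with the real model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear model d = X k + noise; the "prediction" is a linear function g.k.
X = rng.normal(size=(30, 4))
k_true = rng.normal(size=4)
d = X @ k_true + 0.05 * rng.normal(size=30)
g = rng.normal(size=4)

# Calibrated parameters, and the prediction they support.
k_cal = np.linalg.lstsq(X, d, rcond=None)[0]
s_cal = float(g @ k_cal)
s0 = s_cal + 5.0  # hypothesised (untoward) value of the prediction

curve = []
for w in np.logspace(-3, 3, 25):  # prediction weight, raised from near zero
    # Minimise ||d - X k||^2 + w^2 (g.k - s0)^2 in closed form.
    A = X.T @ X + (w ** 2) * np.outer(g, g)
    b = X.T @ d + (w ** 2) * s0 * g
    k = np.linalg.solve(A, b)
    phi_cal = float(np.sum((d - X @ k) ** 2))  # calibration objective function
    s_m = float(g @ k)                         # model-calculated prediction
    curve.append((phi_cal, s_m))

phi_vals, s_vals = zip(*curve)
# As w grows, s_m is dragged from its calibrated value toward s0, while the
# calibration objective function rises; choosing the largest tolerable
# phi_cal and reading off the corresponding s_m gives the rejection threshold.
```

The choice of a closed-form linear solve keeps the sketch short; with a real model each point on the curve would instead require an iterative PEST run.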
Meanwhile, solution space parameter components would be limited by constraints on model-to-measurement misfit that are incurred as a predictive model outcome approaches its hypothesised value. Such an exploration of predictive variability would be more in keeping with an analysis of the potential for predictive error than with exploration of predictive uncertainty, and has its roots in equation (5.8). PEST setup for this exercise could be aided through use of the REGPRED and OBSREP utilities; see PEST documentation for further details.

Pareto Methods - Some Final Words

To date, practical experience in using PEST's Pareto mode in real-world settings has proved very rewarding. The following points are noteworthy.

When using PEST's Pareto mode to apply Tikhonov constraints, it has been found that it is generally possible to attain a lower regularisation objective function corresponding to a given measurement objective function than that which is attainable when PEST is run in regularisation mode. Normally the outcome is a smoother parameter field. It seems that the process of moving slowly along the Pareto front provides a stronger defence against the appearance of unnecessary heterogeneity than that provided by direct solution of a constrained optimisation problem pertaining to a given target measurement objective function.

The ability of Pareto-based constraining of parameters to suppress the introduction of spurious heterogeneity is diminished somewhat when SVD-assisted parameter estimation takes place, this arising from the fact that the combinations of parameters that emerge from definition of super parameters may not necessarily be those required to preserve maximum parameter field smoothness.

When using the Pareto method to explore predictive possibilities that are compatible with an historical calibration dataset, it is sometimes found that the Pareto curve does not resemble that depicted in Figure 7.2.
Instead, the curve may rise for a while from point A and then settle into a new minimum to the left of point A in that figure. Not only does this indicate significant nonlinearity of model predictive behaviour; it also appears to indicate the existence of at least two separate system states whose likelihoods are difficult to separate on the basis of currently available information.

Ideally, model parameters should change in a continuous fashion as the Pareto front is traversed. However it is often found that discontinuous parameter changes are encountered as traversal of the Pareto front causes parameters to cease congregating about one local objective function minimum and to start congregating about another. See Moore et al. (2010) and the following exercise for further details.

Exercises

This section of the original document has been omitted.

8. Conclusions

This document has attempted to describe the range of possibilities that are offered by PEST and its associated utility support software for exploration of the uncertainty associated with predictions of future environmental behaviour, and for reducing that uncertainty to its theoretical lower limit. In doing this, it has attempted to make this exploration as salient as possible to the way in which numerical simulation should be used to underpin real-world environmental management. Modelling cannot provide certainty where none exists. However, if used properly, it can minimise our potential for error when making predictions of future environmental behaviour by providing proper receptacles for all available information. This information includes expert knowledge, point measurements of system properties, and historical measurements of system state. Modelling can then be used to quantify the potential for error that remains once all of this information has been assimilated. This quantification is essential to risk assessment which, in turn, is essential to good decision-making.
From these considerations it is apparent that if environmental management is to benefit from numerical modelling, two types of software are required. Obviously, simulation capabilities must be provided by numerical models. The art of simulation is now mature; while models will continue to improve, the last 30 years of model usage have built a solid foundation for their continued development. However, for models to achieve their full potential in environmental management, they must be partnered with software that can use them to extract information from all available sources, and to quantify the nature and ramifications of gaps in this information as it pertains to the assessment of future environmental behaviour under different possible management regimes. Unfortunately, the development of this kind of software has not yet reached maturity. However, there is growing recognition within the industry that there is an urgent need for it to do so. Perhaps, as software developers respond to this need, the next ten years will see modelling come of age.

Meanwhile, another obstacle to the general acceptance of methodologies and tools such as those provided by the PEST suite is fast disappearing. The numerical burden of having to undertake hundreds, thousands, or even tens of thousands of model runs will not remain prohibitive for much longer. Parallelisation of model runs is essential if runs are to be undertaken in these numbers, and at the time of writing, computing technology is developing in exactly this direction. Within a few years, massive parallelisation of model runs within a single machine, or across multiple real or virtual machines in unknown places across the world, will be a trivial undertaking, available to all modellers in all places.
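The removal of this obstacle rests on the independence of the model runs: each can be dispatched concurrently and its result gathered afterwards. The sketch below illustrates the pattern generically in Python; it is not part of PEST (which supplies this capability through its own run-management facilities such as Parallel PEST), and run_model is a hypothetical stand-in for a forward run that would in practice write model input files, invoke the model executable, and read its outputs.

```python
# Generic sketch of dispatching many independent model runs concurrently.
# run_model is a hypothetical stand-in; a real version would launch a model
# executable (e.g. via subprocess) and parse its output files, so each
# worker thread would simply wait on its own external process.
from concurrent.futures import ThreadPoolExecutor

def run_model(params):
    """Stand-in for one forward model run for one parameter set."""
    k, s = params
    return 2.0 * k + 0.5 * s      # placeholder "simulated value"

# e.g. 1000 parameter sets for a calibration-constrained Monte Carlo analysis
param_sets = [(0.1 * k, 0.2 * s) for k in range(50) for s in range(20)]

with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(run_model, param_sets))

print(f"{len(results)} model runs completed")
```

Because the runs do not communicate with one another, the same pattern scales from the cores of a single machine to many machines; this is what makes the run numbers demanded by calibration-constrained Monte Carlo methods tractable.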
It is the author's hope that modellers find the methodologies and exercises provided herein useful, and that they convey the ideas which underpin them to their colleagues, and to those who must make decisions on the basis of models. It is hoped that these exercises and ideas provide at least a small contribution to the continued evolution of a modelling culture that abandons the magical aura that is sometimes implicitly associated with models, replacing it with a scientific understanding of the modelling process that guarantees it an indispensable role in hard-nosed, risk-based decision-making.

9. References

Albert, A., 1972. Regression and the Moore-Penrose Pseudo-inverse. Academic Press, New York.

Anderman, E.R. and Hill, M.C., 2001. MODFLOW-2000, the U.S. Geological Survey modular ground-water model - Documentation of the advective-transport observation (ADV2) package, version 2. U.S. Geological Survey Open-File Report 01-54, 69p., U.S. Geological Survey, Reston, Va.

Bicknell, B.R., Imhoff, J.C., Kittle, J.L., Jobes, T.H. and Donigian, A.S., 2001. HSPF User's Manual. Aqua Terra Consultants, Mountain View, California.

Cooley, R.L. and Vecchia, A.V., 1987. Calculation of nonlinear confidence and prediction intervals for ground-water flow models. Water Resour. Bull., 23 (4), 581-599.

Cooley, R.L., 2004. A theory for modeling ground-water flow in heterogeneous media. U.S. Geological Survey Professional Paper 1679, 220p.

Cooley, R.L. and Christensen, S., 2006. Bias and uncertainty in regression-calibrated models of groundwater flow in heterogeneous media. Adv. Water Resour., 29, 639-656.

Deutsch, C.V. and Journel, A.G., 1998. GSLIB: Geostatistical Software Library. Oxford University Press.

Doherty, J. and Hunt, R.J., 2010. Response to comment on "Two statistics for evaluating parameter identifiability and error reduction". J. Hydrol., 380, 489-496.

Doherty, J. and Johnston, J.M., 2003. Methodologies for calibration and predictive analysis of a watershed model. J. Am. Water Resour. Assoc., 39 (2), 251-265.

Doherty, J. and Welter, D., 2010. A short exploration of structural noise. Water Resour. Res., 46, W05525, doi:10.1029/2009WR008377.

Fienen, M.N., Doherty, J.E., Hunt, R.J. and Reeves, H.W., 2010. Using prediction uncertainty analysis to design hydrologic monitoring networks: example applications from the Great Lakes Water Availability Pilot Project. U.S. Geological Survey Scientific Investigations Report 2010-5159.

Freeze, R.A., Massmann, J., Smith, L., Sperling, T. and James, B., 1990. Hydrogeological decision analysis: 1. A framework. Ground Water, 28 (5), 738-766.

Freeze, R.A., James, B., Massmann, J., Sperling, T. and Smith, L., 1992. Hydrogeological decision analysis: 4. The concept of data worth and its use in the development of site investigation strategies. Ground Water, 30 (4), 574-588.

Gallagher, M.R. and Doherty, J., 2006. Parameter estimation and uncertainty analysis for a watershed model. Environ. Modell. Softw., 22, 1000-1020.

Gallagher, M. and Doherty, J., 2007a. Predictive error analysis for a water resource management model. J. Hydrol., 334 (3-4), 513-533.

Gallagher, M.R. and Doherty, J., 2007b. Parameter interdependence and uncertainty induced by lumping in a hydrologic model. Water Resour. Res., 43, W05421, doi:10.1029/2006WR005347.

Harbaugh, A.W., Banta, E.R., Hill, M.C. and McDonald, M.G., 2000. The U.S. Geological Survey modular ground-water model - User guide to modularization concepts and the Ground-Water Flow Process. U.S. Geological Survey Open-File Report 00-92, Reston, Virginia.

Herckenrath, D., Langevin, C.D. and Doherty, J., 2010. Predictive uncertainty analysis of a salt water intrusion model using null space Monte Carlo. Submitted to Water Resources Research.

James, S.C., Doherty, J. and Eddebbarh, A.-A., 2009. Post-calibration uncertainty analysis: Yucca Mountain, Nevada, USA. Ground Water, 47 (6), 851-869.

Koch, K.-R., 1999. Parameter Estimation and Hypothesis Testing in Linear Models. Third edition. Springer-Verlag, Berlin, Heidelberg.

Massmann, J., Freeze, R.A., Smith, L., Sperling, T. and James, B., 1991. Hydrogeological decision analysis: 2. Applications to ground-water contamination. Ground Water, 29 (4), 536-548.

Moore, C. and Doherty, J., 2005. The role of the calibration process in reducing model predictive error. Water Resour. Res., 41 (5), W05050.

Moore, C. and Doherty, J., 2006. The cost of uniqueness in groundwater model calibration. Adv. Water Resour., 29 (4), 605-623.

Moore, C., Wöhling, T. and Doherty, J., 2010. Efficient regularization and uncertainty analysis using a global optimization methodology. Water Resour. Res., in press.

Nathan, R.J. and McMahon, T.A., 1990. Evaluation of automated techniques for base flow and recession analysis. Water Resour. Res., 26 (7), 1465-1473.

Orrell, D., 2007. Apollo's Arrow: The Science of Prediction and the Future of Everything. HarperCollins Publishers, Toronto, Canada.

Sperling, T., Freeze, R.A., Massmann, J., Smith, L. and James, B., 1992. Hydrogeological decision analysis: 3. Application to design of a ground-water control system at an open pit mine. Ground Water, 30 (3), 376-389.

Tonkin, M., Doherty, J. and Moore, C., 2007. Efficient nonlinear predictive error variance evaluation for highly parameterized models. Water Resour. Res., 43, W07429, doi:10.1029/2006WR005348.

Tonkin, M.J. and Doherty, J., 2009. Calibration-constrained Monte Carlo analysis of highly parameterized models using subspace techniques. Water Resour. Res., 45, W00B10, doi:10.1029/2007WR006678.

USEPA, 2000. BASINS Technical Note 6: Estimating Hydrology and Hydraulic Parameters for HSPF. EPA-823-R-00-012.

Vecchia, A.V. and Cooley, R.L., 1987. Simultaneous confidence and prediction intervals for nonlinear regression models with application to a ground water flow model. Water Resour. Res., 23 (7), 1237-1250.

Appendix 1. PEST Utilities

This section of the original document has been omitted.

Appendix 2. PEST Groundwater Data Utilities

This section of the original document has been omitted.
L.3zmTh)P|x6e4m;.9ܫ˳[(^H*w%̞1zt*7dـ*^zׂ# ckm|߅ڴ9?t>>|<3{tTX(|lF-Kp'OqɬHLK: XA~MGKuh\c^Su cvo Z_V-`abx(͎j%#/A0wLŁm[ ei1X'uJ?]' ө GSυgn+]jg/TGLG8IG g-UŎN5lx?PvN L XX}?&߉O 9ɟU^A=vu%u@{EY`Vf+>ԠGc9xT !*RH gFNKzHMu/bf*o.kTKikRR%@x:9 8s7S.$P&bq*׺H-xb&0[4zvj@s?WBPsdm^ K?L>X7$,Ǟe%iRN .*[}>8#8C ua2Ԅ/d,.\ %Y5K,BTDG<FvdqYiCK4ZV4T/ -!=(8L!=DQp, ǝ:~ɛnyV6eӣ7/}e_ٲ)^lJp#DI{Q?|@+Yj׮Z+~U̯CDU9ͥeVvqyxs}Hp1#LJ$Rh}amJ/sLj2?5BYe{quh [Ip|[)j\џca˚'O|w;L*HFJ3az!?|^$a.>*0!0|Ҵɻ(e,nwٕTY6s( v҉$jïQ.9J'A= FbV GD?MޤN΄"7mbPOL",P`0b;Mߗ.GbA<{Y׹?9vUӂ q嗪j?^z3RݧD_^raE䲁Ԭj[tݰ^ieپ4 ׍SD&Lηc*xs}Jn6Wс tNѮ_Ԋ;W+g`+0 zLEmFE\IhuS Du&NLpywȜ~*d` %$\w:9 h -B =N䆷#w7Pu` w~O+00,TT{{C>*uZ嫥2s5fM1yτXm-ɤ+gFTOt1eЅYMO~t2D9}[*5pKﻅ+ݬۉOǘI:siXoj;P/"$4F,JJS]֥oFL`'"yF%}Vwg(X$%t_4RP-,dD|D}V:Ht ;Q59an"mҌ/muGmaŁӽ!!B&$PcVFP9o&<n3uuU |.%kC[VWY(zYR6%oΌHn9jQSfۑ'__7ZŢ @j=m}yp|ў\%[S##/y^??zVXC-UfFeƩ| _bsj!ihw:\EڷOX%mD,7X$*~F3ќ3֝{Eet(b~M^ޡ_jQCb+1XӉ5+ɘ 60] eRO'N:@}Ey! pp,;ZUQg}YtwhVx!VNH6=2!՚_)-ȆK4886@to|bUuյ5>no螻n,縦J?"ZE>-:Xl9mힿvyb/s8^yS(S{J-Dd2rZP{e[N`ȗ_5ȐFeJZB .k*L>B1)()hA(ICcIXahYxOTkuDb©Zxee= Ìi.|~W7Fefc$.nI;OR!%,hAcD;G`t@l`-n,NEsRf84,ޡzɟHB>^q1iQ,jbtB]XC}^%zPwumgwəöf30aE; /_L29X`m3MJ;">A Fdz"ڭɚ&i7Q=*ƆJzNN7_mVE/i>dSk#Cf,,;u.D֠~YOy|~] }zTIW0hVŏB_lJ{|߬;m#KGYix/C 8[xIҤԶ:W} Ͽ&6Vɳ?kw{QRpj7-()Vx0Wr?rSf iWo7w=lrF 7_|wg#D':i?sAs%ٚ&) ef>Xҏ" &a۶jy7ݠ4y'*w\~CT!V&/Qc+ }X@;&![)mLHIcrvTZ=HLѡ)TH#M<( 5OwЇڕ`V{~cd0$aJ3^yqv,|vF],h";ee ̗dM>_n~f󢓕.g!hV sK6v2uL3 C[ґO/횯"WɴU fToZ|]|RA˛^K~2va<+U.^`@>m98sjHDÓrN1z0q3M"ue#8CoS@~wL0GEo$ҏdWq>fӛ5k__iUF&Rb궓:R"w%U^lj>3"L#wÆjSUDfϯI$])[3ڴ(ݥ,FEӆꢓKO=bE 5L{*rk[@GډC>]䐨g9:U}b;לSvlل7?_$q2sMmq72˒k2#Zi-ޓdMg؉>eV'Ft #9,A"K]uBnjP$ljS 6(lg.#|㢔x2aۙ#ޱ/&_:myr[gMDe5 .,TLE*)Tre LLH43DYb5m_"/5 _qNL1&٩"=iӗ4,ilOR1 O>f,tFT̡U}Xk<<ݡX$MVҟ_r:g;:t lx8gL%]"i8Xq/QԔ]MmmA yS8ZmDo@^h9jˬ6 q2 .Hzn}iʌYdyY#}QcU[f-858kd Cr-WA{C5TUjx>GDvA _?aR`W÷BZevKRJRjLR흢H]}? |B^~> *=\2iln̡$_LJ'8j_ o?#+? 
}xC7.1j7R;_$Rb۷ |}jl|LegTY]>Г3jUN'LD7[͞>fJд2GvᥤW)٠7OInj`w'9#:Ô0#mZ)0Zcnan)3TU͑W/fXܓ6|MTv0'X$] SVb<$ L'8Dz14U8lnb[U~N,,U +PnmN'r( XNNKgN5֓9VLBOC"JBy/f`i( _-%> LǪ'nQuP+ 1MgLZ٦] s8V]b=v%`_G@U[D+Q%Ty~aCɛQB߃~M)E/!(B2W,wK7`NWzfY+컵$RΛuxLs/h@XI(+D=2WtD-dO+Cdv3ṉ.`;B@8('zY;@t;Yu:M֞~D]<,,;幉e=7+߾QDŽ<uHD?vl ?SzufKg&/~8T` DDڅgR '$ۂ@ UXom!.:0*elbk|wp 7pu85*uUݫ$Pql;}B]#yO1,~pC+~T/Gɻ&>]goq ˠ/jx\+f_g {vЖ KgZYyu('TI EݚGxqWm)q<dvp:JtOK%&d9.#zI^0v=e]wt21L| ayUnn8}FDY5diGZjU\gTU9v5e?X1@s7%+[?n旴Tn{N 9Зvu;]UP -@rlixwV[>6 CKdKLV N&=j0]zPp|*H(5O>1sЖQ\@mΧeI(e)!awJ+mN⧿S}KR2od~"">ڂ4&Ԅs5pvd#) a9! U}AIy & ޑnls2drѴegٞgISiElS(~hh['R1~j tzlPȓ&(U5Ԩ332-=K5Jvܶ[ԃO'ɋ'wy71^ jqcݱj95`6)3掭BZ-qvBsS.%$~9eg>c9 2MNxi;f#w_௶g{G LY<?ܮ):6nKpIv5':g@rT- :#^뵄ive!Guoam'ZU9ކ"?:Rul8(n;jG\s3h$~;BVK?ŷ8=;}va.~zxˋX3#Nxz̏} "pA?` w=-3mx>kon;̀ߩ幊1=G-~&wW99~[{57fPF;:ۙk[2 ?ͧoV+~Zo"9 㝰.)e,l9U;;K[;ROmu(:թŴS;}d_.{ud6o,%ﺻW|8$ukqSJ+z;RpnSxCGByъO Ύ{oI,ZZSOYѷAk7:zxj1c?Y-u.oª ~RwJ ׿bwE=}0n/5=rֺ_|%uȦmsҭ4N^R|x6hjxSOO޸Ε.;wN+wS\ft} :* x]Bѧpv+xѠ5-Kt{Rk?N~g9!Xl'0%caTۅXZq܁ޭ{G}9cOUZb%֝ڮ?S&^Edq~Otk;P˭Wx({?8tuT~fZIIbo%N6Yu}wuU''1eQ^Gox/#7oh67[_ӀMw$u=(Kao9K7,ձgocWyoeSW[5qD-n眼Qʽ{X&SCjiy54MvM:bjzJ k/Hnq+U5.5R)ˏgxu{sSrߵ5b:m4h55жSy}jλv܀\/Pw5dB&bh+mYn[zPFѷҸ.eV"n.?ѬͲQ]݇% ui*~Or8z>OP۰intTΩH]?ؖQYU2.gݷHNkӢF=x^qF5Ms+\0񍍗Ds7(ju O~ZuSnUJ;4pqcA69wn5VE9QF={8|ۛ}ʵ)WlW_-:>.;+Hf3+*N{*sfre=Ν^Q7ze^՚eճ/z}{(cqTrD5tػlXLktD>j Q7|#z*mE\ڮiݿP}I:WVjG|4)F7~E9°vjсͧ%RHBo}iիw "᳎H_C? 
[CX9- Pf妇S}lk|;'WM\y˯mup`=\xiq3#'un!1K>zAFƶw;ןjk0r(Z1_aaof,7GWf8`}*Qɭ~prƮ 7–=b"A;b$_Q]=*ٽGH^3^&'FI۷RA1\C^歑OIIv-"7 M0JqR  ćGI '#a4MC̪t]:V"=ʨ{+rtaRU{$~ð>I*L*i$t-Eg8NBl2g㲚=5ιǁu>/IJ=5s+e1.z +6ؘ 6 +E/JQm?1?fo|ŏ)~L1ŏ-_cSlLc{WSm?1?fo|ŏ)~L1ŏYc#>=9i)ʗH*745؛C(GW9֨573NJ=(fGjB='Og8X ލ<41Y1OLݡUUZ ϮTV#w^]U7wo='9 6sSaY'_9+r"9˥<'L_ bx yzܭGvf-8"Q2PoRy/Uc7pW1]sJ "iꄅݸ(N좑I1AEz3}tLyG5Jz6}W~'yY.LF7̃3qkdd?՚_tKuVir~.D7Q܍\.qmOVVAIgX rExA"SLhBK{#G#y1$MOS>+;۰qjCnOѳ7sa{ sڧ_` 6~0-6?=?{1QlDPͿlob: ٖ&ެ- 7mXaI-M3+V/,[1)o'JmL+lSYl3ўئ*|4XR6q^aK{~-cr)F`?)Y);xf?FO[lKl3cvkBlFmX|~a lS+vئ{۴ئe f=M4!H10G]/M*i XoK6CVͿMy?f?FO[lKl\vVhBlFmX|~a lb;lшmˇ3ws{b_ 4൭6Aym,Ïʥ`&9iؒuҺٗҦYlOSMZg[bCmey#6,.aiD؆'66Za 6?u>fd{b|S9YmjZӾ% uT ѳ`xxkj(7-gR G{!0I#[s*yN?}ܓe=YOS{b[lKs[7[XaIMOF#t>W|b)>Dar*YMy},Ï%gԳ7M:Q:3IԳRϞ礊~|(~|r(Գͯ*|Qʼ[ugƫ=Pm2~2LN{~8֐sCE2y=4;i:LztpOde9uּ&k/LTF~Cӈd)w&L^π?Ɉ@dPE}?'0xR$?L-L|#_eRu6ξ i),7;} p m[5:F33|l/szKPGC$#./IM3{z=5ɢ}q-WSuLewȆ~fǃq< J2=IvUPqz?8\'}$unQS"NTTTH1M2\8 }tT5?ogٓKwx͏}I/' /S_%\Pk|SMY9l҈=O2QOm,+fJ7`;{|?"vAa]=W=DQ^{f@Qu-^$̀ >E<^[6+)UI4+fM=IC-j?D-p>g}HDum6L&g(B> rtٙH&g^Jc=_4x /R]߳xs؜{dDR{wLIJG c==G.($9lE}hc>1Ce-gv|Bo$Z A @|5v 1M0|11%q/oS2t|̙7SyS|mL>L,DY{&8`4r% ӓ\~ŧ1L9:H&g^-}!zowj{o4"|hCׄO'eJ i_ c=}o.(4r؊,|lc>&ʏ3;]oFI~|}o1}1 2oʷ)XO/$O匕Men@5lȱ%'}+ZA6L&gGz, IQ yg~kGmFNY20=z%jo>rtٙH&g^-} o7N9ZCAB|asFY|~~h-w𽄾L~5||N?vw F[!OGyY7]Co$Z ;c}Dka| Qc3_h+ߦce<*^YHߑDixy:|?*m%L~V4bΓg0ţN#,O|FTv9:/3٩l;L_ nz.Vi{o4"|hCk}]З '+ﱞT{.(1r؊,|_bc>qZe-gv<vMΌh2$w' O0=>oS2l|{s4e6mJ-3< &S/h=enu9Xْ _.K=)Gg̋ C볛}=ڄ,6giD'fF&|}Z>l'|9"M_S8de41|b(?Z|Ov. nTDKeAr}<ޟ3{IێRܵ7p? 
޷=13]`i6+)LV W=ﯾ{VjGm{&Sk tI_pLqi50=ze g2,3^g | g4%Lps/5̟ś,߳$#j|L}"p~=ω(FL|?|c>1We-gv|vzU'R Hs'\F|3?S|m{&s> QIO>X9I߮* u"ǖDK_xdccS|dzpU3Yi3K'_/|Dͧ1LEv&g |  so㎶:B|asFY|~ e5O|!;{ 9+3(/k=ct2N_a}_c?7'Oη'g~VM߿Ji9iMy~Q=\3Xjȱ%gٺ%+Mo[|dzpI[vo y=eΏ:61=z%jo>)g2,3L^ϼh?[R>AZq;7,Z߳xs؜{dDZS gO'eJ֐WYN Dx{1#/f=+EQ^{ _n ~{s7ٟp=q"߲oS2d9y;M=[.Z֦Lk7/}.e27:lx/%ɔ\|dzp3{ GL^e'|g\Q>Q Y9l҈=O21&Z7.&|`Z!OJrsw@H/ c=oxc>$ʏ̎;A@$48N _z .;}{)ac כ!oO|MX9>OӒ ',7xTyS~UeB3"xmaI?Tp?enu{'LO^rYL9:L&g~fLϽaw ז} z'{ ǣO }XY9@>($#z;`a]DPIf,vu~Rl?)8kA\UK   D}p?׈pL'$y1<(?Z,A"hT$q?qOdz~4]q?=_4,g gy4 Prj:cK"<&/?v 3nGrd#N#,O|72rtٙH&g^-p\aku SL^x%!a~o4"hEu_Zf9d'܏">AAmdĸ󌵢(/kpgv?@sq?7Gq4su~Zg '4r|%IsZΛ5ie?oٺmA7l:`2ER{1<ÍekLqiiK'_/|,0rt;`2y=9sGhv2 N8`;K#$#uarH o8 t w W^T%s>tٷsOsVae{;oA.='ңm[5:㼳W07BK^܇"LPUNw0X$ʏf}$Zz TT$B\ 8 ~>3 kvL4 {G4]`iAw+c/Va5y#M"&c4kNp=`L^KBuѧͿtu9mfx/4Ydg"y~ o)JLϽaw ӹrHh¹7Y\'Ѩ 4x}SsLi. "l cd;w.Ѡ1?Y4ks+(/kq?c/mTDKu@APE8A\,f9 'Lp=;.kAEpG-pJPoVư E>FN9Mx 3S|ϽlF73Lqi50=ze g2,3Ϙ,c=KiO(p Dk D.:?K~:7=F)o:Q%!B|asDy'3Gg>f+,e*K`Ny'~n1|b(?Z|6N_ wÓ?C8)D ߳>//m{x}1_"-'M~k?y(/A.?TNw\Y^]'.98)u rB[Dj*7C'\KuWAyM F2,.bоKA7 6zQNj{Fgeozolլ%ٗm:_e2yw4d^Y;6 '|Asi5p|ftdy7mF b?wȒ'_^Oӈd)5ziw9IX?-87{L, V5<ūpIF4z?[>Zӣ8-d$ ? q2-T\ڧ&wOi&w'NR`+r3si~Ǣc 4Q~5+8"*qT B.6ƞ]mϸt0 N >6 󕺣_-y풞[pp/3qdtFܳ".cms?zPW7{?M(?˚O}= ]@TqU@pWJW{{$^xlf{n=/o1W\egMoRs|WuNV.X~c<di'?KD`l6l *k]3~ɔ9>͞ .䤷U>0=w |oCq K@H?6'K(XqP1}&Aq1LzC[HB'j $ 66>q:٘4&A@4#q`lz5b>[_boڂIQ,b_)`/iC]^4QۏpޥTxh]J˗D ZlxRHtQ*Wr` r)?xI IƃQd.      !"*%&'()+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}P[g\49^ՎjVh߫E+ x(X c%62HaC  I[&&`X0&$;IH 'Ę?Npc6_utV_n׿U[_&oz矷ӹ+ox'w)֧}_9#oEa܈?sw?n/No[׉2e@l~}m' 9zj_fמo7'y sw>}rψҾj ?}tʹN@Z!N Ώbڟf_O-'lm&gkLUփ[3Tw˶Q>Gmۛ:_%FEGu+-s2kQ9?鋞w㍓iֱOQٲ_-PNJYn{1lm4t{m Ϥ/=Z7k+CYяm\!,~V"s?t8[_ //\x2G(S,1v ٺz gχgl}zb8[oppf8[lpZ8[}-\~%`{+?pp>ևW/g빏u? 
g gkٺy1Άw>\gzwvE 1傀yp-W|ԏ ۢ~$\p o W)gݭ瞈l񔍏;^n֝9s?lVm$K9#\pM[#S>-vE[Ʀ}S-E_,MŎxj\K=)\Vӊso~rU-Ŏuؒ(->ЂJk#~"M$/ǏL6^.cgwhږاoװou7kkn"iM;495ik-d?h\Sy 3/6f=lg5_es@iu}31뎟m\.Q^PۢrmzHl\fm[gmS~z][ikI;w ƻZ=;;oecT ulu[YT?ar]_#rVe*חU.AdEckQ8s֝v״l}X[  OV6Mɬ#[G3J^r_|hYLm,dW*i~WK٪B [ŭ%[*BOx۪UVULmKO[G Ur@:tƗت〩VJ%006OssAmQo=ZkHƓ ϳRy [V>@*כKV~xwƖwQ&*W6g޼ƛr-*jvkɤ^YP~{q*\԰֒dֺB#0P/Ztp[6TM_ R*1bv]:k. gkSZn8[Objݧ!߸p{0w6+>pl}+l+ 7Mruppn}x6\pzÕC_ W|%\~•+G~.\~'•O W/ Wxr۾\wów+ 營 p_Ir}uu•uo\=s;•?\\'\pV gÕ g;ٚ د|Õo/_8[p~zL8[<+kX8 lgz^=2Õ d dD_Z d#lQz{jCag+KYrpzˋÕk{ W$P*\EPT_pľp轜klҳZ8ֳ?;υu^2_jh3clY8[Ko g 吿?Ut_`3lq wJlMfQ.?\{q@}*?,֨haB?,Rx;X=E okTHb)T= [.WH a;yˠ JUа`ݡAaI CC(%W}gAwY_5_Q\%3.pްUo Q>\Ð~Vߍ@o/ 2롡V=M_+۞!5&ϼJ]?r]ӬV/V^Q_^29w8wil#꫾k?!k?!CC !0xBj[>fa{4ʺ#4JzKeZ  xoxkt-iF(usP_j} \}>!j9||`h m 7lZ6eׇ>Ξ.WaJJ\[]Q:E //;xmz ~k#K&NC<^^29w:?ɹ)Ɛ~(ly]UP43 @}Sx}7D=CۺI^>j֗OijlCYK-yCƥk퀪Z &UUtSb!]0Sm]]_vQ7Ok; LkUN 5ҏ+ Z+GX~ f*Ȳn ~ fꔫ&Aye>T,'Lyʔ9kJ*Sbr)o;.)o˯\U`*W%UreZXu`2\Ôַl6??o=oU_¶m6op?[|=cV=n:vٮױw[}g%l墋 ggVv٣{;s c^{-ۻw/{ֳWUnc{/:|P @(P @(P @(P @(P @(P @(P @(P @(P @(P @(P @(P @(P @(P @(P @(P @(P @(P @(P @(P @(P @(P @(P @(P @(P @(P @(P @(P @(P @(P @(P @(P @(P @(P @(P @(P @(P T@[[[[]8( 9'O^&`'(0 ! I IPxJ#N jA( =X? (uZ]UW2z3(@ڈs㮸#D\a"*K4 e sƟdk/ҦP ždH(\h:x;8^(eiQ q1,c !e*a:xpeMlMG,76"Qd2 LB[`PnV{X-GhPHuqege>n%N&r"?fx pa2mqɄAd2YM+k TZrLĹ\&t:4w*g'4ƌWXI0GoP7¤my.! bdM3,Ddgi7R9='R^X 9Wiz~~~hH=T@iğiJ>`*!.KvG)݃/KE+9[# šp.Ul^\b5B'dn˴WZžuvq.#o^hcgo^uɫ/..S hQ5e=$tȑ#ܭ){* %۔E0ot]e;vlϮN .' 9NaDDHN7×q]@(ol[Z)"ʵ NQՓev@G> BqOx:rrUFȝJ]$,2yIٌ29A/R|B9px߂kQ2ù$>K nh)4KetLVN4ogjƩ:ȣ!BDP=#Kpd E'r^v[~ 5W,(4r/w3.Ǐqi4] Dvt>wD(K,s *Mʐ3(_#g)"t82Rw:8q7K&;ǟ? 
(S1KMKGSCq9~ۑ 3F!gZw>EqCvv=ΙFSsۜ8U/]R|Ҟe.=%D=!v5{\%t16,SEHqE7gq]ٓ+I|G(>K鼄}{?KKC+jnz3nT!KƯ&ҾDdM&v~DQlamQ"$8XZ^.)_I4-!?F}ގ0(=ν`ɒL0~T} Craqi>Ka* 7㜄~'qrp53GÆawKΉ k64f/<swڌKMiϡ!Iwof`kZ(nei9Hq`T<- ?"di1%eFیK=!CK5O#2m4K1s%ь;0fX*e+i4xr_4ciyqxayya!!'e7"PơG'勉")p;9L%i@/=ddGRCeFG8Q2!|MrzD×K4dXa#aIqzLawK{Ghz7)D dkqﷲDyMv pXf)pY=aP, ij#' Prd,΁s1XL[I x o0> g=츤'&4Œ +~U K]_jR5 qs{pu=5rsgD lqrЋS˲%cK-:c_OB#RD,OoI!!Y%ϚYUdݵ q1^F كP#1pEsY[#rC2{HJD <#H Ʉ$ Kȉ׆)Bv\%&cMLLHg^ ɳ$˽+"/b,,*&)YVGhqJ > 䳩ofh't w74L4B] 2Ԉ[BŸd&ND?_dݵ qɃ dw{7ϐ/1ya Eb+7 xQ^a1&'2KK5.$ܚwIOٗF%KUcȋv^͵"8MУc&?/*K 4V=q\GXM2zDݙ=VBVkv ,٥_v>7nv,tmhHFG[;A'3N7?lպ:}2@-7sX͸/ѝKK+4\(W9KjJ[IVN6Ss p[aa-" KWf^o#B7D υ1_J!~IO<Nƙ"2O^K%KI#2' 2|U}CFT8`shEl: v z pK/4_|)j% _ӐSDۍnJ',XV$`՗ٸMq*Ai)E'hmYμM)j;Go:_NHJij!oQ\dK{{ҬK̽`a|g$E Q&ev["8u6]!,Q5F],~GҒGO%Me333e9EB_\Jݛv\<3![f-YI'NbrcxGfhlz{BL7c\f9G9KӼA{iENVFL~Nc>+P<)PԪK OϦ6\=5 ~!PF M+JMdܽ\SL<~y*.Kk[h"Kh%x-t6.%oR.["+(mX⣍rq)psĢi)YXr)E/ON݊KWS5@1B"@K!$;ч7c.Ki=KosJ*ꮜ~*'U VtRk`eBض9Fp룍}ER1<:IO~Dq ,%qƗ|?vգ Rr\R8G=EU@CBMxID|䒠iT2-U;3IbL&~_vYESW2σK( bKpcڕNj-h2%lb|9a&. bY`:r@*. d{'d]'NbvA;τ CBF?0Fz|a6_y> ~NRMS(V/i%t\JO6A`zw qƕtP DMD#ܿM8zh+;_5~L|,>8*~/T(54-:,91nj#.%y8 /FyJ0praͥc 1dFn?Q/T][Лf gcFr,YfqJVX:bxv 4y< RL᭝ ns+&% D8dUG]f ˔eI) F|/KG%Xre"(dM>sS&*ܒm\QND8Eff}˳$h[]]RQ1o~ԦIܩ J]46Ԍ_gֱյUX=hm%jOOSxCId#NtA(HJ(熂(N囇5|T)1z B'kɫd?Iujx;.(w ?| z4'\7ۨ튃K7ܘ')"drD w&mu-3ʇ^Q! 
mrFیsHX?Yƭ8QSgIS#r6j I@bT@ SAtÈq`ɩ(6z^ZK7^)e.>CW˄6*}Z 6Ai%=.!.G*:;T YڶmW191*vr|EZ*N_gzrRsFD8VMS\G7&Pp8cȕc0a~RJ*Ҷm;/HzڔioÖ(hq 8yO|P4噞;c U֌Kkİ [7V1a2]Dh]0&"xp.5$ًhz)h1C<8H\1 5'Yқq .&TMBzʱz)@*ʒn 0R0LI(ܸ糍\9@Z|~`Һ,R] QHJOcAi#9Os[:2~,?~S/zsDkcRK*ăi""ye!ufi ;Q[Q9KxrnR}]}E:xް :ޤ,Z͉7ǥ~.ODlHzG3>;>\]͸1$(~$ : J'^YuDIMD(%FٜuOufR.u%hfL܉hjVf x[i)ӳxzJhv)T MR/X{󏇊4!yveF\rKD5DAqIӧ4"r x#\'IY1Q%%#(ѰĎe?CY4@i5nwt(bG[/KZɇLPZI`)բ%͍\P)Wש5x 왊W ޣEPeri%IticcZ#)5.TK5Wd)_:q`I+XJ\ҜϬO|7'Kr,NwX%%a>y%skWCRɒ=.E qK>oR,%GUi>4KsK1N KZARž4į=rx͸hRRQ\8 {ͲyPgRj>KT2e岕+׀E]f) 4-yk'čXʏ]m,퍛vɟl ÔT́KUbt1[OT U:ROR*AmyіY?@Jp=D;Sc!oLpfi 2X*eipT2HkX r==cIxz=F4X}]A?X4#@ ,%wNT!% 2%,%(K!P) K5XDNRuW 9XjR6NԼM1(Ki.c' "4J戟}R\1yL,s[/zĒ Z% ,K*~& l.%P)C\* GI2ؔYHb'^vv(v+@5\XR\tυIOح%X rUxP"XRbK,)bx.%OF:q;տXT^CT^GXj`|-OL&C nRdy8HM.ݛBAg*[j}}xN=k\tnP.ӕKK8_gЪƥ,-4 WNh\\Af^,,7_Kb"Fqit,@ʉz/vWEĥv*@UeYڻwqscR.@\Rb)K{obӅ(ǔݚ],KKM`)W5b.ݻ48oBO)r?w%a(T\fl8MKfi&2 @?d14Lh\إxC7=ty_9)5/ b'jQk@owj[%W\jv!.91@ڕMP4RnӞ7HLcaRywYZ7N, wJd4ف1J@k\Zg!٭o|Kѻ|zq)D,ωGf[G,)B)&Di1R%71 ЁVVKW;L8),%7".ͺRҶm;}qPIj,l ϥڇh:t,4#;ߞǓ,Qѱ [w4[Y@_V0u|X%vӠXqr"$KhU]@Lj!ؒ,'UKvyIx]j^բ%%ǣ ,F3H=a0Cdð{?Y"S_mvGoQĩ5 r`㒕{˧$K^,x:K ?q( v~Tʳdiv̕P%4HUGUK$K$Z3\ K4~k^%NX@/fI89א5X"^1KjizO!5뱤daI/p N ky]%'%!~v.Kݴ*M$qZQBVd[b7RK2 ,DXZ0?zaZ< 7Ҷmj e54hXM*$Kժ)%?3?jK6e>@,% owU#e-$o#M9 qGyc׽{6R%?j=XjƸ,pAD ni UYtA]XXR@s՛٣7. 
%'Kb6$_NtxW' ;,W @c~pRq , G p$,&M]O_%[ͦd4nXB\.| C9v%#K5߁r(^1{C/1,9kzSAo$ l?jFn>k+l򂥬"w%M~do⭺rKNx#G%w ]y~TF~_2N`uR^ߠk47,kz&䖺2x&j\E}%\Ը٪q\X8~ԋpd:_8K%cqD\ReO@3H&j\Xrg[ r$e!B\Ei$n0-~ K&RX@O@3"D Bqh+m @<GxKe/`H37Mc٨@=XRqIE!׎K /<Щ_jE0yghp ,D\rSb QT5ˈKyMʬw:@(S̾`)#H=dN=دďZ`IU#˚Ǜ4&+'g5,ųǥ,M-ok̀_Ibֱ4l:v 4mG;".&3xcKm%,>ɒ=)>6+.*<lj$Xr3kK*SY2-n:K{ݷob|ťm;xj)u%Ϙ'4Kx4 %휺“NaՔg}n4,mbLs }v'N@Yd왢PU;$i8d\fss DN=҄:Sx~3~JD4'447GjuuRq ĒlS%'zYy<-.E8Em%@ Є۫~\qIdE4R݂=ڳ*K=*bK]=S,ɫJoRrXJdM5T·%+KTe% qi$f)Ӵ!~cW^>.E)S0d.ҠgjRǜ^).E8erE,M}q4DG'y]oaCr\8%&#D%>Y'.E8ms KɥU/G] eei•{)J-9ˇL);ueX"?uk;RqWp|GK|B4RGUrR7ׂ㻭@XjS*RSRKܗk% ⠍7KBa ,Y ;[;ls_[?-"|I1͏b֒HVdUyd)ēFy>k~S&K_݅տ+K;3gih 49šK|;pkSs,mDKdIyg_suk,Up|w4Aݚ4QT.:~&K{ԇPJ 6Vg:?{XR|2p &g<6&vSüvg[37[mV|ɑ K;r'NJ+P&Guߌt`)d;?ilb4TQF?|;N K ֛_RN`@l>Z&G Xkҙ5>Sz D!*-!PfD8m.wyUhĤ L`@7GCO<ru,(hK~(k9&|֠aAZV`ɑbK`)U5F~X*ĥnN;?-IvU$q)#HCY=z%]|*)p\,Y}tFRDt|>d6glcik:k ,ۓ4=V7zZKfW MX7Ͱ4$ztci~^|UY.=z%]|+ƅ]lN= .𛋥Gd%]|+bIDݱ{IقԎC3K>HJPG\wqIƞ|$ȥ|<|g|$ m#A.;$<yKsXr)8.E`A0{ߟ%"ʢƺ#k+ѬzGr)Wآ&56hX\ߪ#.\hqi:phrɋhk\hE:$|J)lVD4EKKYkxJ Ȟ/<ϥ^Q%Mf|0OS%pz$\+d\r)|Z>UKdOD'\5b\)bQLdOrr)q)L5Fm}&.D%7rSMEK <.pۧFW1d K9?hx&&iۧs5o&(UT%脷YDɞ8\h%{(Տц@'r)&h?"h%Y/q\2"\c~B=3Tlq$5k /8.0% iť@ȃꐟo# [.-W C}8܉#tD[c\rGS'%qiQ- eRj %kr)^8uq5B{9:NƶڢNA.d\" s R%yR utB.EƢu6'Sxj!\=\%O*4\rG/y鳶H\,KugQ|0L^`SO.EA%=.D <:yKO.E%4Tr.fmAKb| @ R5qI< ?~&"9̅Kf)K'y%<W+RT!$9NiŨU%ރs&- IM"@.E5+mȥ@.E!xr)^ Tmȥ@.E!xr)^ T}AMeK~_ Cj r)ooOnW))Wآ0\Z~\uޞuZxP%K.Dk[\ =RRWqh&|0 *RCi!K>L@uqIS'v&lADq#qIRʵj.Mu|J. a@ iR*C\A€@Rߨ w@ K¥Ee0 quu){eȥ)jJ[>.U)u|s\A€@JƥZ%K>H@\?WE@ R#xMؘ\>K>H@('l=q)9zǥZob/o}_**wm)ƥ>\<F.E`_q\9ѡi~6o[M'9RW+u!6qr j5u< ވ¥KVKRxr78/;u|(K>H\ [H.EVqi~d)L 7uNn'$u|%|xJ1.K^T:; |x8.y)y8u;DȥH8.EBTl=aK)u|7%K=-8.k:>T%$K^D(u|(qI% .K^0]\A E$|R'9!|xJ0.ͪg⤎ɥ&Ϣ:јIzD꘺?\c YR]Sssg":U HO'9%"KM͋ZPi咦NW^'+`nw|" JF3@KN^Ö\D?.5j4R0u/TnIMr))%MXTE:iH&\k}r,ñdzYwV3K9BE PepGՁZ̪nX}.GWWΗ.!l.r4y2=(HEOq8t%KGz)ǣZA-<30q? 
uC 涴.p4K]2>r)Bz)G%R'-]* 4}륰q)Gg>.YmS'l|L nK9BG[ARk_<K9>ُ$0đ`Ջ@&oWhDpK+kKR 5/o`\jL~7Ȣ@(D.UXLq)xG.Y !RT!YL!ǥkaNqRg95Kjy;B\T]YH3R[u&ȥjj-5=]Gϙ3}M ys ( ıȓΘ1C54שF?ȥ43򉓵u@u uBXT=Κ@qdı##NLq,[ bDh`n8IG<OȥJ4*(?!q8H\ʔ@!V8 8yd(\ o4N= S 5N!+"㹜\D'GgFi v 9IT8xɥ i[8YyC)4m;w9o<8YvH@ɥrvwqjcĉG-rizݥ8 3ju)YQGY4?klj׳+*D.L8 qd@a;*߯DrW嗁8JZ[=N[[)f(3v9.UıF#Nd#>I6gQ @8kktˁ\6#NũL.,iϙi8G7x'&Kޚ"8 xIAsJ88--qG5"Jόh=Uצ2Y#'c'(RRdq_#<>)h8uj㒛 =q+ǥX7mq5=`~œOJ8'KT2ާ5NwrV=4?Uq8qj/ƥj撋88qĩ];̞W;Kf`cowZ.P^296RʥsvkkyTo JlR3* \2#N:dg<{^Qq.gzvk"@ٳ;M*/q8Sњe k) ZDKq9,q8)h{V`RM3~B*9^>q\ p#N[:<#T,.5N [U T:\2#NڈUu33g5Ϟ#?qT oqUzO:9$Է k@ D"@ D"@ D"@ D"@ D"@ D"@ D"@ D"@ D"@ D"@ D"@ D"@ D"@ D"@ D"@ D"@ D"@ D"@ D"@ D"@ D"@ D"@ D"@ D"@ D"@ D"@ D"@ D"@ D"@ D"@ D"@ D"@ D"@ ՍJy&֋7oQT/bJo9=/ u.S&Cl\e`C2*N볔71lsxhq[7gwfN=8u0+6tjw)G2bOΊ=5wdFldAkVl8kPYΎ-oAY-2flh6ʂmYv "Tcπfc97/u`r7m!a-pmm;&9zz]S'8W;ZgsC!\qjuV]vO:VǞXV󻶟;3tZtIV YI<*r|{zǃo{Kmt(9 _2ۄkr\t=VcuMɪx9?Bs=$/G} {iʬw.&qbcbL"N7![ֽuv5 R ";=ժ>D]?2l:M{q9-0ۢ_ s cc' \DE-8k7@c3a[`0hQxxF}KnuzDuF:ܵ&Ն+Vl֒'tvA WU#6 <\ վfYk4q0%"?"}@U{]U{n~c '>dRgZ2%uUDV~swƛL71'_q%q'_o 0i[sq \*.<~ \dvA=nWu^ĉ♇R2j҄BbhZ[ۻk;|YAhK:u]+HWFY.]+# v;:sk};0xWRC\8hI۷"nPQ.\wDk*;7lСٳ8b^֝4&btGpќA5c÷d.'^m0JΫ OZsJY9uR70 ܶ"t,`;='3i yغRxBwc,lՐöxzq㽦R30;Tw>VNg,:;4q;gOjtG1鎎g.Rl{&}m^VNݹBF{PY9źs?I0iL㙇;iģӵt3S)ԝ[Bu翕Sw p ՝US;C~Qwt|f4&Y9 ȓ|g7ߙ ՝SwChc5{/ܫ;obFaiL㙇;iģ$t3Ήgӧ;Ǘԝ~sp?P9gu>ph֋4CŽygxaN(/4}yNPݹs>f?p ՝ΕG޿˞((>Lѓ/8>#ȯq(=Mr Òض챞m XݽgLݽgm2o0 o2}EڲeW}ðo}}#_߆ceCX_e-kLˎ: ;e]v{?a'MOLmML3Z[&~r5p`b3lr`e<0qAX,Wq23tJm昫ϭFsx9 ¤täbf{ރ4ݰ}75)ォٰ-w'Mi}G>ic (:z#uN#h?n'6X;{RJ}rޏ|7!{'_m}Evyשտ_P[Vdl0mEv'oΞ^ c[o|eRG?o=}w\ =χq]}r%t[y\#+8AJ6ȯZXg:{T68? 23'ey `2^u@..VͰn\ <Y4ouj׸g2݀W]mðeqŴ NR漏_K.uns0laG[P@!\ ˦^wF a|iz ak+MxlAao XU<5lo4 8|Ҥ)Wq_aW j h4~SyRVy}.:RܩdVhN~ C02Ұ"#Jpos<?mQt|t^ܿWt<5,xlAajhX[aZ& ü\lL [7j0hL5VHxKz4,Gî]w^ZD`cu(ʚ;svh*"Pܤa%F a>ż㙇aic ڭh  'JaI0/ISh׃f f5ְh?]n|5LK뗸0c}8? 
< Q{nҰ_?nK5LLJNc0*5Lbz>[ 2xpzMvtF aӘ4L3[҈G)4^0LL2iE?0iʈ]M԰c0k4s<)X ^whؙ?]~5zk.~!|Rނ{[g{4#ۍG1igRh}aÍ?{Ly_4}vrSc0h泦b5lx1p?0y{GîӰ]Y֒[n J\4LhNZ{nҰwcgl3jƤa:yF<kҔ8uq'bj؆f f>l*aρ矄[;Ui867]q/d-Ӻ&V) %9T_@& yQt|4& ְ4}h {@4wny7ˤaؤ)r԰b0hݦqy} xgѰ0^RtJLK VJFeA& ͢˨a:>LtxakX؂~[EÎI0/zĤ)7;vԫX%>n0h1Syh Ӽ^ܣag"8vf`k0w_2 :&aZ*a{Q胨@hܤa/_f aӘ4L3[҈GI4싢aZ5Ia?(a^y|Ѯ9f f4Y(ZÄ ZR8Gv`)۳9hS<]>7ܤao{W0t|4& ְ4Q xP4lѰZsV e0̋~jҔO8e}8ގ԰5 9n*Ea_ɣa/0~kq7 묹ǐeD˼2Ws3ѸJqCo}m󞛴soFma~ż{_u'a89j%u;q7Af/yMzw问Y¨w:>Ltxa](yklne;̳5G4~7ލQA}=n*PԚf\y]Tϼ\ċaQ.CܵPk]{;cniJMσv {{nҰ}߼xQt|41M3[҈G)4l#kةe0̡ߜ?h׳{eL {Yà0{sY:UWî=K^f4'uJ}8.^5IÚzcϘQK㮾Zía0hܑ]Zkn{dd F{яzC* FƠ`ļ&M9QpG'A8xְ&_~ˠe8s9q#뛘p5[s57Xw^yvV*֮R'oW`v ̭71U֘Zd9 &:G%.~I#ru`I|NAt5ꅫB~0Cu]$\إ8;nd/ 9ʿWiH`xCwk|LtLm5! C}4&Æ`RgqЮSf%i<=܇N#0i;)?`aNwΧw>ބVX7 > ]n)ؑun]7@C_Ɂ]`'~':7M0}C^}'†`F؏27t?tGqG/t~!{0d/. ~~ v %$Oܗh8~}l^f߃1`y 5{vC̣6 =~]ghm?lX39k1Y݄cuܝ˭uqu+w;!:'_7˗9d<˜坍'y9/ʓ2ӳן cpAg77ZsMK"|»y1e)zL}IX%_VQ3&?{ ق2ժ K&WZunVǟW%_éWW F'e#&ydk kz59jճZ gs˥U3QUU#0֤U576}s59+62:֦khulٕmk&5Aڄy&X┕TW Z%SyR\1}6-z;O+ QFmځxnO/3L-k1څp7ɜJx*&~0qEt$}\]id%4?{Ԙܓ#}Ae#\~"m3c{'[pľ?>6;q즵^JDIw?3lpt/6r) v%L&w_R+b(u' ba9qu|ū ѭ0wVX 1JDd 4C&UV>   # A "` "hJACB tS}6DJ97@=i"a1e,$*.|B!m*& ^-;#utmBZv5BuF6s%T 2?Oqq"Vϣ6n.IŭuOB=]Gt{n<)&TvFE0Rch2++K!\sQՑqc_~Ld% N̅>'>!|],u'4tUV˸-+i@n 1X:66Fփb* }6p(X]l9&2'hǶR<؎<5}Grx1D 24LL(0&qD0W5 |DOn Y{*Ipz%Õژ%v::H Smvz?lׅc~k=da/m&6>4wue ߜb.ΏOuµu15}Ɂ|5vumv=zuwZn{Bn `>@u^1V Bs#n|g@͙]~+lN6}Ɖ2aʲ-yPePoџXOH(Ii >ͣGz&g3 ODg(݁0j7]HVyt)^ӠF# &*QW-ɰ`< rj%mj@t@п!촽JƶB Z.{Bp ;zǞfΣÀ:fcvuZ b! 
[Embedded Microsoft Equation 3.0 objects; recoverable content:
Φ0 = Φmin [ m/(n−m) · Fα(m, n−m) + 1 ]
Φ0 = Φmin [ t²α/2(n−m) / (n−m) + 1 ]
Φ0 = Φmin [ (m+1)/(n−m) · Fα(m+1, n−m) + 1 ]]
_Toc271614889 _Toc278522773 _Toc268360903 _Toc268361022 _Toc268463908 _Toc269065139 _Toc269458086 _Toc271614890 _Toc278522774 _Toc268360904 _Toc268361023 _Toc268463909 _Toc269065140 _Toc269458087 _Toc271614891 _Toc278522775 _Toc268360905 _Toc268361024 _Toc268463910 _Toc269065141 _Toc269458088 _Toc271614892 _Toc278522776 _Toc268360906 _Toc268361025 _Toc268463911 _Toc269065142 _Toc269458089 _Toc271614893 _Toc278522777 _Toc268360907 _Toc268361026 _Toc268463912 _Toc269065143 _Toc269458090 _Toc271614894 _Toc278522778 _Toc271614895 _Toc278522779 _Toc268360908 _Toc268361027 _Toc268463913 _Toc269065144 _Toc269458091 _Toc271614896 _Toc278522780 _Toc271614897 _Toc278522781 _Toc268360909 _Toc268361028 _Toc268463914 _Toc269065145 _Toc269458092 _Toc271614898 _Toc278522782 _Toc268360910 _Toc268361029 _Toc268463915 _Toc269065146 _Toc269458093 _Toc271614899 _Toc278522783 _Toc268360913 _Toc268361032 _Toc268463918 _Toc269065149 _Toc269458096 _Toc271614901 _Toc278522784 _Toc268360914 _Toc268361033 _Toc268463919 _Toc269065150 _Toc269458097 _Toc271614902 _Toc278522785 _Toc268360915 _Toc268361034 _Toc268463920 _Toc269065151 _Toc269458098 _Toc271614903 _Toc278522786 _Toc268360928 _Toc268361047 _Toc268463933 _Toc269065164 _Toc269458111 _Toc271614904 _Toc278522787 I?I?I?I?I?I?I?^^^^^^^^^^^^+_+_+_+_+_+_+_F_F_F_F_F_F_F_&e&e&e&e&e&e&eoooooooԡԡԡԡԡԡQQQQQQQ       [2[2[2[2[2[2[2CCCCCCCDDDDD;D;D;D;D;D;D;D^D^D^D^DJJJJJJJ?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~      !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ V?V?V?V?V?V?V?^^^^^^^"_"_"_"_"_E_E_E_E_E_E_E_R_R_R_R_R_R_R_7e7e7e7e7e7e7eooooooofffffff-------       8888888j2j2j2j2j2j2j2DDDDDDD:D:D:D:D:D]D]D]D]D]D]D]DnDnDnDnD K K K K K K KuTuTVVVVVVVpppppppOOOOOOEEEEEE$$$$$$GGGGGGIIIIII???????,?,?,?,?,?,?,?OOOOOOOOOOOOOWWWWWW _ _ _ _ _ _ _______4a4a4a4a4a4a x x x x x x xSSSSSSS[[[[[[XXXXXXaaaaaaaϰϰϰϰϰϰϰ0000000       
FFFFFFFNNNNNN'''''''"("("("("("("(3(3(3(3(3(3(3(Z=Z=Z=Z=Z=Z=Z=UEUEUEUEUEUEUEYYYYYYY88nnnnnnnެެެެެެެTTTTTTT dH$HGGdG$GFHH$IdIqq}}9  ~? B*urn:schemas-microsoft-com:office:smarttagscountry-region9 *urn:schemas-microsoft-com:office:smarttagsState8 *urn:schemas-microsoft-com:office:smarttagsCity9 *urn:schemas-microsoft-com:office:smarttagsplace P  ppssv&v\drzٖۖJLY[~¥ͥҥVYɦ̦jo̧ϧcfMR$)27[`IN    F K &q}$FR`lT"`"}$$225666779999999999::::??s@v@CCJK&L+LMMLXQXzrrLvNvwwxxxydyhyyyyyzz{}9HZiÍ8@fn:BT\Zb(0+3jr#,'/FJy{(*DI";@J!R!%%SSU UW]_e[ciq'- .*0_g{07.0?P56?_ 5VWb3Stu"#,Lmn=>Zz9Z[s,Lmn6Vwx?@45)*5Uvw  6 V w x ''.0?P5VWb3Stu"#,Lmn=>Zz9Z[s,Lmn6Vwx?@45)*5Uvw  6 V w x  )Pl/Ku.0?P5VWb3Stu"#,Lmn=>Zz9Z[s,Lmn6Vwx?@45)*5Uvw  6 V w x  )Pl/KuAzI]m2] 4( b 8 M%*:-l6z@\:842Uޘq1**Z Z` dulv!~T8U:T";.d"T/1<%h-)'05H(a}(*8^@N+T,lPk-|`mh/6TI1'4g( ;7T~T:Tr>&zW[?֪yaOFβb$|wF8`fm&F@:gGL36 ;GX*+I*2PJbjKf2KF ~Y2L\<4+MjV5sUM<*kQFVQ:EM V?+Wu2k0XV"l6\Q_xH5%`֖,zb @eFk4h0S>jE1'm`!oXzWK3o,#jp!6e6tD rv zx`C y\6eh^`OJQJo(hHh^`OJQJ^Jo(hHohp^p`OJ QJ o(hHh@ ^@ `OJQJo(hHh^`OJQJ^Jo(hHoh^`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohP^P`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohp^p`OJ QJ o(hHh@ ^@ `OJQJo(hHh^`OJQJ^Jo(hHoh^`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohP^P`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohp^p`OJ QJ o(hHh@ ^@ `OJQJo(hHh^`OJQJ^Jo(hHoh^`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohP^P`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohp^p`OJ QJ o(hHh@ ^@ `OJQJo(hHh^`OJQJ^Jo(hHoh^`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohP^P`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohp^p`OJ QJ o(hHh@ ^@ `OJQJo(hHh^`OJQJ^Jo(hHoh^`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohP^P`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohp^p`OJ QJ o(hHh@ ^@ `OJQJo(hHh^`OJQJ^Jo(hHoh^`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohP^P`OJ QJ o(hHh ^ `OJQJo(hHh^`OJQJ^Jo(hHoh^`OJ QJ o(hHh| ^| `OJQJo(hHhL^L`OJQJ^Jo(hHoh^`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHoh^`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohp^p`OJ QJ o(hHh@ ^@ `OJQJo(hHh^`OJQJ^Jo(hHoh^`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohP^P`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohp^p`OJ QJ o(hHh@ ^@ 
`OJQJo(hHh^`OJQJ^Jo(hHoh^`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohP^P`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohp^p`OJ QJ o(hHh@ ^@ `OJQJo(hHh^`OJQJ^Jo(hHoh^`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohP^P`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohp^p`OJ QJ o(hHh@ ^@ `OJQJo(hHh^`OJQJ^Jo(hHoh^`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohP^P`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohp^p`OJ QJ o(hHh@ ^@ `OJQJo(hHh^`OJQJ^Jo(hHoh^`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohP^P`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohp^p`OJ QJ o(hHh@ ^@ `OJQJo(hHh^`OJQJ^Jo(hHoh^`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohP^P`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohp^p`OJ QJ o(hHh@ ^@ `OJQJo(hHh^`OJQJ^Jo(hHoh^`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohP^P`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohp^p`OJ QJ o(hHh@ ^@ `OJQJo(hHh^`OJQJ^Jo(hHoh^`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohP^P`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohp^p`OJ QJ o(hHh@ ^@ `OJQJo(hHh^`OJQJ^Jo(hHoh^`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohP^P`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohp^p`OJ QJ o(hHh@ ^@ `OJQJo(hHh^`OJQJ^Jo(hHoh^`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohP^P`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohp^p`OJ QJ o(hHh@ ^@ `OJQJo(hHh^`OJQJ^Jo(hHoh^`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohP^P`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohp^p`OJ QJ o(hHh@ ^@ `OJQJo(hHh^`OJQJ^Jo(hHoh^`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohP^P`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohp^p`OJ QJ o(hHh@ ^@ `OJQJo(hHh^`OJQJ^Jo(hHoh^`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohP^P`OJ QJ o(hHh^`OJQJo(hHhp^p`OJQJ^Jo(hHoh@ ^@ `OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHoh^`OJ QJ o(hHh^`OJQJo(hHhP^P`OJQJ^Jo(hHoh ^ `OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohp^p`OJ QJ o(hHh@ ^@ `OJQJo(hHh^`OJQJ^Jo(hHoh^`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohP^P`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohp^p`OJ QJ o(hHh@ ^@ `OJQJo(hHh^`OJQJ^Jo(hHoh^`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohP^P`OJ QJ o(hHh ^ `OJQJo(hHh^`OJQJ^Jo(hHoh^`OJ QJ o(hHh| ^| `OJQJo(hHhL^L`OJQJ^Jo(hHoh^`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHoh^`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohp^p`OJ QJ o(hHh@ ^@ `OJQJo(hHh^`OJQJ^Jo(hHoh^`OJ QJ 
o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohP^P`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohp^p`OJ QJ o(hHh@ ^@ `OJQJo(hHh^`OJQJ^Jo(hHoh^`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohP^P`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohp^p`OJ QJ o(hHh@ ^@ `OJQJo(hHh^`OJQJ^Jo(hHoh^`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohP^P`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohp^p`OJ QJ o(hHh@ ^@ `OJQJo(hHh^`OJQJ^Jo(hHoh^`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohP^P`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohp^p`OJ QJ o(hHh@ ^@ `OJQJo(hHh^`OJQJ^Jo(hHoh^`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohP^P`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohp^p`OJ QJ o(hHh@ ^@ `OJQJo(hHh^`OJQJ^Jo(hHoh^`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohP^P`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohp^p`OJ QJ o(hHh@ ^@ `OJQJo(hHh^`OJQJ^Jo(hHoh^`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohP^P`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohp^p`OJ QJ o(hHh@ ^@ `OJQJo(hHh^`OJQJ^Jo(hHoh^`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohP^P`OJ QJ o(hHh^`OJQJo(hHh^`OJQJ^Jo(hHohp^p`OJ QJ o(hHh@ ^@ `OJQJo(hHh^`OJQJ^Jo(hHoh^`OJ QJ o(hHhDocumentSummaryInformation8MsoDataStore&NVZXEP4CQ==2'Item (0Oh+'09    (08Doherty Normal.dotmDoherty5Microsoft Office Word@ա@.N@g@hTEG|8 Rto# .&" WMFCp (lah Rt EMF(D ?f   ah% % !-\4]RpI@"Arial77 7x7Pf 77 `7 7Of 77 yum7 7 Aum$X3. *Cx ArialupSBu$76m`7`7y}m7Adv% % %   TT  _@?@@ 9L-\4]P f TT  d@?@@ >L-\4]P f TT  i@?@@ CL-\4]P 7f TT  n@?@@ HL-\4]P pf Tlv r@?@@vLL-\4]XModeluppf3TT  r@?@@ LL-\4]P-=Tp  r@?@@ LL-\4]XBased fgfp3TT  r@?@@ LL-\4]P pf TA: w@?@@AQ L-\4]hEnvironmental Yzpf3Gppgp=f3TT; m w@?@@; QL-\4]P 3T|n i w@?@@n QL-\4]\Decisionff3f3ppTTj  w@?@@j QL-\4]P-=Tp w@?@@ QL-\4]XMakingff3pqTTw@?@@QL-\4]P f Rp@Times New Roman tŏ ȏ XƏ Pfȏ ŏ @Ǐ ȏ Ofȏ ŏ yumŏ ȏ AumXG*Ax Times ew Roman GƏ 6m@Ə @Ə y}mhƏ Adv% % % TTX @?@@X L-\4]P - TTXP  @?@@X L-\4]P - TTX f @?@@XO L-\4]P - TTX  @?@@X L-\4]P - TX~ ? @?@@~ L-\4]Pby31TT ?  @?@@ L-\4]P ?- TT U @?@@ > L-\4]P - TT @?@@ L-\4]P - TT . 
@?@@ L-\4]P - TT D@?@@ -L-\4]P - TT x @?@@ L-\4]P - TT  @?@@ wL-\4]P - TT  3@?@@ L-\4]P - Tg @?@@ L-\4]dJohn Doherty(222H22,!1TT g @?@@ L-\4]P - T ] }@?@@fL-\4]Watermark Numerical Computing_,,!N,!2H2N,!,,D2N2221TT^  }@?@@^ fL-\4]P - T|k "@?@@m L-\4]\NovemberH22,N2,!TT * "@?@@ L-\4]P Td+  "@?@@+ L-\4]T20102223TT  "@?@@ L-\4]P ?- TT V @?@@ L-\4]P - Rp@Times New Roman77 7x7Pf 77 `7 7Of 77 yum7 7 |AumXG*Ax Times ew Romang$76m`7`7y}m7|Adv% % % T~ l@?@@~U4L-\4]Support for the writing of this document was provide22222'2'2,C'2212'22,2H,2C2'2'2,2,TT  l@?@@ UL-\4]Pd2TT  l@?@@ UL-\4]P T l@?@@ U"L-\4]by South Florida Water Management 2,2222=2'22Q3,'S2222,H,2 T| n> @?@@ L-\4]\DistrictH'',TT= nU @?@@= L-\4]P.TTV n @?@@V L-\4]P 4 % % % TT  @?@@ mL-\4]P - TT  )@?@@ L-\4]P 2" '% Ld)X,[)X!??%  % Ld)X,[)X!??% (  '% Ld-X3[-X!??% (  '% Ld4X7[4X!??%  % Ld4X7[4X!??% (  '% Ld)\,\)\!??% (  '% Ld)],`)]!??%  % Ld)],`)]!??% (  '% Ld-]3`-]!??% & WMFC((  '% Ld4\7\4\!??% (  '% Ld4]7`4]!??%  % Ld4]7`4]!??% (  % % %  TTX@?@@XLahP -% % 6h6ah6a66g6`g6`66f6_f6_66e6^e6^66d6]d6]66c6\c6\66b6[b6[66a6Za6Z66`6Y`6Y6 6 _6X_6X 6  6 ^6W^6W 6  6 ]6V]6V 6  6 \6U\6U 6  6 [6T[6T 6 6Z6SZ6S66Y6RY6R66X6QX6Q66W6PW6P66V6OV6O6  KS."SystemMS Shell Dlg--,HC@"Arial---  2 c*CH  2 *CH  2 *CH  2 *CH 2 CHModeli  2 %CH-2 ,CHBased   2 tCH 2 CHEnvironmental     2 CH 2 "CHDecision   2 }CH-2 CHMaking  2 CH @Times New Roman--- 2 HCH  2 )HCH  2 =HCH  2 PHCH 2 d$CHby 2 d0CH  2 x*CH  2 *CH  2 *CH  2 *CH  2 *CH  2 *CH  2 *CH 2   CHJohn Doherty 2 JCH 72 CHWatermark Numerical Computing      2 |CH 2 *CHNovember  2 *5CH 2 *8CH2010 2 *PCH  2 >*CH @Times New Roman---Y2 RM4CHSupport for the writing of this document was provide 2 RJCHd 2 RPCH >2 RS"CHby South Florida Water Management    2 _CHDistrict 2 _:CH. 
Calibration
The Null Space