Windows BEOPEST

John Doherty
Watermark Numerical Computing
Dec 2019

1. Introduction

BEOPEST was introduced to the PEST family of programs by Willem Schreuder of Principia Mathematica. Information on BEOPEST, including download instructions for Unix source code, as well as Willem's documentation, is available at the following site and at links available through that site:

http://www.pesthomepage.org/Third_Party_PEST-Compatible_Software.php

The present document is intended as a supplement to Willem's documentation, not a replacement for it.

BEOPEST was originally written for use on Unix platforms but, with some help from Doug Rumbaugh of Environmental Simulations, has also been ported to Windows. Since then, I have added some refinements to the Windows version of BEOPEST. Because source code is shared between the Unix and Windows versions of BEOPEST, these refinements will eventually make their way back to the Unix version. At the present stage of development, however, the Windows and Unix versions differ a little, in that run management and run management reporting are not the same between the two versions; also, the Unix version does not offer restart capabilities. Hopefully, when the present developmental phase is complete, these differences will no longer exist.

In spite of its being the focus of recent development and refinement, one significant feature that, at the time of writing, the Windows version of BEOPEST lacks is the option to use MPI for communication between the master and slaves. This will be rectified in the not-too-distant future. In the meantime this is not expected to present a problem to many (if any) Windows users.

At the time of writing it has been well over a year since Willem developed his original version of BEOPEST. I have to admit that while I was very interested in what he had done, my interest did not extend to studying its intimate details, nor to gaining the knowledge necessary to understand TCP/IP communications in the Windows setting. It was then my opinion that the traditional Parallel PEST did a good enough job in the Windows environment, and that there was no reason to replace something that worked satisfactorily with anything new. Three things changed my mind about that: the need to reduce all impediments to the use of large numbers of parameters when quantifying model predictive uncertainty; the ubiquitous use of multicore processors in modern machines; and the explosion in availability of cloud computing resources. All of these require a more efficient and more flexible parallelization paradigm than that provided by Parallel PEST. BEOPEST provides such a paradigm. Hence it is my intention to build on the wonderful work that Willem did in adding BEOPEST enhancements to the PEST code by continuing to support and improve these enhancements.

Unfortunately, some problems have been encountered in development so far. Whether these can be attributed to certain versions of the Windows operating system, or to the Intel FORTRAN compiler with which the Windows version of BEOPEST is presently compiled, is as yet unknown. In particular, though I have not encountered this myself, some users have reported that slaves freeze when undertaking the Nth model run, where N is a random number. Furthermore, they freeze in such a way that the BEOPEST master cannot differentiate their inactivity from that which would occur through undertaking an unusually long model run.
Fortunately this does not happen often; furthermore, BEOPEST's restart capabilities can mitigate the cost of this occurrence. Nevertheless, at the time of writing, the matter is being investigated, and the BEOPEST run manager has been altered to accommodate this situation. One way or another, these (and any other problems encountered in using BEOPEST) will be overcome over time, as I am committed to making BEOPEST the software of choice for PEST parallelization.

The present version of BEOPEST should be considered a beta version. I ask you, the user, to report back to me any problems that you encounter in using BEOPEST, supplying as many details as you can. Through this process BEOPEST will reach maturity as soon as possible and, I hope, provide some significant improvements in what we can do with models in modern computing environments.

2. Using BEOPEST

2.1 General

BEOPEST shares source code with PEST. Hence it supports all functionality that is offered by PEST. BEOPEST and PEST use exactly the same inversion algorithms; they differ only in parallel run management.

The present version of BEOPEST is compiled using the Intel FORTRAN compiler. However, parts of it are written in C++; these are compiled using the Microsoft C++ compiler.

Two versions of BEOPEST are available, named BEOPEST32 and BEOPEST64. As the names suggest, the latter is compiled specifically for use on a 64 bit operating system and will therefore not work on a 32 bit operating system. The reverse is not true, however.

2.2 Some BEOPEST Concepts

As is described in Willem's original BEOPEST documentation, the same BEOPEST executable program serves as both the master and the slave. Its role in any particular circumstance depends on the command used to run it. In setting up a parallel BEOPEST run, there should be only one master; however, there is effectively no limit to the number of slaves that can be initialized. As for the normal Parallel PEST, the user must ensure that all slaves operate in different working directories so that input and output files for the different model instances used by different slaves are not confused.

In contrast to the normal Parallel PEST, a BEOPEST slave is "smart" in that it does more than simply run the model when given the command by the PEST master program to do so. In fact the slave (and not the master, as with the traditional Parallel PEST) writes the input files and reads the output files pertaining to the model over which it has control. This brings with it the advantage that the master does not need to write model input files to the slave's working directory across what may be a busy network; nor does it need to read model output files from that directory. In fact, the PEST master does not even need to know where the slave's working directory is. Prior to running the model, the slave receives from PEST the set of parameters that it must use for a particular model run. When the model run is complete, it sends PEST the outcomes of that run. Model input/output communications are handled by the slaves. Communication between PEST and its slaves is thereby reduced to the minimum possible; the need to read/write model input/output files from afar is eliminated.

In order to write model input files and read model output files, the slave version of BEOPEST must have access to template and instruction files respectively. It obtains the names of these by reading the PEST control file, just as the master does. In most cases the directory from which each slave operates should thus be a copy of the directory from which PEST would operate if it were calibrating the model itself in serial fashion.
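By way of illustration, a slave working directory for a simple setup might contain files such as the following; the names shown here are hypothetical, and should be replaced by whatever names your own PEST dataset employs:

case.pst          (the PEST control file)
model.exe         (the model executable, or a batch file that runs it)
model_in.tpl      (template file(s) for model input files)
model_out.ins     (instruction file(s) for model output files)
plus any other files that the model itself requires in order to run.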
Use of BEOPEST does not require that the user prepare a run management file. As stated above, the master does not even need to know where the slaves are (this comprising the bulk of the information recorded in a run management file). However, if a run management file is supplied, BEOPEST will read the first two lines of this file, looking for a value for the optional PARLAM variable. This matter is further discussed below.

For those interested, BEOPEST is featured in a paper that has recently appeared in the Ground Water journal. See:

Hunt, R.J., Luchette, J., Schreuder, W.A., Rumbaugh, J., Doherty, J., Tonkin, M.J. and Rumbaugh, D., 2010. Using the cloud to replenish parched groundwater modeling efforts. Rapid Communication for Ground Water, doi: 10.1111/j.1745-6584.2010.00699.

2.3 Running BEOPEST as the Master

To run BEOPEST as the master, use a command such as the following while situated in the master directory:

beopest64 case /H :4004

If desired, the master directory can coincide with a slave directory. It is to this directory that the run record file, and all other files produced by PEST to record the status and progress of the parameter estimation process, are written.

In the above command it is important to note the following.

- beopest64 can be replaced by beopest32 on a machine that does not possess 64 bit architecture.
- case is the filename base of a PEST control file, for which an extension of .pst is expected. (The extension can be included in the above command if desired.)
- 4004 is the port number. This number can be replaced by the number of any unused port.
- A space must separate /H from the colon that precedes the port number; a lower case "h" can be used if desired.

As for the traditional Parallel PEST, BEOPEST can be restarted using the /r, /j or /s switches. For the last of these, the above command becomes:

beopest64 case /s /H :4004

A similar protocol is followed for the other restart switches. Similarly, BEOPEST can be instructed to read an existing Jacobian matrix file instead of calculating the Jacobian matrix during its first iteration. This is accomplished through use of the /i command-line option. The command then becomes:

beopest32 case /i /H :4004

As for the normal version of PEST, BEOPEST will, in this case, prompt for the name of the JCO file that it must read. See the addendum to the PEST manual for further details.

In principle, BEOPEST can restart a previously interrupted Parallel PEST run. In practice it has been found that this cannot be guaranteed, due to differences in the way that programs compiled by different compilers read and write binary files. The present version of Parallel PEST is compiled using the Lahey compiler, while BEOPEST is compiled using the Intel compiler. (Theoretically, the use of binary rather than unformatted file storage should eradicate such incompatibilities; however this does not appear to be the case.) BEOPEST will, however, restart a previously interrupted BEOPEST run without difficulty.

2.4 Running BEOPEST as the Slave

In contrast to Parallel PEST, slaves must be started after, and not before, execution of the BEOPEST master has been initiated. Once execution of the master has commenced, slaves can be started at any time thereafter, and in any order.

While positioned in a slave working directory, type a command such as the following to run BEOPEST as a slave:

beopest64 case /H masterhost:4004

where masterhost should be replaced by the host name of the machine on which the master resides. Once again, make sure that there is a space between /H and the host name. However, there should be no space between the host name and the following colon. If you are unsure of the host name of the master, type the command:

hostname

in a command-line window on the master machine.

Alternatively, instead of the host name, use the IPv4 address of the master. Thus the above command becomes, for example:

beopest64 case /H 192.168.1.104:4004

If you do not know the IP address of the host machine, type the command:

ipconfig

while situated in a command-line window on the host machine.
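On a multicore machine it is often convenient to start several slaves at once, each in its own working directory. The following batch file is a minimal sketch of one way to do this; the directory names, window titles and master host name used here are assumptions, and should be replaced with your own. The beopest64 executable is assumed to be cited in the PATH environment variable, or to be present in each slave directory.

rem start_slaves.bat -- launch three local BEOPEST slaves, each in its own window and directory
start "slave1" /D c:\models\slave1 beopest64 case /H masterhost:4004
start "slave2" /D c:\models\slave2 beopest64 case /H masterhost:4004
start "slave3" /D c:\models\slave3 beopest64 case /H masterhost:4004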
2.5 Terminating BEOPEST Execution

Execution of the BEOPEST master can be brought to a halt using the PSTOP and PSTOPST commands in the usual manner. These commands should be issued from a command-line window which is open in the directory from which the BEOPEST master is running. If the PSTOP or PSTOPST command is issued from a command-line window open to a slave directory, the slave will cease execution at the end of the current model run, as soon as it has passed the outcomes of that run back to the master.

2.6 SVDA

BEOPEST's tasks when undertaking SVD-assisted inversion are much more complicated than when undertaking normal inversion. This is because PEST writes its own parcalc.tpl template file at the start of every iteration of the parameter estimation process, this file containing the information required to calculate base parameter values from current super parameter values. When model input files are written locally by smart slaves, rather than by a PEST master which is aware of the directories in which all of its slaves are operating (the latter being the modus operandi of the traditional Parallel PEST), the PEST master must communicate to each slave the means through which base parameters are re-constructed from super parameters. The BEOPEST master transfers this information to its slaves using the TCP/IP protocol in a manner that is transparent to the user.

While the user need have no involvement in this procedure, it is important, when preparing for a BEOPEST run, that he/she transfers files from the master directory to the slave working directories after, and not before, SVDAPREP has been run in order to create a super parameter PEST control file. In particular, this new PEST control file must be transferred to the working directory of all slaves, along with the picalc.ins, picalc.tpl and svdabatch.bat files written by SVDAPREP. The base parameter PEST control file must also be transferred to all slave working directories (for the slaves need to obtain details of base parameter names, bounds, scales and offsets from this file). Naturally, the name of the super parameter PEST control file written by SVDAPREP must be supplied to both the master and slave versions of BEOPEST through their respective command lines as execution of each of these is initiated.
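As an illustration of the above, the following commands show one way of copying the relevant files to a single slave working directory after SVDAPREP has been run. All directory names are hypothetical; the super parameter PEST control file is assumed to have been named svdacase.pst when SVDAPREP was run, and the base parameter PEST control file is assumed to be case.pst. The destination could equally be a network share.

rem copy SVDAPREP outputs and the base parameter control file to a slave directory (names hypothetical)
copy svdacase.pst  c:\models\slave1
copy picalc.ins    c:\models\slave1
copy picalc.tpl    c:\models\slave1
copy svdabatch.bat c:\models\slave1
copy case.pst      c:\models\slave1

The slave would then be started in that directory with a command such as "beopest64 svdacase /H masterhost:4004".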
2.7 Parallelization of Initial Model Run

As described in the addendum to the PEST manual, Parallel PEST can be asked to undertake its initial model run as part of the same parallelized run parcel as that used to undertake computation of the initial Jacobian matrix. This avoids the problem of many slaves standing idle, waiting for work, while the initial model run is being undertaken. This behaviour is activated by starting Parallel PEST with the /p1 command-line switch. The same applies to BEOPEST: using the same example as above, BEOPEST execution is initiated using the command:

beopest64 case /p1 /H masterhost:4004

As described above, if a BEOPEST run is interrupted, it can be restarted using the /s command-line switch. There is no need to repeat the /p1 switch to re-commence a BEOPEST run that was previously interrupted during computation of its initial run parcel. BEOPEST will figure out for itself whether or not the /p1 switch was employed during its previous run, and hence the contents of its restart file.

2.8 Multiple Command Lines

If a PEST input file employs multiple model command lines, BEOPEST will respect this. When sending a parameter set to a slave, it will also send the index of the model command that must be used for that particular model run. It is up to the user to ensure that all files necessary for undertaking all model runs are accessible from all slave working directories.

It will be recalled from PEST documentation that the NUMCOM control variable must be set to greater than 1 in the "control data" section of the PEST control file if PEST is to employ multiple model command lines. Command line indices are provided through the DERCOM variable appearing in the "parameter data" section of the PEST control file.

2.9 MKL Version of BEOPEST

MKL stands for "Math Kernel Library"; this is an Intel product. Use of the MKL library for certain tasks performed by the BEOPEST master (for example, singular value decomposition) can make it run much more quickly. So, if the BEOPEST master runs slowly, use BEOPEST_MKL instead of BEOPEST64. If you do this, make sure that the file libiomp5md.dll (supplied with BEOPEST) is placed in the folder from which BEOPEST_MKL is run, or in a folder that is cited in your machine's PATH environment variable.

3. Run Management

3.1 The Run Management File

Unlike the traditional Parallel PEST, BEOPEST does not need to read a run management file. As the slaves, and not the master, write model input files and read model output files, the master does not need to know the slave working directories. Nor does it need to know, in advance of a BEOPEST run, how many slaves there are. It simply adds slaves to its register as they open communications with the BEOPEST master through the TCP/IP protocol, and allocates runs to them for as long as they remain prepared to undertake these runs.

Nevertheless, if a run management file is present within the directory from which the master is launched, the BEOPEST master will read the first two lines of this file. In fact it reads only one variable from this file, this being the optional PARLAM variable. This is the fourth variable on the second line of the file. Recall that its settings are as follows.

PARLAM setting   PEST action
0                Do not parallelize model runs when testing different parameter upgrades calculated on the basis of different Marquardt lambdas.
1                Parallelize the lambda search procedure. Use all available slaves in this process.
-N               Parallelize the lambda search procedure. Use a maximum of N slaves in this process.
-9999            Parallelize the lambda search procedure. Use a maximum of NUMLAM slaves, and undertake only one round of lambda testing.

Table 1. PARLAM settings.

As is explained in the addendum to the PEST manual, a setting of -9999 is the best to use where model run times are long and where a user has access to a moderate to high number of slaves whose run times are similar. In that case it may be wise to set the NUMLAM variable in the PEST control file to a higher-than-normal value if it would otherwise be smaller than the number of available slaves. (Recall that the NUMLAM variable is situated in the "control data" section of the PEST control file; it governs the maximum number of model runs that PEST will commit to the testing of different Marquardt lambdas.)

It is important to note that, for all PARLAM settings other than -9999, PEST abandons parallelization of the lambda search procedure if any parameter encounters its bounds. Traditional lambda-based upgrading then becomes a serial procedure, as the parameter upgrade direction is re-calculated in a manner that is dependent on the number of parameters that have not yet encountered their bounds. However, with PARLAM set to -9999, PEST will under no circumstances undertake a second set of model runs during any one lambda search procedure; nor will it serialize the lambda search procedure. This ensures that no processors are idle during the lambda search procedure. Where a user has many processors at his/her disposal, any loss of efficiency in conducting the lambda search that is incurred through failure to serialize this search as parameters encounter their bounds is more than compensated by efficiencies gained through keeping all processors busy.
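By way of illustration, the first two lines of a run management file that requests the -9999 option might look something like the following. This is a sketch only: the header on the first line and the first three values on the second line (number of slaves, file transfer flag and wait interval in the traditional Parallel PEST run management file format) are shown purely as placeholders, with PARLAM appearing as the fourth value on the second line.

prf
3  0  0.2  -9999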
If BEOPEST finds a run management file in its current directory and encounters an error condition while reading the first two lines of this file, it will cease execution with an appropriate error message. If it does not find a run management file, it sets PARLAM to 1 and proceeds with its execution. The same occurs if it finds a run management file in which the optional PARLAM variable is not cited. Recall that the run management file must possess a filename base which is the same as that of the PEST control file; its extension, however, must be .rmf.

3.2 Run Management Record File

As for the normal Parallel PEST, the BEOPEST master records all communications between itself and its slaves in a run management record file. The filename base of this file is the same as that of the PEST control file; its extension is .rmr.

3.3 Run Management

At the time of writing, run management as carried out by the Windows version of BEOPEST is slightly more sophisticated than that implemented by the Unix version of BEOPEST; this discrepancy is not expected to remain for long. However, with sophistication comes a greater propensity for error and/or an inability to accommodate the unexpected. It is the author's experience that this is nowhere more the case than in parallel run management. Users are therefore urged to report any suspicious BEOPEST run management behaviour to the author. Please provide the following details with your report: the PEST control file; the run record file; the run management file (if used); and the run management record file.

4. Culling Slaves

Use of the PSTOP and PSTOPST commands was described above. As is documented in the PEST manual, these utilities write a file named pest.stp. This file contains a single integer. PEST monitors its working directory for this file and takes action according to the value of this integer. Special integer settings are available for BEOPEST. However, they are not available through utilities such as PSTOP and PSTOPST. If it is to contain these special settings, file pest.stp must be written by the user. These settings are now described.

If pest.stp records a value of 10, and if this file is written to a slave directory, then the slave will stop. The slave will also stop if the contents of pest.stp are 1 or 2 (values which are written by PSTOP and PSTOPST). However, if the slave directory is the same as the master directory, the master will also stop if file pest.stp contains a value of 1 or 2. In contrast, a value of 10 will not affect the master, but will precipitate termination of slave execution. Note that termination of execution of a slave does not affect BEOPEST execution. The BEOPEST master detects the death of a slave; if other slaves are still alive, the master then distributes model runs to the remaining slaves. If no other slaves are alive, the master simply waits for new slaves to appear, or for old slaves to reappear, before initiating any further model runs.

If pest.stp is written to the directory from which the BEOPEST master is operating, and if the single integer which comprises its contents has a value of N, then the master will cull slaves until only N of these remain. It does this by sending commands to the slaves to terminate their own execution. Commands to terminate execution are distributed to idle slaves immediately. If more slaves must be culled to reach the user-supplied target of N remaining slaves, then other slaves are culled after they have finished their respective model runs and have reported their results to the BEOPEST master. A value of -1000000 must be used for N if it is desired to cull slaves to zero. Even with all of its slaves terminated, the BEOPEST master will not itself cease execution. It will simply wait until a new slave appears, or an old slave re-appears, before distributing any further model runs.
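For example, a pest.stp file of this type can be written from a command-line window open in the master directory with a command such as the following which, if more than three slaves are currently running, would cull them down to three:

echo 3 > pest.stp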
5. Opening a Port to the Outside World

The following information may prove useful. Suppose that the BEOPEST master is running on one of the machines comprising your office or home network. Suppose also that you would like to use a machine that is not part of that network to carry out model runs. The first thing that you must do is transfer all files required by the model and by BEOPEST to that machine. This can be done in whatever way is most convenient for you.

Suppose now that the BEOPEST master is running on a machine whose local IP address is 192.168.1.104. This address is provided to your machine by your router; it cannot be used by the outside machine, as this address is only recognizable by other machines on your local network. Suppose that your router is visible to the outside world through an IP address such as 232.213.21.313. Suppose also that you will ask BEOPEST to use port 4004 on its local machine (as in the BEOPEST command example described above). It is an easy matter to use the port-forwarding functionality of your router to make this port on this machine visible to the outside world as port 4004 associated with the IP address 232.213.21.313. Hence the BEOPEST slave can be run on the outside machine using, for example, the command:

beopest64 case /H 232.213.21.313:4004

See your router's manual for further details.
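If a remote slave fails to connect, one simple check (offered here only as a suggestion; it is not part of BEOPEST itself) is to confirm on the master machine that the BEOPEST master is indeed listening on the expected port. From a command-line window on that machine, type a command such as:

netstat -an | find "4004"

A line showing the chosen port number in the LISTENING state should appear. If it does not, check the port number supplied on the BEOPEST command line; if it does, attention can turn to router port-forwarding and firewall settings.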