FEATURE
Computational Validation Series: Part 2 by W.L. Oberkampf
WHAT ARE VALIDATION EXPERIMENTS?

Editor's Note: This is the second of a series of articles that describe recent work on computational model validation being conducted at the nuclear weapons laboratories within the Department of Energy. This article describes what computational model validation is, and how validation experiments are designed, executed, and analyzed. The next article will present a framework for assessing confidence in computational predictions based on validation experiments. The final article will highlight the application of validation methods within industry and explain how these methods can be applied by the practicing engineer. W. Oberkampf is a Distinguished Member of the Technical Staff at Sandia National Laboratories, Albuquerque, New Mexico. Sandia National Laboratories is operated by Lockheed Martin Corp. for the U.S. Department of Energy under contract No. DE-AC04-94AL85000.

ROLE OF COMPUTATIONAL MODEL VALIDATION FOR DOE
The United States has adhered to a moratorium on the underground testing of nuclear weapons for the last eight years. In response to this moratorium, the Department of Energy (DOE) initiated the Accelerated Strategic Computing Initiative (ASCI) Program as part of the Stockpile Stewardship Program. ASCI is a large-scale effort that aims to increase our confidence in the capabilities of computational simulations to predict the reliability, safety, and performance of the nuclear weapon stockpile.1 To attain this goal, ASCI has focused on two critical elements: 1) rapidly building and improving massively parallel computers that achieve teraOps computing speed, and 2) writing new software codes for these computers that implement high-fidelity computational physics and chemistry models for weapon applications.

To establish confidence in computational simulations produced by the new codes, a verification and validation program was set up as part of ASCI. Verification of computational models is the process of determining that a computer code accurately represents the developer's mathematical description of the model and produces an accurate solution to the model. Validation, on the other hand, is the process of determining the degree to which a computational model is an accurate representation of the real world.2,3
Although verification and validation activities have been practiced in engineering for some time, it became clear in ASCI that significant improvements were needed. We needed to better quantify verification and validation results, as well as the confidence in the predictive capabilities of our computational simulations based on limited validation experiments. Effectively, we needed to go beyond a consensus on a "satisfactory comfort level" with our predictions.

Fig. 1: Verification and validation in computational simulation (adapted from Ref. 4)

Importance of Validation for Industry
"Virtual prototyping" and "virtual testing" are terms describing the new engineering development trend of using computational simulations to design, evaluate, and "test" new hardware and even entire systems. Industry's use of computational simulations is primarily driven by increased competition in many markets, e.g., aircraft, automobiles, and civil engineering structures, where the pressure to decrease the time and cost of bringing products to market is intense. Other factors contributing to this trend are increased safety requirements for products and the high cost and time required for both laboratory and field testing.
Concerns with safety, in fact, represent an important element of testing or validating computational simulations. The potential legal and liability costs of hardware failures can be staggering to a company, and the consequences to the environment or the public can be worse. This consideration is especially critical for high-consequence systems operating in conditions that can never be tested. For example, it is impossible to test the catastrophic failure of a full-scale containment building at a nuclear power plant, the damage to a high-rise office building from an explosion, or the structural failure of an offshore drilling platform.
As the use of computational simulations becomes more widespread, users and developers of this software face a critical issue: how should confidence in modeling and simulation be assessed? We contend that validation of computational simulations is the primary method for building and quantifying this confidence. This article discusses the meaning of validation, some distinctive characteristics of validation experiments, and a methodology for constructing a validation hierarchy. A set of guidelines is proposed for designing, conducting, and analyzing validation experiments.
WHAT IS VALIDATION?
Figure 1 identifies the main components and activities in computational simulation. The mathematical model provides a mathematical description of the physical system or process of interest.
This description typically includes partial differential equations, initial conditions, boundary conditions, excitation functions, constitutive equations, and equations for material properties. The computer code is the embodiment of the mathematical model, formulated in discrete mathematics. As Fig. 1 shows, verification deals solely with the relationship between the mathematical model and its programmed implementation in the computer code. Validation deals with the relationship between the computational results from the computer code and reality, i.e., experimental observations. Verification and validation are basically tools for quantitatively assessing the accuracy of the computer code and the mathematical model. For validation, accuracy is measured in relation to experimental data, which is our best measure of reality. Importantly, this comparison does not assume that the experimental measurements are more accurate than the computational results. Rather, experimental measurements are understood to be the only true reflections of reality.

When validation data can be obtained only from subscale physical models or subsystems of the complete system, we face the challenge of how such partial data sets can be used to assess confidence in predictions for the complete system. While code-to-code comparisons are useful in such cases, they do not constitute validation of the computational model.
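As a purely illustrative sketch of what "measuring accuracy in relation to experimental data" can look like in practice, the short Python fragment below compares computed response quantities against experimental measurements and their reported uncertainties. The quantity names, values, and uncertainty levels are hypothetical and are not drawn from any experiment discussed in this article.

```python
import numpy as np

# Hypothetical validation comparison: computed vs. measured peak accelerations (g)
# at three sensor locations, with lab-reported measurement uncertainties.
y_exp = np.array([12.1, 18.4, 25.0])   # experimental measurements
u_exp = np.array([0.6, 0.9, 1.2])      # estimated experimental uncertainties (1 sigma)
y_sim = np.array([11.3, 19.5, 27.1])   # computational (pre-test) predictions

rel_error = (y_sim - y_exp) / y_exp    # relative discrepancy at each location
normalized = (y_sim - y_exp) / u_exp   # discrepancy in units of experimental uncertainty

for i, (e, z) in enumerate(zip(rel_error, normalized)):
    print(f"location {i}: relative error = {e:+.1%}, "
          f"discrepancy = {z:+.1f} x experimental uncertainty")
```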
Validation vs Calibration
The concept of validation just described is fundamentally different from what is commonly meant by "validation" in computational simulation practice. For example, in computational solid mechanics (CSM), "validation" generally involves the estimation or optimization of mathematical modeling parameters using experimental data from a system. In modal testing, for example, one experimentally measures the response of a structure to various types of excitations so that as many modes as possible are excited. Parameters in the mathematical model, typically damping and stiffness characteristics, are then optimized to best match the observed dynamics. This technique has long been used in CSM and has proven to be extremely effective and efficient in understanding complex structures. Using the concept of validation described above, however, this process would be referred to as model updating, model correlation, or model calibration. The reason for this distinction is that in CSM, analysts have traditionally determined physical parameters in the computational model through an iterative combination of experimental data and computational results. In contrast, validation as described above requires computing the results before the experiment is conducted (i.e., predictions), or at least before knowledge of the experimental results is available. Calibration is primarily used to make the computational results more consistent with existing experimental data, not to determine the accuracy of the results. Oftentimes, in fact, calibration may be a more appropriate process than validation because of constraints on fiscal budgets and computer resources, or because of incomplete physical modeling data.
And though the distinction between calibration and validation is not always easily recognizable, attempts should be made to recognize when calibration is exercised because it directly impacts the confidence in predictions from the computational model. Stated differently, calibration affects "how far" from the existing experimental database one can make a prediction and still retain an acceptable level of confidence in that prediction. The capability of a calibrated model to make accurate predictions becomes even more difficult to estimate if the mathematical model is poorly formulated or if insufficient numerical discretization is used in the computational solution. For example, in analyzing shock waves through complex structures, it is common to use computational simulations for which grid-resolved solutions have not been attained, and may be far from being attained. When physical modeling parameters are determined from solutions on grids that are clearly under-resolved, the optimized parameters are biased away from their physically based values by the numerical error in the solution. This bias degrades the predictive capability of the mathematical model and makes it much less reliable for future applications.
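To make the calibration/validation distinction concrete, here is a minimal sketch, assuming a hypothetical two-degree-of-freedom spring-mass model, of what calibration (not validation) looks like: stiffness parameters are adjusted until the model's natural frequencies match measured ones. The masses, measured frequencies, and initial guesses are all invented for illustration; the point is only that the experimental data are used to fit the model, rather than to test a prediction made beforehand.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical 2-DOF spring-mass chain: masses assumed known, stiffnesses calibrated.
m = np.array([2.0, 1.5])               # kg (assumed known)
f_measured = np.array([3.1, 8.7])      # Hz, measured natural frequencies (hypothetical)

def natural_frequencies(k):
    """Undamped natural frequencies (Hz) of a 2-DOF chain with stiffnesses k1, k2 (N/m)."""
    k1, k2 = k
    K = np.array([[k1 + k2, -k2],
                  [-k2,      k2]])
    M = np.diag(m)
    # Generalized eigenvalue problem K v = w^2 M v, solved via M^-1 K for brevity.
    eigvals = np.linalg.eigvals(np.linalg.solve(M, K))
    return np.sqrt(np.sort(eigvals.real)) / (2.0 * np.pi)

def residual(k):
    # Calibration: drive the model frequencies toward the measured ones.
    return natural_frequencies(k) - f_measured

fit = least_squares(residual, x0=[1000.0, 1000.0], bounds=(0.0, np.inf))
print("calibrated stiffnesses (N/m):", fit.x)
print("model frequencies after calibration (Hz):", natural_frequencies(fit.x))
```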
Validation vs Prediction
It is difficult to sort out the relationship between validation and prediction because each discipline in science and engineering attaches a different connotation to the terms. We believe the reason for this difficulty can be captured, in part, by the following question: "Exactly what does validation imply?" A significant step toward clarifying this question was taken recently by the American Institute of Aeronautics and Astronautics Committee on Standards for computational fluid dynamics.2 The committee recommended that the meaning of computational prediction be viewed in the context of the validation database. Prediction is defined as "use of a computational model to foretell the state of a physical system under conditions for which the computational model has not been validated." A prediction therefore refers to the computational simulation of a specific case of interest that is different from cases that have been validated. The validation database should thus be viewed as historical evidence that a model has achieved a given level of accuracy in the solution of specified problems. From this perspective, it becomes clear that validation comparisons do not directly make claims about the accuracy of predictions; they support inferences about that accuracy. A view of the relationship between validation and prediction has recently been suggested in Ref. 5 and is shown in Fig. 2.
Fig. 2: Relationship of validation to prediction5
The bottom loop of the figure shows the validation process, and the top path shows the prediction process. Data needed for input to the computational model are provided by the validation experiment, but the experimental outcomes are not provided. Computational predictions are made and then compared with the experimental outcomes. The level of agreement, or disagreement, is used to infer confidence in predictions for cases that are not in the validation database. The predictive cases of interest may be very closely related to those in the database, or they may be quite different. For example, predictive cases of interest may involve strongly coupled physics processes, whereas cases in the validation database may have lower levels of coupling.
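The logic of Fig. 2 can be caricatured in a few lines of code: validation results are stored as historical evidence, and any statement about a new prediction is an inference from that evidence, one that weakens as the prediction moves away from the validated conditions. The database contents and the single "loading level" parameter below are hypothetical.

```python
# Hypothetical validation database: loading level (kN) vs. observed model error (%)
validation_db = {
    10.0: 3.2,   # at 10 kN load, the prediction differed from experiment by 3.2%
    25.0: 4.8,
    40.0: 7.5,
}

def inferred_confidence(load_kN):
    """Report the nearest validated condition; flag extrapolation beyond the database."""
    nearest = min(validation_db, key=lambda c: abs(c - load_kN))
    extrapolating = load_kN < min(validation_db) or load_kN > max(validation_db)
    note = "extrapolation beyond validated range" if extrapolating else "within validated range"
    return (f"prediction at {load_kN} kN: nearest validated case {nearest} kN "
            f"(observed error {validation_db[nearest]}%), {note}")

print(inferred_confidence(30.0))
print(inferred_confidence(80.0))   # far from the database: confidence must be inferred, not assumed
```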
Validation Hierarchy
Because it is sometimes not feasible to conduct true validation experiments on complex systems, we recommend using a building-block approach to identify validation experiments that are possible to achieve.2 This approach divides the complex engineering system of interest into three or more progressively simpler tiers: subsystem cases, benchmark cases, and unit problems (see Fig. 3). The strategy in the tiered approach is to assess how accurately the computational results compare with the experimental data at multiple levels of complexity, e.g., in materials, geometry, and physics coupling. Of these contributors, the most important to segregate into separate-effects experiments, from one tier to the next, is physics coupling, which commonly contains the strongest nonlinearity of the various contributors.

The construction of a validation hierarchy for an engineering system is clearly not unique. The hierarchy must be application driven to be of engineering value, not code driven. As one constructs each lower tier, the emphasis moves from multiple coupled codes simulating different types of physics to single codes simulating a particular type of physics. Examples of types of physics that could be segregated as one moves down the hierarchy are structural dynamics, fluid dynamics, chemical reactions, heat transfer, and radiation transport.
Fig. 3: Validation tiers2
The complete system consists of the actual engineering hardware for which a reliable computational tool is needed. Thus, by definition, all the geometric, material, and physics effects occur simultaneously. In a typical complex engineering system (such as an offshore oil platform), multidisciplinary, coupled physical phenomena occur together. Data are measured on the engineering hardware under realistic operating conditions. The quantity and quality of these measurements, however, are always very limited because of the cost and difficulty of detailed characterization of the system and the forcing functions of the environment, e.g., accurate characterization of material and mechanical properties of structural elements, detailed as-constructed characteristics of the structure, and actual loading on the structure.

Subsystem cases represent the first decomposition of the actual hardware into simplified systems or components. Each of the subsystems or components is composed of actual hardware from the complete system. Subsystem cases usually exhibit three or more types of coupled physics. Although the physical processes of the complete system are partially represented by the subsystem cases, the degree of coupling between the various physical phenomena is typically reduced. For example, the geometric features are restricted to the subsystem and its attachment, or simplified connection, to the complete system. And while the quality and quantity of the test data are usually significantly better for subsystem cases than for the complete system, test data for subsystem cases are still limited.

Benchmark cases represent the next level of system decomposition. For these cases, special hardware is fabricated to represent the key features of each subsystem. Compared to the actual hardware, the benchmark hardware exhibits simplified geometry, properties, or materials. The benchmark hardware is normally not functional, and it may not be fabricated with all of the same materials as the actual subsystems or components. Typically, only two or three types of coupled physics are considered in benchmark cases. Thus, for example, in a fluid-structure interaction problem, the benchmark hardware could be constructed with fairly representative materials and connectors, but the fluid characteristics might not be representative in terms of Reynolds number, turbulence level, and two-phase flow characteristics. Most of the experimental data obtained in benchmark cases have associated estimates of measurement uncertainties, and most of the required computational modeling data, initial conditions, boundary conditions, and excitation functions are measured.

Unit problems represent the total decomposition of the complete system. At this level, high-precision special-purpose hardware is fabricated, inspected, and characterized, but this hardware rarely resembles the hardware of the subsystem from which it originated. Only one element of complex physics is allowed to occur in each of the unit problems examined. The purpose of unit problems is to isolate elements of complex physics so that critical evaluations of mathematical models or submodels can be made. For example, unit problems could use test coupons of representative materials, and very idealized materials, connectors, and geometries, for investigating joint damping. Highly instrumented, highly accurate experimental data are obtained from unit problems, and an extensive uncertainty analysis of the experimental data is prepared.
For unit problems, all of the important code input data, initial conditions, boundary conditions, and excitation functions are accurately measured and documented. These types of experiments are commonly conducted in universities or in research laboratories.

Based on the above tiered approach, how does one actually go about constructing a validation hierarchy for a complex, multidisciplinary system? Figure 4 is an example of a portion of a validation hierarchy for an air-launched, air-breathing, hypersonic cruise missile.6 At the system level, the cruise missile is divided into four systems: the airframe system, the propulsion system, the warhead system, and the guidance, navigation, and control system. One can view the validation hierarchy of each of these systems as the primary facets of a four-sided pyramid. As shown, the airframe system is divided into three additional facets, representing its three subsystems (i.e., aero/thermal protection, structural, and electrodynamics). Similarly, the propulsion system is divided into four additional facets to represent its subsystems (i.e., compressor, combustor, turbine, and thermal signature).
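One simple way to keep track of such a hierarchy during a validation program is to record it as nested data, with each tier pointing to its cases. The sketch below encodes a fragment of the missile hierarchy of Fig. 4; the benchmark-case and unit-problem entries are invented placeholders, not items taken from Ref. 6.

```python
# A fragment of a validation hierarchy recorded as nested data.
# System and subsystem names follow Fig. 4; lower-tier entries are hypothetical placeholders.
validation_hierarchy = {
    "complete system": "hypersonic cruise missile",
    "systems": {
        "airframe": {
            "subsystems": {
                "aero/thermal protection": {
                    "benchmark cases": ["panel heating rig (placeholder)"],
                    "unit problems": ["material thermal-response coupon (placeholder)"],
                },
                "structural": {},
                "electrodynamics": {},
            }
        },
        "propulsion": {
            "subsystems": {"compressor": {}, "combustor": {}, "turbine": {}, "thermal signature": {}}
        },
        "warhead": {},
        "guidance, navigation, and control": {},
    },
}

def list_subsystems(hierarchy, system):
    """Return the subsystem names recorded under a given system."""
    return list(hierarchy["systems"][system].get("subsystems", {}).keys())

print(list_subsystems(validation_hierarchy, "airframe"))
```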
CHARACTERISTICS OF VALIDATION EXPERIMENTS
Many ask, "What is a validation experiment?" or "How is a validation experiment different from other experiments?" These are appropriate questions. In response, we begin by suggesting that traditional experiments generally can be grouped into three categories. The first category comprises experiments that are conducted primarily to improve the fundamental understanding of some physical process; sometimes these are referred to as physical-discovery experiments. The second category consists of experiments conducted primarily to construct or improve mathematical models of fairly well-understood processes. Experiments in the third category are those that determine the functionality, reliability, performance, or safety of components, subsystems, or complete systems. These experiments are commonly called "product acceptance tests" or "performance tests" of engineered components or systems.
Fig. 4: Validation pyramid for a hypersonic cruise missile6
Based on the above categorization, we contend that validation experiments fall into none of the traditional categories and thus constitute a new type of experiment. A validation experiment is conducted for the primary purpose of determining the validity of a computational model. In other words, a validation experiment is designed, executed, and analyzed to quantitatively determine how well a mathematical model and its embodiment in a computer code can simulate a well-characterized physical process. Thus, in a validation experiment "the code is the customer."

During the past several years, a group of researchers at Sandia National Laboratories has been developing procedures and guidelines for designing and conducting validation experiments.7–10 As part of this work, we recommend the following six guidelines.
Guideline 1
Guideline 1: A validation experiment should be jointly designed by experimentalists and code developers or analysts working closely together throughout the program, from inception to documentation, with complete candor about the strengths and weaknesses of each approach. Validation experiments should be planned and conducted as part of a team effort between experimentalists and analysts. No one is allowed to withhold limitations or deficiencies, and failure or success of any part of the effort must be shared by all. Without this level of cooperation, openness, and commitment in a team environment, the process is likely to fail or fall short of its potential for creating a more capable computational model.

Although Guideline 1 may sound easy to follow, it is extraordinarily difficult to accomplish in practice. For example, suppose that the CSM team is in one organization and the experimental test facility is in a completely separate organization, e.g., the experimental facility is contracted by the organization of the CSM team to conduct the experiments. The experimental facility will be extremely reluctant to expose its weaknesses, limitations, or deficiencies, especially if the facility was, or will be, in competition with other facilities for future contracts. Validation experiments require a much greater depth of probing into the limitations of a facility and the limitations of a CSM capability than do any other types of experiments.
Guideline 2
Guideline 2: A validation experiment should be designed to capture the essential physics of interest, including all relevant physical modeling data and initial and boundary conditions required by the code. By essential physics of interest we mean spatial dimensionality, temporal nature, geometrical complexity, and material properties. Experimentalists must understand the modeling assumptions embodied in the code so that the experiment can match, if possible, those assumptions and requirements. If the testing parameters initially requested for the calculation cannot be satisfied in the proposed experimental facility, it may still be feasible to alter the code inputs and satisfy the primary validation requirements. For example, can the dynamic loading requested by the CSM team be applied to the test specimen as specified? Is the type and quantity of instrumentation appropriate to provide the required data in sufficient quantity and at the required accuracy, spatial resolution, and frequency range?
Conversely, analysts must understand the limitations of the physical experiment and the facility, ensure that all the relevant physics are included, and define physically realizable boundary conditions.

We should note that almost all published experimental data have limited use in the validation of CSM codes. The most common reason is that the documentation does not provide enough detail about all of the necessary physical modeling parameters, initial conditions, and boundary conditions. Occasionally, one may find sufficient characterization of an experiment in corporate or university reports, but rarely is sufficient detail contained in journal articles and conference papers.

Guideline 3
Guideline 3: A validation experiment should strive to emphasize the inherent synergism between computational and experimental approaches. By "synergism," we mean an activity conducted by one approach, whether computational or experimental, that generates improvements in the capability, understanding, or accuracy of the other approach. In such an exchange, both approaches benefit. Some who are discovering the benefits of validation experiments claim that this synergism is the primary value of validation experiments. While the benefits accruing from close cooperation between analysts and experimentalists may be surprising, we have found that validation experiments contribute numerous other rewards as well.

Here is one example of how this synergism can be put into practice. During the planning stages of a validation experiment, CSM simulations should be used to improve the design, instrumentation, and execution of the experiment. An analyst could compute, before the experiment is conducted, the mode shapes and frequencies of a structure. Such computations would allow the experimentalist to design the instrumentation system and choose the locations of the sensors to best measure the modes. This strategy could be taken a step further by optimizing the physical characteristics of the experiment to most directly stress the model, i.e., by designing the experiment to "break the model." This can be done by optimizing the physical modeling parameters, such as material properties, and by modifying the geometry in various parts of the structure. In our experience, however, we have found that code developers have little enthusiasm for breaking their code.

Guideline 4
Guideline 4: Although the experimental design should be developed cooperatively, independence must be maintained in obtaining both the computational and experimental results. This guideline is included because it is so common for analysts to calibrate their results to the experimental measurements. For example, analysts have been known to say, "Why should I do any more mesh refinement when the code agrees with the experimental data?"

It is difficult to get analysts and experimentalists to cooperate in a joint endeavor and, at the same time, retain the independence of each group's results. However, this challenge can be met by careful attention to procedural details. For example, when the experimental measurements are reduced and analyzed, the CSM team should not be given the results initially. The CSM team should be given the complete details of the physical modeling parameters and the initial and boundary conditions of the experiment, exactly as it was conducted. That is, everything that is needed for the analysts to compute solutions must be provided, but no more. The analysts would then compute and present the results for comparison with the experimental data.

Guideline 5
Guideline 5: A hierarchy of experimental measurements of increasing computational difficulty and specificity should be made. This hierarchy is entirely different from the one proposed in Fig. 3. Here, we are referring to a multilevel approach to identifying experimental measurements, where experiments performed at each level are designed to test the predictions made by analysts at that level. The levels in the hierarchy represent a spectrum of quantities of increasing specificity (and consequently difficulty) in terms of predictive capability and experimental measurement.

As an example, the hierarchy could define the predicted and measured quantities from the global level to the local level. For instance, the spectrum of quantities predicted and then measured in a crash-worthiness test might include the following: total crush-up distance, crush force versus distance during crush-up, deformed shape of individual structural members, plastic strain at points along structural members, and locations of metal contact, tearing, and weld popping. As another example, the hierarchy could include the spectrum of predicted and measured mode shapes and frequencies, progressing from the low-frequency modes to the more complex high-frequency modes. Note that moving through such spectrums of measured quantities will significantly increase the challenge for both analysts and experimentalists.

Guideline 6
Guideline 6: The experimental design should be constructed to analyze and estimate the components of random (precision) and bias (systematic) experimental errors. The standard technique for estimating uncertainty in experimental data has been developed over a number of years by many researchers and practitioners and is well documented in texts such as Coleman and Steele.11 The standard technique propagates components of random uncertainty through the entire data-flow process. These components and their interactions are estimated at each level of the process, from the sensor level to the experimental-result level. We believe that random error estimation and sampling from multiple realizations, as is done in the standard technique, is just the minimum level of effort required for uncertainty estimation in validation experiments. Even this level may not be sufficient, however, because of possible bias errors in the experiments.

During the last 15 years a very different approach from the standard procedure has been developed for estimating experimental uncertainty.7–10 Instead of propagating individual uncertainty components through the data-flow process, we have developed an "end-to-end" approach that compares multiple experimental measurements of the same experimental quantity and then statistically computes the uncertainty.
This approach relies fundamentally on symmetry arguments and multiple realizations to estimate correlated bias errors. The traditional experimental uncertainty estimation approach could be viewed as an a priori approach, whereas ours is an a posteriori approach. Our approach provides not only a better estimate of random errors and certain correlated bias errors, but also a way to segregate important error contributions that cannot be estimated with the traditional approach.
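As a toy illustration of the difference, the sketch below estimates uncertainty directly from the scatter of repeated end-to-end measurements of the same quantity (hypothetical numbers), rather than propagating sensor-level uncertainty components. The symmetry arguments used in Refs. 7–10 to expose correlated bias errors are not reproduced here.

```python
import numpy as np

# Hypothetical repeated end-to-end measurements of the same quantity
# (e.g., a peak strain) from nominally identical experimental realizations.
realizations = np.array([412.0, 418.5, 409.7, 415.2, 420.1, 411.8])  # microstrain

mean = realizations.mean()
std = realizations.std(ddof=1)                   # sample standard deviation
std_of_mean = std / np.sqrt(len(realizations))   # random uncertainty of the mean

print(f"mean = {mean:.1f} microstrain")
print(f"sample scatter (1 sigma) = {std:.1f} microstrain")
print(f"estimated random uncertainty of the mean = {std_of_mean:.1f} microstrain")
# Note: scatter across realizations captures combined random errors end-to-end,
# but correlated bias errors require additional arguments (e.g., symmetry checks).
```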
CONCLUSIONS
ASCI has brought to the fore many important issues related to the ability of computational simulations to predict the response of a wide variety of complex engineered systems. The building blocks of our confidence assessment are verification of computer codes and validation of computational models. We contend that a new type of experiment, the validation experiment, is needed so that more quantifiable statements of the level of agreement between computational results and experimental data can be made. Only with experimentalists and computational analysts working closely together can we better understand the limits of computational models and improve our experimental capabilities. We must also improve our understanding of how validation evidence for our computational models can be used to quantify our assessment of prediction accuracy.
References
1. DOE, "Accelerated Strategic Computing Initiative (ASCI) Program Plan," Department of Energy, Defense Programs, DOE/DP-99-000010592, Washington, DC, 2000.
2. AIAA, "Guide for the Verification and Validation of Computational Fluid Dynamics Simulations," American Institute of Aeronautics and Astronautics, AIAA-G-077-1998, Reston, VA, 1998.
3. Pilch, M., Trucano, T.G., Moya, J.L., Froehlich, G.K., Hodges, A.L., and Peercy, D.E., "Guidelines for Sandia ASCI Verification and Validation Plans—Content and Format: Version 2," Sandia National Laboratories, SAND2000-3101, Albuquerque, NM, 2001.
4. Schlesinger, S., "Terminology for Model Credibility," Simulation, Vol. 32, No. 3, 1979, pp. 103–104.
5. Oberkampf, W.L., and Trucano, T.G., "Verification and Validation in Computational Fluid Dynamics," Progress in Aerospace Sciences, accepted for publication, 2001.
6. Oberkampf, W.L., and Trucano, T.G., "Validation Methodology in Computational Fluid Dynamics," American Institute of Aeronautics and Astronautics, AIAA 2000-2549, Fluids 2000 Conference, Denver, CO, 2000.
7. Oberkampf, W.L., and Aeschliman, D.P., "Joint Computational/Experimental Aerodynamics Research on a Hypersonic Vehicle: Part 1, Experimental Results," AIAA Journal, Vol. 30, No. 8, 1992, pp. 2000–2009.
8. Oberkampf, W.L., Aeschliman, D.P., Tate, R.E., and Henfling, J.F., "Experimental Aerodynamics Research on a Hypersonic Vehicle," Sandia National Laboratories, SAND92-1411, Albuquerque, NM, 1993.
9. Oberkampf, W.L., Aeschliman, D.P., Henfling, J.F., and Larson, D.E., "Surface Pressure Measurements for CFD Code Validation in Hypersonic Flow," American Institute of Aeronautics and Astronautics, AIAA Paper No. 95-2273, 26th AIAA Fluid Dynamics Conference, San Diego, CA, 1995.
10. Aeschliman, D.P., and Oberkampf, W.L., "Experimental Methodology for Computational Fluid Dynamics Code Validation," AIAA Journal, Vol. 36, No. 5, 1998, pp. 733–741.
11. Coleman, H.W., and Steele, W.G., Jr., Experimentation and Uncertainty Analysis for Engineers, 2nd ed., John Wiley & Sons, New York, 1999.