PSYCHOMETRIKA, VOL. 40, NO. 2, JUNE 1975
REVIEWS

David H. Krantz, R. Duncan Luce, Patrick Suppes, and Amos Tversky. Foundations of Measurement, Volume I. New York: Academic Press, 1971. Pp. xxiv + 577. $18.50

A reviewer of a book for Psychometrika feels vaguely that he ought to begin by declaring, "The readership of this journal can expect a book of this importance to appear on the average about every N years," or perhaps, "would prefer this book over P percent of the volumes that have appeared in the last ten years dealing with Psychology as a Quantitative Rational Science." Forced to do so, I should have said something like N = 15 and P = 99, but in the case of this book this would be distinctly paradoxical. It is the aim of the authors to provide a rational basis for quantification, and they would certainly agree that no such basis is now known for quantification of this sort. This would only be the beginning of the dilemmas that arise in reviewing Foundations of Measurement, Volume I, for although I shall argue that this first of two volumes on fundamental measurement theory is indeed very important, I must also point out that it is likely to be a difficult book for most psychometricians, and one large source of the difficulty is the question of whether it has very much to offer them. This is in part the fault of the authors, but it is also due to a lamentable lack of communication between two large groups of applied mathematicians, neither of which is very comfortable with the other's technology and objectives. It seems important to try to contribute in the course of the review to a better understanding of what these two groups have in common and can offer each other.

The subject of Foundations of Measurement, Volume I (or more compactly FM-1 in what follows) is a class of mathematically stateable conditions on sets of things which make possible the assignment of numbers to the things in such a way that the conditions on the set of things also apply to the set of numbers assigned.
That is, it is taken as basic that someone measures so as to have a compact and portable representation or picture of something important about the structure of the set. Various sets of numbers also have structures of various kinds, and it is not so much the set itself which one wishes to capture in quantification as the structure. Thus, a particular set of numbers (integers, rationals, reals, real vectors, for example) is like an artist's palette; it is the way the paints are organized on a canvas in relation to something being portrayed that is interesting rather than the pigments themselves. Naturally, the analysis of data necessarily presupposes that some kinds of structure on the numbers involved are important and relevant while other kinds are not. Data analysis thus inevitably leads one back to the question: what structure in the set of things was the quantification supposed to be capturing in terms of a corresponding numerical structure? From this point of view, FM-1 is a catalogue of possibilities, some of obvious relevance to certain types of scientists, some possibly relevant to someone someday, and others mere curiosities. An annotated table of contents is:

Chapter 1. Introduction: A statement of the aim of the book, a description of three basic classes of structure which are quantifiable, and some illustrations of these provide an excellent and nontechnical treatment of fundamental measurement.

Chapter 2. Construction of Numerical Functions: The theorems and proofs basic to much of what follows are collected in the most mathematically dense chapter of the book. The authors have tried to organize the remaining chapters so as to avoid the necessity of reading this chapter.
Chapter 3. Extensive Measurement: The classical quantification procedures basic to physics and founded on the existence of a concatenation operation (that is, one which is associative and commutative) are exhaustively discussed.

Chapter 4. Difference Measurement: This continues Chapter 3 to include operations representable by differences or ratios.

Chapter 5. Probability Representations: Although the measurement of probability is in principle a form of extensive measurement, it is in practice fraught with peculiar problems. Only a few of the attempts to deal with these are discussed in detail.

Chapter 6. Additive Conjoint Measurement: An alternative formulation of structures representable additively is in terms of equality conditions on pairs of elements. This approach as well as conditions for n-tuples is treated in detail.

Chapter 7. Polynomial Conjoint Measurement: This chapter mainly concerns itself with possible relations among triples of objects in addition to those dealt with in Chapter 6.

Chapter 8. Conditional Expected Utility: A discussion of the measurement of utility of gambles occurs in this chapter, which appeals to results in previous chapters. This is basically an application chapter and somewhat foreign to the more general spirit of the remainder of the book.

Chapter 9. Measurement Inequalities: This is a brief treatment of inequality conditions on finite sets which could lead to numerical representation.

Chapter 10. Dimensional Analysis and Numerical Laws: Also somewhat outside the concerns of the rest of the book, this chapter is unique in considering explicitly the representation of relations between measurable structures. It deals with the fact that virtually all the laws of physics can be expressed numerically as multiplications or divisions of measurements.
Although this rule has been known for a long time and forms the basis of the technique of dimensional analysis widely used in engineering and physics, it remains a phenomenon for which no satisfactory explanation has been forthcoming.

Although I found it well written and generally very clear, I would not recommend FM-1 as a text. Treatments of fundamental measurement theory more suitable for an elementary introduction are available in Coombs, Dawes, and Tversky [1970] and Dawes [1972]. Instead, FM-1 should be mainly viewed as a reference work which also might be suitable as an introduction to the area for the well-equipped reader. The book has been carefully designed to optimize its usefulness both to the casual reader and to the researcher using it as a reference. Chapters are divided into sections and subsections conveniently labelled for quick reference, with those sections which the authors regard as central marked with a square (□). Theorems, axioms, and lemmas are stated in italics, and I found this very helpful for quick reference. The reference list is as long and comprehensive as one would expect of a work covering as much ground as this does, although there are a few surprising omissions which I shall mention later. Each chapter concludes with a set of problems, most of which are not at all difficult and some of which are quite illuminating. These problems along with the many detailed proofs provide a rich training ground for prospective investigators.

The basic tools for the construction of numerically representable axiom systems are set theory, algebra, and topology. Unfortunately, readers of this journal are apt to be more familiar with linear algebra, mathematical statistics, and other branches of classical analysis; contemporary data analysis is a highly specialized trade. Thus many readers will find themselves needing to dust off long unused set theoretic and algebraic concepts, and comparatively few will have had access to elementary topology.
The authors have made every attempt to use only basic notions in the first two areas and have tried to avoid topology altogether. This naturally greatly lengthens the proofs and the result may be that the reader will be deterred by the sheer quantity of mathematics involved. Experienced mathematicians, on the other hand, will be aggravated by the needless clutter of lemmas
and derivational detail. An alternative solution to this inevitable problem might have been to include an additional Chapter Zero surveying useful mathematical concepts and results.

As one pushes on beyond Chapter 3, a feeling begins to emerge that one is seeing an almost endless list of axiomatic structures, each differing from the preceding ones in only small detail. Surely, one asks, it would have been possible to organize the material in such a way as to reduce this redundancy considerably. For example, virtually all operations on pairs of elements dealt with are shown to relate to the additive operation on numbers in the following way: if the operation is indicated by ∘, then there are continuous monotonic functions f, g, and h such that h(a1 ∘ a2) = f(a1) + g(a2) for any pair of elements a1 and a2. This relation between two operations is known as an isotopy. The better known relation of homomorphy results when f = g = h. A single set of representation and uniqueness theorems for the whole class of operations isotopic to addition would take care of almost all the separate theorems produced in detail in the book. In fact, Chapter 6 provides just such a set of results in the representation of additive conjoint measurement structures. More generally, Aczél [1965] in a very useful survey paper provides a simple proof that the class of structures isotopic to an Abelian (i.e., associative and commutative) group is equivalent to the class of structures satisfying the double cancellation or Thomsen condition. This paper is unfortunately omitted from the references, although the original German paper of which it is a translation does appear. In general I wonder why an allusion to concepts and work in nonassociative algebras, and particularly to the survey by Bruck [1971] (first published in 1958), does not appear anywhere in the book. Concepts such as groupoid, quasigroup, loop, and isotopy are in my mind central to what the authors are doing.
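To make the distinction concrete (a small sketch of my own, not an example from FM-1): the arithmetic mean is isotopic to addition but not homomorphic to it, since the three mediating functions need not coincide.

```python
import random

# Illustration (mine, not the book's): the arithmetic mean
#     a1 o a2 = (a1 + a2) / 2
# is isotopic to addition, because with f(t) = g(t) = t/2 and h(t) = t
# we have h(a1 o a2) = f(a1) + g(a2).  It is not a homomorphism,
# since f, g, and h are not all equal.

def mean_op(a1, a2):
    return (a1 + a2) / 2

def f(t):
    return t / 2

g = f  # the two "input" functions happen to coincide here

def h(t):
    return t

random.seed(0)
for _ in range(1000):
    a1, a2 = random.uniform(-10, 10), random.uniform(-10, 10)
    assert abs(h(mean_op(a1, a2)) - (f(a1) + g(a2))) < 1e-12
print("isotopy to + verified for the arithmetic mean")
```

The check is of course no proof, but it illustrates how weak a requirement isotopy is compared with homomorphy.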
An even more general representation theorem is given by Hartman [1972], who shows that any ordered quasigroup (a set with an operation which is solvable in both arguments) which satisfies the Archimedean axiom is representable in the reals. More specifically, the great majority of operations considered, especially in Chapters 2 through 5, Chapter 8, and Chapter 10, are examples of bisymmetric operations satisfying (w ∘ x) ∘ (y ∘ z) = (w ∘ y) ∘ (x ∘ z). In fact, one is hard put to think quickly of a nonbisymmetric numerical operation, and a very early treatment of this very important class of operations would seem obviously desirable. Instead, bisymmetry is briefly discussed in Chapter 6.

These comments can be summed up by complaining that the algebra in the book is treated inductively, going from the narrowly specialized associative and commutative operations in Chapters 2 and 3, working through a variety of bisymmetric operations in Chapter 4, returning to concatenations in Chapter 5, and finally treating the whole problem of operations isotopic to addition in Chapter 6. This would not be too serious were it not for the fact that each axiom system is accompanied by its own battery of lemmas and proofs. Thus, although the authors cite the second chapter as containing the core of mathematical results for what follows, I feel that the reader should, instead, move directly from Chapter 1 to Chapter 6 and then work backwards and forwards.

The algebraic aspect of the axiom systems is only half the story: almost all interesting structures in some way incorporate the concept of the relative nearness of pairs of elements. This idea of the proximity of some elements to each other will sound perfectly natural to devotees of multidimensional scaling, and it is fundamental to virtually all applied mathematics.
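The bisymmetry condition mentioned above is easy to probe numerically; a small illustrative check of my own (not from the book) confirms that the arithmetic mean is bisymmetric while a deliberately lopsided operation is not.

```python
import random

# Sketch (mine): test (w o x) o (y o z) == (w o y) o (x o z) on random
# quadruples.  The arithmetic mean passes; the asymmetric operation
# w o x = w + x**2 (a made-up counterexample) fails.

def mean_op(a, b):
    return (a + b) / 2

def skew_op(a, b):
    return a + b ** 2

def bisymmetric(op, trials=1000, tol=1e-9):
    random.seed(2)
    for _ in range(trials):
        w, x, y, z = (random.uniform(-5, 5) for _ in range(4))
        if abs(op(op(w, x), op(y, z)) - op(op(w, y), op(x, z))) > tol:
            return False
    return True

print(bisymmetric(mean_op), bisymmetric(skew_op))  # True False
```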
It is also a basic characteristic of all familiar number systems; we know that one number is more proximal to a fixed point than another by the fact that any open set which includes the second number and the fixed value will also include the first (after perhaps
translating it). Without a means of defining proximity on the structured set, the use of numbers seems basically pointless. The treatment of proximity is the main aim of topology; in order to avoid its use, the authors resort to the Archimedean axiom: every strictly bounded standard sequence is finite, where a standard sequence is a strictly monotonic sequence of elements generated so as to have a standard spacing between adjacent pairs. Even though it has a very long history (Hölder [1901], for example), I would like to complain about its use on two grounds. In the first place, it is too general, since the rational numbers as well as the integers and reals satisfy it (with addition). The rationals are basically unsatisfactory for scientific work, a fact that was recognized by Pythagoras. Nowhere in FM-1 are the rationals mentioned as useful representations, it being more or less implicitly assumed that only integers and reals are really worth establishing representation and uniqueness theorems for. Hence, the fact that the rationals are inadvertently included indicates that the Archimedean axiom is not quite what the authors really had in mind. My second complaint about the Archimedean axiom is that its algebraic nature disguises what is really going on; that is, the specifying of a topology or definition of proximity. The authors themselves seem to be in some confusion on this point, since they say on page 25, "What is surprising is that it is a needed axiom ..." and "... with our relatively weak structural assumptions, we do not know how to eliminate it in favor of more desirable necessary axioms." I feel much would have been achieved by appealing directly to topological concepts suitably defined within the text.
It is true that other authors such as Pfanzagl [1968] on occasion employ topological conditions which are too strong; nevertheless, a clean separation between the algebraic and topological aspects of the axiomatic structures presented would aid clarity and shorten proofs. I would favor completeness as a suitable replacement; it is frequently defined in textbooks on analysis and is therefore more familiar than other notions such as second countability and local compactness. It is, of course, stronger than the Archimedean axiom, as the additive rationals illustrate. But if one agrees that the goal of fundamental measurement theory is to investigate axiomatic structures which reflect what scientists believe or observe in the phenomena they are studying, then the Archimedean axiom is too weak.

Many readers will be disappointed that there is not more concern in FM-1 with establishing the relevance of the axiomatic structures mentioned. This is not to say that there is not a good deal of discussion of issues such as the practical limits on the solvability of equations and the breakdown of the conditions of simple order. Undoubtedly FM-2 will contain more material on these and other problems. Nevertheless, a much more complete treatment of the relevance of extensive measurement to behavioral data is possible than one finds in Section 3.14. One has only to read some of the papers by Stevens, for example, to find much material on the possible concatenability of some types of behavior and the evidence for the bisectability of others. It seems clear that one could endlessly generate axiom systems representable numerically; what should limit this process is whether or not someone has a phenomenon for which the system is a sensible description, just as a technique for data analysis should be related to some collection of data needing analysis in this way.
The psychometrician reading FM-1 will look in vain for any treatment of the data analysis problems inevitable in applications of fundamental measurement theory. A brief comparison with scaling theory in Section 1.6 points out that most data analysis techniques are in fact mappings from one numerical representation into another, thus presuming that the original quantification has a rational basis. Obviously such a presumption is seldom justified in current applications of scaling theory and psychometrics. The authors promise to confront the problems arising in attempting to confirm a particular axiomatic structure
as a suitable model, especially when error is a serious factor, in Chapters 15, 16, and 17 of FM-2. Hence, we cannot really complain that they have sidestepped the question of just how one obtains fundamental measurement of behavioral phenomena. In Section 3.14 on extensive measurement in the social sciences it is pointed out that an empirical concatenation operation is usually not available, but that this does not necessarily rule out fundamental extensive measurement.

In anticipation of FM-2 I would like to make a few comments on scaling and statistical problems suggested by FM-1, partly because I am somewhat disappointed in the cursory treatment of these problems in FM-1 and partly because it seems to me that there is a great deal of interesting and useful work to be done. These remarks are the first part of my argument that this is a very important book for readers of Psychometrika. There appear to be basically two kinds of questions which arise in applying any branch of fundamental measurement theory. The first centers on confirming that a particular axiomatic structure is a suitable statement about a particular variable or set of variables. For example, one might propose that a particular dependent variable relates to a certain pair of independent variables in a way that satisfies the axioms of additive conjoint measurement. The problem is then to confirm that the double cancellation axiom is satisfied. This, unfortunately, is not at all easy, since the axiom is posed in such a way that solutions must be found to sets of equations. Hence, as Tukey [1969] points out, we have seen hardly any applications of additive conjoint measurement.
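For a finite two-way table the condition can at least be checked exhaustively; the following sketch is my own illustration, not a procedure from FM-1.

```python
from itertools import permutations

def double_cancellation_ok(w):
    """Exhaustively check double cancellation on a two-way numeric table:
    for all row triples (i1, i2, i3) and column triples (j1, j2, j3),
        w[i1][j2] >= w[i2][j1]  and  w[i2][j3] >= w[i3][j2]
    must imply  w[i1][j3] >= w[i3][j1]."""
    n, m = len(w), len(w[0])
    for i1, i2, i3 in permutations(range(n), 3):
        for j1, j2, j3 in permutations(range(m), 3):
            if w[i1][j2] >= w[i2][j1] and w[i2][j3] >= w[i3][j2]:
                if not w[i1][j3] >= w[i3][j1]:
                    return False
    return True

# Any additive table theta_i + psi_j passes (add the two antecedent
# inequalities to obtain the conclusion); a contrived table need not.
additive = [[r + c for c in (0, 1, 3)] for r in (0, 2, 5)]
violator = [[0, 5, 0], [0, 1, 5], [5, 0, 2]]
print(double_cancellation_ok(additive), double_cancellation_ok(violator))
# True False
```

Satisfying the condition is of course necessary rather than sufficient for an additive representation, and with fallible data the interesting question is how often and how badly it fails.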
The trouble may be either that the use of equality conditions like double cancellation is inconvenient for experimentation and that alternative but equivalent axiomatizations will be more satisfactory or, on the other hand, that it is just too loose a hypothesis to entertain about data and that we should concentrate on situations where stronger axioms are asserted which are more easily testable. For example, the method of bisection proposes that a subject can choose a response from the same domain as two stimuli in a way which satisfies the axiom of reflexivity. A careful experimental investigation of this statement has shown us that this axiom is confirmed for some psychophysical variables and disconfirmed for others [Stevens, 1971]. Similar work waits to be done for additivity. The second question presupposes that a particular axiomatic structure is a suitable model for the phenomena and that one wishes to develop a procedure for producing fundamental measurement; that is, a numerical assignment which is consistent with the axiomatic structure. Frequently the axiomatic structure gives little help. For example, the constructive representation proof for extensive measurement makes use of finer and finer scales constructed by concatenating smaller and smaller elements with themselves an arbitrary number of times. This does not make practical sense, even in physics. Similarly, one cannot achieve much by attempting to construct dual standard sequences of the kind described in the representation theorem for additive conjoint measurement. An alternative technique has been discussed by Anderson [1970] in the case where a quantification exists which is known to be monotonically related to the fundamental quantification. 
Suppose, for example, that a number w_ij can be assigned to the ijth pair and that a fundamental measurement Θ_ij is known to exist such that it can be decomposed additively into the sum Θ_ij = Θ_i + Ψ_j according to the axioms of additive conjoint measurement. Given that the numbers w_ij are related by a strictly monotonic function Θ_ij = f(w_ij), the problem

f(w_ij) = Θ_i + Ψ_j

is an exercise in fitting a transformation of the dependent variable in a linear model, for which the algorithms of Box and Cox [1964] and Kruskal [1965] are suitable. If, moreover, a quantification x_i and y_j of the elements taken separately is available, related to the fundamental measurement by two functions Θ_i = g(x_i) and Ψ_j = h(y_j), then the problem
f(w_ij) = g(x_i) + h(y_j)

becomes a problem in transforming all variables in a regression analysis. Although no procedures have been published for this problem to my knowledge, extension of the work of Box and Cox [1964] seems to present no obvious obstacles. The conclusion is that fundamental measurement may involve the mapping of one numerical set into another and, moreover, traditional scaling procedures may be valuable as means of providing preliminary order-consistent quantification even if they do not have a rational basis. In the psychophysical case, the physical measurements of the elements of pairs provide an order-consistent quantification, and in applications of magnitude estimation the numerical responses of subjects might be considered as at least order-consistent with some fundamental measurement (although not as fundamental themselves without a good deal of rationalization). In short, there seems to be much for the psychometrician to do, and a thoughtful reading of FM-1 will surely suggest many more avenues to explore.

Finally, I wish to state the second part of my argument that FM-1 is required reading for expert psychometricians. The most challenging chapter in my mind is the last; it confronts the remarkable fact that throughout the gigantic range of physical knowledge numerical laws assume a remarkably simple form provided fundamental measurement has taken place. Although the authors cannot explain this fact to their own satisfaction, the extension to behavioral science is obvious: we may have to await fundamental measurement before we will see any real progress in quantitative laws of behavior. In short, ordinal scales (even continuous ordinal scales) are perhaps not good enough, and it may not be possible to live forever with a dozen different procedures for quantifying the same piece of behavior, each making strong but untestable and basically unlikely assumptions which result in nonlinear plots of one scale against another.
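A toy version of the transformation-fitting idea sketched above may make it concrete (my own illustration with fabricated data, not a published procedure): if a table is additive after a log transformation, a grid search over Box-Cox powers that minimizes the residual interaction sum of squares recovers the log.

```python
import numpy as np

# Fabricated data: w_ij = exp(Theta_i + Psi_j), so the table becomes
# exactly additive under the log transform (Box-Cox with lambda = 0).
rng = np.random.default_rng(1)
Theta = rng.normal(size=6)
Psi = rng.normal(size=5)
w = np.exp(Theta[:, None] + Psi[None, :])

def boxcox(w, lam):
    return np.log(w) if lam == 0 else (w ** lam - 1) / lam

def interaction_ss(z):
    # residual sum of squares after removing additive row and column effects
    r = z - z.mean(1, keepdims=True) - z.mean(0, keepdims=True) + z.mean()
    return float((r ** 2).sum())

grid = [k / 10 for k in range(-20, 21)]
best = min(grid, key=lambda lam: interaction_ss(boxcox(w, lam)))
print(best)  # 0.0 -- the log transform is recovered
```

With error in the data one would of course minimize a properly scaled criterion, as Box and Cox do via maximum likelihood; the sketch shows only the error-free skeleton of the idea.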
Progress in physics would have been impossibly difficult without fundamental measurement and the reader who believes that all that is at stake in the axiomatic treatment of measurement is a possible criterion for canonizing one scaling procedure at the expense of others is missing the point. A rationalization of quantification may be a necessary precondition to Psychology as a Quantitative Rational Science. J. O. Ramsay
MCGILL UNIVERSITY
REFERENCES

Aczél, J. Quasigroups-Nets-Nomograms. Advances in Mathematics, 1965, 1, 383-450.
Anderson, N. H. Functional measurement and psychophysical judgment. Psychological Review, 1970, 77, 153-170.
Box, G. E. P. and Cox, D. R. An analysis of transformations. Journal of the Royal Statistical Society, Series B, 1964, 26, 211-243.
Bruck, R. H. A survey of binary systems, Third printing. Berlin: Springer-Verlag, 1971.
Coombs, C. H., Dawes, R. M., & Tversky, A. Mathematical psychology: An elementary introduction. Englewood Cliffs, N. J.: Prentice-Hall, 1970.
Dawes, R. M. Fundamentals of attitude measurement. New York: Wiley, 1972.
Hartman, P. A. Integrally closed and complete ordered quasigroups and loops. Proceedings of the American Mathematical Society, 1972, 33, 250-256.
Hölder, O. Die Axiome der Quantität und die Lehre vom Mass. Ber. Verh. Kgl. Sächsis. Ges. Wiss. Leipzig, Math.-Phys. Classe, 1901, 53, 1-64.
Kruskal, J. B. Analysis of factorial experiments by estimating monotone transformations of data. Journal of the Royal Statistical Society, Series B, 1965, 27, 251-263.
Pfanzagl, J. Theory of measurement. Würzburg-Wien: Physica-Verlag, 1971.
Tukey, J. W. Analyzing data: Sanctification or detective work? American Psychologist, 1969, 24, 83-91.
Andrew L. Comrey. A First Course in Factor Analysis. New York and London: Academic Press, 1973. Pp. xii + 316. $12.95

This textbook is designed for students of psychology and education who have little quantitative training and who desire (a) an introduction to the factor analysis model, (b) knowledge of some, but not all, of the existing techniques of parameter estimation, and (c) some common sense advice on the design of factor analytic investigations. In general, the book fulfills this objective quite adequately. The chapter introducing the model clearly describes both the algebraic and geometric representations. Following this there are two carefully detailed chapters on successive factoring by the centroid, principal factor and minres methods. Three chapters are then given to procedures of rotation: the first two to orthogonal and oblique hand rotation and the third to varimax and the author's tandem criteria analytic methods. Finally there are three chapters on principles of design and interpretation of factor analytic investigations, a chapter on the author's use of factor analysis to develop a self-report personality inventory, and a chapter describing computer programs for the procedures discussed in earlier chapters. (The computer programs are not listed in the book but are available on tape from the publisher.) Given the limited purpose of the book, it should not be expected to provide a comprehensive coverage of factor analysis. There is, however, an omission which, unless covered separately in class and in supplemental reading, leaves the student unprepared to conduct or critically analyze an important type of factor analytic investigation: that which employs a statistical rationale. The utility of such a rationale and the present availability of appropriate and quite workable procedures (e.g., Jöreskog's [1970] ACOVS procedure) make it difficult to justify omitting a description of them, if only an intuitive one, from an introductory course.
The conceptual level and pacing of the book are appropriate for a student who has completed a semester of undergraduate statistics and a year of elementary college algebra and trigonometry and who has little inclination to follow proofs of theorems. At a few points the author uses calculus to demonstrate the derivation of optimization methods, but these can be omitted without a loss of continuity. Matrix algebra concepts are introduced as needed without a chapter being devoted to them, with the only deficit being that the coverage of determinants is insufficient. Equations are accurate, and appropriate numerical illustrations are provided. No problem sets are provided, although a large correlation matrix is given from which data can be taken for practice computation. In summary, the book is a useful and accurate elementary introduction to factor analysis, provided it is supplemented by material on statistical factor analysis and, as just noted, determinants. For a student seeking a thorough understanding of the derivation of factor analysis and its methods, other textbooks present the same material more concisely and in greater detail.
Bruce Bloxom
VANDERBILT UNIVERSITY
REFERENCE

Jöreskog, Karl G. A general method for analysis of covariance structure. Biometrika, 1970, 57, 239-251.

John Sonquist, Elizabeth Baker and James Morgan. Searching for Structure (Alias--AID-III). Ann Arbor, Michigan: Institute for Social Research, 1971. Pp. vi + 287. $5.00

Enthusiastic users of AID II, the Automatic Interaction Detector program, will welcome this newer version of the program and the substantially more detailed manual. Previous experience with the program has obviously led the authors to be much more
cautious in recommending its use. They advise that data sets must have at least a thousand cases, that the dependent variable may not have extreme cases or severe bimodalities, that the predictors must be classifications in a single dimension, and that a theory must be applied in the selection of predictors. The authors state that the principle of the program is to simulate "the procedures of a good researcher in searching for the predictors that increase his power to account for the variance of the dependent variable," the focus being on "power in reducing error, i.e., on importance rather than on significance." Although the previous manual and other publications state that the program can and should be used to detect interactions in the ANOVA sense, there is little mention of this in the current manual. A previous validation study by the senior author [Sonquist, 1970] seemed to show clearly that AID was not at all useful for this purpose, although this was not the author's conclusion. If AID is not good for this, it is really not clear what it is good for. The authors say that it is comparable to "the activity of a researcher investigating a body of data with a basic theory about what variables are important." They contrast it with analysis of variance, which they claim "insists that effects, main or interaction, are to be measured over the whole sample" and thus "assumes what is often not true." They seem to suggest that there are insurmountable problems connected with nonorthogonal ANOVA and that the "basic additivity of the variances does not hold anyway with real data." They claim, on the other hand, that the procedures involved in AID are "simple, robust and easy to understand." They certainly are simple and easy to understand, but how could they possibly be robust, given the limitations suggested in the first paragraph? It is also stated that "a similar run on another data set will probably produce something similar, at least for the first few steps"; can this be considered robustness?

AID seems to be a method in search of an application. What it does can be simply described: starting with a dependent variable and a number of categorical predictors on an ordered scale, it successively splits the sample along one category so that the sum of squares between groups after each split (in the one-way ANOVA sense) is as large as possible. This leads to a tree diagram determined by the order of the various splits. The structure of the data is that of a multifactor nonorthogonal ANOVA, and ANOVA could obviously (and this reviewer believes properly) be applied. If there were equal numbers of observations in the cells, the presence or absence of interactions could easily be determined by examining differences in cell means or in averages of cell means. The AID algorithm, on the other hand, does not seem to reduce to comparisons of cell means even for this simple case. ANOVA is, of course, considerably more complicated in the nonorthogonal case, but within sampling error the presence or absence of interactions still comes down to the question of whether certain contrasts in cell means are different from zero or not. If the sample is so large that the means are essentially known, the question of interaction is answered completely by looking at the magnitude of suitable contrasts in the means. Since the authors are concerned with importance rather than with statistical significance, they need only specify how large an effect they will consider important. Sonquist's simulation showed that AID could not be relied upon to correctly detect the presence or absence of interactions even for the case of a noninteractive model with no error present. Why should one expect more of it when error is present?
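The splitting criterion just described can be sketched in a few lines (my own reconstruction of one AID-style step, not the authors' program): among all binary splits of an ordered categorical predictor, choose the cut that maximizes the between-group sum of squares in the one-way ANOVA sense.

```python
# Sketch (mine, not from the manual): one AID-style binary split.
# Cases with category <= cut go left, the rest go right; the cut is
# chosen to maximize the between-group sum of squares.

def best_split(categories, y):
    """categories: ordered category label per case; y: dependent variable.
    Returns (cut, between_ss)."""
    levels = sorted(set(categories))
    n = len(y)
    grand = sum(y) / n
    best_cut, best_ss = None, -1.0
    for cut in levels[:-1]:
        left = [yi for c, yi in zip(categories, y) if c <= cut]
        right = [yi for c, yi in zip(categories, y) if c > cut]
        ss = (len(left) * (sum(left) / len(left) - grand) ** 2
              + len(right) * (sum(right) / len(right) - grand) ** 2)
        if ss > best_ss:
            best_cut, best_ss = cut, ss
    return best_cut, best_ss

cats = [1, 1, 2, 2, 3, 3, 4, 4]
y = [1.0, 1.2, 1.1, 0.9, 5.0, 5.2, 5.1, 4.9]
print(best_split(cats, y))  # cut at 2, splitting {1, 2} from {3, 4}
```

The full program repeats this greedy step on the resulting subgroups over all predictors, which is precisely why, as argued above, it need not reduce to comparisons of cell means.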
For those who do have some use for AID, there are several improvements which will be of interest. The user has been provided more control over the algorithm. He can prespecify a tree structure to be used as a starting point, and he can specify a priority of interest in the independent variables, forcing certain ones to be used first. An extensive recoding and variable generation facility provides both logical and arithmetic operations. This also enables the user to control the selection of observations for use in the AID run. A covariance facility has been added, but the justification for its use is not clear. The authors note that the old program would not detect an interaction if there were no main effects in
the factors involved. A new "lookahead" feature has been introduced, performing a split on a factor which would ordinarily not be considered, to see if this leads to offsetting splits in opposite directions. They do not recommend use of this option since it is time-consuming, and they say that they "have been unable to find a real world example of this." The authors comment that the number of options provided by the program may seem overwhelming, and indeed they do, with about 70 pages devoted to program instructions. In addition, familiarity with the previous version of AID is assumed in the new manual. Learning to use AID-III seems to involve a substantial investment of time and effort, given its questionable value in research.

L. L. Thurstone Psychometric Laboratory
Elliot M. Cramer
REFERENCE

Sonquist, J. Multivariate model building: The validation of a search strategy. Ann Arbor, Mich.: Institute for Social Research, 1970.

Frank Andrews, James Morgan, and John Sonquist. Multiple Classification Analysis: a report on a computer program for multiple regression using categorical predictors. Ann Arbor, Michigan: Institute for Social Research, 1969. Pp. 211. $5.00

This report is basically a manual for a computer program. The phrase multiple classification analysis (MCA) will be unfamiliar to many, but as noted in the monograph, it is analysis of variance for a main-effects model with unequal numbers of observations in cells. Alternatively it may be thought of as regression analysis with suitable dummy variables. The authors are social scientists, and the problems they use the program for are clearly survey problems. They are not concerned with ANOVA tables and tests of significance, although some guidance is offered on how to obtain such tests. One concerned with such problems will be far better served by the more recently written computer programs for nonorthogonal analysis of variance. What this program has to offer are basically estimates of parameters (regression coefficients) and sums of squares accounted for by additive ANOVA models, along with considerable flexibility in data input. The output, though clear, is unattractive in format, with virtually all output in "exponential" type notation. I would judge that the program has one chief virtue which may be important to those dealing with surveys: it can handle very large problems, probably as large as one is likely to encounter. The program itself is rather dated, using an iterative scheme apparently invented by the authors of an earlier version, possibly one written as early as 1959. Apparently nothing formal is known about its convergence properties. Contemporary programs would probably use orthogonalization methods in preference.
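The equivalence noted above, MCA as regression with suitable dummy variables, can be illustrated in miniature. This is a toy sketch under invented data, not the program under review: two two-level factors with unequal cell counts, a main-effects design matrix, and ordinary least squares via the normal equations (the manual's own iterative scheme is not reproduced here).

```python
# Toy illustration (hypothetical data): MCA's main-effects model as
# dummy-variable regression, fitted by solving the normal equations.

def solve(A, b):
    """Gaussian elimination with partial pivoting for Ax = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Invented survey-style data: factor A, factor B, unequal cell sizes.
rows = [('a1', 'b1', 2.0), ('a1', 'b1', 2.2), ('a1', 'b2', 4.1),
        ('a2', 'b1', 3.0), ('a2', 'b2', 5.0), ('a2', 'b2', 5.2)]

# Design matrix: intercept, dummy for A = a2, dummy for B = b2.
X = [[1.0, float(a == 'a2'), float(b == 'b2')] for a, b, _ in rows]
y = [v for _, _, v in rows]

# Normal equations X'X beta = X'y.
XtX = [[sum(X[i][p] * X[i][q] for i in range(len(X))) for q in range(3)]
       for p in range(3)]
Xty = [sum(X[i][p] * y[i] for i in range(len(X))) for p in range(3)]
beta = solve(XtX, Xty)
print([round(v, 2) for v in beta])
```

With unequal cell counts the coefficients are not simple differences of raw cell means, which is why the nonorthogonal case requires a genuine least-squares solution rather than mean comparisons.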
Some discussion is provided indicating the advantages of MCA and its relation to ANOVA and regression analysis. The authors are properly concerned with the limitations of the additive model and offer some suggestions for how to explore for interactions, but the program is not really suitable for this purpose, nor is it intended for it. Their suggestion that another program written by Morgan, AID, be used for this purpose is certainly ill-advised. The defects of this program were noted in a previous review [Cramer, 1971]. The usefulness of this monograph will probably be limited to those requiring the special features of the computer program.

L. L. Thurstone Psychometric Laboratory

Elliot M. Cramer

REFERENCE

Cramer, E. M. Review of Multivariate Model Building: The Validation of a Search Strategy, by John A. Sonquist. Psychometrika, 1971, 36, 440-442.
John A. Sonquist and James N. Morgan. The Detection of Interaction Effects: a report on a computer program for the selection of optimal combinations of explanatory variables. Ann Arbor, Michigan: Institute for Social Research, 1970. Pp. xi + 296. $5.00

The title of this monograph, originally issued in 1964, has the flavor of analysis of variance and seems to suggest that it deals with the identification of interactions in the ANOVA sense. This is confirmed by statements within the monograph, and the computer program discussed, the Automatic Interaction Detector (AID), is apparently widely used for this purpose. A validation undertaken by Sonquist was reported in this same monograph series and reviewed in this journal [Cramer, 1971]. This validation indicated that AID was incapable of reliably detecting main effects in an errorless situation, erroneously indicating the presence of interactions. This is not surprising given the way the procedure operates. Given a dependent variable and a number of predictor variables, it optimally divides the sample into two groups using one of the predictors so that the sum of squares between groups is a maximum. This will clearly be much affected by the numbers of observations having various characteristics, even in the errorless situation. It is for this reason that the method fails: the presence or absence of main effects or interactions is a function of true group means and cannot be affected by the numbers of observations in groups in an errorless situation. The program described here is obsolete, having been superseded by a new program, AID-III, which appears to have substantially more flexibility.

L. L. Thurstone Psychometric Laboratory
Elliot M. Cramer
REFERENCE

Cramer, E. M. Review of Multivariate Model Building: The Validation of a Search Strategy, by John A. Sonquist. Psychometrika, 1971, 36, 440-442.