European Journal of Information Systems (2012) 21, 1–5 © 2012 Operational Research Society Ltd. All rights reserved 0960-085X/12 www.palgrave-journals.com/ejis/
GUEST EDITORIAL
Some considerations for articles introducing new and/or novel quantitative methods to IS researchers
Wynne W. Chin1,2, Iris Junglas3 and José L. Roldán4
1C. T. Bauer College of Business, University of Houston, Houston, U.S.A.; 2Department of Service Systems Management and Engineering, Sogang University, Seoul, Korea; 3Accenture Institute for High Performance, Boston, U.S.A.; 4Departamento de Administración de Empresas y Marketing, Universidad de Sevilla, Sevilla, Spain
European Journal of Information Systems (2012) 21, 1–5. doi:10.1057/ejis.2011.46; published online 15 November 2011
In our call for this special issue on quantitative methods, we noted that researchers often must devote time to investigating, designing and developing sound methods or approaches to best conduct their research. Sometimes viewed as a 'sidetrack' topic, or as a byproduct of the main research, such learning is often excluded from the journal literature and restricted to less accessible works. The aim of this special issue was thus to provide an opportunity for disseminating recently developed and advanced techniques for conducting quantitatively oriented information systems (IS) research, ranging from exemplars to clearly written descriptions of how new, improved or advanced approaches to quantitative research have been carried out. We were open to tutorials that provide an overview of new techniques as they compare to existing ones, as well as to literature reviews, philosophical essays and analytical research papers.

We were impressed by the large number of submissions. Unfortunately, given our target schedule, we had to refer a number of otherwise excellent papers for resubmission through the regular review system. Quite a number of those papers attempted to provide a didactic overview of new methods that have not yet been employed in the IS discipline, but that were argued to be potentially beneficial to IS researchers. Again, our time frame precluded us from developing them into what most assuredly would have been excellent introductory articles for the IS academy. We did, however, provide a number of considerations and recommendations to those authors for improving their papers, highlighting the fact that such tutorial papers have objectives beyond those normally associated with empirical research. These points will be presented briefly later. First, though, we would like to convey our perspective on the role of quantitative methods: we view them as part of the toolkit for empirical research, one that can be applied under different philosophical views of science.
Unfortunately, while empirical investigation is at the heart of social science scholarship, quantitative methods are often portrayed as being at odds with qualitative methods. While the two are often treated as separate, incommensurable entities, other scholars have recognized that they can go hand in hand. For some, qualitative techniques are used first to gain a general understanding of a phenomenon and subsequently form a basis for generating theories, hypotheses, or simply more meaningful understanding through additional quantitative analysis. To put it in Kuhn's words (1961, p. 213): 'Large amounts of qualitative work have usually been prerequisite to fruitful quantification in the physical sciences'. But we believe the converse can also be true: quantitative methods can be used to explore and generate new understanding, opening the door for qualitative methods to dig deeper into a particular research area. Latent curve models (e.g., Bollen & Curran, 2006), for example, can
be used in an idiographic sense to explore individual-level differences, such as rates of learning or interpretations/perceptions of phenomena. Exploratory structural equation modeling (SEM) allows the data to speak more freely, without imposing the a priori models required by confirmatory factor analysis or the unrealistic zero cross-loading constraints of exploratory factor analysis (Asparouhov & Muthén, 2009). As Ian Dey (1993, p. 4) stated in his qualitative research text: 'In practice, it is difficult to draw as sharp a division between qualitative and quantitative methods as that which sometimes seems to exist between qualitative and quantitative researchers. In my view, these methods complement each other, and there is no reason to exclude quantitative methods, such as enumeration and statistical analysis, from the qualitative toolkit.'
Likewise, William W. Cooley (Fry et al, 1981, p. 145) made the following comment: 'Paul Lazarsfeld was also very important in helping me think about the qualitative–quantitative distinction. His first big job was to get me out of the qualitative vs quantitative trap. His work is living proof that it is not an either/or proposition; it is a matter of degree of emphasis. Every research study is inevitably both qualitative and quantitative to some degree. But even more important, he recognized the importance of the interaction between qualitative and quantitative "types" in working toward an improved understanding of educational processes.'
For the IS field in particular, Straub (1989, p. 148) stated: 'This call for renewed scientific rigor should not be interpreted in any way as a preference of quantitative to qualitative techniques, or confirmatory to exploratory research. In what has been termed the "active realist" view of science (Cook & Campbell, 1979), each has a place in uncovering the underlying meaning of phenomena, as many methodologists have pointed out (Glaser & Strauss, 1967; Lincoln & Guba, 1985).'
As a historical point of reference, it may be worth noting that the first formalized quantitative approaches date back to the 19th century. Scholars such as Wilhelm Wundt (1832–1920), a physician, psychologist, physiologist and philosopher; Hermann von Helmholtz (1821–1894), a physician and physicist; and Gustav Fechner (1801–1887), a psychologist, were some of the early pioneers who inspired a field later known as 'experimental psychology'. At that time, much as today, experimental psychology tried to quantify what were then deemed unquantifiable phenomena by developing repeatable experiments and using measurements to examine cognition and behaviors (von Helmholtz, 1862). The best example of what experimental psychology dealt with during that period is reflected in the Weber–Fechner Law. Together with Ernst Heinrich Weber (1795–1878), a physician, Fechner came to the conclusion
that the subjectively perceived strength of sensory impressions is not linearly related to the objectively measured strength of the physical stimulus. In other words, they made apparent what we, in the IS discipline, describe as the discrepancy between the conceptual and the measurement model. And like the Weber–Fechner Law, the IS discipline some 200 years later has to deal with the same challenge: what is 'objectively' measurable might not always accord with what is subjectively perceived. This discrepancy (i.e., the difference between a conceptual 'truth' and what we can quantitatively capture) makes our life as IS scholars challenging. Whether or not we consider that an objective reality exists, others would argue that all we can measure is the degree of belief in the truth of a knowledge claim. And, perhaps as with Heisenberg, the mere fact of trying to measure a phenomenon might introduce a bias that distorts the quantitative measurement. Nevertheless, the genesis of 'experimental psychology' as a field gave the impetus for quantitative analysis, often equated with statistical analysis. Yet the underlying philosophical views concerning the usage of such quantitative techniques can be equally varied.

We now come back to our comments to authors on what to consider when introducing a quantitative technique. In contrast to a book, an introductory article is limited in the number of pages available for explaining the value of the new method to the reader. In particular, care should be taken in the selection of data analysis examples. Not only should the context be one that is recognizable and relatable for a large percentage of IS readers; we argued that it should also not be biased toward conveying that the new method is always superior. Ideally, examples should be chosen to highlight not only the conditions under which the new technique performs better, but also those under which it performs worse than, or identically to, what we would label the 'default' method.
In terms of better performance, we want to know whether it is a matter of degree or a matter of kind, where a researcher may end up with an opposite and wrong conclusion. The point is that a new technique is not created in a vacuum; it is typically developed to overcome some of the weaknesses of a more commonly used 'default' procedure. In the case of SEM, as an example, the raison d'être for its creation was to account for measurement error, and possibly systematic bias affecting measures, factors that are assumed not to exist when using traditional path analysis. But along with the benefits that a new technique provides come costs. Usually, a new technique involves more complexity and possibly more computational effort. Therefore, we suggested that some discussion of the costs vs benefits of such new procedures be articulated. In particular, can guidelines be provided to inform IS researchers of the boundary conditions under which the 'default' method would yield the same conclusion, differing only with a minimal loss in accuracy?
We also noted that a few papers drew heavily from other excellent published papers. In fact, we felt the original articles did a better job, and the authors' attempts were not unlike a poor revival of a classic Broadway play. This led us to suggest that authors consider when it is better to simply refer readers to excellent didactic examples from a particular published article, rather than attempting to come up with a new, but inferior, example to highlight a particular aspect of the new technique.

In certain instances, the technique is not static but continues to evolve. As such, does the paper provide adequate discussion as to how and why the method has changed over time in terms of new options or capabilities? As an example, the Partial Least Squares (PLS) methodology continues to evolve with recent techniques such as multigroup invariance testing (Chin & Dibbern, 2010) and cross-validation procedures (Chin, 2010), in part due to a vibrant new community of statisticians and scientists who meet semi-annually (e.g., Vinzi et al, 2009).

We also suggested that another criterion of a successful introductory/tutorial paper should be the comprehensiveness of the bibliography and the provision of guidance for researchers interested in learning more about a particular technique. Beyond providing an extensive list of books and articles on a particular method, we suggested that enough information be provided to serve the typical objectives of a reader interested in further study. For example, which articles or books would you recommend as a good starting point for beginners vs intermediate or advanced material? Which references provide applied examples vs theoretical elaboration? Is the partitioning of the references consistent with the pedagogical outline of the paper? For example, when discussing how to use a technique in both experimental as well as survey-based settings, are the references identified accordingly?
Finally, in terms of the comprehensive nature of the reference list and how the material is synthesized into a cogent, cohesive introductory review, we present a list of criteria adapted from Schwarz et al's (2007) discussion of what constitutes a good review article. While not necessarily a requirement for all such papers, it may be helpful to ask the authors whether their pedagogical paper:

- Provides a reasonably comprehensive list of references.
- Provides a solid basis for researchers interested in applying the method to their own research, including the boundary conditions/contexts in which the technique may be most applicable.
- Highlights 'classics' that all should read, but also provides a partial foundation for researchers interested in expanding on the technique.
- Provides an understanding of the evolution of the method to its current state.
- Explains the assumptions and problem areas that people are working on to improve the method.
- Ideally is more than the sum of its parts: it creates meta-knowledge about the technique.
- Is reasonably thorough, accurately conveys the sense of the source references, does not ramble, and has a structure that allows the reader to readily grasp its content.
- Clearly elucidates the contribution of a particular researcher and puts that contribution in the proper context of the overall knowledge.
- Covers the technique thoroughly and gives insights into benefits, weaknesses and trade-offs vs alternative methods; these insights may not be at the abstract level of an overarching and enduring framework, but represent a cogent summary of the current standing of the method for applied researchers.
- Succeeds in simplifying key concepts and past applications.
- Identifies gaps and significant problems yet to be addressed by the technique, but does not go beyond that.

Using the principles put forth, we selected six papers from a large number of submissions to the special issue. While it was rather difficult to apply the entire set of principles, we feel confident that we have chosen a representative and most illustrative range of topics and methods. We end by providing a short summary of each of the papers.

One of the papers brings to life what McGrath (1981) called the 'three-horned dilemma' of scientific inquiry; it showcases that the three horns, comprising (a) the precision of measurement, manipulation and control over behavioral variables, (b) the realism of context, and (c) generalizability with regard to the population, are still applicable to the IS discipline today. Using the metaphor of a triangle with each horn marking one of three corners, McGrath argues that whenever one tries to maximize one of the three, the other two end up being reduced.
In their paper titled 'Toward the improved treatment of generalization of knowledge claims in IS research: drawing general conclusions from samples', Seddon and Scheepers tackle one of these problems: the generalizability of existing IS research studies. More specifically, their paper shows that IS researchers have often, perhaps too often, paid attention to precision and realism while mostly neglecting the question of generalizability. For example, only 53% of articles published in MIS Quarterly and Information Systems Research in the period 2007–2008 discuss generalizability as part of their study; only 24% discuss boundary conditions; and only 26% discuss the population studied. By developing a framework that can be applied by researchers and reviewers alike, the authors are not only able to provide practical relevance, but also to increase the level of rigor of IS scholarship.

In their article 'Conceptualizing models using multidimensional constructs: a review and guidelines for their
use', Polites, Roberts and Thatcher provide IS researchers with insights into how to appropriately conceptualize the measurement of multidimensional constructs. The use of such constructs fosters the advancement of IS research by allowing researchers to capture the complexity of concepts with comparatively simple abstractions. However, substantial challenges remain when theorizing about their form and implications. In this context, the authors aim to shed light on the language used to describe, and the choices necessary to conceive, multidimensional constructs. They provide an overview of the basic nomenclature used to describe multidimensional constructs by presenting a single set of terms; they also review the usage of multidimensional constructs in the existing IS literature, along with guidelines on how to theoretically model them. Finally, the authors provide a foundation for future empirical investigations that incorporate the proper use of multidimensional constructs.

New forms of media have entered our daily lives. Emails, blogs and tweets are an essential part of our communication patterns and reflect what we, as human beings, believe in and do. For us as IS scholars, these new forms constitute new sources of empirical data, waiting to be studied and analyzed. In their paper 'Quantitative approaches to content analysis: identifying conceptual drift across publication outlets', the authors Indulska, Hovorka and Recker showcase two analysis techniques that are able to overcome human bias in the coding and interpretation of unstructured textual data. Qualitative data are analyzed with the help of 'latent semantic analysis' and 'data mining' by analyzing, as well as visualizing, how concepts (i.e., semantic units) emerge, diverge or converge, and even migrate.
More specifically, latent semantic analysis (LSA) uncovers relationships within a body of text by calculating the Euclidean distance between meanings and determining whether concepts are semantically correlated or disjoint. Data mining, on the other hand, is able to pinpoint words that often occur together by identifying words in their immediate vicinity (say, within the same text block). While both approaches are quite similar in their objective of making semantic relationships transparent, they are also quite distinct in their approaches; the authors show, in a tutorial-style study, that using the two in a complementary fashion can be beneficial.

The paper entitled 'Latent semantic analysis: five methodological recommendations', authored by Evangelopoulos, Zhang and Prybutok, examines several methodological issues that arise in the context of LSA, an emerging quantitative method for the analysis of textual data. LSA describes semantic content as a set of vectors and provides the opportunity to glean textual insights that are not predicated upon a set of a priori assumptions. Although LSA has received little attention in the mainstream IS literature so far (aside from information retrieval), the authors identify several areas where the application of LSA is of great interest to IS research, including the review of quantitative literature and the analysis of textual data in computer-mediated
communication, among others. The authors also put forward five methodological recommendations that need to be addressed by IS researchers when applying LSA in order to achieve precise results. These guidelines are illustrated with the help of four short studies involving the analysis of abstracts of papers published in the European Journal of Information Systems from 1991 through 2008.

The article 'Designing IS service strategy: an information acceleration approach', by Richard, Coltman and Keating, proposes a new quantitative methodology that combines the discrete choice experiment (DCE) technique with the information acceleration (IA) approach. A DCE is a choice-based conjoint technique that tests the explanatory power of underlying decision models by systematically varying the characteristics (attributes) of interest according to statistical design theory. IA, on the other hand, is a technique that supports more accurate prediction of future choice situations through the construction of scenarios. The conjunction of the two methods provides both explanation and prediction, which are important tenets of IS research. Combining DCE and IA also enables IS scholars to better control for within-subject variances (addressing heterogeneous preferences) as well as between-subject variances (addressing heterogeneous errors arising from differing levels of uncertainty). A combined approach, as the authors show, provides greater realism in quantitative modeling by minimizing the impact of uncertainty. As an illustrative empirical application of how measurement can be improved, the study showcases a common IS problem, that is, assisting managers to better understand how tangible and intangible IS resources influence the development of customized IS service strategies. The example also shows that a combined DCE and IA approach is able to tackle response variability by experimentally addressing within- and between-subject differences in uncertainty.
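To make the vector-space intuition behind the two LSA-oriented papers above more concrete, the following is a minimal sketch of our own (a hypothetical toy example, not taken from either paper): documents are represented as term-count vectors over a shared vocabulary, and semantic relatedness is scored by cosine similarity. Full LSA would additionally apply a truncated singular value decomposition to this term-document matrix, a step omitted here for brevity.

```python
from collections import Counter
import math

def term_doc_vectors(docs):
    """Build raw term-count vectors over a shared, sorted vocabulary
    (i.e., the rows/columns of a simple term-document matrix)."""
    tokens = [doc.lower().split() for doc in docs]
    vocab = sorted({w for t in tokens for w in t})
    return [[Counter(t)[w] for w in vocab] for t in tokens], vocab

def cosine(u, v):
    """Cosine similarity between two vectors: 1.0 = same direction, 0.0 = orthogonal."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

docs = [
    "information systems research methods",
    "research methods for information systems",
    "latent semantic analysis of text",
]
vectors, vocab = term_doc_vectors(docs)
print(round(cosine(vectors[0], vectors[1]), 3))  # high: near-identical vocabulary
print(round(cosine(vectors[0], vectors[2]), 3))  # 0.0: no shared terms
```

Here the first two documents share most of their vocabulary and score high, while the third shares none and scores zero; the SVD step that LSA adds would further allow documents using different but semantically related vocabularies to score as similar.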
The article 'Analysing quadratic effects of formative constructs by means of variance-based structural equation modeling', by Henseler, Fassott, Dijkstra and Wilson, jointly addresses two relevant issues in IS research: the use of formative constructs and the study of non-linear relationships, particularly quadratic effects. The authors observe that, since Chin et al's (1996) paper, only limited knowledge has accumulated on how to model quadratic effects, particularly non-linear effects of formative constructs. The paper raises the question of how best to analyze the quadratic effects of formative constructs by means of variance-based SEM (i.e., PLS). Presenting six PLS-based approaches for estimating such effects, originally suggested in the context of interaction effects, the authors empirically compare them on the basis of a Monte Carlo simulation that includes the assessment of point estimate accuracy, statistical power and predictive capability. Highlighting the performance of a hybrid and a two-stage approach, the authors are able to provide a framework for analyzing quadratic effects in PLS path models.
We hope you enjoy this special issue!

The Editors: Wynne W. Chin, Iris Junglas, and José L. Roldán
Acknowledgements
This research was partially supported by the World Class University program funded by the Ministry of Education,
Science and Technology through the National Research Foundation of Korea (R31-20002) and by the Sogang University Research Grant of 2011. In addition, this research was also partially supported by the Junta de Andalucía (Consejería de Economía, Innovación y Ciencia), Spain (Proyecto de investigación de excelencia SEJ-6081).
References
ASPAROUHOV T and MUTHÉN B (2009) Exploratory structural equation modeling. Structural Equation Modeling 16, 397–438.
BOLLEN KA and CURRAN PJ (2006) Latent Curve Models: A Structural Equation Perspective. Wiley-Interscience, Hoboken, NJ.
CHIN WW (2010) Bootstrap cross-validation indices for PLS path model assessment. In Handbook of Partial Least Squares: Concepts, Methods and Applications (VINZI VE, CHIN WW, HENSELER J and WANG H, Eds), pp 83–97, Springer-Verlag, Berlin.
CHIN WW and DIBBERN J (2010) A permutation based procedure for multi-group PLS analysis: results of tests of differences on simulated data and a cross cultural analysis of the sourcing of information system services between Germany and the USA. In Handbook of Partial Least Squares: Concepts, Methods and Applications (VINZI VE, CHIN WW, HENSELER J and WANG H, Eds), pp 171–193, Springer-Verlag, Berlin.
CHIN WW, MARCOLIN BL and NEWSTED PR (1996) A partial least squares latent variable modeling approach for measuring interaction effects: results from a Monte Carlo simulation study and voice mail emotion/adoption study. In Proceedings of the Seventeenth International Conference on Information Systems (DEGROSS JI, JARVENPAA S and SRINIVASAN A, Eds), pp 21–41, Association for Information Systems, Cleveland, OH.
DEY I (1993) Qualitative Data Analysis: A User-friendly Guide for Social Scientists. Routledge, New York.
FRY G, CHANTAVANICH S and CHANTAVANICH A (1981) Merging quantitative and qualitative research techniques: toward a new research paradigm. Anthropology & Education Quarterly 12, 145–158.
KUHN TS (1961) The function of measurement in modern physical science. Isis 52(2), 161–193.
MCGRATH JE (1981) Dilemmatics: the study of research choices and dilemmas. American Behavioral Scientist 25(2), 154–179.
SCHWARZ A, MEHTA M, JOHNSON N and CHIN WW (2007) Understanding frameworks and reviews: a commentary to assist us in moving our field forward by analyzing our past. The Data Base for Advances in Information Systems 38(3), 29–50.
STRAUB D (1989) Validating instruments in MIS research. MIS Quarterly 13(2), 147–169.
VINZI V, TENENHAUS M and GUAN R (2009) Proceedings of the 6th International Conference on Partial Least Squares and Related Methods, 4–7 September, Publishing House of Electronics Industry, Beijing, China.
VON HELMHOLTZ H (1862/1995) On the relation of natural science to science in general. In Science and Culture: Popular and Philosophical Essays (CAHAN D, Ed), pp 76–95, The University of Chicago Press, Chicago.