Eur.J. Inf. Systs. (1995)4, 143-158
© 1995 Operational Research Society Ltd. All rights reserved 0960-085X/95 $ 1100
Factors affecting perceptions of CASE effectiveness

J. IIVARI

Department of Computer Science and Information Systems, University of Jyväskylä, P.O. Box 35, SF-40351 Jyväskylä, Finland
CASE (Computer Aided Software/Systems Engineering) tools are claimed to increase the productivity of systems development and the quality of developed systems. Existing empirical research on CASE effectiveness is entirely based on subjective, perceptual data. In order to interpret the existing evidence, it is useful to understand factors affecting these perceptions. This paper studies the impact of demographic variables such as education and experience, CASE adoption process variables such as training and participation, and CASE product variables such as perceived trialability, perceived complexity, perceived demonstrability and perceived compatibility on CASE effectiveness perceptions such as relative advantage, quality and productivity of systems development and work unit effects. Results based on a survey of 105 individual CASE users show that these effectiveness criteria are highly intercorrelated and that CASE experience, participation in CASE selection and implementation, perceived quality and quantity of training, and especially perceived complexity affect CASE effectiveness perceptions.
Introduction

CASE (Computer Aided Software/Systems Engineering) tools are claimed to increase information systems (IS) and software (SW) development effectiveness in terms of the productivity of systems development and the quality of the developed systems. The results of previous research (Norman & Nunamaker, 1989; Wijers & van Dort, 1990; Stobart et al., 1991; Aaen et al., 1992; Kusters & Wijers, 1993) are to some extent paradoxical, however. While most of them report a positive rather than negative impact on the quality of developed systems, and to a lesser extent on the productivity of the development process, the adoption of CASE technology has been much slower than one would expect. Kemerer (1992), for example, reports that one year after introduction, 70% of CASE tools are never used, 25% are used by only one group, and 5% are widely used but not to capacity. Most of the empirical studies on CASE effectiveness are based on subjective, perceptual data (Coupe, 1994). Coupe criticises methods of measuring the impact of CASE tools. He notes that existing studies have relied on developers' perceptions, omitting system users' perceptions and objective software metrics. He also notes that a retrospective design has been adopted in most, if not all, previous work on CASE effectiveness. He outlines two alternative research designs: longitudinal research, in which the same personnel develop systems without CASE and with CASE, and a cross-sectional approach, in which CASE users and non-CASE users are sampled simultaneously. He also expresses his preference for longitudinal research, recognising that it is more difficult to implement than the cross-sectional approach and obviously more so than the retrospective approach. It is beyond the scope of this paper to discuss the benefits and drawbacks of objective and subjective data and alternative research designs in any detail. Moore and Benbasat (1991) contend, however, that subjective perceptions provide a sounder basis for theory development than more objective data. In view of the relative ease of the retrospective design applying perceptual measurement, one can anticipate that this approach will dominate research on CASE effectiveness in the future as well. In this situation it is important to understand the factors affecting perceptual measures of CASE effectiveness. Based on retrospective data obtained from 105 respondents, this paper reports on factors affecting CASE effectiveness perceptions such as relative advantage (Rogers, 1983; Moore & Benbasat, 1991), quality and productivity of systems development (Aaen et al., 1992) and work unit impacts (Van de Ven & Ferry, 1980). The factors include demographics such as the education and experience of respondents, their participation in the CASE adoption process, the quality of training provided, and the perceived trialability, perceived complexity, perceived demonstrability and perceived compatibility of the CASE tools.
Earlier research on CASE effectiveness

Most of the earlier empirical studies indicate a positive rather than negative impact of CASE tools on IS/SW development effectiveness. Norman and Nunamaker (1989) reported that software engineers generally perceived their productivity as being improved with the use of CASE technology. Banker and Kauffman (1991) found an order-of-magnitude increase in software development productivity in their analysis of 20 projects which applied a CASE tool emphasising reusability. Aaen et al. (1992) found that the majority of respondents (about 72%) in their sample of 102 organizations perceived that the objectives of improved quality of systems, improved systems development procedures and increased standardisation were met to a significant degree (values 5-7 on a seven-point Likert scale). The majority of respondents (61%) considered that the objective of improved productivity was met only to a minor degree (values 4-5). Consistent with this, most of the respondents (68%) perceived only a slight or no decrease in project duration (values 3-4), and 43% disagreed (values 1-3) with the claim that CASE tools have reduced development costs (25% disagreed totally) while 28% agreed with the claim (values 5-7). Aaen et al. (1992) also found perceptions of considerable quality improvements from using CASE tools, in particular in documentation, understandability and general product quality. Finally, 47% of the respondents indicated good (values 5-7) profitability of CASE investments while 31% assessed it to be poor (values 1-3). Kusters and Wijers (1993) reported that CASE tools were assessed as reasonable (value 3 on a 5-point scale) for objectives such as quality of documentation, consistency of developed systems products, standardisation, adaptability and system quality, but as of only slight usefulness (value 2) for the remaining objectives.
In view of these predominantly positive findings, one could expect CASE adoption to be faster than observed in practice. Aaen et al. (1992) reported that only a small minority (≤20%) was close to routine use of CASE tools (values 6-7), even though 24% of the organizations had used them for more than three years. About half the respondents had used the tools for one or two projects, and in the majority (≥56%) of organizations 25% or fewer of the analysts used CASE tools. Kusters and Wijers (1993) found in their analysis of 262 Dutch organizations that, in 20% of the organizations, more than 75% of analysts used CASE tools on a regular basis, and in 18% the rate was between 51% and 75%. Still, in 37% of the organizations less than 26% of the analysts used CASE on a regular basis. Selemat et al. (1994) reported that 25% of surveyed Malaysian organizations abandoned CASE tools after purchasing them.
The discrepancy between the findings concerning CASE impacts and CASE adoption is paradoxical. Assuming that CASE adoption decisions are based on 'rational' arguments concerning the impact of CASE on systems quality and systems development productivity, one could expect the adoption rate in organizations to be faster. One can imagine a host of explanations for this discrepancy. One explanation is that CASE tools are relatively expensive and associated training costs may exceed the original price of the tool (Huff, 1992). Sørensen (1993) found tool introduction support (training) to be influential. Wynekoop et al. (1992) observed that expectations were relevant: the acceptance and level of use of a CASE tool were high when an individual perceived the CASE tool to meet or exceed prior expectations concerning its relative advantage. Another possible explanation might be that CASE adoption decisions are dominated by 'non-rational' arguments (LeQuesne, 1988; Norman et al., 1989; Orlikowski, 1989). A further explanation might be that perceptual measures of CASE impacts have not necessarily been very reliable. With a few exceptions (e.g. Rai & Howard, 1994; Ramiller, 1994), previous CASE research has paid scant attention to the reliability and validity of the perceptual measures used. This study seeks to shed light on perceptions of CASE effectiveness. Compared with previous research, it pays special attention to the reliability and validity of the measures used. Coupe (1994) recognises that IS users' perceptions of developed systems are probably influenced by their education, age, experience, participation in development, implementation strategy and expectations. The present paper analyses largely the same variables and their influence on CASE users' perceptions.
It also studies the impact of factors such as perceived trialability, perceived complexity, perceived demonstrability and perceived compatibility, borrowed from the innovation diffusion literature (Tornatzky & Klein, 1982; Rogers, 1983).
Research model

Theoretically, this paper is based on the part of innovation adoption and diffusion theory that identifies a number of characteristics of an innovation that may explain the adoption and diffusion of innovations. Tornatzky and Klein (1982), in their meta-analysis, identify the ten most frequently considered characteristics: compatibility, relative advantage, complexity, cost, communicability, divisibility, profitability, social approval, trialability and observability. Rogers (1983), in his classic book, proposes a more succinct set of five characteristics which is 'as mutually exclusive and universally relevant as possible' (p. 211): trialability,
observability, complexity, compatibility and relative advantage.

Trialability: the degree to which an innovation may be experimented with on a limited basis before adoption.
Observability: the degree to which the results of an innovation are visible.
Complexity: the degree to which an innovation is perceived as difficult to understand and use.
Compatibility: the degree to which an innovation is perceived as being consistent with existing values, past experiences and needs of potential adopters.
Relative advantage: the degree to which an innovation is perceived as better than its precursor.

This paper adopts the list of Rogers (1983) with a slight modification. Moore and Benbasat (1991) propose the name 'result demonstrability' to tap the core idea of observability, because observability has a connotation of visibility of an innovation. This paper prefers the shorter term 'demonstrability' for this dimension. Rogers (1983) notes that each of the five characteristics 'is somewhat empirically interrelated with the other four, but they are conceptually distinct' (p. 211). Despite this observation, to the present writer's knowledge previous research on innovation diffusion has not been interested in the interrelationships between the five characteristics, but has traditionally assumed them to be independent. Tornatzky and Klein (1982) note that the analysis of the interdependencies between the innovation characteristics is one of the neglected areas. In accordance with Rogers (1983), this paper emphasises the conceptual distinction between the five characteristics, but pays special attention to their interrelationships, as depicted in Figure 1.
As Figure 1 illustrates, perceived trialability of an innovation is assumed to influence its perceived complexity, and perceived complexity to affect perceived demonstrability. Perceived demonstrability is presumed to have an effect on perceived compatibility, and perceived compatibility to influence relative advantage. Perceived relative advantage as the dependent variable is extended to include the quality and productivity of IS/SW development (Aaen et al., 1992) and work unit impact (Van de Ven & Ferry, 1980). The model also includes a number of demographic variables (education and experience) and CASE adoption process variables (participation and the quantity and quality of training) as variables that are assumed to affect the above perceptions. In view of the minimal previous research on the interdependencies between the innovation characteristics, the model of Figure 1 is tentative. The interrelationships among the five innovation characteristics are based on the following reasoning. [The model of Figure 1 is partly based on the fact that the focus of the paper lies in the perceptions of the five innovation characteristics. If complexity, for example, were measured in 'objective' terms, it would without doubt be an independent factor on the left in Figure 1.] Perceived trialability has the narrowest focus on an innovation. Perceived complexity is broader, requiring understanding of the underlying idea of an innovation and how to use it. Trialability can be assumed to facilitate especially the ease-of-use aspect of the innovation, as evidenced by the significance of prototypes in user interface design (Gould et al., 1991). The order between perceived complexity and perceived demonstrability is based on the reasoning that better under-
Figure 1 The conceptual model. [Figure: Education, CASE experience, Participation and Training on the left; a chain Perceived trialability → Perceived complexity → Perceived demonstrability → Perceived compatibility, leading to Perceived relative advantage, Perceived quality and productivity effects, and Perceived work unit effects.]
standing of an innovation and how to use it (lower perceived complexity) helps to anticipate the results of the innovation (higher perceived demonstrability), whereas higher perceived demonstrability does not necessarily help one to understand the innovation better. The results of an innovation may be demonstrated using a black-box view of the innovation, for example. Perceived demonstrability is assumed to facilitate the assessment of compatibility. The latter concerns not only the innovation itself: an understanding of the context in which the innovation is to be used, and an assessment of what the innovation means in that context, are also required. Perceived result demonstrability is assumed to help the assessment of this meaning of an innovation in the adopting organization. Because compatibility describes the fit between the innovation and the adopting unit rather than the innovation as such, one cannot expect the association between the three characteristics of an innovation discussed above and perceived compatibility to be as strong as that between trialability, complexity and demonstrability. Finally, the relationship between (perceived) compatibility and (perceived) relative advantage emphasises that relative advantage is contextual, dependent on the organizational context in which the innovation is used and the fit between the innovation and the adopting organization. Because relative advantage is essentially dependent on the
earlier practice before the innovation came into being, one cannot expect the relationship between relative advantage and the four characteristics of an innovation discussed above to be very strong, if an innovation is understood as an idea or product such as a CASE tool rather than as a change in the adopting organization that takes the preceding state into consideration.
Research design

The sample
The survey was carried out during the spring of 1993. Based on customer information of CASE tool vendors operating in Finland and previous Finnish studies on CASE adoption, a list of organizations was identified. After telephone contacts, 52 organizations were identified as co-operative. The questionnaires were mailed to the contact persons who distributed them within the organization. The contact persons were mainly IS managers. The respondents returned the completed questionnaires directly. Of the 322 questionnaires mailed, 109 were returned from 35 organizations. Four of the returned questionnaires were rejected. As a result, the final response rate was 32.6% at the level of questionnaires and 67.3% at the level of organizations. The profiles of the 35 organizations and 105 respondents are depicted in Tables 1 and 2. Table 1 shows that 43% of the organizations in the sample were software
Table 1 Profile of the participant organizations (n = 35).

Industry                                    Number   Percentage
Public administration                          7        20%
Manufacturing                                  5        14%
Software house                                15        43%
Insurance                                      4        11%
Finance                                        3         9%
Others                                         1         3%

Number of employees in organization
1-10                                           2         6%
26-50                                          2         6%
51-100                                         3         9%
101-500                                       11        31%
>500                                          17        49%

Number of IS employees in the organization
1-10                                           6        17%
11-25                                          5        14%
26-50                                          6        17%
51-100                                         4        11%
101-500                                       10        29%
>500                                           4        11%

Average size of SW projects
1-2 persons                                    5        14%
3-5 persons                                   20        57%
6-10 persons                                   5        14%
11-20 persons                                  1         3%
>20 persons                                    2         6%
No answer                                      2         6%
Table 1 (continued)

CASE experience                             Number   Percentage
<1 year                                        1         3%
1-2 years                                     10        29%
3-5 years                                     21        60%
>5 years                                       2         6%
No answer                                      1         3%

CASE adoption rate as the percentage of developers using CASE
0%                                             3         9%
1-25%                                         20        57%
26-50%                                         5        14%
51-75%                                         4        11%
>75%                                           3         9%

CASE tools used
ADW                                           13        37%
Oracle                                        13        37%
IEW                                           11        31%
Prosa                                          5        14%
Excelerator                                    4        11%
Teamwork                                       3         9%
Deft                                           2         6%
POSE                                           2         6%
Design aid                                     2         6%
Line II                                        2         6%
IEF                                            1         3%
Microtool                                      1         3%
CASE 2000                                      1         3%
Table 2 Profile of the respondents (n = 105).

Occupation                                  Number   Percentage
General manager                                9         9%
Project leader                                38        36%
Systems analyst                               51        49%
Programmer                                     1         1%
Other                                          5         5%
No answer                                      1         1%

Education
High school                                    3         3%
Vocational school                              3         3%
College                                       32        30%
University degree                             65        62%
Other                                          2         2%
houses. The organizations were relatively large by Finnish standards, but the majority of the average IS/SW development projects were relatively small, comprising only 3-5 persons. Not surprisingly, the CASE experience of the organizations had increased compared with the earlier study in Finland (Aaen et al., 1992), but the adoption rate was still low: in 57% of the organizations, 25% or fewer of the developers used CASE tools. Among the different CASE tools, ADW, Oracle and IEW were the clear market leaders in the present sample. As Table 1 indicates, the number of CASE tools used in the 35 organizations was 60, i.e. on average 1.7 CASE tools per organization. The aim of this paper is to focus on perceptions concerning CASE technology rather than individual CASE tools. An organization may utilise several CASE tools; together they provide a certain functionality (measured as a factor of compatibility), interfaces (measured as interface compatibility) and methodology support (measured as method compatibility). The following statistical analysis is based entirely on the individual data (n = 105). As Table 2 shows, 85% of the individual respondents were either project leaders or systems analysts, and 62% had a university degree. Almost half of them (46%) had 3-5 years of CASE experience and exactly 50% had completed 1-2 projects with CASE, the average project length being eleven months.

Measurement
Referring to Downs and Mohr (1976), Moore and Benbasat (1991) claim that the innovation characteristics should be measured as perceptual attributes rather than as more objective primary attributes in order to establish a basis for a general theory. In accordance with this recommendation, this paper applies perceptual measures to measure innovation characteristics. The reliability of the measures is assessed using Cronbach's alpha values. There is no exact threshold for the required reliability, but
Table 2 (continued)

CASE experience in years                    Number   Percentage
<1 year                                       13        12%
1-2 years                                     40        38%
3-5 years                                     48        46%
>5 years                                       3         3%
No answer                                      1         1%

CASE experience as the number of projects
0                                             20        19%
1-2                                           53        50%
3-5                                           26        25%
>5                                             6         6%
Nunnally (1978) proposes the threshold 0.5-0.6 for exploratory research. In view of the minimal previous research on the interdependencies between the innovation characteristics, this paper applies the upper value of 0.6 suggested for exploratory research.

Education: Education was measured using a five-point scale: comprehensive school, high school, vocational school, college and university degree.

Experience: CASE experience was measured as years of personal experience with CASE and as the number of projects completed using CASE. The reliability of the two items was somewhat low (0.62) but acceptable.

Participation: The measure for participation was based on Doll and Torkzadeh (1989). Differing from their detailed decomposition of the systems development process into eight stages, the CASE adoption process was divided into two phases: selection and implementation. For each phase, the respondent's perception of his or her actual participation was elicited. The reliability of the two items measuring actual participation was 0.92.

Training: Training was interpreted to cover internal (in-house) training, vendor courses and self-study. It was measured using two items, the first asking about the adequacy of training and the second about its quality. The reliability of the two items was 0.84.

Perceived trialability: Rogers (1983) defines trialability as 'the degree to which an innovation may be experimented with on a limited basis' (p. 231). This clearly differs from whether the respondent actually experienced trial use of the innovation: an individual may experience the level of actual trial use as low, even though he may perceive the trialability of an innovation to be high.
The measure for perceived trialability was influenced by the operationalisation of Moore and Benbasat (1991), even though most of their items for trialability concern the innovation adoption process in the sense of the actual opportunities created for the trial use rather than trialability per se. In the present paper, trialability was measured using two
items asking the respondent's opinion about whether it is easy to become familiar with CASE through trial use, and whether it is easy to learn to use CASE by experimenting only. The reliability of the two-item scale was somewhat low (0.65) but acceptable.

Perceived complexity: Perceived complexity was modified from the short form of the corresponding measure proposed by Moore and Benbasat (1991). They interpret perceived complexity as 'perceived ease of use' in the sense of the Technology Acceptance Model (Davis, 1989), and use phrases like 'interaction with', 'easy to use' and 'learning to operate'. These phrases, which may lead the respondent to think narrowly of the quality of the user interface only, were modified to the form 'to apply/to use'. The idea of the phrase 'to apply' was to tap whether the underlying ideas, assumptions and limitations of CASE are easy to understand and apply. The reliability of the four-item measure for perceived complexity was 0.82.

Perceived demonstrability: Perceived demonstrability was measured using the short form of the corresponding measure proposed by Moore and Benbasat (1991). The reliability of the four items was 0.90.

Perceived compatibility: Rogers (1983) defines compatibility as the degree to which an innovation is perceived as being consistent with existing values, past experiences and the needs of potential adopters. This indicates that the concept is very broad (Tornatzky & Klein, 1982), allowing a number of interpretations, and there are several alternatives for measuring perceived compatibility. Moore and Benbasat (1991) proposed a measure that interprets compatibility in very individualistic terms (Ramiller, 1994). This interpretation seems more appropriate for innovations in which individual human beings are the crucial adopting units.
CASE tools, however, can be considered more as organizational innovations (Fichman, 1992), in which compatibility with the adopting organization is essential. Therefore the measure for compatibility proposed by Moore and Benbasat was not deemed appropriate in the present context. Recently, Ramiller (1994) proposed a measure for perceived compatibility that consists of five factors: the efficacy of the technology, knowledge and control, experience of work and sense of professionalism, management impact and response, and traditional support. His measure has the drawback that it is very complex, consisting of 32 items, and it was not available for our survey. In order to support the practical relevance of the concept of compatibility, the measure applied in this paper is adapted from previous CASE research (Aaen et al., 1992; Mosley, 1992). The respondents were asked to rate the tools using 23 items, each rated from 'very poor' to 'very good' on a five-point scale. This operationalisation emphasises
compatibility with the needs of the adopting organization, but compatibility with analysis and implementation methods (see Table 3) includes an aspect of past experience and culture. The 23 items of perceived compatibility were subjected to factor analysis using principal component analysis with varimax rotation. The results are shown in Table 3. They indicate seven discernible factors, which can be labelled: Use compatibility, Vendor compatibility, Functional compatibility, Interface compatibility, Project support compatibility, Method compatibility and Hardware compatibility. The reliabilities of the seven factors were 0.80, 0.78, 0.64, 0.83, 0.52, 0.63 and 0.42, respectively. Because the reliabilities of factors 5 (0.52) and 7 (0.42) were unacceptable, the four items included in these factors were dropped from the further analysis.

Perceived relative advantage: Perceived relative advantage was measured using the short-form instrument proposed by Moore and Benbasat (1991). The reliability of the five items was 0.84.

Perceived productivity and quality effects: Productivity and quality effects were measured using seven items adapted from Aaen et al. (1992). They were factor-analysed using principal component analysis and varimax rotation; the results are shown in Table 4. The analysis gave two factors, the first of which can be labelled 'Productivity effects' and the second 'Quality effects'. The factor structure is clear, except in the case of item 2 (effects on the functionality of new applications). Reliability analysis indicated that inclusion of item 2 in factor 1 would decrease the reliability of factor 1, which without item 2 was 0.83. Its inclusion in factor 2, in contrast, increased the reliability of factor 2, which was 0.80 after inclusion of item 2. Based on this analysis, item 2 was included in factor 2.
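The reliability reasoning above, comparing scale reliability with and without a borderline item, can be sketched with Cronbach's alpha. The response data below are simulated and purely illustrative; only the general procedure follows the paper.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the scale sum
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Simulated seven-point Likert responses: a shared "true score" plus noise,
# so the four items are intercorrelated (hypothetical data, not the survey's).
rng = np.random.default_rng(0)
base = rng.integers(1, 8, size=(105, 1))
scale = np.clip(base + rng.integers(-1, 2, size=(105, 4)), 1, 7)

alpha_all = cronbach_alpha(scale)        # reliability of the 4-item scale
alpha_wo = cronbach_alpha(scale[:, 1:])  # reliability with one item dropped
```

A borderline item can then be assigned to whichever factor's alpha it improves, mirroring the treatment of item 2 in Table 4.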
The items of productivity and quality effects obtained from previous CASE research were extended by four items describing work unit effectiveness (Van de Ven & Ferry, 1980). They concerned changes in the work unit's morale, achievement of goals, innovativeness and reputation for excellence as a result of CASE adoption. The items of quantity and quality of output in Van de Ven and Ferry (1980) were dropped as redundant given the above quality and productivity effects. The four items turned out to be intercorrelated, with a reliability of 0.85, and they are treated as one variable in the subsequent analysis.

Data analysis

The analysis of the model of Figure 1 is based on path analysis using least squares regression analysis to test the proposed model, in which perceived trialability, perceived complexity, perceived demonstrability, the factors of perceived compatibility and the indicators of CASE
Table 3 Factors of perceived compatibility.

Items (23): Tool documentation; Vendor support; Vendor reliability; Growth potential of the tool; Hardware requirements; Time to master the tool; Tool reliability; User-friendliness; Quality of diagrams; Supported development phases; Check capabilities; Central data dictionary; Reporting capabilities; Response time; Text editor quality; Adaptability of the tool; Multi-user capabilities; Frequency for new releases; Project management support; Interfaces to code-generators; DBMS (database) interfaces; Compatibility with earlier analysis and design methods; Compatibility with earlier implementation methods.

[Factor loadings F1-F7 not recoverable from the scanned table.]

Eigenvalues (percentage of variance; cumulative percentage): F1 5.24 (22.8; 22.8), F2 3.07 (13.4; 36.1), F3 1.74 (7.6; 43.7), F4 1.64 (7.1; 50.9), F5 1.50 (6.5; 57.4), F6 1.32 (5.7; 63.1), F7 1.22 (5.3; 68.4).
Table 4 Factors of perceived quality and productivity effects.

Items: CASE tool has greatly speeded up development of new applications; CASE tool has enhanced the functionality of the new applications; CASE tool has definitely made developers more productive; CASE tool has significantly reduced the costs of software development; CASE tool has improved the quality of software products; CASE tool has significantly reduced the costs of software maintenance; CASE tool has significantly improved the documentation of software products.
effectiveness, each in turn, are treated as dependent variables, and the preceding variables in Figure 1 as independent variables. Special care was taken to minimise violations of the assumptions underlying path analysis. Multiple regression analysis assumes: (1) interval or ratio scale measurement, (2) linearity, (3) homoscedasticity, i.e. the constancy of the residual terms across the values of the predictor variables, (4) independence of residuals, (5) normality of residuals and (6) no multicollinearity (Hair et al., 1992). Path analysis additionally requires: (7) a clearly defined causal system, (8) a high degree of measurement reliability, (9) one-way causation, (10) a structural
[Table 4 factor loadings F1/F2 not recoverable from the scanned table. Eigenvalues (percentage of variance; cumulative percentage): F1 3.81 (54.5; 54.5), F2 1.05 (15.0; 69.5).]
system of variables whereby a change in one variable is a linear function of changes in other variables and (11) uncorrelated residuals (Kerlinger and Pedhazur, 1973; Feldman, 1975; Billings & Wroten, 1978). Billings and Wroten (1978) consider that the assumption of equal intervals is not critical, concluding that carefully constructed measures employing reasonable numbers of values and containing multiple items will yield data with sufficient interval properties. If the five-value multi-item scales are accepted to approximate interval scales, education forms the only violation of the first assumption. Linearity of the relationships was tested visually using standardised residual and
partial regression plots. None of the variables was found to violate this assumption. Homoscedasticity was tested using the Levene test. With a few exceptions (7 of 98) the test did not indicate significant violations; all seven violations were significant at the 0.05 level. Independence of residuals was assessed using the Durbin-Watson statistic, which ranges from 0 to 4, the value 2 indicating no autocorrelation. The values varied between 1.34 and 2.23. Normality of residuals was assessed visually using normal probability plots and the modified Kolmogorov-Smirnov test (Lilliefors). No violations were detected. Multicollinearity was tested using tolerance values. The lowest tolerance value (0.396) far exceeded the cut-off value of 0.01 suggested by Hair et al. (1992). Figure 1 defines a clear causal system of one-way causation. Path analysis requires a higher degree of measurement reliability and validity than ordinary regression analysis because measurement problems are magnified in path analysis (Feldman, 1975; Billings & Wroten, 1978). As explained above, the 16 measures used in this paper are based on previously validated measures as far as possible. With four exceptions their reliabilities exceed 0.75. The relationships among the variables in the causal system are assumed to be linear and additive, excluding curvilinear and multiplicative relationships. Linearity was tested visually using scatter plots. Uncorrelated residuals mean that residuals of endogenous variables should not correlate among themselves or with other endogenous variables not occurring later in the causal system (James, 1980). Only one violation was found in the 144 intercorrelations between the (studentised) residuals (the correlation between the residuals of vendor compatibility and functional compatibility, r = 0.33, p < 0.001).
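The Durbin-Watson check described above can be computed directly from a regression's residuals. A minimal sketch; the residuals here are simulated, not the study's:

```python
import numpy as np

def durbin_watson(residuals: np.ndarray) -> float:
    """Durbin-Watson statistic on a sequence of regression residuals.
    Values near 2 indicate no first-order autocorrelation; values
    toward 0 (or 4) indicate positive (or negative) autocorrelation."""
    diff = np.diff(residuals)
    return float(np.sum(diff ** 2) / np.sum(residuals ** 2))

# Independent (simulated) residuals should yield a statistic close to 2.
rng = np.random.default_rng(1)
e = rng.standard_normal(1000)
d = durbin_watson(e)
```

Applied to each regression in the path analysis, values such as the reported range 1.34-2.23 would be read against the benchmark of 2.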
Accordingly, among the correlations between the residuals and the endogenous variables, the correlation between the residual of vendor compatibility and functional compatibility (r = 0.28, p ≤ 0.01) and the correlation between the residual of functional compatibility and vendor compatibility (r = 0.25, p ≤ 0.05) contradicted the assumption of uncorrelated residuals. Taken together, the specific assumptions of regression and path analysis were reasonably satisfied. There were a number of missing values, especially in the case of 'participation', so that when the independent variables were regressed on CASE effectiveness there were only 66 complete cases. In order to increase the statistical power, organizational averages, when available (i.e. when the respondent was not the only one from that organization), were used for the missing values (Hertel, 1976), except in the case of CASE effectiveness variables, extending the number of cases to 100. The purpose of not using the organizational averages in the case of effectiveness variables was to
minimise the risk of inflated correlations (Hertel, 1976). The two 'samples' were compared by running regression analyses on both data sets. They showed high consistency, and for this reason the results based on the larger data set are reported.
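The missing-value strategy described above can be sketched as follows. This is a hypothetical illustration (the column names and values are invented): a missing predictor response is replaced by the mean of the other respondents from the same organization, respondents who are alone in their organization keep their missing values, and the effectiveness variables are never imputed.

```python
# Sketch of organizational-average imputation (synthetic data).
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "org":           ["A", "A", "B", "B", "C"],
    "participation": [4.0, np.nan, 3.0, np.nan, np.nan],  # predictor: imputed
    "rel_advantage": [5.0, 4.0, np.nan, 3.0, 4.0],        # effectiveness: left as-is
})

predictors = ["participation"]
for col in predictors:
    # Group mean ignores NaN; a lone respondent's group mean stays NaN,
    # so fillna leaves that value missing, as in the paper.
    org_mean = df.groupby("org")[col].transform("mean")
    df[col] = df[col].fillna(org_mean)

print(df)
```

Effectiveness columns are deliberately excluded from the loop so that the dependent variables cannot pick up artificially shared (and hence correlation-inflating) organizational values.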
Results

The intercorrelations among the study variables are depicted in Table 5. The correlation matrix indicates that the four perceptual indicators of CASE effectiveness are highly intercorrelated. It also indicates that all the independent variables 1-6, except education, have significant correlations with at least one of the CASE effectiveness variables (p ≤ 0.05). Tables 6 to 10 report the results of the path analysis. The direct effects are derived as standardised regression coefficients. The indirect effects are calculated as the sum of the products of all the directed paths from cause to effect, taking into account the intercorrelations between the exogenous variables. For instance, the indirect effect of perceived trialability on perceived demonstrability is 0.38 × 0.60 = 0.23. The common cause effect reports the amount of covariation between two variables that results from common prior variables in the model. For example, the common cause effect of perceived trialability on the relationship between perceived complexity and perceived demonstrability is 0.38 × (-0.13) = -0.05. The total common effect is the sum of the common effects of all the preceding variables. The difference between the correlation and the sum of the direct, indirect and common cause effects should be near to zero if the path model faithfully captures the relationships in the data. Kerlinger and Pedhazur (1973) mention the value 0.05 as a possible limit, suggesting that if the number of discrepancies beyond that limit is relatively small, the model corresponds to the data. As Tables 6 to 10 show, there are only a few discrepancies whose absolute value is greater than the limit of 0.05. Tables 6 to 10 also show the results of the hierarchical regression analysis in which the independent variables were entered as blocks, as indicated by the numbers before the independent variables.
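The two worked decompositions above amount to simple products of path coefficients, and can be checked directly (the coefficients are the ones quoted in the text; the decomposition logic here is a simplified sketch of the full procedure, which sums over all directed paths):

```python
# Indirect effect = product of coefficients along the directed path
# trialability -> complexity -> demonstrability (coefficients from the text).
beta_trial_to_complexity = 0.38   # trialability -> complexity
beta_complexity_to_demo = 0.60    # complexity -> demonstrability
indirect = beta_trial_to_complexity * beta_complexity_to_demo
print(round(indirect, 2))         # 0.23, as in the text

# Common cause effect = product of the paths from the common prior variable
# (trialability) to each of the two variables (complexity, demonstrability).
beta_trial_to_demo = -0.13        # direct path trialability -> demonstrability
common_cause = beta_trial_to_complexity * beta_trial_to_demo
print(round(common_cause, 2))     # -0.05, as in the text
```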
The final columns (dR²) in Tables 7 to 10 indicate the additional variance explained by the blocks when they were entered in the specified order. Table 6 shows the results in which perceived trialability was the dependent variable. The results show that only CASE experience had a significant β value (0.28, p ≤ 0.01) and that the external variables (education, CASE experience, participation and training) were unable to explain a significant part of the variance of perceived trialability. Table 7 shows the results when perceived complexity was the dependent variable. In addition to CASE
Table 5 Intercorrelations between the study variables.

Variable                            1      2       3       4       5      6       7       8      9       10     11     12      13      14      15
 1. Education
 2. Experience                     0.13
 3. Participation                  0.10   0.29**
 4. Training                       0.00   0.13    0.21*
 5. Perceived trialability        -0.05   0.23*  -0.03    0.07
 6. Perceived complexity          -0.02   0.46*** 0.24*   0.23*   0.47***
 7. Perceived demonstrability      0.09   0.45*** 0.37*** 0.12    0.16   0.62***
 8. Use compatibility              0.16   0.35*** 0.34*** 0.29**  0.24*  0.31**  0.40***
 9. Vendor compatibility          -0.07   0.34*** 0.33*** 0.53*** 0.09   0.42*** 0.31**  0.22*
10. Functional compatibility      -0.06   0.32*** 0.23*   0.21*   0.24*  0.45*** 0.40*** 0.26** 0.49***
11. Interface compatibility        0.02   0.12    0.19*   0.37*** 0.14   0.26**  0.20*   0.10   0.38*** 0.30**
12. Method compatibility           0.09   0.17    0.30**  0.15    0.01   0.10    0.17    0.12   0.23*   0.21*  0.13
13. Perceived relative advantage   0.02   0.34*** 0.24*   0.12    0.12   0.42*** 0.40*** 0.28** 0.30**  0.15   0.11   0.20*
14. Productivity effects          -0.03   0.20*   0.10    0.30**  0.23*  0.44*** 0.20*   0.12   0.36*** 0.22*  0.15   0.20*   0.56***
15. Quality effects               -0.15   0.19    0.19    0.28**  0.04   0.37*** 0.36*** 0.07   0.37*** 0.29** 0.10   0.22*   0.50*** 0.59***
16. Work unit effects             -0.03   0.25*   0.22*   0.35*** 0.16   0.40*** 0.25*   0.17   0.43*** 0.21*  0.21*  0.24*   0.42*** 0.66*** 0.59***

*p ≤ 0.05 **p ≤ 0.01 ***p ≤ 0.001
Note 1: The number of cases may vary because of missing values.
Note 2: Perceived complexity is measured inversely as the perceived ease to use/apply.
Table 6 Results of path analysis predicting perceived trialability.

                  Direct    Indirect           Unexplained
                  effect    effect      r      effect
Education         -0.06      0.03     -0.05     -0.02
Experience         0.28**   -0.05      0.23*     0.00
Participation     -0.12      0.09     -0.03      0.00
Training           0.02      0.02      0.07      0.03
R²                 0.07

*p ≤ 0.05 **p ≤ 0.01 ***p ≤ 0.001
Table 7 Results of path analysis predicting perceived complexity.

                   Direct    Indirect  Common            Unexplained
                   effect    effect    cause      r      effect        dR²
1. Education       -0.06      0.07               -0.02    -0.03        0.25***
   Experience       0.33***   0.15                0.46*** -0.02
   Participation    0.14      0.12                0.24*   -0.02
   Training         0.12      0.11                0.23*    0.00
2. Trialability     0.38***             0.08      0.47***  0.01        0.14***
R²                  0.39***

*p ≤ 0.05 **p ≤ 0.01 ***p ≤ 0.001
Note: Perceived complexity is measured inversely as the perceived ease to use/apply.
experience (β = 0.33, p ≤ 0.001), perceived trialability had a significant β value (0.38, p ≤ 0.001). The regression equation explained 39% of the variance in perceived complexity (p ≤ 0.001), of which the external variables accounted for 25% (p ≤ 0.001) and trialability an additional 14% (p ≤ 0.001). Table 8 depicts the results of the analysis in which perceived demonstrability was the dependent variable. It shows a significant direct effect of perceived complexity (β = 0.60, p ≤ 0.001). Among the external variables, only participation had a significant direct effect (β = 0.18, p ≤ 0.05), but experience, participation and training had relatively high indirect and total effects on perceived demonstrability. In the case of experience, this is to a considerable degree because of its effect on perceived complexity. The independent variables together explained 49% (p ≤ 0.001) of the variance in perceived demonstrability, of which the external variables accounted for 26% (p ≤ 0.001) and perceived complexity 22% (p ≤ 0.001). Table 9 summarises the results of analyses in which each factor of perceived compatibility in turn was the
Table 8 Results of path analysis predicting perceived demonstrability.

                   Direct    Indirect  Common            Unexplained
                   effect    effect    cause      r      effect        dR²
1. Education        0.05      0.06                0.09    -0.02        0.26***
   Experience       0.16'     0.31                0.45*** -0.02
   Participation    0.18*     0.20                0.37*** -0.01
   Training        -0.07      0.21                0.12    -0.02
2. Trialability    -0.13      0.23      0.05      0.16     0.01        0.01
3. Complexity       0.60***             0.04      0.62*** -0.02        0.22***
R²                  0.49***

'p ≤ 0.10 *p ≤ 0.05 **p ≤ 0.01 ***p ≤ 0.001
Note: Perceived complexity is measured inversely as the perceived ease to use/apply.
Table 9 Results of path analysis for factors of perceived compatibility.

                     Direct    Indirect  Common            Unexplained
                     effect    effect    cause      r      effect        dR²
(a) Predicting perceived use compatibility
1. Education          0.11      0.08                0.16    -0.03        0.23***
   Experience         0.15      0.24                0.35*** -0.04
   Participation      0.19'     0.18                0.34*** -0.03
   Training           0.20*     0.10                0.29**  -0.01
2. Trialability       0.21*    -0.02      0.04      0.24*    0.01        0.03
3. Complexity        -0.13      0.16      0.25      0.31**   0.03        0.00
4. Demonstrability    0.27*               0.08      0.40***  0.05        0.04*
R²                    0.31***

(b) Predicting perceived vendor compatibility
1. Education         -0.09      0.07               -0.07    -0.05        0.41***
   Experience         0.17'     0.22                0.34*** -0.05
   Participation      0.12      0.23                0.33*** -0.02
   Training           0.45***   0.11                0.53*** -0.03
2. Trialability      -0.09      0.10      0.07      0.09     0.01        0.00
3. Complexity         0.25*     0.01      0.17      0.42*** -0.01        0.04*
4. Demonstrability    0.01                0.26      0.31**   0.04        0.00
R²                    0.45***

(c) Predicting perceived functional compatibility
1. Education         -0.09      0.05               -0.06    -0.02        0.15**
   Experience         0.10      0.24                0.32*** -0.02
   Participation      0.07      0.18                0.23*   -0.02
   Training           0.11      0.11                0.21*   -0.01
2. Trialability       0.07      0.10      0.04      0.24*    0.03        0.03'
3. Complexity         0.21      0.11      0.12      0.45***  0.00        0.06**
4. Demonstrability    0.19                0.19      0.40***  0.02        0.02
R²                    0.26***

(d) Predicting perceived interface compatibility
1. Education          0.04      0.02                0.02    -0.04        0.20***
   Experience        -0.07      0.24                0.12    -0.05
   Participation      0.08      0.14                0.19*   -0.03
   Training           0.39***   0.06                0.37*** -0.08
2. Trialability       0.11      0.06      0.01      0.14    -0.04        0.03'
3. Complexity         0.13      0.05      0.13      0.26**  -0.05        0.02
4. Demonstrability    0.08                0.13      0.20*   -0.01        0.00
R²                    0.25***

(e) Predicting perceived method compatibility
1. Education          0.05      0.06                0.09    -0.02        0.11*
   Experience         0.10      0.10                0.17'   -0.03
   Participation      0.24*     0.07                0.30**  -0.01
   Training           0.09      0.07                0.15    -0.01
2. Trialability       0.02     -0.03      0.02      0.01     0.00        0.00
3. Complexity        -0.09      0.04      0.14      0.10     0.01        0.00
4. Demonstrability    0.07                0.07      0.17'    0.03        0.00
R²                    0.11

'p ≤ 0.10 *p ≤ 0.05 **p ≤ 0.01 ***p ≤ 0.001
Note: Perceived complexity is measured inversely as the perceived ease to use/apply.
dependent variable. They emphasise the significance of the direct effects of perceived quality and quantity of training on perceived use compatibility (β = 0.20, p ≤ 0.05), vendor compatibility (β = 0.45, p ≤ 0.001)
and interface compatibility (β = 0.39, p ≤ 0.001). The direct effect of participation was notable in the case of use compatibility (β = 0.19, p ≤ 0.10) and method compatibility (β = 0.24, p ≤ 0.05). CASE experience
had a notable direct effect on vendor compatibility (β = 0.17, p ≤ 0.10). CASE experience and also participation had relatively high indirect and total effects on the factors of perceived compatibility. Each of the three innovation characteristics had a significant impact on at least one factor of perceived compatibility: perceived trialability on use compatibility (β = 0.21, p ≤ 0.05), perceived complexity on vendor compatibility (β = 0.25, p ≤ 0.05) and perceived demonstrability on use compatibility (β = 0.27, p ≤ 0.05). The results indicate that the regression models explained significant proportions of the variance of use, vendor, functional and interface compatibility (31%, 45%, 26% and 25%, p ≤ 0.001 each). Most of the variance was explained by the external variables (23%, p ≤ 0.001; 41%, p ≤ 0.001; 15%, p ≤ 0.01; and 20%, p ≤ 0.01). Finally, Table 10 shows the results of regression analysis in which perceived relative advantage, productivity effects, quality effects and work unit effects were taken as dependent variables. Among the individual variables, the results emphasise the significance of perceived complexity as a factor affecting CASE
effectiveness perceptions. Perceived complexity had a significant direct effect on perceptions of relative advantage (β = 0.29, p ≤ 0.10), productivity effects (β = 0.43, p ≤ 0.01), quality effects (β = 0.26, p ≤ 0.10) and work unit effects (β = 0.31, p ≤ 0.05). Training had a notable direct effect on perceptions of quality effects (0.21, p ≤ 0.10). Further, the indirect effects of CASE experience, participation and perceived quality and quantity of training on the effectiveness measures are relatively high. The complete regression equations explained the variance of the four effectiveness measures very consistently (perceived relative advantage 28%, p ≤ 0.01; productivity effects 27%, p ≤ 0.01; quality effects 31%, p ≤ 0.001; and work unit effects 28%, p ≤ 0.01). A considerable part of the variance was explained by the four external variables (14%, p ≤ 0.05; 11%, p ≤ 0.05; 14%, p ≤ 0.01; and 17%, p ≤ 0.01, respectively). Perceived trialability did not significantly increase the explained variance of any of these four CASE effectiveness indicators, but perceived complexity did in the case of perceived relative advantage (7%, p ≤ 0.01), productivity effects (9%, p ≤ 0.01), quality effects (8%, p ≤ 0.01) and work unit effects (5%,
Table 10 Results of path analysis predicting perceived relative advantage, productivity effects, quality effects and work unit effects.

                          Direct    Indirect  Common            Unexplained
                          effect    effect    cause      r      effect        dR²
(a) Perceived relative advantage
1. Education              -0.04      0.07                0.02    -0.01        0.14**
   Experience              0.09      0.27                0.34*** -0.02
   Participation           0.00      0.24                0.24*    0.00
   Training               -0.11      0.23                0.12     0.00
2. Trialability           -0.06      0.12      0.04      0.12     0.02        0.00
3. Complexity              0.29'     0.06      0.02      0.42***  0.05        0.07**
4. Demonstrability         0.13      0.02      0.18      0.40***  0.07        0.01
5. Use compatibility       0.15                0.08      0.28**   0.05        0.05
   Vendor compatibility    0.20                0.09      0.30**   0.01
   Functional compatibility -0.18              0.20      0.15     0.13
   Interface compatibility -0.01               0.08      0.11     0.04
   Method compatibility    0.13                0.04      0.20*    0.03
R²                         0.28**

(b) Perceived productivity effects
1. Education              -0.00     -0.02               -0.03    -0.01        0.11*
   Experience             -0.01      0.23                0.20*   -0.02
   Participation          -0.06      0.17                0.10    -0.01
   Training                0.15      0.13                0.30**   0.02
2. Trialability            0.03      0.13      0.02      0.23*    0.05        0.02
3. Complexity              0.43**   -0.05      0.03      0.44***  0.03        0.09**
4. Demonstrability        -0.10     -0.01      0.27      0.20*    0.04        0.00
5. Use compatibility      -0.03                0.09      0.12     0.06        0.04
   Vendor compatibility    0.15                0.17      0.36**   0.04
   Functional compatibility -0.03              0.16      0.22*    0.09
   Interface compatibility -0.08               0.17      0.15     0.06
   Method compatibility    0.16               -0.00      0.20*    0.04
R²                         0.27**

(c) Perceived quality effects
1. Education              -0.15      0.01               -0.15    -0.01        0.14**
   Experience             -0.05      0.24                0.19'    0.00
   Participation           0.03      0.18                0.19'   -0.02
   Training                0.21'     0.06                0.28**   0.01
2. Trialability           -0.08      0.07      0.02      0.04     0.03        0.00
3. Complexity              0.26'     0.13     -0.02      0.37***  0.00        0.08**
4. Demonstrability         0.20     -0.02      0.16      0.36***  0.02        0.02
5. Use compatibility      -0.13                0.17      0.07     0.03        0.07
   Vendor compatibility    0.11                0.22      0.37***  0.04
   Functional compatibility 0.07               0.18      0.29**   0.04
   Interface compatibility -0.18'              0.19      0.10     0.09
   Method compatibility    0.16                0.03      0.22*    0.03
R²                         0.31***

(d) Perceived work unit effects
1. Education              -0.04      0.03               -0.03    -0.02        0.17**
   Experience              0.04      0.24                0.25*   -0.03
   Participation           0.03      0.21                0.22*   -0.02
   Training                0.15      0.19                0.35***  0.01
2. Trialability            0.01      0.09      0.03      0.16     0.03        0.01
3. Complexity              0.31*    -0.01      0.08      0.40***  0.02        0.05*
4. Demonstrability        -0.04     -0.01      0.25      0.25*    0.05        0.00
5. Use compatibility      -0.01                0.14      0.17     0.04        0.05
   Vendor compatibility    0.22                0.19      0.43***  0.02
   Functional compatibility -0.09              0.18      0.21*    0.12
   Interface compatibility -0.03               0.17      0.21*    0.06
   Method compatibility    0.16                0.05      0.24*    0.03
R²                         0.28**

'p ≤ 0.10 *p ≤ 0.05 **p ≤ 0.01 ***p ≤ 0.001
Note: Perceived complexity is measured inversely as the perceived ease to use/apply.
p ≤ 0.05). Even though not significant, the perceived compatibility factors explained a further 5%, 4%, 7% and 5% of the variance of the four effectiveness measures.
Discussion

The analysis clearly indicates that the classical innovation characteristics are interrelated and gives some support to the model of Figure 1. Figure 2 summarises the direct effects significant at the 0.10 level or better. Trialability was found to be a significant determinant of perceived complexity, and perceived complexity of perceived demonstrability. Among the factors of perceived compatibility, perceived demonstrability significantly affected only use compatibility. Perceived relative advantage was most notably affected by perceived complexity, whereas perceived compatibility was found to be quite weak in explaining perceived relative advantage. The results also indicate that CASE effectiveness perceptions are significantly explained by external
factors, especially CASE experience, participation and training, even though only training had a notable direct effect on the perceptions of CASE effectiveness. The results also show that CASE effectiveness perceptions are significantly affected by the perceptions of the CASE product. Inclusion of the characteristics of CASE, especially perceived complexity, increased significantly the explained variance of the perceptions in CASE effectiveness. The results show that perceptions of compatibility, especially use compatibility and vendor compatibility, are much more heavily affected by external factors than by the perceptions of CASE. Training in particular was found to be a significant determinant of use compatibility, vendor compatibility and interface compatibility. In the case of vendor compatibility, one explanation may be that CASE training is often organized by vendors. Participation was a significant factor affecting both use compatibility and method compatibility. The explanation in the latter case may be that participation makes it possible to ensure that a CASE tool supporting the
[Figure 2 (path diagram): Summary of the statistically significant direct effects. 'p ≤ 0.10 *p ≤ 0.05 **p ≤ 0.01 ***p ≤ 0.001]
method preferred by the participant is selected, or at least helps the reasons for selecting some other tool to be better understood. The results described in this paper also give credence to the earlier perceptual measures of CASE effectiveness. They indicate that one can develop, based on items of CASE effectiveness applied in earlier CASE research (Aaen et al., 1992), perceptual measures for CASE productivity and quality effects which are highly intercorrelated with perceived relative advantage. Perceived relative advantage was measured using an instrument (Moore & Benbasat, 1991) that is close to perceived usefulness (Davis, 1989). Perceived usefulness, as part of the Technology Acceptance Model, is one of the most validated instruments in the IS field and its relevance as a determinant of IT use has been demonstrated in several studies (Davis et al., 1989; Mathieson, 1991; Adams et al., 1992). Perceived usefulness is an individually oriented measure which corresponds to individual impact in the framework of DeLone and McLean (1992). To the author's knowledge, the relationship between perceived usefulness and measures of organizational effectiveness is an unexplored area. Since the items of perceived productivity and quality effects (see Table 4) clearly attempted to measure organizational effects (at the level of the IS function), the high correlations between perceived relative advantage (which is almost the same as perceived usefulness) and perceived productivity and quality effects give some promise that the former could
be an indicator of the latter. The correlations found may partly be explained by the measurement bias brought about by the fact that the same respondents assessed both perceived relative advantage and perceived productivity and quality effects. The relationship between individual impacts and organizational impacts clearly requires additional research. The study has its limitations. It is a retrospective analysis, a technique criticised by Coupe (1994), for example. The sampling procedure may also be biased. The results are based on CASE users in one country and the CASE user organizations were selectively contacted, based on vendor data. Moreover, the contact persons in each organization distributed the questionnaires within each organization. Because of the small number of organizations and respondents, it was not practical to estimate the possible non-response bias. As always, one can imagine additional variables that could have been incorporated in the model. Despite these limitations, the study has practical implications from the viewpoint of CASE adoption. Not surprisingly, it emphasises the significance of perceived complexity as a determinant of perceived CASE effectiveness. This finding is largely consistent with earlier research on the Technology Acceptance Model (Davis, 1989; Davis et al., 1989). It is obvious that CASE tools are often too complex to understand, and when their complexity is perceived to be high it is difficult to appreciate their advantages. Therefore any means of affecting these perceptions can be expected to
be significant in the management of CASE adoption. Besides perceived complexity, external variables, especially CASE experience, participation and the perceived quality and quantity of training, turned out to be important determinants of perceived CASE effectiveness. Among them, CASE experience had a direct effect on perceived complexity, more experienced CASE users perceiving lower CASE complexity. Participation and training had only a moderate effect on perceived complexity. One explanation for this finding in the case of training might be that training can be expected to have a mixed effect on perceived complexity. On the one hand, it may increase perceived complexity when introducing different aspects of the tool. On the other hand, when helping the prospective user to apply and use the tool, it attempts to decrease perceived complexity. It is obvious that future CASE training should pay more attention to the significance of perceived complexity in CASE adoption, devising training strategies that keep perceived complexity in focus throughout. The strategies can, of course, be built on the system features, gradually proceeding from the key functionalities to more esoteric features. One should also consider the strategy of introducing a general understanding of CASE technology, the specific CASE tool in question,
its basic idea, assumptions and limitations, together with more technical aspects of the tool and how to use it. The significance of perceived complexity also has implications for CASE vendors. On the one hand, it implies a concern for the increased complexity of CASE tools. Even though there are good reasons for more integrated CASE environments, including more and more versatile support for IS development, such increased complexity may emerge as a hindrance for adoption in practice. On the other hand, gradual evolution towards more integrated environments may facilitate the situation, as evidenced by the above finding that CASE experience was a significant determinant of perceived complexity. The significance of perceived complexity also suggests that any intelligent technical means, such as tutoring, hypertext and expert system features that increase the understandability of a CASE tool without improperly sacrificing its functionality, might be a very profitable investment in CASE tool development. Acknowledgements - The author wishes to express his gratitude to M.Sc. Jarmo Huttunen and Pasi Tappinen for data collection, to M.Sc. Marko Muotka for some statistical analysis, and to the anonymous reviewers for their constructive comments.
References

AAEN I, SILTANEN A, SØRENSEN C and TAHVANAINEN V-P (1992) A tale of two countries: CASE experiences and expectations. In The Impact of Computer Supported Technologies on Information Systems Development, IFIP Transactions A-8 (KENDALL KE, LYYTINEN K and DEGROSS J, Eds), pp. 61-91. North-Holland, Amsterdam.
ADAMS DA, NELSON RR and TODD PA (1992) Perceived usefulness, ease of use, and usage of information technology: a replication. MIS Quarterly 16(2), 227-247.
BANKER RD and KAUFFMAN RJ (1991) Reuse and productivity in integrated computer-aided software engineering: an empirical study. MIS Quarterly 15(3), 375-401.
BILLINGS RS and WROTEN SP (1978) Use of path analysis in industrial/organizational psychology: criticism and suggestions. Journal of Applied Psychology 63(6), 677-688.
COUPE RT (1994) A critique of the methods for measuring the impact of CASE software. European Journal of Information Systems 3(1), 28-36.
DAVIS FD (1989) Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly 13(3), 319-339.
DAVIS FD, BAGOZZI RP and WARSHAW PR (1989) User acceptance of computer technology: a comparison of two theoretical models. Management Science 35(8), 982-1003.
DELONE WH and MCLEAN ER (1992) Information systems success: the quest for the dependent variable. Information Systems Research 3(1), 60-95.
DOLL WJ and TORKZADEH G (1989) Discrepancy model of end-user computing involvement. Management Science 35(10), 1151-1171.
DOWNS GW Jr and MOHR LB (1976) Conceptual issues in the study of innovation. Administrative Science Quarterly 21(4), 700-714.
FELDMAN J (1975) Considerations in the use of causal-correlational techniques in applied psychology. Journal of Applied Psychology 60(6), 663-670.
FICHMAN RG (1992) Information technology diffusion: a review of empirical research. In Proceedings of the Thirteenth International Conference on Information Systems, Dallas, Texas (DEGROSS JI, BECKER JD and ELAM JJ, Eds), pp. 195-206.
GOULD JD, BOIES SJ and LEWIS C (1991) Making usable, useful, productivity-enhancing computer applications. Communications of the ACM 34(1), 74-85.
HAIR JF, ANDERSON RE, TATHAM RL and BLACK WC (1992) Multivariate Data Analysis with Readings, 3rd edn. Macmillan, New York.
HERTEL BR (1976) Minimizing error variance introduced by missing data routines in survey analysis. Sociological Methods and Research 4, 459-474.
HUFF CC (1992) Elements of a realistic CASE tool adoption budget. Communications of the ACM 35(4), 45-54.
JAMES LR (1980) The unmeasured variables problem in path analysis. Journal of Applied Psychology 65(4), 415-421.
KEMERER CF (1992) How the learning curve affects CASE tool adoption. IEEE Software 9(3), 23-28.
KERLINGER FN and PEDHAZUR EJ (1973) Multiple Regression in Behavioral Research. Holt, Rinehart & Winston, New York.
KUSTERS RJ and WIJERS GM (1993) On the practical use of CASE-tools: results of a survey. In CASE '93, Proceedings of the 6th International Workshop on CASE, Singapore, pp. 2-10. IEEE Computer Society Press, Los Alamitos, California.
LEQUESNE PN (1988) Individual and organizational factors and the design of IPSEs. Computer Journal 31(5), 391-397.
MATHIESON K (1991) Predicting user intentions: comparing the technology acceptance model with the theory of planned behavior. Information Systems Research 2(3), 173-191.
MOORE GC and BENBASAT I (1991) Development of an instrument to measure the perceptions of adopting an information technology innovation. Information Systems Research 2(3), 192-222.
MOSLEY V (1992) How to assess tools efficiently and quantitatively. IEEE Software 9(3), 28-32.
NORMAN RJ and NUNAMAKER JF Jr (1989) CASE productivity perceptions of software engineering professionals. Communications of the ACM 32(9), 1102-1108.
NORMAN RJ, CORBITT GF, BUTLER MC and MCELROY DD (1989) CASE technology transfer: a case study of unsuccessful change. Journal of Systems Management 40(5), 33-37.
NUNNALLY JC (1978) Psychometric Theory. McGraw-Hill, New York.
ORLIKOWSKI WJ (1989) Division among the ranks: the social implications of CASE tools for system developers. In Proceedings of the Tenth International Conference on Information Systems, Boston, Massachusetts (DEGROSS JI, HENDERSON JC and KONSYNSKI BR, Eds).
RAI A and HOWARD GS (1994) Propagating CASE usage for software development: an empirical investigation of key organizational correlates. OMEGA International Journal of Management Science 22(2), 133-147.
RAMILLER N (1994) Perceived compatibility of information technology innovations among secondary adopters: towards reassessment. Journal of Engineering and Technology Management 11(1), 1-23.
ROGERS EM (1983) Diffusion of Innovations, 3rd edn. Free Press, New York.
SELEMAT MH, CHOONG CY and OTHMAN AT (1994) Non-use phenomenon of CASE tools: Malaysian experience. Information and Software Technology 26(9), 531-537.
SØRENSEN C (1993) What influences regular CASE use in organizations? An empirically based model. Scandinavian Journal of Information Systems 5, 25-50.
STOBART SC, THOMPSON JB and SMITH P (1991) Use, problems, benefits and future direction of computer-aided software engineering in United Kingdom. Information and Software Technology 33(9), 629-636.
TORNATZKY LG and KLEIN KJ (1982) Innovation characteristics and innovation adoption-implementation: a meta-analysis of findings. IEEE Transactions on Engineering Management EM-29(1), 28-45.
VAN DE VEN AH and FERRY DL (1980) Measuring and Assessing Organizations. Wiley, Chichester.
WIJERS GM and VAN DORT HE (1990) Experiences with the use of CASE-tools in the Netherlands. In Advanced Information Systems Engineering, Lecture Notes in Computer Science, No. 436 (STEINHOLTZ B, SØLVBERG A and BERGMAN L, Eds), pp. 5-20. Springer-Verlag, Berlin.
WYNEKOOP JL, SENN JA and CONGER SA (1992) The implementation of CASE tools: an innovation diffusion approach. In The Impact of Computer Supported Technologies on Information Systems Development, IFIP Transactions A-8 (KENDALL KE, LYYTINEN K and DEGROSS J, Eds), pp. 25-41. North-Holland, Amsterdam.
About the author

Juhani Iivari is a full professor in information systems at the University of Jyvaskyla. He received his MSc and PhD degrees from the University of Oulu. His research has broadly focused on theoretical foundations of information systems, information systems development methodologies and approaches, organizational analysis, implementation and acceptance of information systems, and quality of information
systems. His current research interest lies in object-oriented analysis and design. Professor Iivari has published in journals such as Data Base, European Journal of Information Systems, Information & Management, Information and Software Technology, Information Systems, Information Systems Journal, International Journal of Information Management and MIS Quarterly, as well as in a number of conference proceedings.