Quality & Quantity (2005) 39: 137–154 DOI 10.1007/s11135-004-1672-y
© Springer 2005
Improvement and Transfer of Practice-directed Knowledge

FRANS M. VAN EIJNATTEN and LIEUWE DIJKSTRA

Faculty of Technology Management (TM), Eindhoven University of Technology (TU/e), Pav. U10–HPM, P.O. Box 513, 5600 MB Eindhoven, The Netherlands
Abstract. In this paper, we propose a method of working that aims to explicitly organize action research and consultancy activities. Principles of action research are discussed and comparisons are made with practice-directed research in general, particularly Argyris' idea of an action science. The main feature of our proposal is a distinction between implementation (the how, or organization, of an intervention) and justification (the why, or plausibility of the intended effect occurring). Some simple rules are proposed for registering and using these two aspects (which we label modules) to improve and disseminate practice-directed knowledge. The method is illustrated by an intervention in socio-technical re-designing.

Key words: action research, action science, evaluation of interventions, intended effects, dissemination of practice-directed knowledge.
1. Introduction

In this paper, we concentrate on action research and consultancy work, activities that are usually characterized by the absence of explicit, standardized, problem-directed knowledge, and by the lack of fixed procedures and methods. We take the view that structuring these activities might upgrade their scientific status and at the same time make it easier to improve, save and transfer practice-directed knowledge. We propose a method of working that organizes these activities by means of a few simple rules.

Action research as a scientific discipline refers to a strategy of change in which the scientist participatively supports people in changing their situation in a desired way. The essence of the action research model is a set of instructions that tells the persons involved what to do – how to act – to achieve a desired end under existing conditions. This is a departure from the
Author for correspondence: Dr. Frans M. van Eijnatten, Associate Professor, Faculty of Technology Management, Institute for Business Engineering and Technology Application, TU/e. Phone: +31 40 247 24 69 (GMT/MET+1); Fax: +31 40 243 71 61; E-mail: F.M.v.[email protected]
traditional idea of science as an abstract system of general laws and initial conditions (the 'covering-law model'). This concerns both the fundamental and applied versions. Fundamental research is directed towards expanding the reach of explanation of theories, usually by disconfirming new predictions, whereas applied research starts from the expectation that generally accepted knowledge will prove effective or 'true' under known conditions. Action research in all its manifestations is opposed to both versions of the covering-law model, because of an unbridgeable gap between theory and practice (Friedman, 2001: 160). Therefore, action research is a choice in favour of a specific conception of research, based on views and convictions about the social function of science.

Under the umbrella of action research, a large diversity of movements is to be found. For our purpose, two schools or communities of inquiry should be distinguished. Both schools dissociate themselves from the idea that the covering-law model can provide actionable knowledge by abstracting from contextually bound pragmatics (Argyris et al., 1985: 40). However, they differ from each other fundamentally with respect to normative rules for action practice.

The first school, inseparably related to the work of Argyris, adopts the normative ideas of neo-positivism,1 particularly the quality criteria of objectivation and testing (Argyris et al., 1985: 54). Following in the footsteps of this research tradition, we call this school 'action science'. In general, our paper concurs with action science on the main formal methodological points. As regards content, however, it will turn out that we do not adopt the idea of what are called action theories in this research tradition.

The second interpretation of action research is based on a conglomerate of related normative ideas about action research that is primarily conceived as an activity to serve socially deprived groups.
It is a socio-political commitment with an emancipatory objective. Following the general lines of thinking in these communities of inquiry, we call it 'emancipatory action research'. In addition to rejecting the covering-law model of applied science, these communities also reject neo-positivistic quality criteria for emancipatory action research, and replace them with new choice-points for quality and validity assessment in accordance with what is called 'a participative worldview' (Reason and Bradbury (Eds.), 2001: 6/7). As a consequence of these completely different normative sets, there is a notable gap between the communities of inquiry of action science and emancipatory action research.

In section 2, Argyris' model of action science will be discussed in the context of forms of interventions for intended effects.2 We first examine the idea of emancipatory action research in more detail. Although there seem to be so many variations that it is a risky enterprise to lump them together, we shall attempt to characterize them in outline as one single movement.
A general feature is their turning away from "a modernist worldview based on a positivist philosophy" (Reason and Bradbury (Eds.), 2001: XXIII) towards "a participatory, democratic process concerned with developing practical knowing in the pursuit of worthwhile human purposes, grounded in a participatory worldview which we believe is emerging at this historical moment" (Reason and Bradbury, 2001: 1). It is all about "theories which contribute to human emancipation," leading "not just to new practical knowledge, but to new abilities to create knowledge" (Reason and Bradbury, 2001: 2) for the benefit of "the liberation of the oppressed and underprivileged of this world" (Reason and Bradbury, 2001: 3). This results in a shift "from a concern with idealist questions in search of 'Truth' to concern for engagement, dialogue, pragmatic outcomes and an emergent, reflexive sense of what is important" (Bradbury and Reason, 2001: 447).

Derived points of interest for validity and quality include participative-relational practices, the practical usefulness of research, and enduring consequences (Bradbury and Reason, 2001: 448). Quality aspects like epistemological forms and the assessment of worthwhileness are difficult to assess in concrete, tangible terms, even for people who are not averse to this thinking in advance – like the authors – but who do not belong to the inner circle of this community of inquiry. Bradbury and Reason formulate eight choice-points concerning quality and validity in the conception of emancipatory action research (Bradbury and Reason, 2001: 454).

In this paper, we leave emancipatory action research aside, because this community of inquiry is persistently opposed to systematically recording knowledge about actions. Emancipatory action research does not make a division between the roles of the researcher and the group in need of help.
Its adherents reject the basic ideas of what they call 'positivistic' science on moral grounds, thus excluding a meaningful discussion about methodological problems in engaged action science. In note 1, we comment on a popular misunderstanding about 'positivism' that is implicitly present in the emancipatory conception of action research.

Finally, consultancy work has many guises, ranging from general advice at a distance to detailed participation in a change process. Argyris notes that consultants are a rich natural resource for building usable knowledge, though usually this knowledge is stored in the mind of the practitioner as an uncoded repertoire of effective practices. Consulting and research should therefore not be separated: solving practical problems does not contribute directly to theory development (Argyris, 1993: 283/284), but it might do so if knowledge and practice in consulting were more formally structured. In this paper, we restrict the range of consultancy work to activities in which there is a clear distinction between the consultant and the situation.

In summary, in this paper, our domain of interest is research activities in a single, unique situation, where interventions are based on an explicit effects
plan, where a catalogue of well-documented considerations makes the occurrence of the desired event plausible, and where there is a clear demarcation between research activities and the alteration process. In section 2, we work out similarities and differences between fundamental, scientific research and practice-directed research. Next, in section 3, we present a work scheme that concretizes what the architecture of an explicit coding of an intervention might be in practice. Section 4 gives an interpretation of this work scheme in the light of regulative ideas, and in section 5 the proposed method is illustrated by an example from socio-technical re-designing.

2. Modeling Interventions for Intended Effects

A main distinction can be made between theory-directed and practice- or policy-directed research. Other current formulations to express this idea are scientific or fundamental research versus practice-directed research, the distinction between explanatory and action models, and the distinction between basic and applied research. In the research tradition of Argyris and his adherents, the standard term for the combination of practical problem solving and theory building is 'action science'. This is practice-directed research in a particular sense, based on an explicit strategy of the following form: "1. in situation X (conditions), 2. then do Z (strategy), 3. to achieve Y (goal)" (Friedman, 2001: 161). Such "a theory of action must meet three requirements": (1) "describe and understand reality", (2) "invent new solutions to problems" and (3) "prescribe what actions are to be taken and how they are to be implemented" (Argyris, 1993: 249/250). A typical feature in making a strategy explicit in this way is the rejection of the use of theory in the more conventional sense of initial conditions, and laws or regularities.
The argument is that "the rules that produce valid positivist explanations of social problems cannot produce the knowledge needed to do something about them. Applied science fails to bridge this gap because it functions according to the same positivist rules and standards as basic science" (Friedman, 2001: 160). As a technology, this conception of action science is often difficult to understand as a sequence of clearly described, explicit steps of intervention.
Friedman presents a helpful review in an attempt to overcome this difficulty and to clarify the key features of action science in the light of the practice of social and organizational change (Friedman, 2001: 161–163).

One main topic in practice-directed research in general, from the point of view of the neo-positivistic idea of science, is the intersubjective agreement about the research design and the results. However, an explicit documentation of an intervention usually does not exist. In particular, there is usually no explicit account of the reasons why it is plausible for an intervention to produce an intended effect. Consequently, the evaluation of an intervention is often excluded or is done under deficient conditions. This makes it impossible to systematically improve an intervention (to 'learn from experience') and to transfer valid, documented 'actionable knowledge' (Argyris, 1993: 249–285).

In this paper, we contend that an important role can be played by conventional regulative ideas in practice-directed research, originating from neo-positivistic research practice. Regulative ideas are guidelines that govern and direct scientific conduct. Popper, borrowing from Kant, made these ideas generally accepted in the empirical scientific community (Swanborn, 1996: 19).

The fundamental concept in the distinction between theory-directed and practice- or policy-directed research is explanation. Explanation means making a phenomenon, state, event or occurrence plausible and intelligible by referring to laws or regularities that hold. The general structure consists of a set of laws and initial conditions from which predictions of future events can be derived (Argyris et al., 1985: 17). The classic formulation states that "to give a causal explanation of an event means to deduce a statement which describes it, using as premises of the deduction one or more universal laws, together with certain singular statements, the initial conditions" (Popper, 1959: 59).
In a less strict way of thinking, intermediate phenomena usually explain ('interpret') a causal relationship and make it plausible. For example, the relationship between socio-economic background and elementary school performance has the intermediate explanatory links (1) the parents' educational level, (2) the child's linguistic skills and (3) the school's quality. Directly connected to this model of describing, explaining and predicting events is the idea of testing and falsifying.

Practice- or policy-directed research has the same structure as theory-directed research, but here a particular cause-and-effect relationship is involved, namely the relationship between a means, measure or intervention on the one hand and an intended or desired practical consequence on the other. The intervention is the equivalent of a cause in theory-directed research, and the considerations that underlie the intervention are parallel to the explanatory chain in fundamental research. Finally, the plausibility of the occurrence of an effect in practice-directed research is equivalent to the prediction of an event in fundamental, theory-directed research. In other words, there are striking architectural similarities between theory-directed and
practice-directed knowledge. Basic research is designed to test theories and to produce generalizations; action or applied research seeks to solve client problems and to produce workable solutions. However, both are mainly concerned with the validity and generalizability of propositions (Argyris, 1993: 281). Therefore, instead of accentuating the differences between theory-directed and practice-directed research, it is more fruitful to stress their structural similarity, and to develop operational quality criteria for practice-directed knowledge in analogy to fundamental research.

This calls for an explicit documentation or coding of the underlying considerations of an intervention for two reasons. In the first place, such a coding may serve as a justification in advance; secondly, it afterwards offers an opportunity to evaluate the intervention. The coding should be performed in advance for yet another reason. In the unlikely event of the intervention not producing the intended effects, inspection of the set of documented underlying considerations is a powerful means for identifying the factors that caused the failure. This, in turn, may be a clue to a re-adjustment (improvement) of the intervention. In section 4, we will argue in more detail that there is a striking structural resemblance between proving whether a prediction in theory-directed research comes true and proving whether an intended effect in practice-directed research has occurred.

Our emphasis on the failure of an intervention should not be conceived as a lack of appreciation of a successful intervention, quite the reverse. Rather, it has to do with the analogy between the absence of an intended effect in practice-directed research and the disconfirmation of a prediction in theory-directed research. Both cases have in common the powerful means to falsify or empirically disconfirm (Argyris et al., 1985: 4).
But obviously, the failure of an intervention usually is a disappointment for both the people involved and the scientist. Also, it should be noted that in the case of a successful intervention an evaluation is not superfluous, because of the necessity to unambiguously assign the effect to the intervention. In the next section, we will further discuss which questions should be answered when evaluating a successful intervention.

In summary, practice-directed research, as we perceive it, is guided by the following questions:
1. What organizational design is required to realize the intended effect?
2. Why is it plausible that the intended effect will occur?
And, optionally:
3. Why did the intended effect not occur?
4. Which re-adjustments are needed to make the intervention more successful?

In the next section, we develop these questions further into a general method of working in practice-directed research.
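The four questions suggest a simple record format for coding an intervention. The following sketch is purely our own illustration – none of these field names appear in the original method – of how such a record might be kept in a machine-readable form:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class InterventionRecord:
    """A coded intervention, split into the two modules proposed in the text.
    Field names are illustrative, not part of the original method."""
    implementation: List[str]              # question 1: the 'how' of the intervention
    justification: List[str]               # question 2: why the effect is plausible
    intended_effect: str
    effect_occurred: Optional[bool] = None # filled in at evaluation time
    failure_causes: List[str] = field(default_factory=list)   # question 3
    readjustments: List[str] = field(default_factory=list)    # question 4

# A record is completed in advance; the optional fields only at evaluation.
record = InterventionRecord(
    implementation=["parallelize the production flow",
                    "install self-managing work teams"],
    justification=["parallel flows reduce variety and disturbance probability",
                   "whole tasks raise worker motivation"],
    intended_effect="higher efficiency and product quality",
)
```

Questions 1 and 2 are answered before the intervention; questions 3 and 4 only on evaluation, when the intended effect has failed to occur.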
3. The Architecture of Interventions

A work scheme to formalize the coding of interventions for intended effects can be developed from two possible starting points. First, in the context of existing practice-directed research, the notion of a causal model to bring about an intended effect usually consists of reconstructing customary – perhaps even not well-considered – strategies in practice, as sets of statements that specify under what conditions a strategy will have an intended effect. This is the conception of the action science model that we mentioned in section 2. Secondly, starting from fundamental, theory-directed research, the intervention model takes the shape of laws or regularities, initial conditions, and derived predictions about intended effects. This is the classical hypothetical-deductive model that we mentioned in section 2.

Argyris correctly argues that usually such models are not based on a regressive action model that specifies how to create the initial conditions (Argyris et al., 1985: 19; 1993: 262). If, for example, in a theoretical construction social cohesion plays a decisive role as a condition for the occurrence of an intended effect, then the problem is the lack of a theory of action that tells us how to strengthen social cohesion. We are not aware of any documented action models that contain detailed instructions for the creation of conditions like this. In our view, Argyris' objection applies all the more to the 'theoretical' considerations that underlie interventions in practice-directed research, which usually consist of knowledge about and insights into the situation, instead of laws or regularities. Of course, laws and rules from fundamental research are not excluded, but it should be realized that solid scientific and actionable knowledge is at most a lucky incidental circumstance.
In Argyris’ words, a theory of action ‘‘tells the person or group that uses it how to act effectively; how to design and implement actions in such a way that the actions achieve the intended consequences’’. This kind of theorizing is directed towards promoting or inhibiting learning in behavioural systems to realize goals, denoted as ‘governing variables’ or desired values, that can be thought of as ‘a continuum with a preferred range’ (Argyris et al., 1985: 84). Such theories ‘must be subjected to the most rigorous tests available’ to produce ‘‘actionable knowledge’’. Robust tests are characterized by (1) a ‘‘specification of what will and will not happen’’; (2) ‘‘it predicts the conditions under which the hypotheses will hold’’, and (3) ‘‘are not influenced by the indeterminacy principle’’ (Argyris, 1993: 270/271). For these reasons, practice-directed research benefits more from an explicit and precise account of considerations that may not stand the test of scientific criticism, rather than from hiding or stashing them. Such an explicit account has three additional advantages:
1. It enables a check in advance on the feasibility, the logical structure and the chance of success of the intervention.
2. The people involved or a potential client are given full information about the foundation, the practical details and the chance of success of the proposed intervention, which may serve to convince them to give the researcher or consultant the job.
3. A complete set of conditions is created for the evaluation, improvement and transfer of knowledge for future interventions.

In conformity with closing questions 1 and 2 in the preceding section, we distinguish two elements or modules in coding an intervention. We label these elements the implementation module and the justification module. In this view, the first phase in coding practice-directed research consists of two activities of completion:
1. Implementation: the specification of the set of instructions for the organizational design of the intervention to realize the intended effect (the how aspect of the intervention).
2. Justification: the catalogue of considerations that make the occurrence of the intended effect plausible under the specified organizational conditions (the why aspect of the intervention).

The distinction between implementation and justification is important, although the two may often interact. Modification of the implementation module may become necessary because the actual situation has changed during the intervention. Adjustment of goals, on the other hand, may also become necessary as a consequence of changed views about situational relations and conditions. This line of reasoning runs parallel to single- and double-loop learning as presented in the work of Argyris. When the intended consequences are not achieved, and the response is a supposedly more effective strategy to correct mistakes in order to satisfy the original 'governing variables', then the action is called 'single-loop learning'.
When the reaction is re-examination and the implementation of a new strategy to satisfy adjusted 'governing variables', then the action is called 'double-loop learning' (Argyris et al., 1985: 86). Repeated cycles of the unchanged implementation module to achieve a fixed goal (fixed governing variables or preferred values) are, in our line of reasoning, identical to Argyris' single-loop learning. The process of adjusting implementation and goals in our line of reasoning is identical to Argyris' double-loop learning.

However, an important point of difference still remains between the single- and double-loop learning models on the one hand, and the above model on the other. From their nature, it follows that both single- and double-loop learning are located in and restricted to the implementation module, and that the justification module is absent in action science. For,
as we have indicated earlier in several places, the heart of action science consists of instructions on what action the agent of intervention should take to achieve the intended effect (the how aspect of the intervention). So, the methodological foundation of a theory of action unquestionably is the notion of causality. In variable language, this stands for cause and effect. Design causality explains how the components of a pattern of variables arise, and "it suggests ways to change the pattern itself and its components with it" (Argyris, 1993: 266). But why the intervener's action is likely to be successful – our justification module, which explicitly formulates a catalogue of considerations that make the occurrence of the intended effect plausible under actual conditions – does not form part of the conventional idea of action science. It is a crucial element in the working method that we propose here, particularly in the event of the failure of an intervention. Both occurrence and failure are the basis for robust evaluation of interventions.

If the intended effect occurs, evaluation for the purpose of knowledge transfer should:
1. Unambiguously confirm that the effect was caused by the implementation module.
2. Make clear which conditions for the occurrence of the effect are characteristic of a unique situation or application.

If the intended effect does not occur, the evaluation should identify the causes of the failure:
1. Which implementation elements were not correctly used or did not work?
2. Which elements in the justification module that made the occurrence plausible were false? Conversely, are considerations missing which might explain the failure? Was the time span sufficiently large to realize the intended effect?

A method of working that bears much resemblance to the above way of reasoning originates in recent developments in evaluation research. Leeuw presents a review of methods for reconstructing program theories.
As a research tradition, this line of thinking aims at outlining the underlying mechanisms of a program as 'an explicit theory or model of how a program causes the intended or observed outcomes' (Leeuw, 2003). He concludes that in recent decades the evaluation of social programs and policies in general has gone through a boom, whilst the methods to reconstruct the underlying reasons to expect intended effects have come off badly. Leeuw makes a distinction between three main methods:
1. a policy-scientific approach,
2. a strategic assessment approach,
3. an elicitation approach.
For our purpose, the first and the last approach are of interest. The policy-scientific approach includes six steps to identify underlying mechanisms in many ways, to reformulate them as 'if-then' statements, and to evaluate their logical consistency and corresponding empirical contents. The elicitation approach is directed towards examining mental models and establishing cognitive maps. This comes close to Argyris' conception of making explicit 'theories-in-use' (in contrast to 'espoused theories') and is actually part of his double-loop learning process that we mentioned earlier (Argyris and Schön, 1978). In the discussion, we briefly come back to this theme.

Pawson and Tilley's Context-Mechanism-Outcome (CMO) configurations, intended to create cumulative, transferable knowledge, bear a striking substantive resemblance to our above work scheme, and the CMO methodology has roughly the same objectives, albeit in a completely different terminology and structure. In the discussion, we return to their pioneering and inspiring ideas about realistic evaluation (Pawson and Tilley, 1997).

4. Regulative Ideas in Practice-Directed Research

In the modern philosophy of science, the ideal of objectivity has been replaced by intersubjective agreement about research results as the next best criterion (Argyris et al., 1985: 54; Swanborn, 1996: 20). Our account follows Swanborn's line of argument. To come to grips with this idea in the context of practice-directed research, it is helpful to recall that the necessary requirement for intersubjective agreement in general is controllability, which is manifestly enabled by precision and publicity. The underlying methodological condition for controllability, however, is falsifiability. The latter concept presupposes precision and publicity. Hypotheses that are imprecise or not public cannot be satisfactorily disconfirmed, and as a consequence, the idea of control based on intersubjective agreement no longer makes sense.
For this reason, falsifiability is considered to be the heart of regulative ideas. Once the requirement of falsifiability is satisfied, two practical criteria serve to evaluate research and offer information to achieve intersubjective agreement: reliability and validity. Reliability presupposes (1) researcher, (2) time, and (3) instrument independence; reliability controls are carried out by replication and by examining the inter-researcher identity of research conduct. Validity is a much more complicated criterion, because many research results are based on argumentation and interpretation. Therefore, it is difficult to unambiguously decide whether random and systematic errors (the manifestations of invalidity) are absent. The most concrete expression of validity is generalizability. Reproducibility, which is usually rated under regulative ideas, is a derived criterion, based on the criteria of controllability and reliability.
5. An Illustration of the Proposed Method

To illustrate the method, we elaborate both the implementation and justification modules for a practice-directed research project to improve the efficiency and quality of production, using socio-technical systems design as the theoretical inspiration. For reasons of confidentiality, the following case is fictitious; however, it is based on the experiences of one of the authors.

A small hearing-aids factory faced the fact that turnover had declined significantly over the past two years. A recent market analysis showed that its products were more expensive and of a lower quality than those of the competition. An independent evaluation by an auditing agency revealed that both the efficiency and the quality of production suffered because of the bureaucratic structure of the organization. The span of control was rather low; the number of hierarchical levels was high. As a consequence, the company employed too many middle managers who did not add value to the process. Moreover, the motivation of workers was rather low because of the highly fragmented, monotonous work, and the inability to influence the quality of their work directly. The maintenance and quality departments were 'technical kingdoms' with priorities of their own, different from those of the production department.

The management of this factory was forced by its shareholders to reverse the downward trend by an order of magnitude. One of the authors was invited to participate in the design and implementation of an intervention for that purpose. Before we accepted the commission, we elaborated the diagnosis in greater detail, which showed the strong division of labour as a conglomerate of highly fragmented jobs.
In addition, we found correlates in terms of high stocks of work in progress, a messy production workspace, no sense of quality among workers, conflicts between professional staff and production management about the maintenance of machines, an overloaded top-management team predominantly occupied with all kinds of ad hoc operational activities, high absenteeism and poorly educated workers. Our analysis led to the conclusion that the functional structure caused simple and monotonous production jobs, carried out by unmotivated workers with insufficient education, who were directly supervised by a whole 'army' of foremen and middle managers, causing contradictory instructions, role inconsistencies and unclear responsibilities.

Because the environment was and still is turbulent, with many competitors around, the basic problem was the bureaucratic structure of the organization, which was slow, and lacked appropriate quality and efficiency. In such a structure, opportunities for disturbance are high, because nobody has a complete view, and managers are too distant from the place where the actual work is carried out. A high price and low quality of products are inherent characteristics of the current structure.
5.1. JUSTIFICATION MODULE
The above-mentioned in-depth diagnosis convinced us that on a general level contra-indications were present for applying a bureaucratic structure. On the basis of Socio-Technical Systems Design theory, we proposed a streamlined production structure with parallel flows for different product families, which make it possible to establish self-managing work teams. Below, we give our theoretical implementation module in outline. However, first we proceed with the justification module, because it implicitly expresses the reasons why the current production situation is not producing the results the shareholders want. In an organization with parallel production flows, processes are not grouped on the basis of business functions, but on the basis of specific product/market/technology combinations. Converting one single production flow into two separate dedicated parallel subflows reduces the overall external variety (as measured by the total number of operational trajectories) with 83% (c.f., De Sitter, 1998). ‘‘After parallelization, each subsystem accounts for only a part of the original (environmental) variety. (. . .) This intervention has a major impact on the overall complexity of the system. (. . .) The idea of parallelizing workflows to enable team design is also vigorously expressed in Mathews (1994) under the title ‘segmentation by product or process’ (p. 56), while its effectiveness is transparently demonstrated in the case of Bendix Mintex (Mathews et al., 1993).’’ Van Eijnatten and Van der Zwaan, 1998: 301. There is convincing evidence that this basically logistic intervention paves the way for the successful implementation of self-managing teams who are executing ‘whole tasks’, which in turn will enhance the motivation of the workers: ‘‘Due to the reduced need for control, the self-managing teams can control a larger part of the paralleled production flow. By controlling rather large segments of the flow, the groups will become real ‘whole task groups’’. 
(Van Eijnatten and Van der Zwaan, 1998: 300/301). Because of the self-managing teams, there is less need for coordination and control by management, so the foremen and middle managers become obsolete. The disturbance probability is reduced because of the integration of execution and control, and of feedback, within the self-managing teams. Vital initial conditions for the above-mentioned interventions include the following:
1. Financial investments in a multitude of small-capacity production machines;
2. Extensive training and education of production personnel;
3. The integration of staff and production activities.

5.2. IMPLEMENTATION MODULE
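Before turning to the how, the variety-reduction figure quoted in the justification module can be made concrete with a toy calculation. The sketch below is purely illustrative: the model (interchangeable machines at each production step) and all numbers are our own assumptions, not De Sitter's (1998) calculation, whose 83% figure derives from the actual case.

```python
# Toy model of 'operational trajectories' in a production flow.
# ASSUMPTION (ours, for illustration only): an order passes through
# `steps` sequential operations, and at each operation any of
# `machines_per_step` interchangeable machines may be used, so one
# undivided flow allows machines_per_step ** steps distinct routes.

def trajectories(machines_per_step: int, steps: int) -> int:
    """Number of distinct routes through a single flow."""
    return machines_per_step ** steps

# One undivided flow: 6 machines available at each of 3 steps.
single_flow = trajectories(6, 3)      # 216 possible routes

# After parallelization: two dedicated subflows, each owning half of
# the machines (3 per step) and serving its own product family.
parallel = 2 * trajectories(3, 3)     # 2 * 27 = 54 routes in total

reduction = 1 - parallel / single_flow
print(single_flow, parallel, f"{reduction:.0%}")  # 216 54 75%
```

Under these toy numbers the reduction is 75%; the exact percentage depends on the number of steps and machines in the real flow, which is why De Sitter's case yields a different figure.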
Up till now, the why of the intervention has been discussed. We now return to the how, the implementation, in more detail. Socio-Technical Systems Design theory specifies exactly how the intervention should be implemented. What follows are some design-sequence rules and suggestions as to which teams, methods and techniques should be used (Van Eijnatten et al., 1992: 190–194; Van Eijnatten, 1993: 65; De Sitter et al., 1997):

1. Re-design the production structure aspect system first, and then proceed with the design of the control structure aspect system.
2. Re-design the production structure aspect system top-down, from macro to micro (parallelization and segmentation of production flows). Start by re-designing the production structure at the level of the company as a whole, using Production Flow Analysis (PFA; Burbidge, 1975) or the Semi-Parallel Streams technique3 (SPS; Hoevenaars, 1991; Van Eijnatten, 1993: 107–108; Hoevenaars and Van Eijnatten, 1994: 19–23), and continue at the meso level with segmentation of production flows, using cohesion analysis (Van Amelsvoort, 1992: 146; Wild, 1972). The re-design of the production structure precedes the re-design of process technology.
3. Re-design the control structure aspect system by a bottom-up allocation of control cycles in the sequence local/interlocal/global (De Sitter et al., 1997), according to the following principle: control as many aspects as possible as near to the spot where they originate, and allocate responsibility for coordination clearly and firmly with those whose efforts require coordination (Emery and Emery, 1989: 10). Start at the local level by allocating control loops to the self-managing work teams; a large amount of production control can be allocated to the teams (all operational control, some tactical control and a little strategic control). Continue with the allocation of control loops at the interlocal level, to multi-disciplinary staff teams. Here, the intergroup coordination of quality, logistics, personnel, automation, etc. will take place, using so-called 'star roles'. The focus here is on tactical production control. Finish the allocation of control loops at the global or organizational level, by allocating most strategic control, some tactical control and a very small amount of operational control to the management team (Van Amelsvoort, 1992: 116/183–186).

As to implementation, an intensive socio-technical training programme for both workers and managers has to be executed, in which they experientially
learn to re-design their own organization according to the principle of 'self-design by knowledge transfer' (De Sitter, 1993: 171).

5.3. EVALUATION OF THE INTERVENTION
A first evaluation of the new organization, after almost six months of functioning, showed that the training programme had been administered successfully. In parallel, the socio-technical re-design was carried out by management and workers, in close collaboration with a team of socio-technical consultants, resulting in two independent flows for different types of hearing aids: one for standard series, and one for special products, tailor-made according to the diagnosis of the audits. In both streams, two self-managed teams of 12 persons were formed to execute the whole production process. All foremen and most middle managers were gone.

However, a thorough analysis of the productivity and quality of the new structure revealed that results were very poor. Management also reported that the organizational climate was suffering, and that workers were quarrelling most of the time. They wanted to re-install the old structure! Workers complained that they felt criticized all the time, and pressured by their colleagues to do things right the first time. In conclusion, it seemed that the socio-technical solution was not paying off at all.

To correct for the apparent failure, and thus improve the intervention model, we first considered the implementation module. The training was administered well and employees were competent in socio-technical principles. They showed interest in and commitment to the re-design of their organization. The re-design was carried out in full accordance with the socio-technical design rules, as specified above. The parallel streams matched the organization's need for control perfectly. Furthermore, the two parallel flows were designed correctly, using the type of product as the main entry. Further segmentation of product flows was not needed, because the self-managed teams already had the ideal number of people (Van Amelsvoort and Scholtes, 1993: 29). The allocation of control activities was done according to the suggestions given (Van Amelsvoort, 1992: 183–186). Consequently, the how seems to be fine, and the implementation module does not need correction or adjustment.

We now turn to the evaluation of the theoretical justification module. The basic claim that self-management fosters productivity and motivation seems to be seriously called into question in this case, because the results were far from what was predicted. However, when we looked at the process aspects of group formation, we discovered that teams go through different developmental stages: 'forming, storming, norming, performing, adjourning'
(Tuckman and Jensen, 1977: 419–427). Teams cannot be expected to be productive before the performing stage is reached, which takes at least one year from their inception. Moreover, Van de Bovenkamp and Jongkind (2002: 55–71) assert that self-managed groups go through different crises in their development. On the basis of this evidence, we added to the justification module, as an explanation for the 'unsuccessful implementation', that the new self-managed teams in the hearing-aids company could well be in the 'storming' phase of their team-development process. This idea is supported by the management's report that people were 'quarrelling' all the time. On the basis of this new insight, we advised the management not to return to the old bureaucratic structure but to stay with the new team structure for at least another year.

In summary, we left the implementation module unchanged, but added Tuckman and Jensen's model of group-development stages to our justification module. Thus, the two modules were in accordance with actual results and ready for further testing.

6. Discussion

Our conclusion is that adopting these ideas in practice-directed research is not just possible but even self-evident, because they can have a major impact on the effectiveness, the transferability, and the acceptance as a scientific activity of practice-directed research. This particularly concerns the development in practice of the neo-positivistic quality criterion of controllability. Our inclination to seek alliance with neo-positivistic regulative ideas is primarily motivated by our belief that committed, involved action science can greatly profit from the regulative idea of controllability. In the last decades of the past century, evaluation research – and more specifically policy research and knowledge utilization – expanded enormously, resulting in an abundance of textbooks and specialized journals.
As we mentioned at the end of section 3, the subject matter of this paper has much ground in common with those fields, in particular the idea of eliciting the underlying assumptions of interventions. That is why we pay so much attention to the work of Argyris, who introduced and developed this methodology, though without explicitly codifying it as a method (Argyris, 1993; Argyris and Schön, 1978), and whose work is still a basic reference in organizational research. Yet, apart from Argyris, we do not refer extensively to the above-mentioned fields because, to the best of our knowledge, there are important, possibly mainly practical, differences between our approach and current evaluation research. Characteristically, the evaluation-research traditions have been developed in the wake of large-scale governmental programmes or policy, whereas we are interested in repetition and standardization in small-scale organizational
settings. Our entry into the arena is action research and consultancy. As an additional practical consequence, in the situations that we have in mind, most of the advocated methods for obtaining the information needed to reconstruct the logic of an intervention are excluded. For example, both the policy-scientific and the elicitation approach presuppose the existence of a large amount of documentation and the possibility, by some means or other, of surveying statements about the probability of success (Leeuw, 2003). In the situations we have in mind, the main and only source of information is the researcher or consultant. Next, we start from the existence of theoretical principles that guide an intervention, instead of from the primacy of a desired change or an intended outcome. In our paper, Socio-Technical Systems Theory is at stake, but it may as well be any other theoretical orientation. Besides, in the reconstruction phase of programme evaluation, many theoretical insights, regularities and hypotheses from the social and behavioural disciplines prove to be useful and productive (Leeuw, 1991, 2003). However, as a departure from programme evaluation, for situations in action research and consultancy we stress the urgency of formulating, in advance, a catalogue of justification that is as complete as possible, aiming at cycles of replication, improvement and transfer.

Unlike the mainstream of the programme-evaluation literature, Pawson and Tilley also stress this last point as a pivotal activity in evaluation. Our approach is in full accordance with their ideas and proposals about the use of replications for the benefit of cumulation and transfer of practice-directed knowledge. Pawson and Tilley's CMO configurations are, in a completely different terminology, substantially identical to our implementation and justification modules (Pawson and Tilley, 1997). However, we differ somewhat in our appreciation of neo-positivism.
We deliberately search for connections with this tradition in order to bridge the gap between action practice and science, whilst Pawson and Tilley try to move away from it. In our opinion, it is difficult to see why, in realistic evaluation, one should not acknowledge that the principle of falsification still – or perhaps precisely now – is the driving force behind the cumulation of knowledge. Here is a crossroads where Pawson and Tilley take a different road from most evaluation theorists. In our view, this is smoothly compatible with the insight that classic neo-positivism in its rigid experimental form is outdated, because, as we argue in section 3, falsification brings in productive features like controllability and reproducibility that make an intervention transparent.

Notes

1. In accordance with convention, we use the term 'neo-positivism' for the version of positivism that was introduced by Popper, in which, contrary to initial positivism, the idea of falsifiability plays a crucial role in the empirical test of theories.
2. Peters and Robinson (1984) distinguish between 'weak' and 'strong' concepts of action science. The strong version claims not just to be a social methodology, but also a new social theory, different from existing theories of practice. They argue why it is not sufficient to call it a 'paradigm'. This discussion is not relevant to our paper.
3. Unfortunately, most technical papers about the redesign techniques are in Dutch. However, wherever possible we cite English-language literature.
References

Amelsvoort, P. J. L. M. van (1992). Het vergroten van de bestuurbaarheid van productieorganisaties (Increasing the controllability of production organizations). Eindhoven, Netherlands: Technische Universiteit, Ph.D. thesis (in Dutch).
Amelsvoort, P. J. L. M. van & Scholtes, G. H. (2001). Self-managing Work Teams: Design, Implementation and Consultation. Oss, Netherlands: ST-Groep.
Argyris, C. (1993). Knowledge for Action. San Francisco: Jossey-Bass.
Argyris, C. & Schön, D. A. (1978). Organizational Learning: A Theory of Action Perspective. Reading, MA: Addison-Wesley.
Argyris, C., Putnam, R. & McLain Smith, D. (1985). Action Science. San Francisco: Jossey-Bass.
Bovenkamp, M. van de & Jongkind, R. (2002). Chaordische groepsdynamica (Chaordic group dynamics). In: F. M. van Eijnatten, J. M. G. Poorthuis & J. Peters (eds.), Inleiding in Chaosdenken: theorie en praktijk (Introduction to Chaos Thinking: Theory and Practice). Assen, Netherlands: Van Gorcum (in Dutch), pp. 55–71.
Bradbury, H. & Reason, P. (2001). Conclusion: Broadening the bandwidth of validity: Issues and choice-points for improving the quality of action research. In: P. Reason & H. Bradbury (eds.), Handbook of Action Research. New York: Sage, pp. 447–455.
Burbidge, J. L. (1975). Introduction to Group Technology. London: Heinemann.
Eijnatten, F. M. van (1993). The Paradigm That Changed the Workplace. Stockholm/Assen: Arbetslivcentrum/Van Gorcum.
Eijnatten, F. M. van (1995). Analysis of the Dutch approach of Integral Organizational Renewal (IOR): An innovation in socio-technical concepts, or just a local manifestation of global Socio-Technical Systems Design? Paper presented at the International Colloquium on 'Organizational Innovation and the Socio-Technical Systems Tradition', Melbourne, Australia, May 26–27.
Eijnatten, F. M. van, Dijkstra, L. & Galen, M. C. van (2001). Dialogue for emergent order: An empirical study of the development of the organisational mind in a Dutch manufacturing firm. Working paper presented at the 17th EGOS Colloquium, Lyon, 5–7 July 2001, Sub-theme 5: Complexity.
Eijnatten, F. M. van, Hoevenaars, A. M. & Rutte, C. G. (1992). Holistic and participative redesign: STSD modelling in The Netherlands. In: D. Hosking & N. Anderson (eds.), Organizational Change and Innovation: Psychological Perspectives and Practices in Europe. London: Routledge, pp. 183–207.
Eijnatten, F. M. van & Zwaan, A. H. van der (1998). The Dutch IOR approach to organizational design: An alternative to Business Process Re-engineering? Human Relations 51(3): 289–318.
Emery, F. E. & Emery, M. (1989). Participative Design: Work and Community Life. Canberra: Australian National University, Centre for Continuing Education.
Friedman, V. J. (2001). Action science: Creating communities of inquiry in communities of practice. In: P. Reason & H. Bradbury (eds.), Handbook of Action Research. New York: Sage, pp. 159–171.
Hoevenaars, A. M. (1991). Productiestructuur en organisatievernieuwing: de mogelijkheid tot parallelliseren nader onderzocht (Production structure and organizational renewal: An evaluation of the opportunities and limitations of parallelization of production flows). Eindhoven, Netherlands: Eindhoven University of Technology, Ph.D. thesis (in Dutch).
Hoevenaars, A. M. & Eijnatten, F. M. van (1994). Een ontwerptechniek voor het stroomsgewijs organiseren van de productiestructuur (A design tool for streamlining the production structure). In: J. F. den Hertog & J. J. Ramond (eds.), Competente vernieuwers: een proeve van het TAO-programma (Competent Innovators: A Portfolio of the TAO Program). Maastricht, Netherlands: MERIT (in Dutch), pp. 19–23.
Leeuw, F. L. (1991). Policy theories, knowledge utilization, and evaluation. Knowledge and Policy 4(3): 73–91.
Leeuw, F. L. (2003). Reconstructing program theories: Methods available and problems to be solved. American Journal of Evaluation 24, to appear.
Mathews, J. (1994). Catching the Wave: Workplace Reform in Australia. St Leonards, NSW: Allen and Unwin.
Mathews, J., Griffiths, A. & Watson, N. (1993). Socio-technical Redesign: The Case of Cellular Manufacturing at Bendix Mintex. Sydney: Industrial Relations Research Centre, The University of New South Wales, UNSW Studies in Organizational Analysis and Innovation, No. 10.
Pawson, R. & Tilley, N. (1997). Realistic Evaluation. London: Sage Publications.
Peters, M. & Robinson, V. (1984). The origins and status of action research. Journal of Applied Behavioral Science 20: 113–124.
Popper, K. R. (1959). The Logic of Scientific Discovery. New York: Basic Books.
Reason, P. & Bradbury, H. (2001). Preface. In: P. Reason & H. Bradbury (eds.), Handbook of Action Research. New York: Sage, p. xxiii.
Reason, P. & Bradbury, H. (2001). Introduction. In: P. Reason & H. Bradbury (eds.), Handbook of Action Research. New York: Sage, pp. 1–13.
Sitter, L. U. de (1993). A socio-technical perspective. In: F. M. van Eijnatten (ed.), The Paradigm That Changed the Workplace. Assen/Stockholm: Van Gorcum/Arbetslivcentrum, pp. 158–184.
Sitter, L. U. de (1998). Synergetisch produceren: Human resources mobilization in de productie: een inleiding in de structuurbouw (Synergetic Manufacturing: Human Resources Mobilization in Production: An Introduction to the Design of Structure). Assen, Netherlands: Van Gorcum (in Dutch).
Sitter, L. U. de, Hertog, J. F. den & Dankbaar, B. (1997). From complex organizations with simple jobs to simple organizations with complex jobs. Human Relations 50(5): 497–534.
Swanborn, P. G. (1996). A common base for quality control criteria in quantitative and qualitative research. Quality and Quantity 30: 19–35.
Tuckman, B. W. & Jensen, M. C. (1977). Stages of small group development revisited. Group and Organization Studies 2(4): 410–427.
Wild, R. (1972). Mass Production Management: The Design and Operation of Production Flowline Systems. London: Wiley.