Research in Engineering Design (1996) 8:125-138 © 1996 Springer-Verlag London Limited
Why Mechanical Design Cannot be like VLSI Design

Daniel E. Whitney
MIT, Center for Technology, Policy and Industrial Development, Cambridge, USA
Abstract. It is widely agreed that the design methods and computer support of VLSI design are generally more mature than those of mechanical items. Why is this so, and is there any hope of the gap being significantly closed? This paper argues that there are fundamental reasons, that is, reasons based on natural phenomena, that keep mechanical design from approaching the ideal of VLSI design methods. The argument is accompanied by examples and brief histories of the evolution of VLSI design methods and attempts to systemize mechanical design. Brief attention is also given to the paradoxical fact that VLSI design itself is currently receding from the ideal and taking on the flavor and problems of mechanical design. This paper is based on several reports and working papers with limited circulation. The one relied on primarily is Whitney et al. (C.S. Draper Lab Report R-2577, Dec., 1993). The author wishes to acknowledge the contributions of his co-authors of that report. He also acknowledges the many discussions he had with people mentioned in the footnotes who enriched his approach to this topic. Keywords. Integral design; Mechanical design; Modular design; Product architecture; VLSI design
1. Introduction

It is widely agreed that design methods, and especially computer support of design, are generally more mature in electronics than in complex electro-mechanical (CEM) products. This realization has given rise to speculation that electronics design and manufacturing methods, especially those used so successfully in VLSI digital products, might be applied to CEM products with good results. This topic is worthy of intense study, and this paper is intended to lay out some of the issues. The main issue is whether there are fundamental blockages to such a transfer of method, or whether the transfer has not taken place simply because of inertia or lack of appreciation of the potential benefits.

Correspondence and offprint requests to: D. E. Whitney, Center for Technology, Policy and Industrial Development, E-20-243, MIT, Cambridge, MA 02139, USA. Supported by ONR Grant N00014-94-1-0655.
Claimed benefits of the VLSI design paradigm fall into several classes:

Design benefits: VLSI systems are extremely complex, small and efficient, and can be designed by relatively few people empowered by well-integrated design tools; a microprocessor with 3 million 'parts' can be designed and brought to full production in 3 years by about 300 people, whereas a car with about 10,000 parts requires the efforts of 750-1500 people over about 4 years, and an airplane with 3 million parts may require 5000 people 5 years.1

Manufacturing benefits: the 'same' manufacturing processes or even the same manufacturing equipment can be used to make an endless variety of VLSI items; by contrast, especially at the most efficient high volumes, CEM production facilities are dedicated to one design or at most a few variations of limited scope.
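The ordinal disparity in design productivity claimed above can be made concrete with a rough calculation. Per Footnote 1, the part counts, team sizes, and durations are all approximate (the car team size of 1000 is assumed here from the quoted 750-1500 range), so only the ordering of the results should be trusted.

```python
# Rough parts-per-person-year comparison from the figures quoted above.
# All counts are approximate; only the ordering is meaningful.

products = {
    "microprocessor": (3_000_000, 300, 3),   # parts, people, years
    "car":            (10_000, 1000, 4),     # 750-1500 people; 1000 assumed
    "airplane":       (3_000_000, 5000, 5),
}

for name, (parts, people, years) in products.items():
    rate = parts / (people * years)
    print(f"{name}: {rate:,.0f} parts designed per person-year")
```

The microprocessor team comes out orders of magnitude more 'productive' per part than either CEM team, which is precisely the disparity that advocates of transferring the VLSI paradigm point to.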
Are these benefits transferable from VLSI to CEM items? To begin the discussion it is necessary to classify CEM items and choose one class for further discussion. CEM products can be classified roughly as follows:

• those that are primarily signal processors,
• those that process and transmit significant power.

Examples of the two classes can be found in Table 1. The distinction is not merely academic, for two reasons. A major trend in recent decades has been the replacement of mechanical signal processors first by analog electronics and more recently by digital electronics. Signal processing behavior is generally carried out more economically, accurately and reliably by electronics. The replacement is physically possible because signal processing is, or can be,

1 The exact numbers of parts, people, and years are all subject to considerable counting errors, depending on what is considered a part, when in the design cycle time zero is declared to fall, and whether people at supplier companies are counted. Different companies in the same industry also have widely different time and effort efficiencies. Ignoring these difficulties, the disparities claimed above are generally accepted ordinally if not quite quantitatively.
Table 1. Examples of CEM signal processors and power processors

Signal processors:
• Four-digit mechanical gear gas meter dial (1 mW?)
• Ball-head typewriter (30 mW peak at the ball head?)
• Sewing machine (1 W?)
• Marchant calculator (10 W?)

Processors that process and transmit significant power:
• Polaroid camera (30 W peak?)
• Missile seeker head (50 W peak?)
• Laser printer (1 kW, much of which is heat)
• Automobile automatic transmission (50 kW+)
• Automobile (100 kW+) (half or more dissipated as heat from the engine)
• Airplane (10 MW+)
• Ship (40 MW+)
accomplished at very low power levels because the power is merely the manifestation of a fundamentally logical behavior. The power itself is not actually required to perform any physical function, such as motion. The lower limit of required power is probably given by quantum physics, and so far no practical signal processing devices operate at such levels. For the foreseeable future, therefore, the amount of power needed for signal processing can be expected to continue falling. Replacement by VLSI has not overtaken CEMs whose primary function is signal processing in those cases where they retain cost advantages (gas meter readouts) or if they produce low to medium power output motion (sewing machines, dot matrix printers). More significantly, the replacement has not occurred where significant power is the basis for the system's behavior and the main expression of its basic functions. The discussion that follows focuses on such power-level CEMs. The presence of significant power in CEMs and its absence in VLSI is the root of the reasoning in this paper.

The paper is organized as follows. Section 2 briefly describes the process by which VLSI items are designed. This is primarily a top-down process supported by a wide variety of integrated computerized design tools. Section 3 describes mechanical design in order to establish a few main contrasts. Section 4 discusses efforts to make VLSI and CEM design more systematic. Section 5, the main contribution, presents the differences between these two processes in terms of claimed fundamentals. Section 6 closes with some final remarks.

A few caveats: the author has some experience in
mechanical design but none in VLSI design. He has drawn his impressions of VLSI from reading and extensive discussions with knowledgeable people in that field whose writings or personal communications are noted in passing. It is not the intention of this paper to compare the 'difficulty' of mechanical design with that of VLSI. Both are extremely difficult and challenge the best practitioners.
2. Sketch of VLSI Design

There are basically three classes of VLSI, distinguished by the aggressiveness of their design in terms of circuit density, size of individual circuit elements, and width of connecting lines: dynamic random access memories (DRAMs), microprocessors, and application-specific integrated circuits (ASICs). DRAMs represent the cutting edge, requiring the smallest features and redesign of individual device elements at every new generation. ASICs are at the other end, having relatively large devices and line widths and relatively fewer devices on a chip. Microprocessors are in between. To put it another way, a team of five designers, using high level design tools (more on this below), can design an ASIC control chip for a laser printer. But 300 people are on the design team for an advanced microprocessor with millions of devices on it.2

A generic approximate list of the steps comprising design of a microprocessor is as follows:

1. Processor requirements are set, comprising data bus width, speed, operations in the instruction set, onboard facilities such as cache memory, floating point, etc., plus a target power use and dissipation.

2. A physical implementation is chosen, such as CMOS, which presents a family of candidate processes and materials.
If the design is the first in a new generation with smaller lines and features, then a series of manufacturing process validation tests is made with the objective of developing the device library and

2 Information for this section was obtained from the following sources: interview 9/19/94 with Fred Harder of Hewlett-Packard; various discussions with Gene Meieran of Intel during 1994 and 95; presentation to the MIT VLSI Seminar series by Ted Equi of DEC, March 15, 1994; presentation 'Trends in Integrated Circuit Design' by Mark Bohr of Intel, 11/22/94; proceedings of NSF Workshop on New Paradigms for Manufacturing chaired by Dr Bernard Chern, May 2-4, 1994 and discussions with symposium participants, especially Carver Mead and Carlo Sequin; members of the ad hoc National Research Council study team on Information Technology in Manufacturing, especially Louise Trevillyan of IBM and Gene Meieran of Intel; presentations 'Component Design Process' by John Dhuse and 'Mask Design' by Barbara Christie, Mark Chavez and Tara Brown, all of Intel, Feb 1, 1994.
design rules. The library contains models of individual transistors, gates made of several transistors, and higher level logic devices made of several gates. Two kinds of models are made: circuit performance models and geometric models. The circuit models cover electrical, timing and heat factors. The geometric models cover the shapes of patterns to be made on chips using photolithographic and chemical methods, called 'processes' below, together with descriptions of those processes. The design rules comprise limits on the geometry, such as size, width of conductor lines, spacing between lines, radii of curvature, etc. Different devices may use different processes but often these can be combined on the same chip by suitable shielding or masking. Ideally, the device library consists of 'verified' models in the dual sense that both circuit performance and process capability have been tested and verified.

3. A target chip size is chosen, and a preliminary division of the real estate is done, including assigning the main functional areas (cache, FPU, etc.) to regions of the chip and connecting them with 'wiring'.

4. The main functions are designed and their logic is captured in a high level language such as VHDL. Some verification is possible at this stage.

5. As early as possible, some simulations are done to see what kinds of timing problems might occur; these problems arise due to the time it takes signals to propagate along the wires between the main elements as well as the time for an element to process a signal. The main real estate decisions are revisited to correct timing problems. Most designs are conservative and do not depend on close differences between signal arrival times.

6. Logical implementations of the operations of each element are designed; the overall logic of the design can be tested at this point using simulators, and errors in the logic can be fixed. Often, VHDL specifications can be converted automatically into logic diagrams.

7.
Electronic implementations of the logic are designed, comprising circuit designs built up from device library elements; logic is designed locally, that is, each logic element is individually converted into an electronic equivalent; the circuits repeat the topology of the logic. Detailed routing of conductor lines is done with the aid of computer algorithms. Thermal analyses are done to see if hot spots exist. These analyses use power loss models of the individual circuit elements and combine them with duty cycle estimates for the different functional elements of the chip. 8. Physical implementations of the circuit elements are drawn from the device library and linked together electrically using conducting lines of a given width and separation. All devices and conductor lines consist of
layers of different material. Each layer is possibly a different shape or made from a different material. The shapes of the patterns on these layers are supposed to conform to the design rules that correspond to the capability of the manufacturing processes. Associated with each device is a circuit behavior model that is used in the simulations mentioned next. These are 'verified models' as they come from the library, but, as discussed below, it is not always possible to use the devices as is, so the models may not retain their validity. Smaller line width and separation are desirable in order to reduce the size of the final chip; however, they have negative impacts: higher resistance (hence higher power dissipation and heat) and higher capacitance (which slows down the signal transmission speed or causes false signals to arise in adjacent lines). 9. The circuit is simulated using the library's circuit behavior models of the devices and conducting lines, giving rise to more accurate estimates of possible timing or heat problems. Problems and errors are addressed. 10. Artwork similar to that used in multi-color printing is generated to represent the patterns on the individual layers that define the devices and conducting lines. This artwork will become the basis for masks that are used to control the deposition or etching of materials in the process of building up the circuit. 11. The artwork or mask layouts are checked for conformance with the design rules. 12. Masks are generated from the artwork. In order to improve the yield of the process, the process designers may alter details of the masks, generally thickening or thinning lines and features, adding layers to permit some of the features to be made, and so on. 13. The masks are taken to a wafer fab, a cleanroom with highly controlled filtered air and an array of plating and etching equipment. Here several test wafers are made. If the chip is a simple ASIC then the design usually works the first time. 
If it is a complex microprocessor then several iterations are often necessary, with the total time being a year to 18 months to the first test wafer and as much as another year debugging. The reason that simulations do not find all the bugs is that simulating even a few seconds of chip operation takes prohibitively long to run.

14. Manufacture of a chip requires hundreds of steps in which the different layers are applied to a silicon wafer base. Some layers comprise insulators while others comprise conductors or semiconductors. Wafers as large as 8 in. in diameter are processed so as to produce dozens or hundreds of chips at a time. Process steps include creating patterns by photolithography using the masks as masters. The patterns
[Fig. 1 appears here as a block diagram. Recoverable labels include: system specifications: functions, logic, timing (example: VHDL, 'OCCAM'); simulation and netlist preparation (example: 'Powerview'); physical layout: function location, interconnects and routing (example: 'GLOW'); databus or framework; layout planning; place and route; simulation.]
Fig. 1. Sketch of the VLSI design process, showing important steps and names of typical CAD tools. Design of all but the most challenging products divides into distinct phases, with component design and library construction being first, system design second, and manufacturing process design third. During component design there can be extensive manufacturing process design and verification so that system design can proceed with assurance that manufacturing is possible. The link between component verification and system design is the set of design rules.
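The design rules the caption refers to, whose conformance is checked in step 11 of the list above, can be sketched as a toy design-rule checker. The rule values and geometry below are illustrative assumptions; real DRC engines check many more rules (enclosure, density, antenna, etc.) over full mask layouts.

```python
# Minimal sketch of a design-rule check (DRC) in the spirit of step 11.
# Rules and geometry are hypothetical, for illustration only.

MIN_WIDTH = 0.35    # minimum line width, in microns (assumed rule)
MIN_SPACING = 0.35  # minimum spacing between lines (assumed rule)

def check_widths(lines):
    """Each line is (x_start, x_end) for a conductor's cross-section."""
    errors = []
    for i, (x0, x1) in enumerate(lines):
        if x1 - x0 < MIN_WIDTH:
            errors.append(f"line {i}: width {x1 - x0:.2f} < {MIN_WIDTH}")
    return errors

def check_spacing(lines):
    """Adjacent edges of neighboring lines must clear MIN_SPACING."""
    errors = []
    ordered = sorted(lines)
    for i in range(len(ordered) - 1):
        gap = ordered[i + 1][0] - ordered[i][1]
        if gap < MIN_SPACING:
            errors.append(f"lines {i},{i+1}: spacing {gap:.2f} < {MIN_SPACING}")
    return errors

layout = [(0.0, 0.4), (0.75, 1.2), (1.3, 1.5)]   # third line is too narrow
print(check_widths(layout))    # flags line 2 (width 0.2)
print(check_spacing(layout))   # flags the 0.1 gap between lines 1 and 2
```

The point of encoding the rules this way is the one the caption makes: once a layout passes the rules, system designers need not revisit the process capability the rules encode.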
are the basis for either adding or removing material. Material is added by such means as vapor deposition or plasma diffusion. Material is removed by etching. Both surface patterns and bridges between layers are created in this way. Other steps comprise adding material to an existing layer by means of gaseous diffusion. Tests are performed at various points along the way, and often chips are reworked by totally removing a layer that contains defects. Final tests determine which chips are definitely bad. When the wafer is sawed into individual chips, the bad ones are discarded. The fraction remaining is called the yield. These chips are put into packages that contain connections, and are given comprehensive tests. Additional failures are detected at this stage. It is important to note that the process described above really works best for ASICs. For that kind of chip, it is an effective top-down approach in which many of the steps are done automatically. Few items in all of technology can be designed so automatically by proceeding from step to step, algorithmically converting requirements and symbolic representations of behavior into specific geometry without interven-
tion by a person. Certainly it cannot be done in mechanical systems, at least where efficiency in space, weight or power are required. Figure 1 captures the essence of the above process, dividing the effort into three distinct stages: Stage 1: component design and process verification, Stage 2: system design, and Stage 3: manufacturing process design. The significant point, especially true for ASICs and at least partially true for microprocessors, is that component design is a separate activity which creates a library of verified devices along with both simulations of their behavior and the geometry and process recipes needed to make them. For the Intel Pentium processor, Stages 1 and 2 took 300 people about 3 years, and Stage 2 involved another 300 people. The Pentium has 3 million transistors and requires about 1000 process steps to fabricate, including 28 mask steps. By contrast, the 486 processor has about 1 million transistors and requires 300 fabrication steps, including 10 mask steps. Approximately the same amount of design time and person-years were needed to design it, indicating that a strong increase in CAD power and accumulated
knowledge occurred between the 486 and the Pentium. Every company that designs VLSI products depends on a wide array of computer tools. These tools support most of the steps listed above. A few of them are available from CAD vendors, mostly for the purpose of simulating individual devices or certain process steps. The majority are written and maintained by the VLSI companies themselves. They deal with chip layout, timing, device simulation, simulation of entire arrays of devices, design rule checking, wire routing, and so on. The data produced by these tools are for the most part transferable to other tools as required, although some data transfers have to be facilitated by purpose-written framework software. Also, there are data exchange standards that permit designs to be transmitted electronically between companies. Furthermore, the companies in this industry have expended tremendous effort to obtain both the knowledge and the materials they need to sustain the growth in complexity of their products. Relentless reduction in device sizes, increases in device densities, and reductions in allowed defects (see Figs 2-4) have required enormous increases in material purity and solution of a wide variety of material and chemical interaction problems.3 The industry has also united behind a consensus 'roadmap' of required or anticipated performance goals published by SEMATECH, a technology development consortium funded by the industry and government (see Table 2). Typical line widths and feature sizes are now 0.35 µm and by 1998 will be 0.25 µm. Current 16 Mb memory chips have feature sizes around 0.365 µm. Table 2 shows that 0.15-0.18 µm is the target for 2001. At that point, it is unlikely that further progress will be possible using light as the exposure medium and masks as the format for design embodiment [9]. In the mechanical product industries, there is little that corresponds to any of these efforts.
There is nothing corresponding to VHDL in the sense that it captures so much of the required functionality; high

3 The theoretical basis for this effect is called 'Murphy's law' after a person actually named Murphy. His analysis utilized classical operations research methods and showed that the probability of a chip being damaged by a dirt particle is proportional to the area of the chip divided by the area of the wafer. If there are N chips on a wafer and k particles large enough to cause damage land on each unit area of the wafer, then the probability of a chip being destroyed by a particle is proportional to k/N. Thus larger chips and fewer chips per wafer are more vulnerable, and chips with smaller features are vulnerable to smaller particles, which are more numerous. The most vulnerable of all is a technology called wafer scale integration, meaning in effect one huge chip per wafer. This technology has yet to become practical because of dirt particles: only one particle per wafer is needed to render its one chip useless.
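Footnote 3's yield argument can be illustrated numerically. The Poisson form Y = exp(-D·A) used here is a standard simplification of such defect-limited yield models, not Murphy's exact derivation, and the defect densities and die sizes are illustrative values in the spirit of Fig. 4.

```python
# Simplified defect-limited yield sketch (cf. footnote 3 and Fig. 4):
# with defects landing at random, the chance a die survives falls off
# exponentially with die area at a given defect density.

import math

def poisson_yield(defects_per_cm2, die_area_mm2):
    """Poisson yield model Y = exp(-D * A), a common simplification."""
    area_cm2 = die_area_mm2 / 100.0
    return math.exp(-defects_per_cm2 * area_cm2)

for area in (100, 300, 600):        # die sizes in mm^2, as in Fig. 4
    y = poisson_yield(0.1, area)    # 0.1 defects/cm^2 assumed
    print(f"{area} mm^2 die: yield {100 * y:.0f}%")
```

Larger dies at the same defect density yield sharply worse, which is why one huge chip per wafer (wafer scale integration) has never been practical.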
[Figs 2-4 appear here.]

Fig. 2. Trends in density of transistors per chip (logic transistors per chip vs. year, 1975-2000).

Fig. 3. Trends in size of individual VLSI chips (die size in mm² vs. year, 1975-2000).

Fig. 4. Required defect density (defects/cm²) vs die size to obtain a given yield (%).
level languages for CEM systems, such as they are, capture at most a few aspects of function. Most companies have only a few CAD tools, and these are primarily purchased from outside vendors. These tools cover only a few of the important steps in product or process design. There are standards for a number of commodity items such as fasteners, and ongoing efforts in some industries have led to standards for interchanging business data, but not technical design data. No comprehensive effort like SEMATECH or
consensus like the 'roadmap' exists in CEM industries, primarily because their technologies are too diverse.4 The point of the above design process description is that if digital logic can be used, system design and device design are decoupled. Intensive process design is carried out in conjunction with the design of a library of generic product components. Design of a specific product occurs at the system level by combining modules made of verified devices found in the library. To a considerable degree, specific process design need not be carried out in connection with a specific product that uses the generic process and devices. The result is that extremely complex digital systems can be designed with dramatic reductions in cost and product development time. This basic point will be expanded in Section 5 of the paper. The contrast between this set of circumstances and that of CEM design is discussed next. Those who hope to transfer VLSI successes into CEM design look mainly to the points in this paragraph to support their hopes. These hopes are based on the obvious benefits. The question is whether there are fundamental blockages.

4 Gene Meieran of Intel argues that the apparently logical surface of the VLSI industry masks the scale of the industry's enormous effort to control materials and processes, and to understand basic phenomena, without which the industry as we know it would not exist. By implication, the CEM industry has not tried hard enough.

Table 2. SEMATECH overall roadmap technology characteristics, giving goals for several requirements over the next several years

Year when goal below is to be achieved:     1992    1995    1998     2001     2004     2007
Feature size (µm)                           0.5     0.35    0.25     0.18     0.12     0.10
Gates/chip                                  300 K   800 K   2 M      5 M      10 M     20 M
Bits/chip: DRAM                             16 M    64 M    256 M    1 G      4 G      16 G
Bits/chip: SRAM                             4 M     16 M    64 M     256 M    1 G      4 G
Wafer processing cost ($/cm²)               $4      $3.9    $3.8     $3.7     $3.6     $3.5
Chip size (mm²): logic or microprocessor    250     400     600      800      1000     2000
Chip size (mm²): DRAM                       132     200     320      500      700      1000
Wafer diameter (mm)                         200     200     200-400  200-400  200-400  200-400
Defect density (defects/cm²)                0.1     0.05    0.03     0.01     0.004    0.002
No. of interconnect levels - logic          3       4-5     5        5-6      6        6-7
Maximum power (W/die): high performance     10      15      30       40       40-120   40-200
Maximum power (W/die): portable             3       4       4        4        4        4
Power supply voltage (V): desktop           5       3.3     2.2      2.2      1.5      1.5
Power supply voltage (V): portable          3.3     2.2     2.2      1.5      1.5      1.5
No. of I/Os                                 500     750     1500     2000     3500     5000
Performance (MHz): off chip                 60      100     175      250      350      500
Performance (MHz): on chip                  120     200     350      500      700      1000
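The decoupling described above, in which system design is pure composition of verified library devices, can be sketched in miniature. The cell names and structure below are illustrative, not any vendor's actual library format.

```python
# Sketch of the decoupling argument: a tiny "cell library" of verified
# logic devices (Stage 1), and a system design (Stage 2) that only
# composes them, never redesigning a device.

# Stage 1: component design -- each cell verified once, reused everywhere.
CELL_LIBRARY = {
    "NAND": lambda a, b: not (a and b),
    "NOT":  lambda a: not a,
}

# Stage 2: system design -- build everything from library cells only.
def and_gate(a, b):
    return CELL_LIBRARY["NOT"](CELL_LIBRARY["NAND"](a, b))

def xor_gate(a, b):
    nand = CELL_LIBRARY["NAND"]       # classic 4-NAND XOR construction
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def half_adder(a, b):
    """Sum and carry bits, composed entirely from verified cells."""
    return xor_gate(a, b), and_gate(a, b)

for a in (False, True):
    for b in (False, True):
        s, c = half_adder(a, b)
        print(f"{int(a)}+{int(b)} -> sum={int(s)} carry={int(c)}")
```

Because the half-adder never touches device internals, its correctness follows from the cells' verified behavior plus the wiring, which is the property CEM design lacks when parts must share functions.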
3. Sketch of CEM Design

The situation in CEM design is quite different from VLSI. The Boeing 777 has, by various estimates, between 2.5 million and 7.5 million parts. Design took about 5 years and involved about 5000 engineers at Boeing plus some thousands of others at avionics, engine and other subcontractors. In CEM design, there is nothing comparable with the creation of generic modules, and there is no cell library from which parts can be drawn, with a few exceptions. These exceptions are mainly such items as fasteners, motors, valves, pipe fittings, and finishes like paint. They are typically catalog items supplied by subcontractors and are not often designed to suit the CEM product. See Fig. 5 for a schematic representation of this process for comparison with Fig. 1. The designer puts most of the effort into:

• converting an elaborate set of requirements on function, size, space, power, longevity, cost, field repair, recurring maintenance and user interface into a geometric layout;
• identifying subsystems that will carry out the functions;
• allocating functions and space to the subsystems within the allowed space;
• breaking the subsystems into individual parts;
Fig. 5. Simplified sketch of typical CEM product design. Design steps shown occur after generation of requirements is complete. Compared to Fig. 1, there is no equivalent in the form of a free-standing Stage 1 Component Library Preparation and Stage 2 System Design. Main function carriers must be designed from scratch or adapted from prior designs and modified to be consistent with evolving system concepts. System analysis and verification tools are almost totally absent. Analysis tools are not integrated and cover one physical medium or phenomenon at a time. Only in special cases can manufacturing tooling or processes be created directly from CAD data.

• designing those parts and fitting them into the allocated space;
• determining allowable variations in part and system parameters (tolerances on geometry, voltage, pressure, temperature, hardness, surface finish, etc.);
• predicting off-nominal behaviors and failure modes and designing mitigators into the parts and systems;
• identifying fabrication and assembly methods, their costs and yields;
• identifying design verification plans (simulations and prototypes of both parts and systems at various levels of fidelity);
• revisiting many of the initial decisions up to the system level if their consequences, as discovered in later steps, result in technical or financial infeasibilities.

While this list sounds superficially like the tasks of VLSI design, the process is profoundly different because each part and subsystem is an individual on which all the above steps must be applied separately. Each part will typically participate in or contribute to several functions and will perform in several media (gas, solid, electricity, heat ...).
Put another way, CEM and VLSI items differ in how one designs the 'main function carriers', the parts that actually carry out the product's desired functions:

• in VLSI these parts are made up by combining library devices; a few types are leveraged into systems with millions of parts; a modular approach to system design works, in which parts can be designed and operated independently;
• in CEM these parts are designed specifically for the product, although they may be variants of past parts designed for similar products; thousands of distinct parts must be designed to create a product with a similar total number of parts, and many must be verified first individually and again in assemblies by simulation and/or prototype testing; a modular approach works sometimes, but not in systems subjected to severe weight, space or energy constraints; in constrained systems, parts must be designed to share functions or do multiple jobs; design and performance of these parts are therefore highly coupled from one part to another (see Footnote 11).
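One concrete instance of the per-part work listed above is the step of determining allowable variations. A common form of that analysis is a tolerance stack-up, sketched here as a Monte Carlo study with purely illustrative dimensions and tolerances.

```python
# Sketch of a tolerance stack-up analysis, one of the per-part CEM design
# tasks listed above. Three parts stack inside a housing; the gap that
# remains must stay positive. All dimensions are illustrative.

import random

def stack_gap(rng):
    housing = rng.gauss(30.0, 0.05)   # housing depth, mm (mean, sigma)
    part_a  = rng.gauss(12.0, 0.03)
    part_b  = rng.gauss(17.8, 0.03)
    return housing - part_a - part_b  # nominal gap: 0.2 mm

rng = random.Random(0)                # fixed seed for repeatability
gaps = [stack_gap(rng) for _ in range(100_000)]
interference = sum(g <= 0 for g in gaps) / len(gaps)
print(f"mean gap {sum(gaps) / len(gaps):.3f} mm, "
      f"interference rate {100 * interference:.2f}%")
```

Note that this analysis must be redone whenever any one part changes, since the parts are coupled through the shared gap; nothing like a verified library absorbs that work.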
4. A Brief History of Trends in Systematizing VLSI and Mechanical Design

4.1. Systematizing VLSI Design5

The top-down VLSI design process was enabled in the late 1970s by Carver Mead and Lynn Conway

5 Material for this subsection is drawn from interviews with Fred Harder of Hewlett-Packard and the introduction to the NSF Workshop report on New Paradigms for Manufacturing [Mukherjee and Hilibrand] written by Carver Mead.
whose textbook [2] showed how to use library elements with proven behavior and design rules to create 'cookbook' designs. Even university students were able to design working VLSI systems. Before that time one had to be an experienced electrical engineer to design VLSI. That is, one had to know about the electrical behavior of circuit elements and had to design each logical device anew in order to build up an integrated circuit. Designs were individualistic and did not necessarily conform to design rules in support of manufacturability. Design success, especially in terms of layout and space utilization, depended on the skill and style of individual designers. '[Today,] virtually all high-performance designs use highly structured methodology, with well-defined datapaths and separate control logic, as taught in the VLSI design courses starting in the mid 1970s. These designs clearly out-perform random-logic designs, which were the industry norm during the [previous] period ...'6

From the late 1970s, when the Mead-Conway method began to be adopted and supported with computer tools, until around 1990 one could be a logic designer and create VLSI. That is, one could begin with a functional statement of the chip's requirements and systematically follow the process outlined in Section 2 and, with help from domain experts and the CAD tools, emerge with a workable design. A downside to the Mead-Conway approach was that the library elements (called 'standard cells') were not adjusted in shape when they were placed on the chip. The practical result is that such designs take up more space than space-optimized designs produced by EEs in the 1970s. Apparently, the period from about 1978 to 1990 was a golden age in VLSI design. This period has come to an end, more or less, for several reasons. First, so many elements are now required for advanced processors that the loss of space created by using canned library elements can no longer be tolerated.
Larger chips are more vulnerable to manufacturing failures, especially tiny particles of dirt. (These trends were illustrated in Figs 2-4.) Process yield is the focus of manufacturing, and low yields mean loss instead of profit. Second, smaller elements and more closely spaced conductors require the skill of EEs to design and debug properly. No longer can one blindly convert logic to circuits and expect them to work. Obeying the design rules will result in a chip that is too large, while pushing the rules requires understanding the currents and fields. Element design and system design are no longer

6 Mead, in [Mukherjee and Hilibrand].
independent, and VLSI design is taking on much of the character of mechanical design. So we have come full circle, and it again requires EEs to design VLSI. VLSI systems are among the most complex things designed, and logical or system-level errors can occur. They represent the same kinds of errors that occur in other system designs: lack of coordination, imperfect interface specifications between subsystems, lack of a comprehensive database with all product data available to all designers, incompatible versions of the design being used by different groups of designers, and so on. System-level tools to handle these problems are either not available or are just becoming available. The best tools handle individual steps or aspects of the design. The industry will have to address this problem because circuit complexity will rise even as product development time must fall.7
4.2. Systematizing CEM Design

Past efforts to systematize CEM design fall into three categories:

1. modeling techniques that span several physical media;
2. 'systematic' design;
3. expert systems, neural nets, and other computer-science-based methods.

Modeling techniques that span several media originated with the book Dynamical Analogies by Olson in the 1940s, followed by Acoustics by Leo Beranek in the 1950s and Analysis and Design of Engineering Systems by H. M. Paynter in the 1960s [5]. Each of these books shows that systems composed of discrete mechanical, electrical, electromagnetic, acoustic and thermal elements can be modeled by a unified set of dynamic equations that capture power or energy transactions between the elements. Paynter unified these with a single notational modeling system called bond graphs. Typical systems that can be modeled by bond graphs include telephones (acoustic, magnetic, electrical/electronic), hydro-electric generating stations (fluid, thermal, mechanical, electro-magnetic), and active vehicle suspension systems (elasto-dynamic mechanical, active control). Paynter and his students developed computer simulation languages that translate bond graph models of systems directly into computer code.

7 Typical microprocessors today have about 3 million transistors [3]. Industry experts estimate that some kinds of chips will have as many as 50-100 million transistors by 2000. The growth in transistors per chip greatly exceeds the growth in productivity of chip designers, even with the CAD tools presently available [4].
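The unified power-flow view that bond graphs formalize can be illustrated with a toy simulation (a minimal sketch of the idea only, not drawn from the paper; the parameter values are invented). The same second-order effort/flow equations describe both a mechanical mass-spring-damper and a series RLC circuit:

```python
# The bond-graph insight in miniature: one set of dynamic equations,
# two physical readings. Mechanical: inertia = mass, resistance = damper,
# compliance = 1/spring-stiffness. Electrical: inductance, resistance,
# capacitance. (Illustrative sketch; parameters invented.)

def simulate(inertia, resistance, compliance, x0=1.0, dt=1e-4, steps=200_000):
    """Integrate inertia*q'' + resistance*q' + q/compliance = 0 (forward Euler)."""
    q, flow = x0, 0.0            # displacement/charge and velocity/current
    for _ in range(steps):
        effort = -resistance * flow - q / compliance   # force or voltage
        flow += (effort / inertia) * dt
        q += flow * dt
    return q

# Identical parameters, two physical readings, identical trajectory:
mech = simulate(inertia=1.0, resistance=2.0, compliance=0.5)   # m, b, 1/k
elec = simulate(inertia=1.0, resistance=2.0, compliance=0.5)   # L, R, C
print(mech == elec)          # -> True
print(abs(mech) < 1e-3)      # -> True : the damped system has settled toward zero
```

The limitation noted in the text applies equally here: the model knows efforts and flows, but nothing about the geometry of the parts that realize them.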
This method works well for discrete systems made of connected elements (water reservoir - penstock - turbine - shaft - generator - transformer - transmission line - load - ground) but is awkward for distributed systems such as compressed gases or vibrating elements with thousands of natural modes. More important, the models capture power relationships but not geometry. An excellent example is Peugeot's model of one of its automatic transmissions. This model is made partly of bond graphs and partly from custom computer code. It is so accurate that it captures the fact that this transmission displays somewhat rough shifting. But this model was constructed from symbolic elements by a designer, not from geometric models of the parts, the fluid, the clutch plates and their friction material, and so on. Also, it cannot predict such failure modes as overheating clutch plates, accumulation of dirt in the controller, leaks in the case, or elastic deflection of case and gears.

'Systematic' design is taught and researched mostly in Germany, where it fits well with the German tendency to write standards (VDI) for processes. The main protagonists are Professors Pahl and Beitz of the Technical University of Berlin. Their research comprises methods for identifying the required functions of an engineering system and step-by-step reducing the requirements to smaller and smaller elements of a complete design. Beitz has developed a software system that supports each of the individual steps in this process. At present, the designer must make the intellectual connections between the steps, each of which is covered by its own piece of software. The method seems most applicable to discrete systems made of identifiable elements such as 'motor', 'gearbox', 'foundation', 'bearing', and so on. The bond graph method seems most suited to this kind of design as well.
Design of these individual elements is usually impossible by a similar reduction (see discussion below on the integrated nature of CEM design) unless compromises are admitted in weight, size or energy consumption. Some design researchers are not fond of this approach for this reason and others, but little else with a holistic approach has been offered.

Expert systems and other computer-science-based approaches have been tried recently. Examples include a system at the Machine Tool Research Center in Aachen for designing spindle bearing systems for machine tools. This is a complex but repetitive design process in the sense that the main steps are known in advance but a great deal of expertise is needed to do it well. The expert system created for this purpose is described by its originators as capable of keeping a novice designer from making serious mistakes and
permitting him or her to generate a competent design in about 8 h. An expert can do the same in about an hour, while the same expert can design a better spindle without the computer. Furthermore, the system has no imagination and simply regenerates the kinds of designs its programmers built it to do. The main reason for this is that these systems have been constructed by asking 'experienced designers' what they do, rather than seeking the engineering fundamentals behind the problem. However, relying solely on the fundamentals would result in a very incomplete design system, since crucial knowledge about bearing types, lubricants, and so on defies engineering modeling at the moment. Other attempts to use expert system methods in design have similar characteristics.

Commercial software exists to help create this kind of design aid. An example is ICAD, which permits an expert system to be written that not only interacts with a knowledge engine but can also obtain data from CAD files. Rule-based design domains, such as civil engineering, where numerous building codes must be checked during design, are ideally suited to this approach. However, an attempt by a US car maker to define the entire design of a connecting rod in ICAD resulted in over 1000 rules. This is impressive in itself, but it is not obvious that an efficient or easily modified procedure was obtained.

This brief history indicates that several attempts to systematize CEM design exist and are ongoing, but they have serious drawbacks. As discussed below, they do not address some fundamental problems in CEM design. In order to assess the potential for relieving these problems in CEM design and winning some of the advantages available to VLSI designers, we need to look more closely at the characteristics of CEM items and their design.
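The rule-checking style of design aid described above can be sketched in a few lines (a hypothetical miniature, not the Aachen system or ICAD; the parameter names and thresholds are invented). It also exhibits the weakness noted in the text: the system can flag violations of codified expertise, but it has no imagination to propose a better design:

```python
# Hypothetical sketch of a rule-based design checker: a design is a set of
# parameters, expertise is a list of named pass/fail rules. All names and
# thresholds below are invented for illustration.

spindle = {"bearing_bore_mm": 70, "max_rpm": 8000, "preload_N": 900}

rules = [
    # a speed-capability limit in the style of a bearing dN rating
    ("speed limit",   lambda d: d["bearing_bore_mm"] * d["max_rpm"] <= 600_000),
    # an allowable preload window
    ("preload range", lambda d: 200 <= d["preload_N"] <= 1500),
]

violations = [name for name, ok in rules if not ok(spindle)]
print(violations)   # -> [] : this candidate passes both (invented) rules
```

Scaling this style to a whole component is what produced the 1000-rule connecting rod mentioned below: each rule is simple, but the collection is hard to keep efficient and modifiable.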
5. Fundamental Differences between VLSI and CEM Design

The previous sections were primarily preparation, review and restatement of things known to many readers, and establishment of vocabulary and assumptions. This section comprises the heart of the author's contribution, an attempt to restate the foregoing in a more logical way, appealing to fundamental factors and avoiding to the extent possible any historical factors or artifacts. I think there are fundamental reasons why VLSI design is different from mechanical design, and I think the differences will persist. My conclusions are
summarized in Table 3 and the reasoning is sketched below. An essential feature of the argument is to distinguish carefully between parts or components on the one hand and products or systems on the other. Table 3 displays this distinction.

Table 3. Summary of differences between VLSI and mechanical design (adapted from Whitney et al. [6], from a table prepared by T. L. De Fazio)

Component design and verification
  VLSI: Model-driven single-function design based on single-function components; design based on rules once the huge effort to verify single elements is done; few component types needed.
  Mechanical systems: Multi-function design with weak or single-function models; components verified individually, repeatedly, exhaustively; many component types needed.

Component behavior
  VLSI: Is the same in systems as in isolation; dominated by logic, described by mathematics; design errors do not destroy the system.
  Mechanical systems: Is different in systems and in isolation; dominated by power, approximated by mathematics, subject to system- and life-threatening side effects.

System design and verification
  VLSI: Follows rules of logic in subsystems, follows those rules up to a point in systems; logical implementation of main functions can be proven correct; system design is separable from component design; simulations cover all significant behaviors; main system functions are accomplished by standard elements; building-block approach can be exploited and probably is unavoidable; complete verification of all functions is impossible.
  Mechanical systems: Logic captures a tiny fraction of behavior; system design is inseparable from component design; main function design cannot be proven correct; large design effort is devoted to side effects; component behavior changes when hooked into systems; building-block design approach is unavailable, wasteful; complete verification of avoidance of side effects is impossible.

System behavior
  VLSI: Described by logical union of component behaviors; main function dominates.
  Mechanical systems: No top-level description exists; union of component behaviors irrelevant; off-nominal behaviors may dominate.

The primary fundamental factors distinguishing CEM and VLSI design are stated in the four points below.

Point 1: Mechanical systems carry significant power, from kilowatts to gigawatts. A characteristic of all engineering systems is that the main functions are accompanied by side effects or off-nominal behaviors. In VLSI, the main function consists of switching between 0 and 5 (or 3 or 2.4) V, and side effects include capacitance, heat, wave reflections and crosstalk. In mechanical systems typical side effects include imbalance of rotating elements, crack growth, fatigue, vibration, friction, wear, heat and corrosion. The most dangerous of mechanical system side effects occur at power levels comparable with the power in the main function. In general there is no way to 'design out' these side effects. A VLSI system will interpret anything between 0 and 0.5 V as 0, or between 4.5 and 5 V as 5. There is no mechanical system of interest that operates with 10% tolerances. A jet engine rotor must be balanced to within 0.01% or better or else it will simply explode. Multiple side effects at high power levels are a fundamental characteristic of mechanical systems. One result of this fact is that mechanical system designers often spend more time anticipating and
mitigating a wide array of side effects than they do assembling and satisfying the system's main functions. Gaps in engineering knowledge are mainly responsible for the consequent difficulty. This dilution of design focus is one reason why mechanical systems require so much design effort for apparently so little complexity of output compared to VLSI. But this judgement is mistaken. A correct accounting of 'complexity of output' must include the side effects, which are also 'outputs' that cannot be ignored during design and are usually quite complex.

Point 2: VLSI systems are signal processors. Their operating power level is very low. They process tiny amounts of power and only the logical implications of this power matter (a result of the equivalence of digital logic and Boolean algebra). Side effects can be overpowered by correct formulation of design rules: the power level in cross-talk can be eliminated by making the lines farther apart, for example. Thus, in effect, erroneous information can be halted in its tracks because its power is so low, something that cannot be done with typical side effects in power-dominated CEM systems.8 Furthermore, VLSI elements do not back-load each other. That is, they do not draw significant power from each other but instead pass information or control in one direction only. These facts combine to permit VLSI

8 This point (that information can be blocked when desired but significant power cannot) was made in an interview by Dr Mark Matthews of University of Bath, UK.
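The claim that erroneous information can be halted in its tracks can be illustrated with a toy comparison (an invented sketch, not from the paper): each digital stage restores the signal to a clean level, so noise does not accumulate, while a power-carrying chain passes every disturbance downstream:

```python
# Illustrative sketch of level restoration: a chain of logic stages snaps the
# signal back to 0 or 5 V at every step, so noise inside the margins cannot
# accumulate. A chain without restoration (the power-dominated analogue)
# accumulates every disturbance. Noise amplitude and stage count are invented.

import random
random.seed(1)

def digital_chain(v, stages=50):
    for _ in range(stages):
        v += random.uniform(-0.4, 0.4)     # noise well inside the 2.5 V margin
        v = 5.0 if v > 2.5 else 0.0        # each gate restores the level
    return v

def power_chain(v, stages=50):
    for _ in range(stages):
        v += random.uniform(-0.4, 0.4)     # same disturbance, no restoration
    return v

print(digital_chain(5.0))                  # -> 5.0 : logic value survives intact
print(power_chain(5.0) != 5.0)             # -> True : drift has accumulated
```

The restoration step is exactly what has no mechanical counterpart: there is no cheap operation that snaps a vibrating shaft back to its nominal state at every interface.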
circuit elements to be connected together in building-block fashion.9 An enormously important and fundamental consequence is that a VLSI element's behavior is essentially unchanged almost no matter how it is hooked to other elements or how many it is hooked to. That is, once the behavior of an element is understood, its behavior can be depended on to remain unchanged when it is placed into a system, regardless of that system's complexity. The result of this is that VLSI design can proceed in two essentially independent stages, of which the first (design of components) shares most of its features with mechanical design while only the second (design of systems) is different:

Stage 1. Logic elements are designed and processes are designed to make them. This requires enormous effort involving lithography, metallurgy, chemistry, electric field analysis, purification of fluids and gases, and training of people, to name a few.

Stage 2. Once this difficult step is done, the results can be expressed as design rules and the product designers can use the elements as described above. The problems in Stage 2 are almost completely logical or reducible to mathematical description because the systems are signal processors or logic implementors. Designers can focus their efforts on system issues like floor planning, timing, basic architecture, system logic, and so on.10

Furthermore, due to the mathematical nature of VLSI digital logic and its long-understood relation to Boolean algebra, the performance of VLSI systems can often be proven correct, not simply simulated to test correctness. But even the ability to simulate to correctness is unavailable to mechanical system designers. Why is this so?

Point 3: Single vs multiple functions per device. An important reason is that mechanical components themselves are fundamentally different from VLSI components. Mechanical components perform multiple functions, and logic is usually not one of them.
This multi-function character is partly due to basic physics (rotating elements transmit shear loads and store rotational energy; both are useful as well as unavoidable) and partly due to design economy. VLSI

9 If fanout limits are reached, amplifiers can be inserted at some cost in space, power and signal propagation. But this is not fundamental.

10 The situations where this characterization is invalid provide valuable cautions: VLSI that stretches the state of the art encounters severe system-level difficulties. The separation described here may not be dependable in the future as processing speeds and chip sizes increase. Timing and heat problems are early harbingers. A 33 MHz 486 is in fact a 40 MHz 486 that did not pass the 40 MHz test. The flavor of this story is distinctly non-digital.
elements perform exactly one function, namely logic. They do not have to support loads, damp vibrations, contain liquids, rotate, slide or act as fasteners or locators for other elements. Furthermore, each kind of element performs exactly one logical function. Designers can build up systems bit by bit, adding elements as functions are required. A kind of cumulative design and design re-use can be practised, allowing whole functional blocks, such as arithmetic logic units, to be reused en bloc. The absence of back-loading aids this process.

However, design economy dominates mechanical design: if one element were selected for each identified function, such systems would inevitably be too big, too heavy, or too wasteful of energy. For example, the outer case of an automatic transmission for a car carries drive load, contains fluids, maintains geometric positioning for multitudes of internal gears, shafts and clutches, and provides the base for the output drive shafts and suspension system. Not only is there no other way to design such a case but mechanical designers would not have it any other way. They depend on the multi-function nature of their parts to obtain efficient designs. Building-block designs are inevitably either breadboards or kludges.11 But the multi-function nature of mechanical parts forces designers to redesign them each time to tailor them to the current need, again sapping the effort that should or could be devoted to system design. VLSI designers, by contrast, depend on the single-function nature of their components to overcome the logical complexity challenges of their designs. One can observe the consequences of this fundamental difference by observing that in VLSI the 'main function carriers' are standard proven library elements while in mechanical systems only support elements like

11 In many products, building-block design is used successfully. This design option is discussed by Ulrich [7]. An important example is personal computers.
The main modules have emerged by consensus, technical differentiation and differing rates of technical change: case, motherboard, disk drive(s), keyboard, power supply and display. But displays are often integrated with cases, and chips on motherboards are constantly being integrated. In laptops, keyboard, case and display are integrated. Relatively significant power exists inside disk drives and displays, but nowhere else in these products. In automobiles, the main building blocks are body, chassis and suspension, engine, transmission, drive train and controls. In older, simpler, heavier, noisier, less efficient rear wheel drive cars, these elements were quite separate. Today's more efficient, lighter weight, quieter front wheel drive cars comprise the same building blocks, but their design is increasingly integrated, making car design much more complex than in the past. Professor David Ellison of Wharton points out that customers' perception of automobile quality increasingly focuses on characteristics such as noise-vibration-harshness, which are controlled only by integrated design efforts.
fasteners are proven library elements; everything else is designed to suit.12

VLSI elements don't back-load each other because they maintain a huge ratio of output impedance to input impedance, perhaps 6 or 7 orders of magnitude. If one tried to obtain such a ratio between, say, a turbine and a propeller, the turbine would be the size of a house and the propeller the size of a muffin fan. No one will build such a system. Instead, mechanical system designers must always match impedances and accept back-loading. This need to match is essentially a statement that the elements cannot be designed independently of each other. The existence of multiple behaviors means that no analysis based on a single physical phenomenon will suffice to describe the element's behavior; engineering knowledge is simply not that far advanced, and multi-behavior simulations similarly are lacking. Even single-behavior simulations are poor approximations, especially in the all-important arena of time- and scale-dependent13 side effects like fatigue, crack growth and corrosion, where the designers really worry. In these areas, geometric details too small to model or even detect are conclusive in determining if (or when, since many are inevitable14) the effect will occur. And when component models are lacking, there is a worse lack of system models and verification methods.

Point 4: Ability or inability to separate component design from system design. The fundamental consequence of back-loading is that mechanical elements hooked into systems no longer behave the way they did in isolation. (Automotive transmissions are always tested with a dynamometer applying a load; so are engines.) Furthermore, these elements are more complex than VLSI elements due to their multi-function behavior.
This makes them harder to understand even in isolation, much less in their new

12 De Fazio (private communication) observes that pumps and motors are catalog items and often are main function carriers or strongly participate in carrying main functions. However, most such components are designed without weight as a prime consideration. Attempts to use them in high performance robots, for example, prove fruitless. Several alternatives have been used: hydraulics, whose torque per unit weight exceeds that of electrics by about a factor of 10; special magnetic materials that yield high torque density direct drive electric motors; and integrated designs in which motor casings and bearing systems are integrated with robot structure.

13 Professor Henry Paynter [5] offered the insight that some scale-dependent effects skip several orders of magnitude, exerting their influence right from the atomic level to the macro stage without creating an intermediate-scale effect that can be observed or mitigated.

14 'Structural failure is inevitable. The task of the structural designer is to ensure that it does not occur within the useful life of the structure.' The first sentence of a structural design textbook.
role as part of systems. VLSI elements are in some sense the creations of their designers and can be tailored to perform their function, which is easy in principle to understand. Mechanical elements are not completely free creations of their designers unless, like car fenders, they carry no loads or transmit no power. The fact that mechanical components change behavior when connected into systems means that systems must be designed together with components, and designs of components must be rechecked at the system level. No such second check is required in VLSI, as long as the design rules are obeyed.15 For this reason, CEM items cannot be designed by the strict top-down Stage 1-Stage 2 process described above for VLSI systems.
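The back-loading contrast of Points 2-4 can be made concrete with a toy source-load calculation (an illustrative sketch with invented numbers, not from the paper; the driven element's input impedance is taken to dwarf the driver's output impedance, the mismatch of roughly six orders of magnitude described above):

```python
# Toy back-loading calculation. With a vast impedance mismatch, connecting a
# load barely disturbs the source, so component behavior is preserved in the
# system. With matched impedances (the efficient mechanical case), connection
# changes the source's output substantially. Numbers are invented.

def loaded_fraction(z_out, z_in):
    """Fraction of the unloaded effort (voltage/force) delivered to the load."""
    return z_in / (z_in + z_out)

# VLSI-like: driven stage's input impedance ~1e6 times driver's output impedance
print(loaded_fraction(z_out=1.0, z_in=1e6))   # ~0.999999 : behavior preserved

# Mechanical-like: impedances matched for efficient power transfer
print(loaded_fraction(z_out=1.0, z_in=1.0))   # -> 0.5 : connection halves the output
```

The matched case is why transmissions and engines must be retested on a dynamometer: the component's isolated behavior simply is not its in-system behavior.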
6. Final Remarks

The above arguments suggest that a number of success factors in VLSI may be blocked from application in mechanical design. These success factors include re-use of past designs, the ability to separate product design from process design, the ability to use the same factory to make a wide variety of products, the ability to constrain design by prior process requirements to speed the design process and reduce errors, and the emergence of design tool hierarchies to improve the design process. People wishing to improve mechanical design often look wistfully at VLSI design in the hope of making some of these things happen in the CEM domain. However:
• Re-use of library elements may be inapplicable because inefficient designs would result. 'Good' mechanical design usually does not reuse components. Many horror stories are available! Integrated designs are often called 'refined', indicating that great effort was invested in combining elements, capitalizing on multiple behaviors to achieve design objectives efficiently, and so on.

15 This statement requires that 'design rule' be interpreted to mean that component functions are preserved, not simply that the manufacturing process will not generate defects. Verification of design rules thus needs to include functional testing of entire devices. If or when it does not, then the above statement is invalid, and system-level checking for component misbehavior will be needed. The more aggressive a design is in terms of packing components together, the more likely such checks will be necessary. Thermal effects caused by combining too many high-dissipation components near each other can also cause system-level problems.
Why MechanicalDesign Cannot be like VLSI Design • Direct conversion of specifications to CEM system design is unlikely to be applicable because logic is the language of VLSI's specifications and system description, and this language is not only conclusive and provably correct but it captures all the behaviors that the system will exhibit once the component design rules are known. In mechanical systems there is no specification language. Instead we have quality function deployment or other semi-mystical attempts to convert w h a t the customer wants into hard engineering specifications. It is premature to say that there will never be a mechanical specification language, but mechanical systems are not primarily driven by logic; instead they are driven by power flows shared by or exchanged between many physical phenomena. The mathematical representations we have at present apply to single phenomena: stress, fluids, electromagnetic fields, dynamics; but these are not integrated into one set of equations except in the case of bond graphs which imply a building-block approach, which has its own above-mentioned disadvantages. • There may not in fact be a 'clean separation between VLSI manufacturing and VLSI design' since the VLSI components must be designed as verifiably manufacturable. But there is a compensating separation between VLSI component design and VLSI system design. Since this separation does not exist in efficient mechanical designs, this valuable property may be blocked from exploitation in the mechanical world. • The reason why 'an enormous variety of VLSI products can be built' from the same process is that the variety is embodied at the system level. At the component level, only one item can be made by each process. VLSI escapes the consequences of the process-dependence of components because VLSI systems can be designed independently of component design. 
On the mechanical side, this separation does not exist, indicating why 'a great variety of mechanical products' can't be made by the same process.16 The process-dependence of components has inevitable linkages to the whole product system.

• 'Process-constrained design' can indeed be practised in mechanical systems and routinely is. That's how

16 An exception to this may be found when the 'process' is assembly. This is the case where a family of products can be created at the time of assembly by using a variety of similar parts. An example is described by Whitney [8] in which Nippondenso uses this strategy to make a wide variety of instrument panel meters, alternators and radiators from a small repertoire of parts.
we decide if a particular machine is suitable: can it deliver the tolerances needed, for example? If not, the design may have to be changed. But many factors contribute to tolerance capability, and it is a random variable, due in large part to the power needed to remove metal efficiently. So the process constraints are much harder to determine and the effort is not completely rewarded.

• 'Tool hierarchies' can be used in VLSI because at the system level the information is entirely logical and connective, and the tools in question are used in system-level design. This information is transformed and augmented from stage to stage in the design process but its essential logical/connective identity is preserved all the way to the masks. This is not possible in mechanical systems, where the abstract design specifications are not logical homologues (much less homomorphs) of the embodiments and likely never will be. Instead, tremendous conversion is needed, with enormous additional information required at each stage. A stick-figure diagram of an automatic transmission captures only the logic of the gear arrangements and shifting strategy. It fails totally to capture torques, deflections, heat (a basic property, not a mere side effect, since huge energy is released during shifts, just like the heat that emerges when logic gates switch, and for exactly the same reason!), wear, noise, shifting smoothness, and so on, all of which are essential behaviors.

• In case CEM designers, upon reaching this point in the paper, feel that they are the only ones with real design challenges, it should be noted that VLSI design may be approaching a crisis brought on by increasing miniaturization. Current VLSI depends on a combination of chemicals and materials, including the basic dielectrics and conductors that make up the circuit elements and interconnecting lines. It appears [9] that this combination will no longer be applicable in two or three more process generations (i.e.
much beyond 2000). Conductors are becoming so small and close together that first-order RC effects cause long delays as the feature sizes decrease. Ironically, the transistors themselves switch faster as their features decrease in size. Signals already propagate about 1000 times more slowly than transistors switch, and use of present materials will only make the trend worse. The process equipment industry will have to be part of the solution [10], providing essential contributions in basic solid-state physics and chemistry knowledge. That is, VLSI design may not be like VLSI design much longer.
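The first-order RC trend can be sketched with a back-of-envelope calculation (the material constants and geometry below are assumed for illustration, not taken from the paper). Shrinking a wire's cross-section by a scale factor raises its resistance per unit length as the inverse square of that factor, while capacitance per unit length stays roughly constant, so the delay of a fixed-length wire grows even as the transistors at its ends get faster:

```python
# Back-of-envelope wire RC delay: R = rho * L / (width * thickness),
# C taken as roughly constant per unit length. All values are rough,
# illustrative assumptions (rho ~ copper; c_per_m ~ 0.2 pF/mm).

def wire_rc_delay(length, width, thickness, rho=1.7e-8, c_per_m=2e-10):
    """First-order RC delay (seconds) of a wire of given dimensions (meters)."""
    r = rho * length / (width * thickness)
    c = c_per_m * length
    return r * c

d_old = wire_rc_delay(length=1e-2, width=1e-6,   thickness=1e-6)
d_new = wire_rc_delay(length=1e-2, width=0.5e-6, thickness=0.5e-6)
print(d_new / d_old)   # ~4.0 : halving both cross-section dimensions quadruples delay
```

Note also that delay grows with the square of wire length, which is why the trend bites hardest on the long cross-chip wires of ever-larger dies.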
References

1. Stix, G. (1995) Toward 'point one'. Scientific American, Feb., 90-96.
2. Mead, C.; Conway, L. (1980) Introduction to VLSI Systems. Addison-Wesley, Reading, MA.
Mukherjee, A.; Hilibrand, J. (Editors) (1994) Workshop: New Paradigms for Manufacturing. May 2-4, NSF, Washington, DC.
3. Geppert, L. (1993) Not your father's CPU. IEEE Spectrum, Dec., 20-23.
4. (1994) They've got the whole world on a chip. Business Week, April 23, 132-135.
5. Paynter, H.M. (1960) Analysis and Design of Engineering Systems. MIT Press, Cambridge, MA.
6. Whitney, D.E.; Nevins, J.J.; De Fazio, T.L.; Gustavson, R.E. (1993) Problems and issues in design and manufacture of complex electro-mechanical systems. C.S. Draper Lab Report R-2577, Dec., 1993. A copy of this report is available as: http://elib.cme.nist.gov/made/papers/Whitney/paper.html
7. Ulrich, K. (1995) The role of product architecture in the manufacturing firm. Research Policy, 24.
8. Whitney, D.E. (1993) Nippondenso Co. Ltd: A case study of strategic product design. Research in Engineering Design, 5, Dec., 1-20.
9. Bohr, M. (1995) Interconnect scaling: the real limiter to high performance VLSI. Seminar given at MIT, Nov 7, 1995.
10. Sinha, A. (1995) Achieving business success through product excellence in the semiconductor manufacturing equipment industry. Seminar delivered at MIT, Nov 21. Dr Sinha is Vice President of the Chemical Vapor Deposition Business Group at Applied Materials, Inc., one of the largest manufacturers of semiconductor manufacturing equipment.