Journal of Intelligent Manufacturing (1993) 4, 95-107
Modeling workpiece vibrations with neural networks

IBRAHIM N. TANSEL¹, ALEXANDER TZIRANIS² and AMIR WAGIMAN²

¹Department of Mechanical Engineering, Florida International University, Miami, FL 33199, USA
²Mechanical Engineering Department, Tufts University, Medford, MA 02155, USA
The entire workpiece on a lathe vibrates when it is excited at a single point. Frequency and time-domain/time-series techniques can estimate the force-displacement relationships between the excitation and individual points on the workpiece. In this paper, the use of a single neural network is proposed to represent the force-displacement relationship between the applied excitation force and the vibration of the whole workpiece. The accuracy of the proposed approach is evaluated on experimental data. In addition, another neural network is used to store the frequency response characteristics of the workpiece.
Keywords: Neural networks, backpropagation neural network, machine tool structures, structural analysis
1. Introduction

The metal removal capacity of machine tools depends on the dynamic characteristics of their structures (Tobias, 1965; Koenigsberger and Tlusty, 1970; Weck, 1980; Nigm, 1981). Many techniques have been developed to obtain the magnitude and phase relationship between the excitation force and the workpiece vibrations (displacement, velocity, acceleration) of individual points on the workpiece. In this paper, the use of a single neural network model is proposed to represent the dynamics of the whole workpiece on a lathe. To obtain the dynamic characteristics of workpiece structures, various forms of excitation forces can be used, including pure harmonic (Sadek and Fenner, 1973; Tlusty et al., 1974), sweep sine wave, impulse, random, actual cutting force (Moriwaki and Iwata, 1976; Lu et al., 1983), or pulse (Yuce et al., 1983). The excitation force and structure vibrations are measured during the experiment and analyzed with Fast Fourier Transformation (FFT)-based techniques (Otnes and Enochson, 1978; Broch, 1984; Herlufsen, 1985) or time-domain approaches (Eman and Wu, 1980; Kim et al., 1982; Tansel and Abdulsater, 1987; Shin et al., 1989). Among these different techniques, random excitation and modeling with time-series analysis have several advantages. Random excitations cover a large frequency range, and time-series analysis techniques obtain the difference equation of the system from the data of a single experiment. Time-series methods require much shorter data sequences than the FFT-based techniques, and do not suffer from the fixed resolution and leakage problems. All of the above techniques estimate the dynamic characteristics of the workpiece between the excitation point and the point where the displacements are measured. Some studies have been done to represent the dynamic characteristics of whole machine tool structures with the help of analytical methods and experimental data (Weck and Muller, 1976; Ismail and Tlusty, 1980); however, these approaches rely on the validity of their basic assumptions. One way of modeling with the location information (static) and time-dependent data (dynamic), without any initial assumptions, is the use of self-learning artificial intelligence systems (Tansel, 1990). In this study, neural networks will be used to model the workpiece vibrations by considering the location and time history of the signals. Artificial neural networks were developed under the influence of natural neural networks (McCulloch and Pitts, 1943; Cowan and Sharp, 1988). An artificial neural
network, which will be referred to as a neural network in the remainder of the paper, consists of an input layer, a hidden layer or layers, and an output layer. Each layer has one or more neurons. The types of interconnections between the neurons (feedforward, feedback, complex, grid, local) and the learning algorithm vary from one approach to another. The main advantages of neural networks are their trainability and massively parallel structure. During training, neural networks can learn from just a few cases to millions of cases, and establish a model by themselves. The inputs in each case can be static or dynamic variables. A properly trained neural network can make decisions directly without using any externally furnished rules. The parallel structure of neural networks allows the use of several processing elements, and by using special hardware, computational time can be reduced dramatically. In this paper, the use of neural networks is proposed to simulate, with only one model, the dynamic characteristics of the whole workpiece, and to store the frequency response data. An infinite number of time-series models would be necessary to represent the dynamics of the whole workpiece (between the excitation point and every other point on the workpiece). The number of nodes in the hidden layer of the neural network will be kept small (fewer than 15) to force the system to learn the relationship between the inputs and output, and to make accurate one-step-ahead predictions on test data that it has not seen before. Three neural networks (referred to as NN1, NN2 and NN3) were used for the following purposes in this study: (1) Two neural networks were trained with the experimental excitation force and displacement data. During collection of the training data of the first neural network (NN1), excitation was applied to one point on the workpiece and the displacement of the workpiece was measured at seven different locations.
The second neural network (NN2) was trained with the excitation force and displacement data at six different locations and used to investigate its accuracy at untrained locations on the workpiece; (2) To store the magnitude characteristics (of the frequency response) of the whole workpiece, one neural network (NN3) was trained to learn the magnitude of the workpiece response at different frequencies and locations on the workpiece. The magnitude was obtained by using the time-series method seven times (once for each considered test point), and stored by using a neural network. All the developed neural networks are valid only for the experimental conditions. As with all other experimental procedures, new experimental data must be collected and new neural networks must be trained if the
dimensions, the material, or the clamping forces of the workpiece are changed. The theoretical background of the study, experimental procedure, and use of the neural networks will be outlined in the following sections.
2. Theoretical background

The time-series approach, neural networks, and the use of neural networks to represent time-domain signals will be discussed in this section.

2.1. The time-series approach

When the input and output signals of a system are available, autoregressive vector (ARV) models can be used to represent the system (Tansel and Abdulsater, 1987):
u(i) = Σ_{k=1}^{n} φ_k u(i − k)    (1)

where u(i) is an (m + 1) × 1 data vector that can be written as:

u(i) = [y(i), x_1(i), x_2(i), ..., x_m(i)]^T    (2)
In the last two equations, n represents the order of the model; φ_k represents the (m + 1) × (m + 1) forward prediction filter matrices; y(i) represents the output; x_1(i), ..., x_m(i) represent the inputs; m represents the number of inputs to the system. The system can be excited with a random or pseudorandom force. During the experiment, the inputs (excitation forces), x_1(i), ..., x_m(i), and the output of the system (displacement), y(i), are measured. The recursive multichannel maximum entropy method (RMMEM) (Morf et al., 1978) can be used to obtain the φ_k matrices from the experimental data. However, this type of model represents the displacement of a single point on the workpiece, where the experimental data are collected. The frequency response characteristics of the system can be obtained by deriving the discrete transfer function of the structure from Equation (1):

Y(B)/X_1(B) = (φ_{1,12} B + φ_{2,12} B^2 + ... + φ_{n,12} B^n) / (1 − φ_{1,11} B − φ_{2,11} B^2 − ... − φ_{n,11} B^n)    (3)
and replacing the backshift operator with B = e^{−jωT}.

2.2. Neural networks

The first studies on the development of neural nets started in the 1940s (McCulloch and Pitts, 1943; Cowan and Sharp, 1988). However, neural nets acquired their present form from contributions made by many researchers from many different fields, such as Amari et al. (1977), Hopfield (1982), Grossberg (1987), and Rumelhart et al. (1986). A neural network is presented in Fig. 1. The inputs are connected to the input layer. Each connection between the layers has a weight, w_ji, and each node has a logistic activation function. The jth element of the actual output pattern produced by the presentation of input pattern p, O_pj, is represented with the following equation (Rumelhart et al., 1986):

O_pj = 1 / (1 + exp[−(Σ_i w_ji O_pi + θ_j)])    (4)

where θ_j is a bias that works as a threshold. The values of the weights and thresholds can be selected by using the back propagation method given by Rumelhart et al. (1986). This process has two phases. In the first phase, the output value, O_pj, is calculated for each unit after the input is presented and propagated forward. The output values, O_pj, are compared with the expected output and the δ_pj error signal is calculated. In the second phase, the error signal is passed backward through the network and weight changes are made. The user must decide the number of hidden units when the back propagation method is used. This selection must be made very carefully. The system cannot model the given information if it has too few hidden units; however, too many hidden units would not force the program to generalize the rules, and the model would not work well when a new data set was presented.

2.3. Representing the time-domain signals with neural networks

The output (one-step-ahead) of a linear system can be calculated by using the parameters of Equation 1 and the input and output history of the system:

y(i) = φ_{1,11} y(i − 1) + φ_{2,11} y(i − 2) + ... + φ_{n,11} y(i − n) + φ_{1,12} x_1(i) + φ_{2,12} x_1(i − 1) + ... + φ_{n,12} x_1(i − n + 1)    (5)
It should be realized that this type of system can be replaced by a parallel distributed system that has 2n input units and one output unit. There would be 2n links between the input units and the single output unit. Each link would have a weight equal to the ~b parameters in Equation 5. It would not be necessary to have any non-linear activation function. Similar arrangements have been used in digital filtering hardware for years. The neural network in Fig. 1 can represent Equation 5 if the 2n inputs and one output of the neural network are the following:
Inputs: y(i − 1), y(i − 2), ..., y(i − n), x_1(i), x_1(i − 1), ..., x_1(i − n + 1)
Output: y(i)

In this study, one more input, the location information, was added to the inputs of the neural network, and the number of inputs was increased to 2n + 1. The training of a neural network requires a very long time compared to the model estimation time of the RMMEM when a single processor is used; however, such a replacement has two advantages: (1) neural networks can represent the non-linearities of the system better than linear time-series models; (2) neural networks can accept other parameters in the model, while time-series models, including the RMMEM, can work with only the time history of the inputs and output. In this study, the position was added to the 2n inputs of a neural network, which replaces an nth order time-series model. The neural network worked with 2n + 1 inputs and allowed the simulation of the structural dynamics of the whole workpiece.
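The equivalence noted above, Equation 5 viewed as a 2n-link network with identity activation, can be sketched as follows; the coefficient and history values in the test are illustrative only.

```python
import numpy as np

def one_step_ahead(phi_out, phi_in, y_hist, x_hist):
    """One-step-ahead prediction of Equation 5 as a 2n-link linear 'network'.

    phi_out : weights on past outputs,  [phi_{1,11}, ..., phi_{n,11}]
    phi_in  : weights on input history, [phi_{1,12}, ..., phi_{n,12}]
    y_hist  : [y(i-1), ..., y(i-n)]
    x_hist  : [x1(i), ..., x1(i-n+1)]
    """
    # Identity activation: the prediction is a plain weighted sum of the 2n inputs,
    # exactly as in a single-output unit with no non-linear activation function.
    return float(np.dot(phi_out, y_hist) + np.dot(phi_in, x_hist))
```

Each φ coefficient plays the role of a link weight between an input unit and the single output unit, which is why the arrangement resembles digital filtering hardware.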
Fig. 1. A typical neural network.
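The two-phase procedure of Section 2.2 (forward pass of Equation 4, then backward error propagation) can be sketched for a single logistic output unit; the learning rate and initial values below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def forward(w, theta, o_in):
    """Forward phase: logistic activation of Equation 4 for one unit."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, o_in) + theta)))

def backprop_step(w, theta, o_in, target, eta=0.5):
    """Backward phase: one weight/threshold update for a single output unit."""
    o = forward(w, theta, o_in)
    delta = (target - o) * o * (1.0 - o)      # error signal for a logistic unit
    return w + eta * delta * o_in, theta + eta * delta
```

Repeating the update over all training cases (millions of presentations in this study) drives the output values toward the expected outputs.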
3. Experimental procedure and training of neural networks

In this section, the experimental procedure will be outlined first, after which the training of the neural
networks to model the dynamic characteristics of the system in the time and frequency domains will be discussed.
3.1. Experimental procedure

To identify the structural dynamics of a long slender bar, a 1045 steel workpiece 760 mm long and 65 mm in diameter was attached to a lathe between the spindle and tailstock. The workpiece was excited in a direction perpendicular to its axis by using an electro-hydraulic exciter. The excitation was applied to one end of the workpiece, and experimental data were collected by measuring the excitation force and displacements. The experiment was repeated seven times, with the displacement sensor placed at seven different locations (at 80 mm intervals) on the workpiece. The excitation and displacement measurement points are shown in Fig. 2. For each test point, 500 force and displacement measurements were digitized and stored with a digital oscilloscope. The data were transferred to an IBM PS2/80 computer and arranged appropriately for presentation to the neural network.
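The digitized records described above can be arranged into lagged input-output cases of the kind used here; a sketch, assuming a 10-lag window and the sensor position as an extra input (the array contents in the test are synthetic, not experimental data).

```python
import numpy as np

def make_cases(force, disp, position, n_lags=10):
    """Arrange one force/displacement record into lagged input-output cases.

    Each case: [F(i), ..., F(i-9), x(i-1), ..., x(i-10), position] -> x(i).
    """
    inputs, outputs = [], []
    for i in range(n_lags, len(force)):
        f_hist = force[i - n_lags + 1:i + 1][::-1]   # F(i), F(i-1), ..., F(i-9)
        x_hist = disp[i - n_lags:i][::-1]            # x(i-1), ..., x(i-10)
        inputs.append(np.concatenate([f_hist, x_hist, [position]]))
        outputs.append(disp[i])
    return np.array(inputs), np.array(outputs)
```

Applying such a routine to each of the seven records, with the sensor position as the last input, yields training cases of the 21-input/one-output layout described next.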
3.2. Modeling the dynamic characteristics of the workpiece with neural networks

Three neural networks (NN1, NN2 and NN3) were used in this study. All of the neural networks were trained separately. To estimate the one-step-ahead displacement of a point, the excitation force and displacement histories of the system given in Equation 5 must be available. The length of the time history, or the order of the time-domain/time-series model, is selected by considering Akaike's information criterion (AIC) (Akaike, 1971; 1974). The AIC suggested 20th to 25th order ARV models for the experimental data. For simplicity, 10 lags were assigned to the neural network. For the whole workpiece, 1400 cases (200 samplings per test point × 7 test points) were prepared. For each case, there were 21 inputs and one output (Fig. 3):
Inputs:
history of excitation force (10 lags): F(i), F(i − 1), F(i − 2), F(i − 3), F(i − 4), F(i − 5), F(i − 6), F(i − 7), F(i − 8), F(i − 9)
history of displacement (10 lags): x(i − 1), x(i − 2), x(i − 3), x(i − 4), x(i − 5), x(i − 6), x(i − 7), x(i − 8), x(i − 9), x(i − 10)
distance from the exciter: X
Output: present displacement x(i)

All of the 1400 cases were randomly presented to NN1. Another neural network was trained without using the experimental data that were collected at the fifth point on the workpiece. Twelve hundred cases (6 test points × 200 samplings per point) were randomly presented to NN2.

Fig. 2. Excitation and displacement measurement points.

Fig. 3. Inputs and output of a neural network, which represents the dynamics of a workpiece.

3.3. Storing the frequency response characteristics of the workpiece structure with the neural networks

The parameters of the ARV model in Equation 1 were estimated from the experimental data for every test point. The magnitude of the estimated models was calculated after the discrete transfer functions were found in the form given by Equation 3. Since the 8th to 25th order models had very similar frequency response characteristics in the 200 to 300 Hz range, the magnitudes of the 8th order models were taught to NN3. The inputs and the output of this neural network were the following:

Inputs: frequency, position (X)
Output: magnitude
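The magnitude data taught to NN3 can be generated from Equation 3 by substituting the backshift operator B = e^{−jωT}; a sketch, where the subscript convention (φ_{k,12} in the numerator, φ_{k,11} in the denominator) follows Equation 3 and the coefficient values in the test are illustrative.

```python
import numpy as np

def arv_magnitude(num_phi, den_phi, freq_hz, T):
    """Magnitude of the Equation 3 transfer function at freq_hz.

    num_phi : [phi_{1,12}, ..., phi_{n,12}]   (numerator coefficients)
    den_phi : [phi_{1,11}, ..., phi_{n,11}]   (denominator coefficients)
    T       : sampling period in seconds; backshift operator B = exp(-j*w*T)
    """
    B = np.exp(-1j * 2.0 * np.pi * freq_hz * T)
    powers = B ** np.arange(1, len(num_phi) + 1)     # B, B^2, ..., B^n
    num = np.sum(np.asarray(num_phi) * powers)
    den = 1.0 - np.sum(np.asarray(den_phi) * powers)
    return float(abs(num / den))
```

Evaluating such a routine over a frequency grid at each test point produces the (frequency, position) → magnitude cases on which NN3 is trained.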
4. Results and discussion

Three neural networks were used in this study. The performance of NN1 and NN2 was examined by calculating the accuracy of one-step-ahead forecasting of the
training and testing data. The training data consisted of the information given to the neural network during the training session. The testing data consisted of new cases, which were not presented during training. NN3 was tested on the training data. To select the number of hidden nodes, several neural networks were tried with 5 to 15 nodes in their single hidden layers. The convergence of the neural networks was compared over the first 500 000 to 1 000 000 trials. The accuracy of NN1, NN2 and NN3 was best when 11, 11, and 25 hidden nodes were used, respectively. Fourteen hundred cases were presented to NN1 over 7 000 000 times. After 4 000 000 presentations, the neural network could estimate more than 90% of the presented cases with less than 3% error. After 7 000 000 presentations, the neural network could estimate 96% of the presented cases with less than 3% error and the rest with less than 7% error. The error calculations were done by using the following equation:

Error = [x(i) − x_est(i)] / (x_max − x_min)

where x_est(i) is the one-step-ahead forecast of the neural network, and x_max and x_min are the maximum and minimum displacements. The accuracy of the estimations did not improve after 7 000 000 presentations, and the search was stopped at 7 334 713 presentations. The one-step-ahead accuracy of NN1 is demonstrated in Fig. 4 on the data of three test points.

The convergence and accuracy of neural network models depend on the implementation of the algorithm and the hardware. Even with the same software and hardware, two different final results can be observed (Rumelhart et al., 1986), or the program may converge to a local minimum and give unacceptable results. The above numbers are included to demonstrate the number of presentations required to train the neural network. The accuracy of NN1 was also tested on cases that it had not encountered before. For each test point, 50 cases were prepared by following the procedure in Section 3, and then presented to the neural network. The one-step-ahead prediction accuracy of the neural network is demonstrated in Fig. 5 for three test points.

Eleven hidden nodes were also used in the single hidden layer of NN2. The neural network was trained by presenting 1200 cases (the same training data as NN1, except that the data of the fifth test point were not included). The accuracy of the one-step-ahead forecasting is demonstrated on the training and testing data. In Fig. 6, the accuracy of the estimations is demonstrated on the data of three test points. In Figs 7a, 7b and 7c, the accuracy of the neural network estimations is studied on the test data of three points, which it had not seen during training. The accuracy of the model is also investigated
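The error measure above can be computed as follows; a sketch, in which taking the absolute value and expressing the ratio as a percentage are assumptions consistent with how the error figures are reported.

```python
import numpy as np

def error_percent(x, x_est):
    """Per-case error normalized by the displacement range, as a percentage."""
    x, x_est = np.asarray(x, float), np.asarray(x_est, float)
    return 100.0 * np.abs(x - x_est) / (x.max() - x.min())

def fraction_within(x, x_est, threshold=3.0):
    """Fraction of cases whose normalized error is below `threshold` percent."""
    return float(np.mean(error_percent(x, x_est) < threshold))
```

Normalizing by the displacement range rather than by the individual sample keeps the measure well defined when the displacement passes through zero.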
Fig. 4. One-step-ahead prediction accuracy of a neural network. The distance between the excitation and test point is (a) 80 mm, (b) 320 mm and (c) 560 mm.
Fig. 5. One-step-ahead prediction accuracy of a neural network. The distance between the excitation and test point is (a) 80 mm, (b) 320 mm and (c) 560 mm. The reference points are new; they were not used during training.
Fig. 6. One-step-ahead prediction accuracy of a neural network. The distance between the excitation and test point is (a) 80 mm, (b) 320 mm and (c) 560 mm.
Fig. 7. One-step-ahead prediction accuracy of a neural network. The distance between the excitation and test point is (a) 80 mm, (b) 320 mm, (c) 560 mm and (d) 400 mm. The reference points are new; they were not used during training.
on the data of the test point that was not used during training (Fig. 7d). NN2 demonstrated that the system is capable of simulating the response of any point on the workpiece, even if it was not trained with the data at that point.

In this study, NN1 and NN2 were trained with 1200 or 1400 cases. There were only eleven hidden nodes to represent these 21-input/one-output systems. Such small neural networks could not have memorized over 1000 cases of arbitrary inputs and outputs, nor could they have made excellent predictions on a test set that they had never seen before. The reported results therefore indicate that there is a relationship between the selected input and output parameters, and that neural networks are capable of storing this information.

NN3 was trained with the magnitudes of the transfer functions of seven test points of the workpiece. In this case, the single hidden layer of the neural network had 25 nodes. The magnitudes of the transfer functions of the workpiece and the trained neural network's representation of the same information are plotted in Fig. 8.

The main advantage of neural networks is the representation of the characteristics of the whole workpiece with a single model. Time-series models estimate the parameters of a difference equation, and it is not possible to include the location information in an equation similar to Equation 1. The disadvantage of neural networks is the long training period required. Back propagation-type neural networks use a sigmoid function. To represent the characteristics of analog systems, the parameters of the sigmoid function must be selected very carefully. The neural network requires millions of presentations to reach less than 3% error in 90% of the cases when the training set has 1400 cases and each case has 21 inputs and one output. The training period could be dramatically reduced by using parallel processors.
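One common way to handle the analog-range issue noted above (not described in the paper, so an assumption) is to scale the signals into the active range of the sigmoid before training and to invert the scaling when reading the network outputs; the range limits 0.1 and 0.9 are illustrative choices.

```python
import numpy as np

def scale_to_sigmoid(x, lo=0.1, hi=0.9):
    """Map an analog signal into the active range of the logistic function."""
    x = np.asarray(x, float)
    return lo + (hi - lo) * (x - x.min()) / (x.max() - x.min())

def unscale(s, x_min, x_max, lo=0.1, hi=0.9):
    """Invert the mapping to recover displacements in physical units."""
    return x_min + (np.asarray(s, float) - lo) * (x_max - x_min) / (hi - lo)
```

Keeping the targets away from the saturated ends of the sigmoid avoids vanishingly small error signals during the backward phase.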
5. Conclusion

The dynamic characteristics of a long slender bar were represented with neural networks. Two neural networks were trained to estimate the one-step-ahead displacements and the frequency response of the whole workpiece, each by using a single model. The one-step-ahead displacement prediction error of the neural network on the training and testing data was less than 7%. The frequency response (magnitude) of the workpiece was also stored by the neural network with the same accuracy. The neural networks learned the time-domain characteristics of the data with eleven hidden nodes. These results indicated that there is a relationship between the selected 21 inputs and the output, and that neural networks are capable of learning this information. The neural networks have three main advantages over the conventional time-series techniques: the acceptance of static (location) and dynamic inputs (time-dependent force and displacement values), representation of the non-linear characteristics of the structure, and the capability of using special parallel processing hardware for fast modeling and estimation. On the other hand, neural networks require very long training times on present-day single-processor computers to learn the analog information accurately.
Fig. 8. The storage capability of the neural network when it is trained with the magnitudes of the transfer functions of the structural dynamics.

References

Akaike, H. (1971) Information theory and an extension of the maximum likelihood principle, in Problems in Control and Information Theory, Akadémiai Kiadó, Budapest, Hungary.
Akaike, H. (1974) A new look at the statistical model identification, IEEE Transactions on Automatic Control, AC-19, 716-723.
Amari, S., Yoshida, K. and Kanatani, K. (1977) A mathematical foundation for statistical neurodynamics, SIAM Journal on Applied Mathematics, 33(1), 95-126.
Broch, J. T. (1984) Mechanical Vibration and Shock Measurement, Brüel and Kjær.
Cowan, J. D. and Sharp, D. H. (1988) Neural nets and artificial intelligence, in The Artificial Intelligence Debate, Graubard, S. R. (ed.).
Eman, K. F. and Wu, S. M. (1980) A comparative study of classical techniques and the dynamic data systems (DDS) approach for machine tool structure identification, in Proceedings of NAMR Conference, pp. 401-404.
Grossberg, S. (1987) The Adaptive Brain, I and II, North-Holland, Amsterdam.
Herlufsen, H. (1985) Dual channel FFT analysis, Digital Signal Analysis, Brüel and Kjær.
Hopfield, J. J. (1982) Neural networks and physical systems with emergent collective computational abilities, Proceedings of the National Academy of Sciences, 79, 2554-2558.
Ismail, F. and Tlusty, J. (1980) Dynamics of a workpiece clamped on a lathe, in Proceedings of NAMR Conference, pp. 388-394.
Kim, K. J., Eman, K. F. and Wu, S. M. (1982) Modal analysis of mechanical structures by time series approach, in Proceedings of 10th NAMR Conference.
Koenigsberger, F. and Tlusty, J. (1970) Machine Tool Structures, Pergamon Press, Oxford, England.
Lu, B. H., Lin, Z. H., Hwang, X. T. and Ku, C. H. (1983) On-line identification of dynamic behavior of machine tool structures during stable cutting, Annals of the CIRP, 32(1), 315-318.
McCulloch, W. S. and Pitts, W. H. (1943) A logical calculus of the ideas immanent in nervous activity, Bulletin of Mathematical Biophysics, 5, 115-121.
Morf, M., Vieira, A., Lee, D. T. and Kailath, T. (1978) Recursive multichannel maximum entropy spectral estimation, IEEE Transactions on Geoscience Electronics, GE-16(2).
Moriwaki, T. and Iwata, K. (1976) In-process analysis of machine tool structure dynamics and prediction of machining chatter, Journal of Engineering for Industry, Transactions of the ASME, 301-305.
Nigm, M. M. (1981) A method for the analysis of machine tool chatter, International Journal of MTDR, 21, 251-261.
Otnes, R. K. and Enochson, L. (1978) Applied Time Series Analysis, 1, John Wiley and Sons, New York.
Rumelhart, D. E., Hinton, G. E. and Williams, R. J. (1986) Learning internal representations by error propagation, in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, 1, Rumelhart, D. E. and McClelland, J. L. (eds), MIT Press.
Sadek, M. M. and Fenner, R. F. (1973) On-line dynamic testing of machine tools using a mini-computer, The Production Engineer, 175-179.
Shin, Y. C., Eman, K. F. and Wu, S. M. (1989) Experimental complex modal analysis of machine tool structures, Transactions of the ASME, Journal of Engineering for Industry, 111, 116-124.
Tansel, I. N. (1990) Neural network approach for representation and simulation of 3-D cutting dynamics, Transactions of the North American Manufacturing Research Institution of SME, May, 193-200.
Tansel, I. N. and Abdulsater, G. (1987) Frequency response estimation accuracy of the recursive multichannel maximum entropy method, Modal Testing and Analysis, ASME, pp. 97-103.
Tlusty, J., Lau, K. C. and Parthiban, K. (1974) Use of shock compared with harmonic excitation in machine tool structure analysis, Journal of Engineering for Industry, Transactions of the ASME, 187-195.
Tobias, S. A. (1965) Machine Tool Vibration, Blackie and Son, London.
Weck, M. (1980) Handbook of Machine Tools - Metrological Analysis and Performance Tests, 4, John Wiley and Sons, New York.
Weck, M. and Muller, W. (1976) Visual representation of the dynamic behavior of machine tool structure, Annals of the CIRP, 263-266.
Yuce, M., Sadek, M. M. and Tobias, S. A. (1983) Pulse excitation technique for determining frequency response of machine tools using an on-line minicomputer and a non-contacting electromagnetic exciter, International Journal of Machine Tool Design and Research, 23(1), 39-51.