Book: Identification of Dynamic Systems - An Introduction with Applications
Production Engineering and Mechanical Design Forum
Posted by: Admin (Forum Director)
Post date: Saturday, 07 December 2024, 2:10 pm

Dear brothers,
I have brought you the book
Identification of Dynamic Systems - An Introduction with Applications
by Rolf Isermann and Marco Münchhof
(Springer-Verlag Berlin Heidelberg, 2011, DOI 10.1007/978-3-540-78879-9)
The contents are as follows:


Contents
1 Introduction . 1
1.1 Theoretical and Experimental Modeling 1
1.2 Tasks and Problems for the Identification of Dynamic Systems 7
1.3 Taxonomy of Identification Methods and Their Treatment in This
Book . 12
1.4 Overview of Identification Methods . 15
1.4.1 Non-Parametric Models . 15
1.4.2 Parametric Models . 18
1.4.3 Signal Analysis 19
1.5 Excitation Signals 21
1.6 Special Application Problems 23
1.6.1 Noise at the Input 23
1.6.2 Identification of Systems with Multiple Inputs or Outputs 23
1.7 Areas of Application 24
1.7.1 Gain Increased Knowledge about the Process Behavior 24
1.7.2 Validation of Theoretical Models . 25
1.7.3 Tuning of Controller Parameters 25
1.7.4 Computer-Based Design of Digital Control Algorithms 25
1.7.5 Adaptive Control Algorithms 26
1.7.6 Process Supervision and Fault Detection . 26
1.7.7 Signal Forecast 26
1.7.8 On-Line Optimization 28
1.8 Bibliographical Overview 28
Problems 29
References . 30
2 Mathematical Models of Linear Dynamic Systems and Stochastic
Signals 33
2.1 Mathematical Models of Dynamic Systems for Continuous Time
Signals . 33
2.1.1 Non-Parametric Models, Deterministic Signals . 34
2.1.2 Parametric Models, Deterministic Signals 37
2.2 Mathematical Models of Dynamic Systems for Discrete Time Signals 39
2.2.1 Parametric Models, Deterministic Signals 39
2.3 Models for Continuous-Time Stochastic Signals . 45
2.3.1 Special Stochastic Signal Processes . 51
2.4 Models for Discrete Time Stochastic Signals 54
2.5 Characteristic Parameter Determination 58
2.5.1 Approximation by a First Order System . 59
2.5.2 Approximation by a Second Order System . 60
2.5.3 Approximation by nth Order Delay with Equal Time
Constants . 63
2.5.4 Approximation by First Order System with Dead Time . 68
2.6 Systems with Integral or Derivative Action . 69
2.6.1 Integral Action 69
2.6.2 Derivative Action 70
2.7 Summary . 71
Problems 71
References . 72
Part I IDENTIFICATION OF NON-PARAMETRIC MODELS IN THE
FREQUENCY DOMAIN — CONTINUOUS TIME SIGNALS
3 Spectral Analysis Methods for Periodic and Non-Periodic Signals 77
3.1 Numerical Calculation of the Fourier Transform . 77
3.1.1 Fourier Series for Periodic Signals 77
3.1.2 Fourier Transform for Non-Periodic Signals 78
3.1.3 Numerical Calculation of the Fourier Transform 82
3.1.4 Windowing . 88
3.1.5 Short Time Fourier Transform . 89
3.2 Wavelet Transform . 91
3.3 Periodogram . 93
3.4 Summary . 95
Problems 96
References . 97
4 Frequency Response Measurement with Non-Periodic Signals . 99
4.1 Fundamental Equations . 99
4.2 Fourier Transform of Non-Periodic Signals . 100
4.2.1 Simple Pulses . 100
4.2.2 Double Pulse 104
4.2.3 Step and Ramp Function 106
4.3 Frequency Response Determination . 108
4.4 Influence of Noise 109
4.5 Summary . 117
Problems 119
References . 119
5 Frequency Response Measurement for Periodic Test Signals 121
5.1 Frequency Response Measurement with Sinusoidal Test Signals . 122
5.2 Frequency Response Measurement with Rectangular and
Trapezoidal Test Signals . 124
5.3 Frequency Response Measurement with Multi-Frequency Test
Signals . 126
5.4 Frequency Response Measurement with Continuously Varying
Frequency Test Signals 128
5.5 Frequency Response Measurement with Correlation Functions . 129
5.5.1 Measurement with Correlation Functions 129
5.5.2 Measurement with Orthogonal Correlation . 134
5.6 Summary . 144
Problems 144
References . 144
Part II IDENTIFICATION OF NON-PARAMETRIC MODELS WITH
CORRELATION ANALYSIS — CONTINUOUS AND DISCRETE TIME
6 Correlation Analysis with Continuous Time Models 149
6.1 Estimation of Correlation Functions . 149
6.1.1 Cross-Correlation Function 150
6.1.2 Auto-Correlation Function 153
6.2 Correlation Analysis of Dynamic Processes with Stationary
Stochastic Signals 154
6.2.1 Determination of Impulse Response by Deconvolution . 154
6.2.2 White Noise as Input Signal . 157
6.2.3 Error Estimation . 158
6.2.4 Real Natural Noise as Input Signal 161
6.3 Correlation Analysis of Dynamic Processes with Binary Stochastic
Signals . 161
6.4 Correlation Analysis in Closed-Loop 175
6.5 Summary . 176
Problems 177
References . 177
7 Correlation Analysis with Discrete Time Models . 179
7.1 Estimation of the Correlation Function . 179
7.1.1 Auto-Correlation Function 179
7.1.2 Cross-Correlation Function 181
7.1.3 Fast Calculation of the Correlation Functions . 184
7.1.4 Recursive Correlation 189
7.2 Correlation Analysis of Linear Dynamic Systems 190
7.2.1 Determination of Impulse Response by De-Convolution 190
7.2.2 Influence of Stochastic Disturbances 195
7.3 Binary Test Signals for Discrete Time . 197
7.4 Summary . 199
Problems 199
References . 200
Part III IDENTIFICATION WITH PARAMETRIC MODELS — DISCRETE TIME
SIGNALS
8 Least Squares Parameter Estimation for Static Processes . 203
8.1 Introduction . 203
8.2 Linear Static Processes 205
8.3 Non-Linear Static Processes 210
8.4 Geometrical Interpretation . 212
8.5 Maximum Likelihood and the Cramér-Rao Bound . 215
8.6 Constraints 218
8.7 Summary . 218
Problems 219
References . 220
9 Least Squares Parameter Estimation for Dynamic Processes 223
9.1 Non-Recursive Method of Least Squares (LS) . 223
9.1.1 Fundamental Equations . 223
9.1.2 Convergence 229
9.1.3 Covariance of the Parameter Estimates and Model
Uncertainty . 236
9.1.4 Parameter Identifiability 246
9.1.5 Unknown DC Values . 255
9.2 Spectral Analysis with Periodic Parametric Signal Models 257
9.2.1 Parametric Signal Models in the Time Domain 257
9.2.2 Parametric Signal Models in the Frequency Domain . 258
9.2.3 Determination of the Coefficients . 259
9.2.4 Estimation of the Amplitudes 261
9.3 Parameter Estimation with Non-Parametric Intermediate Model 262
9.3.1 Response to Non-Periodic Excitation and Method of Least
Squares 262
9.3.2 Correlation Analysis and Method of Least Squares
(COR-LS) 264
9.4 Recursive Methods of Least Squares (RLS) . 269
9.4.1 Fundamental Equations . 270
9.4.2 Recursive Parameter Estimation for Stochastic Signals . 276
9.4.3 Unknown DC values . 278
9.5 Method of Weighted Least Squares (WLS) . 279
9.5.1 Markov Estimation . 279
9.6 Recursive Parameter Estimation with Exponential Forgetting 281
9.6.1 Constraints and the Recursive Method of Least Squares 283
9.6.2 Tikhonov Regularization 284
9.7 Summary . 284
Problems 285
References . 287
10 Modifications of the Least Squares Parameter Estimation 291
10.1 Method of Generalized Least Squares (GLS) 291
10.1.1 Non-Recursive Method of Generalized Least Squares (GLS) 291
10.1.2 Recursive Method of Generalized Least Squares (RGLS) . 294
10.2 Method of Extended Least Squares (ELS) 295
10.3 Method of Bias Correction (CLS) . 296
10.4 Method of Total Least Squares (TLS) 297
10.5 Instrumental Variables Method (IV) . 302
10.5.1 Non-Recursive Method of Instrumental Variables (IV) . 302
10.5.2 Recursive Method of Instrumental Variables (RIV) 305
10.6 Method of Stochastic Approximation (STA) 306
10.6.1 Robbins-Monro Algorithm 306
10.6.2 Kiefer-Wolfowitz Algorithm . 307
10.7 (Normalized) Least Mean Squares (NLMS) . 310
10.8 Summary . 315
Problems 316
References . 316
11 Bayes and Maximum Likelihood Methods 319
11.1 Bayes Method . 319
11.2 Maximum Likelihood Method (ML) . 323
11.2.1 Non-Recursive Maximum Likelihood Method 323
11.2.2 Recursive Maximum Likelihood Method (RML) 328
11.2.3 Cramér-Rao Bound and Maximum Precision . 330
11.3 Summary . 331
Problems 331
References . 332
12 Parameter Estimation for Time-Variant Processes . 335
12.1 Exponential Forgetting with Constant Forgetting Factor 335
12.2 Exponential Forgetting with Variable Forgetting Factor . 340
12.3 Manipulation of Covariance Matrix . 341
12.4 Convergence of Recursive Parameter Estimation Methods . 343
12.4.1 Parameter Estimation in Observer Form . 345
12.5 Summary . 349
Problems 350
References . 350
13 Parameter Estimation in Closed-Loop 353
13.1 Process Identification Without Additional Test Signals . 354
13.1.1 Indirect Process Identification (Case a+c+e) 355
13.1.2 Direct Process Identification (Case b+d+e) . 359
13.2 Process Identification With Additional Test Signals 361
13.3 Methods for Identification in Closed Loop 363
13.3.1 Indirect Process Identification Without Additional Test
Signals . 363
13.3.2 Indirect Process Identification With Additional Test Signals . 364
13.3.3 Direct Process Identification Without Additional Test Signals 364
13.3.4 Direct Process Identification With Additional Test Signals 364
13.4 Summary . 365
Problems 365
References . 366
Part IV IDENTIFICATION WITH PARAMETRIC MODELS — CONTINUOUS
TIME SIGNALS
14 Parameter Estimation for Frequency Responses . 369
14.1 Introduction . 369
14.2 Method of Least Squares for Frequency Response Approximation
(FR-LS) 370
14.3 Summary . 374
Problems 376
References . 376
15 Parameter Estimation for Differential Equations and Continuous
Time Processes . 379
15.1 Method of Least Squares 379
15.1.1 Fundamental Equations . 379
15.1.2 Convergence 382
15.2 Determination of Derivatives 383
15.2.1 Numerical Differentiation . 383
15.2.2 State Variable Filters . 384
15.2.3 FIR Filters 391
15.3 Consistent Parameter Estimation Methods 393
15.3.1 Method of Instrumental Variables . 393
15.3.2 Extended Kalman Filter, Maximum Likelihood Method 395
15.3.3 Correlation and Least Squares . 395
15.3.4 Conversion of Discrete-Time Models 398
15.4 Estimation of Physical Parameters 399
15.5 Parameter Estimation for Partially Known Parameters 404
15.6 Summary . 405
Problems 406
References . 406
16 Subspace Methods . 409
16.1 Preliminaries 409
16.2 Subspace 413
16.3 Subspace Identification 414
16.4 Identification from Impulse Response 418
16.5 Some Modifications to the Original Formulations 419
16.6 Application to Continuous Time Systems . 420
16.7 Summary . 423
Problems 423
References . 424
Part V IDENTIFICATION OF MULTI-VARIABLE SYSTEMS
17 Parameter Estimation for MIMO Systems 429
17.1 Transfer Function Models . 429
17.1.1 Matrix Polynomial Representation 431
17.2 State Space Models . 432
17.2.1 State Space Form 432
17.2.2 Input/Output Models . 438
17.3 Impulse Response Models, Markov Parameters 439
17.4 Subsequent Identification 441
17.5 Correlation Methods 441
17.5.1 De-Convolution . 441
17.5.2 Test Signals . 442
17.6 Parameter Estimation Methods . 443
17.6.1 Method of Least Squares 446
17.6.2 Correlation Analysis and Least Squares 446
17.7 Summary . 447
Problems 449
References . 449
Part VI IDENTIFICATION OF NON-LINEAR SYSTEMS
18 Parameter Estimation for Non-Linear Systems 453
18.1 Dynamic Systems with Continuously Differentiable Non-Linearities 453
18.1.1 Volterra Series . 454
18.1.2 Hammerstein Model 455
18.1.3 Wiener Model . 457
18.1.4 Model According to Lachmann 458
18.1.5 Parameter Estimation . 458
18.2 Dynamic Systems with Non-Continuously Differentiable
Non-Linearities 460
18.2.1 Systems with Friction 460
18.2.2 Systems with Dead Zone 464
18.3 Summary . 465
Problems 465
References . 466
19 Iterative Optimization 469
19.1 Introduction . 469
19.2 Non-Linear Optimization Algorithms 471
19.3 One-Dimensional Methods . 473
19.4 Multi-Dimensional Optimization 476
19.4.1 Zeroth Order Optimizers 477
19.4.2 First Order Optimizers 478
19.4.3 Second Order Optimizers . 480
19.5 Constraints 484
19.5.1 Sequential Unconstrained Minimization Technique 484
19.6 Prediction Error Methods using Iterative Optimization 491
19.7 Determination of Gradients 494
19.8 Model Uncertainty . 495
19.9 Summary . 496
Problems 498
References . 499
20 Neural Networks and Lookup Tables for Identification . 501
20.1 Artificial Neural Networks for Identification 501
20.1.1 Artificial Neural Networks for Static Systems 502
20.1.2 Artificial Neural Networks for Dynamic Systems . 512
20.1.3 Semi-Physical Local Linear Models . 514
20.1.4 Local and Global Parameter Estimation 518
20.1.5 Local Linear Dynamic Models . 519
20.1.6 Local Polynomial Models with Subset Selection 524
20.2 Look-Up Tables for Static Processes . 530
20.3 Summary . 534
Problems 534
References . 535
21 State and Parameter Estimation by Kalman Filtering 539
21.1 The Discrete Kalman Filter 540
21.2 Steady-State Kalman Filter 545
21.3 Kalman Filter for Time-Varying Discrete Time Systems 546
21.4 Extended Kalman Filter . 547
21.5 Extended Kalman Filter for Parameter Estimation . 548
21.6 Continuous-Time Models 549
21.7 Summary . 549
Problems 550
References . 550
Part VII MISCELLANEOUS ISSUES
22 Numerical Aspects . 555
22.1 Condition Numbers . 555
22.2 Factorization Methods for P . 557
22.3 Factorization Methods for P⁻¹ . 558
22.4 Summary . 562
Problems 562
References . 563
23 Practical Aspects of Parameter Estimation 565
23.1 Choice of Input Signal 565
23.2 Choice of Sample Rate 567
23.2.1 Intended Application . 568
23.2.2 Fidelity of the Resulting Model 568
23.2.3 Numerical Problems 569
23.3 Determination of Structure Parameters for Linear Dynamic Models . 569
23.3.1 Determination of Dead Time . 570
23.3.2 Determination of Model Order . 572
23.4 Comparison of Different Parameter Estimation Methods 577
23.4.1 Introductory Remarks 577
23.4.2 Comparison of A Priori Assumptions . 579
23.4.3 Summary of the Methods Governed in this Book 581
23.5 Parameter Estimation for Processes with Integral Action 586
23.6 Disturbances at the System Input 588
23.7 Elimination of Special Disturbances . 590
23.7.1 Drifts and High Frequent Noise 590
23.7.2 Outliers 592
23.8 Validation . 595
23.9 Special Devices for Process Identification 597
23.9.1 Hardware Devices . 597
23.9.2 Identification with Digital Computers . 598
23.10 Summary . 598
Problems 599
References . 599
Part VIII APPLICATIONS
24 Application Examples 605
24.1 Actuators . 605
24.1.1 Brushless DC Actuators . 606
24.1.2 Electromagnetic Automotive Throttle Valve Actuator 612
24.1.3 Hydraulic Actuators 617
24.2 Machinery 628
24.2.1 Machine Tool . 628
24.2.2 Industrial Robot . 633
24.2.3 Centrifugal Pumps . 636
24.2.4 Heat Exchangers . 639
24.2.5 Air Conditioning . 644
24.2.6 Rotary Dryer 645
24.2.7 Engine Teststand . 648
24.3 Automotive Vehicles 651
24.3.1 Estimation of Vehicle Parameters . 651
24.3.2 Braking Systems . 655
24.3.3 Automotive Suspension . 663
24.3.4 Tire Pressure 667
24.3.5 Internal Combustion Engines 674
24.4 Summary . 679
References . 680
Part IX APPENDIX
A Mathematical Aspects 685
A.1 Convergence for Random Variables . 685
A.2 Properties of Parameter Estimation Methods 687
A.3 Derivatives of Vectors and Matrices . 688
A.4 Matrix Inversion Lemma 689
References . 690
B Experimental Systems 691
B.1 Three-Mass Oscillator . 691
References . 696
Index . 697
List of Symbols
Only frequently used symbols and abbreviations are given.
Letter symbols
a parameters of differential or difference equations, amplitude
b parameters of differential or difference equations
c spring constant, constant, stiffness, parameters of stochastic difference equations, parameters of physical model, center of Gaussian function
d damping coefficient, direct feedthrough, parameters of stochastic difference equations, dead time, drift
e equation error, control deviation e = w − y
e number e = 2.71828… (Euler's number)
f frequency (f = 1/TP, TP period time), function f(…)
fS sample frequency
g function g(…), impulse response
h step response, undisturbed output signal for method IV, h = yu
i index
i = √−1 imaginary unit
j integer, index
k discrete number, discrete time k = t/T0 = 0, 1, 2, … (T0: sample time)
l index
m mass, order number, model order, number of states
n order number, disturbance signal
p probability density function, process parameter, order number of
a stochastic difference equation, parameter of controller difference equation, number of inputs, p(x) probability density function
q index, parameter of controller difference equation, time shift operator with x(k)q⁻¹ = x(k−1)
r number of outputs
rP penalty multiplier
s Laplace variable s = δ + iω
t continuous time
u input signal change ΔU, manipulated variable
w reference value, setpoint, weight, w(t) window function
x state variable, arbitrary signal
y output signal change ΔY, signal
yu useful signal, response due to u
yz response due to disturbance z
z disturbance variable change ΔZ, z-transform variable z = e^(T0 s)
A denominator polynomial of process transfer function
B numerator polynomial of process transfer function
A denominator polynomial of closed-loop transfer function
B numerator polynomial of closed-loop transfer function
C denominator polynomial of stochastic filter equation, covariance
function
D numerator polynomial of stochastic filter equation, damping ratio
F filter transfer function
G transfer function
I second moment of area
J moment of inertia
K constant, gain
M torque
N discrete number, number of data points
P probability
Q denominator polynomial of controller transfer function
R numerator polynomial of controller transfer function, correlation
function
S spectral density, sum
T time constant, length of a time interval
T0 sample time
TM measurement time
TP period time
U input variable, manipulated variable (control input)
V cost function
W complex rotary operator for DFT and FFT
Y output variable, control variable
Z disturbance variable
a vector
b bias
b; B input vector/matrix
c; C output vector/matrix
e error vector
g vector of inequality constraints with g(x) ≤ 0
h vector of equality constraints with h(x) = 0
n noise vector
s search vector
u manipulated variables for neural net
v output noise
w state noise
x vector of design variables
y output vector
z operating point variables for neural net
A arbitrary matrix, state matrix
C covariance matrix, matrix of measurements for TLS
D direct feedthrough matrix
G transfer function matrix
G noise transfer function matrix
H Hessian matrix, Hadamard matrix
I identity matrix
K gain matrix
P correlation matrix
S Cholesky factor
T similarity transform
U input matrix for subspace algorithms
W weighting matrix
X state matrix
Y output matrix for subspace algorithms
Aᵀ transposed matrix
α factor, coefficients of closed-loop transfer function
β factor, coefficients of closed-loop transfer function
γ activation function
δ decay factor, impulse function, time shift
ε correlation error signal, termination tolerance, small positive number
ζ damping ratio
η noise-to-signal ratio
θ parameter
λ forgetting factor, cycle time of PRBS generator
μ membership function, index, time scaling factor for PRBS, order of controller transfer function
ν index, white noise (statistically independent signal), order of controller transfer function
ξ measurement disturbance
π number π = 3.14159…
ρ step width factor for stochastic approximation algorithms
τ time, time difference
φ angle, phase
ω angular frequency, ω = 2π/TP (TP period), rotational velocity ω(t) = φ̇(t)
ω0 undamped natural frequency
Δ change, deviation
Π product
Σ sum
Φ validity function, activation function, weighting function
ψ wavelet
γ correction vector
ψ data vector
θ parameter vector
 augmented error matrix
Σ covariance matrix of a Gaussian distribution, matrix of singular values
Φ transition matrix
Ψ data matrix
Mathematical abbreviations
exp(x) = e^x exponential function
dim dimension
adj adjoint
∠ phase (argument)
arg argument
cond condition number
cov covariance
det determinant
lim limit
max maximum (also as index)
min minimum (also as index)
plim probability limit
tr trace of a matrix
var variance
Re{…} real part
Im{…} imaginary part
QS controllability matrix
QSk extended reversed controllability matrix
E{…} expected value of a statistical variable
F Fourier transform
H Hermitian matrix
H Hankel matrix
H(f(x)) Hilbert transform
H Heaviside function
L Laplace transform
QB observability matrix
QBk extended observability matrix
z-transform directly from s-transform
T Markov parameter matrix, Töplitz matrix
Z z-transform
G*(iω) conjugate complex, sometimes denoted as Ḡ(iω)
‖·‖2 2-norm
‖·‖F Frobenius norm
V′ first derivative of V with respect to θ
V″ second derivative of V with respect to θ
∇f(x) gradient of f(x)
∇²f(x) Hessian matrix of f(x)
x̂ estimated or observed variable
x̃ estimation error
x̄ average, steady-state value
ẋ first derivative with respect to time t
x⁽ⁿ⁾ n-th derivative with respect to time t
x0 amplitude or true value
x00 value in steady state
x mean value
xS sampled signal
xδ Dirac series approximation
x normalized, optimal
xd discrete-time
x00 steady state or DC value
A⁺ pseudo-inverse
f/A orthogonal projection
f/B A oblique projection
Abbreviations
ACF auto-correlation function, e.g. Ruu(τ)
ADC analog digital converter
ANN artificial neural network
AGRBS amplitude modulated GRBS
APRBS amplitude modulated PRBS
AR auto regressive
ARIMA auto regressive integrating moving average process
ARMA auto regressive moving average process
ARMAX auto regressive moving average with external input
ARX auto regressive with external input
BLUE best linear unbiased estimator
CCF cross-correlation function, e.g. Ruy(τ)
CDF cumulative distribution function
CLS bias corrected least squares
COR-LS correlation analysis and method of least squares
CWT continuous-time wavelet transform
DARE differential algebraic Riccati equation
DFT discrete Fourier transform
DSFC discrete square root filter in covariance form
DSFI discrete square root filter in information form
DTFT discrete time Fourier transform
DUDC discrete UD-factorization in covariance form
EIV errors in variables
EKF extended Kalman filter
ELS extended least squares
FFT Fast Fourier Transform
FIR finite impulse response
FLOPS floating point operations
FRF frequency response function
GLS generalized least squares
GRBS generalized random binary signal
GTLS generalized total least squares
IIR infinite impulse response
IV instrumental variables
KW Kiefer-Wolfowitz algorithm
LLM local linear model
LPM local polynomial model
LOLIMOT local linear model tree
LPVM linear parameter variable model
LQR linear quadratic regulator
LRGF locally recurrent global feedforward net
LS least squares
M model
MA moving average
MIMO multiple input, multiple output
ML maximum likelihood
MLP multi layer perceptron
MOESP Multi-variable Output Error State sPace
N4SID Numerical algorithms for Subspace State Space IDentification
NARX non-linear ARX model
NDE non-linear difference equation
NFIR non-linear FIR model
NN neural net
NOE non-linear OE model
ODE ordinary differential equation
OE output error
P process
PCA principal component analysis
PDE partial differential equation
PDF probability density function p(x)
PE prediction error
PEM prediction error method
PRBS pseudo-random binary signal
RBF radial basis function
RCOR-LS recursive correlation analysis and method of least squares
RGLS recursive generalized least squares
RIV recursive instrumental variables
RLS recursive least squares
RLS-IF recursive least squares with improved feedback
RML recursive maximum likelihood
SISO single input, single output
SNR signal to noise ratio
SSS strict sense stationary
STA stochastic approximation
STFT short time Fourier transform
STLS structured total least squares
SUB subspace
SUMT sequential unconstrained minimization technique
SVD singular value decomposition
TLS total least squares
WLS weighted least squares
WSS wide sense stationary
ZOH zero order hold
Index
A-optimal, 566
ACF, see auto-correlation function (ACF)
activation, 503
actuators, 605
adaptive control, 353, 356
air conditioning, 644–645
aliasing, 42, 80
amplitude-modulated generalized random
binary signal (AGRBS), 174
amplitude-modulated pseudo-random binary
signal (APRBS), 174
analog-digital converter (ADC), 39
ANN, see network, artificial neural network
(ANN)
AR, see model, auto-regressive (AR)
ARMA, see model, auto-regressive
moving-average (ARMA)
ARMAX, see model, auto-regressive
moving-average with exogenous input
(ARMAX)
artificial neural networks, see network,
artificial neural network (ANN)
ARX, see model, auto-regressive with
exogenous input (ARX)
auto-correlation function (ACF), 48, 55,
153–154, 179–181, 184–189, 264
auto-covariance function, 50, 55
auto-regressive (AR), see model, autoregressive (AR)
auto-regressive moving-average (ARMA),
see model, auto-regressive movingaverage (ARMA)
auto-regressive moving-average with exogenous input (ARMAX), see model,
auto-regressive moving-average with
exogenous input (ARMAX)
auto-regressive with exogenous input
(ARX), see model, auto-regressive
with exogenous input (ARX)
automotive applications, see tire pressure
electric throttle, see DC motor
engine, see engine, internal combustion
engine
engine teststand, see engine teststand
one-track model, see one-track model
automotive braking system, 655–663
hydraulic subsystem, 655–658
pneumatic subsystem, 658–663
automotive suspension, 663–665
automotive vehicle, 651–679
averaging, 256, 278
a priori assumptions, 18, 423, 449, 570, 579,
595
a priori knowledge, 10, 18, 404
bandpass, 20
Bayes
rule, 322
estimator, 331
method, 319–323
rule, 320
Bayesian information criterion (BIC), 574
best linear unbiased estimator (BLUE), 217
bias, 687
bias correction, see least squares, bias
correction (CLS)
bias-variance dilemma, 502
bilinear transform, 389
binary signal, 127
discrete random, see discrete random
binary signal (DRBS)
generalized random, see generalized
random binary signal (GRBS)
pseudo-random, see pseudo-random
binary signal (PRBS)
random, see random binary signal (RBS)
bisection algorithm, 476
Box Jenkins (BJ), see model, Box Jenkins
(BJ)
Brushless DC motor (BLDC), see DC motor
Butterworth filter, 385
canonical form
block diagonal form, 438
controllable canonical form, 386, 436, 438
Jordan canonical form, 438
observable canonical form, 436, 438, 522
simplified P-canonical form, 439
CCF, see cross-correlation function (CCF)
centrifugal pumps, 636–638
characteristic values, 16, 58–71, 585
χ²-distribution, 237
chirp, see sweep sine
closed-loop identification, 19, 23, 175–176,
353–365
direct identification, 359–361, 364–365
indirect identification, 355–359, 363–364
closed-loop process, see process, closed-loop
CLS, see least squares, bias correction (CLS)
comparison of methods, 15, 581
condition of a matrix, 555–556
constraint, 218, 283–284, 472, 484–486
controllability matrix, 44, 410, 435, 436, 440
convergence, 246, 382, 685–686
non-recursive least squares (LS), 229–235
recursive least squares (RLS), 343–349
convolution, 34, 42, 54, 454
COOK’s D, 594
COR-LS, see least squares, correlation and
least squares (COR-LS)
correlation analysis, 16, 20, 154–161, 190
correlation function
fast calculation, 184–189
recursive calculation, 189
correlogram, see auto-correlation function
(ACF)
cost function, 204, 470, 572
covariance function, 50
covariance matrix, 303, 343
blow-up, 340
manipulation, 341–343
Cramér-Rao bound, 217, 330–331
cross-correlation function (CCF), 48, 55,
150–153, 181–189, 264
cross-covariance function, 50, 55
data matrix, 211
data vector, 225
DC motor
brushless DC motor (BLDC), 606–612
classical DC motor, 612–617
feed drive, see machining center
de-convolution, 154–161, 175–176,
190–197, 585
for MIMO systems, 441–442
dead time, 42, 570–572
dead zone, 464
decomposition
singular value decomposition (SVD), see
singular value decomposition (SVD)
derivatives, 383–393, 494–495
design variables, 472
DFBETAS, 594
difference equation, 43
DFFITS, 594
DFT, see discrete Fourier transform (DFT)
difference equation, 57, 225
stochastic, 276
differencing, 255, 278
differential equation
ordinary differential equation (ODE), see
ordinary differential equation (ODE)
partial differential equation (PDE), see
partial differential equation (PDE)
digital computer, 598
discrete Fourier transform (DFT), 80, 86
discrete random binary signal (DRBS),
163–164
discrete square root filtering in covariance
form (DSFC), 557–558
discrete square root filtering in information
form (DSFI), 558–561
discrete time Fourier transform (DTFT), 79
discretization, 387–391
distribution
χ²-distribution, see χ²-distribution
Gaussian, see normal distribution
normal, see normal distribution
disturbance, 8
downhill simplex algorithm, 477
drift elimination, 590
DSFC, see discrete square root filtering in
covariance form (DSFC)
DSFI, see discrete square root filtering in
information form (DSFI)
DTFT, see discrete time Fourier transform
(DTFT)
efficiency, 216, 217, 688
EKF, see extended Kalman filter (EKF)
ELS, see least squares, extended least
squares (ELS)
engine
engine teststand, 648–651
internal combustion engine, 512, 533,
674–679
ergodic process, 47
error
equation error, 13, 225, 380
input error, 13
metrics, 204, 226, 470, 578
output error, 13, 58
sum of squared errors, 204
error back-propagation, 507
errors in variables (EIV), 300, 589
estimation
consistent, 233
efficient, see efficiency
explicit, 256, 279
implicit, 256, 279
sufficient, 688
estimator
consistent, 687
consistent in the mean square, 687
unbiased, 687
Euclidean distance, 212, 503, 507
excitation
persistent, see persistent excitation
exponential forgetting, 281–284, 335
constant forgetting factor, 335–340
variable forgetting factor, 340–341
extended Kalman filter (EKF), 17, 395,
547–549, 584
extended least squares (ELS), see least
squares, extended least squares (ELS)
fast Fourier transform (FFT), 82–88
FFT, see fast Fourier transform
filter
Butterworth, see Butterworth filter
FIR, 391
finite differencing, 384
finite impulse response (FIR), see model,
finite impulse response (FIR), 391
Fisher information matrix, 218, 336
Fletcher-Reeves algorithm, 479
forgetting factor, 336, 339
Fourier
analysis, 16, 20, 99
series, 77–78
transform, 35, 78–82, 99–108
FR-LS, see least squares, frequency response
approximation (FR-LS)
frequency response, 35, 37
frequency response approximation (FR-LS),
see least squares, frequency response
approximation (FR-LS)
frequency response function, 99, 108–117,
134, 369, 585
frequency response measurement, 16
friction, 460–464, 488
Gauss-Newton algorithm, 483
Gaussian distribution, see normal distribution
generalization, 501
generalized least squares (GLS), see least
squares, generalized least squares
(GLS)
generalized random binary signal (GRBS),
172–174
generalized total least squares (GTLS), see
least squares, generalized total least
squares (GTLS)
generalized transfer function matrix, 430
Gibbs phenomenon, 78
Givens rotation, 560
GLS, see least squares, generalized least
squares (GLS)
golden section search, 475
gradient, 472
gradient descent algorithm, see steepest
descent algorithm
gradient search, see steepest descent
algorithm
GTLS, see least squares, generalized total
least squares (GTLS)
Hammerstein model, 455–458
Hankel matrix, 411, 418, 440
heat exchangers, 639–642
Heaviside function, 34
Hessian matrix, 473
Hilbert transform, 370
hinging hyperplane tree (HHT), see network,
hinging hyperplane tree (HHT)
Householder transform, 560
hydraulic actuator, 617–628
identifiability, 246–255, 363, 403, 459
closed-loop, 355–357, 360
structural, 250
identification
definition of, 2, 8
implicit function theorem, 402
impulse response, 34, 40, 58, 66
MIMO system, 439–440
industrial robot, 633–636
information criterion
Akaike, see Akaike information criterion
(AIC)
Bayesian, see Bayesian information
criterion (BIC)
information matrix, 575–576
innovation, 542
input
persistently exciting, 251
instrumental variables
recursive (RIV), see least squares,
recursive instrumental variables (RIV)
instrumental variables (IV), 393
non-recursive, see least squares, nonrecursive instrumental variables
(IV)
internal combustion engine, see engine,
internal combustion engine
intrinsically linear, 215
IV, see least squares, non-recursive
instrumental variables (IV)
Kalman filter, 540–547
extended, see extended Kalman filter
(EKF)
Kalman-Bucy filter, 549
Kalman-Schmidt-Filter, 547
steady-state Kalman filter, 545–546
Kiefer-Wolfowitz algorithm, see least
squares, Kiefer-Wolfowitz algorithm
(KW)
Kronecker delta, 56
Kurtosis, 596
KW, see least squares, Kiefer-Wolfowitz
algorithm (KW)
L-optimal, 566
Laplace transform, 36, 99
layer, 504
least mean squares (LMS), see least squares,
least mean squares (LMS)
least squares, 331
bias, 235
bias correction (CLS), 296–297, 582
continuous-time, 379–383, 582
correlation and least squares (COR-LS),
264–267, 395, 446–447, 583
covariance, 236–238
direct solution, 229
eigenvalues, 346–347
equality constraint, 218
exponential forgetting, see exponential
forgetting
extended least squares (ELS), 295–296,
582
frequency response approximation
(FR-LS), 370–374, 583
generalized least squares (GLS), 291–294,
582
generalized total least squares (GTLS),
300
geometrical interpretation, 212–214
instrumental variables (IV), 393
Kiefer-Wolfowitz algorithm (KW),
307–310
least mean squares (LMS), 310–315
MIMO system, 446
non-linear static process, 210–212, 216
non-parametric intermediate model,
262–269
non-recursive (LS), 223–245, 581
non-recursive instrumental variables (IV),
302–304, 582
non-recursive least squares (LS), 558–560
normalized least mean squares (NLMS),
310–315, 584
recursive (RLS), 269–278, 345–349
recursive correlation and least squares
(RCOR-LS), 267
recursive extended least squares (RELS),
584
recursive generalized least squares
(RGLS), 294
recursive instrumental variables (RIV),
305, 584
recursive least squares (RLS), 557,
560–561, 584
recursive weighted least squares (RWLS),
280–281
start-up of recursive method, 272–274,
340
stochastic approximation (STA), 306–315,
584
structured total least squares (STLS), 301
Tikhonov regularization, see Tikhonov
regularization
total least squares (TLS), 297–301, 582
weighted least squares (WLS), 279–280,
373
Levenberg-Marquardt algorithm, 484
leverage, 593
likelihood function, 215, 320, 324
linear in parameters, 214
LMS, see least squares, least mean squares
(LMS)
locally recurrent and globally feedforward
networks (LRGF), see network, locally
recurrent and globally feedforward
networks (LRGF)
log-likelihood function, 325
LOLIMOT, see network, local linear model tree (LOLIMOT), 508
look-up tables, 530
LRGF, see network, locally recurrent
and globally feedforward networks
(LRGF)
LS, see least squares, non-recursive (LS)
MA, see model, moving-average (MA)
machining center, 628–630
Markov estimator, 279–280, 331
Markov parameters, 410, 437, 439–440
matrix calculus, 688
matrix inversion lemma, 689
matrix polynomial model, 431
maximum likelihood, 215–216, 321
non-linear static process, 216
non-recursive (ML), 323–327, 583
recursive (RML), 328–329, 584
maximum likelihood (ML), 395
maximum likelihood estimator, 331
ML, see maximum likelihood, non-recursive
(ML)
MLP, see network, multi layer perceptron
(MLP)
model
auto-regressive (AR), 57
auto-regressive moving-average (ARMA),
58
auto-regressive moving-average with
exogenous input (ARMAX), 58
auto-regressive with exogenous input
(ARX), 58
black-box, 5, 34
Box Jenkins (BJ), 58
canonical state space model, 447
dead time, see dead time
finite impulse response (FIR), 58
fuzzy, 509
gray-box, 5
Hammerstein, see Hammerstein model
Hankel model, 440
input/output model, 438–439
Lachmann, 458
local polynomial model (LPM), 524
matrix polynomial, see matrix polynomial
model
moving-average (MA), 57
non-linear, 454–458
non-linear ARX (NARX), 523
non-linear finite impulse response (NFIR),
455
non-linear OE (NOE), 523
non-parametric, 13, 15, 34
order, 572–577
P-canonical, 430
parallel, 469
parametric, 13, 18, 37, 39
projection-pursuit, 457
semi-physical model, 514
series model, 470
series-parallel model, 470
simplified P-canonical, 431
state space, 432–439, 447
state space model, 409
structure parameters, 569–577
Uryson, 457
white-box, 5
Wiener, see Wiener model
model adjustment, 16
model uncertainty, 236–238, 495–496
modeling
experimental, 3, 7
theoretical, 3, 7
moving-average (MA), see model,
moving-average (MA)
multi layer perceptron (MLP), see network,
multi layer perceptron (MLP)
multifrequency signal, 127, 567
Nelder-Mead algorithm, see downhill
simplex algorithm
network
artificial neural network (ANN), 17, 501,
586
hinging hyperplane tree (HHT), 510
local linear model tree (LOLIMOT), 508
locally recurrent and globally feedforward
networks (LRGF), 512
multi layer perceptron (MLP), 504
radial basis function network (RBF), 507,
508
structure, 504
neuron, 503
Newton algorithm, 481
Newton-Raphson algorithm, 476
NFIR, see model, non-linear finite impulse
response (NFIR)
NLMS, see least squares, normalized least
mean squares (NLMS)
non-linear ARX model (NARX), see model,
non-linear ARX (NARX)
non-linear finite impulse response (NFIR),
see model, non-linear finite impulse
response (NFIR)
non-linear OE model (NOE), see model,
non-linear OE (NOE)
norm
Frobenius norm, 298
normal distribution, 216
normalized least mean squares (NLMS), see
least squares, normalized least mean
squares (NLMS)
objective function, 472
observability matrix, 45, 410, 440
one-track model, 651–654
optimization
bisection algorithm, 476
constraints, 484–486
downhill simplex algorithm, 477
first order methods, 476, 478
Gauss-Newton algorithm, 483
golden section search, 475
gradient, 494–495
gradient descent algorithm, 478
iterative, 585
Levenberg-Marquardt algorithm, 484
multi-dimensional, 476–484
Newton algorithm, 481
Newton-Raphson algorithm, 476
non-linear, 471
one-dimensional, 473–476
point estimation, 474
quasi-Newton algorithms, 482
region elimination algorithm, 474
second order methods, 476, 480
trust region method, 484
zeroth order methods, 474, 477
order test, see model, order
ordinary differential equation (ODE), 3, 37,
380
orthogonal correlation, 134–143, 585
orthogonality relation, 213
oscillation, 214
outlier detection and removal, 592–594
output vector, 211
P-canonical structure, 430, 431
parameter covariance, 303
parameter estimation, 16
extended Kalman filter (EKF), 548–549
iterative optimization, see optimization
method of least squares, see least squares
parameter vector, 211, 225
parameter-state observer, 346
partial differential equation (PDE), 3, 37
PCA, see principal component analysis
(PCA)
PDE, see partial differential equation (PDE)
penalty function, 484
perceptron, 504
Periodogram, 93–95
persistent excitation, 250
point estimation algorithm, 474
Polak-Ribiere algorithm, 480
pole-zero test, 576
polynomial approximation, 215, 387
power spectral density, 50, 56
prediction error method (PEM), 491–494
prediction, one step prediction, 225
predictor-corrector setting, 541
principal component analysis (PCA), 301
probability density function (PDF), 46
process
closed-loop, 588
continuous-time, 379–383, 420, 454,
460–464, 549
definition of, 1
integral, 586–588
integrating, 69
non-linear, 454–458
process analysis, 1
time-varying, 335–349
process coefficients, 37, 399
processes
statistically independent, 51
projection
oblique, 413
orthogonal, 413
prototype function, 91
pseudo-random binary signal (PRBS),
164–172, 196, 198, 442, 443, 567, 588
pulse
double, 104
rectangular, 102
simple, 100
trapezoidal, 101
triangular, 102
QR factorization, 559
quantization, 39
quasi-Newton algorithms, see optimization,
quasi-Newton algorithms
radial basis function network (RBF), see
network, radial basis function network
(RBF)
ramp function, 106
random binary signal (RBS), 161–162
RBF, see network, radial basis function
network (RBF)
realization, 44
minimal, 45
rectangular wave, 124
recursive generalized least squares (RGLS),
see least squares, recursive generalized
least squares (RGLS)
recursive least squares, see least squares,
recursive (RLS)
recursive parameter estimation
convergence, 343–349
region elimination algorithm, 474
regression, 205
orthonormal, 300
residual test, 576
resonant frequency, 62
RGLS, see least squares, recursive
generalized least squares (RGLS)
Riccati equation, 545
ridge regression, see Tikhonov regularization
RIV, see least squares, recursive instrumental
variables (RIV)
RLS, see least squares, recursive (RLS)
RML, see maximum likelihood, recursive
(RML)
Robbins-Monro algorithm, 306–307
rotary dryer, 645
sample rate, 567–569
sample-and-hold element, 42
sampling, 39, 42, 79, 381, 567–569
Schroeder multisine, 128
sequential unconstrained minimization
technique (SUMT), 484
Shannon’s theorem, 42, 79
short time Fourier transform (STFT), 20,
89–90
signal
amplitude-modulated generalized random
binary, see amplitude-modulated
generalized random binary signal
(AGRBS)
amplitude-modulated pseudo-random
binary, see amplitude-modulated
pseudo-random binary signal
(APRBS)
discrete random binary, see discrete
random binary signal (DRBS)
generalized random binary, see generalized
random binary signal (GRBS)
pseudo-random binary, see pseudorandom binary signal (PRBS)
random binary, see random binary signal
(RBS)
simplex algorithm, see downhill simplex
algorithm
singular value decomposition (SVD), 299,
420
spectral analysis, 93, 257–261
spectral estimation
parametric, 20
spectrogram, 90
spectrum analysis, 20
STA, see least squares, stochastic
approximation (STA)
state space, 38, 43, 409, 432–439
state variable filter, 384
stationary
strict sense stationary, 46
wide sense stationary, 47
steepest descent algorithm, 478
step function, 34, 106
step response, 34, 58, 59, 61, 65
STFT, see short time Fourier transform
(STFT)
STLS, see least squares, structured total least
squares (STLS)
stochastic approximation (STA), see least
squares, stochastic approximation
(STA)
stochastic signal, 45, 54
structured total least squares (STLS), see
least squares, structured total least
squares (STLS)
subspace
of a matrix, 413
subspace identification, 414–418
subspace methods, 17, 409–423, 586
SUMT, see sequential unconstrained
minimization technique (SUMT)
SVD, see singular value decomposition
sweep sine, 128–129
system
affine, 33
biproper, 224
definition of, 1
dynamic, 512
first order system, 59
linear, 33
second order system, 60
system analysis, 1
Takagi-Sugeno fuzzy model, 509
Taylor series expansion, 388
test signal, 21, 565–567
A-optimal, 566
D-optimal, 566
L-optimal, 566
MIMO, 442–443
Tikhonov regularization, 284
time constant, 59
tire pressure, 667–674
TLS, see least squares, total least squares
(TLS)
total least squares (TLS), see least squares,
total least squares (TLS)
training, 501
transfer function, 36, 37, 39, 42, 44
transition matrix, 38
trust region method, see Levenberg-Marquardt
algorithm
UD factorization, 557
validation, 12, 595–597
Volterra
model, 458
series, 454–455
wavelet transform, 20, 91–93
weighted least squares (WLS), see least
squares, weighted least squares (WLS)
white noise, 52, 56
Wiener model, 457–458
window
Bartlett window, 89, 90
Blackman window, 89, 91
Hamming window, 89, 90
Hann window, 89, 91
windowing, 88–89
WLS, see least squares, weighted least
squares (WLS)
Yule-Walker equation, 232, 234
z-transform, 40
zero padding, 88


The unzip password: books-world.net
I hope you find the content of this topic useful and that you like it.

Link from the Books World website to download the book Identification of Dynamic Systems - An Introduction with Applications
Direct link to download the book Identification of Dynamic Systems - An Introduction with Applications