The values of Q2 lie between zero and one, with values close to one indicating a high level of inequality and values close to zero indicating a high level of equality between measurement and simulation. A further approach recurs in the objective functions of simulated annealing and genetic algorithms:

    Q_3 = \frac{1}{1 + \frac{1}{n} \sum_{i=1}^{n} (y_i - z_i)^2}        (2.10)

In this case values close to one indicate a good correspondence and lower values indicate a correspondingly poorer agreement.

Although these measures achieve a significantly better quantification of the correspondence between measurement and simulation than a visual comparison, unresolved problems remain. For example, in some cases it is worthwhile to move away from the individual values and draw upon general properties for comparison. One possibility is to make the comparison over the frequency range instead of over time, see Murray-Smith [289].

Validation based upon a system identification

One significant criterion for the validation of a model is how well or badly it can be identified, see the previous section on parameter estimation and system identification. Cobelli et al. [72] classify the validation methods according to whether the model is identifiable or nonidentifiable, whereby the former case is described as the simpler and the latter as the more complex. The applications considered stem from the field of physiology and medicine.

If a model is clearly identifiable, then the procedure of parameter estimation can be used to validate a predetermined model structure. In the first step the parameters of the model are identified so as to minimise the difference between measured and simulated data. Then the following information can be obtained about the validity of the model structure:

• A high standard deviation of the estimated parameters in the identification for various sets of measured data can indicate an invalid model, but it can also indicate non-negligible measurement errors.

• Systematic deficits in the approximation of the measured values by the simulation indicate that the structure of the model does not correctly reflect reality.

• Conversely, differences between the identified parameters and any known, nominal parameters can be evaluated. This is particularly interesting if the variance of the individual parameter estimates is known.

• Furthermore, it is also possible to subject the identified parameters to a plausibility analysis. In this connection, all available information on the system should be used to discover inconsistencies in the identified parameters.
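To make the two ideas above concrete, the following minimal sketch fits a model to several sets of synthetic measured data and reports both the Q3 measure of equation (2.10) and the spread of the estimated parameters across the data sets. The exponential-decay model, the parameter values and the noise level are invented for illustration and are not taken from the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def q3(y, z):
    """Goodness-of-fit measure of equation (2.10): 1 means a perfect match."""
    return 1.0 / (1.0 + np.mean((np.asarray(y) - np.asarray(z)) ** 2))

def model(t, a, tau):
    """Placeholder model structure: exponential decay (an assumption)."""
    return a * np.exp(-t / tau)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 100)
true_a, true_tau = 2.0, 1.5          # 'real system' used to fake measurements

# Identify the parameters separately for several sets of measured data.
estimates = []
for _ in range(10):
    y_meas = model(t, true_a, true_tau) + rng.normal(0.0, 0.05, t.size)
    p_hat, _ = curve_fit(model, t, y_meas, p0=(1.0, 1.0))
    estimates.append(p_hat)
    print(f"a = {p_hat[0]:.3f}, tau = {p_hat[1]:.3f}, "
          f"Q3 = {q3(y_meas, model(t, *p_hat)):.4f}")

# A high standard deviation across the data sets can indicate an invalid
# model structure, or simply non-negligible measurement errors.
print("std of estimates:", np.std(estimates, axis=0))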
Most procedures and tools for system identification are only suitable for linear models. Furthermore, various aspects of even nonlinear models can be considered if a linearisation is performed.

Validation based upon the 'model distortion' approach

The 'model distortion' approach, see Butterfield [54] and Cameron [58], is similar to validation by identification. The main idea behind it is to calculate the 'distortion' of the parameters necessary to obtain simulation results that correspond precisely with the measurements at every point in time. The gap between the nominal parameters and the newly determined parameters, which alters from one moment to the next, becomes a measure for the quality of the model. In particular, it is possible to investigate whether these new parameters lie within an accepted variation of the nominal parameters. Once again, measuring precision is a problem in this approach, and this can significantly limit the value of the possible predictions. The 'model distortion' approach was originally used for the validation of models for heavy water reactors.

Validation based upon a sensitivity analysis

It is not generally possible to determine the values of the parameters of a simulation model precisely. However, it is almost always possible to define intervals within which the value of a parameter always lies. The validity of a model is questionable if the variation of a parameter within its interval leads to significant variations in the simulation results. This is generally because parameters enter the model behaviour in nonlinear form. In such cases, sensitivity analysis can supply important indications of validity problems, see Kleijnen [193]. In the simplest case, the sensitivity S is determined using the perturbation method for a property F of the circuit and a parameter P, by varying the parameter by ΔP and evaluating the change ΔF in the circuit property:

    S = \frac{\partial F}{\partial P} \approx \frac{\Delta F}{\Delta P}        (2.11)

It is often worthwhile to normalise the sensitivity in this connection:

    S = \frac{\partial F / F}{\partial P / P} \approx \frac{P \cdot \Delta F}{F \cdot \Delta P}        (2.12)

However, this can lead to problems if F or P is close or equal to zero.
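The perturbation method of equations (2.11) and (2.12) can be sketched in a few lines. The following example assumes, purely for illustration, an RC low-pass filter whose cut-off frequency plays the role of the circuit property F and whose resistance plays the role of the parameter P; the component values are invented.

```python
import numpy as np

def cutoff_frequency(R, C):
    """Circuit property F: cut-off frequency of an RC low-pass filter."""
    return 1.0 / (2.0 * np.pi * R * C)

R, C = 1e3, 1e-6          # nominal parameter values (illustrative)
dR = 0.01 * R             # perturbation delta-P: 1 % of the nominal value

F0 = cutoff_frequency(R, C)
F1 = cutoff_frequency(R + dR, C)

S = (F1 - F0) / dR                    # equation (2.11): S approx dF/dP
S_norm = (R * (F1 - F0)) / (F0 * dR)  # equation (2.12): dimensionless

print(f"F = {F0:.1f} Hz, S = {S:.4f} Hz/Ohm, normalised S = {S_norm:.3f}")
```

The normalised sensitivity of approximately -1 reflects the fact that the cut-off frequency is inversely proportional to the resistance.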
Validation based upon a Monte-Carlo simulation

The sensitivity analysis described in the previous section allows us to investigate the effects of a parameter or possibly to set the individual sensitivities of several parameters off against each other. However, the parameters and their variations are not independent of each other with regard to their effect upon the events of the simulation. On the other hand, for reasons of running time it is not possible to itemise all combinations of parameter variations and subject each to a sensitivity analysis. Nevertheless, in order to do justice to these cross-sensitivities to some degree, we can predetermine intervals and statistical distributions for the 'suspect' parameters and run a large number of simulations, each with statistically dispersed parameters. However, we cannot prove the validity of the simulation in this manner; we can only say that the check has failed, or has not failed, after a certain number of experiments. In the former case the matter is clear; in the latter the risk of a failure of validity has at least been reduced. For this reason, this method is also called risk analysis by Kleijnen [193]. The methodology described is already built into many circuit simulators. There it is generally used not for the validation of models, but for the evaluation of the yield of fabricated circuits, taking into consideration the component tolerances.
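A minimal sketch of such a Monte-Carlo check, reusing the RC filter from the previous example; the tolerance intervals, the uniform distributions and the acceptance band are assumptions chosen for illustration, not values from the text.

```python
import numpy as np

def cutoff_frequency(R, C):
    return 1.0 / (2.0 * np.pi * R * C)

rng = np.random.default_rng(1)
n_runs = 10_000

# Predetermined intervals/distributions for the 'suspect' parameters:
# uniform within +/-5 % (R) and +/-10 % (C) of the nominal values.
R = rng.uniform(0.95e3, 1.05e3, n_runs)
C = rng.uniform(0.9e-6, 1.1e-6, n_runs)

F = cutoff_frequency(R, C)

# Acceptance band for the simulated property (assumed specification).
ok = (F > 140.0) & (F < 180.0)
print(f"yield / pass rate: {ok.mean():.1%}")
# A run outside the band means the check has failed; the absence of such
# runs does not prove validity, it only reduces the risk of invalidity.
```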
Validation based upon model hierarchy

This method aims to achieve the validation of a model based upon the validation of its components, whereby the interconnection of the components occurs directly within the model and thus is noncritical in relation to validation.

A simple example of this is the validation of the model of a circuit that is described in the form of a netlist of components such as transistors, diodes, etc. If we assume that the netlist represents the actual connection structure of the circuit, then the validation of the circuit model is transformed into the validation of the component models. If only a few component types are used, each of which can be individually adapted by parameterisation to give the desired component, then the validation of all circuits created from these components requires only one validation per component model. Thus the validation of circuit models can in principle be considered as having been solved. The only further point of interest is the consideration of macromodels for circuit blocks such as operational amplifiers, which offer advantages in terms of simulation speed due to more abstract modelling.

A similar approach is also followed in the object-oriented modelling of multibody systems and in the creation of block-oriented models for control engineering, although the diversity of basic models is significantly greater in these cases. An example of this is the 'open loop' simulation method described by Gray and Murray-Smith [123], in which a system model is broken down into component models, each of which is individually simulated with real measured data at its inputs. An example application for this is the rotor dynamics of a helicopter.

Validation based upon inverse models

In [44] Bradley et al. consider the modelling of a helicopter. To validate the developed model, flight trials are performed in which the pilot has to perform a predetermined manoeuvre. His control inputs are used as the stimuli for the simulation. A validation of the model cannot be achieved for certain manoeuvres, because the pilot and helicopter form a control loop in which even the smallest deviations quickly accumulate into large discrepancies between reality and simulation; the measured control movements are correct only for reality. In order to achieve a validation nevertheless, Bradley et al. propose also considering the inverse of the simulation. In this case the desired flight movements are predetermined, and an inverse model in the form of an ideal pilot calculates the necessary control inputs for the helicopter. This avoids the accumulation of faults described above. The validity of the helicopter model is then demonstrated on the basis of the outputs it supplies for the inputs generated using the inverse model. The criteria from the previous section, on direct validation based upon measured data, can again be applied here.

2.6 Model Simplification

In some cases the precision of some (sub)models is greater than is necessary for the purposes of the simulation. This is not critical as long as the efficiency of the simulation is not a problem. However, if the simulation times become too great then it makes sense to consider the simplification of models, see for example Kortüm and Troch [203] or Zeigler [435]. According to Zeigler, the following strategies can be drawn upon to achieve the simplification of a basic model:

• Omission of components, variables and/or interaction rules.
• Replacement of deterministic descriptions by stochastic descriptions.
• Coarsening of the value range of variables.
• Grouping of components into blocks and combining the associated variables.

The first method assumes that not all factors are equally important for the determination of the behaviour of a model. Typically, these factors are classified as first and second-order effects. The behaviour of a model usually depends primarily upon a few first-order effects, whilst the second-order effects, although numerous, can generally be neglected without significantly detracting from the validity of the resulting model. Here too the principle applies that the validity of a model is always established from the point of view of the application. A further difficulty is that the omission of components, variables or interaction rules can have side effects on other parts of the model. For example, an eliminated variable may leave behind gaps in various interaction rules, each of which needs to be carefully closed. This process is not trivial.

The second principle is based upon the observation that in many cases a stochastic formulation is significantly simpler to create than a complete deterministic description. Thus, in the investigation of the performance of a computer, for example, a proportionately weighted mix of instructions is used instead of considering individual programmes and their sequence, as the sketch below illustrates.
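The following minimal sketch of this stochastic replacement uses an invented instruction mix with invented cycle counts: instead of simulating a concrete programme instruction by instruction, the expected number of cycles per instruction (CPI) is computed from the weighted mix.

```python
# Stochastic simplification: replace a deterministic instruction trace
# by a proportionately weighted instruction mix (all numbers invented).
instruction_mix = {
    # class: (fraction of executed instructions, cycles per instruction)
    "alu":    (0.50, 1.0),
    "load":   (0.20, 3.0),
    "store":  (0.10, 2.0),
    "branch": (0.20, 2.5),
}

# Expected CPI as the mix-weighted average of the per-class costs.
cpi = sum(frac * cycles for frac, cycles in instruction_mix.values())

clock_hz = 1e9                    # assumed 1 GHz clock
mips = clock_hz / cpi / 1e6       # million instructions per second
print(f"CPI = {cpi:.2f}, throughput = {mips:.0f} MIPS")
```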
The third point recommends the coarsening of the value range of variables, such as occurs in electronics in the transition from an analogue to a digital consideration. In this approach the variables, and of course also the components and interaction rules, are initially retained. But one value now covers a whole interval of values in the original model, the individual values of which can no longer be addressed. This may lead to changes in the formulation of the interaction rules.

Finally, the fourth principle is based upon the combination of components and variables. For example, the distortion of a capacitive pressure element in space can be described by a large number of positional variables. From an electrical point of view, however, only the resulting capacitance of the structure is of interest and not the strain. The capacitance, on the other hand, is a single numeric value, which is, however, partially determined by the mechanical strain.

All these methods thus serve to obtain a simulatable description from the more theoretical basis of a conceptual model without, in the process, losing validity for the application cases of interest.

2.7 Simulators and Simulation

2.7.1 Introduction

The models introduced in the previous sections can be automatically evaluated in numerous ways. This is called simulation.³ Before electronics came into being, attempts were made to construct mechanical equipment that displayed the same relationships between the variables as held in the model. Worth mentioning in this context are, for example, the tide prediction device (1879) of Lord Kelvin and the mechanical differential analyzer (1930) of Vannevar Bush. After the Second World War the development of electronics resulted in the analogue computer, which was successfully employed in the aircraft industry, for example. The field of simulation gained new impetus with the introduction of the digital computer, which brought the advantage that adaptation to a new simulation problem did not require changes to the hardware, but only different software. Today we differentiate between a whole range of simulator classes in the field of application of mechatronics and micromechatronics, the most important of which are listed in Table 2.1.

³ The word simulation is derived from the Latin verb simulare, which means to feign.

2.7.2 Circuit simulation

A circuit simulation considers networks of components such as transistors, diodes, resistors, capacitors, coils, etc. The variables of interest here are generally voltages and currents, which are represented in continuous form. Nonlinear, differential-algebraic equation systems have to be solved, which arise as a result of the component equations and Kirchhoff's laws; a minimal illustration follows below.
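As an illustration of the kind of nonlinear equation such a simulator has to solve, the following sketch applies Newton's method to the nodal equation of a series resistor and diode circuit. The source voltage, resistance and diode parameters are invented for illustration; a real circuit simulator would embed such an iteration within the time integration of the full differential-algebraic system.

```python
import numpy as np

# Series circuit: voltage source Vs -> resistor R -> diode to ground.
# Kirchhoff's current law at the diode node: (Vs - V)/R - Id(V) = 0.
Vs, R = 5.0, 1e3            # assumed source voltage and resistance
Is, Vt = 1e-12, 0.02585     # assumed saturation current, thermal voltage

def f(V):
    """Nodal equation; zero at the operating point."""
    return (Vs - V) / R - Is * (np.exp(V / Vt) - 1.0)

def df(V):
    """Derivative of the nodal equation, needed by Newton's method."""
    return -1.0 / R - (Is / Vt) * np.exp(V / Vt)

V = 0.6                     # initial guess for the diode voltage
for _ in range(50):
    step = f(V) / df(V)
    V -= step
    if abs(step) < 1e-12:   # converged
        break

print(f"diode voltage V = {V:.4f} V, "
      f"current I = {(Vs - V) / R * 1e3:.3f} mA")
```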