allows us to obtain better parameter estimates, as is the case in the weighted method of least squares.

2.5 Model Verification and Validation

2.5.1 Introduction

As defined in Section 2.1, model verification answers the question of whether the implementable model reflects the conceptual model within the specified boundaries of accuracy, whereas the purpose of model validation is to show whether the implementable model is suitable for fulfilling the envisaged task within its field of application. In what follows, the most important methods in this field will be introduced. These originate from a very wide range of fields of application, some of which lie outside the engineering sciences. They are, however, general enough to be used in a technical context. Good overviews of the underlying literature can be found in Kleijnen [193], Cobelli et al. [72] and Murray-Smith [288], [289].

2.5.2 Model verification

Verification on the basis of the implementation methodology

The most direct form of verification takes place as early as the implementation stage and aims to ensure that, where possible, the errors to be identified by verification do not occur at all. This requires intervention into the methodology of model implementation. In this context, the same boundary conditions often apply as those for the development of software since, in this field too, a formal description based upon syntax and semantics is used for the formulation of a given technical content. Accordingly, most of the mechanisms that are used for software development also come into play here in order to avoid implementation errors. A few key words here, see Kleijnen [193], are: modular modelling, object-oriented modelling, or the 'chief modeller' principle, in which the actual implementation is as far as possible performed by a single person, whilst the other colleagues of the 'chief modeller' relieve him of all other tasks. In addition, there is the modular testing of submodels, so that modelling errors are recognised as early as possible and at lower levels. A further important aspect of verification lies in the correct definition of the scope of the model and in the ongoing checking to ensure that this scope is adhered to. Extrapolations beyond the guaranteed range should generally be treated with extreme caution.

Plausibility tests

Plausibility tests can also make a contribution to verification (and validation), see also Kramer and Neculau [206]. This is particularly true if they can be performed by means of simple manual calculations. They are based upon analytical considerations or the results of an initial simulation. The following criteria could possibly be drawn upon for plausibility tests:

Causality  The cause should precede the effect, both in reality and in the model. Any deviation from this principle indicates serious deficits in the model.

Balance principles  The principles of the conservation of energy and matter apply not only to the physical reality but also to the model itself.

Current/voltage laws  Currents, forces and moments at a node add up to zero; voltages and velocities add up to zero around a closed loop. These relationships apply to any electrical or mechanical system with lumped parameters.

Value range  State and output variables and parameters are normally associated with an applicable range of values. Although this is not necessarily precisely defined, unrealistic values can be recognised very quickly. For example, areas, volumes, energies and entropies can never be negative.

Consistency of units  Model equations are generally formulated without units. Nevertheless, it is often worthwhile using the consistency of units as a criterion for verification.
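Several of these criteria can be turned into simple automated checks on simulation output. The following Python sketch is purely illustrative: the signal names, the result dictionary and the one per cent energy-balance tolerance are assumptions introduced here, not part of the text.

```python
# Minimal sketch of automated plausibility checks on simulation output.
# The signal names, the result dictionary and the 1 % balance tolerance
# are illustrative assumptions, not taken from the original text.

def check_value_ranges(results):
    """Value-range criterion: physically non-negative quantities."""
    violations = []
    for name in ("area", "volume", "energy", "entropy"):
        values = results.get(name, [])
        if any(v < 0.0 for v in values):
            violations.append(f"{name} takes negative values")
    return violations

def check_energy_balance(p_in, p_out, p_stored, rel_tol=0.01):
    """Balance criterion: supplied power = dissipated + stored power."""
    violations = []
    for i, (pi, po, ps) in enumerate(zip(p_in, p_out, p_stored)):
        residual = pi - po - ps
        if abs(residual) > rel_tol * max(abs(pi), 1e-12):
            violations.append(f"balance violated at sample {i}: residual {residual:.3g}")
    return violations

# Example usage with fabricated data:
results = {"energy": [0.0, 0.5, 1.2], "volume": [1.0, 1.0, 1.0]}
print(check_value_ranges(results))                              # -> []
print(check_energy_balance([10.0, 10.0], [9.0, 9.5], [1.0, 0.0]))  # -> flags sample 1
```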
Verification on the basis of alternative models

There are often several methods or tools available for modelling and subsequent simulation. If two approaches are independent of each other in terms of methodology and realisation, then they can be used for mutual verification. This works because the probability of different errors producing the same effects falls as the number of independent simulation experiments rises. Still simpler is the case where one approach has already been verified. In this case verification is established directly by means of a sufficient number of experiments and a comparison between the model that has already been verified and the model to be verified. We see from this that absolute verification remains limited to a very small number of fields of application. In all other cases it is much more a question of deciding how many experiments must be performed before we are prepared to regard a model as having been verified. In this context, moreover, the required degree of correspondence, and consequently the accuracy of the model, has to be defined in advance.

Let us now illustrate this verification procedure on the basis of a few examples. We can use a logic simulator for the simulation of digital circuits, or, when considering the underlying transistor circuit, a circuit simulator can also be used. In principle, both simulators should deliver the same results, with the circuit simulator giving greater accuracy at a higher cost as a result of its analogue treatment of the circuit.

For simplified applications it is often possible to put forward analytical solutions that can be used for verification purposes. An example of this is the mechanical deformation of a rectangular or round plate under load, which can be calculated very simply in the form of an analytical equation. The resulting elastic line provides a starting point for the verification of a finite-element implementation of the same plate.
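A comparison of this kind can be automated once the required degree of correspondence has been fixed in advance. The sketch below is hypothetical: the analytical reference model, the numerically integrated candidate model, the test points and the five per cent tolerance all stand in for whatever pair of independent approaches is actually available.

```python
# Illustrative sketch: mutual verification of two independent models.
# 'reference_model' and 'candidate_model' are hypothetical stand-ins for,
# e.g., an already verified model and the model to be verified; the test
# points and the 5 % tolerance are assumptions that would normally be
# fixed in advance as the required degree of correspondence.
import math

def reference_model(t):
    # analytical step response of a first-order lag, gain 1, tau = 0.5 s
    return 1.0 - math.exp(-t / 0.5)

def candidate_model(t, dt=1e-4):
    # the same system integrated numerically with the explicit Euler method
    y, time = 0.0, 0.0
    while time < t:
        y += dt * (1.0 - y) / 0.5
        time += dt
    return y

def verify(points, rel_tol=0.05):
    """Return the list of test points where the two models disagree."""
    failures = []
    for t in points:
        ref, cand = reference_model(t), candidate_model(t)
        if abs(cand - ref) > rel_tol * max(abs(ref), 1e-12):
            failures.append((t, ref, cand))
    return failures

print(verify([0.1, 0.5, 1.0, 2.0]))   # expected: [] if both models agree
```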
Verification based upon visual inspection and animation

Another important verification method is the visual inspection ('eyeballing', see Kleijnen [193]) of the sequence of a simulation using a debugger or a comparable tool. Simulators for hardware description languages often offer such tools, which permit the representation of sequential modelling code as it is processed. Other forms of visualisation are the marking of Petri nets or the current state in state diagrams. However, visualisation can be used not only for the evaluation of the simulation process, but also for the representation of the simulation results. This is also vital because the resulting columns of figures are generally unsuitable for providing an overview of the system behaviour. The simplest and most widespread form is the x/y diagram, the x-axis of which is often time. In the field of electronic circuits this is usually sufficient. For the evaluation of mechanical behaviour, however, it often is not. In such cases animation procedures facilitate a better evaluation of the simulation results and thus better verification. It is self-evident that animation, like any other tool that aids the understanding of a model, also makes a contribution to validation, but this is not the subject of the present chapter.

Verification of the runtime behaviour

Occasionally tools are used that identify those parts of a model that contribute significantly to the running time. The classic approach is to determine, at regular intervals, the instruction currently being processed. This sampling allows us to obtain statistical information on how frequently individual instructions and modules are executed. It is entirely sufficient for the given purpose and adds little overhead to the running time of the programme under investigation. The information extracted can be used to selectively accelerate a model, which is of decisive importance particularly for more complex models that already have considerable running times.
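The sampling idea can be sketched in a few lines of Python. The interval of 10 ms, the use of a background thread that periodically inspects the main thread's current stack frame, and the toy model below are illustrative choices rather than a prescription from the text.

```python
# Minimal sketch of a sampling profiler, assuming the model runs in the
# main thread. A background thread records, at regular intervals, which
# function the main thread is currently executing and counts the hits.
import collections, sys, threading, time

def sample_runtime(run_model, interval=0.01):
    counts = collections.Counter()
    main_id = threading.get_ident()
    done = threading.Event()

    def sampler():
        while not done.is_set():
            frame = sys._current_frames().get(main_id)
            if frame is not None:
                counts[frame.f_code.co_name] += 1
            time.sleep(interval)

    t = threading.Thread(target=sampler, daemon=True)
    t.start()
    run_model()          # execute the (hypothetical) simulation run
    done.set()
    t.join()
    return counts.most_common()

# Hypothetical model with one expensive part and one cheap part:
def solve_equations():
    total = 0.0
    for i in range(2_000_000):
        total += i * i
    return total

def log_results():
    time.sleep(0.001)

def run_model():
    for _ in range(20):
        solve_equations()
        log_results()

print(sample_runtime(run_model))   # solve_equations should dominate the samples
```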
Formal verification

Formal verification will be considered here from the point of view of formal methods for the verification of digital circuits originating from microelectronics. Since the design of digital circuits is increasingly based upon modelling in hardware description languages, we can no longer differentiate the verification methods for the designs from the verification of the corresponding models. Now if the design and simulation models are exactly the same, there is no need for a separate verification. Occasionally, however, models are specially prepared for the simulation, which may be necessary for performance reasons. In this case it may be useful to perform a formal verification. This can be divided into two main fields: 'equivalence checking' and 'model checking'.

In the first case we are concerned with the functional comparison of a description with a reference description. One example could be the comparison between a gate netlist and a reference model at register-transfer level, which has been intensively simulated during the design process. This largely corresponds to the verification based upon alternative models. However, in this case simulation results are not compared, as is the case for the alternative-model verification. Instead, formal mathematical methods are used to prove equivalence.

'Model checking', on the other hand, is concerned with using mathematical methods to verify certain properties of a circuit. So, for example, for a traffic light circuit one could exclude the possibility of all sides showing a green light [211]. This is based upon the automatic construction of a formalised proof for the property in question. A similar principle is followed by Damm et al. in [77] for the formal verification of state diagrams of automotive systems. 'Model checking' can also be used for the validation of a model.
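The traffic-light example can be made concrete with a toy state-space exploration. The sketch below is not a real model checker; it merely enumerates all states reachable from the initial state of a hypothetical two-way intersection controller and asserts the safety property that both directions are never green at the same time. All state names and transitions are assumptions made for illustration.

```python
# Toy illustration of the idea behind model checking: exhaustively explore
# the reachable states of a small finite-state model and check a safety
# property. The controller (two directions, each cycling
# red -> green -> yellow -> red) is a hypothetical example, not from the text.

# Each state is a pair (light_ns, light_ew); the transition function lets
# a direction turn green only while the crossing direction is red.
def transitions(state):
    ns, ew = state
    nxt = []
    if ns == "red" and ew == "red":
        nxt += [("green", "red"), ("red", "green")]   # one direction may go
    if ns == "green":
        nxt.append(("yellow", ew))
    if ns == "yellow":
        nxt.append(("red", ew))
    if ew == "green":
        nxt.append((ns, "yellow"))
    if ew == "yellow":
        nxt.append((ns, "red"))
    return nxt

def reachable(initial):
    seen, stack = {initial}, [initial]
    while stack:
        for s in transitions(stack.pop()):
            if s not in seen:
                seen.add(s)
                stack.append(s)
    return seen

# Safety property: never green in both directions simultaneously.
states = reachable(("red", "red"))
assert all(not (ns == "green" and ew == "green") for ns, ew in states)
print(f"{len(states)} reachable states, property holds")
```

A real model checker works on far larger state spaces, typically with symbolic representations and temporal-logic properties, but the underlying principle of proving a property over all reachable states is the same.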
2.5.3 Model validation

Introduction

The validity of a model is always partially dependent upon the desired applications. This is clearly illustrated by the validation criteria listed below, see also Murray-Smith [288]:

Empirical validity  Correspondence between measurements and simulations.

Theoretical validity  Consistency of a model with accepted theories.

Pragmatic validity  Capability of the model to fulfil the desired purpose, e.g. as part of a controller.

Heuristic validity  Potential for testing hypotheses, for the explanation of phenomena and for the discovery of relationships.

These different validation requirements are the reason for the development of a whole range of validation strategies. In addition to the methods presented in the following sections there are also a few basic strategies that improve the degree to which models can be validated. In general, simpler models are easier to handle, and thus also easier to validate. In some cases it is also a good idea to take the model apart and then validate the components and their interconnection separately. Finally, it is occasionally worthwhile to selectively improve the quantity and quality of the measured data from the real system, which can, for example, be achieved by an experiment design tailored to the problem.

Direct validation based upon measured data

Validation should ensure the correspondence between the executable model and reality. To achieve this it is necessary to take measurements on real systems in order to compare them with the results of a simulation. Models are often used to obtain predictions about the future behaviour of a system. If such a model is predictively valid, it follows that the predictions are correct in relation to reality. However, the reverse is not necessarily true! It is quite possible for faulty models to produce correct predictions by coincidence. So we cannot say that a model is valid on the basis of simulation experiments, but at best that the model is not valid if false predictions are made. In principle, a greater number of simulation experiments does not change this situation; only the probability that the model is predictively valid increases with the number of experiments.

The possibility of performing experiments in reality and recording their results by measurement is limited. Correspondingly, the available data tends to be scarce in some cases. The resulting lack of sample points can cause difficulties in validation. But the opposite case can also lead to problems: if plenty of measurement data is available, a great deal of effort is occasionally necessary to extract the relevant content from the data.

An initial clue is provided by the visual comparison of measured data and simulation results, for which the input data of the model must be represented as precisely as possible in the simulation. Furthermore, a whole range of measures can be used to check the correspondence between measured data and simulation results. It is possible, as demonstrated by Murray-Smith in [289], to define various Q functions for the time-discrete case, which represent a degree of correspondence between the measured response $z_i$ and the result of the simulation $y_i$. The following formula shows the first possibility:

$$Q_1 = \sum_{i=1}^{n} (y_i - z_i) \cdot w_i \cdot (y_i - z_i) \qquad (2.8)$$

where $w_i$ denotes a weight. This formula can also be viewed as a weighted variant of equation (2.5). Another possibility is to use $Q_2$ to define a normalised degree of inequality:

$$Q_2 = \frac{\sum_{i=1}^{n} (y_i - z_i)^2}{\sum_{i=1}^{n} y_i^2 + \sum_{i=1}^{n} z_i^2} \qquad (2.9)$$
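Equations (2.8) and (2.9) translate directly into a few lines of Python; the measured and simulated sequences and the uniform weights below are invented purely to show the calculation.

```python
# Direct implementation of the correspondence measures (2.8) and (2.9).
# The measured response z, the simulated response y and the weights w
# are invented data used only to demonstrate the calculation.

def q1(y, z, w):
    """Weighted sum of squared deviations, equation (2.8)."""
    return sum(wi * (yi - zi) ** 2 for yi, zi, wi in zip(y, z, w))

def q2(y, z):
    """Normalised degree of inequality, equation (2.9)."""
    num = sum((yi - zi) ** 2 for yi, zi in zip(y, z))
    den = sum(yi ** 2 for yi in y) + sum(zi ** 2 for zi in z)
    return num / den

z = [0.00, 0.18, 0.33, 0.45, 0.55]   # measured response
y = [0.00, 0.19, 0.33, 0.44, 0.57]   # simulated response
w = [1.0] * len(z)                    # uniform weights

print(f"Q1 = {q1(y, z, w):.4e}")      # small value -> close correspondence
print(f"Q2 = {q2(y, z):.4e}")         # 0 for identical responses
```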