© 2000 by CRC Press LLC

78.6 The Interdisciplinary Nature of Simulation

The subject of computer-aided design and analysis of communication systems is very much interdisciplinary in nature. The major disciplines that bear on the subject are communication theory, DSP, numerical analysis, and stochastic process theory. The roles played by these subjects are clear. The simulation user must have a working knowledge of communication theory if the simulation results are to be understood. The analysis techniques of communication theory allow simulation results to be verified. Since each subsystem in the overall communication system is a signal processing operation, the tools of DSP provide the algorithms to realize filters and other subsystems. Numerical analysis techniques are used extensively in the development of signal processing algorithms. Since communication systems involve random data signals, as well as noise and other disturbances, the concepts of stochastic process theory are important in developing models of these quantities and also for determining performance estimates.

78.7 Model Design

Practicing engineers frequently use models to investigate the behavior of complex systems. Traditionally, models have been physical devices or sets of mathematical expressions. The widespread use of powerful digital computers now allows one to generate computer programs that model physical systems. Although the detailed development and use of computer models differ significantly from their physical and mathematical counterparts, the computer models share many of the same design constraints and trade-offs. For any model to be useful, one must guarantee that the response of the model to stimuli will closely match the response of the target system, that the model can be designed and fabricated in much less time and at significantly less expense than the target system, and that the model is reasonably easy to validate and modify. In addition to these constraints, designers of computer models must ensure that the amount of processor time required to execute the model is not excessive. The optimal model is the one that appropriately balances these conflicting requirements.

Figure 78.3 describes the typical design trade-off faced when developing computer models. A somewhat surprising observation is that the optimal model is often not the one that most closely approximates the target system. A highly detailed model will typically require a tremendous amount of time to develop, will be difficult to validate and modify, and may require prohibitive processor time to execute. Selecting a model that achieves a good balance between these constraints is as much an art as a science. Being aware of the trade-offs that exist, and that must be addressed, is the first step toward mastering the art of modeling.

FIGURE 78.3 Design constraints and trade-offs.

8574/ch078/frame Page 1753 Wednesday, May 6, 1998 11:08 AM
78.8 Low-Pass Models

In most cases of practical interest the physical layer of the communication system will use continuous-time (CT) signals, while the simulation will operate in discrete time (DT). For the simulation to be useful, one must develop DT signals and systems that closely match their CT counterparts. This topic is discussed at length in introductory DSP texts. A prominent result in this field is the Nyquist sampling theorem, which states that if a CT signal has no energy above a frequency of fh Hz, one can create a DT signal that contains exactly the same information by sampling the CT signal at any rate in excess of 2fh samples per second. Since the execution time of the simulation is proportional to the number of samples it must process, one naturally uses the lowest sampling rate possible.

While the Nyquist theorem should not be violated for arbitrary signals, when the CT signal is bandpass one can use low-pass equivalent (LPE) waveforms that contain all the information of the CT signal but can be sampled slower than 2fh. Assume the energy in a bandpass signal is centered about a carrier frequency of fc Hz and ranges from fl to fh Hz, resulting in a bandwidth of fh − fl = W Hz, as in Fig. 78.4. It is not unusual for W to be many orders of magnitude less than fc. The bandpass waveform x(t) can be expressed as a function of two low-pass signals. Two essentially equivalent LPE expansions are known as the envelope/phase representation [Davenport and Root, 1958],

x(t) = A(t) \cos[2\pi f_c t + \theta(t)]    (78.1)

and the quadrature representation,

x(t) = x_c(t) \cos(2\pi f_c t) - x_s(t) \sin(2\pi f_c t)    (78.2)

All four real signals A(t), θ(t), xc(t), and xs(t) are low pass and have zero energy above W/2 Hz. A computer simulation that replaces x(t) with a pair of LPE signals will require far less processor time, since the LPE waveforms can be sampled at W as opposed to 2fh samples per second. It is cumbersome to work with two signals rather than one signal.
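The equivalence of the envelope/phase and quadrature expansions can be checked numerically. The sketch below (Python/NumPy) is illustrative only: the carrier frequency, bandwidth, tone frequencies, and dense "CT-like" sampling rate are hypothetical values chosen for the demonstration, not taken from the text. It builds a bandpass signal from two low-pass quadrature components and confirms that the envelope/phase form, with A(t) = sqrt(xc² + xs²) and θ(t) = atan2(xs, xc), reproduces the same waveform.

```python
import numpy as np

# Hypothetical parameters: a 1-MHz carrier with a 1-kHz-wide message,
# simulated at a dense rate standing in for continuous time.
fc = 1e6            # carrier frequency, Hz
fs = 8e6            # dense "CT-like" sampling rate, Hz
t = np.arange(0, 2e-3, 1.0 / fs)

# Two arbitrary low-pass quadrature components (tones well below W/2 = 500 Hz)
xc = np.cos(2 * np.pi * 400 * t)          # in-phase component
xs = 0.5 * np.sin(2 * np.pi * 300 * t)    # quadrature component

# Quadrature representation, Eq. (78.2)
x_quad = xc * np.cos(2 * np.pi * fc * t) - xs * np.sin(2 * np.pi * fc * t)

# Envelope/phase representation, Eq. (78.1): since xc = A*cos(theta) and
# xs = A*sin(theta), the envelope and phase follow directly.
A = np.hypot(xc, xs)
theta = np.arctan2(xs, xc)
x_env = A * np.cos(2 * np.pi * fc * t + theta)

# The two expansions produce the same bandpass waveform
print(np.max(np.abs(x_quad - x_env)))  # ~0, up to floating-point round-off
```

Expanding A cos(2πfct + θ) with the angle-sum identity gives A cosθ cos(2πfct) − A sinθ sin(2πfct), which is exactly Eq. (78.2) under the substitutions above.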
A more mathematically elegant LPE expansion is

x(t) = \mathrm{Re}\{v(t)\, e^{j 2\pi f_c t}\}    (78.3)

where v(t) is a low-pass, complex time-domain signal that has no energy above W/2 Hz. The signal v(t) is known as the complex envelope of x(t) [Haykin, 1983]. It contains all the information of x(t) and can be sampled at W samples per second without aliasing. This notation is disturbing to engineers accustomed to viewing all time-domain signals as real. However, a complete theory exists for complex time-domain signals, and with surprisingly little effort one can define convolution, Fourier transforms, analog-to-digital and digital-to-analog conversions, and many other signal processing algorithms for complex signals. If fc and W are known, the LPE mapping is one-to-one, so that x(t) can be completely recovered from v(t). While it is conceptually simpler to sample the CT signals at a rate in excess of 2fh and avoid the mathematical difficulties of the LPE representation, the tremendous difference between fc and W makes the LPE far more efficient for computer simulation. This type

FIGURE 78.4 Amplitude spectrum of a bandpass signal.
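The complex-envelope form is easy to exercise in code. In the minimal NumPy sketch below (all numeric parameters are hypothetical, chosen only to make the point), the complex envelope v(t) = xc(t) + j·xs(t) is modulated onto the carrier per Eq. (78.3), matched against the quadrature expansion of Eq. (78.2), and the sampling-rate savings of simulating v(t) instead of x(t) is computed.

```python
import numpy as np

# Illustrative parameters (not from the text): 1-MHz carrier, 1-kHz bandwidth,
# with a dense sampling rate standing in for continuous time.
fc, W, fs = 1e6, 1e3, 8e6
t = np.arange(0, 2e-3, 1.0 / fs)

# Quadrature components and the complex envelope v(t) = xc(t) + j*xs(t)
xc = np.cos(2 * np.pi * 400 * t)
xs = 0.5 * np.sin(2 * np.pi * 300 * t)
v = xc + 1j * xs

# Eq. (78.3): x(t) = Re{ v(t) * exp(j*2*pi*fc*t) }
x_env = np.real(v * np.exp(1j * 2 * np.pi * fc * t))

# Same waveform as the quadrature expansion, Eq. (78.2)
x_quad = xc * np.cos(2 * np.pi * fc * t) - xs * np.sin(2 * np.pi * fc * t)
print(np.max(np.abs(x_env - x_quad)))   # ~0, floating-point round-off

# Sampling-rate savings: v(t) needs only ~W complex samples/s, versus
# more than 2*fh real samples/s for the bandpass signal x(t) itself.
fh = fc + W / 2
print((2 * fh) / W)                     # roughly a 2000x reduction here
```

This is the efficiency argument of the text in concrete terms: because W is orders of magnitude smaller than fc, a simulation operating on v(t) processes thousands of times fewer samples than one operating on x(t) directly.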