Section 1.2 Elements of an Electrical Communication System

robust to the variety of signal distortions. This can be accomplished by having the system adapt some of its parameters to the channel distortion encountered.

The Receiver. The function of the receiver is to recover the message signal contained in the received signal. If the message signal is transmitted by carrier modulation, the receiver performs carrier demodulation to extract the message from the sinusoidal carrier. Since demodulation is performed in the presence of additive noise and possibly other signal distortions, the demodulated message signal is generally degraded to some extent by these distortions in the received signal. As we shall see, the fidelity of the received message signal is a function of the type of modulation and the strength of the additive noise.

Besides performing the primary function of signal demodulation, the receiver also performs a number of peripheral functions, including signal filtering and noise suppression.

1.2.1 Digital Communication System

Up to this point, we have described an electrical communication system in rather broad terms, based on the implicit assumption that the message signal is a continuous time-varying waveform. We refer to such continuous-time signal waveforms as analog signals and to the corresponding information sources that produce them as analog sources. Analog signals can be transmitted directly over the communication channel via carrier modulation and demodulated accordingly at the receiver. We call such a communication system an analog communication system.

Alternatively, an analog source output may be converted into digital form, and the message can then be transmitted via digital modulation and demodulated as a digital signal at the receiver. There are several potential advantages to transmitting an analog signal by means of digital modulation. The most important is that signal fidelity is better controlled through digital transmission than through analog transmission. In particular, digital transmission allows us to regenerate the digital signal in long-distance transmission, thus eliminating the effects of noise at each regeneration point. In contrast, the noise added in analog transmission is amplified along with the signal when amplifiers are used periodically to boost the signal level in long-distance transmission. Another reason for choosing digital transmission over analog is that the analog message signal may be highly redundant; with digital processing, this redundancy may be removed prior to modulation, thus conserving channel bandwidth. A third reason is that digital communication systems are often cheaper to implement.

In some applications, the information to be transmitted is inherently digital, e.g., in the form of English text or computer data. In such cases, the information source that generates the data is called a discrete (digital) source.

In a digital communication system, the functional operations performed at the transmitter and receiver must be expanded to include message signal discretization at the transmitter and message signal synthesis or interpolation at the receiver. Additional functions include redundancy removal, as well as channel coding and decoding.

Figure 1.2 illustrates the functional diagram and the basic elements of a digital communication system. The source output may be either an analog signal, such as an audio
[Figure 1.2: Basic elements of a digital communication system. Information source and input transducer → source encoder → channel encoder → digital modulator → channel → digital demodulator → channel decoder → source decoder → output transducer → output signal.]

or video signal, or a digital signal, such as computer output, which is discrete in time and has a finite number of output characters. In a digital communication system, the messages produced by the source are usually converted into a sequence of binary digits. Ideally, we would like to represent the source output (message) with as few binary digits as possible. In other words, we seek an efficient representation of the source output that results in little or no redundancy. The process of efficiently converting the output of either an analog or a digital source into a sequence of binary digits is called source encoding or data compression. We shall describe source-encoding methods in Chapter 12.

The source encoder outputs a sequence of binary digits, which we call the information sequence; this is passed to the channel encoder. The purpose of the channel encoder is to introduce, in a controlled manner, some redundancy in the binary information sequence that can be used at the receiver to overcome the effects of noise and interference encountered in the transmission of the signal through the channel. Thus, the added redundancy serves to increase the reliability of the received data and improves the fidelity of the received signal. In effect, redundancy in the information sequence aids the receiver in decoding the desired information sequence. For example, a (trivial) form of encoding of the binary information sequence is simply to repeat each binary digit m times, where m is some positive integer. More sophisticated (nontrivial) encoding involves taking k information bits at a time and mapping each k-bit sequence into a unique n-bit sequence, called a code word. The amount of redundancy introduced by encoding the data in this manner is measured by the ratio n/k. The reciprocal of this ratio, namely k/n, is called the rate of the code or, simply, the code rate. Channel coding and decoding are discussed in Chapter 13.

The binary sequence at the output of the channel encoder is passed to the digital modulator, which serves as the interface to the communication channel. Since nearly all of the communication channels encountered in practice are capable of transmitting electrical signals (waveforms), the primary purpose of the digital modulator is to map the binary information sequence into signal waveforms. To elaborate on this point, let us suppose that the coded information sequence is to be transmitted one bit at a time at some uniform rate R bits/sec. The digital modulator may simply map the binary digit 0 into a waveform s_0(t) and the binary digit 1 into a waveform s_1(t). In this manner, each bit from the channel encoder is transmitted separately. We call this binary modulation. Alternatively, the modulator may transmit k coded information bits at a time by using M = 2^k distinct waveforms
s_i(t), i = 0, 1, ..., M - 1. This provides one waveform for each of the 2^k possible k-bit sequences. We call this M-ary modulation (M > 2). Note that a new k-bit sequence enters the modulator every k/R seconds. Hence, when the channel bit rate R is fixed, the amount of time available to transmit one of the M waveforms corresponding to a k-bit sequence is k times the time period in a system that uses binary modulation.

At the receiving end of a digital communication system, the digital demodulator processes the channel-corrupted transmitted waveform and reduces each waveform to a single number that represents an estimate of the transmitted data symbol (binary or M-ary). For example, when binary modulation is used, the demodulator may process the received waveform and decide whether the transmitted bit is a 0 or a 1. In such a case, we say the demodulator has made a binary decision, or hard decision. As an alternative, the demodulator may make a ternary decision; i.e., it decides that the transmitted bit is either a 0 or a 1, or it makes no decision at all, depending on the apparent quality of the received signal. When no decision is made on a particular bit, we say that the demodulator has inserted an erasure in the demodulated data. Using the redundancy in the transmitted data, the decoder attempts to fill in the positions where erasures occurred. Viewing the decision process performed by the demodulator as a form of quantization, we observe that binary and ternary decisions are special cases of a demodulator that quantizes to Q levels, where Q ≥ 2. In general, if the digital communication system employs M-ary modulation, where M represents the M possible transmitted symbols, each corresponding to k = log_2 M bits, the demodulator may make a Q-ary decision, where Q ≥ M. In the extreme case where no quantization is performed, Q = ∞.
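To make the hard-decision and erasure ideas concrete, here is a small sketch in Python. The threshold, noise level, and antipodal signaling values are illustrative assumptions, not taken from the text: a received sample is sliced either to a hard 0/1 decision or to a ternary decision with an erasure zone around zero, and a rate-1/3 repetition code fills in erasures by majority vote over the surviving positions.

```python
import random

def hard_decision(r):
    # Binary (hard) decision: only the sign of the received sample matters.
    return 1 if r >= 0.0 else 0

def ternary_decision(r, t=0.4):
    # Ternary decision: declare an erasure ('?') when the sample falls
    # inside the low-confidence zone [-t, +t]; t is an arbitrary choice.
    if r > t:
        return 1
    if r < -t:
        return 0
    return '?'

def repetition_decode(symbols):
    # Decode one 3-repetition code word: ignore erasures and take a
    # majority vote over the remaining decisions (ties resolve to 0).
    votes = [s for s in symbols if s != '?']
    if not votes:
        return '?'  # every position was erased; decoding fails
    return 1 if votes.count(1) > votes.count(0) else 0

# Transmit one information bit as three antipodal samples over an
# additive-noise channel (sigma chosen for illustration only).
random.seed(1)
bit = 1
tx = [1.0 if bit else -1.0] * 3
rx = [s + random.gauss(0.0, 0.5) for s in tx]
ternary = [ternary_decision(r) for r in rx]
print(ternary, '->', repetition_decode(ternary))
```

The erasure zone trades decision errors for erasures, which the decoder can often repair using the code's redundancy, as described above.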
When there is no redundancy in the transmitted information, the demodulator must decide which of the M waveforms was transmitted in any given time interval. Consequently Q = M, and since there is no redundancy in the transmitted information, no discrete channel decoder is used following the demodulator. On the other hand, when there is redundancy introduced by a discrete channel encoder at the transmitter, the Q-ary output from the demodulator occurring every k/R seconds is fed to the decoder, which attempts to reconstruct the original information sequence from knowledge of the code used by the channel encoder and the redundancy contained in the received data.

A measure of how well the demodulator and decoder perform is the frequency with which errors occur in the decoded sequence. More precisely, the average probability of a bit error at the output of the decoder is a measure of the performance of the demodulator-decoder combination. In general, the probability of error is a function of the code characteristics, the types of waveforms used to transmit the information over the channel, the transmitter power, the characteristics of the channel (i.e., the amount of noise), and the method of demodulation and decoding. These items and their effect on performance will be discussed in detail in Chapters 8 through 10.

As a final step, when an analog output is desired, the source decoder accepts the output sequence from the channel decoder and, from knowledge of the source encoding method used, attempts to reconstruct the original signal from the source. Due to channel decoding errors and possible distortion introduced by the source encoder and, perhaps, the source decoder, the signal at the output of the source decoder is an approximation to the original source output. The difference, or some function of the difference, between the
original signal and the reconstructed signal is a measure of the distortion introduced by the digital communication system.

1.2.2 Early Work in Digital Communications

Although Morse is responsible for the development of the first electrical digital communication system (telegraphy), the beginnings of what we now regard as modern digital communications stem from the work of Nyquist (1924), who investigated the problem of determining the maximum signaling rate that can be used over a telegraph channel of a given bandwidth without intersymbol interference. He formulated a model of a telegraph system in which a transmitted signal has the general form

    s(t) = \sum_{n} a_n g(t - nT),

where g(t) represents a basic pulse shape and {a_n} is the binary data sequence of {±1} transmitted at a rate of 1/T bits/sec. Nyquist set out to determine the optimum pulse shape that was bandlimited to W Hz and maximized the bit rate 1/T under the constraint that the pulse caused no intersymbol interference at the sampling times kT, k = 0, ±1, ±2, .... His studies led him to conclude that the maximum pulse rate 1/T is 2W pulses/sec. This rate is now called the Nyquist rate. Moreover, this pulse rate can be achieved by using the pulses g(t) = (\sin 2\pi W t)/(2\pi W t). This pulse shape allows the recovery of the data without intersymbol interference at the sampling instants. Nyquist's result is equivalent to a version of the sampling theorem for bandlimited signals, which was later stated precisely by Shannon (1948). The sampling theorem states that a signal of bandwidth W can be reconstructed from samples taken at the Nyquist rate of 2W samples/sec using the interpolation formula

    s(t) = \sum_{n} s\left(\frac{n}{2W}\right) \frac{\sin 2\pi W (t - n/2W)}{2\pi W (t - n/2W)}.

In light of Nyquist's work, Hartley (1928) considered the amount of data that can be reliably transmitted over a bandlimited channel when multiple amplitude levels are used.
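The interpolation formula above can be exercised numerically. The sketch below is a minimal illustration, not part of the text: the tone frequency, bandwidth, and truncation length are arbitrary choices, and the infinite sum is truncated to finitely many samples, so the reconstruction is only approximate away from the ends of the sample record.

```python
import math

def sinc_interp(samples, fs, t):
    # Sampling-theorem interpolation with fs = 2W samples/sec:
    # s(t) ≈ sum_n s(n/fs) * sin(pi(fs*t - n)) / (pi(fs*t - n)).
    total = 0.0
    for n, s_n in enumerate(samples):
        x = fs * t - n
        total += s_n * (1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x))
    return total

W = 4.0          # assumed signal bandwidth, Hz
fs = 2 * W       # Nyquist rate: 2W samples/sec
f0 = 1.0         # a 1 Hz tone, well inside the band
N = 400          # truncate the (in principle infinite) sum to N samples
samples = [math.sin(2 * math.pi * f0 * n / fs) for n in range(N)]

t = 10.1337      # an instant that falls between sampling points
approx = sinc_interp(samples, fs, t)
exact = math.sin(2 * math.pi * f0 * t)
print(abs(approx - exact))   # small residual from truncating the sum
```

At the sampling instants themselves the formula reproduces the samples exactly, since every sinc term but one vanishes there.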
Due to the presence of noise and other interference, Hartley postulated that the receiver can reliably estimate the received signal amplitude only to some accuracy, say A_δ. This investigation led Hartley to conclude that there is a maximum data rate that can be communicated reliably over a bandlimited channel when the maximum signal amplitude is limited to A_max (a fixed power constraint) and the amplitude resolution is A_δ.

Another significant advance in the development of communications was the work of Wiener (1942), who considered the problem of estimating a desired signal waveform s(t) in the presence of additive noise n(t), based on observation of the received signal r(t) = s(t) + n(t). This problem arises in signal demodulation. Wiener determined the linear filter whose output is the best mean-square approximation to the desired signal s(t). The resulting filter is called the optimum linear (Wiener) filter.

Hartley's and Nyquist's results on the maximum transmission rate of digital information were precursors to the work of Shannon (1948a,b), who established the mathematical
foundations for information theory and derived the fundamental limits for digital communication systems. In his pioneering work, Shannon formulated the basic problem of reliable transmission of information in statistical terms, using probabilistic models for information sources and communication channels. Based on this statistical formulation, he adopted a logarithmic measure for the information content of a source. He also demonstrated that the effect of a transmitter power constraint, a bandwidth constraint, and additive noise can be associated with the channel and incorporated into a single parameter, called the channel capacity. For example, in the case of additive white (spectrally flat) Gaussian noise interference, an ideal bandlimited channel of bandwidth W has a capacity C given by

    C = W \log_2\left(1 + \frac{P}{W N_0}\right) \text{ bits/sec},

where P is the average transmitted power and N_0 is the power-spectral density of the additive noise. The significance of the channel capacity is as follows: If the information rate R from the source is less than C (R < C), then it is theoretically possible to achieve reliable (error-free) transmission through the channel by appropriate coding. On the other hand, if R > C, reliable transmission is not possible, regardless of the amount of signal processing performed at the transmitter and receiver. Thus, Shannon established basic limits on the communication of information and gave birth to a new field that is now called information theory.

Initially, the fundamental work of Shannon had a relatively small impact on the design and development of new digital communication systems. In part, this was due to the small demand for digital information transmission during the 1950s. Another reason was the relatively large complexity and, hence, the high cost of the digital hardware required to achieve the high efficiency and the high reliability predicted by Shannon's theory.
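As a quick numerical illustration of the capacity formula, the bandwidth and signal-to-noise ratio below are hypothetical values, roughly those of a voiceband telephone channel, and not figures from the text:

```python
import math

def capacity(W, P, N0):
    # Shannon capacity of an ideal bandlimited AWGN channel:
    # C = W * log2(1 + P / (W * N0)) bits/sec.
    return W * math.log2(1.0 + P / (W * N0))

# Illustrative numbers: a 3 kHz channel with P/(W*N0) = 1000 (30 dB SNR).
W = 3000.0          # bandwidth, Hz
N0 = 1e-9           # noise power-spectral density, W/Hz
P = 1000 * W * N0   # average power chosen to give SNR = 1000

C = capacity(W, P, N0)
print(round(C))      # → 29902 (bits/sec)
```

Note that capacity grows only logarithmically with power but linearly with bandwidth, which is why R < C is so much easier to satisfy on wideband channels.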
Another important contribution to the field of digital communications is the work of Kotelnikov (1947), which provided a coherent analysis of the various digital communication systems based on a geometrical approach. Kotelnikov's approach was later expanded by Wozencraft and Jacobs (1965).

The increase in the demand for data transmission during the last four decades, coupled with the development of more sophisticated integrated circuits, has led to the development of very efficient and more reliable digital communication systems. In the course of these developments, Shannon's original results, the generalization of his results on maximum transmission limits over a channel, and his bounds on achievable performance have served as benchmarks against which any given communication system design can be compared. The theoretical limits derived by Shannon and the other researchers who contributed to the development of information theory serve as an ultimate goal in the continuing efforts to design and develop more efficient digital communication systems.

Following Shannon's publications came the classic work of Hamming (1950), which used error-detecting and error-correcting codes to combat the detrimental effects of channel noise. Hamming's work stimulated many researchers in the years that followed, and a variety of new and powerful codes were discovered, many of which are used today in the implementation of modern communication systems.
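Hamming's single-error-correcting codes can be illustrated with the classic Hamming (7,4) construction. The text does not spell out this code, so the details below are the standard textbook construction, not something drawn from this chapter: 4 data bits are protected by 3 parity bits, and the 3-bit syndrome computed at the receiver directly gives the position of a single flipped bit.

```python
def hamming74_encode(d):
    # d = [d1, d2, d3, d4]; each parity bit makes one overlapping
    # group of code word positions have even parity.
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    # code word positions 1..7 are: p1 p2 d1 p3 d2 d3 d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    # The syndrome (s3 s2 s1) is the binary position of the single
    # flipped bit; a syndrome of 0 means no error was detected.
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # checks positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # checks positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # checks positions 4, 5, 6, 7
    pos = s1 + 2 * s2 + 4 * s3
    if pos:
        c[pos - 1] ^= 1              # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]  # recover d1, d2, d3, d4

cw = hamming74_encode([1, 0, 1, 1])
cw[4] ^= 1                            # inject a single-bit channel error
print(hamming74_correct(cw))          # → [1, 0, 1, 1]
```

This is a rate-4/7 code in the sense defined earlier in the section: k = 4 information bits map to an n = 7 bit code word.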