Chapter 7: Lossless Compression Algorithms

7.1 Introduction
7.2 Basics of Information Theory
7.3 Run-Length Coding
7.4 Variable-Length Coding (VLC)
7.5 Dictionary-based Coding
7.6 Arithmetic Coding
7.7 Lossless Image Compression
7.8 Further Exploration
7.1 Introduction

• Compression: the process of coding that will effectively reduce the total number of bits needed to represent certain information.

Fig. 7.1: A General Data Compression Scheme. (Input data → Encoder (compression) → storage or networks → Decoder (decompression) → output data.)
Introduction (cont'd)

• If the compression and decompression processes induce no information loss, then the compression scheme is lossless; otherwise, it is lossy.

• Compression ratio:

$$\text{compression ratio} = \frac{B_0}{B_1} \tag{7.1}$$

  $B_0$ – number of bits before compression
  $B_1$ – number of bits after compression
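As a quick numeric illustration of Eq. (7.1), here is a minimal Python sketch; the bit counts in the example are hypothetical, not from the text:

    def compression_ratio(b0: int, b1: int) -> float:
        """Compression ratio of Eq. (7.1): B0 / B1.

        b0 -- number of bits before compression
        b1 -- number of bits after compression
        """
        return b0 / b1

    # Hypothetical example: a 512 x 512, 8-bit image (2,097,152 bits)
    # compressed to 524,288 bits yields a compression ratio of 4.0.
    print(compression_ratio(2_097_152, 524_288))  # 4.0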
7.2 Basics of Information Theory

• The entropy $\eta$ of an information source with alphabet $S = \{s_1, s_2, \ldots, s_n\}$ is:

$$\eta = H(S) = \sum_{i=1}^{n} p_i \log_2 \frac{1}{p_i} \tag{7.2}$$

$$= -\sum_{i=1}^{n} p_i \log_2 p_i \tag{7.3}$$

  $p_i$ – probability that symbol $s_i$ will occur in $S$.

  $\log_2 \frac{1}{p_i}$ – indicates the amount of information (self-information, as defined by Shannon) contained in $s_i$, which corresponds to the number of bits needed to encode $s_i$.
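A minimal Python sketch of Eq. (7.3), assuming the source's symbol probabilities are given as a list (the example distribution is hypothetical):

    import math

    def entropy(probs):
        """Shannon entropy H(S) = -sum(p_i * log2(p_i)), in bits per symbol.

        Terms with p_i == 0 are skipped, since p * log2(p) -> 0 as p -> 0.
        """
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # Hypothetical three-symbol source:
    print(entropy([0.5, 0.25, 0.25]))  # 1.5 bits per symbol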
Distribution of Gray-Level Intensities

Fig. 7.2: Histograms for Two Gray-level Images. (In (a), $p_i = 1/256$ for all gray levels $i \in [0, 255]$; in (b), the probability mass is concentrated on two gray levels, with values 2/3 and 1/3.)

• Fig. 7.2(a) shows the histogram of an image with a uniform distribution of gray-level intensities, i.e., $\forall i \; p_i = 1/256$. Hence, the entropy of this image is:

$$\eta = \sum_{i=1}^{256} \frac{1}{256} \log_2 256 = \log_2 256 = 8 \tag{7.4}$$
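Eq. (7.4) can be checked numerically with the same entropy computation as in the sketch above; a self-contained snippet:

    import math

    # Uniform histogram of Fig. 7.2(a): 256 gray levels, each with p_i = 1/256.
    uniform = [1 / 256] * 256
    eta = -sum(p * math.log2(p) for p in uniform)
    print(eta)  # 8.0 bits per pixel, matching Eq. (7.4)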