# Introduction to Information Theory and Data Compression, 2nd Edition

₹495.00

- Publisher: Chapman & Hall/CRC
- ISBN-13: 9781584883135
- Pages: 384
- Binding: Paperback
- Year of Pub / Reprint Year: 2011

## Description

**About the Book**

An effective blend of carefully explained theory and practical applications, this text imparts the fundamentals of both information theory and data compression. Although the two topics are related, this unique text allows either topic to be presented independently, and it was specifically designed so that the data compression section requires no prior knowledge of information theory.

The treatment of information theory, while theoretical and abstract, is quite elementary, making this text less daunting than many others. After presenting the fundamental definitions and results of the theory, the authors then apply the theory to discrete memoryless channels with zeroth-order, one-state sources.
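
For readers wanting a concrete taste of the central quantity involved, here is a minimal Python sketch of the Shannon entropy of a zeroth-order (one-state) source. This is illustrative only, not drawn from the book; the function name is our own:

```python
import math

def entropy(probs):
    """Shannon entropy H = -sum(p_i * log2(p_i)) of a zeroth-order source,
    in bits per symbol. Zero-probability symbols contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries exactly one bit per symbol:
print(entropy([0.5, 0.5]))        # 1.0
# Four equally likely symbols carry two bits each:
print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0
```

Entropy of this kind is the yardstick against which the compression methods of Part II are measured.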

The chapters on data compression acquaint students with a myriad of lossless compression methods and then introduce two lossy compression methods. Students emerge from this study competent in a wide range of techniques. The authors’ presentation is highly practical but includes some important proofs, either in the text or in the exercises, so instructors can, if they choose, place more emphasis on the mathematics.
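
To give a concrete flavor of the lossless methods covered, here is a minimal Python sketch of Huffman's algorithm, which builds a prefix-condition code from symbol frequencies. This is our own illustration, not the authors' code; names and structure are assumptions:

```python
import heapq
from collections import Counter

def huffman_code(text):
    """Build a prefix-condition code via Huffman's algorithm.
    Returns a dict mapping each symbol to its bit string."""
    freq = Counter(text)
    if len(freq) == 1:  # degenerate one-symbol source
        return {next(iter(freq)): "0"}
    # Heap entries: (weight, tiebreak, tree); a tree is a symbol or a pair.
    heap = [(w, i, s) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        # Repeatedly merge the two lightest trees.
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, count, (t1, t2)))
        count += 1
    code = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):   # internal node: branch on 0/1
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:                         # leaf: record the codeword
            code[tree] = prefix
    walk(heap[0][2], "")
    return code

# Frequent symbols get short codewords: "a" (3 uses) gets 1 bit here.
print(huffman_code("aaabbc"))
```

Because no codeword is a prefix of another, the encoded bit stream can be decoded unambiguously without separators.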

Introduction to Information Theory and Data Compression, Second Edition is ideally suited for an upper-level or graduate course for students in mathematics, engineering, and computer science.

**Table of Contents:**

**Part I: Information Theory**

1. Elementary Probability
   - Introduction
   - Events
   - Conditional Probability
   - Independence
   - Bernoulli Trials
   - An Elementary Counting Principle
   - On Drawing without Replacement
   - Random Variables and Expected, or Average, Value
   - The Law of Large Numbers
2. Information and Entropy
   - How Is Information Quantified?
   - Systems of Events and Mutual Information
   - Entropy
   - Information and Entropy
3. Channels and Channel Capacity
   - Discrete Memoryless Channels
   - Transition Probabilities and Binary Symmetric Channels
   - Input Frequencies
   - Channel Capacity
   - Proof of Theorem 3.4.3, on the Capacity Equations
4. Coding Theory
   - Encoding and Decoding
   - Prefix-Condition Codes and the Kraft-McMillan Inequality
   - Average Code Word Length and Huffman’s Algorithm
   - Optimizing the Input Frequencies
   - Error Correction, Maximum Likelihood Decoding, Nearest Code Word Decoding, and Reliability
   - Shannon’s Noisy Channel Theorem
   - Error Correction with Binary Symmetric Channels and Equal Source Frequencies
   - The Information Rate of a Code

**Part II: Data Compression**

5. Lossless Data Compression by Replacement Schemes
   - Replacement via Encoding Scheme
   - Review of the Prefix Condition
   - Choosing an Encoding Scheme
   - The Noiseless Coding Theorem and Shannon’s Bound
6. Arithmetic Coding
   - Pure Zeroth-Order Arithmetic Coding: dfwld
   - What’s Good about dfwld Coding: The Compression Ratio
   - What’s Bad about dfwld Coding and Some Ways to Fix It
   - Implementing Arithmetic Coding
   - Notes
7. Higher-Order Modeling
   - Higher-Order Huffman Encoding
   - The Shannon Bound for Higher-Order Encoding
   - Higher-Order Arithmetic Coding
   - Statistical Models, Statistics, and the Possibly Unknowable Truth
   - Probabilistic Finite State Source Automata
8. Adaptive Methods
   - Adaptive Huffman Encoding
   - Maintaining the Tree in Adaptive Huffman Encoding: The Method of Knuth and Gallager
   - Adaptive Arithmetic Coding
   - Interval and Recency Rank Encoding
9. Dictionary Methods
   - LZ77 (Sliding Window) Schemes
   - The LZ78 Approach
   - Notes
10. Transform Methods and Image Compression
    - Transforms
    - Periodic Signals and the Fourier Transform
    - The Cosine and Sine Transforms
    - Two-Dimensional Transforms
    - An Application: JPEG Image Compression
    - A Brief Introduction to Wavelets
    - Notes

**Appendices**

- JPEGtool User’s Guide
- Source Listing for LZRW1-A
- Resources, Patents, and Illusions
- Notes on and Solutions to Some Exercises

Bibliography

Index

**Features:**

- Expanded discussion of the historical and theoretical basis of information theory that builds a firm, intuitive grasp of the subject
- Reorganization of theoretical results, along with new exercises ranging from the routine to the more difficult, that reinforce students’ ability to apply the definitions and results in specific situations
- Simplified treatment of the algorithms of Gallager and Knuth
- Discussion of the information rate of a code and the trade-off between error correction and information rate
- Treatment of probabilistic finite state source automata, including basic results, examples, references, and exercises
- Octave and MATLAB image compression codes included in an appendix for use with the exercises and projects involving transform methods
- Supplementary materials, including software, available for download from the authors’ Web site at www.dms.auburn.edu/compression
