
Data compression

In information theory, data compression, source coding,[1] or bit-rate reduction is the process of encoding information using fewer bits than the original representation.[2] Any particular compression is either lossy or lossless. Lossless compression reduces bits by identifying and eliminating statistical redundancy. No information is lost in lossless compression. Lossy compression reduces bits by removing unnecessary or less important information.[3] Typically, a device that performs data compression is referred to as an encoder, and one that performs the reversal of the process (decompression) as a decoder.

"Source coding" redirects here. For the term in computer programming, see Source code.

The process of reducing the size of a data file is often referred to as data compression. In the context of data transmission, it is called source coding: encoding is done at the source of the data before it is stored or transmitted.[4] Source coding should not be confused with channel coding, used for error detection and correction, or with line coding, the means for mapping data onto a signal.
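A minimal sketch of the lossless/lossy distinction drawn above, assuming Python 3 with its standard-library zlib module; the sample data and the rounding step are purely illustrative and do not correspond to any particular codec:

```python
import zlib

# Lossless: the decoder recovers the original bytes exactly.
original = b"AAAAABBBBBCCCCC" * 100           # highly redundant sample data
compressed = zlib.compress(original)          # encoder exploits statistical redundancy
restored = zlib.decompress(compressed)        # decoder reverses the process

print(len(original), len(compressed))         # e.g. 1500 bytes -> a few dozen bytes
assert restored == original                   # no information was lost

# Lossy (illustrative only): irreversibly discard "less important" detail,
# here by rounding samples to coarser precision before any further coding.
samples = [0.12345, 0.12346, 0.99871, 0.99872]
quantized = [round(x, 2) for x in samples]    # cannot be undone
print(quantized)                              # [0.12, 0.12, 1.0, 1.0]
```

The assert documents the defining property of lossless coding; the quantization step, by contrast, cannot be reversed, which is exactly the trade that lossy schemes make.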


Compression is useful because it reduces the resources required to store and transmit data. Computational resources are consumed in the compression and decompression processes. Data compression is subject to a space-time complexity trade-off. For instance, a compression scheme for video may require expensive hardware for the video to be decompressed fast enough to be viewed as it is being decompressed, and the option to decompress the video in full before watching it may be inconvenient or require additional storage. The design of data compression schemes involves trade-offs among various factors, including the degree of compression, the amount of distortion introduced (when using lossy data compression), and the computational resources required to compress and decompress the data.[5]
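A small sketch of that trade-off, again assuming Python 3 and standard-library zlib; the sample data and the compression levels compared are arbitrary, and actual numbers depend heavily on the data and the implementation:

```python
import time
import zlib

data = b"the quick brown fox jumps over the lazy dog. " * 20000

for level in (1, 6, 9):                        # faster/larger ... slower/smaller
    start = time.perf_counter()
    out = zlib.compress(data, level)           # higher level = more CPU, fewer bytes
    elapsed = time.perf_counter() - start
    print(f"level={level}: {len(out)} bytes in {elapsed * 1000:.1f} ms")
```

Higher levels typically spend more encoder CPU time to produce smaller output, while decompression cost for DEFLATE-style codecs is largely independent of the level chosen; this is why encode-once, decode-many schemes often favor slow, thorough compression.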

Lossy speech encoding, for instance, achieves far greater compression than general-purpose audio coding by (a minimal numeric sketch follows this list):

- Only encoding sounds that could be made by a single human voice.
- Throwing away more of the data in the signal, keeping just enough to reconstruct an "intelligible" voice rather than the full frequency range of human hearing.
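The sketch below illustrates both ideas on a synthetic signal, assuming NumPy is available; real speech codecs use proper anti-alias filtering and model the vocal tract rather than relying on naive decimation and companding alone:

```python
import numpy as np

fs = 44100                                     # original sampling rate (Hz)
t = np.arange(fs) / fs                         # one second of sample times
signal = 0.5 * np.sin(2 * np.pi * 220 * t)     # synthetic stand-in for a voice

# 1) Keep only the bandwidth needed for an intelligible voice: decimate from
#    44.1 kHz down to roughly telephone rate (naive, no anti-alias filter).
narrowband = signal[::5]                       # ~8.8 kHz effective rate

# 2) Spend bits only where the ear needs them: 8-bit mu-law companding,
#    similar in spirit to G.711 telephony, instead of 16-bit linear samples.
mu = 255.0
companded = np.sign(narrowband) * np.log1p(mu * np.abs(narrowband)) / np.log1p(mu)
encoded = np.round((companded + 1) / 2 * 255).astype(np.uint8)  # one byte per sample

print(signal.size, encoded.size)               # 44100 samples -> 8820 bytes
```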

Outlook and currently unused potential

It is estimated that the total amount of data stored on the world's storage devices could be further compressed with existing compression algorithms by a remaining average factor of 4.5:1.[83] It is estimated that the combined technological capacity of the world to store information provided 1,300 exabytes of hardware digits in 2007, but when the corresponding content is optimally compressed, this represents only 295 exabytes of Shannon information.[84]
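Read together, those two figures imply a rough overall ratio between raw stored digits and their underlying information content:

\[
\frac{1300\ \text{EB of hardware digits}}{295\ \text{EB of Shannon information}} \approx 4.4
\]

which is of the same order as the remaining ~4.5:1 compression factor quoted above.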
