Top Document: comp.compression Frequently Asked Questions (part 1/3)
Previous Document:  What about patents on data compression algorithms?
Next Document:  Fake compression programs (OWS, WIC)
[Note from the FAQ maintainer: this topic has generated and is still
generating the greatest volume of news in the history of comp.compression.
Read this before posting on this subject. I intended to remove the WEB
story from the FAQ, but similar affairs come up regularly on
comp.compression. The advertised revolutionary methods all have in common
their supposed ability to compress random or already compressed data. I
will keep this item in the FAQ to encourage people to treat such claims
with great caution.]

9.1 Introduction

It is mathematically impossible to create a program compressing without
loss *all* files by at least one bit (see below and also item 73 in part 2
of this FAQ). Yet from time to time some people claim to have invented a
new algorithm for doing so. Such algorithms are claimed to compress random
data and to be applicable recursively, that is, applying the compressor to
the compressed output of the previous run, possibly multiple times.
Fantastic compression ratios of over 100:1 on random data are claimed to
be actually obtained.

Such claims inevitably generate a lot of activity on comp.compression,
which can last for several months. Large bursts of activity were generated
by WEB Technologies and by Jules Gilbert. Premier Research Corporation
(with a compressor called MINC) made only a brief appearance but came back
later with a Web page at http://www.pacminc.com. The Hyper Space method
invented by David C. James is another contender with a patent obtained in
July 96.

Another large burst occurred in Dec 97 and Jan 98: Matthew Burch
<email@example.com> applied for a patent in Dec 97, but publicly admitted
a few days later that his method was flawed; he then posted several dozen
messages in a few days about another magic method based on primes, and
again ended up admitting that his new method was flawed. (Usually people
disappear from comp.compression and appear again 6 months or a year
later, rather than admitting their error.)
Other people have also claimed incredible compression ratios, but the
programs (OWS, WIC) were quickly shown to be fake (not compressing at
all). This topic is covered in item 10 of this FAQ.

9.2 The counting argument

[This section should probably be called "The counting theorem" because
some people think that "argument" implies that it is only a hypothesis,
not a proven mathematical fact. The "counting argument" is actually the
proof of the theorem.]

The WEB compressor (see details in section 9.3 below) was claimed to
compress without loss *all* files of greater than 64KB in size to about
1/16th their original length. A very simple counting argument shows that
this is impossible, regardless of the compression method. It is even
impossible to guarantee lossless compression of all files by at least
one bit. (Many other proofs have been posted on comp.compression, please
do not post yet another one.)

Theorem: No program can compress without loss *all* files of size >= N
bits, for any given integer N >= 0.

Proof: Assume that the program can compress without loss all files of
size >= N bits. Compress with this program all the 2^N files which have
exactly N bits. All compressed files have at most N-1 bits, so there are
at most (2^N)-1 different compressed files [2^(N-1) files of size N-1,
2^(N-2) of size N-2, and so on, down to 1 file of size 0]. So at least
two different input files must compress to the same output file. Hence
the compression program cannot be lossless.

The proof is called the "counting argument". It uses the so-called
pigeon-hole principle: you can't put 16 pigeons into 15 holes without
using one of the holes twice. Much stronger results about the number of
incompressible files can be obtained, but the proofs are a little more
complex. (The MINC page http://www.pacminc.com uses one file of strictly
negative size to obtain 2^N instead of (2^N)-1 distinct files of size
<= N-1.) This argument applies of course to WEB's case (take
N = 64K*8 bits).
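For the skeptical, the counting can be checked mechanically for small N.
The following Python sketch (an illustration only; the function names are
mine, not part of the proof) enumerates all bit strings of length at most
N-1 and confirms that there is always one output short of the 2^N inputs:

```python
from itertools import product

def all_bit_strings(max_len):
    """All bit strings of length 0..max_len inclusive."""
    strings = []
    for length in range(max_len + 1):
        strings.extend(''.join(bits) for bits in product('01', repeat=length))
    return strings

def count_inputs_vs_outputs(n):
    """Compare the 2^N possible N-bit inputs with the number of strictly
    shorter bit strings available as outputs: 2^0 + ... + 2^(N-1) = 2^N - 1."""
    inputs = 2 ** n
    outputs = len(all_bit_strings(n - 1))
    return inputs, outputs

for n in range(1, 10):
    inputs, outputs = count_inputs_vs_outputs(n)
    assert outputs == inputs - 1   # always one hole short: two pigeons collide
    print(f"N={n}: {inputs} inputs, {outputs} possible shorter outputs")
```

Since there is always at least one more input than there are shorter
outputs, any such compressor must map two inputs to the same output.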
Note that no assumption is made about the compression algorithm. The
proof applies to *any* algorithm, including those using an external
dictionary, or repeated application of another algorithm, or combination
of different algorithms, or representation of the data as formulas, etc.
All schemes are subject to the counting argument. There is no need to
use information theory to provide a proof, just very basic mathematics.
[People interested in more elaborate proofs can consult
http://wwwvms.utexas.edu/~cbloom/news/nomagic.html ]

In short, the counting argument says that if a lossless compression
program compresses some files, it must expand others, *regardless* of
the compression method, because otherwise there are simply not enough
bits to enumerate all possible output files. Despite the extreme
simplicity of this theorem and its proof, some people still fail to
grasp it and waste a lot of time trying to find a counter-example.

This assumes of course that the only information available to the
decompressor is the bit sequence of the compressed data. If external
information such as a file name, a number of iterations, or a bit length
is necessary to decompress the data, the bits necessary to provide the
extra information must be included in the bit count of the compressed
data. Otherwise, it would be sufficient to consider any input data as a
number, use this as the file name, iteration count or bit length, and
pretend that the compressed size is zero. For an example of storing
information in the file name, see the program lmfjyh in the 1993
International Obfuscated C Code Contest, available on all
comp.sources.misc archives (Volume 39, Issue 104).

A common flaw in the algorithms claimed to compress all files is to
assume that arbitrary bit strings can be sent to the decompressor
without actually transmitting their bit length.
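The file-name cheat is easy to sketch. The Python below is a hypothetical
illustration (the function names are mine; this is not the lmfjyh program):
it "compresses" any data to a zero-byte file by smuggling the entire
content into the file name, and shows that once the name is counted,
nothing has been gained.

```python
import binascii

def cheat_compress(data: bytes):
    """Return (filename, body): the body is empty, the data hides in the name."""
    name = binascii.hexlify(data).decode('ascii') + '.z'
    return name, b''

def honest_size(filename: str, body: bytes) -> int:
    """The size the decompressor really needs: the body plus every bit of
    side information (here, the file name) it relies on."""
    return len(body) + len(filename.encode('ascii'))

data = b'hello, world'
name, body = cheat_compress(data)
print(f"body: {len(body)} bytes, but the name carries {len(name)} bytes")
assert honest_size(name, body) >= len(data)   # no gain once the name is counted
```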
If the decompressor needs such bit lengths to decode the data (when the
bit strings do not form a prefix code), the number of bits needed to
encode those lengths must be taken into account in the total size of the
compressed data.

Another common (but still incorrect) argument is to assume that for any
file, some still to be discovered algorithm might find a seed for a
pseudo-random number generator which would actually generate the whole
sequence of bytes contained in the file. However this idea still fails
to take into account the counting argument. For example, if the seed is
limited to 64 bits, this algorithm can generate at most 2^64 different
files, and thus is unable to compress *all* files longer than 8 bytes.
For more details about this "magic function theory", see
http://www.dogma.net/markn/FAQ.html#Q19

Yet another popular idea is to split the input bit stream into a
sequence of large numbers, and factorize those numbers. Unfortunately,
the number of bits required to encode the factors and their exponents is
on average not smaller than the number of bits of the original bit
stream, so this scheme too cannot compress all data. Another idea also
related to primes is to encode each number as an index into a table of
primes and an offset relative to the indexed prime; this idea doesn't
work either, because the number of bits required to encode the index,
the offset and the separation between index and offset is on average not
smaller than the number of bits of the original bit stream.

Steve Tate <firstname.lastname@example.org> suggests a good challenge
for programs that are claimed to compress any data by a significant
amount:

Here's a wager for you: First, send me the DEcompression algorithm. Then
I will send you a file of whatever size you want, but at least 100k. If
you can send me back a compressed version that is even 20% shorter (80k
if the input is 100k) I'll send you $100.
Of course, the file must be able to be decompressed with the program you
previously sent me, and must match exactly my original file. Now what
are you going to provide when... er... if you can't demonstrate your
compression in such a way?

So far no one has accepted this challenge (for good reasons).

Mike Goldman <email@example.com> makes another offer:

I will attach a prize of $5,000 to anyone who successfully meets this
challenge. First, the contestant will tell me HOW LONG of a data file to
generate. Second, I will generate the data file, and send it to the
contestant. Last, the contestant will send me a decompressor and a
compressed file, which will together total in size less than the
original data file, and which will be able to restore the compressed
file to the original state.

With this offer, you can tune your algorithm to my data. You tell me the
parameters of size in advance. All I get to do is arrange the bits
within my file according to the dictates of my whim. As a processing
fee, I will require an advance deposit of $100 from any contestant. This
deposit is 100% refundable if you meet the challenge.

9.3 The WEB 16:1 compressor

9.3.1 What the press says

April 20, 1992  Byte Week Vol 4. No. 25:

"In an announcement that has generated high interest - and more than a
bit of skepticism - WEB Technologies (Smyrna, GA) says it has developed
a utility that will compress files of greater than 64KB in size to about
1/16th their original length. Furthermore, WEB says its DataFiles/16
program can shrink files it has already compressed." [...] "A week after
our preliminary test, WEB showed us the program successfully compressing
a file without losing any data. But we have not been able to test this
latest beta release ourselves." [...] "WEB, in fact, says that virtually
any amount of data can be squeezed to under 1024 bytes by using
DataFiles/16 to compress its own output multiple times."

June 1992 Byte, Vol 17 No 6: [...]
According to Earl Bradley, WEB Technologies' vice president of sales and
marketing, the compression algorithm used by DataFiles/16 is
not subject to the laws of information theory. [...]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

9.3.2 First details, by John Wallace <firstname.lastname@example.org>

I called WEB at (404)514-8000 and they sent me some product literature
as well as chatting for a few minutes with me on the phone. Their
product is called DataFiles/16, and their claims for it are roughly
those heard on the net. According to their flier:

"DataFiles/16 will compress all types of binary files to approximately
one-sixteenth of their original size ... regardless of the type of file
(word processing document, spreadsheet file, image file, executable
file, etc.), NO DATA WILL BE LOST by DataFiles/16." (Their
capitalizations; 16:1 compression only promised for files >64K bytes in
length.)

"Performed on a 386/25 machine, the program can complete a
compression/decompression cycle on one megabyte of data in less than
thirty seconds"

"The compressed output file created by DataFiles/16 can be used as the
input file to subsequent executions of the program. This feature of the
utility is known as recursive or iterative compression, and will enable
you to compress your data files to a tiny fraction of the original size.
In fact, virtually any amount of computer data can be compressed to
under 1024 bytes using DataFiles/16 to compress its own output files
multiple times. Then, by repeating in reverse the steps taken to perform
the recursive compression, all original data can be decompressed to its
original form without the loss of a single bit."

Their flier also claims: "Constant levels of compression across ALL
TYPES of FILES" and "Convenient, single floppy DATA TRANSPORTATION".

From my telephone conversation, I was assured that this is an actual
compression program. Decompression is done by using only the data in the
compressed file; there are no hidden or extra files.
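The "recursive or iterative compression" claim is easy to put to the
test with any real compressor. The Python sketch below (using zlib as a
stand-in; the sizes and round count are arbitrary choices of mine) shows
what actually happens when a compressor is fed random data and then its
own output: each pass adds framing overhead instead of shrinking the
file.

```python
import os
import zlib

data = os.urandom(65536)           # random data, as in the WEB claims
sizes = [len(data)]

buf = data
for _ in range(5):                 # "iterative compression", five rounds
    buf = zlib.compress(buf, 9)    # maximum effort each time
    sizes.append(len(buf))

print("sizes after each pass:", sizes)
assert sizes[1] >= sizes[0]        # pass 1: random data does not shrink
assert sizes[-1] >= sizes[1]       # further passes only accumulate overhead
```

Random input is already at maximum entropy, so deflate falls back to
stored blocks and the output grows slightly on every pass; the counting
argument guarantees that no compressor can do better on *all* such
inputs.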
9.3.3 More information, by Rafael Ramirez <email@example.com>

Today (Tuesday, 28th) I got a call from Earl Bradley of Web who now says
that they have put off releasing a software version of the algorithm
because they are close to signing a major contract with a big company to
put the algorithm in silicon. He said he could not name the company due
to non-disclosure agreements, but that they had run extensive
independent tests of their own and verified that the algorithm works.
[...]

He said the algorithm is so simple that he doesn't want anybody getting
their hands on it and copying it even though he said they have filed a
patent on it. [...] Mr. Bradley said the silicon version would hold up
much better to patent enforcement and be harder to copy.

He claimed that the algorithm takes up about 4K of code, uses only
integer math, and the current software implementation only uses a 65K
buffer. He said the silicon version would likely use a parallel version
and work in real-time. [...]

9.3.4 No software version

Appeared on BIX, reposted by Bruce Hoult <Bruce.Hoult@actrix.gen.nz>:

tojerry/chaos #673, from abailey, 562 chars, Tue Jun 16 20:40:34 1992
Comment(s).
----------
TITLE: WEB Technology

I promised everyone a report when I finally got the poop on WEB's 16:1
data compression. After talking back and forth for a year and being put
off for the past month by un-returned phone calls, I finally got hold of
Marc Spindler who is their sales manager.

_No_ software product is forthcoming, period! He began talking about
hardware they are designing for delivery at the end of the year. [...]

9.3.5 Product cancelled

Posted by John Toebes <firstname.lastname@example.org> on Aug 10th,
1992:

[Long story omitted, confirming the reports made above about the
original WEB claims.]

10JUL92 - Called to Check Status.
Was told that testing had uncovered a new problem where 'four numbers in
a matrix were the same value' and that the programmers were off
attempting to code a preprocessor to eliminate this rare case. I
indicated that he had told me this story before. He told me that the
programmers were still working on the problem.

31JUL92 - Final Call to Check Status. Called Earl in the morning and was
told that he still had not heard from the programmers. [...] Stated that
if they could not resolve the problem then there would probably not be a
product.

03AUG92 - Final Call. Earl claims that the programmers are unable to
resolve the problem. I asked if this meant that there would not be a
product as a result and he said yes.

9.3.6 Byte's final report

Extract from the Nov. 95 issue of Byte, page 42:

Not surprisingly, the beta version of DataFiles/16 that reporter Russ
Schnapp tested didn't work. DataFiles/16 compressed files, but when
decompressed, those files bore no resemblance to their originals. WEB
said it would send us a version of the program that worked, but we never
received it. When we attempted to follow up on the story about three
months later, the company's phone had been disconnected. Attempts to
reach company officers were also unsuccessful. [...]

9.4 Jules Gilbert

As opposed to WEB Technologies, Jules Gilbert <email@example.com> does
not claim to compress *all* files, but only "random or
random-appearing" files. Here are some quotes from a few of Mr Gilbert's
articles, which can be helpful to get a better idea of his claims. No
comments or conclusions are given; if you need more information contact
Mr. Gilbert directly.

From: firstname.lastname@example.org (Jules Gilbert)
Newsgroups: comp.compression
Subject: Re: No Magic Compressors, No Factoring Compressors, Jules Gilbert is a liar
Date: 14 May 1996 03:13:31 -0400
Message-ID: <email@example.com>

[...]
I will, in front of several Boston area computer scientists
('monitors'), people I choose but generally known to be fair and
competent, under conditions which are sufficient to prevent disclosure
of the method and fully protect the algorithm and other aspects of the
underlying method from untoward discovery, use two computers, (which I
am permitted to examine but not alter) with both machine's running
Linux, and with the file-systems and Linux OS freshly restored from
commercial CD-ROM's do the following:

On one machine (the 'src-CPU') will be loaded a copy of the
CALGARY-CORPUS. (Or other agreed on '.ZIP' or '.ARJ' file.) I will
compress the CALGARY-CORPUS for transfer from the src-CPU onto 3.5"
disks and transfer it (by sneaker-net) to the other machine for
decompression and produce a perfect copy of the CORPUS file on the
'dst-CPU'.

The CORPUS archive contents will not be 'cracked', ie', the original
CORPUS can be encrypted and the password kept from me. All I care about
is that the input file is highly random-appearing. I claim that I can
perform this process several times, and each iteration will reduce the
overall file by at least 50%, ie., a ratio of 2:1. An 'iteration' will
constitute copying, using compression, from the src-CPU to the dst-CPU,
and then reversing the direction to achieve another iteration. For
example, for say a 4M input file, it is reasonable to expect an
approximately 1M output file, after two complete iterations. [...]

ONLY RANDOM OR RANDOM-APPEARING DATA INPUT CAN BE COMPRESSED BY MY
METHOD. [...]

If one iteration (of the compression 'sandwich') consists of two parts,
say an LZ phase followed by a JG phase, the LZ method will compression
by perhaps a ratio of 2:1 (at the first iteration), perhaps much better
if the input is text, and the JG phase will do 3-4:1, but slowly!!
During subsequent iterations, the LZ phase will do perhaps 1.25:1 and
the JG phase will continue to do about 3-4:1.
Experimentally, I have achieved compression results of nearly 150:1, overall,
                                                       ^^^^^^^^^^^^^^ ^^^^^
for a 60M file. (I started with a '.arj' archive of a large DOS
partition.) [...]

----------------------------------------------------------------------------
From: firstname.lastname@example.org (Jules Gilbert)
Newsgroups: comp.compression
Subject: Re: Explanation: that uh, alg thing...
Date: 15 May 1996 16:38:18 -0400
Message-ID: <email@example.com>

[...]

One more thing, I am preparing a short technical note to deal with the
reason most programmers' and computer scientists' think it's impossible
to (further) compress random input. (Many people think that because you
can't get more than 2^N messages from a N-bit compressed msg, that it
means that you can't compress random input. (Lot's of folks have told me
that.)

The short story is: I agree that you can not get more than 2^N messages
from N bits. No question about it. BUT THAT STATEMENT HAS NOTHING TO DO
WHATSOEVER WITH THE INTERPRETATION OF WHAT THOSE BITS 'MEAN'. [...]

----------------------------------------------------------------------------
From: firstname.lastname@example.org (Jules Gilbert)
Newsgroups: comp.compression
Subject: Seeing is believing!
Date: 9 Jun 1996 03:20:52 -0400
Message-ID: <email@example.com>

[...]

If your firm needs industrial-strength compression, contact
'firstname.lastname@example.org' and ask us for an on-site demonstration
of our MR2 compressors. Each can compress large files of
'random-appearing' information, whether RSA-encrypted blocks, or files
already compressed using LZ-techniques. Our demonstration will give you
the opportunity to observe compression of 'random-appearing' files of at
least 100MB by at least 3:1 per iteration. Usually, several iterations
are possible. (These are minimum figures easily exceeded.) [...]
----------------------------------------------------------------------------
From: email@example.com (Jules Gilbert)
Newsgroups: comp.compression
Subject: Re: My remarks on Jules Gilbert
Date: 24 Jul 1996 18:05:44 -0400
Message-ID: <firstname.lastname@example.org>

[...]

My claims can not possibly be true IF I'M PLAYING BY THE 'RULES' THAT
YOU ASSUME APPLY TO ME. (Sorry to shout). Clearly, anyone sending a
signal (in the Shannon context), is constrained by limits which make it
impossible to compress RAD ('random-appearing data') input. [...]

1) I can't compress bits any better than the next guy. Maybe not as
well, in fact.

2) I have designed an engine that accepts RAD input and emits far too
little data to reconstitute the original data, based on conventional
assumptions. Okay! I know this.

3) But, I none-the-less reconstitute the original data. [...]

----------------------------------------------------------------------------
From: email@example.com (Jules Gilbert)
Newsgroups: comp.compression
Subject: Re: Jules Gilbert's New Compresssion Technology
Date: 12 Aug 1996 08:11:10 -0400
Message-ID: <firstname.lastname@example.org>

I have multiple methods for compressing RAD. Watch carefully:

MR1 does 3:1, on large buffers and is repeatable until the volume of
input data falls below 128k or so. (This figure is under user control,
but compression quality will suffer as the buffer size is decreased).
Recent changes make this method about as fast as any conventional
compressor.

MR2 does at least 6:1, with a minimum buffer size of perhaps 32k. It is
also repeatable. MR2 does not actually compress, though. Instead, it
translates an input buffer into an output buffer of roughly equivalent
size. This output buffer contains mostly constants, and other things,
such as simple sequences: 28,29,31,32,33,35,40,41,42,43,44,45. (An
actual sequence of bytes). Obviously, this kind of information is
readily compressed, and that is why I claim that MR2 can achieve a
minimum of 6:1.
Again, like MR1, this process can be re-applied over it's own output.
When, I've said, "No, it's impossible to compress by 100:1" I was trying
to get this audience to see this as realistic. But I can compress RAD
files 100:1 if allowed to re-process the output through the same
process.

I first actually achieved a 100:1 compression level in March of this year
                          ^^^^^^^^^^^^^^^^^^^^^^^^^
using tools designed for experimenting in RAD issues. But now I have C
programs which have been written to be easy to understand and are
intended to be part of my technology transfer process for clients. [...]

So, can someone compress by 100:1 or even 1000:1? Yes! But ONLY if the
input file is sufficiently large. A 1000:1 compression ratio would
require a very large input file, and, at least for PC users, archive
files of this size are almost never produced.

----------------------------------------------------------------------------
From: email@example.com (Jules Gilbert)
Newsgroups: comp.compression
Subject: Re: Gilbert's RAD compression product
Date: 18 Aug 1996 08:40:28 -0400
Message-ID: <firstname.lastname@example.org>

[...]

(In my original remarks), I am quoted above as claiming that a 3,152,896
byte 'tar' file (conventionally compressed to 1,029,790 bytes) can be
compressed to 50*1024 bytes. It's an accurate quote. Now how can that be
possible? If a gzip compressed version of the Corpus requires roughly a
1MB, what do I do with the 950k bytes I don't store in the compressed
intermediate file?

Well, that's certainly a puzzler! For now, all I will say is that it
does not go into the compressed intermediate file. And because it
doesn't, Shannons' channel capacity axioms apply only to the 50k
component.

----------------------------------------------------------------------------
From: email@example.com (Jules Gilbert)
Newsgroups: comp.compression
Subject: Some answers about MR1
Date: 22 Aug 1996 23:45:54 -0400
Message-ID: <firstname.lastname@example.org>

[...]
However, arrangements are being made to do another demo in September at
MIT. One of the files compressed and decompressed will be the Corpus,
after it's already been compressed using ARJ, a good quality
conventional compressor. (It should be about a 1MB at that point).

My program has made the corpus as small as 6k, although that requires
                               ^^^^^^^^^^^^^^
SEVERAL separate physical passes. Because we will only have a few
minutes to spend on this single file, I'll likely stop at 250k or so.

Under Linux, the total size of the compressor and decompressor load
modules is about 50k bytes. And under DOS, using the Intel C compiler (a
great product, but sadly, not sold anymore), the same files total about
300k bytes.

MR1 contains code that is highly dependent on the particularities of a
host computer's floating point processor, or more correctly,
architectural differences existing between the source machine and the
target machine would likely cause failure to de-compress. [...]

9.5 Patents on compression of random data or recursive compression

9.5.1 David C. James

On July 2, 1996, David C. James was granted patent 5,533,051 "Method for
data compression" for a method claimed to be effective even on random
data.

From: email@example.com (Peter J. Cranstone)
Newsgroups: comp.compression
Subject: Re: Jules Gilbert's Compression Technology
Date: Sun Aug 18 12:48:11 EDT 1996

We have just been issued a patent (US. #5,533,051) and have several more
pending on a new method for data compression. It will compress all types
of data, including "random", and data containing a uniform distribution
of "0's" and "1's". [...]

The first line of the patent abstract is:

Methods for compressing data including methods for compressing highly
randomized data are disclosed.

Page 3, line 34 of the patent states:

A second aspect of the present invention which further enhances its
ability to achieve high compression percentages, is its ability to be
applied to data recursively.
Specifically, the methods of the present invention are able to make
multiple passes over a file, each time further compressing the file.
Thus, a series of recursions are repeated until the desired compression
level is achieved.

Page 27, line 18 of the patent states that the claimed method can
compress without loss *all* files by at least one bit:

the direct bit encode method of the present invention is effective for
reducing an input string by one bit regardless of the bit pattern of the
input string.

The counting argument shows that this is mathematically impossible (see
section 9.2 above). If the method were indeed able to shrink any file by
at least one bit, applying it recursively would shrink gigabytes down to
a few bits.

The patent contains evasive arguments to justify the impossible claims.
Page 12, line 22:

Of course, this does not take into account any overhead registers or
other "house-keeping" type information which must be tracked. However
such overhead tends to be negligible when processing the large
quantities of data typically encountered in data compression
applications.

Page 27, line 17:

Thus, one skilled in the art can see that by keeping the appropriate
counters, the direct bit encode method of the present invention is
effective for reducing an input string by one bit regardless of the bit
pattern of the input string. Although a certain amount of "loss" is
necessary in keeping and maintaining various counters and registers, for
files which are sufficiently large, this overhead is insignificant
compared to the savings obtained by the direct bit encode method.

The flaw in these arguments is that the "house-keeping" type information
is *not* negligible. If it is properly taken into account, it cancels
any gains made elsewhere when attempting to compress random data.

The patent contains even more evasive arguments. Page 22, line 31:

It is commonly stated that perfectly entropic data streams cannot be
compressed.
This misbelief is in part based on the sobering fact that for a large
set of entropic data, calculating the number of possible bit pattern
combinations is unfathomable. For example, if 100 ones and 100 zeros are
randomly distributed in a block 200 bits long, there are
200C100 = 9.055 10^58 combinations possible. The numbers are clearly
unmanageable and hence the inception that perfectly entropic data
streams cannot be compressed. The key to the present compression method
under discussion is that it makes no attempt to deal with such large
amounts of data and simply operates on smaller portions.

The actual claims of the patent are harmless since they only describe
methods which cannot work (they actually expand random data instead of
compressing it). For example, claims 6 and 7 are:

6. A method of compressing a stream of binary data, comprising the
steps of: A) parsing n-bits from said stream of binary data; B)
determining the value of said parsed n-bits; C) based on the results of
step B, coding said values of said n-bits in at least one of a first,
second, and third target string, wherein coding said value includes
generating a plurality of code strings and correlating said value with
one of said code strings and dividing said correlated code string
variable length codes and dividing at least some of said into at least
first and second segments, and assigning at least one of said
correlated code string segments to at least one of said first, second,
and third target strings, wherein at least one of said plurality of
codes is not greater than n-1 bits long.

7. The method of compressing a stream of binary data of claim 6,
wherein n=2.

Setting aside the legalese, claim 7 says in short that you can compress
an arbitrary sequence of two bits down to one bit.

9.5.2 Michael L. Cole

Patent 5,488,364 "Recursive data compression", granted Jan. 30, 1996,
also claims that recursive compression of random data is possible.
See http://www.teaser.fr/~jlgailly/05488364.html for the text and a
short analysis of this patent.

9.5.3 John F. Remillard

Patent 5,486,826 "Method and apparatus for iterative compression of
digital data" uses methods very similar to those of the "magic function
theory" (see section 9.2 above). The patent is available at
http://patent.womplex.ibm.com/details?patent_number=5486826

See also from the same person patent 5,594,435 "Permutation-based data
compression":
http://patent.womplex.ibm.com/details?patent_number=5594435

The assignee for this patent is Philosophers' Stone LLC. (The
Philosopher's Stone is the key to all the riches in the universe; an LLC
is a Limited Liability Corporation.)
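Returning to claim 7 of the James patent (section 9.5.1 above): its
impossibility can be verified exhaustively, since there are only four
2-bit inputs and three candidate outputs of at most one bit ('', '0',
'1'). The Python sketch below (an illustration of mine, not from the
patent) enumerates every possible code table and finds that none is
lossless:

```python
from itertools import product

def lossless_codes(input_len: int, max_output_len: int) -> int:
    """Count the injective (i.e. losslessly decodable) maps from all
    input_len-bit strings into bit strings of length <= max_output_len."""
    inputs = [''.join(b) for b in product('01', repeat=input_len)]
    outputs = ['']
    for length in range(1, max_output_len + 1):
        outputs.extend(''.join(b) for b in product('01', repeat=length))
    count = 0
    # try every assignment of an output string to each input string
    for assignment in product(outputs, repeat=len(inputs)):
        if len(set(assignment)) == len(inputs):   # injective = decodable
            count += 1
    return count

print("lossless 2-bit -> <=1-bit codes:", lossless_codes(2, 1))
assert lossless_codes(2, 1) == 0   # claim 7 admits no lossless code
```

Four pigeons, three holes: exactly the counting argument of section 9.2.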
Last Update August 08 2012 @ 06:18 AM