comp.compression Frequently Asked Questions (part 2/3)
Section - [74] Introduction to JBIG


JBIG software and the JBIG specification are available at
ftp://nic.funet.fi/pub/graphics/misc/test-images/jbig.tar.gz

The ISO JBIG committee's home page is http://www.jpeg.org/public/welcome.htm


A short introduction to JBIG, written by Mark Adler <madler@cco.caltech.edu>:

  JBIG losslessly compresses binary (one-bit-per-pixel) images.  (The B
  stands for bi-level.)  Basically it models the redundancy in the
  image as correlations between the pixel currently being coded and a
  set of nearby pixels called the template.  An example template might
  be the two pixels preceding this one on the same line, and the five
  pixels centered above this pixel on the previous line.  Note that
  this choice only involves pixels that have already been seen by a
  scanner.

  The current pixel is then arithmetically coded based on the seven-bit
  context formed by those template pixels, so there are (in this case)
  128 contexts, each with its own probability estimate.  The arithmetic
  coder and probability estimator for the contexts are IBM's (patented)
  Q-coder.  The Q-coder uses low-precision, rapidly adaptable (those
  two properties are related) probability estimation combined with a
  multiplication-free arithmetic coder.  The probability estimation is
  intimately tied to the interval calculations necessary for the
  arithmetic coding.
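
  To make the context idea concrete, here is a minimal Python sketch
  (illustrative only: it uses the example template above, not the
  standard's actual templates, and the function names are mine).  It
  assumes the image is a list of rows of 0/1 values and treats
  out-of-bounds neighbours as white:

    def context(image, x, y):
        """Build a 7-bit context for pixel (x, y): two pixels to the
        left on the current line, five centered above on the previous
        line.  Out-of-range neighbours read as 0 (white)."""
        def pix(px, py):
            if 0 <= py < len(image) and 0 <= px < len(image[py]):
                return image[py][px]
            return 0
        neighbours = [pix(x - 2, y), pix(x - 1, y),          # same line
                      pix(x - 2, y - 1), pix(x - 1, y - 1),  # line above
                      pix(x, y - 1), pix(x + 1, y - 1),
                      pix(x + 2, y - 1)]
        ctx = 0
        for bit in neighbours:
            ctx = (ctx << 1) | bit
        return ctx   # one of 128 possible contexts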

  JBIG actually goes beyond this and has adaptive templates, and probably
  some other bells and whistles I don't know about.  You can find a
  description of the Q-coder as well as the ancestor of JBIG in the Nov 88
  issue of the IBM Journal of Research and Development.  This is a very
  complete and well written set of five articles that describe the Q-coder
  and a bi-level image coder that uses the Q-coder.

  You can use JBIG on grey-scale or even color images by simply
  applying the algorithm one bitplane at a time.  You would want to
  recode the grey or color levels first, though, so that adjacent
  levels differ in only one bit (called Gray coding).  I hear that this
  works well up to about six bits per pixel, beyond which JPEG's
  lossless mode works better.  You need to use the Q-coder with JPEG as
  well to get this performance.
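
  The Gray-coding step itself is tiny; a sketch (this is the standard
  bit trick, not anything specific to JBIG):

    def to_gray(n):
        """Binary to Gray code: adjacent values differ in one bit."""
        return n ^ (n >> 1)

    def from_gray(g):
        """Invert the Gray code by folding the bits back down."""
        n = 0
        while g:
            n ^= g
            g >>= 1
        return n

    assert to_gray(127) ^ to_gray(128) == 1 << 7  # one-bit difference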

  Actually no lossless mode works well beyond six bits per pixel, since
  those low bits tend to be noise, which doesn't compress at all.

  Anyway, the intent of JBIG is to replace the current, less effective
  Group 3 and Group 4 fax algorithms.


Another introduction to JBIG, written by Hank van Bekkem <jbek@oce.nl>:

  The following description of the JBIG algorithm is derived from
  experiences with a software implementation I wrote following the
  specifications in the revision 4.1 draft of September 16, 1991. The
  source will not be made available in the public domain, as parts of
  JBIG are patented.

  JBIG (Joint Bi-level Image Experts Group) is an experts group of ISO,
  IEC and CCITT (JTC1/SC2/WG9 and SGVIII). Its job is to define a
  compression standard for lossless image coding ([1]). The main
  characteristics of the proposed algorithm are:
  - Compatible progressive/sequential coding. This means that a
    progressively coded image can be decoded sequentially, and the
    other way around.
  - JBIG will be a lossless image compression standard: all bits in
    your images before and after compression and decompression will be
    exactly the same.

  In the rest of this text I will first describe the JBIG algorithm in
  a short abstract of the draft. I will conclude by saying something
  about the value of JBIG.


  JBIG algorithm.
  --------------

  JBIG parameter P specifies the number of bits per pixel in the image.
  Its allowable range is 1 through 255, but starting at P=8 or so,
  compression will be more efficient using other algorithms. On the
  other hand, medical images such as chest X-rays are often stored with
  12 bits per pixel, where no distortion is allowed, so JBIG can
  certainly be of use in this area. To limit the number of bit changes
  between adjacent decimal values (e.g. 127 and 128), it is wise to use
  Gray coding before compressing multi-level images with JBIG. JBIG
  then compresses the image on a bitplane basis, so the rest of this
  text assumes bi-level pixels.
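
  A minimal sketch of the Gray coding and bitplane split (the helper
  name is mine, not from the standard):

    def bitplanes(image, P):
        """Split a P-bit image (list of rows of ints) into P bi-level
        planes, most significant first, after Gray coding each pixel
        so adjacent levels differ in only one bit."""
        gray = [[v ^ (v >> 1) for v in row] for row in image]
        return [[[(v >> p) & 1 for v in row] for row in gray]
                for p in range(P - 1, -1, -1)]

    # Each returned plane is a bi-level image ready for JBIG coding:
    planes = bitplanes([[126, 127], [128, 129]], 8)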

  Progressive coding is a way to send an image gradually to a receiver
  instead of all at once.  As more detail is sent, the receiver can
  build up the image from low to high detail.  JBIG uses discrete
  steps of detail by successively doubling the resolution.  The sender
  computes a number of resolution layers D, and transmits these
  starting at the lowest-resolution layer.  Resolution reduction uses
  pixels in the high-resolution layer and some already-computed
  low-resolution pixels as an index into a lookup table.  The contents
  of this table can be specified by the user.
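
  The mechanics can be sketched as follows; this toy version indexes a
  32-entry table with the 2x2 high-resolution block plus the
  low-resolution pixel already computed to its left, whereas the real
  JBIG table uses a larger neighbourhood:

    def reduce_resolution(hi, table):
        """Halve a bi-level image (even width/height) via a lookup
        table, one low-resolution pixel per 2x2 high-resolution
        block."""
        lo = [[0] * (len(hi[0]) // 2) for _ in range(len(hi) // 2)]
        for ly, row in enumerate(lo):
            for lx in range(len(row)):
                a, b = hi[2*ly][2*lx], hi[2*ly][2*lx + 1]
                c, d = hi[2*ly + 1][2*lx], hi[2*ly + 1][2*lx + 1]
                left = row[lx - 1] if lx > 0 else 0
                row[lx] = table[left << 4 | a << 3 | b << 2 | c << 1 | d]
        return lo

    # A simple user-defined table: black if the 2x2 block has two or
    # more black pixels (this one ignores the `left` bit):
    majority = [1 if bin(i & 15).count("1") >= 2 else 0
                for i in range(32)]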

  Compatibility between progressive and sequential coding is achieved
  by dividing an image into stripes. Each stripe is a horizontal bar
  with a user definable height. Each stripe is separately coded and
  transmitted, and the user can define in which order stripes,
  resolutions and bitplanes (if P>1) are intermixed in the coded data.
  A progressively coded image can be decoded sequentially by decoding
  each stripe, beginning with the one at the top of the image, to its
  full resolution, and then proceeding to the next stripe. Progressive
  decoding can be done by decoding only a specific resolution layer
  from all stripes.
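
  The difference between the two decoding orders is just the nesting
  of the loops over stripes and layers; a sketch, with layer 0 being
  the lowest resolution:

    def sequential_order(stripes, layers):
        """Decode each stripe to full resolution before the next."""
        return [(s, d) for s in range(stripes) for d in range(layers)]

    def progressive_order(stripes, layers):
        """Decode a whole resolution layer at a time, lowest first."""
        return [(s, d) for d in range(layers) for s in range(stripes)]

    # With 3 stripes and 2 layers, the same (stripe, layer) chunks are
    # consumed in different orders:
    #   sequential : (0,0) (0,1) (1,0) (1,1) (2,0) (2,1)
    #   progressive: (0,0) (1,0) (2,0) (0,1) (1,1) (2,1)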

  After dividing an image into bitplanes, resolution layers and
  stripes, eventually a number of small bi-level bitmaps are left to
  compress.  Compression is done using a Q-coder.  Reference [2]
  contains a full description; I will only outline the basic
  principles here.

  The Q-coder codes bi-level pixels as symbols using the probability of
  occurrence of these symbols in a certain context. JBIG defines two
  kinds of context, one for the lowest resolution layer (the base
  layer), and one for all other layers (differential layers).
  Differential layer contexts contain pixels in the layer to be coded,
  and in the corresponding lower resolution layer.

  For each combination of pixel values in a context, the probability
  distribution of black and white pixels can be different. In an all
  white context, the probability of coding a white pixel will be much
  greater than that of coding a black pixel. The Q-coder assigns, just
  like a Huffman coder, more bits to less probable symbols, and so
  achieves compression.  The Q-coder can, unlike a Huffman coder,
  assign one output codebit to more than one input symbol, and thus is
  able to compress bi-level pixels without explicit clustering, as
  would be necessary using a Huffman coder.

  Maximum compression will be achieved when the estimated probabilities
  (one set for each combination of pixel values in the context) match
  the actual statistics of the pixels.  The Q-coder therefore
  continuously adapts these estimates to the symbols it sees.
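
  The adaptation can be imitated with simple per-context counts.  The
  sketch below estimates the ideal output size in bits for a stream of
  (context, pixel) pairs; it stands in for the patented Q-coder, which
  achieves a similar effect with a table-driven, multiplication-free
  estimator:

    from math import log2

    def ideal_code_length(pairs, num_contexts):
        """Sum -log2(p) over coded pixels, with p an adaptive
        per-context estimate (Laplace counts).  An ideal arithmetic
        coder approaches this size."""
        zeros = [1] * num_contexts   # start counts at 1 (smoothing)
        ones = [1] * num_contexts
        bits = 0.0
        for ctx, pixel in pairs:
            p_one = ones[ctx] / (zeros[ctx] + ones[ctx])
            bits += -log2(p_one if pixel else 1.0 - p_one)
            if pixel:
                ones[ctx] += 1
            else:
                zeros[ctx] += 1
        return bits

    # A heavily skewed context compresses well: 1000 white pixels in
    # one context cost only about 10 bits here.
    print(ideal_code_length([(0, 0)] * 1000, 1))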


  JBIG value.
  ----------

  In my opinion, JBIG can be regarded as two combined devices:
  - Providing the user the service of sending or storing multiple
    representations of images at different resolutions without any
    extra cost in storage. Differential layer contexts contain pixels
    in two resolution layers, and so enable the Q-coder to effectively
    code the difference in information between the two layers, instead
    of the information contained in every layer. This means that,
    within a margin of approximately 5%, the number of resolution
    layers doesn't affect the compression ratio.
  - Providing the user a very efficient compression algorithm, mainly
    for use with bi-level images. Compared to CCITT Group 4, JBIG is
    approximately 10% to 50% better on text and line art, and even
    better on halftones.  JBIG is, however, just like Group 4, somewhat
    sensitive to noise in images: the compression ratio decreases as
    the amount of noise in your images increases.

  An example of an application would be browsing through an image
  database, e.g. an EDMS (engineering document management system).
  Large A0 size drawings at 300 dpi or so would be stored using five
  resolution layers. The lowest resolution layer would fit on a
  computer screen. Base layer compressed data would be stored at the
  beginning of the compressed file, thus making browsing through large
  numbers of compressed drawings possible by reading and decompressing
  just the first small part of all files. When the user stops browsing,
  the system could automatically start decompressing all remaining
  detail for printing at high resolution.

  [1] "Progressive Bi-level Image Compression, Revision 4.1", ISO/IEC
      JTC1/SC2/WG9, CD 11544, September 16, 1991
  [2] "An overview of the basic principles of the Q-coder adaptive
      binary arithmetic coder", W.B. Pennebaker, J.L. Mitchell, G.G.
      Langdon, R.B. Arps, IBM Journal of research and development,
      Vol.32, No.6, November 1988, pp. 771-726 (See also the other
      articles about the Q-coder in this issue)
