Fractal Image Compression
computer science crazy
Joined: Dec 2008
21-09-2008, 10:09 AM
The subject of this work is image compression with fractals. Today JPEG has become an industrial standard in image compression. Further research is being pursued in two areas: wavelet-based compression and fractal image compression. The fractal scheme was introduced by Michael F. Barnsley in the late 1980s. His idea was that images could be compactly stored as iterated functions, which led to the development of the IFS scheme that forms the basis of fractal image compression. Further work in this area was conducted by A. Jacquin, a student of Barnsley, who published several papers on this subject. He was the first to publish an efficient algorithm based on local iterated function systems.
Fractal image compression has the following features:
- Compression (encoding) has high computational complexity.
- Image decoding is fast.
- High compression ratios can be achieved.
These features enable applications such as computer encyclopedias; Microsoft's Encarta, for example, shipped images compressed with this technology. The technology is relatively new.
Overview Of Image Compression
Images are stored as pixels, or picture-forming elements. Each pixel requires a certain amount of computer memory to store. Suppose we had to store a family album with, say, a hundred photos; storing it uncompressed on a computer would require an impractically large amount of memory.
This problem can be solved by image compression. The amount of data needed to represent a picture can be drastically reduced by employing image compression techniques. The human eye is insensitive to a wide variety of information loss: redundancies in images cannot be easily detected, and certain minute details can be eliminated while storing so as to reduce the amount of data. The image can then be reconstructed from this reduced representation with minimal perceptible error. This is the basic idea behind image compression. Most image compression techniques are said to be lossy, as they reduce the information being stored.
The method presently employed consists of storing the image by eliminating the high-frequency Fourier coefficients and keeping only the low-frequency coefficients. This is the principle behind the DCT (discrete cosine transform), which forms the basis of the JPEG scheme of image compression.
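As an illustration, here is a minimal sketch in Python of discarding high-frequency DCT coefficients from a single row of pixels. It uses a plain orthonormal DCT-II and its inverse; the real JPEG codec works on 8x8 blocks with quantization tables, and the function names here are my own:

```python
import math

def dct(x):
    # Orthonormal DCT-II of a sequence x.
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
        scale = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
        out.append(scale * s)
    return out

def idct(X):
    # DCT-III, the exact inverse of the orthonormal DCT-II above.
    N = len(X)
    out = []
    for n in range(N):
        s = X[0] / math.sqrt(N)
        s += sum(math.sqrt(2 / N) * X[k] * math.cos(math.pi * (n + 0.5) * k / N)
                 for k in range(1, N))
        out.append(s)
    return out

row = [52, 55, 61, 66, 70, 61, 64, 73]   # one row of pixel values
coeffs = dct(row)
kept = coeffs[:4] + [0.0] * 4            # drop the 4 highest-frequency coefficients
approx = idct(kept)                      # smooth approximation of the original row
```

Reconstructing from only the low-frequency coefficients gives an approximation of the row; what is lost is exactly the high-frequency detail.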
Barnsley suggested that storing images as iterated functions of parts of themselves leads to efficient image compression. In the middle of the 1980s this concept of IFS became popular. Barnsley and his colleagues at the Georgia Institute of Technology first observed the use of IFS in computerized graphics applications. They found that IFS was able to compress colour images up to 10000 times. The compression contained two phases: first the image was partitioned into segments that were as self-similar as possible; then each part was described as an IFS with probabilities.
Active In SP
Joined: Feb 2011
23-04-2011, 10:12 AM
Image Compressio1.docx (Size: 737.64 KB / Downloads: 152)
An Introduction to Image Compression
Compressing an image is significantly different from compressing raw binary data. Of course, general purpose compression programs can be used to compress images, but the result is less than optimal. This is because images have certain statistical properties which can be exploited by encoders specifically designed for them. Also, some of the finer details in the image can be sacrificed for the sake of saving a little more bandwidth or storage space. This also means that lossy compression techniques can be used in this area.
Image compression is minimizing the size in bytes of a graphics file without degrading the quality of the image to an unacceptable level. The reduction in file size allows more images to be stored in a given amount of disk or memory space. It also reduces the time required for images to be sent over the Internet or downloaded from Web pages.
There are several different ways in which image files can be compressed. For Internet use, the two most common compressed graphic image formats are the JPEG format and the GIF format. The JPEG method is more often used for photographs, while the GIF method is commonly used for line art and other images in which geometric shapes are relatively simple.
Other techniques for image compression include the use of fractals and wavelets. These methods have not gained widespread acceptance for use on the Internet as of this writing. However, both methods offer promise because they offer higher compression ratios than the JPEG or GIF methods for some types of images. Another new method that may in time replace the GIF format is the PNG format.
Lossless compression involves compressing data which, when decompressed, will be an exact replica of the original data. This is the case when binary data such as executables and documents are compressed; they need to be exactly reproduced when decompressed. On the other hand, images (and music too) need not be reproduced 'exactly'. An approximation of the original image is enough for most purposes, as long as the error between the original and the compressed image is tolerable.
A text file or program can be compressed without the introduction of errors, but only up to a certain extent; this is lossless compression. Beyond this point, errors are introduced. In text and program files, it is crucial that compression be lossless, because a single error can seriously damage the meaning of a text file or cause a program not to run. In image compression, a small loss in quality is usually not noticeable, and there is no "critical point" up to which compression works perfectly but beyond which it becomes impossible. When there is some tolerance for loss, the compression factor can be greater than when there is no loss tolerance. For this reason, graphic images can be compressed more than text files or programs.
The theoretical background of compression is provided by information theory (which is closely related to algorithmic information theory) for lossless compression, and by rate–distortion theory for lossy compression. These fields of study were essentially created by Claude Shannon, who published fundamental papers on the topic in the late 1940s and early 1950s. Cryptography and coding theory are also closely related. The idea of data compression is deeply connected with statistical inference.
Many lossless data compression systems can be viewed in terms of a four-stage model. Lossy data compression systems typically include even more stages, including, for example, prediction, frequency transformation, and quantization.
Classifying image data
An image is represented as a two-dimensional array of coefficients, each coefficient representing the brightness level at that point. At first glance we cannot divide the coefficients into more important and less important ones, but thinking more intuitively, we can. Most natural images have smooth colour variations, with the fine details being represented as sharp edges in between the smooth variations. Technically, the smooth variations in colour can be termed low-frequency variations and the sharp variations high-frequency variations.
The low frequency components (smooth variations) constitute the base of an image, and the high frequency components (the edges which give the detail) add upon them to refine the image, thereby giving a detailed image. Hence, the smooth variations carry more importance than the details.
Separating the smooth variations and details of the image can be done in many ways. One such way is the decomposition of the image using a Discrete Wavelet Transform (DWT).
The DWT of an image
The procedure goes like this. A low pass filter and a high pass filter are chosen, such that they exactly halve the frequency range between themselves. This filter pair is called the Analysis Filter pair. First, the low pass filter is applied for each row of data, thereby getting the low frequency components of the row. But since the low pass filter is a half-band filter, the output data contains frequencies only in the first half of the original frequency range. So, by Shannon's Sampling Theorem, they can be subsampled by two, so that the output data now contains only half the original number of samples. Now, the high pass filter is applied for the same row of data, and similarly the high pass components are separated, and placed by the side of the low pass components. This procedure is done for all rows.
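This row-filtering step can be sketched with the simplest analysis pair, the Haar filters (a minimal illustrative example; practical codecs use longer filters such as the CDF 9/7 pair, and the function names here are my own):

```python
import math

def haar_row(row):
    # One analysis step: half-band low-pass (scaled averages) and high-pass
    # (scaled differences), each subsampled by two, low-pass half placed first.
    assert len(row) % 2 == 0
    low  = [(a + b) / math.sqrt(2) for a, b in zip(row[0::2], row[1::2])]
    high = [(a - b) / math.sqrt(2) for a, b in zip(row[0::2], row[1::2])]
    return low + high

def inverse_haar_row(t):
    # Synthesis step: recombine the low and high halves into the original row.
    half = len(t) // 2
    row = []
    for l, h in zip(t[:half], t[half:]):
        row += [(l + h) / math.sqrt(2), (l - h) / math.sqrt(2)]
    return row
```

Since (l + h)/sqrt(2) recovers each even sample and (l - h)/sqrt(2) each odd one, the synthesis step inverts the analysis step exactly.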
Active In SP
Joined: Feb 2012
16-02-2012, 12:23 PM
to get information about the topic image compression full report, ppt and related topics, refer the links below
http://topicideas.org/how-to-fractal-ima...on-seminar-report
http://topicideas.org/how-to-image-proce...ll-seminar-report?page=2
http://seminarprojects.com/attachment.php?aid=476
Joined: Apr 2012
08-08-2012, 03:29 PM
Fractal Image Compression
Fractal Image.doc (Size: 314.5 KB / Downloads: 71)
This paper seeks to explain the emerging technology known as fractal image compression. It presents both the mathematical ideas and the practical techniques involved in compressing images by means of fractals. With claims of compression ratios in excess of 10 000 to 1, fractal compression is certainly a promising new technology. However, the method is in fact quite controversial, with some people claiming it doesn't work well while others claim it works wonderfully; the technology is fast developing but as yet unproven. At the moment, very little is known about fractal compression, and what is known is difficult to ascertain. The main aspects of the technology are patented by M. F. Barnsley, who pioneered fractal image compression. This paper attempts to clarify the key ideas behind the methodology, and includes an overview of the competition, which includes the JPEG standard and vector quantization. Included as an appendix is the source code for a model implementation of fractal image compression of binary black-and-white images, written in C. Examples of decompressed images are also included, as is an evaluation of the program.
Fractal image compression relies on the fact that all real-world images are rich in affine redundancy; that is, under suitable affine transformations, large bits of real-world images look like smaller bits of the same real-world image. The basic idea behind the method is to express the image as a set of local iterated function systems (IFSs). An IFS is just a set of contractive affine transformations. The IFS coefficients are then stored in place of the original. No information regarding the pixels is actually stored. When the original is required, the IFSs are iterated on an arbitrary starting image of the desired resolution. The image can then be displayed quickly and zooming will create infinite levels of (synthetic) fractal detail. The problem is how to efficiently generate the IFSs from the original image.
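The claim that decoding can start from an arbitrary image rests on the fixed-point property of contractive maps. A one-dimensional toy example, with a single hypothetical map w(x) = 0.5x + 1 whose unique fixed point is 2:

```python
def contraction(x):
    # A contractive affine map w(x) = 0.5*x + 1; its unique fixed point is 2,
    # since w(2) = 2 and |w(a) - w(b)| = 0.5 * |a - b| for all a, b.
    return 0.5 * x + 1

def iterate(x0, n=50):
    # Apply the map repeatedly, starting from an arbitrary value x0.
    x = x0
    for _ in range(n):
        x = contraction(x)
    return x
```

Whatever the starting value, iteration converges to the same fixed point; a local IFS behaves the same way, except that the fixed point is a whole image rather than a number.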
The hypothetical Multiple Reduction Photocopy Machine is used to illustrate the implementation of IFSs. Also, the classical Sierpinski triangle is used as an example of a self-similar shape to illustrate how IFSs work. One important result that is central to fractal image compression is known as the Collage Theorem, which was proved by M. Barnsley in his book Fractals Everywhere. The Collage Theorem states what an IFS must be like in order to represent an image. Also covered is what is known as the inverse problem, which involves going from a given image to an IFS that can generate a good approximation of the original. This remains unsolved.
With the advent of multimedia, the necessity for the storage of large numbers of high quality images is increasing. One obstacle that has to be overcome is the large size of image files. For example, a single 800- by 600-pixel true-colour image requires three bytes per pixel, plus a header, which amounts to over 1.37 Mb of disk space, thus almost filling a 1.4Mb high-density diskette. Clearly, some form of compression is necessary. As well as saving storage space, compressed files take less time to transmit via modem, so money can be saved on both counts.
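The 1.37 Mb figure is simple arithmetic:

```python
# Raw storage for an 800 x 600 true-colour image at 3 bytes per pixel.
width, height, bytes_per_pixel = 800, 600, 3
raw_bytes = width * height * bytes_per_pixel   # 1,440,000 bytes
megabytes = raw_bytes / (1024 * 1024)          # about 1.37 Mb
```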
The choice of compression algorithm involves several conflicting considerations. These include degree of compression required, and the speed of operation. Obviously if one is attempting to run programs direct from their compressed state, decompression speed is paramount. The other consideration is size of compressed file versus quality of decompressed image.
There are essentially two sorts of data compression. 'Lossless' compression works by reducing the redundancy in the data. The decompressed data is an exact copy of the original, with no loss of data. Huffman Encoding and LZW are two examples of lossless compression algorithms. There are times when such methods of compression are unnecessarily exact. 'Lossy' compression sacrifices exact reproduction of data for better compression. It both removes redundancy and creates an approximation of the original. The JPEG standard is currently the most popular method of lossy compression. Obviously, a lossless compression method must be used with programs or text files, while lossy compression is really only suitable for graphics or sound data, where an exact reproduction is not necessary. Fractal image compression is a lossy compression method.
The French mathematician Benoit B. Mandelbrot first coined the term fractal in 1975. He derived the word from the Latin fractus, which means "broken", or "irregular and fragmented". In fact, the birth of fractal geometry is usually traced to Mandelbrot and the 1977 publication of his seminal book The Fractal Geometry of Nature. Mandelbrot claimed that classical Euclidean geometry was inadequate at describing many natural objects, such as clouds, mountains, coastlines and trees. So he conceived and developed fractal geometry.
There are two main groups of fractals: linear and nonlinear. The latter are typified by the popular Mandelbrot set and Julia sets, which are fractals of the complex plane. However, the fractals used in image compression are linear, and of the real plane. So, the fractals used are not chaotic; in other words, they are not sensitive to initial conditions. They are the fractals from Iterated Function Theory. An Iterated Function System (IFS) is simply a set of contractive affine transformations. IFSs may efficiently produce shapes such as ferns, leaves and trees.
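As a concrete example, Barnsley's well-known fern can be generated from just four affine maps and their probabilities using the "chaos game". A sketch in Python, using the standard published fern coefficients:

```python
import random

# Barnsley's fern IFS: four affine maps (a, b, c, d, e, f) with probability p,
# where each map sends (x, y) to (a*x + b*y + e, c*x + d*y + f).
MAPS = [
    ( 0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),
    ( 0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),
    ( 0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),
    (-0.15,  0.28,  0.26, 0.24, 0.0, 0.44, 0.07),
]

def chaos_game(n=10000, seed=1):
    # Pick a map at random (weighted by probability) and apply it to the
    # current point; the resulting point cloud fills in the IFS attractor.
    random.seed(seed)
    x, y = 0.0, 0.0
    points = []
    for _ in range(n):
        a, b, c, d, e, f, _p = random.choices(MAPS,
                                              weights=[m[6] for m in MAPS])[0]
        x, y = a * x + b * y + e, c * x + d * y + f
        points.append((x, y))
    return points
```

The entire fern is thus encoded by 28 numbers, which is the essence of the compression claim.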
The Sierpinski Triangle
A classical example of a self-similar shape is the Sierpinski triangle, or gasket. It was named after the Polish mathematician Waclaw Sierpinski, who first described it in 1916. The Sierpinski triangle may be created using a Multiple Reduction Photocopy Machine in the following manner. An image is placed on the machine, reduced by one half and copied three times, once onto each vertex of a triangle.
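One step of this copy machine is easy to sketch in code. This toy version works on an n x n boolean grid and places the three half-scale copies at three corners, giving the right-angle variant of the gasket; the function name is my own:

```python
def mrcm_step(grid):
    # One pass of the Multiple Reduction Photocopy Machine: shrink the image
    # by one half and paste it three times, at three corners of a triangle.
    n = len(grid)
    h = n // 2
    out = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if grid[i][j]:
                out[i // 2][j // 2] = True          # top-left copy
                out[i // 2][j // 2 + h] = True      # top-right copy
                out[i // 2 + h][j // 2] = True      # bottom-left copy
    return out
```

Iterating the step converges to the gasket regardless of the starting image; starting from a completely filled 8x8 grid, three steps leave exactly 3^3 = 27 cells filled.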
Automatic Fractal Image Compression and the Fractal Transform
Local Iterated Function Systems
The Advantages of Representing Images Using Iterated Function Schemes
By utilizing the self-similarity and repetition in nature, a relatively small number of transformations (and probabilities, if used) lead to very high compression ratios. Also, given a set of affine transformations, reproduction of the image is computationally straightforward, is well suited to parallel computation, and is stable: small changes in the transformations lead to small changes in the invariant set. The ability of the transformations to define the image at arbitrarily small scales, and the ease with which small regions may be zoomed in on, are also points in favour of IFSs.
The Problem With Using Iterated Function Systems to Compress Images
The main disadvantage of the method is the difficulty of obtaining a set of transformations to represent a given image. None of the algorithms proposed for the "inverse problem" have been successfully used in practical image compression. In order to fully automate fractal image compression, an algorithm must be used which requires no 'user interference', such as adjusting the coefficients in an IFS. Standard IFSs are not, in fact, good for general and practical image compression: a high correlation between different parts of a picture is required. The method may be excellent for giving an overall picture of a tree, but is no use if the exact arrangement of the leaves on different branches is important.
Even if the inverse problem were solved, it might be to no avail. Real-world scenes are very diverse in subject matter; there may be totally different-looking objects in the same scene, and on the whole they do not obey fractal geometry. Real ferns do not branch down to infinity, and they may be distorted, discoloured, perforated and torn, while other objects are even further from fractal geometry. One theoretical reason why standard IFSs may never be successfully used in compression is that, given the IFS for object A and the IFS for object B, they cannot be combined to give the IFS for object A ∪ B or object A ∩ B. In practical fractal compression, to capture the diversity of real images, what is used instead is the "fractal transform", which uses local (or partitioned) IFSs: the (contractive) transformations do not map the whole image onto part of itself, but map one part of the image onto another (smaller) part of the image. This allows the compression process to be fully automated.
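The block matching at the heart of the fractal transform can be sketched in a few functions. This is a deliberately simplified toy, not Barnsley's patented implementation: it assumes fixed 4x4 range blocks, 8x8 domain blocks on a coarse grid, no rotations or flips, and exhaustive search, and all names are my own:

```python
def downsample(img, r, c, size):
    # Average 2x2 pixel groups so a (2*size x 2*size) domain block
    # matches the (size x size) range block resolution.
    return [[(img[r + 2*i][c + 2*j] + img[r + 2*i + 1][c + 2*j] +
              img[r + 2*i][c + 2*j + 1] + img[r + 2*i + 1][c + 2*j + 1]) / 4.0
             for j in range(size)] for i in range(size)]

def fit(domain, rng):
    # Least-squares contrast s and brightness o so that s*domain + o ~ range.
    d = [v for row in domain for v in row]
    r = [v for row in rng for v in row]
    n = len(d)
    sd, sr = sum(d), sum(r)
    sdd = sum(v * v for v in d)
    sdr = sum(a * b for a, b in zip(d, r))
    denom = n * sdd - sd * sd
    s = (n * sdr - sd * sr) / denom if denom else 0.0
    o = (sr - s * sd) / n
    err = sum((s * a + o - b) ** 2 for a, b in zip(d, r))
    return s, o, err

def encode(img, rsize=4):
    # For each range block, search all domain blocks for the best affine fit;
    # store only (range position, domain position, s, o) -- no pixels.
    h, w = len(img), len(img[0])
    code = []
    for ri in range(0, h, rsize):
        for rj in range(0, w, rsize):
            rng = [row[rj:rj + rsize] for row in img[ri:ri + rsize]]
            best = None
            for di in range(0, h - 2 * rsize + 1, rsize):
                for dj in range(0, w - 2 * rsize + 1, rsize):
                    dom = downsample(img, di, dj, rsize)
                    s, o, err = fit(dom, rng)
                    if best is None or err < best[0]:
                        best = (err, di, dj, s, o)
            code.append((ri, rj) + best[1:])
    return code
```

Decoding would apply each stored (domain position, s, o) mapping to the corresponding range block of an arbitrary starting image and iterate until the image converges, exactly as described above for IFSs.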
Joined: Oct 2012
12-11-2012, 12:19 PM
to get information about the topic "FRACTAL COMPRESSION" full report, ppt and related topics, refer the link below
http://topicideas.org/how-to-fractal-ima...on-seminar-report