Image Processing

29-08-2010, 07:14 PM

(With special emphasis on Biological Systems)
Image processing is any form of signal processing for which the input is an image, such as photographs or frames of video; the output of image processing can be either an image or a set of characteristics or parameters related to the image. A technique in which the data from an image are digitized and various mathematical operations are applied to the data, generally with a digital computer, in order to create an enhanced image that is more useful or pleasing to a human observer, or to perform some of the interpretation and recognition tasks usually performed by humans. An image is an artifact, for example a two-dimensional picture that has a similar appearance to some subject--usually a physical object or a person. The figure or picture of any object formed at the focus of a lens or mirror, by rays of light from the several points of the object symmetrically refracted or reflected to corresponding points in such focus; this may be received on a screen, a photographic plate, or the retina of the eye, and viewed directly by the eye, or with an eyeglass, as in the telescope and microscope; the likeness of an object formed by reflection; as, to see one's image in a mirror. It usually refers to digital image processing, but optical and analog image processing are also possible. The acquisition of images (producing the input image in the first place) is referred to as imaging. Most image-processing techniques involve treating the image as a two-dimensional signal and applying standard signal-processing techniques to it. Image processing is a physical process used to convert an image signal into a physical image. The image signal can be either digital or analog. The actual output itself can be an actual physical image or the characteristics of an image. Capturing and manipulating images to enhance or extract information. 
It generally refers to digital or analog enhancement and geometric manipulation of the video signal, in contrast to image analysis, which emphasizes the measurement of image parameters. An image is usually interpreted as a two-dimensional array of brightness values, most familiarly represented by such patterns as those of a photographic print, slide, television screen, or movie screen. An image can be processed optically, or digitally with a computer. To process an image digitally, it is first necessary to reduce it to a series of numbers that the computer can manipulate. Each number, representing the brightness value of the image at a particular location, is called a picture element, or pixel. A typical digitized image may have 512 × 512, or roughly 250,000, pixels, although much larger images are becoming common. Once the image has been digitized, three basic kinds of operation can be performed on it in the computer. In a point operation, a pixel value in the output image depends on a single pixel value in the input image. In a local operation, several neighboring pixels in the input image determine the value of an output pixel. In a global operation, all of the input pixels contribute to each output pixel value. These operations, taken singly or in combination, are the means by which the image is enhanced, restored, or compressed. An image is enhanced when it is modified so that the information it contains is more clearly evident, but enhancement can also include making the image more visually appealing.
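The three kinds of operation can be sketched in a few lines of NumPy; the tiny 4 × 4 image and the function names below are invented purely for illustration:

```python
import numpy as np

# A tiny 4x4 grayscale "image" with brightness values in 0..255.
img = np.array([[ 10,  20,  30,  40],
                [ 50,  60,  70,  80],
                [ 90, 100, 110, 120],
                [130, 140, 150, 160]], dtype=np.float64)

# Point operation: each output pixel depends only on the same input pixel.
# Here, a simple contrast stretch to the full 0..255 range.
def point_stretch(im):
    lo, hi = im.min(), im.max()
    return (im - lo) * 255.0 / (hi - lo)

# Local operation: each output pixel depends on a small neighbourhood.
# Here, a 3x3 mean filter applied to the interior pixels.
def local_mean(im):
    out = im.copy()
    for r in range(1, im.shape[0] - 1):
        for c in range(1, im.shape[1] - 1):
            out[r, c] = im[r - 1:r + 2, c - 1:c + 2].mean()
    return out

# Global operation: each output pixel depends on all input pixels.
# Here, subtracting the global mean brightness.
def global_demean(im):
    return im - im.mean()

stretched = point_stretch(img)
smoothed  = local_mean(img)
demeaned  = global_demean(img)
```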

Digital image processing is the use of computer algorithms to perform image processing on digital images. As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing. Since images are defined over two dimensions (perhaps more), digital image processing may be modeled in the form of multidimensional systems. Digital image processing allows the use of much more complex algorithms and hence can offer both more sophisticated performance at simple tasks and the implementation of methods that would be impossible by analog means.
In particular, digital image processing is the only practical technology for:
• Classification:
Classification is a supervised machine learning procedure in which individual items are placed into groups based on quantitative information about one or more characteristics inherent in the items (referred to as traits, variables, characters, etc.) and on a training set of previously labeled items.
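A minimal sketch of this idea, assuming toy two-trait data and a nearest-centroid rule (both invented for illustration): each new item is placed into the group whose training-set mean is closest.

```python
import numpy as np

# Training set: previously labelled items, each described by two traits.
# Label-0 items cluster near (0, 0); label-1 items cluster near (5, 5).
X_train = np.array([[0.0, 0.2], [0.3, 0.1], [5.0, 5.1], [4.8, 5.2]])
y_train = np.array([0, 0, 1, 1])

def fit_centroids(X, y):
    """Compute one mean (centroid) per class from the training set."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def classify(x, centroids):
    """Assign a new item to the class whose centroid is nearest."""
    return min(centroids, key=lambda lbl: np.linalg.norm(x - centroids[lbl]))

centroids = fit_centroids(X_train, y_train)
label_a = classify(np.array([0.1, 0.0]), centroids)  # near the label-0 cluster
label_b = classify(np.array([5.0, 4.9]), centroids)  # near the label-1 cluster
```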
• Feature extraction:
In image processing, feature extraction is a special form of dimensionality reduction. When the input data to an algorithm are too large to be processed and are suspected to be highly redundant (much data, but not much information), the input data are transformed into a reduced representation set of features (also called a feature vector). Transforming the input data into the set of features is called feature extraction.
• Pattern recognition:
Pattern recognition is "the act of taking in raw data and taking an action based on the category of the pattern". Most research in pattern recognition is about methods for supervised learning and unsupervised learning. Pattern recognition aims to classify data (patterns) based either on a priori knowledge or on statistical information extracted from the patterns. The patterns to be classified are usually groups of measurements or observations, defining points in an appropriate multidimensional space. This is in contrast to pattern matching, where the pattern is rigidly specified.

• Projection:
The display of an image by devices such as a movie projector, video projector, overhead projector, slide projector, or camera obscura.
• Multi-scale signal analysis:
Signal processing is an area of electrical engineering, systems engineering, and applied mathematics that deals with operations on, or analysis of, signals in either discrete or continuous time. Signals of interest include sound, images, time-varying measurement values, and sensor data, for example biological data such as electrocardiograms, control system signals, and telecommunication transmission signals such as radio signals. Signals are analog or digital electrical representations of time-varying or spatially varying physical quantities. In the context of signal processing, arbitrary binary data streams and on-off signals are not considered signals; only analog and digital signals that represent analog physical quantities are.
Some techniques which are used in digital image processing include:
• Pixelization:
Pixelization is a video- and image-editing technique in which an image is blurred by displaying part or all of it at a markedly lower resolution. It is primarily used for censorship. The effect is a standard graphics filter, available in all but the most basic bitmap graphics editors.
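Pixelization can be sketched by averaging the image over fixed-size tiles and writing each tile's mean back over the whole tile; the block size and the tiny test image below are illustrative:

```python
import numpy as np

def pixelize(im, block):
    """Blur an image by averaging over block x block tiles, then
    re-expanding each tile to its original size (lower effective resolution)."""
    h, w = im.shape
    out = np.empty_like(im, dtype=np.float64)
    for r in range(0, h, block):
        for c in range(0, w, block):
            tile = im[r:r + block, c:c + block]
            out[r:r + block, c:c + block] = tile.mean()
    return out

img = np.arange(16, dtype=np.float64).reshape(4, 4)
pix = pixelize(img, 2)   # each 2x2 tile is replaced by its mean value
```

Note that the average brightness of each tile (and hence of the whole image) is preserved; only the detail within each tile is discarded.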
• Linear filtering:
A linear filter applies a linear operator to a time-varying input signal. Linear filters are very common in electronics and digital signal processing (see the article on electronic filters), but they can also be found in mechanical engineering and other technologies. They are often used to eliminate unwanted frequencies from an input signal or to select a desired frequency among many others.
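A minimal sketch of a linear filter as a 2D convolution; the box kernel and impulse image are illustrative, and a real implementation would use an optimized library routine:

```python
import numpy as np

def convolve2d(im, kernel):
    """'Valid' 2D convolution: slide the (flipped) kernel over the image."""
    kh, kw = kernel.shape
    k = kernel[::-1, ::-1]                      # flip for true convolution
    oh, ow = im.shape[0] - kh + 1, im.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for r in range(oh):
        for c in range(ow):
            out[r, c] = (im[r:r + kh, c:c + kw] * k).sum()
    return out

# A 3x3 box (mean) kernel: a classic linear low-pass filter.
box = np.full((3, 3), 1.0 / 9.0)

img = np.zeros((5, 5))
img[2, 2] = 9.0                 # a single bright pixel ("impulse")
smoothed = convolve2d(img, box) # the impulse is spread over its neighbourhood
```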
• Principal components analysis:
Principal component analysis (PCA) involves a mathematical procedure that transforms a number of possibly correlated variables into a smaller number of uncorrelated variables called principal components. The first principal component accounts for as much of the variability in the data as possible, and each succeeding component accounts for as much of the remaining variability as possible.
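PCA can be sketched via the eigendecomposition of the covariance matrix; the strongly correlated synthetic data below are invented for illustration:

```python
import numpy as np

# 2D points strongly correlated along the line y ≈ x.
rng = np.random.default_rng(0)
t = rng.normal(size=(200, 1))
X = np.hstack([t, t + 0.05 * rng.normal(size=(200, 1))])

# PCA via the eigendecomposition of the covariance matrix.
Xc = X - X.mean(axis=0)                      # centre the data
cov = Xc.T @ Xc / (len(Xc) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
order = eigvals.argsort()[::-1]              # re-sort descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Fraction of the total variance captured by each principal component.
explained = eigvals / eigvals.sum()
```

Because the two variables are nearly identical, the first component (pointing roughly along (1, 1)) accounts for almost all of the variance, exactly as the definition above describes.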

• Independent component analysis:
Independent component analysis (ICA) is a computational method for separating a multivariate signal into additive subcomponents supposing the mutual statistical independence of the non-Gaussian source signals. It is a special case of blind source separation.
• Hidden Markov models:
A hidden Markov model (HMM) is a statistical model in which the system being modeled is assumed to be a Markov process with unobserved (hidden) states. An HMM can be considered the simplest dynamic Bayesian network. In a regular Markov model, the state is directly visible to the observer, so the state transition probabilities are the only parameters. In a hidden Markov model, the state is not directly visible, but output dependent on the state is visible. Each state has a probability distribution over the possible output tokens; therefore, the sequence of tokens generated by an HMM gives some information about the sequence of states.
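The forward algorithm, which computes the probability of an observation sequence under an HMM, can be sketched as follows (the 2-state model and all probabilities are made-up illustrative numbers):

```python
import numpy as np

# A toy 2-state HMM with 3 possible output tokens.
start = np.array([0.6, 0.4])                     # P(first hidden state)
trans = np.array([[0.7, 0.3],                    # P(next state | state)
                  [0.4, 0.6]])
emit  = np.array([[0.1, 0.4, 0.5],               # P(token | state)
                  [0.6, 0.3, 0.1]])

def forward(obs):
    """Forward algorithm: total probability of the observed token sequence,
    summed over all possible hidden-state sequences."""
    alpha = start * emit[:, obs[0]]              # joint P(state, first token)
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]     # propagate and re-weight
    return alpha.sum()

p = forward([0, 1, 2])   # probability of observing tokens 0, 1, 2 in order
```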
• Partial differential equations:
Anisotropic diffusion resembles the process that creates a scale-space, in which an image generates a parameterized family of successively more blurred images based on a diffusion process. Each resulting image in this family is given by a convolution between the image and a 2D isotropic Gaussian filter, where the width of the filter increases with the parameter. This diffusion process is a linear and space-invariant transformation of the original image. Anisotropic diffusion generalizes it: it also produces a family of parameterized images, but each resulting image combines the original image with a filter that depends on the local content of the original image. As a consequence, anisotropic diffusion is a non-linear and space-variant transformation of the original image.
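One classic scheme in this family is Perona-Malik diffusion; the sketch below (parameter values illustrative) diffuses strongly in flat regions and only weakly across strong edges, so a sharp step survives:

```python
import numpy as np

def anisotropic_diffusion(im, n_iter=10, kappa=20.0, lam=0.2):
    """Explicit Perona-Malik scheme: smooth flat regions, preserve edges
    whose gradient magnitude is large compared with kappa."""
    u = im.astype(np.float64).copy()
    for _ in range(n_iter):
        # Differences to the four neighbours (zero flux at the border).
        dN = np.roll(u, 1, axis=0) - u;  dN[0, :] = 0
        dS = np.roll(u, -1, axis=0) - u; dS[-1, :] = 0
        dW = np.roll(u, 1, axis=1) - u;  dW[:, 0] = 0
        dE = np.roll(u, -1, axis=1) - u; dE[:, -1] = 0
        # Edge-stopping function: near zero where the local gradient is large.
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += lam * (g(dN) * dN + g(dS) * dS + g(dW) * dW + g(dE) * dE)
    return u

# A sharp step edge: the edge-stopping function keeps it almost untouched.
img = np.hstack([np.zeros((8, 4)), 100 * np.ones((8, 4))])
out = anisotropic_diffusion(img)
```

Because the neighbour fluxes are antisymmetric, the total brightness is conserved, while the step at the centre is essentially preserved.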

• Self-organizing maps:
A self-organizing map (SOM) or self-organizing feature map (SOFM) is a type of artificial neural network that is trained using unsupervised learning to produce a low-dimensional (typically two-dimensional), discretized representation of the input space of the training samples, called a map. Self-organizing maps are different from other artificial neural networks in the sense that they use a neighborhood function to preserve the topological properties of the input space.
• Neural networks:
An artificial neural network (ANN), usually called a "neural network" (NN), is a mathematical or computational model that tries to simulate the structural and/or functional aspects of biological neural networks. It consists of an interconnected group of artificial neurons and processes information using a connectionist approach to computation. In most cases an ANN is an adaptive system that changes its structure based on external or internal information flowing through the network during the learning phase. Modern neural networks are non-linear statistical data modeling tools. They are usually used to model complex relationships between inputs and outputs or to find patterns in data.

• Wavelets:
A wavelet is a wave-like oscillation with an amplitude that starts out at zero, increases, and then decreases back to zero. It can typically be visualized as a "brief oscillation" like one might see recorded by a seismograph or heart monitor. Generally, wavelets are purposefully crafted to have specific properties that make them useful for signal processing. Wavelets can be combined, using a "shift, multiply and sum" technique called convolution, with portions of an unknown signal to extract information from the unknown signal.
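The simplest wavelet, the Haar wavelet, can be sketched as pairwise averages (the coarse approximation) and pairwise differences (the detail), from which the original signal is perfectly reconstructable; the test signal is illustrative:

```python
import numpy as np

def haar_step(signal):
    """One level of the Haar wavelet transform: pairwise averages
    (coarse approximation) and pairwise differences (detail)."""
    s = np.asarray(signal, dtype=np.float64)
    avg = (s[0::2] + s[1::2]) / 2.0
    det = (s[0::2] - s[1::2]) / 2.0
    return avg, det

def haar_inverse(avg, det):
    """Perfectly reconstruct the signal from averages and details."""
    out = np.empty(2 * len(avg))
    out[0::2] = avg + det
    out[1::2] = avg - det
    return out

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
avg, det = haar_step(x)
x_back = haar_inverse(avg, det)   # identical to x
```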
In electrical engineering and computer science, analog image processing is any image processing task conducted on two-dimensional analog signals by analog means (as opposed to digital image processing). The most common application of image processing is photography. In the photographic process, an image is captured with a camera to create a digital or analog image. To produce a physical picture, the image is processed using the appropriate technology for the input source type. Photography is in turn divided into two types:
• Digital photography
• Analog photography
In digital photography, the image is stored as a computer file. This file is translated using photographic software to generate an actual image. The colors, shading, and nuances are all captured at the time the photograph is taken; the software translates this information into an image. When creating images using analog photography, the image is burned into a film by a chemical reaction triggered by controlled exposure to light. The image is then processed in a darkroom, using special chemicals to create the actual picture. This process is decreasing in popularity due to the advent of digital photography, which requires less effort and less specialized training to produce images. Digital image processing has three major benefits: the consistently high quality of the image, the low cost of processing, and the ability to manipulate all aspects of the process. As long as computer processing speed continues to increase while the cost of storage memory continues to drop, the field of image processing will grow. Beyond photography, there is a wide range of other image processing operations. The field of digital imaging has created a whole range of new applications and tools that were previously impossible. Face recognition software, medical image processing, and remote sensing are all possible due to the development of digital image processing. Specialized computer programs are used to enhance and correct images; these programs apply algorithms to the actual data and are able to reduce signal distortion, clarify fuzzy images, and add light to an underexposed image.
Image processing techniques were first developed in the 1960s through the collaboration of a wide range of scientists and academics. The main focus of their work was to develop medical imaging and character recognition and to create high-quality images at the microscopic level. During this period, equipment and processing costs were prohibitively high, and these financial constraints seriously limited the depth and breadth of technology development. By the 1970s, computing equipment costs had dropped substantially, making digital image processing more practical. Film and software companies invested significant funds into the development and enhancement of image processing, creating a new industry. There are two aspects to the image formation process. They are stated below:
• The geometry of image formation, which determines where in the image plane the projection of a point in the scene will be located.
• The physics of light, which determines the brightness of a point in the image plane as a function of illumination and surface properties.
In the case of photography, the scene is illuminated by a single source. The scene reflects the radiation towards the camera, which senses it via the chemicals on the film. The pin-hole camera is the simplest device for projecting the image of a 3-D scene onto a 2-D surface. Generally, a camera needs a larger aperture to admit the passage of more light. The entire concept of image processing rests on a simple thing called light. Light can be defined simply as the visible portion of the electromagnetic spectrum, occurring within the range of 400-700 nm. Different wavelengths of radiation have different properties. Typical short-wavelength rays employed in image processing are X-rays; they can penetrate many surfaces in a scene and thereby form an image of internal structure. At long wavelengths, the rays typically used are infra-red; these are emitted by warm objects and are commonly used to locate people in the dark. Synthetic Aperture Radar (SAR) imaging techniques use an artificially generated source of microwaves to probe a scene; SAR is generally unaffected by weather conditions and cloud cover. There are several image processing software packages:
• CVIPtools (Computer Vision and Image Processing tools)
• Intel Open Computer Vision library (OpenCV)
• Microsoft Vision SDK library
• MATLAB
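The pin-hole geometry described above can be sketched as an ideal perspective projection (the focal length and 3-D points are invented for illustration):

```python
import numpy as np

def pinhole_project(points, f):
    """Ideal pin-hole camera: a 3-D point (X, Y, Z) maps to the image-plane
    point (f*X/Z, f*Y/Z), with the camera at the origin looking along +Z
    and focal length f."""
    pts = np.asarray(points, dtype=np.float64)
    return f * pts[:, :2] / pts[:, 2:3]

# Two points at the same (X, Y) but different depths: the farther one
# projects closer to the image centre (perspective foreshortening).
near = [2.0, 1.0, 4.0]
far  = [2.0, 1.0, 8.0]
uv = pinhole_project([near, far], f=1.0)
```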

The concept of image processing can first be illustrated with the human eye. The part of the human eye concerned with image formation is the retina, which consists of cones and rods. The cones are color receptors, whereas the rods are sensitive to low levels of illumination. The human eye responds to the visible region of the electromagnetic spectrum and is able to distinguish between different wavelengths of light; it has a higher density of receptors in the centre. Digital image processing does not deal with the cognitive aspect of the perceived image. Computer vision is the science and technology of machines that see, and it is a field closely related to image processing; it can be said that image processing is a branch of computer vision. As a scientific discipline, computer vision is concerned with the theory behind artificial systems that extract information from images. The image data can take many forms, such as video sequences, views from multiple cameras, or multi-dimensional data from a medical scanner. As a technological discipline, computer vision seeks to apply its theories and models to the construction of computer vision systems. Examples of applications of computer vision include systems for:
• Controlling processes (e.g., an industrial robot or an autonomous vehicle).
• Detecting events (e.g., for visual surveillance or people counting).
• Organizing information (e.g., for indexing databases of images and image sequences).
• Modeling objects or environments (e.g., industrial inspection, medical image analysis or topographical modeling).
• Interaction (e.g., as the input to a device for computer-human interaction).
Computer vision is closely related to the study of biological vision. The field of biological vision studies and models the physiological processes behind visual perception in humans and other animals. Computer vision, on the other hand, studies and describes the processes implemented in software and hardware behind artificial vision systems. Interdisciplinary exchange between biological and computer vision has proven fruitful for both fields. Computer vision is, in some ways, the inverse of computer graphics. While computer graphics produces image data from 3D models, computer vision often produces 3D models from image data. There is also a trend towards a combination of the two disciplines, e.g., as explored in augmented reality. Sub-domains of computer vision include scene reconstruction, event detection, video tracking, object recognition, learning, indexing, motion estimation, and image restoration. Computer vision is a diverse and relatively new field of study. In the early days of computing, it was difficult to process even moderately large sets of image data. It was not until the late 1970s that a more focused study of the field emerged. Computer vision covers a wide range of topics which are often related to other disciplines, and consequently there is no standard formulation of "the computer vision problem". Moreover, there is no standard formulation of how computer vision problems should be solved. Instead, there exists an abundance of methods for solving various well-defined computer vision tasks, where the methods often are very task specific and seldom can be generalized over a wide range of applications. Many of the methods and applications are still in the state of basic research, but more and more methods have found their way into commercial products, where they often constitute a part of a larger system which can solve complex tasks (e.g., in the area of medical images, or quality control and measurements in industrial processes). 
In most practical computer vision applications, the computers are pre-programmed to solve a particular task, but methods based on learning are now becoming increasingly common.


Fig: Relation between image processing and various other fields
Physics is another field that is closely related to computer vision. Computer vision systems rely on image sensors which detect electromagnetic radiation which is typically in the form of either visible or infra-red light. The sensors are designed using solid-state physics. The process by which light propagates and reflects off surfaces is explained using optics. Sophisticated image sensors even require quantum mechanics to provide a complete understanding of the image formation process. Also, various measurement problems in physics can be addressed using computer vision, for example motion in fluids. A third field which plays an important role is neurobiology, specifically the study of the biological vision system. Over the last century, there has been an extensive study of eyes, neurons, and the brain structures devoted to processing of visual stimuli in both humans and various animals. This has led to a coarse, yet complicated, description of how "real" vision systems operate in order to solve certain vision related tasks. These results have led to a subfield within computer vision where artificial systems are designed to mimic the processing and behavior of biological systems, at different levels of complexity. Also, some of the learning-based methods developed within computer vision have their background in biology. Yet another field related to computer vision is signal processing. Many methods for processing of one-variable signals, typically temporal signals, can be extended in a natural way to processing of two-variable signals or multi-variable signals in computer vision. However, because of the specific nature of images there are many methods developed within computer vision which have no counterpart in the processing of one-variable signals. A distinct character of these methods is the fact that they are non-linear which, together with the multi-dimensionality of the signal, defines a subfield in signal processing as a part of computer vision. 
Besides the above-mentioned views on computer vision, many of the related research topics can also be studied from a purely mathematical point of view. For example, many methods in computer vision are based on statistics, optimization or geometry. Finally, a significant part of the field is devoted to the implementation aspect of computer vision: how existing methods can be realized in various combinations of software and hardware, or how these methods can be modified in order to gain processing speed without losing too much performance. The fields most closely related to computer vision are image processing, image analysis and machine vision. There is a significant overlap in the range of techniques and applications that these cover. This implies that the basic techniques used and developed in these fields are more or less identical, which can be interpreted as meaning there is really only one field with different names. On the other hand, it appears to be necessary for research groups, scientific journals, conferences and companies to present or market themselves as belonging specifically to one of these fields and, hence, various characterizations which distinguish each of the fields from the others have been presented.
• One of the most prominent application fields is medical computer vision or medical image processing. This area is characterized by the extraction of information from image data for the purpose of making a medical diagnosis of a patient. Generally, image data is in the form of microscopy images, X-ray images, angiography images, ultrasonic images, and tomography images. An example of information which can be extracted from such image data is detection of tumours, arteriosclerosis or other harmful changes. It can also be measurements of organ dimensions, blood flow, etc. This application area also supports medical research by providing new information, e.g., about the structure of the brain, or about the quality of medical treatments.
• A second application area in computer vision is in industry, sometimes called machine vision, where information is extracted for the purpose of supporting a manufacturing process. One example is quality control, where details or final products are automatically inspected in order to find defects. Another example is measurement of the position and orientation of details to be picked up by a robot arm. Machine vision is also heavily used in agricultural processes to remove undesirable foodstuffs from bulk material, a process called optical sorting.
• Military applications are probably one of the largest areas for computer vision. The obvious examples are detection of enemy soldiers or vehicles and missile guidance. More advanced systems for missile guidance send the missile to an area rather than a specific target, and target selection is made when the missile reaches the area based on locally acquired image data. Modern military concepts, such as "battlefield awareness", imply that various sensors, including image sensors, provide a rich set of information about a combat scene which can be used to support strategic decisions. In this case, automatic processing of the data is used to reduce complexity and to fuse information from multiple sensors to increase reliability.

The classical problem in image processing, and machine vision is that of determining whether or not the image data contains some specific object, feature, or activity. This task can normally be solved robustly and without effort by a human, but is still not satisfactorily solved in computer vision for the general case: arbitrary objects in arbitrary situations. The existing methods for dealing with this problem can at best solve it only for specific objects, such as simple geometric objects (e.g., polyhedra), human faces, printed or hand-written characters, or vehicles, and in specific situations, typically described in terms of well-defined illumination, background, and pose of the object relative to the camera.
Different varieties of the recognition problem are described in the literature:
• Object recognition: One or several pre-specified or learned objects or object classes can be recognized, usually together with their 2D positions in the image or 3D poses in the scene.
• Identification: An individual instance of an object is recognized. Examples: identification of a specific person's face or fingerprint, or identification of a specific vehicle.
• Detection: The image data is scanned for a specific condition. Examples: detection of possible abnormal cells or tissues in medical images or detection of a vehicle in an automatic road toll system. Detection based on relatively simple and fast computations is sometimes used for finding smaller regions of interesting image data which can be further analyzed by more computationally demanding techniques to produce a correct interpretation.
Several specialized tasks based on recognition exist, such as:
• Content-based image retrieval: Finding all images in a larger set of images which have a specific content. The content can be specified in different ways, for example in terms of similarity relative to a target image (give me all images similar to image X), or in terms of high-level search criteria given as text input (give me all images which contain many houses, are taken during winter, and have no cars in them).
• Pose estimation: Estimating the position or orientation of a specific object relative to the camera. An example application for this technique would be assisting a robot arm in retrieving objects from a conveyor belt in an assembly line situation.
• Optical character recognition (OCR): Identifying characters in images of printed or handwritten text, usually with a view to encoding the text in a format more amenable to editing or indexing (e.g. ASCII).

Several tasks relate to motion estimation, where an image sequence is processed to produce an estimate of the velocity either at each point in the image or in the 3D scene, or even of the camera that produces the images. Examples of such tasks are:
• Ego motion: Determining the 3D rigid motion (rotation and translation) of the camera from an image sequence produced by the camera.
• Tracking: Following the movements of a (usually) smaller set of interest points or objects (e.g., vehicles or humans) in the image sequence.
• Optical flow: Determining, for each point in the image, how that point is moving relative to the image plane, i.e., its apparent motion. This motion is a result both of how the corresponding 3D point is moving in the scene and how the camera is moving relative to the scene.
Given one or (typically) more images of a scene, or a video, scene reconstruction aims at computing a 3D model of the scene. In the simplest case the model can be a set of 3D points. More sophisticated methods produce a complete 3D surface model.
The aim of image restoration is the removal of noise (sensor noise, motion blur, etc.) from images. The simplest approaches to noise removal are various types of filters, such as low-pass filters or median filters. More sophisticated methods assume a model of what the local image structures look like, a model that distinguishes them from the noise. By first analysing the image data in terms of local image structures, such as lines or edges, and then controlling the filtering based on local information from the analysis step, a better level of noise removal is usually obtained than with the simpler approaches.
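A median filter, the simple restoration approach mentioned above, can be sketched as follows (toy image with a single "salt" noise pixel):

```python
import numpy as np

def median_filter(im, size=3):
    """Replace each interior pixel by the median of its size x size
    neighbourhood; isolated outlier pixels are removed, while a constant
    background is left untouched."""
    k = size // 2
    out = im.copy()
    for r in range(k, im.shape[0] - k):
        for c in range(k, im.shape[1] - k):
            out[r, c] = np.median(im[r - k:r + k + 1, c - k:c + k + 1])
    return out

img = np.full((5, 5), 10.0)
img[2, 2] = 255.0                 # one "salt" noise pixel
clean = median_filter(img)        # the outlier is replaced by the median, 10
```

Unlike a mean filter, the median is insensitive to a single extreme value, which is why it removes salt-and-pepper noise without blurring the rest of the image.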

The organization of a computer vision system is highly application dependent. Some systems are stand-alone applications which solve a specific measurement or detection problem, while others constitute a sub-system of a larger design which, for example, also contains sub-systems for control of mechanical actuators, planning, information databases, man-machine interfaces, etc. The specific implementation of a computer vision system also depends on whether its functionality is pre-specified or whether some part of it can be learned or modified during operation. There are, however, typical functions which are found in many computer vision systems.
• Image acquisition: A digital image is produced by one or several image sensors, which, besides various types of light-sensitive cameras, include range sensors, tomography devices, radar, ultra-sonic cameras, etc. Depending on the type of sensor, the resulting image data is an ordinary 2D image, a 3D volume, or an image sequence. The pixel values typically correspond to light intensity in one or several spectral bands (gray images or colour images), but can also be related to various physical measures, such as depth, absorption or reflectance of sonic or electromagnetic waves, or nuclear magnetic resonance.
• Pre-processing: Before a computer vision method can be applied to image data in order to extract some specific piece of information, it is usually necessary to process the data to ensure that it satisfies certain assumptions implied by the method. Examples are
o Re-sampling to ensure that the image coordinate system is correct.
o Noise reduction to ensure that sensor noise does not introduce false information.
o Contrast enhancement to ensure that relevant information can be detected.
o Scale-space representation to enhance image structures at locally appropriate scales.
• Feature extraction: Image features at various levels of complexity are extracted from the image data. Typical examples of such features are
o Lines, edges and ridges.
o Localized interest points such as corners, blobs or points.
More complex features may be related to texture, shape or motion.
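As a minimal illustration of low-level feature extraction, edge strength can be estimated from finite-difference intensity gradients. The sketch below uses plain NumPy on a synthetic step edge; practical systems would typically use Sobel or Canny operators from a library such as OpenCV or scikit-image.

```python
import numpy as np

def gradient_edges(image):
    """Edge strength as the magnitude of central finite-difference gradients."""
    img = image.astype(float)
    gy = np.zeros(img.shape)
    gx = np.zeros(img.shape)
    gy[1:-1, :] = img[2:, :] - img[:-2, :]   # vertical intensity change
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # horizontal intensity change
    return np.hypot(gx, gy)                  # gradient magnitude per pixel

# A vertical step edge: left half dark, right half bright.
img = np.zeros((5, 6))
img[:, 3:] = 10.0
edges = gradient_edges(img)
# The edge response is concentrated at columns 2-3 (the step)
# and zero in the flat regions on either side.
print(edges)
```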
• Detection/segmentation: At some point in the processing a decision is made about which image points or regions of the image are relevant for further processing. Examples are
o Selection of a specific set of interest points.
o Segmentation of one or multiple image regions which contain a specific object of interest.
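A common way to make the segmentation decision is intensity thresholding; Otsu's method, for instance, picks the threshold that maximizes the between-class variance of the histogram. The following self-contained NumPy sketch applies it to a synthetic bimodal image; it is illustrative only, and library implementations (e.g. scikit-image's `threshold_otsu`) should be preferred in practice.

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Pick the threshold that maximizes between-class variance (Otsu's method)."""
    hist, edges = np.histogram(image, bins=nbins)
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                 # pixel count at or below each bin
    w1 = hist.sum() - w0                 # pixel count above each bin
    m0 = np.cumsum(hist * centers)       # cumulative intensity mass
    mu_total = m0[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        mean0 = m0 / w0
        mean1 = (mu_total - m0) / w1
        var_between = w0 * w1 * (mean0 - mean1) ** 2
    var_between = np.nan_to_num(var_between)   # empty classes contribute nothing
    return centers[np.argmax(var_between)]

# Bimodal test image: dark background with a bright square object.
img = np.zeros((10, 10))
img[3:7, 3:7] = 200.0
t = otsu_threshold(img)
mask = img > t                  # segmented object region
print(mask.sum())               # 16 pixels belong to the object
```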
• High-level processing: At this step the input is typically a small set of data, for example a set of points or an image region which is assumed to contain a specific object. The remaining processing deals with, for example:
o Verification that the data satisfy model-based and application specific assumptions.
o Estimation of application specific parameters, such as object pose or object size.
o Classifying a detected object into different categories.
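As a toy illustration of this last step, a detected region can be classified by comparing its features to class prototypes. The class names and centroid values below are entirely hypothetical, chosen only to show the nearest-centroid mechanics on (area, mean intensity) features.

```python
import numpy as np

# Hypothetical reference classes described by (area, mean intensity) features.
CLASS_CENTROIDS = {
    "small_lesion": np.array([20.0, 180.0]),
    "large_lesion": np.array([200.0, 150.0]),
    "artifact":     np.array([5.0, 60.0]),
}

def classify_region(mask, image):
    """Classify a detected region by nearest centroid in feature space."""
    area = float(mask.sum())
    mean_intensity = float(image[mask].mean())
    feature = np.array([area, mean_intensity])
    distances = {name: np.linalg.norm(feature - c)
                 for name, c in CLASS_CENTROIDS.items()}
    return min(distances, key=distances.get)   # closest prototype wins

# A detected region of 16 bright pixels in a 10x10 image.
img = np.zeros((10, 10))
img[3:7, 3:7] = 185.0
mask = img > 100
print(classify_region(mask, img))
```

Real classifiers would of course use richer features and learned models, but the structure — extract features from the segmented region, then compare against category models — is the same.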
Medical imaging is the technique and process used to create images of the human body (or parts and function thereof) for clinical purposes (medical procedures seeking to reveal, diagnose or examine disease) or medical science (including the study of normal anatomy and physiology). Although imaging of removed organs and tissues can be performed for medical reasons, such procedures are not usually referred to as medical imaging, but rather are a part of pathology. As a discipline and in its widest sense, it is part of biological imaging and incorporates radiology (in the wider sense), nuclear medicine, investigative radiological sciences, endoscopy, (medical) thermography, medical photography and microscopy (e.g. for human pathological investigations). Measurement and recording techniques which are not primarily designed to produce images, such as electroencephalography (EEG), magnetoencephalography (MEG), electrocardiography (EKG) and others, but which produce data that can be represented as maps (i.e. containing positional information), can be seen as forms of medical imaging.
Two forms of radiographic images are in use in medical imaging: projection radiography and fluoroscopy, the latter being useful for intraoperative and catheter guidance. These 2D techniques are still in wide use despite the advance of 3D tomography because of their low cost, high resolution and, depending on the application, lower radiation dosages. This imaging modality utilizes a wide beam of X-rays for image acquisition and was the first imaging technique available in modern medicine.
• Fluoroscopy produces real-time images of internal structures of the body in a similar fashion to radiography, but employs a constant input of X-rays at a lower dose rate. Contrast media, such as barium, iodine, and air, are used to visualize internal organs as they work. Fluoroscopy is also used in image-guided procedures when constant feedback during a procedure is required. An image receptor is required to convert the radiation into an image after it has passed through the area of interest. Early on this was a fluorescing screen, which gave way to an Image Amplifier (IA), a large vacuum tube that had the receiving end coated with cesium iodide and a mirror at the opposite end. Eventually the mirror was replaced with a TV camera.
• Projectional radiographs, more commonly known as X-rays, are often used to determine the type and extent of a fracture as well as for detecting pathological changes in the lungs. With the use of radio-opaque contrast media, such as barium, they can also be used to visualize the structure of the stomach and intestines; this can help diagnose ulcers or certain types of colon cancer.

Fig: A brain MRI representation
A magnetic resonance imaging instrument (MRI scanner), or "nuclear magnetic resonance (NMR) imaging" scanner as it was originally known, uses powerful magnets to polarize and excite hydrogen nuclei (single protons) in water molecules in human tissue, producing a detectable signal which is spatially encoded, resulting in images of the body. MRI uses three electromagnetic fields: a very strong (on the order of a few tesla) static magnetic field to polarize the hydrogen nuclei, called the static field; weaker time-varying (on the order of 1 kHz) fields for spatial encoding, called the gradient fields; and a weak radio-frequency (RF) field for manipulation of the hydrogen nuclei to produce measurable signals, collected through an RF antenna. Like CT, MRI traditionally creates a two-dimensional image of a thin "slice" of the body and is therefore considered a tomographic imaging technique. Modern MRI instruments are capable of producing images in the form of 3D blocks, which may be considered a generalization of the single-slice tomographic concept. Unlike CT, MRI does not involve the use of ionizing radiation and is therefore not associated with the same health hazards. Although MRI has only been in use since the early 1980s, there are no known long-term effects of exposure to strong static fields, and therefore there is no limit to the number of scans to which an individual can be subjected, in contrast with X-ray and CT. However, there are well-identified health risks associated with tissue heating from exposure to the RF field and with the presence of implanted devices in the body, such as pacemakers. These risks are strictly controlled as part of the design of the instrument and the scanning protocols used. Because CT and MRI are sensitive to different tissue properties, the appearance of the images obtained with the two techniques differs markedly.
In CT, X-rays must be blocked by some form of dense tissue to create an image, so the image quality when looking at soft tissues will be poor. In MRI, while any nucleus with a net nuclear spin can be used, the proton of the hydrogen atom remains the most widely used, especially in the clinical setting, because it is so ubiquitous and returns a large signal. This nucleus, present in water molecules, allows the excellent soft-tissue contrast achievable with MRI.
Nuclear medicine encompasses both diagnostic imaging and treatment of disease, and may also be referred to as molecular medicine or molecular imaging and therapeutics [1]. Nuclear medicine uses certain properties of isotopes and the energetic particles emitted from radioactive material to diagnose or treat various pathologies. In contrast to the typical concept of anatomic radiology, nuclear medicine enables assessment of physiology. This function-based approach to medical evaluation has useful applications in most subspecialties, notably oncology, neurology, and cardiology. Gamma cameras are used in, e.g., scintigraphy, SPECT and PET to detect regions of biologic activity that may be associated with disease. A relatively short-lived isotope, such as 123I, is administered to the patient. Isotopes are often preferentially absorbed by biologically active tissue in the body, and can be used to identify tumors or fracture points in bone. Images are acquired after collimated photons are detected by a crystal that gives off a light signal, which is in turn amplified and converted into count data.
• Scintigraphy ("scint") is a form of diagnostic test wherein radioisotopes are taken internally, for example intravenously or orally. Gamma cameras then capture and form two-dimensional images from the radiation emitted by the radiopharmaceuticals.
• SPECT is a 3D tomographic technique that uses gamma camera data from many projections, which can be reconstructed in different planes. A dual-detector-head gamma camera combined with a CT scanner, which provides localization of functional SPECT data, is termed a SPECT/CT camera, and has shown utility in advancing the field of molecular imaging.
• Positron emission tomography (PET) uses coincidence detection to image functional processes. A short-lived positron-emitting isotope, such as 18F, is incorporated into an organic substance such as glucose, creating 18F-fluorodeoxyglucose, which can be used as a marker of metabolic utilization. Images of activity distribution throughout the body can show rapidly growing tissue, such as tumors, metastases, or infections. PET images can be viewed in comparison to computed tomography scans to determine an anatomic correlate. Modern scanners combine PET with a CT, or even MRI, to optimize the image reconstruction involved with positron imaging. This is performed on the same equipment without physically moving the patient off the gantry. The resultant hybrid of functional and anatomic imaging information is a useful tool in non-invasive diagnosis and patient management.
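The coincidence detection that PET relies on can be illustrated with a small timing-window sketch: two annihilation photons striking opposing detectors within a short window are treated as a single event, defining a line of response between the detectors. The detector timestamps and the 1 ns window below are made-up illustrative values, not real scanner parameters.

```python
# Hypothetical event lists: photon arrival times (ns) on two opposing detectors.
det_a = [10.0, 55.2, 90.1, 130.4]
det_b = [10.3, 70.0, 90.2, 200.0]

COINCIDENCE_WINDOW = 1.0   # ns; events closer than this are paired as one annihilation

def find_coincidences(a, b, window):
    """Pair events from two time-sorted detector streams within the window."""
    pairs = []
    i = j = 0
    while i < len(a) and j < len(b):
        dt = a[i] - b[j]
        if abs(dt) <= window:           # both photons from one annihilation
            pairs.append((a[i], b[j]))
            i += 1
            j += 1
        elif dt < 0:                    # detector A event is earlier: advance A
            i += 1
        else:                           # detector B event is earlier: advance B
            j += 1
    return pairs

print(find_coincidences(det_a, det_b, COINCIDENCE_WINDOW))
# two coincident pairs: (10.0, 10.3) and (90.1, 90.2)
```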
Photoacoustic imaging is a recently developed hybrid biomedical imaging modality based on the photoacoustic effect. It combines the advantages of optical absorption contrast with ultrasonic spatial resolution for deep imaging in the (optical) diffusive or quasi-diffusive regime. Recent studies have shown that photoacoustic imaging can be used in vivo for tumor angiogenesis monitoring, blood oxygenation mapping, functional brain imaging, skin melanoma detection, etc.
Digital infrared imaging (DII) thermography is based on the principle that metabolic activity and vascular circulation in both pre-cancerous tissue and the area surrounding a developing breast cancer are almost always higher than in normal breast tissue. Cancerous tumors require an ever-increasing supply of nutrients and therefore increase circulation to their cells by holding open existing blood vessels, opening dormant vessels, and creating new ones (neoangiogenesis). This process frequently results in an increase in regional surface temperatures of the breast. Digital infrared imaging uses extremely sensitive medical infrared cameras and sophisticated computers to detect, analyze, and produce high-resolution diagnostic images of these temperature variations. Because of DII's sensitivity, these temperature variations may be among the earliest signs of breast cancer and/or a pre-cancerous state of the breast.
Tomography is the method of imaging a single plane, or slice, of an object resulting in a tomogram. There are several forms of tomography:
• Linear tomography: This is the most basic form of tomography. The X-ray tube moves from point "A" to point "B" above the patient, while the cassette holder (or "bucky") moves simultaneously under the patient from point "B" to point "A." The fulcrum, or pivot point, is set to the area of interest. In this manner, the points above and below the focal plane are blurred out, just as the background is blurred when panning a camera during exposure. It is no longer carried out, having been replaced by computed tomography.
• Polytomography: This was a complex form of tomography. With this technique, a number of geometrical movements were programmed, such as hypocycloidic, circular, figure-8, and elliptical. Philips Medical Systems [1] produced one such device called the 'Polytome.' This unit was still in use into the 1990s, as small or complex structures, such as the inner ear, were still difficult to image with the CT scanners of that time. As the resolution of CT improved, this procedure was superseded by CT.
• Zonography: This is a variant of linear tomography, where a limited arc of movement is used. It is still used in some centres for visualising the kidney during an intravenous urogram (IVU).
• Orthopantomography (OPT or OPG): The only common tomographic examination in use. This makes use of a complex movement to allow the radiographic examination of the mandible, as if it were a flat bone. It is often referred to as a "Panorex", but this is incorrect, as it is a trademark of a specific company.
• Computed Tomography (CT), or Computed Axial Tomography (CAT): A CT scan, also known as a CAT scan, is a helical tomography technique (in its latest generation) which traditionally produces a 2D image of the structures in a thin section of the body. It uses X-rays. It has a greater ionizing radiation dose burden than projection radiography; repeated scans must be limited to avoid health effects.
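The core idea behind all of these tomographic techniques, projecting the object from several angles and smearing the projections back to recover a slice, can be sketched for the trivial case of two perpendicular parallel-beam projections. This is an unfiltered backprojection on a toy phantom, far simpler than the filtered backprojection over hundreds of angles that real CT scanners use.

```python
import numpy as np

def project(image, axis):
    """Sum intensities along one axis: a parallel-beam projection at 0 or 90 degrees."""
    return image.sum(axis=axis)

def backproject(projections, shape):
    """Smear each projection back across the image and average (unfiltered)."""
    recon = np.zeros(shape, dtype=float)
    rows, cols = projections
    recon += rows[:, None] / shape[1]   # spread row sums evenly across columns
    recon += cols[None, :] / shape[0]   # spread column sums evenly across rows
    return recon / 2

phantom = np.zeros((6, 6))
phantom[2:4, 2:4] = 1.0                 # a small dense object in the slice

sino = (project(phantom, 1), project(phantom, 0))
recon = backproject(sino, phantom.shape)
# The reconstruction is blurred, but its peak coincides with the object.
print(recon.argmax())
```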
Medical ultrasonography uses high-frequency broadband sound waves in the megahertz range that are reflected by tissue to varying degrees to produce (up to 3D) images. This is commonly associated with imaging the fetus in pregnant women. Uses of ultrasound are much broader, however. Other important uses include imaging the abdominal organs, heart, breast, muscles, tendons, arteries and veins. While it may provide less anatomical detail than techniques such as CT or MRI, it has several advantages which make it ideal in numerous situations, in particular that it studies the function of moving structures in real time, emits no ionizing radiation, and contains speckle that can be used in elastography. It is very safe to use and does not appear to cause any adverse effects, although information on this is not well documented. It is also relatively inexpensive and quick to perform. Ultrasound scanners can be taken to critically ill patients in intensive care units, avoiding the risks involved in moving the patient to the radiology department. The real-time moving image obtained can be used to guide drainage and biopsy procedures. Doppler capabilities on modern scanners allow the blood flow in arteries and veins to be assessed.

The MIPAV (Medical Image Processing, Analysis, and Visualization) application enables quantitative analysis and visualization of medical images of numerous modalities such as PET, MRI, CT, or microscopy. Using MIPAV's standard user interface and analysis tools, researchers at remote sites (via the internet) can easily share research data and analyses, thereby enhancing their ability to research, diagnose, monitor, and treat medical disorders. MIPAV is a Java application and can be run on any Java-enabled platform such as Windows, UNIX, or Macintosh OS X. MIPAV is designed to meet the following goals:
• To develop computational methods and algorithms to analyze and quantify biomedical data;
• To collaborate with NIH researchers and colleagues at other research centers in applying information analysis and visualization to biomedical research problems;
• To develop tools (in both hardware and software) to give our collaborators the ability to analyze biomedical data to support the discovery and advancement of biomedical knowledge.
Imaging has become an essential component in many fields of bio-medical research and clinical practice. Biologists study cells and generate 3D confocal microscopy data sets, virologists generate 3D reconstructions of viruses from micrographs, radiologists identify and quantify tumors from MRI and CT scans, and neuroscientists detect regional metabolic brain activity from PET and functional MRI scans. Analysis of these diverse types of images requires sophisticated computerized quantification and visualization tools. To support scientific research in the NIH intramural program, CIT has made major progress in the development of a platform-independent, n-dimensional, general-purpose, extensible image processing and visualization program. The Biomedical Imaging Technology [BMIT] Scientific Review Group reviews grant applications involving basic, applied, and pre-clinical aspects of the design and development of medical imaging system technologies, their components, software, and mathematical methods for studies at the cellular, organ, small or large animal, and human scale. Emphasis is on the technology development but extends to the science of image formation, analysis, evaluation and validation, including image perception, and integration of imaging technologies. Specific areas covered by BMIT:
• Component technologies used in the design, development, implementation, testing and application of imaging systems, such as: image detectors and related energy conversion devices, ionizing and non-ionizing radiation detectors, magnets and coils, and other technologies used in devices to acquire medical image data from various modalities.
• Physics and mathematics of medical imaging devices and systems for hardware and software development: application of methods of applied mathematics for solving inverse problems using iterative, non-iterative, deterministic and probabilistic approaches; and analysis of complex dynamical systems.
• Methods of processing and presenting medical images: display, computational resources for reconstruction, registration, segmentation, visualization, and analysis of 2-, 3-, and 4- (or higher) dimensional data sets from various modalities.
• Development of image-based methods and strategies to characterize tissue, or to support image-guided surgical or physical interventions that require high-performance computing and display of images for interactive man-machine environments that simultaneously, or sequentially, diagnose, plan, treat, update, and follow up.
• Methodology for validating medical imaging systems including medical-image-observer performance: vision modeling, metrics, calibration, standards, statistical methods, and simulation of an ideal observer using principles of psychophysical experimentation.
The Bioengineering, Technology, and Surgical Sciences (BTSS) Study Section reviews grant applications in the interdisciplinary fields of surgery and bioengineering to develop innovative medical instruments, materials, processes, implants, and devices to diagnose and treat disease and injury. Within BTSS there is a balance between basic, translational, and clinical research, and application and development of emerging cross-cutting technologies relevant to the cardiac system. Specific areas covered by BTSS:
• Development of advanced tools and techniques, including the design, construction, and function of cellular and tissue-engineered constructs, vascular and vein grafts.
• Design, development and evaluation of medical devices using animal models and pre-clinical human studies, including endo-surgical procedures, catheter-based surgery, minimally invasive surgery, microsurgical procedures, and robotics.
• Development of therapeutic implantable devices, including delivery systems for drugs as well as for nano-molecules and bio-molecules.
• Fluid mechanics studies of circulation, microcirculation, and transport systems. Biomechanics, computational fluid dynamics, hemodynamics, mathematical modeling, simulation, ventricular remodeling, tissue and organ mechanics, and the mechanics of injury.
• Sensors, biosensors, sensing, lasers, acoustics, MEMS, microarrays, imaging, and nanotechnology.
The Medical Imaging [MEDI] Scientific Review Group reviews proposals involving the application and validation of in vivo imaging of humans and animals, including early phase clinical studies of medical imaging systems, molecular probes and contrast agents, software, molecular imaging techniques, and related technologies. The underlying technologies may be refined and optimized during testing in response to research questions or clinical needs. Specific areas covered by MEDI:
• Evaluation of improvements in technologies underlying medical imaging systems, as well as studies of available medical imaging systems to evaluate novel medical applications.
• Pre-clinical, Phase-I, and Phase-II clinical trials of medical imaging systems and accessories, including MRI, MRS, optical, PET, PET/CT, fMRI, photoacoustic, DTI, nuclear medicine, ultrasound, multimodality, etc., and their associated contrast agents.
• Prediction, selection, and monitoring of therapeutic response based on imaging studies, with or without exogenous agents, using one or more modalities, especially for multi-temporal investigations to measure changes relative to a pretreatment baseline.
• Applications of imaging systems and modification of diagnostic methods for use in screening, characterizing physiological effects, and assessing risk.
• Image-guided interventions in integrated diagnostic and therapeutic systems.
• In vivo strategies and methods for characterizing tissue, and distinguishing between normal and pathologic states, based on estimates of biophysical, biomechanical, bioelectrical, biochemical, metabolic, perfusion/diffusion, or other properties.
• Development of surrogate endpoints based on quantitative imaging for use in clinical trials of medical devices, pharmaceuticals, biologics and other therapeutic interventions.
• Prediction, selection and monitoring of therapeutic response by administering agents and imaging, to detect the location, amount, and fate of the agent in normal and diseased tissues.
• Diagnosis of functional disorders and classification of tissue as normal or pathologic based on exogenous agents that may be tailored to specific cellular processes or genetic expressions.
The Surgery, Anesthesiology and Trauma (SAT) Study Section reviews applications in the disciplines of surgery, anesthesiology, and critical care. Sepsis and injury studies reviewed by SAT often address the host response to complex insults such as trauma, disseminated infection, or surgical stress, with a general focus on systemic metabolic, hormonal, or immune responses to infection and multi-organ damage. Specific areas covered by SAT:
• Tissue, organ and systemic injury responses to surgery, trauma, burn, sepsis, hemorrhage, ischemia-reperfusion, or resuscitation, including integrating pathways and signals.
• Genetic and epigenetic determinants of response to injury or sepsis; and genetic, epigenetic, or pharmacologic approaches for treatment.
• Pathogenesis and therapeutic interventions for shock and multiple organ failure, and for hypoxic or oxidative cell/tissue injury and stress-induced cellular turnover and repair.
• Multi-modal treatment of critical injury including metabolic, hormonal, or nutritional interventions, and infection prophylaxis or therapies.
• Modeling of shock, critical illness, and injury with multi-modal diagnostic and/or therapeutic approaches.
• Skin and integument wound healing, including tissue/organ regeneration, remodeling of damaged tissues, stem cells/progenitors, and novel therapeutic interventions.
• Pharmacology of general and local anesthetics, including mechanisms and side effects.
• Mechanisms and management of pain in the context of surgery, injury, and anesthesiology.
• Approaches to utilize adult stem cells for maintenance or restoration of tissue function.
• Mechanisms of the host response to the tissue damage associated with organ, tissue, or cellular transplantation.
• Surgical approaches to organ/tissue-specific disease, injury, or repair, including minimally invasive and transluminal surgical approaches.
From the beginning of science, visual observation has played a major role. At first, the only way to document the results of an experiment was verbal description and manual drawings. The next step was photography, but this too had its demerits: manual evaluation procedures were time-consuming. Image processing techniques are essential in many aspects of life, and their immense utility is especially evident in the medical field. The project entitled "Image Processing with Special Emphasis on Biological Science" contains the definition of image processing, the basic concepts associated with it, its various aspects, and finally its applications.
More Info About Image Processing
Attachment: IMAGE PROCESSING.ppt (Size: 495.5 KB)
This article is presented by: M.SELVARANI
Image processing means performing operations on an image, and examining images for the purpose of identifying objects: the input is an image and the output is a set of parameters related to that image.
