Concealed weapon detection
Active In SP
Joined: Mar 2010
01-04-2010, 10:37 PM
Concealed weapon detection (CWD) is an increasingly important topic in the general area of law enforcement, and it appears to be a critical technology for dealing with terrorism, arguably the most significant law-enforcement problem of the next decade.
Since no single sensor technology can provide acceptable performance in CWD applications, image fusion has been identified as a key technology to achieve improved CWD procedures.
Existing image sensing mechanisms include thermal/infrared (IR), millimeter wave, and visual.
In our current work we focus on fusing visual and IR images for CWD.
The objective of this work is to develop a new algorithm that fuses a color visual image and a corresponding IR image for a concealed weapon detection application.
The fused image obtained by the proposed algorithm will maintain the high resolution of the visual image, incorporate any concealed weapons detected by the IR sensor, and keep the natural color of the visual image.
COLOR IMAGE FUSION
Image fusion is the process of combining relevant information from two or more images into a single image. The resulting image will be more informative than any of the input images.
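As a rough illustration of this idea (a sketch, not code from the report), the simplest pixel-level fusion rule for two registered grayscale images is a weighted average; the function name and weight are illustrative assumptions:

```python
import numpy as np

def average_fuse(img_a, img_b, w=0.5):
    """Fuse two registered, same-size grayscale images by a weighted
    average: each output pixel is w * a + (1 - w) * b.

    Practical CWD systems use more selective rules, but this shows the
    basic step of combining two sources into a single image.
    """
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    fused = w * a + (1.0 - w) * b
    return np.clip(fused, 0, 255).astype(np.uint8)
```

With `w = 0.5` each sensor contributes equally; the weight can be biased toward the sensor expected to carry more relevant detail.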
1.1 Problem definition:
Concealed weapon detection (CWD) is an increasingly important topic in the general area of law enforcement and appears to be a critical technology for dealing with terrorism. The detection of weapons concealed underneath a person's clothing is important to improving the security of the general public as well as the safety of public assets such as airports, buildings, and railway stations.
Manual screening procedures for detecting concealed weapons such as handguns, knives, and explosives are common in controlled-access settings like airports, entrances to sensitive buildings, and public events. However, these manual procedures do not give satisfactory results: they screen a person only when he or she is near the screening machine, and they sometimes raise false alarms. It is therefore desirable to be able to detect concealed weapons from a standoff distance, especially when it is impossible to arrange the flow of people through a controlled procedure. This can be achieved by imaging for concealed weapons.
Sensor fusion is an enabling technology that may be used to solve this problem. Through fusion the signal of the sensor that depicts the weapon can be displayed in the context provided by a sensor of a different modality. We propose an image fusion scheme in which non-literal imagery can be fused with standard color images such that the result clearly displays the observed weapons in the context of the original color image. The procedure is such that the relevant contrast details from the non-literal image are transferred to the color image without altering the original color distribution of this image.
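The transfer of contrast details without altering the color distribution can be sketched as follows, assuming 8-bit inputs and a simple luminance/chrominance split using the ITU-R BT.601 luma weights; the blending weight `alpha` and the function name are illustrative assumptions, not details from the report:

```python
import numpy as np

def fuse_color_with_ir(rgb, ir, alpha=0.4):
    """Blend IR contrast into the luminance of a color image while
    keeping its chromatic information.

    The luminance channel is computed with BT.601 weights; the
    per-channel chromatic residual is added back unchanged, so the
    color distribution of the visual image is preserved.
    """
    rgb = rgb.astype(np.float64)
    ir = ir.astype(np.float64)
    # Luminance (BT.601) and the chromatic residual of each channel.
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    chroma = rgb - y[..., None]
    # Inject IR detail into the luminance only.
    y_fused = (1.0 - alpha) * y + alpha * ir
    fused = chroma + y_fused[..., None]
    return np.clip(fused, 0, 255).astype(np.uint8)
```

Because only the luminance is modified, a weapon that appears as an IR contrast pattern shows up in the output while the scene keeps its natural colors.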
The goal is the eventual deployment of automatic detection and recognition of concealed weapons. It is a technological challenge that requires innovative solutions in sensor technologies and image processing.
1.2 Scope of study:
Image fusion provides an effective way of reducing the increased volume of information from multiple sensors, while at the same time extracting all the useful information from the source images. The aim of image fusion, apart from reducing the amount of data, is to create new images that are more suitable for the purposes of human/machine perception and for further image-processing tasks. We are interested in using image fusion to help a human or computer detect a concealed weapon using IR and visual sensors.
1.3 Report organization:
This report is organized into 6 chapters:
Chapter1: This chapter gives the introduction to this report with problem definition, scope of the study and report organization.
Chapter 2: This chapter deals with image fusion, its definition, image preprocessing and the fusion techniques.
Chapter 3: The chapter deals with the color image fusion methodologies.
Chapter 4: This chapter discusses the advantages of image fusion.
Chapter 5: This chapter includes the disadvantages of the color image fusion technology.
Chapter 6: This chapter contains the applications of the proposed image fusion method.
Image fusion is a process of combining complementary information from multiple sensor images to generate a single image that contains a more accurate description of the scene than any of the individual images. The aim of image fusion, apart from reducing the amount of data, is to create new images that are more suitable for human or machine perception and for further image-processing tasks such as segmentation, object detection, or target recognition in applications such as remote sensing and defense. There are two main types of image fusion systems:
2.1.1 Single sensor image fusion system:
An illustration of a single-sensor image fusion system is shown in Figure 2.1.1. The sensor shown could be a visible-band sensor such as a digital camera. This sensor captures the real world as a sequence of images. The sequence is then fused into one single image and used either by a human operator or by a computer to perform some task. For example, in object detection, a human operator searches the scene to detect objects such as intruders in a security area.
This kind of system has some limitations due to the capability of the imaging sensor being used. The conditions under which the system can operate, the dynamic range, the resolution, etc. are all limited by the capability of the sensor. For example, a visible-band sensor such as a digital camera is appropriate for a brightly illuminated environment such as a daylight scene, but is not suitable for the poorly illuminated situations found at night or under adverse conditions such as fog or rain.
Fig. 2.1.1: Single-sensor image fusion system
2.1.2 Multi-sensor image fusion system:
A multi-sensor image fusion system overcomes the limitations of a single sensor vision system by combining the images from these sensors to form a composite image. Figure 2.1.2 shows an illustration of a multi-sensor image fusion system. In this case, an infrared camera is supplementing the digital camera and their individual images are fused to obtain a fused image. This approach overcomes the problems referred to before, while the digital camera is appropriate for daylight scenes, the infrared camera is suitable in poorly illuminated ones.
Fig. 2.1.2: Multi-sensor image fusion system
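A minimal sketch of such pixel-level multi-sensor fusion, assuming registered 8-bit grayscale inputs (the rule and function name are illustrative, not from the report), is maximum selection, which keeps whichever sensor responds more strongly at each pixel:

```python
import numpy as np

def max_select_fuse(visual, ir):
    """Pixel-wise maximum-selection fusion of two registered grayscale
    images.

    Each output pixel keeps the stronger sensor response, so the bright
    IR signature of a warm body survives even where the visual image is
    dark (e.g., a poorly illuminated scene).
    """
    return np.maximum(visual, ir)
```

This complements the cameras as described above: in daylight the visual image dominates; at night the IR image fills in.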
2.2 IMAGE PREPROCESSING
Very often, some issues have to be dealt with before fusion can be performed. Most of the time, the images are misaligned. Registration is used to establish a spatial correspondence between the sensor images and to determine a spatial geometric transformation, called warping, which aligns the images. Misalignment of image features is caused by several factors, including the geometries of the sensors, the different spatial positions of the sensors, the different temporal capture rates of the sensors, and the inherent misalignment of the sensing elements. Registration techniques align the images by exploiting the similarities between sensor images. The mismatch of image features in multisensor images reduces these similarities and makes it difficult to establish the correspondence between the images.
One of the issues a fusion system has to deal with is the registration of the source images. Most of the time, images of the same scene are acquired from different sensors, or from the same sensor at different times. These images may have relative translation, rotation, scale, and other geometric transformations between them. The goal of image registration is to establish the correspondence between two images and determine the geometric transformation that aligns one image with the other.
Making use of multiple sensors may increase the efficiency of a CWD system. The first step toward image fusion is a precise alignment of the images (i.e., image registration). Very little has been reported on the registration problem for the CWD application. Here, we describe a registration approach for images taken at the same time from different but nearly collocated (adjacent and parallel) sensors, based on the maximization of mutual information (MMI) criterion. The MMI criterion states that two images are registered when their mutual information (MI) reaches its maximum value.
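The MI quantity that the MMI criterion maximizes can be estimated from the joint histogram of the two images; the sketch below shows one common way to compute it (the bin count is an illustrative assumption):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two same-size images, estimated from
    their joint intensity histogram.

    Under the MMI criterion, the candidate transformation that gives
    the largest value of this quantity is taken as the registration.
    """
    hist_2d, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist_2d / hist_2d.sum()          # joint distribution
    px = pxy.sum(axis=1)                   # marginal of img_a
    py = pxy.sum(axis=0)                   # marginal of img_b
    px_py = px[:, None] * py[None, :]      # product of marginals
    nz = pxy > 0                           # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / px_py[nz])))
```

In a registration search, this value would be evaluated over a set of candidate warps (shifts, rotations) and the warp with maximum MI selected.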
2.3 Image fusion techniques:
Figure 1(a) shows the color visual image and (b) shows the corresponding IR image. The visual and IR images have been aligned by image registration. We observe that the body is brighter than the background in the IR image. Further, the background is almost black and shows little detail because of the high thermal emissivity of the body. The weapon is darker than the surrounding body because of the temperature difference between them (it is colder than the human body). The resolution of the visual image is much higher than that of the IR image, but the visual image carries no information about the concealed weapon.
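These intensity relations (near-black background, bright body, intermediate weapon) suggest a crude way to flag candidate weapon regions in the IR image. The sketch below assumes 8-bit input; the thresholds and function name are illustrative assumptions, not values from the report, and a real system would derive them from the image histogram:

```python
import numpy as np

def weapon_candidate_mask(ir, bg_thresh=60, body_thresh=160):
    """Flag IR pixels brighter than the near-black background but
    darker than the warm body as candidate concealed-object regions.

    A simple two-threshold rule exploiting the observation that the
    weapon is colder (darker in IR) than the body yet warmer (brighter)
    than the background.
    """
    return (ir > bg_thresh) & (ir < body_thresh)
```

The resulting mask could then be overlaid on the high-resolution visual image during fusion to highlight the suspected region.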