Electrical Seminar Abstract And Report
computer science crazy
Super Moderator

Posts: 3,048
Joined: Dec 2008
15-02-2009, 01:26 PM

Iris Scanning

In today's information age it is not difficult to collect data about an individual and use that information to exercise control over them. Individuals generally do not want others to have personal information about them unless they choose to reveal it. With the rapid development of technology, it is increasingly difficult to maintain the levels of privacy citizens knew in the past. In this context, data security has become an essential feature. Conventional methods of identification based on possession of ID cards, or on exclusive knowledge such as a social security number or a password, are not altogether reliable: ID cards can be lost, forged or misplaced, and passwords can be forgotten, so an unauthorized user may be able to break into an account with little effort.

It is therefore necessary to deny unauthorized persons access to classified data. Biometric technology has become a viable alternative to traditional identification systems because of its accuracy and speed. A biometric system automatically verifies or recognizes the identity of a living person based on physiological or behavioral characteristics.

Since the persons to be identified must be physically present at the point of identification, biometric techniques give high security for sensitive information stored in mainframes and help avoid fraudulent use of ATMs. This paper explores iris recognition, one of the most popular biometric techniques. This technology finds applications in diverse fields.

Biometrics - Future Of Identity

Biometrics dates back to the ancient Egyptians, who measured people to identify them. Biometric devices have three primary components:
1. An automated mechanism that scans and captures a digital or analog image of a living personal characteristic.
2. Compression, processing and storage of the image, and comparison with stored data.
3. Interfaces with application systems.

A biometric system can be divided into two stages: the enrolment module and the identification module. The enrolment module is responsible for training the system to identify a given person. During the enrolment stage, a biometric sensor scans the person's physiognomy to create a digital representation. A feature extractor processes the representation to generate a more compact and expressive representation called a template. For an iris image, these include the various visible characteristics of the iris such as contraction furrows, pits, rings, etc. The template for each user is stored in a biometric system database.

The identification module is responsible for recognizing the person. During the identification stage, the biometric sensor captures the characteristics of the person to be identified and converts them into the same digital format as the template. The resulting template is fed to the feature matcher, which compares it against the stored template to determine whether the two templates match. Identification can take the form of verification, authenticating a claimed identity, or recognition, determining the identity of a person from a database of known persons. In a verification system, when the captured characteristic and the stored template of the claimed identity match, the system concludes that the claimed identity is correct. In a recognition system, when the captured characteristic matches one of the stored templates, the system identifies the person with the matching template.
Loop magnetic couplers

Couplers, also known as "isolators" because they electrically isolate as well as transmit data, are widely used in industrial and factory networks, instruments, and telecommunications. The problems with optocouplers are well known: they take up a lot of space, they are slow, they age, and their temperature range is quite limited. For years, optical couplers were the only option. Over the years, most of the components used to build instrumentation circuits have become ever smaller; optocoupler technology, however, has not kept up, and existing coupler technologies look like dinosaurs on modern circuit boards.

Magnetic couplers are analogous to optocouplers in a number of ways. Design engineers, especially in instrumentation technology, will welcome a galvanically isolated data coupler with integrated signal conversion in a single IC. This report gives a detailed study of IsoLoop magnetic couplers.

Ground Loops

When equipment using different power supplies is tied together (with a common ground connection) there is a potential for ground loop currents to exist. This is an induced current in the common ground line resulting from a difference in ground potentials at each piece of equipment. Normally, not all grounds are at the same potential. Widespread electrical and communications networks often have nodes with different ground domains. The potential difference between these grounds can be AC or DC, and can contain various noise components. Grounds connected by cable shielding or logic line ground can create a ground loop: unwanted current flow in the cable. Ground-loop currents can degrade data signals, produce excessive EMI, damage components, and, if the current is large enough, present a shock hazard.
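The magnitude of such a current follows directly from Ohm's law: the ground-potential difference divided by the resistance of the shield/ground path. A minimal sketch, with purely illustrative values:

```python
def ground_loop_current(potential_diff_v: float, path_resistance_ohm: float) -> float:
    """Current (A) driven through a cable shield by a ground-potential difference."""
    return potential_diff_v / path_resistance_ohm

# Hypothetical example: a 2 V difference between ground domains
# across a 0.5-ohm shield path drives 4 A through the shield.
print(ground_loop_current(2.0, 0.5))  # -> 4.0
```

Even a modest potential difference can drive amperes through a low-resistance shield, which is why galvanic isolation is preferred over simply "beefing up" the ground connection.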

Galvanic isolation between circuits or nodes in different ground domains eliminates these problems, seamlessly passing signal information while isolating ground potential differences and common-mode transients. Adding isolation components to a circuit or network is considered good design practice and is often mandated by industry standards. Isolation is frequently used in modems, LAN and industrial network interfaces (e.g., network hubs, routers, and switches), telephones, printers, fax machines, and switched-mode power supplies.

Giant Magnetoresistive (GMR):
Large magnetic field dependent changes in resistance are possible in thin film ferromagnet/nonmagnetic metallic multilayers. The phenomenon was first observed in France in 1988, when changes in resistance with magnetic field of up to 70% were seen. Compared to the small percent change in resistance observed in anisotropic magnetoresistance, this phenomenon was truly 'giant' magnetoresistance.

The spin of electrons in a magnet is aligned to produce a magnetic moment. Magnetic layers with opposing spins (magnetic moments) impede the progress of the electrons (higher scattering) through a sandwiched conductive layer. This arrangement causes the conductor to have a higher resistance to current flow.

An external magnetic field can realign all of the layers into a single magnetic moment. When this happens, electron flow will be less affected (lower scattering) by the uniform spins of the adjacent ferromagnetic layers. This causes the conduction layer to have a lower resistance to current flow. Note that this phenomenon takes place only when the conduction layer is thin enough (less than 5 nm) for the ferromagnetic layers' electron spins to affect the conductive layer's electron paths.
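The "70%" figure quoted above is the conventional GMR ratio: the relative change in resistance between the antiparallel (high-resistance) and parallel (low-resistance) alignments. A sketch with illustrative resistance values (not measured data):

```python
def gmr_ratio(r_antiparallel: float, r_parallel: float) -> float:
    """GMR ratio (R_AP - R_P) / R_P, conventionally quoted as a percentage."""
    return (r_antiparallel - r_parallel) / r_parallel * 100.0

# Illustrative values giving a 70% change, comparable to the 1988 result
print(gmr_ratio(170.0, 100.0))  # -> 70.0
```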

lwIP - A Lightweight TCP/IP Stack

Over the last few years, the interest for connecting computers and computer supported devices to wireless networks has steadily increased. Computers are becoming more and more seamlessly integrated with everyday equipment and prices are dropping. At the same time wireless networking technologies, such as Bluetooth and IEEE 802.11b WLAN , are emerging. This gives rise to many new fascinating scenarios in areas such as health care, safety and security, transportation, and processing industry. Small devices such as sensors can be connected to an existing network infrastructure such as the global Internet, and monitored from anywhere.

The Internet technology has proven itself flexible enough to incorporate the changing network environments of the past few decades. While originally developed for low speed networks such as the ARPANET, the Internet technology today runs over a large spectrum of link technologies with vastly different characteristics in terms of bandwidth and bit error rate. It is highly advantageous to use the existing Internet technology in the wireless networks of tomorrow since a large amount of applications using the Internet technology have been developed. Also, the large connectivity of the global Internet is a strong incentive.

Since small devices such as sensors are often required to be physically small and inexpensive, an implementation of the Internet protocols will have to deal with having limited computing resources and memory. This report describes the design and implementation of a small TCP/IP stack called lwIP that is small enough to be used in minimal systems.


As in many other TCP/IP implementations, the layered protocol design has served as a guide for the design of the lwIP implementation. Each protocol is implemented as its own module, with a few functions acting as entry points into each protocol. Even though the protocols are implemented separately, some layer violations are made, as discussed above, in order to improve performance both in terms of processing speed and memory usage. For example, when verifying the checksum of an incoming TCP segment and when demultiplexing a segment, the source and destination IP addresses of the segment have to be known by the TCP module. Instead of passing these addresses to TCP by means of a function call, the TCP module is aware of the structure of the IP header, and can therefore extract this information by itself.
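The checksum mentioned above is the standard Internet checksum (RFC 1071): a 16-bit one's-complement sum over the data, with the TCP pseudo-header (which includes those source and destination IP addresses) folded in, which is why the TCP module needs them. lwIP itself is written in C; as a sketch in this document's example language, the core computation is:

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"          # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    return ~total & 0xFFFF

# Words 0x0001 and 0xFFFE sum to 0xFFFF, so the checksum is 0
print(internet_checksum(b"\x00\x01\xff\xfe"))  # -> 0
```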

lwIP consists of several modules. Apart from the modules implementing the TCP/IP protocols (IP, ICMP, UDP, and TCP), a number of support modules are implemented.
The support modules consist of:

• The operating system emulation layer (described in Chapter 3)
• The buffer and memory management subsystems (described in Chapter 4)
• Network interface functions (described in Chapter 5)
• Functions for computing the Internet checksum (Chapter 6)
• An abstract API (described in Chapter 8)
Image Authentication Techniques

This paper explores the various techniques used to authenticate the visual data recorded by automatic video surveillance (VS) systems. Automatic video surveillance systems are used for continuous, effective monitoring and reliable control of remote and dangerous sites. Some practical issues must be taken into account in order to take full advantage of the potential of a VS system. One such issue is the validity of the visual data acquired, processed and possibly stored by the VS system as proof in front of a court of law, because visual data can be modified using sophisticated processing tools without leaving any visible trace of the modification.

As a result, digital video or image data have no value as legal proof on their own, since doubt would always exist that they had been intentionally tampered with to incriminate or exculpate the defendant. Moreover, video data can be created artificially by computerized techniques such as morphing. Therefore, the true origin of the data must be indicated before they can be used as legal proof. By data authentication we mean here a procedure capable of ensuring that data have not been tampered with and of indicating their true origin.

Automatic Visual Surveillance System

An automatic visual surveillance system is a self-monitoring system consisting of video camera units, a central unit and transmission networks. A pool of digital cameras is in charge of framing the scene of interest and sending the corresponding video sequences to the central unit. The central unit is in charge of analyzing the sequences and generating an alarm whenever a suspicious situation is detected.

The central unit also transmits the video sequences to an intervention centre such as a security service provider, the police department or a security guard unit. Somewhere in the system the video sequence, or some part of it, may be stored, and when needed the stored sequence can be used as proof in front of a court of law. If the stored digital video sequences are to be legally credible, some means must be envisaged to detect content tampering and to reliably trace back the data origin.

Authentication Techniques

Authentication techniques are performed on visual data to show that the data are not a forgery; they should not damage the visual quality of the video data. At the same time, these techniques must detect malicious modifications, which include removal or insertion of certain frames, changes to individuals' faces, time stamps, background, etc. Only properly authenticated video data have value as legal proof. There are two major techniques for authenticating video data, as follows:

1. Cryptographic Data Authentication

Cryptographic data authentication is a straightforward way to provide video authentication, namely through the joint use of asymmetric-key encryption and a digital hash function. Cameras calculate a digital summary (digest) of the video by means of the hash function. They then encrypt the digest with their private key, obtaining a signed digest which is transmitted to the central unit together with the acquired sequences. This digest is used to prove data integrity or to trace back the data origin. The signed digest can only be read using the public key of the camera.
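A minimal sketch of the digest-then-authenticate flow, with one loudly labeled simplification: a real VS system signs the digest with the camera's private key (asymmetric cryptography); here an HMAC with a hypothetical shared secret stands in for the signature, since the Python standard library provides hashing but not public-key signing.

```python
import hashlib
import hmac

CAMERA_KEY = b"camera-secret"   # hypothetical key, for illustration only

def sign_video(frames: bytes) -> bytes:
    """Hash the video into a digest, then authenticate the digest."""
    digest = hashlib.sha256(frames).digest()           # digital summary
    return hmac.new(CAMERA_KEY, digest, hashlib.sha256).digest()

def verify_video(frames: bytes, signed_digest: bytes) -> bool:
    """Recompute the signed digest and compare in constant time."""
    return hmac.compare_digest(sign_video(frames), signed_digest)

tag = sign_video(b"original footage")
print(verify_video(b"original footage", tag))   # -> True
print(verify_video(b"tampered footage", tag))   # -> False
```

Any change to the frames changes the digest, so a mismatching signed digest reveals tampering.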

2. Watermarking- based authentication

Watermarking-based data authentication is the modern approach: visual data are authenticated by imperceptibly embedding a digital watermark signal in the data.

Digital watermarking is the art and science of embedding copyright information in the original files. The information embedded is called a 'watermark'. Digital watermarks are difficult to remove without noticeably degrading the content, and are a covert means of protection in situations where copyright notices alone fail to provide robustness.
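As a toy illustration of "imperceptible embedding", the sketch below hides watermark bits in the least significant bit of 8-bit grayscale pixel values, changing each pixel by at most 1. Real watermarking schemes are far more robust than this; the function names and pixel values are illustrative.

```python
def embed(pixels: list[int], bits: list[int]) -> list[int]:
    """Replace the LSB of each leading pixel with one watermark bit."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def extract(pixels: list[int], n: int) -> list[int]:
    """Read the watermark back from the first n pixels."""
    return [p & 1 for p in pixels[:n]]

image = [200, 135, 90, 47]
marked = embed(image, [1, 0, 1, 1])
print(marked)              # -> [201, 134, 91, 47]: each pixel moves by <= 1
print(extract(marked, 4))  # -> [1, 0, 1, 1]
```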
Seasonal Influence on Safety of Substation Grounding

With the development of modern power systems toward extra-high voltage, large capacity and long-distance transmission, and with the application of advanced technologies, the demands on the safety, stability and economic operation of the power system have become higher. A good grounding system is the fundamental insurance for safe operation of the power system. A good grounding system should ensure the following:

" To provide safety to personnel during normal and fault conditions by limiting step and touch potential.
" To assure correct operation of electrical devices.
" To prevent damage to electrical apparatus.
" To dissipate lightning strokes.
" To stabilize voltage during transient conditions and therefore to minimize the probability of flashover during the transients

As stated in ANSI/IEEE Standard 80-1986, "IEEE Guide for Safety in AC Substation Grounding," a safe grounding design has two objectives:

" To provide means to carry electric currents into the earth under normal and fault condition without exceeding any operational and equipment limit or adversely affecting continuity of service.
" To assure that a person in the vicinity of grounded facilities is not exposed to the danger of critical electrical shock.
A practical approach to safe grounding considers the interaction of two grounding systems: The intentional ground, consisting of ground electrodes buried at some depth below the earth surface, and the accidental ground, temporarily established by a person exposed to a potential gradient at a grounded facility.

An ideal ground should provide a near-zero resistance to remote earth. In practice, the ground potential rise at the facility site increases proportionally to the fault current: the higher the current, the lower the total system resistance that must be obtained. For most large substations the ground resistance should be less than 1 Ohm; for smaller distribution substations the usually acceptable range is 1-5 Ohms, depending on local conditions. When a grounding system is designed, the fundamental method of ensuring the safety of human beings and power apparatus is to control the step and touch voltages within their respective safe regions. Step and touch voltage can be defined as follows.
Step Voltage

It is defined as the voltage between the feet of a person standing near an energized object. It is equal to the difference in voltage, given by the voltage distribution curve, between two points at different distances from the electrode.

Touch Voltage

It is defined as the voltage between the energized object and the feet of the person in contact with the object. It is equal to the difference in voltage between the object and a point some distance away from it.

In different seasons, the resistivity of the surface soil layer changes, and this affects the safety of grounding systems. The main question of concern is whether the step and touch voltages will move toward the safe region or toward the hazard side.
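The safe regions referred to above are set by tolerable-voltage limits. IEEE Std 80 gives formulas for the tolerable touch and step voltages for a 70 kg body (assuming a 1000-ohm body resistance), in terms of the surface-layer derating factor Cs, the surface-layer resistivity rho_s, and the shock duration t_s. A sketch with illustrative inputs (the formulas are from the standard; the numeric example is hypothetical):

```python
import math

def touch_limit_70kg(cs: float, rho_s: float, t_s: float) -> float:
    """Tolerable touch voltage (V), IEEE Std 80, 70 kg body."""
    return (1000 + 1.5 * cs * rho_s) * 0.157 / math.sqrt(t_s)

def step_limit_70kg(cs: float, rho_s: float, t_s: float) -> float:
    """Tolerable step voltage (V), IEEE Std 80, 70 kg body."""
    return (1000 + 6.0 * cs * rho_s) * 0.157 / math.sqrt(t_s)

# Illustrative case: 0.5 s fault clearing, 3000 ohm-m crushed-rock
# surface layer, derating factor Cs = 0.75
print(round(touch_limit_70kg(0.75, 3000, 0.5)))  # tolerable touch voltage, V
print(round(step_limit_70kg(0.75, 3000, 0.5)))   # tolerable step voltage, V
```

A seasonal change in surface resistivity rho_s (e.g. dry versus wet soil) directly moves these limits, which is exactly the seasonal influence the section discusses.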
Wireless Networked Digital Devices

The proliferation of mobile computing devices, including laptops, personal digital assistants (PDAs), and wearable computers, has created a demand for wireless personal area networks (PANs). PANs allow proximal devices to share information and resources. The mobile nature of these devices places unique requirements on PANs, such as low power consumption, frequent make-and-break connections, resource discovery and utilization, and international regulations.

This paper examines wireless technologies appropriate for PANs and reviews promising research in resource discovery and service utilization. We recognize the need for PDAs to be as manageable as mobile phones, as well as the restricted screen and input area of mobile phones; hence the need for a new breed of computing devices to fit the bill for a PAN. Such devices become especially relevant for mobile users such as surgeons and jet-plane mechanics, who need both hands free and would therefore need "wearable" computers.

This paper first examines the technology used for wireless communication. Putting a radio in a digital device provides physical connectivity; however, to make the device useful in a larger context a networking infrastructure is required. The infrastructure allows devices to share data, applications, and resources such as printers, mass storage, and computation power. Defining a radio standard is a tractable problem, as demonstrated by the solutions presented in this paper.

Designing a network infrastructure is much more complex. The second half of the paper describes several research projects and implementations that try to address components of the networking infrastructure. Finally, there are questions that go beyond the scope of this paper, yet will have the greatest effect on the direction, capabilities, and future of this paradigm. Will these networking strategies be incompatible, like the various cellular phone systems in the United States, or will there be a standard upon which manufacturers and developers agree, like the GSM (Global System for Mobile Communications) cellular phones in Europe?

Communication demands compatibility, which is challenging in a heterogeneous marketplace. Yet by establishing and implementing compatible systems, manufacturers can offer more powerful and useful devices to their customers. Since these are, after all, digital devices living in a programmed digital world, compatibility and interoperation are possible.

Technologies explored:

1. Electric field: uses the human body as a current conduit.
2. Magnetic field: uses base-station technology for picocells of space.
3. Infrared: basic issues include opaque-body obstruction.
4. Wireless radio frequency: the best technology option, though it must deal with the finite resource of the electromagnetic spectrum and must meet international standards through a compatible protocol:
a. UHF radio
b. Super-regenerative receiver
c. SAW/ASH receiver
3-D ICs

There is a saying in real estate: when land gets expensive, multi-storied buildings are the alternative solution. We have a similar situation in the chip industry. For the past thirty years, chip designers have considered whether building integrated circuits in multiple layers might create cheaper, more powerful chips.

Performance of deep-submicrometer very large scale integrated (VLSI) circuits is increasingly dominated by the interconnects, due to decreasing wire pitch and increasing die size. Additionally, heterogeneous integration of different technologies on one single chip is becoming increasingly desirable, for which planar (2-D) ICs may not be suitable.

The three-dimensional (3-D) chip design strategy exploits the vertical dimension to alleviate interconnect-related problems and to facilitate heterogeneous integration of technologies to realize system-on-a-chip (SoC) designs. By simply dividing a planar chip into separate blocks, each occupying a separate physical level interconnected by short, vertical interlayer interconnects (VILICs), significant improvement in performance and reduction in wire-limited chip area can be achieved. In the 3-D design architecture, an entire chip is divided into a number of blocks, and each block is placed on a separate layer of Si; the layers are stacked on top of each other.
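The interconnect benefit can be seen with a first-order sketch (a back-of-the-envelope model, not a design rule): if a chip of area A is split evenly across n stacked layers, each layer has area A/n, so a cross-chip global wire shrinks from roughly 2*sqrt(A) to 2*sqrt(A/n), a factor of sqrt(n).

```python
import math

def global_wire_length(area_mm2: float, layers: int) -> float:
    """Rough longest global wire (mm): twice the side of one layer's area."""
    return 2 * math.sqrt(area_mm2 / layers)

# Illustrative 100 mm^2 chip: stacking halves area per layer each doubling
for n in (1, 2, 4):
    print(n, round(global_wire_length(100.0, n), 2))  # 20.0, 14.14, 10.0
```

Since wire delay grows with wire length, this sqrt(n) reduction is the core motivation for moving global interconnects into the third dimension.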

Motivation For 3-D ICs

The unprecedented growth of the computer and information technology industry is demanding very large scale integrated (VLSI) circuits with increasing functionality and performance at minimum cost and power dissipation. Continuous scaling of VLSI circuits is reducing gate delays but rapidly increasing interconnect delays. A significant fraction of the total power consumption can be due to the wiring network used for clock distribution, which is usually realized using long global wires. Furthermore, the increasing drive toward integration of disparate signals (digital, analog, RF) and technologies (SOI, SiGe, GaAs, and so on) is introducing various SoC design concepts, for which existing planar (2-D) IC design may not be suitable.

3D Architecture

Three-dimensional integration to create multilayer Si ICs is a concept that can significantly improve interconnect performance, increase transistor packing density, and reduce chip area and power dissipation. Additionally, 3-D ICs can be very effective for large-scale on-chip integration of different systems.

In the 3-D design architecture, an entire (2-D) chip is divided into a number of blocks, and each block is placed on a separate layer of Si; the layers are stacked on top of each other. Each Si layer in the 3-D structure can have multiple layers of interconnects, including vertical interlayer interconnects (VILICs) and common global interconnects.
Sensors on 3D Digitization

Digital 3D imaging can benefit from advances in VLSI technology in order to accelerate its deployment in many fields, such as visual communication and industrial automation. High-resolution 3D images can be acquired using laser-based vision systems. With this approach, the 3D information becomes relatively insensitive to background illumination and surface texture. Complete images can be generated of visible surfaces that are rather featureless to the human eye or a video camera. Intelligent digitizers will be capable of measuring colour and 3D shape accurately and simultaneously.

Colour 3D Imaging Technology

Machine vision involves the analysis of the properties of the luminous flux reflected or radiated by objects. To recover the geometrical structures of these objects, either to recognize or to measure their dimension, two basic vision strategies are available [1].

Passive vision attempts to analyze the structure of the scene under ambient light [1]. Stereoscopic vision is a passive optical technique: the basic idea is that two or more digital images are taken from known locations and then processed to find the correlations between them. As soon as matching points are identified, the geometry can be computed.
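Once a matching point is found in both images, the geometry computation reduces, in the simplest rectified two-camera case, to triangulation from the disparity: Z = f * B / d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity in pixels. A sketch with illustrative values:

```python
def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth (m) of a matched point from its disparity, rectified stereo pair."""
    return f_px * baseline_m / disparity_px

# Hypothetical rig: 800 px focal length, 12 cm baseline, 16 px disparity
print(depth_from_disparity(800.0, 0.12, 16.0))  # -> 6.0 (metres)
```

Note the inverse relation: nearby points produce large disparities and are measured more precisely than distant ones.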

Active vision attempts to reduce the ambiguity of scene analysis by structuring the way in which images are formed. Sensors that capitalize on active vision can resolve most of the ambiguities found with two-dimensional imaging systems. Lidar-based or triangulation-based laser range cameras are examples of the active vision technique. One digital 3D imaging system based on optical triangulation was developed and demonstrated.

Sensors For 3D Imaging

The sensors used in the autosynchronized scanner include

1. Synchronization Circuit Based Upon Dual Photocells

This sensor ensures the stability and repeatability of range measurements in environments with varying temperature. Discrete implementations of the so-called synchronization circuits have posed many problems in the past. A monolithic version of an improved circuit has been built to alleviate those problems. [1]

2. Laser Spot Position Measurement Sensors
As noted above, laser-based vision systems acquire high-resolution 3D images whose 3D information is relatively insensitive to background illumination and surface texture; these sensors measure the position of the laser spot on the detector, from which range is recovered. [1]