Electrical Seminar Abstract And Report 4
computer science crazy
Joined: Dec 2008
15-02-2009, 01:15 PM
Laser Communication Systems
Lasers have been considered for space communications since their realization in 1960. Specific advances were needed in component performance and system engineering, particularly for space-qualified hardware. Advances in system architecture, data formatting and component technology over the past three decades have made laser communications in space not only viable but also an attractive approach for intersatellite link applications.
Demand for information transfer is driving requirements to higher data rates, an explosion in laser cross-link technology, global development activity, and increased hardware and design maturity. Most important in space laser communications has been the development of a reliable, high-power, single-mode laser diode usable as a directly modulated laser source. This technology advance offers the space laser communication system designer the flexibility to design very lightweight, high-bandwidth, low-cost communication payloads for satellites whose launch costs are a very strong function of launch weight. The small payload size substantially reduces blockage of the most desirable fields of view on a satellite. The small antennas, typically less than 30 centimeters in diameter, create less momentum disturbance to any sensitive satellite sensors. Fewer on-board consumables are required over the long lifetime because there are fewer disturbances to the satellite compared with heavier and larger RF systems. The narrow beam divergence affords interference-free and secure operation.
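The practical impact of the narrow beam can be sketched with a quick diffraction calculation. The sketch below is illustrative only: the 1550 nm laser wavelength and the 40,000 km cross-link range are assumed values for the example, not figures from this report.

```python
import math

def diffraction_limited_divergence(wavelength_m, aperture_m):
    """Approximate full-angle beam divergence (radians) of a
    diffraction-limited circular aperture: theta ~ 2.44 * lambda / D."""
    return 2.44 * wavelength_m / aperture_m

def spot_diameter(divergence_rad, range_m):
    """Beam footprint diameter after propagating range_m (small-angle)."""
    return divergence_rad * range_m

# 30 cm optical antenna at an assumed 1550 nm laser wavelength
theta = diffraction_limited_divergence(1550e-9, 0.30)   # ~12.6 microradians
# Assumed intersatellite range of 40,000 km
spot = spot_diameter(theta, 40_000e3)                   # footprint of a few hundred meters
print(f"divergence: {theta*1e6:.1f} urad, spot at 40,000 km: {spot:.0f} m")
```

A centimeter-wavelength RF beam from the same 30 cm aperture would diverge several thousand times more, which is why the optical cross-link is so hard to intercept or jam.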
Until recently, the United States government was funding the development of an operational space laser cross-link system employing solid-state laser technology. NASA is developing technology and studying the applicability of space laser communication to its tracking and data relay network, both for cross-links and for user relay links. NASA's Jet Propulsion Laboratory is studying the development of large space- and ground-based receiving stations and payload designs for optical data transfer from interplanetary spacecraft. Space laser communication is beginning to be accepted as a viable and reliable means of transferring data between satellites. Presently, ongoing hardware development efforts include ESA's Semiconductor-laser Intersatellite Link Experiment (SILEX) and the Japanese Laser Communication Experiment (LCE). The United States development programs ended with the termination of both the production of the laser cross-link subsystem and the FEWS satellite program.
Use of the radio spectrum from space must be regulated and shared on a worldwide basis. For this reason, the frequencies to be used by a satellite are established by a world body known as the International Telecommunication Union (ITU), with broadcast regulations controlled by a subgroup known as the World Administrative Radio Conference (WARC).
An international consultative technical committee (the CCIR) provides specific recommendations on satellite frequencies under consideration by WARC. The basic objective is to allocate particular frequency bands to different types of satellite services, and also to provide international regulations on maximum radiation levels from space, coordination with terrestrial systems, and the use of specific satellite locations in a given orbit. Within these allotments and regulations, an individual country can make its own specific frequency selections based on intended uses and desired satellite services.
The 20th century saw many developments in the field of electronics for basically two reasons:
1. The development of the transistor, which forms the basis of everything that is electronics.
2. The development of the IC, which enabled the fabrication of fast, compact & sophisticated electronic circuits.
In the 21st century we are going to see some radical changes in the approach towards electronics. These are:
1. The replacement of semiconducting devices with superconducting devices.
2. The use of non-classical theories in physics, like relativistic physics & quantum mechanics, to explain various phenomena and the applications & working of electronic devices.
The first step toward integrating the previously separate branches of electronics & superconductivity was taken by Brian Josephson with his prediction of the Josephson junction (JJ) in 1962, for which he received the Nobel Prize in 1973. The analysis of the device is impossible using classical theories of physics. The device has immense potential & numerous applications in almost all fields of applied electronics.
The Josephson junction (JJ) is basically an insulator sandwiched between two superconducting layers; hence the device is also called an SIS (Superconductor-Insulator-Superconductor) junction. When the insulator is very thin (less than about 1.5 nm), a tunneling phenomenon called Josephson tunneling takes place through it: charge carriers tunnel from the first superconductor to the second through the insulator, and the junction as a whole behaves as a superconductor. To explain the working of the device we need to analyze the principles of superconductivity & of tunneling. Superconductivity is explained in terms of the BCS theory & tunneling in terms of the uncertainty principle.
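One numerical illustration of the junction's non-classical behaviour is the ac Josephson effect: a dc voltage across the junction makes the supercurrent oscillate at a frequency set only by fundamental constants, f = 2eV/h. This is a standard textbook relation rather than something specific to this report:

```python
# The ac Josephson effect: a dc voltage V across the junction makes the
# supercurrent oscillate at f = 2eV/h, independent of junction geometry.
e = 1.602176634e-19   # elementary charge, C
h = 6.62607015e-34    # Planck constant, J*s

def josephson_frequency(voltage_v):
    """Oscillation frequency (Hz) for a given dc junction voltage."""
    return 2 * e * voltage_v / h

# 1 microvolt across the junction -> ~483.6 MHz oscillation
f = josephson_frequency(1e-6)
print(f"{f/1e6:.1f} MHz per microvolt")
```

This voltage-to-frequency relation is so exact that it underlies modern voltage standards, one of the applications of the device in applied electronics.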
Superconductivity is a remarkable property in which there is a complete loss of resistivity in a metal or alloy, usually at temperatures close to absolute zero; it was discovered by Kamerlingh Onnes. As perfect conductors, superconductors carry current without resistive loss, i.e., an applied current will persist indefinitely without any loss of power. These materials are also perfect diamagnets, and a magnet placed above a superconductor will levitate on its own magnetic field. Low-temperature superconductors exhibit this property at temperatures near −250 °C.
LBCO & certain other alloys of La & Ba show this property near 35 K; RBa2Cu3O7 and Bi2Sr2Ca2Cu3O10 show it near 90 K. Thallium-based & mercury-based cuprates show superconductivity at up to 134 K. The development of high-temperature superconductors, and cuprate-based superconductors in particular, has made significant advances. Some organic compounds have lately also been developed as superconductors.
Introduction to the Internet Protocols
TCP/IP is a set of protocols developed to allow cooperating computers to share resources across a network. It was developed by a community of researchers centered around the ARPAnet. Certainly the ARPAnet is the best-known TCP/IP network. However, as of June 1987, at least 130 different vendors had products that support TCP/IP, and thousands of networks of all kinds use it. First, some basic definitions. The most accurate name for the set of protocols we are describing is the "Internet protocol suite". TCP and IP are two of the protocols in this suite. Because TCP and IP are the best known of the protocols, it has become common to use the term TCP/IP or IP/TCP to refer to the whole family. It is probably not worth fighting this habit. However, this can lead to some oddities. For example, I find myself talking about NFS as being based on TCP/IP, even though it doesn't use TCP at all.
The Internet is a collection of networks, including the Arpanet, NSFnet, regional networks such as NYsernet, local networks at a number of university and research institutions, and a number of military networks. The term "Internet" applies to this entire set of networks. The subset of them that is managed by the Department of Defense is referred to as the "DDN" (Defense Data Network). This includes some research-oriented networks, such as the Arpanet, as well as more strictly military ones. (Because much of the funding for Internet protocol development is provided by the DDN organization, the terms Internet and DDN can sometimes seem equivalent.) All of these networks are connected to each other. Users can send messages from any of them to any other, except where there are security or other policy restrictions on access.
Officially speaking, the Internet protocol documents are simply standards adopted by the Internet community for its own use. More recently, the Department of Defense issued a MILSPEC definition of TCP/IP. This was intended to be a more formal definition, appropriate for use in purchasing specifications. However, most of the TCP/IP community continues to use the Internet standards. The MILSPEC version is intended to be consistent with them. Whatever it is called, TCP/IP is a family of protocols. A few provide "low-level" functions needed for many applications. These include IP, TCP, and UDP. (These will be described in a bit more detail later.) Others are protocols for doing specific tasks, e.g. transferring files between computers, sending mail, or finding out who is logged in on another computer. Initially TCP/IP was used mostly between minicomputers or mainframes. These machines had their own disks, and generally were self-contained.
Thus the most important "traditional" TCP/IP services are:
- file transfer. The file transfer protocol (FTP) allows a user on any computer to get files from another computer, or to send files to another computer.
- remote login. The network terminal protocol (TELNET) allows a user to log in on any other computer on the network. You start a remote session by specifying a computer to connect to.
- computer mail. This allows you to send messages to users on other computers. Originally, people tended to use only one or two specific computers.
- network file systems. This allows a system to access files on another computer in a somewhat more closely integrated fashion than FTP. A network file system provides the illusion that disks or other devices from one system are directly connected to other systems.
- remote printing. This allows you to access printers on other computers as if they were directly attached to yours. (The most commonly used protocol is the remote lineprinter protocol from Berkeley Unix.)
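All of the services above ride on the same reliable byte-stream service that TCP provides. As a minimal, self-contained illustration, the Python sketch below opens a TCP connection over the loopback interface and echoes a message back; the message text and use of an OS-assigned port are arbitrary choices for the example:

```python
import socket
import threading

# Minimal TCP exchange over loopback: a server thread accepts one
# connection and echoes whatever it receives; a client connects,
# sends a message, and reads the echo back.
def echo_once(server_sock):
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=echo_once, args=(server,))
t.start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello, tcp")
reply = client.recv(1024)
client.close()
t.join()
server.close()
print(reply.decode())
```

FTP, TELNET, and mail differ only in the conversation they hold over such a connection, not in the underlying transport.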
The focus of the Imagine project is to develop a programmable architecture that achieves the performance of special-purpose hardware on graphics and image/signal processing. This is accomplished by exploiting stream-based computation at the application, compiler, and architectural level. At the application level, we have cast several complex media applications such as polygon rendering, stereo depth extraction, and video encoding into streams and kernels. At the compiler level, we have developed programming languages for writing stream-based programs and have developed software tools that optimize their execution on stream hardware. Finally, at the architectural level, we have developed the Imagine stream processor, a novel architecture that executes stream-based programs and is able to sustain tens of GFLOPS over a range of media applications with a power dissipation of less than 10 Watts.
The Imagine Stream Architecture is a novel architecture that executes stream-based programs. It provides high performance with 48 floating-point arithmetic units and an area- and power-efficient register organization. A streaming memory system loads and stores streams from memory. A stream register file provides a large amount of on-chip intermediate storage for streams. Eight VLIW arithmetic clusters perform SIMD operations on streams during kernel execution. Kernel execution is sequenced by a micro-controller. A network interface is used to support multi-Imagine systems and I/O transfers. Finally, a stream controller manages the operation of all of these units.
Stream Programming Model
Applications for Imagine are programmed using the stream programming model. This model consists of streams and kernels. Streams are sequences of similar data records. Kernels are small programs which operate on a set of input streams and produce a set of output streams.
Imagine is programmed with a set of languages and software tools that implement the stream programming model. Applications are programmed in StreamC and KernelC. A stream scheduler maps StreamC to stream instructions for Imagine and a kernel scheduler maps KernelC to VLIW kernel instructions for Imagine. Imagine applications have been tested using a cycle accurate simulator, named ISim, and are currently being tested on a prototype board.
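The stream programming model can be caricatured in a few lines of Python. This is purely illustrative: real StreamC/KernelC are C-like languages compiled for the Imagine hardware, and the `kernel` decorator and `scale_and_bias` kernel below are invented for the example:

```python
# Toy model of stream programming: streams are sequences of similar
# records, kernels are small functions applied record-by-record to a
# set of input streams to produce an output stream.

def kernel(fn):
    """Wrap a per-record function as a stream kernel."""
    def run(*streams):
        return [fn(*records) for records in zip(*streams)]
    return run

@kernel
def scale_and_bias(pixel, gain):
    # A typical tiny media kernel: per-pixel gain plus a fixed offset.
    return pixel * gain + 16

pixels = [10, 20, 30, 40]       # input stream of pixel records
gains  = [1.0, 1.0, 2.0, 2.0]   # second input stream
out = scale_and_bias(pixels, gains)
print(out)   # [26.0, 36.0, 76.0, 96.0]
```

The point of the model is that kernels touch only their stream inputs, so the hardware can keep whole streams in the stream register file and run the same kernel across all eight clusters in SIMD fashion.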
Programmable Graphics and Real-time Media Applications
The Imagine stream processor combines full programmability with high performance. This has enabled research into new real-time media applications such as programmable graphics pipelines.
A prototype Imagine processor was designed and fabricated in conjunction with Texas Instruments and received by Stanford on April 9, 2002. Imagine contains 21 million transistors and has a die size of 16 mm x 16 mm in a 0.15 micron standard cell technology.
Stream Processor Development Platform
A prototype development board was designed and fabricated in conjunction with ISI-East Dynamic Systems Division. This board has enabled experimental measurements of the prototype Imagine processor, experiments on performance of multi-Imagine systems, and additional application and software tool development.
Roke Manor Research is a leading provider of mobile telecommunications technology for both terminals and base stations. We add value to our clients' projects and implementations by reducing time-to-market and lowering production costs, and provide lasting benefits by building long-term relationships and working in partnership with our customers.
We have played an active role in cellular communications technology since the 1980s, working initially on GSM and more recently on the definition and development of 3G (UMTS). Roke Manor Research has over 200 engineers with experience in designing hardware and software for 3G terminals and base stations and is currently developing technology for 4G and beyond.
We are uniquely positioned to provide 2G, 3G and 4G expertise to our customers. The role of Roke Manor Research engineers in standardisation bodies (e.g. ETSI and 3GPP) provides us with intimate knowledge of all the 2G and 3G standards (GSM, GPRS, EDGE, UMTS FDD (WCDMA) and TD-SCDMA). Our engineers are currently contributing to the evolution of 3G standards and can provide up-to-the-minute implementation advice to customers.
Time Division Multiple Access (TDMA)
AC Performance Of Nanoelectronics
Nanoelectronic devices fall into two classes: tunnel devices and ballistic transport devices. In tunnel devices, single-electron effects occur if the tunnel resistance is larger than h/e² ≈ 25.8 kΩ. In ballistic devices with cross-sectional dimensions in the range of the quantum mechanical wavelength of electrons, the resistance is also of order h/e² ≈ 25.8 kΩ. This high resistance may seem to restrict the operational speed of nanoelectronics in general. However, the capacitance values and drain-source spacing are typically small, which gives rise to very small RC times and transit times of order a picosecond or less. Thus the speed may be very high, up to the THz range. The goal of this seminar is to present models and performance predictions about the effects that set the speed limit in carbon nanotube transistors, which form an ideal test bed for understanding the high-frequency properties of nanoelectronics because they may behave as ideal ballistic one-dimensional transistors.
Ballistic Transport- An Outline
When carriers travel through a semiconductor material, they are likely to be scattered by any number of possible sources, including acoustic and optical phonons, ionized impurities, defects, interfaces, and other carriers. If, however, the distance traveled by the carrier is smaller than the mean free path, it is likely not to encounter any scattering events; it can, as a result, move ballistically through the channel. To the first order, the existence of ballistic transport in a MOSFET depends on the value of the characteristic scattering length (i.e. mean free path) in relation to channel length of the transistor.
This scattering length, l, can be estimated from the measured carrier mobility as l = vth·τ, where τ = μm*/e is the average scattering time, m* is the carrier effective mass, and vth is the thermal velocity. Because scattering mechanisms determine the extent of ballistic transport, it is important to understand how these depend upon operating conditions such as normal electric field and ambient temperature.
Dependence On Normal Electric Field
In state-of-the-art MOSFET inversion layers, carrier scattering is dominated by phonons, impurities (Coulomb interaction), and surface roughness scattering at the Si-SiO2 interface. The relative importance of each scattering mechanism depends on the effective electric field component normal to the conduction channel. At low fields, impurity scattering dominates due to strong Coulombic interactions between the carriers and the impurity centers. As the electric field is increased, acoustic phonons begin to dominate the scattering process. At very high fields, carriers are pulled closer to the Si-SiO2 gate oxide interface; thus, surface roughness scattering degrades carrier mobility. A universal mobility model has been developed to relate field strength to the effective carrier mobility due to phonon and surface roughness scattering.
Dependence On Temperature
When the temperature is changed, the relative importance of each of the aforementioned scattering mechanisms is altered. Phonon scattering becomes less important at very low temperatures. Impurity scattering, on the other hand, becomes more significant because carriers move more slowly (thermal velocity is decreased) and thus have more time to interact with impurity centers. Surface roughness scattering remains the same because it does not depend on temperature. At liquid nitrogen temperature (77 K) and an effective electric field of 1 MV/cm, the electron and hole mobilities are ~700 cm²/V·s and ~100 cm²/V·s, respectively. Using the above equations, the scattering lengths are approximately 17 nm and 3.6 nm. These scattering lengths can be taken as worst-case values, as large operating voltages (1 V) and aggressively scaled gate oxides (10 Å) are assumed. Thus, actual scattering lengths will likely be larger than the calculated values.
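The electron scattering length quoted above can be sanity-checked from the relations l = vth·τ and τ = μm*/e. The script below assumes the silicon conductivity effective mass (0.26 m0) and vth = sqrt(3kT/m*); other conventions for m* and vth shift the result by a small factor, so treat this as an order-of-magnitude check rather than a reproduction of the exact 17 nm figure:

```python
import math

# Order-of-magnitude check of the electron mean free path at 77 K.
# Assumptions: Si conductivity effective mass 0.26*m0, v_th = sqrt(3kT/m*).
e   = 1.602e-19      # elementary charge, C
m0  = 9.109e-31      # electron rest mass, kg
kB  = 1.381e-23      # Boltzmann constant, J/K

mu     = 700e-4      # electron mobility at 77 K, m^2/(V*s)  (~700 cm^2/V*s)
m_star = 0.26 * m0   # assumed effective mass
T      = 77.0        # liquid nitrogen temperature, K

tau  = mu * m_star / e                 # mean scattering time, s
v_th = math.sqrt(3 * kB * T / m_star)  # thermal velocity, m/s
l    = v_th * tau                      # mean free path, m
print(f"tau = {tau:.2e} s, v_th = {v_th:.2e} m/s, l = {l*1e9:.0f} nm")
```

The result lands in the 10–20 nm range, consistent with the conclusion that sub-50 nm channel lengths put MOSFETs at the edge of the ballistic regime.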
Further device design considerations in maximizing this scattering length will be discussed in the last section of this paper. Still, the values calculated above are certainly in the range of transistor gate lengths currently being studied in advanced MOSFET research (<50nm).
Ballistic carrier transport should thus become increasingly important as transistor channel lengths are further reduced in size. In addition, it should be noted that the mean free path of holes is generally smaller than that of electrons. Thus, it should be expected that ballistic transport in PMOS transistors is more difficult to achieve, since current conduction occurs through hole transport. Calculation of the mean scattering length, however, can only be regarded as a first-order estimation of ballistic transport.
To accurately determine the extent of ballistic transport evident in a particular transistor structure, Monte Carlo simulation methods must be employed. Only by modeling the random trajectory of each carrier traveling through the channel can we truly assess the extent of ballistic transport in a MOSFET.
Human beings extract a lot of information about their environment using their ears. In order to understand what information can be retrieved from sound, and how exactly it is done, we need to look at how sounds are perceived in the real world. To do so, it is useful to break the acoustics of a real world environment into three components: the sound source, the acoustic environment, and the listener:
1. The sound source: this is an object in the world that emits sound waves. Examples are anything that makes sound - cars, humans, birds, closing doors, and so on. Sound waves are created through a variety of mechanical processes. Once created, the waves usually get radiated in a certain direction. For example, a mouth radiates more sound energy in the direction that the face is pointing than to the side of the face.
2. The acoustic environment: once a sound wave has been emitted, it travels through an environment where several things can happen to it. It gets absorbed by the air (the high-frequency waves more so than the low ones; the amount of absorption depends on factors like wind and air humidity). It can travel directly to a listener (the direct path), bounce off an object once before it reaches the listener (a first-order reflected path), bounce twice (a second-order reflected path), and so on. Each time a sound reflects off an object, the material the object is made of affects how much of each frequency component of the sound wave gets absorbed and how much gets reflected back into the environment. Sounds can also pass through objects such as water or walls. Finally, environment geometry like corners, edges, and small openings has complex effects on the physics of sound waves (refraction, scattering).
3. The listener: this is a sound-receiving object, typically a "pair of ears". The listener uses acoustic cues to interpret the sound waves that arrive at the ears, and to extract information about the sound sources and the environment.
How Virtual Surround Works
A 3D audio system aims to digitally reproduce a realistic sound field. To achieve the desired effect a system needs to be able to re-create portions or all of the listening cues discussed in the previous chapter: IID, ITD, outer ear effects, and so on. A typical first step to building such a system is to capture the listening cues by analyzing what happens to a single sound as it arrives at a listener from different angles. Once captured, the cues are synthesized in a computer simulation for verification.
What is an HRTF?
The majority of 3D audio technologies are at some level based on the concept of HRTFs, or Head-Related Transfer Functions. An HRTF can be thought of as a set of two audio filters (one for each ear) that contains all the listening cues applied to a sound as it travels from its origin (its source, or position in space), through the environment, and arrives at the listener's ear drums. The filters change depending on the direction from which the sound arrives at the listener. The level of HRTF complexity necessary to create the illusion of realistic 3D hearing is subject to considerable discussion and varies greatly across technologies.
The most common method of measuring the HRTF of an individual is to place tiny probe microphones inside a listener's left and right ear canals, place a speaker at a known location relative to the listener, play a known signal through that speaker, and record the microphone signals. By comparing the resulting impulse response with the original signal, a single filter in the HRTF set has been found. After moving the speaker to a new location, the process is repeated until an entire, spherical map of filter sets has been devised.
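Once a filter pair has been measured, applying an HRTF is, at its simplest, two convolutions: the mono source signal with the left-ear impulse response and with the right-ear impulse response. The impulse responses below are toy values invented to show an interaural time and level difference for a source off to the listener's right; measured HRTFs are far richer:

```python
# Spatialize a mono signal with a (toy) HRTF filter pair: one impulse
# response per ear, applied by direct convolution.

def convolve(signal, ir):
    """Direct-form convolution of a signal with an impulse response."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

mono = [1.0, 0.5, 0.25, 0.0]   # short mono source signal

# Source to the listener's right: the right ear hears it first and
# louder; the left ear hears it 2 samples later and attenuated.
ir_right = [0.9]
ir_left  = [0.0, 0.0, 0.4]

right = convolve(mono, ir_right)
left  = convolve(mono, ir_left)
print(right)  # [0.9, 0.45, 0.225, 0.0]
print(left)   # [0.0, 0.0, 0.4, 0.2, 0.1, 0.0]
```

Real systems interpolate between the measured filter sets as the source moves, and typically perform the convolution in the frequency domain for speed, but the underlying operation is the same.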
The history of semiconductor devices starts in the 1930s, when Lilienfeld and Heil first proposed the MOSFET. However, it took 30 years before this idea was applied to functioning devices used in practical applications, and bipolar technology dominated until the late 1980s, when MOS technology caught up and there was a crossover between bipolar and MOS market share. CMOS found increasingly widespread use due to its low power dissipation, high packing density and simple design, such that by 1990 CMOS covered more than 90% of the total MOS market.
In 1983 a bipolar-compatible process based on CMOS technology was developed, and BiCMOS technology, with both MOS and bipolar devices fabricated on the same chip, was developed and studied. The objective of BiCMOS is to combine bipolar and CMOS so as to exploit the advantages of both at the circuit and system levels. Since 1985, state-of-the-art bipolar and CMOS structures have been converging. Today BiCMOS has become one of the dominant technologies for high-speed, low-power and highly functional VLSI circuits, especially since the BiCMOS process has been enhanced and integrated into the CMOS process without many additional steps. Because the process steps required for CMOS and bipolar are similar, these steps can be shared between them.
System On Chip (SOC) Fundamentals
The concept of system-on-chip (SOC) has evolved as the number of gates available to a designer has increased and as CMOS technology has migrated from a minimum feature size of several microns to close to 0.1 µm. Over the last decade, the integration of analog circuit blocks has become an increasingly common feature of SOC development, motivated by the desire to shrink the number of chips and passives on a PC board. This, in turn, reduces system size and cost and improves reliability by requiring fewer components to be mounted on a PC board. Power dissipation of the system also improves with the elimination of the chip input-output (I/O) interconnect blocks.
Superior matching and control of integrated components also allow new circuit architectures to be used that cannot be attempted in multi-chip architectures. Driving PC board traces consumes significant power, both in overcoming the larger capacitances on the PC board and in the larger signal swings needed to overcome signal crosstalk and noise on the PC board. Large-scale microcomputer systems with integrated peripherals, the complete digital processor of a cellular phone, and the switching system for a wire-line data-communication system are some of the many applications of digital SOC systems.
Examples of analog or mixed-signal SOC devices include analog modems; broadband wired digital communication chips, such as DSL and cable modems; wireless telephone chips that combine voiceband codecs with baseband modulation and demodulation functions; and ICs that function as the complete read channel for disc drives. The analog sections of these chips include wideband amplifiers, filters, phase-locked loops, analog-to-digital converters, digital-to-analog converters, operational amplifiers, current references, and voltage references.
Many of these systems take advantage of the digital processors in an SOC chip to auto-calibrate the analog section of the chip, including canceling dc offsets and reducing linearity errors within data converters. Digital processors also allow tuning of analog blocks, such as centering filter cutoff frequencies. Built-in self-test functions of the analog block are also possible through the use of on-chip digital processors.
Analog or mixed-signal SOC integration is inappropriate for designs with low production volumes and low margins. In this case, the nonrecurring engineering costs of designing the SOC chip and its mask set will far exceed the design cost for a system with standard programmable digital parts, standard analog and RF functional blocks, and discrete components. Noise from the digital electronics can also limit the practicality of forming an SOC with high-precision analog or RF circuits. A system that requires power-supply voltages greater than 3.6 V in its analog or RF stages is also an unattractive candidate for an SOC, because additional process modifications would be required for the silicon devices to work above the standard printed circuit board interface voltage of 3.3 V ± 10%.
Before a high-performance analog system can be integrated on a digital chip, the analog circuit blocks must have available critical passive components, such as resistors and capacitors. Digital blocks, in contrast, require only n-channel metal-oxide semiconductor (NMOS) and p-channel metal-oxide semiconductor (PMOS) transistors. Added process steps may be required to achieve characteristics for resistors and capacitors suitable for high-performance analog circuits. These steps create linear capacitors with low levels of parasitic capacitance coupling to other parts of the IC, such as the substrate.
Though additional process steps may be needed for the resistors, it may be possible instead to use the diffusion steps, such as the N and P implants that form the drains and sources of the MOS devices, as can the polysilicon gate used as part of the CMOS devices. The shortcomings of these elements as resistors, beyond their high parasitic capacitances, are their high temperature and voltage coefficients and the limited control of the resistor's absolute value.