Computer Science Seminar Abstract And Report 11
computer science crazy
Joined: Dec 2008
15-02-2009, 02:00 PM
Wireless transmission of electromagnetic radiation has become a popular method of carrying communication signals indoors, including cordless and cellular telephone signals, pager signals, two-way radio signals, video-conferencing signals and LAN signals.
Indoor wireless transmission has the advantage that the building in which transmission takes place does not have to be filled with wires or cables equipped to carry a multitude of signals. Wires and cables are costly to install and may require expensive upgrades when their capacity is exceeded or when new technologies demand different types of wiring than those already installed.
Traditional indoor wireless communication systems transmit and receive signals through a network of transmitters, receivers and antennas placed throughout the interior of a building. These devices must be positioned so that signals are not lost and signal strength is not unduly attenuated, and a change in the building's architecture can disturb the wireless coverage. Another challenge in installing wireless networks in buildings is the need to predict RF propagation and coverage in the presence of complex combinations of shapes and materials.
In general, attenuation inside buildings is higher than in free space, requiring more cells and higher power to obtain wide coverage. Despite all this, placement of transmitters, receivers and antennas in an indoor environment remains largely a process of trial and error. Hence there is a need for a method and system for efficiently transmitting RF and microwave signals indoors without having to install an extensive system of wires and cables inside the building.
This paper suggests an alternative method of distributing electromagnetic signals in buildings, based on the recognition that every building already contains a ready-made RF waveguide distribution system: the HVAC ducts. The use of HVAC ducts is amenable to a systematic design procedure and should be significantly less expensive than other approaches, since existing infrastructure is reused and RF is distributed more efficiently.
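A quick sanity check on the duct-as-waveguide idea: a rectangular duct acts as a high-pass waveguide whose dominant TE10 mode cuts off at f_c = c / (2a), where a is the wider cross-section dimension. The sketch below, with an assumed 30 cm duct width (an illustrative value, not a figure from the paper), shows that common indoor RF bands sit well above cutoff:

```python
# Cutoff frequency of the dominant TE10 mode in a rectangular duct.
# The duct width below is an assumed example value for illustration.
C = 299_792_458.0  # speed of light in vacuum, m/s

def te10_cutoff_hz(width_m: float) -> float:
    """TE10 cutoff frequency f_c = c / (2 * a) for a rectangular guide,
    where a is the wider cross-section dimension in meters."""
    return C / (2.0 * width_m)

duct_width = 0.30  # a typical 30 cm HVAC duct (assumption)
fc = te10_cutoff_hz(duct_width)
print(f"TE10 cutoff: {fc / 1e6:.0f} MHz")  # far below the 2.4 GHz WLAN band
```

Any signal above roughly 500 MHz would therefore propagate in such a duct, which is consistent with the paper's claim that existing ductwork can carry indoor RF services.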
Terrestrial Trunked Radio (TETRA)
The TErrestrial Trunked RAdio (TETRA) standard was designed to meet common requirements and objectives of the PMR and PAMR markets alike. One of the last strongholds of analog technology in a digital world has been trunked mobile radio. Although digital cellular technology has made great strides with broad support from a relatively large number of manufacturers, digital trunked mobile radio systems for the Private Mobile Radio (PMR) and Public Access Mobile Radio (PAMR) markets have lagged behind. Few manufacturers currently offer digital systems, all of which are based on proprietary technology. However, the transition to digital is gaining momentum with the emergence of an open standard: TETRA.
TETRA is a digital PMR standard developed by ETSI. As an open standard it offers interoperability of equipment and networks from different manufacturers, and it is a potential replacement for analog and proprietary digital systems. The standard originated in 1989 as the Mobile Digital Trunked Radio System (MDTRS), was later renamed Trans-European Trunked Radio, and has been called TETRA since 1997. TETRA is the agreed standard for a new generation of digital land mobile radio communications, designed to meet the needs of the most demanding Professional Mobile Radio (PMR) and Public Access Mobile Radio (PAMR) users, and it is the only existing digital PMR standard defined by the European Telecommunications Standards Institute (ETSI).
Among the standard's many features are voice and extensive data communications services. Networks based on the TETRA standard will provide cost-effective, spectrum-efficient and secure communications with advanced capabilities for the mobile and fixed elements of companies and organizations.
As a standard, TETRA should be regarded as complementary to GSM and DECT. In comparison with GSM as currently implemented, TETRA provides faster call set-up, higher data rates, group calls and direct mode. TETRA manufacturers have been developing their products for ten years, and these investments have resulted in highly sophisticated products. A number of important orders have already been placed. According to estimates, TETRA-based networks will have 5-10 million users by the year 2010.
Swarm Intelligence & Traffic Safety
An automotive controller that complements the driving experience must work to avoid collisions, enforce a smooth trajectory, and deliver the vehicle to the intended destination as quickly as possible. Unfortunately, satisfying these requirements with traditional methods proves intractable at best and forces us to consider biologically-inspired techniques like Swarm Intelligence.
A controller is currently being designed in a robot simulation program with the goal of implementing the system in real hardware to investigate these biologically-inspired techniques and to validate the results. In this paper I present an idea that can be implemented in traffic safety by the application of Robotics & Computer Vision through Swarm Intelligence.
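As an illustration of the swarm idea, collision avoidance is often built from simple local rules such as the "separation" rule of Reynolds-style boids: each vehicle steers away from neighbors that come too close. The sketch below is a minimal illustration of that one rule; the function name, the inverse-square weighting and the threshold are assumptions for demonstration, not the controller described in this paper:

```python
import math

def separation_steer(pos, neighbors, min_dist=10.0):
    """Boids-style separation rule: accumulate a steering vector that
    points away from every neighbor closer than min_dist.  The
    inverse-square distance weighting is an assumed illustrative choice."""
    sx = sy = 0.0
    for nx, ny in neighbors:
        dx, dy = pos[0] - nx, pos[1] - ny
        d = math.hypot(dx, dy)
        if 0.0 < d < min_dist:
            # closer neighbors push harder
            sx += dx / (d * d)
            sy += dy / (d * d)
    return sx, sy

# Vehicle at the origin, another vehicle 5 m ahead on the x axis:
steer = separation_steer((0.0, 0.0), [(5.0, 0.0)])
print(steer)  # points in -x, i.e. away from the neighbor
```

A full swarm controller would combine this with alignment and cohesion rules and with goal-seeking, but even this single rule shows how global collision avoidance can emerge from purely local computation.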
We stand today at the culmination of the industrial revolution. For the last four centuries, rapid advances in science have fueled industrial society. In the twentieth century, industrialization found perhaps its greatest expression in Henry Ford's assembly line. Mass production affects almost every facet of modern life. Our food is mass produced in meat plants, commercial bakeries, and canneries.
Our clothing is shipped by the ton from factories in China and Taiwan. Certainly all the amenities of our lives - our stereos, TVs, and microwave ovens - roll off assembly lines by the truckload. Today we are presented with another solution, one that hopefully will fare better than its predecessors. It goes by the name of post-industrialism and is commonly associated with computer technology, robots and artificial intelligence.
Robots today are where computers were 25 years ago: huge, hulking machines that sit on factory floors, consume massive resources and can be afforded only by large corporations and governments. Computers came out of the basements and landed on the desktops with the PC revolution of the 1980s. We are now on the verge of a comparable "PR" revolution - a Personal Robotics revolution that will bring robots off the factory floor and put them in our homes, on our desktops and inside our vehicles.
Optical Switching
Explosive information demand in the Internet world is creating an enormous need for capacity expansion in next-generation telecommunication networks. Data-oriented network traffic is expected to double every year.
Optical networks are widely regarded as the ultimate solution to the bandwidth needs of future communication systems. Optical fiber links deployed between nodes are capable of carrying terabits of information, but electronic switching at the nodes limits the bandwidth of the network. Optical switches at the nodes will overcome this limitation. With their improved efficiency and lower costs, optical switches provide the key both to managing the new capacity of Dense Wavelength Division Multiplexing (DWDM) links and to gaining a competitive advantage in the provision of new bandwidth-hungry services. However, in an optically switched network the challenge lies in overcoming signal impairments and network-related parameters. Let us discuss the present status, advantages, challenges and future trends of optical switches.
A fiber consists of a glass core and a surrounding layer called the cladding. The core and cladding have carefully chosen indices of refraction to ensure that photons propagating in the core are always reflected at the interface with the cladding. The only way light can enter and escape is through the ends of the fiber. A transmitter, either a light-emitting diode or a laser, sends electronic data that have been converted to photons over the fiber at a wavelength between 1,200 and 1,600 nanometers.
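The reflection condition can be quantified: light stays in the core when it strikes the cladding at more than the critical angle, and the related numerical aperture gives the fiber's acceptance cone. A small sketch, using typical silica indices as assumed example values:

```python
import math

def critical_angle_deg(n_core: float, n_clad: float) -> float:
    """Angle of incidence (measured from the normal) beyond which light
    is totally internally reflected at the core/cladding interface:
    theta_c = arcsin(n_clad / n_core)."""
    return math.degrees(math.asin(n_clad / n_core))

def numerical_aperture(n_core: float, n_clad: float) -> float:
    """NA = sqrt(n_core^2 - n_clad^2): the sine of the half-angle of the
    cone of light the fiber can accept."""
    return math.sqrt(n_core**2 - n_clad**2)

# Typical silica fiber indices (assumed illustrative values).
n1, n2 = 1.48, 1.46
print(f"critical angle: {critical_angle_deg(n1, n2):.1f} deg")  # ~80.6 deg
print(f"numerical aperture: {numerical_aperture(n1, n2):.3f}")  # ~0.242
```

The small index contrast means only near-grazing rays are guided, which is why the acceptance cone of a single-mode fiber is narrow.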
Today fibers are pure enough that a light signal can travel about 80 kilometers without amplification. But at some point the signal still needs to be boosted. Electronic amplifiers were replaced by stretches of fiber infused with ions of the rare earth erbium. When these erbium-doped fibers are zapped by a pump laser, the excited ions revive a fading signal. They restore a signal without any optical-to-electronic conversion, can do so for very high speed signals carrying tens of gigabits per second, and, most importantly, can boost the power of many wavelengths simultaneously.
To increase the information rate, as many wavelengths as possible are packed into a fiber, with each wavelength carrying as much data as possible. The technology that does this - dense wavelength division multiplexing (DWDM) - is a paragon of technospeak. Switches are needed to route the digital flow to its ultimate destination. These enormous bit conduits will flounder if the light streams are routed using conventional electronic switches, which require a multi-terabit signal to be converted into hundreds of lower-speed electronic signals. Finally, the switched signals would have to be reconverted to photons and reaggregated into light channels that are sent out through a designated output fiber.
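Rough numbers make the point. The sketch below works out the loss over an 80 km amplifier span and the aggregate capacity of a DWDM link; the attenuation figure and channel counts are assumed illustrative values, not data from this paper:

```python
def span_loss_db(length_km: float, atten_db_per_km: float = 0.2) -> float:
    """Total fiber loss over a span; 0.2 dB/km is a typical (assumed)
    attenuation for modern silica fiber at 1550 nm."""
    return length_km * atten_db_per_km

def dwdm_capacity_gbps(channels: int, rate_gbps: float) -> float:
    """Aggregate DWDM capacity = number of wavelengths x per-channel rate."""
    return channels * rate_gbps

print(span_loss_db(80))             # 16 dB lost before the EDFA boosts it
print(dwdm_capacity_gbps(80, 10))   # 800 Gb/s from 80 channels at 10 Gb/s
```

An electronic switch facing that 800 Gb/s aggregate would have to demultiplex it into hundreds of lower-speed streams, which is exactly the bottleneck optical switching avoids.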
Ferroelectric RAM (FRAM)
Before the 1950s, ferromagnetic cores were the only type of random-access, nonvolatile memory available. A core memory is a regular array of tiny magnetic cores that can be magnetized in one of two opposite directions, making it possible to store binary data in the form of a magnetic field. The success of the core memory was due to a simple architecture that resulted in a relatively dense array of cells. This approach was emulated in the semiconductor memories of today (DRAMs, EEPROMs, and FRAMs).
Ferromagnetic cores, however, were too bulky and expensive compared to smaller, low-power semiconductor memories, and ferroelectric memories are a good substitute for them. The term "ferroelectric" indicates a similarity to ferromagnetism, despite the lack of iron in the materials themselves.
Ferroelectric memories exhibit short programming times, low power consumption and nonvolatility, making them highly suitable for applications such as contactless smart cards and digital cameras, which demand many memory write operations. In other words, FRAM has features of both RAM and ROM. A ferroelectric memory technology consists of a complementary metal-oxide-semiconductor (CMOS) process with added layers on top for the ferroelectric capacitors.
A ferroelectric memory cell has at least one ferroelectric capacitor to store the binary data, and one or two transistors that provide access to the capacitor or amplify its content for a read operation. A ferroelectric capacitor differs from a regular capacitor in that the dielectric is replaced by a ferroelectric material (lead zirconate titanate, PZT, is commonly used). When an electric field is applied, the charges displace from their original positions, spontaneous polarization occurs, and the displacement becomes evident in the crystal structure of the material.
Importantly, the displacement does not disappear when the electric field is removed, and the direction of polarization can be reversed or reoriented by applying an appropriate electric field. A hysteresis loop for a ferroelectric capacitor displays the total charge on the capacitor as a function of the applied voltage. It behaves similarly to that of a magnetic core, but lacks the sharp transitions around its coercive points, which implies that even a moderate voltage can disturb the state of the capacitor.
One remedy is to modify the ferroelectric memory cell by including a transistor in series with the ferroelectric capacitor. Called an access transistor, it controls access to the capacitor and eliminates the need for a square-like hysteresis loop, compensating for the softness of the hysteresis characteristics and blocking unwanted disturb signals from neighboring memory cells.
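The two stable states of the hysteresis loop can be illustrated with a simple tanh model of a ferroelectric capacitor (the functional form and all parameter values below are assumptions for illustration, not measured PZT data). At zero applied field the capacitor retains one of two opposite remanent polarizations, which is exactly what makes the cell nonvolatile:

```python
import math

# Assumed illustrative parameters: saturation polarization (uC/cm^2),
# coercive field (kV/cm), and a softness parameter for the transition.
PS, EC, DELTA = 30.0, 50.0, 20.0

def polarization(e_field: float, branch: str) -> float:
    """Simple tanh model of one branch of a ferroelectric hysteresis loop.
    branch='down' means the capacitor last saw +saturation,
    branch='up' means it last saw -saturation."""
    shift = EC if branch == "up" else -EC
    return PS * math.tanh((e_field - shift) / DELTA)

# With the field removed (E = 0), the two branches retain opposite
# remanent polarizations - the nonvolatile '1' and '0' states.
pr_one = polarization(0.0, "down")   # positive remanence
pr_zero = polarization(0.0, "up")    # negative remanence
print(pr_one, pr_zero)
```

The soft tanh transition around the coercive field EC also shows why a moderate disturb voltage can partially switch a cell, motivating the access transistor described above.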
The Internet as a Distributed Computing Platform
The World Wide Web's current implementation is designed predominantly for information retrieval and display in a human-readable form. Its data formats and protocols are neither intended nor suitable for machine-to-machine interaction without humans in the loop. Emergent Internet uses - including peer-to-peer and grid computing - provide both a glimpse of and an impetus for evolving the Internet into a distributed computing platform.
What would be needed to make the Internet into an application-hosting platform? This would be a networked, distributed counterpart of the hosting environment that a traditional operating system provides to applications in a single node. Creating this platform requires an additional functional layer on the Internet that can allocate and manage the resources necessary for application execution. Given such a hosting environment, software designers could create network applications without having to know at design time the type or number of nodes the application will execute on. With proper support, the system could allocate and bind software components to the resources they require at runtime, based on resource requirements, availability, connectivity and system state at the actual time of execution. In contrast, early bindings tend to result in static allocations that cannot adapt well to resource, load and availability variations; the software components therefore tend to be less efficient and have difficulty recovering from failures.
The foundation of the proposed approach is to disaggregate and virtualize system resources as services that can be described, discovered and dynamically configured at runtime to execute an application. Such a system can be built as a combination and extension of Web services, peer-to-peer computing, and grid computing standards and technologies. It thus follows the successful Internet model of adding minimal and relatively simple functional layers atop already available technologies to meet requirements.
But it does not advocate an "Internet OS" approach that would provide some form of uniform or centralized global resource management. Several theoretical and practical reasons make such an approach undesirable, including its inability to scale and the need to provide and manage supporting software on every participating platform. Instead, we advocate a mechanism that supports spontaneous, dynamic, and voluntary collaboration among entities with their contributed resources.
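A toy sketch of the discover-then-bind idea: services register their capabilities, and an application selects a matching endpoint at runtime rather than at design time. The registry class, its method names and the endpoint strings are all hypothetical illustrations, not a real standard or API:

```python
# Minimal sketch of runtime resource discovery and late binding.
# Every name here is a hypothetical illustration.

class Registry:
    """In-memory service registry: nodes advertise capabilities,
    applications discover matching endpoints at execution time."""

    def __init__(self):
        self._services = []

    def register(self, name, capabilities, endpoint):
        self._services.append({"name": name,
                               "capabilities": set(capabilities),
                               "endpoint": endpoint})

    def discover(self, required):
        """Return endpoints offering all required capabilities."""
        need = set(required)
        return [s["endpoint"] for s in self._services
                if need <= s["capabilities"]]

reg = Registry()
reg.register("node-a", ["storage", "compute"], "10.0.0.1:9000")
reg.register("node-b", ["compute"], "10.0.0.2:9000")
# Late binding: the application asks for what it needs, not for a node.
print(reg.discover(["compute", "storage"]))
```

Because the binding happens at discovery time, the same application could land on a different node tomorrow if availability or load changes - the adaptivity that static, design-time bindings lack.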
Driving Optical Network Evolution
Over the years, advances in technology have improved transmission limits, the number of wavelengths we can send down a piece of fiber, performance, amplification techniques, and the protection and redundancy of the network. When people have described and spoken at length about optical networks, they have typically limited the discussion to providing physical-layer connectivity.
When actual network services are discussed, optical transport is augmented through the addition of several protocol layers, each with its own sets of unique requirements, to make up a service-enabling network. Until recently, transport was provided through specific companies that concentrated on the core of the network and provided only point-to- point transport services.
A strong shift in revenue opportunities from a service provider and vendor perspective, changing traffic patterns from the enterprise customer, and capabilities to drive optical fiber into metropolitan (metro) areas has opened up the next emerging frontier of networking. Providers are now considering emerging lucrative opportunities in the metro space. Whereas traditional or incumbent vendors have been installing optical equipment in the space for some time, little attention has been paid to the opportunity available through the introduction of new technology advancements and the economic implications these technologies will have.
Specifically, the new technologies in the metro space provide better and more profitable economics, scale, and new services and business models. The current metro infrastructure comprises equipment that emphasizes voice traffic, is limited in scalability, and was not designed to take advantage of new technologies, topologies, and changing traffic conditions.
Next-generation equipment such as next-generation Synchronous Optical Network (SONET), metro core dense wavelength division multiplexing (DWDM), metro-edge DWDM, and advancements in the optical core have addressed these limitations: they are scalable and data-optimized, they include integrated DWDM functionality and new amplification techniques, and they have brought improvements in the operational and provisioning cycles. This tutorial provides technical information that can help engineers address numerous Cisco innovations and technologies for Cisco Complete Optical Multiservice Edge and Transport (Cisco COMET), which can be broken down into five key areas: photonics, protection, protocols, packets, and provisioning.
Cellular Neural Network (CNN)
Cellular Neural Network (CNN) is a revolutionary concept and an experimentally proven new computing paradigm for analog computers. Looking at the technological advancement of the last 50 years, we see a first revolution that led to the PC industry in the 1980s and a second that led to the Internet industry in the 1990s; cheap sensors and MEMS arrays in the desired forms of artificial eyes, noses, ears etc. mark a third revolution, which owes much to CNN. The technology is implemented using the CNN Universal Machine (CNN-UM), is used in image processing, and can also implement any Boolean function.
ARCHITECTURE OF CNN
A standard CNN architecture consists of an M x N rectangular array of cells C(i,j) with Cartesian coordinates (i,j), i = 1, 2, ..., M, j = 1, 2, ..., N.
A class-1 M x N standard CNN, defined by this rectangular array of cells C(i,j), is described mathematically by the state equation

    dx_ij/dt = -x_ij + sum A(i,j;k,l) y_kl + sum B(i,j;k,l) u_kl + z_ij

where both sums run over the cells C(k,l) in the neighborhood of C(i,j); x_ij is the cell state, y_kl the output, u_kl the input, A and B the feedback and control templates, and z_ij the threshold.
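The state equation above can be simulated directly with forward-Euler integration. The sketch below applies the classic edge-detection template pair (values assumed from the standard CNN template library) to a small binary image, where +1 denotes black and -1 white:

```python
import numpy as np

def out(x):
    """Standard CNN output nonlinearity: y = 0.5 * (|x + 1| - |x - 1|)."""
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

def cnn_step(x, u, A, B, z, dt=0.1):
    """One forward-Euler step of dx/dt = -x + A*y + B*u + z with
    3x3 templates and zero (fixed) boundary conditions."""
    y = out(x)
    yp, up = np.pad(y, 1), np.pad(u, 1)
    m, n = x.shape
    fb = np.zeros_like(x)   # feedback term, template A applied to y
    ff = np.zeros_like(x)   # feedforward term, template B applied to u
    for di in range(3):
        for dj in range(3):
            fb += A[di, dj] * yp[di:di + m, dj:dj + n]
            ff += B[di, dj] * up[di:di + m, dj:dj + n]
    return x + dt * (-x + fb + ff + z)

# Classic edge-detection template pair (assumed standard values).
A = np.array([[0., 0., 0.], [0., 2., 0.], [0., 0., 0.]])
B = np.array([[-1., -1., -1.], [-1., 8., -1.], [-1., -1., -1.]])
z = -1.0

u = -np.ones((8, 8))     # white background (-1)
u[2:6, 2:6] = 1.0        # black 4x4 square (+1)
x = np.zeros_like(u)     # zero initial state
for _ in range(200):
    x = cnn_step(x, u, A, B, z)
edges = out(x)           # settles to +1 only on the border of the square
```

Interior pixels of the square see a balanced B-template response and relax to white, while border pixels receive a net positive drive and settle black - every cell computing the same local rule in parallel, which is the essence of the CNN paradigm.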
Radio Network Controller
A Radio Network Controller (RNC) provides the interface between the wireless devices communicating through Node B transceivers and the network edge. This includes controlling and managing the radio transceivers in the Node B equipment, as well as management tasks like soft handoff.
The RNC performs tasks in a 3G wireless network analogous to those of the Base Station Controller (BSC) in a 2G or 2.5G network. It interfaces with Serving GPRS Support Nodes (SGSNs) and Gateway GPRS Support Nodes (GGSNs) to mediate with the network service providers.
A radio network controller manages hundreds of Node B transceiver stations while switching and provisioning services off the Mobile Switching Center and 3G data network interfaces. The connection from the RNC to a Node B is called the User Plane Interface Layer and it uses T1/E1 transport to the RNC.
Due to the large number of Node B transceivers, a T1/E1 aggregator is used to deliver the Node B data over channelized OC-3 optical transport to the RNC. The OC-3 pipe can be a direct connection to the RNC or through traditional SONET/SDH transmission networks. A typical Radio Network Controller may be built on a PICMG or Advanced TCA chassis. It contains several different kinds of cards specialized for performing the functions and interacting with the various interfaces of the RNC.
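The aggregation arithmetic behind that design is straightforward: a channelized OC-3 comprises three STS-1s, and the standard SONET virtual-tributary mappings carry 28 T1s (VT1.5) or 21 E1s (VT2) per STS-1. A quick sketch of the resulting Node B fan-in:

```python
# Back-of-the-envelope capacity of a channelized OC-3 feeding an RNC.
T1_MBPS, E1_MBPS = 1.544, 2.048     # standard T1 / E1 line rates
T1_PER_STS1, E1_PER_STS1 = 28, 21   # VT1.5 / VT2 mappings per STS-1
STS1_PER_OC3 = 3                    # OC-3 = 3 x STS-1

def oc3_t1_count() -> int:
    """T1 circuits a fully channelized OC-3 can deliver to the RNC."""
    return STS1_PER_OC3 * T1_PER_STS1

def oc3_e1_count() -> int:
    """E1 circuits a fully channelized OC-3 can deliver to the RNC."""
    return STS1_PER_OC3 * E1_PER_STS1

print(oc3_t1_count(), oc3_e1_count())  # 84 T1s or 63 E1s per OC-3
```

So a single OC-3 into the RNC replaces dozens of individual copper T1/E1 drops, which is exactly why the T1/E1 aggregator sits between the Node Bs and the optical transport.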
Digital Audio Broadcasting
Digital audio broadcasting (DAB) is the most fundamental advance in radio technology since the introduction of FM stereo radio. It gives listeners interference-free reception of CD-quality sound, easy-to-use radios, and the potential for a wider listening choice through many additional stations and services.
DAB is a reliable multi-service digital broadcasting system for reception by mobile, portable and fixed receivers with a simple, non-directional antenna. It can be operated at any frequency from 30 MHz to 3 GHz for mobile reception (higher for fixed reception) and may be used on terrestrial, satellite, hybrid (satellite with complementary terrestrial) and cable broadcast networks. The DAB system is a rugged, highly spectrum- and power-efficient sound and data broadcasting system. It uses advanced digital audio compression (MPEG-1 Audio Layer II and MPEG-2 Audio Layer II) to achieve a spectrum efficiency equivalent to or higher than that of conventional FM radio.
Spectrum efficiency is further increased by a special feature called the Single Frequency Network (SFN): a broadcast network can be extended virtually without limit by operating all transmitters on the same radio frequency.
EVOLUTION OF DAB
DAB has been under development since 1981 at the Institut für Rundfunktechnik (IRT) and since 1987 as part of a European research project (EUREKA-147).
- In 1987 the EUREKA-147 consortium was founded. Its aim was to develop and define the digital broadcast system that later became known as DAB.
- In 1988 the first equipment was assembled for a mobile demonstration at the Geneva WARC conference.
- By 1990, a small number of test receivers had been manufactured. They had a size of 120 dm3.
- In 1992, frequencies in the L and S bands were allocated to DAB on a worldwide basis.
- From mid-1993 the third-generation receivers, widely used for test purposes, were developed; they had a size of about 25 dm3.
- The fourth-generation JESSI DAB based test receivers had a size of about 3 dm3.
- In 1995 the first consumer-type DAB receivers, developed for use in pilot projects, were presented at the IFA in Berlin.
Significance of the Real-time Transport Protocol (RTP) in VoIP
The advent of Voice over IP (VoIP) has given a new dimension to the Internet and opened a host of new possibilities and opportunities for both corporate and public network planners. More and more companies are seeing the value of transporting voice over IP networks to reduce telephone and facsimile costs.
Adding voice to packet networks requires an understanding of how to deal with system-level challenges such as interoperability, packet loss, delay, density, scalability, and reliability, because of the real-time constraints that come into the picture. Yet the basic protocols used at the network and transport layers have remained unchanged. This calls for the definition of new protocols to be used in addition to the existing ones.
Such a protocol should provide the applications using it with enough information to conform to the real-time constraints. This paper discusses the significance of the Real-time Transport Protocol (RTP) in VoIP applications. A brief introduction to VoIP and a description of the RTP header are given in sections 1-5.
The actual realisation of the RTP header, and the packetisation and processing of an RTP packet, are discussed in section 6. Section 7, 'Realising RTP functionalities', discusses a few problems that occur in a real-time environment and how RTP provides information to counter them. Finally, sample code that we wrote for RTP packetisation, processing and RTP functionalities, written in C for a Linux platform, is presented.
Please note that RTP is incomplete without the companion RTP Control Protocol (RTCP), but a detailed description of RTCP is beyond the scope of this paper.
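Although the paper's sample code is in C, the fixed 12-byte RTP header defined by RFC 3550 is easy to sketch in any language. Here is a Python illustration of packing it; the field layout follows the RFC, while the function name and example values are assumptions for demonstration:

```python
import struct

def build_rtp_header(seq, timestamp, ssrc, payload_type=0,
                     version=2, padding=0, extension=0, cc=0, marker=0):
    """Pack the fixed 12-byte RTP header (RFC 3550):
    V(2) P(1) X(1) CC(4) | M(1) PT(7) | sequence(16) | timestamp(32) | SSRC(32)."""
    byte0 = (version << 6) | (padding << 5) | (extension << 4) | cc
    byte1 = (marker << 7) | payload_type
    return struct.pack("!BBHII", byte0, byte1, seq, timestamp, ssrc)

# Example: one 20 ms G.711 frame advances the timestamp by 160 samples.
hdr = build_rtp_header(seq=1, timestamp=160, ssrc=0x12345678)
print(len(hdr), hex(hdr[0]))  # 12 bytes; first byte 0x80 means version 2
```

The sequence number lets the receiver detect loss and reordering, and the timestamp drives the playout buffer - the two pieces of information RTP adds on top of UDP to meet the real-time constraints discussed above.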
Space Mouse
Every day of your computing life, you reach out for the mouse whenever you want to move the cursor or activate something. The mouse senses your motion and your clicks and sends them to the computer so it can respond appropriately. An ordinary mouse detects motion in the X-Y plane and acts as a two-dimensional controller; it is not well suited for use in a 3D graphics environment.
Space Mouse is a professional 3D controller specifically designed for manipulating objects in a 3D environment. It permits the simultaneous control of all six degrees of freedom: translation, rotation, or a combination of both. The device serves as an intuitive man-machine interface.
The predecessor of the Space Mouse was the DLR control ball. The Space Mouse has its origins in the late seventies, when the DLR (German Aerospace Research Establishment) started research in its robotics and system dynamics division on devices with six degrees of freedom (6 dof) for controlling robot grippers in Cartesian space. The basic principle behind its construction is mechatronic engineering and the multisensory concept. The Space Mouse has different modes of operation, in one of which it can also be used as a two-dimensional mouse.
How does computer mouse work?
Mice first broke onto the public stage with the introduction of the Apple Macintosh in 1984, and since then they have helped to completely redefine the way we use computers. Every day of your computing life, you reach out for your mouse whenever you want to move your cursor or activate something. Your mouse senses your motion and your clicks and sends them to the computer so it can respond appropriately.
Inside a Mouse
The main goal of any mouse is to translate the motion of your hand into signals that the computer can use. Almost all mice today do the translation using five components:
Resilient Packet Ring Technology
The nature of the public network has changed. Demand for Internet Protocol (IP) data is growing at a compound annual rate of between 100% and 800%, while voice demand remains stable. What was once a predominantly circuit-switched network handling mainly circuit-switched voice traffic has become a circuit-switched network handling mainly IP data. Because the nature of the traffic is not well matched to the underlying technology, this network is proving very costly to scale. User spending has not increased proportionally to the rate of bandwidth increase, and carrier revenue growth is stuck at the lower end of 10% to 20% per year. The result is that carriers are building themselves out of business.
Over the last 10 years, as data traffic has grown both in importance and volume, technologies such as frame relay, ATM, and Point-to-Point Protocol (PPP) have been developed to force fit data onto the circuit network. While these protocols provided virtual connections - a useful approach for many services - they have proven too inefficient, costly and complex to scale to the levels necessary to satisfy the insatiable demand for data services. More recently, Gigabit Ethernet (GigE) has been adopted by many network service providers as a way to network user data without the burden of SONET/SDH and ATM. But GigE's shortcomings when applied in carrier networks were soon recognized, and to address them a technology called Resilient Packet Ring (RPR) was developed.
RPR retains the best attributes of SONET/SDH, ATM, and Gigabit Ethernet. It is optimized for differentiated IP and other packet data services, while providing uncompromised quality for circuit voice and private line services. It works in point-to-point, linear, ring, or mesh networks, providing ring survivability in less than 50 milliseconds. RPR dynamically and statistically multiplexes all services into the entire available bandwidth in both directions on the ring while preserving bandwidth and service quality guarantees on a per-customer, per-service basis. And it does all this at a fraction of the cost of legacy SONET/SDH and ATM solutions.
Data, rather than voice circuits, dominates today's bandwidth requirements. New services such as IP VPN, voice over IP (VoIP), and digital video are no longer confined within the corporate local-area network (LAN). These applications are placing new requirements on metropolitan-area network (MAN) and wide-area network (WAN) transport. RPR is uniquely positioned to fulfill these bandwidth and feature requirements as networks transition from circuit-dominated to packet-optimized infrastructures.
Wireless Networked Digital Devices
The proliferation of mobile computing devices, including laptops, personal digital assistants (PDAs), and wearable computers, has created a demand for wireless personal area networks (PANs). PANs allow proximal devices to share information and resources. The mobile nature of these devices places unique requirements on PANs, such as low power consumption, frequent make-and-break connections, resource discovery and utilization, and international regulations.
This paper examines wireless technologies appropriate for PANs and reviews promising research in resource discovery and service utilization. We recognize the need for PDAs to be as manageable as mobile phones, as well as the restricted screen and input area of mobile phones; hence the need for a new breed of computing devices to fit the bill for a PAN. Such devices become especially relevant for mobile users such as surgeons and jet-plane mechanics, who need both hands free and thus would need "wearable" computers.
This paper first examines the technology used for wireless communication. Putting a radio in a digital device provides physical connectivity; however, to make the device useful in a larger context a networking infrastructure is required. The infrastructure allows devices to share data, applications, and resources such as printers, mass storage, and computation power. Defining a radio standard is a tractable problem, as demonstrated by the solutions presented in this paper. Designing a network infrastructure is much more complex.
The second half of the paper describes several research projects that try to address components of the networking infrastructure. Finally, there are the questions that go beyond the scope of this paper, yet will have the greatest effect on the direction, capabilities, and future of this paradigm. Will these networking strategies be incompatible, like the various cellular phone systems in the United States, or will there be a standard upon which manufacturers and developers agree, like the GSM (Global System for Mobile communication) cellular phones in Europe?
Communication demands compatibility, which is challenging in a heterogeneous marketplace. Yet by establishing and implementing compatible systems, manufacturers can offer more powerful and useful devices to their customers. Since these are, after all, digital devices living in a programmed digital world, compatibility and interoperation are possible.
The candidate technologies for PAN links are:
1. Electric field - uses the human body as a current conduit.
2. Magnetic field - uses base-station technology for picocells of space.
3. Infrared - has basic issues, including obstruction by opaque bodies.
4. Wireless radio frequency - the best technology option, although it has to deal with the finite resource of the electromagnetic spectrum and must meet international standards through a compatible protocol. Receiver options include:
   a. UHF radio
   b. Super-regenerative receiver
   c. SAW/ASH receiver