Wireless Internet over LMDS: Architecture and Experimental Implementation
computer science crazy | Joined: Dec 2008 | 20-09-2009, 03:48 PM
LMDS is a promising emerging technology in broadband fixed wireless communications. Its cellular structure, high data rates, and flexibility make it well suited to multimedia, digital television, and interactive services. These high-bandwidth multimedia services received most of the research attention until recently, leaving a clear gap when it comes to UDP/TCP/IP and other data services over LMDS. In this article we examine the ramifications of running standard TCP/IP data communication over a two-layer LMDS system. We argue that the earlier emphasis on multimedia and ATM-based communication alone was a mistake: the most exciting prospect for LMDS lies in enabling Internet and data services together with multimedia. We introduce a basic architecture for two-layer IP over LMDS based on a trial network built between 1996 and 2000.
Local multipoint distribution service (LMDS) is a broadband wireless access (BWA) networking solution. It is a cellular access technique for high-speed data delivery, and in several cases a very promising delivery method. It operates at millimeter frequencies, typically in the 28, 38, or 40 GHz bands, where a great deal of bandwidth is available (e.g., at 40 GHz over 1.5 GHz is available in Europe). This allows net data rates of up to 38 Mb/s per user. Such high bandwidth positions the technology to provide a whole range of services, including digital video, high-speed Internet/data, interactive TV, music, and multimedia services. LMDS is a viable alternative to wired solutions such as digital subscriber line (xDSL) for homes and small businesses. It is a cost-effective business model, especially if rapid deployment in urban areas is required or low-population-density areas are to be covered with broadband connectivity. Two different styles of LMDS have been developed: one-layer and two-layer. The problem with the traditional one-layer model is that millimeter-frequency transmission is line of sight (LOS); hence, it has only been suitable for high-rise buildings or rural areas. In difficult propagation environments repeaters and mirrors can be deployed. Two-layer architectures are more flexible and match Internet traffic and capacity requirements well; the two architectures have been examined and compared elsewhere. Currently one-layer LMDS systems have been implemented at 28 GHz in the United States and at 40 GHz in Europe, and most of the world has standardized these two frequencies for LMDS use. Development and testing of advanced two-layer systems is now under way. Here we examine only two-layer LMDS. Our LMDS research has also included an implementation study, carried out partially in the large European Union funded research project CABSINET.
The work was done between 1996 and 2000, and full field trials were conducted in 1999 in Berlin. One of the main goals of the project was to study the two-layer LMDS architecture at high frequencies. Moreover, we decided very early on (1997) not to use asynchronous transfer mode (ATM), but instead to migrate toward wireless UDP/TCP/IP.
We present a basic two-layer architecture in Fig. 1. Macrocells operating at 28/40 GHz have a cell radius on the order of 1–3 km. This forms the core infrastructure of the LMDS system, and can be seen as the traditional LMDS part of the system. Macrocells are further divided into microcells operating at lower frequencies, typically 5 or 17 GHz. One should note that, in principle, other (licensed) frequencies could be used for macro- and microcells; one excellent candidate is the 3.5 GHz band already used by some products. The cell radius for microcells is on the order of 50–500 m. There are several advantages in moving some users into microcells: at the lower frequency LOS is no longer required, and user equipment is less expensive and does not require a microwave dish for reception. In CABSINET the macrocell frequency band was fixed to 40.5–42.5 GHz and the microcell frequency to 5.725–5.875 GHz. Base stations used sectored antennas (90°) and frequency-/time-division multiple access (FDMA/TDMA) for macrocells. Microcells used TDMA transmission over direct sequence code-division multiple access (DS-CDMA) radios, much like present wireless LANs (WLANs). In fact, the system-level research carried out as part of CABSINET showed that CDMA is neither cost-effective nor justified by capacity on macrocells, mainly because it would require extremely wideband fixed spreading, and the benefits of DS-CDMA are limited by the LOS communications at 40 GHz. Users thus have two different access methods: direct wireless access to macrocells at 40 GHz or, alternatively, access through microcells.
Within the given downlink allocation, 50 individual channels are allocated with the CEPT-defined bandwidth of 39 MHz. A single-carrier approach based on digital video broadcast – satellite (DVB-S) was chosen at 40 GHz [2, 3]. The reason for selecting the DVB approach was that users not resident within a microcell could nevertheless use cheap satellite receiver set-top boxes to join the LMDS network at 40 GHz. Selecting DVB, either quadrature phase shift keying (QPSK)- or orthogonal frequency-division multiplexing (OFDM)-based, as the starting point of the physical layer has the advantage that we can use all the existing technology, notably chips, developed for digital TV. In practice each 39 MHz channel can provide a 38 Mb/s user bit rate. Most users cannot utilize such a high bit rate; hence, we multiplex several user streams into each channel. Typically each channel serves 1–20 users, leading to a downlink bit rate of between 1.9 and 38 Mb/s per user. With a 90° sectorial antenna the overall bit rate within the macrocell can be up to 7.6 Gb/s; hence, calling LMDS gigabit wireless is not without justification. Although our system uses QPSK modulation for simplicity, simulations and experiments show that QPSK or 16-quadrature amplitude modulation (16-QAM) with OFDM can be employed to provide enhanced capacity. This allows one to increase bit rates without substantially compromising quality or cell size. Due to the difficult propagation conditions of portable (or nomadic) applications using microcells, the base station is equipped with OFDM modulators, each with 8 MHz bandwidth. The OFDM signal is transmitted at 40 GHz to a local repeater (LR), where the signal is translated to 5.8 GHz and retransmitted over a DS-CDMA radio link. It is worth noting that in the downlink the LR acts as a simple transponder of the bitstream.
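The downlink capacity figures above can be checked with simple arithmetic. The sketch below reproduces them; the count of four 90° sectors per base station is our assumption, since the text gives only the per-sector antenna width:

```python
# Sanity check of the downlink capacity figures quoted above.
# Assumption: four 90-degree sectors cover the full macrocell.

CHANNELS = 50            # CEPT-defined 39 MHz channels in the downlink allocation
RATE_PER_CHANNEL = 38e6  # net user bit rate per channel (b/s)
SECTORS = 4              # assumed number of 90-degree sectors

per_sector = CHANNELS * RATE_PER_CHANNEL   # 1.9 Gb/s per sector
macrocell_total = per_sector * SECTORS     # 7.6 Gb/s per macrocell

# Per-user rate when 1..20 users share one 38 Mb/s channel.
for users in (1, 20):
    print(f"{users:2d} users/channel -> {RATE_PER_CHANNEL / users / 1e6:.1f} Mb/s each")

print(f"macrocell aggregate: {macrocell_total / 1e9:.1f} Gb/s")
```

The 1.9–38 Mb/s per-user range and the 7.6 Gb/s aggregate quoted in the text both fall out directly.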
The use of OFDM is well justified, even without DS-CDMA, because of its low complexity (ease of equalization), robustness, and capacity.
UPLINK
LMDS data systems are usually highly asymmetric, although different scenarios and systems could be built. The uplink is necessary to make the LMDS system a real data network. In our test system only 100 MHz was allocated for the uplink; this allocation can be increased and made dynamic, and the selection is an important network design question. Microcells use DS-CDMA radios and macrocells differential QPSK with rate-3/4 convolutional coding. The overall uplink capacity of the trial system was just 6.14 Mb/s. The TDMA system of the uplink is dynamic, and the user can request more than one (contention) time slot for uplink use. Statistical analysis shows that a typical user can expect 64–512 kb/s throughput. For a typical home user this is enough; however, for commercial users the system model would most probably require a different set of parameters.
Most of the available commercial LMDS systems provide IPv4 services to end users using IP over ATM. The use of an ATM platform allows cooperation between radio systems and ATM core networks. However, ATM produces a large protocol overhead (headers, etc.) relative to the payload information carried by the IP packets. To optimize IP connectivity, IP packets can be transmitted directly, avoiding the ATM layer. Moreover, the radio data link control (DLC) and medium access control (MAC) protocols should be optimized for IPv4/IPv6 characteristics (variable packet length, traffic characteristics, etc.), and interworking functionality with the IPv6 core network should be ensured. This has been the task carried out in our project. We have nevertheless been able to provide ATM cell-level interoperability, because the base station can have an ATM connection.
BANDWIDTH-ON-DEMAND MAC FOR LMDS
As far as the MAC is concerned, a TDMA/FDMA approach has been adopted. The TDMA time slot and MAC message structure were chosen to be Digital Audio Video Council (DAVIC) compatible. The implemented basic uplink configuration includes 48 time slots per TDMA period and a gross channel bit rate of 2 Mb/s, with a synchronization header duration of 128 µs and a time margin of 17 µs. For the downlink we simply statistically multiplex several bitstreams into each channel and use FDMA. An alternative approach is to also include (contention-based, flexible) TDMA in each downlink channel. Each user is allowed to request one or more time slots according to the bandwidth needed. This is a very attractive possibility for broadband wireless systems. In fact, it can be achieved as a simple low-layer protocol with negotiation between the base station and terminal equipment, as long as extra bandwidth is available. Dynamic allocation reservation is controlled by the base stations; in a technical sense this is a simple priority queue that handles reservations and bandwidth. Frequency-division duplexing (FDD) has been favored by the broadcasting community over time-division duplexing (TDD), and our choice of FDD is based on the fact that it is simpler to implement. However, although we use FDD, we use time-division multiplexing (TDM) and access in the uplink, which is more efficient especially for asymmetric and bursty data traffic. In the downlink only statistical multiplexing is used, although the hardware has the capability to move into TDM mode. In a realistic commercial case, which we have not studied, this would require billing information collection and possibly price-priority negotiations. Our system is similar to the General Packet Radio Service (GPRS) and other contention-based reservation networks in its use of dynamic slot reservation.
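The base-station reservation logic described above, a priority queue handing out uplink slots per TDMA frame, can be sketched as follows. The 48-slot frame size is from the trial configuration; the priority scheme, terminal identifiers, and partial-grant policy are illustrative assumptions:

```python
import heapq

SLOTS_PER_FRAME = 48  # uplink time slots per TDMA period, as in the trial system

def allocate_slots(requests):
    """Grant uplink slots for one TDMA frame from prioritized requests.

    requests: (priority, terminal_id, slots_wanted) tuples; a lower priority
    value is served first. The priority scheme itself is our assumption.
    """
    heap = list(requests)
    heapq.heapify(heap)            # the base station keeps a simple priority queue
    free = SLOTS_PER_FRAME
    grants = {}
    while heap and free > 0:
        _prio, tid, want = heapq.heappop(heap)
        granted = min(want, free)  # partial grant when the frame fills up
        grants[tid] = granted
        free -= granted
    return grants

# Three terminals contend for slots in one 48-slot frame:
print(allocate_slots([(1, "A", 30), (2, "B", 20), (3, "C", 10)]))
# -> {'A': 30, 'B': 18}; terminal C must retry in a later frame
```

A real MAC would add contention resolution and per-frame signaling, but the allocation step itself is no more than this queue.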
In an optimal situation operators require network management software in order to guarantee sufficient quality of service (QoS) for all customers. The possibility of adjusting the asymmetry between uplink and downlink would also be a very attractive feature. Unfortunately, while this is relatively easy to specify and even implement as a protocol, building an actual radio network with cost-effective hardware is difficult.
Two basic entities act as hosts: a base station and a customer terminal (end host), as shown in Fig. 2. The base station includes a fixed network termination (computer) that handles network connectivity into, say, IP, ATM, or frame relay networks. It also contains baseband technology and a configurable RF platform (the frequency is changed by RF extension cards). Users have two different access possibilities. If only a macrocell connection is used, the user has a 40 GHz transceiver with a small antenna (~10 cm dish) and a modified DVB-S set-top box. Users in microcells only require a 5/17 GHz DS-CDMA transceiver. If we have a wired apartment block, for example, we can install a central receiving unit for 40 GHz transmissions on the roof, with the final connection into flats provided through existing wirelines. One should note that a user who utilizes a modified DVB-S satellite set-top box as an access point can also use it as a satellite receiver. Hence, the household has IP over LMDS, video over LMDS, a home network access point, and a satellite receiver in a single piece of equipment, which can justify a relatively high equipment price. In our trials a programmable set-top box was used as the access point. In CABSINET the downlink to the customer is DVB-S/T-compliant in both physical interface and framing. This means we multiplex 188-byte-long frames into transmit streams at the base station. If we want to deliver MPEG-2 video, the corresponding traffic ID is written into the header. In the customer terminal the MPEG2-TS stream is handled by the network interface, usually built into the set-top box. The header ID is read and hashed, and the packet switching decision is made immediately:
• If the ID indicates IP traffic, we forward the packet into the IP stack. Most often the packet is forwarded from the set-top box receiver to an output port (e.g., an Ethernet port) connected to a home network or computer.
• If the ID is for MPEG-2 video, we forward the packet directly to the MPEG-2 decoder and subsequently into the TV circuits.
Because we can multiplex several connections into a single MPEG2-TS stream, a single user can concurrently receive both MPEG-2 video and Internet connectivity. This was successfully demonstrated in the project. It is a potentially valuable feature, because in households the probability of concurrent TV and computer use is high.
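The per-frame switching decision in the set-top box amounts to a lookup on the MPEG2-TS packet ID. The sketch below illustrates it; the PID extraction follows the standard 188-byte TS header layout, while the concrete PID values and the mapping table are illustrative assumptions (in a real receiver they come from signaling):

```python
# Sketch of the set-top-box dispatch on the MPEG2-TS header ID.
# PID values in PID_TABLE are illustrative assumptions.

TS_FRAME_LEN = 188  # MPEG2-TS frames are 188 bytes long

# Hypothetical PID-to-service table, populated from signaling in practice.
PID_TABLE = {0x100: "ip", 0x200: "mpeg2-video"}

def dispatch(frame: bytes) -> str:
    if len(frame) != TS_FRAME_LEN:
        raise ValueError("not an MPEG2-TS frame")
    # The 13-bit PID lives in the low 5 bits of byte 1 plus all of byte 2.
    pid = ((frame[1] & 0x1F) << 8) | frame[2]
    service = PID_TABLE.get(pid)
    if service == "ip":
        return "forward to IP stack / Ethernet port"
    if service == "mpeg2-video":
        return "forward to MPEG-2 decoder"
    return "drop"

frame = bytes([0x47, 0x01, 0x00]) + bytes(185)  # sync byte 0x47, PID 0x100
print(dispatch(frame))  # -> forward to IP stack / Ethernet port
```

Because the decision is a single table lookup per 188-byte frame, it can run at line rate even in modest set-top hardware, which is what makes concurrent video and IP delivery cheap.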
TCP OVER LMDS
There has been a large amount of work on developing and understanding TCP over unreliable wireless channels (e.g., [5–9], just to mention a few). A large part of the recent work on wireless TCP is directed toward mobile cellular systems. There are some differences between generic wireless TCP and TCP over LMDS that should be understood. Unlike other cellular wireless systems, there are no handoffs in LMDS, which eliminates the problem of handoff delays in TCP. Mobile IP and DHCP are good solutions for LMDS; in fact, stateless autoconfiguration of IPv6 will be used in future versions, since it is the most straightforward way of handling IPv6. The two-layer LMDS approach opens up new service opportunities for Internet service providers (ISPs) and operators. One well-known problem is that wireless systems are susceptible to high error rates and large delays compared to wired systems. However, in the case of LMDS we can afford aggressive channel coding, since we have extra bandwidth to spare. To give a concrete example, in the downlink we use interleaving, convolutional coding, and the Reed-Solomon code RS(204,188). In practice we can drop the link bit error rate (BER) below 10^-7 in ordinary conditions, which is quite adequate. On top of this we use selective automatic repeat request (ARQ) and finally the TCP mechanisms. Hence, the design of wireless TCP connectivity for LMDS is closer to the problems encountered with WLANs, although some cellular considerations must be made. In fact, the Berkeley Snoop protocol can easily be implemented in the base station of an LMDS system. Because the base station is quite powerful and uses a dedicated link to the service provider, Snoop is a straightforward inclusion, and there is plenty of buffer memory available in the base station for it. We have simulated this with OPNET with good results.
However, Snoop was not implemented in the final test system used in these trials, since the system was working well without it. Standard TCP as used today is very sensitive to packet loss and noncongestive delays: with IP packet losses of only 2 percent, TCP throughput can drop to 20–50 percent, making TCP almost useless. The physical layer forward error correction (FEC) is essentially invisible to TCP/IP. Our experimentation shows that if there is enough bandwidth, it is better to use as strong a physical layer error concealment as possible, rather than to rely on ARQ or methods like Snoop. In most cases at 40 GHz the selected combination of FEC, ARQ, and TCP error control leads to very good perceived QoS. The usual Internet applications do not see any difference between wireless LMDS and wired transmission (over a long fixed link). However, some weather conditions, especially heavy rain or dry snow, can lead to quite sudden degradation of propagation. To fight this we have chosen a poor man's software radio approach: the physical layer error control block, especially the Reed-Solomon coder, is implemented in software, so the coding can be changed dynamically in order to reach a preselected BER. The time dynamics of this algorithm are not high; we are not trying to provide adaptivity against fast fading changes, but rather to switch on stronger error control when necessary (e.g., during heavy rain). Finally, we suggest that link loss should be explicitly measured by the radio equipment. If the link is lost, it should generate an ICMP error message directly to the service provider and the originating host(s). The QoS is in most cases much better when an explicit reconnect is attempted rather than waiting for a timeout due to a lost link. The tricky part is to find good parameters to decide when the link has been lost for long enough.
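The slow adaptation just described can be sketched as a threshold rule on measured BER. Only the RS(204,188) code and the 10^-7 target are from the text; the alternative code parameters, all thresholds, and the link-loss window are illustrative assumptions:

```python
# Sketch of the "poor man's software radio" adaptation described above:
# pick a stronger Reed-Solomon code when measured BER rises (e.g., during
# heavy rain), and detect sustained link loss. Only RS(204,188) and the
# 1e-7 BER target come from the text; everything else is assumed.

RS_CODES = [(204, 188), (208, 188), (216, 188)]  # (n, k): more parity = stronger

def select_fec(measured_ber, target_ber=1e-7):
    """Return an RS(n, k) code, escalating parity while BER exceeds target."""
    if measured_ber <= target_ber:
        return RS_CODES[0]     # ordinary conditions: standard DVB RS(204,188)
    if measured_ber <= 1e-4:
        return RS_CODES[1]     # degraded propagation: add parity
    return RS_CODES[-1]        # severe fade: strongest available code

def link_lost(ber_history, limit=1e-2, needed=5):
    """Declare link loss only after `needed` consecutive bad measurements;
    the radio would then emit an ICMP error toward the originating hosts."""
    return len(ber_history) >= needed and all(b > limit for b in ber_history[-needed:])

print(select_fec(1e-9))  # -> (204, 188)
print(select_fec(5e-5))  # -> (208, 188)
```

The consecutive-measurement window in `link_lost` is the "tricky part" mentioned above: too short and brief fades trigger spurious ICMP errors, too long and the reconnect advantage over TCP timeouts disappears.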
In the wireless link we have not implemented any Internet QoS mechanisms such as Resource Reservation Protocol (RSVP) or differentiated services (DiffServ). The CABSINET LMDS architecture can guarantee video and voice streams by allocating reserved time slots and streams for real-time connections. In the end terminal the packet header (the LLC packet, not the IP header) indicates whether the stream goes to the real-time circuitry or the TCP/IP stack. This is naturally a very crude and simplified QoS mechanism. However, it works robustly, since it is simple and exploits specific features of the LMDS network.
TCP FRAGMENTATION AND FRAMING
Another area that requires special attention in wireless TCP over LMDS is packet fragmentation. The maximum size of packets in IPv4 or IPv6 is, naturally, too long for wireless communications, even with a fixed broadband link. Originally, the DAVIC standard advocated the use of ATM cells. This was influenced by strong industry interest in ATM, and there was already a large research base for wireless ATM. However, the ATM cell size is quite suboptimal for LMDS, even with a slow uplink. The ATM cell size could be justified for mobile systems often working in bad propagation environments, although we have our reservations about ATM even in those cases. In the case of LMDS the incurred header overhead and transmission delay are too high a price to pay. In Fig. 4 we show the uplink throughput scalability for ATM framing (68-byte cells: 53 bytes for ATM plus extra radio link headers and error control redundancy) against 188-byte MPEG2-TS frames. In the typical usability scenario of one ATM frame per time slot, ATM is 20 percent less efficient than the 188-byte framing scheme. In fact, if we were simply optimizing throughput for average conditions on LMDS links, we should use even longer frames. In practice the 204/188-byte length was chosen, since it is the same length as the MPEG2-TS frame. Hence, the system remains compliant with many multimedia standards, and consequently many cost-effective components can be utilized. This is an important consideration if one would like to build actual mass-market customer products. Second, as mentioned previously, propagation conditions sometimes fluctuate strongly, and it is better to be conservative in optimization. Finally, the selection of the MPEG-TS frame length as the basic payload unit is very useful for downlink utilization: as mentioned above, we can send both IP packet services and MPEG-2 video using the same transmission system.
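The roughly 20 percent efficiency gap quoted above is easy to reproduce. The payload sizes below follow the respective standards (48-byte ATM payload, 184-byte TS payload); treating the 68-byte radio cell and the RS(204,188)-protected 204-byte frame as the on-air units is our assumption about what the comparison counts as overhead:

```python
# Reproduce the ATM-vs-MPEG2-TS framing-efficiency comparison above.

atm_cell = 68        # 53-byte ATM cell + radio link headers and redundancy
atm_payload = 48     # ATM payload per cell (53 bytes minus 5-byte header)

ts_frame = 204       # 188-byte MPEG2-TS frame + RS(204,188) parity
ts_payload = 184     # TS payload (188 bytes minus 4-byte TS header)

eff_atm = atm_payload / atm_cell
eff_ts = ts_payload / ts_frame

print(f"ATM framing efficiency:      {eff_atm:.1%}")        # ~70.6%
print(f"MPEG2-TS framing efficiency: {eff_ts:.1%}")         # ~90.2%
print(f"relative loss with ATM:      {1 - eff_atm / eff_ts:.1%}")
```

The relative loss comes out near 22 percent, consistent with the "20 percent less efficient" figure given for the one-ATM-frame-per-slot scenario.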
In the wireless TDMA system we have the freedom to choose how many fundamental payload packets (logical frames, in our case ATM or MPEG-2 packets) to include in each physical time slot. One should not confuse payload packet frames with radio link frames, which are composed of time slots, guard time, and so on. This is again a question of optimization among reliability, delay, and channel efficiency, and the final optimization is very hard to achieve without actual field trials. In Fig. 5 we show application-level end-to-end transmission delay as a function of packets per slot. The measured performance is reasonable for most applications and for TCP. The optimal range for throughput is 2–5 MPEG-2 payload packets per slot. In the CABSINET system this is a configurable parameter for the operator. The ACK threshold for TCP/IP must, of course, be selected suitably. This can be done dynamically, and the base stations can even control customer premises radio equipment through a signaling channel. The overall TCP delay includes all processing, uplink transmission delay, and so on; hence, it does not scale linearly with downlink delay. Higher downlink delay tends to cause problems with timers, buffers, and so forth, and the overall TCP delay grows very rapidly after a threshold. In Fig. 6 we show the simulated performance and measurements for this. As one can see, LMDS can provide a robust TCP/IP gateway for end users. The required changes for reliable transport are made in the physical and link layers with adaptive modules. In fact, this is an important design requirement, since any substantial change to the end-to-end semantics of TCP/IP would render the system uninteresting, and it would be unrealistic to wait for quick fixes from the Internet Engineering Task Force (IETF). Finally, we must address the question of jitter and reliability with actual trials.
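The efficiency side of the packets-per-slot tradeoff can be sketched with a back-of-the-envelope calculation using the uplink timing figures given earlier (2 Mb/s gross rate, 128 µs synchronization header, 17 µs time margin). Applying those particular numbers to this tradeoff is our assumption; the point is only the shape of the curve:

```python
# Back-of-the-envelope slot efficiency as a function of packets per slot,
# using the uplink timing figures quoted earlier. Efficiency flattens after
# a few packets, while per-slot delay keeps growing linearly.

GROSS_RATE = 2e6            # gross channel bit rate (b/s)
OVERHEAD_US = 128 + 17      # fixed per-slot overhead: sync header + margin (us)
FRAME_BITS = 204 * 8        # one RS(204,188)-protected MPEG2-TS frame

for packets in (1, 2, 5, 10):
    payload_us = packets * FRAME_BITS / GROSS_RATE * 1e6
    slot_us = OVERHEAD_US + payload_us
    eff = payload_us / slot_us
    print(f"{packets:2d} packets/slot: {slot_us:7.0f} us slot, efficiency {eff:.1%}")
```

Going from 1 to 2 packets per slot recovers most of the fixed overhead, while beyond about 5 packets the efficiency gain is marginal but the slot (and hence delay) keeps growing, consistent with the measured 2–5 packets-per-slot optimum.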
Measurements with the trial equipment and network showed that even without adaptive error control, very reliable and predictable transmission is achieved. In fact, transmission is extremely deterministic and jitter is low, both in macrocells and in microcells, thanks to OFDM transmission. However, occasionally some traffic traces showed large delay peaks (on the order of 500 ms, compared to the ordinarily constant 100 ms), generated by lost packets and the time required to recover; Snoop could effectively correct this situation.
A few years ago it was thought that video on demand (VoD) would be the killer application for LMDS. Currently, high-speed wireless Internet access and interactive digital TV have almost completely superseded VoD, while digital video broadcasting and voice over LMDS can be seen as interesting extra revenue possibilities. The inherent benefit of the proposed LMDS architecture is that all of this can be done easily with the same infrastructure. LMDS offers high flexibility and high on-demand capacity. The services and capacity offered can easily be localized by changing cell size and using different antennas, which makes LMDS well suited to both home and business use. Emerging applications probably also include teleworking and wireless video surveillance, and telephone service is possible (e.g., using VoIP and SIP). Because of the large capacity and very low utilization overnight, we encourage LMDS to also be used as a delayed data delivery system: end users could order videos or software upgrades that are downloaded over the LMDS network during low-cost, low-utilization periods. Simulations show that this approach can be very cost-effective and balance network utilization very effectively. LMDS is best suited to cases where operators must quickly deploy data communication capability in high-population-density areas, or where they do not have access to copper infrastructure. Another clear opportunity is to deploy services over LMDS in low-population-density areas, where a fixed copper or fiber infrastructure might be too expensive or static.
LMDS is well suited to fixed broadband wireless transmission. We argue, and show with a practical implementation, that LMDS networks should not be seen as mere broadcast or interactive TV systems; that would lead to misuse of scarce wireless bandwidth. Moreover, we have been able to study and implement a robust TCP/IP delivery mechanism over LMDS. This has been achieved by building a TCP-over-MPEG protocol booster. The simulations and field trials show that IP over LMDS can be realized using fairly standard radio systems. However, one should not underestimate the need to carefully fine-tune radio, network, and TCP/IP parameters in order to provide the best possible spectral efficiency and perceived QoS. LMDS could be used as a broadband backbone network by replacing the proprietary microcell with a more common technology (e.g., HiperLAN/2 or IEEE 802.11a). Furthermore, we argue that LMDS is not best used as a static and separate fixed wireless local loop; it is better deployed as part of an overlay network structure, where lower-frequency networks use LMDS as a high-speed core or trunking network. The results of our LMDS research are a two-layer network based on high-speed TDMA/FDMA, fixed wireless TCP over LMDS (specifically, a TCP-over-MPEG-2 protocol embedding layer), and a configurable and dynamic bandwidth allocation mechanism for LMDS data networks. In our opinion, it is clear that LMDS will also be a promising last-mile delivery technology for IP traffic. The technology is beginning to be mature enough for large-scale deployment with reasonable customer pricing.
REFERENCES
[1] T. Kwok, "Residential Broadband Internet Services and Application Requirements," IEEE Commun. Mag., June 1997, p. 76.
[2] P. Mähönen et al., "40 GHz LMDS-System Architecture Development," ICT 1998, vol. 1, 1998, p. 422.
[3] ETS 300 748, "Digital Broadcasting Systems for Television, Sound and Data Services; Framing Structure, Channel Coding and Modulation for Multipoint Video Distribution Systems (MVDS) at 10 GHz and Above," 1996; ETS 300 744, "Digital Broadcasting Systems for Television, Sound and Data Services; Framing Structure, Channel Coding and Modulation for Digital Terrestrial Broadcasting."
[4] DAVIC 1.4 Spec., Pt. 8, "Lower Layer Protocols and Physical Interfaces," Geneva, Switzerland, 1998.
[5] G. Xylomenos and G. C. Polyzos, "TCP and UDP over a Wireless LAN," IEEE INFOCOM '99, 1999.
[6] H. Balakrishnan et al., "A Comparison of Mechanisms for Improving TCP Performance over Wireless Links," ACM SIGCOMM '96, p. 256.
[7] K. Brown and S. Singh, "M-UDP: UDP for Mobile," Comp. Commun. Rev., vol. 26, no. 5, Oct. 1997, pp. 19–43.
[8] R. Kalden, I. Meirick, and M. Meyer, "Wireless Internet Access Based on GPRS," IEEE Pers. Commun., Apr. 2000, pp. 8–18.
[9] R. Ludwig and B. Rathonyi, "Link Layer Enhancements for TCP/IP over GSM," IEEE INFOCOM 1999.
[10] H. Balakrishnan, S. Seshan, and R. H. Katz, "Improving Transport and Handoff Performance in Cellular Wireless Networks," ACM Wireless Networks, vol. 1, no. 4, Dec. 1995.
[11] P. Mähönen et al., "Medium Access and Reconfigurability for Two-Layer LMDS," Proc. WAS Wksp., San Francisco, CA, 2001.