
Open Spectrum
Robert J. Berger, Internet Bandwidth Development, LLC

Path to Ubiquitous Connectivity

Just as open standards and open software rocked the networking and computing industry, open spectrum is poised to be a disruptive force in the use of radio spectrum for communications. At the same time, open spectrum will be a major element that helps continue the Internet’s march to integrate and facilitate all electronic communications with open standards and commodity hardware.

Open spectrum is a collection of new radio technologies. The core concept is that technology and standards can dynamically manage spectrum access (and, thus, spectrum sharing), in place of the current static band allocations through bureaucratic “command and control.”

The three major classes of technology (described in more detail later) that implement open spectrum are wideband spread spectrum physical layers such as ultra-wideband (UWB), cognitive and software-defined radios, and mesh networks.

Because of how open-spectrum techniques impact the way spectrum is regulated, open spectrum has some overtones of a techno-political movement. The regulation of spectrum has had almost 100 years to become one of the most political of bureaucracies. The interests of some of the most powerful lobbies in the world (National Association of Broadcasters, the telephone companies, and their equipment suppliers) have had much influence over the direction and focus of the Federal Communications Commission (FCC), the main spectrum regulator in the United States (with similar situations in other countries). The FCC micromanaged spectrum allocations on a request-by-request basis, usually specifying the application (TV broadcast, phone service, public safety, etc.) and technology that a licensee could use its sliver of spectrum for.

Until recently, there was little expectation for this to change. In the past year or so, the FCC itself realized that there is a need for change. It basically had no more spectrum to allocate, yet the demand for new uses—primarily data—was accelerating. Fortunately, the FCC was open to new techniques for using spectrum, and at the same time discovered that much of the spectrum allocated over the years was being grossly underused. The problem is that the new techniques to harness underused spectrum require somewhat radical new ways of regulating it.

The FCC did a rather amazing thing for a bureaucracy. It brought in a project manager from the Defense Advanced Research Projects Agency (DARPA), Dr. Paul J. Kolodzy, to facilitate an intensive nine-month Spectrum Policy Task Force (SPTF). His group explored and came up with a range of policy recommendations that aggressively promote many of the open-spectrum techniques. To be fair, one faction of the FCC (the economists) came up with a counterproposal to auction off all spectrum and make it private property, thus using market mechanisms to manage spectrum (but we will leave that discussion to another article).

What enabled the FCC even to consider the idea of an open spectrum has been the unexpected success of Wi-Fi (IEEE 802.11 standard) wireless LAN. When the FCC originally allocated 85 megahertz of the 2.4-gigahertz spectrum, it was called the “Junk Band” because it had so many conflicting uses. The 802.11 standard, however, has demonstrated the innovation that can be unleashed with unlicensed spectrum. Products and services based on the 802.11b standard created a $2.9 billion industry in 2002. The FCC then saw that technology and standards can use spectrum in ways that seem to create more capacity.

Open spectrum calls for opening up most of the spectrum for unlicensed use in ways that can co-exist with legacy spectrum users, “creating” huge new capacity with existing spectrum. These new technologies and the grass-roots support behind them may be what are needed to break the last-mile bottleneck. Open spectrum will help to manifest nearly ubiquitous Internet access to an extent previously thought to be available only in science fiction.

INTERFERENCE IS IN THE “EAR” OF THE RECEIVER

At first glance, the promise of open spectrum does sound like the stuff of science fiction. In the not-so-distant past, finding enough spectrum for a handful of TV stations, scores of radio stations, and a few cellphone carriers was difficult. Given that, the ability to have millions of people simultaneously using their surrounding spectrum, each with 10 or 100 megabits or even gigabits per second of bandwidth, indeed seems like an absurd dream. This is because our commonsense understanding of radio-spectrum capacity comes from our day-to-day experience with radio technologies that have been largely unchanged since the beginning of the last century. Back when radio was first being developed and regulations were being set in stone, radio technology was quite primitive. Radio (and later TV) receivers can cope with only one signal at a time, and that signal must be much “louder” (i.e., higher amplitude) than the noise floor and any other signal that is near the same frequency. (See Figure 1.)

Such a dumb receiver can easily be confused by another signal at or near the same frequency whose amplitude approaches or exceeds that of the signal it is trying to listen to. This is what is generally called interference. (See Figure 2.) The problem isn’t interference between the signals, but rather the inability of the receiver to differentiate between the signal it is interested in and the other, unrelated signal.

In the late 1920s and early 1930s, this kind of simple circuitry was all that was possible. Therefore, the radio industry came up with a regulatory approach to ensure that only one signal is allowed in each carrier frequency assigned to each station in a geographical area. Only one entity, the station licensee, can transmit that signal.

These analog circuits are so dumb that you cannot have another signal anywhere near the signal of the allocated channel. Guard bands of unused frequencies (extra channels) are required between channels in each geographic region, wasting even more spectrum. That is why there are hardly ever two adjacent TV or radio channels in one city. The government, under the guidance of industry, applied a regulatory “patch” to what really is a technological problem.

This has been going on for so long that it has led people to believe that this is the natural and only way to think of spectrum and interference—that is, the spectrum must be carved up into a limited number of small channels with a regulatory or property model to protect receivers from interference from other devices.

Today, with the ability to make inexpensive chips containing millions of transistors, along with new modulation techniques and a better understanding of information theory, we can embed a huge amount of signal processing into transmitters and receivers. This allows them to discern the signal they are interested in from potentially millions of other signals and noise. They can operate at much lower power and share the spectrum through multiple dimensions of coding, frequency agility, and spatial/temporal reuse, compared with the traditional single dimension of a frequency channel.

MOORE’S LAW: POWER FOR DYNAMIC AND INTELLIGENT RADIOS

Open-spectrum technologies take advantage of Moore’s Law and its associated inexpensive signal processing and embedded intelligence power to extract huge “new” capacity from the spectrum. The techniques can be split into three major categories:

Wideband spread spectrum physical layer. The primary physical layer techniques include various wideband spread spectrum techniques such as ultra-wideband (UWB). Many products already use some form of spread spectrum including 802.11. The current products, however, have limited amounts of spectrum allocated to them. For example, 802.11b/g has 75-85 MHz (depending on which country), and 802.11a has about 300 MHz (again, more or less depending on different countries’ spectrum policies). Wideband spread spectrum such as UWB spreads its signal over gigahertz of spectrum but uses only a tiny amount (picowatts) of power per Hertz. This means that a UWB signal “looks” like background radiation (known as the noise floor) to conventional narrowband radio receivers. (See Figure 3.)

Wideband spread spectrum receivers do not “tune in” to signals. Instead, they use digital processing techniques such as code and/or timing synchronization. Embedded digital codes in the data or the timing of the digital signals allow the receiver to extract the desired signal out of the noise floor. Thus, it is possible to have a huge number of simultaneous signals overlaying each other across the spectrum. The number of signals and the bandwidth available are limited only by the processing power in the communicating devices and by the amount of spectrum they can spread across. This is known as processing gain, and it follows from Claude Shannon’s channel-capacity theorem. Shannon’s theorem combined with Moore’s Law (digital processing power doubles every two years) argues strongly for a spectrum policy that allows radios to spread across as much spectrum as possible if we want to use spectrum to its fullest capacity.
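
For readers who want the underlying math, the standard Shannon-Hartley form of the capacity theorem makes the argument concrete (the spreading numbers at the end are purely illustrative, not a regulatory limit):

    C = B \log_2\!\left(1 + \frac{S}{N}\right), \qquad G_p = \frac{B_{ss}}{B_{info}}

Capacity C grows linearly with bandwidth B but only logarithmically with the signal-to-noise ratio S/N, so trading bandwidth for power is the efficient direction. A receiver that despreads a signal spread over a bandwidth B_ss much wider than its information bandwidth B_info recovers a processing gain G_p, which is why a wideband signal can sit at or near the noise floor and still be decoded. As an illustration, spreading 10 milliwatts across 7.5 gigahertz leaves only about 1.3 picowatts per hertz.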

UWB is particularly appropriate for low-power, relatively short-distance applications such as personal, local, and possibly neighborhood area networks (1- to 100-meter ranges). When UWB and other wideband spread spectrum techniques are allowed to operate from hundreds of MHz up through 10 GHz, the signal can pass through walls and other obstructions even at low power. Current U.S. regulations require UWB to operate above 3.1 GHz and at very low power levels, so the signals generally cannot penetrate walls. Some of these issues can be transcended using the cognitive radio and mesh network techniques described later. Over time, wideband spread spectrum is expected to be able to operate across the spectrum and, in some situations and bands, at higher power.

Cognitive and software-defined radios for more intelligent spectrum utilization. The existing spectrum policy forces spectrum to behave like a fragmented disk. It chops the spectrum up into thousands of small bands. Not only does this fragmentation make it difficult to take full advantage of spread spectrum, but it just plain wastes spectrum. Recent measurements by the FCC show that even in major urban areas, only 30 percent of allocated spectrum is being used at any one time.

This leads to the second type of open-spectrum technique: cognitive radios, also known as agile or software-defined radios (SDRs). Joseph Mitola III, who coined the term software radio in 1991, introduced the term cognitive radio in 1999, and his Web site has many resources on the topic [http://ourworld.compuserve.com/homepages/jmitola/]. Cognitive radios have embedded intelligence and radio frequency (RF) technology that allows them to know what kinds of transmissions are desired (bandwidth, latency, urgency). They can “listen” to huge swaths of spectrum and determine which chunks of spectrum are available around them. They also know the rules of what spectrum could potentially be shared if a primary licensee of that spectrum is not using it at the time or place where the cognitive radio is located. The cognitive radio could then use those chunks of spectrum to talk to other cognitive radios nearby using the most appropriate RF modulation techniques for the desired transmission and available spectrum. An SDR can transmit/receive customized RF modulations, which can be conventional, ultra-wideband, or even multi-band, where spread-spectrum techniques can be spread across many non-adjacent bands instead of the continuous spectrum that single-band UWB requires.

The intelligent sensing and dynamic access capabilities of cognitive radios will allow them to operate at power levels higher than the lowest-common-denominator noise floor, because they “know” they aren’t interfering with any other transmitters in the area near them, thus allowing them to co-exist with legacy radio applications. An immediate example of this would be the common situation where whole television channels are not allocated in a city because they are near an allocated channel in frequency. For example, if TV channel 4 is allocated for Tokyo, there will be no Channel 3 or 5 in that city because TV receivers have traditionally not been able to cope with adjacent channels. A cognitive radio, however, could use those frequencies at a power lower than television but higher than the noise floor and not cause problems with any but the most ancient of televisions.
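
As a concrete illustration of that sensing-and-selection loop, here is a minimal Python sketch of a rule-based cognitive radio choosing an unused TV channel. The channel plan, the scanner interface, and the power numbers are all hypothetical placeholders, not an actual FCC rule set or any product API:

    # Hypothetical sketch of rule-based channel selection for a cognitive radio.
    # Channel numbers, the scanner interface, and power limits are illustrative only.

    LICENSED_LOCALLY = {4}        # channels with a licensed broadcaster in this city
    NOISE_FLOOR_DBM = -100        # assumed ambient noise floor seen by this radio
    MAX_SECONDARY_DBM = -20       # assumed cap, far below broadcast power levels

    def sense_occupancy(scanner, channels):
        """Return the channels where measured energy is well above the noise floor."""
        return {ch for ch in channels if scanner.energy_dbm(ch) > NOISE_FLOOR_DBM + 6}

    def pick_channel(scanner, channels=range(2, 14)):
        """Choose a channel that is neither licensed locally nor currently in use."""
        busy = sense_occupancy(scanner, channels) | LICENSED_LOCALLY
        free = [ch for ch in channels if ch not in busy]
        if not free:
            return None, None
        # Transmit above the noise floor but far below broadcast power, so legacy
        # TV receivers on adjacent channels are not disturbed.
        return free[0], MAX_SECONDARY_DBM

    class StubScanner:
        """Stand-in for radio hardware; returns canned energy readings."""
        READINGS = {3: -60, 4: -45, 5: -62}          # dBm seen on busy channels

        def energy_dbm(self, channel):
            return self.READINGS.get(channel, -110)  # everything else is quiet

    print(pick_channel(StubScanner()))   # (2, -20): channel 2 at the capped power

In a real device, the scanner and the rule table would live in the radio firmware, and the selection would be rerun continuously as conditions change.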

Mesh networks leverage intelligence and geographical spectrum reuse. You must remember that we are using spectrum differently from legacy systems. Moore’s Law is not letting us shrink TV stations into PCs. Open-spectrum techniques call for devices to operate at very low power, definitely less than a few watts and usually less than a watt of total power, compared with the tens or hundreds of thousands, or in some cases millions, of watts of legacy radio and TV transmitters. This does mean that most open-spectrum devices will have ranges of 10 to 5,000 meters (depending on the application and environment), with most radios embedded in our laptops, PDAs, and other devices having an indoor range of maybe 100 meters. To get around these distance limitations, we take advantage of both Moore’s and Metcalfe’s laws. [Metcalfe’s Law, named after Ethernet inventor Bob Metcalfe, says that the value of a network grows as the square of its number of users; it is described at http://www.infoworld.com/cgi-bin/displayArchive.pl?/96/19/o04-19.52.htm.] We make each device both a data source/sink and a relay node in a mesh of devices.

Embedded intelligence (made cheap and powerful by Moore’s Law) will allow each open-spectrum device potentially to be not only an end node in a network, but also a relay unit for any nearby neighbors forming a mesh network instead of a conventional point-to-point or point-to-multipoint architecture. Such a mesh network would mean that as long as one or more nodes can access a gateway to the backbone, any node that can connect to any other node of the mesh could get its data to and from the backbone as well. The neighbor-to-neighbor links can thus be very short, in the range of UWB and other low-power spread spectrum techniques. The more devices that participate in the mesh, the more possible paths there are to the backbone (and to each other). This is an example of Metcalfe’s Law. An aspect of such a mesh network is that the capacity of the mesh increases as more nodes are added, known as cooperation gain. Assuming a critical mass of nodes and backbone gateways so that new nodes have a path to the backbone, the system scales very nicely and allows for lower power output per node and thus dramatic geographical spectrum reuse while still extending the coverage area of the mesh.
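
A minimal sketch of the relaying idea, assuming each node already knows which neighbors it can hear and which nodes are gateways to the backbone; a breadth-first search finds the shortest chain of short, low-power hops (node names and the link table are hypothetical):

    from collections import deque

    def path_to_backbone(links, gateways, start):
        """Find the shortest relay path from start to any backbone gateway.

        links:    dict mapping each node to the set of neighbors within radio range
        gateways: set of nodes wired to the backbone
        """
        queue = deque([[start]])
        seen = {start}
        while queue:
            path = queue.popleft()
            node = path[-1]
            if node in gateways:
                return path
            for neighbor in links.get(node, ()):
                if neighbor not in seen:
                    seen.add(neighbor)
                    queue.append(path + [neighbor])
        return None

    # Every node added to the mesh adds links and alternate paths (cooperation gain).
    mesh = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "GW"}, "GW": {"C"}}
    print(path_to_backbone(mesh, {"GW"}, "A"))   # ['A', 'B', 'C', 'GW']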

ULTRA-WIDEBAND: THE RUSH HAS STARTED

The starting gun was just fired in February 2002 when the FCC allowed limited use of UWB techniques in the 3.1- to 10-GHz spectrum. Chip vendors are promising UWB chipsets “any day now.” The IEEE 802.15 WPAN High Rate Alternative PHY Task Group 3a (TG3a) [www.ieee802.org/15/pub/TG3a.html] is in the very early stages of producing a standard based on UWB. The focus of 802.15.3a is on a standard for wirelessly connecting home entertainment, multimedia, and other high-bandwidth devices to each other and to PCs with a minimum of 110 Mbps at 10 meters of range and higher speeds at shorter distances. To quote from the task group’s criteria: “Examples of applications demanding the proposed faster bit rates include time-dependent large-file transfers, multiple simultaneous instances of high-definition audio/video streaming and cable replacement. Examples of devices which can be connected include computers, computer peripherals (similar to USB 2.0’s 480-Mbps capability), PDAs, handheld PCs, printers, set-top boxes, information kiosks, image displays, virtual reality games, DVD players, and Camcorders (similar to IEEE 1394’s 400-Mbps capability)” [see www.ieee802.org/15/pub/2002/Sep02/02371r0P802-15_SG3a-5_Criteria.doc].

Of all the new open-spectrum technologies, UWB is the physical-layer technology closest to being available for development. The opportunities are wide open, from MAC implementations, drivers for embedded systems, PCs, and PDAs to standards above the MAC and applications that will actually use the services. The availability of UWB chipsets and a basic 802.15.3a standard will initially affect device manufacturers, as described in the quote from the 802.15.3a task group criteria. Once the first UWB devices show their capability, the FCC is expected to relax the restrictions on UWB, potentially allowing more power and operation at frequencies below 3.1 GHz. This will allow UWB to be used in longer-range scenarios such as wireless LANs and neighborhood area networks (NANs). We can expect UWB to become another physical layer (PHY) for 802.11 wireless LAN standards.

MESH NETWORKS: IMMEDIATE OPPORTUNITIES FOR DEVELOPMENT

Mesh networks are somewhat less dependent on what kind of physical layer is used, so some development is already going on with wireless mesh networks for proprietary and standards-based PHYs. Surprisingly, however, the development has not been widespread, and that leaves a wide open window for developers. One of the big questions, though, is: Where is the best place to implement meshing?

Most developers assume that it should be a routed approach at layer 3. Most of the meshing developments that have been published, particularly in academia, have been layer 3 based. This includes the work of the IETF’s Mobile Ad-hoc Networks (manet) working group [www.ietf.org/html.charters/manet-charter.html] and some of the open-source efforts such as LocustWorld MeshAP [global.locustworld.com/index.php]. Besides software developers having more of a comfort level at layer 3, there is the advantage that the code development for a layer 3 implementation can be done relatively independently of which wireless hardware device is being used. The wireless device is abstracted by a generic interface such as 802.11 wireless LAN or looks just like an 802.3 Ethernet interface. This level of abstraction does make it more difficult to incorporate details of the RF link into the relay decision making, however.

Some believe that it may be more appropriate to integrate the relaying at layer 2, more like an extension of Ethernet Rapid Spanning Tree [www.ieee802.org/1/pages/802.1w.html]. After all, wireless LANs, or in the wider-area case wireless metro-area access networks, are traditionally implemented as layer 2 networks that then feed into routed networks for global connectivity. By implementing relaying within layer 2, the packet-forwarding decision can have easier access to, and more direct control over, RF link factors (signal strength, direction, per-link load) that are already being tracked by the MAC layer. No current standards specifically support layer 2 multi-hop connectivity. A few startup companies, such as MeshNetworks and SkyPilot, may be using layer 2 techniques.
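
To make the layer 2 idea concrete, here is a rough sketch of the kind of next-hop decision a layer 2 mesh could make using per-neighbor RF metrics the MAC already tracks. The field names, metric weights, and numbers are assumptions for illustration; no current standard defines them:

    # Hypothetical layer 2 relay decision using RF link state kept by the MAC.
    NEIGHBORS = {
        # neighbor MAC address: (signal strength in dBm, current load from 0 to 1)
        "00:11:22:aa:00:01": (-55, 0.80),
        "00:11:22:aa:00:02": (-70, 0.10),
        "00:11:22:aa:00:03": (-85, 0.05),
    }

    def link_score(rssi_dbm, load):
        """Favor strong links and penalize heavily loaded ones."""
        return rssi_dbm - 30 * load

    def next_hop(neighbors=NEIGHBORS):
        """Pick the neighbor with the best combined signal/load score."""
        return max(neighbors, key=lambda n: link_score(*neighbors[n]))

    print(next_hop())   # '00:11:22:aa:00:02' with these illustrative numbers

A layer 3 implementation could compute something similar, but only if the driver exposes those RF metrics upward; at layer 2 they are already at hand.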

Integrating enhancements at layer 2 usually requires development in the firmware of the device. For example, adding layer 2 relaying to 802.11 would mean modifying the firmware of the access point or any device that was to participate in the relay mesh. This usually requires a relationship with the manufacturer of the equipment or chip used in the devices so the developer can gain access to the source code.

Besides the basic relay functionality, it would be advantageous to let devices automatically discover their neighbors and dynamically reconfigure links based on changing conditions, all in a secure but flexible manner. Ensuring that public data transit cannot mix with individual or corporate traffic will be important. This can be accomplished using virtual LAN (VLAN)-type techniques that are integrated with the packet forwarding. Authentication and access control can be implemented using the new 802.1x/EAP standards being developed for making 802.11 secure. [For more information, see “Authentication and Authorization: The Big Picture with IEEE 802.1X,” by Arthur A. Fischer, www.sans.org/rr/authentic/IEEE_8021X.php.] In this case, you would be authenticating not only end users, but also devices. Such a system could support a range or even a mix of community, carrier, and corporate networks meshing together. The policies (and charging if appropriate) can be handled by the existing proxy Remote Authentication Dial-in User Service (RADIUS) techniques used by wholesale and roaming ISP services such as GRIC, iPass, and Boingo, as well as corporate access control systems. The wireless mesh could be just a private corporate mesh or a mixed private-public mesh where public access traffic is on its own VLAN with a lower priority than the private network. Public access to the Internet could be free or paid through a roaming service.
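
A toy sketch of that separation policy: after authentication, the access point maps each station to a VLAN and priority based on who vouched for it. The realm names, VLAN IDs, and priorities are hypothetical:

    # Hypothetical post-authentication policy: place each station on a VLAN with a
    # priority, keeping public transit traffic away from private corporate traffic.
    POLICY = {
        "corp.example.com":    {"vlan": 10, "priority": "high"},    # corporate users
        "roaming.example.net": {"vlan": 20, "priority": "low"},     # paid public roaming
        None:                  {"vlan": 30, "priority": "lowest"},  # free public access
    }

    def assign_vlan(authenticated_realm):
        """Return the VLAN and priority for a station based on its authenticated realm."""
        return POLICY.get(authenticated_realm, POLICY[None])

    print(assign_vlan("corp.example.com"))     # {'vlan': 10, 'priority': 'high'}
    print(assign_vlan("unknown.example.org"))  # falls through to free public access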

Here’s an example of how such a system could work financially: A broadband service provider could offer individuals and corporations discounts on broadband wireline connections (xDSL, cable modem, fiber-to-the-home, etc.) if they installed wireless access points that supported the public-private access. The broadband service provider could then be the billing processor for any public-access customers that use those access points. Similarly, end users could be given incentives to allow the public-private mesh software to run on their laptops, PDAs, or other devices, thus extending the “edge” of the Internet even farther and deeper. In such a scenario, nearly ubiquitous wireless access could be deployed with no single carrier having to capitalize the cost of building the wireless network. Instead, end users effectively pay the capital cost for the wireless access network. Someone out there still needs to implement and spread this kind of software.

SDRs: A FERTILE REALM FOR SOFTWARE DEVELOPMENT

By definition, the SDR (used here to refer to the whole class of radios, including cognitive and agile radios) will need the most software. It is also the arena where the hardware required to fulfill the promise is still the farthest off. Moore’s Law has not quite cranked to the point where we can have a general processor capable of creating and decoding arbitrary waveforms in the multi-GHz range. Some devices, including the open-source GNU Radio [refer to www.gnu.org/software/gnuradio/], allow the manipulation of signals in the hundreds-of-MHz range. There are also SDR platforms that can work within a constrained set of frequencies and modulations. Thus, it is possible to start experimenting with SDR and cognitive radio techniques. The early experiments may need bulky equipment that takes up a rack or two, but we can expect those systems to shrink as they ride the Moore’s Law wave.

Besides SDRs to do the modulation and demodulation, there are the cognitive aspects. These would include doing near-realtime spectral analysis of the spectrum being used and then dynamically selecting the free spectrum in that location to use for linking with neighbors. Some of the protocol challenges would include negotiating and synchronizing what part of the spectrum is free and how the multiple sides of a link would decide which frequencies and protocols to use for the link at any moment.

The combination of cognitive aspects with a mesh network would be particularly interesting. Having all the nodes of a NAN share their spectral analysis would create an even more accurate way of sharing the spectrum and avoiding the “hidden-node” problems that occur when each node can “hear” only the signals it has line of sight to. This would overcome the situation where an individual node is blocked from a legacy transmitter that can be “heard” by other members of the mesh or by legacy users of that spectrum. In the normal case, the blocked node would not hear the transmitter and would thus try to use the same frequency, causing interference at least with the legacy users. If the mesh shared information about frequency use in the overall neighborhood, then even the nodes blocked from the legacy transmitter would learn about it from the other neighborhood nodes and thus know to use a different set of frequencies.
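
A minimal sketch of that cooperative sensing, assuming each node periodically reports the channels it hears as occupied; planning against the union of all reports protects the nodes that are shadowed from a legacy transmitter (the data structures and channel numbers are hypothetical):

    # Hypothetical cooperative spectrum sensing across a neighborhood mesh.
    def merge_occupancy(reports):
        """reports: dict mapping node name -> set of channels that node hears in use."""
        occupied = set()
        for channels in reports.values():
            occupied |= channels
        return occupied

    def usable_channels(all_channels, reports):
        """Channels that no node in the mesh hears as occupied."""
        return sorted(set(all_channels) - merge_occupancy(reports))

    reports = {
        "node1": {52, 56},   # node1 hears a legacy transmitter on channels 52 and 56
        "node2": {56},       # node2 is shadowed from the transmitter on channel 52
        "node3": set(),
    }
    # node2 alone would wrongly treat channel 52 as free; the merged view prevents that.
    print(usable_channels([48, 52, 56, 60], reports))   # [48, 60]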

These areas of development for cognitive and software-defined radios are just the tip of the iceberg. Mitola’s dissertation, “Cognitive Radio: An Integrated Agent Architecture for Software-Defined Radio” [www.it.kth.se/~jmitola/Mitola_Dissertation8_Integrated.pdf], discusses many aspects of cognitive radios that go well beyond modulation and interference avoidance. It covers knowledge representation, machine learning, knowledge acquisition, and natural language processing, as his ultimate vision is a cognitive radio that dynamically adapts to its environment as well as to the needs of the user.

BACK TO EARTH: 2003

The ultimate in open spectrum is still off in the future a bit. Opportunities exist, however, to create the foundations and even some of the basic tools and applications using wireless technologies such as 802.11 and cellular data services. As developers, we are lucky that the world is converging around Internet standards. The Internet “end-to-end” and IP “hourglass” architectures allow us to ignore most of the physical realities of what goes on below the IP layer. On the other hand, we have gotten a bit spoiled by the wired stuff underneath IP, where bandwidth and latency are usually not issues.

With wireless, we will see a wide range of performance characteristics, depending on what technology is being used underneath the IP level. Today, an IP link over wireless to a PDA or laptop could range from 9,600 bps for some of the 2G cellular systems, or from roughly 20 to 64 kbps for the newer 2.5G General Packet Radio Service (GPRS) cellular systems, to the multi-megabit throughputs of 802.11 wireless “hotzones.” The mobile “road warrior” might be using the same device and application across all of those link qualities, depending on where they are at any moment. If you are developing applications for such users, the applications ideally will be adaptive enough to be usable in any of those situations.

At this point, the underlying infrastructure has no standard way to tell an application what the current performance capabilities are. Therefore, the application developer (or possibly middleware developer) will need to include mechanisms that constantly test the performance capabilities of the link between the end user and the backend services and adapt accordingly. An application or middleware software developer must be ready for periods of disconnects, variable latency, changes in bandwidth, or even changes in the end node’s IP address as the end user moves from location to location. Changes in location mean changes not only in the RF characteristics between the end user and the local base station, but potentially in the technology and protocols being used at that moment. For instance with multi-band/multi-protocol devices that mix cellular data and 802.11, a user may move out of an 802.11 hotspot and into cellular GPRS coverage. At that point, latency and bandwidth will change dramatically.
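
One way to do this is to have the application (or a middleware layer) probe the path itself and pick a behavior profile from the result. A rough sketch, in which the probe URL, the thresholds, and the profile names are all hypothetical:

    # Hypothetical link probe: measure round-trip time and rough throughput,
    # then let the application choose how much data to push over the link.
    import time
    import urllib.request

    PROBE_URL = "https://example.com/probe-64kb"   # hypothetical small test object

    def probe_link(url=PROBE_URL):
        start = time.monotonic()
        with urllib.request.urlopen(url, timeout=10) as response:
            data = response.read()
        elapsed = time.monotonic() - start
        return elapsed, len(data) * 8 / elapsed    # seconds, bits per second

    def choose_profile(latency_s, bits_per_second):
        if bits_per_second < 64_000 or latency_s > 2.0:
            return "text-only"       # 2G or slow GPRS conditions
        if bits_per_second < 1_000_000:
            return "low-res media"   # faster cellular data
        return "full media"          # 802.11 hot zone

    latency, throughput = probe_link()
    print(choose_profile(latency, throughput))

The same probe needs to be rerun periodically, since the user may move between an 802.11 hotspot and GPRS coverage without the application being told.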

Unfortunately, the wireless options will require this kind of adaptability for several more years until we have near-ubiquitous broadband wireless. As we look at what is in store with open spectrum, we can hope for such broadband ubiquity within one or two more cranks of Moore’s Law.

Resources

Techno-economic-political aspects of the open spectrum
“An Open Spectrum FAQ,” by David Weinberger, Jock Gill, Dewayne Hendricks, and David P. Reed, Greater Democracy, www.greaterdemocracy.org/OpenSpectrumFAQ.html
“Why Open Spectrum Matters: The End of the Broadcast Nation,” by David Weinberger, Greater Democracy, www.greaterdemocracy.org/framing_openspectrum.html
“Open Spectrum: The New Wireless Paradigm,” by Kevin Werbach, New America Foundation, werbach.com/docs/new_wireless_paradigm.htm
“Some Economics of Wireless Communications,” by Yochai Benkler, Harvard Journal of Law & Technology, www.benkler.org/OwlEcon.html

UWB overview
“Wireless Data Blaster,” by David G. Leeper, Scientific American, May 2002, www.sciam.com/article.cfm?articleID=0002D51D-0A78-1CD4-B4A8809EC588EEDF
“Ultra Wideband (UWB) Frequently Asked Questions (FAQ),” MultiSpectral Solutions Inc., www.multispectral.com/UWBFAQ.html
“Ultra-Wideband Technology for Short- or Medium-Range Wireless Communications,” Intel Technology Journal, intel.com/technology/itj/q22001/articles/art_4.htm

Software-defined radio/Cognitive radio
“Wireless Architectures for the 21st Century,” by Joseph Mitola III, ourworld.compuserve.com/homepages/jmitola
Software Defined Radio Forum, www.sdrforum.org/
Mesh Networks Inc., www.meshnetworks.com/
“Promise of Intelligent Networks,” BBC News, news.bbc.co.uk/1/hi/technology/2787953.stm
“Decentralized Channel Management in Scalable Multihop Spread-Spectrum Packet Radio Networks,” by Timothy J. Shepard, MIT Doctoral Thesis, www.lcs.mit.edu/publications/pubs/pdf/MIT-LCS-TR-670.pdf

ROBERT BERGER is a consultant with Internet Bandwidth Development, LLC, helping companies develop and use Internet technology and infrastructure, specializing in backbone, point-of-presence, wireless, ServerFarm, and e-commerce technical and business architectures, as well as Internet application services and software. He is also a visiting research fellow with Tokyo-based Global Communications. He is the founder of several start-ups, including UltraDevices and Internex Information Services.


Originally published in Queue vol. 1, no. 3


© ACM, Inc. All Rights Reserved.