All Over the Map
Mappa.Mundi Magazine
Carl Malamud currently collaborates with webchick at media.org. He was the founder of the Internet Multicasting Service and is the author of eight books.


Hug A Hacker
By Carl Malamud, media.org


Multicasting Matters

       In 1993 I hooked up the National Press Club in Washington, D.C. to the experimental multicasting backbone overlaid on the Internet (known as the MBONE) and sent the audio from Larry King Live out over the Internet. A reporter from the Associated Press asked to be present at this historic occasion. In the middle of the broadcast, as we peered anxiously at our computers to monitor the transmission, he got to the money question.

       "How many people are listening?" he asked.

       I hedged. "We have listeners in six countries!"

       A few minutes later I fessed up. We did in fact have listeners in six countries. One listener in each country for a total of six people. I pointed out that we were still early in the game of streaming audio over the Internet. I wasn't too far off: streaming audio is now reaching audiences that rival the major radio networks.

       Multicasting is conceptually simple: a way for one computer to use the Internet to communicate simultaneously with many computers. Local Area Networks have long used multicasting (one computer talking to a set of computers) or broadcasting (one computer talking to all computers) as a way of finding service providers. For example, when a diskless PC boots itself over a network, it will use a broadcast or multicast query to find a server willing to provide it with the operating system it needs.

       While multicasting is simple conceptually, implementation has been tough. Distribution on a local area network is fairly simple. General-purpose multicasting on the global Internet only became possible after Dr. Stephen Deering of Cisco Systems spent seven years tackling the problem as his Ph.D. topic. His research on multicasting is considered to be one of the seminal contributions to the Internet. Deering also led the effort to develop version 6 of the Internet Protocol, known as IPng ("Internet Protocol the Next Generation"), which may replace the current protocol stack with a new, improved set of protocols offering a larger address space and built-in support for features like multicast.

       Multicasting has come a long way since the early days. RealNetworks, the leading maker of audio servers, has built multicasting support into its products. This allows a set of RealAudio servers to communicate directly with several other servers, putting together an efficient distribution network. Say, for example, that you are originating an audio stream in Washington, D.C. and wish to push that audio stream out to several servers located in the San Francisco Bay Area. With multicasting, a single stream of audio goes from Washington to the Bay Area over a transit network (say Worldcom's Alternet backbone). Once the data hits the Bay Area, the audio stream is split into several copies, each one headed for a different server.

       Multicasting is thus a more efficient way of sending data over a network when multiple recipients will receive it. Since transmission costs are a direct function of distance, the large Internet Service Providers charge for the amount of data they need to haul over their national backbones. If a content provider has 10 recipients at the other end of the national backbone, multicasting has the potential for reducing the amount of network traffic (and thus costs) by an order of magnitude.
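The arithmetic behind that order-of-magnitude claim is simple. Here is a back-of-the-envelope sketch; the 28.8 kbit/s stream rate is a hypothetical figure of the era, not one from the article.

```python
# Hypothetical numbers: one audio stream pushed across a national
# backbone to 10 recipients at the far end.
stream_kbps = 28.8        # per-stream rate (assumed for illustration)
recipients = 10

unicast_kbps = stream_kbps * recipients   # one full copy hauled per recipient
multicast_kbps = stream_kbps              # one copy until the far-end split

print(f"backbone load: {unicast_kbps:.1f} vs {multicast_kbps:.1f} kbit/s "
      f"({unicast_kbps / multicast_kbps:.0f}x saving)")
```

The saving scales with the number of recipients, which is exactly why it matters most for popular content.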

       Reducing traffic is more than just a cost-saving device: it is essential for the health of the Internet. In the case of web browsers, for example, HTTP requires a separate connection for each image or HTML page that is retrieved. In the early days of Netscape's browser implementation, people could set the maximum number of connections as high as they wanted. The result was huge overhead on the net, as each web client could open a dozen simultaneous TCP connections. Netscape quickly imposed a limit on the number of connections per client.

       Sending out a large number of simultaneous audio and video streams using unicast transmission has the same potentially disastrous effect on the net. Multicasting is thus more than a potential cost saving for one particular user, it is an important way to allow us to scale the global IP network up to handle new classes of applications.

Multicasting: More Than Radio and TV

       While money matters, multicasting is also essential for a growing class of applications. Most of the focus has been on audio and video distribution, but multicasting is a fundamental part of the Internet infrastructure. We live in a real-time world, and real-time information distribution is an increasingly important aspect of the Internet. I don't believe the Internet will replace our current cable and satellite transmission of TV, at least not in the short term. However, applications like real-time stock information and the transmission of reports from the SEC's EDGAR database are ones we can expect to see in general distribution in a very short time frame.

       The SEC's EDGAR database is currently available through two mechanisms. On the web, several groups, including the SEC itself, make data available with a 24-hour delay on web sites and ftp archives. Real-time transmission of the data is available through TRW, a government contractor, and is used to feed commercial systems like Disclosure and to feed web sites like 10-K Wizards and Edgar Online.

       While searching a web site for EDGAR data is a very useful service, I believe the service can go much further. My vision of EDGAR is that any corporate user, say a trader at an investment firm, should be able to set a trap: "If any insider trading reports are issued for companies in my portfolio, give me that data immediately, convert it to an Excel spreadsheet, look up a biography of the people doing the trading, and page me."

       Here's why multicasting matters so much for this application. The EDGAR data trickles out during most of the year at a rate of a few tens of kilobytes per second. During peak periods, however, such as the end of a quarter, the data feed mushrooms to at least 1.5 million bits per second, enough to fill a T1 line. Let's say you get this data into the main server of your corporate network and you have 500 traders on corporate LANs. While a few random requests are not an appreciable server load, at the end of the quarter you could easily see several hundred simultaneous data feeds coming in, swamping the corporate LAN and the web server.

       With multicasting, the data can go out as a single feed, distributed throughout the LAN. A small application allows traders to grab the data off the wire if it meets their search criteria. The mechanism is much simpler and more efficient than other push technologies such as Channels (which require a separate channel out to each client).
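A sketch of what such a trap could look like on a trader's workstation. The record format, field names, and sample data here are all invented for illustration; they are not EDGAR's actual wire format.

```python
# Every workstation sees the same multicast feed; a local filter keeps
# only the filings that trip this trader's trap.

def matches_trap(record: dict, portfolio: set, form_types: set) -> bool:
    """Return True if a filing record trips this trader's trap."""
    return record["ticker"] in portfolio and record["form"] in form_types

portfolio = {"IBM", "CSCO"}
watched_forms = {"4"}          # SEC Form 4: insider-trading reports

feed = [  # stand-in for records parsed off the multicast wire
    {"ticker": "IBM",  "form": "4",    "filer": "J. Smith"},
    {"ticker": "MSFT", "form": "10-K", "filer": "n/a"},
    {"ticker": "CSCO", "form": "4",    "filer": "A. Jones"},
]

hits = [r for r in feed if matches_trap(r, portfolio, watched_forms)]
for r in hits:
    print(f"ALERT: insider filing for {r['ticker']} by {r['filer']}")
```

The point is that the filtering happens at the edge: the network carries one copy of the feed, and each client decides locally what to keep.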

Reliable Multicasting

       The reason multicasting has not been widely deployed for these "grab it off the wire" applications is that a text application is very different from audio or video. With audio or video, if you lose an occasional packet, the signal breaks up a bit, but it is not a catastrophic failure. With a text-based application, you don't particularly want a random segment of your document to be dropped in mid-stream. Applications like EDGAR thus require a reliable transport layer from the underlying network, whereas audio and video can make do with a best-effort datagram service.

       Reliable multicasting of data is actually a very tough problem, and some of the best minds in the business have been working on it for several years. Known as Scalable Reliable Multicast (SRM), the area has received the attention of a special task force convened by the Internet Architecture Board.

[Image: The Mighty Cisco, a proud and dignified fish of the Great Lakes]
      Luckily, some hope is on the horizon. Under the leadership of Cisco Systems, a new proposal has been advanced, known to some as Pragmatic General Multicasting and to others as Pretty Good Multicasting. The PGM specification has been advanced as an Internet Draft, the first step towards becoming an Internet Standard (McClellan Consulting has written a good elevator-pitch summary of PGM). More importantly, because PGM was written by several of Cisco's leading engineers, it is highly probable that this standard will be implemented in the routers.
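Without restating the specification, the heart of PGM-style reliability is negative acknowledgment: receivers number what they receive, notice gaps in the sequence, and ask the sender to retransmit only the missing packets. A toy, in-memory illustration of that recovery loop; the function names and the simulated loss are invented.

```python
# Toy illustration of NAK-based recovery, the style of repair PGM uses:
# receivers track sequence numbers, notice gaps, and ask only for the
# missing pieces. The "network" here is just a Python dictionary.

def detect_gaps(received_seqs, highest_seen):
    """Sequence numbers never seen, i.e. what the receiver should NAK."""
    return sorted(set(range(highest_seen + 1)) - set(received_seqs))

sender_log = {i: f"packet-{i}" for i in range(6)}   # sender keeps data for repair

# Receiver got everything except 2 and 4 (simulated packet loss).
received = {0: "packet-0", 1: "packet-1", 3: "packet-3", 5: "packet-5"}

naks = detect_gaps(received.keys(), highest_seen=5)
for seq in naks:                      # a "NAK" goes back toward the sender...
    received[seq] = sender_log[seq]   # ...which retransmits just that packet
```

Because a NAK is sent only when something is lost, the scheme stays quiet when the network behaves, which is what lets it scale to large receiver populations.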

       What is missing for PGM is a general-purpose reference implementation for end systems. PGM in the routers provides the first stepping stone towards universal deployment of reliable multicast. What is needed now is a publicly available reference implementation that allows application developers, both commercial and non-commercial, to start writing applications that take advantage of this new protocol. Multicasting is one of those fundamental pieces of infrastructure, like TCP/IP itself, that should be used to advantage by all application developers, not just TV wannabes.

       Many of our readers may not know this, but before the name Cisco became synonymous with routers, the cisco was better known as a fish. Our hope is that routers will scale as well as fish do. Multicasting is the next great challenge if the Internet is to scale and handle a new class of applications. PGM is a great first step and offers great hope for moving multicasting out into the mainstream.



 Copyright © 1999, 2000 media.org.


