High-end Networking: An Interview With Tom DeFanti
By Alan Beck, Special to HPCwire

August 15, 1997

One of the most prominent figures in the area of high-bandwidth, high-speed networking is Thomas A. DeFanti, director of the Electronic Visualization Laboratory (EVL), professor in the department of Electrical Engineering and Computer Science, and director of the Software Technologies Research Center at the University of Illinois at Chicago (UIC). He is also the associate director for virtual environments at the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign. Following is an interview HPCwire recently conducted with DeFanti to learn more about leading-edge developments in this field.

HPCwire: Please give a general overview of the STAR TAP project, including its goals, current status, and your own role in it.

DEFANTI: “The STAR TAP project supports international high-performance networking connectivity, applications development and documentation. At its center is a large commercially run switch with enough capacity to provide stable configurations of emerging networking technology. The principal contribution is the design and enabling of a truly integrated approach to the management, monitoring, scheduling, and consumption of geographically distributed network, computing, storage, and display resources, with a specific focus on advanced computational science and engineering applications.”

“A further design contribution is the elimination of the transit problem for international high-performance traffic crossing the USA. The transit problem arises, for example, when traffic from Brazil needs to reach Germany or Japan and no service is established to carry it from one obvious USA coastal connection point across to the other. This creates a compelling reason for the star topology in Chicago as a starting point for global high-performance interconnectivity, since the transit traffic would be handled totally within one switch. This also creates a single point of failure, but has expediency as a compensating feature. Duplication, replication, and improvement of the STAR TAP facility will then be actively pursued as the next-generation Internet initiatives take hold. Creative and/or financial solutions to the transit problem in a cloud-based topology are future challenges the community will face.”
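
To make the topology argument concrete, here is a small illustrative sketch in Python (the network names and counts are hypothetical, not STAR TAP figures): it compares the number of links needed if every pair of international networks arranged its own transit against the one-link-per-network star that DeFanti describes.

    # Illustrative only: compare pairwise bilateral transit links with a
    # single neutral exchange (star topology). Network names are examples.
    networks = ["vBNS", "CA*net 2", "Brazil", "Germany", "Japan", "Singapore"]
    n = len(networks)

    # Full mesh: every pair of networks needs its own transit arrangement.
    full_mesh_links = n * (n - 1) // 2

    # Star: each network runs one link to the central switch; any-to-any
    # transit is then handled entirely inside that one facility.
    star_links = n

    print(f"{n} networks, full mesh: {full_mesh_links} links")   # 15
    print(f"{n} networks, star    : {star_links} links")         # 6
    # Trade-off noted in the interview: the central switch is a single
    # point of failure, accepted for expediency.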

“Following previous successful models, such as the I-WAY, this project supplies the necessary network engineering, applications programming assistance, and Web documentation creation and maintenance to USA scientific researchers and their approved international partners. We have found that this level of support, on a persistent basis, is key to the formation of long-lasting, productive, international research relationships.”

“In this fast-moving domain, academics and researchers can only hope to build a successful model, train graduate students, encourage technology replication through publications and conferences, and skillfully anticipate change. The STAR TAP effort seeks to lead, not control, and to assist, rather than limit, the international community of our peers. The goal is to provide the support for the international connections successfully brought forth during the I-WAY experiment.”

“Layer 2 services will be provided by Ameritech out of the Ameritech Advanced Data Service (AADS) Network Access Point (NAP) in Chicago, Illinois. The AADS NAP currently connects to the vBNS network at the Downers Grove, Illinois MCI facility via layer 3 (routed IP) at 155 Mbps. MREN (the emerging Midwest “gigapop”) uses the NAP as its switch and represents an active, community-driven and funded prototype for the NSF gigapop concept.”

“CANARIE, Canada’s advanced networking organization, connects CA*net 2, its advanced network, at the STAR TAP. Intense discussions with Asian, European and Latin American consortia are underway.”

“I am the PI of the NSF grants which support STAR TAP. My role is to make sure the science and engineering applications are well served.”

HPCwire: Please discuss the significance of gigapops for global networking.

DEFANTI: “Gigapops are, in essence, a way to achieve high-bandwidth connectivity with a minimum of trans-continental and trans-oceanic lines. Gigapops allow regional alliances of users to custom configure to meet local needs, yet connect to the major research networks for the long haul. The alternative would be lines from every university and National Lab to the MCI / SPRINT / AT&T PoPs, which would be a lot of fiber and connections. Clearly, this situation is even more critical with international trans-oceanic links because of the limited amount of fiber and extreme cost.”
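
As a rough illustration of that arithmetic (the counts below are hypothetical, not actual MREN or vBNS figures), a regional gigapop turns a many-to-many wiring problem into two much smaller ones:

    # Illustrative arithmetic only; numbers are hypothetical.
    universities = 20        # institutions in one region
    backbone_pops = 3        # e.g. MCI / Sprint / AT&T points of presence

    # Without a gigapop: each institution provisions a line to each PoP.
    direct_lines = universities * backbone_pops             # 60

    # With a gigapop: each institution runs one line to the gigapop, which
    # aggregates traffic onto one high-capacity line per backbone PoP.
    gigapop_lines = universities + backbone_pops             # 23

    print(f"Direct connections: {direct_lines} lines")
    print(f"Via a gigapop     : {gigapop_lines} lines")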

HPCwire: What do you see as the most critical current issues in high-speed / high-bandwidth networking? What strategies should be adopted by the academic, governmental and commercial sectors to best meet these challenges?

DEFANTI: “My interest is in assisting applications, so my focus tends to be on delivery of network services needed by scientists and engineers, educators and artists. We have been developing collaborative room-sized VR systems (called CAVEs(tm)) for the past 6 years and have been working very hard to connect them via high-bandwidth wide-area networking, “tele-immersion” for short. In order for this shared virtual reality to be effective, to be, in essence, better than being there, we need to have application programmer control over bandwidth reservation, real-time feedback about (if not control of) latency and jitter, and ways to schedule resources like supercomputers and high-bandwidth data servers.”
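
As one way to picture the real-time feedback about latency and jitter that DeFanti asks for, here is a minimal sketch (not EVL or CAVE code; the timestamped-packet interface and rough clock synchronization are assumptions) of the bookkeeping a tele-immersion application might do on each arriving update:

    import time

    class JitterMonitor:
        """Minimal sketch: per-flow latency/jitter feedback for an application.
        The smoothed estimator follows the RTP (RFC 1889) idea of averaging
        differences between consecutive one-way transit times."""

        def __init__(self):
            self.last_transit = None
            self.jitter = 0.0     # smoothed jitter estimate, in seconds
            self.latency = None   # most recent one-way delay, in seconds

        def on_packet(self, send_timestamp: float) -> None:
            """Record one received update; send_timestamp is the sender's clock
            (assumes the two clocks are roughly synchronized, e.g. via NTP)."""
            transit = time.time() - send_timestamp
            self.latency = transit
            if self.last_transit is not None:
                self.jitter += (abs(transit - self.last_transit) - self.jitter) / 16.0
            self.last_transit = transit

    # Hypothetical use: a shared-VR application could lower its update rate or
    # renegotiate its reservation when monitor.jitter exceeds its budget.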

“Not much in the current offerings in wide-area networking indicates such control is imminent, so it is the role of the academics and government agencies in partnership with the commercial sector to create testbeds for experimentation with new approaches. There is also a massive shortage of talent in most areas of computing, networking among the most critical. Universities need support for educational programs to create a whole generation of students as quickly as possible.”

“The major computer companies are NOT (yet) major networking companies, which is sort of strange in my thinking, given phrases like: ‘the network is the computer.’ The grand synthesis of networking and computing hasn’t happened yet - the networks are simply not operating in what you would call a programmer’s model. Efforts like Globus at Argonne and ISI are aimed at bridging the gap, at providing high-level middleware that presents the network as a computer system, in essence. I’d recommend serious attention to funding of both applications and the enabling middleware alongside funding the building of faster and faster switches and pipes.”
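
A hypothetical sketch of what such a programmer’s model of the network might look like (this is not the Globus API; every name and limit below is invented for illustration) is a single request that co-schedules compute, storage, and bandwidth instead of provisioning each separately:

    from dataclasses import dataclass

    @dataclass
    class ResourceRequest:
        cpus: int               # processors at a remote HPC site
        storage_gb: int         # staging space for the dataset
        bandwidth_mbps: int     # sustained capacity between the sites
        max_latency_ms: float   # delay bound the application can tolerate

    def submit(request: ResourceRequest) -> str:
        """Pretend middleware call: grant the whole request or refuse it.
        This is the layer the interview says wide-area networks still lack."""
        if request.bandwidth_mbps > 155:   # e.g. a single OC-3 ceiling, assumed
            raise RuntimeError("no path can satisfy the requested bandwidth")
        return "reservation-0001"          # placeholder handle, not a real ticket

    ticket = submit(ResourceRequest(cpus=64, storage_gb=100,
                                    bandwidth_mbps=100, max_latency_ms=50.0))
    print(ticket)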

HPCwire: What, if any, technologies now under development promise the greatest impact on high-speed / high-bandwidth networking?

DEFANTI: “I actually think that enabling software intelligence in networking will optimize the use of the networks, once programmers get network-enabled. SVCs, RSVP and other likely emerging ways of quickly configuring bandwidth and guaranteeing a bound on latency hold great promise, but the conflicting design of ATM and IP specifically as to which end initiates the resource reservation validates the pessimism of the networking folks who have to make it work. I think it will take enlightened intervention (that is, large-scale agency funding) to get us off the market-driven over-provisioning jag, at least until the commercial sector sees an advantage in optimizing rather than duplicating resources.”
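
The conflict DeFanti points to is about which end asks for the resources: ATM signalling sets up a switched virtual circuit from the calling (sending) side, while RSVP has the sender advertise a flow and the receiver request the reservation. The sketch below is a conceptual model of that difference only, not a signalling implementation; the host names and bandwidth figure are invented.

    def atm_svc_setup(sender, receiver, bandwidth_mbps):
        """ATM-style: the calling (sending) party signals the switched virtual
        circuit and its traffic contract toward the receiver."""
        return {"initiator": sender, "path": (sender, receiver),
                "reserved_mbps": bandwidth_mbps}

    def rsvp_reservation(sender, receiver, bandwidth_mbps):
        """RSVP-style: the sender advertises the flow (PATH message), but the
        receiver issues the reservation request (RESV) back along the path."""
        path_advertised = {"from": sender, "to": receiver}           # PATH
        return {"initiator": receiver, "path": (sender, receiver),   # RESV
                "reserved_mbps": bandwidth_mbps, "advertised": path_advertised}

    # A network carrying both has to reconcile who initiates end to end, which
    # is the mismatch that makes operators cautious about promising such QoS.
    print(atm_svc_setup("cave-chicago", "cave-amsterdam", 45))
    print(rsvp_reservation("cave-chicago", "cave-amsterdam", 45))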

HPCwire: Please discuss whether more effective networking will progressively obviate the need for dedicated HPC platforms. What role will HPC assume as high-speed / high-bandwidth networks are enabled over the next decade?

DEFANTI: “Big Iron justifies big applications support, something that big networks haven’t got yet as a concept. 1000 workstations running Condor as a shared computing resource without any applications support will not replace HPC in a hurry, unless HPC as a field is so defunded that there is no one left to help the user. Naturally, it would make sense to support 1000 workstations running Condor with the same gusto one does a 1000-processor supercomputer, providing the same programming and networking support, but that is not currently the situation.”

“HPC will likely continue to provide the basis for applications support. It doesn’t seem to be getting much competition from the NGI at this point, given the near total lack of funding for applications, and I frankly don’t see another enabling mechanism on the horizon. HPC will likely evolve, of course, to support the networks, approaches like Condor, and non-compute-intensive uses of networks because HPC center directors understand users’ needs as much as they do big iron.”

Copyright 1993-1999 HPCwire. Redistribution of this article is forbidden by law without the expressed written consent of the publisher. For HPCwire subscription information, send e-mail to sub@hpcwire.com. Tabor Griffin Communications’ HPCwire is also available at tgc.com/hpcwire.html

