Symposium - Panels

Advances Toward Economic and Efficient Terabit LANs and WANs

Lawrence G. Roberts, Anagran, USA, lroberts @

Lawrence G. Roberts, Anagran, USA (ppt 0.5MB)
Osamu Ishida, NTT Network Innovation Laboratory, Japan, ishida.osamu @ (pdf 0.3MB)
Cees de Laat, University of Amsterdam, The Netherlands, delaat @ (ppt 4.3MB)
Hank Dardy, Naval Research Laboratory, USA, dardy @ (pdf 21.4MB)

Major advances in routing and switching are making possible the economic interconnection of many thousands of servers to form multi-terabit LANs with much higher utilization and QoS. Panelists brainstorm potential application and system requirements as well as alternative architectural design approaches and WAN interfaces.

Bridging the Challenges: Medicine Meets the LambdaGrid

Mary Kratz, University of Michigan and TATRC (Telemedicine Advanced Technology Research Center), USA, mkratz @

Jean-Louis Belard, TATRC International Relations, USA, belard @

Jan Patterson, Telemedicine Advanced Technology Research Center, USA, patterson @ (ppt 10.5MB)
Kevin Montgomery, National Biocomputation Center, Stanford University, USA, kevin @ (ppt 34.5MB)
Parvati Dev, Stanford University Medical Media & Information Technologies (SUMMIT) Laboratory, Stanford University, USA, parvati @ (ppt 6.5MB)
Jonathan Silverstein, University of Chicago and lead, Global Grid Forum Healthcare, USA, jcs @ (ppt 7.5MB)

The medical industry represents a challenge to the grid community. Tools such as Globus remain tangentially relevant to the medical research community, and nearly irrelevant to the practice of clinical care. Fundamental transformations are currently underway to produce value for both the grid and medical cultures. TATRC, a subordinate element of the United States Army Medical Research and Materiel Command (USAMRMC), is charged with managing core Research, Development, Test and Evaluation (RDT&E) and congressionally mandated projects in telemedicine and advanced medical technologies. This panel presents exemplary telemedicine research projects that provide a framework of technological challenges to the grid community from the perspective of medical researchers.

Data Plane and Content Security on Optical Networks

Leon Gommans, University of Amsterdam, The Netherlands, lgommans @

Leon Gommans, University of Amsterdam, The Netherlands (ppt 0.4MB)
Kim Roberts, Nortel Networks, Canada, krob @ (ppt 4.3MB)
Laurin Herr, Pacific Interface Inc., USA, laurin @

In some grid environments, optical networks may not connect publicly accessible resources. When optical networks in different domains are stitched together into an end-to-end path, each domain may have its own policies regarding access. How do we ensure that every stakeholder is able to express and enforce its admission policies, so that only authorized people or applications can access resources in that domain? Grid applications may have requirements that are unacceptable to network security managers. “Punch 1002 holes in my firewall for GridFTP? You must be joking!” is a commonly heard complaint. How can we keep both worlds from fighting each other and instead create sensible solutions? Digital content transmitted across an optical network may have monetary value, and therefore needs protection against piracy. How do we protect this data while maintaining good network performance? This panel addresses some of the security issues that must be considered when designing optical networks for production environments.

Earth Science Applications

John Orcutt, Scripps Institution of Oceanography, UCSD, USA, jorcutt @

John Orcutt, Scripps Institution of Oceanography, UCSD, USA
Mikhail Zhizhin, Institute of Physics of the Earth and Geophysical Center, Russian Academy of Sciences, Russia, jjn @ (ppt 9.9MB)
Gail McConaughy, NASA Goddard Space Flight Center, USA, Gail.R.McConaughy @

New computing models based on high-performance networks are enabling Earth science researchers to develop next-generation, fully interactive Earth system prediction models and data assimilation systems capable of resolving many questions about geological, atmospheric and oceanographic variability and change. These models will be at least an order of magnitude more compute- and data-intensive than today’s most advanced operational models, enabling scientists to interactively and visually examine shared, large and detailed data objects. Panelists discuss various advancements taking place today.

Enabling Data Intensive iGrid Applications with Advanced Optical Technologies

Joe Mambretti, Northwestern University, USA, j-mambretti @

Joe Mambretti, Northwestern University, USA (ppt 3.1MB)
Kim Roberts, Nortel Networks, Canada, krob @ (ppt 2.9MB)
Lawrence G. Roberts, Anagran, USA, lroberts @ (ppt 0.6MB)
Payam Torab, Lambda OpticalSystems, USA, ptorab @ (ppt 3.8MB)
Per Hansen, ADVA Optical Networking, USA, PHansen @ (pdf 1.8MB)
Joseph Berthold, Ciena, USA, berthold @ (pdf 1.2MB)
Cees de Laat, University of Amsterdam, The Netherlands, delaat @ (ppt 3.3MB)

This panel presents an overview of some of the leading-edge technologies that are enabling iGrid applications, particularly those related to dynamic optical networking, including grid optical networking research trends; iGrid demonstrations and emerging optical technologies; grid applications, flow control and optical transport; GMPLS and flexible optical switching; grids and optical access; designing and implementing optical Open Exchanges; and, grid networking and trends in advanced optical component research and development.

From ARPANET to LambdaGrid: 10-Year Eruptions in Networking

Larry Smarr, Calit2, USA, lsmarr @

Lawrence G. Roberts, Anagran, USA, lroberts @ (ppt 1.9MB)
Dennis Jennings, University College Dublin, Ireland, Dennis.Jennings @ (ppt 0.2MB)
Tom DeFanti, University of Illinois at Chicago (UIC) and UCSD / Calit2, USA, tom @
Maxine Brown, UIC, USA, maxine @ (ppt 4.1MB)

This panel looks at eruptive (though some might say disruptive) decadal networking advancements that have changed the way we work and communicate. We start with 1969, when the first economic crossover of routing and transmission occurred, just as Larry Roberts joined ARPANET, making packet routing economical and allowing the Internet to be built. In 1985, Dennis Jennings was the NSF program manager who funded NSFNET. By 1995, the SC conference featured a major event called I-WAY, organized by Tom DeFanti, Larry Smarr and Rick Stevens, which featured scientific applications using the most advanced R&E networks in the USA and Canada, with major outgrowths being Globus and the grid, national partnerships, interconnectivity and interoperability of USA Federal agency networks and a focus on international collaboration. By 2005, optical technologies are enabling more advancements in global e-Science, as showcased at iGrid.

Global Data Repositories: Storage, Access, Mining and Analysis

Radha Nandkumar, National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, USA, radha @

Robert Grossman, University of Illinois at Chicago, USA, grossman @
Norbert Meyer, Poznan Supercomputing and Networking Center, Poland, meyer @ (pdf 2.4MB)
Mark Ellisman, Biomedical Imaging Research Network / University of California, San Diego, USA, mark @
Kei Hiraki, University of Tokyo, Japan, hiraki @

Data is growing at an exponential rate, and the number of distributed data repositories is also increasing. Global collaborations, advanced networks and new transport protocols are now changing the way scientists use and analyze large distributed datasets. Panelists discuss their respective research involving distributed data storage, data movement, data mining, and data analysis.

High-Resolution Streaming Media

Laurin Herr, Pacific Interface Inc., USA, laurin @

Laurin Herr, Pacific Interface Inc., USA (ppt 2.1MB)
Richard Weinberg, University of Southern California, USA, weinberg @
Michael Wellings, ResearchChannel, USA, wellings @ (ppt 3.0MB)
Ludek Matyska, CESNET and Masaryk University Brno, Czech Republic, ludek @ (ppt 0.3MB)
Naohisa Ohta, Keio University, Japan, naohisa @ (ppt 1.5MB)

It has long been recognized that streaming media is required for a class of networked applications, such as interactive conferencing, remote scientific observation, long-distance educational mentoring and distributed media production. But increasing the resolution of streaming media from standard definition to high definition, and then to super high definition, compounds the technical challenges associated with high data rates, latency, codecs, switching, multicasting, and input/output devices such as cameras and displays. This panel examines the state of the art of the technology and the potential applications for high-resolution streaming media.

International e-Science Infrastructure

Bernhard Fabianek, Information Society and Media Directorate, European Commission (EC), Belgium, Bernhard.Fabianek @

Bernhard Fabianek, EC, Belgium (ppt 2.1MB)
Bill St. Arnaud, CANARIE, Canada, @ (ppt 5.8MB)
Peter Clarke, National e-Science Center, Edinburgh, UK, peter.clarke @ (ppt 4.3MB)
William Johnston, ESnet, Department of Energy, USA, wej @ (ppt 4.6MB)
George McLaughlin, AARNet, Australia, George.McLaughlin @ (ppt 3.4MB)
Yoichi Shinoda, JAIST, Japan, shinoda @ (ppt 1.8MB)
Kevin Thompson, National Science Foundation, USA, kthompso @ (ppt 1.6MB)
Kai Nan, Chinese Academy of Sciences, China, nankai @ (ppt 1.2MB)

Modern infrastructures based on information and communication technologies - so-called International e-Science Infrastructures (also known as e-Infrastructures, cyberinfrastructures or i-infrastructures) - are critical to all fields of science and technology. This infrastructure plays a pivotal role in the creation and exploration of knowledge, and thus in promoting innovation. The momentum created by the pioneering developments of International e-Science Infrastructures is huge; advanced communication technologies and services are transforming the way science is carried out, to the benefit of research, education and innovation. Each panelist gives a short introduction to key elements of his/her country’s international e-Science infrastructure initiatives, followed by a discussion of common elements and of where further collaboration (or joint projects) can be envisioned to advance global e-Science collaborations and discoveries.

L1 / L2 Services of National Networks in Support of the GLIF Community

Gigi Karmous-Edwards, NLR and MCNC Grid Computing and Network Services, USA, gigi @

Tom West, National LambdaRail, USA, twest @ (ppt 9.1MB)
Kees Neggers, SURFnet, The Netherlands, kees.neggers @ (ppt 1.0MB)
Bill St. Arnaud, CANARIE, Canada, @

The panel explores how national optical backbones can support the GLIF community. The panelists first describe their existing Layer-1/Layer-2 service offerings, and then discuss their experiences, challenges and successes from national and international perspectives, covering several key topics.

North American Networking Requirements for the Large Hadron Collider

William E. Johnston, Lawrence Berkeley Laboratory and ESnet, USA, wej @

William E. Johnston, Lawrence Berkeley Laboratory and ESnet, USA
Don Petravick, Fermi National Accelerator Laboratory, USA, petravick @ (pdf 2.3MB)
Steven McDonald, TRIUMF, Canada, mcdonald @ (ppt 3.1MB)
David Foster, CERN, David.Foster @

The CERN Large Hadron Collider (LHC) experiments (ATLAS, CMS, ALICE, etc.) will generate huge amounts of data starting in 2007. Seven data centers in Europe, three in North America, and two in Southeast Asia will be the sources of data for the physics groups that analyze it. The network requirements for the North American data centers are 20 Gbps in 2007, ramping up to 40+ Gbps by 2010. (The physics community has been running a series of increasingly realistic Service Challenges to ensure that the data centers, analysis centers, and networks will be able to meet their needs.) In addition to the primary data paths (which are being provided as part of the LHC experiment infrastructure), there must be secondary and tertiary backup paths to the data centers. This panel describes the LHC networking architecture and the needed backup architecture, and then solicits input from the GLIF community as to how the GLIF circuits might participate in this largest of all science projects.

OptIPuter Application-Centered Tools and Techniques

Jason Leigh, University of Illinois at Chicago (UIC), USA, spiff @ (ppt 0.01MB)

Luc Renambot, UIC, USA, luc @ (ppt 35.9MB)
Xingfu Wu, Texas A&M University, USA, wuxf @ (ppt 14.0MB)
Paul Wielinga, SARA, The Netherlands, wielinga @ (ppt 4.5MB)
Tomohiro Kudoh, AIST, Japan, t.kudoh @ (ppt 5.4MB)

The OptIPuter is a project to examine a new model of computing whereby ultra-high-speed networks form the backplane of a distributed virtual computer whose peripherals consist of dedicated storage, computation and visualization clusters. Panelists describe the impact that such a model has or will have on their areas of research and discuss the challenges ahead to bring these advanced capabilities to scientists, engineers and the general public.