High Energy Physics Team Sets New Data Transfer World Records
December 8, 2008

110 Gbps Sustained Rates Among Storage Systems Over Wide Area Networks, and 200 Gbps Metro Data Rates on Next Generation Optical Links, Set New Standards for Networks and Computing Clusters

Contacts: Harvey B. Newman, Caltech High Energy Physics, newman @ hep.caltech.edu
Caltech Media Relations: Jon R. Weiner, jrweiner @ caltech.edu

AUSTIN, Texas - Building on seven years of record-breaking developments, an international team of physicists, computer scientists, and network engineers led by the California Institute of Technology, with partners from Michigan, Florida, Tennessee, Fermilab, Brookhaven, CERN, Brazil, Pakistan, Korea, and Estonia, joined forces to set new records for sustained data transfer among storage systems during the SuperComputing 2008 (SC08) conference.

Caltech’s exhibit at SC08 by the High Energy Physics (HEP) group and the Center for Advanced Computing Research (CACR) demonstrated new applications and systems for globally distributed data analysis for the Large Hadron Collider (LHC) at CERN, along with Caltech’s global monitoring system MonALISA (monalisa.caltech.edu) and its collaboration system EVO (Enabling Virtual Organizations; evo.caltech.edu), together with near real-time simulations of earthquakes in the Southern California region, experiences in time-domain astronomy with Google Sky, and recent results in multi-physics multi-scale modeling.

A highlight of the exhibit was the HEP team’s record-breaking demonstration of storage-to-storage data transfers over wide area networks from a single rack of servers on the exhibit floor. The high-energy physics team’s demonstration of “High Speed LHC Data Gathering, Distribution and Analysis Using Next Generation Networks” achieved a bidirectional peak throughput of 114 gigabits per second (Gbps) and a sustained data flow of more than 110 Gbps among clusters of servers on the show floor and at Caltech, Michigan, CERN (Geneva), Fermilab (Batavia), Brazil (Rio de Janeiro, Sao Paulo), Korea (Daegu), Estonia, and locations in the US LHCNet network in Chicago, New York, Geneva, and Amsterdam.

Following up on the previous record transfer of more than 80 Gbps sustained among storage systems over continental and transoceanic distances, set in Reno at SC07, the team used a small fraction of the global LHC grid to sustain transfers at a total rate of 110 Gbps (114 Gbps peak) between the Tier1, Tier2, and Tier3 facilities at the partners’ sites and the Tier2-scale computing and storage facility that the HEP and Caltech Center for Advanced Computing Research team constructed within two days on the exhibit floor. The team sustained rates of more than 40 Gbps in both directions for many hours (and up to 71 Gbps in one direction), showing that a well-designed and configured single rack of servers is now capable of saturating the highest speed wide area network links in production use today, which have a capacity of 40 Gbps in each direction.

The overseas partners achieved excellent storage-to-storage results during the demonstrations: 3 Gbps with the Tier2 center in Tallinn, Estonia, and approaching 2 Gbps on two 1 Gbps links with the Tier2 centers at UERJ (Rio) and SPRACE (Sao Paulo).

The record-setting demonstration was made possible through the use of twelve 10 Gbps wide area network links to SC08 provided by SCinet, National LambdaRail (6), Internet2 (3), ESnet, Pacific Wave, and the Cisco Research Wave, with onward connections provided by CENIC in California, the TransLight / StarLight link to Amsterdam, SURFnet (Netherlands) to Amsterdam and CERN, and CANARIE (Canada) to Amsterdam, as well as CENIC, Atlantic Wave, and Florida LambdaRail to Gainesville and Miami, US Net to Chicago and Sunnyvale, GLORIAD and KREONet2 to Daegu in Korea, GEANT to Estonia, and the WHREN link co-operated by FIU and the Brazilian RNP and ANSP networks to reach the Tier2 centers in Rio and Sao Paulo.

Two fully populated Cisco 6500E series switch-routers, more than one hundred 10 gigabit Ethernet (10GE) server interfaces provided by Myricom and Intel, two Fibre Channel S2A9900 storage platforms provided by DataDirect Networks (DDN) equipped with 8 Gbps host bus adapters from QLogic, and five X4500 and X4540 disk servers from Sun Microsystems were used to set the new record. The computational nodes were 32 widely available dual-motherboard Supermicro servers housing 128 quad-core Xeon processors on 64 motherboards, with a like number of 10 GE interfaces, as well as Seagate SATA II disks providing 128 Terabytes of storage.
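
As a rough, back-of-the-envelope tally of the rack described above (assuming, as the counts imply, two Xeon processors per motherboard):

\[
32~\text{servers} \times 2~\text{motherboards each} = 64~\text{motherboards}, \qquad 128~\text{quad-core Xeons} \times 4 = 512~\text{cores},
\]
\[
64 \times 10~\mathrm{GE} \approx 640~\mathrm{Gb/s}~\text{of aggregate network interface capacity}.
\]

The roughly 640 Gb/s of aggregate interface capacity comfortably exceeds the 110 Gbps sustained over the wide area, consistent with the observation below that disk speed, rather than the network interfaces, was the limiting factor.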

One of the key elements in this demonstration was Fast Data Transfer (monalisa.cern.ch/FDT), an open source Java application based on TCP, developed by the Caltech team in close collaboration with the Politehnica Bucharest team. FDT runs on all major platforms and achieves stable disk reads and writes coordinated with smooth data flow across long-range networks. FDT’s ability to drive data flows at speeds reaching the capacity limits of the links, a full 10 Gbps, was shown repeatedly during the SC08 demonstrations. The FDT application works by streaming data across an open TCP socket, so that a large data set composed of thousands of files, as is typical in high energy physics applications, can be sent or received at full speed, without the network transfer restarting between files, and without any packets being lost. FDT works with Caltech’s MonALISA system to dynamically monitor the capability of the storage systems as well as the network path in real time, and sends data out to the network at a moderated rate matched to the capacity (measured in real time) of the long range network paths.
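
The following minimal Java sketch only illustrates the general idea described above: many files framed and streamed back-to-back over one persistent TCP connection, so the transfer never pauses to open a new connection between files. It is not FDT’s source code or wire protocol; the class name, the simple length-prefixed framing, and the command-line handling are assumptions made for this illustration.

    // Hypothetical sketch (not FDT itself): stream many files over one persistent TCP socket.
    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Arrays;
    import java.util.List;

    public class SingleSocketFileStreamer {

        public static void sendAll(String host, int port, List<Path> files) throws IOException {
            try (Socket socket = new Socket(host, port);
                 DataOutputStream out = new DataOutputStream(socket.getOutputStream())) {
                byte[] buffer = new byte[4 * 1024 * 1024];  // large buffer helps keep the pipe full
                for (Path file : files) {
                    byte[] name = file.getFileName().toString().getBytes(StandardCharsets.UTF_8);
                    out.writeInt(name.length);              // simple per-file header:
                    out.write(name);                        //   name length, name bytes, payload length
                    out.writeLong(Files.size(file));
                    try (InputStream in = Files.newInputStream(file)) {
                        int n;
                        while ((n = in.read(buffer)) > 0) { // stream the payload on the same socket
                            out.write(buffer, 0, n);
                        }
                    }
                }
                out.flush();                                // one connection carries the whole data set
            }
        }

        public static void main(String[] args) throws IOException {
            // Usage (hypothetical): java SingleSocketFileStreamer <host> <port> <file> [file ...]
            List<Path> files = Arrays.stream(args).skip(2).map(Path::of).toList();
            sendAll(args[0], Integer.parseInt(args[1]), files);
        }
    }

A matching receiver would read the same header fields and payload bytes from its end of the single connection; in practice FDT adds the buffer management, monitoring, and MonALISA-coordinated rate control described above, which this sketch omits.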

FDT was combined with an optimized Linux kernel provided by Shawn McKee of Michigan known as the “UltraLight kernel,” and the FAST TCP protocol stack developed by Steven Low of Caltech’s computer science department, to reach its unprecedented sustained throughput level of 14.3 Gigabytes/sec with a single rack of servers, limited by the speed of the disks.
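
For readers comparing units, the disk-limited storage throughput and the peak network rate quoted in this release are consistent; as a simple conversion check:

\[
14.3~\mathrm{GB/s} \times 8~\mathrm{bits/byte} \approx 114~\mathrm{Gb/s},
\]

which matches the 114 Gbps peak reported for the wide area transfers.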

MonALISA’s ability to monitor a worldwide ensemble of grids and networks in real time, from individual processes running in a single processing core, to the major links, to the overall network topology, was shown throughout the conference on a large overhead global display, with the system running around the clock (since 2002) to keep track of more than 1 million parameters at 350 sites.

A second major milestone was achieved by the HEP team working together with Ciena, which had just completed its first OTU-4 (112 Gbps) standard link carrying a 100 Gbps payload (or 200 Gbps bidirectional) with forward error correction. The Caltech and Ciena teams used an optical fiber cable with ten fiber-pairs linking their neighboring booths, Ciena’s system to multiplex and demultiplex ten 10 Gbps links onto the single OTU-4 wavelength running on an 80 km fiber loop, and some of Caltech’s nodes used in setting the wide area network records, together with FDT, to achieve full throughput over the new link.

Thanks to FDT’s high throughput capabilities, and the error free links between the booths, the teams were able to achieve a maximum of 199.90 Gbps bi-directionally (memory-to-memory) within minutes of the start of the test, and an average of 191 Gbps during a 12 hour period that logged the transmission of 1.02 Petabytes overnight. Before dismantling the exhibit at the end of the conference, Caltech and DDN worked together to quickly reach 69.3 Gbps over the fiber cable, limited by the disk speed and the kernel, reaching 92% of the full throughput capacity of the DDN platforms. The team expects to be able to reach 100% of the storage platforms’ capacity with further kernel-tuning.
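
The overnight figures quoted above are mutually consistent; as a rough arithmetic check:

\[
191~\mathrm{Gb/s} \times 12~\mathrm{h} \times 3600~\mathrm{s/h} \approx 8.25 \times 10^{15}~\mathrm{bits} \approx 1.03 \times 10^{15}~\mathrm{bytes} \approx 1~\mathrm{PB},
\]

in line with the 1.02 Petabytes logged during the 12-hour run.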

The two largest physics collaborations at the LHC, CMS and ATLAS, each encompassing more than 2,000 physicists and engineers from 170 universities and laboratories, are about to embark on a new round of exploration at the frontier of high energies, breaking new ground in our understanding of the nature of matter and space-time and searching for new particles, when the LHC accelerator and their experiments resume operation next spring. In order to fully exploit the potential for scientific discoveries, the many Petabytes of data produced by the experiments will be processed, distributed, and analyzed using a global grid of 150 computing and storage facilities located at laboratories and universities around the world.

The key to discovery is the analysis phase, where individual physicists and small groups located at sites around the world repeatedly access, and sometimes extract and transport, multi-terabyte data sets on demand, in order to optimally select the rare “signals” of new physics from potentially overwhelming “backgrounds” from already-understood particle interactions. This data will amount to many tens of Petabytes in the early years of LHC operation, rising to the Exabyte range within the coming decade. The HEP team hopes that the demonstrations at SC08 will pave the way towards more effective distribution and use of the masses of LHC data for discoveries.

Professor Harvey Newman of Caltech, head of the HEP team and Chair of the US LHC Users Organization’s Executive Committee, who originated the LHC Data Grid Hierarchy concept, said, “The record-setting demonstrations at SC08 have established our continued rapid progress, advancing the state of the art in computing and storage systems for data intensive science, and keeping pace with the leading edge of long range optical networks. By sharing our methods and tools with scientists in many fields, we hope that the research community will be well-positioned to further enable their discoveries, taking full advantage of current networks, as well as next-generation networks with much greater capacity as soon as they become available. In particular, we hope that these developments will afford physicists and young students throughout the world the opportunity to participate directly in the LHC program, and potentially to make important discoveries.”

David Foster, head of Communications and Networking at CERN said, “The efficient use of high-speed networks to transfer large data sets is an essential component of CERN’s LHC Computing Grid (LCG) infrastructure that will enable the LHC experiments to carry out their scientific missions.”

Iosif Legrand, senior software and distributed system engineer at Caltech, the technical coordinator of the MonALISA and FDT projects, said, “We demonstrated a realistic, worldwide deployment of distributed, data-intensive applications capable of effectively using and coordinating high performance networks. A distributed agent-based system was used to dynamically discover network and storage resources, and to monitor, control, and orchestrate efficient data transfers among hundreds of computers.”

Richard Cavanaugh of the University of Illinois at Chicago, technical coordinator of the UltraLight project that is developing the next generation of network-integrated grids aimed at LHC data analysis, said, “By demonstrating that many 10 Gbps wavelengths can be used efficiently over continental and transoceanic distances (often in both directions simultaneously), the high-energy physics team showed that this vision of a worldwide dynamic Grid supporting many Terabyte and larger data transactions is practical. By also demonstrating the full use of a 100 Gbps wavelength for the first time, we can now be confident that we will be ready to fully exploit the next generation of networks as the LHC ramps up to full luminosity over the next few years, through our continued developments in the NSF-funded UltraLight and PLaNetS projects.”

Shawn McKee, research scientist in the University of Michigan department of physics and leader of the UltraLight network technical group, said, “This achievement is an impressive example of what a focused network and storage system effort can accomplish. It is an important step towards the goal of delivering a highly capable end-to-end network-aware system and architecture that meet the needs of next-generation e-Science.”

Artur Barczyk, network engineer and US LHCNet team leader with Caltech, said, “The impressive capability of setting up the many light paths used in this demonstration in such a short time frame, spanning three continents and providing guaranteed bandwidth channels for applications requiring them, together with the efficient use of the provisioned bandwidth by the data transfer applications, shows the high potential of circuit network services. The light path setup among USLHCNet, SURFnet, CANARIE, TransLight / StarLight, ESnet SDN and Internet2 DCN, using the MANLAN, StarLight and NetherLight exchange points, took only days to accomplish (minutes in the case of SDN and DCN dynamic circuits). It shows how the network can already be used today as a dedicated resource in data intensive research and other fields, and demonstrates how applications can make the best use of this resource basically on demand.”

Paul Sheldon of Vanderbilt University, who leads the NSF-funded Research and Education Data Depot Network (REDDnet) project that is deploying a distributed storage infrastructure, commented on the innovative network storage technology that helped the group achieve such high performance in wide-area, disk-to-disk transfers.

“When you combine this network-storage technology, including its cost profile, with the remarkable tools that Harvey Newman’s networking team has produced, I think we are well positioned to address the incredible infrastructure demands that the LHC experiments are going to make on our community worldwide.”

Further information about the demonstration is available at supercomputing.caltech.edu

About Caltech:
With an outstanding faculty, including five Nobel laureates, and such off-campus facilities as the Jet Propulsion Laboratory, Palomar Observatory, and the W. M. Keck Observatory, the California Institute of Technology is one of the world’s major research centers. The Institute also conducts instruction in science and engineering for a student body of approximately 900 undergraduates and 1,300 graduate students who maintain a high level of scholarship and intellectual achievement. www.caltech.edu

About CACR:
Caltech’s Center for Advanced Computing Research (CACR) performs research and development on leading-edge networking and computing systems, and methods for computational science and engineering. Some current efforts at CACR include the National Virtual Observatory, PSAAP Center for Predictive Modeling and Simulation, Computational Infrastructure for Geodynamics, Cell Center, and TeraGrid gateway development. www.cacr.caltech.edu

About CERN:
CERN, the European Organization for Nuclear Research, has its headquarters in Geneva. At present, its member states are Austria, Belgium, Bulgaria, the Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Italy, the Netherlands, Norway, Poland, Portugal, Slovakia, Spain, Sweden, Switzerland, and the United Kingdom. Israel, Japan, the Russian Federation, the United States of America, Turkey, the European Commission, and UNESCO have observer status. www.cern.ch

About Netlab:
Caltech’s Networking Laboratory, led by Professor Steven Low, develops FAST TCP. The group does research in the control and optimization of protocols and networks, and designs, analyzes, implements, and experiments with new algorithms and systems.

About the University of Michigan:
The University of Michigan, with its size, complexity, and academic strength, the breadth of its scholarly resources, and the quality of its faculty and students, is one of America’s great public universities and one of the world’s premier research institutions. www.umich.edu

About the National University of Sciences & Technology (NUST), Pakistan:
The NUST School of Electrical Engineering and Computer Science (SEECS) was established in 1999 as NUST Institute of IT (NIIT). SEECS is known today as the premier engineering institution in Pakistan and prides itself on its faculty, talented pool of students and a team of highly capable and dedicated professionals. SEECS has active collaborative research linkages with CERN Geneva; Stanford (SLAC) USA; Caltech USA; and EPFL Switzerland. www.niit.edu.pk

About Fermilab:
Fermi National Accelerator Laboratory (Fermilab) is a national laboratory funded by the Office of Science of the U.S. Department of Energy, operated by Fermi Research Alliance, LLC. Experiments at Fermilab’s Tevatron, the world’s highest-energy particle accelerator, generate petabytes of data per year, and involve large, international collaborations with requirements for high-volume data movement to their home institutions. Fermilab is also the Western Hemisphere Tier1 data host for the upcoming CMS experiment at the LHC. www.fnal.gov

About UERJ (Rio de Janeiro):
Founded in 1950, the Rio de Janeiro State University (UERJ) ranks among the ten largest universities in Brazil, with more than 23,000 students. The UERJ High Energy Physics group includes 15 faculty, engineers, postdoctoral, and visiting Ph.D. physicists and 12 Ph.D. and master’s students, working on experiments at Fermilab (D0) and CERN (CMS). The group has constructed a Tier2 center, which this year comprises 512 cores and 280 TB of storage, to enable it to take part in the Grid-based data analysis planned for the LHC, and has originated the concept of a Brazilian “HEP Grid,” working in cooperation with UNESP, CEFET-RIO, UFRGS, RNP, and several other universities in Rio and São Paulo. www.uerj.br

About UNESP (São Paulo):
Created in 1976 through the administrative union of several isolated institutes of higher education in the State of São Paulo, the São Paulo State University, UNESP, has 39 institutes in 23 different cities in the State of São Paulo. Since 1999 the university has had a group participating in the DZero Collaboration at Fermilab; this group operates the São Paulo Regional Analysis Center (SPRACE) and is now a member of the CMS Collaboration at CERN. www.unesp.br

About USP (São Paulo):
The University of São Paulo, USP, is the largest institution of higher education and research in Brazil, and the third largest in Latin America. The university has most of its 35 units located on its campus in the state capital. It is responsible for almost 25 percent of all Brazilian papers and publications indexed by the Institute for Scientific Information (ISI). The SPRACE cluster is located at the Physics Institute. www.usp.br

About Kyungpook National University (Daegu):
Kyungpook National University is one of the leading universities in Korea, especially in physics and information science. The university has 13 colleges and 9 graduate schools with 24,000 students. It houses the Center for High Energy Physics (CHEP), in which most Korean high-energy physicists participate. CHEP was approved as one of the designated Excellent Research Centers supported by the Korean Ministry of Science. www.chep.knu.ac.kr

About GLORIAD:
GLORIAD (GLObal RIng network for Advanced application development) is the first round-the-world high-performance ring network, jointly established by Korea, the United States, Russia, China, Canada, the Netherlands, and the Nordic countries, with optical networking tools that improve networked collaboration for e-Science and Grid applications. It is currently constructing a dedicated lightwave link connecting the scientific organizations in the GLORIAD partner countries. www.gloriad.org

About CHEPREO:
Florida International University (FIU), in collaboration with partners at Florida State University, the University of Florida, and the California Institute of Technology, has been awarded an NSF grant to create and operate an interregional Grid-enabled Center for High-Energy Physics Research and Educational Outreach (CHEPREO) at FIU. CHEPREO encompasses an integrated program of collaborative physics research on CMS, network infrastructure development, and educational outreach at one of the largest minority universities in the U.S. The center is funded by four NSF directorates: Mathematical and Physical Sciences; Scientific Computing Infrastructure; Elementary, Secondary and Informal Education; and International Programs. www.chepreo.org

About NICPB:
The National Institute of Chemical Physics and Biophysics (NICPB; Tallinn, Estonia) is engaged in a multitude of research topics, including HEP research. NICPB staff have constructed and operate the local Tier2 center, the largest computing facility in the country, which connects to GEANT2 through the Estonian NREN EENet. The center is also a member of the global LHC Grid infrastructure, and takes part in the BalticGrid project, which aims to build a sustainable e-Infrastructure in the Baltics and Belarus.

About TransLight / StarLight:
The US National Science Foundation’s International Research Network Connections (IRNC) “TransLight/StarLight” award to the University of Illinois at Chicago provides two connections between the USA and Europe for production science: a routed connection that connects the Internet2, ESnet, and National LambdaRail (NLR) networks to the pan-European GEANT2, and a switched connection between NLR and the Regional Optical Networks (RONs) at StarLight and optical connections at NetherLight, which are part of the Global Lambda Integrated Facility (GLIF) fabric. Major funding is provided by the NSF’s IRNC program (NSF OCI-0441094) for the period February 2005 - January 2010. www.startap.net/translight

About StarLight:
StarLight is an advanced optical infrastructure and proving ground for network services optimized for high-performance applications. Operational since summer 2001, StarLight is a 1 GE and 10 GE switch / router facility for high-performance access to participating networks and also offers true optical switching for wavelengths. StarLight is being developed by the Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago (UIC), the International Center for Advanced Internet Research (iCAIR) at Northwestern University, and the Mathematics and Computer Science Division at Argonne National Laboratory, in partnership with Canada’s CANARIE and the Netherlands’ SURFnet. STAR TAP and StarLight are made possible by major funding from the U.S. National Science Foundation to UIC. StarLight is a service mark of the Board of Trustees of the University of Illinois. www.startap.net/starlight

About Internet2®:
Led by more than 200 U.S. universities working with industry and government, Internet2 develops and deploys advanced network applications and technologies for research and higher education, accelerating the creation of tomorrow’s Internet. Internet2 recreates the partnerships among academia, industry, and government that helped foster today’s Internet in its infancy. www.internet2.edu

About National LambdaRail:
National LambdaRail (NLR) is a major initiative of U.S. research universities and private sector technology companies to provide a national scale infrastructure for research and experimentation in networking technologies and applications. NLR puts the control, the power, and the promise of experimental network infrastructure in the hands of the nation’s scientists and researchers. www.nlr.net

About the Florida LambdaRail:
Florida LambdaRail LLC (FLR) is a Florida limited liability company formed by member higher education institutions to advance optical research and education networking within Florida. Florida LambdaRail is a high-bandwidth optical network that links Florida’s research institutions and provides a next-generation network in support of large-scale research, education outreach, public / private partnerships, and information technology infrastructure essential to Florida’s economic development. www.flrnet.org

About CENIC:
CENIC is a not-for-profit corporation serving the California Institute of Technology, California State University, Stanford University, University of California, University of Southern California, California Community Colleges, and the statewide K-12 school system. CENIC’s mission is to facilitate and coordinate the development, deployment, and operation of a set of robust, multi-tiered, advanced network services for this research and education community. www.cenic.org

About Pacific Wave:
Pacific Wave is a joint project between the Corporation for Education Network Initiatives in California (CENIC) and the Pacific Northwest Gigapop (PNWGP), and is operated in collaboration with the University of Washington. Pacific Wave enhances research and education network capabilities by increasing network efficiency, reducing latency, increasing throughput, and reducing costs. www.pacificwave.net

About ESnet:
The Energy Sciences Network (ESnet) is a high-speed network serving thousands of Department of Energy scientists and collaborators worldwide. A pioneer in providing high-bandwidth, reliable connections, ESnet enables researchers at national laboratories, universities, and other institutions to communicate with each other using the collaborative capabilities needed to address some of the world’s most important scientific challenges. Managed and operated by the ESnet staff at Lawrence Berkeley National Laboratory, ESnet provides direct high-bandwidth connections to all major DOE sites, multiple cross connections with Internet2 / Abilene, and connections to Europe via GEANT and to Japan via SuperSINET, as well as fast interconnections to more than 100 other networks. Funded principally by DOE’s Office of Science, ESnet services allow scientists to make effective use of unique DOE research facilities and computing resources, independent of time and geographic location. www.es.net

About Ciena:
Ciena specializes in providing advanced optical and Ethernet networking platforms, intelligent software and professional services to help customers deliver optimized networks that fit strategic and operational requirements. Ciena collaborates with leading institutions to provide the research and education community robust, automated and fully redundant networks that minimize cost and complexity. www.ciena.com

About AMPATH:
Florida International University’s Center for Internet Augmented Research and Assessment (CIARA) has developed an international, high-performance research connection point in Miami, Florida, called AMPATH (AMericasPATH). AMPATH extends participation to underrepresented groups in Latin America and the Caribbean, in science and engineering research and education activities through the use of high-performance network connections. AMPATH is home to the WHREN-LILA high-performance network link connecting Latin America to the U.S., funded by the National Science Foundation (NSF), award #0441095 and the Academic Network of Sao Paulo (award #2003/13708-0). www.ampath.fiu.edu

About the Academic Network of São Paulo (ANSP):
ANSP unites São Paulo’s University networks with Scientific and Technological Research Centers in São Paulo, and is managed by the State of São Paulo Research Foundation (FAPESP). The ANSP Network is another example of international collaboration and exploration. Through its connection to WHREN-LILA, all of the institutions connected to ANSP will be involved in research with U.S. universities and research centers, offering significant contributions and the potential to develop new applications and services. This connectivity with WHREN-LILA and ANSP will allow researchers to enhance the quality of current data, inevitably increasing the quality of new scientific developments. www.ansp.br

About RNP:
RNP, the National Education and Research Network of Brazil, is a not-for-profit company that promotes the innovative use of advanced networking, with the joint support of the Ministry of Science and Technology and the Ministry of Education. In the early 1990s, RNP was responsible for the introduction and adoption of Internet technology in Brazil. Today, RNP operates a nationally deployed multi-gigabit network used for collaboration and communication in research and education throughout the country, reaching all 26 states and the Federal District, and provides both commodity and advanced research Internet connectivity to more than 300 universities, research centers, and technical schools. www.rnp.br

About SouthernLight (SoL):
SouthernLight is the GOLE (GLIF Open Lightpath Exchange) in São Paulo, Brazil, the fruit of a collaboration between ANSP and RNP, and is responsible for providing end-to-end circuits between Brazilian scientific institutions and their international collaborators. Currently, the international connectivity of SouthernLight is provided by a high-speed link to the GOLE located at AMPATH. wiki.glif.is/index.php/SouthernLight

About CANARIE:
CANARIE Inc., based in Ottawa, is Canada’s advanced network organization. It facilitates the development and use of its network as well as the advanced products, applications and services that run on it. The CANARIE Network serves universities, colleges, schools, government labs, research institutes, hospitals and other organizations in a wide variety of fields in both the public and private sectors. The national organization was created in 1993 by the private sector and academia under the leadership of the Government of Canada. CANARIE Inc. is supported by membership fees, with major funding of its programs and activities provided by the Government of Canada through Industry Canada. www.canarie.ca

About KISTI:
KISTI (Korea Institute of Science and Technology Information) is a national institute under the supervision of MOST (Ministry of Science and Technology) of Korea, playing a leading role in building the nationwide infrastructure for advanced application research by linking supercomputing resources with the optical research network KREONet2. The National Supercomputing Center at KISTI is carrying out national e-Science and Grid projects as well as the GLORIAD-KR project, and aims to become the country’s key institution for e-Science and advanced network technologies. www.kisti.re.kr

About Myricom:
Founded in 1994, Myricom (Arcadia, California) is a leading supplier of high-performance networking products, and a pioneer in the development of cluster computing. With their Myri-10G solutions, Myricom achieved a convergence at 10-Gigabit/s data rates between their low-latency Myrinet HPC-cluster technology and mainstream Ethernet. Myri-10G NICs deliver wire-speed TCP/IP throughput at low host-CPU load, and can also operate in firmware-enabled low-latency modes ideal for demanding applications such as HPC clusters, cluster file systems, and UDP streaming for IPTV video. www.myri.com

About Data Direct Networks:
DataDirect Networks, Inc. (DDN) is the data infrastructure provider for the most extreme, content-intensive environments in the world. With more than 160 petabytes installed worldwide, the company’s S2A™ (Silicon Storage Architecture™) technology delivers massive throughput, scalable capacity, consistency, efficiency and data integrity for today’s extremely competitive and evolving markets. Founded in 1998, DDN serves customers through its global partnerships with Dell, IBM, Sony and other industry leaders; and through its offices in Europe, India, Asia Pacific, Japan and throughout the U.S. www.datadirectnet.com

About QLogic:
QLogic is a leading supplier of high performance storage networking solutions including Fibre Channel host bus adapters (HBAs), blade server embedded Fibre Channel switches, Fibre Channel stackable switches, iSCSI HBAs, iSCSI routers and storage services platforms for enabling advanced storage management applications. The company is also a leading supplier of server networking products including InfiniBand host channel adapters that accelerate cluster performance. QLogic products are delivered to small-to-medium businesses and large enterprises around the world via its channel partner community. QLogic products are also powering solutions from leading companies like Cisco, Dell, EMC, Hitachi Data Systems, HP, IBM, NEC, Network Appliance and Sun Microsystems. www.qlogic.com

About XKL:
XKL, located in Redmond, Washington and led by CEO and Cisco Systems co-founder Len Bosack, develops products that epitomize the company’s mission: bringing fundamental change to worldwide telecommunications. XKL’s DarkStar Optical Transmission System (DXM10) was used to transport 10 x 10GE connections from CENIC to Caltech over a single pair of fibers. This small, highly compact system deployed quickly and easily, and more than doubled the available capacity to Caltech resources from the show floor. www.xkl.com

About Force10:
Force10 Networks® is the pioneer in building and securing reliable, high performance networks. With its no compromise approach to networking and advances in high density Gigabit and 10 Gigabit Ethernet switching, routing and security, Force10 delivers the innovative technologies that allow customers to transform their networks into strategic assets at the lowest total cost of ownership. Force10 S2410CP CX4 based ultra low latency 10 Gigabit Ethernet switches were used to aggregate traffic from the dCache 10GE CX4 based pool nodes to the head nodes. Force10 Networks is a registered trademark of Force10 Networks, Inc. www.force10networks.com

About Glimmerglass:
Glimmerglass is the world’s leader in developing and marketing Intelligent Optical Switching solutions for Telecommunications, Enterprise and Government markets. With over 50 customers in Europe, Asia and North America, Glimmerglass has shipped over 37,000 ports of photonic cross-connects with a strong reputation for product quality and reliability. During SC08, a Glimmerglass 48x48 System 100 switch was used to demonstrate self healing through optical fail-over utilizing different WAN paths and was also used to interconnect Ciena 100G optical platforms. www.glimmerglass.org

About the National Science Foundation:
The National Science Foundation (NSF) is an independent federal agency created by Congress in 1950 “to promote the progress of science; to advance the national health, prosperity, and welfare; to secure the national defense…” With an annual budget of about $6.06 billion, NSF is the funding source for approximately 20 percent of all federally supported basic research conducted by America’s colleges and universities. In many fields such as mathematics, computer science and the social sciences, NSF is the major source of federal backing. www.nsf.gov

About the DOE Office of Science:
The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, providing more than 40 percent of total funding for this vital area of national importance. It oversees - and is the principal federal funding agency of - the Nation’s research programs in high-energy physics, nuclear physics, and fusion energy sciences. The Office of Science also manages 10 world-class laboratories, which often are called the “crown jewels” of our national research infrastructure, and it oversees the construction and operation of some of the Nation’s most advanced R&D user facilities, located at national laboratories and universities. These include particle and nuclear physics accelerators, synchrotron light sources, neutron scattering facilities, supercomputers and high-speed computer networks. In the 2007 fiscal year, these facilities were used by more than 21,000 researchers from universities, national laboratories, private industry, and other federal science agencies.

Acknowledgements
The demonstration and the developments leading up to it were made possible through the strong support of the partner network organizations mentioned, the U.S. Department of Energy Office of Science and the National Science Foundation, in cooperation with the funding agencies of the international partners, through the following grants: US LHCNet (DOE DE-FG02-05-ER41359), WAN In Lab (NSF EIA-0303620), UltraLight (NSF PHY-0427110), DISUN (NSF PHY-0533280), CHEPREO/WHREN-LILA (NSF PHY-0312038/OCI-0441095 and FAPESP Projeto 04/14414-2), as well as the NSF-funded PLaNetS (NSF PHY-0622423), FAST TCP and CAIGEE projects, and the US LHC Research Program funded jointly by DOE and NSF.