Caltech Booth 428. SLAC/FNAL Booth 302
Official UltraLight Page with full details of our winning entry.
We reached a peak speed of 151 Gbits/sec aggregated across the waves connected to the CACR and SLAC/FNAL Booths. We sustained > 100 Gbits/sec for more than four hours, equivalent to transferring over one PetaByte in a day. The network traffic was generated by a rich mixture of real physics applications, including bbcp, xrootd, Clarens, ROOT, GridFTP and dCache transfers. We used FAST TCP predominantly, as well as vanilla TCP stacks.
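As a quick check of that equivalence, the arithmetic behind the "one PetaByte in a day" figure is sketched below; the only input is the 100 Gbits/sec sustained rate quoted above.

```python
# Rough arithmetic behind the "over one PetaByte in a day" equivalence.
rate_gbps = 100                     # sustained rate from the text, Gbits/sec
seconds_per_day = 24 * 60 * 60      # 86,400 s

bits_per_day = rate_gbps * 1e9 * seconds_per_day   # ~8.64e15 bits
bytes_per_day = bits_per_day / 8                    # ~1.08e15 bytes

print(f"{bytes_per_day / 1e15:.2f} PB per day at {rate_gbps} Gbits/sec")
# -> about 1.08 PB/day, i.e. just over one PetaByte
```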
Harvey's presentation, giving full details of the event and partners, is here (~10MByte).
Movie 1: Walking through the CACR booth, and looking behind the BWC racks
Movie 2: Showing activity in the booth during the BWC measurement on Wednesday evening
Movie 3: A short clip taken at the start of Harvey's presentation at the HP Booth
Movie 4: Iosif tries the Segway
The Caltech-CERN-Florida-FNAL-Michigan-Manchester-SLAC entry will demonstrate high-speed transfers of physics data between the host labs and collaborating institutes in the USA and worldwide. Caltech and FNAL are major participants in the CMS collaboration at CERN’s Large Hadron Collider (LHC). SLAC is the host of the BaBar collaboration. Using state-of-the-art WAN infrastructure and Grid-based Web Services built on the LHC Tiered Architecture, our demonstration will show real-time particle event analysis requiring transfers of Terabyte-scale datasets.
We propose to saturate at least fifteen lambdas at Seattle, full duplex
(potentially over 300 Gbps of scientific data).
The lambdas will carry traffic between SLAC, Caltech and other partner Grid
Service sites including UKLight, UERJ, FNAL and AARNet.
We will monitor the WAN performance using Caltech's MonALISA agent-based system.
The analysis software will use a suite of Grid-enabled Analysis tools developed
at Caltech and University of Florida. There will be a realistic mixture of
streams: those due to the transfer of the TeraByte event datasets, and those due
to a set of background flows of varied character absorbing the remaining
capacity. The intention is to simulate the environment in which distributed
physics analysis will be carried out at the LHC. We expect to easily beat our
SC2004 record of ~100 Gbits/sec (roughly equivalent to downloading 1000 DVDs in
less than an hour).
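To illustrate what one of those background flows amounts to, here is a minimal sketch (not the tooling used in the demonstration) of a memory-to-memory TCP sender that soaks up spare wave capacity and reports the rate it achieved; the receiver host name and port are placeholders.

```python
# Minimal sketch, not the actual BWC tooling: a memory-to-memory TCP sender of
# the kind used as a "background flow" to absorb remaining wave capacity.
import socket
import time

def background_flow(host: str, port: int, seconds: float = 60.0) -> float:
    """Push zero-filled 1 MB chunks to (host, port) for roughly `seconds`,
    returning the achieved rate in Gbits/sec."""
    chunk = b"\0" * (1 << 20)
    sent = 0
    start = time.time()
    with socket.create_connection((host, port)) as conn:
        # Request a large send buffer; the real demonstrations tuned TCP
        # windows (FAST TCP versus vanilla stacks) far more carefully.
        conn.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 8 << 20)
        while time.time() - start < seconds:
            conn.sendall(chunk)
            sent += len(chunk)
    elapsed = time.time() - start
    return sent * 8 / elapsed / 1e9

if __name__ == "__main__":
    # "receiver.example.org" stands in for a discard-style sink at a partner site.
    print(f"{background_flow('receiver.example.org', 5001):.2f} Gbits/sec")
```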
Discovering the Higgs and
SuperSymmetry - with a Global Grid
Worldwide collaborations of physicists working together
CERN's Large Hadron Collider experiments: Data/Compute/Network Intensive
Les Cottrell's site: http://www-iepm.slac.stanford.edu/monitoring/bulk/sc2005/hiperf.html
Yang Xia's site: http://www.its.caltech.edu/~yxia/sc2005/
Animated Logo Display: sc2005/SC2005-BWC-Logos.ppt
Handout: (Word)
Booth layout:
Anticipated flows:
GAE Poster:
LHC Data Grid Hierarchy:
COJAC Event:
Black Hole Event (Lucas Taylor/Ianna Osborne):
Some preparations with bbcp:
Presentation at the UltraLight meeting regarding WAN file transfers.
Setting up: