In the high-performance computing (HPC) environment, a burst buffer is a fast intermediate storage layer positioned between the front-end computing processes and the back-end storage systems. It bridges the performance gap between the processing speed of the compute nodes and the input/output (I/O) bandwidth of the storage systems. Burst buffers are often built from arrays of high-performance storage devices, such as NVRAM and SSDs, and typically offer one to two orders of magnitude higher I/O bandwidth than the back-end storage systems.


Use cases

Burst buffers accelerate scientific data movement on supercomputers. The life cycle of a scientific application typically alternates between computation phases and I/O phases: after each round of computation (the computation phase), all the computing processes concurrently write their intermediate data to the back-end storage systems (the I/O phase), followed by another round of computation and data movement. With a burst buffer deployed, processes can quickly write their data to the burst buffer after a round of computation instead of writing to the slower hard-disk-based storage systems, and immediately proceed to the next round of computation without waiting for the data to reach the back-end storage; the data are then flushed asynchronously from the burst buffer to the storage systems concurrently with the next round of computation. In this way, the long I/O time spent moving data to the storage systems is hidden behind computation time. Buffering data in the burst buffer also gives applications opportunities to reshape the data traffic to the back-end storage systems, making more efficient use of their bandwidth.

In another common use case, scientific applications stage their intermediate data in and out of the burst buffer without interacting with the slower storage systems at all. Bypassing the storage systems allows applications to realize most of the performance benefit of the burst buffer.
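The overlap of flushing and computation can be illustrated with a short sketch. The following C program is a minimal, hypothetical example (the burst buffer mount point /mnt/bb, the parallel-file-system path /pfs, and the file names are assumptions, not part of any particular system): each step writes its intermediate data to the fast burst buffer and hands the file to a background thread that drains it to the back-end storage while the next round of computation proceeds.

/*
 * Minimal sketch of the compute/flush overlap described above.
 * Assumptions: a node-local burst buffer mounted at /mnt/bb and a
 * parallel file system at /pfs (both paths are hypothetical).
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CHECKPOINT_BYTES (1 << 20)          /* 1 MiB of intermediate data */

/* copy one file from the burst buffer to the back-end parallel file system */
static void *flush_to_backend(void *arg)
{
    const char *bb_path = arg;
    char pfs_path[256];
    snprintf(pfs_path, sizeof pfs_path, "/pfs/%s", strrchr(bb_path, '/') + 1);

    FILE *src = fopen(bb_path, "rb");
    FILE *dst = fopen(pfs_path, "wb");
    if (src && dst) {
        char buf[65536];
        size_t n;
        while ((n = fread(buf, 1, sizeof buf, src)) > 0)
            fwrite(buf, 1, n, dst);
    }
    if (src) fclose(src);
    if (dst) fclose(dst);
    return NULL;
}

int main(void)
{
    char *data = malloc(CHECKPOINT_BYTES);
    char bb_path[256];
    pthread_t flusher;
    int flusher_running = 0;

    for (int step = 0; step < 3; step++) {
        /* computation phase: stand-in for the real numerical kernel */
        memset(data, step, CHECKPOINT_BYTES);

        /* wait for the previous checkpoint to finish draining before
         * the path buffer is reused */
        if (flusher_running) pthread_join(flusher, NULL);

        /* I/O phase: fast write to the node-local burst buffer */
        snprintf(bb_path, sizeof bb_path, "/mnt/bb/ckpt_%03d.dat", step);
        FILE *f = fopen(bb_path, "wb");
        if (f) { fwrite(data, 1, CHECKPOINT_BYTES, f); fclose(f); }

        /* drain to back-end storage asynchronously; the next loop iteration
         * (the next compute phase) proceeds without waiting for it */
        pthread_create(&flusher, NULL, flush_to_backend, bb_path);
        flusher_running = 1;
    }

    if (flusher_running) pthread_join(flusher, NULL);
    free(data);
    return 0;
}

The I/O phase that the application observes is only the fast write to the burst buffer; the slow transfer to the back-end storage runs in the background and is hidden behind the next computation phase.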


Representative burst buffer architectures

There are two representative burst buffer architectures in the high-performance computing environment: the node-local burst buffer and the remote shared burst buffer. In the node-local burst buffer architecture, burst buffer storage is located on each individual compute node, so the aggregate burst buffer bandwidth grows linearly with the number of compute nodes. This scalability benefit has been well documented in recent literature. It also comes with the need for a scalable metadata management strategy to maintain a global namespace for data distributed across all the burst buffers. In the remote shared burst buffer architecture, burst buffer storage resides on a smaller number of I/O nodes positioned between the compute nodes and the back-end storage systems, so data movement between the compute nodes and the burst buffer must go through the network. Placing the burst buffer on dedicated I/O nodes facilitates independent development, deployment and maintenance of the burst buffer service, and several commercial software products have been developed to manage this type of burst buffer, such as DataWarp and Infinite Memory Engine. As supercomputers are deployed with multiple heterogeneous burst buffer layers, such as NVRAM on the compute nodes and SSDs on dedicated I/O nodes, there is a need to transparently move data across the storage layers.
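As an illustration of the metadata challenge in the node-local architecture, the sketch below shows one simple, purely hypothetical placement strategy: hashing each path in the global namespace to the node that owns its metadata. Real systems such as DataWarp or Infinite Memory Engine use their own schemes; the hash function and node count here are assumptions for the example only.

#include <stdint.h>
#include <stdio.h>

/* 64-bit FNV-1a hash of a path string */
static uint64_t fnv1a(const char *s)
{
    uint64_t h = 14695981039346656037ULL;
    while (*s) {
        h ^= (unsigned char)*s++;
        h *= 1099511628211ULL;
    }
    return h;
}

/* map a path in the global namespace to the burst buffer node that
 * holds its metadata (the node count is an arbitrary example value) */
static int metadata_owner(const char *path, int num_nodes)
{
    return (int)(fnv1a(path) % (uint64_t)num_nodes);
}

int main(void)
{
    const char *files[] = { "/run42/ckpt_000.dat",
                            "/run42/ckpt_001.dat",
                            "/run42/mesh.h5" };
    for (int i = 0; i < 3; i++)
        printf("%-22s -> metadata on node %d\n",
               files[i], metadata_owner(files[i], 512));
    return 0;
}

With a scheme of this kind, any node can locate a file's metadata with a single local hash computation, so the namespace lookup itself does not become a bottleneck as the compute node count grows.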


Supercomputers deployed with burst buffer

Due to its importance, the burst buffer has been widely deployed on leadership-scale supercomputers. For example, node-local burst buffers have been installed on the DASH supercomputer at the San Diego Supercomputer Center, the Tsubame supercomputers at the Tokyo Institute of Technology, the Theta and Aurora supercomputers at the Argonne National Laboratory, the Summit supercomputer at the Oak Ridge National Laboratory, and the Sierra supercomputer at the Lawrence Livermore National Laboratory. Remote shared burst buffers have been adopted by the Tianhe-2 supercomputer at the National Supercomputer Center in Guangzhou, the Trinity supercomputer at the Los Alamos National Laboratory, the Cori supercomputer at the Lawrence Berkeley National Laboratory, and the ARCHER2 supercomputer at the Edinburgh Parallel Computing Centre (EPCC).




External links


Cray DataWarp
a production burst buffer system developed by Cray.
Infinite Memory Engine
a production burst buffer system developed by DataDirect Networks.
Theta supercomputer
a supercomputer hosted at the Argonne National Laboratory.
Summit supercomputer
a supercomputer hosted at the Oak Ridge National Laboratory.
Sierra supercomputer
a supercomputer hosted at the Lawrence Livermore National Laboratory.
Trinity supercomputer
a supercomputer hosted at the Los Alamos National Laboratory.
Cori supercomputer
a supercomputer hosted at the Lawrence Berkeley National Laboratory.