
The Advanced Simulation and Computing Program (ASC) is a supercomputing program run by the National Nuclear Security Administration to simulate, test, and maintain the United States nuclear stockpile. The program was created in 1995 to support the Stockpile Stewardship Program (SSP). The goal of the initiative is to extend the lifetime of the aging stockpile.


History

After the United States' 1992 moratorium on live nuclear testing, the Stockpile Stewardship Program was created to find a way to test and maintain the nuclear stockpile without detonations. In response, the National Nuclear Security Administration began simulating nuclear warheads on supercomputers. As the stockpile ages, the simulations have become more complex, and maintaining the stockpile requires more computing power. Over the years, in line with Moore's law (the observation that the number of transistors in an integrated circuit doubles about every two years), the ASC program has fielded a series of increasingly powerful supercomputers to run these simulations. In celebration of 25 years of ASC accomplishments, the Advanced Simulation and Computing Program has published a report on its history.
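The doubling trend described by Moore's law can be illustrated with a short calculation. This is an illustrative sketch only; the starting transistor count and time span below are hypothetical, not ASC data:

```python
def projected_transistors(initial_count: float, years: float,
                          doubling_period: float = 2.0) -> float:
    """Project transistor count under Moore's law: the count
    doubles once every `doubling_period` years."""
    return initial_count * 2 ** (years / doubling_period)

# Hypothetical example: a chip with 1 million transistors,
# projected 10 years out (5 doublings -> 32x growth).
print(projected_transistors(1e6, 10))  # 32000000.0
```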


Research

The majority of ASC's research is done on supercomputers at three laboratories. The calculations are then independently verified by hand.


Laboratories

The ASC program has three laboratories:
* Sandia National Laboratories
* Los Alamos National Laboratory
* Lawrence Livermore National Laboratory


Computing


Current supercomputers

The ASC program currently houses numerous supercomputers ranked on the TOP500 list, which ranks the 500 most powerful non-distributed computer systems in the world and is updated twice a year. For the latest list of NNSA machines, see https://top500.org/lists/top500/. Although these computers are housed at separate laboratories, remote computing has been established between the three main laboratories.


Previous supercomputers

* ASCI Purple
* Red Storm
* Blue Gene/L: World's fastest supercomputer, November 2004 – November 2007
* Blue Gene/Q (Sequoia)
* ASCI Q: Installed in 2003, an AlphaServer SC45/GS cluster that reached 7.727 teraflops. ASCI Q used DEC Alpha 1250 MHz (2.5 GFLOPS) processors and a Quadrics interconnect, and placed as the second-fastest supercomputer in the world in 2003.
* ASCI White: World's fastest supercomputer, November 2000 – November 2001
* ASCI Blue Mountain
* ASCI Blue Pacific
* ASCI Red: World's fastest supercomputer, June 1997 – June 2000
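The per-processor figure quoted for ASCI Q follows from clock speed and floating-point throughput. Assuming the Alpha processor retires 2 floating-point operations per cycle (stated here as an assumption, a common figure for processors of that era), the arithmetic is:

```python
def peak_gflops(clock_mhz: float, flops_per_cycle: int) -> float:
    """Theoretical peak rate in GFLOPS: clock (MHz) * FLOPs/cycle / 1000."""
    return clock_mhz * flops_per_cycle / 1000.0

# ASCI Q's DEC Alpha: 1250 MHz, assumed 2 FLOPs per cycle.
print(peak_gflops(1250, 2))  # 2.5
```

Note that a machine's measured benchmark performance (such as ASCI Q's 7.727 teraflops) is always below the theoretical peak of all processors combined.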


Newsletter

The ASC program publishes a quarterly newsletter describing many of its research accomplishments and hardware milestones.


Elements

Within the ASC program there are six subdivisions, each with its own role in extending the life of the stockpile.


Facility Operations and User Support

The Facility Operations and User Support subdivision is responsible for the physical computers, facilities, and computing network within ASC. It ensures that the tri-lab network, computing storage space, power usage, and customer computing resources all meet requirements.


Computational Systems and Software Environment

The Computational Systems and Software Environment subdivision is responsible for creating and maintaining supercomputer software according to NNSA's standards. It also handles data, networking, and software tools. The ASCI Path Forward project substantially funded the initial development of the Lustre parallel file system from 2001 to 2004.


Verification and Validation

The Verification and Validation subdivision is responsible for mathematically verifying the simulations and their outcomes. It also helps software engineers write more precise code to reduce the margin of error in the computations.
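Verification of this kind is commonly demonstrated by checking a numerical method against a problem with a known analytical answer. The following is an illustrative sketch (not an ASC code): a trapezoidal integration of sin(x) on [0, π], whose exact value is 2, with the discrepancy reported as the verification error:

```python
import math

def trapezoid(f, a: float, b: float, n: int) -> float:
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

# Verification check: the integral of sin(x) over [0, pi] is exactly 2.
approx = trapezoid(math.sin, 0.0, math.pi, 1000)
error = abs(approx - 2.0)
print(f"approx={approx:.6f}, error={error:.2e}")  # error shrinks as n grows
```

Refining the discretization (larger n) and confirming that the error shrinks at the expected rate is the essence of verification: it checks that the code solves its equations correctly, independent of whether those equations model reality well (which is validation).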


Physics and Engineering Models

The Physics and Engineering Models subdivision is responsible for the mathematical and physical modeling of nuclear weapons. It integrates physics models into the simulation codes to produce more accurate simulations, modeling how a nuclear weapon will behave under given conditions. The subdivision also studies nuclear properties, vibrations, high explosives, advanced hydrodynamics, material strength and damage, thermal and fluid response, and radiation and electrical responses.


Integrated Codes

The Integrated Codes subdivision is responsible for the simulation codes that run on the supercomputers. It presents the codes' results in a form understandable to humans. These results are then used by the National Nuclear Security Administration, the Stockpile Stewardship Program, the Life Extension Program, and Significant Finding Investigations to decide the next steps needed to secure and extend the life of the nuclear stockpile.


Advanced Technology Development and Mitigation

The Advanced Technology Development and Mitigation subdivision is responsible for researching developments in high-performance computing. Once information about the next generation of high-performance computing becomes available, it decides which software and hardware must be adapted to prepare for the next generation of computers.

