HPX
HPX, short for High Performance ParalleX, is a runtime system for high-performance computing. It is currently under active development by the STE||AR group at Louisiana State University. Focused on scientific computing, it provides an alternative execution model to conventional approaches such as MPI. HPX aims to overcome the challenges MPI faces on increasingly large supercomputers by using asynchronous communication between nodes and lightweight control objects instead of global barriers, allowing application developers to exploit fine-grained parallelism. HPX is developed in idiomatic C++ and released as open source under the Boost Software License, which allows usage in commercial applications.

Applications
Though designed as a general-purpose environment for high-performance computing, HPX has primarily been used in:
* Astrophysics simulations, including the N-body problem, neutron star evolution, and the merging of stars
** Octo-Tiger, an astrophysics application simulating t ...
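As a minimal sketch of the futures-based model that HPX generalizes, the following uses only standard C++ (std::async and std::future); HPX provides analogous hpx::async and hpx::future facilities with a similar interface. The data size and function names here are illustrative, not taken from HPX itself.

    // Sum a large array in two asynchronous tasks; synchronization happens
    // per result (future::get) rather than through a global barrier.
    #include <cstddef>
    #include <future>
    #include <iostream>
    #include <numeric>
    #include <vector>

    long partial_sum(const std::vector<int>& v, std::size_t lo, std::size_t hi) {
        return std::accumulate(v.begin() + lo, v.begin() + hi, 0L);
    }

    int main() {
        std::vector<int> data(1000000, 1);

        // Launch two tasks; neither blocks the caller until its result is needed.
        auto f1 = std::async(std::launch::async, partial_sum,
                             std::cref(data), std::size_t{0}, data.size() / 2);
        auto f2 = std::async(std::launch::async, partial_sum,
                             std::cref(data), data.size() / 2, data.size());

        std::cout << f1.get() + f2.get() << '\n';  // prints 1000000
    }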


Hemopexin
Hemopexin (or haemopexin; Hpx; Hx), also known as beta-1B-glycoprotein, is a glycoprotein that in humans is encoded by the ''HPX'' gene and belongs to the hemopexin family of proteins. Hemopexin is the plasma protein with the highest binding affinity for heme. Hemoglobin circulating alone in the blood plasma (called ''free hemoglobin'', as opposed to the hemoglobin situated in and circulating with the red blood cell) is soon oxidized into methemoglobin, which then further dissociates into ''free'' heme and globin chains. The free heme is then oxidized into free met-heme, and eventually hemopexin binds the free met-heme, forming a complex of met-heme and hemopexin, which continues through the circulation until reaching a receptor, such as CD91, on hepatocytes or macrophages within the spleen, liver and bone marrow. Hemopexin's arrival and subsequent binding to the free heme not only prevent heme's pro-oxidant and pr ...


Scientific Computing
Computational science, also known as scientific computing or scientific computation (SC), is a field in mathematics that uses advanced computing capabilities to understand and solve complex problems. It is an area of science that spans many disciplines, but at its core it involves the development of models and simulations to understand natural systems. Its core components include:
* Algorithms (numerical and non-numerical): mathematical models, computational models, and computer simulations developed to solve science (e.g., biological, physical, and social), engineering, and humanities problems
* Computer hardware that develops and optimizes the advanced system hardware, firmware, networking, and data management components needed to solve computationally demanding problems
* The computing infrastructure that supports both the science and engineering problem solving and the developmental computer and information science
In practical use, it is typically the application of computer simulation and other fo ...
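Where the entry above mentions models and simulations, a toy example may help. The sketch below (constants chosen for illustration only) integrates the decay model dN/dt = -λN with the forward Euler method and compares the result against the exact solution:

    #include <cmath>
    #include <cstdio>

    int main() {
        double N = 1000.0;         // model state: number of nuclei at t = 0
        const double lambda = 0.5; // decay constant per unit time
        const double dt = 0.01;    // time step
        for (int step = 0; step < 200; ++step)  // integrate to t = 2.0
            N -= lambda * N * dt;  // Euler update of the model
        // Exact solution for comparison: N0 * exp(-lambda * t)
        std::printf("simulated N(2) = %.2f, exact = %.2f\n",
                    N, 1000.0 * std::exp(-1.0));
    }

Even this tiny model shows the pattern common to computational science: a mathematical model, a discretization, and a comparison against known behavior.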


Peridynamics
Peridynamics is a formulation of continuum mechanics that is oriented toward deformations with discontinuities, especially fractures.

Purpose
The peridynamic theory is based on integral equations, in contrast with the classical theory of continuum mechanics, which is based on partial differential equations. Since partial derivatives do not exist on crack surfaces and other singularities, the classical equations of continuum mechanics cannot be applied directly when such features are present in a deformation. The integral equations of the peridynamic theory can be applied directly, because they do not require partial derivatives. The ability to apply the same equations directly at all points in a mathematical model of a deforming structure helps the peridynamic approach avoid the need for the special techniques of fracture mechanics. For example, in peridynamics, there is no need for a separate crack growth law based on a stress intensity factor.

Definition and basic termin ...
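To make the contrast with partial differential equations concrete: in the bond-based form of the theory, the peridynamic equation of motion at a point x is

    \rho(x)\,\ddot{u}(x,t) = \int_{H_x} f\bigl(u(x',t) - u(x,t),\; x' - x\bigr)\,dV_{x'} + b(x,t)

where u is the displacement field, H_x is a finite neighborhood (the horizon) of x, f is the pairwise bond-force function, and b is the external body-force density. No spatial derivatives of u appear, which is why the equation remains well defined on a crack surface.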


Neutron Star
A neutron star is the collapsed core of a massive supergiant star, which had a total mass of between 10 and 25 solar masses, possibly more if the star was especially metal-rich. Except for black holes and some hypothetical objects (e.g. white holes, quark stars, and strange stars), neutron stars are the smallest and densest currently known class of stellar objects. Neutron stars have a radius on the order of 10 kilometres (6 mi) and a mass of about 1.4 solar masses. They result from the supernova explosion of a massive star, combined with gravitational collapse, that compresses the core past white dwarf star density to that of atomic nuclei. Once formed, they no longer actively generate heat, and cool over time; however, they may still evolve further through collision or accretion. Most of the basic models for these objects imply that neutron stars are composed almost entirely of neutrons (subatomic particles with no net electrical charge and with slightly larger mass than protons); the electro ...
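As a rough back-of-the-envelope check of the figures above (an illustrative calculation, not from the source): a mass of 1.4 solar masses within a 10 km radius corresponds to a mean density of

    \bar{\rho} = \frac{M}{\frac{4}{3}\pi R^3} \approx \frac{1.4 \times 1.989 \times 10^{30}\ \mathrm{kg}}{\frac{4}{3}\pi\,(10^{4}\ \mathrm{m})^{3}} \approx 7 \times 10^{17}\ \mathrm{kg/m^3}

which is indeed comparable to the density of atomic nuclei, on the order of 2 × 10^17 kg/m^3.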


N-body Problem
In physics, the n-body problem is the problem of predicting the individual motions of a group of celestial objects interacting with each other gravitationally. (Leimanis and Minorsky: our interest is with Leimanis, who first discusses some history of the n-body problem, especially Ms. Kovalevskaya's 1868–1888 twenty-year complex-variables approach and its failure; Section 1, "The Dynamics of Rigid Bodies and Mathematical Exterior Ballistics" (Chapter 1, "The motion of a rigid body about a fixed point (Euler and Poisson equations)"; Chapter 2, "Mathematical Exterior Ballistics"), gives good precursor background to the n-body problem; Section 2, "Celestial Mechanics" (Chapter 1, "The Uniformization of the Three-body Problem (Restricted Three-body Problem)"; Chapter 2, "Capture in the Three-Body Problem"; Chapter 3, "Generalized n-body Problem").) Solving this problem has been motivated by the desire to understand the motions of the Sun, Moon, planets, and visible stars. In the 20th century, unde ...
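In Newtonian form, the n-body problem asks for the trajectories r_i(t) of n point masses m_i obeying the coupled equations

    m_i\,\ddot{\mathbf{r}}_i = \sum_{j \neq i} \frac{G\,m_i m_j\,(\mathbf{r}_j - \mathbf{r}_i)}{\lVert \mathbf{r}_j - \mathbf{r}_i \rVert^{3}}, \qquad i = 1, \dots, n

where G is the gravitational constant. For n ≥ 3 the system admits no general solution in closed form, which is why numerical simulation (as in the HPX astrophysics applications above) is the standard approach.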


Fine-grained
Granularity (also called graininess), the condition of existing in granules or grains, refers to the extent to which a material or system is composed of distinguishable pieces. It can refer either to the extent to which a larger entity is subdivided, or to the extent to which groups of smaller indistinguishable entities have joined together to become larger distinguishable entities.

Precision and ambiguity
Coarse-grained materials or systems have fewer, larger discrete components than fine-grained materials or systems.
* A coarse-grained description of a system regards large subcomponents.
* A fine-grained description regards smaller components of which the larger ones are composed.
The concepts of granularity, coarseness, and fineness are relative, and are used when comparing systems or descriptions of systems. An example of increasingly fine granularity: a list of nations in the United Nations, a list of all states/provinces in those nations, a list of all cities in those states, ...




Asynchronous I/O
In computer science, asynchronous I/O (also non-sequential I/O) is a form of input/output processing that permits other processing to continue before the transmission has finished. A name used for asynchronous I/O in the Windows API is overlapped I/O. Input and output (I/O) operations on a computer can be extremely slow compared to the processing of data. An I/O device can incorporate mechanical devices that must physically move, such as a hard drive seeking a track to read or write; this is often orders of magnitude slower than the switching of electric current. For example, during a disk operation that takes ten milliseconds to perform, a processor that is clocked at one gigahertz could have performed ten million instruction-processing cycles. A simple approach to I/O would be to start the access and then wait for it to complete. But such an approach (called synchronous I/O, or blocking I/O) would block the progress of a program while the communication is in progress, leaving ...
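A minimal sketch of the overlap the entry describes, using standard C++ threads as a stand-in (true asynchronous I/O would use an OS facility such as POSIX aio or Windows overlapped I/O; the 10 ms sleep models the disk operation from the example above):

    #include <chrono>
    #include <future>
    #include <iostream>
    #include <thread>

    int main() {
        // Start the "I/O": a task that takes ~10 ms, like one disk access.
        auto io = std::async(std::launch::async, [] {
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
            return 42;  // stand-in for data read from the device
        });

        // Keep the processor busy instead of blocking on the result.
        long iterations = 0;
        while (io.wait_for(std::chrono::seconds(0)) != std::future_status::ready)
            ++iterations;  // useful work would go here

        std::cout << "result " << io.get()
                  << " after " << iterations << " loop iterations\n";
    }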


Supercomputer
A supercomputer is a computer with a high level of performance as compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). Since 2017, there have existed supercomputers which can perform over 10^17 FLOPS (a hundred quadrillion FLOPS, 100 petaFLOPS or 100 PFLOPS). For comparison, a desktop computer has performance in the range of hundreds of gigaFLOPS (10^11) to tens of teraFLOPS (10^13). Since November 2017, all of the world's 500 fastest supercomputers run on Linux-based operating systems. Additional research is being conducted in the United States, the European Union, Taiwan, Japan, and China to build faster, more powerful and technologically superior exascale supercomputers. Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in var ...


Louisiana State University
Louisiana State University (officially Louisiana State University and Agricultural and Mechanical College, commonly referred to as LSU) is a public land-grant research university in Baton Rouge, Louisiana. The university was founded in 1860 near Pineville, Louisiana, under the name Louisiana State Seminary of Learning & Military Academy. The current LSU main campus was dedicated in 1926 and consists of more than 250 buildings constructed in the style of the Italian Renaissance architect Andrea Palladio; the main campus historic district occupies a plateau on the banks of the Mississippi River. LSU is the flagship school of the state of Louisiana, as well as the flagship institution of the Louisiana State University System, and is the most comprehensive university in Louisiana. In 2021, the university enrolled over 28,000 undergraduate and more than 4,500 graduate students in 14 schools and colleges. Several of LSU's graduate schools, such as the E. J. Ourso College of Business ...


Center For Computation And Technology
The Center for Computation and Technology (CCT) is an interdisciplinary research center located on the campus of Louisiana State University in Baton Rouge, Louisiana. In 2003, the Center for Applied Information Technology and Learning (LSU CAPITAL) was integrated as a full research center on LSU's campus as part of the Governor's Vision 2020 plan, and then renamed the Center for Computation & Technology. CCT's first director was Ed Seidel. Seidel led the CCT from 2003 to 2008, then accepted a position as director of the National Science Foundation's Office of Cyberinfrastructure (OCI). CCT faculty members Stephen David Beck and Jorge Pullin served as Interim Co-directors from 2008 to 2010. In December 2010, Joel Tohline, the interim director of the original LSU CAPITAL, was named CCT director. Other faculty and executive staff members at the CCT included Gabrielle Allen, computer scientist and co-creator of the Cactus Framework; Thomas Sterling, former NASA scientist and co-creato ...


High-performance Computing
High-performance computing (HPC) uses supercomputers and computer clusters to solve advanced computation problems.

Overview
HPC integrates systems administration (including network and security knowledge) and parallel programming into a multidisciplinary field that combines digital electronics, computer architecture, system software, programming languages, algorithms and computational techniques. HPC technologies are the tools and systems used to implement and create high-performance computing systems. Recently, HPC systems have shifted from supercomputing to computing clusters and grids. Because clusters and grids depend on networking, high-performance computing technologies are increasingly built around a collapsed network backbone, since the collapsed backbone architecture is simple to troubleshoot and upgrades can be applied to a single router rather than to multiple ones. The term is most commonly associated with computing used for scientific research or ...