Open MPI
Open MPI is a Message Passing Interface (MPI) library project combining technologies and resources from several other projects (FT-MPI, LA-MPI, LAM/MPI, and PACX-MPI). It is used by many TOP500 supercomputers, including Roadrunner, which was the world's fastest supercomputer from June 2008 to November 2009, and the K computer, the fastest supercomputer from June 2011 to June 2012.

Overview
Open MPI represents the merger between three well-known MPI implementations:
* FT-MPI from the University of Tennessee
* LA-MPI from Los Alamos National Laboratory
* LAM/MPI from Indiana University
with contributions from the PACX-MPI team at the University of Stuttgart. These four institutions comprise the founding members of the Open MPI development team. The Open MPI developers selected these MPI implementations as excelling in one or more areas. Open MPI aims to use the best ideas and technologies from the individual projects and create one world-class open-source MPI implementation that ...
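To illustrate the kind of program an MPI library such as Open MPI runs, here is a minimal "hello world" sketch in C. It uses only standard MPI calls, nothing specific to Open MPI, and assumes an MPI compiler wrapper (such as mpicc) and launcher (such as mpirun) are available.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                /* start the MPI runtime       */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* id of this process (rank)   */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes   */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                        /* shut down the MPI runtime   */
    return 0;
}

Typically this would be built with something like mpicc hello.c -o hello and launched as mpirun -np 4 ./hello, though exact commands vary by installation.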



Unix
Unix (trademarked as UNIX) is a family of multitasking, multiuser computer operating systems that derive from the original AT&T Unix, whose development started in 1969 at the Bell Labs research center by Ken Thompson, Dennis Ritchie, and others. Initially intended for use inside the Bell System, AT&T licensed Unix to outside parties in the late 1970s, leading to a variety of both academic and commercial Unix variants from vendors including the University of California, Berkeley (Berkeley Software Distribution, BSD), Microsoft (Xenix), Sun Microsystems (SunOS/Solaris), Hewlett-Packard/Hewlett Packard Enterprise (HP-UX), and IBM (AIX). In the early 1990s, AT&T sold its rights in Unix to Novell, which then sold the UNIX trademark to The Open Group, an industry consortium founded in 1996. The Open Group allows the use of the mark for certified operating systems that comply with the Single UNIX Specification (SUS). Unix systems are chara ...



Los Alamos National Laboratory
Los Alamos National Laboratory (often shortened as Los Alamos and LANL) is one of the sixteen research and development laboratories of the United States Department of Energy (DOE), located a short distance northwest of Santa Fe, New Mexico, in the American southwest. Best known for its central role in helping develop the first atomic bomb, LANL is one of the world's largest and most advanced scientific institutions. Los Alamos was established in 1943 as Project Y, a top-secret site for designing nuclear weapons under the Manhattan Project during World War II. The site was variously called Los Alamos Laboratory and Los Alamos Scientific Laboratory. Chosen for its remote yet relatively accessible location, it served as the main hub for conducting and coordinating nuclear research, bringing together some of the world's most famous scientists, among them numerous Nobel Prize winners. The town of Los Alamos, directly north of the lab, grew extensively through this period. After ...


Simple Linux Utility For Resource Management
The Slurm Workload Manager, formerly known as Simple Linux Utility for Resource Management (SLURM), or simply Slurm, is a free and open-source job scheduler for Linux and Unix-like kernels, used by many of the world's supercomputers and computer clusters. It provides three key functions:
* allocating exclusive and/or non-exclusive access to resources (computer nodes) to users for some duration of time so they can perform work,
* providing a framework for starting, executing, and monitoring work, typically a parallel job such as one using the Message Passing Interface (MPI), on a set of allocated nodes, and
* arbitrating contention for resources by managing a queue of pending jobs.
Slurm is the workload manager on about 60% of the TOP500 supercomputers. Slurm uses a best-fit algorithm based on Hilbert curve scheduling or fat tree network topology in order to optimize locality of task assignments on parallel computers.

History
Slurm began development as a collaborative effort primarily b ...


MPICH
MPICH, formerly known as MPICH2, is a freely available, portable implementation of MPI, a standard for message passing for distributed-memory applications used in parallel computing. MPICH is free and open-source software with some public-domain components that were developed by a US governmental organisation, and is available for most flavours of Unix-like operating systems (including Linux and Mac OS X).

History
The Argonne National Laboratory and Mississippi State University jointly developed early versions (MPICH-1) as public domain software. The CH part of the name was derived from "Chameleon", which was a portable parallel programming library developed by William Gropp, one of the founders of MPICH. The original implementation of MPICH (sometimes called "MPICH1") implemented the MPI-1.1 standard. Starting around 2001, work began on a new code base to replace the MPICH1 code and support the MPI-2 standard. Until November 2012, this project was known as "MPICH2". As of November 2012, ...



Groupe Bull
Bull SAS (also known as Groupe Bull, Bull Information Systems, or simply Bull) is a French computer company headquartered in Les Clayes-sous-Bois, in the western suburbs of Paris. The company has also been known at various times as Bull General Electric, Honeywell Bull, CII Honeywell Bull, and Bull HN. Bull was founded in 1931, as H.W. Egli - Bull, to capitalize on the punched card technology patents of Norwegian engineer Fredrik Rosing Bull (1882–1925). After a reorganization in 1933, with new owners coming in, the name was changed to Compagnie des Machines Bull (CMB). Bull has a worldwide presence in more than 100 countries and is particularly active in the defense, finance, health care, manufacturing, public, and telecommunication sectors.

History

Origins
On 31 July 1919, a Norwegian engineer named Fredrik Rosing Bull filed a patent for a "combined sorter-recorder-tabulator of punch cards" machine that he had developed with financing from the Norwegian insurance comp ...



Sun Microsystems
Sun Microsystems, Inc. (Sun for short) was an American technology company that sold computers, computer components, software, and information technology services and created the Java programming language, the Solaris operating system, ZFS, the Network File System (NFS), and SPARC microprocessors. Sun contributed significantly to the evolution of several key computing technologies, among them Unix, RISC processors, thin client computing, and virtualized computing. Notable Sun acquisitions include Cray Business Systems Division, Storagetek, and Innotek GmbH, creators of VirtualBox. Sun was founded on February 24, 1982. At its height, the Sun headquarters were in Santa Clara, California (part of Silicon Valley), on the former west campus of the Agnews Developmental Center. Sun products included computer servers and workstations built on its own RISC-based SPARC processor architecture, as well as on x86-based AMD Opteron and Intel Xeon processors. Sun also developed its own ...



Fork (Software Development)
In software engineering, a project fork happens when developers take a copy of source code from one software package and start independent development on it, creating a distinct and separate piece of software. The term often implies not merely a development branch, but also a split in the developer community; as such, it is a form of schism. Grounds for forking are varying user preferences and stagnated or discontinued development of the original software. Free and open-source software is that which, by definition, may be forked from the original development team without prior permission, and without violating copyright law. However, licensed forks of proprietary software (e.g. Unix) also happen.

Etymology
The word "fork" has been used to mean "to divide in branches, go separate ways" as early as the 14th century. In the software environment, the word evokes the fork system call, which causes a running process to split itself into two (almost) identical copies that (ty ...
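As a minimal sketch of the POSIX fork system call the term evokes (plain C, not tied to any particular project), the running process duplicates itself and both the parent and the near-identical child continue from the point of the call:

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();               /* duplicate the running process      */

    if (pid < 0) {
        perror("fork");               /* fork failed, no child was created  */
        return 1;
    } else if (pid == 0) {
        printf("child:  pid %d\n", (int)getpid());     /* runs in the child */
    } else {
        printf("parent: pid %d, child %d\n", (int)getpid(), (int)pid);
        wait(NULL);                   /* parent waits for the child to exit */
    }
    return 0;
}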


High-performance Computing
High-performance computing (HPC) uses supercomputers and computer clusters to solve advanced computation problems.

Overview
HPC integrates systems administration (including network and security knowledge) and parallel programming into a multidisciplinary field that combines digital electronics, computer architecture, system software, programming languages, algorithms, and computational techniques. HPC technologies are the tools and systems used to implement and create high-performance computing systems. Recently, HPC systems have shifted from supercomputing to computing clusters and grids. Because of the need for networking in clusters and grids, the use of a collapsed network backbone is being promoted for high-performance computing, since a collapsed backbone architecture is simple to troubleshoot and upgrades can be applied to a single router rather than to multiple ones. The term is most commonly associated with computing used for scientific research or ...




Bandwidth (computing)
In computing, bandwidth is the maximum rate of data transfer across a given path. Bandwidth may be characterized as network bandwidth, data bandwidth, or digital bandwidth. This definition of bandwidth is in contrast to the field of signal processing, wireless communications, modem data transmission, digital communications, and electronics, in which bandwidth is used to refer to analog signal bandwidth measured in hertz, meaning the frequency range between the lowest and highest attainable frequency while meeting a well-defined impairment level in signal power. The actual bit rate that can be achieved depends not only on the signal bandwidth but also on the noise on the channel.

Network capacity
The term bandwidth sometimes defines the net bit rate ('peak bit rate', 'information rate', or physical layer 'useful bit rate'), channel capacity, or the maximum throughput of a logical or physical communication path in a digital communication system. For example, bandwidth test ...
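As an illustrative worked example (not from the original text) of how achievable bit rate depends on both signal bandwidth and channel noise, the Shannon-Hartley theorem bounds the error-free capacity C of a channel with analog bandwidth B (in hertz) and signal-to-noise ratio S/N:

C = B \log_2(1 + S/N)

For instance, B = 1 MHz at a 30 dB signal-to-noise ratio (S/N = 1000) gives C = 10^6 \times \log_2(1001), roughly 9.97 Mbit/s, however the link is advertised.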


Latency (engineering)
Latency, from a general point of view, is a time delay between the cause and the effect of some physical change in the system being observed. Lag, as it is known in gaming circles, refers to the latency between the input to a simulation and the visual or auditory response, often occurring because of network delay in online games. Latency is physically a consequence of the limited velocity at which any physical interaction can propagate. The magnitude of this velocity is always less than or equal to the speed of light. Therefore, every physical system with any physical separation (distance) between cause and effect will experience some sort of latency, regardless of the nature of the stimulus to which it has been exposed. The precise definition of latency depends on the system being observed or the nature of the simulation. In communications, the lower limit of latency is determined by the medium being used to transfer information. In reliable two-way communication syst ...
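As a worked example of that lower limit (the figures are illustrative, not from the original text): light in optical fiber propagates at roughly 2 \times 10^8 m/s, about two-thirds of the speed of light in vacuum, so a 3,000 km fiber link has a one-way propagation delay of at least

t = d / v = (3 \times 10^6 m) / (2 \times 10^8 m/s) = 15 ms,

or about 30 ms round trip, before any queuing, serialization, or processing delay is added.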



Open Source Software
Open-source software (OSS) is computer software that is released under a license in which the copyright holder grants users the rights to use, study, change, and distribute the software and its source code to anyone and for any purpose. Open-source software may be developed in a collaborative public manner. Open-source software is a prominent example of open collaboration, meaning any capable user is able to participate online in development, making the number of possible contributors indefinite. The ability to examine the code facilitates public trust in the software. Open-source software development can bring in diverse perspectives beyond those of a single company. A 2008 report by the Standish Group stated that adoption of open-source software models has resulted in savings of about $60 billion per year for consumers. Open source code can be used for studying and allows capable end users to adapt software to their personal needs in a similar way user scripts a ...



Free Software
Free software or libre software is computer software distributed under terms that allow users to run the software for any purpose as well as to study, change, and distribute it and any adapted versions. Free software is a matter of liberty, not price; all users are legally free to do what they want with their copies of free software (including profiting from them) regardless of how much is paid to obtain the program ("Selling Free Software", gnu.org). Computer programs are deemed "free" if they give end users (not just the developer) ultimate control over the software and, subsequently, over their devices. The right to study and modify a computer program entails that ...