Allinea Distributed Debugging Tool
Linaro DDT is a commercial C, C++ and Fortran 90 debugger. It is widely used for debugging parallel Message Passing Interface (MPI) and threaded (pthread or OpenMP) programs, including those running on clusters of Linux machines. It is used to find bugs on both small and large clusters, from 1 to 100,000s of processors. It features memory debugging, which detects memory leaks and reads or writes beyond the bounds of arrays. It was the first debugger able to debug petascale applications, having been used to debug applications running concurrently on 220,000 processes on a Cray XT5 at Oak Ridge National Laboratory. This is possible interactively because the debugger's control-tree architecture gives logarithmic performance for most collective operations. Linaro DDT uses the GNU Debugger as its debug engine. It also supports coprocessor architectures such as Intel Xeon Phi coprocessors and Nvidia CUDA GPUs. It is part of Linaro Forge - a suite of tools f ...
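As a rough illustration of the class of defect this combination targets, here is a minimal MPI program (file name, buffer size and rank logic are illustrative, not taken from any DDT material) in which some ranks write past the end of a heap buffer - the kind of out-of-bounds access a memory-debugging run is meant to flag while the program executes across many processes.

    /* bounds_bug.c - illustrative only: a per-rank buffer sized by a
     * hard-coded constant, written past its end on higher ranks.
     * A run with memory debugging enabled would flag the invalid write;
     * a plain run may appear to work. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int *counts = malloc(4 * sizeof(int));   /* assumes at most 4 ranks */
        for (int i = 0; i <= rank; i++)          /* ranks 4+ write past the end */
            counts[i] = i;

        printf("rank %d of %d wrote %d entries\n", rank, size, rank + 1);

        free(counts);
        MPI_Finalize();
        return 0;
    }

Compiled with mpicc and launched with, say, mpirun -n 8, ranks 4 and above write beyond the 4-element buffer; attaching a parallel debugger with memory debugging enabled to such a run is the workflow the paragraph above describes.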


Linaro
Linaro Limited is an engineering organization that works on free and open-source software such as the Linux kernel, the GNU Compiler Collection (GCC), QEMU, power management, graphics and multimedia interfaces for the ARM family of instruction sets and implementations thereof, as well as for the Heterogeneous System Architecture (HSA). The company provides a collaborative engineering forum for companies to share engineering resources and funding to solve common problems in ARM software. In addition to this forum, Linaro works with companies on a one-to-one basis through its Services division. Linaro works on software that is close to the silicon, such as the kernel, multimedia, power management, graphics and security. The company aims to provide stable, tested tools and code for multiple software distributions to use, reducing low-level fragmentation of embedded Linux software. It also provides engineering and investment in upstream open source pr ...


Message Passing Interface
The Message Passing Interface (MPI) is a portable message-passing standard designed to function on parallel computing architectures. The MPI standard defines the syntax and semantics of library routines that are useful to a wide range of users writing portable message-passing programs in C, C++, and Fortran. There are several open-source MPI implementations, which fostered the development of a parallel software industry, and encouraged development of portable and scalable large-scale parallel applications.

History
The message passing interface effort began in the summer of 1991 when a small group of researchers started discussions at a mountain retreat in Austria. Out of that discussion came a Workshop on Standards for Message Passing in a Distributed Memory Environment, held on April 29–30, 1992 in Williamsburg, Virginia. Attendees at Williamsburg discussed the basic features essential to a standard message-passing interface and established a working group to continu ...
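To make the "library routines" concrete, here is a minimal point-to-point exchange in C using two of the standard's core calls, MPI_Send and MPI_Recv; the file name and payload are illustrative.

    /* ping.c - minimal point-to-point exchange using routines defined by
     * the MPI standard (illustrative; run with e.g. "mpirun -n 2 ./ping"). */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            int payload = 42;
            MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* to rank 1, tag 0 */
        } else if (rank == 1) {
            int payload;
            MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);                             /* from rank 0 */
            printf("rank 1 received %d\n", payload);
        }

        MPI_Finalize();
        return 0;
    }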



Graphics Processing Unit
A graphics processing unit (GPU) is a specialized electronic circuit designed for digital image processing and to accelerate computer graphics, being present either as a discrete video card or embedded on motherboards, mobile phones, personal computers, workstations, and game consoles. GPUs were later found to be useful for non-graphic calculations involving embarrassingly parallel problems due to their parallel structure. The ability of GPUs to rapidly perform vast numbers of calculations has led to their adoption in diverse fields including artificial intelligence (AI), where they excel at handling data-intensive and computationally demanding tasks. Other non-graphical uses include the training of neural networks and cryptocurrency mining.

History

1970s
Arcade system boards have used specialized graphics circuits since the 1970s. In early video game hardware, RAM for frame buffers was expensive, so video chips composited data together as the display was being scann ...



Nvidia
Nvidia Corporation is an American multinational corporation and technology company headquartered in Santa Clara, California, and incorporated in Delaware. Founded in 1993 by Jensen Huang (president and CEO), Chris Malachowsky, and Curtis Priem, it designs and supplies graphics processing units (GPUs), application programming interfaces (APIs) for data science and high-performance computing, and system on a chip units (SoCs) for mobile computing and the automotive market. Nvidia is also a leading supplier of artificial intelligence (AI) hardware and software. Nvidia outsources the manufacturing of the hardware it designs. Nvidia's professional line of GPUs is used for edge-to-cloud computing and in supercomputers and workstations for applications in fields such as architecture, engineering and construction, media and entertainment, automotive, scientific research, and manufacturing design. Its GeForce line of GPUs is aimed at the consumer market and is used in ap ...



Intel Xeon Phi Coprocessor
Xeon Phi is a discontinued series of x86 manycore processors designed and made by Intel. It was intended for use in supercomputers, servers, and high-end workstations. Its architecture allowed use of standard programming languages and application programming interfaces (APIs) such as OpenMP. Xeon Phi launched in 2010. Since it was originally based on an earlier GPU design (codenamed "Larrabee") by Intel that was cancelled in 2009, it shared application areas with GPUs. The main difference between Xeon Phi and a GPGPU like Nvidia Tesla was that Xeon Phi, with an x86-compatible core, could, with less modification, run software that was originally targeted to a standard x86 CPU. Xeon Phi was initially offered as PCI Express-based add-on cards; a second-generation product, codenamed "Knights Landing", was announced in June 2013. These second-generation chips could be used as a standalone CPU, rather than just as an add-in card. In June 2013, the Tianhe-2 supercomputer at the National S ...
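The "standard languages and APIs" point is the key contrast with contemporary GPUs: ordinary OpenMP C code like the sketch below (a generic example, not Intel sample code) could in principle be recompiled for the manycore part without source changes.

    /* saxpy_omp.c - ordinary OpenMP C code of the kind that could target
     * a standard x86 CPU or a manycore part without modification
     * (illustrative; compile with e.g. "cc -fopenmp saxpy_omp.c"). */
    #include <omp.h>
    #include <stdio.h>

    #define N 1000000

    int main(void)
    {
        static float x[N], y[N];
        const float a = 2.0f;

        for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

        /* iterations are independent, so OpenMP can spread them across
           however many hardware threads the target machine offers */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            y[i] = a * x[i] + y[i];

        printf("y[0] = %f, threads available: %d\n", y[0], omp_get_max_threads());
        return 0;
    }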


GNU Debugger
The GNU Debugger (GDB) is a portable debugger that runs on many Unix-like systems and works for many programming languages, including Ada, Assembly, C, C++, D, Fortran, Haskell, Go, Objective-C, OpenCL C, Modula-2, Pascal, and Rust, with partial support for others. It detects problems in a program while letting it run, and allows users to examine different registers.

History
GDB was first written by Richard Stallman in 1986 as part of his GNU system, after his GNU Emacs was "reasonably stable". GDB is free software released under the GNU General Public License (GPL). It was modeled after the DBX debugger, which came with Berkeley Unix distributions. From 1990 to 1993 it was maintained by John Gilmore. Now it is maintained by the GDB Steering Committee, which is appointed by the Free Software Foundation.

Technical details

Features
GDB offers extensive facilities for tracing and altering the execution of computer programs. The user can monitor and modify the values of programs' intern ...
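To give a flavour of those facilities, here is a small C program together with, in its comments, a typical interactive session; the command sequence is a generic illustration of inspecting and modifying program state, not tied to any particular GDB version.

    /* counter.c - compiled with debug info ("cc -g counter.c -o counter")
     * so GDB can map machine state back to source-level names.
     *
     * A typical session, showing inspection and modification:
     *   $ gdb ./counter
     *   (gdb) break main          # stop when main is reached
     *   (gdb) run
     *   (gdb) next                # step over one source line
     *   (gdb) print total         # inspect a variable
     *   (gdb) set var total = 100 # alter program state in place
     *   (gdb) continue            # resume with the new value
     */
    #include <stdio.h>

    int main(void)
    {
        int total = 0;
        for (int i = 1; i <= 10; i++)
            total += i;
        printf("total = %d\n", total);
        return 0;
    }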



Oak Ridge National Laboratory
Oak Ridge National Laboratory (ORNL) is a federally funded research and development center in Oak Ridge, Tennessee, United States. Founded in 1943, the laboratory is sponsored by the United States Department of Energy and administered by UT–Battelle, LLC. ORNL is the largest science and energy national laboratory in the Department of Energy system by size and the third largest by annual budget. It is located in the Roane County section of Oak Ridge. Its scientific programs focus on materials, nuclear science, neutron science, energy, high-performance computing, environmental science, systems biology and national security, sometimes in partnership with the state of Tennessee, universities and other industries. ORNL has several of the world's top supercomputers, including Frontier, ranked by the TOP500 as the world's second most powerful. The lab is a leading neutron and nuclear power research facility that includes the Spallation Neutron Source, the High ...




Cray XT5
The Cray XT5 is an updated version of the Cray XT4 supercomputer, launched on November 6, 2007. It includes a faster version of the XT4's SeaStar2 interconnect router, called SeaStar2+, and can be configured either with XT4 compute blades, which have four dual-core AMD Opteron processor sockets, or XT5 blades, with eight sockets supporting dual- or quad-core Opterons. The XT5 uses a 3-dimensional torus network topology.

XT5 family
The XT5 family runs the Cray Linux Environment, formerly known as UNICOS/lc. This incorporates SUSE Linux Enterprise Server and Cray's Compute Node Linux. The XT5h (hybrid) variant also includes support for Cray X2 vector processor blades, and Cray XR1 blades which combine Opterons with FPGA-based Reconfigurable Processor Units (RPUs) provided by DRC Computer Corporation. The XT5m variant is a mid-ranged supercomputer with most of the features of the XT5, but having a 2-dimensional torus network topolo ...


Petascale
Petascale computing refers to computing systems capable of performing at least 1 quadrillion (10^15) floating-point operations per second (FLOPS). These systems are often called petaflops systems and represent a significant leap from traditional supercomputers in terms of raw performance, enabling them to handle vast datasets and complex computations.

Definition
Floating point operations per second (FLOPS) are one measure of computer performance. FLOPS can be recorded in different measures of precision; however, the standard measure (used by the TOP500 supercomputer list) uses 64-bit (double-precision floating-point format) operations per second, as measured by the High Performance LINPACK (HPLinpack) benchmark. The metric typically refers to single computing systems, although it can be used to measure distributed computing systems for comparison. There are alternative precision measures using the LINPACK benchmarks which a ...
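As a worked example of the arithmetic behind the definition, the sketch below composes a purely hypothetical system's theoretical peak FLOPS from node count, cores per node, clock rate and per-core throughput, and expresses the result in petaFLOPS; all of the figures are made up for illustration.

    /* peak_flops.c - the arithmetic behind a "petascale" figure.
     * Every number below is hypothetical; 1 petaFLOPS = 1e15 FLOPS. */
    #include <stdio.h>

    int main(void)
    {
        double nodes           = 10000;    /* hypothetical node count      */
        double cores_per_node  = 16;       /* hypothetical cores per node  */
        double clock_hz        = 2.5e9;    /* 2.5 GHz                      */
        double flops_per_cycle = 4;        /* hypothetical per-core rate   */

        double peak = nodes * cores_per_node * clock_hz * flops_per_cycle;

        printf("theoretical peak: %.2f petaFLOPS\n", peak / 1e15);  /* 1.60 */
        return 0;
    }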



Memory Leak
In computer science, a memory leak is a type of resource leak that occurs when a computer program incorrectly manages memory allocations, such that memory which is no longer needed is not released. A memory leak may also happen when an object is stored in memory but cannot be accessed by the running code (i.e. unreachable memory). A memory leak has symptoms similar to a number of other problems and generally can only be diagnosed by a programmer with access to the program's source code. A related concept is the "space leak", which occurs when a program consumes excessive memory but does eventually release it. Because they can exhaust available system memory as an application runs, memory leaks are often the cause of, or a contributing factor in, software aging.

Effects

Minor leaks
If a program has a memory leak and its memory usage is steadily increasing, there will not usually be an immediate symptom. In modern operating systems, normal memory used by an application is releas ...
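A minimal C sketch of the pattern described above (names are illustrative): each loop iteration allocates a buffer whose last reference is then dropped without a free, so the memory is never released and usage grows for as long as the program runs.

    /* leak.c - the textbook leak pattern: memory that is no longer needed
     * is never released, so resident usage grows with every iteration.
     * Illustrative only. */
    #include <stdio.h>
    #include <stdlib.h>

    char *make_label(int i)
    {
        char *buf = malloc(64);          /* ownership passes to the caller */
        snprintf(buf, 64, "item-%d", i);
        return buf;
    }

    int main(void)
    {
        for (int i = 0; i < 1000000; i++) {
            char *label = make_label(i);
            /* 'label' is used briefly, then the pointer is dropped without
               free(label); each iteration leaks another 64 bytes */
            (void)label;
        }
        return 0;
    }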


Memory Debugging
A memory debugger is a debugger for finding software memory problems such as memory leaks and buffer overflows. These are due to bugs related to the allocation and deallocation of dynamic memory. Programs written in languages that have garbage collection, such as managed code, might also need memory debuggers, e.g. for memory leaks caused by "living" references in collections.

Overview
Memory debuggers work by monitoring memory access, allocations, and deallocation of memory. Many memory debuggers require applications to be recompiled with special dynamic memory allocation libraries, whose APIs are mostly compatible with conventional dynamic memory allocation libraries, or else use dynamic linking. Electric Fence is one such debugger; it debugs memory allocation with malloc. Some memory debuggers (e.g. Valgrind) work by running the executable in a virtual machine-like environment, monitoring memory access, allocation and deallocation so that no recompilation with special memory al ...
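A small, illustrative C example of the kind of defect such tools report: a one-past-the-end heap write that a plain run may not notice, but that running the binary under Valgrind, or linking against Electric Fence, would typically flag at the offending access.

    /* overflow.c - an off-by-one heap write of the sort a memory debugger
     * is designed to catch. Running under Valgrind ("valgrind ./overflow")
     * typically reports the invalid write; a plain run may silently
     * corrupt the heap. Illustrative only. */
    #include <stdlib.h>

    int main(void)
    {
        int *a = malloc(10 * sizeof(int));
        for (int i = 0; i <= 10; i++)   /* off-by-one: valid indices are 0..9 */
            a[i] = i;
        free(a);
        return 0;
    }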



Processor (computing)
In computing and computer science, a processor or processing unit is an electrical component (digital circuit) that performs operations on an external data source, usually memory or some other data stream. It typically takes the form of a microprocessor, which can be implemented on a single or a few tightly integrated metal–oxide–semiconductor integrated circuit chips. In the past, processors were constructed using multiple individual vacuum tubes, multiple individual transistors, or multiple integrated circuits. The term is frequently used to refer to the central processing unit (CPU), the main processor in a system. However, it can also refer to other coprocessors, such as a graphics processing unit (GPU). Traditional processors are typically based on silicon; however, researchers have developed experimental processors based on alternative materials such as carbon nanotubes, graphene, diamond, and alloys made of elements from groups three and five of the periodic tabl ...