Basic Linear Algebra Subprograms

Basic Linear Algebra Subprograms (BLAS) is a specification that prescribes a set of low-level routines for performing common linear algebra operations such as vector addition, scalar multiplication, dot products, linear combinations, and matrix multiplication. They are the ''de facto'' standard low-level routines for linear algebra libraries; the routines have bindings for both C (the "CBLAS interface") and Fortran (the "BLAS interface"). Although the BLAS specification is general, BLAS implementations are often optimized for speed on a particular machine, so using them can bring substantial performance benefits. BLAS implementations can take advantage of special floating-point hardware such as vector registers or SIMD instructions.

BLAS originated as a Fortran library in 1979 and its interface was standardized by the BLAS Technical (BLAST) Forum, whose latest BLAS report can be found on the Netlib website. This Fortran library is known as the ''reference implementation'' (sometimes confusingly referred to as ''the'' BLAS library) and is not optimized for speed but is in the public domain.

Most computing libraries that offer linear algebra routines conform to the common BLAS interface, so code written against that interface (and the associated results) is often portable between BLAS library branches, such as cuBLAS (NVIDIA GPU, GPGPU), rocBLAS (AMD GPU, GPGPU), and OpenBLAS. This interoperability is the basis for running the same code across heterogeneous computing architectures, such as those found in some advanced clustering implementations. Examples of CPU-based BLAS library branches include OpenBLAS, BLIS (BLAS-like Library Instantiation Software), Arm Performance Libraries, ATLAS, and Intel Math Kernel Library (iMKL). AMD maintains a fork of BLIS that is optimized for the AMD platform. ATLAS is a portable library that automatically optimizes itself for an arbitrary architecture. iMKL is a freeware and proprietary vendor library optimized for x86 and x86-64 with a performance emphasis on Intel processors. OpenBLAS is an open-source library that is hand-optimized for many popular architectures. The LINPACK benchmarks rely heavily on the BLAS routine gemm for their performance measurements. Many numerical software applications use BLAS-compatible libraries to do linear algebra computations, including LAPACK, LINPACK, Armadillo, GNU Octave, Mathematica, MATLAB, NumPy, R, and Julia.


Background

With the advent of numerical programming, sophisticated subroutine libraries became useful. These libraries would contain subroutines for common high-level mathematical operations such as root finding, matrix inversion, and solving systems of equations. The language of choice was FORTRAN. The most prominent numerical programming library was IBM's Scientific Subroutine Package (SSP). These subroutine libraries allowed programmers to concentrate on their specific problems and avoid re-implementing well-known algorithms. The library routines would also be better than average implementations; matrix algorithms, for example, might use full pivoting to get better numerical accuracy. The libraries would also provide more efficient, specialized routines; for example, a library might include a routine to solve an upper-triangular system. The libraries would include single-precision and double-precision versions of some algorithms.

Initially, these subroutines used hard-coded loops for their low-level operations. For example, if a subroutine needed to perform a matrix multiplication, then the subroutine would have three nested loops (see the sketch at the end of this section). Linear algebra programs have many common low-level operations (the so-called "kernel" operations, not related to operating systems). Between 1973 and 1977, several of these kernel operations were identified. These kernel operations became defined subroutines that math libraries could call. The kernel calls had advantages over hard-coded loops: the library routine would be more readable, there were fewer chances for bugs, and the kernel implementation could be optimized for speed. A specification for these kernel operations using scalars and vectors, the level-1 Basic Linear Algebra Subroutines (BLAS), was published in 1979. BLAS was used to implement the linear algebra subroutine library LINPACK.

The BLAS abstraction allows customization for high performance. For example, LINPACK is a general-purpose library that can be used on many different machines without modification. LINPACK could use a generic version of BLAS. To gain performance, different machines might use tailored versions of BLAS. As computer architectures became more sophisticated, vector machines appeared. BLAS for a vector machine could use the machine's fast vector operations. (While vector processors eventually fell out of favor, vector instructions in modern CPUs are essential for optimal performance in BLAS routines.)

Other machine features became available and could also be exploited. Consequently, BLAS was augmented from 1984 to 1986 with level-2 kernel operations that concerned vector-matrix operations. Memory hierarchy was also recognized as something to exploit. Many computers have cache memory that is much faster than main memory; keeping matrix manipulations localized allows better usage of the cache. In 1987 and 1988, the level 3 BLAS were identified to do matrix-matrix operations. The level 3 BLAS encouraged block-partitioned algorithms. The LAPACK library uses level 3 BLAS. The original BLAS concerned only densely stored vectors and matrices. Further extensions to BLAS, such as for sparse matrices, have been addressed.
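As an illustration of the hard-coded style that the BLAS kernels replaced, here is a minimal sketch of the triple loop that early libraries would have embedded directly in each routine needing a matrix product (written in C rather than the FORTRAN of the era; the function name and row-major layout are illustrative assumptions, not any historical API):

 #include <stddef.h>
 
 /* Hard-coded matrix multiplication, C = A * B, with A an m-by-k,
  * B a k-by-n, and C an m-by-n row-major array.  Factoring this
  * loop nest out into a named, tunable kernel is exactly the idea
  * that BLAS later standardized as gemm. */
 void naive_matmul(size_t m, size_t n, size_t k,
                   const double *A, const double *B, double *C)
 {
     for (size_t i = 0; i < m; i++)
         for (size_t j = 0; j < n; j++) {
             double sum = 0.0;
             for (size_t p = 0; p < k; p++)
                 sum += A[i * k + p] * B[p * n + j];
             C[i * n + j] = sum;
         }
 }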


Functionality

BLAS functionality is categorized into three sets of routines called "levels", which correspond both to the chronological order of definition and publication and to the degree of the polynomial in the complexity of the algorithms: Level 1 BLAS operations typically take linear time, O(n), Level 2 operations quadratic time, O(n^2), and Level 3 operations cubic time, O(n^3). Modern BLAS implementations typically provide all three levels.


Level 1

This level consists of all the routines described in the original presentation of BLAS (1979), which defined only ''vector operations'' on strided arrays: dot products, vector norms, a generalized vector addition of the form

:\boldsymbol{y} \leftarrow \alpha \boldsymbol{x} + \boldsymbol{y}

(called "axpy", "a x plus y") and several other operations.
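As a concrete illustration, the sketch below calls the double-precision axpy routine through the CBLAS interface; cblas_daxpy and the cblas.h header are part of the reference CBLAS, though the header name can vary between vendors.

 #include <stdio.h>
 #include <cblas.h>   /* reference CBLAS header; name may vary by vendor */
 
 int main(void)
 {
     double x[4] = {1.0, 2.0, 3.0, 4.0};
     double y[4] = {10.0, 20.0, 30.0, 40.0};
 
     /* y <- 2.5*x + y, with unit strides (incX = incY = 1) */
     cblas_daxpy(4, 2.5, x, 1, y, 1);
 
     for (int i = 0; i < 4; i++)
         printf("%g\n", y[i]);   /* prints 12.5 25 37.5 50 */
     return 0;
 }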


Level 2

This level contains ''matrix-vector operations'' including, among other things, a ''ge''neralized ''m''atrix-''v''ector multiplication (gemv):

:\boldsymbol{y} \leftarrow \alpha \boldsymbol{A} \boldsymbol{x} + \beta \boldsymbol{y}

as well as a solver for x in the linear equation

:\boldsymbol{T} \boldsymbol{x} = \boldsymbol{y}

with T being triangular. Design of the Level 2 BLAS started in 1984, with results published in 1988. The Level 2 subroutines are especially intended to improve performance of programs using BLAS on vector processors, where Level 1 BLAS are suboptimal "because they hide the matrix-vector nature of the operations from the compiler."
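A minimal sketch of both operations through the CBLAS interface follows: cblas_dgemv performs the generalized matrix-vector product and cblas_dtrsv solves a triangular system in place (the 3×3 data here is arbitrary example input).

 #include <cblas.h>
 
 void level2_examples(void)
 {
     /* Row-major 3x3 matrix A (also lower triangular, so it can
      * double as T in the dtrsv call below) and vectors x, y. */
     double A[9] = {2, 0, 0,
                    1, 3, 0,
                    4, 5, 6};
     double x[3] = {1, 1, 1};
     double y[3] = {0, 0, 0};
 
     /* y <- 1.0*A*x + 0.0*y  (gemv); here y becomes (2, 4, 15) */
     cblas_dgemv(CblasRowMajor, CblasNoTrans, 3, 3, 1.0,
                 A, 3, x, 1, 0.0, y, 1);
 
     /* Solve T*z = y in place for lower-triangular T = A (trsv);
      * on exit y holds the solution, which recovers x = (1, 1, 1). */
     cblas_dtrsv(CblasRowMajor, CblasLower, CblasNoTrans, CblasNonUnit,
                 3, A, 3, y, 1);
 }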


Level 3

This level, formally published in 1990, contains ''matrix-matrix operations'', including a "general matrix multiplication" (gemm), of the form

:\boldsymbol{C} \leftarrow \alpha \boldsymbol{A} \boldsymbol{B} + \beta \boldsymbol{C},

where A and B can optionally be transposed or hermitian-conjugated inside the routine, and all three matrices may be strided. The ordinary matrix multiplication AB can be performed by setting α to one and C to an all-zeros matrix of the appropriate size. Also included in Level 3 are routines for computing

:\boldsymbol{B} \leftarrow \alpha \boldsymbol{T}^{-1} \boldsymbol{B},

where T is a triangular matrix, among other functionality.

Due to the ubiquity of matrix multiplications in many scientific applications, including the implementation of the rest of Level 3 BLAS, and because faster algorithms exist beyond the obvious repetition of matrix-vector multiplication, gemm is a prime target of optimization for BLAS implementers. E.g., by decomposing one or both of A, B into block matrices, gemm can be implemented recursively. This is one of the motivations for including the β parameter, so the results of previous blocks can be accumulated. Note that this decomposition requires the special case β = 1, which many implementations optimize for, thereby eliminating one multiplication for each value of C. This decomposition allows for better locality of reference both in space and time of the data used in the product. This, in turn, takes advantage of the cache on the system. For systems with more than one level of cache, the blocking can be applied a second time to the order in which the blocks are used in the computation. Both of these levels of optimization are used in implementations such as ATLAS. More recently, implementations by Kazushige Goto have shown that blocking only for the L2 cache, combined with careful amortizing of copying to contiguous memory to reduce TLB misses, is superior to ATLAS. A highly tuned implementation based on these ideas is part of the GotoBLAS, OpenBLAS and BLIS.

A common variation of gemm is the gemm3m, which calculates a complex product using "three real matrix multiplications and five real matrix additions instead of the conventional four real matrix multiplications and two real matrix additions", an algorithm similar to the Strassen algorithm first described by Peter Ungar.
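The 3M trick can be spelled out directly. Writing the complex factors as A = A_r + iA_i and B = B_r + iB_i, one common formulation (a sketch of the idea, not necessarily the exact arrangement any given library uses) needs only the three real products A_rB_r, A_iB_i and (A_r+A_i)(B_r+B_i):

:\operatorname{Re}(\boldsymbol{A}\boldsymbol{B}) = \boldsymbol{A}_r\boldsymbol{B}_r - \boldsymbol{A}_i\boldsymbol{B}_i

:\operatorname{Im}(\boldsymbol{A}\boldsymbol{B}) = (\boldsymbol{A}_r+\boldsymbol{A}_i)(\boldsymbol{B}_r+\boldsymbol{B}_i) - \boldsymbol{A}_r\boldsymbol{B}_r - \boldsymbol{A}_i\boldsymbol{B}_i

Counting operations confirms the quoted trade-off: three real multiplications, plus the two additions forming the summed factors and the three subtractions, for five real matrix additions in total.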
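As a usage sketch, the CBLAS call below computes C ← αAB + βC in double precision; setting β to zero overwrites C with αAB, which is the plain product when α is one (the dimensions are arbitrary example values).

 #include <cblas.h>
 
 void gemm_example(void)
 {
     enum { M = 2, K = 3, N = 2 };
     double A[M * K] = {1, 2, 3,
                        4, 5, 6};      /* M x K, row-major */
     double B[K * N] = {7,  8,
                        9,  10,
                        11, 12};       /* K x N */
     double C[M * N] = {0};            /* M x N result */
 
     /* C <- 1.0*A*B + 0.0*C; leaves C = {58, 64, 139, 154} */
     cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                 M, N, K, 1.0, A, K, B, N, 0.0, C, N);
 }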


Implementations

; Accelerate: Apple's framework for macOS and iOS, which includes tuned versions of BLAS and LAPACK.
; Arm Performance Libraries: Arm Performance Libraries, supporting Arm 64-bit AArch64-based processors, available from Arm.
; ATLAS: Automatically Tuned Linear Algebra Software, an open source implementation of BLAS APIs for C and Fortran 77.
; BLIS: BLAS-like Library Instantiation Software framework for rapid instantiation. Optimized for most modern CPUs. BLIS is a complete refactoring of the GotoBLAS that reduces the amount of code that must be written for a given platform.
; C++ AMP BLAS: The C++ AMP BLAS Library is an open source implementation of BLAS for Microsoft's AMP language extension for Visual C++.
; cuBLAS: Optimized BLAS for NVIDIA-based GPU cards, requiring few additional library calls.
; NVBLAS: Optimized BLAS for NVIDIA-based GPU cards, providing only Level 3 functions, but as a direct drop-in replacement for other BLAS libraries.
; clBLAS: An OpenCL implementation of BLAS by AMD. Part of the AMD Compute Libraries.
; CLBlast: A tuned OpenCL implementation of most of the BLAS API.
; Eigen BLAS: A Fortran 77 and C BLAS library implemented on top of the MPL-licensed Eigen library, supporting x86, x86-64, ARM (NEON), and PowerPC architectures.
; ESSL: IBM's Engineering and Scientific Subroutine Library, supporting the PowerPC architecture under AIX and Linux.
; GotoBLAS: Kazushige Goto's BSD-licensed implementation of BLAS, tuned in particular for Intel Nehalem/Atom, VIA Nano, and AMD Opteron.
; GNU Scientific Library: Multi-platform implementation of many numerical routines. Contains a CBLAS interface.
; HP MLIB: HP's math library supporting IA-64, PA-RISC, x86 and Opteron architectures under HP-UX and Linux.
; Intel MKL: The Intel Math Kernel Library, supporting 32-bit and 64-bit x86, available free from Intel. Includes optimizations for Intel Pentium, Core and Intel Xeon CPUs and Intel Xeon Phi; support for Linux, Windows and macOS.
; MathKeisan: NEC's math library, supporting the NEC SX architecture under SUPER-UX, and Itanium under Linux.
; Netlib BLAS: The official reference implementation on Netlib, written in Fortran 77.
; Netlib CBLAS: Reference C interface to the BLAS. It is also possible (and popular) to call the Fortran BLAS from C.
; OpenBLAS: Optimized BLAS based on GotoBLAS, supporting x86, x86-64, MIPS and ARM processors.
; PDLIB/SX: NEC's Public Domain Mathematical Library for the NEC SX-4 system.
; rocBLAS: Implementation that runs on AMD GPUs via ROCm.
; SCSL: SGI's Scientific Computing Software Library, containing BLAS and LAPACK implementations for SGI's IRIX workstations.
; Sun Performance Library: Optimized BLAS and LAPACK for SPARC, Core and AMD64 architectures under Solaris 8, 9, and 10 as well as Linux.
; uBLAS: A generic C++ template class library providing BLAS functionality. Part of the Boost library. It provides bindings to many hardware-accelerated libraries in a unifying notation. Moreover, uBLAS focuses on correctness of the algorithms using advanced C++ features.


Libraries using BLAS

; Armadillo: Armadillo is a C++ linear algebra library aiming towards a good balance between speed and ease of use. It employs template classes, and has optional links to BLAS/ATLAS and LAPACK. It is sponsored by NICTA (in Australia) and is licensed under a free license.
; LAPACK: LAPACK is a higher-level linear algebra library built upon BLAS. Like BLAS, a reference implementation exists, but many alternatives like libFlame and MKL exist.
; Mir: An LLVM-accelerated generic numerical library for science and machine learning written in D. It provides generic linear algebra subprograms (GLAS). It can be built on a CBLAS implementation.


Similar libraries (not compatible with BLAS)

; Elemental: Elemental is an open source software for distributed-memory dense and sparse-direct linear algebra and optimization.
; HASEM: A C++ template library, able to solve linear equations and to compute eigenvalues. It is licensed under the BSD License.
; LAMA: The Library for Accelerated Math Applications (LAMA) is a C++ template library for writing numerical solvers targeting various kinds of hardware (e.g. GPUs through CUDA or OpenCL) on distributed memory systems, hiding the hardware-specific programming from the program developer.
; MTL4: The Matrix Template Library version 4 is a generic C++ template library providing sparse and dense BLAS functionality. MTL4 establishes an intuitive interface (similar to MATLAB) and broad applicability thanks to generic programming.


Sparse BLAS

Several extensions to BLAS for handling sparse matrices have been suggested over the course of the library's history; a small set of sparse matrix kernel routines was finally standardized in 2002.
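The core sparse kernel is the sparse matrix-vector product. As an illustration of what such a routine does, here is a generic compressed-sparse-row (CSR) sketch; this is illustrative code, not the standardized Sparse BLAS interface itself.

 #include <stddef.h>
 
 /* y <- A*x for a sparse matrix A stored in CSR form: val holds the
  * nonzero entries, col_ind their column indices, and the half-open
  * range row_ptr[i] .. row_ptr[i+1] delimits the nonzeros of row i. */
 void csr_matvec(size_t n_rows, const double *val, const size_t *col_ind,
                 const size_t *row_ptr, const double *x, double *y)
 {
     for (size_t i = 0; i < n_rows; i++) {
         double sum = 0.0;
         for (size_t p = row_ptr[i]; p < row_ptr[i + 1]; p++)
             sum += val[p] * x[col_ind[p]];
         y[i] = sum;
     }
 }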


Batched BLAS

The traditional BLAS functions have also been ported to architectures that support large amounts of parallelism such as GPUs. Here, the traditional BLAS functions typically provide good performance for large matrices. However, when computing, e.g., matrix-matrix products of many small matrices by using the GEMM routine, those architectures show significant performance losses. To address this issue, a batched version of the BLAS functions was specified in 2017.

Taking the GEMM routine from above as an example, the batched version performs the following computation simultaneously for many matrices:

:\boldsymbol{C}[k] \leftarrow \alpha \boldsymbol{A}[k] \boldsymbol{B}[k] + \beta \boldsymbol{C}[k] \quad \forall k

The index k in square brackets indicates that the operation is performed for all matrices k in a stack. Often, this operation is implemented for a strided batched memory layout where all matrices follow concatenated in the arrays A, B and C.

Batched BLAS functions can be a versatile tool and allow, e.g., a fast implementation of exponential integrators and Magnus integrators that handle long integration periods with many time steps. Here, the matrix exponential, the computationally expensive part of the integration, can be implemented in parallel for all time steps by using batched BLAS functions.
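The semantics of the strided batched layout can be sketched as a loop over ordinary CBLAS gemm calls; real batched implementations launch the whole stack in parallel instead, and the function name below is a hypothetical illustration, not a standardized entry point.

 #include <cblas.h>
 
 /* Hypothetical reference semantics of a strided batched dgemm:
  * the k-th problem uses A + k*strideA, B + k*strideB, C + k*strideC,
  * i.e. the matrices lie concatenated in the arrays A, B and C.
  * Row-major, no transposes, for brevity. */
 void dgemm_strided_batched_ref(int m, int n, int kk, double alpha,
                                const double *A, int lda, long strideA,
                                const double *B, int ldb, long strideB,
                                double beta,
                                double *C, int ldc, long strideC,
                                int batch_count)
 {
     for (int k = 0; k < batch_count; k++)
         cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                     m, n, kk, alpha,
                     A + k * strideA, lda,
                     B + k * strideB, ldb,
                     beta,
                     C + k * strideC, ldc);
 }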


See also

* List of numerical libraries
* Math Kernel Library, math library optimized for the Intel architecture; includes BLAS, LAPACK
* Numerical linear algebra, the type of problem BLAS solves




Further reading

* J. J. Dongarra, J. Du Croz, S. Hammarling, and R. J. Hanson, Algorithm 656: An extended set of FORTRAN Basic Linear Algebra Subprograms, ACM Trans. Math. Softw., 14 (1988), pp. 18–32.
* J. J. Dongarra, J. Du Croz, I. S. Duff, and S. Hammarling, A set of Level 3 Basic Linear Algebra Subprograms, ACM Trans. Math. Softw., 16 (1990), pp. 1–17.
* J. J. Dongarra, J. Du Croz, I. S. Duff, and S. Hammarling, Algorithm 679: A set of Level 3 Basic Linear Algebra Subprograms, ACM Trans. Math. Softw., 16 (1990), pp. 18–28.

New BLAS
* L. S. Blackford, J. Demmel, J. Dongarra, I. Duff, S. Hammarling, G. Henry, M. Heroux, L. Kaufman, A. Lumsdaine, A. Petitet, R. Pozo, K. Remington, and R. C. Whaley, An Updated Set of Basic Linear Algebra Subprograms (BLAS), ACM Trans. Math. Softw., 28-2 (2002), pp. 135–151.
* J. Dongarra, Basic Linear Algebra Subprograms Technical Forum Standard, International Journal of High Performance Applications and Supercomputing, 16(1) (2002), pp. 1–111, and 16(2) (2002), pp. 115–199.


External links


* BLAS homepage on Netlib.org
* BLAS Quick Reference Guide from the LAPACK Users' Guide
* Charles L. Lawson, oral history interview by Thomas Haigh, 6 and 7 November 2004, San Clemente, California. Society for Industrial and Applied Mathematics, Philadelphia, PA. One of the original authors of the BLAS discusses its creation.
* Jack Dongarra, oral history interview by Thomas Haigh, 26 April 2005, University of Tennessee, Knoxville, TN. Society for Industrial and Applied Mathematics, Philadelphia, PA. Dongarra explores the early relationship of BLAS to LINPACK, the creation of higher-level BLAS versions for new architectures, and his later work on the ATLAS system to automatically optimize BLAS for particular machines.
* How does BLAS get such extreme performance? Ten naive 1000×1000 matrix multiplications (10^10 floating-point multiply-adds) take 15.77 seconds on a 2.6 GHz processor; a BLAS implementation takes 1.32 seconds.
* An Overview of the Sparse Basic Linear Algebra Subprograms: The New Standard from the BLAS Technical Forum