Strassen Algorithm
In linear algebra, the Strassen algorithm, named after Volker Strassen, is an algorithm for matrix multiplication. It is faster than the standard matrix multiplication algorithm for large matrices, with a better asymptotic complexity, although the naive algorithm is often better for smaller matrices. The Strassen algorithm is slower than the fastest known algorithms for extremely large matrices, but such galactic algorithms are not useful in practice, as they are much slower for matrices of practical size. For small matrices even faster algorithms exist. Strassen's algorithm works over any ring, such as ordinary plus/multiply arithmetic, but not over arbitrary semirings, such as the min-plus semiring or Boolean algebra, where the naive algorithm (so-called combinatorial matrix multiplication) still works. History: Volker Strassen first published this algorithm in 1969 and thereby proved that the n^3 general matrix multiplication algorithm was not optimal. The Strassen algorithm's publication resulted in more resear ...
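As a concrete illustration (not part of the article), here is a minimal Python/NumPy sketch of Strassen's seven-multiplication recursion; it assumes square matrices whose size is a power of two and falls back to the ordinary product on small blocks:

```python
import numpy as np

def strassen(A, B, leaf=64):
    """Multiply two n x n matrices (n a power of two) with Strassen's recursion."""
    n = A.shape[0]
    if n <= leaf:                      # below this size the naive product wins
        return A @ B
    m = n // 2
    A11, A12, A21, A22 = A[:m, :m], A[:m, m:], A[m:, :m], A[m:, m:]
    B11, B12, B21, B22 = B[:m, :m], B[:m, m:], B[m:, :m], B[m:, m:]
    # Seven recursive products instead of the eight needed by naive blocking.
    M1 = strassen(A11 + A22, B11 + B22, leaf)
    M2 = strassen(A21 + A22, B11, leaf)
    M3 = strassen(A11, B12 - B22, leaf)
    M4 = strassen(A22, B21 - B11, leaf)
    M5 = strassen(A11 + A12, B22, leaf)
    M6 = strassen(A21 - A11, B11 + B12, leaf)
    M7 = strassen(A12 - A22, B21 + B22, leaf)
    return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                     [M2 + M4,           M1 - M2 + M3 + M6]])

A, B = np.random.rand(128, 128), np.random.rand(128, 128)
assert np.allclose(strassen(A, B), A @ B)
```

The leaf size of 64 is an arbitrary cutoff for this sketch; in practice the crossover at which the recursion beats the naive product depends on the hardware and the underlying BLAS.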


Schönhage–Strassen Algorithm
The Schönhage–Strassen algorithm is an asymptotically fast multiplication algorithm for large integers. It was developed by Arnold Schönhage and Volker Strassen in 1971 (A. Schönhage and V. Strassen, ''Schnelle Multiplikation großer Zahlen'', ''Computing'' 7 (1971), pp. 281–292). The run-time bit complexity is, in big O notation, O(n \cdot \log n \cdot \log \log n) for two ''n''-digit numbers. The algorithm uses recursive fast Fourier transforms in rings with 2^n+1 elements, a specific type of number-theoretic transform. The Schönhage–Strassen algorithm was the asymptotically fastest multiplication method known from 1971 until 2007, when a new method with lower asymptotic complexity, Fürer's algorithm, was announced; the Schönhage–Strassen algorithm is used in the Basic Polynomial Algebra Subprograms (BPAS) open source library. The algorithm does not adapt to polynomials over finite fields, though. The current best multiplication algorithm in terms of asymptotic complexity is by David Harvey and Jor ...
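The following toy Python sketch is not the Schönhage–Strassen algorithm itself: it illustrates only the underlying idea of multiplying large integers by convolving their digit sequences with a fast Fourier transform, using a complex-valued FFT rather than the number-theoretic transform in rings with 2^n+1 elements, so floating-point rounding limits it to moderately sized inputs:

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return list(a)
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    ang = (2j if invert else -2j) * cmath.pi / n
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(ang * k)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def multiply(x, y):
    """Multiply non-negative integers by FFT-based convolution of their decimal digits."""
    a = [int(d) for d in str(x)[::-1]]                # least-significant digit first
    b = [int(d) for d in str(y)[::-1]]
    n = 1
    while n < len(a) + len(b):                        # pad to a power of two
        n *= 2
    fa = fft([complex(d) for d in a] + [0j] * (n - len(a)))
    fb = fft([complex(d) for d in b] + [0j] * (n - len(b)))
    conv = fft([fa[i] * fb[i] for i in range(n)], invert=True)
    # The inverse transform above is unscaled, so divide by n; summing with
    # powers of 10 performs the carries.
    return sum(round(c.real / n) * 10 ** i for i, c in enumerate(conv))

print(multiply(123456789, 987654321))   # 121932631112635269
```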


Numerical Stability
In the mathematical subfield of numerical analysis, numerical stability is a generally desirable property of numerical algorithms. The precise definition of stability depends on the context: one important context is numerical linear algebra, and another is algorithms for solving ordinary and partial differential equations by discrete approximation. In numerical linear algebra, the principal concern is instabilities caused by proximity to singularities of various kinds, such as very small or nearly colliding eigenvalues. In numerical algorithms for differential equations, on the other hand, the concern is the growth of round-off errors and/or small fluctuations in the initial data which might cause a large deviation of the final answer from the exact solution. Some numerical algorithms may damp out the small fluctuations (errors) in the input data; others might magnify such errors. Calculations that can be proven not to magnify approximation errors are called ''numerically stable''. One of the common task ...
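A small, self-contained Python example of the phenomenon (my own, not from the article): two algebraically equivalent formulas for the roots of a quadratic x^2 + bx + c with real roots, one of which magnifies round-off error through cancellation while the other damps it by using Vieta's relation r1 * r2 = c:

```python
import math

def roots_naive(b, c):
    """Textbook quadratic formula; the small-magnitude root can cancel catastrophically."""
    d = math.sqrt(b * b - 4 * c)
    return (-b + d) / 2, (-b - d) / 2

def roots_stable(b, c):
    """Compute the large-magnitude root first, then get the small one from r1 * r2 = c."""
    d = math.sqrt(b * b - 4 * c)
    big = (-b - d) / 2 if b >= 0 else (-b + d) / 2
    return c / big, big

# For b = 1e8, c = 1 the exact roots are about -1e-08 and -1e8.
print(roots_naive(1e8, 1.0)[0])    # badly cancelled: not close to -1e-08
print(roots_stable(1e8, 1.0)[0])   # approximately -1e-08
```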


Z-order (curve)
In mathematical analysis and computer science, the Z-order curve (also known as the Lebesgue curve, Morton space-filling curve, Morton order or Morton code) is a function which maps multidimensional data to one dimension while preserving locality of the data points. In France it is named after Henri Lebesgue, who studied it in 1904; in the United States it is named after Guy Macdonald Morton, who first applied the order to file sequencing in 1966. The z-value of a point in multiple dimensions is calculated by interleaving the binary representations of its coordinate values. Once the data are sorted into this ordering, any one-dimensional data structure can be used, such as binary search trees, B-trees, skip lists or (with the low significant bits truncated) hash tables. The resulting ordering can equivalently be described as the order one would get from a depth-first traversal of a quadtree or octree. Coordinate values: the figure below shows the Z-values for the two-dimensional case with integer coordinates 0 ≤ ...
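For illustration, a short Python sketch (my own, assuming 16-bit unsigned coordinates) of the bit-interleaving step that produces the z-value of a two-dimensional point:

```python
def morton(x, y, bits=16):
    """Z-order (Morton) code of (x, y): interleave the bits of the two coordinates."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)        # x supplies the even bit positions
        z |= ((y >> i) & 1) << (2 * i + 1)    # y supplies the odd bit positions
    return z

# Sorting grid points by their Morton code yields the Z-shaped traversal of the grid.
points = [(x, y) for x in range(4) for y in range(4)]
print(sorted(points, key=lambda p: morton(*p)))
```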


Coppersmith–Winograd Algorithm
In theoretical computer science, the computational complexity of matrix multiplication dictates how quickly the operation of matrix multiplication can be performed. Matrix multiplication algorithms are a central subroutine in theoretical and numerical algorithms for numerical linear algebra and optimization, so determining the time this operation actually requires is of major practical relevance. Directly applying the mathematical definition of matrix multiplication gives an algorithm that takes on the order of n^3 field operations to multiply two n \times n matrices over that field (\Theta(n^3) in big O notation). Surprisingly, algorithms exist that provide better running times than this straightforward "schoolbook algorithm". The first to be discovered was Strassen's algorithm, devised by Volker Strassen in 1969 and often referred to as "fast matrix multiplication". The optimal number of field operations needed to multiply two square matrices up to constant factors is still unknown. This is a major open question ...
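For reference, the "schoolbook algorithm" mentioned above is just the triple loop that reads off the definition; a minimal Python version makes it clear where the roughly n^3 field operations come from for n x n inputs:

```python
def schoolbook_matmul(A, B):
    """C[i][j] = sum over k of A[i][k] * B[k][j], straight from the definition."""
    n, m, p = len(A), len(A[0]), len(B[0])
    C = [[0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            for k in range(m):          # three nested loops: ~n^3 multiplications
                C[i][j] += A[i][k] * B[k][j]
    return C

print(schoolbook_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # [[19, 22], [43, 50]]
```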


Gauss–Jordan Elimination
In mathematics, Gaussian elimination, also known as row reduction, is an algorithm for solving systems of linear equations. It consists of a sequence of operations performed on the corresponding matrix of coefficients. This method can also be used to compute the rank of a matrix, the determinant of a square matrix, and the inverse of an invertible matrix. The method is named after Carl Friedrich Gauss (1777–1855), although some special cases of the method, albeit presented without proof, were known to Chinese mathematicians as early as circa 179 AD. To perform row reduction on a matrix, one uses a sequence of elementary row operations to modify the matrix until the lower left-hand corner of the matrix is filled with zeros, as much as possible. There are three types of elementary row operations:
* Swapping two rows,
* Multiplying a row by a nonzero number,
* Adding a multiple of one row to another row (subtraction can be achieved by multiplying one row by -1 and adding ...
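As a sketch of how these row operations combine in practice (my own illustration, using partial pivoting and an arbitrary tolerance of 1e-12 for detecting a singular pivot), here is a small Python solver that reduces the augmented matrix [A | b] all the way to the identity, i.e. Gauss–Jordan rather than plain Gaussian elimination:

```python
def gauss_jordan_solve(A, b):
    """Solve A x = b for square A by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]   # augmented matrix [A | b]
    for col in range(n):
        # Partial pivoting: bring up the row with the largest entry in this column (swap rows).
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[pivot][col]) < 1e-12:
            raise ValueError("matrix is singular to working precision")
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the pivot becomes 1 (multiply a row by a nonzero number).
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        # Clear the column from every other row (add a multiple of one row to another).
        for r in range(n):
            if r != col and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [rv - factor * cv for rv, cv in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

print(gauss_jordan_solve([[1, 1], [1, -1]], [3, 1]))   # [2.0, 1.0]
```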


Computational Complexity Of Mathematical Operations
The following tables list the computational complexity of various algorithms for common mathematical operations. Here, complexity refers to the time complexity of performing computations on a multitape Turing machine. See big O notation for an explanation of the notation used. Note: due to the variety of multiplication algorithms, M(n) below stands in for the complexity of the chosen multiplication algorithm. The article's sections cover arithmetic functions (the complexity of mathematical operations on integers), algebraic functions, special functions (many of the methods in that section are given in Borwein & Borwein), and elementary functions. The elementary functions are constructed by composing arithmetic operations, the exponential function (\exp), the natural logarithm (\log), trigonometric functions (\sin, \cos), and their inverses. The complexity of an elementary function is equivalent to that of its inverse, since all elementary functions are analytic and hence invertible b ...


CPU Cache
A CPU cache is a hardware cache used by the central processing unit (CPU) of a computer to reduce the average cost (time or energy) of accessing data from the main memory. A cache is a smaller, faster memory, located closer to a processor core, which stores copies of the data from frequently used main memory locations. Most CPUs have a hierarchy of multiple cache levels (L1, L2, often L3, and rarely even L4), with separate instruction-specific and data-specific caches at level 1. The cache memory is typically implemented with static random-access memory (SRAM), which in modern CPUs is by far the largest part of the chip by area, but SRAM is not always used for all levels (of instruction or data cache), or even for any level; sometimes the later levels, or all levels, are implemented with eDRAM instead. Other types of caches exist (that are not counted towards the "cache size" of the most important caches mentioned above), such as the translation lookaside buffer (TLB), which is part of the memory management unit (MMU) w ...


Cache-oblivious Algorithm
In computing, a cache-oblivious algorithm (or cache-transcendent algorithm) is an algorithm designed to take advantage of a processor cache without having the size of the cache (or the length of the cache lines, etc.) as an explicit parameter. An optimal cache-oblivious algorithm is a cache-oblivious algorithm that uses the cache optimally (in an asymptotic sense, ignoring constant factors). Thus, a cache-oblivious algorithm is designed to perform well, without modification, on multiple machines with different cache sizes, or for a memory hierarchy with different levels of cache having different sizes. Cache-oblivious algorithms are contrasted with explicit ''loop tiling'', which explicitly breaks a problem into blocks that are optimally sized for a given cache. Optimal cache-oblivious algorithms are known for matrix multiplication, matrix transposition, sorting, and several other problems. Some more general algorithms, such as Cooley–Tukey FFT, are optimally cache-oblivious ...
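To make the idea concrete, here is a hedged Python/NumPy sketch of a cache-oblivious matrix transpose: the recursion takes no cache-size parameter, yet its subproblems eventually become small enough to fit whatever cache levels the machine has. The base-case threshold of 32 is only there to limit Python's recursion overhead and is not a tuning parameter of the technique itself:

```python
import numpy as np

def transpose_rec(src, dst, threshold=32):
    """Cache-obliviously write the transpose of src into dst (dst[j, i] = src[i, j])."""
    n, m = src.shape
    if n <= threshold and m <= threshold:
        dst[:, :] = src.T                     # small base case: transpose directly
    elif n >= m:
        h = n // 2                            # split the taller dimension in half
        transpose_rec(src[:h, :], dst[:, :h], threshold)
        transpose_rec(src[h:, :], dst[:, h:], threshold)
    else:
        h = m // 2
        transpose_rec(src[:, :h], dst[:h, :], threshold)
        transpose_rec(src[:, h:], dst[h:, :], threshold)

A = np.arange(12.0).reshape(3, 4)
out = np.empty((4, 3))
transpose_rec(A, out, threshold=2)            # tiny threshold just to exercise the recursion
assert (out == A.T).all()
```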


Tensor Product
In mathematics, the tensor product V \otimes W of two vector spaces V and W (over the same field) is a vector space to which is associated a bilinear map V\times W \to V\otimes W that maps a pair (v,w),\ v\in V, w\in W to an element of V \otimes W denoted v \otimes w. An element of the form v \otimes w is called the tensor product of v and w. An element of V \otimes W is a tensor, and the tensor product of two vectors is sometimes called an ''elementary tensor'' or a ''decomposable tensor''. The elementary tensors span V \otimes W in the sense that every element of V \otimes W is a sum of elementary tensors. If bases are given for V and W, a basis of V \otimes W is formed by all tensor products of a basis element of V and a basis element of W. The tensor product of two vector spaces captures the properties of all bilinear maps in the sense that a bilinear map from V\times W into another vector space Z factors uniquely through a linear map V\otimes W\to Z (see Universal property). Tenso ...
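In coordinates, the elementary tensor v \otimes w has components v_i w_j, so for finite-dimensional real spaces the construction can be illustrated with NumPy's outer and Kronecker products (an illustration of the coordinate description only, not of the abstract universal-property construction):

```python
import numpy as np

v = np.array([1, 2])         # an element of V, identified with R^2 via a chosen basis
w = np.array([3, 4, 5])      # an element of W, identified with R^3

print(np.outer(v, w))        # components v_i * w_j of the elementary tensor v (x) w
print(np.kron(v, w))         # the same numbers flattened into R^6, i.e. R^2 (x) R^3

# Not every element of V (x) W is elementary: this one has rank 2, so it is a sum
# of two elementary tensors but cannot be written as a single v (x) w.
t = np.outer(v, w) + np.outer(np.array([0, 1]), np.array([1, 0, 0]))
print(np.linalg.matrix_rank(t))   # 2
```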


Hadamard Product (matrices)
In mathematics, the Hadamard product (also known as the element-wise product, entrywise product or Schur product) is a binary operation that takes two matrices of the same dimensions and produces another matrix of the same dimension as the operands, where each element is the product of the corresponding elements of the original two matrices. It is to be distinguished from the more common matrix product. It is attributed to, and named after, either French mathematician Jacques Hadamard or Russian-born German mathematician Issai Schur. The Hadamard product is associative and distributive. Unlike the matrix product, it is also commutative. Definition: for two matrices A and B of the same dimension m \times n, the Hadamard product A \circ B (or A \odot B) is a matrix of the same dimension as the operands, with elements given by (A \circ B)_{ij} = (A \odot B)_{ij} = (A)_{ij} (B)_{ij}. For matrices of different dimensions (m \times n and p \times q, where m \neq p or n \neq q), the Hadamard product is undefined. Example: for example, the Hadamard product for a 3  ...
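A quick NumPy illustration of the definition (my own example), contrasting the Hadamard product with the ordinary matrix product:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[10, 20], [30, 40]])

print(A * B)    # Hadamard (element-wise) product: [[10, 40], [90, 160]]
print(B * A)    # commutative, unlike the ordinary matrix product
print(A @ B)    # ordinary matrix product, for comparison: [[70, 100], [150, 220]]
```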


Dyadics
In mathematics, specifically multilinear algebra, a dyadic or dyadic tensor is a second order tensor, written in a notation that fits in with vector algebra. There are numerous ways to multiply two Euclidean vectors. The dot product takes in two vectors and returns a scalar, while the cross product returns a pseudovector. Both of these have various significant geometric interpretations and are widely used in mathematics, physics, and engineering. The dyadic product takes in two vectors and returns a second order tensor called a ''dyadic'' in this context. A dyadic can be used to contain physical or geometric information, although in general there is no direct way of geometrically interpreting it. The dyadic product is distributive over vector addition, and associative with scalar multiplication. Therefore, the dyadic product is linear in both of its operands. In general, two dyadics can be added to get another dyadic, and multiplied by numbers to scale the dyadic. However, the ...
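The three products mentioned above can be compared directly in NumPy (my own example); np.outer plays the role of the dyadic product here, returning a second-order tensor as a 3 x 3 array:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

print(np.dot(a, b))      # dot product: a scalar, 32.0
print(np.cross(a, b))    # cross product: a (pseudo)vector, [-3.  6. -3.]
print(np.outer(a, b))    # dyadic product: a second-order tensor (3 x 3 array)

# Linearity in both operands, e.g. the dyadic of (2a) with b equals 2 times that of a with b:
print(np.allclose(np.outer(2 * a, b), 2 * np.outer(a, b)))   # True
```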


Dual Space
In mathematics, any vector space ''V'' has a corresponding dual vector space (or just dual space for short) consisting of all linear forms on ''V'', together with the vector space structure of pointwise addition and scalar multiplication by constants. The dual space as defined above is defined for all vector spaces, and to avoid ambiguity may also be called the ''algebraic dual space''. When defined for a topological vector space, there is a subspace of the dual space, corresponding to continuous linear functionals, called the ''continuous dual space''. Dual vector spaces find application in many branches of mathematics that use vector spaces, such as in tensor analysis with finite-dimensional vector spaces. When applied to vector spaces of functions (which are typically infinite-dimensional), dual spaces are used to describe measures, distributions, and Hilbert spaces. Consequently, the dual space is an important concept in functional analysis. Early terms for ''dual'' include ''polarer Raum'' [Hahn 1 ...
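For a small finite-dimensional sketch (my own, not from the article): every linear form on R^3 can be represented by a row vector acting through the dot product, and the pointwise addition and scalar multiplication that give the dual space its vector space structure then correspond to adding and scaling the representing vectors:

```python
import numpy as np

f = np.array([1.0, 0.0, 2.0])   # the functional v -> v_1 + 2*v_3
g = np.array([0.0, 3.0, 0.0])   # the functional v -> 3*v_2
v = np.array([4.0, 5.0, 6.0])

print(f @ v)                             # 16.0
# Pointwise operations on functionals match operations on their representing vectors:
print((f + g) @ v, f @ v + g @ v)        # 31.0 31.0
print((2.5 * f) @ v, 2.5 * (f @ v))      # 40.0 40.0
```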