Zassenhaus Algorithm
In mathematics, the Zassenhaus algorithm is a method to calculate a basis for the intersection and the sum of two subspaces of a vector space. It is named after Hans Zassenhaus, but no publication of this algorithm by him is known. It is used in computer algebra systems.

Algorithm

Input: Let V be a vector space and U, W two finite-dimensional subspaces of V with the following spanning sets:
:U = \langle u_1, \ldots, u_n\rangle
and
:W = \langle w_1, \ldots, w_k\rangle.
Finally, let B_1, \ldots, B_m be linearly independent vectors so that u_i and w_i can be written as
:u_i = \sum_{j=1}^m a_{i,j}B_j
and
:w_i = \sum_{j=1}^m b_{i,j}B_j.

Output: The algorithm computes a basis of the sum U + W and a basis of the intersection U \cap W.

Algorithm: The algorithm creates the following block matrix of size (n+k) \times (2m):
:\begin{pmatrix}
a_{1,1}&a_{1,2}&\cdots&a_{1,m}&a_{1,1}&a_{1,2}&\cdots&a_{1,m}\\
\vdots&\vdots&&\vdots&\vdots&\vdots&&\vdots\\
a_{n,1}&a_{n,2}&\cdots&a_{n,m}&a_{n,1}&a_{n,2}&\cdots&a_{n,m}\\
b_{1,1}&b_{1,2}&\cdots&b_{1,m}&0&0&\cdots&0\\
\vdots&\vdots&&\vdots&\vdots&\vdots&&\vdots\\
b_{k,1}&b_{k,2}&\cdots&b_{k,m}&0&0&\cdots&0
\end{pmatrix}
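In the remaining steps the matrix is transformed to row echelon form by elementary row operations; the rows whose first m entries are not all zero then give, in those first m columns, the coordinates of a basis of U + W, while the rows whose first m entries are all zero give, in the last m columns, the coordinates of a basis of U \cap W. The following is a minimal Python sketch of the whole procedure, assuming the generators are supplied as integer or rational coordinate vectors with respect to B_1, \ldots, B_m (the function name and input layout are only illustrative):

from fractions import Fraction

def zassenhaus(U_coords, W_coords):
    """Bases of U + W and of U ∩ W from coordinate vectors of their generators."""
    m = len(U_coords[0])
    # Block matrix ((A | A), (B | 0)) of size (n + k) x 2m.
    rows = [[Fraction(x) for x in u] + [Fraction(x) for x in u] for u in U_coords]
    rows += [[Fraction(x) for x in w] + [Fraction(0)] * m for w in W_coords]

    # Gaussian elimination to row echelon form, using exact rational arithmetic.
    pivot_row = 0
    for col in range(2 * m):
        pivot = next((r for r in range(pivot_row, len(rows)) if rows[r][col] != 0), None)
        if pivot is None:
            continue
        rows[pivot_row], rows[pivot] = rows[pivot], rows[pivot_row]
        for r in range(pivot_row + 1, len(rows)):
            if rows[r][col] != 0:
                f = rows[r][col] / rows[pivot_row][col]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[pivot_row])]
        pivot_row += 1

    # Rows with a nonzero left half: the left half is a basis vector of U + W.
    # Rows with a zero left half but nonzero right half: the right half is a basis vector of U ∩ W.
    sum_basis = [row[:m] for row in rows if any(row[:m])]
    int_basis = [row[m:] for row in rows if not any(row[:m]) and any(row[m:])]
    return sum_basis, int_basis

# Example in R^3 with B the standard basis:
# zassenhaus([[1, 0, 0], [0, 1, 0]], [[0, 1, 0], [0, 0, 1]])
# yields a basis of all of R^3 and a one-element basis of the line spanned by (0, 1, 0).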

Basis (linear algebra)
In mathematics, a set B of vectors in a vector space V is called a basis if every element of V may be written in a unique way as a finite linear combination of elements of B. The coefficients of this linear combination are referred to as components or coordinates of the vector with respect to B. The elements of a basis are called ''basis vectors''. Equivalently, a set B is a basis if its elements are linearly independent and every element of V is a linear combination of elements of B. In other words, a basis is a linearly independent spanning set. A vector space can have several bases; however all the bases have the same number of elements, called the ''dimension'' of the vector space. This article deals mainly with finite-dimensional vector spaces. However, many of the principles are also valid for infinite-dimensional vector spaces. Definition A basis B of a vector space V over a field F (such as the real numbers \mathbb{R} or the complex numbers \mathbb{C}) is a linearly independent subset of V that spans V. This me ...
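A small worked example of the uniqueness property: in \mathbb{R}^2 the set \{(1,1),\,(1,-1)\} is a basis, since every vector (x, y) has exactly one expansion in it,
:(x, y) = \frac{x+y}{2}\,(1, 1) + \frac{x-y}{2}\,(1, -1).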

Block Matrix
In mathematics, a block matrix or a partitioned matrix is a matrix that is ''interpreted'' as having been broken into sections called blocks or submatrices. Intuitively, a matrix interpreted as a block matrix can be visualized as the original matrix with a collection of horizontal and vertical lines, which break it up, or partition it, into a collection of smaller matrices. Any matrix may be interpreted as a block matrix in one or more ways, with each interpretation defined by how its rows and columns are partitioned. This notion can be made more precise for an n by m matrix M by partitioning n into a collection \text{rowgroups}, and then partitioning m into a collection \text{colgroups}. The original matrix is then considered as the "total" of these groups, in the sense that the (i, j) entry of the original matrix corresponds in a 1-to-1 way with some (s, t) offset entry of some (x,y), where x \in \text{rowgroups} and y \in \text{colgroups}. Block matrix algebra arises in general from biproducts in categories of matrices ...
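For example, a 4 \times 4 matrix can be partitioned into four 2 \times 2 blocks:
:M = \left(\begin{array}{cc|cc} 1&2&3&4\\ 5&6&7&8\\ \hline 9&10&11&12\\ 13&14&15&16 \end{array}\right) = \begin{pmatrix} A & B\\ C & D \end{pmatrix}, \qquad A = \begin{pmatrix} 1&2\\ 5&6 \end{pmatrix},\; B = \begin{pmatrix} 3&4\\ 7&8 \end{pmatrix},\; C = \begin{pmatrix} 9&10\\ 13&14 \end{pmatrix},\; D = \begin{pmatrix} 11&12\\ 15&16 \end{pmatrix}.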

Gröbner Basis
In mathematics, and more specifically in computer algebra, computational algebraic geometry, and computational commutative algebra, a Gröbner basis is a particular kind of generating set of an ideal in a polynomial ring K[x_1,\ldots,x_n] over a field K. A Gröbner basis allows many important properties of the ideal and the associated algebraic variety to be deduced easily, such as the dimension and the number of zeros when it is finite. Gröbner basis computation is one of the main practical tools for solving systems of polynomial equations and computing the images of algebraic varieties under projections or rational maps. Gröbner basis computation can be seen as a multivariate, non-linear generalization of both Euclid's algorithm for computing polynomial greatest common divisors, and Gaussian elimination for linear systems. Gröbner bases were introduced in 1965, together with an algorithm to compute them (Buchberger's algorithm), by Bruno Buchberger in his Ph.D. thesis. He named them after h ...
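As a brief usage sketch (assuming the SymPy library is available; the ideal and monomial order here are only an illustration), a Gröbner basis can be computed as follows:

from sympy import symbols, groebner

x, y, z = symbols('x y z')
# Groebner basis of the ideal <x**2 - y, x**3 - z> with respect to the
# lexicographic order x > y > z.
G = groebner([x**2 - y, x**3 - z], x, y, z, order='lex')
print(G)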

Standard Basis
In mathematics, the standard basis (also called natural basis or canonical basis) of a coordinate vector space (such as \mathbb{R}^n or \mathbb{C}^n) is the set of vectors whose components are all zero, except one that equals 1. For example, in the case of the Euclidean plane \mathbb{R}^2 formed by the pairs of real numbers, the standard basis is formed by the vectors
:\mathbf{e}_x = (1,0),\quad \mathbf{e}_y = (0,1).
Similarly, the standard basis for the three-dimensional space \mathbb{R}^3 is formed by the vectors
:\mathbf{e}_x = (1,0,0),\quad \mathbf{e}_y = (0,1,0),\quad \mathbf{e}_z = (0,0,1).
Here the vector \mathbf{e}_x points in the ''x'' direction, the vector \mathbf{e}_y points in the ''y'' direction, and the vector \mathbf{e}_z points in the ''z'' direction. There are several common notations for standard-basis vectors, including \{\mathbf{e}_x, \mathbf{e}_y, \mathbf{e}_z\}, \{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}, \{\mathbf{i}, \mathbf{j}, \mathbf{k}\}, and \{\hat{x}, \hat{y}, \hat{z}\}. These vectors are sometimes written with a hat to emphasize their status as unit vectors (standard unit vectors). These vectors are a basis in the sense that any other vector can ...

Restriction (mathematics)
In mathematics, the restriction of a function f is a new function, denoted f\vert_A or f{\restriction_A}, obtained by choosing a smaller domain A for the original function f. The function f is then said to extend f\vert_A. Formal definition Let f : E \to F be a function from a set E to a set F. If a set A is a subset of E, then the restriction of f to A is the function f\vert_A : A \to F given by f\vert_A(x) = f(x) for x \in A. Informally, the restriction of f to A is the same function as f, but is only defined on A. If the function f is thought of as a relation (x,f(x)) on the Cartesian product E \times F, then the restriction of f to A can be represented by its graph, consisting of the ordered pairs (x,f(x)) with x \in A. Extensions A function F is said to be an ''extension'' of another function f if whenever x is in the domain of f then x is also in the domain of F and f(x) = F(x). That is, if \operatorname{dom} f \subseteq \operatorname{dom} F and F\big\vert_{\operatorname{dom} f} = f. A ''linear extension'' (respectively, ''continuous extension'', etc.) of ...
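For example, the squaring function f : \mathbb{R} \to \mathbb{R}, f(x) = x^2, is not injective, but its restriction to the smaller domain [0,\infty),
:f\vert_{[0,\infty)} : [0,\infty) \to \mathbb{R},\quad x \mapsto x^2,
is injective, which is what allows the square root to be defined as its inverse.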

Kernel (linear algebra)
In mathematics, the kernel of a linear map, also known as the null space or nullspace, is the linear subspace of the domain of the map which is mapped to the zero vector. That is, given a linear map L : V \to W between two vector spaces V and W, the kernel of L is the vector space of all elements \mathbf{v} of V such that L(\mathbf{v}) = \mathbf{0}, where \mathbf{0} denotes the zero vector in W, or more symbolically:
:\ker(L) = \left\{ \mathbf{v} \in V : L(\mathbf{v}) = \mathbf{0} \right\}.
Properties The kernel of L is a linear subspace of the domain V. (Linear algebra, as discussed in this article, is a very well established mathematical discipline for which there are many sources; almost all of the material in this article can be found in , , and Strang's lectures.) In the linear map L : V \to W, two elements of V have the same image in W if and only if their difference lies in the kernel of L, that is,
:L\left(\mathbf{v}_1\right) = L\left(\mathbf{v}_2\right) \quad \text{if and only if} \quad L\left(\mathbf{v}_1-\mathbf{v}_2\right) = \mathbf{0}.
From this, it follows that the image of L is isomorphic to the quotient of V by the ...
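As a small computational sketch (assuming the SymPy library; the matrix is an arbitrary example), a basis of the kernel of the map \mathbb{R}^3 \to \mathbb{R}^2 given by a 2 \times 3 matrix can be computed as follows:

from sympy import Matrix

L = Matrix([[1, 2, 3],
            [4, 5, 6]])
# nullspace() returns a basis of the kernel; here it is a single vector
# proportional to (1, -2, 1), so the kernel is a line in R^3.
print(L.nullspace())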

Row Echelon Form
In linear algebra, a matrix is in echelon form if it has the shape resulting from a Gaussian elimination. A matrix being in row echelon form means that Gaussian elimination has operated on the rows, and column echelon form means that Gaussian elimination has operated on the columns. In other words, a matrix is in column echelon form if its transpose is in row echelon form. Therefore, only row echelon forms are considered in the remainder of this article. The similar properties of column echelon form are easily deduced by transposing all the matrices. Specifically, a matrix is in row echelon form if
* All rows consisting of only zeroes are at the bottom.
* The leading entry (that is, the left-most nonzero entry) of every nonzero row is to the right of the leading entry of every row above.
Some texts add the condition that the leading coefficient must be 1, while others require this only for ''reduced'' row echelon form. These two conditions imply that all entries in a column below a lead ...
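For instance, the following matrix is in row echelon form: the leading entries sit in columns 1, 3 and 4, so they move strictly to the right from row to row, and the zero row is at the bottom,
:\begin{pmatrix} 2 & 1 & 0 & 5\\ 0 & 0 & 3 & 1\\ 0 & 0 & 0 & 4\\ 0 & 0 & 0 & 0 \end{pmatrix}.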

Elementary Row Operations
In mathematics, an elementary matrix is a matrix which differs from the identity matrix by one single elementary row operation. The elementary matrices generate the general linear group GL''n''(F) when F is a field. Left multiplication (pre-multiplication) by an elementary matrix represents elementary row operations, while right multiplication (post-multiplication) represents elementary column operations. Elementary row operations are used in Gaussian elimination to reduce a matrix to row echelon form. They are also used in Gauss–Jordan elimination to further reduce the matrix to reduced row echelon form. Elementary row operations There are three types of elementary matrices, which correspond to three types of row operations (respectively, column operations). Row switching: a row within the matrix can be switched with another row, R_i \leftrightarrow R_j. Row multiplication: each element in a row can be multiplied by a non-zero constant; it is also known as ''scaling'' ...
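For 2 \times 2 matrices, for example, the three types of elementary matrices are
:\begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix} \;(\text{switch } R_1 \leftrightarrow R_2), \qquad \begin{pmatrix} 1 & 0\\ 0 & c \end{pmatrix} \;(\text{scale } R_2 \text{ by } c \ne 0), \qquad \begin{pmatrix} 1 & 0\\ k & 1 \end{pmatrix} \;(\text{add } kR_1 \text{ to } R_2),
and left-multiplying a matrix by one of them performs the corresponding row operation.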

Linear Independence
In the theory of vector spaces, a set of vectors is said to be ''linearly dependent'' if there is a nontrivial linear combination of the vectors that equals the zero vector. If no such linear combination exists, then the vectors are said to be ''linearly independent''. These concepts are central to the definition of dimension. A vector space can be of finite dimension or infinite dimension depending on the maximum number of linearly independent vectors. The definition of linear dependence and the ability to determine whether a subset of vectors in a vector space is linearly dependent are central to determining the dimension of a vector space. Definition A sequence of vectors \mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k from a vector space V is said to be ''linearly dependent'', if there exist scalars a_1, a_2, \dots, a_k, not all zero, such that
:a_1\mathbf{v}_1 + a_2\mathbf{v}_2 + \cdots + a_k\mathbf{v}_k = \mathbf{0},
where \mathbf{0} denotes the zero vector. This implies that at least one of the scalars is nonzero, say a_1\ne 0, an ...
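For example, in \mathbb{R}^2 the vectors (1,2) and (2,4) are linearly dependent, since
:2\,(1,2) - 1\,(2,4) = (0,0),
whereas (1,0) and (1,1) are linearly independent: a\,(1,0) + b\,(1,1) = (0,0) forces a + b = 0 and b = 0, hence a = b = 0.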

Linear Subspace
In mathematics, and more specifically in linear algebra, a linear subspace, also known as a vector subspace (the term ''linear subspace'' is sometimes used for referring to flats and affine subspaces; in the case of vector spaces over the reals, linear subspaces, flats, and affine subspaces are also called ''linear manifolds'' for emphasizing that there are also manifolds), is a vector space that is a subset of some larger vector space. A linear subspace is usually simply called a ''subspace'' when the context serves to distinguish it from other types of subspaces. Definition If ''V'' is a vector space over a field ''K'' and if ''W'' is a subset of ''V'', then ''W'' is a linear subspace of ''V'' if under the operations of ''V'', ''W'' is a vector space over ''K''. Equivalently, a nonempty subset ''W'' is a subspace of ''V'' if, whenever w_1, w_2 are elements of ''W'' and a, b are elements of ''K'', it follows that a w_1 + b w_2 is in ''W''. As a corollary, all vector spaces are equipped with at least two ( ...
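For example, W = \{(x, 0) : x \in \mathbb{R}\} is a linear subspace of \mathbb{R}^2: it contains (0,0), and a(x_1, 0) + b(x_2, 0) = (a x_1 + b x_2, 0) again lies in W. The line \{(x, 1) : x \in \mathbb{R}\}, by contrast, is not a subspace, since it does not contain the zero vector.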

Spanning Set
In mathematics, the linear span (also called the linear hull or just span) of a set S of vectors (from a vector space V), denoted \operatorname{span}(S), is defined as the set of all linear combinations of the vectors in S. It can be characterized either as the intersection of all linear subspaces that contain S, or as the smallest subspace containing S. The linear span of a set of vectors is therefore a vector space itself. Spans can be generalized to matroids and modules. To express that a vector space V is a linear span of a subset S, one commonly uses one of the following phrases: S spans V, S is a spanning set of V, V is spanned/generated by S, or S is a generator or generator set of V. Definition Given a vector space V over a field K, the span of a set S of vectors (not necessarily finite) is defined to be the intersection W of all subspaces of V that contain S. W is referred to as the subspace ''spanned by'' S, or by the vectors in S. Conversely, S is called a ''spanning set'' of W, and we s ...
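For example, the set \{(1,0),\,(1,1)\} spans \mathbb{R}^2, since every vector can be written as
:(x, y) = (x - y)\,(1, 0) + y\,(1, 1).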

Computer Algebra System
A computer algebra system (CAS) or symbolic algebra system (SAS) is any mathematical software with the ability to manipulate mathematical expressions in a way similar to the traditional manual computations of mathematicians and scientists. The development of the computer algebra systems in the second half of the 20th century is part of the discipline of "computer algebra" or "symbolic computation", which has spurred work in algorithms over mathematical objects such as polynomials. Computer algebra systems may be divided into two classes: specialized and general-purpose. The specialized ones are devoted to a specific part of mathematics, such as number theory, group theory, or teaching of elementary mathematics. General-purpose computer algebra systems aim to be useful to a user working in any scientific field that requires manipulation of mathematical expressions. To be useful, a general-purpose computer algebra system must include various features such as: *a user interface allo ...
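As a minimal sketch of such symbolic manipulation, using SymPy as one example of a general-purpose system:

from sympy import symbols, expand, factor, diff

x = symbols('x')
p = expand((x + 1)**3)   # symbolic expansion: x**3 + 3*x**2 + 3*x + 1
print(factor(p))         # recovers (x + 1)**3
print(diff(p, x))        # symbolic differentiation: 3*x**2 + 6*x + 3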