Math.NET Numerics
Math.NET Numerics is an open-source numerical library for .NET and Mono, written in C# and F#. It features functionality similar to BLAS and LAPACK.

History
Math.NET Numerics started in 2009 by merging the code and teams of dnAnalytics with Math.NET Iridium. It is influenced by ALGLIB, JAMA and Boost, among others, and has accepted numerous code contributions. It is part of the Math.NET initiative, which has built and maintained open mathematical toolkits for the .NET platform since 2002. Math.NET is used by several open source libraries and research projects, such as MyMediaLite, FermiSim and LightField Retrieval, as well as various theses and papers.

Features
The software library provides facilities for:
* Probability distributions: discrete, continuous and multivariate.
* Pseudo-random number generation, including the Mersenne Twister MT19937.
* Real and complex linear algebra types and solvers, with support for sparse matrices and vectors.
* LU, QR, SVD, EVD, and Cholesky decompositions.
* Matr ...
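As a rough illustration of the library's API, the following C# sketch builds a small dense system, solves it, and draws a normally distributed sample from an MT19937 generator. It is a minimal sketch only; the namespaces and overloads are assumed to match a recent Math.NET Numerics release and may differ in detail between versions.

    using MathNet.Numerics.Distributions;
    using MathNet.Numerics.LinearAlgebra;
    using MathNet.Numerics.Random;

    class Demo
    {
        static void Main()
        {
            // Dense 2x2 system A x = b, solved via the built-in factorizations.
            var A = Matrix<double>.Build.DenseOfArray(new double[,]
            {
                { 4.0, 1.0 },
                { 1.0, 3.0 }
            });
            var b = Vector<double>.Build.Dense(new[] { 1.0, 2.0 });
            var x = A.Solve(b);

            // MT19937 pseudo-random source feeding a continuous distribution.
            var rng = new MersenneTwister(42);
            var normal = new Normal(0.0, 1.0, rng);
            double sample = normal.Sample();

            System.Console.WriteLine($"x = {x}, sample = {sample}");
        }
    }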

C Sharp (programming Language)
C# (pronounced "see sharp") is a general-purpose, high-level, multi-paradigm programming language. C# encompasses static and strong typing, lexical scoping, and imperative, declarative, functional, generic, object-oriented (class-based), and component-oriented programming disciplines. The C# programming language was designed by Anders Hejlsberg of Microsoft in 2000 and was later approved as an international standard by Ecma (ECMA-334) in 2002 and ISO/IEC (ISO/IEC 23270) in 2003. Microsoft introduced C# along with the .NET Framework and Visual Studio, both of which were closed-source. At the time, Microsoft had no open-source products. Four years later, in 2004, a free and open-source project called Mono began, providing a cross-platform compiler and runtime environment for the C# programming language. A decade later, Microsoft released Visual Studio Code (a code editor), Roslyn (a compiler), and the unified .NET platform (a software framework), all of which support C# and are ...

LU Decomposition
In numerical analysis and linear algebra, lower–upper (LU) decomposition or factorization factors a matrix as the product of a lower triangular matrix and an upper triangular matrix (see matrix decomposition). The product sometimes includes a permutation matrix as well. LU decomposition can be viewed as the matrix form of Gaussian elimination. Computers usually solve square systems of linear equations using LU decomposition, and it is also a key step when inverting a matrix or computing the determinant of a matrix. The LU decomposition was introduced by the Polish mathematician Tadeusz Banachiewicz in 1938.

Definitions
Let A be a square matrix. An LU factorization refers to the factorization of A, with proper row and/or column orderings or permutations, into two factors – a lower triangular matrix L and an upper triangular matrix U:

A = LU.

In the lower triangular matrix all elements above the diagonal are zero; in the upper triangular matrix, all ...
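In Math.NET Numerics, an LU factorization of a dense matrix can be obtained from the matrix itself and reused to solve systems or compute the determinant. This is a minimal sketch assuming a recent release; member names such as LU(), Solve and Determinant are taken from the library's factorization API and may differ between versions.

    using MathNet.Numerics.LinearAlgebra;

    var A = Matrix<double>.Build.DenseOfArray(new double[,]
    {
        { 2.0, 1.0, 1.0 },
        { 4.0, 3.0, 3.0 },
        { 8.0, 7.0, 9.0 }
    });
    var b = Vector<double>.Build.Dense(new[] { 1.0, 2.0, 3.0 });

    var lu = A.LU();              // P*A = L*U with partial pivoting
    var x = lu.Solve(b);          // reuses the factorization
    var det = lu.Determinant;     // product of U's diagonal (sign-adjusted)

    System.Console.WriteLine($"x = {x}, det(A) = {det}");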

List Of Numerical Libraries
This is a list of numerical libraries, which are libraries used in software development for performing numerical calculations. It is not a complete listing but is instead a list of numerical libraries with articles on Wikipedia, with few exceptions. The choice of a typical library depends on a range of requirements such as: desired features (e.g. large dimensional linear algebra, parallel computation, partial differential equations), licensing, readability of API, portability or platform/compiler dependence (e.g. Linux, Windows, Visual C++, GCC), performance, ease-of-use, continued support from developers, standard compliance, specialized optimization in code for specific application scenarios or even the size of the code-base to be installed.

Multi-language
C
C++
Delphi
* ALGLIB - an open source numerical analysis library.
.NET Framework languages C#, F#, VB.NET and PowerShell
Fortran
Java
* SuanShu is an open-source Java math library. It supports numerical ...

Markov Chain Monte Carlo
In statistics, Markov chain Monte Carlo (MCMC) methods comprise a class of algorithms for sampling from a probability distribution. By constructing a Markov chain that has the desired distribution as its equilibrium distribution, one can obtain a sample of the desired distribution by recording states from the chain. The more steps that are included, the more closely the distribution of the sample matches the actual desired distribution. Various algorithms exist for constructing chains, including the Metropolis–Hastings algorithm. Application domains MCMC methods are primarily used for calculating numerical approximations of multi-dimensional integrals, for example in Bayesian statistics, computational physics, computational biology and computational linguistics. In Bayesian statistics, the recent development of MCMC methods has made it possible to compute large hierarchical models that require integrations over hundreds to thousands of unknown parameters. In rare even ...
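As a concrete illustration of the Metropolis–Hastings idea mentioned above, the following self-contained C# sketch samples from an unnormalized one-dimensional target density using a symmetric random-walk proposal. It is a toy example, not Math.NET's own MCMC API; the target density, step size and chain length are arbitrary choices for illustration.

    using System;

    class MetropolisHastingsDemo
    {
        // Unnormalized target density: a standard normal, exp(-x^2 / 2).
        static double Target(double x) => Math.Exp(-0.5 * x * x);

        static void Main()
        {
            var rng = new Random(1);
            double x = 0.0;              // current state of the chain
            double stepSize = 1.0;       // scale of the random-walk proposal
            var samples = new double[10000];

            for (int i = 0; i < samples.Length; i++)
            {
                // Symmetric proposal: uniform step around the current state.
                double proposal = x + stepSize * (2.0 * rng.NextDouble() - 1.0);

                // Accept with probability min(1, target(proposal) / target(x)).
                double acceptProb = Target(proposal) / Target(x);
                if (rng.NextDouble() < acceptProb)
                    x = proposal;

                samples[i] = x;          // record the (possibly repeated) state
            }

            Console.WriteLine($"last sample = {samples[^1]:F3}");
        }
    }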

Pearson Product-moment Correlation Coefficient
In statistics, the Pearson correlation coefficient (PCC) ― also known as Pearson's r, the Pearson product-moment correlation coefficient (PPMCC), the bivariate correlation, or colloquially simply as the correlation coefficient ― is a measure of linear correlation between two sets of data. It is the ratio between the covariance of two variables and the product of their standard deviations; thus, it is essentially a normalized measurement of the covariance, such that the result always has a value between −1 and 1. As with covariance itself, the measure can only reflect a linear correlation of variables, and ignores many other types of relationships or correlations. As a simple example, one would expect the age and height of a sample of teenagers from a high school to have a Pearson correlation coefficient significantly greater than 0, but less than 1 (as 1 would represent an unrealistically perfect correlation).

Naming and history
It was developed by Kar ...
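The ratio-of-covariance definition above translates directly into code. The sketch below computes r for two small samples, both directly from the definition and via a library helper; the helper's name and namespace (MathNet.Numerics.Statistics.Correlation.Pearson) are assumed to match a recent Math.NET Numerics release.

    using System;
    using System.Linq;
    using MathNet.Numerics.Statistics;

    double[] xs = { 1.0, 2.0, 3.0, 4.0, 5.0 };
    double[] ys = { 2.1, 3.9, 6.2, 8.1, 9.8 };

    // r = cov(x, y) / (sd(x) * sd(y)), computed directly from the definition.
    double mx = xs.Average(), my = ys.Average();
    double cov = xs.Zip(ys, (x, y) => (x - mx) * (y - my)).Sum() / (xs.Length - 1);
    double sx = Math.Sqrt(xs.Sum(x => (x - mx) * (x - mx)) / (xs.Length - 1));
    double sy = Math.Sqrt(ys.Sum(y => (y - my) * (y - my)) / (ys.Length - 1));
    double rManual = cov / (sx * sy);

    // The same value via the library helper.
    double rLibrary = Correlation.Pearson(xs, ys);

    Console.WriteLine($"manual r = {rManual:F4}, library r = {rLibrary:F4}");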

Lagrange Polynomial
In numerical analysis, the Lagrange interpolating polynomial is the unique polynomial of lowest degree that interpolates a given set of data. Given a data set of coordinate pairs (x_j, y_j) with 0 \leq j \leq k, the x_j are called nodes and the y_j are called values. The Lagrange polynomial L(x) has degree \leq k and assumes each value at the corresponding node, L(x_j) = y_j. Although named after Joseph-Louis Lagrange, who published it in 1795, the method was first discovered in 1779 by Edward Waring. It is also an easy consequence of a formula published in 1783 by Leonhard Euler. Uses of Lagrange polynomials include the Newton–Cotes method of numerical integration and Shamir's secret sharing scheme in cryptography. For equispaced nodes, Lagrange interpolation is susceptible to Runge's phenomenon of large oscillation.

Definition
Given a set of k + 1 nodes \{x_0, x_1, \ldots, x_k\}, which must all be distinct, x_j \neq x_m for indices j \neq m, the Lagrange basis for polynomials of degre ...
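A direct implementation of the Lagrange form is short: for each node j, multiply the basis factors (x - x_m) / (x_j - x_m) over all m != j and weight the product by y_j. The sketch below is a plain C# illustration of that formula (not a Math.NET API); for equispaced nodes it inherits the Runge-phenomenon caveat mentioned above.

    using System;

    static double LagrangeInterpolate(double[] nodes, double[] values, double x)
    {
        double sum = 0.0;
        for (int j = 0; j < nodes.Length; j++)
        {
            // Basis polynomial l_j(x): 1 at nodes[j], 0 at every other node.
            double basis = 1.0;
            for (int m = 0; m < nodes.Length; m++)
            {
                if (m != j)
                    basis *= (x - nodes[m]) / (nodes[j] - nodes[m]);
            }
            sum += values[j] * basis;
        }
        return sum;
    }

    // Interpolate y = x^2 from three nodes and evaluate at x = 1.5.
    double[] xs = { 0.0, 1.0, 2.0 };
    double[] ys = { 0.0, 1.0, 4.0 };
    Console.WriteLine(LagrangeInterpolate(xs, ys, 1.5));   // prints 2.25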

Struve Function
In mathematics, the Struve functions \mathbf{H}_\alpha(x) are solutions y(x) of the non-homogeneous Bessel's differential equation:

x^2 \frac{d^2 y}{dx^2} + x \frac{dy}{dx} + \left(x^2 - \alpha^2\right)y = \frac{4\left(\frac{x}{2}\right)^{\alpha+1}}{\sqrt{\pi}\,\Gamma\left(\alpha+\frac{1}{2}\right)}

introduced by Hermann Struve (1882). The complex number α is the order of the Struve function, and is often an integer. The second-kind version \mathbf{K}_\alpha(x) is further defined as \mathbf{K}_\alpha(x) = \mathbf{H}_\alpha(x) - Y_\alpha(x). The modified Struve functions \mathbf{L}_\alpha(x), equal to -i e^{-i\alpha\pi/2}\mathbf{H}_\alpha(ix), are solutions y(x) of the non-homogeneous Bessel's differential equation:

x^2 \frac{d^2 y}{dx^2} + x \frac{dy}{dx} - \left(x^2 + \alpha^2\right)y = \frac{4\left(\frac{x}{2}\right)^{\alpha+1}}{\sqrt{\pi}\,\Gamma\left(\alpha+\frac{1}{2}\right)}

with the second-kind version \mathbf{M}_\alpha(x) further defined as \mathbf{M}_\alpha(x) = \mathbf{L}_\alpha(x) - I_\alpha(x).

Definitions
Since this is a non-homogeneous equation, solutions can be constructed from a single particular solution by adding the solutions of the homogeneous problem. In this case, the homogeneous solutions are the Bessel functions, and the particular solution may be chosen as the corresponding Struve fu ...
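For reference, the particular solution described here also has a standard power-series representation (as given, for example, in the DLMF), stated below as a worked complement to the defining differential equation:

\mathbf{H}_\alpha(x) = \sum_{m=0}^{\infty} \frac{(-1)^m}{\Gamma\!\left(m+\tfrac{3}{2}\right)\,\Gamma\!\left(m+\alpha+\tfrac{3}{2}\right)} \left(\frac{x}{2}\right)^{2m+\alpha+1}.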

Bessel Function
Bessel functions, first defined by the mathematician Daniel Bernoulli and then generalized by Friedrich Bessel, are canonical solutions y(x) of Bessel's differential equation

x^2 \frac{d^2 y}{dx^2} + x \frac{dy}{dx} + \left(x^2 - \alpha^2\right)y = 0

for an arbitrary complex number \alpha, the order of the Bessel function. Although \alpha and -\alpha produce the same differential equation, it is conventional to define different Bessel functions for these two values in such a way that the Bessel functions are mostly smooth functions of \alpha. The most important cases are when \alpha is an integer or half-integer. Bessel functions for integer \alpha are also known as cylinder functions or the cylindrical harmonics because they appear in the solution to Laplace's equation in cylindrical coordinates. Spherical Bessel functions with half-integer \alpha are obtained when the Helmholtz equation is solved in spherical coordinates.

Applications of Bessel functions
The Bessel f ...
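For arbitrary order \alpha, the Bessel function of the first kind that solves this equation can be written as the well-known power series around x = 0, included here as a worked complement to the ODE:

J_\alpha(x) = \sum_{m=0}^{\infty} \frac{(-1)^m}{m!\,\Gamma(m+\alpha+1)} \left(\frac{x}{2}\right)^{2m+\alpha}.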

Error Function
In mathematics, the error function (also called the Gauss error function), often denoted by erf, is a complex function of a complex variable defined as:

\operatorname{erf} z = \frac{2}{\sqrt{\pi}}\int_0^z e^{-t^2}\,\mathrm dt.

This integral is a special (non-elementary) sigmoid function that occurs often in probability, statistics, and partial differential equations. In many of these applications, the function argument is a real number. If the function argument is real, then the function value is also real. In statistics, for non-negative values of x, the error function has the following interpretation: for a random variable Y that is normally distributed with mean 0 and standard deviation 1/\sqrt{2}, \operatorname{erf} x is the probability that Y falls in the range [-x, x]. Two closely related functions are the complementary error function (erfc) defined as

\operatorname{erfc} z = 1 - \operatorname{erf} z,

and the imaginary error function (erfi) defined as

\operatorname{erfi} z = -i\operatorname{erf} iz,

where i is the imaginary unit.

Name
The name "error functi ...
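Math.NET Numerics exposes the real-valued error function through its SpecialFunctions class, which can be used, for example, to express the standard normal CDF. A minimal sketch, assuming SpecialFunctions.Erf and SpecialFunctions.Erfc are available in the installed version:

    using System;
    using MathNet.Numerics;

    double x = 1.0;
    double erf = SpecialFunctions.Erf(x);
    double erfc = SpecialFunctions.Erfc(x);   // 1 - erf(x), computed directly

    // Standard normal CDF via the identity Phi(x) = (1 + erf(x / sqrt(2))) / 2.
    double phi = 0.5 * (1.0 + SpecialFunctions.Erf(x / Math.Sqrt(2.0)));

    Console.WriteLine($"erf(1) = {erf:F6}, erfc(1) = {erfc:F6}, Phi(1) = {phi:F6}");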

Beta Function
In mathematics, the beta function, also called the Euler integral of the first kind, is a special function that is closely related to the gamma function and to binomial coefficients. It is defined by the integral

\Beta(z_1,z_2) = \int_0^1 t^{z_1-1}(1-t)^{z_2-1}\,dt

for complex number inputs z_1, z_2 such that \Re(z_1), \Re(z_2) > 0. The beta function was studied by Leonhard Euler and Adrien-Marie Legendre and was given its name by Jacques Binet; its symbol Β is a Greek capital beta.

Properties
The beta function is symmetric, meaning that \Beta(z_1,z_2) = \Beta(z_2,z_1) for all inputs z_1 and z_2 (Davis (1972) 6.2.2 p. 258). A key property of the beta function is its close relationship to the gamma function:

\Beta(z_1,z_2) = \frac{\Gamma(z_1)\,\Gamma(z_2)}{\Gamma(z_1+z_2)}.

A proof is given below. The beta function is also closely related to binomial coefficients. When m (or n, by symmetry) is a positive integer, it follows from the definition of the gamma function that (Davis (1972) 6.2.1 p. 258)

\Beta(m,n) = \frac{(m-1)!\,(n-1)!}{(m+n-1)!} = \frac ...
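The gamma-function relation above gives a direct way to evaluate the beta function numerically. The sketch below uses Math.NET's SpecialFunctions.Gamma (assumed available, as in the error-function example) and checks the factorial identity for integer arguments:

    using System;
    using MathNet.Numerics;

    static double Beta(double a, double b) =>
        SpecialFunctions.Gamma(a) * SpecialFunctions.Gamma(b) / SpecialFunctions.Gamma(a + b);

    // For positive integers m, n: B(m, n) = (m-1)! (n-1)! / (m+n-1)!.
    double viaGamma = Beta(3.0, 4.0);
    double viaFactorials = (2.0 * 1.0) * (3.0 * 2.0 * 1.0)
                           / (6.0 * 5.0 * 4.0 * 3.0 * 2.0 * 1.0);   // = 1/60

    Console.WriteLine($"B(3,4) via Gamma = {viaGamma:F6}, via factorials = {viaFactorials:F6}");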

Gamma Function
In mathematics, the gamma function (represented by \Gamma, the capital letter gamma from the Greek alphabet) is one commonly used extension of the factorial function to complex numbers. The gamma function is defined for all complex numbers except the non-positive integers. For every positive integer n,

\Gamma(n) = (n-1)!\,.

For complex numbers with a positive real part, the gamma function is defined (as originally derived by Daniel Bernoulli) via a convergent improper integral:

\Gamma(z) = \int_0^\infty t^{z-1} e^{-t}\,dt, \qquad \Re(z) > 0\,.

The gamma function is then defined as the analytic continuation of this integral function to a meromorphic function that is holomorphic in the whole complex plane except zero and the negative integers, where the function has simple poles. The gamma function has no zeroes, so the reciprocal gamma function 1/\Gamma(z) is an entire function. In fact, the gamma function corresponds to the Mellin transform of the negative exponential function: \Gamma(z) = \mathcal M ...
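Math.NET's SpecialFunctions class (assumed API, as in the examples above) exposes both the gamma function and its logarithm; the latter is preferable for large arguments where \Gamma overflows a double. A small sketch checking \Gamma(n) = (n-1)! and \Gamma(1/2) = \sqrt{\pi}:

    using System;
    using MathNet.Numerics;

    double g5 = SpecialFunctions.Gamma(5.0);        // expected 4! = 24
    double gHalf = SpecialFunctions.Gamma(0.5);     // expected sqrt(pi) ~= 1.772454
    double lnG = SpecialFunctions.GammaLn(500.0);   // log-gamma avoids overflow

    Console.WriteLine($"Gamma(5) = {g5}, Gamma(0.5) = {gHalf:F6}, ln Gamma(500) = {lnG:F3}");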

Cholesky Decomposition
In linear algebra, the Cholesky decomposition or Cholesky factorization is a decomposition of a Hermitian, positive-definite matrix into the product of a lower triangular matrix and its conjugate transpose, which is useful for efficient numerical solutions, e.g., Monte Carlo simulations. It was discovered by André-Louis Cholesky for real matrices, and posthumously published in 1924. When it is applicable, the Cholesky decomposition is roughly twice as efficient as the LU decomposition for solving systems of linear equations.

Statement
The Cholesky decomposition of a Hermitian positive-definite matrix A is a decomposition of the form

\mathbf{A} = \mathbf{L}\mathbf{L}^*,

where L is a lower triangular matrix with real and positive diagonal entries, and L* denotes the conjugate transpose of L. Every Hermitian positive-definite matrix (and thus also every real-valued symmetric positive-definite matrix) has a unique Cholesky decomposition. The converse holds trivially: if A ...
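In Math.NET Numerics, a symmetric positive-definite system can be factored and solved through the Cholesky path instead of LU. A minimal sketch, assuming the Cholesky() factorization member and its Factor/Solve members behave as in recent releases:

    using MathNet.Numerics.LinearAlgebra;

    // A symmetric positive-definite matrix.
    var A = Matrix<double>.Build.DenseOfArray(new double[,]
    {
        { 4.0, 2.0 },
        { 2.0, 3.0 }
    });
    var b = Vector<double>.Build.Dense(new[] { 1.0, 2.0 });

    var chol = A.Cholesky();
    var L = chol.Factor;          // lower triangular L with A = L * L^T
    var x = chol.Solve(b);        // roughly half the work of an LU solve

    System.Console.WriteLine($"L = {L}\nx = {x}");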