The Unscrambler
The Unscrambler X is a commercial software product for multivariate data analysis. It is used to calibrate multivariate models from analytical data such as near-infrared (NIR) and Raman spectra, and to develop predictive models for real-time spectroscopic analysis of materials. The software was originally developed in 1986 by Harald Martens and later by CAMO Software.

Functionality

The Unscrambler X was an early software implementation of partial least squares (PLS) regression. Other techniques supported include principal component analysis (PCA), 3-way PLS, multivariate curve resolution, design of experiments, supervised classification, unsupervised classification, and cluster analysis. The software is used in spectroscopy (IR, NIR, Raman, etc.), chromatography, and process applications, both in research and in non-destructive quality control systems such as those in pharmaceutical manufacturing and sensory analysis.
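As a rough illustration of the calibration workflow described above, the following minimal sketch performs a PLS regression with scikit-learn rather than with The Unscrambler itself; the "spectra", wavelength grid, and reference concentrations are synthetic stand-ins, not data or code from the product.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # Minimal sketch of PLS calibration, the kind of workflow chemometrics
    # packages support. Synthetic "spectra" stand in for real NIR data.
    rng = np.random.default_rng(0)
    n_samples, n_wavelengths = 40, 200
    concentration = rng.uniform(0.0, 1.0, size=n_samples)      # reference values
    signature = np.sin(np.linspace(0, np.pi, n_wavelengths))   # analyte's spectral shape
    spectra = (concentration[:, None] * signature
               + 0.02 * rng.standard_normal((n_samples, n_wavelengths)))

    # Calibrate a PLS model mapping spectra to concentration
    pls = PLSRegression(n_components=3).fit(spectra, concentration)

    # Predict the concentration behind a new "measured" spectrum
    new_spectrum = 0.5 * signature + 0.02 * rng.standard_normal(n_wavelengths)
    print(pls.predict(new_spectrum.reshape(1, -1)))   # ~ [[0.5]]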
CAMO Software
Camo is a ''frazione'' of the ''comune'' of Santo Stefano Belbo in the Province of Cuneo, in the Italian region of Piedmont, located southeast of Turin and about 60 km northeast of Cuneo. The municipality of Camo was abolished on 31 December 2018. The former ''comune'' bordered the municipalities of Cossano Belbo, Mango, and Santo Stefano Belbo, the birthplace of the 20th-century author Cesare Pavese.
Principal Component Analysis
Principal component analysis (PCA) is a popular technique for analyzing large datasets containing a high number of dimensions/features per observation. It increases the interpretability of data while preserving the maximum amount of information, and enables the visualization of multidimensional data. Formally, PCA is a statistical technique for reducing the dimensionality of a dataset. This is accomplished by linearly transforming the data into a new coordinate system in which (most of) the variation in the data can be described with fewer dimensions than in the initial data. Many studies use the first two principal components in order to plot the data in two dimensions and to visually identify clusters of closely related data points. Principal component analysis has applications in many fields such as population genetics, microbiome studies, and atmospheric science. The principal components of a collection of points in a real coordinate space are a sequence of p unit vectors, where the i-th vector is the direction of a line that best fits the data while being orthogonal to the first i − 1 vectors.
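A minimal sketch of PCA in Python with scikit-learn, assuming synthetic data: it projects 10-dimensional observations onto the first two principal components, producing the two-dimensional view the excerpt describes.

    import numpy as np
    from sklearn.decomposition import PCA

    # Minimal sketch: reduce 10-dimensional synthetic data to its first two
    # principal components, the projection commonly used for 2-D plots.
    rng = np.random.default_rng(0)
    latent = rng.standard_normal((100, 2))    # 2 true underlying factors
    mixing = rng.standard_normal((2, 10))     # spread them over 10 features
    data = latent @ mixing + 0.1 * rng.standard_normal((100, 10))

    pca = PCA(n_components=2)
    scores = pca.fit_transform(data)          # coordinates in the new system
    print(scores.shape)                       # (100, 2)
    print(pca.explained_variance_ratio_)      # most variance in 2 components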
Computational Chemistry
Computational chemistry is a branch of chemistry that uses computer simulation to assist in solving chemical problems. It uses methods of theoretical chemistry, incorporated into computer programs, to calculate the structures and properties of molecules, groups of molecules, and solids. It is essential because, apart from relatively recent results concerning the hydrogen molecular ion (the dihydrogen cation), the quantum many-body problem cannot be solved analytically, much less in closed form. While computational results normally complement the information obtained by chemical experiments, they can in some cases predict hitherto unobserved chemical phenomena. Computational chemistry is widely used in the design of new drugs and materials. Examples of such properties are structure (i.e., the expected positions of the constituent atoms), absolute and relative (interaction) energies, electronic charge density distributions, dipoles and higher multipole moments, and vibrational frequencies.
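The flavor of such numerical calculation can be shown on a deliberately tiny problem. The sketch below solves the one-dimensional time-independent Schrödinger equation for a harmonic oscillator by finite differences; it is an illustrative toy (with atomic units and grid size chosen arbitrarily), not a method any production quantum chemistry code would use at this scale.

    import numpy as np

    # Toy sketch: solve the 1-D time-independent Schrodinger equation for a
    # harmonic oscillator by finite differences (units chosen so that
    # hbar = m = omega = 1). Real codes tackle many-electron systems with
    # far more sophisticated methods; this only shows the numerical flavor.
    n = 1000                       # number of grid points (arbitrary choice)
    x = np.linspace(-10, 10, n)    # spatial grid
    dx = x[1] - x[0]

    # Kinetic energy -1/2 d^2/dx^2 as a tridiagonal finite-difference matrix
    main = np.full(n, 1.0 / dx**2)
    off = np.full(n - 1, -0.5 / dx**2)
    T = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

    # Potential energy V(x) = x^2 / 2 on the diagonal
    V = np.diag(0.5 * x**2)

    # Diagonalize the Hamiltonian; eigenvalues approximate E_n = n + 1/2
    energies = np.linalg.eigvalsh(T + V)
    print(energies[:4])            # ~ [0.5, 1.5, 2.5, 3.5]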
Statistical Software
Statistical software consists of specialized computer programs for analysis in statistics and econometrics.

Open-source

* ADaMSoft – a generalized statistical software with data mining algorithms and methods for data management
* ADMB – a software suite for non-linear statistical modeling based on C++ which uses automatic differentiation
* Chronux – for neurobiological time series data
* DAP – free replacement for SAS
* Environment for DeveLoping KDD-Applications Supported by Index-Structures (ELKI) – a software framework for developing data mining algorithms in Java
* Epi Info – statistical software for epidemiology developed by the Centers for Disease Control and Prevention (CDC); Apache 2 licensed
* Fityk – nonlinear regression software (GUI and command line)
* GNU Octave – programming language very similar to MATLAB with statistical features
* gretl – GNU Regression, Econometrics and Time-series Library
* intrinsic Noise Analyzer (iNA) – for analyzing intrinsic fluctuations
Pier Giorgio Righetti
Pier Giorgio Righetti (born 25 April 1941 in Forlì, northern Italy) is a professor emeritus of chemistry. He worked primarily at the University of Milan (1971-1995) and at the Department of Chemistry of the Politecnico di Milano in Milan, Italy (2005-2011). He has served as President of the Società Italiana di Proteomica (Italian Proteome Society, IPSo). A special issue of ''Electrophoresis'' was published in Righetti's honor in 2006, recognizing his work developing new methods and techniques for electrophoresis. His contributions have been described as "paramount in the ability to separate biomolecules electrophoretically". Righetti uses those techniques to study proteomes, the sets of proteins expressed in organisms, focusing particularly on animals and foods. Proteomics describes the proteins that can exist in a given type of cell, along with their interactions, form, and structure. Proteins can remain intact for hundreds or even thousands of years.
Chemical Industry
The chemical industry comprises the companies that produce industrial chemicals. Central to the modern world economy, it converts raw materials (oil, natural gas, air, water, metals, and minerals) into more than 70,000 different products. The plastics industry contains some overlap, as some chemical companies produce plastics as well as chemicals. Various professionals are involved in the chemical industry, including chemical engineers, chemists and lab technicians.

History

Although chemicals were made and used throughout history, the birth of the heavy chemical industry (production of chemicals in large quantities for a variety of uses) coincided with the beginnings of the Industrial Revolution.

Industrial Revolution

One of the first chemicals to be produced in large amounts through industrial processes was sulfuric acid. In 1736 the pharmacist Joshua Ward developed a process for its production that involved heating saltpeter, allowing the sulfur to oxidize and combine with water.
Sensory Analysis
Sensory analysis (or sensory evaluation) is a scientific discipline that applies principles of experimental design and statistical analysis to the use of human senses (sight, smell, taste, touch and hearing) for the purpose of evaluating consumer products. The discipline requires panels of human assessors, who test the products and whose responses are recorded. By applying statistical techniques to the results it is possible to draw inferences and gain insights about the products under test. Most large consumer goods companies have departments dedicated to sensory analysis.

Sensory analysis can mainly be broken down into three sub-sections:

* Analytical testing (dealing with objective facts about products)
* Affective testing (dealing with subjective facts such as preferences)
* Perception (the biochemical and psychological aspects of sensation)

Analytical testing

This type of testing is concerned with obtaining objective facts about products.
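As a small illustration of applying statistical techniques to panel responses, the sketch below runs a one-way ANOVA over invented hedonic scores for three hypothetical product formulations; the data, rating scale, and panel size are assumptions for illustration only.

    from scipy import stats

    # Hypothetical sensory-panel scores (9-point hedonic scale) for three
    # product formulations, each rated by eight assessors. The numbers are
    # invented purely for illustration.
    formulation_a = [6, 7, 6, 8, 7, 7, 6, 7]
    formulation_b = [5, 6, 5, 6, 5, 7, 6, 5]
    formulation_c = [8, 7, 8, 9, 8, 7, 8, 8]

    # One-way ANOVA: do the mean ratings differ across formulations?
    f_stat, p_value = stats.f_oneway(formulation_a, formulation_b, formulation_c)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

    # A small p-value (e.g. < 0.05) suggests at least one formulation's mean
    # rating differs; a post-hoc test would identify which pairs.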
Quality Control
Quality control (QC) is a process by which entities review the quality of all factors involved in production. ISO 9000 defines quality control as "a part of quality management focused on fulfilling quality requirements". This approach places emphasis on three aspects (enshrined in standards such as ISO 9001):

# Elements such as controls, job management, defined and well-managed processes, performance and integrity criteria, and identification of records
# Competence, such as knowledge, skills, experience, and qualifications
# Soft elements, such as personnel, integrity, confidence, organizational culture, motivation, team spirit, and quality relationships

Inspection is a major component of quality control, where physical product is examined visually (or the end results of a service are analyzed). Product inspectors are provided with lists and descriptions of unacceptable product defects, such as cracks or surface blemishes.
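One common statistical quality-control technique, not detailed in the excerpt above, is the Shewhart control chart. A minimal sketch with invented measurements follows: it estimates control limits from an in-control baseline run and flags later samples that fall outside them.

    import numpy as np

    # Minimal sketch of a Shewhart individuals control chart, a common
    # statistical quality-control technique. All numbers are invented.
    # Limits are estimated from an in-control baseline run, then used to
    # monitor new production measurements.
    baseline = np.array([10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0, 9.9, 10.1, 9.9])
    mean = baseline.mean()
    sigma = baseline.std(ddof=1)

    ucl = mean + 3 * sigma   # upper control limit
    lcl = mean - 3 * sigma   # lower control limit

    # Monitor new measurements against the fixed limits
    new_samples = np.array([10.0, 10.2, 9.9, 11.3, 10.1])
    flags = (new_samples > ucl) | (new_samples < lcl)
    print(f"UCL = {ucl:.2f}, LCL = {lcl:.2f}")
    print("out of control:", new_samples[flags])   # 11.3 is flagged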
Cluster Analysis
Cluster analysis or clustering is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense) to each other than to those in other groups (clusters). It is a main task of exploratory data analysis, and a common technique for statistical data analysis, used in many fields, including pattern recognition, image analysis, information retrieval, bioinformatics, data compression, computer graphics and machine learning. Cluster analysis itself is not one specific algorithm, but the general task to be solved. It can be achieved by various algorithms that differ significantly in their understanding of what constitutes a cluster and how to find them efficiently. Popular notions of clusters include groups with small distances between cluster members, dense areas of the data space, intervals or particular statistical distributions. Clustering can therefore be formulated as a multi-objective optimization problem.
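A minimal sketch of one popular clustering algorithm, k-means, applied with scikit-learn to synthetic two-dimensional data; the blob locations, cluster count, and other parameters are arbitrary choices for illustration.

    import numpy as np
    from sklearn.cluster import KMeans

    # Minimal sketch: k-means clustering on synthetic 2-D data.
    # Three Gaussian blobs stand in for "dense areas of the data space".
    rng = np.random.default_rng(0)
    points = np.vstack([
        rng.normal(loc=(0, 0), scale=0.5, size=(50, 2)),
        rng.normal(loc=(5, 5), scale=0.5, size=(50, 2)),
        rng.normal(loc=(0, 5), scale=0.5, size=(50, 2)),
    ])

    # Partition into 3 clusters by minimizing within-cluster squared distances
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(points)
    print(kmeans.cluster_centers_)   # ~ (0,0), (5,5), (0,5) in some order
    print(kmeans.labels_[:10])       # cluster index assigned to each point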
Unsupervised Classification
Unsupervised learning is a type of machine learning algorithm that learns patterns from untagged data. The hope is that, through mimicry (an important mode of learning in people), the machine is forced to build a concise representation of its world and can then generate imaginative content from it. In contrast to supervised learning, where data is tagged by an expert, e.g. as a "ball" or a "fish", unsupervised methods exhibit self-organization that captures patterns as probability densities or as a combination of neural feature preferences encoded in the machine's weights and activations. The other levels in the supervision spectrum are reinforcement learning, where the machine is given only a numerical performance score as guidance, and semi-supervised learning, where a small portion of the data is tagged.

Neural networks

Tasks vs. methods

Neural network tasks are often categorized as discriminative (recognition) or generative (imagination). Often, but not always, discriminative tasks use supervised methods and generative tasks use unsupervised ones.
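As a small example of unsupervised methods capturing patterns as probability densities, the sketch below fits a Gaussian mixture model to untagged synthetic data with scikit-learn; the component count and the data itself are assumptions for illustration.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Minimal sketch: unsupervised density estimation on untagged data.
    # A Gaussian mixture model captures the data's structure as a
    # probability density, with no labels involved. Data are synthetic.
    rng = np.random.default_rng(0)
    data = np.concatenate([
        rng.normal(loc=-2.0, scale=0.5, size=200),
        rng.normal(loc=3.0, scale=1.0, size=200),
    ]).reshape(-1, 1)

    gmm = GaussianMixture(n_components=2, random_state=0).fit(data)
    print(gmm.means_.ravel())   # ~ [-2.0, 3.0] in some order
    print(gmm.weights_)         # ~ [0.5, 0.5]

    # The fitted density can score new points (log-densities) with no labels
    print(gmm.score_samples(np.array([[-2.0], [3.0], [10.0]])))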