Intel Fortran Compiler
Intel Fortran Compiler, part of the Intel oneAPI HPC Toolkit, is a group of Fortran compilers from Intel for Windows, macOS, and Linux. The compilers generate code for IA-32 and Intel 64 processors and for certain compatible non-Intel processors, such as certain AMD processors. A specific release of the compiler (11.1) remains available for development of Linux-based applications for IA-64 (Itanium 2) processors. On Windows it is known as Intel Visual Fortran; on macOS and Linux it is known as Intel Fortran. In 2020 the existing compiler was renamed "Intel Fortran Compiler Classic" (ifort), and a new Intel Fortran Compiler for oneAPI (ifx) supporting GPU offload was introduced. The 2021 release of the Classic compiler adds full Fortran support through the 2018 standard, full OpenMP 4.5, and initial OpenMP 5.1 for CPU only. The 2021 beta compiler focuses on OpenMP for GPU offload. When used with the Intel oneAPI HPC Toolkit (see the "Description of Packaging" ...
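As a rough illustration of the kind of source both front ends accept, here is a minimal OpenMP-parallel Fortran sketch; the file name and compile lines are invented for this example, and exact flag spellings should be checked against the installed release.

    ! saxpy.f90 -- hypothetical file name for this sketch.
    ! Illustrative compile lines (verify flags against your release):
    !   ifort -qopenmp saxpy.f90     ! Intel Fortran Compiler Classic
    !   ifx   -qopenmp saxpy.f90     ! LLVM-based oneAPI compiler
    program saxpy
      implicit none
      integer, parameter :: n = 100000
      real :: x(n), y(n), a
      integer :: i
      a = 2.0
      x = 1.0
      y = 3.0
      ! Threads split the iteration space among themselves.
      !$omp parallel do
      do i = 1, n
        y(i) = a * x(i) + y(i)
      end do
      !$omp end parallel do
      print *, 'y(1) =', y(1)
    end program saxpy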

Intel
Intel Corporation is an American multinational corporation and technology company headquartered in Santa Clara, California, and incorporated in Delaware. Intel designs, manufactures, and sells computer components such as central processing units (CPUs) and related products for business and consumer markets. It is one of the world's largest semiconductor chip manufacturers by revenue and was ranked in the ''Fortune'' 500 list of the largest United States corporations by revenue for nearly a decade, from the 2007 to 2016 fiscal years, until it was removed from the ranking in 2018. In 2020 it was reinstated and ranked 45th, being the 7th-largest technology company in the ranking. It was one of the first companies listed on Nasdaq. Intel supplies ...

Profile-guided Optimization
In computer programming, profile-guided optimization (PGO, sometimes pronounced as ''pogo''), also known as profile-directed feedback (PDF) or feedback-directed optimization (FDO), is the compiler optimization technique of using prior analyses of software artifacts or behaviors ("profiling") to improve the expected runtime performance of the program. Optimization techniques based on static program analysis of the source code consider code performance improvements without actually executing the program; no dynamic program analysis is performed. For example, inferring or placing formal constraints on the number of iterations a loop is likely to execute is useful when deciding whether to unroll it, but such facts typically depend on complex runtime factors that are difficult to establish conclusively. Static analysis usually has incomplete information and can only approximate the eventual runtime conditions. The first high ...
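A small sketch of the usual three-step PGO workflow may help; the program and file name are invented, and the -prof-gen/-prof-use spellings reflect the Intel compilers' historical Linux options, so treat them as assumptions to be checked against current documentation.

    ! count_hits.f90 -- hypothetical example for the PGO workflow sketch.
    ! 1) Instrumented build:   ifort -prof-gen count_hits.f90 -o count_hits
    ! 2) Training run:         ./count_hits        (writes profile data)
    ! 3) Optimized rebuild:    ifort -prof-use count_hits.f90 -o count_hits
    program count_hits
      implicit none
      integer :: i, hits
      hits = 0
      do i = 1, 10000000
        ! The collected profile tells the compiler this branch is taken
        ! roughly 10% of the time, information static analysis cannot
        ! establish reliably; it can then guide code layout and inlining.
        if (mod(i, 10) == 0) hits = hits + 1
      end do
      print *, 'hits =', hits
    end program count_hits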

DWARF
DWARF is a standardized debugging data format used by many compilers and debuggers to support source-level debugging of compiled programs. It is most commonly associated with the ELF object file format on Unix-like systems, although it is not tied to any particular object format. DWARF records the mapping between machine code and the original source, such as source files and line numbers, data types, variables, and call-frame information, organized as a tree of debugging information entries (DIEs). The standard is maintained by the DWARF Debugging Information Format Committee; version 5 of the specification was published in 2017. Compilers such as GCC, Clang, and the Intel compilers emit DWARF debugging information when a program is built with debugging enabled.
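As a minimal sketch, assuming a Unix-like system where the compiler emits DWARF when debugging is requested, the following shows how the format typically enters the picture; the file name is invented and the commands in the comments are conventional rather than taken from Intel's documentation.

    ! hello_debug.f90 -- hypothetical file name for this sketch.
    ! Building with debug information (e.g. "ifort -g hello_debug.f90 -o hello_debug")
    ! typically embeds DWARF sections such as .debug_info and .debug_line in the
    ! ELF binary; "readelf --debug-dump=info hello_debug" or a debugger like gdb
    ! can then map machine code back to these source lines and variables.
    program hello_debug
      implicit none
      integer :: i
      real :: total
      total = 0.0
      do i = 1, 10
        total = total + real(i)
      end do
      print *, 'total =', total
    end program hello_debug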

AVX-512
AVX-512 are 512-bit extensions to the 256-bit Advanced Vector Extensions SIMD instructions for the x86 instruction set architecture (ISA) proposed by Intel in July 2013, and first implemented in the 2016 Intel Xeon Phi x200 (Knights Landing), and later in a number of AMD and other Intel CPUs (see list below). AVX-512 consists of multiple extensions that may be implemented independently. This policy is a departure from the historical requirement of implementing the entire instruction block. Only the core extension AVX-512F (AVX-512 Foundation) is required by all AVX-512 implementations. Besides widening most 256-bit instructions, the extensions introduce various new operations, such as new data conversions, scatter operations, and permutations. The number of AVX registers is increased from 16 to 32, and eight new "mask registers" are added, which allow for variable selection and blending of the results of instructions. In CPUs with the vector length (VL) extension—included in m ...
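A brief sketch of how this surfaces to Fortran users: a reduction loop the compiler can auto-vectorize with 512-bit zmm registers when told to target AVX-512. The file name is invented and the flags shown (-xCORE-AVX512, -qopt-report) follow the Intel compiler's Linux spellings as an assumption; verify them against the installed release.

    ! dot512.f90 -- hypothetical file name for this sketch.
    ! Illustrative compile line:  ifort -O3 -xCORE-AVX512 -qopt-report=2 dot512.f90
    program dot512
      implicit none
      integer, parameter :: dp = kind(1.0d0)
      integer, parameter :: n = 100000
      real(dp) :: a(n), b(n), s
      integer :: i
      a = 1.0_dp
      b = 2.0_dp
      s = 0.0_dp
      ! With AVX-512 enabled the compiler can process eight double-precision
      ! elements per zmm register and use masked operations for the loop tail.
      do i = 1, n
        s = s + a(i) * b(i)
      end do
      print *, 's =', s
    end program dot512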

Microsoft Visual Studio
Visual Studio is an integrated development environment (IDE) developed by Microsoft. It is used to develop computer programs including websites, web apps, web services and mobile apps. Visual Studio uses Microsoft software development platforms including Windows API, Windows Forms, Windows Presentation Foundation (WPF), Microsoft Store and Microsoft Silverlight. It can produce both native code and managed code. Visual Studio includes a code editor supporting IntelliSense (the code completion component) as well as code refactoring. The integrated debugger works as both a source-level debugger and as a machine-level debugger. Other built-in tools include a code profiler, designer for building GUI applications, web designer, class designer, and database schema designer. It accepts plug-ins that expand the functionality at almost every level—including adding support for source control systems (like Subversion and Git) and adding new toolsets like editors and visual des ...

SSE4
SSE4 (Streaming SIMD Extensions 4) is a SIMD CPU instruction set used in the Intel Core microarchitecture and AMD K10 (K8L). It was announced on September 27, 2006, at the Fall 2006 Intel Developer Forum, with vague details in a white paper ("Intel Streaming SIMD Extensions 4 (SSE4) Instruction Set Innovation", Intel); more precise details of 47 instructions became available at the Spring 2007 Intel Developer Forum. SSE4 extended the SSE3 instruction set, which was released in early 2004. All software using previous Intel SIMD instructio ...

Intel Xe
Intel Xe (stylized as Xe and pronounced as two separate letters, an abbreviation of "exascale for everyone"), earlier known unofficially as Gen12, is a GPU architecture developed by Intel. Intel Xe includes a new instruction set architecture. The Xe GPU family consists of a series of microarchitectures, ranging from integrated/low-power (Xe-LP) to enthusiast/high-performance gaming (Xe-HPG), datacenter/high-performance (Xe-HP), and high-performance computing (Xe-HPC). Intel's first attempt at a dedicated graphics card was the Intel740, released in February 1998. The Intel740 was considered unsuccessful due to its performance, which was lower than market expectations, causing Intel to cease development on future discrete graphics products. However, its technology lived on in the Intel Extreme Graphics lineup. Intel made another attempt with the Larrabee architecture before canceling it in 2009; this time, the technology developed was used in the Xeon Phi, which was d ...
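For context with the compiler entry above, here is a minimal sketch of OpenMP target offload of the sort ifx can map onto Xe GPUs; the directives are standard OpenMP, while the file name and the compile line (recent oneAPI documentation describes -fiopenmp with -fopenmp-targets=spir64 on Linux) are assumptions to verify against the release in use.

    ! offload_axpy.f90 -- hypothetical file name for this sketch.
    ! Illustrative compile line:  ifx -fiopenmp -fopenmp-targets=spir64 offload_axpy.f90
    program offload_axpy
      implicit none
      integer, parameter :: n = 100000
      real :: x(n), y(n), a
      integer :: i
      a = 2.0
      x = 1.0
      y = 3.0
      ! Map the arrays to the device, distribute the loop across GPU threads,
      ! and copy y back when the region ends.
      !$omp target teams distribute parallel do map(to: x) map(tofrom: y)
      do i = 1, n
        y(i) = a * x(i) + y(i)
      end do
      !$omp end target teams distribute parallel do
      print *, 'y(n) =', y(n)
    end program offload_axpy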

Intel MIC
Xeon Phi is a discontinued series of x86 manycore processors designed and made by Intel. It was intended for use in supercomputers, servers, and high-end workstations. Its architecture allowed use of standard programming languages and application programming interfaces (APIs) such as OpenMP. Xeon Phi launched in 2010. Since it was originally based on an earlier GPU design (codenamed "Larrabee") by Intel that was cancelled in 2009, it shared application areas with GPUs. The main difference between Xeon Phi and a GPGPU like Nvidia Tesla was that Xeon Phi, with an x86-compatible core, could run software originally targeted to a standard x86 CPU with less modification. Initially offered as PCI Express-based add-on cards, the line gained a second-generation product, codenamed ''Knights Landing'', announced in June 2013; these second-generation chips could be used as a standalone CPU rather than just as an add-in card. In June 2013, the Tianhe-2 supercomputer at the National S ...

AMD64
x86-64 (also known as x64, x86_64, AMD64, and Intel 64) is a 64-bit extension of the x86 instruction set. It was announced in 1999 and first available in the AMD Opteron family in 2003. It introduces two new operating modes: 64-bit mode and compatibility mode, along with a new four-level paging mechanism. In 64-bit mode, x86-64 supports significantly larger amounts of virtual memory and physical memory compared to its 32-bit predecessors, allowing programs to utilize more memory for data storage. The architecture expands the number of general-purpose registers from 8 to 16, all fully general-purpose, and extends their width to 64 bits. Floating-point arithmetic is supported through mandatory SSE2 instructions in 64-bit mode. While the older x87 FPU and MMX registers are still available, they are generally superseded by a set of sixteen 128-bit vector registers (XMM registers). Each of these vector registers can store one or two double-precision floating-point numbers, ...

Loop Distribution
Loop fission (or loop distribution) is a compiler optimization in which a loop is broken into multiple loops over the same index range with each taking only a part of the original loop's body. The goal is to break down a large loop body into smaller ones to achieve better utilization of locality of reference. This optimization is most efficient in multi-core processors that can split a task into multiple tasks for each processor. Conversely, loop fusion (or loop jamming) is a compiler optimization and loop transformation which replaces multiple loops with a single one. Loop fusion does not always improve run-time speed. On some architectures, two loops may actually perform better than one loop because, for example, there is increased data locality within each loop. One of the main benefits of loop fusion is that it allows temporary allocations to be avoided, which can lead to huge performance gains in numerical computing languages such as Julia when doing elementwise operations ...
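A minimal before/after sketch in Fortran (identifiers invented for illustration): the fused loop touches two unrelated arrays on each iteration, while the distributed form streams through one array at a time.

    ! fission_demo.f90 -- hypothetical example of loop fission.
    program fission_demo
      implicit none
      integer, parameter :: n = 100000
      real :: a(n), b(n)
      integer :: i
      a = 0.0
      b = 1.0
      ! Original (fused) form:
      !   do i = 1, n
      !     a(i) = a(i) + 1.0
      !     b(i) = b(i) * 2.0
      !   end do
      ! After loop fission -- two loops over the same index range,
      ! each carrying only part of the original body:
      do i = 1, n
        a(i) = a(i) + 1.0
      end do
      do i = 1, n
        b(i) = b(i) * 2.0
      end do
      print *, a(1), b(1)
    end program fission_demo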

Loop Unrolling
Loop unrolling, also known as loop unwinding, is a loop transformation technique that attempts to optimize a program's execution speed at the expense of its binary size, which is an approach known as space–time tradeoff. The transformation can be undertaken manually by the programmer or by an optimizing compiler. On modern processors, loop unrolling is often counterproductive, as the increased code size can cause more cache misses; ''cf.'' Duff's device. The goal of loop unwinding is to increase a program's speed by reducing or eliminating instructions that control the loop, such as pointer arithmetic and "end of loop" tests on each iteration; reducing branch penalties; as well as hiding latencies, including the delay in reading data from memory. To eliminate this computational overhead, loops can be re-written as a repeated sequence of similar independent statements. Loop unrolling is also part of certain formal verification techniques, in particular bounded model chec ...
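A minimal sketch in Fortran of unrolling a summation loop by a factor of four (identifiers invented for illustration); the trip count is chosen as a multiple of four, so no remainder loop is shown, although a compiler or programmer would normally add one.

    ! unroll_demo.f90 -- hypothetical example of manual loop unrolling.
    program unroll_demo
      implicit none
      integer, parameter :: n = 100000   ! multiple of 4, so no remainder loop
      real :: x(n), s
      integer :: i
      x = 1.0
      s = 0.0
      ! Rolled form: one add plus one loop test per element.
      !   do i = 1, n
      !     s = s + x(i)
      !   end do
      ! Unrolled by 4: loop-control overhead is paid once per four elements.
      do i = 1, n, 4
        s = s + x(i) + x(i+1) + x(i+2) + x(i+3)
      end do
      print *, 's =', s
    end program unroll_demo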