MAJC
MAJC (Microprocessor Architecture for Java Computing) was a Sun Microsystems multi-core, multithreaded, very long instruction word (VLIW) microprocessor design from the mid-to-late 1990s. Originally called the UltraJava processor, the MAJC processor was targeted at running Java programs, whose "late compiling" allowed Sun to make several favourable design decisions. The processor was released in two commercial graphics cards from Sun. Lessons learned regarding multiple threads on a multi-core processor provided a basis for later OpenSPARC implementations such as the UltraSPARC T1.
Design elements
Move instruction scheduling to the compiler
Like other VLIW designs, notably Intel's IA-64 (Itanium), MAJC attempted to improve performance by moving several expensive operations out of the processor and into the related compilers. In general, VLIW designs attempt to eliminate the ''instruction scheduler'', which often represents a relatively large amount of the overall processor's ...



SIMD
Single instruction, multiple data (SIMD) is a type of parallel processing in Flynn's taxonomy. SIMD describes computers with multiple processing elements that perform the same operation on multiple data points simultaneously. SIMD can be internal (part of the hardware design) and it can be directly accessible through an instruction set architecture (ISA), but it should not be confused with an ISA. Such machines exploit data-level parallelism, but not concurrency: there are simultaneous (parallel) computations, but each unit performs exactly the same instruction at any given moment (just with different data). A simple example is adding many pairs of numbers together: all of the SIMD units perform an addition, but each one has a different pair of values to add. SIMD is particularly applicable to common tasks such as adjusting the contrast in a digital image or adjusting the volume of digital audio. Most modern CPU ...
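A minimal Java sketch of the pairwise-addition example above, assuming a JDK (16 or later) that ships the incubator Vector API; the class and species names are from that API, everything else is illustrative:

// SIMD-style pairwise addition using the JDK's incubator Vector API
// (compile and run with --add-modules jdk.incubator.vector).
import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorSpecies;

public class SimdAdd {
    // Preferred lane count for this platform, e.g. 8 floats on AVX2 hardware.
    private static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

    static void add(float[] a, float[] b, float[] c) {
        int i = 0;
        int upper = SPECIES.loopBound(a.length);
        // Each iteration issues one add over SPECIES.length() lanes:
        // the same operation, applied to different data.
        for (; i < upper; i += SPECIES.length()) {
            FloatVector va = FloatVector.fromArray(SPECIES, a, i);
            FloatVector vb = FloatVector.fromArray(SPECIES, b, i);
            va.add(vb).intoArray(c, i);
        }
        // Scalar tail for elements that do not fill a whole vector.
        for (; i < a.length; i++) {
            c[i] = a[i] + b[i];
        }
    }

    public static void main(String[] args) {
        float[] a = {1, 2, 3, 4, 5, 6, 7, 8, 9};
        float[] b = {9, 8, 7, 6, 5, 4, 3, 2, 1};
        float[] c = new float[a.length];
        add(a, b, c);
        System.out.println(java.util.Arrays.toString(c)); // every element is 10.0
    }
}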


PicoJava
picoJava is a microprocessor specification dedicated to native execution of Java bytecode without the need for an interpreter or just-in-time compilation. The aim is to speed bytecode execution up by up to 20 times compared to a standard Intel CPU running a Java virtual machine. The GNU Compiler Collection added picoJava support in 1999 as machine definition 'pj'. The open-source version of picoJava has been implemented in an FPGA.
See also
* Jazelle
* MAJC
References
* McGhan, Harlan; O'Connor, Mike (October 1998). "PicoJava: A Direct Execution Engine For Java Bytecode". ''Computer'', Volume 31, Issue 10, pp. 22–30.
* O'Connor, J. Michael; Tremblay, Marc (March/April 1997). "picoJava-I: The Java Virtual Machine in Hardware". ''IEEE Micro'', Volume 17, Issue 2, pp. 45–53.
* Hangal, Sudheendra; O'Connor, J. Michael (May/June 1999). "Performance Analysis and Validation of the picoJava Processor". ''IEEE Micro'' ...


Instruction Packet
In computing, an opcode (abbreviated from operation code) is an enumerated value that specifies the operation to be performed. Opcodes are employed in hardware devices such as arithmetic logic units (ALUs), central processing units (CPUs), and software instruction sets. In ALUs, the opcode is directly applied to circuitry via an input signal bus. In contrast, in CPUs, the opcode is the portion of a machine language instruction that specifies the operation to be performed.
CPUs
Opcodes are found in the machine language instructions of CPUs as well as in some abstract computing machines. In CPUs, an opcode may be referred to as an instruction machine code, instruction code, instruction syllable, instruction parcel, or opstring. For any particular processor (which may be a general CPU or a more specialized processing unit), the opcodes are defined by the processor's instruction set architecture (ISA). They can be described using an opcode table. The types of operations may include ...
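A toy sketch of how a decoder extracts the opcode field from a machine-language instruction word and dispatches on it. The 16-bit encoding and the opcode values here are hypothetical, not taken from any real ISA:

// Toy decoder for a hypothetical 16-bit instruction word laid out as
// [4-bit opcode | 4-bit dest register | 4-bit src1 | 4-bit src2].
public class OpcodeDecode {
    static final int OP_ADD  = 0x1;   // hypothetical opcode values
    static final int OP_SUB  = 0x2;
    static final int OP_LOAD = 0x3;

    static String decode(int instruction) {
        int opcode = (instruction >>> 12) & 0xF; // opcode is the top 4 bits
        int rd  = (instruction >>> 8) & 0xF;
        int rs1 = (instruction >>> 4) & 0xF;
        int rs2 = instruction & 0xF;
        switch (opcode) {                        // dispatch on the opcode
            case OP_ADD:  return "add r" + rd + ", r" + rs1 + ", r" + rs2;
            case OP_SUB:  return "sub r" + rd + ", r" + rs1 + ", r" + rs2;
            case OP_LOAD: return "load r" + rd + ", [r" + rs1 + "]";
            default:      return "unknown opcode " + opcode;
        }
    }

    public static void main(String[] args) {
        System.out.println(decode(0x1234)); // prints: add r2, r3, r4
    }
}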


Microprocessor Report
''Microprocessor Report'' is a newsletter covering the microprocessor industry. The publication is accessible only to paying subscribers. To avoid bias, it does not carry advertisements. The publication provides extensive analysis of new high-performance microprocessor chips. It also covers microprocessor design issues, microprocessor-based systems, memory and system logic chips, embedded processors, GPUs, DSPs, and intellectual property (IP) cores.
History and profile
''Microprocessor Report'' was first published in 1987 by Michael Slater. Original board members included Bruce Koball, George Morrow, Brian Case, John Wakerly, Nick Tredennick, Bernard Peuto, Rich Belgard, Dennis Allison, and J. H. Wharton, all of whom served for many years. Slater's company, MicroDesign Resources (MDR), based in Sebastopol, California, originally published ''Microprocessor Report''; Slater left MDR at the end of 1999. MDR also hosted an annual conference, the Microprocessor Forum, and r ...




Thread-level Parallelism
Task parallelism (also known as function parallelism and control parallelism) is a form of parallelization of computer code across multiple processors in parallel computing environments. Task parallelism focuses on distributing tasks, concurrently performed by processes or threads, across different processors. In contrast to data parallelism, which involves running the same task on different components of data, task parallelism is distinguished by running many different tasks at the same time on the same data. A common type of task parallelism is pipelining, which consists of moving a single set of data through a series of separate tasks where each task can execute independently of the others.
Description
In a multiprocessor system, task parallelism is achieved when each processor executes a different thread (or process) on the same or different data. The threads may execute the same or different code. In the general case, different execution threads communicate with one a ...
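A minimal sketch of task parallelism in Java: two different tasks run concurrently over the same data, one computing the sum and the other the maximum. The class name and task split are illustrative:

// Two different tasks executed in parallel over the same data set.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class TaskParallel {
    public static void main(String[] args) throws Exception {
        int[] data = {5, 3, 9, 1, 7, 4};
        ExecutorService pool = Executors.newFixedThreadPool(2);

        Future<Integer> sum = pool.submit(() -> {      // task 1: sum
            int s = 0;
            for (int v : data) s += v;
            return s;
        });
        Future<Integer> max = pool.submit(() -> {      // task 2: maximum
            int m = Integer.MIN_VALUE;
            for (int v : data) m = Math.max(m, v);
            return m;
        });

        System.out.println("sum = " + sum.get() + ", max = " + max.get());
        pool.shutdown();
    }
}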



Workstation
A workstation is a special computer designed for technical or scientific applications. Intended primarily for use by a single user, workstations are commonly connected to a local area network and run multi-user operating systems. The term ''workstation'' has been used loosely to refer to everything from a mainframe computer terminal to a PC connected to a network, but the most common form refers to the class of hardware offered by several current and defunct companies such as Sun Microsystems, Silicon Graphics, Apollo Computer, DEC, HP, NeXT, and IBM which powered the 3D computer graphics revolution of the late 1990s. Workstations formerly offered higher performance than mainstream personal computers, especially in CPU, graphics, memory, and multitasking. Workstations are optimized for the visualization and ma ...


Speculative Multithreading
Thread-level speculation (TLS), also known as speculative multithreading or speculative parallelization, is a technique to speculatively execute a section of computer code that is anticipated to be executed later, in parallel with the normal execution, on a separate independent thread. Such a speculative thread may need to make assumptions about the values of input variables. If these prove to be invalid, then the portions of the speculative thread that rely on these input variables will need to be discarded and squashed. If the assumptions are correct, the program can complete in a shorter time, provided the thread was able to be scheduled efficiently.
Description
TLS extracts threads from serial code and executes them speculatively in parallel with a safe thread. The speculative thread will need to be discarded or re-run if its assumptions about the input state prove to be invalid. It is a dynamic (runtime) parallelization technique ...
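A toy software analogy of the squash-and-redo behaviour described above, not how hardware TLS is actually implemented: a speculative task guesses its input value and runs ahead; if the guess turns out to be wrong, its result is discarded and the work is redone with the real value. The method names (laterSection, computeActualInput) are hypothetical stand-ins:

// Speculative thread with value prediction, validation, and squash.
import java.util.concurrent.CompletableFuture;

public class SpeculativeThread {
    // Stand-in for the later code section; imagine this is expensive.
    static long laterSection(long input) {
        long acc = input;
        for (int i = 0; i < 1_000_000; i++) acc = acc * 31 + i;
        return acc;
    }

    static long computeActualInput() {
        return 42;                                      // here it happens to match the guess
    }

    public static void main(String[] args) throws Exception {
        long predictedInput = 42;                       // assumed value of the input variable
        CompletableFuture<Long> speculative =
                CompletableFuture.supplyAsync(() -> laterSection(predictedInput));

        // "Normal execution" proceeds meanwhile and eventually produces
        // the real input of the later section.
        long actualInput = computeActualInput();

        long result;
        if (actualInput == predictedInput) {
            result = speculative.get();                 // speculation succeeded, keep the result
        } else {
            speculative.cancel(true);                   // squash the speculative work
            result = laterSection(actualInput);         // re-execute with the real input
        }
        System.out.println("result = " + result);
    }
}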


Speculative Execution
Speculative execution is an optimization technique where a computer system performs some task that may not be needed. Work is done before it is known whether it is actually needed, so as to prevent a delay that would have to be incurred by doing the work after it is known that it is needed. If it turns out the work was not needed after all, most changes made by the work are reverted and the results are ignored. The objective is to provide more concurrency if extra resources are available. This approach is employed in a variety of areas, including branch prediction in pipelined processors, value prediction for exploiting value locality, prefetching memory and files, and optimistic concurrency control in database systems. Speculative multithreading is a special case of specu ...
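As a small illustration of one of the areas listed above, here is an in-memory analogue of optimistic concurrency control: each updater works on the assumption that no one else is modifying the value, then validates at "commit" time with a compare-and-set and retries (discarding its result) if the assumption failed. The bank-balance scenario and class name are illustrative:

// Optimistic, lock-free update: compute speculatively, validate, retry on conflict.
import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticCounter {
    private final AtomicInteger balance = new AtomicInteger(100);

    void deposit(int amount) {
        while (true) {
            int observed = balance.get();          // read without locking
            int updated = observed + amount;       // speculative computation
            if (balance.compareAndSet(observed, updated)) {
                return;                            // validation succeeded, commit
            }
            // Another thread changed the value: the result is ignored and we retry.
        }
    }

    public static void main(String[] args) throws InterruptedException {
        OptimisticCounter c = new OptimisticCounter();
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) c.deposit(1); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1000; i++) c.deposit(1); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.balance.get()); // 2100
    }
}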


Context Switch
In computing, a context switch is the process of storing the state of a process or thread, so that it can be restored and resume execution at a later point, and then restoring a different, previously saved, state. This allows multiple processes to share a single central processing unit (CPU), and is an essential feature of a multiprogramming or multitasking operating system. In a traditional CPU, each process – a program in execution – uses the various CPU registers to store data and hold the current state of the running process. However, in a multitasking operating system, the operating system switches between processes or threads to allow the execution of multiple processes simultaneously. For every switch, the operating system must save the state of the currently running process and then load the saved state of the next process, which will run on the CPU. This sequence of operations that stores the state of the running process and loads the following running process is called a ...
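A simplified software analogy of the save-state/load-state sequence, not an actual kernel implementation: a toy round-robin loop saves the state of the current task and restores the state of the next one before running it, where real operating systems would save and restore CPU registers, the program counter, and memory-management state:

// Toy round-robin "scheduler" illustrating per-task state save and restore.
import java.util.ArrayDeque;
import java.util.Deque;

public class ToyContextSwitch {
    // The "context" each task needs in order to resume: here just a counter.
    static final class TaskContext {
        final String name;
        int programCounter;                          // stand-in for saved register state
        TaskContext(String name) { this.name = name; }
    }

    public static void main(String[] args) {
        Deque<TaskContext> runQueue = new ArrayDeque<>();
        runQueue.add(new TaskContext("A"));
        runQueue.add(new TaskContext("B"));

        for (int slice = 0; slice < 4; slice++) {
            TaskContext current = runQueue.poll();   // restore the next task's saved state
            for (int step = 0; step < 2; step++) {   // "run" it for one time slice
                System.out.println(current.name + " executing step " + current.programCounter);
                current.programCounter++;            // state advances while running
            }
            runQueue.add(current);                   // save its state for later resumption
        }
    }
}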


Bytecode
Bytecode (also called portable code or p-code) is a form of instruction set designed for efficient execution by a software interpreter. Unlike human-readable source code, bytecodes are compact numeric codes, constants, and references (normally numeric addresses) that encode the result of compiler parsing and semantic analysis of things like type, scope, and nesting depth of program objects. The name ''bytecode'' stems from instruction sets that have one-byte opcodes followed by optional parameters. Intermediate representations such as bytecode may be output by programming language implementations to ease interpretation, or they may be used to reduce hardware and operating system dependence by allowing the same code to run cross-platform on different devices. Bytecode may be directly executed on a virtual machine (a p-code machine, i.e. an interpreter), or it may be further compiled into machine code for better performance. Since bytecode instruct ...
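A minimal sketch of the interpreter pattern described above: a toy stack-based virtual machine whose instruction set has one-byte opcodes, some followed by a one-byte parameter. The opcodes are invented for illustration and are not actual Java (JVM) bytecode:

// Toy bytecode interpreter with one-byte opcodes and an operand stack.
public class ToyBytecodeVm {
    static final byte PUSH  = 0x01;  // push the next byte as a constant
    static final byte ADD   = 0x02;  // pop two values, push their sum
    static final byte MUL   = 0x03;  // pop two values, push their product
    static final byte PRINT = 0x04;  // pop and print the top of stack

    static void run(byte[] code) {
        int[] stack = new int[16];
        int sp = 0;                  // stack pointer
        int pc = 0;                  // program counter
        while (pc < code.length) {
            byte opcode = code[pc++];
            switch (opcode) {
                case PUSH:  stack[sp++] = code[pc++]; break;
                case ADD:   stack[sp - 2] += stack[sp - 1]; sp--; break;
                case MUL:   stack[sp - 2] *= stack[sp - 1]; sp--; break;
                case PRINT: System.out.println(stack[--sp]); break;
                default:    throw new IllegalStateException("unknown opcode " + opcode);
            }
        }
    }

    public static void main(String[] args) {
        // Computes (2 + 3) * 4 and prints 20.
        run(new byte[] {PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, PRINT});
    }
}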