High-level Language Computer Architecture
A high-level language computer architecture (HLLCA) is a computer architecture designed to be targeted by a specific high-level programming language (HLL), rather than the architecture being dictated solely by hardware considerations. It is accordingly also termed language-directed computer design, a term coined and primarily used in the 1960s and 1970s. HLLCAs were popular in the 1960s and 1970s, but largely disappeared in the 1980s, following the dramatic failure of the Intel 432 (1981), the emergence of optimizing compilers, reduced instruction set computer (RISC) architectures and RISC-like complex instruction set computer (CISC) architectures, and the later development of just-in-time compilation (JIT) for HLLs. A detailed survey and critique can be found in the literature. HLLCAs date almost to the beginning of HLLs, with the Burroughs large systems (1961), which were designed for ALGOL 60 (1960), one of the first HLLs. The best-known HLLCAs may be the Lisp machines of the 1970s and 1980s, ...


Tokenization (Lexical Analysis)
Lexical tokenization is the conversion of a text into (semantically or syntactically) meaningful ''lexical tokens'' belonging to categories defined by a "lexer" program. In the case of a natural language, those categories include nouns, verbs, adjectives, punctuation, etc. In the case of a programming language, the categories include identifiers, operators, grouping symbols, data types and language keywords. Lexical tokenization is related to the type of tokenization used in large language models (LLMs), but with two differences. First, lexical tokenization is usually based on a lexical grammar, whereas LLM tokenizers are usually probability-based. Second, LLM tokenizers perform a second step that converts the tokens into numerical values. Rule-based programs: a rule-based program that performs lexical tokenization is called a ''tokenizer'' or ''scanner'', although ''scanner'' is also a term for the first stage of a lexer. A lexer forms the first phase of a compiler frontend in processing. ...
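As a rough illustration of rule-based lexical tokenization, the Python sketch below splits a tiny expression into categories such as identifiers, numbers, operators, and grouping symbols using a regular-expression lexical grammar. The token names and the toy grammar are invented for this example and are not taken from any particular lexer.

```python
import re

# Toy lexical grammar: each (name, pattern) pair defines a token category.
# The category names and patterns are illustrative only.
TOKEN_SPEC = [
    ("NUMBER",   r"\d+(?:\.\d+)?"),
    ("IDENT",    r"[A-Za-z_]\w*"),
    ("OP",       r"[+\-*/=]"),
    ("LPAREN",   r"\("),
    ("RPAREN",   r"\)"),
    ("SKIP",     r"[ \t]+"),      # whitespace is discarded
    ("MISMATCH", r"."),           # any other character is an error
]
MASTER_RE = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(text):
    """Yield (category, lexeme) pairs for the input string."""
    for m in MASTER_RE.finditer(text):
        kind, value = m.lastgroup, m.group()
        if kind == "SKIP":
            continue
        if kind == "MISMATCH":
            raise SyntaxError(f"unexpected character {value!r}")
        yield (kind, value)

print(list(tokenize("total = price * (1 + rate)")))
# [('IDENT', 'total'), ('OP', '='), ('IDENT', 'price'), ('OP', '*'),
#  ('LPAREN', '('), ('NUMBER', '1'), ('OP', '+'), ('IDENT', 'rate'), ('RPAREN', ')')]
```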


Burroughs Medium Systems
The Burroughs B2500 through Burroughs B4900 was a series of mainframe computers developed and manufactured by Burroughs Corporation in Pasadena, California, United States, from 1966 to 1991. They were aimed at the business world, with an instruction set optimized for the COBOL programming language. They were also known as Burroughs Medium Systems, by contrast with the Burroughs Large Systems and Burroughs Small Systems. History and architecture, first generation: the B2500 and B3500 computers were announced in 1966. They operated directly on COBOL-68's primary decimal data types: strings of up to 100 digits, with one EBCDIC or ASCII digit character or two 4-bit binary-coded decimal (BCD) digits per byte. Portable COBOL programs did not use binary integers at all, so the B2500 did not either, not even for memory addresses. Memory was addressed down to the 4-bit digit in big-endian style, using 5-digit decimal addresses. Floating point numbers also used base 10 rather than some b ...
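To make the packed-decimal idea concrete, here is a small Python sketch that packs a string of decimal digits into 4-bit BCD digits, two per byte, big-endian. It only illustrates the general BCD encoding described above; it is not a reproduction of the actual Burroughs Medium Systems memory format, which also supports one digit character per byte and sign handling.

```python
def pack_bcd(number: str) -> bytes:
    """Pack a string of decimal digits into BCD, two 4-bit digits per byte.

    Illustrative only; not the real Burroughs data layout.
    """
    digits = [int(d) for d in number]
    if len(digits) % 2:                 # pad on the left to fill whole bytes
        digits.insert(0, 0)
    return bytes((hi << 4) | lo for hi, lo in zip(digits[0::2], digits[1::2]))

print(pack_bcd("1991").hex())   # '1991' -> bytes 0x19 0x91
```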


Burroughs Large Systems
The Burroughs Large Systems Group produced a family of large 48-bit mainframes using stack machine instruction sets with dense syllables (e.g., 12-bit syllables for the B5000, 8-bit syllables for the B6500). The first machine in the family was the B5000 in 1961, which was optimized for compiling ALGOL 60 programs extremely well using single-pass compilers. The B5000 evolved into the B5500 (disk rather than drum) and the B5700 (up to four systems running as a cluster). Subsequent major redesigns include the B6500/B6700 line and its successors, as well as the separate B8500 line. In the 1970s, the Burroughs Corporation was organized into three divisions with very different product line architectures for high-end, mid-range, and entry-level business computer systems. Each division's product line grew from a different concept for how to optimize a computer's instruction set for particular programming languages. "Burroughs Large Systems" ...




Pascal MicroEngine
Pascal MicroEngine is a series of microcomputer products manufactured by Western Digital from 1979 through the mid-1980s, designed specifically to run the UCSD p-System efficiently. Unlike other microcomputers, which use a p-code interpreter written in machine language, the Pascal MicroEngine has its interpreter implemented in microcode; p-code is its machine language. The most common programming language used on the p-System is Pascal. The MicroEngine runs a special release III p-System. The enhancements of release III were incorporated into release IV, which was made publicly available for other platforms but not for the MicroEngine. Products: the MicroEngine series of products was offered at various levels of integration:
* WD-9000 five-chip microprocessor chip set
* WD-900 single-board computer
* WD-90 packaged system
* SB-1600 MicroEngine single-board computer
* ME-1600 Modular MicroEngine packaged system
The MicroEngine chipset was based on the MCP-1600 chipset, which formed the ...



Compiler
In computing, a compiler is a computer program that translates computer code written in one programming language (the ''source'' language) into another language (the ''target'' language). The name "compiler" is primarily used for programs that translate source code from a high-level programming language to a low-level programming language (e.g., assembly language, object code, or machine code) to create an executable program (Compilers: Principles, Techniques, and Tools by Alfred V. Aho, Ravi Sethi, Jeffrey D. Ullman, second edition, 2007). There are many different types of compilers which produce output in different useful forms. A ''cross-compiler'' produces code for a different CPU or operating system than the one on which the cross-compiler itself runs. A ''bootstrap compiler'' is often a temporary compiler, used for compiling a more permanent or better optimised compiler for a language. Related software ...


Assembly Language
In computing, assembly language (alternatively assembler language or symbolic machine code), often referred to simply as assembly and commonly abbreviated as ASM or asm, is any low-level programming language with a very strong correspondence between the instructions in the language and the architecture's machine code instructions. Assembly language usually has one statement per machine instruction (1:1), but constants, comments, assembler directives, symbolic labels of, e.g., memory locations, registers, and macros are generally also supported. The first assembly code in which a language is used to represent machine code instructions is found in Kathleen and Andrew Donald Booth's 1947 work, ''Coding for A.R.C.''. Assembly code is converted into executable machine code by a utility program referred to as an ''assembler''. The term "assembler" is generally attributed to Wilkes, Wheeler and Gill in their 1951 book ''The Preparation of Programs for an Electronic Dig ...



Machine Code
In computer programming, machine code is computer code consisting of machine language instructions, which are used to control a computer's central processing unit (CPU). For conventional binary computers, machine code is the binary representation of a computer program that is actually read and interpreted by the computer (on nonbinary machines it is, e.g., a decimal representation). A program in machine code consists of a sequence of machine instructions (possibly interspersed with data). Each machine code instruction causes the CPU to perform a specific task. Examples of such tasks include:
# Load a word from memory to a CPU register
# Execute an arithmetic logic unit (ALU) operation on one or more registers or memory locations
# Jump or skip to an instruction that is not the next one
In general, each architecture family (e.g., x86, ARM) has its own instruction set architecture (ISA), and hence its own specific machine code language. There are exceptions, such as the ...
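The three example tasks above (load, ALU operation, jump) can be mimicked with a toy register machine in Python. The instruction names and operand layout are invented for this sketch and do not correspond to any real ISA.

```python
# Toy register machine illustrating load, ALU operation, and conditional jump.
def run(program, memory):
    regs = [0] * 4          # four general-purpose registers
    pc = 0                  # program counter
    while pc < len(program):
        op, a, b, c = program[pc]
        pc += 1
        if op == "LOAD":        # load a word from memory into a register
            regs[a] = memory[b]
        elif op == "ADD":       # ALU operation on two registers
            regs[a] = regs[b] + regs[c]
        elif op == "JNZ":       # jump to instruction b if register a is non-zero
            if regs[a] != 0:
                pc = b
    return regs

memory = [5, 7]
program = [
    ("LOAD", 0, 0, 0),   # r0 <- memory[0]
    ("LOAD", 1, 1, 0),   # r1 <- memory[1]
    ("ADD",  2, 0, 1),   # r2 <- r0 + r1
]
print(run(program, memory))   # [5, 7, 12, 0]
```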


Bytecode
Bytecode (also called portable code or p-code) is a form of instruction set designed for efficient execution by a software interpreter. Unlike human-readable source code, bytecodes are compact numeric codes, constants, and references (normally numeric addresses) that encode the result of a compiler's parsing and semantic analysis of things like the types, scopes, and nesting depths of program objects. The name ''bytecode'' stems from instruction sets that have one-byte opcodes followed by optional parameters. Intermediate representations such as bytecode may be output by programming language implementations to ease interpretation, or may be used to reduce hardware and operating system dependence by allowing the same code to run cross-platform on different devices. Bytecode may often be either directly executed on a virtual machine (a p-code machine, i.e., interpreter), or it may be further compiled into machine code for better performance. Since bytecode instruct ...
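As a sketch of the "one-byte opcode followed by optional parameters" layout, here is a minimal stack-based bytecode interpreter in Python. The opcode values and the tiny instruction set are invented for this example, not taken from any real virtual machine.

```python
# Minimal stack-based bytecode interpreter; opcodes are invented for the sketch.
PUSH, ADD, MUL, PRINT, HALT = 0x01, 0x02, 0x03, 0x04, 0x00

def execute(code: bytes):
    stack, pc = [], 0
    while True:
        op = code[pc]; pc += 1
        if op == PUSH:                     # one-byte opcode + one-byte operand
            stack.append(code[pc]); pc += 1
        elif op == ADD:
            b, a = stack.pop(), stack.pop(); stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop(); stack.append(a * b)
        elif op == PRINT:
            print(stack[-1])
        elif op == HALT:
            return

# Bytecode for (2 + 3) * 4
execute(bytes([PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, PRINT, HALT]))   # prints 20
```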



Reverse Polish Notation
Reverse Polish notation (RPN), also known as reverse Łukasiewicz notation, Polish postfix notation or simply postfix notation, is a mathematical notation in which operators ''follow'' their operands, in contrast to prefix or Polish notation (PN), in which operators ''precede'' their operands. The notation does not need any parentheses as long as each operator has a fixed number of operands. The term ''postfix notation'' describes the general scheme in mathematics and computer science, whereas the term ''reverse Polish notation'' typically refers specifically to the method used to enter calculations into hardware or software calculators, which often have additional side effects and implications depending on the actual implementation involving a stack. The description "Polish" refers to the nationality of logician Jan Łukasiewicz, who invented Polish notation in 1924. The first computer to use postfix notation, though it long remained essentially unknown outside of G ...
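Below is a minimal Python sketch of the usual stack-based evaluation of a postfix expression, assuming whitespace-separated tokens and only the four basic arithmetic operators.

```python
def eval_rpn(expression: str) -> float:
    """Evaluate a whitespace-separated reverse Polish notation expression."""
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    stack = []
    for token in expression.split():
        if token in ops:
            b = stack.pop()          # operands come off the stack in reverse order
            a = stack.pop()
            stack.append(ops[token](a, b))
        else:
            stack.append(float(token))
    return stack.pop()

# "3 4 + 2 *" is the postfix form of the infix expression (3 + 4) * 2
print(eval_rpn("3 4 + 2 *"))   # 14.0
```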



Substring
In formal language theory and computer science, a substring is a contiguous sequence of characters within a string. For instance, "''the best of''" is a substring of "''It was the best of times''". In contrast, "''Itwastimes''" is a subsequence of "''It was the best of times''", but not a substring. Prefixes and suffixes are special cases of substrings. A prefix of a string S is a substring of S that occurs at the beginning of S; likewise, a suffix of a string S is a substring that occurs at the end of S. The substrings of the string "''apple''" are: "''a''", "''ap''", "''app''", "''appl''", "''apple''", "''p''", "''pp''", "''ppl''", "''pple''", "''pl''", "''ple''", "''l''", "''le''", "''e''", "" (note the empty string at the end). Substring: a string u is a substring (or factor) of a string t if there exist two strings p and s such that t = pus. In particular, the empty string is a substring of every string. Example: The string u=ana is equal to substrings (and subs ...
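The following Python sketch illustrates the definition above: u is a substring of t exactly when t can be written as p + u + s, and enumerating all substrings of "apple" reproduces the list given in the text (Python's built-in `u in t` test does the same membership check natively).

```python
def is_substring(u: str, t: str) -> bool:
    """u is a substring of t iff t = p + u + s for some strings p and s."""
    return any(t[i:i + len(u)] == u for i in range(len(t) - len(u) + 1))

def substrings(t: str):
    """All distinct substrings of t, including the empty string."""
    result = {""}
    for i in range(len(t)):
        for j in range(i + 1, len(t) + 1):
            result.add(t[i:j])
    return result

print(is_substring("the best of", "It was the best of times"))   # True
print(is_substring("Itwastimes", "It was the best of times"))    # False: a subsequence, not a substring
print(sorted(substrings("apple"), key=lambda s: (len(s), s)))
# 15 distinct substrings, matching the list for "apple" above
```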



Infix Notation
Infix notation is the notation commonly used in arithmetical and logical formulae and statements. It is characterized by the placement of operators between operands—"infixed operators"—such as the plus sign in 2 + 2. Usage: binary relations are often denoted by an infix symbol, such as set membership ''a'' ∈ ''A'' when the set ''A'' has ''a'' for an element. In geometry, perpendicular lines ''a'' and ''b'' are denoted a \perp b, and in projective geometry two points ''b'' and ''c'' are in perspective when b \doublebarwedge c, while they are connected by a projectivity when b \barwedge c. Infix notation is more difficult for computers to parse than prefix notation (e.g. + 2 2) or postfix notation (e.g. 2 2 +). However, many programming languages use it due to its familiarity. It is more commonly used in arithmetic, e.g. 5 × 6. Further notations: infix notation may also be distinguished from function notation, where the name of a function suggests a particular operation, a ...
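To illustrate the extra work infix parsing requires compared with postfix, here is a small Python sketch of Dijkstra's shunting-yard algorithm converting an infix token list to the postfix (RPN) form discussed earlier. It handles only left-associative binary + - * / and parentheses, which is an assumption of the example.

```python
def infix_to_postfix(tokens):
    """Convert infix tokens to postfix (RPN) with the shunting-yard algorithm.

    Handles only left-associative binary + - * / and parentheses.
    """
    prec = {"+": 1, "-": 1, "*": 2, "/": 2}
    output, stack = [], []
    for tok in tokens:
        if tok in prec:
            # pop operators of higher or equal precedence before pushing
            while stack and stack[-1] in prec and prec[stack[-1]] >= prec[tok]:
                output.append(stack.pop())
            stack.append(tok)
        elif tok == "(":
            stack.append(tok)
        elif tok == ")":
            while stack[-1] != "(":
                output.append(stack.pop())
            stack.pop()                      # discard the "("
        else:                                # operand
            output.append(tok)
    while stack:
        output.append(stack.pop())
    return output

print(infix_to_postfix("5 * ( 6 + 2 )".split()))   # ['5', '6', '2', '+', '*']
```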