NIL (programming language)
New Implementation of LISP (NIL) is a programming language, a dialect of Lisp, developed at the Massachusetts Institute of Technology (MIT) during the 1970s and intended to be the successor to Maclisp. It is a 32-bit implementation, and was in part a response to Digital Equipment Corporation's (DEC) VAX computer. The project was headed by Jon L White, with a stated goal of maintaining compatibility with Maclisp while fixing many of its problems.

History

The Lisp language was invented in 1958 by John McCarthy while he was at MIT. From its inception, Lisp was closely connected with the artificial intelligence (AI) research community, especially on PDP-10 systems. The 36-bit word size of the PDP-6 and PDP-10 was influenced by the usefulness of having two Lisp 18-bit pointers in one word: "The PDP-6 project started in early 1963, as a 24-bit machine. It grew to 36 bits for LISP, a design goal." Li ...
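The design point quoted above, that a Lisp cons cell is a pair of pointers so a single 36-bit word can hold an 18-bit car and an 18-bit cdr side by side, can be sketched as simple bit packing. The C sketch below is illustrative only: the helper names are made up and the 36-bit word is modelled in a 64-bit host integer, so it does not describe NIL's or any PDP-10 Lisp's actual representation.

#include <stdio.h>
#include <stdint.h>

/* Toy cons cell: two 18-bit "pointers" packed into one 36-bit word
   (held here in a 64-bit host integer). Purely illustrative. */

#define PTR_BITS 18u
#define PTR_MASK ((1u << PTR_BITS) - 1u)          /* 0x3FFFF: low 18 bits */

static uint64_t make_cons(uint32_t car, uint32_t cdr) {
    return ((uint64_t)(car & PTR_MASK) << PTR_BITS) | (cdr & PTR_MASK);
}

static unsigned cons_car(uint64_t cell) { return (unsigned)((cell >> PTR_BITS) & PTR_MASK); }
static unsigned cons_cdr(uint64_t cell) { return (unsigned)(cell & PTR_MASK); }

int main(void) {
    uint64_t cell = make_cons(0123456, 0654321);  /* two 18-bit "addresses" in octal */
    printf("car = %o, cdr = %o\n", cons_car(cell), cons_cdr(cell));
    return 0;
}

Packing both pointers into a single word means one cons cell occupies exactly one machine word, which is the efficiency argument behind the quotation.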
Multi-paradigm Programming Language
Programming languages can be grouped by the number and types of paradigms supported.

Paradigm summaries

A concise reference for the programming paradigms listed in this article:

* Concurrent programming – has language constructs for concurrency; these may involve multi-threading, support for distributed computing, message passing, shared resources (including shared memory), or futures
** Actor programming – concurrent computation with ''actors'' that make local decisions in response to the environment (capable of selfish or competitive behaviour)
* Constraint programming – relations between variables are expressed as constraints (or constraint networks), directing allowable solutions (uses constraint satisfaction or the simplex algorithm)
* Dataflow programming – forced recalculation of formulas when data values change (e.g. spreadsheets; see the sketch after this list)
* Declarative programming – describes ...
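As a rough illustration of the dataflow idea above, the following C sketch (hypothetical names, a deliberately tiny "spreadsheet" of two inputs and two formulas) recomputes the derived cells whenever an input cell is written, so the formulas always reflect the current data.

#include <stdio.h>

/* Minimal dataflow-style sketch: changing an input triggers recalculation
   of the cells that depend on it, like a spreadsheet. Not taken from any
   particular dataflow language. */

typedef struct {
    double a, b;      /* input cells */
    double sum;       /* derived cell: a + b */
    double product;   /* derived cell: a * b */
} Sheet;

/* Recompute every derived cell from the current inputs. */
static void recalculate(Sheet *s) {
    s->sum = s->a + s->b;
    s->product = s->a * s->b;
}

/* Setting an input re-evaluates the formulas that depend on it. */
static void set_a(Sheet *s, double value) { s->a = value; recalculate(s); }
static void set_b(Sheet *s, double value) { s->b = value; recalculate(s); }

int main(void) {
    Sheet s = {0};
    set_a(&s, 3.0);
    set_b(&s, 4.0);
    printf("sum=%g product=%g\n", s.sum, s.product);   /* sum=7 product=12 */
    set_a(&s, 10.0);                                   /* change propagates */
    printf("sum=%g product=%g\n", s.sum, s.product);   /* sum=14 product=40 */
    return 0;
}

A real dataflow system would track the dependency graph and recompute only the affected cells; here every formula is simply re-evaluated on any change, which is enough to show the propagation behaviour.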
MIT Press
The MIT Press is the university press of the Massachusetts Institute of Technology (MIT), a private research university in Cambridge, Massachusetts. The MIT Press publishes a number of academic journals and has been a pioneer in the Open Access movement in academic publishing.

History

MIT Press traces its origins back to 1926, when MIT published a lecture series entitled ''Problems of Atomic Dynamics'' given by the visiting German physicist and later Nobel Prize winner Max Born. In 1932, MIT's publishing operations were first formally instituted by the creation of an imprint called Technology Press. This imprint was founded by James R. Killian, Jr., at the time editor of MIT's alumni magazine and later to become MIT president. Technology Press published eight titles independently, then in 1937 entered into an arrangement with John Wiley & Sons in which Wiley took over marketing and editorial responsibilities. In 1961, the centennial of MIT's founding charter, the ...
SHRDLU
SHRDLU is an early natural-language understanding computer program that was developed by Terry Winograd at MIT in 1968–1970. In the program, the user carries on a conversation with the computer, moving objects, naming collections and querying the state of a simplified "blocks world", essentially a virtual box filled with different blocks. SHRDLU was written in the Micro Planner and Lisp programming languages on the DEC PDP-6 computer and a DEC graphics terminal. Later additions were made at the computer graphics labs at the University of Utah, adding a full 3D rendering of SHRDLU's "world". The name SHRDLU was derived from ETAOIN SHRDLU, the arrangement of the letter keys on a Linotype machine, in descending order of usage frequency in English.

Functionality

SHRDLU is primarily a language parser that allows user interaction using English terms. The user instructs SHRDLU to move various objects around in the "blocks world" containing various basic objects: ...
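To make the "blocks world" idea concrete, here is a toy C sketch of a world model plus a two-verb command handler. Everything in it (the block names, the put/where verbs, the flat table of blocks) is hypothetical and far simpler than SHRDLU's actual parser and planner; it only shows the shape of "instruct the program, and it updates or reports its model of the world".

#include <stdio.h>
#include <string.h>

/* Toy blocks-world model: each block records what it rests on. */

#define NBLOCKS 3

typedef struct {
    const char *name;
    const char *on;     /* name of whatever this block rests on */
} Block;

static Block world[NBLOCKS] = {
    { "red-block",   "table" },
    { "green-block", "table" },
    { "blue-block",  "red-block" },
};

static Block *find(const char *name) {
    for (int i = 0; i < NBLOCKS; i++)
        if (strcmp(world[i].name, name) == 0) return &world[i];
    return NULL;
}

/* "put X on Y" updates the world model; "where X" queries it. */
static void command(const char *verb, const char *obj, const char *dest) {
    Block *b = find(obj);
    if (!b) { printf("I don't know about %s.\n", obj); return; }
    if (strcmp(verb, "put") == 0) {
        b->on = dest;
        printf("OK, %s is now on %s.\n", obj, dest);
    } else if (strcmp(verb, "where") == 0) {
        printf("%s is on %s.\n", obj, b->on);
    }
}

int main(void) {
    command("where", "blue-block", NULL);
    command("put", "blue-block", "green-block");
    command("where", "blue-block", NULL);
    return 0;
}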
Planner (programming language)
Planner (often seen in publications as "PLANNER", although it is not an acronym) is a programming language designed by Carl Hewitt at MIT and first published in 1969. First, subsets such as Micro-Planner and Pico-Planner were implemented, and then essentially the whole language was implemented as ''Popler'' by Julian Davies at the University of Edinburgh in the POP-2 programming language. Derivations such as QA4, Conniver, QLISP and Ether (see scientific community metaphor) were important tools in artificial intelligence research in the 1970s, and influenced commercial developments such as Knowledge Engineering Environment (KEE) and Automated Reasoning Tool (ART).

Procedural approach versus logical approach

The two major paradigms for constructing semantic software systems were procedural and logical. The procedural paradigm was epitomized by Lisp, which featured recursive procedures that operated on list structures. The logical paradigm was epitomized by uniform proo ...
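As a small illustration of the procedural paradigm described above (recursive procedures operating on list structures), here is a C sketch of a cons-style linked list and a recursive sum over it. C stands in for Lisp purely for consistency with the other examples here; the cons/car/cdr names are borrowed for flavour, and nothing in the sketch is taken from Planner itself.

#include <stdio.h>
#include <stdlib.h>

/* A cons-style list cell: a value plus a pointer to the rest of the list. */
typedef struct Cons {
    int car;              /* value in this cell */
    struct Cons *cdr;     /* rest of the list (NULL terminates it) */
} Cons;

static Cons *cons(int car, Cons *cdr) {
    Cons *c = malloc(sizeof *c);
    c->car = car;
    c->cdr = cdr;
    return c;
}

/* Recursive procedure over the list structure; the empty list is the base case. */
static int sum(const Cons *list) {
    return list == NULL ? 0 : list->car + sum(list->cdr);
}

int main(void) {
    Cons *list = cons(1, cons(2, cons(3, NULL)));   /* the list (1 2 3) */
    printf("%d\n", sum(list));                      /* prints 6 */
    return 0;
}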
24-bit Computing
Notable 24-bit machines include the CDC 924 (a 24-bit version of the CDC 1604), the CDC lower 3000 series, the SDS 930 and SDS 940, the ICT 1900 series, the Elliott 4100 series, and the Datacraft minicomputers/Harris H series. The term SWORD is sometimes used to describe a 24-bit data type, with the S prefix referring to sesqui. The range of unsigned integers that can be represented in 24 bits is 0 to 16,777,215 (FFFFFF in hexadecimal). The range of signed integers that can be represented in 24 bits is −8,388,608 to 8,388,607.

Usage

The IBM System/360, announced in 1964, was a popular computer system with 24-bit addressing and 32-bit general registers and arithmetic. The early 1980s saw the first popular personal computers, including the IBM PC/AT with an Intel 80286 processor using 24-bit addressing and 16-bit general registers and arithmetic, and the Apple Macintosh 128K with a Motorola 68000 processor featuring 24-bit addressing and 32-bit registers. The eZ ...
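The unsigned and signed ranges given above follow directly from 2^24 = 16,777,216: the unsigned maximum is 2^24 - 1, and the two's-complement signed range runs from -2^23 to 2^23 - 1. A short C check of the arithmetic (purely for verification; it assumes int is at least 32 bits wide):

#include <stdio.h>

/* Checks the 24-bit ranges quoted above. */
int main(void) {
    unsigned umax = (1u << 24) - 1;   /* 2^24 - 1 = 16,777,215 */
    int smin = -(1 << 23);            /* -2^23   = -8,388,608  */
    int smax = (1 << 23) - 1;         /*  2^23-1 =  8,388,607  */
    printf("unsigned max: %u (0x%X)\n", umax, umax);
    printf("signed range: %d .. %d\n", smin, smax);
    return 0;
}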
Pointer (computer programming)
In computer science, a pointer is an object in many programming languages that stores a memory address. This can be the address of another value located in computer memory, or in some cases that of memory-mapped computer hardware. A pointer ''references'' a location in memory, and obtaining the value stored at that location is known as ''dereferencing'' the pointer. As an analogy, a page number in a book's index could be considered a pointer to the corresponding page; dereferencing such a pointer would be done by flipping to the page with the given page number and reading the text found on that page. The actual format and content of a pointer variable depend on the underlying computer architecture. Using pointers significantly improves performance for repetitive operations, like traversing iterable data structures (e.g. strings, lookup tables, control tables, linked lists, and tree structures). In particular, it is often much cheaper in time and space to copy and deref ...
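The reference/dereference pair described above maps directly onto C's & and * operators. The short sketch below takes a variable's address, reads the value through the pointer, and then writes through it.

#include <stdio.h>

/* Taking an address (referencing) and reading/writing through it
   (dereferencing). */
int main(void) {
    int value = 42;
    int *ptr = &value;                      /* ptr stores the address of value */

    printf("address: %p\n", (void *)ptr);   /* the pointer itself */
    printf("dereferenced: %d\n", *ptr);     /* the value at that address */

    *ptr = 7;                               /* writing through the pointer */
    printf("value is now %d\n", value);
    return 0;
}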
18-bit Computing
Eighteen binary digits have 262,144 (1000000 octal, 40000 hexadecimal) distinct combinations. Eighteen bits was a common word size for smaller computers in the 1960s, when large computers, often using 36-bit words and 6-bit character sets (sometimes implemented as extensions of BCD), were the norm. Eighteen-bit teletypes were also experimented with in the 1940s.

Example computer architectures

Possibly the best-known 18-bit computer architectures are the PDP-1, PDP-4, PDP-7, PDP-9 and PDP-15 minicomputers produced by Digital Equipment Corporation from 1960 to 1975. Digital's PDP-10 used 36-bit words but had 18-bit addresses. The UNIVAC division of Remington Rand produced several 18-bit computers, including the UNIVAC 418 and several military systems. The IBM 7700 Data Acquisition System was announced by IBM on December 2, 1963. The BCL Molecular 18 was a group of systems designed and manufactured in the UK in the 1970s and 1980s. The NASA Standard Spacecraft ...
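The figures in the opening sentence are simply 2^18 written in three bases; a one-line C check (purely illustrative) prints all three forms.

#include <stdio.h>

/* 2^18 distinct combinations: 262144 decimal, 1000000 octal, 40000 hex. */
int main(void) {
    unsigned combos = 1u << 18;
    printf("%u decimal, %o octal, %X hex\n", combos, combos, combos);
    return 0;
}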
PDP-6
The PDP-6, short for Programmed Data Processor model 6, is a computer developed by Digital Equipment Corporation (DEC) during 1963 and first delivered in the summer of 1964. It was an expansion of DEC's existing 18-bit systems to use a 36-bit data word, which was at that time a common word size for large machines like IBM mainframes. The system was constructed using the same System Module layout, based on individual germanium transistors, as DEC's earlier machines such as the PDP-1 and PDP-4. The system was designed with real-time computing in mind, not just the batch processing that was typical of most mainframes. Using a 36-bit word with 18-bit addresses allowed it to efficiently store the cons structure found in the Lisp language, which made it particularly useful in artificial intelligence labs like Project MAC at MIT. The PDP-6 was also notable for its inclusion of floating-point instructions as a standard feature, which was relatively rare at that time. It was also comple ...
Word (computer architecture)
In computing, a word is any processor design's natural unit of data. A word is a fixed-size datum handled as a unit by the instruction set or the hardware of the processor. The number of bits or digits in a word (the ''word size'', ''word width'', or ''word length'') is an important characteristic of any specific processor design or computer architecture. The size of a word is reflected in many aspects of a computer's structure and operation: the majority of the registers in a processor are usually word-sized, and in many (though not all) architectures the largest datum that can be transferred to and from working memory in a single operation is a word. The largest possible address size, used to designate a location in memory, is typically a hardware word (here, "hardware word" means the full-sized natural word of the processor, as opposed to any other definition used). Documentation for older computers with fixed word size commonly states memory sizes in words rather than bytes ...
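On modern byte-addressed machines the natural word width usually shows up as the width of pointers and of size_t. The small C probe below is illustrative only, and its output depends on the architecture and ABI it is compiled for.

#include <stdio.h>
#include <stdint.h>

/* Prints widths that typically track the machine's natural word size. */
int main(void) {
    printf("pointer width: %zu bits\n", sizeof(void *) * 8);
    printf("size_t width:  %zu bits\n", sizeof(size_t) * 8);
    printf("uintptr_t max: %ju\n", (uintmax_t)UINTPTR_MAX);
    return 0;
}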
36-bit Computing
36-bit computers were popular in the early mainframe computer era from the 1950s through the early 1970s. Starting in the 1960s, and especially in the 1970s, the introduction of 7-bit ASCII and 8-bit EBCDIC led to the move to machines using 8-bit bytes, with word sizes that were multiples of 8, notably the 32-bit IBM System/360 mainframe and the Digital Equipment VAX and Data General MV series superminicomputers. By the mid-1970s the conversion was largely complete, and microprocessors quickly moved from 8-bit to 16-bit to 32-bit over a period of a decade. The number of 36-bit machines fell rapidly during this period; those that remained were offered largely for backward compatibility, running legacy programs.

History

Prior to the introduction of computers, the state of the art in precision scientific and engineering calculation was the ten-digit, electrically powered, mechanical calculator, such a ...
PDP-10
Digital Equipment Corporation's (DEC) PDP-10, later marketed as the DECsystem-10, is a mainframe computer family manufactured beginning in 1966 and discontinued in 1983. Models from the 1970s onward were marketed under the DECsystem-10 name, especially as the TOPS-10 operating system became widely used. The PDP-10's architecture is almost identical to that of DEC's earlier PDP-6, sharing the same 36-bit word length and slightly extending the instruction set. The main difference was a greatly improved hardware implementation. Some aspects of the instruction set are unusual, most notably the ''byte'' instructions, which operate on bit fields of any size from 1 to 36 bits inclusive, according to the general definition of a byte as ''a contiguous sequence of a fixed number of bits''. The PDP-10 was found in many university computing facilities and research labs during the 1970s, the most notable being Harvard University's Aiken Computation Laboratory, Mass ...
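The variable-width ''byte'' instructions mentioned above amount to extracting an arbitrary bit field from a 36-bit word. The C sketch below is a modern approximation rather than PDP-10 byte-pointer semantics, and the helper name is made up; it pulls 7-bit fields out of a 36-bit value held in a 64-bit host integer, using the common layout of five 7-bit characters per word (with one bit left over) as the example.

#include <stdio.h>
#include <stdint.h>

/* Extract a field of 'width' bits starting 'pos' bits from the low end of
   a 36-bit word held in a 64-bit host integer. */
static uint64_t load_byte(uint64_t word, unsigned pos, unsigned width) {
    uint64_t mask = (width >= 64) ? ~0ull : ((1ull << width) - 1);
    return (word >> pos) & mask;
}

int main(void) {
    uint64_t word = 0765432101234ull & ((1ull << 36) - 1);   /* a 36-bit value */
    /* Walk five 7-bit fields from the high end of the word downward. */
    for (unsigned i = 0; i < 5; i++)
        printf("field %u: %llu\n", i,
               (unsigned long long)load_byte(word, 36 - 7 * (i + 1), 7));
    return 0;
}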
Community of Practice
A community of practice (CoP) is a group of people who "share a concern or a passion for something they do and learn how to do it better as they interact regularly". The concept was first proposed by cognitive anthropologist Jean Lave and educational theorist Etienne Wenger in their 1991 book ''Situated Learning''. Wenger significantly expanded on this concept in his 1998 book ''Communities of Practice''. A CoP can form around members' shared interests or goals. Through being part of a CoP, the members learn from each other and develop their identities. CoP members can engage with one another in physical settings (for example, in a lunchroom at work, an office, or a factory floor), but CoP members are not necessarily co-located. They can form a virtual community of practice (VCoP) in which the CoP is primarily located in an online community such as a discussion board, a newsgroup, or a social networking service. Communities of practice have existed for as long as people have ...