Hardware Emulation
In integrated circuit design, hardware emulation is the process of imitating the behavior of one or more pieces of hardware (typically a system under design) with another piece of hardware, typically a special-purpose emulation system. The emulation model is usually based on hardware description language (e.g. Verilog) source code, which is compiled into the format used by the emulation system. The goal is normally debugging and functional verification of the system being designed. Often an emulator is fast enough to be plugged into a working target system in place of a yet-to-be-built chip, so the whole system can be debugged with live data. This is a specific case of in-circuit emulation. Hardware emulation is sometimes confused with hardware devices such as expansion cards with hardware processors that assist functions of software emulation, such as older daughterboards with x86 chips that allowed x86 operating systems to run on motherboards of different processor families. ...

Field-programmable Gate Array
A field-programmable gate array (FPGA) is an integrated circuit designed to be configured by a customer or a designer after manufacturing, hence the term ''field-programmable''. The FPGA configuration is generally specified using a hardware description language (HDL), similar to that used for an application-specific integrated circuit (ASIC). Circuit diagrams were previously used to specify the configuration, but this is increasingly rare due to the advent of electronic design automation tools. FPGAs contain an array of programmable logic blocks, and a hierarchy of reconfigurable interconnects allowing blocks to be wired together. Logic blocks can be configured to perform complex combinational functions, or act as simple logic gates like AND and XOR. In most FPGAs, logic blocks also include memory elements, which may be simp ...
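
As an illustrative sketch only (not tied to any particular FPGA family), a logic block's look-up table can be modeled as a small truth table indexed by its inputs; the hypothetical Python below shows how the same 2-input structure realizes AND or XOR purely by changing its configuration bits.

    # Minimal model of a 2-input LUT: the configuration is a 4-entry truth table.
    # Hypothetical illustration; real FPGA LUTs typically have 4-6 inputs plus carry logic.

    def make_lut2(config_bits):
        """Return a 2-input combinational function defined by 4 configuration bits."""
        def lut(a, b):
            index = (b << 1) | a            # the inputs select one truth-table entry
            return (config_bits >> index) & 1
        return lut

    lut_and = make_lut2(0b1000)             # truth table for AND: only a=1, b=1 -> 1
    lut_xor = make_lut2(0b0110)             # truth table for XOR: a != b -> 1

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, lut_and(a, b), lut_xor(a, b))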


Background Debug Mode Interface
Background debug mode (BDM) interface is an electronic interface that allows debugging of embedded systems. Specifically, it provides in-circuit debugging functionality in microcontrollers. It requires a single wire and specialized electronics in the system being debugged. It appears in many Freescale Semiconductor products. The interface allows a ''host'' to manage and query a ''target''. Specialized hardware is required in the target device. No special hardware is required in the host; a simple bidirectional I/O pin is sufficient.
I/O signals
The signals used by BDM to communicate data to and from the target are initiated by the host processor. The host negates the transmission line, and then either
* asserts the line sooner, to output a 1,
* asserts the line later, to output a 0, or
* tri-states its output, allowing the target to drive the line. The host can sense a 1 or 0 as an input value.
At the start of the next bit time, the host negates the transmission line, and ...
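
A rough sketch of the host side of this bit protocol, assuming hypothetical gpio_* and delay_ns helpers (the stubs below only print what real, platform-specific GPIO code would do; names and timings are illustrative assumptions, not a real BDM API):

    # Hypothetical host-side sketch of the BDM single-wire bit protocol.
    BIT_TIME_NS = 1000                       # assumed nominal bit time, for illustration only

    def gpio_drive_low():  print("drive line low")          # placeholder for real GPIO access
    def gpio_release():    print("release line (tri-state)")
    def gpio_read():       return 1                         # placeholder: pretend the target drives a 1
    def delay_ns(ns):      pass                             # placeholder delay

    def send_bit(bit):
        gpio_drive_low()                     # host negates the line to start the bit
        if bit:
            delay_ns(BIT_TIME_NS // 4)       # re-assert (release) early -> logic 1
        else:
            delay_ns(3 * BIT_TIME_NS // 4)   # re-assert (release) late  -> logic 0
        gpio_release()
        delay_ns(BIT_TIME_NS)                # let the rest of the bit time elapse

    def read_bit():
        gpio_drive_low()                     # host still initiates every bit
        delay_ns(BIT_TIME_NS // 4)
        gpio_release()                       # tri-state so the target can drive the line
        delay_ns(BIT_TIME_NS // 4)
        value = gpio_read()                  # sample the level the target is driving
        delay_ns(BIT_TIME_NS // 2)
        return value

    send_bit(1)
    print("read:", read_bit())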

Emulator
In computing, an emulator is hardware or software that enables one computer system (called the ''host'') to behave like another computer system (called the ''guest''). An emulator typically enables the host system to run software or use peripheral devices designed for the guest system. Emulation refers to the ability of a computer program in an electronic device to emulate (or imitate) another program or device. Many printers, for example, are designed to emulate HP LaserJet printers because so much software is written for HP printers. If a non-HP printer emulates an HP printer, any software written for a real HP printer will also run in the non-HP printer emulation and produce equivalent printing. Since at least the 1990s, many video game enthusiasts and hobbyists have used emulators to play classic arcade games from the 1980s using the games' original 1980s machine code and data, which is interpreted by a current-era system, and to emulate old video game consoles. ...
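
As a loose illustration of the interpretation loop at the heart of many software emulators, the sketch below invents a 3-instruction toy CPU (not any real console or processor) and interprets its "guest" machine code on the host:

    # Toy emulator core: interpret machine code for an invented accumulator CPU.
    # Made-up opcodes: 0x01 n -> load n, 0x02 n -> add n, 0xFF -> halt.

    def run(program):
        acc, pc = 0, 0
        while True:
            opcode = program[pc]
            if opcode == 0x01:            # LOAD immediate
                acc = program[pc + 1]
                pc += 2
            elif opcode == 0x02:          # ADD immediate
                acc += program[pc + 1]
                pc += 2
            elif opcode == 0xFF:          # HALT: return the accumulator
                return acc
            else:
                raise ValueError(f"unknown opcode {opcode:#x}")

    print(run([0x01, 5, 0x02, 7, 0xFF]))  # prints 12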


Hardware-assisted Virtualization
In computing, hardware-assisted virtualization is a platform virtualization approach that enables efficient full virtualization using help from hardware capabilities, primarily from the host processors. Full virtualization is used to emulate a complete hardware environment, or virtual machine, in which an unmodified guest operating system (using the same instruction set as the host machine) effectively executes in complete isolation. Hardware-assisted virtualization was added to x86 processors (Intel VT-x, AMD-V and VIA VT) in 2005, 2006 and 2010, respectively. Hardware-assisted virtualization is also known as accelerated virtualization; Xen calls it hardware virtual machine (HVM), and Virtual Iron calls it native virtualization.
History
Hardware-assisted virtualization first appeared on the IBM System/370 in 1972, for use with VM/370, the first virtual-machine operating system. With the increasing demand for high-definition computer graphics (e.g. CAD), virtualization ...
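
On Linux x86 hosts, support for these extensions is commonly detected by looking for the vmx (Intel VT-x) or svm (AMD-V) CPU flags; a minimal sketch, assuming the usual /proc/cpuinfo layout:

    # Check /proc/cpuinfo for the CPU flags that advertise hardware-assisted
    # virtualization on x86 Linux: 'vmx' for Intel VT-x, 'svm' for AMD-V.

    def has_hw_virtualization(cpuinfo_path="/proc/cpuinfo"):
        try:
            with open(cpuinfo_path) as f:
                for line in f:
                    if line.startswith("flags"):
                        flags = line.split(":", 1)[1].split()
                        return "vmx" in flags or "svm" in flags
        except FileNotFoundError:     # non-Linux host: no /proc/cpuinfo to inspect
            pass
        return False

    print("hardware-assisted virtualization available:", has_hw_virtualization())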

Static Timing Analysis
Static timing analysis (STA) is a simulation method of computing the expected timing of a synchronous digital circuit without requiring a simulation of the full circuit. High-performance integrated circuits have traditionally been characterized by the clock frequency at which they operate. Measuring the ability of a circuit to operate at the specified speed requires an ability to measure, during the design process, its delay at numerous steps. Moreover, delay calculation must be incorporated into the inner loop of timing optimizers at various phases of design, such as logic synthesis, layout (placement and routing), and in-place optimizations performed late in the design cycle. While such timing measurements can theoretically be performed using a rigorous circuit simulation, such an approach is liable to be too slow to be practical. Static timing analysis plays a vital role in facilitating the fast and reasonably accurate measurement of circuit timing. The speedup comes fr ...
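
Much of that speed comes from a single topological pass that propagates worst-case arrival times through the gate graph instead of simulating input vectors; a minimal sketch of that idea, with an invented netlist format and fixed per-gate delays:

    # Minimal static-timing sketch: propagate worst-case arrival times through a
    # combinational DAG. Netlist format and delay values are invented for illustration.

    # gate -> (delay, fan-in nodes); primary inputs have no entry.
    netlist = {
        "g1": (2.0, ["a", "b"]),
        "g2": (1.5, ["b", "c"]),
        "g3": (3.0, ["g1", "g2"]),   # output gate
    }

    def arrival_times(netlist, primary_inputs):
        arrival = {pi: 0.0 for pi in primary_inputs}

        def visit(node):
            if node in arrival:
                return arrival[node]
            delay, fanin = netlist[node]
            arrival[node] = delay + max(visit(f) for f in fanin)  # worst-case path
            return arrival[node]

        for gate in netlist:
            visit(gate)
        return arrival

    print(arrival_times(netlist, ["a", "b", "c"]))
    # g3 arrives at 3.0 + max(2.0, 1.5) = 5.0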

Savestate
A saved game (also called a game save, savegame, savefile, save point, or simply save) is a piece of digitally stored information about the progress of a player in a video game. From the earliest games in the 1970s onward, game platform hardware and memory improved, which led to bigger and more complex computer games, which, in turn, tended to take more and more time to play from start to finish. This naturally led to the need to store the player's progress in some way, and to handle the case where the player received a "game over". More modern games with a heavier emphasis on storytelling are designed to allow the player many choices that impact the story in a profound way later on, and some game designers do not want to allow more than one save game so that the experience will always be "fresh". Game designers allow players to prevent the loss of progress in the game (as might happen after a game over). Games designed this way encourage players to 'try things out', and on r ...

Amdahl's Law
In computer architecture, Amdahl's law (or Amdahl's argument) is a formula which gives the theoretical speedup in latency of the execution of a task at fixed workload that can be expected of a system whose resources are improved. It states that "the overall performance improvement gained by optimizing a single part of a system is limited by the fraction of time that the improved part is actually used". It is named after computer scientist Gene Amdahl, and was presented at the American Federation of Information Processing Societies (AFIPS) Spring Joint Computer Conference in 1967. Amdahl's law is often used in parallel computing to predict the theoretical speedup when using multiple processors. For example, if a program needs 20 hours to complete using a single thread, but a one-hour portion of the program cannot be parallelized, so that only the remaining 19 hours (p = 0.95) of execution time can be parallelized, then regardless of how many threads are devoted to a parallelized execut ...
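
A small sketch of the law applied to the 20-hour example above, using the standard form S(n) = 1 / ((1 - p) + p/n), where p is the parallelizable fraction and n the number of processors:

    # Amdahl's law: speedup S(n) = 1 / ((1 - p) + p / n).

    def amdahl_speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    # The example from the text: 20 hours total, 1 hour serial -> p = 19/20 = 0.95.
    p = 0.95
    for n in (1, 2, 4, 16, 256, 1_000_000):
        print(f"{n:>9} threads: speedup {amdahl_speedup(p, n):6.2f}x")

    # As n grows, the speedup approaches 1 / (1 - p) = 20x but never exceeds it.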

FPGA Prototyping
Field-programmable gate array prototyping (FPGA prototyping), also referred to as FPGA-based prototyping, ASIC prototyping or system-on-chip (SoC) prototyping, is the method of prototyping system-on-chip and application-specific integrated circuit designs on FPGAs for hardware verification and early software development. Verification methods for hardware design, as well as early software and firmware co-design, have become mainstream. Prototyping SoC and ASIC designs with one or more FPGAs and electronic design automation (EDA) software has become a good method to do this.
Why prototyping is important
# Running an SoC design on an FPGA prototype is a reliable way to ensure that it is functionally correct, compared with relying only on software simulations to verify that the hardware design is sound. About a third of all current SoC designs are fault-free during the first silicon pass, with nearly half of all re-spins caused by functional logic errors. A single prototyp ...

Integrated Circuit Design
Integrated circuit design, or IC design, is a sub-field of electronics engineering, encompassing the particular logic and circuit design techniques required to design integrated circuits, or ICs. ICs consist of miniaturized electronic components built into an electrical network on a monolithic semiconductor substrate by photolithography. IC design can be divided into the broad categories of digital and analog IC design. Digital IC design produces components such as microprocessors, FPGAs, memories (RAM, ROM, and flash) and digital ASICs. Digital design focuses on logical correctness, maximizing circuit density, and placing circuits so that clock and timing signals are routed efficiently. Analog IC design has specializations in power IC design and RF IC design. Analog IC design is used in the design of op-amps, linear regulators, phase-locked loops, oscillators and active filters. Analog design is more concerned with the physics of the semiconductor devices ...


Logic Simulation
Logic simulation is the use of simulation software to predict the behavior of digital circuits described in hardware description languages. Simulation can be performed at varying degrees of physical abstraction, such as at the transistor level, gate level, register-transfer level (RTL), electronic system level (ESL), or behavioral level.
Use in verification
Logic simulation may be used as part of the verification process in designing hardware. Simulation has the advantage of providing a familiar look and feel to the user, in that it is constructed from the same language and symbols used in design. By allowing the user to interact directly with the design, simulation is a natural way for the designer to get feedback on their design.
Length of simulation
The level of effort required to debug and then verify the design is proportional to the maturity of the design. That is, early in the design's life, bugs and incorrect behavior are usually found quickly. As the design matures, the simu ...
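
A toy gate-level simulation, with a hand-written netlist format invented purely for illustration (here a half adder), evaluated for every input combination:

    # Toy gate-level logic simulation of a small combinational netlist.
    from itertools import product

    GATES = {
        "AND": lambda a, b: a & b,
        "OR":  lambda a, b: a | b,
        "XOR": lambda a, b: a ^ b,
    }

    # Each entry: (output net, gate type, input nets). Here: a half adder.
    netlist = [
        ("sum",   "XOR", ("a", "b")),
        ("carry", "AND", ("a", "b")),
    ]

    def simulate(netlist, inputs):
        nets = dict(inputs)                   # net name -> current logic value
        for out, gate, (x, y) in netlist:     # netlist assumed listed in evaluation order
            nets[out] = GATES[gate](nets[x], nets[y])
        return nets

    for a, b in product((0, 1), repeat=2):
        result = simulate(netlist, {"a": a, "b": b})
        print(a, b, "->", result["sum"], result["carry"])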

Register-transfer Level
In digital circuit design, register-transfer level (RTL) is a design abstraction which models a synchronous digital circuit in terms of the flow of digital signals (data) between hardware registers, and the logical operations performed on those signals. Register-transfer-level abstraction is used in hardware description languages (HDLs) like Verilog and VHDL to create high-level representations of a circuit, from which lower-level representations and ultimately actual wiring can be derived. Design at the register-transfer level is typical practice in modern digital design. Unlike in software compiler design, where the register-transfer level is an intermediate representation at the lowest level, RTL is the usual input that circuit designers operate on. In fact, in circuit synthesis, an intermediate language between the input register-transfer-level representation and the target netlist is sometimes used. Unlike a netlist, constructs such as cells, functions, and multi-bit re ...
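
A behavioral sketch of what "register transfer" means: on each clock edge, the next register value is a combinational function of the current registers and inputs. This toy accumulator is written in plain Python as a stand-in for what an HDL such as Verilog or VHDL would describe; the names and stimulus are illustrative only.

    # Register-transfer view of a toy 8-bit accumulator: combinational logic computes
    # the next state, and the register only updates on the clock tick.

    def next_state(acc, data_in, enable):
        """Combinational logic: the value the register captures at the next edge."""
        return (acc + data_in) & 0xFF if enable else acc

    acc = 0                                        # the hardware register
    stimulus = [(3, 1), (5, 1), (9, 0), (7, 1)]    # (data_in, enable) per clock cycle

    for cycle, (data_in, enable) in enumerate(stimulus):
        acc = next_state(acc, data_in, enable)     # 'clock edge': register captures next value
        print(f"cycle {cycle}: acc = {acc}")
    # acc sequence: 3, 8, 8, 15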