Interrupt Latency





Interrupt Latency
In computing, interrupt latency refers to the delay between the start of an Interrupt Request (IRQ) and the start of the respective Interrupt Service Routine (ISR). For many operating systems, devices are serviced as soon as the device's interrupt handler is executed. Interrupt latency may be affected by microprocessor design, interrupt controllers, interrupt masking, and the operating system's (OS) interrupt handling methods.

Background
There is usually a trade-off between interrupt latency, throughput, and processor utilization. Many of the techniques of CPU and OS design that improve interrupt latency will decrease throughput and increase processor utilization. Techniques that increase throughput may increase interrupt latency and increase processor utilization. Lastly, trying to reduce processor utilization may increase interrupt latency and decrease throughput. Minimum interrupt latency is largely determined by the interrupt controller circuit and its configuration. The ...
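A minimal sketch of how interrupt latency can be measured on a bare-metal system: read a free-running timer just before raising an interrupt, and read it again on entry to the ISR. The timer address, the trigger_software_irq() helper, and the vector hookup are illustrative assumptions, not a specific device's interface.

    /* Sketch: measuring interrupt latency in timer ticks on a bare-metal
     * system. TIMER_COUNT and trigger_software_irq() are hypothetical
     * names; real register layouts and APIs depend on the microcontroller. */
    #include <stdint.h>

    #define TIMER_COUNT (*(volatile uint32_t *)0x40000004u) /* assumed free-running timer */

    extern void trigger_software_irq(void); /* assumed: pend a software interrupt */

    static volatile uint32_t trigger_time;  /* timer value when the IRQ was raised  */
    static volatile uint32_t latency_ticks; /* measured delay to the first ISR line */

    void demo_isr(void)                     /* hooked into the vector table elsewhere */
    {
        latency_ticks = TIMER_COUNT - trigger_time; /* elapsed ticks = interrupt latency */
    }

    void measure_latency(void)
    {
        trigger_time = TIMER_COUNT;         /* timestamp just before raising the IRQ */
        trigger_software_irq();
        /* once demo_isr has run, latency_ticks holds the latency in timer ticks */
    }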


Interrupt Handler
In computer systems programming, an interrupt handler, also known as an interrupt service routine or ISR, is a special block of code associated with a specific interrupt condition. Interrupt handlers are initiated by hardware interrupts, software interrupt instructions, or software exceptions, and are used for implementing device drivers or transitions between protected modes of operation, such as system calls.

The traditional form of interrupt handler is the hardware interrupt handler. Hardware interrupts arise from electrical conditions or low-level protocols implemented in digital logic. They are usually dispatched via a hard-coded table of interrupt vectors, asynchronously to the normal execution stream (as interrupt masking levels permit), often use a separate stack, and automatically enter a different execution context (privilege level) for the duration of the interrupt handler's execution. In general, hardware interrupts and their handlers are used to handle high-pri ...
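A minimal sketch of a hardware interrupt handler for a hypothetical UART receiver. The register addresses and the "data ready" bit are illustrative assumptions; on a real device the vector-table entry, register layout, and acknowledgement rules come from the hardware manual.

    /* Sketch: ISR that drains a hypothetical UART's receive register.
     * Addresses and bit masks are assumptions for illustration only. */
    #include <stdint.h>

    #define UART_STATUS   (*(volatile uint32_t *)0x40001000u) /* assumed status register  */
    #define UART_DATA     (*(volatile uint32_t *)0x40001004u) /* assumed receive register */
    #define UART_RX_READY 0x01u                               /* assumed "data ready" bit */

    static volatile uint8_t  rx_buf[256];
    static volatile uint32_t rx_head;

    void uart_isr(void)   /* dispatched via the interrupt vector table */
    {
        /* Drain all pending characters so the interrupt line is cleared. */
        while (UART_STATUS & UART_RX_READY) {
            rx_buf[rx_head++ & 0xFFu] = (uint8_t)UART_DATA; /* reading DATA acks the IRQ (assumed) */
        }
    }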


Context Switch
In computing, a context switch is the process of storing the state of a process or thread, so that it can be restored and resume execution at a later point, and then restoring a different, previously saved, state. This allows multiple processes to share a single central processing unit (CPU), and is an essential feature of a multitasking operating system.

The precise meaning of the phrase "context switch" varies. In a multitasking context, it refers to the process of storing the system state for one task, so that task can be paused and another task resumed. A context switch can also occur as the result of an interrupt, such as when a task needs to access disk storage, freeing up CPU time for other tasks. Some operating systems also require a context switch to move between user mode and kernel mode tasks. The process of context switching can have a negative impact on system performance.

Cost
Context switches are usually computationally intensive, and much of the design of opera ...
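The save/restore idea can be illustrated in user space with the POSIX <ucontext.h> API; this is only a sketch of the concept, since real kernels save and restore register state directly in assembly as part of the scheduler.

    /* Sketch: cooperative context switch between main and a task using
     * getcontext/makecontext/swapcontext (POSIX, works on e.g. Linux/glibc). */
    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, task_ctx;
    static char task_stack[64 * 1024];      /* stack for the second context */

    static void task(void)
    {
        printf("task: running\n");
        swapcontext(&task_ctx, &main_ctx);  /* save task state, restore main */
        printf("task: resumed\n");
    }

    int main(void)
    {
        getcontext(&task_ctx);              /* initialize with the current state  */
        task_ctx.uc_stack.ss_sp   = task_stack;
        task_ctx.uc_stack.ss_size = sizeof task_stack;
        task_ctx.uc_link          = &main_ctx;   /* return here when task() ends  */
        makecontext(&task_ctx, task, 0);

        swapcontext(&main_ctx, &task_ctx);  /* first switch into the task         */
        printf("main: back from task\n");
        swapcontext(&main_ctx, &task_ctx);  /* switch again; the task resumes     */
        printf("main: done\n");
        return 0;
    }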


Response Time (technology)
In technology, response time is the time a system or functional unit takes to react to a given input.

Computing
Response time is the total amount of time it takes to respond to a request for service. That service can be anything from a memory fetch, to a disk I/O, to a complex database query, or loading a full web page. Ignoring transmission time for a moment, the response time is the sum of the service time and wait time. The service time is the time it takes to do the work you requested. For a given request the service time varies little as the workload increases – to do X amount of work it always takes X amount of time. The wait time is how long the request had to wait in a queue before being serviced, and it varies from zero, when no waiting is required, to a large multiple of the service time, when many requests are already in the queue and have to be serviced first. With basic queueing theory math you can calculate how the average wait time increases as the device providing ...
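One simple model of "wait time grows with load" is the M/M/1 queue (Poisson arrivals, exponential service). The sketch below uses its standard formulas — utilization rho = lambda/mu, wait Wq = rho/(mu*(1-rho)), response W = Wq + 1/mu — with illustrative numbers that are not from the article.

    /* Sketch: average wait and response time vs. load for an M/M/1 queue. */
    #include <stdio.h>

    int main(void)
    {
        double mu = 100.0;                         /* service rate: 100 requests/s */
        for (double lambda = 10.0; lambda < mu; lambda += 20.0) {
            double rho = lambda / mu;              /* offered load / utilization   */
            double wq  = rho / (mu * (1.0 - rho)); /* average time spent queueing  */
            double w   = wq + 1.0 / mu;            /* wait time + service time     */
            printf("load %.0f%%: wait %.2f ms, response %.2f ms\n",
                   rho * 100.0, wq * 1000.0, w * 1000.0);
        }
        return 0;
    }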


Non-maskable Interrupt
In computing, a non-maskable interrupt (NMI) is a hardware interrupt that standard interrupt-masking techniques in the system cannot ignore. It typically signals attention for non-recoverable hardware errors. Some NMIs may be masked, but only by using proprietary methods specific to the particular NMI. An NMI is often used when response time is critical or when an interrupt should never be disabled during normal system operation. Such uses include reporting non-recoverable hardware errors, system debugging and profiling, and handling of special cases like system resets. Modern computer architectures typically use NMIs to handle non-recoverable errors which need immediate attention. Therefore, such interrupts should not be masked in the normal operation of the system. These errors include non-recoverable internal system chipset errors, corruption in system memory such as parity and ECC errors, and data corruption detected on system and peripheral buses. On some systems, ...
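A minimal sketch of what an NMI handler for a fatal hardware error might look like. The error-status register, the logging hook, and the halt behavior are illustrative assumptions; real NMI sources, their acknowledgement, and safe shutdown paths are platform specific.

    /* Sketch: NMI handler that records a non-recoverable error and halts. */
    #include <stdint.h>

    #define HW_ERROR_STATUS (*(volatile uint32_t *)0x40002000u) /* assumed error register */

    extern void panic_log(const char *msg, uint32_t code);      /* assumed logging hook */

    void nmi_handler(void)
    {
        uint32_t code = HW_ERROR_STATUS;  /* capture the cause before anything else */
        panic_log("non-recoverable hardware error", code);
        for (;;) {                        /* cannot safely continue: halt the CPU   */
            /* spin (or issue a platform-specific halt/reset) */
        }
    }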


Inter-processor Interrupt
An inter-processor interrupt (IPI), also known as a "shoulder tap", is a special type of interrupt by which one processor may interrupt another processor in a multiprocessor system if the interrupting processor requires action from the other processor. Actions that might be requested include:
* flushing memory management unit caches, such as translation lookaside buffers, on other processors when memory mappings are changed by one processor (see the sketch below);
* stopping other processors when the system is being shut down by one processor;
* notifying a processor that higher-priority work is available;
* notifying a processor of work that cannot be done on all processors due to, e.g.,
** asymmetric access to I/O channels
** special features on some processors

Mechanism
The M65MP option of OS/360 used the Direct Control feature of the S/360 to generate an interrupt on another processor; on S/370 and its successors, including z/Architecture, the SIGNAL PROCESSOR instruction provides a more formalized interface. The ...
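The sketch below illustrates the first use case above, a TLB "shootdown" after one processor changes a memory mapping. send_ipi(), cpu_count(), current_cpu(), and local_tlb_flush() are assumed kernel helpers for illustration, not a real kernel's API; actual IPI delivery is done through platform hardware such as the local APIC.

    /* Sketch: asking every other CPU to flush its TLB via an IPI. */
    #include <stdint.h>

    #define IPI_VECTOR_TLB_FLUSH 0xF0u   /* assumed interrupt vector for the request */

    extern unsigned cpu_count(void);
    extern unsigned current_cpu(void);
    extern void send_ipi(unsigned cpu, uint8_t vector); /* assumed: raise IPI on target  */
    extern void local_tlb_flush(void);                  /* assumed: flush this CPU's TLB */

    /* Called on the CPU that modified the page tables. */
    void tlb_shootdown(void)
    {
        local_tlb_flush();                              /* flush our own stale entries  */
        for (unsigned cpu = 0; cpu < cpu_count(); cpu++) {
            if (cpu != current_cpu())
                send_ipi(cpu, IPI_VECTOR_TLB_FLUSH);    /* ask every other CPU to flush */
        }
    }

    /* Handler installed for IPI_VECTOR_TLB_FLUSH on every CPU. */
    void tlb_flush_ipi_handler(void)
    {
        local_tlb_flush();
    }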




IEEE 802
IEEE 802 is a family of Institute of Electrical and Electronics Engineers (IEEE) standards for local area networks (LAN), personal area network (PAN), and metropolitan area networks (MAN). The IEEE 802 LAN/MAN Standards Committee (LMSC) maintains these standards. The IEEE 802 family of standards has had twenty-four members, numbered 802.1 through 802.24, with a working group of the LMSC devoted to each. However, not all of these working groups are currently active. The IEEE 802 standards are restricted to computer networks carrying variable-size packets, unlike cell relay networks, for example, in which data is transmitted in short, uniformly sized units called cells. Isochronous signal networks, in which data is transmitted as a steady stream of octets, or groups of octets, at regular time intervals, are also outside the scope of the IEEE 802 standards. The number 802 has no significance: it was simply the next number in the sequence that the IEEE used for standards projects. ...



Ethernet Flow Control
Ethernet flow control is a mechanism for temporarily stopping the transmission of data on Ethernet family computer networks. The goal of this mechanism is to avoid packet loss in the presence of network congestion. The first flow control mechanism, the pause frame, was defined by the IEEE 802.3x standard. The follow-on priority-based flow control, as defined in the IEEE 802.1Qbb standard, provides a link-level flow control mechanism that can be controlled independently for each class of service (CoS), as defined by IEEE P802.1p. It is applicable to data center bridging (DCB) networks, and allows prioritization of voice over IP (VoIP), video over IP, and database synchronization traffic over default data traffic and bulk file transfers.

Description
A sending station (computer or network switch) may be transmitting data faster than the other end of the link can accept it. Using flow control, the receiving station can signal the sender requesting suspension of transmissions ...
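A sketch of how an 802.3x pause frame is laid out in memory. The fixed fields (destination 01-80-C2-00-00-01, EtherType 0x8808 for MAC Control, opcode 0x0001, and a 16-bit pause time in 512-bit-time quanta) follow the standard; the source MAC and the mechanism for actually transmitting the frame are left open.

    /* Sketch: filling a buffer with an IEEE 802.3x pause frame. */
    #include <stdint.h>
    #include <string.h>

    static const uint8_t PAUSE_DST[6] = {0x01, 0x80, 0xC2, 0x00, 0x00, 0x01};

    /* Fills buf (at least 60 bytes) with a pause frame; returns its length. */
    size_t build_pause_frame(uint8_t *buf, const uint8_t src_mac[6], uint16_t quanta)
    {
        memset(buf, 0, 60);                  /* pad to the 60-byte minimum (before FCS) */
        memcpy(buf,     PAUSE_DST, 6);       /* fixed multicast destination             */
        memcpy(buf + 6, src_mac,   6);       /* sender's MAC address                    */
        buf[12] = 0x88; buf[13] = 0x08;      /* EtherType: MAC Control (0x8808)         */
        buf[14] = 0x00; buf[15] = 0x01;      /* opcode: PAUSE (0x0001)                  */
        buf[16] = (uint8_t)(quanta >> 8);    /* pause time, big-endian,                 */
        buf[17] = (uint8_t)(quanta & 0xFF);  /* in units of 512 bit times               */
        return 60;                           /* frame check sequence added by hardware  */
    }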


Advanced Programmable Interrupt Controller
In computing, Intel's Advanced Programmable Interrupt Controller (APIC) is a family of interrupt controllers. As its name suggests, the APIC is more advanced than Intel's 8259 Programmable Interrupt Controller (PIC), particularly enabling the construction of multiprocessor systems. It is one of several architectural designs intended to solve interrupt routing efficiency issues in multiprocessor computer systems. The APIC is a split architecture design, with a local component (LAPIC) usually integrated into the processor itself, and an optional I/O APIC on a system bus. The first APIC was the 82489DX, a discrete chip that functioned as both a local and an I/O APIC. The 82489DX enabled construction of symmetric multiprocessor (SMP) systems with the Intel 486 and early Pentium processors; for example, the reference two-way 486 SMP system used three 82489DX chips, two as local APICs and one as an I/O APIC. Starting with the P54C processor, the local APIC functionality was integrated ...
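A brief sketch of touching two local APIC registers in xAPIC mode, where the LAPIC is memory-mapped at its default physical base 0xFEE00000 (ID register at offset 0x20, end-of-interrupt at 0xB0). It assumes that base is mapped and that x2APIC mode (which uses MSRs instead) is not in use; real kernel code would check both.

    /* Sketch: reading the local APIC ID and signaling end-of-interrupt. */
    #include <stdint.h>

    #define LAPIC_BASE     0xFEE00000u
    #define LAPIC_REG(off) (*(volatile uint32_t *)(LAPIC_BASE + (off)))

    #define LAPIC_ID  0x020u   /* local APIC ID register    */
    #define LAPIC_EOI 0x0B0u   /* end-of-interrupt register */

    /* Which processor am I? The APIC ID lives in bits 31:24 in xAPIC mode. */
    static inline uint32_t lapic_id(void)
    {
        return LAPIC_REG(LAPIC_ID) >> 24;
    }

    /* Signal end-of-interrupt so the LAPIC can deliver the next interrupt. */
    static inline void lapic_eoi(void)
    {
        LAPIC_REG(LAPIC_EOI) = 0;   /* a write of any value completes the current IRQ */
    }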



Live-lock
In concurrent computing, deadlock is any situation in which no member of some group of entities can proceed because each waits for another member, including itself, to take action, such as sending a message or, more commonly, releasing a lock. Deadlocks are a common problem in multiprocessing systems, parallel computing, and distributed systems, because in these contexts systems often use software or hardware locks to arbitrate shared resources and implement process synchronization. In an operating system, a deadlock occurs when a process or thread enters a waiting state because a requested system resource is held by another waiting process, which in turn is waiting for another resource held by another waiting process. If a process remains indefinitely unable to change its state because resources requested by it are being used by another process that itself is waiting, then the system is said to be in a deadlock. In a communications system, deadlocks occur mainly due to los ...
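Since the excerpt above describes deadlock, here is a minimal sketch of the classic two-lock case: each thread holds one mutex and waits forever for the other. Whether the hang actually occurs depends on thread timing, which is part of why such bugs are hard to reproduce; acquiring the locks in a consistent order prevents it. Compile with -pthread.

    /* Sketch: two threads acquiring two mutexes in opposite order. */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

    static void *worker1(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock_a);   /* acquire A ...                          */
        pthread_mutex_lock(&lock_b);   /* ... then wait for B (held by worker2)  */
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);
        return NULL;
    }

    static void *worker2(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock_b);   /* acquire B ...                          */
        pthread_mutex_lock(&lock_a);   /* ... then wait for A (held by worker1)  */
        pthread_mutex_unlock(&lock_a);
        pthread_mutex_unlock(&lock_b);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker1, NULL);
        pthread_create(&t2, NULL, worker2, NULL);
        pthread_join(t1, NULL);        /* never returns if the deadlock occurs   */
        pthread_join(t2, NULL);
        puts("no deadlock this run");  /* reached only if timing avoided it      */
        return 0;
    }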


Interrupt Storm
In operating systems, an interrupt storm is an event during which a processor receives an inordinate number of interrupts that consume the majority of the processor's time. Interrupt storms are typically caused by hardware devices that do not support interrupt rate limiting.

Background
Because interrupt processing is typically a non-preemptible task in time-sharing operating systems, an interrupt storm will cause sluggish response to user input, or even appear to freeze the system completely. This state is commonly known as "livelock". In such a state, the system is spending most of its resources processing interrupts instead of completing other work. To the end user, it does not appear to be processing anything at all, as there is often no output. An interrupt storm is sometimes mistaken for thrashing, since they both have similar symptoms (unresponsive or sluggish response to user input, little or no output). Common causes include: misconfigured or faulty hardware, faulty d ...
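One common software mitigation is to detect an excessive interrupt rate, mask the offending device's IRQ line, and fall back to periodic polling. The sketch below illustrates the idea; disable_device_irq(), schedule_poll(), now_ms(), and the threshold values are assumed names and numbers for illustration, not a real driver framework.

    /* Sketch: switching a device from interrupts to polling during a storm. */
    #include <stdint.h>
    #include <stdbool.h>

    #define STORM_THRESHOLD 10000u          /* interrupts per window treated as a storm */
    #define WINDOW_MS       1000u

    extern uint64_t now_ms(void);           /* assumed: monotonic time in milliseconds */
    extern void disable_device_irq(void);   /* assumed: mask this device's IRQ line    */
    extern void schedule_poll(void);        /* assumed: arrange periodic polling       */
    extern void handle_device_event(void);  /* normal interrupt work                   */

    static uint64_t window_start;
    static uint32_t count_in_window;
    static bool     polling_mode;

    void device_isr(void)
    {
        uint64_t t = now_ms();
        if (t - window_start > WINDOW_MS) { /* start a new counting window */
            window_start = t;
            count_in_window = 0;
        }
        if (++count_in_window > STORM_THRESHOLD && !polling_mode) {
            polling_mode = true;
            disable_device_irq();           /* stop the storm at the source          */
            schedule_poll();                /* keep servicing the device by polling  */
            return;
        }
        handle_device_event();
    }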