Weighted Fair Queuing
Weighted fair queueing (WFQ) is a network scheduling algorithm. WFQ is both a packet-based implementation of the generalized processor sharing (GPS) policy and a natural extension of fair queuing (FQ). Whereas FQ shares the link's capacity in equal subparts, WFQ allows schedulers to specify, for each flow, which fraction of the capacity it will be given. Weighted fair queuing is also known as packet-by-packet GPS (PGPS or P-GPS) since it approximates generalized processor sharing "to within one packet transmission time, regardless of the arrival patterns."
Parametrization and fairness
Like other GPS-like scheduling algorithms, the choice of the weights is left to the network administrator. There is no unique definition of what is "fair". By regulating the WFQ weights dynamically, WFQ can be used to control the quality of service, for example to achieve a guaranteed data rate. Proportionally fair behavior can be achieved by setting the we ...
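As a rough illustration of how a packet-based approximation of GPS can be built, the following Python sketch stamps each arriving packet with a virtual finish time F = max(V, F_prev) + L/w and always transmits the packet with the smallest stamp. The class name, the simplified virtual-time update and the whole interface are illustrative assumptions, not a reference implementation; real WFQ trackers follow the GPS virtual time more carefully.

 import heapq

 class WFQSketch:
     """Simplified weighted-fair-queuing sketch (illustrative only)."""

     def __init__(self, weights):
         self.weights = dict(weights)                  # flow id -> weight (share of capacity)
         self.last_finish = {f: 0.0 for f in weights}  # finish time of each flow's last packet
         self.virtual_time = 0.0                       # simplified stand-in for GPS virtual time
         self.heap = []                                # (finish, seq, flow, length)
         self.seq = 0                                  # tie-breaker for equal finish times

     def enqueue(self, flow, length):
         start = max(self.virtual_time, self.last_finish[flow])
         finish = start + length / self.weights[flow]  # larger weight => earlier finish
         self.last_finish[flow] = finish
         heapq.heappush(self.heap, (finish, self.seq, flow, length))
         self.seq += 1

     def dequeue(self):
         if not self.heap:
             return None
         finish, _, flow, length = heapq.heappop(self.heap)
         self.virtual_time = finish                    # crude virtual-time advance
         return flow, length

With weights 2 and 1, the first flow's packets accumulate finish-time increments half as large as the second flow's, so it receives roughly two thirds of the link while both are backlogged.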


Network Scheduling
A network scheduler, also called packet scheduler, queueing discipline (qdisc) or queueing algorithm, is an arbiter on a node in a packet-switching communication network. It manages the sequence of network packets in the transmit and receive queues of the protocol stack and network interface controller. There are several network schedulers available for the different operating systems that implement many of the existing network scheduling algorithms. The network scheduler logic decides which network packet to forward next. The network scheduler is associated with a queuing system, storing the network packets temporarily until they are transmitted. Systems may have a single queue or multiple queues, in which case each may hold the packets of one flow, classification, or priority. In some cases it may not be possible to schedule all transmissions within the c ...
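As a minimal, hypothetical illustration of a scheduler that keeps one queue per priority class and decides which packet to forward next (it does not correspond to any particular operating-system qdisc):

 from collections import deque

 class StrictPriorityScheduler:
     """One FIFO queue per priority class; always forwards from the
     highest-priority non-empty queue (priority 0 is highest)."""

     def __init__(self, num_classes=3):
         self.queues = [deque() for _ in range(num_classes)]

     def enqueue(self, packet, priority):
         self.queues[priority].append(packet)

     def dequeue(self):
         for queue in self.queues:
             if queue:
                 return queue.popleft()
         return None   # nothing waiting to be transmitted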


Deficit Round Robin
Deficit Round Robin (DRR), also Deficit Weighted Round Robin (DWRR), is a scheduling algorithm for the network scheduler. DRR is, like weighted fair queuing (WFQ), a packet-based implementation of the ideal Generalized Processor Sharing (GPS) policy. It was proposed by M. Shreedhar and G. Varghese in 1995 as an efficient (with ''O(1)'' complexity) and fair algorithm.
Details
In DRR, a scheduler handling N flows is configured with one quantum Q_i for each flow. The general idea is that, at each round, flow i can send at most Q_i bytes, and the remainder, if any, is carried over to the next round. In this way, flow number i will achieve a minimal long-term data rate of \frac{Q_i}{Q_1 + \cdots + Q_N} R, where R is the link rate.
Algorithm
The DRR scans all non-empty queues in sequence. When a non-empty queue i is selected, its deficit counter is incremented by its quantum value. Then, the value of the deficit counter is the maximal number of bytes that can be sent at this turn: if the deficit counter i ...
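A compact sketch of the round described above, assuming queues of packet lengths in bytes and a caller-supplied send callback (both hypothetical interfaces chosen for illustration):

 from collections import deque

 def drr_round_robin(queues, quanta, send):
     """Deficit Round Robin sketch: `queues` maps flow -> deque of packet
     lengths, `quanta` maps flow -> quantum Q_i in bytes, send(flow, length)
     transmits one packet.  Runs until every queue is empty."""
     deficit = {flow: 0 for flow in queues}
     while any(queues.values()):
         for flow, queue in queues.items():
             if not queue:
                 deficit[flow] = 0              # an empty queue forfeits its deficit
                 continue
             deficit[flow] += quanta[flow]      # grant this round's quantum
             while queue and queue[0] <= deficit[flow]:
                 length = queue.popleft()
                 deficit[flow] -= length        # spend the deficit on the packet
                 send(flow, length)

Choosing each quantum no smaller than the maximum packet size is what keeps the per-packet work constant, which is the basis of the ''O(1)'' claim.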


Weighted Round Robin
Weighted round robin (WRR) is a network scheduler for data flows, but it is also used to schedule processes. Weighted round robin is a generalisation of round-robin scheduling. It serves a set of queues or tasks. Whereas round-robin cycles over the queues or tasks and gives one service opportunity per cycle, weighted round robin offers to each a fixed number of opportunities, as specified by the configured weight, which serves to influence the portion of capacity received by each queue or task. In computer networks, a service opportunity is the emission of one packet, if the selected queue is non-empty. If all packets have the same size, WRR is the simplest approximation of generalized processor sharing (GPS). Several variations of WRR exist. The main ones are the ''classical'' WRR and the ''interleaved'' WRR.
Algorithm
Principles
WRR is presented in the following as a network scheduler. It can also be used to schedule tasks in a similar way. A weighted round-robin network sch ...
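The classical variant can be sketched as follows; the queue and weight dictionaries and the send callback are assumptions made for illustration:

 from collections import deque

 def wrr_cycle(queues, weights, send):
     """Classical weighted round robin: in every cycle, flow i receives
     weights[i] service opportunities, each emitting one packet if its
     queue is non-empty."""
     while any(queues.values()):
         for flow, queue in queues.items():
             for _ in range(weights[flow]):
                 if queue:
                     send(flow, queue.popleft())

The interleaved variant spreads each flow's opportunities across the cycle instead of serving them back to back, which reduces burstiness.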


Statistical Time Division Multiplexing
Statistical multiplexing is a type of communication link sharing, very similar to dynamic bandwidth allocation (DBA). In statistical multiplexing, a communication channel is divided into an arbitrary number of variable-bitrate digital channels or data streams. The link sharing is adapted to the instantaneous traffic demands of the data streams that are transferred over each channel. This is an alternative to creating a fixed sharing of a link, such as in general time division multiplexing (TDM) and frequency division multiplexing (FDM). When performed correctly, statistical multiplexing can provide a link utilization improvement, called the ''statistical multiplexing gain''. Statistical multiplexing is facilitated through packet-mode or packet-oriented communication, which, among others, is utilized in packet-switched computer networks. Each stream is divided into packets that are normally delivered asynchronously in a first-come first-served fashion. Alternatively, the pack ...


Scheduling Algorithm
In computing, scheduling is the action of assigning ''resources'' to perform ''tasks''. The ''resources'' may be processors, network links or expansion cards. The ''tasks'' may be threads, processes or data flows. The scheduling activity is carried out by a process called a scheduler. Schedulers are often designed so as to keep all computer resources busy (as in load balancing), to allow multiple users to share system resources effectively, or to achieve a target quality of service. Scheduling is fundamental to computation itself, and an intrinsic part of the execution model of a computer system; the concept of scheduling makes it possible to have computer multitasking with a single central processing unit (CPU).
Goals
A scheduler may aim at one or more goals, for example:
* maximizing ''throughput'' (the total amount of work completed per time unit);
* minimizing ''wait time'' (time from work becoming ready until the first point it begins execution);
* minimizing ''latency ...


Max-min Fairness
In communication networks, multiplexing and the division of scarce resources, max-min fairness is said to be achieved by an allocation if and only if the allocation is feasible and an attempt to increase the allocation of any participant necessarily results in the decrease of the allocation of some other participant with an equal or smaller allocation. In best-effort statistical multiplexing, a first-come first-served (FCFS) scheduling policy is often used. The advantage of max-min fairness over FCFS is that it results in traffic shaping, meaning that an ill-behaved flow, consisting of large data packets or bursts of many packets, will only punish itself and not other flows. Network congestion is consequently avoided to some extent. Fair queuing is an example of a max-min fair packet scheduling algorithm for statistical multiplexing and best-effort networks, since it gives scheduling priority to users that have achieved the lowest data rate since they became active. In case of equ ...
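The allocation itself can be computed by the usual progressive-filling argument; the sketch below (function name and interface are assumptions made for illustration) gives each flow the smaller of its demand and an equal share of the remaining capacity, processing demands in increasing order:

 def max_min_fair(capacity, demands):
     """Max-min fair allocation of `capacity` among flows with the given demands."""
     allocation = {}
     remaining = capacity
     pending = sorted(demands.items(), key=lambda item: item[1])
     for index, (flow, demand) in enumerate(pending):
         share = remaining / (len(pending) - index)   # equal split of what is left
         allocation[flow] = min(demand, share)
         remaining -= allocation[flow]
     return allocation

 # 10 units shared among demands 2, 5 and 8: "a" gets 2, "b" and "c" get 4 each.
 print(max_min_fair(10, {"a": 2, "b": 5, "c": 8}))

No flow's allocation can be raised without lowering that of a flow holding an equal or smaller allocation, which is exactly the definition above.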




Fairness Measure
Fairness measures or metrics are used in network engineering to determine whether users or applications are receiving a fair share of system resources. There are several mathematical and conceptual definitions of fairness.
TCP fairness
Congestion control mechanisms for new network transmission protocols or peer-to-peer applications must interact well with TCP. TCP fairness requires that a new protocol receive no larger a share of the network than a comparable TCP flow. This is important as TCP is the dominant transport protocol on the Internet, and if new protocols acquire unfair capacity they tend to cause problems such as congestion collapse. This was the case with the first versions of RealMedia's streaming protocol: it was based on UDP and was widely blocked at organizational firewalls until a TCP-based version was developed. TCP throughput unfairness over WiFi is a critical problem and needs further investigation.
Jain's fairness index
Raj Jain's equation,
:\mathcal{J}(x_1, x_2, \ldots, x_n) = \frac{\left(\sum_{i=1}^n x_i\right)^2}{n \cdot \sum_{i=1}^n x_i^2}
...
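The index can be computed directly from the formula above; this tiny helper (its name is chosen here for illustration) returns 1 when all rates are equal and approaches 1/n when a single flow takes everything:

 def jain_index(rates):
     """Jain's fairness index J(x) = (sum x_i)^2 / (n * sum x_i^2)."""
     n = len(rates)
     total = sum(rates)
     return total * total / (n * sum(x * x for x in rates))

 print(jain_index([10, 10, 10]))   # 1.0: perfectly fair
 print(jain_index([30, 0, 0]))     # ~0.33: one flow monopolizes the link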


Generalized Processor Sharing
Generalized processor sharing (GPS) is an ideal scheduling algorithm for process schedulers and network schedulers. It is related to the fair-queuing principle, which groups packets into classes and shares the service capacity between them. GPS shares this capacity ''according to some fixed weights''. In process scheduling, GPS is "an idealized scheduling algorithm that achieves perfect fairness. All practical schedulers approximate GPS and use it as a reference to measure fairness." Generalized processor sharing assumes that traffic is fluid (infinitesimal packet sizes) and can be arbitrarily split. There are several service disciplines which track the performance of GPS quite closely, such as weighted fair queuing (WFQ), also known as packet-by-packet generalized processor sharing (PGPS).
Justification
In a network such as the Internet, different application types require different levels of performance. For example, email is a ...
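In the fluid model, the instantaneous rate of each backlogged flow follows directly from the weights; a small sketch (interface assumed for illustration):

 def gps_rates(weights, backlogged, link_rate):
     """Fluid GPS: each backlogged flow i is served at
     link_rate * w_i / (sum of weights of all backlogged flows)."""
     active = {flow: weights[flow] for flow in backlogged}
     total = sum(active.values())
     return {flow: link_rate * w / total for flow, w in active.items()}

 # Weights 1, 2, 3 but only the first two flows backlogged on a 9 Mbit/s
 # link: they receive 3 and 6 Mbit/s; flow "c" gets nothing while idle.
 print(gps_rates({"a": 1, "b": 2, "c": 3}, ["a", "b"], 9.0))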


Leaky Bucket
The leaky bucket is an algorithm based on an analogy of how a bucket with a constant leak will overflow if either the average rate at which water is poured in exceeds the rate at which the bucket leaks or if more water than the capacity of the bucket is poured in all at once. It can be used to determine whether some sequence of discrete events conforms to defined limits on their average and peak rates or frequencies, e.g. to limit the actions associated with these events to these rates or delay them until they do conform to the rates. It may also be used to check conformance to, or limit, an average rate alone, i.e. remove any variation from the average. It is used in packet-switched computer networks and telecommunications networks in the traffic policing, traffic shaping and scheduling of data transmissions, in the form of packets, to defined limits on bandwidth and burstiness (a measure of the variations in the traffic flow). A version of the leaky bucket, the gener ...
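A sketch of the leaky bucket used as a meter, under the assumption that the caller supplies timestamps in seconds (the class and parameter names are illustrative):

 class LeakyBucketMeter:
     """Bucket drains at `rate` units per second and holds `capacity` units;
     an event conforms if adding its size would not overflow the bucket."""

     def __init__(self, rate, capacity):
         self.rate = rate
         self.capacity = capacity
         self.level = 0.0        # current bucket content
         self.last_time = 0.0

     def conforms(self, size, now):
         # Drain the bucket for the time elapsed since the last event.
         elapsed = now - self.last_time
         self.level = max(0.0, self.level - elapsed * self.rate)
         self.last_time = now
         if self.level + size <= self.capacity:
             self.level += size  # conforming traffic fills the bucket
             return True
         return False            # non-conforming: drop, mark, or delay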




Channel Allocation Schemes
In radio resource management for wireless and cellular networks, channel allocation schemes allocate bandwidth and communication channels to base stations, access points and terminal equipment. The objective is to achieve maximum system spectral efficiency in bit/s/Hz/site by means of frequency reuse, but still assure a certain grade of service by avoiding co-channel interference and adjacent channel interference among nearby cells or networks that share the bandwidth. Channel-allocation schemes follow one of two types of strategy (Guowang Miao, Jens Zander, Ki Won Sung, and Ben Slimane, ''Fundamentals of Mobile Data Networks'', Cambridge University Press, 2016):
# Fixed: FCA, fixed channel allocation: manually assigned by the network operator
# Dynamic:
## DCA, dynamic channel allocation
## DFS, dynamic frequency selection
## Spread spectrum
Static Channel Allocation
In Fixed Channel Allocation or Fixed Channel Assignment (FCA) each cell is given a predetermined set of frequency ...