In computer science, a strict Fibonacci heap is a priority queue data structure with low worst case time bounds. It matches the amortized time bounds of the Fibonacci heap in the worst case. To achieve these time bounds, strict Fibonacci heaps maintain several invariants by performing restoring transformations after every operation. These transformations can be done in constant time by using auxiliary data structures to track invariant violations, and the pigeonhole principle guarantees that these violations can always be fixed. Strict Fibonacci heaps were invented in 2012 by Gerth S. Brodal, George Lagogiannis, and Robert E. Tarjan, with an update in 2025. Along with Brodal queues, strict Fibonacci heaps belong to a class of asymptotically optimal data structures for priority queues. All operations on strict Fibonacci heaps run in worst case constant time except ''delete-min'', which is necessarily logarithmic. This is optimal, because any priority queue can be used to sort a list of n elements by performing n insertions and n ''delete-min'' operations. However, strict Fibonacci heaps are simpler than Brodal queues, which make use of dynamic arrays and redundant counters, whereas the strict Fibonacci heap is pointer based only.


Structure

A strict Fibonacci heap is a single tree satisfying the minimum-heap property. That is, the key of a node is always smaller than or equal to the keys of its children. As a direct consequence, the node with the minimum key always lies at the root. Like ordinary Fibonacci heaps, strict Fibonacci heaps possess substructures similar to binomial heaps. To identify these structures, we label every node with one of two types. We thus introduce the following definitions and rules:
* All nodes are either active (colored ''white'') or passive (colored ''red'').
* An active root is an active node with a passive parent.
* A passive linkable node is a passive node all of whose descendants are passive (a passive node with no children is considered to be linkable).
* The rank of an active node is the number of active children it has.
* The loss of an active node is the number of active children it has lost.
* For any node, the active children lie to the left of the passive children.
* An active root always has zero loss.
* The root is passive.
* The passive linkable children of the root lie to the right of the passive non-linkable children.
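As a concrete, simplified picture of these definitions, a node might be represented as follows. This is only a sketch: the class and field names are our own, and a real implementation stores children in doubly linked sibling lists rather than a Python list.

<syntaxhighlight lang="python">
class Node:
    """Illustrative strict Fibonacci heap node (field names are assumptions)."""
    def __init__(self, key):
        self.key = key
        self.active = False        # newly created nodes are passive
        self.rank = 0              # number of active children (relevant only while active)
        self.loss = 0              # number of active children lost (relevant only while active)
        self.parent = None
        self.children = []         # active children are kept to the left of passive children

    def is_active_root(self):
        # An active node whose parent is passive.
        return self.active and self.parent is not None and not self.parent.active

    def is_passive_linkable(self):
        # A passive node is linkable when every descendant is passive.
        return not self.active and all(c.is_passive_linkable() for c in self.children)
</syntaxhighlight>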


Invariants

; Invariant 1: Structure : The ith rightmost active child c_i of an active node satisfies c_i.\mathrm{rank} + c_i.\mathrm{loss} \ge i - 1. Thus, the loss of an active node can be viewed as a generalisation of Fibonacci heap 'marks'. For example, a subtree consisting of only active nodes with loss zero is a binomial tree.

In addition, several invariants impose logarithmic bounds on three main quantities: the number of active roots, the total loss, and the degrees of nodes. This is in contrast to the ordinary Fibonacci heap, which is more flexible and allows structural violations to grow on the order of O(n), to be cleaned up later, as it is a lazy data structure. To assist in keeping the degrees of nodes logarithmic, every non-root node also participates in a queue Q. In the following section, and for the rest of this article, we define the real number R = 2 \lg n + 6, where n is the number of nodes in the heap, and \lg denotes the binary logarithm.

; Invariant 2: Active roots : The total number of active roots is at most R+1.
; Invariant 3: Total loss : The total loss in the heap is at most R+1.
; Invariant 4: Root degree : The degree of the root is at most R+3.
; Invariant 5: Non-root degrees : For an active node with zero loss, the degree is at most 2 \lg (2n - p) + 10, where p is its position in Q (with 1 as the first element). For all other non-root nodes, the degree is at most 2 \lg (2n - p) + 9.

; Corollary 1: Maximum degree : The degree of any non-root node is at most R+6.
: ''Proof'': This follows immediately from invariant 5. Letting p = 0 gives the upper bound
:: 2 \lg (2n - 0) + 10 = 2 \lg n + 12 = R + 6.

; Lemma 1: Maximum rank : If invariant 1 holds, the maximum rank r of any active node is at most \lg n + \sqrt{2L} + 2, where L is the total loss.
: ''Proof'': We proceed by contradiction. Let x be an active node with maximal rank r in a heap with n nodes and total loss L, and assume that r \ge \lg n + k + 1, where k is the smallest integer such that k(k+1)/2 \ge L. Our goal is to show that the subtree T_x rooted at x contains n + 1 nodes, which is a contradiction because there are only n nodes in the heap.
: Discard all subtrees rooted at passive nodes from T_x, leaving it with only active nodes. Cut off all the grandchildren of x whose subtrees contain any node of positive loss, and increase the loss of the children of x accordingly, once for each grandchild lost. The quantity \mathrm{rank} + \mathrm{loss} is unchanged for the remaining nodes, preserving invariant 1. Furthermore, the total loss is still at most L.
: The children of x now consist of loss-free subtrees and leaf nodes with positive loss. Currently, T_x satisfies c_i.\mathrm{rank} + c_i.\mathrm{loss} \ge i - 1 for the ith rightmost child c_i of x. We make this an exact equality by first reducing the loss of each c_i, and pruning grandchildren if necessary. Afterwards, c_i.\mathrm{rank} + c_i.\mathrm{loss} = i - 1 exactly. All other descendants of x are also converted into binomial subtrees by pruning children as necessary.
: We now attempt to reconstruct a minimal version of T_x by starting with a binomial tree of degree r, containing 2^r active nodes. We wish to increase the loss to L, but keep the rank of x as r and the number of nodes as low as possible. For a binomial tree of degree r, there is one child of each degree from 0 to r-1. Hence, there are j grandchildren of degree r-j-1. If we cut all the grandchildren whose degree is at least r-k-1, then we have cut \sum_{j=1}^{k} j = k(k+1)/2 grandchildren, which is sufficient to bring the loss up to L. All grandchildren with degree at most r-k-2 survive. Let w be the child of x with degree r-k-1 and loss 0. By assumption, r-k-1 \ge \lg n, and w is a complete binomial tree, so it has at least 2^{r-k-1} \ge 2^{\lg n} = n nodes. Since this would mean T_x has at least n+1 nodes, we have reached a contradiction, and therefore r < \lg n + k + 1. Because k is the smallest integer with k(k+1)/2 \ge L, we have (k-1)k/2 < L, so k - 1 \le \sqrt{(k-1)k} < \sqrt{2L}, and we obtain r < \lg n + \sqrt{2L} + 2.

; Corollary 2: Maximum rank : If invariants 1 and 3 both hold, then the maximum rank is at most R.
: ''Proof'': From invariant 3, we have L \le R + 1. By substituting this into lemma 1, we calculate as follows:
:: \begin{align} r &\le \lg n + \sqrt{2L} + 2 \\ &\le \lg n + \sqrt{2(R+1)} + 2 \\ &= \lg n + \sqrt{4 \lg n + 14} + 2 \\ &\le \lg n + \sqrt{\lg^2 n + 8 \lg n + 16} + 2 \\ &= \lg n + (\lg n + 4) + 2 \\ &= R \end{align}
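Since R = 2 \lg n + 6 drives all of these ceilings, they can be tabulated for a given heap size. The following is only an illustrative sketch (the function name and labels are our own); it computes the bounds, not the data structure.

<syntaxhighlight lang="python">
from math import log2

def invariant_ceilings(n):
    """Ceilings imposed by the invariants for a heap of n >= 1 nodes (illustrative)."""
    R = 2 * log2(n) + 6
    return {
        "R": R,
        "active roots (invariant 2)": R + 1,
        "total loss (invariant 3)": R + 1,
        "root degree (invariant 4)": R + 3,
        "non-root degree (corollary 1)": R + 6,
    }

print(invariant_ceilings(10**6))   # R is roughly 45.9, so every ceiling is O(log n)
</syntaxhighlight>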


Transformations

The following transformations restore the above invariants after a priority queue operation has been performed. There are three main quantities we wish to minimize: the number of active roots, the total loss in the heap, and the degree of the root. All transformations can be performed in O(1) time, which is possible by maintaining auxiliary data structures to track candidate nodes (described in the section on implementation).


Active root reduction

Let x and y be active roots with equal rank r, and assume x.\mathrm{key} \le y.\mathrm{key}. Link y as the leftmost child of x and increase the rank of x by 1. If the rightmost child z of x is passive, link z to the root. As a result, y is no longer an active root, so the number of active roots decreases by 1. However, the degree of the root node may increase by 1. Since y becomes the (r+1)th rightmost active child of x, and y has rank r, invariant 1 is preserved.

; Lemma 2: Availability of active root reduction : If invariant 2 is violated, but invariants 1 and 3 hold, then active root reduction is possible.
: ''Proof'': Because invariant 2 is broken, there are more than R+1 active roots present. From corollary 2, the maximum rank of a node is R. By the pigeonhole principle, there exists a pair of active roots with the same rank.
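A sketch of this transformation on a simplified node model (children stored in a Python list, no fix-list bookkeeping); the helper names are ours, not from the paper.

<syntaxhighlight lang="python">
class Node:                              # minimal stand-in; see the Structure section sketch
    def __init__(self, key, active=False):
        self.key, self.active, self.rank, self.loss = key, active, 0, 0
        self.parent, self.children = None, []

def active_root_reduction(x, y, root):
    """x and y are active roots of equal rank; assumes x.key <= y.key."""
    assert x.rank == y.rank and x.key <= y.key
    y.parent.children.remove(y)          # detach y from its passive parent
    y.parent = x
    x.children.insert(0, y)              # y becomes the leftmost (active) child of x
    x.rank += 1
    if not x.children[-1].active:        # rightmost child of x is passive: move it to the root
        z = x.children.pop()
        z.parent = root
        root.children.append(z)          # a real implementation keeps linkable children rightmost
</syntaxhighlight>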


Loss reduction


One node loss reduction

Let x be an active non-root with loss at least 2. Link x to the root, thus turning it into an active root, and resetting its loss to 0. Let the original parent of x be y. y must be active, since otherwise x would have previously been an active root, and thus could not have had positive loss. The rank of y is decreased by 1. If y is not an active root, increase its loss by 1. Overall, the total loss decreases by 1 or 2. As a side effect, the root degree and number of active roots increase by 1, making it less preferable to two node loss reduction, but still a necessary operation.


Two node loss reduction

Let x and y be active nodes with equal rank r and loss equal to 1, and let z be the parent of y. Without loss of generality, assume that x.\mathrm{key} \le y.\mathrm{key}. Detach y from z, and link y to x. Increase the rank of x by 1 and reset the loss of x and y from 1 to 0. z must be active, since y had positive loss and could not have been an active root. Hence, the rank of z is decreased by 1. If z is not an active root, increase its loss by 1. Overall, the total loss decreases by either 1 or 2, with no side effects.

; Lemma 3: Availability of loss reduction : If invariant 3 is violated by 1, but invariant 2 holds, then loss reduction is possible.
: ''Proof'': We apply the pigeonhole principle again. If invariant 3 is violated by 1, the total loss is L=R+2. Lemma 1 can be reformulated to also work with L=R+2, so corollary 2 still holds. Since the maximum rank is R, there either exists a pair of active nodes with equal rank and loss 1, or an active node with \mathrm{loss} \ge 2. Both cases present an opportunity for loss reduction.
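Both forms of loss reduction can be sketched on the same simplified model as above; again the names are ours and the fix-list bookkeeping is omitted.

<syntaxhighlight lang="python">
class Node:                              # minimal stand-in; see the Structure section sketch
    def __init__(self, key, active=False):
        self.key, self.active, self.rank, self.loss = key, active, 0, 0
        self.parent, self.children = None, []

def is_active_root(v):
    return v.active and v.parent is not None and not v.parent.active

def one_node_loss_reduction(x, root):
    """x is an active non-root with loss >= 2; its parent y must be active."""
    y = x.parent
    y.children.remove(x)
    y.rank -= 1                          # y loses an active child
    if not is_active_root(y):
        y.loss += 1
    x.parent = root
    root.children.insert(0, x)           # x becomes an active root
    x.loss = 0

def two_node_loss_reduction(x, y):
    """x and y are active nodes of equal rank with loss 1; assumes x.key <= y.key."""
    z = y.parent                         # z is active, since y had positive loss
    z.children.remove(y)
    z.rank -= 1
    if not is_active_root(z):
        z.loss += 1
    y.parent = x
    x.children.insert(0, y)              # y becomes the leftmost (active) child of x
    x.rank += 1
    x.loss = y.loss = 0
</syntaxhighlight>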


Root degree reduction

Let x, y, and z be the three rightmost passive linkable children of the root. Detach them all from the root and sort them such that x.\mathrm{key} \le y.\mathrm{key} \le z.\mathrm{key}. Change x and y to be active. Link z to y, link y to x, and link x as the leftmost child of the root. As a result, x becomes an active root with rank 1 and loss 0. The rank and loss of y are set to 0. The net change of this transformation is that the degree of the root node decreases by 2. As a side effect, the number of active roots increases by 1.

; Lemma 4: Availability of root degree reduction : If invariant 4 is violated, but invariant 2 holds, then root degree reduction is possible.
: ''Proof'': If invariant 4 is broken, then the degree of the root is at least R+4. The children of the root fall into three categories: active roots, passive non-linkable nodes, and passive linkable nodes. Each passive non-linkable child subsumes an active root, since its subtree contains at least one active node. Because the number of active roots is at most R+1, at most R+1 children of the root are active roots or passive non-linkable, so the rightmost three children of the root must be passive linkable.
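A sketch of root degree reduction on the same simplified model; names are ours.

<syntaxhighlight lang="python">
class Node:                              # minimal stand-in; see the Structure section sketch
    def __init__(self, key, active=False):
        self.key, self.active, self.rank, self.loss = key, active, 0, 0
        self.parent, self.children = None, []

def root_degree_reduction(root):
    """Assumes the three rightmost children of the root are passive linkable."""
    nodes = [root.children.pop(), root.children.pop(), root.children.pop()]
    x, y, z = sorted(nodes, key=lambda node: node.key)    # x.key <= y.key <= z.key
    x.active = y.active = True
    x.loss = y.loss = 0
    z.parent = y
    y.children.append(z)          # z stays passive, so it goes to the right among y's children
    y.parent = x
    x.children.insert(0, y)       # y is active, so it goes to the left among x's children
    x.parent = root
    root.children.insert(0, x)    # x becomes an active root with rank 1 and loss 0
    x.rank, y.rank = 1, 0
</syntaxhighlight>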


Summary

The following summarises the worst case effect of each transformation on the three important quantities:
* Active root reduction: active roots decrease by 1, root degree increases by 1, total loss unchanged.
* One node loss reduction: total loss decreases by 1, active roots increase by 1, root degree increases by 1.
* Two node loss reduction: total loss decreases by 1, active roots and root degree unchanged.
* Root degree reduction: root degree decreases by 2, active roots increase by 1, total loss unchanged.
Individually, each transformation may violate invariants, but we are only interested in certain combinations of transformations which do not increase any of these quantities. When deciding which transformations to perform, we consider only the worst case effect of these operations, for simplicity. The two types of loss reduction are also considered to be the same operation. As such, we define 'performing a loss reduction' to mean attempting each type of loss reduction in turn.


Implementation


Linking nodes

To ensure active nodes lie to the left of passive nodes, and to preserve invariant 1, the linking operation should place active nodes on the left, and passive nodes on the right. It is necessary for active and passive nodes to coexist in the same list, because the merge operation changes all nodes in the smaller heap to be passive. If they existed in two separate lists, the lists would have to be concatenated, which cannot be done in constant time for all nodes. For the root, we also impose the requirement that passive linkable children lie to the right of the passive non-linkable children. Since we wish to be able to link nodes to the root in constant time, a pointer to the first passive linkable child of the root must be maintained.


Finding candidate nodes

The invariant restoring transformations rely on being able to find candidate nodes in O(1) time. This means that we must keep track of active roots with the same rank, nodes with loss 1 of the same rank, and nodes with loss at least 2. The original paper by Brodal et al. described a ''fix-list'' and a ''rank-list'' as a way of tracking candidate nodes.


Fix-list

The fix-list is divided into four parts:
# Active roots ready for active root reduction – active roots with a partner of the same rank. Nodes with the same rank are kept adjacent.
# Active roots not yet ready for active root reduction – the only active roots for their rank.
# Active nodes with loss 1 that are not yet ready for loss reduction – the only active nodes with loss 1 for their rank.
# Active nodes that are ready for loss reduction – this includes active nodes with loss 1 that have a partner of the same rank, and active nodes with loss at least 2, which do not need partners to be reduced. Nodes with the same rank are kept adjacent.
To check if active root reduction is possible, we simply check if part 1 is non-empty. If it is non-empty, the first two nodes can be popped off and transformed. Similarly, to check if loss reduction is possible, we check the end of part 4. If it contains a node with loss at least 2, one node loss reduction is performed. Otherwise, if the last two nodes both have loss 1 and are of the same rank, two node loss reduction is performed.
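The constant-time checks can be illustrated with the four parts modelled as separate deques rather than segments of one doubly linked list; this is a stand-in of our own design, not the paper's layout.

<syntaxhighlight lang="python">
from collections import deque, namedtuple

FixNode = namedtuple("FixNode", "rank loss")   # stand-in for a pointer into the heap

class FixList:
    """Illustrative stand-in for the fix-list, with its four parts kept separately."""
    def __init__(self):
        self.part1 = deque()   # active roots with a same-rank partner (equal ranks adjacent)
        self.part2 = deque()   # active roots that are alone in their rank
        self.part3 = deque()   # loss-1 nodes that are alone in their rank
        self.part4 = deque()   # nodes ready for loss reduction (loss >= 2, or same-rank loss-1 pairs)

    def pop_active_root_pair(self):
        # Part 1 only ever holds roots that have a partner, and equal ranks are adjacent.
        if self.part1:
            return self.part1.popleft(), self.part1.popleft()
        return None

    def pop_loss_reduction_candidates(self):
        if not self.part4:
            return None
        if self.part4[-1].loss >= 2:
            return (self.part4.pop(),)                       # one node loss reduction
        if (len(self.part4) >= 2 and self.part4[-2].loss == 1
                and self.part4[-2].rank == self.part4[-1].rank):
            return (self.part4.pop(), self.part4.pop())      # two node loss reduction
        return None

fl = FixList()
fl.part4.extend([FixNode(rank=2, loss=1), FixNode(rank=2, loss=1)])
print(fl.pop_loss_reduction_candidates())   # same-rank loss-1 pair -> two node loss reduction
</syntaxhighlight>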


Rank-list

The rank-list is a doubly linked list containing information about each rank, to allow nodes of the same rank to be partnered together in the fix-list. For each node representing rank r in the rank-list, we maintain:
* A pointer to the first active root in the fix-list with rank r. If such a node does not exist, this is NULL.
* A pointer to the first active node in the fix-list with rank r and loss 1. If such a node does not exist, this is NULL.
* Pointers to the nodes representing ranks r+1 and r-1, to facilitate the incrementation and decrementation of ranks.
The fix-list and rank-list require extensive bookkeeping, which must be done whenever a new active node arises, or when the rank or loss of a node is changed.
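A rank-list record could look like the following sketch; field names are ours.

<syntaxhighlight lang="python">
class RankRecord:
    """One rank-list record per rank r (illustrative field names)."""
    def __init__(self, rank):
        self.rank = rank
        self.active_roots = None   # first active root of this rank in the fix-list, or None
        self.loss_one = None       # first loss-1 node of this rank in the fix-list, or None
        self.inc = None            # record for rank r + 1 (created lazily when first needed)
        self.dec = None            # record for rank r - 1
</syntaxhighlight>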


Shared flag

The merge operation changes all of the active nodes of the smaller heap into passive nodes. This can be done in O(1) time by introducing a level of indirection. Instead of a boolean flag, each active node has a pointer towards an ''active flag'' object containing a boolean value. For passive nodes, it does not matter which active flag object they point to, as long as the flag object is set to passive, because it is not required to change many passive nodes into active nodes simultaneously.
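A minimal sketch of this indirection, with names of our choosing:

<syntaxhighlight lang="python">
class ActiveFlag:
    def __init__(self, is_active):
        self.is_active = is_active

class Node:
    def __init__(self, key, flag):
        self.key = key
        self.flag = flag            # pointer to a shared ActiveFlag object

    @property
    def active(self):
        return self.flag.is_active

# All active nodes of one heap share a single flag object, so a merge can
# demote every one of them at once:
heap_flag = ActiveFlag(True)
a, b = Node(3, heap_flag), Node(5, heap_flag)
heap_flag.is_active = False         # O(1): a and b are now both passive
print(a.active, b.active)           # False False
</syntaxhighlight>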


Storing keys

The decrease-key operation requires a reference to the node we wish to decrease the key of. However, the decrease-key operation itself sometimes swaps the key of a node with the key of the root. Assume that the insert operation returns some opaque reference that we can call decrease-key on, as part of the public API. If these references are internal heap nodes, then by swapping keys we have mutated these references, causing other references to become undefined. To ensure a key always stays with the same reference, it is necessary to 'box' the key. Each heap node now contains a pointer to a box containing a key, and the box also has a pointer to the heap node. When inserting an item, we create a box to store the key in, link the heap node to the box both ways, and return the box object. To swap the keys between two nodes, we re-link the pointers between the boxes and nodes instead.
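A minimal sketch of this boxing scheme (the class names Box and Node are ours):

<syntaxhighlight lang="python">
class Box:
    """Public handle returned by insert; the key always stays with its box."""
    def __init__(self, key):
        self.key = key
        self.node = None

class Node:
    def __init__(self, box):
        self.box = box
        box.node = self

def swap_items(u, v):
    # Exchange the boxes (and hence the keys) of two heap nodes by re-linking
    # pointers, so outstanding Box references keep pointing at their own key.
    u.box, v.box = v.box, u.box
    u.box.node, v.box.node = u, v

handle = Box(42)            # what insert would hand back to the caller
node, other = Node(handle), Node(Box(7))
swap_items(node, other)
print(handle.node is other, handle.key)   # True 42 -- the handle still owns key 42
</syntaxhighlight>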


Operations


Merge

Let h_1 and h_2 be strict Fibonacci heaps. If either is empty, return the other. Otherwise, let n_1 and n_2 be their corresponding sizes. Without loss of generality, assume that n_1 \le n_2. Since the sizes of the fix-list and rank-list of each heap are logarithmic with respect to the heap size, it is not possible to merge these auxiliary structures in constant time. Instead, we throw away the structure of the smaller heap h_1 by discarding its fix-list and rank-list, and converting all of its nodes into passive nodes. This can be done in constant time, using a shared flag, as shown above. Link h_1 and h_2, letting the root with the smaller key become the parent of the other. Let Q_1 and Q_2 be the queues of h_1 and h_2 respectively. The queue of the resulting heap is set to Q_\mathrm{new} = Q_1 + \{ r_\mathrm{large} \} + Q_2, where r_\mathrm{large} is the root with the larger key. The only possible structural violation is the root degree. This is solved by performing 1 active root reduction and 1 root degree reduction, if each transformation is possible.
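A heavily simplified sketch of this merge strategy, assuming the shared-flag representation above and ignoring the fix-list, rank-list, and the internals of the follow-up reductions; all names are ours.

<syntaxhighlight lang="python">
class ActiveFlag:
    def __init__(self, is_active):
        self.is_active = is_active

class Node:
    def __init__(self, key):
        self.key, self.parent, self.children = key, None, []

class Heap:
    """Simplified stand-in showing only the merge strategy."""
    def __init__(self, root=None):
        self.root, self.size = root, (1 if root is not None else 0)
        self.active_flag = ActiveFlag(True)   # shared by every active node of this heap
        self.queue = []                       # Q: the non-root nodes, in queue order

def merge(h1, h2):
    if h1.root is None:
        return h2
    if h2.root is None:
        return h1
    small, big = (h1, h2) if h1.size <= h2.size else (h2, h1)
    small.active_flag.is_active = False       # every node of the smaller heap becomes passive, O(1)
    winner, loser = (small, big) if small.root.key <= big.root.key else (big, small)
    loser.root.parent = winner.root           # the larger-key root is linked below the other root
    winner.root.children.append(loser.root)
    merged = Heap(winner.root)
    merged.size = small.size + big.size
    merged.active_flag = big.active_flag      # the surviving active nodes all come from the bigger heap
    merged.queue = small.queue + [loser.root] + big.queue   # Q_new = Q_1 + <r_large> + Q_2
    # A full implementation would now attempt one active root reduction and one
    # root degree reduction to restore the bound on the root degree.
    return merged
</syntaxhighlight>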


Proof of correctness

Invariants 1, 2, and 3 hold automatically, since the structure of the smaller heap is discarded. As calculated above, any violations of invariant 4 are solved by the root degree reduction transformation. To verify invariant 5, we consider the final positions of nodes in Q. Each node has its degree bounded by 2 \lg (2n - p) + 10 or 2 \lg (2n - p) + 9. For the smaller heap h_1, the positions in Q_1 are unchanged. However, all nodes in h_1 are now passive, which means that their constraint may change from the +10 case to the +9 case. But noting that n_1 \le n_2, the resulting size n = n_1 + n_2 is at least double n_1. This results in an increase of at least 1 on each constraint, which eliminates the previous concern. The root with the larger key between h_1 and h_2 becomes a non-root, and is placed between Q_1 and Q_2 at position n_1. By invariant 4, its degree was bounded by either R_1+3 = 2 \lg n_1 + 9 or R_2+3 = 2 \lg n_2 + 9, depending on which heap it came from. Since n_1, n_2 \le n \le 2n - n_1, this is at most 2 \lg (2n - n_1) + 9 in either case, so the constraint at position n_1 is satisfied. For the larger heap, the positions increase by n_1. But since the resulting size is n = n_1 + n_2, the value 2n - p actually increases, weakening the constraint.


Insert

Insertion can be considered a special case of the merge operation. To insert a single key, create a new heap containing a single passive node and an empty queue, and merge it with the main heap.


Find-min

Due to the minimum-heap property, the node with the minimum key is always at the root, if it exists.


Delete-min

If the root is the only node in the heap, we are done by simply removing it. Otherwise, search the children of the root to find the node x with minimum key, and set the new root to x. If x is active, make it passive, causing all active children of x to implicitly become active roots. Link the children of the old root to x. Since x is now the root, move all of its passive linkable children to the right, and remove x from Q. The degree of the root approximately doubles, because we have linked all the children of the old root to x. We then perform the following restorative transformations:
# Repeat twice: rotate Q by moving the head y of Q to the back, and link the two rightmost passive children of y to the root.
# If a loss reduction is possible, perform it.
# Perform active root reductions and root degree reductions until neither is possible.
To see how step 3 is bounded, observe that 3 active root reductions followed by 2 root degree reductions decrease both the root degree and the number of active roots by 1. Since R = 2 \lg n + 6, step 3 never executes more than O(\log n) times.
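The bound on step 3 rests purely on this accounting. The sketch below tracks only the two counters, starting from hypothetical values, and is not an implementation of the heap; in the real data structure the loop stops as soon as no reduction is possible.

<syntaxhighlight lang="python">
def step3_rounds(active_roots, root_degree, R):
    """Worst-case accounting only: each active root reduction removes one active
    root and may add 1 to the root degree; each root degree reduction removes 2
    from the root degree and may add one active root."""
    rounds = 0
    while active_roots > R + 1 or root_degree > R + 3:
        active_roots, root_degree = active_roots - 3, root_degree + 3   # 3 active root reductions
        active_roots, root_degree = active_roots + 2, root_degree - 4   # 2 root degree reductions
        rounds += 1            # net effect of one round: both counters drop by 1
    return rounds

# Hypothetical post-step-2 values; with R = 2 lg n + 6 both counters start
# within O(log n) of their limits, so only O(log n) rounds are ever needed.
print(step3_rounds(active_roots=60, root_degree=90, R=46))   # prints 41
</syntaxhighlight>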


Proof of correctness

Invariant 1 holds trivially, since no active roots are created. The size of the heap n decreases by one, causing R = 2 \lg n + 6 to decrease by at most one. Thus, invariant 3 is violated by at most 1. By lemma 3, loss reduction is possible, and it has been performed in step 2, so invariants 1 and 3 now hold. If invariants 2 and 4 were still violated after step 3, it would be possible to apply active root reduction and root degree reduction, by lemmas 2 and 4. However, active root reduction and root degree reduction have already been exhaustively applied. Therefore, invariants 2 and 4 also hold. To show that invariant 5 is satisfied, we first note that the heap size n has decreased by 1. Because the first 2 nodes in Q are popped in step 1, the positions of the other elements in Q decrease by 2, so the degree constraints 2 \lg (2n - p) + 10 and 2 \lg (2n - p) + 9 remain unchanged for these nodes. The two nodes which were popped previously had positions 1 and 2 in Q, and now have positions n-2 and n-1 respectively. The effect is that their degree constraints have tightened by 2; however, we cut off two passive children from each of these nodes, which is sufficient to satisfy the constraint again.


Decrease-key

Let x be the node whose key has been decreased. If x is the root, we are done. Otherwise, detach the subtree rooted at x, and link it to the root. If the key of x is smaller than the key of the root, swap their keys. Up to three structural violations may have occurred. Unless x was already a child of the root, the degree of the root increases by 1. When x was detached from its original parent y, we have the following cases:
* If x is passive, then there are no extra violations.
* If x was previously an active root with y passive, then moving x from being a child of y to a child of the root does not create any additional active roots, nor does it increase the loss of any node.
* If both x and y are active, then the loss of y increases by 1, and an extra active root is created (by linking x to the root).
In the worst case, all three quantities (root degree, total loss, active roots) increase by 1. After performing 1 loss reduction, the worst case result is still that the root degree and number of active roots have both increased by 2. To fix these violations, we use the fact that 3 active root reductions and 2 root degree reductions decrease both of these quantities by 1. Hence, applying these transformations 6 and 4 times respectively is sufficient to eliminate all violations.
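The structural part of decrease-key, as described in the case analysis above, can be sketched on the same simplified model (no fix-list, and keys swapped directly rather than through boxes); names are ours.

<syntaxhighlight lang="python">
class Node:                              # minimal stand-in; see the Structure section sketch
    def __init__(self, key, active=False):
        self.key, self.active, self.rank, self.loss = key, active, 0, 0
        self.parent, self.children = None, []

def is_active_root(v):
    return v.active and v.parent is not None and not v.parent.active

def decrease_key(x, new_key, root):
    """Structural part of decrease-key on the simplified model."""
    x.key = new_key
    if x is root:
        return
    y = x.parent
    y.children.remove(x)                 # detach the subtree rooted at x
    if x.active and y.active:
        y.rank -= 1                      # y lost an active child ...
        if not is_active_root(y):
            y.loss += 1                  # ... and, unless y is an active root, its loss grows
    x.parent = root
    if x.active:
        root.children.insert(0, x)       # an active x becomes a new active root
    else:
        root.children.append(x)
    if x.key < root.key:
        x.key, root.key = root.key, x.key   # a real implementation swaps key boxes instead
    # A full implementation now performs 1 loss reduction, up to 6 active root
    # reductions, and up to 4 root degree reductions to restore the invariants.
</syntaxhighlight>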


Proof of correctness

The nodes which were previously the left siblings of x move to fill the gap left by x, decreasing their index. Since their constraint has weakened, invariant 1 is unaffected. Invariant 5 trivially holds as Q is unchanged. Lemmas 2, 3 and 4 guarantee the availability of active root reduction, loss reduction, and root degree reduction. Therefore, invariants 2, 3 and 4 hold.


Performance

Although theoretically optimal, strict Fibonacci heaps are not useful in practical applications. They are extremely complicated to implement, requiring the management of more than 10 pointers per node. While most operations run in O(1) time, the constant factors are very high, making them up to 20 times slower than more common alternatives such as binary heaps or pairing heaps. Despite being simpler than the Brodal queue, the strict Fibonacci heap still performs slower in experiments.


Summary of running times

All bounds are worst case: ''find-min'', ''insert'', ''merge'', and ''decrease-key'' take O(1) time, while ''delete-min'' takes O(\log n) time.


References


External links


JavaScript simulation of strict Fibonacci heap