Merge algorithm

Merge algorithms are a family of algorithms that take multiple sorted lists as input and produce a single list as output, containing all the elements of the input lists in sorted order. These algorithms are used as subroutines in various sorting algorithms, most famously merge sort.


Application

The merge algorithm plays a critical role in the merge sort algorithm, a comparison-based sorting algorithm. Conceptually, the merge sort algorithm consists of two steps:

1. Recursively divide the list into sublists of (roughly) equal length, until each sublist contains only one element, or in the case of iterative (bottom-up) merge sort, consider a list of n elements as n sub-lists of size 1. A list containing a single element is, by definition, sorted.
2. Repeatedly merge sublists to create a new sorted sublist until the single list contains all elements. The single list is the sorted list.

The merge algorithm is used repeatedly in the merge sort algorithm. An example merge sort is given in the illustration. It starts with an unsorted array of 7 integers. The array is divided into 7 partitions; each partition contains 1 element and is sorted. The sorted partitions are then merged to produce larger, sorted partitions, until 1 partition, the sorted array, is left.
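For illustration only, here is a minimal Python sketch of the two steps above (not part of the original article); it uses the standard library's heapq.merge as the merge subroutine:

    import heapq

    def merge_sort(values):
        # Step 1: split recursively until each sublist has at most one element.
        if len(values) <= 1:
            return list(values)          # a 0- or 1-element list is already sorted
        mid = len(values) // 2
        left = merge_sort(values[:mid])
        right = merge_sort(values[mid:])
        # Step 2: merge the two sorted halves into one sorted list.
        return list(heapq.merge(left, right))

    print(merge_sort([38, 27, 43, 3, 9, 82, 10]))   # [3, 9, 10, 27, 38, 43, 82]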


Merging two lists

Merging two sorted lists into one can be done in linear time and linear or constant space (depending on the data access model). The following pseudocode demonstrates an algorithm that merges input lists A and B (either linked lists or arrays) into a new list C. The function head yields the first element of a list; "dropping" an element means removing it from its list, typically by incrementing a pointer or index.

    algorithm merge(A, B) is
        inputs A, B : list
        returns list

        C := new empty list
        while A is not empty and B is not empty do
            if head(A) ≤ head(B) then
                append head(A) to C
                drop the head of A
            else
                append head(B) to C
                drop the head of B

        // By now, either A or B is empty. It remains to empty the other input list.
        while A is not empty do
            append head(A) to C
            drop the head of A
        while B is not empty do
            append head(B) to C
            drop the head of B

        return C

When the inputs are linked lists, this algorithm can be implemented to use only a constant amount of working space; the pointers in the lists' nodes can be reused for bookkeeping and for constructing the final merged list.

In the merge sort algorithm, this subroutine is typically used to merge two sub-arrays A[lo..mid], A[mid+1..hi] of a single array A. This can be done by copying the sub-arrays into a temporary array, then applying the merge algorithm above. The allocation of a temporary array can be avoided, but at the expense of speed and programming ease. Various in-place merge algorithms have been devised, sometimes sacrificing the linear-time bound to produce an O(n log n) algorithm; see Merge sort § Variants for discussion.
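The same procedure, written as a short Python function for concreteness (an illustrative sketch, not from the original article), using list indices instead of destructive "drop" operations:

    def merge(a, b):
        # Merge two already-sorted lists a and b into a new sorted list c.
        c = []
        i = j = 0
        while i < len(a) and j < len(b):
            if a[i] <= b[j]:        # <= keeps the merge stable: ties favor list a
                c.append(a[i])
                i += 1
            else:
                c.append(b[j])
                j += 1
        c.extend(a[i:])             # one input is exhausted; copy the rest of the other
        c.extend(b[j:])
        return c

    print(merge([1, 3, 5], [2, 4, 6, 8]))   # [1, 2, 3, 4, 5, 6, 8]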


K-way merging

k-way merging generalizes binary merging to an arbitrary number k of sorted input lists. Applications of k-way merging arise in various sorting algorithms, including patience sorting and an external sorting algorithm that divides its input into blocks that fit in memory, sorts these one by one, then merges these blocks. Several solutions to this problem exist. A naive solution is to do a loop over the lists to pick off the minimum element each time, and repeat this loop until all lists are empty:
* Input: a list of lists.
* While any of the lists is non-empty:
** Loop over the lists to find the one with the minimum first element.
** Output the minimum element and remove it from its list.
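A minimal Python sketch of this naive approach (illustrative only; the names used are not from the original article):

    def kway_merge_naive(lists):
        lists = [list(l) for l in lists]    # work on copies so the inputs are preserved
        out = []
        while any(lists):
            # index of the non-empty list whose first element is smallest
            i = min((j for j in range(len(lists)) if lists[j]),
                    key=lambda j: lists[j][0])
            out.append(lists[i].pop(0))     # output and remove the minimum element
        return out

    print(kway_merge_naive([[1, 4, 7], [2, 5, 8], [3, 6, 9]]))   # [1, 2, 3, 4, 5, 6, 7, 8, 9]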
In the worst case, this algorithm performs (k−1)(n−k/2) element comparisons to perform its work if there are a total of n elements in the lists. It can be improved by storing the lists in a priority queue (min-heap) keyed by their first element:
* Build a min-heap h of the k lists, using the first element as the key.
* While any of the lists is non-empty:
** Let i = find-min(h).
** Output the first element of list i and remove it from its list.
** Re-heapify h.
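A Python sketch of this heap-based approach (illustrative; it stores (value, list index, position) tuples in the standard library's heapq min-heap):

    import heapq

    def kway_merge_heap(lists):
        # Seed the heap with the head of every non-empty list.
        heap = [(lst[0], i, 0) for i, lst in enumerate(lists) if lst]
        heapq.heapify(heap)
        out = []
        while heap:
            value, i, pos = heapq.heappop(heap)   # find-min and remove in O(log k)
            out.append(value)
            if pos + 1 < len(lists[i]):           # re-insert list i keyed by its next element
                heapq.heappush(heap, (lists[i][pos + 1], i, pos + 1))
        return out

    print(kway_merge_heap([[1, 4, 7], [2, 5, 8], [3, 6, 9]]))   # [1, 2, 3, 4, 5, 6, 7, 8, 9]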
Searching for the next smallest element to be output (find-min) and restoring heap order can now be done in O(log k) time (more specifically, 2⌊log k⌋ comparisons), and the full problem can be solved in O(n log k) time (approximately 2n⌊log k⌋ comparisons). A third algorithm for the problem is a divide and conquer solution that builds on the binary merge algorithm:
* If k = 1, output the single input list.
* If k = 2, perform a binary merge.
* Else, recursively merge the first ⌊k/2⌋ lists and the final ⌈k/2⌉ lists, then binary merge these.
When the input lists to this algorithm are ordered by length, shortest first, it requires fewer than n⌈log k⌉ comparisons, i.e., less than half the number used by the heap-based algorithm; in practice, it may be about as fast or slow as the heap-based algorithm.
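A Python sketch of the divide-and-conquer scheme (illustrative; heapq.merge serves as the binary merge step here):

    import heapq

    def kway_merge_dc(lists):
        k = len(lists)
        if k == 0:
            return []
        if k == 1:
            return list(lists[0])            # a single input list is already the answer
        if k == 2:
            return list(heapq.merge(lists[0], lists[1]))   # plain binary merge
        mid = k // 2
        # Recursively merge the first half and the final half, then binary-merge the results.
        return list(heapq.merge(kway_merge_dc(lists[:mid]),
                                kway_merge_dc(lists[mid:])))

    print(kway_merge_dc([[1, 5], [2, 6], [3, 7], [4, 8]]))   # [1, 2, 3, 4, 5, 6, 7, 8]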


Parallel merge

A parallel version of the binary merge algorithm can serve as a building block of a parallel merge sort. The following pseudocode demonstrates this algorithm in a parallel divide-and-conquer style (adapted from Cormen et al.). It operates on two sorted arrays A and B and writes the sorted output to array C. The notation A[i...j] denotes the part of A from index i through j, exclusive.

    algorithm merge(A[i...j], B[k...ℓ], C[p...q]) is
        inputs A, B, C : array
               i, j, k, ℓ, p, q : indices

        let m = j - i,
            n = ℓ - k

        if m < n then
            swap A and B  // ensure that A is the larger array: i, j still belong to A; k, ℓ to B
            swap m and n

        if m ≤ 0 then
            return  // base case, nothing to merge

        let r = ⌊(i + j)/2⌋
        let s = binary-search(A[r], B[k...ℓ])
        let t = p + (r - i) + (s - k)
        C[t] = A[r]

        in parallel do
            merge(A[i...r], B[k...s], C[p...t])
            merge(A[r+1...j], B[s...ℓ], C[t+1...q])

The algorithm operates by splitting either A or B, whichever is larger, into (nearly) equal halves. It then splits the other array into a part with values smaller than the midpoint of the first, and a part with larger or equal values. (The binary search subroutine returns the index in B where A[r] would be, if it were in B; this is always a number between k and ℓ.) Finally, each pair of halves is merged recursively, and since the recursive calls are independent of each other, they can be done in parallel. A hybrid approach, in which a serial algorithm is used for the recursion base case, has been shown to perform well in practice.

The work performed by the algorithm for two arrays holding a total of n elements, i.e., the running time of a serial version of it, is O(n). This is optimal since n elements need to be copied into C. To calculate the span of the algorithm, it is necessary to derive a recurrence relation. Since the two recursive calls of merge are in parallel, only the costlier of the two calls needs to be considered. In the worst case, the maximum number of elements in one of the recursive calls is at most \frac{3}{4}n, since the array with more elements is perfectly split in half. Adding the \Theta(\log n) cost of the binary search, we obtain this recurrence as an upper bound:

    T_\infty^\text{merge}(n) = T_\infty^\text{merge}\left(\frac{3}{4}n\right) + \Theta(\log n)

The solution is T_\infty^\text{merge}(n) = \Theta\left(\log(n)^2\right), meaning that it takes that much time on an ideal machine with an unbounded number of processors.

Note: The routine is not stable: if equal items are separated by splitting A and B, they will become interleaved in C; also swapping A and B will destroy the order, if equal items are spread among both input arrays. As a result, when used for sorting, this algorithm produces a sort that is not stable.
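A serial Python sketch of this divide-and-conquer merge (illustrative; the two recursive calls are independent and could be dispatched to separate workers, e.g. via concurrent.futures, on a real parallel machine):

    import bisect

    def dc_merge(A, i, j, B, k, l, C, p):
        # Merge sorted slices A[i:j] and B[k:l] into C starting at index p.
        m, n = j - i, l - k
        if m < n:                                   # ensure the A slice is the larger one
            A, i, j, B, k, l = B, k, l, A, i, j
            m, n = n, m
        if m <= 0:                                  # base case: nothing to merge
            return
        r = (i + j) // 2                            # midpoint of the larger slice
        s = bisect.bisect_left(B, A[r], k, l)       # where A[r] would sit in B[k:l]
        t = p + (r - i) + (s - k)                   # final position of A[r] in the output
        C[t] = A[r]
        dc_merge(A, i, r, B, k, s, C, p)            # these two calls are independent,
        dc_merge(A, r + 1, j, B, s, l, C, t + 1)    # so they can run in parallel

    A, B = [1, 3, 5, 7], [2, 4, 6]
    C = [None] * (len(A) + len(B))
    dc_merge(A, 0, len(A), B, 0, len(B), C, 0)
    print(C)                                        # [1, 2, 3, 4, 5, 6, 7]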


Parallel merge of two lists

There are also algorithms that introduce parallelism within a single instance of merging of two sorted lists. These can be used in field-programmable gate arrays (FPGAs), specialized sorting circuits, as well as in modern processors with single-instruction multiple-data (SIMD) instructions. Existing parallel algorithms are based on modifications of the merge part of either the bitonic sorter or odd-even mergesort. In 2018, Saitoh M. et al. introduced MMS for FPGAs, which focused on removing a multi-cycle feedback datapath that prevented efficient pipelining in hardware. Also in 2018, Papaphilippou P. et al. introduced FLiMS, which improved the hardware utilization and performance by only requiring \log_2(P)+1 pipeline stages of compare-and-swap units to merge with a parallelism of P elements per FPGA cycle.


Language support

Some computer languages provide built-in or library support for merging sorted collections.


C++

The C++ Standard Template Library has the function std::merge, which merges two sorted ranges of iterators, and std::inplace_merge, which merges two consecutive sorted ranges in-place. In addition, the std::list (linked list) class has its own merge method which merges another list into itself. The type of the elements merged must support the less-than (<) operator, or it must be provided with a custom comparator. C++17 allows for differing execution policies, namely sequential, parallel, and parallel-unsequenced.


Python

Python's standard library (since 2.6) also has a merge function in the heapq module, that takes multiple sorted iterables and merges them into a single iterator.
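A brief usage example (heapq.merge yields elements lazily, so the inputs never need to be concatenated in memory; the key and reverse parameters were added in Python 3.5):

    import heapq

    merged = heapq.merge([1, 4, 7], [2, 5, 8], [3, 6, 9])
    print(list(merged))        # [1, 2, 3, 4, 5, 6, 7, 8, 9]

    # Records can be merged by a sort key as well.
    rows = heapq.merge([(1, 'a'), (3, 'c')], [(2, 'b')], key=lambda r: r[0])
    print(list(rows))          # [(1, 'a'), (2, 'b'), (3, 'c')]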


See also

* Merge (revision control)
* Join (relational algebra)
* Join (SQL)
* Join (Unix)




Further reading

* Donald Knuth. The Art of Computer Programming, Volume 3: Sorting and Searching, Third Edition. Addison-Wesley, 1997. Pages 158–160 of section 5.2.4: Sorting by Merging. Section 5.3.2: Minimum-Comparison Merging, pp. 197–207.


External links


* High-performance implementations of parallel and serial merge in C# (with source on GitHub) and in C++ (GitHub)