Binary search algorithm




Search algorithm finding the position of a target value within a sorted array
Binary search algorithm

Visualization of the binary search algorithm where 7 is the target value

Class: Search algorithm
Data structure: Array
Worst-case performance: O(log n)
Best-case performance: O(1)
Average performance: O(log n)
Worst-case space complexity: O(1)

In computer science, binary search, also known as half-interval search,[1] logarithmic search,[2] or binary chop,[3] is a search algorithm that finds the position of a target value within a sorted array.[4][5] Binary search compares the target value to the middle element of the array. If they are not equal, the half in which the target cannot lie is eliminated and the search continues on the remaining half, again taking the middle element to compare to the target value, and repeating this until the target value is found. If the search ends with the remaining half being empty, the target is not in the array. Even though the idea is simple, implementing binary search correctly requires attention to some subtleties about its exit conditions and midpoint calculation, particularly if the values in the array are not all of the whole numbers in the range.


Binary search runs in logarithmic time in the worst case, making O(log n) comparisons, where n is the number of elements in the array, the O is Big O notation, and log is the logarithm. Binary search takes constant (O(1)) space, meaning that the space taken by the algorithm is the same for any number of elements in the array.[6] Binary search is faster than linear search except for small arrays, but the array must be sorted first. Although specialized data structures designed for fast searching, such as hash tables, can be searched more efficiently, binary search applies to a wider range of problems.


There are numerous variations of binary search. In particular, fractional cascading speeds up binary searches for the same value in multiple arrays. Fractional cascading efficiently solves a number of search problems in computational geometry and in numerous other fields. Exponential search extends binary search to unbounded lists. The binary search tree and B-tree data structures are based on binary search.




Contents

  • 1 Algorithm
    • 1.1 Procedure
      • 1.1.1 Alternative procedure
    • 1.2 Duplicate elements
      • 1.2.1 Procedure for finding the leftmost element
      • 1.2.2 Procedure for finding the rightmost element
    • 1.3 Approximate matches
  • 2 Performance
    • 2.1 Performance of alternative procedure
  • 3 Binary search versus other schemes
    • 3.1 Hashing
    • 3.2 Trees
    • 3.3 Linear search
    • 3.4 Set membership algorithms
    • 3.5 Other data structures
  • 4 Variations
    • 4.1 Uniform binary search
    • 4.2 Exponential search
    • 4.3 Interpolation search
    • 4.4 Fractional cascading
    • 4.5 Noisy binary search
    • 4.6 Quantum binary search
  • 5 History
  • 6 Implementation issues
  • 7 Library support
  • 8 See also
  • 9 Notes and references
    • 9.1 Notes
    • 9.2 Citations
    • 9.3 Works
  • 10 External links


Algorithm


Binary search works on sorted arrays. Binary search begins by comparing the middle element of the array with the target value. If the target value matches the middle element, its position in the array is returned. If the target value is less than the middle element, the search continues in the lower half of the array. If the target value is greater than the middle element, the search continues in the upper half of the array. By doing this, the algorithm eliminates the half in which the target value cannot lie in each iteration.[7]



Procedure


Given an array A of n elements with values or records A[0], A[1], A[2], …, A[n − 1] sorted such that A[0] ≤ A[1] ≤ A[2] ≤ ⋯ ≤ A[n − 1], and target value T, the following subroutine uses binary search to find the index of T in A.[7]



  1. Set L to 0 and R to n − 1.

  2. If L > R, the search terminates as unsuccessful.

  3. Set m (the position of the middle element) to the floor of (L + R) / 2, which is the greatest integer less than or equal to (L + R) / 2.

  4. If A[m] < T, set L to m + 1 and go to step 2.

  5. If A[m] > T, set R to m − 1 and go to step 2.

  6. Now A[m] = T, the search is done; return m.


This iterative procedure keeps track of the search boundaries with the two variables L and R. The procedure may be expressed in pseudocode as follows, where the variable names and types remain the same as above, floor is the floor function, and unsuccessful refers to a specific value that conveys the failure of the search.[7]


function binary_search(A, n, T):
    L := 0
    R := n - 1
    while L <= R:
        m := floor((L + R) / 2)
        if A[m] < T:
            L := m + 1
        else if A[m] > T:
            R := m - 1
        else:
            return m
    return unsuccessful
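
The same procedure translates directly into runnable Python; this is a minimal sketch (the function name is illustrative, and None stands in for the unsuccessful value):

def binary_search(A, T):
    """Return an index of T in sorted list A, or None if T is absent."""
    L, R = 0, len(A) - 1
    while L <= R:
        m = (L + R) // 2        # floor of (L + R) / 2
        if A[m] < T:
            L = m + 1           # target can only lie in the upper half
        elif A[m] > T:
            R = m - 1           # target can only lie in the lower half
        else:
            return m            # target found
    return None                 # L > R: the remaining half is empty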


Alternative procedure


In the above procedure, the algorithm checks whether the middle element (m) is equal to the target (T) in every iteration. Some implementations leave out this check during each iteration. The algorithm would perform this check only when one element is left (when L = R). This results in a faster comparison loop, as one comparison is eliminated per iteration. However, it requires one more iteration on average.[8]


Hermann Bottenbruch published the first implementation to leave out this check in 1962.[8][9]



  1. Set L to 0 and R to n − 1.

  2. If L = R, go to step 6.

  3. Set m (the position of the middle element) to the ceiling of (L + R) / 2, which is the least integer greater than or equal to (L + R) / 2.

  4. If A[m] > T, set R to m − 1 and go to step 2.

  5. Set L to m and go to step 2.

  6. Now L = R, the search is done. If A[L] = T, return L. Otherwise, the search terminates as unsuccessful.
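
A Python sketch of this Bottenbruch-style variant, under the same illustrative conventions as before (None conveys an unsuccessful search):

def binary_search_alternative(A, T):
    """Defer the equality test to the end of the search."""
    if not A:
        return None
    L, R = 0, len(A) - 1
    while L != R:
        m = (L + R + 1) // 2    # ceiling of (L + R) / 2
        if A[m] > T:
            R = m - 1
        else:
            L = m               # note: L := m, not m + 1
    return L if A[L] == T else None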



Duplicate elements


The procedure may return any index whose element is equal to the target value, even if there are duplicate elements in the array. For example, if the array to be searched was [1, 2, 3, 4, 4, 5, 6, 7] and the target was 4, then it would be correct for the algorithm to return either the 4th (index 3) or 5th (index 4) element. The regular procedure would return the 4th element (index 3). However, it is sometimes necessary to find the leftmost element or the rightmost element if the target value is duplicated in the array. In the above example, the 4th element is the leftmost element of the value 4, while the 5th element is the rightmost element of the value 4. The alternative procedure above will always return the index of the rightmost element if such an element is duplicated in the array.[9]



Procedure for finding the leftmost element


To find the leftmost element, the following procedure can be used:[10]



  1. Set L to 0 and R to n.

  2. If L ≥ R, go to step 6.

  3. Set m (the position of the middle element) to the floor of (L + R) / 2, which is the greatest integer less than or equal to (L + R) / 2.

  4. If A[m] < T, set L to m + 1 and go to step 2.

  5. Otherwise, if A[m] ≥ T, set R to m and go to step 2.

  6. Now L = R, the search is done; return L.


If L < n and A[L] = T, then A[L] is the leftmost element that equals T. Even if T is not in the array, L is the rank of T in the array, or the number of elements in the array that are less than T.


Where floor is the floor function, the pseudocode for this version is:


function binary_search_leftmost(A, n, T):
    L := 0
    R := n
    while L < R:
        m := floor((L + R) / 2)
        if A[m] < T:
            L := m + 1
        else:
            R := m
    return L


Procedure for finding the rightmost element


To find the rightmost element, the following procedure can be used:[10]



  1. Set L to 0 and R to n.

  2. If L ≥ R, go to step 6.

  3. Set m (the position of the middle element) to the floor of (L + R) / 2, which is the greatest integer less than or equal to (L + R) / 2.

  4. If A[m] > T, set R to m and go to step 2.

  5. Otherwise, if A[m] ≤ T, set L to m + 1 and go to step 2.

  6. Now L = R, the search is done; return L − 1.


If L > 0 and A[L − 1] = T, then A[L − 1] is the rightmost element that equals T. Even if T is not in the array, n − L (with L as it stands at the end of the loop) is the number of elements in the array that are greater than T.


Where floor is the floor function, the pseudocode for this version is:


function binary_search_rightmost(A, n, T):
    L := 0
    R := n
    while L < R:
        m := floor((L + R) / 2)
        if A[m] > T:
            R := m
        else:
            L := m + 1
    return L - 1
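
These two procedures match the behavior of bisect_left and bisect_right in Python's standard bisect module, which can be used to check them (the example values are hypothetical):

import bisect

A = [1, 2, 4, 4, 4, 5, 6, 7]
print(bisect.bisect_left(A, 4))       # 2: index of the leftmost 4
print(bisect.bisect_right(A, 4) - 1)  # 4: index of the rightmost 4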


Approximate matches




Binary search can be adapted to compute approximate matches. In the example above, the rank, predecessor, successor, and nearest neighbor are shown for the target value 5, which is not in the array.


The above procedure only performs exact matches, finding the position of a target value. However, it is trivial to extend binary search to perform approximate matches because binary search operates on sorted arrays. For example, binary search can be used to compute, for a given value, its rank (the number of smaller elements), predecessor (next-smallest element), successor (next-largest element), and nearest neighbor. Range queries seeking the number of elements between two values can be performed with two rank queries.[11]



  • Rank queries can be performed with the procedure for finding the leftmost element. The number of elements less than the target value is returned by the procedure.[11]

  • Predecessor queries can be performed with rank queries. If the rank of the target value is r, its predecessor is at index r − 1.[12]

  • For successor queries, the procedure for finding the rightmost element can be used. If the result of running the procedure for the target value is r, then the successor of the target value is at index r + 1.[12]

  • The nearest neighbor of the target value is either its predecessor or successor, whichever is closer.

  • Range queries are also straightforward.[12] Once the ranks of the two values are known, the number of elements greater than or equal to the first value and less than the second is the difference of the two ranks. This count can be adjusted up or down by one according to whether the endpoints of the range should be considered to be part of the range and whether the array contains keys matching those endpoints.[13] Each of these queries is illustrated in the sketch after this list.
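
Assuming the leftmost and rightmost procedures are available (here through Python's bisect module), the queries above can be sketched as follows for a target value that need not be in the array; the function names are illustrative, not a fixed API:

import bisect

def rank(A, T):
    """Number of elements of sorted list A that are less than T."""
    return bisect.bisect_left(A, T)

def predecessor(A, T):
    """Next-smallest element (largest element less than T), or None."""
    r = rank(A, T)
    return A[r - 1] if r > 0 else None

def successor(A, T):
    """Next-largest element (smallest element greater than T), or None."""
    r = bisect.bisect_right(A, T)
    return A[r] if r < len(A) else None

def nearest_neighbor(A, T):
    """Element of A closest to T; ties go to the predecessor."""
    p, s = predecessor(A, T), successor(A, T)
    if p is None:
        return s
    if s is None:
        return p
    return p if T - p <= s - T else s

def count_in_range(A, lo, hi):
    """Range query: number of elements x with lo <= x <= hi."""
    return bisect.bisect_right(A, hi) - bisect.bisect_left(A, lo)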



Performance




A tree representing binary search. The array being searched here is [20, 30, 40, 50, 80, 90, 100], and the target value is 40.




The worst case is reached when the search reaches the deepest level of the tree, while the best case is reached when the target value is the middle element.


The performance of binary search can be analyzed by reducing the procedure to a binary comparison tree. The root node of the tree is the middle element of the array. The middle element of the lower half is the left child node of the root and the middle element of the upper half is the right child node of the root. The rest of the tree is built in a similar fashion. This model represents binary search. Starting from the root node, the left or right subtrees are traversed depending on whether the target value is less or more than the node under consideration. This represents the successive elimination of elements.[6][14]


In the worst case, binary search makes ⌊log₂(n) + 1⌋ iterations of the comparison loop, where the ⌊ ⌋ notation denotes the floor function that yields the greatest integer less than or equal to the argument, and log₂ is the binary logarithm. The worst case is reached when the search reaches the deepest level of the tree. This is equivalent to a binary search that has reduced to one element and always eliminates the smaller subarray out of the two in each iteration if they are not of equal size.[a][14]


The worst case may also be reached when the target element is not in the array. If n is one less than a power of two, then this is always the case. Otherwise, the search may perform ⌊log₂(n) + 1⌋ iterations if the search reaches the deepest level of the tree. However, it may make ⌊log₂(n)⌋ iterations, which is one less than the worst case, if the search ends at the second-deepest level of the tree.[15]


On average, assuming that each element is equally likely to be searched, binary search makes ⌊log₂(n)⌋ + 1 − (2^(⌊log₂(n)⌋ + 1) − ⌊log₂(n)⌋ − 2)/n iterations when the target element is in the array. This is approximately equal to log₂(n) − 1 iterations. When the target element is not in the array, binary search makes ⌊log₂(n)⌋ + 2 − 2^(⌊log₂(n)⌋ + 1)/(n + 1) iterations on average, assuming that the range between and outside elements is equally likely to be searched.[14]
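
The successful-search average can be checked against the standard procedure by brute force; a small Python sketch (the choice n = 1000 is arbitrary):

import math

def iterations(A, T):
    """Count comparison-loop iterations of the standard procedure."""
    L, R, count = 0, len(A) - 1, 0
    while L <= R:
        count += 1
        m = (L + R) // 2
        if A[m] < T:
            L = m + 1
        elif A[m] > T:
            R = m - 1
        else:
            break
    return count

n = 1000
A = list(range(n))
average = sum(iterations(A, t) for t in A) / n
k = math.floor(math.log2(n))
formula = k + 1 - (2 ** (k + 1) - k - 2) / n
print(average, formula)   # both evaluate to 8.987 for n = 1000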


In the best case, where the target value is the middle element of the array, its position is returned after one iteration.[16]


In terms of iterations, no search algorithm that works only by comparing elements can exhibit better average and worst-case performance than binary search. The comparison tree representing binary search has the fewest levels possible, as every level above the lowest level of the tree is filled completely.[b] Otherwise, the search algorithm may eliminate only a few elements in an iteration, increasing the number of iterations required in the average and worst case. This is the case for other search algorithms based on comparisons: while they may work faster on some target values, the average performance over all elements is worse than binary search. By dividing the array in half, binary search ensures that the sizes of both subarrays are as similar as possible.[14]



Performance of alternative procedure


Each iteration of the binary search procedure defined above makes one or two comparisons, checking if the middle element is equal to the target in each iteration. Assuming that each element is equally likely to be searched, each iteration makes 1.5 comparisons on average. A variation of the algorithm checks whether the middle element is equal to the target at the end of the search. On average, this eliminates half a comparison from each iteration. This slightly cuts the time taken per iteration on most computers. However, it guarantees that the search takes the maximum number of iterations, on average adding one iteration to the search. Because the comparison loop is performed only ⌊log₂(n) + 1⌋ times in the worst case, the slight increase in efficiency per iteration does not compensate for the extra iteration for all but enormous n.[c][17][18]



Binary search versus other schemes


Sorted arrays with binary search are a very inefficient solution when insertion and deletion operations are interleaved with retrieval, taking O(n) time for each such operation. In addition, sorted arrays can complicate memory use especially when elements are often inserted into the array.[19] There are other data structures that support much more efficient insertion and deletion. Binary search can be used to perform exact matching and set membership (determining whether a target value is in a collection of values). There are data structures that support faster exact matching and set membership. However, unlike many other searching schemes, binary search can be used for efficient approximate matching, usually performing such matches in O(log n) time regardless of the type or structure of the values themselves.[20] In addition, there are some operations, like finding the smallest and largest element, that can be performed efficiently on a sorted array.[11]



Hashing


For implementing associative arrays, hash tables, a data structure that maps keys to records using a hash function, are generally faster than binary search on a sorted array of records.[21] Most hash table implementations require only amortized constant time on average.[d][23] However, hashing is not useful for approximate matches, such as computing the next-smallest, next-largest, and nearest key, as the only information given on a failed search is that the target is not present in any record.[24] Binary search is ideal for such matches, performing them in logarithmic time. Some operations, like finding the smallest and largest element, can be done efficiently on sorted arrays but not on hash tables.[20]



Trees





Binary search trees are searched using an algorithm similar to binary search.


A binary search tree is a binary tree data structure that works based on the principle of binary search. The records of the tree are arranged in sorted order, and each record in the tree can be searched using an algorithm similar to binary search, taking on average logarithmic time. Insertion and deletion also require on average logarithmic time in binary search trees. This can be faster than the linear time insertion and deletion of sorted arrays, and binary trees retain the ability to perform all the operations possible on a sorted array, including range and approximate queries.[20][25]


However, binary search is usually more efficient for searching as binary search trees will most likely be imperfectly balanced, resulting in slightly worse performance than binary search. This even applies to balanced binary search trees, binary search trees that balance their own nodes, because they rarely produce optimally-balanced trees. Although unlikely, the tree may be severely imbalanced with few internal nodes with two children, resulting in the average and worst-case search time approaching n comparisons.[e] Binary search trees take more space than sorted arrays.[27]


Binary search trees lend themselves to fast searching in external memory stored in hard disks, as binary search trees can efficiently be structured in filesystems. The B-tree generalizes this method of tree organization. B-trees are frequently used to organize long-term storage such as databases and filesystems.[28][29]



Linear search


Linear search is a simple search algorithm that checks every record until it finds the target value. Linear search can be done on a linked list, which allows for faster insertion and deletion than an array. Binary search is faster than linear search for sorted arrays except if the array is short, although the array needs to be sorted beforehand.[f][31] All sorting algorithms based on comparing elements, such as quicksort and merge sort, require at least O(n log n) comparisons in the worst case.[32] Unlike linear search, binary search can be used for efficient approximate matching. There are operations such as finding the smallest and largest element that can be done efficiently on a sorted array but not on an unsorted array.[33]



Set membership algorithms


A related problem to search is set membership. Any algorithm that does lookup, like binary search, can also be used for set membership. There are other algorithms that are more specifically suited for set membership. A bit array is the simplest, useful when the range of keys is limited. It compactly stores a collection of bits, with each bit representing a single key within the range of keys. Bit arrays are very fast, requiring only O(1) time.[34] The Judy1 type of Judy array handles 64-bit keys efficiently.[35]
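
A bit array version of set membership fits in a few lines of Python; this sketch assumes nonnegative integer keys below a known limit:

class BitArraySet:
    """Set membership via a bit array: one bit per possible key."""
    def __init__(self, limit):
        self.bits = bytearray((limit + 7) // 8)
    def add(self, key):
        self.bits[key >> 3] |= 1 << (key & 7)
    def __contains__(self, key):      # O(1): one index and one mask
        return bool(self.bits[key >> 3] & (1 << (key & 7)))

s = BitArraySet(100)
s.add(42)
print(42 in s, 7 in s)                # True False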


For approximate results, Bloom filters, another probabilistic data structure based on hashing, store a set of keys by encoding the keys using a bit array and multiple hash functions. Bloom filters are much more space-efficient than bit arrays in most cases and not much slower: with k hash functions, membership queries require only O(k) time. However, Bloom filters suffer from false positives.[g][h][37]



Other data structures


There exist data structures that may improve on binary search in some cases for both searching and other operations available for sorted arrays. For example, searches, approximate matches, and the operations available to sorted arrays can be performed more efficiently than binary search on specialized data structures such as van Emde Boas trees, fusion trees, tries, and bit arrays. These specialized data structures are usually only faster because they take advantage of the properties of keys with a certain attribute (usually keys that are small integers), and thus will be time or space consuming for keys that lack that attribute.[20] As long as the keys can be ordered, these operations can always be done at least efficiently on a sorted array regardless of the keys. Some structures, such as Judy arrays, use a combination of approaches to mitigate this while retaining efficiency and the ability to perform approximate matching.[35]



Variations



Uniform binary search





Uniform binary search stores the difference between the current and the two next possible middle elements instead of specific bounds.



Uniform binary search stores, instead of the lower and upper bounds, the index of the middle element and the change in the middle element from the current iteration to the next iteration. Each step reduces the change by about half. For example, if the array to be searched is [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], the middle element would be 6. Uniform binary search works on the basis that the difference between the index of the middle element of the array and the middle elements of the left and right subarrays is the same. In this case, the middle element of the left subarray ([1, 2, 3, 4, 5]) is 3 and the middle element of the right subarray ([7, 8, 9, 10, 11]) is 9. Uniform binary search would store the value of 3 as both indices differ from 6 by this same amount.[38] To reduce the search space, the algorithm either adds or subtracts this change from the index of the middle element. The main advantage of uniform binary search is that the procedure can store a table of the differences between indices for each iteration of the procedure. Uniform binary search may be faster on systems where it is inefficient to calculate the midpoint, such as on decimal computers.[39]
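
A simplified Python sketch of the key property: the sequence of probe widths below depends only on the array length, never on the comparison outcomes, so it could be precomputed into a per-length table as the uniform variant does. This formulation returns the rightmost match; it is an illustration, not Knuth's exact Algorithm U:

def uniform_style_search(A, T):
    """Width-halving search whose probe offsets depend only on len(A)."""
    base, width = 0, len(A)
    while width > 1:
        half = width // 2
        if A[base + half] <= T:
            base += half          # advance the base by the current delta
        width -= half             # widths n, n - n//2, ... are fixed by n
    if A and A[base] == T:
        return base
    return None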



Exponential search




Visualization of exponential searching finding the upper bound for the subsequent binary search



Exponential search extends binary search to unbounded lists. It starts by finding the first element whose index is a power of two and whose value is greater than the target value. Afterwards, it sets that index as the upper bound, and switches to binary search. A search takes ⌊log₂(x) + 1⌋ iterations of the exponential search and at most ⌊log₂(x)⌋ iterations of the binary search, where x is the position of the target value. Exponential search works on bounded lists, but becomes an improvement over binary search only if the target value lies near the beginning of the array.[40]
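
A Python sketch, doubling an index bound and then delegating to a standard binary search over the bracketed range (the function name is illustrative):

def exponential_search(A, T):
    """Find T in sorted list A by doubling an upper bound, then bisecting."""
    if not A:
        return None
    bound = 1
    while bound < len(A) and A[bound] < T:
        bound *= 2                          # probe indices 1, 2, 4, 8, ...
    L = bound // 2                          # T, if present, lies in A[L..R]
    R = min(bound, len(A) - 1)
    while L <= R:
        m = (L + R) // 2
        if A[m] < T:
            L = m + 1
        elif A[m] > T:
            R = m - 1
        else:
            return m
    return None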



Interpolation search




Visualization of interpolation search. In this case, no searching is needed because the estimate of the target's location within the array is correct. Other implementations may specify another function for estimating the target's location.



Instead of calculating the midpoint, interpolation search estimates the position of the target value, taking into account the lowest and highest elements in the array as well as the length of the array. This is only possible if the array elements are numbers. It works on the basis that the midpoint is not the best guess in many cases. For example, if the target value is close to the highest element in the array, it is likely to be located near the end of the array.[41] When the distribution of the array elements is uniform or near uniform, it makes O(log log n) comparisons.[41][42][43]


In practice, interpolation search is slower than binary search for small arrays, as interpolation search requires extra computation. Its time complexity grows more slowly than binary search, but this only compensates for the extra computation for large arrays.[41]
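
A sketch of the estimate for numeric keys; the linear-interpolation probe below is the common choice, though, as noted above, other estimators are possible:

def interpolation_search(A, T):
    """Probe where T 'should' sit under a roughly uniform distribution."""
    L, R = 0, len(A) - 1
    while L <= R and A[L] <= T <= A[R]:
        if A[L] == A[R]:                    # avoid division by zero
            return L if A[L] == T else None
        # linear interpolation between the lowest and highest keys in range
        m = L + (T - A[L]) * (R - L) // (A[R] - A[L])
        if A[m] < T:
            L = m + 1
        elif A[m] > T:
            R = m - 1
        else:
            return m
    return None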



Fractional cascading




In fractional cascading, each array has pointers to every second element of another array, so only one binary search has to be performed to search all the arrays.



Fractional cascading is a technique that speeds up binary searches for the same element in multiple sorted arrays. Searching each array separately requires O(k log n) time, where k is the number of arrays. Fractional cascading reduces this to O(k + log n) by storing specific information in each array about each element and its position in the other arrays.[44][45]


Fractional cascading was originally developed to efficiently solve various computational geometry problems. Fractional cascading has been applied elsewhere, such as in data mining and Internet Protocol routing.[44]



Noisy binary search




In noisy binary search, there is a certain probability that a comparison is incorrect.


Noisy binary search algorithms solve the case where the algorithm cannot reliably compare elements of the array. For each pair of elements, there is a certain probability that the algorithm makes the wrong comparison. Noisy binary search can find the correct position of the target with a given probability that controls the reliability of the yielded position.[i][j][49][50]



Quantum binary search


Classical computers are bounded to the worst case of exactly ⌊log₂(n) + 1⌋ iterations when performing binary search. Quantum algorithms for binary search are still bounded to a proportion of log₂(n) queries (representing iterations of the classical procedure), but the constant factor is less than one, providing for faster performance on quantum computers. Any exact quantum binary search procedure (that is, a procedure that always yields the correct result) requires at least (1/π)(ln n − 1) ≈ 0.220 log₂(n) queries in the worst case, where ln is the natural logarithm.[51] There is an exact quantum binary search procedure that runs in 4 log₆₀₅(n) ≈ 0.433 log₂(n) queries in the worst case.[52] In comparison, Grover's algorithm is the optimal quantum algorithm for searching an unordered list of elements, and it requires O(√n) queries.[53]



History


In 1946, John Mauchly made the first mention of binary search as part of the Moore School Lectures, a seminal college course in computing.[9] In 1957, William Wesley Peterson published the first method for interpolation search.[9][54] Every published binary search algorithm worked only for arrays whose length is one less than a power of two[k] until 1960, when Derrick Henry Lehmer published a binary search algorithm that worked on all arrays.[56] In 1962, Hermann Bottenbruch presented an ALGOL 60 implementation of binary search that placed the comparison for equality at the end, increasing the average number of iterations by one, but reducing to one the number of comparisons per iteration.[8] The uniform binary search was developed by A. K. Chandra of Stanford University in 1971.[9] In 1986, Bernard Chazelle and Leonidas J. Guibas introduced fractional cascading as a method to solve numerous search problems in computational geometry.[44][57][58]



Implementation issues



Although the basic idea of binary search is comparatively straightforward, the details can be surprisingly tricky ... — Donald Knuth[2]



When Jon Bentley assigned binary search as a problem in a course for professional programmers, he found that ninety percent failed to provide a correct solution after several hours of working on it, mainly because the incorrect implementations failed to run or returned a wrong answer in rare edge cases.[59] A study published in 1988 found accurate code for binary search in only five of twenty surveyed textbooks.[60] Furthermore, Bentley's own implementation of binary search, published in his 1986 book Programming Pearls, contained an overflow error that remained undetected for over twenty years. The Java programming language library implementation of binary search had the same overflow bug for more than nine years.[61]


In a practical implementation, the variables used to represent the indices will often be of fixed size, and this can result in an arithmetic overflow for very large arrays. If the midpoint of the span is calculated as (L + R) / 2, then the value of L + R may exceed the range of integers of the data type used to store the midpoint, even if L and R are within the range. If L and R are nonnegative, this can be avoided by calculating the midpoint as L + (R − L) / 2.[62]
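
The distinction matters in languages with fixed-width integers; Python's integers are arbitrary-precision, so the sketch below is purely illustrative there. In Java, the unsigned shift (L + R) >>> 1 is another well-known safe form:

def safe_midpoint(L, R):
    """Midpoint of nonnegative indices L <= R without forming L + R."""
    # (L + R) // 2 can overflow a fixed-width integer type even when
    # the midpoint itself is representable; R - L never can.
    return L + (R - L) // 2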


If the target value is greater than the greatest value in the array, and the last index of the array is the maximum representable value of L, the value of L will eventually become too large and overflow. A similar problem will occur if the target value is smaller than the least value in the array and the first index of the array is the smallest representable value of R. In particular, this means that R must not be an unsigned type if the array starts with index 0.[60][62]


An infinite loop may occur if the exit conditions for the loop are not defined correctly. Once L exceeds R, the search has failed and must convey the failure of the search. In addition, the loop must be exited when the target element is found, or in the case of an implementation where this check is moved to the end, checks for whether the search was successful or failed at the end must be in place. Bentley found that most of the programmers who incorrectly implemented binary search made an error in defining the exit conditions.[8][63]



Library support


Many languages' standard libraries include binary search routines:




  • C provides the function bsearch() in its standard library, which is typically implemented via binary search, although the official standard does not require it to be.[64]


  • C++'s Standard Template Library provides the functions binary_search(), lower_bound(), upper_bound() and equal_range().[65]


  • COBOL provides the SEARCH ALL verb for performing binary searches on COBOL ordered tables.[66]


  • Go's sort standard library package contains the functions Search, SearchInts, SearchFloat64s, and SearchStrings, which implement general binary search, as well as specific implementations for searching slices of integers, floating-point numbers, and strings, respectively.[67]


  • Java offers a set of overloaded binarySearch() static methods in the classes Arrays and Collections in the standard java.util package for performing binary searches on Java arrays and on Lists, respectively.[68][69]


  • Microsoft's .NET Framework 2.0 offers static generic versions of the binary search algorithm in its collection base classes. An example would be System.Array's method BinarySearch<T>(T[] array, T value).[70]

  • For Objective-C, the Cocoa framework provides the NSArray -indexOfObject:inSortedRange:options:usingComparator: method in Mac OS X 10.6+.[71] Apple's Core Foundation C framework also contains a CFArrayBSearchValues() function.[72]


  • Python provides the bisect module, whose use is demonstrated in the example after this list.[73]


  • Ruby's Array class includes a bsearch method with built-in approximate matching.[74]
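
For instance, Python's bisect module exposes the leftmost and rightmost variants directly, along with sorted insertion; the values here are hypothetical:

import bisect

keys = [10, 20, 40, 40, 70]
print(bisect.bisect_left(keys, 40))    # 2: leftmost position of 40
print(bisect.bisect_right(keys, 40))   # 4: one past the rightmost 40
bisect.insort(keys, 30)                # insert 30, keeping the list sorted
print(keys)                            # [10, 20, 30, 40, 40, 70]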



See also





  • Bisection method – the same idea used to solve equations in the real numbers


  • Multiplicative binary search – binary search variation with simplified midpoint calculation



Notes and references



Notes





  1. ^ This happens as binary search will not always divide the array perfectly. Take for example the array [1, 2, ..., 16]. The first iteration will select the midpoint of 8. On the left subarray are seven elements, while on the right are eight. If the search takes the right path, there is a higher chance that the search will make the maximum number of comparisons.[14]


  2. ^ Any search algorithm based solely on comparisons can be represented using a binary comparison tree. An internal path is any path from the root to an existing node. Let I be the internal path length, the sum of the lengths of all internal paths. If each element is equally likely to be searched, the average case is 1 + I/n, or simply one plus the average of all the internal path lengths of the tree. This is because internal paths represent the elements that the search algorithm compares to the target. The lengths of these internal paths represent the number of iterations after the root node. Adding the average of these lengths to the one iteration at the root yields the average case. Therefore, to minimize the average number of comparisons, the internal path length I must be minimized. It turns out that the tree for binary search minimizes the internal path length. Knuth 1998 proved that the external path length (the path length over all nodes where both children are present for each already-existing node) is minimized when the external nodes (the nodes with no children) lie within two consecutive levels of the tree. This also applies to internal paths, as internal path length I is linearly related to external path length E. For any tree of n nodes, I = E − 2n. When each subtree has a similar number of nodes, or equivalently the array is divided into halves in each iteration, the external nodes as well as their interior parent nodes lie within two levels. It follows that binary search minimizes the number of average comparisons as its comparison tree has the lowest possible internal path length.[14]


  3. ^ Knuth 1998 showed on his MIX computer model, intended to represent an ordinary computer, that the average running time of this variation for a successful search is 17.5 log₂(n) + 17 units of time, compared to 18 log₂(n) − 16 units for regular binary search. The time complexity for this variation grows slightly more slowly, but at the cost of higher initial complexity.[17]


  4. ^ It is possible to perform hashing in guaranteed constant time.[22]


  5. ^ The worst binary search tree for searching can be produced by inserting the values in sorted order or in an alternating lowest-highest key pattern.[26]


  6. ^ Knuth 1998 performed a formal time performance analysis of both of these search algorithms. On Knuth's MIX computer, which Knuth designed as a representation of an ordinary computer, binary search takes on average 18 log₂(n) − 16 units of time for a successful search, while linear search with a sentinel node at the end of the list takes 1.75n + 8.5 − (n mod 2)/(4n) units. Linear search has lower initial complexity because it requires minimal computation, but it quickly outgrows binary search in complexity. On the MIX computer, binary search only outperforms linear search with a sentinel if n > 44.[14][30]


  7. ^ This is because simply setting all of the bits which the hash functions point to for a specific key can affect queries for other keys which have a common hash location for one or more of the functions.[36]


  8. ^ There exist improvements of the Bloom filter which improve on its complexity or support deletion; for example, the cuckoo filter exploits cuckoo hashing to gain these advantages.[36]


  9. ^ Using noisy comparisons, Ben-Or & Hassidim 2008 established that any noisy binary search procedure must make at least (1 − τ)·log₂(n)/H(p) − 10/H(p) comparisons on average, where H(p) = −p log₂(p) − (1 − p) log₂(1 − p) is the binary entropy function and τ is the probability that the procedure yields the wrong position.[46]


  10. ^ The noisy binary search problem can be considered as a case of the Rényi–Ulam game,[47] a variant of Twenty Questions where the answers may be wrong.[48]


  11. ^ That is, arrays of length 1, 3, 7, 15, 31 ...[55]




Citations





  1. ^ Williams, Jr., Louis F. (22 April 1976). A modification to the half-interval search (binary search) method. Proceedings of the 14th ACM Southeast Conference. ACM. pp. 95–101. doi:10.1145/503561.503582. Archived from the original on 12 March 2017. Retrieved 29 June 2018.


  2. ^ ab Knuth 1998, §6.2.1 ("Searching an ordered table"), subsection "Binary search".


  3. ^ Butterfield & Ngondi 2016, p. 46.


  4. ^ Cormen et al. 2009, p. 39.


  5. ^ Weisstein, Eric W. "Binary search". MathWorld.


  6. ^ ab Flores, Ivan; Madpis, George (1 September 1971). "Average binary search length for dense ordered lists". Communications of the ACM. 14 (9): 602–603. doi:10.1145/362663.362752. ISSN 0001-0782. Retrieved 29 June 2018.


  7. ^ abc Knuth 1998, §6.2.1 ("Searching an ordered table"), subsection "Algorithm B".


  8. ^ abcd Bottenbruch, Hermann (1 April 1962). "Structure and use of ALGOL 60". Journal of the ACM. 9 (2): 161–221. doi:10.1145/321119.321120. ISSN 0004-5411. Retrieved 30 June 2018. Procedure is described at p. 214 (§43), titled "Program for Binary Search".


  9. ^ abcde Knuth 1998, §6.2.1 ("Searching an ordered table"), subsection "History and bibliography".


  10. ^ ab Kasahara & Morishita 2006, pp. 8–9.


  11. ^ abc Sedgewick & Wayne 2011, §3.1, subsection "Rank and selection".


  12. ^ abc Goldman & Goldman 2008, pp. 461–463.


  13. ^ Sedgewick & Wayne 2011, §3.1, subsection "Range queries".


  14. ^ abcdefg Knuth 1998, §6.2.1 ("Searching an ordered table"), subsection "Further analysis of binary search".


  15. ^ Knuth 1998, §6.2.1 ("Searching an ordered table"), "Theorem B".


  16. ^ Chang 2003, p. 169.


  17. ^ ab Knuth 1998, §6.2.1 ("Searching an ordered table"), subsection "Exercise 23".


  18. ^ Rolfe, Timothy J. (1997). "Analytic derivation of comparisons in binary search". ACM SIGNUM Newsletter. 32 (4): 15–19. doi:10.1145/289251.289255.


  19. ^ Knuth 1997, §2.2.2 ("Sequential Allocation").


  20. ^ abcd Beame, Paul; Fich, Faith E. (2001). "Optimal bounds for the predecessor problem and related problems". Journal of Computer and System Sciences. 65 (1): 38–72. doi:10.1006/jcss.2002.1822. Archived from the original on 6 March 2017. Retrieved 3 April 2016.



  21. ^ Knuth 1998, §6.4 ("Hashing").


  22. ^ Knuth 1998, §6.4 ("Hashing"), subsection "History".


  23. ^ Dietzfelbinger, Martin; Karlin, Anna; Mehlhorn, Kurt; Meyer auf der Heide, Friedhelm; Rohnert, Hans; Tarjan, Robert E. (August 1994). "Dynamic perfect hashing: upper and lower bounds". SIAM Journal on Computing. 23 (4): 738–761. doi:10.1137/S0097539791194094.


  24. ^ Morin, Pat. "Hash tables" (PDF). p. 1. Archived (PDF) from the original on 22 February 2016. Retrieved 28 March 2016.


  25. ^ Sedgewick & Wayne 2011, §3.2 ("Binary Search Trees"), subsection "Order-based methods and deletion".


  26. ^ Knuth 1998, §6.2.2 ("Binary tree searching"), subsection "But what about the worst case?".


  27. ^ Sedgewick & Wayne 2011, §3.5 ("Applications"), "Which symbol-table implementation should I use?".


  28. ^ Knuth 1998, §5.4.9 ("Disks and Drums").


  29. ^ Knuth 1998, §6.2.4 ("Multiway trees").


  30. ^ Knuth 1998, Answers to Exercises (§6.2.1) for "Exercise 5".


  31. ^ Knuth 1998, §6.2.1 ("Searching an ordered table").


  32. ^ Knuth 1998, §5.3.1 ("Minimum-Comparison sorting").


  33. ^ Sedgewick & Wayne 2011, §3.2 ("Ordered symbol tables").


  34. ^ Knuth 2011, §7.1.3 ("Bitwise Tricks and Techniques").


  35. ^ ab Silverstein, Alan, Judy IV shop manual (PDF), Hewlett-Packard, pp. 80–81, archived (PDF) from the original on 18 March 2016, retrieved 26 October 2017


  36. ^ ab Fan, Bin; Andersen, Dave G.; Kaminsky, Michael; Mitzenmacher, Michael D. (2014). Cuckoo filter: practically better than Bloom. Proceedings of the 10th ACM International on Conference on Emerging Networking Experiments and Technologies. pp. 75–88. doi:10.1145/2674005.2674994.


  37. ^ Bloom, Burton H. (1970). "Space/time trade-offs in hash coding with allowable errors" (PDF). Communications of the ACM. 13 (7): 422–426. CiteSeerX 10.1.1.641.9096. doi:10.1145/362686.362692. Archived from the original (PDF) on 4 November 2004. Retrieved 26 October 2017.


  38. ^ Knuth 1998, §6.2.1 ("Searching an ordered table"), subsection "An important variation".


  39. ^ Knuth 1998, §6.2.1 ("Searching an ordered table"), subsection "Algorithm U".


  40. ^ Moffat & Turpin 2002, p. 33.


  41. ^ abc Knuth 1998, §6.2.1 ("Searching an ordered table"), subsection "Interpolation search".


  42. ^ Knuth 1998, §6.2.1 ("Searching an ordered table"), subsection "Exercise 22".


  43. ^ Perl, Yehoshua; Itai, Alon; Avni, Haim (1978). "Interpolation search—a log log n search". Communications of the ACM. 21 (7): 550–553. doi:10.1145/359545.359557.


  44. ^ abc Chazelle, Bernard; Liu, Ding (6 July 2001). Lower bounds for intersection searching and fractional cascading in higher dimension. 33rd ACM Symposium on Theory of Computing. ACM. pp. 322–329. doi:10.1145/380752.380818. ISBN 978-1-58113-349-3. Retrieved 30 June 2018.


  45. ^ Chazelle, Bernard; Liu, Ding (1 March 2004). "Lower bounds for intersection searching and fractional cascading in higher dimension" (PDF). Journal of Computer and System Sciences. 68 (2): 269–284. doi:10.1016/j.jcss.2003.07.003. ISSN 0022-0000. Archived (PDF) from the original on 25 March 2017. Retrieved 30 June 2018.


  46. ^ Ben-Or, Michael; Hassidim, Avinatan (2008). "The Bayesian learner is optimal for noisy binary search (and pretty good for quantum as well)" (PDF). 49th Symposium on Foundations of Computer Science. pp. 221–230. doi:10.1109/FOCS.2008.58. ISBN 978-0-7695-3436-7. Archived (PDF) from the original on 9 August 2017. Retrieved 26 September 2017.


  47. ^ Pelc, Andrzej (2002). "Searching games with errors—fifty years of coping with liars". Theoretical Computer Science. 270 (1–2): 71–109. doi:10.1016/S0304-3975(01)00303-6.


  48. ^ Rényi, Alfréd (1961). "On a problem in information theory". Magyar Tudományos Akadémia Matematikai Kutató Intézetének Közleményei (in Hungarian). 6: 505–516. MR 0143666.


  49. ^ Pelc, Andrzej (1989). "Searching with known error probability". Theoretical Computer Science. 63 (2): 185–202. doi:10.1016/0304-3975(89)90077-7.


  50. ^ Rivest, Ronald L.; Meyer, Albert R.; Kleitman, Daniel J.; Winklmann, K. Coping with errors in binary search procedures. 10th ACM Symposium on Theory of Computing. doi:10.1145/800133.804351.


  51. ^ Høyer, Peter; Neerbek, Jan; Shi, Yaoyun (2002). "Quantum complexities of ordered searching, sorting, and element distinctness". Algorithmica. 34 (4): 429–448. arXiv:quant-ph/0102078. doi:10.1007/s00453-002-0976-3.


  52. ^ Childs, Andrew M.; Landahl, Andrew J.; Parrilo, Pablo A. (2007). "Quantum algorithms for the ordered search problem via semidefinite programming". Physical Review A. 75 (3). 032335. arXiv:quant-ph/0608161. Bibcode:2007PhRvA..75c2335C. doi:10.1103/PhysRevA.75.032335.


  53. ^ Grover, Lov K. (1996). A fast quantum mechanical algorithm for database search. 28th ACM Symposium on Theory of Computing. Philadelphia, PA. pp. 212–219. arXiv:quant-ph/9605043. doi:10.1145/237814.237866.


  54. ^ Peterson, William Wesley (1957). "Addressing for random-access storage". IBM Journal of Research and Development. 1 (2): 130–146. doi:10.1147/rd.12.0130.


  55. ^ "2n−1". OEIS A000225 Archived 8 June 2016 at the Wayback Machine.. Retrieved 7 May 2016.


  56. ^ Lehmer, Derrick (1960). Teaching combinatorial tricks to a computer. Proceedings of Symposia in Applied Mathematics. 10. pp. 180–181. doi:10.1090/psapm/010.


  57. ^ Chazelle, Bernard; Guibas, Leonidas J. (1986). "Fractional cascading: I. A data structuring technique" (PDF). Algorithmica. 1 (1): 133–162. CiteSeerX 10.1.1.117.8349. doi:10.1007/BF01840440. Archived (PDF) from the original on 3 March 2016. Retrieved 22 April 2016.


  58. ^ Chazelle, Bernard; Guibas, Leonidas J. (1986), "Fractional cascading: II. Applications" (PDF), Algorithmica, 1 (1): 163–191, doi:10.1007/BF01840441, archived (PDF) from the original on 4 March 2016, retrieved 22 April 2016


  59. ^ Bentley 2000, §4.1 ("The Challenge of Binary Search").


  60. ^ ab Pattis, Richard E. (1988). "Textbook errors in binary searching". SIGCSE Bulletin. 20: 190–194. doi:10.1145/52965.53012.


  61. ^ Bloch, Joshua (2 June 2006). "Extra, extra – read all about it: nearly all binary searches and mergesorts are broken". Google Research Blog. Archived from the original on 1 April 2016. Retrieved 21 April 2016.


  62. ^ ab Ruggieri, Salvatore (2003). "On computing the semi-sum of two integers" (PDF). Information Processing Letters. 87 (2): 67–71. CiteSeerX 10.1.1.13.5631. doi:10.1016/S0020-0190(03)00263-1. Archived (PDF) from the original on 3 July 2006. Retrieved 19 March 2016.


  63. ^ Bentley 2000, §4.4 ("Principles").


  64. ^ "bsearch – binary search a sorted table". The Open Group Base Specifications (7th ed.). The Open Group. 2013. Archived from the original on 21 March 2016. Retrieved 28 March 2016.


  65. ^ Stroustrup 2013, p. 945.


  66. ^ Unisys (2012), COBOL ANSI-85 programming reference manual, 1, pp. 598–601


  67. ^ "Package sort". The Go Programming Language. Archived from the original on 25 April 2016. Retrieved 28 April 2016.


  68. ^ "java.util.Arrays". Java Platform Standard Edition 8 Documentation. Oracle Corporation. Archived from the original on 29 April 2016. Retrieved 1 May 2016.


  69. ^ "java.util.Collections". Java Platform Standard Edition 8 Documentation. Oracle Corporation. Archived from the original on 23 April 2016. Retrieved 1 May 2016.


  70. ^ "List<T>.BinarySearch method (T)". Microsoft Developer Network. Archived from the original on 7 May 2016. Retrieved 10 April 2016.


  71. ^ "NSArray". Mac Developer Library. Apple Inc. Archived from the original on 17 April 2016. Retrieved 1 May 2016.


  72. ^ "CFArray". Mac Developer Library. Apple Inc. Archived from the original on 20 April 2016. Retrieved 1 May 2016.


  73. ^ "8.6. bisect — Array bisection algorithm". The Python Standard Library. Python Software Foundation. Archived from the original on 25 March 2018. Retrieved 26 March 2018.


  74. ^ Fitzgerald 2007, p. 152.




Works





  • Bentley, Jon (2000). Programming pearls (2nd ed.). Addison-Wesley. ISBN 978-0-201-65788-3.


  • Butterfield, Andrew; Ngondi, Gerard E. (2016). A dictionary of computer science (7th ed.). Oxford, UK: Oxford University Press. ISBN 978-0-19-968897-5.


  • Chang, Shi-Kuo (2003). Data structures and algorithms. Software Engineering and Knowledge Engineering. 13. Singapore: World Scientific. ISBN 978-981-238-348-8.


  • Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2009). Introduction to algorithms (3rd ed.). MIT Press and McGraw-Hill. ISBN 978-0-262-03384-8.


  • Fitzgerald, Michael (2007). Ruby pocket reference. Sebastopol, California: O'Reilly Media. ISBN 978-1-4919-2601-7.


  • Goldman, Sally A.; Goldman, Kenneth J. (2008). A practical guide to data structures and algorithms using Java. Boca Raton, Florida: CRC Press. ISBN 978-1-58488-455-2.


  • Kasahara, Masahiro; Morishita, Shinichi (2006). Large-scale genome sequence processing. London, UK: Imperial College Press. ISBN 978-1-86094-635-6.


  • Knuth, Donald (1997). Fundamental algorithms. The Art of Computer Programming. 1 (3rd ed.). Reading, MA: Addison-Wesley Professional. ISBN 978-0-201-89683-1.


  • Knuth, Donald (1998). Sorting and searching. The Art of Computer Programming. 3 (2nd ed.). Reading, MA: Addison-Wesley Professional. ISBN 978-0-201-89685-5.


  • Knuth, Donald (2011). Combinatorial algorithms. The Art of Computer Programming. 4A (1st ed.). Reading, MA: Addison-Wesley Professional. ISBN 978-0-201-03804-0.


  • Moffat, Alistair; Turpin, Andrew (2002). Compression and coding algorithms. Hamburg, Germany: Kluwer Academic Publishers. doi:10.1007/978-1-4615-0935-6. ISBN 978-0-7923-7668-2.


  • Sedgewick, Robert; Wayne, Kevin (2011). Algorithms (4th ed.). Upper Saddle River, New Jersey: Addison-Wesley Professional. ISBN 978-0-321-57351-3. A condensed web version is freely available; the book version is behind a paywall.


  • Stroustrup, Bjarne (2013). The C++ programming language (4th ed.). Upper Saddle River, New Jersey: Addison-Wesley Professional. ISBN 978-0-321-56384-2.




External links






  • NIST Dictionary of Algorithms and Data Structures: binary search


