I was searching for an algorithm whose worst-case time complexity is $O(\lg(n)).$
I have read that the binary search algorithm has both worst-case and best-case time complexity $\Theta(\lg(n)),$ and I am confused, as this seems to miss the underlying point that the data structure used for the implementation is important.
Say, for a sorted array this is possible, as also stated in Exercise 2.3-6 of CLRS, 4th edition:
2.3-6
Referring back to the searching problem (see Exercise 2.1-4), observe that if the subarray being searched is already sorted, the searching algorithm can check the midpoint of the subarray against $v$ and eliminate half of the subarray from further consideration. The binary search algorithm repeats this procedure, halving the size of the remaining portion of the subarray each time. Write pseudocode, either iterative or recursive, for binary search. Argue that the worst-case running time of binary search is $\Theta(\lg n).$
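For context, an iterative binary search over a sorted array can be sketched as follows (a minimal Python sketch of the procedure the exercise describes; the function name and NIL-as-`None` convention are my own choices):

```python
def binary_search(A, x):
    """Search sorted list A for x.
    Returns an index i with A[i] == x, or None (NIL) if x is absent."""
    lo, hi = 0, len(A) - 1
    while lo <= hi:
        mid = (lo + hi) // 2   # midpoint of the current subarray
        if A[mid] == x:
            return mid
        elif A[mid] < x:
            lo = mid + 1       # x, if present, lies in the right half
        else:
            hi = mid - 1       # x, if present, lies in the left half
    return None                # NIL: x does not appear in A

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))   # None
```

Each iteration halves the remaining subarray, so the loop runs at most $\lfloor \lg n \rfloor + 1$ times, which gives the $\Theta(\lg n)$ worst case the exercise asks to argue.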
For reference, I have also given Exercise 2.1-4 of CLRS, 4th edition:
2.1-4
Consider the searching problem:
Input: A sequence of $n$ numbers $(a_1, a_2, … , a_n)$ stored in array $A[1 : n]$ and a value $x.$
Output: An index $i$ such that $x$ equals $A[i]$ or the special value NIL if $x$ does not appear in $A.$
Write pseudocode for linear search, which scans through the array from beginning to end, looking for $x.$ Using a loop invariant, prove that your algorithm is correct. Make sure that your loop invariant fulfills the three necessary properties.
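For completeness, linear search with its loop invariant stated as a comment might look like this (a sketch in Python rather than CLRS pseudocode; the naming is mine):

```python
def linear_search(A, x):
    """Scan A from beginning to end, looking for x.
    Returns the first index i with A[i] == x, or None (NIL)."""
    # Loop invariant: at the start of each iteration,
    # x does not appear in the prefix A[0 : i] already scanned.
    for i in range(len(A)):
        if A[i] == x:
            return i
    return None  # NIL: the invariant now covers all of A

print(linear_search([4, 2, 9, 2], 9))  # 2
print(linear_search([4, 2, 9, 2], 5))  # None
```

The invariant holds initially (the empty prefix contains nothing), is preserved by each iteration that does not return, and at termination implies correctness of the NIL result, i.e. the three properties the exercise requires.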
But if we change the data structure used for the implementation to a tree, i.e. with pointers to the parent and child nodes, then if the data distribution is skewed, in the worst case all nodes (elements) can end up in the same branch.
This means that the worst-case time complexity of the binary search algorithm, on such a tree, is $O(n).$
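The skew I have in mind can be demonstrated directly (a minimal sketch of a plain, unbalanced binary search tree; the class and function names are my own):

```python
class Node:
    """A BST node with child pointers, as described above."""
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def bst_insert(root, key):
    """Insert key into an unbalanced BST; returns the (possibly new) root."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = bst_insert(root.left, key)
    else:
        root.right = bst_insert(root.right, key)
    return root

def height(root):
    """Number of nodes on the longest root-to-leaf path."""
    if root is None:
        return 0
    return 1 + max(height(root.left), height(root.right))

root = None
for k in range(1, 101):        # insert 100 keys in sorted (skewed) order
    root = bst_insert(root, k)
print(height(root))            # 100: every node sits on one right-leaning branch
```

With sorted insertions every key goes to the right child, so the "tree" degenerates into a chain of length $n$ and a search for the largest key visits all $n$ nodes.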
So, I am requesting an algorithm whose worst-case time complexity is $O(\lg(n)).$