
Suppose we have the function below:

def func(x, value_):
    """Repeatedly double x until it reaches value_."""
    assert 0 < x < value_
    while x < value_:
        x *= 2

Although value_ can be arbitrarily large, the while loop is not infinite and the number of comparisons is bounded above by value_. Consequently, is it correct to say that this function has computational complexity O(N), where N = value_?


2 Answers


It's O(log n), since x doubles toward value_ on every iteration. Try drawing a graph of the two values and you will see it.
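A quick instrumented version makes the logarithmic growth visible. This is a sketch; the helper name `count_iterations` is mine, not part of the original function:

```python
def count_iterations(x, value_):
    """Count how many times the loop in func runs."""
    assert 0 < x < value_
    count = 0
    while x < value_:
        x *= 2
        count += 1
    return count

# Multiplying value_ by 1000 adds only ~10 iterations:
print(count_iterations(1, 1000))       # 10
print(count_iterations(1, 1_000_000))  # 20
```

Each extra iteration doubles x, so the iteration count grows like the logarithm of value_, not like value_ itself.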



The time complexity is O(log2(m/n)), where m = value_, n = x, and log2 denotes the base-2 logarithm.

Consider that if x is doubled, for a fixed value_, one fewer iteration is performed.

Conversely, if value_ is doubled, for a fixed x, one extra iteration is performed.
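The closed form is ceil(log2(m/n)): the loop stops after the smallest k such that x * 2^k >= value_. A quick check (a sketch; `loop_iterations` is a name I introduced, not from the answer):

```python
import math

def loop_iterations(x, value_):
    """Iterations of: while x < value_: x *= 2."""
    count = 0
    while x < value_:
        x *= 2
        count += 1
    return count

# The measured count matches ceil(log2(value_ / x)):
for x, v in [(1, 100), (3, 100), (5, 640), (7, 8)]:
    assert loop_iterations(x, v) == math.ceil(math.log2(v / x))
```

Doubling x (n) or doubling value_ (m) each shifts log2(m/n) by exactly one, which is the one-iteration difference described above.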

Comments

The base isn't really relevant; logarithms of i using different bases differ only by a constant factor. The usual convention is to write O(lg i) to leave the base unspecified (though in practice it almost always arises from a base-2 logarithm, so there's no great need to be specific).
There's also no need to specify the sizes of value_ and x separately; the input size is simply the sum of the sizes necessary for value_ and x (which is not the values of the two variables themselves, but a logarithm--you need lg n bits to specify a value n). Using a reasonable encoding, you end up with something like lg value_ < n < 2 * lg value_ (give or take a constant number of bits), with the lower bound achieved when x = 1 and the upper bound achieved when x is just less than value_. Either way, O(lg n) sufficiently describes the running time.
@chepner, I appreciate your first comment. On your second: this answer suggests you should consider multiple factors when analyzing time complexity. Regarding your remark that "the input size is simply the sum of the sizes necessary for value_ and x", the phrase "sum of the sizes" (value_ + x?) has no significance to me. Can you explain further?
I would compare this to how the complexities of graph algorithms are stated, in terms of the number of edges (m) and number of vertices (n). Roughly speaking, a sparse graph has m = O(n) while a dense graph has m = O(n^2). These types of graphs arise from real-world applications, and knowing whether an algorithm with complexity O(mn) is linear or quadratic in practice has real value. Here, though, you're dealing with something that at worst is O(lg n); is there any practical benefit to knowing that if x is large enough, the algorithm is really O(lg lg n) or even O(1)...
... and is there a real-world problem where you actually have distinct classes of problems where such a distinction between x and value_ arises naturally?
