
I am reading a review sheet for a class and the instructor said this of the modulo operator:

Remember that modulus (%) has a runtime of O((logn)^2).

He went on to display this piece of code:

//Assumes n > 1

public boolean isPrime2(int n) {
    int sqrtN = (int) Math.sqrt(n);
    for (int i = 2; i < sqrtN + 1; i++) {
        if (n % i == 0) return false;
    }
    return true;
}

He said of this code:

O(n) = sqrt(n) x n^2. As you can see, this method of checking if a number is prime only iterates from 2 to sqrt(n). Since we loop over a quadratic operation sqrt(n) times, the runtime is O(sqrt(n)).

Two things I don't understand. One, why is the operation inside the for loop now an n^2 operation rather than a (logn)^2 operation? Two, even if it is only an n^2 operation, shouldn't the total runtime then be O(n)?

  • Sounds like the instructor is talking through his hat. I can't see how the modulo operator could be dependent on anything other than the number of bits in its arguments. Commented Jul 25, 2014 at 7:50
  • Well, I have a few friends currently taking a Discrete Math and Probability course in which they also mentioned that the modulo operator takes O((logn)^2) time. Commented Jul 25, 2014 at 7:54
  • What is n in this case? If it's not the bit-length of the operands, I can't see how it's in any way related. If it is, then it's well-known that long division is O(n) and I can't see how modulo could be worse. Commented Jul 25, 2014 at 8:00

1 Answer


I assume the review sheet you are reading is flawed:

  • Modulo/remainder is an O(1) operation (it's essentially just a variation on division, which takes constant time on fixed-size numbers).
  • Therefore, the inside of the loop is an O(1) operation,
  • which makes the total complexity O(√n).

However, let's assume for a moment that % denotes a waste-time operator with complexity O(log(n)²). In that case, we would end up with a complexity of O(√n · log(n)²). The expression √n · log(n)² cannot be simplified. However, the complexity class O(√n · n²) includes O(√n · log(n)²):

   O(√n · n²) includes O(√n · log(n)²)
⇔    √n · n²      >      √n · log(n)²
⇔         n²      >           log(n)²   assuming n ≠ 0
⇔         n       >           log(n)    assuming n > 0
which is true for n > 1

Of the three conditions n ≠ 0, n > 0, n > 1 the last one is the strongest, and is guaranteed by a comment in the code. It would therefore be correct to say that the code has O(√n · n²) complexity, although it is also true that it has O(√n · log(n)²) complexity, which is a stronger statement.
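Since modulo is O(1) on fixed-size ints, the only cost that grows with n is the number of loop iterations, and that can be checked empirically. Here is a minimal sketch that instruments the trial-division loop (the class name `TrialDivisionCount` and the counter are mine, not from the post):

```java
// Hypothetical instrumentation of the trial-division loop: count how
// many times the % operator actually runs, confirming the loop body
// executes at most about sqrt(n) times.
public class TrialDivisionCount {
    static int modCount; // number of % evaluations in the last call

    // Same contract as isPrime2: assumes n > 1
    static boolean isPrime2Counted(int n) {
        modCount = 0;
        int sqrtN = (int) Math.sqrt(n);
        for (int i = 2; i < sqrtN + 1; i++) {
            modCount++;
            if (n % i == 0) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // Worst case for the loop: n is prime, so no early return.
        boolean prime = isPrime2Counted(1_000_003);
        System.out.println(prime + ", % evaluated " + modCount + " times");
    }
}
```

For a prime n the loop never exits early, so modCount ends up close to √n (for n = 1,000,003 that is roughly 1000 evaluations), nowhere near n itself — which is exactly the O(√n) bound.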

Note that the expression √n · n² can be simplified, but not to n:

   √n · n²
=  n^(1/2) · n²
=  n^(1/2 + 2)
=  n^(5/2)
=  √(n⁵)
  • That little word fixed-sized is important, because it allows us to drastically simplify complexity investigations: Any operation with O(f(x)) is equivalent to O(1) iff x is a constant, regardless of what f is. So it doesn't matter if modulo takes log(n)² for arbitrary-precision numbers, if we're only ever going to use 64-bit integers. Commented Jul 25, 2014 at 8:20
  • In p mod q where p is m bits long, it may be the case that the complexity of modulo is O((lg m)^2), and that the confusion is being caused by m (bit length) and n (loop bound) being conflated. Commented Jul 25, 2014 at 8:21
