
Here's a claim made in one of the answers at: https://softwareengineering.stackexchange.com/a/136146/18748

"Try your hand at a functional language or two. Try implementing factorial in Erlang using recursion and watch your jaw hit the floor when 20000! returns in 5 seconds (no stack overflow in sight)."

Why would it be faster/more efficient than using recursion/iteration in Java/C/C++/Python (any of them)? What is the underlying mathematical/theoretical concept that makes this happen? Unfortunately, I was never exposed to functional programming in my undergrad (we started with C), so I may simply be missing the 'why'.

  • Moderator: Please move this to an appropriate forum/site if deemed inappropriate. Commented Mar 11, 2013 at 22:31
  • en.wikipedia.org/wiki/Tail-call_optimization Commented Mar 11, 2013 at 22:32
  • It's not faster, that's hogwash. It's usually faster to write, however. Commented Mar 11, 2013 at 22:33
  • That claim is actually made in a comment rather than an answer. Commented Mar 11, 2013 at 22:36
  • @DonRoby - The link is to the answer where the claim is made. It's reiterated in the comments however. Commented Mar 11, 2013 at 23:18

4 Answers


A recursive solution would probably be slower and clunkier in Java or C++, but we don't do things that way in Java and C++, so. :)

As for why, functional languages invariably make very heavy use of recursion (by default they either have to jump through hoops to modify variables or simply aren't allowed to, which on its own makes most forms of iteration ineffective or outright impossible). So they effectively have to optimize the hell out of it in order to be competitive.

Almost all of them implement an optimization called "tail call elimination", which reuses the current call's stack space for the next call and turns a "call" into a "jump". That little change basically turns recursion into iteration, but don't remind the FPers of that. When it comes to iteration in a procedural language and recursion in a functional one, the two prove about equal performance-wise. (If either one's still faster, though, it'd be iteration.)
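
To make that concrete, here is a minimal sketch in Clojure (chosen only because Clojure already appears in the answers below; the names fact-naive and fact-acc are made up for the example). Clojure spells the reused-frame jump out explicitly with recur rather than detecting tail calls automatically, but the effect is the same: constant stack space.

;; Naive recursion: every call pushes a new stack frame, so a large
;; enough n will eventually throw a StackOverflowError.
(defn fact-naive [n]
  (if (<= n 1)
    1N
    (*' n (fact-naive (dec n)))))

;; Tail-recursive version with an accumulator: recur reuses the current
;; frame instead of pushing a new one, i.e. the "call" becomes a "jump".
(defn fact-acc
  ([n] (fact-acc n 1N))
  ([n acc]
   (if (<= n 1)
     acc
     (recur (dec n) (*' n acc)))))

;; (count (str (fact-acc 20000)))  ; roughly 77,000 digits, no stack overflow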

The rest is libraries and number types and such. Decent math code ==> decent math performance. There's nothing keeping procedural or OO languages from optimizing similarly, other than that most people don't care that much. Functional programmers, on the other hand, love to navel-gaze about how easily they can calculate Fibonacci sequences and 20000! and other numbers that most of us will never use anyway.
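
To illustrate the libraries-and-number-types point, a hedged sketch, again in Clojure (the function name factorial is mine): the heavy lifting for 20000! is arbitrary-precision multiplication, which on the JVM bottoms out in java.math.BigInteger, and any language with a decent bignum library gets comparable results, recursion or not.

;; 20000! as a plain left-to-right reduction over a sequence; no explicit
;; recursion in user code at all. Essentially all of the time goes into
;; the big-integer multiplications (*' auto-promotes to BigInt).
(defn factorial [n]
  (reduce *' (range 2 (inc n))))

;; (factorial 20000) returns without drama; the bignum library does the work.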


Comments


Essentially, functional languages make recursion cheap.

Particularly via tail-call optimization.

Comments


Tail-call optimization

Lazy evaluation: "When using delayed evaluation, an expression is not evaluated as soon as it gets bound to a variable, but when the evaluator is forced to produce the expression's value."

Both concepts are illustrated in this question about various ways of implementing factorial in Clojure. You can also find a detailed discussion of lazy sequences and TCO in Stuart Halloway and Aaron Bedra's book, Programming Clojure.

The following function, adapted from Programming Clojure, creates a lazy sequence with the first 1M members of the Fibonacci sequence and realizes the one-hundred-thousandth member:

user=> (time (nth (take 1000000 (map first (iterate (fn [[a b]] [b (+ a b)]) [0N 1N]))) 100000))
"Elapsed time: 252.045 msecs"
25974069347221724166155034.... 
(20900 total digits)

(512MB heap, Intel Core i7, 2.5GHz)
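
To see the laziness itself (rather than just its effect on the timing above), a small sketch; the var name fibs is only for illustration. Binding the sequence costs next to nothing, and the work happens only when a value is demanded:

;; Nothing is computed here: iterate and map both return lazy sequences.
(def fibs (map first (iterate (fn [[a b]] [b (+ a b)]) [0N 1N])))

;; Only now, when nth forces realization, does the evaluator pay the cost.
(time (nth fibs 100000))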

3 Comments

  • Note though that Clojure is a strict (not lazy) language and also doesn't implement full tail call optimization, as it runs on the JVM...
  • On the JVM, you have to fake it 'till you make it with lazy-seq and recur
  • @DonStewart For factorial, you don't need full TCO, just tail recursion elimination. I can't imagine Clojure doesn't do that.

It's not faster (faster than what?), and it has nothing to do with tail-call optimization. It's not enough to throw a buzzword in here; one would also have to explain why tail-call optimization should be faster than looping, and it simply isn't.

Mind you, I am not a functional programming hater; quite the contrary! But spreading myths does not serve the case for functional programming.

BTW, has anyone here actually measured how long it takes to compute (and print, which should consume at least 50% of the CPU cycles needed) 20000!?

I did:

main _ = println (product [2n..20000n])

This is a JVM language compiled to Java, and it uses Java big integers (known to be slow). It also suffers from JVM startup costs. And it is not the fastest way to do it (explicit recursion could save the list construction, for example).

The result is:

181920632023034513482764175686645876607160990147875264
...
... many many lines with digits
...          
000000000000000000000000000000000000000000000000000000000000000

real    0m3.330s
user    0m4.472s
sys     0m0.212s

(On an Intel® Core™ i3 CPU M 350 @ 2.27GHz × 4)

We can safely assume that the same in C with GMP wouldn't even use 50% of that time.

Hence, "functional is faster" is a myth, as is "functional is slower". In fact, it's not even a myth; it's simply nonsense as long as one does not say what it is faster or slower compared to.

1 Comment

  • The languages mentioned in the original article were "Java, C/C++, JavaScript/jQuery, and ... Objective-C". My assertion is that TCO and/or lazy evaluation are features of Clojure that make it possible to support very large computations that might prove challenging in other environments. I have provided an example, as have others in the OP's linked article.
