Do compilers (in general, or any one in particular) optimize away repeated calls to the same function?
For example, consider this case:
struct foo {
    member_type m;
    return_type f() const; // returns by value
};
The function definition is in one translation unit:
return_type foo::f() const {
    /* do some computation using the value of m */
    /* return by value */
}
The repeated function calls are in another translation unit:
foo bar;
some_other_function_a(bar.f());
some_other_function_b(bar.f());
Would the code in the second translation unit be converted to this?
foo bar;
const return_type _tmp_bar_f = bar.f();
some_other_function_a(_tmp_bar_f);
some_other_function_b(_tmp_bar_f);
The computation f performs can potentially be expensive, while the returned type can be something very small (think of a mathematical function returning a double). Do compilers perform this optimization? Are there cases where they do or don't? You can also consider a generalized version of this question, not restricted to member functions or to functions taking no arguments.
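For concreteness, here is a minimal self-contained test case one could build and inspect; the file names, the body of f, and the trigonometric computation are hypothetical stand-ins, not taken from real code:
// foo.h
struct foo {
    double m;
    double f() const; // defined in foo.cpp, so callers see only the declaration
};

// foo.cpp
#include "foo.h"
#include <cmath>
double foo::f() const {
    return std::sin(m) * std::cos(m); // placeholder for an expensive computation
}

// main.cpp
#include "foo.h"
void some_other_function_a(double);
void some_other_function_b(double);
void test(const foo& bar) {
    some_other_function_a(bar.f()); // does the compiler emit one call to f, or two?
    some_other_function_b(bar.f());
}
Compiling main.cpp with g++ -O2 -S and counting the calls to foo::f in the generated assembly should show whether the second call gets elided.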
Clarification per @BaummitAugen's suggestion:
I'm more interested in the theoretical aspect of the question here, and not so much in whether one could rely on this to make real-world code run faster. I'm particularly interested in GCC on x86_64 with Linux.
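As a possibly relevant aside (an assumption on my part, not something I've verified for this exact scenario): GCC documents __attribute__((pure)) for asserting that a function has no side effects and that its result depends only on its arguments and on readable memory, which sounds like the kind of property a compiler would need in order to merge the two calls:
struct foo {
    double m;
    // GCC extension: asserts f has no side effects and its result depends
    // only on its arguments (including the implicit this) and on memory
    // it may read -- potentially allowing repeated calls to be merged.
    double f() const __attribute__((pure));
};
Even with this hint, I'd expect the merge to also require proving that some_other_function_a cannot modify bar (or anything else f reads) between the two calls.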