I know that on Linux and many other operating systems, loopback TCP connections are a special case: checksums are disabled, data is copied atomically from one process to the other, entire sections of the network stack are skipped, and so on.
My question, which none of the links confirming that this optimisation exists seem to answer, is: can Linux detect loopback equivalence? Can it detect that connections to server1 actually route to an IP address assigned to a local interface, and therefore use the equivalent loopback path instead? Does the TCP standard permit this, and if so, what are the rules?
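For concreteness, here is a minimal sketch (Python) of the sort of check I have in mind; 192.0.2.10 and port 5000 are placeholders for an address actually assigned to one of the host's NICs and an unused port. The idea is to connect to the NIC's address rather than 127.0.0.1 and inspect the negotiated MSS, which should hint at whether the kernel picked the loopback device's large MTU or the NIC's ~1500-byte MTU:

```python
import socket

# Placeholder: replace with an address assigned to a local non-loopback
# interface (e.g. the host's eth0 address) and a free port.
LOCAL_NIC_ADDR = "192.0.2.10"
PORT = 5000

# Listen on all interfaces so the connection can target the NIC address.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", PORT))
server.listen(1)

# Connect to the NIC address rather than 127.0.0.1.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((LOCAL_NIC_ADDR, PORT))
conn, _ = server.accept()

# TCP_MAXSEG reflects the MTU of the path the kernel chose:
# a value near 65483 would suggest the loopback device (MTU 65536)
# is in use even though we addressed the NIC, while a value near
# 1448 would suggest the NIC's MTU governs the connection.
mss = client.getsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG)
print(f"Negotiated MSS when connecting to {LOCAL_NIC_ADDR}: {mss}")

client.close()
conn.close()
server.close()
```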
I don't think I've ever seen this detection happen in practice, and the idea seems so obvious that I can't be the first to think of it. If the standard permits this optimisation, are there reasons why it isn't performed? What governs this?