
The only thing I can think of is as follows, which is far from ideal:

interface IBar {
    void Foo() => Console.WriteLine("Hello from interface!");
}

struct Baz : IBar {
    // compiler error
    void Test1() => this.Foo();

    // IIRC this will box
    void Test2() => ((IBar)this).Foo();

    // this shouldn't box but is pretty complicated just to call a method
    void Test3() {
        impl(ref this);

        void impl<T>(ref T self) where T : IBar
            => self.Foo();  
    }
}

Is there a more straightforward way to do this?

(Related and how I got to this question: Calling C# interface default method from implementing class)

Comments

  • @GSerg But casting a value type to an interface will box it, which is the case here.
  • I overlooked the struct. You are correct. Still, the cast is required.
  • @GSerg Ah ok, that's unfortunate. I had hoped to (ab)use DIMs as an alternative to the missing inheritance with structs, but if there's this much overhead involved I guess I'll leave it be. Anyway, thanks for the answer.
  • I haven't dug into the new feature yet, but what about not implementing the method at all? Then bazInstance.Foo() should call the method with no box.
  • @JoelCoehoorn That doesn't happen.

3 Answers


I don't think there are any allocations. This answer to a possibly duplicate question explains that the JIT compiler can avoid boxing in many cases, including calls made through a cast to the interface. Andy Ayers from the JIT team verified this in a comment and provided a link to the PR that implemented it.

I adapted the code from that answer:

using System;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

namespace DIMtest
{
    interface IBar {
        int Foo(int i) => i;
    }

    struct Baz : IBar {
        // Does this box?
        public int Test2(int i) => ((IBar)this).Foo(i);
    }


    [MemoryDiagnoser, CoreJob, MarkdownExporter]
    public class Program
    {
        public static void Main() => BenchmarkRunner.Run<Program>();

        [Benchmark]
        public int ViaDIMCast()
        {
            int sum = 0;
            for (int i = 0; i < 1000; i++)
            {
                sum += (new Baz().Test2(i));
            }

            return sum;
        }
    }

}

The results don't show any allocations:


BenchmarkDotNet=v0.11.5, OS=Windows 10.0.18956
Intel Core i7-3770 CPU 3.40GHz (Ivy Bridge), 1 CPU, 8 logical and 4 physical cores
.NET Core SDK=3.0.100-preview9-014004
  [Host] : .NET Core 3.0.0-preview9-19423-09 (CoreCLR 4.700.19.42102, CoreFX 4.700.19.42104), 64bit RyuJIT
  Core   : .NET Core 3.0.0-preview9-19423-09 (CoreCLR 4.700.19.42102, CoreFX 4.700.19.42104), 64bit RyuJIT

Job=Core  Runtime=Core  

|     Method |     Mean |    Error |   StdDev | Gen 0 | Gen 1 | Gen 2 | Allocated |
|----------- |---------:|---------:|---------:|------:|------:|------:|----------:|
| ViaDIMCast | 618.5 ns | 12.05 ns | 13.40 ns |     - |     - |     - |         - |

I changed the return type to int, just like the linked answer, to ensure the method isn't optimized away.


4 Comments

  • Neat, I didn't know that .NET Core did those optimizations. I actually didn't have a C# 8-ready compiler handy, so I tested it with implicit implementations and .NET Framework. Sadly, that one doesn't optimize away the boxing. I'll have to play around with it a little to see if the optimization is guaranteed.
  • Thanks for finding the answer. You're probably right about the duplicate, if DIMs behave the same as implicit interface implementations.
  • @Velocirobtor .NET Core and .NET ... Old are very different. The linked question shows that explicit interface implementations get optimized since Core 2.1. DIMs depend on Core 3.0 runtime features, and I suspect many of them deal with optimizations.
  • As noted above, some boxes can be elided by the JIT, provided the box creation and consumption are both visible to the JIT and the box is not dup'd in the IL... in Core the async state machine core logic now depends on this. Some of these opts have made it back to Full Framework; 4.8, for instance, should be able to do this too. Still a lot we can do to improve here...
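To make those elision conditions concrete, here is a small illustration reusing the IBar and Baz types from the benchmark above (the class and member names below are mine, not from the thread): the first call creates and consumes the box inside a single method, which is the shape newer JITs can remove; the second lets the boxed value escape into a field, so that allocation has to stay.

class BoxElisionIllustration
{
    IBar _escaped;

    // Box created and consumed in the same method: the pattern that the
    // optimization discussed above can elide on runtimes that have it.
    public int ConsumedLocally(Baz b) => ((IBar)b).Foo(1);

    // The boxed struct escapes into a field, so this allocation cannot
    // be removed regardless of runtime.
    public void Escapes(Baz b) => _escaped = b;
}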

I haven't set myself up for C# 8.0 yet, so I'm not sure this will work, but here's an idea you could try:

struct Baz : IBar
{
    public void CallFoo()
    {
        this.AsBar().Foo();
    }

    public IBar AsBar()
    {
        // note: this conversion to IBar still boxes the struct (see comments below)
        return this;
    }
}

6 Comments

  • I don't object to the downvote, but downvoter, please help me understand what the problem is so I can learn.
  • I didn't downvote, but you're inheriting from a non-interface, which isn't valid (unless it is in C# 8.0).
  • In the OP's example, Bar is the name of an interface. Personally I would have called it IBar.
  • Hmm. Then it might be because the AsBar method is still boxing to the interface.
  • I can't speak for the downvoter, but I would guess they found at least two things wrong with your answer: 1) it is speculative, while authors of questions deserve real answers that are sure to solve their problem, and 2) it looks like it probably winds up boxing the Baz value anyway, which is exactly what the OP was trying to avoid.

If we don't rely on the JIT devirtualization magic that the accepted answer points out, the only "official" way would be option number 3, as you guessed:

// this shouldn't box but is pretty complicated just to call a method
void Test3() {
    impl(ref this);

    void impl<T>(ref T self) where T : IBar
        => self.Foo();  
}

where we rely on the constrained. callvirt IL sequence to help us.
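As an aside, that local-function trick can be packaged once in a generic helper, since any call made through a type parameter constrained to the interface compiles to the same constrained. callvirt sequence. A sketch (BarCaller and CallFoo are names I made up):

static class BarCaller
{
    // The call below compiles to constrained. !!T followed by
    // callvirt IBar::Foo(), the same shape as Test3's local function.
    // Whether that actually avoids the box is what the benchmark
    // below measures.
    public static void CallFoo<T>(ref T self) where T : IBar
        => self.Foo();
}

Callers would then write BarCaller.CallFoo(ref baz); instead of declaring a local function in every struct.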

But it still boxes with this benchmarking code for .NET 6 and 7:

// identical method bodies for all methods below 
interface I {
    int DefaultOne() {
        var result = 0;
        for (int i = 0; i < 100; i++) {
            var a = i % 2;
            var b = a * 4;
            result += b;
        }
        return result;
    }

    int DefaultTwo() {
        var result = 0;
        for (int i = 0; i < 100; i++) {
            var a = i % 2;
            var b = a * 4;
            result += b;
        }
        return result;
    }
}

struct S : I {
    public int DefaultTwo() {
        var result = 0;
        for (int i = 0; i < 100; i++) {
            var a = i % 2;
            var b = a * 4;
            result += b;
        }
        return result;
    }
}

The generic constrained-call and benchmarking methods:

int BenchmarkDefaultOne() {
    S str = default;
    var result = DefaultOneGeneric(ref str);

    return result;
}

int BenchmarkDefaultTwo() {
    S str = default;
    var result = DefaultTwoGeneric(ref str);

    return result;
}


int DefaultOneGeneric<T>(ref T tparam) where T : I {
    var result = 0;
    for (int i = 0; i < 100; i++) {
        result += tparam.DefaultOne();
    }

    return result;
}

int DefaultTwoGeneric<T>(ref T tparam) where T : I {
    var result = 0;
    for (int i = 0; i < 100; i++) {
        result += tparam.DefaultTwo();
    }
    return result;
}

Results (AllocatedBytes only):

| Method              | AllocatedBytes |
|-------------------- |---------------:|
| BenchmarkDefaultOne |          2,400 |
| BenchmarkDefaultTwo |              0 |

which is to be expected given how the constrained. prefix is specified to work (absent JIT help):

If thisType is a value type and thisType does not implement method then ptr is dereferenced, boxed, and passed as the 'this' pointer to the callvirt method instruction.

So there is no guaranteed way for our structs to invoke default interface methods (ones the struct does not itself implement) without also boxing the struct.
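If avoiding the box matters, the practical workaround is the DefaultTwo pattern above: have the struct supply its own implementation. A sketch of how to do that while keeping a single copy of the shared logic (BarDefaults is a hypothetical helper I'm introducing, not something from the question):

using System;

interface IBar
{
    // Default implementation for implementers that don't supply their own.
    void Foo() => BarDefaults.FooCore();
}

static class BarDefaults
{
    // One shared body, callable without any interface dispatch.
    public static void FooCore() => Console.WriteLine("Hello from interface!");
}

struct Baz : IBar
{
    // Because the struct overrides the member, a constrained generic call
    // (or a direct call) binds here and nothing is boxed; this is the
    // DefaultTwo / BenchmarkDefaultTwo case, which allocated 0 bytes.
    public void Foo() => BarDefaults.FooCore();
}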

