
In dev we have several environments running the same function app, but they all point to the same storage account/blob container.

I have noticed that when a blob is inserted, the blob triggers in multiple environments are each able to pick up the same blob at once. I am sure it is obvious how this would be a problem from that point on.

Is there any way to prevent this? Or is there a better trigger to use for this situation?

I would have expected the receipt to be global to the blob, not specific to the environment.

Sorry, just asking for clarification: so you have multiple function apps pointing to the same blob container, and they are all triggered when a blob is inserted; do you want to prevent this behavior, or is it the other way around? Commented Dec 18, 2018 at 2:44

1 Answer


The Functions runtimes/hosts (one in each local environment) that run Azure Functions are isolated from each other, so they each scan incoming blobs and store receipts separately.

Internally, each host creates its own queue messages (incoming blob info) for the Blob trigger to consume. My suggestion is to work with a centralized queue.

  1. Send messages (the blob name) to a queue when blobs are inserted, using code. If blobs are uploaded manually via the portal or Storage Explorer, I suggest creating a Blob trigger with a Queue output binding to send the messages; note this trigger should run on only one host. Using a Blob trigger here isn't ideal since the blob is effectively retrieved twice, but it's the only option I can offer for manual uploads.

    // v2 C# Blob Trigger sample for manual upload
    public static void Run([BlobTrigger("mycontainer/{name}")]Stream myBlob, 
        [Queue("myqueue")]out string message,
        string name, ILogger log)
    {
        log.LogInformation($"C# Blob trigger function Processed blob\n Name:{name}");
        message = name;
    }
    
  2. Use a Queue trigger to consume messages from that queue, then read the blob via a Blob input binding. This way the blob is retrieved only once across all hosts.

    // v2 C# sample
    public static void Run([QueueTrigger("myqueue")]string blobName,
        [Blob("mycontainer/{queueTrigger}", FileAccess.Read)]Stream myBlob,
        ILogger log)
    {
        log.LogInformation($"C# Queue trigger function processed: {blobName}");
        log.LogInformation($"\n Size: {myBlob.Length} Bytes");
    }
    

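Step 1's "using code" path could be sketched as below. This assumes the WindowsAzure.Storage SDK; `BlobUploader`/`UploadAndNotifyAsync` are hypothetical names, while `mycontainer` and `myqueue` mirror the bindings above.

```csharp
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public static class BlobUploader
{
    // Upload a blob and enqueue its name in one place, so that only the
    // queue-triggered function in step 2 ever processes it.
    public static async Task UploadAndNotifyAsync(
        string connectionString, string blobName, byte[] content)
    {
        var account = CloudStorageAccount.Parse(connectionString);

        // Upload the blob.
        var container = account.CreateCloudBlobClient()
            .GetContainerReference("mycontainer");
        await container.CreateIfNotExistsAsync();
        var blob = container.GetBlockBlobReference(blobName);
        await blob.UploadFromByteArrayAsync(content, 0, content.Length);

        // Enqueue the blob name for the Queue trigger to consume.
        var queue = account.CreateCloudQueueClient()
            .GetQueueReference("myqueue");
        await queue.CreateIfNotExistsAsync();
        await queue.AddMessageAsync(new CloudQueueMessage(blobName));
    }
}
```

With this, no Blob trigger is needed at all for code-driven uploads; the Blob trigger in step 1 only covers manual uploads.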
If you use v1 Functions, there's another simple solution for local dev. With these settings, the Blob trigger on every host shares the same internal queue and blob receipts:

  1. In local.settings.json, make sure every Function project has the same value for AzureWebJobsStorage (where the blob receipts, internal queue, etc. live).
  2. In host.json, set the same id (used to construct the names of the blob receipts and internal queue) for each host, e.g. "id": "localhost-1300897049".
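
As a minimal sketch of those two settings (the connection string is a placeholder; any shared storage account works):

    // local.settings.json, identical in every Function project
    {
      "Values": {
        "AzureWebJobsStorage": "UseDevelopmentStorage=true"
      }
    }

    // host.json (v1), identical in every Function project
    {
      "id": "localhost-1300897049"
    }

Note JSON itself does not allow comments; the `//` lines above are just labels.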

1 Comment

Jerry... Thanks for the information. I appreciate the explanation and options. In our case I think we decided to switch to uploading a blob and then making an explicit HTTP trigger call to start the function. That way the call is completely controlled by the environment it is made from. Blob triggers seemed like a good idea, but we're starting to pull away from them with this in mind.
