
I have a serverless function that receives orders, about 30 per day. This function depends on a third-party API to perform some additional lookups and checks. However, this external endpoint isn't 100% reliable, and I need to be able to store order requests if the other API is unavailable for a couple of hours (or more).

My initial thought was to split the function into two, the first part would receive orders, do some initial checks such as validating the order, then post the request into a message queue or pub/sub system. On the other side, there's a consumer that reads orders and tries to perform the API requests, if the API isn't available the orders get posted back into the queue.
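A minimal sketch of that two-part split, using Python's in-process `queue.Queue` as a stand-in for a real message queue or pub/sub system (the `receive_order`/`consume` names and the `call_api` callback are hypothetical, just to illustrate the flow):

```python
import queue

order_queue = queue.Queue()

def receive_order(order):
    """First function: validate the order, then enqueue it for async processing."""
    if not order.get("id"):
        raise ValueError("invalid order")
    order_queue.put(order)

def consume(call_api):
    """Consumer side: pop one order, try the external API, requeue on failure."""
    order = order_queue.get()
    try:
        call_api(order)
        return True
    except ConnectionError:
        order_queue.put(order)  # API unavailable: post the order back into the queue
        return False
```

In a real deployment the queue would be Azure Storage Queues or Service Bus, which give you visibility timeouts and dead-lettering instead of the naive immediate requeue shown here.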

However, someone suggested simply using an Azure Durable Function for the requests and storing the current backlog in the function state, using the Aggregator pattern (especially since the API will be working fine 99.99…% of the time). This would make the architecture a lot simpler.

What are the advantages/disadvantages of using one over the other, am I missing any important considerations? I would appreciate any insight or other suggestions you have. Let me know if additional information is needed.

  • If you expect that 3rd party API to be available 99.9% of the time, then it is simpler to have a retry policy with exponential backoff (i.e., an increasing time interval between retry attempts). If you exceed X attempts, you can put the order in a queue for later processing. I don't think a Durable Function is apt here, as it comes with its own list of constraints for orchestrator functions. Commented Jun 10, 2022 at 8:55
  • 1
    If you expect the 3rd party API to have much more frequent downtime, it is better to use the Queue to store the unprocessed orders. Having said that, if there is only transient errors by 3rd party API, then durable function's built-in Retry options can be used. Commented Jun 10, 2022 at 9:22
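The backoff-then-queue approach from the first comment can be sketched as follows (a hypothetical helper, not Azure-specific; `call_api` and `enqueue` are assumed callbacks):

```python
import time

def call_with_backoff(call_api, order, max_attempts=5, base_delay=1.0, enqueue=None):
    """Retry with exponential backoff; hand the order off to a queue after max_attempts."""
    for attempt in range(max_attempts):
        try:
            return call_api(order)
        except ConnectionError:
            if attempt == max_attempts - 1:
                break  # exhausted our attempts
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
    if enqueue:
        enqueue(order)  # exceeded X attempts: park the order for later processing
    return None
```

In .NET, Polly's `WaitAndRetry` policies implement the same idea without the hand-rolled loop.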

1 Answer


You could solve this problem with the Durable Task Framework, Azure Storage Queues, or Service Bus queues, but at your transaction volume, I think that's overcomplicating the solution.

If you're dealing with ~30 orders per day, consider one of the simpler solutions:

  • Use Polly, a well-supported resilience and fault-tolerance framework.
  • Write request information to your database. Have an Azure Function with a timer trigger run periodically and finish processing any orders that aren't marked as complete.
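The database-plus-timer option can be sketched like this. This is an illustrative Python sketch, with `sqlite3` standing in for your actual database and `call_api` a hypothetical wrapper around the third-party API; the real thing would be the body of a timer-triggered Azure Function:

```python
import sqlite3

def sweep_pending_orders(conn, call_api):
    """Timer-trigger body: retry every order not yet marked complete."""
    pending = conn.execute(
        "SELECT id, payload FROM orders WHERE complete = 0"
    ).fetchall()
    for order_id, payload in pending:
        try:
            call_api(payload)
        except ConnectionError:
            continue  # API still down; leave the row for the next sweep
        conn.execute("UPDATE orders SET complete = 1 WHERE id = ?", (order_id,))
    conn.commit()
```

Because completion is tracked in the database, the sweep is naturally idempotent across timer runs: an order that fails today is simply picked up again on the next tick.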

The Durable Task Framework is great when you get into serious volume, but there's a non-trivial learning curve for the framework.


2 Comments

Azure Durable Functions supports retries, but you have to specify the retry policy in the calls to the orchestration and activity functions.
@ThomasEyde I totally agree. Durable Task Framework scales better than Polly as well.
