I'm new to Durable Functions. I've set up a Function App on the Consumption (serverless) plan and deployed a durable function to it.
The function calculates a large set of mathematical equations and writes the results into data matrices. A run usually takes between 1 and 5 minutes on 4 cores (8 logical processors). The output is written to a MySQL database, which the client app polls to retrieve updates and results.
It all works fine until I try to scale it...
I notice that each time I run it, the invocation gets access to two processors. However, if I call it twice at the same time from two clients, each client only seems to get one processor and runs at half the speed. Three simultaneous client calls take even longer... so where is the scaling?
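For what it's worth, here is a minimal sketch (in Python, purely to illustrate) of how the per-invocation CPU availability could be checked from inside an activity function; the activity body, logging and return value are placeholders, and the usual function.json bindings are assumed:

```python
import logging
import os

# Illustrative activity body: reports how many logical processors
# the worker instance actually exposes to this invocation.
def main(name: str) -> int:
    visible = os.cpu_count()  # logical CPUs the machine reports
    # On Linux workers, sched_getaffinity shows what this process may actually use.
    usable = len(os.sched_getaffinity(0)) if hasattr(os, "sched_getaffinity") else visible
    logging.info("cpu_count=%s usable=%s", visible, usable)
    return usable
```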
Note that I've tried fanning out, but that was too slow (probably because of the large amount of data involved in the calculations).
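By "fanning out" I mean the standard Durable Functions fan-out/fan-in pattern, roughly like this sketch (Python shown just for illustration; the activity name `CalculateChunk` and the chunking are placeholders, not my exact code):

```python
import azure.durable_functions as df

def orchestrator_function(context: df.DurableOrchestrationContext):
    # Fan out: one activity per chunk of the calculation, run in parallel.
    # "CalculateChunk" is a placeholder activity name.
    chunks = context.get_input()  # e.g. a list of work items
    tasks = [context.call_activity("CalculateChunk", chunk) for chunk in chunks]
    # Fan in: wait for every chunk to finish, then combine the results.
    results = yield context.task_all(tasks)
    return results

main = df.Orchestrator.create(orchestrator_function)
```

Each chunk gets serialized and passed through the Durable Functions queues/history, which I suspect is where the overhead comes from given the amount of data per chunk.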
My questions are:

- What will happen as it scales up to 100 simultaneous calls?
- Is there a more suitable plan that guarantees a minimum of N processors per execution?
- It may go to 500 or even 1000 simultaneous calls... can Azure cope with this?
I've thought about setting up 100 identical functions (obviously with different names) and calling them in turn, one per client... would that work? If so, it seems odd that it should be needed.