
I've looked around and it seems the recommended way of implementing a prediction engine in an API service is by using PredictionEnginePool. Mine is currently set up like this in ConfigureServices():

.ConfigureServices(services =>
{
    services.AddPredictionEnginePool<Input, Output>()
        .FromFile("TrainedModels/my_model.zip");
})

And consumed like this:

var predEngine = http.RequestServices.GetRequiredService<PredictionEnginePool<Input, Output>>();
var prediction = predEngine.Predict(input);

Now what I need is to allow my endpoint to consume an array of inputs. So far, what I've seen uses pipelines and IDataView/transforms, as described in ML.NET Multiple Predictions:

IDataView predictions = predictionPipeline.Transform(inputData);

But how can this be done when using a PredictionEnginePool, where I don't have access to the pipeline? Any thoughts appreciated; surely others have run into this. Thanks!

1 Answer

Here's what your handler might look like when using PredictionEnginePool for multiple predictions.

static async Task PredictHandler(HttpContext http)
{
    // Resolve the pooled prediction engine from DI.
    var predEnginePool = http.RequestServices.GetRequiredService<PredictionEnginePool<Input, Output>>();

    // Deserialize the request body into a collection of inputs.
    var input = await JsonSerializer.DeserializeAsync<IEnumerable<Input>>(http.Request.Body);

    if (input is null)
    {
        http.Response.StatusCode = StatusCodes.Status400BadRequest;
        return;
    }

    // Run each input through the pool; materialize before serializing the response.
    var response = input.Select(x => predEnginePool.Predict(x)).ToList();

    await http.Response.WriteAsJsonAsync(response);
}

Assuming your request body deserializes to an IEnumerable<Input>, apply the Predict method from PredictionEnginePool to each element using LINQ's Select. The response in this case will be an IEnumerable<Output>.
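If you specifically want the IDataView batch path from the example you linked, you can bypass the pool and load the model yourself with MLContext. A sketch, assuming the same Input/Output classes and model path as in your question (the variable names here are illustrative):

```csharp
using System.Collections.Generic;
using System.Linq;
using Microsoft.ML;

// Load the trained model once (e.g. at startup) and reuse the ITransformer.
var mlContext = new MLContext();
ITransformer model = mlContext.Model.Load("TrainedModels/my_model.zip", out var inputSchema);

// Turn the incoming collection into an IDataView for batch scoring.
IEnumerable<Input> inputs = GetInputsFromRequest(); // hypothetical helper
IDataView inputData = mlContext.Data.LoadFromEnumerable(inputs);

// Score the whole batch in one Transform call.
IDataView predictions = model.Transform(inputData);

// Convert the scored IDataView back into strongly typed results.
List<Output> results = mlContext.Data
    .CreateEnumerable<Output>(predictions, reuseRowObject: false)
    .ToList();
```

The trade-off: PredictionEnginePool handles pooling and model reloading for you but scores one row at a time, while the Transform route scores the batch in a single pass at the cost of managing the ITransformer yourself.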

