I've looked around and it seems the recommended way of implementing a prediction engine in an API service is to use PredictionEnginePool. Mine is currently set up like this in ConfigureServices():
.ConfigureServices(services => {
services.AddPredictionEnginePool<Input, Output>()
.FromFile("TrainedModels/my_model.zip");
})
And consumed like this:
var predEngine = http.RequestServices.GetRequiredService<PredictionEnginePool<Input, Output>>();
var prediction = predEngine.Predict(input);
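For context, the same consumption via constructor injection in a controller looks roughly like this (PredictionController, the route, and GetPrediction are placeholder names for illustration, not part of my actual service):

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.ML;

[ApiController]
[Route("api/[controller]")]
public class PredictionController : ControllerBase
{
    private readonly PredictionEnginePool<Input, Output> _predictionEnginePool;

    public PredictionController(PredictionEnginePool<Input, Output> predictionEnginePool)
    {
        _predictionEnginePool = predictionEnginePool;
    }

    [HttpPost]
    public ActionResult<Output> GetPrediction([FromBody] Input input)
    {
        // Predict() borrows an engine from the pool for this call and returns it afterwards.
        Output prediction = _predictionEnginePool.Predict(input);
        return Ok(prediction);
    }
}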
Now I need my endpoint to accept an array of inputs. So far, the approach I've seen uses pipelines and IDataView transforms, as described in ML.Net Multiple Predictions:
IDataView predictions = predictionPipeline.Transform(inputData);
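For reference, this is roughly what that batch approach looks like end to end, based on my reading of those docs (the MLContext, the Model.Load call, and GetInputs() are placeholders here, not code I have in my service):

using System.Collections.Generic;
using Microsoft.ML;

var mlContext = new MLContext();

// Load the trained model; the pipeline is the ITransformer returned here.
ITransformer predictionPipeline = mlContext.Model.Load("TrainedModels/my_model.zip", out DataViewSchema inputSchema);

// Wrap the incoming array of inputs in an IDataView.
Input[] inputArray = GetInputs(); // placeholder for however the endpoint receives the array
IDataView inputData = mlContext.Data.LoadFromEnumerable(inputArray);

// Score every row in a single Transform call.
IDataView predictions = predictionPipeline.Transform(inputData);

// Materialize the results back into Output objects.
IEnumerable<Output> results = mlContext.Data.CreateEnumerable<Output>(predictions, reuseRowObject: false);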
But how can this be done when using a PredictionEnginePool, where I don't have the pipeline? Any thoughts are appreciated; I imagine others have run into this. Thanks!