
I am quite new to C++ and ONNX, and I need to set up my random forest model for ONNX C++ inference. I followed this YouTube tutorial: https://www.youtube.com/watch?v=exsgNLf-MyY and reproduced the code as follows. So far the build is fine and no errors are returned. My random forest has 5 inputs and 4 outputs. When I open my app, it does no computation and only leaves the message "Model Loaded Successfully". Support needed.

#include "Linear.h"
#include <onnxruntime_cxx_api.h>
#include <algorithm>
#include <iostream>
#include <iterator>
#include <vector>

using namespace std;


void Demo::RunLinearRegression()
{
    // gives access to the underlying API (you can optionally customize log)
    // you can create one environment per process (each environment manages an internal thread pool)
    Ort::Env env;

    Ort::Session session{ env, L"C:\\data\\RF.onnx", Ort::SessionOptions{}};
    std::cout << "Model Loaded Successfully!\n";
    system("PAUSE");

    // Ort::Session gives access to input and output information:
    // - counts
    // - name
    // - shape and type
    std::cout << "Number of model inputs: " << session.GetInputCount() << "\n";
    std::cout << "Number of model outputs: " << session.GetOutputCount() << "\n";

    // you can customize how allocation works. Let's just use a default allocator provided by the library
    Ort::AllocatorWithDefaultOptions allocator;
    // get input and output names
    auto* inputName = session.GetInputName(0, allocator);
    std::cout << "Input name: " << inputName << "\n";

    auto* outputName = session.GetOutputName(0, allocator);
    std::cout << "Output name: " << outputName << "\n";

    // get input shape
    auto inputShape = session.GetInputTypeInfo(0).GetTensorTypeAndShapeInfo().GetShape();
    // set some input values
    std::vector<float> inputValues = { 2, 3, 4, 5, 6 };

    // where to allocate the tensors
    auto memoryInfo = Ort::MemoryInfo::CreateCpu(OrtDeviceAllocator, OrtMemTypeCPU);

    // create the input tensor (this is not a deep copy!)
    auto inputOnnxTensor = Ort::Value::CreateTensor<float>(memoryInfo,
        inputValues.data(), inputValues.size(),
        inputShape.data(), inputShape.size());

    // the API needs the array of inputs you set and the array of outputs you get
    std::vector<const char*> inputNames = { inputName };
    std::vector<const char*> outputNames = { outputName };

    // finally run the inference!
    auto outputValues = session.Run(
        Ort::RunOptions{ nullptr }, // e.g. set a verbosity level only for this run
        inputNames.data(), &inputOnnxTensor, 5, // input to set
        outputNames.data(), 4);                 // output to take 

    // extract first (and only) output
    auto& output1 = outputValues[0];
    const auto* floats = output1.GetTensorMutableData<float>();
    const auto floatsCount = output1.GetTensorTypeAndShapeInfo().GetElementCount();

    // just print the output values
    std::copy_n(floats, floatsCount, ostream_iterator<float>(cout, " "));


    // closing boilerplate
    allocator.Free(inputName);
    allocator.Free(outputName);
}
  • Why do you have system("PAUSE"); after the model load? It will stop execution until you hit a key in the terminal. I was able to run this code locally with that removed. Commented Mar 2, 2022 at 16:09

3 Answers


What happens during the Run() call? Does the app crash?



Inference is stuck because of system("PAUSE"):

std::cout << "Model Loaded Successfully!\n";
system("PAUSE");

Execution gets stuck on that statement and never moves forward.

Please use breakpoints or a debugger. This thread explains why you shouldn't use it: system("pause"); - Why is it wrong?
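
Once that line is removed, it is also worth double-checking the Run() call (an observation about the API, not something mentioned above): the numeric arguments are the number of input and output tensors, not the number of model features. With the single input tensor and single output name used in the question's code, both counts should be 1, whereas the posted code passes 5 and 4. A minimal sketch of the corrected call:

    // the counts are tensor counts: one input tensor, one output tensor,
    // even though the model consumes 5 features and produces 4 values
    auto outputValues = session.Run(
        Ort::RunOptions{ nullptr },
        inputNames.data(), &inputOnnxTensor, 1, // 1 input tensor
        outputNames.data(), 1);                 // 1 output tensor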



Happy to support. I'd recommend checking out these samples: https://github.com/microsoft/onnxruntime-inference-examples/tree/main/c_cxx
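
In essence, those samples follow the same pattern as the code in the question. Here is a condensed, minimal sketch of that pattern, assuming a model with a single float input tensor of shape {1, 5} and a single float output tensor (the model path is a placeholder):

    #include <onnxruntime_cxx_api.h>
    #include <iostream>
    #include <vector>

    int main()
    {
        Ort::Env env;
        Ort::Session session{ env, L"C:\\data\\RF.onnx", Ort::SessionOptions{} };

        Ort::AllocatorWithDefaultOptions allocator;
        auto* inputName = session.GetInputName(0, allocator);
        auto* outputName = session.GetOutputName(0, allocator);

        // fixed shape {1, 5}; if the model reports a dynamic dimension (-1),
        // replace it with a concrete batch size before creating the tensor
        std::vector<float> inputValues = { 2, 3, 4, 5, 6 };
        std::vector<int64_t> inputShape = { 1, 5 };

        auto memoryInfo = Ort::MemoryInfo::CreateCpu(OrtDeviceAllocator, OrtMemTypeCPU);
        auto inputTensor = Ort::Value::CreateTensor<float>(memoryInfo,
            inputValues.data(), inputValues.size(),
            inputShape.data(), inputShape.size());

        const char* inputNames[] = { inputName };
        const char* outputNames[] = { outputName };

        // one input tensor and one output tensor, hence both counts are 1
        auto outputs = session.Run(Ort::RunOptions{ nullptr },
            inputNames, &inputTensor, 1, outputNames, 1);

        // read back the output values
        const auto* values = outputs[0].GetTensorMutableData<float>();
        const auto count = outputs[0].GetTensorTypeAndShapeInfo().GetElementCount();
        for (size_t i = 0; i < count; ++i)
            std::cout << values[i] << " ";

        allocator.Free(inputName);
        allocator.Free(outputName);
    }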


Thank you for answering, but unfortunately links that send users elsewhere to find information aren't considered an "answer" on this site. The folks here work hard to curate this collection of knowledge, so when someone finally finds this site through search, the last thing we want to do is send them elsewhere. Can you include the essential parts of this answer in the body of your post? Thanks.
