
I am training a linear model using PyTorch and saving it to a file with the save call. I have separate code that loads the model in C++ and performs inference. I would like to instruct the Torch C++ library (libtorch) to use a specific memory blob for the final output tensor. Is this even possible? If yes, how? Below you can see a small example of what I am trying to achieve.

#include <cstdlib>
#include <iostream>
#include <memory>

#include <torch/script.h>

int main(int argc, const char* argv[]) {
  if (argc != 3) {
    std::cerr << "usage: example-app <path-to-exported-script-module> <size-in-MB>\n";
    return -1;
  }
  // Number of floats that fit in argv[2] megabytes.
  long numElements = (1024*1024)/sizeof(float) * atoi(argv[2]);

  float *a = new float[numElements];   // input column 1
  float *b = new float[numElements];   // input column 2
  float *c = new float[numElements*4]; // preallocated output buffer

  for (int i = 0; i < numElements; i++){
    a[i] = i;
    b[i] = -i;
  }

  //auto options = torch::TensorOptions().dtype(torch::kFloat64);
  // Wrap the raw buffers without copying; these tensors alias a, b and c.
  at::Tensor a_t = torch::from_blob(a, {numElements,1});
  at::Tensor b_t = torch::from_blob(b, {numElements,1});
  at::Tensor out = torch::from_blob(c, {numElements,4});

  at::Tensor c_t = at::cat({a_t,b_t}, 1);             // shape {numElements, 2}
  at::Tensor d_t = at::reshape(c_t, {numElements,2}); // already {numElements, 2}

  torch::jit::script::Module module;
  try {
    module = torch::jit::load(argv[1]);
  }
  catch (const c10::Error& e) {
    std::cerr << "error loading the model: " << e.what() << "\n";
    return -1;
  }


  // NOTE: this rebinds `out` to the model's output tensor;
  // the result is not written into the memory behind `c`.
  out = module.forward({d_t}).toTensor();
  std::cout << out.sizes() << "\n";

  delete [] a;
  delete [] b;
  delete [] c;

  return 0;
}

So, I allocate memory into "c" and then create a tensor out of this memory, which I store in a tensor named "out". I load the model and call its forward method. I observe that the resulting data are copied/moved into the "out" tensor, i.e. "out" is rebound to a freshly allocated result tensor. However, I would like to instruct Torch to store the result directly into the memory behind "out". Is this possible?
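For reference, the rebinding can be observed by comparing data pointers before and after the call; a minimal check, assuming the setup above:

// With the plain assignment, `out` no longer aliases `c`:
// forward() returns a newly allocated tensor and `out` is rebound to it.
out = module.forward({d_t}).toTensor();
std::cout << (out.data_ptr<float>() == c) << "\n"; // prints 0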

1 Answer


Somewhere in the libtorch source code (I don't remember where; I'll try to find the file), there is an assignment operator that looks something like the following (notice the trailing &&):

torch::Tensor& operator=(torch::Tensor rhs) &&;

and it does what you need, if I remember correctly. Basically, torch assumes that if you assign a tensor rhs to an rvalue reference to a tensor, then you actually mean to copy rhs into the underlying storage.
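A tiny self-contained sketch of this behavior (not from the original answer; the buffer and shapes here are just illustrative):

#include <torch/torch.h>
#include <iostream>

int main() {
  float buf[4] = {0.f, 0.f, 0.f, 0.f};
  at::Tensor view = torch::from_blob(buf, {4});

  // Assigning to an rvalue tensor copies into its existing storage
  // instead of rebinding the handle.
  std::move(view) = torch::ones({4});

  std::cout << buf[0] << "\n"; // prints 1: the data landed in buf
  return 0;
}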

So in your case, that would be

std::move(out) = module.forward({d_t}).toTensor();

or

torch::from_blob(c, {numElements,4}) = module.forward({d_t}).toTensor();
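Putting it together with the question's example: after the move-assignment, out still wraps c and the inference results land directly in that buffer. A minimal check, assuming the module's output shape matches out's {numElements, 4}:

// Copy the model's output into the memory backing `out` (the buffer `c`).
std::move(out) = module.forward({d_t}).toTensor();

// `out` still aliases the original buffer ...
std::cout << (out.data_ptr<float>() == c) << "\n"; // prints 1
// ... and `c` itself now holds the inference results.
std::cout << c[0] << "\n";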

1 Comment

Verified that this works! Shocked this is the only documentation I have found about this anywhere.
