
I have built a very simple custom op, zero_out.dll, with Bazel, and it works when used from Python:

import tensorflow as tf
zero_out_module = tf.load_op_library('./zero_out.dll')
with tf.Session(''):
  zero_out_module.zero_out([[1, 2], [3, 4]]).eval()

However, I have to run the inference using C++. Is there a C++ API with functionality similar to tf.load_op_library? A lot of registration work seems to be done inside tf.load_op_library, but TF appears to have no C++ counterpart.

  • Possible duplicate of Dynamically load a function from a DLL. DLLs and this kind of functionality are handled through the Windows API. Commented Sep 9, 2019 at 16:15
  • @Romen It is not really a duplicate, as TensorFlow library loading does some special additional work of registering the loaded ops/kernels. Commented Sep 9, 2019 at 16:38
  • @jdehsa I think that may be a helpful link anyway, then. I can't find an equivalent method either for loading ops from a library in the C++ TensorFlow API. Commented Sep 9, 2019 at 16:42

1 Answer


While there does not seem to be a public API for this in C++, the library loading functions are exposed in the TensorFlow C API (which is what tf.load_op_library uses under the hood). There is no "nice" documentation for them, but you can find them in c/c_api.h:

// --------------------------------------------------------------------------
// Load plugins containing custom ops and kernels

// TF_Library holds information about dynamically loaded TensorFlow plugins.
typedef struct TF_Library TF_Library;

// Load the library specified by library_filename and register the ops and
// kernels present in that library.
//
// Pass "library_filename" to a platform-specific mechanism for dynamically
// loading a library. The rules for determining the exact location of the
// library are platform-specific and are not documented here.
//
// On success, place OK in status and return the newly created library handle.
// The caller owns the library handle.
//
// On failure, place an error status in status and return NULL.
TF_CAPI_EXPORT extern TF_Library* TF_LoadLibrary(const char* library_filename,
                                                 TF_Status* status);

// Get the OpList of OpDefs defined in the library pointed by lib_handle.
//
// Returns a TF_Buffer. The memory pointed to by the result is owned by
// lib_handle. The data in the buffer will be the serialized OpList proto for
// ops defined in the library.
TF_CAPI_EXPORT extern TF_Buffer TF_GetOpList(TF_Library* lib_handle);

// Frees the memory associated with the library handle.
// Does NOT unload the library.
TF_CAPI_EXPORT extern void TF_DeleteLibraryHandle(TF_Library* lib_handle);

These functions do actually call C++ code (see the source in c/c_api.cc). However, the called function, defined in core/framework/load_library.cc, does not have a header you can include. The workaround to use it in C++ code, which c/c_api.cc itself uses, is to declare the function yourself and link against the TensorFlow library:

namespace tensorflow {
// Helpers for loading a TensorFlow plugin (a .so file).
Status LoadLibrary(const char* library_filename, void** result,
                   const void** buf, size_t* len);
}

As far as I can tell, there is no API to unload the library; the C API only lets you delete the library handle object. This is done just by freeing the pointer, but if you want to avoid trouble you should probably use the freeing function provided by TensorFlow, tensorflow::port::Free, declared in core/platform/mem.h. Again, if you cannot or do not want to include that header, you can declare the function yourself and it should work as well:

namespace tensorflow {
namespace port {
void Free(void* ptr);
}
}

Comments

Very useful info! I tried to load the lib using TF_LoadLibrary: #include "c_api.h"; TF_Status* status_load = TF_NewStatus(); TF_Library* lib_handle = TF_LoadLibrary("C:\\custom_op\\zero_out.dll", status_load); cout << "message: " << TF_Message(status_load) << endl; But it reports that zero_out.dll was not found. BTW, I built the TF lib and the custom op DLL using Bazel on Windows. It seems some dependencies are not found; I have placed tensorflow.dll beside zero_out.dll.
@7oud Strange, but I haven't tried these myself so I don't know what issues could arise. If you look at the impl of LoadLibrary for Windows, in core/platform/windows/env.cc, it seems you should indeed use an absolute path (as LOAD_WITH_ALTERED_SEARCH_PATH is used)... but it seems that the error you get really just means that LoadLibraryExW failed, so it could be something else.
@jdejesa I found that python36.dll is needed, even though it is not used. Now TF_Code code = TF_GetCode(status_load); returns TF_OK. Then I use TF_Buffer op_list_buf = TF_GetOpList(lib_handle); to get the op list, but op_list_buf.length returns 0. Could you give a code snippet for getting the interface function of the custom op lib?
@7oud Strange that you need the Python DLL; the point of the C lib is to be self-contained. Anyway, about TF_GetOpList, I haven't used these myself, so I'm not sure. You are supposed to build an OpList message from the data in the TF_Buffer (e.g. with ParseFromArray); if the length is zero, I guess it's an empty list. Maybe there is a problem registering the op?
@Fisa No, if you are using tf.load_op_library from Python, you don't need to do anything else, that is part of the normal TensorFlow distribution for Python. The C API is only for the case where you want to run a model using a different programming language, e.g. C or C++, and need to use operations from an external library (so you need a way to do the equivalent of tf.load_op_library in that language).