
I am trying to write a custom functor to carry out a sum. I followed this question, Cuda Thrust Custom function, for reference. Here is how I have defined my functor:

struct hashElem
{
    int freq;
    int error;
};
//basically this function adds some value to the error field of each element
struct hashErrorAdd{
    const int error;

    hashErrorAdd(int _error): error(_error){}

    __host__ __device__
    struct hashElem operator()(const hashElem& o1,const int& o2)
    {
            struct hashElem o3;
            o3.freq = o1.freq;
            o3.error = o1.error + (NUM_OF_HASH_TABLE-o2)*error;   //NUM_OF_HASH_TABLE is a constant
            return o3;
    }
};

struct hashElem freqError[SIZE_OF_HASH_TABLE*NUM_OF_HASH_TABLE];
int count[SIZE_OF_HASH_TABLE*NUM_OF_HASH_TABLE];

thrust::device_ptr<struct hashElem> d_freqError(freqError); 
thrust::device_ptr<int> d_count(count);

thrust::transform(thrust::device,d_freqError,d_freqError+new_length,d_count,hashErrorAdd(perThreadLoad)); //new_length is a constant

This code on compilation gives the following error:

error: function "hashErrorAdd::operator()" cannot be called with the given argument list

argument types are: (hashElem)

object type is: hashErrorAdd

Can anybody please explain why I am getting this error, and how I can resolve it? Please comment in case I have not explained the problem clearly. Thank you.

1 Answer


It appears that you want to pass two input vectors to thrust::transform and then do an in-place transform (i.e. no output vector is specified).

There is no such incarnation of thrust::transform.

Since you have passed:

thrust::transform(vector_first, vector_last, vector_first, operator);

The closest matching prototype is the version of transform that takes one input vector and produces one output vector. In that case, you would need to pass a unary op that takes the input vector's value type (hashElem) as its only argument and returns a type appropriate for the output vector, which is int in this case, i.e. as you have written the call (though not as you intended). Your operator() does not do that, so it cannot be called with the arguments that thrust is expecting to pass to it.
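For illustration, here is a minimal, self-contained sketch (with made-up data, not taken from the question) of a unary functor that *would* match the call as written, i.e. hashElem in, int out:

```cuda
#include <thrust/device_vector.h>
#include <thrust/transform.h>
#include <cassert>

struct hashElem { int freq; int error; };

// A unary functor with the shape the one-input/one-output overload
// expects here: hashElem in, int out (since the range passed as the
// third argument, d_count, becomes the *output* range).
struct extractError {
    __host__ __device__
    int operator()(const hashElem& e) const { return e.error; }
};

int main() {
    thrust::device_vector<hashElem> d_in(2);
    d_in[0] = hashElem{1, 7};
    d_in[1] = hashElem{2, 9};
    thrust::device_vector<int> d_out(2);

    // One input range, one output range, unary op: this compiles.
    thrust::transform(d_in.begin(), d_in.end(), d_out.begin(), extractError());

    assert(d_out[0] == 7 && d_out[1] == 9);
    return 0;
}
```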

As I see it, you have a couple options:

  1. You could switch to the version of transform that takes two input vectors and produces one output vector, and create a binary op as functor.

  2. You could zip together your two input vectors, and do an in-place transform if that is what you want. Your functor would then be a unary op, but it would take as argument whatever tuple was created from dereferencing the input vector, and it would have to return or modify the same kind of tuple.
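For reference, option 1 might look like the following sketch. The sizes, data, and the value of NUM_OF_HASH_TABLE are illustrative, not from the question; the functor is the questioner's, reused unchanged with the binary-op overload of thrust::transform:

```cuda
#include <thrust/device_vector.h>
#include <thrust/transform.h>
#include <cassert>

#define NUM_OF_HASH_TABLE 4   // illustrative value

struct hashElem { int freq; int error; };

struct hashErrorAdd {
    const int error;
    hashErrorAdd(int _error) : error(_error) {}
    __host__ __device__
    hashElem operator()(const hashElem& o1, const int& o2) const {
        hashElem o3;
        o3.freq  = o1.freq;
        o3.error = o1.error + (NUM_OF_HASH_TABLE - o2) * error;
        return o3;
    }
};

int main() {
    thrust::device_vector<hashElem> d_freqError(2);
    d_freqError[0] = hashElem{1, 10};
    d_freqError[1] = hashElem{2, 20};
    thrust::device_vector<int> d_count(2);
    d_count[0] = 1;
    d_count[1] = 3;

    // Two input ranges, one output range, binary op.
    // Writing back into d_freqError makes the transform in-place.
    thrust::transform(d_freqError.begin(), d_freqError.end(),
                      d_count.begin(),
                      d_freqError.begin(),
                      hashErrorAdd(5));

    hashElem r0 = d_freqError[0];
    hashElem r1 = d_freqError[1];
    assert(r0.freq == 1 && r0.error == 10 + (4 - 1) * 5);  // 25
    assert(r1.freq == 2 && r1.error == 20 + (4 - 3) * 5);  // 25
    return 0;
}
```

Option 2 would instead build the input range with thrust::make_zip_iterator over d_freqError and d_count, and the functor would become a unary op on the resulting tuple.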

As an aside, your method of creating device pointers directly from host arrays looks broken to me. You may wish to review the thrust quick start guide.
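To illustrate the aside: thrust::device_ptr should only wrap a pointer that actually refers to device memory. A minimal sketch of the two correct patterns (assuming a CUDA-capable device is present):

```cuda
#include <thrust/device_vector.h>
#include <thrust/device_ptr.h>
#include <thrust/fill.h>
#include <thrust/reduce.h>
#include <cuda_runtime.h>
#include <cassert>

int main() {
    // Option A: let Thrust own the device allocation.
    thrust::device_vector<int> d_vec(8, 2);
    assert(thrust::reduce(d_vec.begin(), d_vec.end()) == 16);

    // Option B: wrap memory that really was allocated on the device
    // with cudaMalloc (wrapping a host array, as in the question, is invalid).
    int* raw = nullptr;
    cudaMalloc(&raw, 8 * sizeof(int));
    thrust::device_ptr<int> d_ptr(raw);
    thrust::fill(d_ptr, d_ptr + 8, 3);
    assert(thrust::reduce(d_ptr, d_ptr + 8) == 24);
    cudaFree(raw);
    return 0;
}
```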


3 Comments

Oops, you are right. I used the other version of the transform, which uses the binary function. Now it's working fine. Thank you. Regarding the creation of device pointers: in the documentation they have done a malloc on a raw pointer and then wrapped the raw pointer with device_ptr. I skipped the malloc step and just declared the array.
Could you point out in the documentation where there is a malloc on a pointer followed by wrapping that pointer with device_ptr? (Or do you mean cudaMalloc ?)
No, I meant malloc; actually, I had misread the "cudaMalloc" in the documentation as malloc. Got your point, thanks again for pointing it out. I had read the comment in the documentation that said "raw pointer to device memory" three times and could not figure out how a malloc could yield a pointer to "device" memory. I had thought that the wrapping step (although weird) did that.
