
I want to implement a system with one receiver and multiple senders. Each sender keeps sending data to the receiver, and the receiver waits for data and processes it. Here is a toy example:

#include <iostream>
#include <cstdlib>
#include <unistd.h>   // for sleep(), used when throttling the senders
#include <mpi.h>
using namespace std;

int main(int argc, char *argv[]) {
    int _mpi_numWorkers, _mpi_rank;

    // Initialize MPI
    MPI_Init(&argc, &argv);

    MPI_Comm_size(MPI_COMM_WORLD, &_mpi_numWorkers);
    MPI_Comm_rank(MPI_COMM_WORLD, &_mpi_rank);

    MPI_Barrier(MPI_COMM_WORLD);

    float *send_data = (float *)malloc(360 * 5000 * sizeof(float));
    MPI_Status receive_status;

    if (_mpi_rank == 0) {
        while (1) {
            MPI_Recv(send_data, 360 * 5000, MPI_FLOAT, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &receive_status);

            cout << "Receive from " << receive_status.MPI_SOURCE << endl;
        }
    } else {

        while (1){
            MPI_Send(send_data, 360 * 5000, MPI_FLOAT, 0, 0, MPI_COMM_WORLD);

            //sleep(1);
        }
    }

    // Terminate
    MPI_Finalize();
    return 0;
}

The issue is that MPI_Recv only ever receives messages from up to two of the senders, no matter how many processes I run with (when there is no sleep). I have tested this code both on a single machine and across multiple machines:

Single Machine Case

I run this code through the following command:

mpiexec -n 5 ./test_mpi

Then the receiver only receives from the senders with ranks 1 and 2.

Multiple Machine Case

I run 4 senders and 1 receiver on 5 homogeneous physical machines, all connected to a 100 Mbps switch. In this case, the receiver also only receives data from a subset of the senders. I used tcpdump to inspect the packets and observed that some senders do not even send their messages. (Those senders are blocked in MPI_Send, but their TCP sequence numbers do not increase and there are no retransmissions.)

In both cases, if I make each sender sleep for some time (decreasing the sending rate), the receiver receives data from more senders.

Can somebody help me understand why this happens?

Environment

Debian testing with openmpi-1.6

Edit 2/4/16

I have added #include <cstdlib> to the code to prevent compilation issues.

  • Well, including <cstdlib> helps to define malloc, and running it as mpirun -np 5 test_mpi seems to work fine for me (compiled with mpic++ test_mpi.cpp -o test_mpi). Commented Feb 4, 2016 at 14:16
  • Thanks for the reply. Do you mean it works because you can see all four senders' data continuously? Commented Feb 4, 2016 at 22:44

1 Answer


MPI provides no fairness guarantee in that respect; see, e.g.,

http://www.mcs.anl.gov/research/projects/mpi/tutorial/gropp/node92.html#Node92

That means what you see is perfectly "legal" from MPI's point of view. One page further on from the link above, there is a snippet that is supposed to help with this issue. In short, you have to post (asynchronous) receives for each possible sender manually and then handle the completed ones in a manner that looks "fair" to you.

http://www.mcs.anl.gov/research/projects/mpi/tutorial/gropp/node93.html#Node93
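
For concreteness, below is a minimal sketch of that pattern adapted to the example in the question (rank 0 receives, ranks 1..size-1 send, same 360 * 5000 float buffers). It is only an illustration under those assumptions, not the tutorial's exact code: the receiver posts one MPI_Irecv per sender and then services whatever has completed with MPI_Waitsome, re-posting each receive as soon as it is consumed.

// Minimal sketch of the fair-receive pattern, adapted to the question's example.
#include <iostream>
#include <mpi.h>
using namespace std;

int main(int argc, char *argv[]) {
    int size, rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int count = 360 * 5000;

    if (rank == 0) {
        int nsenders = size - 1;
        float **bufs = new float*[nsenders];
        MPI_Request *requests = new MPI_Request[nsenders];
        int *indices = new int[nsenders];
        MPI_Status *statuses = new MPI_Status[nsenders];

        // Post one non-blocking receive per sender (sender j uses rank j+1).
        for (int j = 0; j < nsenders; ++j) {
            bufs[j] = new float[count];
            MPI_Irecv(bufs[j], count, MPI_FLOAT, j + 1, MPI_ANY_TAG,
                      MPI_COMM_WORLD, &requests[j]);
        }

        while (1) {
            int ndone;
            // MPI_Waitsome returns every request that has completed, so a
            // fast sender cannot starve the slower ones.
            MPI_Waitsome(nsenders, requests, &ndone, indices, statuses);
            for (int i = 0; i < ndone; ++i) {
                int j = indices[i];
                cout << "Receive from " << statuses[i].MPI_SOURCE << endl;
                // Immediately re-post the receive for that sender.
                MPI_Irecv(bufs[j], count, MPI_FLOAT, statuses[i].MPI_SOURCE,
                          MPI_ANY_TAG, MPI_COMM_WORLD, &requests[j]);
            }
        }
    } else {
        float *send_data = new float[count];
        while (1)
            MPI_Send(send_data, count, MPI_FLOAT, 0, 0, MPI_COMM_WORLD);
    }

    // Never reached in this toy example, just like the original.
    MPI_Finalize();
    return 0;
}

If the receiver has other work to do between messages, MPI_Testsome can be used instead of MPI_Waitsome so the loop does not block.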


Comments

The tutorial points me in the right direction. However, the code in Node 93 has a bug: MPI_Irecv( buf+j, 1, MPI_INT, j, MPI_ANY_TAG, MPI_COMM_WORLD, &requests[j] ); is not correct. It should be MPI_Irecv( buf+j+1, 1, MPI_INT, j+1, MPI_ANY_TAG, MPI_COMM_WORLD, &requests[j] );
@Sunghlin, at best j+1 should be replaced with statuses[i].MPI_SOURCE to prevent confusion.
@Sunghlin The tutorial example is probably a port of the Fortran version (next page), and Fortran indices usually start at 1 instead of 0. Good catch anyway :-) PS: I didn't try any of this out, just googled.
@hristolliev You are right. At the time, I just wanted to make the modification as small as possible.
