
Here is a translation of a C+MPI example into Python + NumPy + mpi4py. The goal of this example is to show that the received message is written into memory and that this memory is one-dimensional.

from mpi4py import MPI
import numpy as np
# Init MPI
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
# Define parameters
nb_lines = 6
nb_columns = 5
tag = 100
# Init matrix on each process
a = np.full((nb_lines, nb_columns), rank, dtype=np.float32)
# Define a contiguous type_line
type_line = MPI.FLOAT.Create_contiguous(nb_columns)
type_line.Commit()

if rank == 0:
  # Send first line
  comm.Send([a, 1, type_line], 1, tag)
elif rank == 1:
  # Receive nb_columns floats into a buffer starting at element (nb_lines-2, nb_columns-1)
  comm.Recv([a[nb_lines-2,nb_columns-1:], nb_columns, MPI.FLOAT], 0, tag)
  print(f"{a}")

The output with 2 processes is the same as in C:

[[1. 1. 1. 1. 1.]
 [1. 1. 1. 1. 1.]
 [1. 1. 1. 1. 1.]
 [1. 1. 1. 1. 1.]
 [1. 1. 1. 1. 0.]
 [0. 0. 0. 0. 1.]]
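
To make the one-dimensional layout explicit, a flattened view of the result on rank 1 can be printed: element (4, 4) of the C-ordered 6x5 float32 array sits at linear offset 4*5 + 4 = 24, so the 5 received values occupy positions 24 to 28.

# Flat view of the same memory: 24 ones, then the 5 received zeros
# (linear positions 24 to 28), then one last untouched 1
print(a.reshape(-1))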

Even though it seems to work, I find the syntax a[nb_lines-2,nb_columns-1:] weird, but with the expression a[nb_lines-2,nb_columns-1] I get the error BufferError: scalar buffer is readonly.
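
If I understand correctly, a[nb_lines-2,nb_columns-1] returns a NumPy scalar (a read-only copy of the value), whereas the slice a[nb_lines-2,nb_columns-1:] returns a writable view that shares the matrix memory. A quick check outside of MPI seems to confirm this:

import numpy as np

a = np.zeros((6, 5), dtype=np.float32)

s = a[4, 4]    # NumPy scalar: a copy whose buffer is read-only
v = a[4, 4:]   # 1-element slice: a writable view into a's memory

print(memoryview(s).readonly)  # True  -> unusable as a receive buffer
print(memoryview(v).readonly)  # False
print(v.base is a)             # True  -> shares memory with the matrix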

I have not found any example that uses an index in a communication buffer, so my question is: what is the correct way to target a specific index in a communication buffer with mpi4py + NumPy?
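
For reference, one workaround I can imagine (I do not know whether it is the recommended way) is to take a flat 1-D view of the matrix with reshape(-1) and slice it from the linear offset of the target element, so that the receive buffer is an ordinary writable NumPy slice:

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

nb_lines = 6
nb_columns = 5
tag = 100
a = np.full((nb_lines, nb_columns), rank, dtype=np.float32)

if rank == 0:
  # Send the first line (nb_columns floats)
  comm.Send([a[0, :], nb_columns, MPI.FLOAT], dest=1, tag=tag)
elif rank == 1:
  # Flat view of the same memory; slicing it gives a writable buffer
  # starting exactly at element (nb_lines-2, nb_columns-1)
  flat = a.reshape(-1)
  offset = (nb_lines - 2) * nb_columns + (nb_columns - 1)
  comm.Recv([flat[offset:offset + nb_columns], nb_columns, MPI.FLOAT], source=0, tag=tag)
  print(a)

This keeps the receive buffer inside the bounds of the array (positions 24 to 28) and gives the same output as above, but I do not know whether it is better than passing a 2-D slice directly.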

  • Does the syntax a[nb_lines-1,0:1] work, since that's not a scalar? Commented Jun 27 at 14:52
  • Yes, the syntax a[nb_lines-2,nb_columns-1:nb_columns] works Commented Jun 27 at 15:20
  • Without the slice you're just passing a scalar, not a NumPy array anymore. Commented Jun 27 at 15:33
