MPIContainer Class Reference (final)

This class contains all information and functions required for communication between processors. More...

#include <MpiContainer.h>

Public Member Functions

void initialiseMercuryMPITypes (const SpeciesHandler &speciesHandler)
 Creates the MPI types required for communication of Mercury data through the MPI interface. More...
 
void sync ()
 Process all pending asynchronous communication requests before continuing. More...
 
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type send (T &t, int to, int tag)
 Asynchronously send a scalar to some other processor. More...
 
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type send (T *t, int count, int to, int tag)
 
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type receive (T &t, int from, int tag)
 Asynchronously receive a scalar from some other processor. More...
 
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type receive (T *t, int count, int from, int tag)
 
template<typename T >
void send (T *t, MercuryMPIType type, int count, int to, int tag)
 Asynchronously send a list of MercuryMPIType objects to some other processor. More...
 
template<typename T >
void receive (T *t, MercuryMPIType type, int count, int from, int tag)
 Asynchronously receive a list of MercuryMPIType objects from some other processor. More...
 
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type directSend (T &t, int count, int to, int tag)
 Synchronously send a list of scalars to another processor. The data must be received directly or the program will stall. More...
 
template<typename T >
void directSend (T *t, MercuryMPIType type, int count, int to, int tag)
 
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type directReceive (T &t, int count, int from, int tag)
 Synchronously receive a list of scalars from another processor. If the send command has not been issued, this function will stall the program. More...
 
template<typename T >
void directReceive (T *t, MercuryMPIType type, int count, int from, int tag)
 
template<typename T >
void gather (T &send_t, T *receive_t)
 Gathers a scalar from all processors to a vector of scalars on the root. More...
 
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type broadcast (T &t, int fromProcessor=0)
 Broadcasts a scalar from the root to all other processors. More...
 
template<typename T >
std::enable_if< std::is_scalar< T >::value, void >::type broadcast (T *t, int size, int fromProcessor)
 Broadcasts an array of scalars from the root to all other processors. More...
 
template<typename T >
void broadcast (T *t, MercuryMPIType type, int fromProcessor=0)
 Broadcasts a MercuryMPIType object from the root to all other processors. More...
 
std::size_t getProcessorID ()
 Get the unique identifier associated with this processor. More...
 
std::size_t getNumberOfProcessors () const
 Get the total number of processors participating in this simulation. More...
 
template<typename T >
void createMercuryMPIType (T t, MercuryMPIType type)
 Creates an MPI type telling the MPI interface what a data object of type T looks like. More...
 
void deleteMercuryMPITypes ()
 Deletes the MercuryMPITypes. More...
 
 MPIContainer (const MPIContainer &orig)=delete
 Copy constructor is disabled, to enforce a singleton pattern. More...
 

Static Public Member Functions

static MPIContainer & Instance ()
 Fetch the instance to be used for communication. More...
 

Private Member Functions

 MPIContainer ()
 Constructor. More...
 

Private Attributes

int processorID_
 The ID of the processor this class is running on. More...
 
int numberOfProcessors_
 The total number of processors in the communicator. More...
 

Detailed Description

This class contains all information and functions required for communication between processors.

This class contains a newly created MPI communicator that is used to communicate between processors. The reason that MPI_COMM_WORLD is not used directly is that other libraries might also be communicating in parallel, and we do not want to interfere with those communications. The class follows a singleton pattern, restricting it to one instance. It furthermore keeps track of the number of processors, the processor IDs and the asynchronous send/receive requests.
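
The typical entry point is Instance(). A minimal usage sketch, assuming an MPI-enabled build in which MPI has already been initialised (for example through MercuryDPM's initialiseMPI()):

#include <iostream>
#include <MpiContainer.h>

void reportDecomposition()
{
    // Fetch the single, globally shared communication object.
    MPIContainer& communicator = MPIContainer::Instance();

    // Every processor knows its own rank and the size of the communicator.
    std::size_t rank = communicator.getProcessorID();
    std::size_t numberOfProcessors = communicator.getNumberOfProcessors();

    std::cout << "Processor " << rank << " of " << numberOfProcessors << std::endl;
}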

Constructor & Destructor Documentation

◆ MPIContainer() [1/2]

MPIContainer::MPIContainer ( const MPIContainer & orig)
delete

Copy constructor is disabled, to enforce a singleton pattern.

◆ MPIContainer() [2/2]

MPIContainer::MPIContainer ( )
private

Constructor.

Initialise the communicator after MPI has been initialised.

{
#ifdef MERCURYDPM_USE_MPI
    int Mpi_init_flag = 0;
    MPI_Initialized(&Mpi_init_flag);
    if (!Mpi_init_flag)
    {
        logger(FATAL, "MPI should be initialised before calling the MPIContainer constructor");
    }
    //A personal communicator will be created to ensure we don't meddle with communicators of other libraries
    MPI_Group groupID;
    MPI_Comm_group(MPI_COMM_WORLD, &groupID);
    MPI_Comm_create(MPI_COMM_WORLD, groupID, &communicator_);
    MPI_Comm_rank(MPI_COMM_WORLD, &processorID_);
    MPI_Comm_size(MPI_COMM_WORLD, &numberOfProcessors_);
#else
    processorID_ = 0;
#endif
}

References FATAL, logger, numberOfProcessors_, and processorID_.

Member Function Documentation

◆ broadcast() [1/3]

template<typename T >
std::enable_if<std::is_scalar<T>::value, void>::type MPIContainer::broadcast ( T & t,
int  fromProcessor = 0 
)
inline

Broadcasts a scalar from the root to all other processors.

Parameters
[in,out]  t  scalar data that is sent by the root and received by all other processors
[in]  fromProcessor  the processor that broadcasts the data (0 by default)
{
#ifdef MERCURYDPM_USE_MPI
    MPI_Bcast(&t, 1, Detail::toMPIType(t), fromProcessor, communicator_);
#endif
}

Referenced by ParticleHandler::addObject(), InsertionBoundary::checkBoundaryBeforeTimeStep(), RandomClusterInsertionBoundary::checkBoundaryBeforeTimeStep(), DPMBase::checkParticleForInteraction(), PeriodicBoundaryHandler::communicateTargetDomains(), SubcriticalMaserBoundaryTEST::copyExtraParticles(), SubcriticalMaserBoundaryTEST::extendBottom(), ParticleHandler::getNumberOfRealObjects(), DPMBase::mpiIsInCommunicationZone(), RNG::randomise(), DPMBase::synchroniseParticle(), and PeriodicBoundaryHandler::updateParticleStatus().
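
A sketch of how the root processor could share a value it has read with all other processors; the variable names and values are assumptions for illustration:

MPIContainer& communicator = MPIContainer::Instance();
double maximumRadius = 0.0;
if (communicator.getProcessorID() == 0)
{
    maximumRadius = 1e-3;  // e.g. a value only the root has read from file
}
// After the call every processor holds the root's value.
communicator.broadcast(maximumRadius);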

◆ broadcast() [2/3]

template<typename T >
std::enable_if<std::is_scalar<T>::value, void>::type MPIContainer::broadcast ( T * t,
int  size,
int  fromProcessor 
)
inline

Broadcasts an array of scalars from the root to all other processors.

Parameters
[in,out]  t  pointer to the scalar data that is sent by the root and received by all other processors
[in]  size  the number of scalars in the array
[in]  fromProcessor  the processor that broadcasts the data
{
#ifdef MERCURYDPM_USE_MPI
    MPI_Bcast((void *) t, size, Detail::toMPIType(t[0]), fromProcessor, communicator_);
#endif
}
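
A sketch of broadcasting a small buffer of scalars with this overload; the array length and contents are assumptions:

MPIContainer& communicator = MPIContainer::Instance();
double domainBounds[6] = {0.0, 1.0, 0.0, 1.0, 0.0, 1.0};  // filled by the root
// Broadcast all six values from processor 0 to every other processor.
communicator.broadcast(domainBounds, 6, 0);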

◆ broadcast() [3/3]

template<typename T >
void MPIContainer::broadcast ( T * t,
MercuryMPIType  type,
int  fromProcessor = 0 
)
inline

Broadcasts a MercuryMPIType object from the root to all other processors.

Parameters
[in,out]  t  MercuryMPIType data that is sent by the root and received by all other processors
[in]  type  the MercuryMPIType of the data, for instance an MPIParticle
[in]  fromProcessor  the processor that broadcasts the data (0 by default)
{
#ifdef MERCURYDPM_USE_MPI
    MPI_Bcast((void *) t, 1, dataTypes_[type], fromProcessor, communicator_);
#endif
}

◆ createMercuryMPIType()

template<typename T >
void MPIContainer::createMercuryMPIType ( T  t,
MercuryMPIType  type 
)
inline

Creates an MPI type telling the MPI interface what a data object of type T looks like.

Creates the MPI type that tells the MPI interface how each data object is laid out in memory.

NOTE: The current manner of creating MPI data types might not be compatible when computing on different computers with different architectures and compilers. The "padding" that different compilers and computers add to the class might differ, which has a huge effect on the MPI data type because it needs to be consistent over all computers. To resolve this, one can create a consistent data type for each required type. I wish good luck to the person that needs it, because it is a lot of typing work and for now I am not doing it. /MX

{
#ifdef MERCURYDPM_USE_MPI
    MPI_Datatype MPIType;
    MPI_Type_contiguous(sizeof(T), MPI_BYTE, &MPIType);
    MPI_Type_commit(&MPIType);
    dataTypes_.push_back(MPIType);
#endif
}

Referenced by Interaction< NormalForceInteraction, FrictionForceInteraction, AdhesiveForceInteraction >::createMPIType(), and initialiseMercuryMPITypes().
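
A minimal sketch of how a Mercury data class could be registered, mirroring what initialiseMercuryMPITypes() does for the particle data classes; the exact enum value used here is an assumption:

MPIContainer& communicator = MPIContainer::Instance();
MPIParticlePosition dummyPosition;
// Builds a contiguous MPI type of sizeof(MPIParticlePosition) bytes and commits it.
// Registration must happen in the order given by the MercuryMPIType enum.
communicator.createMercuryMPIType(dummyPosition, MercuryMPIType::POSITION);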

◆ deleteMercuryMPITypes()

void MPIContainer::deleteMercuryMPITypes ( )
inline

Deletes the MercuryMPITypes.

{
#ifdef MERCURYDPM_USE_MPI
    for (MPI_Datatype type : dataTypes_)
    {
        MPI_Type_free(&type);
    }
#endif
}


Referenced by initialiseMPI().

◆ directReceive() [1/2]

template<typename T >
std::enable_if<std::is_scalar<T>::value, void>::type MPIContainer::directReceive ( T & t,
int  count,
int  from,
int  tag 
)
inline

Synchronously receive a list of scalars from another processor. If the send command has not been issued, this function will stall the program.

Parameters
[in,out]  t  the data, a list of scalars
[in]  count  the number of scalars to be received
[in]  from  the processor that sends the information
[in]  tag  a unique identifier that corresponds with a send command by the sending processor
{
#if MERCURYDPM_ASSERTS
    if (from == processorID_)
    {
        logger(FATAL, "[MPI FATAL]: Receiving data from self!");
    }
    if (count == 0)
    {
        logger(WARN, "[MPI ERROR]: Receiving zero data");
    }
#endif
#ifdef MERCURYDPM_USE_MPI
    MPI_Recv(&t, count, Detail::toMPIType(t), from, tag, communicator_, MPI_STATUS_IGNORE);
#endif
}
References FATAL, logger, processorID_, and WARN.

◆ directReceive() [2/2]

template<typename T >
void MPIContainer::directReceive ( T * t,
MercuryMPIType  type,
int  count,
int  from,
int  tag 
)
inline
{
#if MERCURYDPM_ASSERTS
    if (from == processorID_)
    {
        logger(FATAL, "[MPI FATAL]: Receiving data from self!");
    }
    if (count == 0)
    {
        logger(WARN, "[MPI ERROR]: Receiving zero data");
    }
#endif
#ifdef MERCURYDPM_USE_MPI
    MPI_Recv(t, count, dataTypes_[type], from, tag, communicator_, MPI_STATUS_IGNORE);
#endif
}

References FATAL, logger, processorID_, and WARN.

◆ directSend() [1/2]

template<typename T >
std::enable_if<std::is_scalar<T>::value, void>::type MPIContainer::directSend ( T & t,
int  count,
int  to,
int  tag 
)
inline

Synchronously send a list of scalars to another processor. The data must be received directly or the program will stall.

Parameters
[in,out]  t  the data, a list of scalars
[in]  count  the number of scalars to be sent
[in]  to  the processor that the data is being sent to
[in]  tag  a unique identifier that corresponds with a receive command by the receiving processor
{
#if MERCURYDPM_ASSERTS
    if (to == processorID_)
    {
        logger(FATAL, "[MPI FATAL]: Sending data to self!");
    }
    if (count == 0)
    {
        logger(WARN, "[MPI ERROR]: Sending zero data");
    }
#endif
#ifdef MERCURYDPM_USE_MPI
    MPI_Ssend(&t, count, Detail::toMPIType(t), to, tag, communicator_);
#endif
}

References FATAL, logger, processorID_, and WARN.

◆ directSend() [2/2]

template<typename T >
void MPIContainer::directSend ( T * t,
MercuryMPIType  type,
int  count,
int  to,
int  tag 
)
inline
{
#if MERCURYDPM_ASSERTS
    if (to == processorID_)
    {
        logger(FATAL, "[MPI FATAL]: Sending data to self!");
    }
    if (count == 0)
    {
        logger(WARN, "[MPI ERROR]: Sending zero data");
    }
#endif
#ifdef MERCURYDPM_USE_MPI
    MPI_Ssend(t, count, dataTypes_[type], to, tag, communicator_);
#endif
}

References FATAL, logger, processorID_, and WARN.
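
A sketch of a blocking exchange between two processors using a matching directSend/directReceive pair; the processor indices, count and tag value are assumptions:

MPIContainer& communicator = MPIContainer::Instance();
int localCount = 100;
if (communicator.getProcessorID() == 0)
{
    // Blocks until processor 1 has posted the matching directReceive.
    communicator.directSend(localCount, 1, 1, 42);
}
else if (communicator.getProcessorID() == 1)
{
    int remoteCount = 0;
    communicator.directReceive(remoteCount, 1, 0, 42);
}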

◆ gather()

template<typename T >
void MPIContainer::gather ( T & send_t,
T * receive_t 
)
inline

Gathers a scalar from all processors to a vector of scalars on the root.

When a single processor needs to know a certain value on all processors, this function gathers them together at the root processor in an array whose length equals the communicator size.

Parameters
[in,out]  send_t  the data that is sent, a scalar value
[in,out]  receive_t  the data that is received by the root, an array of scalars
{
#ifdef MERCURYDPM_USE_MPI
    MPI_Gather(&send_t, 1, Detail::toMPIType(send_t), receive_t, 1, Detail::toMPIType(send_t), 0, communicator_);
#endif
}

Referenced by DPMBase::checkParticleForInteraction(), and DPMBase::mpiIsInCommunicationZone().
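
A sketch of collecting one value per processor on the root; the use of a particle count is an assumption for illustration:

#include <vector>

MPIContainer& communicator = MPIContainer::Instance();
unsigned localNumberOfParticles = 0;  // e.g. the number of particles on this processor
std::vector<unsigned> counts(communicator.getNumberOfProcessors());
// Every processor contributes one value; only the root's array is meaningful afterwards.
communicator.gather(localNumberOfParticles, counts.data());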

◆ getNumberOfProcessors()

std::size_t MPIContainer::getNumberOfProcessors ( ) const

Get the total number of processors participating in this simulation.

◆ getProcessorID()

std::size_t MPIContainer::getProcessorID ( )

Obtain the current processor ID.

Get the unique identifier associated with this processor.

Returns
Returns the processor ID in the communication group

The source file additionally documents the related collective operations reduce, allReduce and allGather:

reduce: Reduces a scalar on all processors to one scalar on a target processor. A scalar defined on all processors is reduced to one number by an operation; examples of operations are MPI::MAX, MPI::MIN and MPI::SUM, giving the maximum, minimum or the summation of the given scalars. The resulting reduced scalar is then sent to the target processor id; generally this is the root processor 0.

Parameters
[in,out]  t  Scalar that needs to be collected and reduced to one scalar
[in]  operation  Operation that is performed on the collected scalars
[in]  id  Optional input, receiving processor of the reduced scalar

allReduce: Reduces a scalar on all processors by a given MPI operation. A local scalar on all processors is reduced to one scalar as output; the reduction follows the given operation rule.

Parameters
[in]  send_t  the scalar that is sent to be reduced
[out]  receive_t  the reduced scalar
[in]  operation  The operation that is performed on all local numbers to reduce them to a single number

allGather: Takes a (different) scalar from all processors and returns a vector with all scalars. Sometimes it is required to share a local scalar, such as the number of particles, with all other processors; with allGather all processors then know the local scalar of all other processors.

Parameters
[in]  send_t  The scalar that is sent to all other processors
[in]  send_count  the number of scalars sent to other processors
[in,out]  receive_t  A vector of scalars that contains the scalars from all other processors
[in]  receive_count  the number of scalars that is received by each processor
{
    return processorID_;
}

References processorID_.

Referenced by ParticleHandler::addGhostObject(), ParticleHandler::addObject(), CGHandler::evaluateDataFiles(), CGHandler::evaluateRestartFiles(), PeriodicBoundaryHandler::preparePositionAndVelocityUpdate(), printError(), printFatalError(), printInfo(), DPMBase::printTime(), PrintWallTimeMixin::printTime(), printWarn(), and DPMBase::synchroniseParticle().

◆ initialiseMercuryMPITypes()

void MPIContainer::initialiseMercuryMPITypes ( const SpeciesHandler & speciesHandler)

Creates the MPI types required for communication of Mercury data through the MPI interface.

{
#ifdef MERCURYDPM_USE_MPI
    //Note: Important that the MPI type creation is done in the order given by the enum
    MPIParticle dummyParticle;
    MPIParticlePosition dummyPosition;
    MPIParticleVelocity dummyVelocity;
    MPIParticleForce dummyForce;

    //Obtain the correct history force class
    //NOTE: only works for one type of interaction
    if (dataTypes_.size() == 4)
    {
        //Create a dummy interaction to get a grip on the size of the MPI interaction class
        const BaseSpecies* species = speciesHandler.getObject(0);
        BaseInteraction* emptyInteraction = species->getEmptyInteraction();
        emptyInteraction->createMPIType();
        species->deleteEmptyInteraction(emptyInteraction);
    }
#endif
}

References createMercuryMPIType(), BaseInteraction::createMPIType(), BaseSpecies::deleteEmptyInteraction(), FORCE, BaseSpecies::getEmptyInteraction(), BaseHandler< T >::getObject(), PARTICLE, POSITION, and VELOCITY.

Referenced by DPMBase::decompose().

◆ Instance()

static MPIContainer& MPIContainer::Instance ( )
inlinestatic

Fetch the instance to be used for communication.

Returns
The only instance of this class
{
    static MPIContainer theInstance;
    return theInstance;
}

Referenced by ParticleHandler::addGhostObject(), Domain::addNewParticles(), ParticleHandler::addObject(), Domain::addParticle(), InsertionBoundary::checkBoundaryBeforeTimeStep(), RandomClusterInsertionBoundary::checkBoundaryBeforeTimeStep(), DPMBase::checkParticleForInteraction(), MercuryBase::checkParticleForInteraction(), CircularPeriodicBoundary::CircularPeriodicBoundary(), PeriodicBoundaryHandler::communicateNumberOfNewParticlesAndInteractions(), PeriodicBoundaryHandler::communicateTargetDomains(), AngledPeriodicBoundary::copy(), SubcriticalMaserBoundaryTEST::copyExtraParticles(), Interaction< NormalForceInteraction, FrictionForceInteraction, AdhesiveForceInteraction >::createMPIType(), DPMBase::decompose(), CGHandler::evaluateDataFiles(), CGHandler::evaluateRestartFiles(), SubcriticalMaserBoundaryTEST::extendBottom(), ParticleHandler::getKineticEnergy(), ParticleHandler::getLargestInteractionRadius(), ParticleHandler::getMass(), ParticleHandler::getMassTimesPosition(), ParticleHandler::getMeanRadius(), getMPISum(), ParticleHandler::getNumberOfFixedObjects(), ParticleHandler::getNumberOfRealObjects(), ParticleHandler::getNumberOfRealObjectsLocal(), ParticleHandler::getParticleAttribute(), ParticleHandler::getRotationalEnergy(), ParticleHandler::getSmallestInteractionRadius(), ParticleSpecies::getSmallestParticleMass(), ParticleHandler::getVolume(), MercuryBase::hGridInfo(), initialiseMPI(), LeesEdwardsBoundary::LeesEdwardsBoundary(), DPMBase::mpiIsInCommunicationZone(), PeriodicBoundaryHandler::performNewParticleTransmission(), PeriodicBoundaryHandler::prepareNewParticleTransmission(), PeriodicBoundaryHandler::preparePositionAndVelocityUpdate(), printError(), printFatalError(), printInfo(), DPMBase::printTime(), PrintWallTimeMixin::printTime(), printWarn(), RNG::randomise(), ParticleHandler::removeObject(), Domain::sendAndReceiveCount(), Domain::sendAndReceiveMPIData(), DPMBase::synchroniseParticle(), TimeDependentPeriodicBoundary::TimeDependentPeriodicBoundary(), PeriodicBoundaryHandler::updateParticleStatus(), Domain::updateStatus(), Domain::updateVelocity(), and InteractionHandler::write().

◆ receive() [1/3]

template<typename T >
std::enable_if<std::is_scalar<T>::value, void>::type MPIContainer::receive ( T & t,
int  from,
int  tag 
)
inline

Asynchronously receive a scalar from some other processor.

Parameters
[in,out]  t  the data
[in]  from  the processor that sends the information
[in]  tag  an identifier for this specific receive request. This must be unique among all receive requests between the previous synchronisation step and the next one. Exactly one send request must also provide this tag, and it must be issued by the process specified by the 'from' parameter
{
#if MERCURYDPM_ASSERTS
    if (from == processorID_)
    {
        logger(FATAL, "[MPI FATAL]: Receiving data from self!");
    }
#endif
#ifdef MERCURYDPM_USE_MPI
    MPI_Request request;
    MPI_Irecv(&t, 1, Detail::toMPIType(t), from, tag, communicator_, &request);
    pending_.push_back(request);
#endif
}

References FATAL, logger, and processorID_.

Referenced by ParticleHandler::addGhostObject(), PeriodicBoundaryHandler::communicateNumberOfNewParticlesAndInteractions(), PeriodicBoundaryHandler::performNewParticleTransmission(), PeriodicBoundaryHandler::preparePositionAndVelocityUpdate(), Domain::sendAndReceiveCount(), and Domain::sendAndReceiveMPIData().

◆ receive() [2/3]

template<typename T >
std::enable_if<std::is_scalar<T>::value, void>::type MPIContainer::receive ( T * t,
int  count,
int  from,
int  tag 
)
inline
Todo:
MX: type documentation. This overload is used to receive vectors of scalars across processors.
{
#if MERCURYDPM_ASSERTS
    if (from == processorID_)
    {
        logger(FATAL, "[MPI FATAL]: Receiving data from self!");
    }
    if (count == 0)
    {
        logger(WARN, "[MPI ERROR]: Receiving zero data");
    }
#endif
#ifdef MERCURYDPM_USE_MPI
    MPI_Request request;
    MPI_Irecv(t, count, Detail::toMPIType(*t), from, tag, communicator_, &request);
    pending_.push_back(request);
#endif
}

References FATAL, logger, processorID_, and WARN.

◆ receive() [3/3]

template<typename T >
void MPIContainer::receive ( T * t,
MercuryMPIType  type,
int  count,
int  from,
int  tag 
)
inline

Asynchronously receive a list of MercuryMPIType objects from some other processor.

Parameters
[in,out]  t  the data, a list of MercuryMPIType objects
[in]  type  the MercuryMPIType that is being received, for instance an MPIParticle
[in]  count  the number of objects that are being received
[in]  from  the processor that sends the information
[in]  tag  a unique identifier that corresponds with a send command by the sending processor
{
    //std::cout << "[Process: " << processorID_ << "]: QUEUING RECEIVE REQUEST with tag: " << tag << std::endl;
#if MERCURYDPM_ASSERTS
    if (from == processorID_)
    {
        logger(FATAL, "[MPI FATAL]: Receiving data from self!");
    }
    if (count == 0)
    {
        logger(WARN, "[MPI ERROR]: Receiving zero data");
    }
#endif
#ifdef MERCURYDPM_USE_MPI
    MPI_Request request;
    MPI_Irecv(t, count, dataTypes_[type], from, tag, communicator_, &request);
    pending_.push_back(request);
#endif
}

References FATAL, logger, processorID_, and WARN.

◆ send() [1/3]

template<typename T >
std::enable_if<std::is_scalar<T>::value, void>::type MPIContainer::send ( T & t,
int  to,
int  tag 
)
inline

Asynchronously send a scalar to some other processor.

Parameters
[in]  t  the data
[in]  to  the processor to receive the information
[in]  tag  an identifier for this specific send request. This must be unique among all send requests between the previous synchronisation step and the next one. Exactly one receive request must also provide this tag, and it must be issued by the process specified by the 'to' parameter
{
#if MERCURYDPM_ASSERTS
    if (to == processorID_)
    {
        logger(FATAL, "[MPI FATAL]: Sending data to self!");
    }
#endif
#ifdef MERCURYDPM_USE_MPI
    MPI_Request request;
    MPI_Isend(&t, 1, Detail::toMPIType(t), to, tag, communicator_, &request);
    pending_.push_back(request);
#endif
}

References FATAL, logger, and processorID_.

Referenced by ParticleHandler::addGhostObject(), PeriodicBoundaryHandler::communicateNumberOfNewParticlesAndInteractions(), PeriodicBoundaryHandler::performNewParticleTransmission(), PeriodicBoundaryHandler::preparePositionAndVelocityUpdate(), Domain::sendAndReceiveCount(), and Domain::sendAndReceiveMPIData().
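
A sketch of the asynchronous pattern: both processors queue their requests with matching tags and resolve them with sync(); the processor indices and tags are assumptions:

MPIContainer& communicator = MPIContainer::Instance();
std::size_t rank = communicator.getProcessorID();
double sendValue = static_cast<double>(rank);
double receiveValue = 0.0;
if (rank == 0)
{
    communicator.send(sendValue, 1, 7);        // send to processor 1 with tag 7
    communicator.receive(receiveValue, 1, 8);  // receive from processor 1 with tag 8
}
else if (rank == 1)
{
    communicator.send(sendValue, 0, 8);
    communicator.receive(receiveValue, 0, 7);
}
// receiveValue may only be used after all pending requests have been resolved.
communicator.sync();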

◆ send() [2/3]

template<typename T >
std::enable_if<std::is_scalar<T>::value, void>::type MPIContainer::send ( T * t,
int  count,
int  to,
int  tag 
)
inline
Todo:
MX: type documentation. This one is used to send vectors of scalars across (hence the *t)
{
#if MERCURYDPM_ASSERTS
    if (to == processorID_)
    {
        logger(FATAL, "[MPI FATAL]: Sending data to self!");
    }
    if (count == 0)
    {
        logger(WARN, "[MPI ERROR]: Sending zero data");
    }
#endif
#ifdef MERCURYDPM_USE_MPI
    MPI_Request request;
    MPI_Isend(t, count, Detail::toMPIType(*t), to, tag, communicator_, &request);
    pending_.push_back(request);
#endif
}

References FATAL, logger, processorID_, and WARN.

◆ send() [3/3]

template<typename T >
void MPIContainer::send ( T * t,
MercuryMPIType  type,
int  count,
int  to,
int  tag 
)
inline

Asynchronously send a list of MercuryMPIType objects to some other processor.

Parameters
[in,out]  t  the data, a list of MercuryMPIType objects
[in]  type  the MercuryMPIType that is being sent, for instance an MPIParticle
[in]  count  the number of objects that are being sent
[in]  to  the processor that the data is sent to
[in]  tag  a unique identifier that corresponds with a receive command by the receiving processor
{
    //std::cout << "[Process: " << processorID_ << "]: QUEUING SEND REQUEST with tag: " << tag << std::endl;
#if MERCURYDPM_ASSERTS
    if (to == processorID_)
    {
        logger(FATAL, "[MPI FATAL]: Sending data to self!");
    }
    if (count == 0)
    {
        logger(WARN, "[MPI ERROR]: Sending zero data");
    }
#endif
#ifdef MERCURYDPM_USE_MPI
    MPI_Request request;
    MPI_Isend(t, count, dataTypes_[type], to, tag, communicator_, &request);
    pending_.push_back(request);
#endif
}

References FATAL, logger, processorID_, and WARN.
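
A sketch of exchanging a list of MPI particle objects with the typed overloads; the buffer size, processor indices and tag are assumptions:

#include <vector>

MPIContainer& communicator = MPIContainer::Instance();
std::vector<MPIParticle> particles(10);
if (communicator.getProcessorID() == 0)
{
    // Queue an asynchronous send of 10 MPIParticle objects to processor 1, tag 3.
    communicator.send(particles.data(), MercuryMPIType::PARTICLE, 10, 1, 3);
}
else if (communicator.getProcessorID() == 1)
{
    communicator.receive(particles.data(), MercuryMPIType::PARTICLE, 10, 0, 3);
}
// Resolve the queued requests before using the received particles.
communicator.sync();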

◆ sync()

void MPIContainer::sync ( )
inline

Process all pending asynchronous communication requests before continuing.

Each processor can commit a send or receive request at any time when it is finished with computing. At some point these requests have to be resolved in order to keep order in the simulation. After all requests are resolved, a barrier ensures that all processors wait until every processor has completed its requests.

{
#ifdef MERCURYDPM_USE_MPI
    MPI_Waitall(pending_.size(), pending_.data(), MPI_STATUSES_IGNORE);
    pending_.clear();
    MPI_Barrier(communicator_);
#endif
}

Referenced by ParticleHandler::addGhostObject(), Domain::addNewParticles(), Domain::addParticle(), DPMBase::decompose(), PeriodicBoundaryHandler::performNewParticleTransmission(), PeriodicBoundaryHandler::prepareNewParticleTransmission(), PeriodicBoundaryHandler::preparePositionAndVelocityUpdate(), Domain::updateStatus(), and Domain::updateVelocity().
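
A sketch of how sync() closes a communication step; the source processor, count and tag are assumptions, and the buffer is only safe to read after the call returns:

#include <vector>

MPIContainer& communicator = MPIContainer::Instance();
std::vector<double> buffer(100);
// Queue an asynchronous receive; the data is not valid yet.
communicator.receive(buffer.data(), 100, 2, 11);
// ... queue further sends/receives belonging to this communication step ...
// Resolve all pending requests and synchronise all processors.
communicator.sync();
// Only now may the contents of buffer be used.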

Member Data Documentation

◆ numberOfProcessors_

int MPIContainer::numberOfProcessors_
private

The total number of processors in the communicator.

Referenced by getNumberOfProcessors(), and MPIContainer().

◆ processorID_

int MPIContainer::processorID_
private

The ID of the processor this class is running on.

Referenced by directReceive(), directSend(), getProcessorID(), MPIContainer(), receive(), and send().


The documentation for this class was generated from the following files: