The code in this section illustrates not only certain new MPI elements,
but also an interesting and quite fast algorithm for evaluating interactions
between many particles. Unfortunately, the algorithm, which relies on the
MPI_Allgather operation, is costly in terms of communication:
you need a very fast communication fabric for it to perform well.
So, how does it work?
Every process of the MPI job maintains an array with data pertaining
to all particles, but it calculates evolution for only a group of
particles that it is responsible for. Once the computation completes,
the new updated states for all particles are exchanged amongst all
participating processes. This is done by the MPI_Allgather operation.
A lot of data is moved around when that happens, but, as a result,
all processes end up receiving updated information about all particles,
so that they can commence the next iteration.
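The pattern described above can be sketched in C roughly as follows. This is a minimal illustration, not a complete simulation: the particle count, the array layout, and the per-particle update are placeholder assumptions, and only the MPI calls reflect the scheme in the text. Note the use of MPI_IN_PLACE, which lets each rank update its own slice of the shared array directly before the exchange.

```c
/* Sketch of the all-gather particle update loop described above.
 * The force/update computation is a placeholder; in a real code it
 * would evaluate interactions against the full particle array. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define NPART 1024  /* assumed total particle count, divisible by the number of ranks */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int chunk = NPART / size;                       /* particles owned by this rank   */
    double *pos = malloc(NPART * 3 * sizeof *pos);  /* every rank holds ALL particles */
    double *mine = pos + (size_t)rank * chunk * 3;  /* this rank's own slice          */

    /* Initialize identically on every rank. */
    for (int i = 0; i < NPART * 3; i++) pos[i] = 0.0;

    for (int step = 0; step < 10; step++) {
        /* 1. Advance only the particles this rank is responsible for,
         *    reading the full array 'pos' to evaluate interactions. */
        for (int i = 0; i < chunk * 3; i++)
            mine[i] += 0.001;  /* placeholder for the real update */

        /* 2. Exchange the updated slices so every rank again holds a
         *    complete, consistent copy of all particle states. */
        MPI_Allgather(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
                      pos, chunk * 3, MPI_DOUBLE, MPI_COMM_WORLD);
    }

    if (rank == 0) printf("done: pos[0] = %g\n", pos[0]);
    free(pos);
    MPI_Finalize();
    return 0;
}
```

Each iteration thus moves the entire particle array across the network, which is why the scheme is sensitive to interconnect speed: the communication volume per step grows with the total number of particles, not just with each rank's share.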