OP  0.1
OP is an optimization solver plugin package
op::utility::parallel Namespace Reference

Parallel methods. More...

Functions

template<typename T >
std::tuple< T, std::vector< T > > gatherVariablesPerRank (std::size_t local_vector_size, bool gatherAll=true, int root=0, MPI_Comm comm=MPI_COMM_WORLD)
 Get number of variables on each rank in parallel. More...
 
template<typename V >
V concatGlobalVector (typename V::size_type global_size, std::vector< int > &variables_per_rank, std::vector< int > &offsets, V &local_vector, bool gatherAll=true, int root=0, MPI_Comm comm=MPI_COMM_WORLD)
 Assemble a vector by concatenation of local_vector across all ranks on a communicator. More...
 
template<typename V >
V concatGlobalVector (typename V::size_type global_size, std::vector< int > &variables_per_rank, V &local_vector, bool gatherAll=true, int root=0, MPI_Comm comm=MPI_COMM_WORLD)
 
template<typename T , typename M , typename I >
RankCommunication< T > generateSendRecievePerRank (M local_ids, T &all_global_local_ids, I &offsets, MPI_Comm comm=MPI_COMM_WORLD)
 Given a map of local_ids and global_ids, determine send and recv communications. More...
 
template<typename V , typename T >
std::unordered_map< int, V > sendToOwners (RankCommunication< T > &info, V &local_data, MPI_Comm comm=MPI_COMM_WORLD)
 Transfer data to owners. More...
 
template<typename V , typename T >
auto returnToSender (RankCommunication< T > &info, const V &local_data, MPI_Comm comm=MPI_COMM_WORLD)
 Transfer data back in reverse from sendToOwners. More...
 

Detailed Description

Parallel methods.

Function Documentation

template<typename V >
V op::utility::parallel::concatGlobalVector ( typename V::size_type  global_size,
std::vector< int > &  variables_per_rank,
std::vector< int > &  offsets,
V &  local_vector,
bool  gatherAll = true,
int  root = 0,
MPI_Comm  comm = MPI_COMM_WORLD 
)

Assemble a vector by concatenation of local_vector across all ranks on a communicator.

Parameters
[in]  global_size         Size of global concatenated vector
[in]  variables_per_rank  A std::vector with the number of variables on each rank
[in]  offsets             The inclusive offsets for the given local_vector that is being concatenated
[in]  local_vector        The local contribution to the global concatenated vector
[in]  gatherAll           To perform the gather on all ranks (true) or only on the root (false)
[in]  root                The root rank
[in]  comm                The MPI communicator

Definition at line 99 of file op_utility.hpp.
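
A minimal usage sketch follows. It assumes the declarations are visible through op_utility.hpp (the include path is a guess), that MPI has been initialized by the caller, and that the inclusive offsets can be built with std::partial_sum over the per-rank sizes; the exact offset convention should be checked against op_utility.hpp.

#include <mpi.h>
#include <numeric>
#include <vector>
#include "op_utility.hpp"  // assumed include path

int main(int argc, char** argv)
{
  MPI_Init(&argc, &argv);
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  // Each rank contributes a different number of local variables.
  std::vector<double> local_vector(rank + 1, static_cast<double>(rank));

  // Gather the per-rank sizes on all ranks (gatherAll defaults to true).
  auto [global_size, variables_per_rank] =
      op::utility::parallel::gatherVariablesPerRank<int>(local_vector.size());

  // Inclusive offsets, assumed here to be the running sum of per-rank sizes.
  std::vector<int> offsets(variables_per_rank.size());
  std::partial_sum(variables_per_rank.begin(), variables_per_rank.end(), offsets.begin());

  // Concatenate every rank's local_vector into one global vector on all ranks.
  auto global_vector = op::utility::parallel::concatGlobalVector(
      global_size, variables_per_rank, offsets, local_vector);

  MPI_Finalize();
  return 0;
}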

template<typename V >
V op::utility::parallel::concatGlobalVector ( typename V::size_type  global_size,
std::vector< int > &  variables_per_rank,
V &  local_vector,
bool  gatherAll = true,
int  root = 0,
MPI_Comm  comm = MPI_COMM_WORLD 
)

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

Definition at line 114 of file op_utility.hpp.

template<typename T >
std::tuple<T, std::vector<T> > op::utility::parallel::gatherVariablesPerRank ( std::size_t  local_vector_size,
bool  gatherAll = true,
int  root = 0,
MPI_Comm  comm = MPI_COMM_WORLD 
)

Get number of variables on each rank in parallel.

Parameters
[in]  local_vector_size  Size on local rank
[in]  gatherAll          Gather all sizes per rank on all processors. If false, only gathered on root.
[in]  root               Root rank (only meaningful if gatherAll = false)
[in]  comm               MPI communicator

Definition at line 63 of file op_utility.hpp.
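
A short sketch of the root-only variant, assuming MPI is already initialized and the assumed op_utility.hpp include path: with gatherAll = false the returned sizes are only meaningful on the root rank.

#include <mpi.h>
#include <cstddef>
#include <vector>
#include "op_utility.hpp"  // assumed include path

// Returns the per-rank variable counts; only meaningful on the root rank
// because gatherAll is set to false.
std::vector<int> variableSizesOnRoot(std::size_t local_size, MPI_Comm comm)
{
  auto [total, sizes] = op::utility::parallel::gatherVariablesPerRank<int>(
      local_size, /*gatherAll=*/false, /*root=*/0, comm);
  (void)total;  // total number of variables across the communicator
  return sizes;
}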

template<typename T , typename M , typename I >
RankCommunication<T> op::utility::parallel::generateSendRecievePerRank ( M  local_ids,
T &  all_global_local_ids,
I &  offsets,
MPI_Comm  comm = MPI_COMM_WORLD 
)

Given a map of local_ids and global_ids, determine send and recv communications.

Parameters
[in]  local_ids             Maps global_ids to local_ids for this rank. Note: the values need to be sorted.
[in]  all_global_local_ids  The global vector of the global ids of each rank, concatenated
[in]  offsets               The inclusive offsets of the concatenated vector, designated by the number of ids per rank
Returns
An unordered map of recv[rank] = {our rank's local ids} and send[rank] = {our local ids to send}

Definition at line 136 of file op_utility.hpp.
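
The sketch below shows how the inputs fit together. The concrete container choices (an std::unordered_map from global id to sorted local indices, std::size_t ids) and the include path are assumptions made for illustration; the template parameters only require map-like and vector-like arguments.

#include <mpi.h>
#include <cstddef>
#include <numeric>
#include <unordered_map>
#include <vector>
#include "op_utility.hpp"  // assumed include path

int main(int argc, char** argv)
{
  MPI_Init(&argc, &argv);

  // Global ids referenced by this rank, filled by the application
  // (e.g. from a mesh partition); left empty here for illustration.
  std::vector<std::size_t> global_ids_on_rank;

  // Map each global id to its (sorted) local indices on this rank.
  std::unordered_map<std::size_t, std::vector<std::size_t>> local_ids;
  for (std::size_t local = 0; local < global_ids_on_rank.size(); ++local) {
    local_ids[global_ids_on_rank[local]].push_back(local);
  }

  // Concatenate every rank's global ids and keep the inclusive offsets.
  auto [num_global_ids, ids_per_rank] =
      op::utility::parallel::gatherVariablesPerRank<int>(global_ids_on_rank.size());
  std::vector<int> offsets(ids_per_rank.size());
  std::partial_sum(ids_per_rank.begin(), ids_per_rank.end(), offsets.begin());
  auto all_global_local_ids = op::utility::parallel::concatGlobalVector(
      num_global_ids, ids_per_rank, offsets, global_ids_on_rank);

  // Compute which ranks this rank sends to and receives from.
  auto comm_info = op::utility::parallel::generateSendRecievePerRank(
      local_ids, all_global_local_ids, offsets);

  MPI_Finalize();
  return 0;
}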

template<typename V , typename T >
auto op::utility::parallel::returnToSender ( RankCommunication< T > &  info,
const V &  local_data,
MPI_Comm  comm = MPI_COMM_WORLD 
)

Transfer data back in reverse from sendToOwners.

Note
When transferring data back, all receiving ranks should only receive one value from another rank.
Parameters
[in]  info        The MPI communicator exchange data structure
[in]  local_data  The local data to update from "owning" ranks
[in]  comm        The MPI communicator

Definition at line 244 of file op_utility.hpp.
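
A hedged sketch of the reverse transfer: CommInfo stands for the RankCommunication<T> built earlier by generateSendRecievePerRank (the type is left generic here rather than guessing its exact namespace), and owned_data is assumed to hold this rank's values for the ids it owns.

#include <mpi.h>
#include <vector>
#include "op_utility.hpp"  // assumed include path

// Return the owners' updated values along the reverse of the pattern used
// by sendToOwners. CommInfo is the RankCommunication<T> produced by
// generateSendRecievePerRank; its exact type is deduced at the call site.
template <typename CommInfo>
auto pullFromOwners(CommInfo& comm_info, const std::vector<double>& owned_data)
{
  return op::utility::parallel::returnToSender(comm_info, owned_data);
}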

template<typename V , typename T >
std::unordered_map<int, V> op::utility::parallel::sendToOwners ( RankCommunication< T > &  info,
V &  local_data,
MPI_Comm  comm = MPI_COMM_WORLD 
)

Transfer data to owners.

Parameters
[in]  info        The MPI communication struct that tells each rank which offsets will be received or sent from local_data
[in]  local_data  The data to send to "owning" ranks
[in]  comm        The MPI communicator

Definition at line 199 of file op_utility.hpp.
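
A sketch of the forward transfer, with the same caveats as above; the returned std::unordered_map<int, V> is assumed to be keyed by the rank the data came from.

#include <mpi.h>
#include <vector>
#include "op_utility.hpp"  // assumed include path

// Push this rank's local_data to the ranks that own the corresponding ids,
// then walk the contributions received from other ranks.
template <typename CommInfo>
void pushToOwners(CommInfo& comm_info, std::vector<double>& local_data)
{
  auto contributions = op::utility::parallel::sendToOwners(comm_info, local_data);
  for (auto& [from_rank, values] : contributions) {
    // ... combine `values` received from from_rank with this rank's owned data ...
    (void)from_rank;
    (void)values;
  }
}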