OP 0.1
OP is an optimization solver plugin package.

op::utility::parallel

Parallel methods.
Functions

template<typename T>
std::tuple<T, std::vector<T>> gatherVariablesPerRank(std::size_t local_vector_size, bool gatherAll = true, int root = 0, MPI_Comm comm = MPI_COMM_WORLD)
    Get the number of variables on each rank in parallel.

template<typename V>
V concatGlobalVector(typename V::size_type global_size, std::vector<int>& variables_per_rank, std::vector<int>& offsets, V& local_vector, bool gatherAll = true, int root = 0, MPI_Comm comm = MPI_COMM_WORLD)
    Assemble a vector by concatenation of local_vector across all ranks on a communicator.

template<typename V>
V concatGlobalVector(typename V::size_type global_size, std::vector<int>& variables_per_rank, V& local_vector, bool gatherAll = true, int root = 0, MPI_Comm comm = MPI_COMM_WORLD)
    Overload of the above without an explicit offsets argument.

template<typename T, typename M, typename I>
RankCommunication<T> generateSendRecievePerRank(M local_ids, T& all_global_local_ids, I& offsets, MPI_Comm comm = MPI_COMM_WORLD)
    Given a map of local_ids and global_ids, determine the send and receive communications.

template<typename V, typename T>
std::unordered_map<int, V> sendToOwners(RankCommunication<T>& info, V& local_data, MPI_Comm comm = MPI_COMM_WORLD)
    Transfer data to owning ranks.

template<typename V, typename T>
auto returnToSender(RankCommunication<T>& info, const V& local_data, MPI_Comm comm = MPI_COMM_WORLD)
    Transfer data back in reverse from sendToOwners.
Detailed Description

Parallel methods.

Function Documentation
template<typename V>
V op::utility::parallel::concatGlobalVector(typename V::size_type global_size, std::vector<int>& variables_per_rank, std::vector<int>& offsets, V& local_vector, bool gatherAll = true, int root = 0, MPI_Comm comm = MPI_COMM_WORLD)
Assemble a vector by concatenation of local_vector across all ranks on a communicator.
| [in] | global_size | Size of global concatenated vector |
| [in] | variables_per_rank | A std::vector with the number of variables on each rank |
| [in] | offsets | The inclusive offsets for the given local_vector that is being concatenated |
| [in] | local_vector | The local contribution to the global concatenated vector |
| [in] | gatherAll | To perform the gather on all ranks (true) or only on the root (false) |
| [in] | root | The root rank |
| [in] | comm | The MPI Communicator |
Definition at line 99 of file op_utility.hpp.
template<typename V>
V op::utility::parallel::concatGlobalVector(typename V::size_type global_size, std::vector<int>& variables_per_rank, V& local_vector, bool gatherAll = true, int root = 0, MPI_Comm comm = MPI_COMM_WORLD)
This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.
Definition at line 114 of file op_utility.hpp.
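
Usage sketch (not from the library source): gather the per-rank sizes with gatherVariablesPerRank, then assemble the global vector with the offsets-free overload. The header name op_utility.hpp comes from the definition notes above; the {global size, per-rank sizes} ordering of the gathered tuple is an assumption flagged in the comments.

```cpp
#include <mpi.h>
#include <cstddef>
#include <cstdio>
#include <tuple>
#include <vector>

#include "op_utility.hpp"  // header named in the "Definition at line ..." notes

int main(int argc, char** argv)
{
  MPI_Init(&argc, &argv);

  int rank = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  // Illustrative local data: each rank owns a different number of variables.
  std::vector<double> local_vector(rank + 1, static_cast<double>(rank));

  // Gather the per-rank sizes; the {global size, per-rank sizes} ordering of
  // the returned tuple is an assumption to verify against op_utility.hpp.
  auto [global_size, variables_per_rank] =
      op::utility::parallel::gatherVariablesPerRank<int>(local_vector.size());

  // Concatenate local_vector across all ranks. With gatherAll = true (the
  // default), every rank receives the full global vector; this overload does
  // not require precomputed offsets.
  auto global_vector = op::utility::parallel::concatGlobalVector(
      static_cast<std::size_t>(global_size), variables_per_rank, local_vector);

  std::printf("rank %d sees %zu global variables\n", rank, global_vector.size());

  MPI_Finalize();
  return 0;
}
```

Passing gatherAll = false to both calls would assemble the global vector only on the root rank.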
template<typename T>
std::tuple<T, std::vector<T>> op::utility::parallel::gatherVariablesPerRank(std::size_t local_vector_size, bool gatherAll = true, int root = 0, MPI_Comm comm = MPI_COMM_WORLD)
Get the number of variables on each rank in parallel.
| [in] | local_vector_size | Size on local rank |
| [in] | gatherAll | Gather all sizes per rank on all processors. If false, only gathered on root. |
| [in] | root | Root rank (only meaningful if gatherAll = false) |
| [in] | comm | MPI communicator |
Definition at line 63 of file op_utility.hpp.
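
Usage sketch (assumptions noted in comments): gather the per-rank counts only on the root rank by passing gatherAll = false.

```cpp
#include <mpi.h>
#include <cstddef>
#include <cstdio>
#include <tuple>
#include <vector>

#include "op_utility.hpp"

// Gather per-rank variable counts only on rank 0 (gatherAll = false).
void reportSizes(std::size_t local_size, MPI_Comm comm)
{
  int rank = 0;
  MPI_Comm_rank(comm, &rank);

  // Every rank must participate in the collective call, but only the root
  // rank receives meaningful results when gatherAll is false.
  auto [total, sizes] = op::utility::parallel::gatherVariablesPerRank<int>(
      local_size, /*gatherAll=*/false, /*root=*/0, comm);

  if (rank == 0) {
    // 'total' is assumed to be the sum of the local sizes and 'sizes' the
    // per-rank counts; verify the tuple ordering against op_utility.hpp.
    std::printf("global size = %d across %zu ranks\n", total, sizes.size());
  }
}
```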
template<typename T, typename M, typename I>
RankCommunication<T> op::utility::parallel::generateSendRecievePerRank(M local_ids, T& all_global_local_ids, I& offsets, MPI_Comm comm = MPI_COMM_WORLD)
Given a map of local_ids and global_ids, determine the send and receive communications.
| [in] | local_ids | Maps global_ids to local_ids for this rank. Note that the values need to be sorted. |
| [in] | all_global_local_ids | The concatenated global vector of each rank's global ids |
| [in] | offsets | The inclusive offsets into the concatenated vector, determined by the number of ids per rank |
| [in] | comm | The MPI communicator |
Definition at line 136 of file op_utility.hpp.
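
Usage sketch (not from the library source): build a communication pattern from each rank's global ids. The concrete map type, index types, and the offsets layout used here are assumptions; only the call signatures are taken from this page.

```cpp
#include <mpi.h>
#include <cstddef>
#include <numeric>
#include <unordered_map>
#include <vector>

#include "op_utility.hpp"

// Build a send/receive pattern from this rank's global ids.
void buildPattern(std::vector<std::size_t>& my_global_ids)
{
  // Gather how many global ids each rank holds, then concatenate the ids.
  auto [num_ids, ids_per_rank] =
      op::utility::parallel::gatherVariablesPerRank<int>(my_global_ids.size());

  auto all_global_ids = op::utility::parallel::concatGlobalVector(
      static_cast<std::size_t>(num_ids), ids_per_rank, my_global_ids);

  // Offsets into the concatenated id vector. The docs call these "inclusive
  // offsets"; the leading-zero running-sum layout below is an assumed
  // interpretation to be checked against op_utility.hpp.
  std::vector<int> offsets(ids_per_rank.size() + 1, 0);
  std::partial_sum(ids_per_rank.begin(), ids_per_rank.end(), offsets.begin() + 1);

  // Map each global id on this rank to its (sorted) local indices. The map
  // type is an assumption consistent with the parameter description above.
  std::unordered_map<std::size_t, std::vector<std::size_t>> global_to_local;
  for (std::size_t local = 0; local < my_global_ids.size(); ++local) {
    global_to_local[my_global_ids[local]].push_back(local);
  }

  // Determine which entries this rank sends to and receives from other ranks.
  auto comm_info = op::utility::parallel::generateSendRecievePerRank(
      global_to_local, all_global_ids, offsets);
  (void)comm_info;
}
```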
template<typename V, typename T>
auto op::utility::parallel::returnToSender(RankCommunication<T>& info, const V& local_data, MPI_Comm comm = MPI_COMM_WORLD)
Transfer data back in reverse from sendToOwners.
| [in] | info | The RankCommunication data structure describing the exchange |
| [in] | local_data | The local data to update from "owning" ranks |
| [in] | comm | The MPI communicator |
Definition at line 244 of file op_utility.hpp.
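
Usage sketch: the hypothetical helper pushBackToSenders below returns owner-held values to the ranks that originally sent them, reusing a RankCommunication pattern built by generateSendRecievePerRank. The element type and the shape of the returned value are assumptions.

```cpp
#include <mpi.h>
#include <vector>

#include "op_utility.hpp"

// RankComm stands in for a RankCommunication<T> produced by
// generateSendRecievePerRank; the double element type is illustrative.
template <typename RankComm>
void pushBackToSenders(RankComm& info, const std::vector<double>& owned_values,
                       MPI_Comm comm)
{
  // Reverse of sendToOwners. The shape of the returned value is not
  // specified on this page, so it is captured with auto.
  auto returned = op::utility::parallel::returnToSender(info, owned_values, comm);
  (void)returned;
}
```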
template<typename V, typename T>
std::unordered_map<int, V> op::utility::parallel::sendToOwners(RankCommunication<T>& info, V& local_data, MPI_Comm comm = MPI_COMM_WORLD)
Transfer data to owning ranks.
| [in] | info | The RankCommunication struct that tells each rank which offsets of local_data will be sent or received |
| [in] | local_data | The data to send to "owning" ranks |
| [in] | comm | The MPI communicator |
Definition at line 199 of file op_utility.hpp.
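
Usage sketch: the hypothetical helper pushToOwners below forwards a rank's local contributions to their owning ranks using a previously built RankCommunication pattern; the double element type is illustrative.

```cpp
#include <mpi.h>
#include <unordered_map>
#include <vector>

#include "op_utility.hpp"

// RankComm stands in for a RankCommunication<T> produced by
// generateSendRecievePerRank for the same id layout as local_data.
template <typename RankComm>
std::unordered_map<int, std::vector<double>> pushToOwners(
    RankComm& info, std::vector<double>& local_data, MPI_Comm comm)
{
  // Returns the documented std::unordered_map<int, V>; the keys are
  // presumably sending ranks, which should be verified against op_utility.hpp.
  return op::utility::parallel::sendToOwners(info, local_data, comm);
}
```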