OP 0.1

OP is an optimization solver plugin package.
op::mpi Namespace Reference

Template MPI namespace.
Namespaces

    detail
        MPI-related type traits.
Functions

    int getRank(MPI_Comm comm = MPI_COMM_WORLD)
        Get the rank of the calling process in the communicator.

    int getNRanks(MPI_Comm comm = MPI_COMM_WORLD)
        Get the number of ranks in the communicator.

    template<typename T>
    std::enable_if_t<!(detail::has_data<T>::value && detail::has_size<T>::value), int>
    Allreduce(T &local, T &global, MPI_Op operation, MPI_Comm comm = MPI_COMM_WORLD)
        All-reduce a single element across all ranks in a communicator.

    template<typename T>
    std::enable_if_t<(detail::has_data<T>::value && detail::has_size<T>::value), int>
    Allreduce(T &local, T &global, MPI_Op operation, MPI_Comm comm = MPI_COMM_WORLD)
        All-reduce a std::collection across all ranks in a communicator.

    template<typename T>
    std::enable_if_t<!(detail::has_data<T>::value && detail::has_size<T>::value), int>
    Broadcast(T &buf, int root = 0, MPI_Comm comm = MPI_COMM_WORLD)
        Broadcast a single element to all ranks on the communicator.

    template<typename T>
    std::enable_if_t<(detail::has_data<T>::value && detail::has_size<T>::value), int>
    Broadcast(T &buf, int root = 0, MPI_Comm comm = MPI_COMM_WORLD)
        Broadcast a vector to all ranks on the communicator.

    template<typename T>
    int Allgatherv(T &buf, T &values_on_rank, std::vector<int> &size_on_rank,
                   std::vector<int> &offsets_on_rank, MPI_Comm comm = MPI_COMM_WORLD)
        Gather a rank-local collection from every rank onto all ranks in a communicator.

    template<typename T>
    int Gatherv(T &buf, T &values_on_rank, std::vector<int> &size_on_rank,
                std::vector<int> &offsets_on_rank, int root = 0,
                MPI_Comm comm = MPI_COMM_WORLD)
        Gather a rank-local collection from every rank onto the root rank only.

    template<typename T>
    int Scatterv(T &sendbuf, std::vector<int> &variables_per_rank,
                 std::vector<int> &offsets, T &recvbuff, int root = 0,
                 MPI_Comm comm = MPI_COMM_WORLD)
        MPI_Scatterv on std::collections; send only a portion of sendbuf to each rank.

    template<typename T>
    int Irecv(T &buf, int send_rank, MPI_Request *request, int tag = 0,
              MPI_Comm comm = MPI_COMM_WORLD)
        Receive a buffer from a specified rank and create a handle for the MPI_Request.

    template<typename T>
    int Isend(T &buf, int recv_rank, MPI_Request *request, int tag = 0,
              MPI_Comm comm = MPI_COMM_WORLD)
        Send a buffer to a specified rank and create a handle for the MPI_Request.

    int Waitall(std::vector<MPI_Request> &requests, std::vector<MPI_Status> &status)
        A wrapper around MPI_Waitall to wait for all the requests to be fulfilled.

    int CreateAndSetErrorHandler(MPI_Errhandler &newerr,
                                 void (*err)(MPI_Comm *comm, int *err, ...),
                                 MPI_Comm comm = MPI_COMM_WORLD)
        Create an MPI error handler from the given callback and set it on the communicator.
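As a quick orientation, here is a minimal sketch of the two rank queries. It assumes only that op_mpi.hpp is on the include path and that the usual MPI scaffolding applies; the later fragments in this page reuse the same scaffolding (plus #include <vector>) without repeating it.

    #include <cstdio>
    #include <mpi.h>
    #include "op_mpi.hpp"

    int main(int argc, char** argv)
    {
      MPI_Init(&argc, &argv);

      // Both helpers default to MPI_COMM_WORLD.
      int rank   = op::mpi::getRank();
      int nranks = op::mpi::getNRanks();

      if (rank == 0) {
        std::printf("running on %d ranks\n", nranks);
      }

      MPI_Finalize();
      return 0;
    }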
Detailed Description

Template MPI namespace.

Function Documentation
    template<typename T>
    int op::mpi::Allgatherv(T &buf, T &values_on_rank,
                            std::vector<int> &size_on_rank,
                            std::vector<int> &offsets_on_rank,
                            MPI_Comm comm = MPI_COMM_WORLD)

Gathers a rank-local collection from every rank onto all ranks in a communicator.

Parameters
    [in]  buf              rank-local std::collection to gather
    [out] values_on_rank   the globally-collected std::collection
    [in]  size_on_rank     number of variables per rank
    [in]  offsets_on_rank  offsets in values_on_rank corresponding to a given rank
    [in]  comm             MPI communicator

Definition at line 137 of file op_mpi.hpp.
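A minimal sketch of Allgatherv, reusing the scaffolding from the getRank example. The uneven per-rank sizes and the locally computed offsets are illustrative assumptions; since size_on_rank and offsets_on_rank are input parameters, the caller supplies them.

    // Each rank contributes (rank + 1) doubles. Since the pattern is the
    // same on every rank, sizes and offsets can be computed locally.
    int nranks = op::mpi::getNRanks();
    int rank   = op::mpi::getRank();

    std::vector<double> local(rank + 1, static_cast<double>(rank));

    std::vector<int> size_on_rank(nranks), offsets_on_rank(nranks, 0);
    for (int i = 0; i < nranks; ++i) size_on_rank[i] = i + 1;
    for (int i = 1; i < nranks; ++i)
      offsets_on_rank[i] = offsets_on_rank[i - 1] + size_on_rank[i - 1];

    std::vector<double> all(offsets_on_rank.back() + size_on_rank.back());
    op::mpi::Allgatherv(local, all, size_on_rank, offsets_on_rank);
    // every rank now holds the concatenation of all rank-local vectors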
    template<typename T>
    std::enable_if_t<!(detail::has_data<T>::value && detail::has_size<T>::value), int>
    op::mpi::Allreduce(T &local, T &global, MPI_Op operation,
                       MPI_Comm comm = MPI_COMM_WORLD)

All reduce a single element across all ranks in a communicator.

Parameters
    [in]  local      element contribution to reduce
    [out] global     element to reduce to
    [in]  operation  MPI_Op
    [in]  comm       MPI communicator

Definition at line 75 of file op_mpi.hpp.
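A minimal sketch of the single-element overload, assuming the scaffolding above and that the detail type traits map double to the matching MPI datatype:

    double local  = op::mpi::getRank() + 1.0;  // each rank contributes rank + 1
    double global = 0.0;

    // double has neither data() nor size(), so overload resolution picks
    // the single-element specialization.
    op::mpi::Allreduce(local, global, MPI_SUM);
    // global == n * (n + 1) / 2 on every rank, where n = number of ranks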
    template<typename T>
    std::enable_if_t<(detail::has_data<T>::value && detail::has_size<T>::value), int>
    op::mpi::Allreduce(T &local, T &global, MPI_Op operation,
                       MPI_Comm comm = MPI_COMM_WORLD)

All reduce std::collections across all ranks in a communicator.

Parameters
    [in]  local      std::collection contribution to reduce
    [out] global     std::collection to reduce to
    [in]  operation  MPI_Op
    [in]  comm       MPI communicator

Definition at line 91 of file op_mpi.hpp.
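The container overload reduces element-wise. A sketch, assuming local and global are sized identically on every rank:

    std::vector<double> local{1.0, 2.0, 3.0};
    std::vector<double> global(local.size(), 0.0);

    // std::vector has data() and size(), so the container overload is chosen
    // and global[i] receives the element-wise sum of local[i] over all ranks.
    op::mpi::Allreduce(local, global, MPI_SUM);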
    template<typename T>
    std::enable_if_t<!(detail::has_data<T>::value && detail::has_size<T>::value), int>
    op::mpi::Broadcast(T &buf, int root = 0, MPI_Comm comm = MPI_COMM_WORLD)

Broadcast a single element to all ranks on the communicator.

Parameters
    [in,out]  buf   element to broadcast (read on root, written on the other ranks)
    [in]      root  root rank
    [in]      comm  MPI communicator

Definition at line 106 of file op_mpi.hpp.
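A sketch of the single-element overload; root defaults to 0:

    int flag = (op::mpi::getRank() == 0) ? 42 : 0;

    op::mpi::Broadcast(flag);  // after the call, flag == 42 on every rank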
    template<typename T>
    std::enable_if_t<(detail::has_data<T>::value && detail::has_size<T>::value), int>
    op::mpi::Broadcast(T &buf, int root = 0, MPI_Comm comm = MPI_COMM_WORLD)

Broadcast a vector to all ranks on the communicator.

Parameters
    [in,out]  buf   std::collection to broadcast (read on root, written on the other ranks)
    [in]      root  root rank
    [in]      comm  MPI communicator

Definition at line 120 of file op_mpi.hpp.
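A sketch of the container overload. Whether the buffer must be pre-sized on the receiving ranks is an assumption here (no size appears in the call signature), so the sketch sizes it on every rank:

    std::vector<double> params(5);
    if (op::mpi::getRank() == 0) {
      params = {0.1, 0.2, 0.3, 0.4, 0.5};  // only root fills the values
    }

    op::mpi::Broadcast(params);  // root = 0; all ranks now hold root's values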
    template<typename T>
    int op::mpi::Gatherv(T &buf, T &values_on_rank,
                         std::vector<int> &size_on_rank,
                         std::vector<int> &offsets_on_rank,
                         int root = 0, MPI_Comm comm = MPI_COMM_WORLD)

Gathers a rank-local collection from every rank onto the root rank only.

Parameters
    [in]  buf              rank-local std::collection to gather
    [out] values_on_rank   the globally-collected std::collection (significant only on root)
    [in]  size_on_rank     number of variables per rank
    [in]  offsets_on_rank  offsets in values_on_rank corresponding to a given rank
    [in]  root             root rank
    [in]  comm             MPI communicator

Definition at line 157 of file op_mpi.hpp.
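Gatherv mirrors the Allgatherv sketch above, except only the root needs room for the result:

    int nranks = op::mpi::getNRanks();
    int rank   = op::mpi::getRank();

    std::vector<double> local(rank + 1, static_cast<double>(rank));

    std::vector<int> size_on_rank(nranks), offsets_on_rank(nranks, 0);
    for (int i = 0; i < nranks; ++i) size_on_rank[i] = i + 1;
    for (int i = 1; i < nranks; ++i)
      offsets_on_rank[i] = offsets_on_rank[i - 1] + size_on_rank[i - 1];

    std::vector<double> gathered;
    if (rank == 0) gathered.resize(offsets_on_rank.back() + size_on_rank.back());

    op::mpi::Gatherv(local, gathered, size_on_rank, offsets_on_rank);
    // only rank 0 holds the concatenated data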
    template<typename T>
    int op::mpi::Irecv(T &buf, int send_rank, MPI_Request *request,
                       int tag = 0, MPI_Comm comm = MPI_COMM_WORLD)

Receive a buffer from a specified rank and create a handle for the MPI_Request.

Parameters
    [out] buf        std::collection to receive into
    [in]  send_rank  the rank sending the data
    [out] request    the MPI request handle
    [in]  tag        a tag to identify the communication message
    [in]  comm       MPI communicator

Definition at line 197 of file op_mpi.hpp.
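A receive-side sketch: rank 1 posts a nonblocking receive from rank 0, then blocks in Waitall. It assumes rank 0 posts a matching Isend (see the combined sketch after Isend below) and that the buffer is sized before the receive is posted:

    if (op::mpi::getRank() == 1) {
      std::vector<double> buf(100);  // sized before posting the receive

      std::vector<MPI_Request> requests(1);
      std::vector<MPI_Status>  statuses(1);

      op::mpi::Irecv(buf, /*send_rank=*/0, &requests[0]);
      // ... unrelated work can overlap the transfer here ...
      op::mpi::Waitall(requests, statuses);  // buf is valid after this
    }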
    template<typename T>
    int op::mpi::Isend(T &buf, int recv_rank, MPI_Request *request,
                       int tag = 0, MPI_Comm comm = MPI_COMM_WORLD)

Send a buffer to a specified rank and create a handle for the MPI_Request.

Parameters
    [in]  buf        std::collection to send
    [in]  recv_rank  the rank receiving the data
    [out] request    the MPI request handle
    [in]  tag        a tag to identify the communication message
    [in]  comm       MPI communicator

Definition at line 214 of file op_mpi.hpp.
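A combined sketch pairing Isend with Irecv and Waitall, assuming at least two ranks (rank 0 sends, rank 1 receives) and that Waitall accepts empty request vectors on uninvolved ranks:

    int rank = op::mpi::getRank();
    std::vector<double> buf(10, static_cast<double>(rank));

    std::vector<MPI_Request> requests;
    if (rank == 0) {
      requests.resize(1);
      op::mpi::Isend(buf, /*recv_rank=*/1, &requests[0]);
    } else if (rank == 1) {
      requests.resize(1);
      op::mpi::Irecv(buf, /*send_rank=*/0, &requests[0]);
    }

    std::vector<MPI_Status> statuses(requests.size());
    op::mpi::Waitall(requests, statuses);
    // rank 1's buf now holds the ten 0.0 values sent by rank 0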
    template<typename T>
    int op::mpi::Scatterv(T &sendbuf, std::vector<int> &variables_per_rank,
                          std::vector<int> &offsets, T &recvbuff,
                          int root = 0, MPI_Comm comm = MPI_COMM_WORLD)

MPI_Scatterv on std::collections. Sends only a portion of sendbuf to each rank.

Parameters
    [in]  sendbuf             the buffer to send (significant only on root)
    [in]  variables_per_rank  the number of variables each rank i will receive
    [in]  offsets             the exclusive scan of variables_per_rank
    [out] recvbuff            the receive buffer, already sized appropriately
    [in]  root                root rank
    [in]  comm                MPI communicator

Definition at line 175 of file op_mpi.hpp.
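A sketch of Scatterv in which rank 0 partitions a flat array unevenly across ranks; the counts, the exclusive-scan offsets, and the pre-sized receive buffer follow the parameter documentation above:

    int nranks = op::mpi::getNRanks();
    int rank   = op::mpi::getRank();

    std::vector<int> variables_per_rank(nranks), offsets(nranks, 0);
    for (int i = 0; i < nranks; ++i) variables_per_rank[i] = i + 1;
    for (int i = 1; i < nranks; ++i)
      offsets[i] = offsets[i - 1] + variables_per_rank[i - 1];  // exclusive scan

    std::vector<double> sendbuf;
    if (rank == 0) {
      sendbuf.assign(offsets.back() + variables_per_rank.back(), 1.0);
    }

    std::vector<double> recvbuf(variables_per_rank[rank]);  // "the proper size"
    op::mpi::Scatterv(sendbuf, variables_per_rank, offsets, recvbuf);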
    int op::mpi::Waitall(std::vector<MPI_Request> &requests,
                         std::vector<MPI_Status> &status)

A wrapper around MPI_Waitall to wait for all the requests to be fulfilled.

Parameters
    [in]  requests  a vector of MPI_Request handles
    [out] status    a vector of MPI_Status, one for each of the handles

Definition at line 227 of file op_mpi.hpp.
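A sketch waiting on several outstanding requests at once: rank 0 fans the same buffer out to every other rank, and each side completes its requests with a single Waitall call:

    int rank   = op::mpi::getRank();
    int nranks = op::mpi::getNRanks();

    std::vector<double> buf(10, 1.0);
    std::vector<MPI_Request> requests;

    if (rank == 0) {
      requests.resize(nranks - 1);
      for (int r = 1; r < nranks; ++r) {
        op::mpi::Isend(buf, r, &requests[r - 1]);  // sends only read buf
      }
    } else {
      requests.resize(1);
      op::mpi::Irecv(buf, /*send_rank=*/0, &requests[0]);
    }

    std::vector<MPI_Status> statuses(requests.size());
    op::mpi::Waitall(requests, statuses);  // returns once every request completes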