oomph::DoubleVectorHelpers Namespace Reference

Namespace for helper functions for DoubleVectors.

Functions

void concatenate (const Vector< DoubleVector * > &in_vector_pt, DoubleVector &out_vector)
 
void concatenate (Vector< DoubleVector > &in_vector, DoubleVector &out_vector)
 
void split (const DoubleVector &in_vector, Vector< DoubleVector * > &out_vector_pt)
 
void split (const DoubleVector &in_vector, Vector< DoubleVector > &out_vector)
 
void concatenate_without_communication (const Vector< DoubleVector * > &in_vector_pt, DoubleVector &out_vector)
 
void concatenate_without_communication (Vector< DoubleVector > &in_vector, DoubleVector &out_vector)
 
void split_without_communication (const DoubleVector &in_vector, Vector< DoubleVector * > &out_vector_pt)
 
void split_without_communication (const DoubleVector &in_vector, Vector< DoubleVector > &out_vector)
 

Detailed Description

Namespace for helper functions for DoubleVectors.

Function Documentation

◆ concatenate() [1/2]

void oomph::DoubleVectorHelpers::concatenate ( const Vector< DoubleVector * > &  in_vector_pt,
DoubleVector &  out_vector 
)

Concatenate DoubleVectors. Takes a Vector of DoubleVectors. If the out vector is built, we will not build a new distribution. Otherwise we build a uniform distribution.

The rows of the in vectors appear in the out vector in the order in which the in vectors are supplied. For example, if we have DoubleVectors with distributions A and B, distributed across two processors (p0 and p1),

A: [a0] (on p0)    B: [b0] (on p0)
   [a1] (on p1)       [b1] (on p1),

then the out_vector is

[a0] (on p0)
[a1] (on p0)
[b0] (on p1)
[b1] (on p1).

Communication is required between processors. The sum of the global numbers of rows of the in vectors must equal the global number of rows of the out vector. This condition must be met if one supplies an out vector that is already built with a distribution; otherwise the function generates the out vector distribution itself.
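As an illustration, here is a minimal usage sketch. It is not part of the library sources: the sizes and values are hypothetical, and a valid OomphCommunicator pointer comm_pt is assumed. It uses only calls that appear in the listing below.

// Minimal usage sketch (hypothetical sizes/values): concatenate two
// DoubleVectors with 3 and 5 global rows into one vector with 8 global
// rows. comm_pt is assumed to point to a valid OomphCommunicator.
DoubleVector v0;
v0.build(LinearAlgebraDistribution(comm_pt, 3, false), 1.0);
DoubleVector v1;
v1.build(LinearAlgebraDistribution(comm_pt, 5, false), 2.0);

// Collect pointers in the order in which the rows should appear.
Vector<DoubleVector*> in_vector_pt(2);
in_vector_pt[0] = &v0;
in_vector_pt[1] = &v1;

// Leave out_vector unbuilt so that concatenate(...) builds a uniform
// distribution with 3 + 5 = 8 global rows for us.
DoubleVector out_vector;
DoubleVectorHelpers::concatenate(in_vector_pt, out_vector);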

  {
    // How many in vectors to concatenate?
    unsigned nvectors = in_vector_pt.size();

    // PARANOID checks which involve the in vectors only
#ifdef PARANOID
    // Check that there is at least one vector.
    if (nvectors == 0)
    {
      std::ostringstream error_message;
      error_message << "There is no vector to concatenate...\n"
                    << "Perhaps you forgot to fill in_vector_pt?\n";
      throw OomphLibError(error_message.str(),
                          OOMPH_CURRENT_FUNCTION,
                          OOMPH_EXCEPTION_LOCATION);
    }

    // Does this vector need concatenating?
    if (nvectors == 1)
    {
      std::ostringstream warning_message;
      warning_message << "There is only one vector to concatenate...\n"
                      << "This does not require concatenating...\n";
      OomphLibWarning(warning_message.str(),
                      OOMPH_CURRENT_FUNCTION,
                      OOMPH_EXCEPTION_LOCATION);
    }

    // Check that all the DoubleVectors in in_vector_pt are built
    for (unsigned vec_i = 0; vec_i < nvectors; vec_i++)
    {
      if (!in_vector_pt[vec_i]->built())
      {
        std::ostringstream error_message;
        error_message << "The vector in position " << vec_i
                      << " is not built.\n"
                      << "I cannot concatenate an unbuilt vector.\n";
        throw OomphLibError(error_message.str(),
                            OOMPH_CURRENT_FUNCTION,
                            OOMPH_EXCEPTION_LOCATION);
      }
    }
#endif

    // The communicator pointer for the first in vector.
    const OomphCommunicator* const comm_pt =
      in_vector_pt[0]->distribution_pt()->communicator_pt();

    // Check if the first in vector is distributed.
    bool distributed = in_vector_pt[0]->distributed();

    // If the out vector is not built, build it with a uniform distribution.
    if (!out_vector.built())
    {
      // Nrow for the out vector is the sum of the nrow of the in vectors.
      unsigned tmp_nrow = 0;
      for (unsigned vec_i = 0; vec_i < nvectors; vec_i++)
      {
        tmp_nrow += in_vector_pt[vec_i]->nrow();
      }

      // Build the out vector with uniform distribution.
      out_vector.build(
        LinearAlgebraDistribution(comm_pt, tmp_nrow, distributed), 0.0);
    }
    else
    {
#ifdef PARANOID
      // Check that the sum of nrow of the in vectors matches the nrow of
      // the out vector.
      unsigned in_nrow = 0;
      for (unsigned vec_i = 0; vec_i < nvectors; vec_i++)
      {
        in_nrow += in_vector_pt[vec_i]->nrow();
      }

      if (in_nrow != out_vector.nrow())
      {
        std::ostringstream error_message;
        error_message << "The sum of nrow of the in vectors does not match\n"
                      << "the nrow of the out vector.\n";
        throw OomphLibError(error_message.str(),
                            OOMPH_CURRENT_FUNCTION,
                            OOMPH_EXCEPTION_LOCATION);
      }
#endif
    }

#ifdef PARANOID
    // Check that all communicators of the vectors to concatenate are the same
    // by comparing all communicators against the out vector.
    const OomphCommunicator out_comm =
      *(out_vector.distribution_pt()->communicator_pt());

    for (unsigned vec_i = 0; vec_i < nvectors; vec_i++)
    {
      // Get the Communicator for the current vector.
      const OomphCommunicator in_comm =
        *(in_vector_pt[vec_i]->distribution_pt()->communicator_pt());

      if (out_comm != in_comm)
      {
        std::ostringstream error_message;
        error_message << "The vector in position " << vec_i << " has a\n"
                      << "different communicator from the out vector.\n";
        throw OomphLibError(error_message.str(),
                            OOMPH_CURRENT_FUNCTION,
                            OOMPH_EXCEPTION_LOCATION);
      }
    }

    // Check that the distributed boolean is the same for all vectors.
    if (out_comm.nproc() != 1)
    {
      const bool out_distributed = out_vector.distributed();
      for (unsigned vec_i = 0; vec_i < nvectors; vec_i++)
      {
        if (out_distributed != in_vector_pt[vec_i]->distributed())
        {
          std::ostringstream error_message;
          error_message << "The vector in position " << vec_i << " has a\n"
                        << "different distributed boolean from "
                        << "the out vector.\n";
          throw OomphLibError(error_message.str(),
                              OOMPH_CURRENT_FUNCTION,
                              OOMPH_EXCEPTION_LOCATION);
        }
      }
    }
#endif


    // Now we do the concatenation.
    if ((comm_pt->nproc() == 1) || !distributed)
    {
      // Serial version of the code.
      // This is trivial: we simply loop through the in vectors and
      // fill in the out vector.

      // Out vector index.
      unsigned out_i = 0;

      // Out vector values.
      double* out_value_pt = out_vector.values_pt();

      // Loop through the in vectors.
      for (unsigned vec_i = 0; vec_i < nvectors; vec_i++)
      {
        // Nrow of the current in vector.
        unsigned in_nrow = in_vector_pt[vec_i]->nrow();

        // In vector values.
        double* in_value_pt = in_vector_pt[vec_i]->values_pt();

        // Loop through the entries of this in vector.
        for (unsigned i = 0; i < in_nrow; i++)
        {
          out_value_pt[out_i++] = in_value_pt[i];
        }
      }
    }
    // Otherwise we are dealing with a distributed vector.
    else
    {
#ifdef OOMPH_HAS_MPI
      // Get the number of processors
      unsigned nproc = comm_pt->nproc();

      // My rank
      unsigned my_rank = comm_pt->my_rank();

      // Storage for the data (per processor) to send
      Vector<Vector<double>> values_to_send(nproc);

      // The sum of the nrow for the in vectors (so far). This is used as an
      // offset to calculate the global equation number in the out vector
      unsigned long sum_of_vec_nrow = 0;

      // Loop over the in vectors and work out:
      // out_p: the rank of the receiving processor
      // out_local_eqn: the local equation number on the receiving processor
      //
      // Then put the value and out_local_eqn at out_p in values_to_send

      LinearAlgebraDistribution* out_distribution_pt =
        out_vector.distribution_pt();
      for (unsigned in_vec_i = 0; in_vec_i < nvectors; in_vec_i++)
      {
        // Loop through the local equations
        unsigned in_vec_nrow_local = in_vector_pt[in_vec_i]->nrow_local();
        unsigned in_vec_first_row = in_vector_pt[in_vec_i]->first_row();

        for (unsigned in_row_i = 0; in_row_i < in_vec_nrow_local; in_row_i++)
        {
          // Calculate the global equation number for this in_row_i
          unsigned out_global_eqn =
            in_row_i + in_vec_first_row + sum_of_vec_nrow;

          // Get the processor that this global row belongs to.
          // The rank_of_global_row(...) function loops through all the
          // processors and does two unsigned comparisons. Since we have to do
          // this for every row, it may be better to store a list mapping for
          // very large numbers of processors.
          unsigned out_p =
            out_distribution_pt->rank_of_global_row(out_global_eqn);
          // unsigned out_p = out_distribution_pt
          //   ->rank_of_global_row_map(out_global_eqn);

          // Knowing out_p enables us to work out the out_first_row and
          // out_local_eqn.
          unsigned out_first_row = out_distribution_pt->first_row(out_p);
          unsigned out_local_eqn = out_global_eqn - out_first_row;

          // Now push back the out_local_eqn and the value
          values_to_send[out_p].push_back(out_local_eqn);
          values_to_send[out_p].push_back(
            (*in_vector_pt[in_vec_i])[in_row_i]);
        }

        // Update the offset.
        sum_of_vec_nrow += in_vector_pt[in_vec_i]->nrow();
      }

      // Prepare to send the data!

      // Storage for the number of data to be sent to each processor.
      Vector<int> send_n(nproc, 0);

      // Storage for all the values to be sent to each processor.
      Vector<double> send_values_data;

      // Storage location within send_values_data
      Vector<int> send_displacement(nproc, 0);

      // Get the total amount of data which needs to be sent, so we can
      // reserve space for it.
      unsigned total_ndata = 0;
      for (unsigned rank = 0; rank < nproc; rank++)
      {
        if (rank != my_rank)
        {
          total_ndata += values_to_send[rank].size();
        }
      }

      // Now we don't have to re-allocate data/memory when push_back is
      // called. Nb. Using push_back without reserving memory may cause
      // multiple re-allocations behind the scenes; this is expensive.
      send_values_data.reserve(total_ndata);

      // Loop over all the processors to "flat pack" the data for sending.
      for (unsigned rank = 0; rank < nproc; rank++)
      {
        // Set the offset for the current processor
        send_displacement[rank] = send_values_data.size();

        // Don't bother to do anything if
        // the processor in the loop is the current processor.
        if (rank != my_rank)
        {
          // Put the values into the send data vector.
          unsigned n_data = values_to_send[rank].size();
          for (unsigned j = 0; j < n_data; j++)
          {
            send_values_data.push_back(values_to_send[rank][j]);
          } // Loop over the data
        } // if rank != my_rank

        // Find the number of data added to the vector.
        send_n[rank] = send_values_data.size() - send_displacement[rank];
      } // Loop over processors

      // Storage for the number of data to be received from each processor.
      Vector<int> receive_n(nproc, 0);
      MPI_Alltoall(&send_n[0],
                   1,
                   MPI_INT,
                   &receive_n[0],
                   1,
                   MPI_INT,
                   comm_pt->mpi_comm());

      // Prepare the data to be received
      // by working out the displacement from the received data.
      Vector<int> receive_displacement(nproc, 0);
      int receive_data_count = 0;
      for (unsigned rank = 0; rank < nproc; rank++)
      {
        receive_displacement[rank] = receive_data_count;
        receive_data_count += receive_n[rank];
      }

      // Now resize the receive buffer for all data from all processors.
      // Make sure that it has a size of at least one.
      if (receive_data_count == 0)
      {
        receive_data_count++;
      }
      Vector<double> receive_values_data(receive_data_count);

      // Make sure that the send buffer has a size of at least one
      // so that we don't get a segmentation fault.
      if (send_values_data.size() == 0)
      {
        send_values_data.resize(1);
      }

      // Now send the data between all processors
      MPI_Alltoallv(&send_values_data[0],
                    &send_n[0],
                    &send_displacement[0],
                    MPI_DOUBLE,
                    &receive_values_data[0],
                    &receive_n[0],
                    &receive_displacement[0],
                    MPI_DOUBLE,
                    comm_pt->mpi_comm());

      // Data from all other processors are stored in:
      //   receive_values_data
      // Data already on this processor is stored in:
      //   values_to_send[my_rank]

      // Loop through the data on this processor.
      unsigned location_i = 0;
      unsigned my_values_to_send_size = values_to_send[my_rank].size();
      while (location_i < my_values_to_send_size)
      {
        out_vector[unsigned(values_to_send[my_rank][location_i])] =
          values_to_send[my_rank][location_i + 1];

        location_i += 2;
      }

      // Before we loop through the data on other processors, we need to check
      // if any data has been received.
      bool data_has_been_received = false;
      unsigned send_rank = 0;
      while (send_rank < nproc)
      {
        if (receive_n[send_rank] > 0)
        {
          data_has_been_received = true;
          break;
        }
        send_rank++;
      }

      location_i = 0;
      if (data_has_been_received)
      {
        unsigned receive_values_data_size = receive_values_data.size();
        while (location_i < receive_values_data_size)
        {
          out_vector[unsigned(receive_values_data[location_i])] =
            receive_values_data[location_i + 1];
          location_i += 2;
        }
      }
#else
      {
        std::ostringstream error_message;
        error_message << "I don't know what to do with distributed vectors\n"
                      << "without MPI... :(";
        throw OomphLibError(error_message.str(),
                            OOMPH_CURRENT_FUNCTION,
                            OOMPH_EXCEPTION_LOCATION);
      }
#endif
    }
  } // function concatenate

References oomph::DoubleVector::build(), oomph::DoubleVector::built(), oomph::LinearAlgebraDistribution::communicator_pt(), oomph::DistributableLinearAlgebraObject::distributed(), oomph::DistributableLinearAlgebraObject::distribution_pt(), oomph::LinearAlgebraDistribution::first_row(), oomph::OomphCommunicator::my_rank(), oomph::OomphCommunicator::nproc(), oomph::DistributableLinearAlgebraObject::nrow(), OOMPH_CURRENT_FUNCTION, OOMPH_EXCEPTION_LOCATION, oomph::LinearAlgebraDistribution::rank_of_global_row(), and oomph::DoubleVector::values_pt().

Referenced by concatenate(), oomph::HelmholtzMGPreconditioner< DIM >::direct_solve(), main(), and oomph::ExactDGPBlockPreconditioner< MATRIX >::preconditioner_solve().

◆ concatenate() [2/2]

void oomph::DoubleVectorHelpers::concatenate ( Vector< DoubleVector > &  in_vector,
DoubleVector &  out_vector 
)

Wrapper around the other concatenate(...) function. Be careful with Vectors of vectors: if the DoubleVectors are resized, their memory may be reallocated. If we wanted to use the function which takes a Vector of pointers to DoubleVectors, we would either have to invoke new (and remember to delete), or create a temporary Vector to store pointers to the DoubleVector objects. This wrapper makes life easier for the user by avoiding calls to new/delete and the temporary vector of pointers to DoubleVectors. With C++11 this would be much nicer, since smart pointers delete themselves, so we would not have to remember to delete!
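As a sketch of the difference in calling convention (same hypothetical setup as in the example above, with a valid OomphCommunicator pointer comm_pt assumed):

// With the wrapper we pass the DoubleVectors themselves; the Vector of
// pointers is created internally.
Vector<DoubleVector> in_vector(2);
in_vector[0].build(LinearAlgebraDistribution(comm_pt, 3, false), 1.0);
in_vector[1].build(LinearAlgebraDistribution(comm_pt, 5, false), 2.0);

DoubleVector out_vector;
DoubleVectorHelpers::concatenate(in_vector, out_vector);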

  {
    const unsigned n_in_vector = in_vector.size();

    Vector<DoubleVector*> in_vector_pt(n_in_vector, 0);

    for (unsigned i = 0; i < n_in_vector; i++)
    {
      in_vector_pt[i] = &in_vector[i];
    }

    DoubleVectorHelpers::concatenate(in_vector_pt, out_vector);
  } // function concatenate

References concatenate().

◆ concatenate_without_communication() [1/2]

void oomph::DoubleVectorHelpers::concatenate_without_communication ( const Vector< DoubleVector * > &  in_vector_pt,
DoubleVector &  out_vector 
)

Concatenate DoubleVectors. Takes a Vector of DoubleVectors. If the out vector is built, we will not build a new distribution. Otherwise a new distribution will be built using LinearAlgebraDistributionHelpers::concatenate(...).

The out vector has its rows permuted according to the individual distributions of the in vectors. For example, if we have DoubleVectors with distributions A and B, distributed across two processors (p0 and p1),

A: [a0] (on p0)    B: [b0] (on p0)
   [a1] (on p1)       [b1] (on p1),

then the out_vector is

[a0] (on p0)
[b0] (on p0)
[a1] (on p1)
[b1] (on p1),

as opposed to

[a0] (on p0)
[a1] (on p0)
[b0] (on p1)
[b1] (on p1).

Note (1): The out vector may not be uniformly distributed even if the in vectors have uniform distributions. The nrow_local of the out vector will be the sum of the nrow_local of the in vectors. Try this out with two distributions of global rows 3 and 5, uniformly distributed across two processors. Compare this against a distribution of global row 8 distributed across two processors.

There are no MPI sends or receives; the data stays on the processor, as defined by the distributions of the in vectors.
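A minimal sketch (hypothetical setup; a valid OomphCommunicator pointer comm_pt is assumed):

// Two distributed in vectors with 3 and 5 global rows ("true" requests
// a distributed distribution).
DoubleVector v0;
v0.build(LinearAlgebraDistribution(comm_pt, 3, true), 1.0);
DoubleVector v1;
v1.build(LinearAlgebraDistribution(comm_pt, 5, true), 2.0);

Vector<DoubleVector*> in_vector_pt(2);
in_vector_pt[0] = &v0;
in_vector_pt[1] = &v1;

// Leave out_vector unbuilt: its distribution is then created by
// LinearAlgebraDistributionHelpers::concatenate(...), so each value can
// stay on the processor that already holds it.
DoubleVector out_vector;
DoubleVectorHelpers::concatenate_without_communication(in_vector_pt,
                                                       out_vector);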

  {
    // How many in vectors do we want to concatenate?
    unsigned nvectors = in_vector_pt.size();

    // PARANOID checks which involve the in vectors only.
#ifdef PARANOID
    // Check that there is at least one vector.
    if (nvectors == 0)
    {
      std::ostringstream error_message;
      error_message << "There is no vector to concatenate...\n"
                    << "Perhaps you forgot to fill in_vector_pt?\n";
      throw OomphLibError(error_message.str(),
                          OOMPH_CURRENT_FUNCTION,
                          OOMPH_EXCEPTION_LOCATION);
    }

    // Does this vector need concatenating?
    if (nvectors == 1)
    {
      std::ostringstream warning_message;
      warning_message << "There is only one vector to concatenate...\n"
                      << "This does not require concatenating...\n";
      OomphLibWarning(warning_message.str(),
                      OOMPH_CURRENT_FUNCTION,
                      OOMPH_EXCEPTION_LOCATION);
    }

    // Check that all the DoubleVectors in in_vector_pt are built
    for (unsigned vec_i = 0; vec_i < nvectors; vec_i++)
    {
      if (!in_vector_pt[vec_i]->built())
      {
        std::ostringstream error_message;
        error_message << "The vector in position " << vec_i
                      << " is not built.\n"
                      << "I cannot concatenate an unbuilt vector.\n";
        throw OomphLibError(error_message.str(),
                            OOMPH_CURRENT_FUNCTION,
                            OOMPH_EXCEPTION_LOCATION);
      }
    }
#endif

    // If the out vector is not built, build it with the correct distribution.
    if (!out_vector.built())
    {
      Vector<LinearAlgebraDistribution*> in_distribution_pt(nvectors, 0);
      for (unsigned vec_i = 0; vec_i < nvectors; vec_i++)
      {
        in_distribution_pt[vec_i] = in_vector_pt[vec_i]->distribution_pt();
      }

      LinearAlgebraDistribution tmp_distribution;
      LinearAlgebraDistributionHelpers::concatenate(in_distribution_pt,
                                                    tmp_distribution);
      out_vector.build(tmp_distribution, 0.0);
    }

    // PARANOID checks which involve all in vectors and the out vector.
#ifdef PARANOID

    // Check that all communicators of the vectors to concatenate are the same
    // by comparing all communicators against the out vector.
    const OomphCommunicator out_comm =
      *(out_vector.distribution_pt()->communicator_pt());

    for (unsigned vec_i = 0; vec_i < nvectors; vec_i++)
    {
      // Get the Communicator for the current vector.
      const OomphCommunicator in_comm =
        *(in_vector_pt[vec_i]->distribution_pt()->communicator_pt());

      if (out_comm != in_comm)
      {
        std::ostringstream error_message;
        error_message << "The vector in position " << vec_i << " has a\n"
                      << "different communicator from the out vector.\n";
        throw OomphLibError(error_message.str(),
                            OOMPH_CURRENT_FUNCTION,
                            OOMPH_EXCEPTION_LOCATION);
      }
    }

    // Check that the distributed boolean is the same for all vectors.
    if (out_comm.nproc() > 1)
    {
      const bool out_distributed = out_vector.distributed();
      for (unsigned vec_i = 0; vec_i < nvectors; vec_i++)
      {
        if (out_distributed != in_vector_pt[vec_i]->distributed())
        {
          std::ostringstream error_message;
          error_message << "The vector in position " << vec_i << " has a\n"
                        << "different distributed boolean from the "
                        << "out vector.\n";
          throw OomphLibError(error_message.str(),
                              OOMPH_CURRENT_FUNCTION,
                              OOMPH_EXCEPTION_LOCATION);
        }
      }
    }

    // Check that the distribution from the out vector is indeed the
    // same as the one created by
    // LinearAlgebraDistributionHelpers::concatenate(...). This test is
    // redundant if the out_vector is not built to begin with.

    // Create tmp_distribution, a concatenation of all distributions from
    // the in vectors.
    Vector<LinearAlgebraDistribution*> in_distribution_pt(nvectors, 0);
    for (unsigned vec_i = 0; vec_i < nvectors; vec_i++)
    {
      in_distribution_pt[vec_i] = in_vector_pt[vec_i]->distribution_pt();
    }

    LinearAlgebraDistribution tmp_distribution;
    LinearAlgebraDistributionHelpers::concatenate(in_distribution_pt,
                                                  tmp_distribution);
    // The distribution from the out vector.
    LinearAlgebraDistribution out_distribution =
      *(out_vector.distribution_pt());

    // Compare them!
    if (tmp_distribution != out_distribution)
    {
      std::ostringstream error_message;
      error_message << "The distribution of the out vector is not correct.\n"
                    << "Please call the function with a cleared out vector,\n"
                    << "or compare the distribution of the out vector with\n"
                    << "the distribution created by\n"
                    << "LinearAlgebraDistributionHelpers::concatenate(...)\n";
      throw OomphLibError(error_message.str(),
                          OOMPH_CURRENT_FUNCTION,
                          OOMPH_EXCEPTION_LOCATION);
    }

    // Do not need these distributions.
    tmp_distribution.clear();
    out_distribution.clear();
#endif


    unsigned out_value_offset = 0;

    double* out_value_pt = out_vector.values_pt();

    // Loop through the vectors.
    for (unsigned vec_i = 0; vec_i < nvectors; vec_i++)
    {
      // Get the nrow_local and
      // pointer to the values for the current in vector.
      unsigned in_vector_nrow_local = in_vector_pt[vec_i]->nrow_local();
      double* in_vector_value_pt = in_vector_pt[vec_i]->values_pt();

      // Loop through the local values and insert them into the out_vector.
      for (unsigned val_i = 0; val_i < in_vector_nrow_local; val_i++)
      {
        out_value_pt[out_value_offset + val_i] = in_vector_value_pt[val_i];
      }

      // Update the offset.
      out_value_offset += in_vector_nrow_local;
    }
  } // function concatenate_without_communication

References oomph::DoubleVector::build(), oomph::DoubleVector::built(), oomph::LinearAlgebraDistribution::clear(), oomph::LinearAlgebraDistribution::communicator_pt(), oomph::LinearAlgebraDistributionHelpers::concatenate(), oomph::DistributableLinearAlgebraObject::distributed(), oomph::DistributableLinearAlgebraObject::distribution_pt(), oomph::OomphCommunicator::nproc(), OOMPH_CURRENT_FUNCTION, OOMPH_EXCEPTION_LOCATION, and oomph::DoubleVector::values_pt().

Referenced by concatenate_without_communication(), oomph::BlockPreconditioner< MATRIX >::get_block_vector(), oomph::BlockPreconditioner< MATRIX >::get_block_vectors(), oomph::BlockPreconditioner< MATRIX >::get_dof_level_block(), and main().

◆ concatenate_without_communication() [2/2]

void oomph::DoubleVectorHelpers::concatenate_without_communication ( Vector< DoubleVector > &  in_vector,
DoubleVector &  out_vector 
)

Wrapper around the other concatenate_without_communication(...) function. Be careful with Vectors of vectors: if the DoubleVectors are resized, their memory may be reallocated. If we wanted to use the function which takes a Vector of pointers to DoubleVectors, we would either have to invoke new (and remember to delete), or create a temporary Vector to store pointers to the DoubleVector objects. This wrapper makes life easier for the user by avoiding calls to new/delete and the temporary vector of pointers to DoubleVectors. With C++11 this would be much nicer, since smart pointers delete themselves, so we would not have to remember to delete!

  {
    const unsigned n_in_vector = in_vector.size();

    Vector<DoubleVector*> in_vector_pt(n_in_vector, 0);

    for (unsigned i = 0; i < n_in_vector; i++)
    {
      in_vector_pt[i] = &in_vector[i];
    }

    DoubleVectorHelpers::concatenate_without_communication(in_vector_pt,
                                                           out_vector);
  } // function concatenate_without_communication

References concatenate_without_communication().

◆ split() [1/2]

void oomph::DoubleVectorHelpers::split ( const DoubleVector &  in_vector,
Vector< DoubleVector * > &  out_vector_pt 
)

Split a DoubleVector into the out DoubleVectors. Let vec_A be the in vector, and let vec_B and vec_C be the out vectors. Then the splitting of vec_A is depicted below:

vec_A: [a0] (on p0)
       [a1] (on p0)
       [a2] (on p1)
       [a3] (on p1)

vec_B: [a0] (on p0)    vec_C: [a2] (on p0)
       [a1] (on p1)           [a3] (on p1)

Communication is required between processors. The out_vector_pt must contain pointers to DoubleVectors which have already been built with the correct distributions; the sum of the global numbers of rows of the out vectors must equal the global number of rows of the in vector.
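A minimal sketch (hypothetical setup; a valid OomphCommunicator pointer comm_pt is assumed). Note that, unlike in concatenate(...), the out vectors must be built before the call:

// In vector with 8 global rows, split into out vectors with 3 and 5
// global rows.
DoubleVector in_vector;
in_vector.build(LinearAlgebraDistribution(comm_pt, 8, true), 1.0);

DoubleVector out0;
out0.build(LinearAlgebraDistribution(comm_pt, 3, true), 0.0);
DoubleVector out1;
out1.build(LinearAlgebraDistribution(comm_pt, 5, true), 0.0);

Vector<DoubleVector*> out_vector_pt(2);
out_vector_pt[0] = &out0;
out_vector_pt[1] = &out1;

DoubleVectorHelpers::split(in_vector, out_vector_pt);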

  {
    // How many out vectors do we have?
    unsigned nvec = out_vector_pt.size();
#ifdef PARANOID

    // Check that the in vector is built.
    if (!in_vector.built())
    {
      std::ostringstream error_message;
      error_message << "The in_vector is not built.\n"
                    << "Please build it!\n";
      throw OomphLibError(error_message.str(),
                          OOMPH_CURRENT_FUNCTION,
                          OOMPH_EXCEPTION_LOCATION);
    }

    // Check that all the out vectors are built.
    for (unsigned vec_i = 0; vec_i < nvec; vec_i++)
    {
      if (!out_vector_pt[vec_i]->built())
      {
        std::ostringstream error_message;
        error_message << "The vector at position " << vec_i
                      << " is not built.\n"
                      << "Please build it!\n";
        throw OomphLibError(error_message.str(),
                            OOMPH_CURRENT_FUNCTION,
                            OOMPH_EXCEPTION_LOCATION);
      }
    }

    // Check that the sum of the nrow from the out vectors is the same as
    // the nrow of the in_vector.
    unsigned out_nrow_sum = 0;
    for (unsigned vec_i = 0; vec_i < nvec; vec_i++)
    {
      out_nrow_sum += out_vector_pt[vec_i]->nrow();
    }

    if (in_vector.nrow() != out_nrow_sum)
    {
      std::ostringstream error_message;
      error_message << "The global number of rows in the in_vector\n"
                    << "is not equal to the sum of the global nrows\n"
                    << "of the out vectors.\n";
      throw OomphLibError(error_message.str(),
                          OOMPH_CURRENT_FUNCTION,
                          OOMPH_EXCEPTION_LOCATION);
    }

    // Check that all communicators are the same. We use a communicator to
    // get the number of processors and my_rank. So we would like them to be
    // the same for in_vector and all out vectors.
    const OomphCommunicator in_vector_comm =
      *(in_vector.distribution_pt()->communicator_pt());
    for (unsigned vec_i = 0; vec_i < nvec; vec_i++)
    {
      const OomphCommunicator dist_i_comm =
        *(out_vector_pt[vec_i]->distribution_pt()->communicator_pt());

      if (in_vector_comm != dist_i_comm)
      {
        std::ostringstream error_message;
        error_message << "The communicator for the distribution in the \n"
                      << "position " << vec_i
                      << " is not the same as the in_vector\n";
        throw OomphLibError(error_message.str(),
                            OOMPH_CURRENT_FUNCTION,
                            OOMPH_EXCEPTION_LOCATION);
      }
    }

    // Check that the distributed boolean is the same for all vectors.
    bool para_distributed = in_vector.distributed();

    for (unsigned vec_i = 0; vec_i < nvec; vec_i++)
    {
      if (para_distributed != out_vector_pt[vec_i]->distributed())
      {
        std::ostringstream error_message;
        error_message
          << "The vector in position " << vec_i << " does not \n"
          << " have the same distributed boolean as the in_vector\n";
        throw OomphLibError(error_message.str(),
                            OOMPH_CURRENT_FUNCTION,
                            OOMPH_EXCEPTION_LOCATION);
      }
    }

#endif

    // The communicator.
    const OomphCommunicator* const comm_pt =
      in_vector.distribution_pt()->communicator_pt();

    // Is this distributed?
    bool distributed = in_vector.distributed();

    // The serial code.
    if ((comm_pt->nproc() == 1) || !distributed)
    {
      // Serial version of the code: loop through all the out vectors and
      // insert the elements of in_vector.

      // Index for the in vector, and the in vector values.
      unsigned in_vec_i = 0;
      double* in_value_pt = in_vector.values_pt();

      // Fill in the out vectors.
      for (unsigned out_vec_i = 0; out_vec_i < nvec; out_vec_i++)
      {
        // Out vector nrow and values.
        unsigned out_nrow = out_vector_pt[out_vec_i]->nrow();
        double* out_value_pt = out_vector_pt[out_vec_i]->values_pt();

        // Fill in the current out vector.
        for (unsigned out_val_i = 0; out_val_i < out_nrow; out_val_i++)
        {
          out_value_pt[out_val_i] = in_value_pt[in_vec_i++];
        }
      }
    }
    // Otherwise we are dealing with a distributed vector.
    else
    {
#ifdef OOMPH_HAS_MPI
      // For each entry in the in_vector, we need to work out:
      // 1) which out vector this entry belongs to,
      // 2) which processor to send the data to and
      // 3) the local equation number in the out vector.
      //
      // We know the in_local_eqn, so we can work out the in_global_eqn.
      //
      // From in_global_eqn we can work out the out vector and
      // the out_global_eqn.
      //
      // The out_global_eqn allows us to determine which processor to send to.
      // With the out_p (processor to send data to) and out vector, we get the
      // out_first_row which then allows us to work out the out_local_eqn.


      // Get the number of processors
      unsigned nproc = comm_pt->nproc();

      // My rank
      unsigned my_rank = comm_pt->my_rank();

      // Storage for the data (per processor) to send.
      Vector<Vector<double>> values_to_send(nproc);

      // Sum of the nrow of the out vectors so far. This is used to work out
      // which out vector an in_global_eqn belongs to.
      Vector<unsigned> sum_of_out_nrow(nvec + 1);
      for (unsigned vec_i = 0; vec_i < nvec; vec_i++)
      {
        sum_of_out_nrow[vec_i + 1] =
          sum_of_out_nrow[vec_i] + out_vector_pt[vec_i]->nrow();
      }

      // Loop through the in_vector local values.
      unsigned in_nrow_local = in_vector.nrow_local();
      for (unsigned in_local_eqn = 0; in_local_eqn < in_nrow_local;
           in_local_eqn++)
      {
        // The global equation number of this row.
        unsigned in_global_eqn = in_local_eqn + in_vector.first_row();

        // Which out_vector does this in_global_eqn belong to?
        unsigned out_vector_i = 0;
        while (in_global_eqn < sum_of_out_nrow[out_vector_i] ||
               in_global_eqn >= sum_of_out_nrow[out_vector_i + 1])
        {
          out_vector_i++;
        }

        // The out_global_eqn
        // (this is the global equation in the current out vector)
        unsigned out_global_eqn =
          in_global_eqn - sum_of_out_nrow[out_vector_i];

        // The processor to send this row to.
        unsigned out_p =
          out_vector_pt[out_vector_i]->distribution_pt()->rank_of_global_row(
            out_global_eqn);

        // The local_eqn in the out_vector_i
        unsigned out_local_eqn =
          out_global_eqn -
          out_vector_pt[out_vector_i]->distribution_pt()->first_row(out_p);


        // Fill in the data to send

        // Which out vector to put this data in.
        values_to_send[out_p].push_back(out_vector_i);

        // The local equation of the data.
        values_to_send[out_p].push_back(out_local_eqn);

        // The actual data.
        values_to_send[out_p].push_back(in_vector[in_local_eqn]);
      }

      // Prepare to send the data!

      // Storage for the number of data to be sent to each processor.
      Vector<int> send_n(nproc, 0);

      // Storage for all the values to be sent to each processor.
      Vector<double> send_values_data;

      // Storage location within send_values_data
      Vector<int> send_displacement(nproc, 0);

      // Get the total amount of data which needs to be sent, so we can
      // reserve space for it.
      unsigned total_ndata = 0;
      for (unsigned rank = 0; rank < nproc; rank++)
      {
        if (rank != my_rank)
        {
          total_ndata += values_to_send[rank].size();
        }
      }

      // Now we don't have to re-allocate data/memory when push_back is
      // called. Nb. Using push_back without reserving memory may cause
      // multiple re-allocations behind the scenes; this is expensive.
      send_values_data.reserve(total_ndata);

      // Loop over all the processors to "flat pack" the data for sending.
      for (unsigned rank = 0; rank < nproc; rank++)
      {
        // Set the offset for the current processor
        send_displacement[rank] = send_values_data.size();

        // Don't bother to do anything if
        // the processor in the loop is the current processor.
        if (rank != my_rank)
        {
          // Put the values into the send data vector.
          unsigned n_data = values_to_send[rank].size();
          for (unsigned j = 0; j < n_data; j++)
          {
            send_values_data.push_back(values_to_send[rank][j]);
          } // Loop over the data
        } // if rank != my_rank

        // Find the number of data added to the vector.
        send_n[rank] = send_values_data.size() - send_displacement[rank];
      } // Loop over processors

      // Storage for the number of data to be received from each processor.
      Vector<int> receive_n(nproc, 0);
      MPI_Alltoall(&send_n[0],
                   1,
                   MPI_INT,
                   &receive_n[0],
                   1,
                   MPI_INT,
                   comm_pt->mpi_comm());

      // Prepare the data to be received
      // by working out the displacement from the received data.
      Vector<int> receive_displacement(nproc, 0);
      int receive_data_count = 0;
      for (unsigned rank = 0; rank < nproc; rank++)
      {
        receive_displacement[rank] = receive_data_count;
        receive_data_count += receive_n[rank];
      }

      // Now resize the receive buffer for all data from all processors.
      // Make sure that it has a size of at least one.
      if (receive_data_count == 0)
      {
        receive_data_count++;
      }
      Vector<double> receive_values_data(receive_data_count);

      // Make sure that the send buffer has a size of at least one
      // so that we don't get a segmentation fault.
      if (send_values_data.size() == 0)
      {
        send_values_data.resize(1);
      }

      // Now send the data between all processors
      MPI_Alltoallv(&send_values_data[0],
                    &send_n[0],
                    &send_displacement[0],
                    MPI_DOUBLE,
                    &receive_values_data[0],
                    &receive_n[0],
                    &receive_displacement[0],
                    MPI_DOUBLE,
                    comm_pt->mpi_comm());

      // Data from all other processors are stored in:
      //   receive_values_data
      // Data already on this processor is stored in:
      //   values_to_send[my_rank]

      // Index for the values_to_send Vector.
      unsigned location_i = 0;
      // Loop through the data on this processor
      unsigned my_values_to_send_size = values_to_send[my_rank].size();
      while (location_i < my_values_to_send_size)
      {
        // The vector to put the values in.
        unsigned out_vector_i =
          unsigned(values_to_send[my_rank][location_i++]);

        // Where to put the value.
        unsigned out_local_eqn =
          unsigned(values_to_send[my_rank][location_i++]);

        // The actual value!
        double out_value = values_to_send[my_rank][location_i++];

        // Insert the value in the out vector.
        (*out_vector_pt[out_vector_i])[out_local_eqn] = out_value;
      }

      // Before we loop through the data on other processors, we need to check
      // if any data has been received. This is because the
      // receive_values_data has been resized to at least one, even if no data
      // is sent.
      bool data_has_been_received = false;
      unsigned send_rank = 0;
      while (send_rank < nproc)
      {
        if (receive_n[send_rank] > 0)
        {
          data_has_been_received = true;
          break;
        }
        send_rank++;
      }

      // Reset the index; it is now being used to index the
      // receive_values_data vector.
      location_i = 0;
      if (data_has_been_received)
      {
        // Extract the data and put it into the out vector.
        unsigned receive_values_data_size = receive_values_data.size();
        while (location_i < receive_values_data_size)
        {
          // Which out vector to put the value in?
          unsigned out_vector_i = unsigned(receive_values_data[location_i++]);

          // Where in the out vector to put the value?
          unsigned out_local_eqn =
            unsigned(receive_values_data[location_i++]);

          // The value to put in.
          double out_value = receive_values_data[location_i++];

          // Insert the value in the out vector.
          (*out_vector_pt[out_vector_i])[out_local_eqn] = out_value;
        }
      }
#else
      {
        std::ostringstream error_message;
        error_message << "You have a distributed vector but with no mpi...\n"
                      << "I don't know what to do :( \n";
        throw OomphLibError(
          error_message.str(), "RYARAYERR", OOMPH_EXCEPTION_LOCATION);
      }
#endif
    }
  } // function split(...)

References oomph::DoubleVector::built(), oomph::LinearAlgebraDistribution::communicator_pt(), oomph::DistributableLinearAlgebraObject::distributed(), oomph::DistributableLinearAlgebraObject::distribution_pt(), oomph::DistributableLinearAlgebraObject::first_row(), oomph::OomphCommunicator::my_rank(), oomph::OomphCommunicator::nproc(), oomph::DistributableLinearAlgebraObject::nrow(), oomph::DistributableLinearAlgebraObject::nrow_local(), OOMPH_CURRENT_FUNCTION, OOMPH_EXCEPTION_LOCATION, and oomph::DoubleVector::values_pt().

Referenced by oomph::HelmholtzMGPreconditioner< DIM >::direct_solve(), main(), oomph::ExactDGPBlockPreconditioner< MATRIX >::preconditioner_solve(), and split().

◆ split() [2/2]

void oomph::DoubleVectorHelpers::split ( const DoubleVector &  in_vector,
Vector< DoubleVector > &  out_vector 
)

Wrapper around the other split(...) function. Be careful with Vectors of vectors: if the DoubleVectors are resized, their memory may be reallocated. If we wanted to use the function which takes a Vector of pointers to DoubleVectors, we would either have to invoke new (and remember to delete), or create a temporary Vector to store pointers to the DoubleVector objects. This wrapper makes life easier for the user by avoiding calls to new/delete and the temporary vector of pointers to DoubleVectors. With C++11 this would be much nicer, since smart pointers delete themselves, so we would not have to remember to delete!

  {
    const unsigned n_out_vector = out_vector.size();
    Vector<DoubleVector*> out_vector_pt(n_out_vector, 0);

    for (unsigned i = 0; i < n_out_vector; i++)
    {
      out_vector_pt[i] = &out_vector[i];
    }

    DoubleVectorHelpers::split(in_vector, out_vector_pt);
  } // function split(...)

References split().

◆ split_without_communication() [1/2]

void oomph::DoubleVectorHelpers::split_without_communication ( const DoubleVector &  in_vector,
Vector< DoubleVector * > &  out_vector_pt 
)

Split a DoubleVector into the out DoubleVectors. Data stays on its current processor; no data is sent between processors. This results in out vectors which are a permutation of the in vector.

Let vec_A be the in vector, and let vec_B and vec_C be the out vectors. Then the splitting of vec_A is depicted below:

vec_A: [a0] (on p0)
       [a1] (on p0)
       [a2] (on p1)
       [a3] (on p1)

vec_B: [a0] (on p0)    vec_C: [a1] (on p0)
       [a2] (on p1)           [a3] (on p1).

This means that the distribution of the in vector MUST be a concatenation of the out vector distributions; refer to LinearAlgebraDistributionHelpers::concatenate(...) to concatenate distributions.
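A minimal sketch (hypothetical setup; a valid OomphCommunicator pointer comm_pt is assumed) which guarantees the required compatibility by building the in vector's distribution as a concatenation of the out vectors' distributions:

// Out vectors with 3 and 5 global rows.
DoubleVector out0;
out0.build(LinearAlgebraDistribution(comm_pt, 3, true), 0.0);
DoubleVector out1;
out1.build(LinearAlgebraDistribution(comm_pt, 5, true), 0.0);

// Build the in vector's distribution as the concatenation of the out
// vectors' distributions.
Vector<LinearAlgebraDistribution*> out_distribution_pt(2);
out_distribution_pt[0] = out0.distribution_pt();
out_distribution_pt[1] = out1.distribution_pt();

LinearAlgebraDistribution in_distribution;
LinearAlgebraDistributionHelpers::concatenate(out_distribution_pt,
                                              in_distribution);

DoubleVector in_vector;
in_vector.build(in_distribution, 1.0);

// Split; no inter-processor communication takes place.
Vector<DoubleVector*> out_vector_pt(2);
out_vector_pt[0] = &out0;
out_vector_pt[1] = &out1;

DoubleVectorHelpers::split_without_communication(in_vector, out_vector_pt);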

  {
    // How many out vectors do we have?
    unsigned nvec = out_vector_pt.size();

#ifdef PARANOID
    // Check that in_vector is built
    if (!in_vector.built())
    {
      std::ostringstream error_message;
      error_message << "The in_vector is not built.\n"
                    << "Please build it!\n";
      throw OomphLibError(error_message.str(),
                          OOMPH_CURRENT_FUNCTION,
                          OOMPH_EXCEPTION_LOCATION);
    }

    // Check that all out vectors are built.
    for (unsigned vec_i = 0; vec_i < nvec; vec_i++)
    {
      if (!out_vector_pt[vec_i]->built())
      {
        std::ostringstream error_message;
        error_message << "The vector at position " << vec_i
                      << " is not built.\n"
                      << "Please build it!\n";
        throw OomphLibError(error_message.str(),
                            OOMPH_CURRENT_FUNCTION,
                            OOMPH_EXCEPTION_LOCATION);
      }
    }

    // Check that the concatenation of distributions from the out vectors is
    // the same as the distribution from in_vector.

    // Create the distribution from the out distributions.
    Vector<LinearAlgebraDistribution*> tmp_out_distribution_pt(nvec, 0);
    for (unsigned vec_i = 0; vec_i < nvec; vec_i++)
    {
      tmp_out_distribution_pt[vec_i] =
        out_vector_pt[vec_i]->distribution_pt();
    }

    LinearAlgebraDistribution tmp_distribution;
    LinearAlgebraDistributionHelpers::concatenate(tmp_out_distribution_pt,
                                                  tmp_distribution);
    // Compare the distributions
    if (tmp_distribution != *(in_vector.distribution_pt()))
    {
      std::ostringstream error_message;
      error_message << "The distribution from the in vector is incorrect.\n"
                    << "It must be a concatenation of all the distributions\n"
                    << "from the out vectors.\n";
      throw OomphLibError(error_message.str(),
                          OOMPH_CURRENT_FUNCTION,
                          OOMPH_EXCEPTION_LOCATION);
    }

    // Clear the distribution.
    tmp_distribution.clear();

    // Check that all communicators are the same. We use a communicator to
    // get the number of processors and my_rank. So we would like them to be
    // the same for the in vector and all the out vectors.
    const OomphCommunicator in_vector_comm =
      *(in_vector.distribution_pt()->communicator_pt());
    for (unsigned vec_i = 0; vec_i < nvec; vec_i++)
    {
      const OomphCommunicator vec_i_comm =
        *(out_vector_pt[vec_i]->distribution_pt()->communicator_pt());

      if (in_vector_comm != vec_i_comm)
      {
        std::ostringstream error_message;
        error_message << "The communicator for the vector in position\n"
                      << vec_i << " is not the same as the in_vector\n"
                      << "communicator.";
        throw OomphLibError(error_message.str(),
                            OOMPH_CURRENT_FUNCTION,
                            OOMPH_EXCEPTION_LOCATION);
      }
    }

    // Check that if the in vector is distributed, then all the out vectors
    // are also distributed.
    if (in_vector_comm.nproc() > 1)
    {
      bool in_distributed = in_vector.distributed();
      for (unsigned vec_i = 0; vec_i < nvec; vec_i++)
      {
        if (in_distributed != out_vector_pt[vec_i]->distributed())
        {
          std::ostringstream error_message;
          error_message << "The vector in position " << vec_i
                        << " does not have\n"
                        << "the same distributed boolean as the in vector";
          throw OomphLibError(error_message.str(),
                              OOMPH_CURRENT_FUNCTION,
                              OOMPH_EXCEPTION_LOCATION);
        }
      }
    }
#endif

    // Loop through the sub vectors and insert the values from the
    // in vector.
    double* in_value_pt = in_vector.values_pt();
    unsigned in_value_offset = 0;
    for (unsigned vec_i = 0; vec_i < nvec; vec_i++)
    {
      // The nrow_local and values for the current out vector.
      unsigned out_nrow_local = out_vector_pt[vec_i]->nrow_local();
      double* out_value_pt = out_vector_pt[vec_i]->values_pt();

      // Loop through the local values of the out vector.
      for (unsigned val_i = 0; val_i < out_nrow_local; val_i++)
      {
        out_value_pt[val_i] = in_value_pt[in_value_offset + val_i];
      }

      // Update the offset.
      in_value_offset += out_nrow_local;
    }
  } // function split_without_communication

References oomph::DoubleVector::built(), oomph::LinearAlgebraDistribution::clear(), oomph::LinearAlgebraDistribution::communicator_pt(), oomph::LinearAlgebraDistributionHelpers::concatenate(), oomph::DistributableLinearAlgebraObject::distributed(), oomph::DistributableLinearAlgebraObject::distribution_pt(), oomph::OomphCommunicator::nproc(), OOMPH_CURRENT_FUNCTION, OOMPH_EXCEPTION_LOCATION, and oomph::DoubleVector::values_pt().

Referenced by main(), oomph::BlockPreconditioner< MATRIX >::return_block_vector(), oomph::BlockPreconditioner< MATRIX >::return_block_vectors(), and split_without_communication().

◆ split_without_communication() [2/2]

void oomph::DoubleVectorHelpers::split_without_communication ( const DoubleVector &  in_vector,
Vector< DoubleVector > &  out_vector 
)

Wrapper around the other split_without_communication(...) function. Be careful with Vectors of vectors: if the DoubleVectors are resized, their memory may be reallocated. If we wanted to use the function which takes a Vector of pointers to DoubleVectors, we would either have to invoke new (and remember to delete), or create a temporary Vector to store pointers to the DoubleVector objects. This wrapper makes life easier for the user by avoiding calls to new/delete and the temporary vector of pointers to DoubleVectors. With C++11 this would be much nicer, since smart pointers delete themselves, so we would not have to remember to delete!

  {
    const unsigned n_out_vector = out_vector.size();

    Vector<DoubleVector*> out_vector_pt(n_out_vector, 0);

    for (unsigned i = 0; i < n_out_vector; i++)
    {
      out_vector_pt[i] = &out_vector[i];
    }

    DoubleVectorHelpers::split_without_communication(in_vector,
                                                     out_vector_pt);
  } // function split_without_communication

References split_without_communication().