Currently, whenever we need to populate the routes' geometry, we simply loop over the routes in the solution and call `Wrapper::add_geometry`:
```cpp
for (auto& route : sol.routes) {
  const auto& profile = route.profile;

  auto rw = std::ranges::find_if(_routing_wrappers, [&](const auto& wr) {
    return wr->profile == profile;
  });

  if (rw == _routing_wrappers.end()) {
    throw InputException(
      "Route geometry request with non-routable profile " + profile + ".");
  }

  (*rw)->add_geometry(route);
}
```
This means route requests to the underlying routing engine are made sequentially, while we could send them concurrently.
The speedup would probably be very small, and almost unnoticeable for most instances: getting the matrices and solving usually takes much longer than the routing part. Still, it would be nice to try it out.
Note: matrix computing is already parallelized across profiles (`Input::set_matrices` calls `get_matrices_by_profile` concurrently across all profile values).
(The `add_geometry` loop quoted at the top is in vroom/src/structures/vroom/input/input.cpp, lines 1188 to 1198 at commit 9905815.)