Call ExecuteTrajectoryAction while planning (#3676)
Conversation
Codecov Report — Attention: Patch coverage is …

```
@@            Coverage Diff             @@
##           master    #3676      +/-   ##
==========================================
- Coverage   47.85%    0.00%    -47.84%
==========================================
  Files         604      582        -22
  Lines       61108    57364      -3744
  Branches     7029     7142       +113
==========================================
- Hits        29235        0     -29235
- Misses      31455    57364     +25909
+ Partials      418        0       -418
```
rhaschke left a comment:
Adding a second spinner not only allows for parallel processing of execution and planning requests, but of any two ROS events. I'm afraid, the code is not prepared for this concurrency.
Sure, that's why I'm asking about mutexes. I'm thinking of something like a planning mutex and an executing mutex, so only planning+executing can happen concurrently, not e.g. planning+planning.
This would be easier in ROS2, with a separate spinner for the trajectory execution manager, but should still be achievable in ROS1.
I was thinking of a separate spinner and callback queue for trajectory execution as well. This is possible in ROS1.
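In ROS1, a dedicated queue and spinner looks roughly like the following. This is a sketch only (the function name and variable names are illustrative, and it needs a ROS1 environment to build):

```cpp
#include <ros/ros.h>
#include <ros/callback_queue.h>

ros::CallbackQueue execution_queue;  // queue dedicated to execution callbacks

void startExecutionSpinner(ros::NodeHandle& nh)
{
  // Anything subscribed or advertised via this NodeHandle is serviced by
  // execution_queue instead of the global queue used for planning.
  nh.setCallbackQueue(&execution_queue);

  // One dedicated thread spins only this queue, so execution callbacks
  // no longer compete with planning callbacks for spinner threads.
  static ros::AsyncSpinner execution_spinner(1, &execution_queue);
  execution_spinner.start();
}
```

The key point is that `ros::AsyncSpinner` accepts a specific `CallbackQueue*`, which is what makes a per-capability spinner possible in ROS1.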
Ooh, I did not know about this. I will take a look, thanks.
@rhaschke I've replaced the second spinner thread with a dedicated spinner in
^ Nevermind, I see the planning scene monitor has its own spinner, so the
v4hn left a comment:
I just looked into this for a while and the spinner for the separate queue should be fine.
The early return in updateSceneWithCurrentState seems problematic, though, and needs more details in my opinion.
Please rebase the branch to enable the required jammy and noble CI checks.
Resolved review threads on:
moveit_ros/move_group/src/default_capabilities/execute_trajectory_action_capability.cpp
moveit_ros/planning/planning_scene_monitor/src/planning_scene_monitor.cpp
v4hn left a comment:
Personally I'm fine with the proposed patch, as it improves the current situation.
Thank you for contributing.
A clean rebase (instead of merging from master) would be appreciated.
Not sure whether you or @rhaschke want to address the bigger design issue (which needs some additional verification) of where to set state_update_pending_.
Resolved review threads on:
moveit_ros/planning/planning_scene_monitor/src/planning_scene_monitor.cpp
I think I've now rebased. I'm not very familiar with rebasing, so I don't know if it worked properly. The network graph looks right, but the commit history is a mess, with most commits duplicated. I don't know if this is expected.
You managed to rebase locally. be49dc4 is the correct
... but messed up subsequently by merging your rebased branch back into your PR branch.
(force-pushed from dc1a8b9 to be49dc4)
rhaschke left a comment:
Generally, I approve this as well. However, I have some cleanup suggestions filed as rivelinrobotics#17.
Does MoveIt use a style guide which covers the
As the argument is passed by value, const is not needed from the caller's perspective. Not using const allows the function to (re)use the variable in some other fashion. Declaring it const forbids that, too.
I'm not aware of that.
Thanks for these. I've left one comment on your PR: I'm not sure if the 0.1 second timeout will play nicely with the other timeouts when calling
Oh, sorry, I think your commit to revert one of my commits reverted more than you intended? rivelinrobotics@94a71f3

I've rebase-merged your PR into mine. I'll just fix the extra things that were reverted, one sec.

Okay, I think everything is as it should be now.

And yes, I think I misremembered the other timeouts being 0.1s rather than 1s, so the 0.1s timeout is fine. I am happy with all the changes here now.
(force-pushed from d19ecf8 to 0e6ce8e)
```cpp
boost::unique_lock<boost::shared_mutex> ulock(scene_update_mutex_, boost::defer_lock);
if (!skip_update_if_locked)
  ulock.lock();
else if (!ulock.try_lock_for(boost::chrono::duration<double>(std::min(0.1, 0.1 * dt_state_update_.toSec()))))
```
At least formally, dt_state_update_ is protected by state_pending_mutex_, which you don't hold here.
I don't think it's necessary to wait here at all.
Locking of the planning scene due to planning is only one locking scenario. Most other holders keep the lock only briefly, e.g. scenePublishingThread(), getPlanningSceneServiceCallback(), clearOctomap(), etc.
Thus, I thought that we should wait for the lock to become available and correctly finish the update.
Otherwise, we might get very irregular update frequencies most of the time.
However, I agree that we might want to use a fixed duration:
```diff
- else if (!ulock.try_lock_for(boost::chrono::duration<double>(std::min(0.1, 0.1 * dt_state_update_.toSec()))))
+ else if (!ulock.try_lock_for(boost::chrono::milliseconds(100)))
```

The concrete duration is debatable. Probably 10ms would be fine too. I have no idea. Ideally, we should collect some real-world statistics...
Anyway: In the past we waited forever. Thus 100ms is a tremendous improvement.
Call ExecuteTrajectoryAction while planning (moveit#3676): This allows parallel execution + planning. Also required modifying updateSceneWithCurrentState() to allow skipping a scene update with a new robot state (from CurrentStateMonitor) if the planning scene is currently locked (due to planning). Otherwise, the CurrentStateMonitor would block too.
Co-authored-by: Robert Haschke <rhaschke@techfak.uni-bielefeld.de>
Description

- Adds a separate callback queue and spinner for MoveGroupExecuteTrajectoryAction, to allow it to run while other Capabilities are running.
- Makes PlanningSceneMonitor::updateSceneWithCurrentState return early if the mutex is locked, rather than waiting for the mutex. This allows the CurrentStateMonitor to continue updating while the PlanningSceneMonitor is locked, allowing the TrajectoryExecutionManager to grab fresh states while planning is happening.

N.B. it was already possible to plan while a trajectory was executing with MoveGroupExecuteTrajectoryAction. This PR just allows new trajectories to start executing while planning.

Open questions:

- Is the early return in the PlanningSceneMonitor the best way to avoid locking out the CurrentStateMonitor? Or do we want something a little more targeted to the specific callback the PlanningSceneMonitor adds to the CurrentStateMonitor?