This repository was archived by the owner on May 31, 2025. It is now read-only.
Currently, `rosbag filter` deserializes every message in a bag, in a single-threaded process, with a fresh `eval` each time. This is all enormously expensive and slow (the relevant code is in ros_comm/tools/rosbag/src/rosbag/rosbag_main.py, lines 374 to 383 at 29053c4).
Some possibilities for how to improve this situation:
Lazy deserialization: instead of deserializing every message up front, pass a proxy object that uses `__getattr__` to deserialize on demand and pass attribute accesses through. This would avoid unnecessary deserialization for the use cases that filter only on topic name or timestamp.
Use multiprocessing to parallelize the `eval` invocations. It would be worth measuring how much there is to gain here, particularly given the added cost of managing a work queue to keep the output in sequence.
Wrap the `eval` in a lambda, so that the filter expression is compiled just once rather than on each use: https://stackoverflow.com/a/12467755/109517
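For example (the expression string here is just an illustration):

```python
expr = "topic == '/imu' and t >= 100"

# Today's pattern: eval the source string for every message, which
# re-parses and re-compiles the expression each time.
def check_fresh(topic, msg, t):
    return eval(expr)

# Compile-once pattern: eval a lambda a single time up front, then call
# the resulting function per message with no further compilation.
check_compiled = eval("lambda topic, msg, t: " + expr)
```

Both give the same answers; the second just moves the compilation cost out of the per-message loop.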