Hello,
I just tried running osrm-datastore, as I would like to make better use of many CPUs for the same amount of RAM by increasing the number of osrm-routed processes.
My old setup runs the osrm-routed process in a Docker container and has been working quite well. For the shared-memory approach I am testing everything in a single Docker container.
The problem is that the osrm-routed process won't read the shared memory. See the output below.
Disclaimer: I program mostly in Python and occasionally C++, often distributing my work over many processes, but I haven't really dealt with shared memory in this way before, so I may be making some silly mistake. I did try to follow the wiki exactly.
root@e748a9c8a8c8:/# free -h
             total       used       free     shared    buffers     cached
Mem:          2.9G       520M       2.4G       1.6M        36M       240M
-/+ buffers/cache:        243M       2.7G
Swap:         6.0G         0B       6.0G
root@e748a9c8a8c8:/# /osrm-backend/build/osrm-datastore /data/osrm-data/europe-latest-extracted.osrm
[info] load names from: "/data/osrm-data/europe-latest-extracted.osrm.names"
[info] name offsets size: 17663
[info] allocating shared memory of 1066637251 bytes
[info] all data loaded
root@e748a9c8a8c8:/# free -h
             total       used       free     shared    buffers     cached
Mem:          2.9G       1.5G       1.4G       1.0G        36M       1.2G
-/+ buffers/cache:        245M       2.7G
Swap:         6.0G         0B       6.0G
root@e748a9c8a8c8:/# /osrm-backend/build/osrm-routed --shared-memory=yes
[info] starting up engines, v4.8.1
[warn] caught exception: Resource temporarily unavailable, code 8
[warn] [exception] Resource temporarily unavailable
root@e748a9c8a8c8:/#
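In case it helps with diagnosis: as far as I understand, osrm-datastore creates System V shared-memory segments, so listing them should show whether the data actually survives after the process exits. A check along these lines (assuming SysV segments; ipcs comes with util-linux):

```shell
# List current System V shared-memory segments; after a successful
# osrm-datastore run there should be large segments present.
ipcs -m

# Show the kernel-wide SysV shared-memory limits for comparison.
ipcs -lm
```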
shmmax seems large enough:
root@e748a9c8a8c8:/# cat /proc/sys/kernel/shmmax
18446744073692774399
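Two other limits that could plausibly produce EAGAIN here (just guesses on my part): shmall caps the total SysV shared memory system-wide in pages, and RLIMIT_MEMLOCK caps how much memory a process may pin with mlock, which I believe osrm-routed may attempt on the shared region. They can be checked like this:

```shell
# Total SysV shared memory allowed system-wide, in pages;
# multiply by the page size to get bytes.
cat /proc/sys/kernel/shmall
getconf PAGE_SIZE

# Maximum locked-in-memory size for this shell, in KiB;
# "unlimited" or a value well above 1 GiB would rule this out.
ulimit -l
```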
The host is running Ubuntu 14.04:
$ uname -a
Linux packer-trusty 3.19.0-25-generic #26~14.04.1-Ubuntu SMP Fri Jul 24 21:16:20 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
And the Docker version is:
$ docker -v
Docker version 1.9.1, build a34a1d5
The Docker container is also running Ubuntu 14.04.
The OSRM version is fca4aeb.