
Erratic timings on iotlab-m3 with compression context activated #5405


Description

@miri64

When using a compression context on iotlab-m3 nodes I get abnormally long round-trip times when pinging.

I apply the following patch to add the abcd::/64 prefix as context 0 (alternatively, set up a border router and use the 6ctx command; I just don't have enough iotlab-m3 boards left to do that, and setting up a border router on the IoT-Lab isn't possible right now):

diff --git a/examples/gnrc_networking/main.c b/examples/gnrc_networking/main.c
index 6301f42..7901ee7 100644
--- a/examples/gnrc_networking/main.c
+++ b/examples/gnrc_networking/main.c
@@ -22,6 +22,7 @@

 #include "shell.h"
 #include "msg.h"
+#include "net/gnrc/sixlowpan/ctx.h"

 #define MAIN_QUEUE_SIZE     (8)
 static msg_t _main_msg_queue[MAIN_QUEUE_SIZE];
@@ -43,6 +44,10 @@ int main(void)
     /* start shell */
     puts("All up, running the shell now");
     char line_buf[SHELL_DEFAULT_BUFSIZE];
+    ipv6_addr_t prefix = IPV6_ADDR_UNSPECIFIED;
+    prefix.u8[0] = 0xab;
+    prefix.u8[1] = 0xcd;
+    gnrc_sixlowpan_ctx_update(0, &prefix, 64, UINT16_MAX, true);
     shell_run(shell_commands, line_buf, SHELL_DEFAULT_BUFSIZE);

     /* should be never reached */

And configuring this prefix to the nodes:

m3-7;ifconfig 7 add abcd::3432:4833:46d4:7c2a
m3-9;ifconfig 7 add abcd::3432:4833:46d5:9d12
m3-7;fibroute add :: via fe80::3432:4833:46d5:9d12
m3-9;fibroute add :: via fe80::3432:4833:46d4:7c2a
m3-7;ping6 abcd::3432:4833:46d5:9d12

I get the following (the same happens with UDP; it is just not as easy to see as with pings):

1461845740.202873;m3-7;ping timeout
1461845740.204380;m3-7;dropping additional response packet (probably caused by duplicates)
1461845742.236271;m3-7;12 bytes from abcd::3432:4833:46d5:9d12: id=83 seq=2 hop limit=64 time = 1029.852 ms
1461845744.025064;m3-7;12 bytes from abcd::3432:4833:46d5:9d12: id=83 seq=3 hop limit=64 time = 787.174 ms
1461845744.025973;m3-7;--- abcd::3432:4833:46d5:9d12 ping statistics ---
1461845744.027849;m3-7;3 packets transmitted, 2 received, 34% packet loss, time 5.06351186 s
1461845744.028849;m3-7;rtt min/avg/max = 787.174/605.675/1029.852 ms

(the first two lines mean that the ping reply arrived only after the timeout).

Without a compression context (normal release version) I get these timings:

1461845505.011304;m3-7;12 bytes from abcd::3432:4833:46d5:9d12: id=83 seq=1 hop limit=64 time = 10.415 ms
1461845506.022415;m3-7;12 bytes from abcd::3432:4833:46d5:9d12: id=83 seq=2 hop limit=64 time = 8.463 ms
1461845507.035214;m3-7;12 bytes from abcd::3432:4833:46d5:9d12: id=83 seq=3 hop limit=64 time = 9.131 ms
1461845507.035444;m3-7;--- abcd::3432:4833:46d5:9d12 ping statistics ---
1461845507.035702;m3-7;3 packets transmitted, 3 received, 0% packet loss, time 2.0633950 s
1461845507.036245;m3-7;rtt min/avg/max = 8.463/9.336/10.415 ms

which are comparable to link-local pings, so I would call them normal.

I wasn't able to reproduce this on samr21-xpro, so it might be a timer-related issue, or maybe the Cortex-M3 optimization does something crazy that makes a simple array look-up (which the context lookup basically is) take incredibly long.

Metadata

Labels: Area: network, Area: timers, Type: bug
