Conversation
Commit 2553c2e2a300809bb907ad5c0e32965debf3ae2c does not contain "Signed-off-by". Please follow the instructions provided in https://docs.cilium.io/en/stable/contributing/development/contributing_guide/#developer-s-certificate-of-origin
joestringer
left a comment
Cool stuff. I have just a few boring process / maintenance type bits of feedback. I'll leave the NAT test review up to someone with more familiarity in that area.
Weil0ng
left a comment
Would be great to add a README under bpf/mock/ to describe the steps to create/run unit tests :)
Not too happy with this huge churn of generated boilerplate code just for the mocking infra. Is there any way this can be kept out of the tree and auto-generated on the fly, so that we only have the actual hand-written test code in the repo? (I presume the goal would be to run this as a GH action?)
I think this makes sense. Especially now that we have a container to run all the cmock commands, we should probably not keep the cmock code and the generated code in the tree (they both depend on the cmock version, which will be baked into the container). Thoughts? @xinyuannn
I addressed this issue by putting duplicated files into a container and excluding the generated files from the tree. But I suggest we keep …
This shouldn't happen if we always run the sed commands as part of the checks for validating PRs. We wouldn't accept a PR that broke the sed command; the contributor would have to fix it. As far as I understand, this should now be triggered by the regular workflows on PRs, right?
You are right. That's why we need to keep …
We could check for generation success instead of comparing the files. Ideally we'd always operate on newly generated headers; what matters is whether the generation can go through. If it cannot, then the PR author should fix that (maybe by modifying the sed scripts).
Actually, it is hard to check for generation success, since the sed commands might not raise an error even if the pruning goes wrong.
If the sed command generates the output headers, the test successfully builds against those headers, and the test runs and passes, is that good enough?
I think so. I just discussed with @Weil0ng how to check for generation success. Maybe it is good enough if the mock generation based on the headers succeeds?
All comments addressed, ptal!
joestringer
left a comment
LGTM at a high level. I didn't try out the latest iteration but I did check the Travis output from the previous one and it looked like the test ran successfully there. So we should be able to discover any failures or regressions as they come up. I also didn't look at the details of the nat test.
test-me-please
hand-written files:
- bpf/mock/bpf.yaml: specifies CMock configuration options
- bpf/mock/Makefile: contains rules to prepare for unit testing
- bpf/mock/Dockerfile: contains commands to assemble the docker image
- bpf/mock/helpers.sed: sed script to generate the header for helpers.h
- bpf/mock/helpers_skb.sed: sed script to generate the header for helpers_skb.h
- bpf/mock/helpers_xdp.sed: sed script to generate the header for helpers_xdp.h
- bpf/mock/mock_helpers.sh: script to generate mock libraries for helper functions
- bpf/mock/mock_customized.sh: script to generate mock libraries for customized functions
- bpf/mock/README.md

modified files:
- Makefile: add check_helper_headers in bpf/mock/Makefile to the lint target

Signed-off-by: Xinyuan Zhang <zhangxinyuan@google.com>
hand-written files:
- bpf/mock/conntrack_stub.h: header of functions to be mocked in conntrack.h
- bpf/tests/nat_test.h: sample unit test program for some functions in nat.h
- test/bpf/nat-test.c: contains the main function to run the test functions in nat_test.h

modified files:
- test/bpf/Makefile: add a target for the customized test

Signed-off-by: Xinyuan Zhang <zhangxinyuan@google.com>
test-me-please
Job 'Cilium-PR-K8s-1.16-net-next' hit: #17060 (88.36% similarity)
borkmann
left a comment
Thanks, are you planning to add more follow-ups with further test coverage? Would be great to see.
Thanks, Daniel. Yes, we are going to help address some existing problems with the framework. We are also planning to add more content to the framework, as stated in #16951.
The runtime (test-runtime) failed: #17074. Please take a look.
edit: I just noticed it was caused by multiple bpf mount points, so I've reopened it.
@aanm I think this PR did not cause the failure.
test-runtime
The only failures are known unrelated flakes in the master tree. Good to merge.
Considering that eBPF programs are in essence C programs, we propose to leverage existing C-native testing infrastructure to dramatically improve the eBPF testing experience and enhance productivity. To this end, we use CMock to generate mock interfaces for kernel-based helper functions as well as customized functions in Cilium. By including the generated mock module, eBPF programs can be easily and conveniently unit-tested as normal C programs.
We demonstrate the approach by performing a sample unit test on functions in lib/nat.h. The test functions are written in nat_test.h. To build the demo, run "make -C test/bpf nat-test" to produce an executable file.
As we briefly introduced this work in the Cilium dev sync session on July 7, we would like to get feedback, especially on which functions in Cilium need this coverage.
Signed-off-by: Xinyuan Zhang <zhangxinyuan@google.com>