crowdstrike: add support for host info enrichment #8474
Conversation
Pinging @elastic/security-external-integrations (Team:Security-External Integrations)
Suggested change:

    - description: Uses data in aidmaster to add host information to events. The aidmaster blob must string "aidmaster" in its path and the FDR Notification Parsing Script must sort events so that aidmaster events appear first in the stream.
    + description: Uses data in aidmaster to add host information to events. The aidmaster blob must contain the string "aidmaster" in its path and the FDR Notification Parsing Script must sort events so that aidmaster events appear first in the stream.
Is the field log.file.path going to exist for S3 input in this case?
andrewkroh
left a comment
Have you manually tested this using the aws-s3 input?
The fdr data stream offers both s3 and logfile inputs. Limitations in elastic-package prevent us from having two system test infrastructure deployments, so only the logfile input is tested.
If we have to sacrifice the logfile testing to get aws-s3 system tests, then let's do it. The aws-s3 input is what most users are running with.
This will be a weak ordering guarantee because the scan can start concurrent readers. You could set harvester_limit: 1 to control concurrency. I don't think the log file input is used in production by anyone, so this matters very little.
Thanks, I was worried about that. We can't limit though; making the limit 1 results in a deadlock.
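For reference, the suggested concurrency limit would be a one-line addition to the logfile input configuration. A minimal sketch, assuming a standard log input; the paths and surrounding keys here are illustrative, not taken from the actual manifest (and note the setting was ultimately rejected because of the deadlock described above):

```yaml
# Hypothetical logfile input config; harvester_limit is the suggested change.
- type: log
  paths:
    - /var/log/fdr/*.json
  # Limit the input to a single concurrent harvester so files are read in
  # scan order (aidmaster events first). As noted above, a limit of 1
  # deadlocked in testing, so this was not adopted.
  harvester_limit: 1
```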
Error: error running package system tests: could not complete test run: could not add data stream config to policy: could not add package to policy; API status code = 400; response body = {"statusCode":400,"error":"Bad Request","message":"Package policy is invalid: inputs.aws-s3.streams.crowdstrike.fdr.vars.fdr_parsing_script: Invalid YAML format"}
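One common cause of this kind of "Invalid YAML format" failure is embedding a multiline script value without YAML block-scalar syntax. A hedged sketch of the distinction (the script body is a placeholder, not the real FDR parsing script):

```yaml
# Invalid: the colon inside the unquoted scalar breaks YAML parsing.
fdr_parsing_script: def parse(evt): return evt

# Valid: a literal block scalar preserves the script body verbatim.
fdr_parsing_script: |
  def parse(evt): return evt
```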
andrewkroh
left a comment
LGTM. Thanks for getting the system testing setup for the aws-s3 input.
I left some minor suggestions related to the test config.
    secret_access_key: "{{AWS_SECRET_ACCESS_KEY}}"
    session_token: "{{AWS_SESSION_TOKEN}}"
    queue_url: "{{TF_OUTPUT_queue_url}}"
    preserve_original_event: true
I think we should be able to know the expected number of events, so can we add the following?

    assert:
      hit_count: N
    input: aws-s3
    wait_for_data_timeout: 20m
This wait period seems quite high. Once Terraform completes its provisioning, an SQS notification should be sent after about 1 minute. I think the default of 10m should be fine.
The 20m value appears to be commonly used, but agreed. Will change.
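Taken together, the two suggestions above would leave the system test config looking roughly like this (a sketch; N is a placeholder for the expected event count, and any keys beyond these are omitted):

```yaml
input: aws-s3
# wait_for_data_timeout removed; the 10m default applies.
assert:
  hit_count: N
```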
Package crowdstrike - 1.26.0 containing this change is available at https://epr.elastic.co/search?package=crowdstrike
Proposed commit message
See title.
Checklist
- changelog.yml file.

Author's Checklist
- The ~~logfile~~ s3 input is tested.

How to test this PR locally
Make the following change:
Then:
(gsed is GNU sed, so if on macOS you will need to install that; if on Linux, s/gsed/sed/ in the command above.)

Then test with elastic-package as normal.

For convenience (because the AWS setup was not trivial), the TF graph is here.
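The local test flow above can be sketched as the following command sequence (hedged: exact subcommands and flags depend on your elastic-package version and assume AWS credentials are already exported in the environment):

```shell
# Hypothetical invocation from the integrations repo checkout.
cd packages/crowdstrike
elastic-package build
elastic-package stack up -d
# Run system tests for the fdr data stream only.
elastic-package test system --data-streams fdr -v
elastic-package stack down
```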
Related issues
Screenshots