[aws_logs] fix number_of_workers to be always passed in hbs template #14416
mykola-elastic merged 5 commits into elastic:main
Conversation
@elastic/obs-ds-hosted-services do you need this fix? Issue: #14403
> You can enable SQS notification method by setting `queue_url` configuration value.
> You can enable S3 bucket list polling method by setting `bucket_arn`, `access_point_arn`
> or `non_aws_bucket_name` configuration values and `number_of_workers` value.
Now we don't make any reference to the number of workers. You can add it back in the doc.
Thanks @gizas, I've restored that and also mentioned it in the sentence about SQS. Good?
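For readers skimming the doc wording above, a minimal sketch of the two methods it describes, with hypothetical values for the queue URL and bucket ARN:

```yaml
# SQS notification method: set queue_url (value below is hypothetical)
queue_url: https://sqs.us-east-1.amazonaws.com/123456789012/my-queue
number_of_workers: 5

# S3 bucket list polling method: set one of bucket_arn, access_point_arn,
# or non_aws_bucket_name (value below is hypothetical)
bucket_arn: arn:aws:s3:::my-bucket
number_of_workers: 5
```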
> @elastic/obs-ds-hosted-services do you need this fix? Issue: #14403
@mykola-elastic sorry, I'm confused: do you mean to mark #14403 as done when this PR is merged? Personally, I have not verified it e2e.
@gizas what would be the correct process for this?
Should e2e testing be performed and described in this PR? I guess by showing proof that the number of workers has actually changed.
> Changing the setting Number of Workers does not have any effect; the integration still starts using the default number of 5 workers for the SQS input.

If you can showcase that we have a new value after the change, we are OK.
@gizas I did the following:
- Set `number_of_workers` to 6 for S3 and 7 for CloudWatch.
- Set `queue_url` for S3 and `log_group_arn` for CloudWatch.
The policy before this PR (note that there is no `number_of_workers` here):
id: elastic-agent-managed-ep
revision: 3
outputs:
default:
type: elasticsearch
hosts:
- https://elasticsearch:9200
ssl.ca_trusted_fingerprint: 57E00CA616A85926934E861AC4731978FCA0E33BB519DCB8AEBD61CCF4F7F7AC
preset: latency
fleet:
hosts:
- https://fleet-server:8220
output_permissions:
default:
_elastic_agent_monitoring:
indices: []
_elastic_agent_checks:
cluster:
- monitor
39a03b08-6f90-4959-8c4c-32b8366d1e2a:
indices:
- names:
- logs-*-*
privileges:
- auto_configure
- create_doc
- names:
- logs-*-*
privileges:
- auto_configure
- create_doc
agent:
download:
sourceURI: https://artifacts.elastic.co/downloads/
monitoring:
enabled: false
logs: false
metrics: false
traces: false
features: {}
protection:
enabled: false
uninstall_token_hash: 9G7sLqRFHP1Joeq0VAX6pZe1q8Mo4BTIbhDHQD/R/ow=
signing_key: >-
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEZ0xeikdK85bTrAq6JTQMmyAPO9dQIl5CxKtzMuk7mitXZFxm8tZwMUf76WHG+1zRmwpNzgkWUnDSxTTAaPbj8Q==
inputs:
- id: aws-s3-aws_logs-39a03b08-6f90-4959-8c4c-32b8366d1e2a
name: aws_logs-1
revision: 2
type: aws-s3
use_output: default
meta:
package:
name: aws_logs
version: 1.8.0
data_stream:
namespace: default
package_policy_id: 39a03b08-6f90-4959-8c4c-32b8366d1e2a
streams:
- id: aws-s3-aws_logs.generic-39a03b08-6f90-4959-8c4c-32b8366d1e2a
data_stream:
dataset: aws_logs.generic
queue_url: http://test
max_bytes: 10MiB
max_number_of_messages: 5
sqs.max_receive_count: 5
sqs.wait_time: 20s
file_selectors: null
tags:
- forwarded
publisher_pipeline.disable_host: true
parsers: null
- id: aws-cloudwatch-aws_logs-39a03b08-6f90-4959-8c4c-32b8366d1e2a
name: aws_logs-1
revision: 2
type: aws-cloudwatch
use_output: default
meta:
package:
name: aws_logs
version: 1.8.0
data_stream:
namespace: default
package_policy_id: 39a03b08-6f90-4959-8c4c-32b8366d1e2a
streams:
- id: aws-cloudwatch-aws_logs.generic-39a03b08-6f90-4959-8c4c-32b8366d1e2a
data_stream:
dataset: aws_logs.generic
log_group_arn: test-log-group-arn
start_position: beginning
scan_frequency: 1m
api_sleep: 200ms
tags:
- forwarded
publisher_pipeline.disable_host: true
signed:
data: >-
eyJpZCI6ImVsYXN0aWMtYWdlbnQtbWFuYWdlZC1lcCIsImFnZW50Ijp7ImZlYXR1cmVzIjp7fSwicHJvdGVjdGlvbiI6eyJlbmFibGVkIjpmYWxzZSwidW5pbnN0YWxsX3Rva2VuX2hhc2giOiI5RzdzTHFSRkhQMUpvZXEwVkFYNnBaZTFxOE1vNEJUSWJoREhRRC9SL293PSIsInNpZ25pbmdfa2V5IjoiTUZrd0V3WUhLb1pJemowQ0FRWUlLb1pJemowREFRY0RRZ0FFWjB4ZWlrZEs4NWJUckFxNkpUUU1teUFQTzlkUUlsNUN4S3R6TXVrN21pdFhaRnhtOHRad01VZjc2V0hHKzF6Um13cE56Z2tXVW5EU3hUVEFhUGJqOFE9PSJ9fSwiaW5wdXRzIjpbeyJpZCI6ImF3cy1zMy1hd3NfbG9ncy0zOWEwM2IwOC02ZjkwLTQ5NTktOGM0Yy0zMmI4MzY2ZDFlMmEiLCJuYW1lIjoiYXdzX2xvZ3MtMSIsInJldmlzaW9uIjoyLCJ0eXBlIjoiYXdzLXMzIn0seyJpZCI6ImF3cy1jbG91ZHdhdGNoLWF3c19sb2dzLTM5YTAzYjA4LTZmOTAtNDk1OS04YzRjLTMyYjgzNjZkMWUyYSIsIm5hbWUiOiJhd3NfbG9ncy0xIiwicmV2aXNpb24iOjIsInR5cGUiOiJhd3MtY2xvdWR3YXRjaCJ9XX0=
signature: >-
MEUCIQC47UBGKnazr/w0d+MuYxmvqHAEq0C/F21aslsCPYU2RgIgCY8hNc4bPFVT3rd2dtQ9hmSy9cn1JLQdK4KGzDT2nic=
secret_references: []
namespaces: []

The policy after this PR (note that `number_of_workers` now appears in the policy):
id: elastic-agent-managed-ep
revision: 2
outputs:
default:
type: elasticsearch
hosts:
- https://elasticsearch:9200
ssl.ca_trusted_fingerprint: 57E00CA616A85926934E861AC4731978FCA0E33BB519DCB8AEBD61CCF4F7F7AC
preset: latency
fleet:
hosts:
- https://fleet-server:8220
output_permissions:
default:
_elastic_agent_monitoring:
indices: []
_elastic_agent_checks:
cluster:
- monitor
a3f1c02f-e771-4388-9ce7-96aeb6139a65:
indices:
- names:
- logs-*-*
privileges:
- auto_configure
- create_doc
- names:
- logs-*-*
privileges:
- auto_configure
- create_doc
agent:
download:
sourceURI: https://artifacts.elastic.co/downloads/
monitoring:
enabled: false
logs: false
metrics: false
traces: false
features: {}
protection:
enabled: false
uninstall_token_hash: myV+/PC9h7tRznolRUvKDaTX0ppPn8sOuq7FX2oXKso=
signing_key: >-
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEJspyjkf1F7Vu1HgYTa268flRF4l/IXtOIjvIX+A44OkkOn4FG8XIhnQKFcovgez0oqX96Gt1twRUyR6i4WayHQ==
inputs:
- id: aws-s3-aws_logs-a3f1c02f-e771-4388-9ce7-96aeb6139a65
name: aws_logs-1
revision: 1
type: aws-s3
use_output: default
meta:
package:
name: aws_logs
version: 1.8.1
data_stream:
namespace: default
package_policy_id: a3f1c02f-e771-4388-9ce7-96aeb6139a65
streams:
- id: aws-s3-aws_logs.generic-a3f1c02f-e771-4388-9ce7-96aeb6139a65
data_stream:
dataset: aws_logs.generic
queue_url: http://test
number_of_workers: 6
max_bytes: 10MiB
max_number_of_messages: 5
sqs.max_receive_count: 5
sqs.wait_time: 20s
file_selectors: null
tags:
- forwarded
publisher_pipeline.disable_host: true
parsers: null
- id: aws-cloudwatch-aws_logs-a3f1c02f-e771-4388-9ce7-96aeb6139a65
name: aws_logs-1
revision: 1
type: aws-cloudwatch
use_output: default
meta:
package:
name: aws_logs
version: 1.8.1
data_stream:
namespace: default
package_policy_id: a3f1c02f-e771-4388-9ce7-96aeb6139a65
streams:
- id: aws-cloudwatch-aws_logs.generic-a3f1c02f-e771-4388-9ce7-96aeb6139a65
data_stream:
dataset: aws_logs.generic
log_group_arn: test-log-group-arn
number_of_workers: 7
start_position: beginning
scan_frequency: 1m
api_sleep: 200ms
tags:
- forwarded
publisher_pipeline.disable_host: true
signed:
data: >-
eyJpZCI6ImVsYXN0aWMtYWdlbnQtbWFuYWdlZC1lcCIsImFnZW50Ijp7ImZlYXR1cmVzIjp7fSwicHJvdGVjdGlvbiI6eyJlbmFibGVkIjpmYWxzZSwidW5pbnN0YWxsX3Rva2VuX2hhc2giOiJteVYrL1BDOWg3dFJ6bm9sUlV2S0RhVFgwcHBQbjhzT3VxN0ZYMm9YS3NvPSIsInNpZ25pbmdfa2V5IjoiTUZrd0V3WUhLb1pJemowQ0FRWUlLb1pJemowREFRY0RRZ0FFSnNweWprZjFGN1Z1MUhnWVRhMjY4ZmxSRjRsL0lYdE9JanZJWCtBNDRPa2tPbjRGRzhYSWhuUUtGY292Z2V6MG9xWDk2R3QxdHdSVXlSNmk0V2F5SFE9PSJ9fSwiaW5wdXRzIjpbeyJpZCI6ImF3cy1zMy1hd3NfbG9ncy1hM2YxYzAyZi1lNzcxLTQzODgtOWNlNy05NmFlYjYxMzlhNjUiLCJuYW1lIjoiYXdzX2xvZ3MtMSIsInJldmlzaW9uIjoxLCJ0eXBlIjoiYXdzLXMzIn0seyJpZCI6ImF3cy1jbG91ZHdhdGNoLWF3c19sb2dzLWEzZjFjMDJmLWU3NzEtNDM4OC05Y2U3LTk2YWViNjEzOWE2NSIsIm5hbWUiOiJhd3NfbG9ncy0xIiwicmV2aXNpb24iOjEsInR5cGUiOiJhd3MtY2xvdWR3YXRjaCJ9XX0=
signature: >-
MEUCIQCVoFSji38/z/VOOsK15OlM6XMSm7sKkbK0lvZVJPw0/QIgRoRq/esTIMI4hyqKxzo21huIH2Rt/SsSsHsUeiypW4c=
secret_references: []
namespaces: []
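For context, the kind of template change that produces this before/after difference can be sketched as follows. This is an illustration under assumed variable names, not the actual diff from this PR:

```handlebars
{{!-- Before (sketch): number_of_workers only rendered under another
      condition, so it could be dropped from the generated policy --}}
{{#if bucket_arn}}
number_of_workers: {{number_of_workers}}
{{/if}}

{{!-- After (sketch): rendered whenever a value is set, regardless of
      which collection method is configured --}}
{{#if number_of_workers}}
number_of_workers: {{number_of_workers}}
{{/if}}
```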
💚 Build Succeeded
Package aws_logs - 1.8.1 containing this change is available at https://epr.elastic.co/package/aws_logs/1.8.1/




For some reason the fix was not applied to the `aws_logs` package while doing the bulk fix for `number_of_workers` in PR #13350.

Proposed commit message
See title.
Checklist
- I have verified that all data streams collect metrics or logs.
- I have added an entry to my package's `changelog.yml` file.
- I have verified that any added dashboard complies with Kibana's Dashboard good practices.

Author's Checklist
How to test this PR locally
Related issues
Screenshots