aws-s3 input treats client rate limiting as permanent failure #39114

@faec

Description

The Go AWS client uses an internal rate limiter to throttle requests when errors occur (which can happen because of upstream rate limiting or other transient conditions), returning a ratelimit.QuotaExceededError. However, the Filebeat aws-s3 input treats this error the same as any other: the affected objects are marked as failed and are never retried.

This is especially severe because no retry delay is applied when this error is returned, so the S3 workers keep fetching new objects and marking them as failed as fast as the bucket reader can supply them, resulting in many more missing objects.
