The Go AWS client uses an internal rate limiter to throttle requests when there are errors (which can happen because of upstream rate limiting or other ephemeral states), returning a ratelimit.QuotaExceededError. However, the Filebeat aws-s3 input treats this error the same as any other, so the objects are marked as having an error and will never be retried.
This is especially severe because no retry delay is applied when this error is returned: the S3 workers keep attempting to read new objects and marking them as failed as fast as the bucket reader can provide them, greatly increasing the number of missing objects.