
custom-resource-handlers: buffered asset download #29898

@nmussy

Description


Describe the feature

This issue is part feature request, part bug report.

S3 custom resource should be able to handle arbitrarily large asset files efficiently

Use Case

Deploying large asset files with an S3 custom resource causes the deployment Lambda to run out of memory. The function's memory can be increased to compensate for bigger file sizes, but only up to a point, and at additional cost.

Proposed Solution

The AWS CLI s3 commands don't appear to support buffered downloads. As proposed in #29862 (comment), using boto3 with a TransferConfig multipart_threshold should allow the file to be downloaded and written to disk in multiple parts, keeping memory usage bounded.

There might be other places where this change would be necessary, but this is the one that caused the initial issue:

if extract:
    archive = os.path.join(workdir, str(uuid4()))
    logger.info("archive: %s" % archive)
    aws_command("s3", "cp", s3_source_zip, archive)
    logger.info("| extracting archive to: %s\n" % contents_dir)
    logger.info("| markers: %s" % markers)
    extract_and_replace_markers(archive, contents_dir, markers)
else:
    logger.info("| copying archive to: %s\n" % contents_dir)
    aws_command("s3", "cp", s3_source_zip, contents_dir)

Other Information

See #29862 for the original issue

Acknowledgements

  • I may be able to implement this feature request
  • This feature might incur a breaking change

CDK version used

2.134.0

Environment details (OS name and version, etc.)

N/A


Labels

@aws-cdk/aws-s3 (Related to Amazon S3), effort/small (less than a day of effort), feature-request (a feature should be added or improved), p2
