dynamodb: Replication regions are incompatible with resource policies in TableV2 #30705

@VincentPheng

Description

Describe the bug

As stated in the AWS documentation for resource policies, we cannot create a replica and add a resource policy to that replica in the same stack update.

This means that every time we add a new replication region in the CDK, we need some way to omit the resource policy from that new region for the initial deployment. TableV2 lets us customize each replica, including whether it gets a resource policy, but the construct eagerly applies the table-level resource policy to every replica, and we are unable to override it with undefined to exclude the resourcePolicy.

Expected Behavior

When adding a new replica region that specifies an undefined resource policy, TableV2 should not apply the resource policy defined on its construct to that replica.

Current Behavior

TableV2 eagerly adds the resource policy to all replicas, even when it is explicitly set to undefined, and the deployment fails. The only way to add a new replica without a resource policy is to first deploy a stack update that creates the new replica and removes the resource policy from all other replicas, then follow up with a second stack update that re-adds the resource policy to all replicas. Between the first and second update, none of the tables has a resource policy attached.
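Until this is fixed, the two-phase dance can at least be driven from a single code path by gating the policy on a deploy-time flag. A minimal sketch of the gating logic (the `policyEnabled` flag and `resolvePolicy` helper are hypothetical, not part of the CDK):

```typescript
// Hypothetical helper: return the table-level policy only when the
// deploy-time flag is on, so phase 1 (flag off) creates the new replica
// with no policy anywhere, and phase 2 (flag on) re-attaches it.
function resolvePolicy<T>(policyEnabled: boolean, policy: T): T | undefined {
  return policyEnabled ? policy : undefined;
}

const tablePolicyDocument = { Version: '2012-10-17', Statement: [] as unknown[] };

// Phase 1: deploy with the flag off; `resourcePolicy` resolves to undefined.
const phase1 = resolvePolicy(false, tablePolicyDocument);
// Phase 2: deploy again with the flag on; the policy is restored everywhere.
const phase2 = resolvePolicy(true, tablePolicyDocument);

console.log(phase1 === undefined, phase2 === tablePolicyDocument); // true true
```

In a CDK app the flag could come from `this.node.tryGetContext(...)`, letting both phases be driven from the CLI without code changes between deploys.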

Reproduction Steps

Using a TableV2 WITHOUT replicas but WITH the resource policy works:

new TableV2(this, `MyTable-${stage}`, {
      pointInTimeRecovery: true,
      partitionKey: {
        name: 'key',
        type: AttributeType.STRING,
      },
      tableName: 'MyTable',
      resourcePolicy: tablePolicyDocument,
    });

If I then try to deploy WITH a replica (us-east-1) and WITH a resource policy, it fails due to the same stack update error:

new TableV2(this, `MyTable-${stage}`, {
      pointInTimeRecovery: true,
      partitionKey: {
        name: 'key',
        type: AttributeType.STRING,
      },
      tableName: 'MyTable',
      replicas: [{ region: 'us-east-1' }],
      resourcePolicy: tablePolicyDocument,
    });

When I delete the stack, start over, and deploy WITH a replica (us-east-1) but WITHOUT the resource policy, it succeeds:

new TableV2(this, `MyTable-${stage}`, {
      pointInTimeRecovery: true,
      partitionKey: {
        name: 'key',
        type: AttributeType.STRING,
      },
      tableName: 'MyTable',
      replicas: [{ region: 'us-east-1' }]
    });

When I then add the resource policy WITHOUT adding a new replica region, it works and the resource policy is added to both the table in us-west-2 and the replica in us-east-1:

new TableV2(this, `MyTable-${stage}`, {
      pointInTimeRecovery: true,
      partitionKey: {
        name: 'key',
        type: AttributeType.STRING,
      },
      tableName: 'MyTable',
      replicas: [{ region: 'us-east-1' }],
      resourcePolicy: tablePolicyDocument,
    });

But when I then try to add another region (us-east-2), it fails with the same stack update error:

new TableV2(this, `MyTable-${stage}`, {
      pointInTimeRecovery: true,
      partitionKey: {
        name: 'key',
        type: AttributeType.STRING,
      },
      tableName: 'MyTable',
      replicas: [{ region: 'us-east-1' }, { region: 'us-east-2' }],
      resourcePolicy: tablePolicyDocument,
    });

If I try to override resourcePolicy on the new replica so that it has none, TableV2 still adds tablePolicyDocument to it in the CloudFormation template and the deployment fails with the same stack update error:

new TableV2(this, `MyTable-${stage}`, {
      pointInTimeRecovery: true,
      partitionKey: {
        name: 'key',
        type: AttributeType.STRING,
      },
      tableName: 'MyTable',
      replicas: [{ region: 'us-east-1' }, { region: 'us-east-2', resourcePolicy: undefined }],
      resourcePolicy: tablePolicyDocument,
    });

So far I've been able to work around this by using the L1 escape hatch with CfnGlobalTable, manually specifying each replica and selectively adding the resource policy to each one:

    new CfnGlobalTable(this, `MyTable-${stage}`, {
      tableName: 'MyTable',
      attributeDefinitions: [{ attributeName: 'key', attributeType: AttributeType.STRING }],
      keySchema: [{ attributeName: 'key', keyType: 'HASH' }],
      replicas: [
        {
          pointInTimeRecoverySpecification: {
            pointInTimeRecoveryEnabled: true,
          },
          region: 'us-west-2', // main table region
          resourcePolicy: { policyDocument: tablePolicyDocument },
        },
        {
          pointInTimeRecoverySpecification: {
            pointInTimeRecoveryEnabled: true,
          },
          region: 'us-east-1',
        },
      ],
      billingMode: 'PAY_PER_REQUEST',
      streamSpecification: { streamViewType: 'NEW_AND_OLD_IMAGES' },
    });

Possible Solution

The easiest solution is to update this line from:

const resourcePolicy = props.resourcePolicy ?? this.tableOptions.resourcePolicy;

to

const resourcePolicy = props.resourcePolicy;

and require all replicas to manually include a resourcePolicy if one is desired.

Ideally, the construct could allow null: when null is specified on a specific replica, no resourcePolicy is added to that replica, even when one is defined on the TableV2 itself:

new TableV2(this, `MyTable-${stage}`, {
      pointInTimeRecovery: true,
      partitionKey: {
        name: 'key',
        type: AttributeType.STRING,
      },
      tableName: 'MyTable',
      // us-east-2 should not have tablePolicyDocument added in the template
      replicas: [{ region: 'us-east-1' }, { region: 'us-east-2', resourcePolicy: null }], 
      resourcePolicy: tablePolicyDocument,
    });
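For context on why neither undefined nor a bare null could suppress the inherited policy under the current fallback, note that `??` falls through on both nullish values, so a null sentinel has to be checked explicitly before the fallback runs. A minimal sketch of that resolution logic (the `resolveReplicaPolicy` helper is hypothetical, not CDK code):

```typescript
// Hypothetical resolution helper: null means "explicitly no policy",
// undefined means "not specified, inherit the table-level default".
function resolveReplicaPolicy(
  replicaPolicy: string | null | undefined,
  tablePolicy: string,
): string | undefined {
  // The current fallback is just `replicaPolicy ?? tablePolicy`, which
  // falls through on BOTH undefined and null, so the null sentinel must
  // be handled before the `??` runs.
  return replicaPolicy === null ? undefined : replicaPolicy ?? tablePolicy;
}

const tablePolicy = 'table-level-policy';

console.log(resolveReplicaPolicy(undefined, tablePolicy)); // table-level-policy
console.log(resolveReplicaPolicy(null, tablePolicy));      // undefined
console.log(resolveReplicaPolicy('replica', tablePolicy)); // replica
```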

Additional Information/Context

No response

CDK CLI Version

2.136.0

Framework Version

No response

Node.js Version

v18.18.2

OS

Linux

Language

TypeScript

Language Version

No response

Other information

No response

Metadata

Assignees

No one assigned

    Labels

    @aws-cdk/aws-dynamodb (Related to Amazon DynamoDB), bug (This issue is a bug.), effort/medium (Medium work item – several days of effort), p2

    Type

    No type

    Projects

    No projects

    Milestone

    No milestone

    Relationships

    None yet

    Development

    No branches or pull requests