
Policy Failures (CQ IDs), Backwards Compatibility, and other strange behavior #161

@jbertman

Description


New Policy Command

Is there a way to run policies purely on-disk (without requiring a repository)? Previously I was storing policies centrally and assessing them on-demand, like cloudquery policy --path policies/custom.yml --config aws_config.hcl. This doesn't appear to be possible with the new paradigm.

Backwards Compat

Are policies now JSON and HCL only? Or is there a way to continue the use of yaml?

Run Failures

The newest versions of CQ, cq-provider-aws, and cq-policy-core seem to produce errors during queries:

cloudquery policy run cq-policy-core aws/cis-v1.20 --config aws_config.hcl
Starting policy run...
❌ Failed to run policy: failed to run policies: cis-v1.20 - aws_log_metric_filter_and_alarm: ERROR: column aws_cloudtrail_trails.id does not exist (SQLSTATE 42703).

The policy runner is of course correct, as all of the keys (primary and foreign) have been changed to cq_id or similar:

postgres=# \d+ aws_cloudtrail_trails
                                                     Table "public.aws_cloudtrail_trails"
                 Column                 |            Type             | Collation | Nullable | Default | Storage  | Stats target | Description
----------------------------------------+-----------------------------+-----------+----------+---------+----------+--------------+-------------
 cq_id                                  | uuid                        |           |          |         | plain    |              |
 meta                                   | jsonb                       |           |          |         | extended |              |
 account_id                             | text                        |           | not null |         | extended |              |
 region                                 | text                        |           |          |         | extended |              |
 cloudwatch_logs_log_group_name         | text                        |           |          |         | extended |              |
 is_logging                             | boolean                     |           |          |         | plain    |              |
 latest_cloud_watch_logs_delivery_error | text                        |           |          |         | extended |              |
 latest_cloud_watch_logs_delivery_time  | timestamp without time zone |           |          |         | plain    |              |
 latest_delivery_error                  | text                        |           |          |         | extended |              |
 latest_delivery_time                   | timestamp without time zone |           |          |         | plain    |              |
 latest_digest_delivery_error           | text                        |           |          |         | extended |              |
 latest_digest_delivery_time            | timestamp without time zone |           |          |         | plain    |              |
 latest_notification_error              | text                        |           |          |         | extended |              |
 latest_notification_time               | timestamp without time zone |           |          |         | plain    |              |
 start_logging_time                     | timestamp without time zone |           |          |         | plain    |              |
 stop_logging_time                      | timestamp without time zone |           |          |         | plain    |              |
 cloud_watch_logs_log_group_arn         | text                        |           |          |         | extended |              |
 cloud_watch_logs_role_arn              | text                        |           |          |         | extended |              |
 has_custom_event_selectors             | boolean                     |           |          |         | plain    |              |
 has_insight_selectors                  | boolean                     |           |          |         | plain    |              |
 home_region                            | text                        |           |          |         | extended |              |
 include_global_service_events          | boolean                     |           |          |         | plain    |              |
 is_multi_region_trail                  | boolean                     |           |          |         | plain    |              |
 is_organization_trail                  | boolean                     |           |          |         | plain    |              |
 kms_key_id                             | text                        |           |          |         | extended |              |
 log_file_validation_enabled            | boolean                     |           |          |         | plain    |              |
 name                                   | text                        |           |          |         | extended |              |
 s3_bucket_name                         | text                        |           |          |         | extended |              |
 s3_key_prefix                          | text                        |           |          |         | extended |              |
 sns_topic_arn                          | text                        |           |          |         | extended |              |
 sns_topic_name                         | text                        |           |          |         | extended |              |
 arn                                    | text                        |           | not null |         | extended |              |
Indexes:
    "aws_cloudtrail_trails_pk" PRIMARY KEY, btree (account_id, arn)
    "aws_cloudtrail_trails_cq_id_key" UNIQUE CONSTRAINT, btree (cq_id)
Referenced by:
    TABLE "aws_cloudtrail_trail_event_selectors" CONSTRAINT "aws_cloudtrail_trail_event_selectors_trail_cq_id_fkey" FOREIGN KEY (trail_cq_id) REFERENCES aws_cloudtrail_trails(cq_id) ON DELETE CASCADE
Access method: heap

This breaks all of the policies that have been written thus far... is the answer to go and update all the policies? Is there a strong reason the cq_ prefix was introduced?
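For what it's worth, the update looks mechanical: any policy query that referenced the old id column (or a *_id foreign key) would need to use cq_id / *_cq_id instead. A hypothetical before/after sketch, assuming a CIS check that joins trails to their event selectors (the column names match the foreign key constraint shown in the \d+ output above):

```sql
-- Before (pre-rename schema; hypothetical policy fragment)
SELECT t.name
FROM aws_cloudtrail_trails t
JOIN aws_cloudtrail_trail_event_selectors s
  ON s.trail_id = t.id;

-- After: the surrogate key is now cq_id, and the child table
-- references it via trail_cq_id (per the FK constraint shown above)
SELECT t.name
FROM aws_cloudtrail_trails t
JOIN aws_cloudtrail_trail_event_selectors s
  ON s.trail_cq_id = t.cq_id;
```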

Strange Fetch Behavior

I'm unsure how to reproduce this behavior, but with the latest cloudquery core and aws provider, I can't seem to complete a fetch unless I specify that the console log is enabled. Possibly worth noting that I'm running via aws-vault. Running:

aws-vault exec some_profile --no-session -- cloudquery fetch --config aws_config.hcl

stalls out at around 85 of 89 resources (⌛cq-provider-aws@latest fetching 7m [--------------------------------------------------------------| Finished Resources: 85/89), while running it like:

aws-vault exec some_profile --no-session -- cloudquery fetch --config aws_config.hcl --enable-console-log

runs and finishes correctly. Could a small delay introduced by the logging be causing this behavior?
