Accessing and logging the number of S3 PUT/GETs by filename

I have several JSON files in an S3 bucket, and I need a monthly count of the number of PUT/GET requests each file receives.

Can these counts be exported as a CSV or accessed via an API? I have looked at CloudWatch and the billing dashboard, and there doesn’t appear to be an option for this in either.

If this feature doesn’t exist, are there any workarounds, such as a Lambda function with a counter?

Solution:

  1. Enable server access logging on the bucket:

s3 > bucket > properties > server access logging > configure target
bucket/prefix

  2. Use Athena to query this data using simple SQL statements. Read more about Athena in the AWS documentation.
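If you prefer not to set up Athena, the downloaded access logs can also be tallied with a small script. Here is a minimal sketch that counts GET/PUT object requests per key. The sample lines below are heavily simplified stand-ins; real S3 server access log entries carry many more fields, so treat the pattern as illustrative:

```python
import re
from collections import Counter

# Matches the operation field (e.g. REST.GET.OBJECT) followed by the
# object key in an S3 server access log line. Simplified for illustration.
LOG_PATTERN = re.compile(r'REST\.(GET|PUT)\.OBJECT\s+(\S+)')

def count_requests(log_lines):
    """Return a Counter keyed by (object_key, verb) for GET/PUT requests."""
    counts = Counter()
    for line in log_lines:
        match = LOG_PATTERN.search(line)
        if match:
            verb, key = match.groups()
            counts[(key, verb)] += 1
    return counts

# Simplified stand-ins for real server access log lines:
sample = [
    'owner mybucket [06/Feb/2019:00:00:38 +0000] 1.2.3.4 requester id1 REST.GET.OBJECT data/a.json "GET /data/a.json HTTP/1.1" 200',
    'owner mybucket [06/Feb/2019:00:01:02 +0000] 1.2.3.4 requester id2 REST.GET.OBJECT data/a.json "GET /data/a.json HTTP/1.1" 200',
    'owner mybucket [06/Feb/2019:00:02:11 +0000] 1.2.3.4 requester id3 REST.PUT.OBJECT data/b.json "PUT /data/b.json HTTP/1.1" 200',
]
counts = count_requests(sample)
```

You could run this monthly over the log prefix and write the Counter out as CSV.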

Deploy Lambda to an S3 bucket folder?

Here is the command to deploy the lambda:

$ sam package --template-file sam.yaml --s3-bucket mybucket --output-template-file packaged.yaml

But can I specify a bucket prefix, so that the artifact gets deployed to a subfolder instead of the root of the bucket?

Solution:

You can provide the bucket prefix using the --s3-prefix parameter:

$ sam package --template-file sam.yaml --s3-bucket mybucket --s3-prefix path/to/file --output-template-file packaged.yaml

Under the hood, sam calls the aws cloudformation package command, and all of that command's options are valid here as well.

AWS Authorization In Code – {"message": "The security token included in the request is invalid." }

I am trying to make an API call in my Lambda function, in the form requests.get(url, auth=auth).

I have the URL of the API endpoint, but I am having issues with the authorization part. I am using the requests-aws4auth package, and I am getting my access key and secret key from boto3 by following these instructions.

import boto3
import requests
from requests_aws4auth import AWS4Auth

session = boto3.Session()
credentials = session.get_credentials().get_frozen_credentials()
auth = AWS4Auth(credentials.access_key, credentials.secret_key,
                'us-west-2', 'execute-api')
brand_info = requests.get(url, auth=auth).json()

However, brand_info returns:

{"message": "The security token included in the request is invalid." }

I’m assuming this is an issue with my access and secret keys, and if that’s the case, am I missing any steps to get the correct access / secret key?

Solution:

You also need to obtain the security token and pass it along. You can obtain it as:

token = credentials.token
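For illustration, here is a minimal sketch of assembling the AWS4Auth arguments from frozen credentials. requests-aws4auth accepts the token via its session_token keyword; the Frozen namedtuple below is just a local stand-in for boto3's frozen credentials object so the sketch is self-contained:

```python
from collections import namedtuple

# Stand-in for botocore's frozen credentials object.
Frozen = namedtuple("Frozen", ["access_key", "secret_key", "token"])

def aws4auth_args(creds, region, service="execute-api"):
    """Build (args, kwargs) for requests_aws4auth.AWS4Auth, including the
    session token that temporary credentials (e.g. a Lambda role) require."""
    args = (creds.access_key, creds.secret_key, region, service)
    kwargs = {"session_token": creds.token} if creds.token else {}
    return args, kwargs

creds = Frozen("AKIA-EXAMPLE", "secret-example", "temporary-session-token")
args, kwargs = aws4auth_args(creds, "us-west-2")
# Then: auth = AWS4Auth(*args, **kwargs)
```

Without the session token, requests signed with temporary credentials are rejected with exactly the "security token included in the request is invalid" message you saw.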

How to Write an AWS Python3 Lambda Function using a zip file on Windows OS

I have looked all over for a tutorial or help on creating a Python 3 Lambda function from a zip file using the Lambda Management Console on Windows, but unfortunately I have had no luck. Here is where I am at…

Following the instructions on the AWS website here: https://docs.aws.amazon.com/lambda/latest/dg/lambda-python-how-to-create-deployment-package.html

  • I create a folder on my desktop called ‘APP’. In that folder I save a file with my Python code, called ‘twilio_test.py’, at the root level of ‘APP’.

My Python code:

from twilio.rest import Client

def lambda_handler(event, context):
    account_sid = '##########################'
    auth_token = '###########################'

    client = Client(account_sid, auth_token)

    message = client.messages.create(
        to='###########',
        from_='###########',
        body="Test")
    return "success"
  • Since I am using the twilio library, I pip install it at the root of my ‘APP’ folder, based on the instructions found in the above link. The instructions say specifically, “Install any libraries using pip. Again, you install these libraries at the root level of the directory.”:

pip install twilio -t \path\to\directory

  • I then zip the contents of ‘APP’ based on the quoted instruction, “Zip the content of the project-dir directory, which is your deployment package. Zip the directory content, not the directory.” This creates a zip file called ‘twilio_test’.

  • I then go to the AWS Lambda Management Console and upload the zip file ‘twilio_test’.

Here is where I am getting confused. What should be the handler?

Have I correctly done everything so far up until this point? If not, what is the best way to go about installing twilio, zipping a file and then using it in AWS lambda?

Although it is inappropriate to say that AWS lambdas are inherently difficult to use, I can say that I am inherently confused.

Solution:

You should set the handler to python_file_name.function_name. So in your case it should be twilio_test.lambda_handler.

From the documentation:

… You specify the function name in the Python code to be used as the handler when you create a Lambda function. For instructions to create a Lambda function using the console, see Create a Simple Lambda Function. In this example, the handler is hello_python.my_handler (file-name.function-name)
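To make the file-name.function-name convention concrete, here is roughly how the handler string maps onto your code. This is an illustrative sketch, not Lambda's actual loader; the module below is a stand-in for your uploaded twilio_test.py:

```python
import types

def resolve_handler(module, handler_string):
    """Split a 'file_name.function_name' handler string and return the
    callable that Lambda would invoke from the given module."""
    file_name, function_name = handler_string.rsplit(".", 1)
    if module.__name__ != file_name:
        raise ValueError(f"handler expects a module named {file_name!r}")
    return getattr(module, function_name)

# Stand-in for the twilio_test.py file at the root of the zip.
twilio_test = types.ModuleType("twilio_test")
twilio_test.lambda_handler = lambda event, context: "success"

handler = resolve_handler(twilio_test, "twilio_test.lambda_handler")
```

If the file is not at the root of the zip, or the names don't match, Lambda reports an import/handler error at invocation time.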

What happens if I have 2 CloudWatch events that trigger the same lambda function at the same time?

I have 1 Lambda function which is configured to submit a given job to AWS Batch via a boto3 call. The lambda function gets triggered by a CloudWatch event, where the CloudWatch event passes the job information as a dictionary.

There are many CloudWatch events, and each is for a different job. It is possible for more than one CloudWatch event to trigger the Lambda function at the same time. What will happen in this case? Will the Lambda function fail to submit some of the jobs, or all of them?

Solution:

By default, an AWS account allows up to 1000 concurrent Lambda executions per region.

So to answer your question: assuming your Lambda is configured to execute on each CloudWatch event, your Lambda function will simply run twice, both invocations at the same time. Each event gets its own invocation, so none of the job submissions will be lost.

Do I give only read-only RDS database permission to an AWS Lambda function when I create a custom role and edit the policy document this way?

I want to create an AWS Lambda function that provides a public API for reading only from an AWS RDS DB instance. When I create the Lambda function, it asks me about permission roles. To be safe, I want to give the code a very strict permission that allows only reading from the DB instance.

I have found this site, which lists a few managed policies, and I could find this one in it:

"AmazonRDSReadOnlyAccess": {
    "Arn": "arn:aws:iam::aws:policy/AmazonRDSReadOnlyAccess",
    "AttachmentCount": 0,
    "CreateDate": "2015-02-06T18:40:53+00:00",
    "DefaultVersionId": "v1",
    "Document": {
        "Statement": [
            {
                "Action": [
                    "rds:Describe*",
                    "rds:ListTagsForResource",
                    "ec2:DescribeAccountAttributes",
                    "ec2:DescribeAvailabilityZones",
                    "ec2:DescribeSecurityGroups",
                    "ec2:DescribeVpcs"
                ],
                "Effect": "Allow",
                "Resource": "*"
            },
            {
                "Action": [
                    "cloudwatch:GetMetricStatistics"
                ],
                "Effect": "Allow",
                "Resource": "*"
            }
        ],
        "Version": "2012-10-17"
    },
    "IsAttachable": true,
    "IsDefaultVersion": true,
    "Path": "/",
    "PolicyId": "ANPAJKTTTYV2IIHKLZ346",
    "PolicyName": "AmazonRDSReadOnlyAccess",
    "UpdateDate": "2015-02-06T18:40:53+00:00",
    "VersionId": "v1"
},

I can see the default policy document when I create a new custom role, and it basically contains “Statement”, “Version”, and “Resource”:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}

And this matches the structure of AmazonRDSReadOnlyAccess’s “Document” block, so I think that block needs to be copy-pasted there to achieve the RDS read-only permission. So what I need to put into the custom role’s policy document is:

{
    "Statement": [
        {
            "Action": [
                "rds:Describe*",
                "rds:ListTagsForResource",
                "ec2:DescribeAccountAttributes",
                "ec2:DescribeAvailabilityZones",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeVpcs"
            ],
            "Effect": "Allow",
            "Resource": "*"
        },
        {
            "Action": [
                "cloudwatch:GetMetricStatistics"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ],
    "Version": "2012-10-17"
}

Is this what I need to do? Am I right?
Does it allow the Lambda function to read only from a certain RDS DB instance? Is there a simpler way to do this?
I ask because I looked at the policy templates in “create new role from template” and couldn’t find anything for this goal.

Solution:

IAM policies, like the one you have shown above, grant or deny access to management of the RDS service only; they do not authorize access to the data inside the database. You can consider the following approach for securing the DB against unauthorized access:

  1. Secure the Lambda execution role – give the Lambda function a least-privilege role for accessing the RDS management service.

  2. Secure the RDS login user – create a user dedicated to this function, and grant it the least privilege required to access the DB and perform the needed operations.

  3. Secure the Lambda via an API – you can use AWS API Gateway to expose the Lambda function, and the API can be further secured against unauthorized access. This step is optional.

What is the code to tell a lambda function to do a redirect?

So just to be clear I have spent several hours googling things and none of these work. This is not a “low effort post”.

This is an example of the code I have been trying, and it doesn’t work. Neither does setting the headers like response.headers = [{Location: "foo"}] or response.headers = [{location: "foo"}], or the other eight ways I’ve tried it.

exports.handler = (event, context, callback) => {
    if (request.uri === "/") {
        var response = {
            statusCode: 301,
            headers: {
                "location": [{
                    key: "Location",
                    value: "foo"
                }]
            },
            body: null
        };
        callback(null, response);
    }
};

I’ve tried the examples from several other links as well, with no luck.

Solution:

You mentioned the link to this example in your question; it should work with Lambda Proxy Integration:

'use strict';

exports.handler = function(event, context, callback) {
    var response = {
        statusCode: 301,
        headers: {
            "Location": "http://example.com"
        },
        body: null
    };
    callback(null, response);
};

source: http://blog.ryangreen.ca/2016/01/04/how-to-http-redirects-with-api-gateway-and-lambda/

Update:

Otherwise, try this example from this page of example functions:

'use strict';

exports.handler = (event, context, callback) => {
    /*
     * Generate HTTP redirect response with 302 status code and Location header.
     */
    const response = {
        status: '302',
        statusDescription: 'Found',
        headers: {
            location: [{
                key: 'Location',
                value: 'http://docs.aws.amazon.com/lambda/latest/dg/lambda-edge.html',
            }],
        },
    };
    callback(null, response);
};

Python: calling my AWS Lambda from code with boto3 gives an error

In my project I have to create a Python script that calls a Lambda function, passing body parameters. I wrote this code:

import boto3
import json
import base64

client = boto3.client('lambda')
d = {'calID': '92dqiss5bg87etcqeeamlmob2g@group.calendar.google.com', 'datada': '2017-12-22T16:40:00+01:00', 'dataa': '2017-12-22T17:55:00+01:00', 'email': 'example@hotmail.com'}
s = json.dumps(d)
s64 = base64.b64encode(s.encode('utf-8'))

response = client.invoke(
    FunctionName='arn:aws:lambda:eu-west-1:13737373737:function:test',
    InvocationType='RequestResponse',
    LogType='None',
    ClientContext='None',
    Payload=s64
)

but when the invoke runs, this error is generated:

InvalidRequestContentException: An error occurred (InvalidRequestContentException) when calling the Invoke operation: Could not parse request body into json: Unrecognized token ‘eyJjYWxJRCI6ICI5MmRxaXNzNWJnODdldGNxZWVhbWxtb2IyZ0Bncm91cC5jYWxlbmRhci5nb29nbGUuY29tIiwgImRhdGFkYSI6ICIyMDE3LTEyLTIyVDE2OjQwOjAwKzAxOjAwIiwgImRhdGFhIjogIjIwMTctMTItMjJUMTc6NTU6MDArMDE6MDAiLCAiZW1haWwiOiAibHVjYV9ncmV6eml4eEBob3RtYWlsLmNvbSJ9’: was expecting (‘true’, ‘false’ or ‘null’)
at [Source: [B@4587098d; line: 1, column: 481]

What does this mean?

Many thanks in advance

Solution:

There are two problems. First, the Payload is base64-encoded, but the Invoke API expects plain JSON, so the service fails to parse the request body (the unrecognized token in the error is your base64 string). Second, this parameter is also invalid:

ClientContext='None',

From the docs:

ClientContext (string) —

Using the ClientContext you can pass client-specific information to
the Lambda function you are invoking. You can then process the client
information in your Lambda function as you choose through the context
variable. For an example of a ClientContext JSON, see PutEvents
in the Amazon Mobile Analytics API Reference and User Guide.

The ClientContext JSON must be base64-encoded and has a maximum size
of 3583 bytes.

You do not need the ClientContext parameter here at all, and the payload should not be base64-encoded. Simply invoke as follows:

response = client.invoke(
    FunctionName='arn:aws:lambda:eu-west-1:13737373737:function:test',
    LogType='None',
    Payload=json.dumps(d)
)
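To make the contrast concrete, here is a small sketch of why the base64 step breaks the call: the Payload string is sent as the request body as-is, and it must parse as JSON. The prepare_payload helper below is illustrative, not part of boto3:

```python
import base64
import json

def prepare_payload(body):
    """Serialize a dict for Lambda's Invoke API. The Payload must be plain
    JSON; do NOT base64-encode it yourself (the SDK and HTTP layer handle
    any transport encoding)."""
    return json.dumps(body)

d = {"calID": "abc@group.calendar.google.com", "email": "example@hotmail.com"}
good = prepare_payload(d)
bad = base64.b64encode(good.encode("utf-8"))  # what the question's code did

# The good payload round-trips as JSON; the base64 string does not parse.
json.loads(good)
```

Passing bad as the Payload reproduces the "Could not parse request body into json" error, because the service tries to parse the base64 text as a JSON document.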

Can't put a new item in an existing DynamoDB table using Lambda, Python 3.6, and Boto3

I’m having trouble putting a new item into my DynamoDB table.
I’m programming directly in AWS Lambda.

import boto3
import json

def lambda_handler(event, context):

    dynamodb = boto3.resource('dynamodb', region_name='eu-central-1')

    dynamodb.putItem{
        "TableName": "myTable",
        "Item": {
            "username": {
                "S": "chicken"
            },
            "fav_food": {
                "S": "ketchup"
            }
        }
    }
    return 0

Solution:

Try this:

table = boto3.resource('dynamodb', region_name=region).Table(table_name)
item = { 
    "username" : "chicken", 
    "fav_food" : "ketchup" 
}
table.put_item(Item=item)

If you post the error you are getting, it will help a bit more.
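The underlying confusion is that the question's item uses DynamoDB's low-level typed format ({"S": ...}), which belongs to the client API, while the resource API's Table.put_item takes plain Python values. For illustration, here is a small helper (not part of boto3, which ships its own TypeSerializer for this) that converts a plain item into the low-level shape, in case you ever use client.put_item instead:

```python
def to_low_level(item):
    """Convert a plain dict of strings/numbers/bools into DynamoDB's
    low-level attribute-value format, as used by the client API.
    Illustrative sketch covering only a few scalar types."""
    typed = {}
    for key, value in item.items():
        if isinstance(value, bool):        # check bool before int
            typed[key] = {"BOOL": value}
        elif isinstance(value, str):
            typed[key] = {"S": value}
        elif isinstance(value, (int, float)):
            typed[key] = {"N": str(value)}  # numbers are sent as strings
        else:
            raise TypeError(f"unsupported type for {key!r}")
    return typed

item = {"username": "chicken", "fav_food": "ketchup"}
typed_item = to_low_level(item)
```

With the resource API shown in the solution you pass item directly; with the client API you would pass typed_item as the Item argument.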