
I have an AWS Lambda Function that accesses an S3 resource by its URL (i.e. https://s3-eu-west-1.amazonaws.com/bucketname/key).

I have added a Bucket Policy on the S3 Bucket that allows my Lambda Function access to the S3 Bucket (via the Lambda Function's IAM Role). This Bucket Policy looks as follows:


{
    "Version": "2012-10-17",
    "Id": "Access control to S3 bucket",
    "Statement": [
        {
            "Sid": "Allow Get and List Requests from IAM Role",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123412341234:role/role-name"
            },
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": [
                "arn:aws:s3:::bucket-name",
                "arn:aws:s3:::bucket-name/*"
            ]
        }
    ]
}

This all works fine when the Lambda Function is activated "automatically" by a trigger. But when I test the Lambda Function manually (via the AWS Console) I get a 403 error.

If I then change the Principal in the S3 Bucket Policy to "*" the 403 exception is resolved.

My guess is that a different Principal is used when manually triggering the Lambda Function, but I’ve no idea what this might be. I’ve tried adding a new policy giving access to my canonical user but this doesn’t work.

Any suggestions?

  • Why are you accessing the Amazon S3 object via its URL? A URL like s3-eu-west-1.amazonaws.com/bucketname/key does not send any identification, so it is an anonymous request. If the object is not public, it should always receive a 403 error. It would be better to access the object via an authenticated API call, or by using an S3 pre-signed URL. Commented Jul 26, 2017 at 23:18
  • I didn't realize I needed to use the "Download" or "Download as" buttons in the AWS S3 Console instead of using the URL at the bottom of the properties page. Been chasing 403 error ghosts. Commented Jul 28, 2017 at 3:07
  • @JohnRotenstein My use case is that I used Nodemailer + SES to send an email with S3 object as an attachment. Perhaps the S3 pre-signed URL might fit here? Commented Jul 28, 2017 at 6:34
  • So, you need to give the Lambda function access to the S3 object so that it can attach it to an email. Yes, you could either pass a signed URL to the Lambda function for the specific S3 object, or you could give the Lambda function access to S3 to always be able to access the object. The second one is more logical unless you are particularly security-sensitive. Can you show the code that is generating the 403 error -- is it making an API call to S3, or trying to retrieve an object via a public URL? Commented Jul 28, 2017 at 7:36
  • @MattD -- It all depends on how the object is being obtained. If access is via an API call using the AWS JavaScript SDK, then the credentials of the Role will be used. But if the object is being fetched via a URL with no credentials being passed, then access would be denied. Commented Jul 29, 2017 at 0:56
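The contrast the comments draw can be sketched like this (a sketch assuming the AWS SDK for JavaScript v2, `aws-sdk`; the helper names `publicUrl` and `getObjectBody` are mine):

```javascript
// A plain URL like this carries no credentials, so S3 treats the
// request as anonymous and returns 403 for non-public objects.
function publicUrl(region, bucket, key) {
  return `https://s3-${region}.amazonaws.com/${bucket}/${key}`;
}

// An SDK call signs the request with the caller's credentials (in a
// Lambda function, the execution role's), so IAM permissions apply.
function getObjectBody(bucket, key) {
  const AWS = require('aws-sdk'); // lazy require so the sketch loads without the SDK installed
  const s3 = new AWS.S3();
  return s3.getObject({ Bucket: bucket, Key: key }).promise()
    .then(data => data.Body); // Body is a Buffer
}
```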

3 Answers


I ran into a similar issue: my policy did not account for the fact that Lambda assumes the role when it executes. I added the assumed-role ARN to the Principal section and everything started working:

        "Principal": {
            "AWS": [
                "arn:aws:sts::123412341234:assumed-role/role-name/function-name"
            ]
        },
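For context, the complete bucket policy statement with the assumed-role principal would look something like this (my reconstruction, reusing the placeholder account ID, role name, and function name from the question):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowGetAndListFromAssumedRole",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:sts::123412341234:assumed-role/role-name/function-name"
            },
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": [
                "arn:aws:s3:::bucket-name",
                "arn:aws:s3:::bucket-name/*"
            ]
        }
    ]
}
```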

Comments


If you wish to give permissions to a particular IAM User/Group/Role, then you should add the permissions directly on that User/Group/Role rather than adding it as a special-case in a Bucket Policy.

This keeps your bucket policies clean, with fewer special cases.

I would recommend:

  • Remove the bucket policy you have displayed
  • Add an inline policy (for one-off situations) to the IAM Role used by your Lambda function

Here is a sample policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "BucketAccess",
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::my-bucket",
                "arn:aws:s3:::my-bucket/*"
            ]
        }
    ]
}

Actually, this is too permissive since it would allow the Lambda function to do anything to the bucket (eg delete the bucket), so you should only grant the permissions that you know are required by the Lambda function.
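For example, if the function only needs to read objects, a tighter version of the inline policy might look like this (bucket name is a placeholder):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "BucketReadAccess",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::my-bucket",
                "arn:aws:s3:::my-bucket/*"
            ]
        }
    ]
}
```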

1 Comment

Thanks for the suggestion. This is indeed a much better way of expressing this. However I still have the same basic problem after implementing this solution. When running the Lambda Function manually I get a 403, while everything works fine when the Lambda Function is triggered by the AWS infrastructure.

As suggested by @JohnRotenstein I removed the bucket policy and instead implemented a pre-signed URL. Everything now works fine.

Example of pre-signed URL generation in Node.js with the AWS SDK v2 (the URL will be valid for 360 seconds):

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

const url = s3.getSignedUrl('getObject', {Bucket: bucket, Key: filename, Expires: 360});

And in Java (valid for 1 hour):

private URL createSignedURL(String s3Bucket, String s3Key) {
    // Requires the AWS SDK for Java v1: com.amazonaws.HttpMethod,
    // com.amazonaws.services.s3.AmazonS3, AmazonS3ClientBuilder,
    // and model.GeneratePresignedUrlRequest
    AmazonS3 s3client = AmazonS3ClientBuilder.defaultClient();

    // Set expiration to 1 hour from now
    java.util.Date expiration = new java.util.Date();
    expiration.setTime(expiration.getTime() + 1000 * 60 * 60);

    // Build the pre-signed URL request
    GeneratePresignedUrlRequest generatePresignedUrlRequest =
                  new GeneratePresignedUrlRequest(s3Bucket, s3Key);
    generatePresignedUrlRequest.setMethod(HttpMethod.GET);
    generatePresignedUrlRequest.setExpiration(expiration);

    // Generate and return the pre-signed URL
    return s3client.generatePresignedUrl(generatePresignedUrlRequest);
}
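Since the original use case was emailing the S3 object as an attachment, here is one way the signed URL could be consumed (a sketch assuming Nodemailer's SES transport; the addresses, filenames, and helper names are placeholders of mine):

```javascript
// Pure helper: build the message options. Nodemailer can stream an
// attachment straight from an http(s) URL via the `path` field.
function buildMailOptions(signedUrl, filename) {
  return {
    from: 'sender@example.com',       // placeholder addresses
    to: 'recipient@example.com',
    subject: 'Your file',
    text: 'The requested file is attached.',
    attachments: [{ filename: filename, path: signedUrl }]
  };
}

// Sending via SES (requires the `nodemailer` and `aws-sdk` packages).
function sendViaSes(mailOptions) {
  const nodemailer = require('nodemailer'); // lazy require: sketch loads without the packages installed
  const AWS = require('aws-sdk');
  const transporter = nodemailer.createTransport({ SES: new AWS.SES({ region: 'eu-west-1' }) });
  return transporter.sendMail(mailOptions);
}
```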

Comments
