
TL;DR

How can I configure my provider in Terraform so that it uses Docker to mount my code with the correct function directory when executing Lambda functions?

I am trying to run a simple Lambda function that listens for DynamoDB stream events. My code itself works properly, but when deploying with Terraform, the function executor cannot find the function to run. To debug, I set DEBUG=true in my LocalStack container. I first tested my code with the Serverless Framework, which works as expected.
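For reference, a LocalStack docker-compose service with debug logging and the Docker-based Lambda executor enabled might look like this (a sketch, not my exact file; the service list and image tag are assumptions):

```yaml
# docker-compose.yml (sketch): LocalStack with DEBUG logging and the
# Docker Lambda executor, which produces the lambda_executors log lines below.
version: "3.8"
services:
  localstack:
    image: localstack/localstack
    ports:
      - "4566:4566"
    environment:
      - SERVICES=lambda,dynamodb,dynamodbstreams,s3
      - DEBUG=true                  # verbose executor logging
      - LAMBDA_EXECUTOR=docker      # run functions in lambci/lambda containers
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # let LocalStack spawn containers
```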

The logs from a successful function execution via Serverless show:

localstack    | 2021-03-17T13:14:53:INFO:localstack.services.awslambda.lambda_executors: Running lambda cmd: docker run -i  -v "/Users/myuser/functions":/var/task -e AWS_REGION="$AWS_REGION" -e DOCKER_LAMBDA_USE_STDIN="$DOCKER_LAMBDA_USE_STDIN" -e LOCALSTACK_HOSTNAME="$LOCALSTACK_HOSTNAME" -e EDGE_PORT="$EDGE_PORT" -e _HANDLER="$_HANDLER" -e AWS_LAMBDA_FUNCTION_TIMEOUT="$AWS_LAMBDA_FUNCTION_TIMEOUT" -e AWS_LAMBDA_FUNCTION_NAME="$AWS_LAMBDA_FUNCTION_NAME" -e AWS_LAMBDA_FUNCTION_VERSION="$AWS_LAMBDA_FUNCTION_VERSION" -e AWS_LAMBDA_FUNCTION_INVOKED_ARN="$AWS_LAMBDA_FUNCTION_INVOKED_ARN" -e AWS_LAMBDA_COGNITO_IDENTITY="$AWS_LAMBDA_COGNITO_IDENTITY"   --rm "lambci/lambda:go1.x" "bin/dbchanges"
localstack    | 2021-03-17T13:14:54:DEBUG:localstack.services.awslambda.lambda_executors: Lambda arn:aws:lambda:us-east-1:000000000000:function:myService-local-dbchanges result / log output:
localstack    | null

Terraform: issue

But when deploying from Terraform, the function cannot be found and the invocation fails with the following logs:

localstack    | 2021-03-17T13:30:32:INFO:localstack.services.awslambda.lambda_executors: Running lambda cmd: docker run -i  -v "/tmp//zipfile.717163a0":/var/task -e AWS_REGION="$AWS_REGION" -e DOCKER_LAMBDA_USE_STDIN="$DOCKER_LAMBDA_USE_STDIN" -e LOCALSTACK_HOSTNAME="$LOCALSTACK_HOSTNAME" -e EDGE_PORT="$EDGE_PORT" -e _HANDLER="$_HANDLER" -e AWS_LAMBDA_FUNCTION_TIMEOUT="$AWS_LAMBDA_FUNCTION_TIMEOUT" -e AWS_LAMBDA_FUNCTION_NAME="$AWS_LAMBDA_FUNCTION_NAME" -e AWS_LAMBDA_FUNCTION_VERSION="$AWS_LAMBDA_FUNCTION_VERSION" -e AWS_LAMBDA_FUNCTION_INVOKED_ARN="$AWS_LAMBDA_FUNCTION_INVOKED_ARN" -e AWS_LAMBDA_COGNITO_IDENTITY="$AWS_LAMBDA_COGNITO_IDENTITY"   --rm "lambci/lambda:go1.x" "dbchanges"
localstack    | 2021-03-17T13:30:33:DEBUG:localstack.services.awslambda.lambda_executors: Lambda arn:aws:lambda:us-east-1:000000000000:function:dbchanges result / log output:
localstack    | {"errorType":"exitError","errorMessage":"RequestId: 4f3cfd0a-7905-12e2-7d4e-049bd2c1a1ac Error: fork/exec /var/task/dbchanges: no such file or directory"}

After inspecting the two log sets, I noticed that the path mounted by the Terraform + LocalStack Docker executor is different. Serverless points the volume mount at the correct folder, i.e. /Users/myuser/functions, while Terraform mounts /tmp//zipfile.somevalue, which seems to be the root of the issue.

In my Serverless config file, the Lambda mountCode setting is true, which leads me to believe that is why it mounts and executes correctly.

lambda:
      mountCode: True
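For context, that setting comes from the serverless-localstack plugin; a minimal sketch of the relevant serverless.yml section might look like this (the stage name is an assumption from my setup):

```yaml
# serverless.yml (sketch): the serverless-localstack plugin mounts the
# project directory into the Lambda container instead of uploading a zip.
plugins:
  - serverless-localstack
custom:
  localstack:
    stages:
      - local
    lambda:
      mountCode: True   # mounts the code dir, e.g. /Users/myuser/functions
```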

So my question is: what can I do in Terraform so that the uploaded function actually gets executed by the Docker container, or how do I tell Terraform to mount the correct directory so that it can find the function? My Terraform Lambda function definition is:

data "archive_file" "dbchangeszip" {
  type        = "zip"
  source_file = "../bin/dbchanges"
  output_path = "./zips/dbchanges.zip"
}

resource "aws_lambda_function" "dbchanges" {
  description      = "Function to capture dynamodb change"
  runtime          = var.runtime
  timeout          = var.timeout
  memory_size      = var.memory
  role             = aws_iam_role.lambda_role.arn
  handler          = "dbchanges"
  filename         = "./zips/dbchanges.zip"
  function_name    = "dbchanges"
  source_code_hash = data.archive_file.dbchangeszip.output_base64sha256
}

P.S. Some other things I tried:

  • setting the handler in Terraform to bin/handler to mimic Serverless

1 Answer


Figured out the issue. When using Terraform, the S3 bucket the functions are stored in isn't defined, so the bucket and key both have to be set in the resource definition in Terraform.

Example:

resource "aws_lambda_function" "dbchanges" {
  s3_bucket        = "__local__"
  s3_key           = "/Users/myuser/functions/"
  role             = aws_iam_role.lambda_role.arn
  handler          = "bin/dbchanges"
  # filename       = "./zips/dbchanges.zip"
  function_name    = "dbchanges"
  source_code_hash = data.archive_file.dbchangeszip.output_base64sha256
}

The two important values are:

  s3_bucket = "__local__"
  s3_key = "/Users/myuser/functions/"

Where s3_key is the absolute path to the directory containing the compiled functions on the host machine.
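Since the question also asked about the provider configuration, a minimal AWS provider block pointed at LocalStack's edge port might look like this (a sketch; the endpoint URLs assume the default port 4566, and some argument names differ across provider versions, e.g. s3_force_path_style became s3_use_path_style in v4):

```hcl
# provider.tf (sketch): point the AWS provider at LocalStack's edge port
# so Lambda, DynamoDB, S3, and IAM calls go to the local container.
provider "aws" {
  region                      = "us-east-1"
  access_key                  = "test"   # LocalStack accepts dummy credentials
  secret_key                  = "test"
  s3_force_path_style         = true
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true

  endpoints {
    lambda   = "http://localhost:4566"
    dynamodb = "http://localhost:4566"
    s3       = "http://localhost:4566"
    iam      = "http://localhost:4566"
  }
}
```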
