
Does anybody know a workaround for provisioning a Lambda and its corresponding S3 object in Terraform?

The issue is this: I describe the S3 bucket and the AWS Lambda function and want to apply them in one run. The Lambda has an s3_key property, but the bucket contains no S3 object before the very first Lambda deployment.

Therefore my current workflow is as follows:

Provision the S3 bucket via Terraform -> do a zip deployment to S3 via Concourse -> provision the AWS Lambda, using the key from the zip deployment

But this is not acceptable: Terraform shouldn't depend on an intermediate deployment.

My Lambda zip is built by a different pipeline and can't be committed to the Terraform repo.
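For reference, the configuration described above looks roughly like this (a sketch; all resource names, the runtime, and the role are placeholders, not from the question):

```hcl
resource "aws_s3_bucket" "artifacts" {
  bucket = "my-lambda-artifacts"
}

resource "aws_lambda_function" "app" {
  function_name = "my-function"
  runtime       = "python3.8"
  handler       = "handler.handle"
  role          = aws_iam_role.lambda.arn # role defined elsewhere

  s3_bucket = aws_s3_bucket.artifacts.id
  s3_key    = "app.zip" # no such object exists on the very first apply
}
```

The first `terraform apply` fails at the Lambda resource because `app.zip` has never been uploaded.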

  • Can the different pipeline upload the zip to S3, and then you pass the S3 path as an input to the Terraform that deploys the Lambda function? Commented Apr 7, 2021 at 10:40
  • But that is the current behaviour! Terraform is process 1: it sets up S3 and Lambda on AWS. Concourse is process 2: it pushes the assembled zip to S3. The issue is that I run terraform apply only once, but the zip file has to be put in place between S3 and Lambda creation, which is not possible. Commented Apr 7, 2021 at 12:13
  • Any non-trivial deployment scenario should separate infrastructure from application. In places where I've worked, we might have dozens of scripts that create different parts of the infrastructure (VPC, CI/CD pipeline, log aggregator, Lambda deployment buckets, and so on), and then an additional deployment script per application. Commented Apr 7, 2021 at 12:32
  • But how do you provide defaults for the Lambda? Commented Apr 8, 2021 at 10:24

2 Answers


If you define your zipped Lambda code as an aws_s3_bucket_object and then reference that object from the Lambda, Terraform will be able to create the bucket, the object, and the Lambda in dependency order.
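A minimal sketch of this suggestion (resource names and the local zip path are illustrative):

```hcl
resource "aws_s3_bucket" "artifacts" {
  bucket = "my-lambda-artifacts"
}

resource "aws_s3_bucket_object" "lambda_zip" {
  bucket = aws_s3_bucket.artifacts.id
  key    = "app.zip"
  source = "${path.module}/app.zip"          # the zip must exist locally
  etag   = filemd5("${path.module}/app.zip") # re-upload when it changes
}

resource "aws_lambda_function" "app" {
  function_name = "my-function"
  runtime       = "python3.8"
  handler       = "handler.handle"
  role          = aws_iam_role.lambda.arn

  # Referencing the object's attributes gives Terraform the implicit
  # dependency order: bucket -> object -> lambda.
  s3_bucket = aws_s3_bucket_object.lambda_zip.bucket
  s3_key    = aws_s3_bucket_object.lambda_zip.key
}
```

Because `s3_key` comes from the object resource rather than a literal string, a single apply creates everything in the right order.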


4 Comments

I also tried that, but as I said, the zip file is not in the Terraform repository, so it is not possible to attach the zip to an aws_s3_bucket_object.
You're going to need to get it in there somehow. I don't know Concourse, but if it has a concept of build artifacts, you could stash the zip as an artifact, then pull it into your Terraform build process before Terraform runs. That at least decouples your builds.
Alternatively, build your Lambda with a lifecycle meta-argument that sets s3_key, s3_object_version, and source_code_hash to ignore_changes. Then Concourse could use the AWS API to update the code, and Terraform wouldn't stomp on it. Either way, the root problem is that you have two builds touching the same thing, and you need to make one of them not touch it.
Thank you for the advice, but the issue is different; I am not struggling with a Terraform-vs-Concourse battle over changing the object in S3. The issue is that for Lambda creation I need to specify either the pair (s3_bucket, s3_key) or a filename inside the Terraform repository. The second option is not applicable, because I don't want the Lambda inside the Terraform repository. The first is also broken, because the s3_key doesn't exist during Lambda creation.
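The lifecycle approach suggested in the comments might look like this (a sketch; names are placeholders, and the attribute list follows the aws_lambda_function resource):

```hcl
resource "aws_lambda_function" "app" {
  function_name = "my-function"
  runtime       = "python3.8"
  handler       = "handler.handle"
  role          = aws_iam_role.lambda.arn

  s3_bucket = aws_s3_bucket.artifacts.id
  s3_key    = "app.zip"

  # Terraform creates the function once, then ignores later code updates,
  # so Concourse can call the AWS update-function-code API without
  # Terraform reverting it on the next apply.
  lifecycle {
    ignore_changes = [s3_key, s3_object_version, source_code_hash]
  }
}
```

Note that this only solves ongoing updates; the very first apply still needs some object at `app.zip` to exist.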

After a very detailed investigation, I did the following.

  1. Create a dummy, empty Lambda — no libs, dependencies, etc., only the handler. Zip it and put it in a separate folder in the Terraform project.
  2. Describe an aws_s3_bucket_object for the dummy Lambda zip.
  3. Describe the S3 bucket and the Lambda in Terraform, and make the Lambda depend on the aws_s3_bucket_object.
  4. Run terraform apply successfully. (At this point the Lambda is not executable and has no business logic inside.)
  5. Run the Concourse deployment, which replaces the dummy Lambda zip with the real one and redeploys the Lambda.
  6. DONE!
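The steps above can be sketched as follows (paths and names are illustrative; the ignore_changes block is an assumption added so a later terraform apply does not revert Concourse's deployment — it is not part of the original steps):

```hcl
resource "aws_s3_bucket" "artifacts" {
  bucket = "my-lambda-artifacts"
}

# Step 2: the dummy zip lives in a separate folder of the Terraform project.
resource "aws_s3_bucket_object" "dummy_zip" {
  bucket = aws_s3_bucket.artifacts.id
  key    = "app.zip"
  source = "${path.module}/dummy/dummy.zip"
}

# Step 3: the Lambda references the object, so it is created after it.
resource "aws_lambda_function" "app" {
  function_name = "my-function"
  runtime       = "python3.8"
  handler       = "handler.handle"
  role          = aws_iam_role.lambda.arn

  s3_bucket = aws_s3_bucket_object.dummy_zip.bucket
  s3_key    = aws_s3_bucket_object.dummy_zip.key

  # Assumption: ignore code changes so the real zip that Concourse
  # uploads in step 5 is not replaced by the dummy on the next apply.
  lifecycle {
    ignore_changes = [s3_object_version, source_code_hash]
  }
}
```

The dummy zip exists purely to make the first `terraform apply` succeed; every subsequent deployment goes through Concourse.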

