
I am trying to set up some automation around AWS infrastructure and just bumped into an issue with module dependencies. Since Terraform has no "include"-type option, it's becoming a little difficult to achieve my goal.

Here is the short description of scenario:

In my root directory I have a file main.tf, which consists of multiple module blocks, e.g.:

module "mytest1" {
  source = "./mymod/dev"
}

module "mytest2" {
  source = "./mymod2/prod"
}

Each of dev and prod has lots of .tf files. A few of the .tf files inside the prod directory need some outputs from resources that exist inside the dev directory.

Since modules have no explicit dependency mechanism, I was wondering if there is any way to run modules in sequence, or any other ideas?

3 Answers


Not entirely sure about your use case for having prod and dev needing to interact in the way you've stated.

I would expect you to maybe have something like the below folder structure:

  • Folder 1: Dev (Contains modules for dev)
  • Folder 2: Prod (Contains modules for prod)
  • Folder 3: Resources (Contains generic resource blocks that both dev and prod module utilise)

Then when you run terraform apply for Folder 1, it will create your dev infrastructure by passing the variables from your modules to the resources (in Folder 3).

And when you run terraform apply for Folder 2, it will create your prod infrastructure by passing the variables from your modules to the resources (in Folder 3).
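As a sketch of that layout (assuming a hypothetical shared instance module inside the Resources folder), the dev folder's configuration might look like:

```
# Dev/main.tf (hypothetical) — passes dev-sized variables
# to the shared resource module in the Resources folder
module "instance" {
  source         = "../Resources/instance" # assumed path
  instance_type  = "t2.micro"              # smaller tier for dev
  instance_count = 1
}
```

The prod folder would contain a near-identical block with prod-sized values, so both environments reuse the same resource definitions.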

If you can't do that for some reason, then Output Variables or Data Sources can potentially help you retrieve the information you need.
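For example, if the dev configuration is applied separately and stores its state in S3, a terraform_remote_state data source (v0.11 syntax; bucket and key names here are assumptions) can read dev's outputs from prod:

```
data "terraform_remote_state" "dev" {
  backend = "s3"
  config {
    bucket = "my-terraform-state"    # assumed bucket name
    key    = "dev/terraform.tfstate" # assumed state key
    region = "us-east-1"
  }
}

# Then reference any output the dev configuration exports, e.g.:
# vpc_id = "${data.terraform_remote_state.dev.vpc_id}"
```

This only works for values the dev configuration explicitly declares as outputs.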




There is no reason for you to have different modules for different envs. Usually the difference between lower envs and prod is the number and tier of each resource, and you can just use variables to pass that into the modules.

To deal with this, you can use Terraform workspaces and create one workspace for each env, e.g.:

terraform workspace new staging

This will create a completely new workspace with its own state. If you need to vary the number of resources created, you can use variables or the Terraform workspace name itself, e.g.:

# Your EC2 module
resource "aws_instance" "example" {
    count = "${terraform.workspace == "prod" ? 3 : 1}"
}

# or

resource "aws_instance" "example" {
    count = "${length(var.subnets)}" # you are likely to have more subnets for prod
}


# Your module
module "instances" {
  source  = "./modules/ec2"
  subnets = "${var.subnets}" # your subnets list
}

And that is it: you can have all your modules working for any environment just by creating workspaces, changing the variables for each one in your pipeline, and applying the plan each time.
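Another common pattern (a sketch, using a hypothetical instance_counts variable) is to key a map on the workspace name, so per-environment sizing lives in one place:

```
variable "instance_counts" {
  type = "map"
  default = {
    staging = 1
    prod    = 3
  }
}

resource "aws_instance" "example" {
  # picks the count for whichever workspace is currently selected
  count = "${lookup(var.instance_counts, terraform.workspace)}"
}
```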

You can read more about workspaces here

1 Comment

Just to clarify: it's nothing to do with environments. It's simply a directory name. Each directory consists of multiple .tf files, and those .tf files have dependencies. I want to run the modules in sequence so that the second module gets all the resource outputs from the first module. Since include-type functionality is not available in Terraform, I'm using modules; the module itself has nothing to do with it. There are a hell of a lot of files inside each directory, and I don't want them to mess up my root directory, hence I've placed them into different directories and execute them through modules.

I'm not too sure about your requirement of having the production environment depend on the development environment, but putting the specifics aside, the idiomatic way to create sequencing between resources and between modules in Terraform is to use reference expressions.

You didn't say what aspect of the development environment is consumed by the production environment, but for the sake of example let's say that the production environment needs the id of a VPC created in the development environment. In that case, the development module would export that VPC id as an output value:

# (this goes within a file in your mymod/dev directory)
output "vpc_id" {
  value = "${aws_vpc.example.id}"
}

Then your production module conversely would have an input variable to specify this:

# (this goes within a file in your mymod2/prod directory)
variable "vpc_id" {
  type = "string"
}

With these in place, your parent module can then pass the value between the two to establish the dependency you are looking for:

module "dev" {
  source = "./mymod/dev"
}

module "prod" {
  source = "./mymod2/prod"

  vpc_id = "${module.dev.vpc_id}"
}

This works because it creates the following dependency chain:

  • module.prod's input variable vpc_id, which depends on
  • module.dev's output value vpc_id, which depends on
  • module.dev's aws_vpc.example resource

You can then use var.vpc_id anywhere inside your production module to obtain that VPC id, which creates another link in that dependency chain, telling Terraform that it must wait until the VPC is created before taking any action that depends on the VPC to exist.
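For instance, a hypothetical subnet resource inside the prod module might consume it like this (v0.11 syntax; the resource and CIDR are illustrative assumptions):

```
# (inside mymod2/prod — hypothetical resource using the passed-in VPC id)
resource "aws_subnet" "example" {
  vpc_id     = "${var.vpc_id}" # this reference creates the dependency link
  cidr_block = "10.0.1.0/24"   # assumed CIDR
}
```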

In particular, notice that it's the individual variables and outputs that participate in the dependency chain, not the module as a whole. This means that if you have any resources in the prod module that don't need the VPC to exist then Terraform can get started on creating them immediately, without waiting for the development module to be fully completed first, while still ensuring that the VPC creation completes before taking any actions that do need it.

There is some more information on this pattern in the documentation section Module Composition. It's written with Terraform v0.12 syntax and features in mind, but the general pattern is still applicable to earlier versions if you express it instead using the v0.11 syntax and capabilities, as I did in the examples above.

