This is the first of three posts about Terraform remote state on Azure, AWS and GCP.
Feel free to have a look at the other posts as well, if you're interested in the other cloud provider setups:
Terraform Remote State on Azure
You can find the source code here: https://github.com/bluetuple/terraform-gcp
. . .
Harnessing the power of infrastructure-as-code through Terraform is transformative, but when working in a team, managing consistent state files becomes paramount.
Google Cloud Platform (GCP) offers an integrated solution for this very challenge: Terraform remote state storage.
In this blog post, we’ll walk you through the step-by-step process of setting up Terraform’s remote state on GCP, ensuring your infrastructure projects are both collaborative and coherent.
Whether you’re new to Terraform or looking to optimize your current workflows, this guide will be your roadmap to effective state management on GCP.
. . .
1. Create Environment Variable File
As we have to reference a couple of environment variables, it is recommended to create a hidden file containing all required variables.
Don’t forget to add that secrets-file to your `.gitignore` file if you’re planning to push all code to a git repo!
We'll directly use the Terraform notation for environment variables, so that our Terraform code will be able to read the information as well later on.
Every environment variable starting with `TF_VAR_` will be available to the Terraform code.
Within your local terminal:
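The original snippet is not preserved here, but a minimal sketch of such a file could look like this (the file name `.env-tf-gcp` and all values are placeholders; adjust them to your project):

```bash
# .env-tf-gcp -- hypothetical name; remember to add it to .gitignore!
# Everything prefixed with TF_VAR_ is picked up by Terraform automatically.
export TF_VAR_gcp_project="my-sandbox-project"   # your GCP project ID
export TF_VAR_gcp_region="europe-west3"          # your default region
```

Load it into your current shell with `source .env-tf-gcp`.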
Next, we have to initialize the gcloud CLI with the correct settings:
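A sketch of the usual gcloud setup, reusing the project variable defined above:

```bash
gcloud init                                      # interactive first-time setup
gcloud config set project "$TF_VAR_gcp_project"  # select the sandbox project
```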
We now have to set our own user credentials to access the required APIs in the following steps. You'll be forwarded to the GCP login screen.
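This presumably refers to Application Default Credentials, which Terraform reads for authentication; a sketch:

```bash
# Opens the browser and forwards you to the GCP login screen
gcloud auth application-default login
```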
2. Create a service account for the current project
Follow the naming conventions of your company; we recommend a naming convention like:
sa-<project_name>-<stage>-tf
Take care of the two leading dashes `--` in front of the `description` and `display-name` flags.
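A sketch of the create command, assuming the naming convention above (project name and stage are placeholders):

```bash
gcloud iam service-accounts create "sa-myproject-sandbox-tf" \
  --description="Terraform service account for the sandbox stage" \
  --display-name="sa-myproject-sandbox-tf"
```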
The newly created account will be listed in the Google Cloud console under IAM & Admin -> Service Accounts.
Add the service account name to the environment file; this will come in handy later.
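For example, appended to the hypothetical `.env-tf-gcp` file from step 1 (values are placeholders):

```bash
# Full e-mail address of the service account created above
export TF_VAR_sa_email="sa-myproject-sandbox-tf@my-sandbox-project.iam.gserviceaccount.com"
```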
3. Enable required APIs
When we're using a freshly created GCP project, we have to enable a couple of APIs. You can easily do this via the Google Cloud console under "APIs & Services":
At a minimum, we have to enable the **IAM Service Account Credentials API** and the **Cloud Resource Manager API** within the project.
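Alternatively, both APIs can be enabled from the terminal; a sketch:

```bash
gcloud services enable \
  iamcredentials.googleapis.com \
  cloudresourcemanager.googleapis.com
```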
4. Add necessary roles for the newly created service account
We now need to grant the necessary roles and permissions. For ease of this example, we will go with the Editor role for the service account. In a production environment it is recommended to follow the least-privilege principle and reduce permissions as far as possible. We will elaborate on this a bit more in a later blog post.
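A sketch of granting the Editor role, reusing the environment variables defined earlier:

```bash
gcloud projects add-iam-policy-binding "$TF_VAR_gcp_project" \
  --member="serviceAccount:$TF_VAR_sa_email" \
  --role="roles/editor"
```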
For this sandbox example we will impersonate the service account to make our Terraform changes. In a production environment you should use service account credentials and store them in a credentials vault on GCP. We will dig into this in another article.
For impersonation, we need to fetch the existing policies for the service account and store them in a local policy.json file.
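A sketch of fetching the current policy:

```bash
gcloud iam service-accounts get-iam-policy "$TF_VAR_sa_email" \
  --format=json > policy.json
```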
Modify the policy.json to add yourself as a member to the role `roles/iam.serviceAccountTokenCreator`. Remember to keep the rest of the policies that already exist:
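The result could look roughly like this (your e-mail address is a placeholder; keep the `etag` returned by the previous command, and keep any existing bindings in the array alongside the new one):

```json
{
  "bindings": [
    {
      "role": "roles/iam.serviceAccountTokenCreator",
      "members": [
        "user:you@example.com"
      ]
    }
  ],
  "etag": "<etag-from-get-iam-policy>",
  "version": 1
}
```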
Now we must update the policies:
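A sketch, writing the modified policy back:

```bash
gcloud iam service-accounts set-iam-policy "$TF_VAR_sa_email" policy.json
```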
After completing the initial GCP configuration, we now need to set up the Terraform files defining the infrastructure we're planning to build on GCP.
We will use the following general Terraform file structure, with ../gcp/sandbox as our project folder for this example:
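Based on the files discussed below, the layout looks like this:

```
gcp/
└── sandbox/
    ├── main.tf
    ├── variables.tf
    └── bucket.tf
```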
main.tf
The main.tf is kind of the entry point of our Terraform setup, at least from a reader's perspective. Terraform itself reads all files ending with `.tf` in no particular order and, while processing, works out the optimal order from the dependencies in the configuration.
In the main.tf file we will initialize the Google provider. Use the following as a starting point.
This is the first version of main.tf; we will add additional info later:
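A sketch of this first version; the provider version constraint and the variable names are assumptions that match the `TF_VAR_` variables from step 1:

```hcl
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 5.0"
    }
  }
}

provider "google" {
  project = var.gcp_project
  region  = var.gcp_region

  # Impersonate the service account instead of using exported key files
  impersonate_service_account = var.sa_email
}
```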
variables.tf
All variable definitions go into the file `variables.tf` as follows:
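A sketch matching the environment variables defined in step 1:

```hcl
# variables.tf -- values are injected via the TF_VAR_* environment variables
variable "gcp_project" {
  description = "GCP project ID"
  type        = string
}

variable "gcp_region" {
  description = "Default GCP region"
  type        = string
}

variable "sa_email" {
  description = "Service account to impersonate for Terraform runs"
  type        = string
}
```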
bucket.tf
This declaration file will hold all storage bucket definitions. For our use case we will define the bucket we want to use for storing the remote state.
We will enable versioning and prohibit public access (`versioning { enabled = true }` and `public_access_prevention = "enforced"`).
We must first create the bucket before we can use it for storing the remote state. Of course, we could have configured the bucket via the Google Cloud console or CLI, but the idea is to use as much infrastructure as code as possible.
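A sketch of `bucket.tf`; the naming scheme is an assumption, and bucket names must be globally unique:

```hcl
# bucket.tf -- storage bucket that will later hold the remote state
resource "google_storage_bucket" "tf_state" {
  name     = "${var.gcp_project}-tf-state"  # placeholder naming scheme
  location = var.gcp_region

  # Block any public access to the state bucket
  public_access_prevention = "enforced"

  # Keep a history of state file versions
  versioning {
    enabled = true
  }
}
```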
Now we must initialize Terraform and create the bucket. Carry out the following commands in the terminal and ensure that no errors are thrown.
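The commands in question are the usual Terraform workflow (at this point the state is still local):

```bash
terraform init   # download the google provider
terraform plan   # review the planned changes
terraform apply  # create the bucket, confirm with "yes"
```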
We now should have a bucket… You can check in the Google Cloud console under Cloud Storage -> Buckets.
Define the bucket to be used as backend
We now have to tell Terraform where to find and store the backend state.
For production scenarios you might want to place the backend information and versioning in separate files (backend.tf, version.tf), but for simplicity of our sandbox environment we'll keep all settings dealing with remote state in `main.tf`.
Add the following lines to your main.tf:
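A sketch of the lines to add inside the existing `terraform` block (the bucket name is a placeholder for the bucket created above; the prefix is an assumption):

```hcl
  backend "gcs" {
    bucket = "my-sandbox-project-tf-state"
    prefix = "terraform/state"
  }
```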
Your final main.tf should now look like this:
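Under the assumptions above, it could look like this:

```hcl
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 5.0"
    }
  }

  backend "gcs" {
    bucket = "my-sandbox-project-tf-state"
    prefix = "terraform/state"
  }
}

provider "google" {
  project                     = var.gcp_project
  region                      = var.gcp_region
  impersonate_service_account = var.sa_email
}
```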
For security reasons we prefer to put this information in a separate file, which can then be excluded from being pushed to a GitHub repository.
Create a file named `.backend-config` or whatever suits you; only the leading dot matters, as it declares the file as a hidden file. Put the following declaration with your individual values in it:
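A sketch with placeholder values:

```hcl
# .backend-config -- keep this file out of version control
bucket = "my-sandbox-project-tf-state"
prefix = "terraform/state"
```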
Make sure to add this file to your `.gitignore` if you don't want to push it to the repository.
Your final main.tf now should look like this:
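With the bucket details moved out, the backend block stays empty and is filled in at init time; a sketch:

```hcl
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 5.0"
    }
  }

  # Details are supplied via -backend-config at "terraform init" time
  backend "gcs" {}
}

provider "google" {
  project                     = var.gcp_project
  region                      = var.gcp_region
  impersonate_service_account = var.sa_email
}
```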
Save everything, then run `terraform fmt` and `terraform validate`. If there's no typo, everything should be fine and ready for the next step.
Before we can run plan/apply to activate our changes, we must run a `terraform init` first, which will migrate the state from local to the remote storage.
We must provide the backend config as a parameter:
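A sketch, assuming the hidden file from above:

```bash
terraform init -backend-config=.backend-config
```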
You must acknowledge the switch from the local to the remote backend; after this, you're done!
From now on, any `terraform plan` or `terraform apply` will be tracked in the remote state, independently of your local client.