
Using Terraform for microservices orchestration and deployments

Monday, March 25, 2024
Stefanos Peros
Software engineer

As a digital product studio, our role extends beyond writing application code; it is equally crucial to support our customers by managing the infrastructure that keeps those applications running smoothly and efficiently. We typically leverage public cloud service providers (CSPs) to deploy applications, as they offer significant advantages: cost savings, out-of-the-box scalability, high availability, and more. Manually provisioning resources and managing microservice architectures across Azure, AWS, and GCP through their respective consoles is complex and error-prone. That is why we turn to state-of-the-art Infrastructure as Code (IaC) tools like Terraform, which streamline and automate the provisioning of infrastructure, as we'll explore next.

Infrastructure as Code

Among other advantages, IaC enhances efficiency through automation, ensures consistency by reducing manual errors, speeds up deployments, and makes changes easy to track for agile and scalable infrastructure management. CloudFormation, Azure Resource Manager, and Cloud Deployment Manager are examples of IaC services offered by AWS, Azure, and GCP respectively. However, using these native IaC services can limit flexibility due to vendor lock-in, making it challenging to adapt or migrate across clouds. We therefore favour Terraform for its cloud-agnostic capabilities: we can use the same configuration language (HCL) to define infrastructure on any cloud service provider. Moving forward, we'll delve into how we integrate Terraform into our software development cycle, as illustrated in Figure 1, resulting in a seamless and efficient workflow.

Figure 1: CI/CD pipeline with Terraform to provision resources in public cloud.
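To make the cloud-agnostic point concrete, here is a minimal, hypothetical HCL sketch: the declarative syntax stays the same regardless of the CSP, and only the provider and resource types change (the provider, region, and bucket name below are illustrative, not from our actual setup):

```hcl
# Declare the provider; swapping in google or azurerm keeps the same HCL structure.
provider "aws" {
  region = "eu-west-1" # hypothetical region
}

# Declare a resource; Terraform computes the plan to reach this desired state.
resource "aws_s3_bucket" "assets" {
  bucket = "example-app-assets" # hypothetical bucket name
}
```

Running `terraform plan` shows the changes Terraform would make, and `terraform apply` provisions them.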

Best practices


Having set up Terraform in numerous products, we have refined our project structure as detailed below:


|— aws/
|    |— dev/
|    |— staging/
|    |— production/
|    |— modules/
|    |    |— reusable_component_1/
|    |    |— …
|— gcp/
|    |— …
| …

Specifically, we separate environments into directories that contain the corresponding Terraform code for provisioning their resources. To minimise code duplication, we encapsulate reusable groups of resources in modules/, which are then instantiated in each environment. Alternatively, Terraform workspaces could be leveraged to switch between environments while sharing the same code. In our experience, however, this makes the Terraform code ambiguous to read and far more error-prone, e.g. deleting resources in the wrong environment because the workspace was not switched.
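A hypothetical sketch of how an environment directory instantiates a shared module (the module name, variables, and values below are illustrative; each environment passes its own settings):

```hcl
# aws/production/main.tf — instantiate a reusable module for this environment.
module "api_service" {
  source = "../modules/reusable_component_1"

  # Environment-specific inputs (hypothetical variable names):
  environment   = "production"
  instance_type = "t3.large" # larger sizing than dev/staging
}
```

The dev/ and staging/ directories contain an identical module block with their own input values, so the resource definitions themselves live in one place.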

State management

To facilitate team collaboration, the state file that Terraform uses to maintain an overview of the deployed infrastructure must be stored remotely. We typically configure a remote backend within the CSP, for example by storing the state file in an S3 bucket and using a DynamoDB table as a locking mechanism in AWS (equivalent solutions exist for GCP and Azure). This ensures that when multiple teammates issue Terraform commands simultaneously during local development, they will not conflict with one another. Alternatively, HashiCorp (the company behind Terraform) offers Terraform Cloud as a managed service for remote state management among other features, but it comes with a hefty price tag that, as of 2023, scales with the number of managed resources.
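For the AWS setup described above, the backend configuration looks roughly like this (bucket, table, and key names are hypothetical placeholders):

```hcl
# Store state remotely in S3 and lock it via DynamoDB so that
# concurrent `terraform apply` runs cannot corrupt the state.
terraform {
  backend "s3" {
    bucket         = "example-terraform-state"   # hypothetical bucket name
    key            = "production/terraform.tfstate"
    region         = "eu-west-1"                 # hypothetical region
    dynamodb_table = "example-terraform-locks"   # hypothetical lock table
    encrypt        = true                        # encrypt state at rest
  }
}
```

The DynamoDB table only needs a `LockID` string partition key; Terraform acquires and releases the lock automatically around state-modifying commands.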

Version control

We typically store the Terraform code in the same repository as the rest of the application, as this enables us to directly link Terraform resources to source code, which greatly simplifies the CI/CD pipeline setup. Alternatively, all Terraform code can reside in a separate repository, linking to the source code through artefacts generated across multiple CI/CD pipelines. This is a great option when a clear separation of concerns between DevOps and software developers is desired, as it keeps application code separate from infrastructure code. In the CI/CD pipelines, we define a deploy step where Terraform commands are executed to deploy the infrastructure. For example, we can configure a rule that whenever a branch is merged into the `main` branch, the pipeline deploys the production environment through Terraform. This requires that the GitHub runner has sufficient permissions to create, destroy, and update resources in the cloud environment (e.g. through a separate IAM account with least-privilege permissions). Furthermore, sensitive variables such as API keys and credentials can be injected as secrets that are stored securely outside the code, e.g. as GitHub secrets or within the CSP's secret management service.
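As a sketch of such a deploy step, a GitHub Actions workflow along these lines applies the production configuration on every merge to `main` (directory layout and secret names are assumptions matching the structure above, not our exact pipeline):

```yaml
# .github/workflows/deploy.yml — hypothetical deploy step on merge to main.
name: Deploy production
on:
  push:
    branches: [main]

jobs:
  terraform:
    runs-on: ubuntu-latest
    env:
      # Credentials injected from GitHub secrets, never committed to the repo.
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    defaults:
      run:
        working-directory: aws/production
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
      - run: terraform plan -out=tfplan
      - run: terraform apply -auto-approve tfplan
</imports>
```

Applying a saved plan (`tfplan`) rather than re-planning inside `apply` guarantees that exactly the reviewed changes are executed.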

Closing statement

The emergence of AI services from major cloud service providers has been a significant trend, with AWS, Azure, and GCP continuously expanding their AI and machine learning offerings. Concurrently, Terraform has been evolving to keep pace, offering support for these new AI services either by extending existing providers or introducing new ones, such as a separate OpenAI Azure provider. This adaptability ensures that we can continue managing and integrating the latest AI services into new and existing infrastructures using the familiar and powerful Terraform toolset, thus bridging the gap between cloud-based AI capabilities and infrastructure as code practices.

Interested in hearing more about our approach? Don't hesitate to reach out at hello@panenco.com.
