SMS Blog
Deploying DokuWiki on Amazon Elastic Container Service (ECS) – Part 2 of 2
By Rob Stewart, Cloud Architect, SMS
Improving the DokuWiki Deployment
In Part 1 of this series, we documented a very basic “click-ops” deployment of an ECS Task running the Bitnami DokuWiki container. If you haven’t read that post yet, please do so before reading this one.
In this post, we are going to address some of the deficiencies in the original deployment:
- Improving the fault tolerance of the DokuWiki deployment
- Moving on from a “Click-Ops” deployment via the AWS console to an Infrastructure as Code (IaC) deployment using Terraform
Improving the Fault Tolerance of our Elastic Container Service (ECS) DokuWiki Deployment
In Part 1 of this series, we performed a manual deployment of the following resources using the AWS Console:
- An ECS Task Definition which referenced the Bitnami DokuWiki container from Docker Hub that we wanted to run on AWS.
- An ECS Cluster which is used by AWS to logically separate sets of ECS Tasks and ECS Services.
- An ECS Task which is a running instance of the ECS Task Definition we created.
- A Security Group which controlled the network traffic going to the ECS Task running the DokuWiki container.
Exhibit 1: The Original Deployment of DokuWiki
After we finished deploying all these resources, we found that the deployment was not very robust. If our ECS Task crashed, our application would stop working and any data we had added to DokuWiki would be lost. We also noted that we had to connect to our application using a nonstandard TCP port.
This time around, we are going to enhance the deployment by introducing the following changes:
- An ECS Service which will restart the ECS Task if it fails
- An Elastic File System (EFS) volume to store DokuWiki data so that it no longer resides on the running ECS Task and is preserved if the Task should fail
- An Application Load Balancer (ALB) to give us a consistent URL for our application and route traffic dynamically to the ECS Tasks created by the ECS Service
- Security Groups for our ALB and EFS to control network traffic
Exhibit 2: An Enhanced Deployment of DokuWiki
It would take a long time to run through all the configuration required to create all of these services and the connections between them using the AWS console, and there is a good chance that we might miss a step along the way. There is a better way to complete this deployment.
From “Click-Ops” to Infrastructure as Code Using HashiCorp Terraform
One of the major shortcomings of the original deployment was that all the resources were created via the AWS console. When you are first learning how to use AWS, creating resources via the console can be helpful as it will enable you to gain an understanding of how AWS services work together. However, there are several shortcomings of this approach when it is time to make the transition to deploying production workloads on AWS.
- Unless somebody watches you click through the console, or is very good at picking through AWS CloudTrail logs after the fact, they will not get a full understanding of the steps you followed to complete a deployment.
- If you want a shot at a repeatable deployment, you have to write a long run book detailing each step of the process. Even if you include every step and the person following your directions is very conscientious, there is a good chance they will miss a step, and small inconsistencies will creep in over time. Eventually, the document will also fall out of date as the AWS console changes and evolves.
- In most cases, you will only discover issues or security vulnerabilities introduced by a manual deployment after the deployment is done. Once a vulnerability is discovered, you have to revise the run book, then revisit each manual deployment and attempt to make corrections, which can lead to other complications.
In sum, manual deployments are not scalable. Fortunately, we have tools like Terraform and AWS CloudFormation, which enable us to define our infrastructure (the resources we will deploy in the cloud) as code. There are several benefits to defining our infrastructure as code.
- We can be precise in defining exactly what resources we need and how each resource will be configured.
- We can store the code in a Version Control System (VCS) like GitLab or GitHub.
- We can introduce code reviews into the process where other engineers can review our code and provide input prior to deployment.
- We can also employ tools to scan our code and identify any deviations from best practices and established standards and detect potential security vulnerabilities so that these issues can be addressed before we deploy infrastructure.
- We can repeat the deployment process exactly, since the level of human involvement in each deployment is dramatically reduced.
- We reduce or eliminate the need to write lengthy run books describing deployments as the code is self-documenting. If you want to know what is deployed then all you need to do is review the code.
- When we discover an issue with a deployment, all we need to do is update the code and deploy it again. If we deploy our updates via the code, then we can be much more confident that the changes will be applied consistently.
Taken together, these benefits combine to make a compelling case for leveraging IaC tools to express our infrastructure as code.
As one of the best tools for Infrastructure as Code, HashiCorp Terraform has several unique benefits:
- HashiCorp Configuration Language (HCL), the language used to write Terraform code, is easy to write and easy to read thanks to its declarative nature and the sheer volume of helpful examples available online, a product of broad industry adoption. While this simplicity makes for a gentle learning curve when first learning Terraform, the language has evolved to handle increasingly complex scenarios without forsaking clarity.
- Terraform’s core workflow loop is easy for engineers to understand and use: generate a plan describing what changes will be made, apply the changes in accordance with that plan after review, and then optionally roll back all changes via a destroy operation if they are no longer needed. This simplicity enables rapid iteration when writing code to deploy infrastructure. (The loop is shown in command form after this list.)
Exhibit 3: The Terraform Workflow
- Terraform tracks the infrastructure that it deploys in a state file which enables tracking of what has been deployed and makes it simple to completely remove that infrastructure when it is no longer needed.
- Terraform supports packaging code into reusable modules. In the HashiCorp Terraform documentation, modules are described as follows:
- A module is a container for multiple resources that are used together. Modules can be used to create lightweight abstractions, so that you can describe your infrastructure in terms of its architecture, rather than directly in terms of physical objects.
- The files in the root of a Terraform project are really just a module as far as Terraform is concerned. The root module can reference other modules using module blocks. You will see this in action below.
- HashiCorp encourages developers to create and use modules by providing a searchable module registry which now contains hundreds of robust modules contributed by the community.
- Terraform is designed to support a diverse ecosystem of platforms and technologies via plug-ins called providers. Providers are responsible for managing the communication between Terraform and other cloud services and technologies. One benefit of this approach is that the core Terraform functionality and the functionality made available via a given provider can evolve independently. For example, when a cloud provider makes a new service available then that service can be added to an updated version of the Terraform provider, and Terraform will automatically support it.
Exhibit 4: Terraform Providers
- Most importantly, Terraform is fast. You can deploy and then destroy a few resources or a complex environment made up of hundreds of resources using Terraform in a matter of minutes.
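To make the workflow loop concrete, here it is in command form; we will run each of these commands later in this post:

terraform init
terraform plan
terraform apply
terraform destroy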
Deploying DokuWiki on ECS Using Terraform
If you have never used Terraform before, you will need to get your computer set up first. After that, you will download the Terraform code using Git, review the code, deploy the DokuWiki resources to AWS by running Terraform commands, validate that it worked by logging into the AWS Console, and then destroy the infrastructure created by the code.
Getting Set Up
In order to follow along with the steps in this post, you will first need to install Git, the Terraform command line interface (CLI), and the AWS CLI. You will need access to an AWS account with IAM Administrative permissions, and you will need to set up programmatic access to AWS with the AWS CLI.
Install Git
The Terraform code associated with this post has been uploaded to a GitLab repository. In order to download this code and follow along, you will need to install Git. Follow the installation directions on the Git website to get started. If you haven’t ever used Git before, the git-scm.com site has a lot of great documentation to get you going in the right direction, including the Pro Git book.
Install the Terraform Command Line Interface (CLI)
In order to follow along with the steps in this post, you will need to install Terraform. The Terraform installation tutorial on the HashiCorp Learn site takes you step by step through the installation process.
Install the AWS CLI
In order to create infrastructure on AWS using Terraform, you will also need to install the AWS CLI. The AWS CLI installation page in the AWS documentation takes you step by step through the installation process.
Set Up an AWS Account
You will also need access to an AWS account with IAM Administrative permissions. If you were following along with the first post in this series then you already created an AWS account.
NOTE: If you follow along with the steps in this post, there is some chance you may incur minimal charges. However, if you create a new account you will be under the AWS Free Tier. That said, it is always prudent to remove any resources you create in AWS right after you finish using them so that you limit the risk of unexpected charges. Guidance on how to remove the resources created by the Terraform code after we finish with them is provided below.
Set Up Programmatic Access to AWS for Terraform Using the AWS CLI
Before you can deploy infrastructure to your AWS account using Terraform, you will need to generate an AWS IAM Access Key and Secret Key (a key pair) using the AWS Console. After you generate the key pair, you will need to configure the AWS CLI to use it via the aws configure command. The AWS documentation provides step-by-step directions on how to generate an Access Key and Secret Key for your AWS IAM user in the AWS Console and then configure the AWS CLI to use those credentials.
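When you run aws configure, the interaction looks something like this (the key values shown are the placeholder examples from the AWS documentation, not real credentials):

aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: json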
NOTE: When you run the aws configure command, you will be prompted to select a region. Make sure you specify one.
NOTE: When you generate an Access Key and Secret Key for an IAM user, that key pair grants the same access to your AWS account that your AWS login and password have. If the IAM account has permissions to create resources, then anybody who possesses the Access Key and Secret Key can create resources. Therefore, you should never share these keys, and you should treat them with the same care as your login and password.
Download the ECS DokuWiki Terraform Code from GitLab
Before we can start deploying infrastructure using the Terraform CLI we need to download the code from GitLab. Enter the following command to download the project code from GitLab:
git clone https://gitlab.com/sms-pub/terraform-aws-ecs-dokuwiki-demo.git
Next change into the directory containing the code you just downloaded.
cd terraform-aws-ecs-dokuwiki-demo
Review the Terraform Code
If you take a look at the Terraform code you just downloaded, you will see several files and folders.
Exhibit 5: The Terraform ECS Code.
Files in sub-directories are not represented for the sake of brevity.
The following table summarizes the files and folders in the repository. The Terraform files are described in further detail below.
| Name | Object Type | Contents |
| --- | --- | --- |
| /terraform.tf | Terraform file | terraform block |
| /provider.tf | Terraform file | Provider block |
| /variables.tf | Terraform file | Variable blocks |
| /main.tf | Terraform file | Module block for the dokuwiki module |
| /outputs.tf | Terraform file | Output blocks for output values |
| /README.md | Markdown document | Overview documentation for the GitLab project. This file is displayed when you view the project on GitLab.com. |
| /modules | Directory | Contains the dokuwiki module that is referenced by the module block in main.tf |
| /modules/dokuwiki | Directory | Contains all the files that make up the dokuwiki Terraform module. Most of the Terraform files in this folder contain module blocks referencing the modules in its modules subfolder. |
| /modules/dokuwiki/modules | Directory | Contains all the modules used by the dokuwiki module |
| /modules/dokuwiki/modules/application-load-balancer | Directory | Terraform module which creates Application Load Balancers (ALBs) |
| /modules/dokuwiki/modules/ecs-cluster | Directory | Terraform module which creates ECS Clusters |
| /modules/dokuwiki/modules/ecs-service | Directory | Terraform module which creates ECS Services |
| /modules/dokuwiki/modules/ecs-task-definition | Directory | Terraform module which creates ECS Task Definitions |
| /modules/dokuwiki/modules/efs | Directory | Terraform module which creates the EFS storage |
| /modules/dokuwiki/modules/security-group | Directory | Terraform module which creates Security Groups |
NOTE: The intent behind modules is to create code that can be reused across multiple projects. Therefore, it is not generally considered a best practice to put Terraform modules in subfolders within the project that references them, as this project does for simplicity. Instead, it is more common for a module block to reference the module’s own repository and a tag within it. The modules tutorial on the HashiCorp Learn site goes into more detail on this; a sketch of the pattern follows.
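For example, a module block can point at a Git repository and pin a specific tag instead of using a local path. This is a minimal sketch with a hypothetical repository URL and tag, not the layout this project actually uses:

module "dokuwiki" {
  # Hypothetical repository URL and tag, shown for illustration only
  source = "git::https://gitlab.com/example-group/terraform-aws-dokuwiki.git?ref=v1.0.0"

  region       = var.region
  default_tags = var.default_tags
  name_prefix  = var.name_prefix
}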
terraform.tf
The terraform.tf file contains a terraform block which defines settings for the project including required terraform CLI and AWS provider versions:
# Terraform block which specifies version requirements
terraform {
  # Specify required providers and versions
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.21.0"
    }
  }

  # Specify required version for terraform itself
  required_version = ">= 1.2.4"
}
provider.tf
The provider.tf file contains a provider block which defines settings for the AWS Terraform provider including the AWS region where the resources will be created and default tags that will be applied to all of the resources created using this provider:
# Terraform AWS Provider block
# Variables come from the variables.tf file
provider "aws" {
  # Set default AWS Region
  region = var.region

  # Define default tags
  default_tags {
    tags = merge(var.default_tags, )
  }
}
variables.tf
The variables.tf file contains variable blocks which set the AWS region, the resource name prefix that is prepended to the names of all the resources created by Terraform, and the default tags that are applied to all resources created by this project:
# AWS Region where resources will be deployed
variable "region" {
  type        = string
  description = "AWS Region where resources will be deployed"
  default     = "us-east-1"
}

# Names for all resources created by this project will have this prefix applied
variable "name_prefix" {
  type        = string
  description = "Prefix all resource names"
  default     = "dokuwiki"
}

# All resources will have these tags applied
variable "default_tags" {
  description = "A map of tags to add to all resources"
  type        = map(string)
  default = {
    tf-owned = "true"
    repo     = "https://TODOUPDATEURL"
    branch   = "main"
  }
}
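Because every variable has a default, the code runs as-is with no extra input. If you do want to change a value, such as the deployment region or the name prefix, you can create a terraform.tfvars file rather than editing the code. A minimal sketch (the values shown are illustrative):

# terraform.tfvars -- values here override the variable defaults above
region      = "us-west-2"
name_prefix = "mywiki"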
main.tf
The main.tf file contains the module block for the dokuwiki module:
# This module block creates all of the AWS resources for DokuWiki
module "dokuwiki" {
  # Specify the path to the dokuwiki module in this project
  source = "./modules/dokuwiki"

  # These variables come from variables.tf
  region       = var.region
  default_tags = var.default_tags
  name_prefix  = var.name_prefix
}
If you are only looking at the files in the root directory of the project, it might seem as though a lot of detail is missing, and it is. To simplify the code, we have intentionally placed all of the resources created by the deployment in the dokuwiki module, which in turn references other modules to actually create resources in AWS.
Exhibit 6: The Relationships Between the Different Modules in the DokuWiki Project
outputs.tf
The outputs.tf file contains output blocks, which surface values from modules and resources in the output when we run Terraform commands.
# Name of the ECS Cluster
output "ecs_cluster_name" {
  description = "The name of the ECS cluster"
  value       = module.dokuwiki.ecs_cluster_name
}

# DNS Name for the Application Load Balancer
output "alb_dns_name" {
  description = "The DNS name of the load balancer."
  value       = module.dokuwiki.alb_dns_name
}
Both of these output blocks pull their values from the dokuwiki module.
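Once we apply this code (later in this post), we can re-read these values at any time, without scrolling back through the apply output, by running the terraform output command:

terraform output
terraform output alb_dns_name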
Deploy DokuWiki Resources Using Terraform
Now that we have taken an initial look at the code it is time to start running terraform commands to turn this code into resources in AWS. We’ll go through the following steps to execute this deployment using terraform:
- Initialize the project and download required providers
- Generate and review the list of changes that terraform will perform
- Deploy changes to our infrastructure
- Test the deployment
- Roll back the changes made by Terraform
These steps are described in more detail in the following sections.
Step 1: Initialize the Project – terraform init
Before we can start deploying resources with Terraform, we need to instruct it to download any modules and provider plug-ins that we are using in our code.
Make sure you change into the project folder and then run the following command to initialize the project:
terraform init
When you run this command, terraform will do the following:
- Evaluate the code in the current folder (the root module).
- Download any modules referenced in the current folder that are not available locally and put them into a hidden subfolder in our project folder called .terraform.
- Evaluate all the code blocks in the current folder and all module folders to determine which provider plug-ins are needed.
- Download the provider plug-ins and put them in the .terraform folder.
Essentially, it does all the preparation work required to enable us to proceed to the next step.
Step 2: Generate and Review a List of Changes that Terraform will Perform – terraform plan
After we initialize our project using the terraform init command, the next step is to instruct Terraform to generate a list of the changes it will make to our infrastructure; this list of changes is called a plan.
When Terraform generates a plan, it will do the following:
- Evaluate all of the code blocks in the current folder and the code blocks in all of the modules that are referenced in the current folder
- Determine which resources will be created
- Generate a dependency map which determines the order in which those resources will be created
- Print out a detailed output listing exactly what terraform will do if we choose to make changes to our infrastructure
NOTE: Running the command terraform plan is a safe operation. Terraform will not make any changes to your infrastructure when you run a plan. It will only tell you what changes will be made.
Run the following command to instruct terraform to generate a plan:
terraform plan
Let’s go through the output of the plan to see what resources Terraform will create.
Note: The attributes of each resource were removed from the plan for the sake of brevity.
module.dokuwiki.data.aws_vpc.vpc: Reading...
module.dokuwiki.data.aws_vpc.vpc: Read complete after 1s [id=vpc-02bc8afe3a47e8497]
module.dokuwiki.data.aws_subnets.subnets: Reading...
module.dokuwiki.data.aws_subnets.subnets: Read complete after 0s [id=us-east-1]
The first statements we find in the plan output are a collection of reads from data blocks in the dokuwiki module. In Terraform, data blocks represent read-only queries that a provider sends to fetch information about existing infrastructure. If we want to learn more about these, we can check the documentation for the HashiCorp Terraform AWS Provider in the Terraform Registry. A sketch of these data blocks follows the list below.
- The data.aws_vpc data block triggers a call to the AWS API to fetch the properties of a VPC. In this case, the intent is to fetch the ID of the default VPC in the AWS Region. When a new AWS account is created, AWS places a default VPC in the account so workloads can be created without first having to create a VPC.
- The data.aws_subnets data block triggers a call to the AWS API to fetch the subnets in a VPC. In this case, the intent is to get the attributes of all the subnets in the default VPC. It is necessary to specify which subnets will be used when creating resources like Application Load Balancers and ECS Services.
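Based on that intent, the data blocks likely look something like this minimal sketch (the module’s actual filters may differ):

# Look up the default VPC in the region
data "aws_vpc" "vpc" {
  default = true
}

# Look up all subnets that belong to that VPC
data "aws_subnets" "subnets" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.vpc.id]
  }
}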
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create
Terraform marks resources in the plan with a + symbol to indicate that they will be created by Terraform when we eventually create the infrastructure. We’ll see other symbols when we roll back the changes we make.
Terraform will perform the following actions:

  # module.dokuwiki.aws_cloudwatch_log_group.ecs-log-group will be created
  + resource "aws_cloudwatch_log_group" "ecs-log-group" {
      . . .
    }
The aws_cloudwatch_log_group resource block is the first resource in the plan. AWS CloudWatch log groups aggregate logging data from AWS resources. In this case, the log group will capture logging data from our ECS Tasks.
  # module.dokuwiki.aws_iam_role.ecs_task_role will be created
  + resource "aws_iam_role" "ecs_task_role" {
      . . .
    }

  # module.dokuwiki.aws_iam_role_policy_attachment.ecs-task-role-policy-attach will be created
  + resource "aws_iam_role_policy_attachment" "ecs-task-role-policy-attach" {
      . . .
    }
- The aws_iam_role resource block creates an IAM role for our ECS Task. Whenever you deploy a resource in AWS that needs to interact with the AWS API then you have to assign an IAM role with corresponding IAM permissions to that resource.
- The aws_iam_role_policy_attachment resource block attaches an AWS IAM policy to the ECS Task IAM role. In this case, we are attaching minimal permissions that permit the Task to interact with other AWS services. You can read more about the AmazonECSTaskExecutionRolePolicy policy in the AWS documentation. A sketch of this role-and-attachment pattern follows.
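Here is that sketch, with illustrative code that may differ from the module’s actual implementation:

# IAM role that ECS Tasks are allowed to assume
resource "aws_iam_role" "ecs_task_role" {
  name = "dokuwiki-ecstaskrole"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ecs-tasks.amazonaws.com" }
    }]
  })
}

# Attach the AWS-managed task execution policy to the role
resource "aws_iam_role_policy_attachment" "ecs_task_role_policy_attach" {
  role       = aws_iam_role.ecs_task_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}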
  # module.dokuwiki.random_id.index will be created
  + resource "random_id" "index" {
      . . .
    }
The random_id resource block generates a random number which is used to pick subnets for our resources: we don’t care which specific subnets our resources are placed in, but we still need a concrete way to choose them.
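A hedged sketch of how a random index can pick a subnet; the expressions here are assumptions, not necessarily the module’s exact code:

# Random value stored in Terraform state, so the chosen subnet
# stays stable across later plans
resource "random_id" "index" {
  byte_length = 2
}

locals {
  # Use the random value modulo the subnet count as a list index
  subnet_id = data.aws_subnets.subnets.ids[random_id.index.dec % length(data.aws_subnets.subnets.ids)]
}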
  # module.dokuwiki.module.alb.aws_lb.this[0] will be created
  + resource "aws_lb" "this" {
      . . .
    }

  # module.dokuwiki.module.alb.aws_lb_listener.frontend_http_tcp[0] will be created
  + resource "aws_lb_listener" "frontend_http_tcp" {
      . . .
    }

  # module.dokuwiki.module.alb.aws_lb_target_group.main[0] will be created
  + resource "aws_lb_target_group" "main" {
      . . .
    }
- The aws_lb resource block creates an Application Load Balancer which will send HTTP traffic from users of our application to the ECS Tasks running DokuWiki.
- The aws_lb_listener resource block creates an Application Load Balancer Listener which listens for traffic coming to the load balancer and sends it to Target Groups.
- The aws_lb_target_group resource block creates an Application Load Balancer Target Group. When the ECS Service creates a new ECS Task, it will register it to the Target Group so that HTTP traffic coming to the Application Load Balancer can be sent to the ECS Task.
  # module.dokuwiki.module.ecs-cluster.aws_ecs_cluster.this[0] will be created
  + resource "aws_ecs_cluster" "this" {
      . . .
    }

  # module.dokuwiki.module.ecs-service.aws_ecs_service.this will be created
  + resource "aws_ecs_service" "this" {
      . . .
    }

  # module.dokuwiki.module.ecs-task-def-dokuwiki.aws_ecs_task_definition.ecs_task_definition[0] will be created
  + resource "aws_ecs_task_definition" "ecs_task_definition" {
      . . .
    }
- The aws_ecs_cluster resource block creates an ECS Cluster.
- The aws_ecs_service resource block creates an ECS Service.
- The aws_ecs_task_definition resource block creates an ECS Task Definition.
Note: For more information on the relationships between the different ECS resources please refer back to Part 1 of this series.
  # module.dokuwiki.module.efs.aws_efs_access_point.default["Doku"] will be created
  + resource "aws_efs_access_point" "default" {
      . . .
    }

  # module.dokuwiki.module.efs.aws_efs_backup_policy.policy[0] will be created
  + resource "aws_efs_backup_policy" "policy" {
      . . .
    }

  # module.dokuwiki.module.efs.aws_efs_file_system.default[0] will be created
  + resource "aws_efs_file_system" "default" {
      . . .
    }

  # module.dokuwiki.module.efs.aws_efs_mount_target.default[0] will be created
  + resource "aws_efs_mount_target" "default" {
      . . .
    }
- The aws_efs_access_point resource block creates an EFS Access Point which exposes a path on the EFS storage volume as the root directory of the filesystem mapped to the ECS Task.
- The aws_efs_backup_policy resource block creates an EFS backup policy for an EFS storage volume.
- The aws_efs_file_system resource block creates an EFS storage volume which our ECS Tasks use to store Dokuwiki data.
- The aws_efs_mount_target resource block creates an EFS Mount Target; an EFS filesystem can only be accessed from a VPC through a Mount Target. A sketch of how these EFS pieces attach to the ECS Task Definition follows this list.
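To tie these pieces together, the ECS Task Definition declares a volume backed by the EFS filesystem and Access Point, and the container mounts that volume at the path where DokuWiki keeps its data. A hedged sketch of the wiring; the names, CPU/memory sizes, and the /bitnami/dokuwiki path are assumptions based on the Bitnami image’s conventions, not the module’s exact code:

resource "aws_ecs_task_definition" "example" {
  family                   = "dokuwiki"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = "256"
  memory                   = "512"
  execution_role_arn       = aws_iam_role.ecs_task_role.arn

  container_definitions = jsonencode([{
    name         = "dokuwiki"
    image        = "bitnami/dokuwiki:latest"
    essential    = true
    portMappings = [{ containerPort = 8080, protocol = "tcp" }]
    # Mount the EFS-backed volume where DokuWiki stores its data
    mountPoints = [{ sourceVolume = "dokuwiki-data", containerPath = "/bitnami/dokuwiki" }]
  }])

  # The volume is backed by EFS, so its contents survive Task restarts
  volume {
    name = "dokuwiki-data"

    efs_volume_configuration {
      file_system_id     = aws_efs_file_system.default[0].id
      transit_encryption = "ENABLED"

      authorization_config {
        access_point_id = aws_efs_access_point.default["Doku"].id
        iam             = "ENABLED"
      }
    }
  }
}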
  # module.dokuwiki.module.sg_ecs_task.aws_security_group.this_name_prefix[0] will be created
  + resource "aws_security_group" "this_name_prefix" {
      . . .
    }

  # module.dokuwiki.module.sg_ecs_task.aws_security_group_rule.computed_egress_rules[0] will be created
  + resource "aws_security_group_rule" "computed_egress_rules" {
      . . .
    }

  # module.dokuwiki.module.sg_ecs_task.aws_security_group_rule.computed_ingress_with_source_security_group_id[0] will be created
  + resource "aws_security_group_rule" "computed_ingress_with_source_security_group_id" {
      . . .
    }

  # module.dokuwiki.module.sg_efs.aws_security_group.this_name_prefix[0] will be created
  + resource "aws_security_group" "this_name_prefix" {
      . . .
    }

  # module.dokuwiki.module.sg_efs.aws_security_group_rule.computed_egress_rules[0] will be created
  + resource "aws_security_group_rule" "computed_egress_rules" {
      . . .
    }

  # module.dokuwiki.module.sg_efs.aws_security_group_rule.computed_ingress_with_source_security_group_id[0] will be created
  + resource "aws_security_group_rule" "computed_ingress_with_source_security_group_id" {
      . . .
    }

  # module.dokuwiki.module.sg_alb.module.sg.aws_security_group.this_name_prefix[0] will be created
  + resource "aws_security_group" "this_name_prefix" {
      . . .
    }

  # module.dokuwiki.module.sg_alb.module.sg.aws_security_group_rule.egress_rules[0] will be created
  + resource "aws_security_group_rule" "egress_rules" {
      . . .
    }

  # module.dokuwiki.module.sg_alb.module.sg.aws_security_group_rule.ingress_rules[0] will be created
  + resource "aws_security_group_rule" "ingress_rules" {
      . . .
    }

  # module.dokuwiki.module.sg_alb.module.sg.aws_security_group_rule.ingress_rules[1] will be created
  + resource "aws_security_group_rule" "ingress_rules" {
      . . .
    }

  # module.dokuwiki.module.sg_alb.module.sg.aws_security_group_rule.ingress_with_self[0] will be created
  + resource "aws_security_group_rule" "ingress_with_self" {
      . . .
    }
- The aws_security_group resource blocks create Security Groups in the VPC; a Security Group is essentially a firewall for the resources associated with it.
- The aws_security_group_rule resource blocks create Security Group Rules for the security groups. Security Group Rules define what incoming and outgoing network traffic is permitted to and from the resources that the security groups are associated with.
For the ECS deployment we have three Security Groups (a sketch of the chaining pattern follows this list):
- dokuwiki.module.sg_efs.aws_security_group – The Security Group assigned to the EFS storage volume which only permits traffic coming from resources that have been assigned to the ECS Tasks Security Group.
- dokuwiki.module.sg_ecs_task.aws_security_group – The Security Group assigned to the ECS Tasks which only permits traffic coming from the resources that have been assigned to the ALB Security Group.
- dokuwiki.module.sg_alb.module.sg.aws_security_group – The Security Group assigned to the ALB which allows incoming HTTP traffic from any address.
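The chaining works because a Security Group Rule can name another Security Group as its traffic source instead of an IP range. A minimal sketch of the EFS ingress rule, with illustrative resource names (EFS serves NFS on TCP port 2049):

resource "aws_security_group_rule" "efs_from_ecs_tasks" {
  type      = "ingress"
  from_port = 2049
  to_port   = 2049
  protocol  = "tcp"

  # The rule belongs to the EFS Security Group...
  security_group_id = aws_security_group.efs.id

  # ...and only admits traffic from members of the ECS Task Security Group
  source_security_group_id = aws_security_group.ecs_task.id
}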
Plan: 25 to add, 0 to change, 0 to destroy.

──────────────────────────────────────────────

Changes to Outputs:
  + alb_dns_name     = (known after apply)
  + ecs_cluster_name = "dokuwiki-cluster"
Note: You didn’t use the -out option to save this plan, so Terraform can’t guarantee to take exactly these actions if you run terraform apply now.
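If you want Terraform to guarantee that it applies exactly the plan you reviewed, you can save the plan to a file with the -out option and pass that file to the apply command:

terraform plan -out=tfplan
terraform apply tfplan

Note that when you apply a saved plan file, Terraform skips the interactive approval prompt, since you already reviewed the plan.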
When we choose to move on to the next step, we will be deploying 25 distinct resources to AWS! We can also see that the plan shows two changes to Outputs.
- The ecs_cluster_name has a value because Terraform can already determine what the value should be.
- The alb_dns_name shows a value of (known after apply) because this value will only be known after the Application Load Balancer (ALB) is created.
NOTE: When you run a terraform plan command, it is very important to review the output carefully and confirm the plan is doing what you expect it to do.
Step 3: Make Changes to Infrastructure – terraform apply
After we run the terraform plan command to generate the planned list of changes that Terraform will make to our infrastructure, the next step is to instruct Terraform to carry out those changes.
Run the following command to instruct terraform to initiate the process of applying the changes that we saw in the plan:
terraform apply
When you run the terraform apply command, Terraform will generate a new plan for you. Let’s go through the output of the apply command.
Note: The list of resources was removed from the sample output for the sake of brevity.
module.dokuwiki.data.aws_vpc.vpc: Reading...
module.dokuwiki.data.aws_vpc.vpc: Read complete after 1s [id=vpc-02bc8afe3a47e8497]
module.dokuwiki.data.aws_subnets.subnets: Reading...
module.dokuwiki.data.aws_subnets.subnets: Read complete after 0s [id=us-east-1]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

. . .
The output from the terraform apply command starts out just like the output from the terraform plan command we just ran, so we don’t need to go over it again. However, if you scroll to the end of the output you will see something new.
Plan: 25 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + alb_dns_name     = (known after apply)
  + ecs_cluster_name = "dokuwiki-cluster"

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value:
Terraform is asking you to confirm that you want it to carry out all the actions that are listed in the plan. Go ahead and type yes at the prompt and hit enter. When you do this, Terraform will start creating all of the resources in AWS on your behalf.
As Terraform works through the process of carrying out the planned changes to our infrastructure, it lists out what it is doing.
Note: Some of the output was omitted for the sake of brevity.
module.dokuwiki.random_id.index: Creating...
module.dokuwiki.random_id.index: Creation complete after 0s [id=7lA]
. . .
module.dokuwiki.module.ecs-service.aws_ecs_service.this: Still creating... [2m20s elapsed]
module.dokuwiki.module.ecs-service.aws_ecs_service.this: Creation complete after 2m23s [id=arn:aws:ecs:us-east-1:816649246361:service/dokuwiki-cluster/dokuwiki-service]

Apply complete! Resources: 25 added, 0 changed, 0 destroyed.

Outputs:

alb_dns_name = "dokuwiki-alb-979498190.us-east-1.elb.amazonaws.com"
ecs_cluster_name = "dokuwiki-cluster"
It should take around 3 minutes for Terraform to create all 25 resources! This is a huge time savings if you consider how long it would take to create all of these resources by clicking around in the AWS Console.
NOTE: Terraform tracks all the changes it makes to infrastructure in a state file. This will come up later when we are finished and want Terraform to roll back all the changes it has made for us.
Notice that the output for the alb_dns_name now has a value. Terraform can tell us the DNS name of the Application Load Balancer (ALB) because it has now been created. Try copying the value of alb_dns_name from your output (which will be different from mine) and pasting it into your browser to go to the DokuWiki site Terraform created.
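If you prefer the command line, you can pull the DNS name straight out of Terraform’s outputs and test it with curl; for example:

curl -I "http://$(terraform output -raw alb_dns_name)"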
Exhibit 7: Accessing Dokuwiki in the browser.
NOTE: The DokuWiki application may not load the first time you try it. When the ECS Task is created, AWS needs to pull the Bitnami DokuWiki container from Docker Hub and then start it, which may take a few minutes. If you try to access the DNS name for the ALB in your browser and it does not load or you see an error, just wait a few minutes and try again.
Step 4: Test the Deployment
If we were able to launch the DokuWiki site using the alb_dns_name in the last step, then we have already tested a lot of the infrastructure. At the very least, the following is working:
- Our Application Load Balancer (ALB) is accepting the incoming network traffic from our browser via its Security Group
- The ALB is routing network traffic to the ECS Task running DokuWiki, which means we have a running ECS Cluster with an ECS Task running DokuWiki
- Unlike the deployment in Part 1 of this series, we didn’t have to specify a port number in the URL to access the running DokuWiki container from the browser
However, there are two enhancements in this deployment, compared with the deployment in Part 1 of this series, that we have not yet verified:
- The ECS Service that runs our ECS Task, so that if the ECS Task stops for some reason, the ECS Service will start a new one for us
- The EFS storage that holds our content. Previously, our data was stored on the DokuWiki ECS Task itself, so it would be lost if the Task stopped or failed; now the data remains available even if the Task is lost
Now that we have deployed the infrastructure using Terraform, we can test these aspects of the deployment by adding content to DokuWiki via the browser, stopping the running ECS Task, and then verifying that the ECS Service starts a new ECS Task and that our content is still visible when we refresh the page in the browser.
- Add Content to Dokuwiki
From your browser, click the pencil on the right side of the page to engage the content editor for the current page.
Exhibit 8: Edit the current page in Dokuwiki.
Next, type some text into the text box for the page and then click the Save button. You should now see the text you changed appear on the page.
- Stop the ECS Task
Now that we have added content to the page in DokuWiki, we should stop the running ECS Task and then wait to see if it starts again. We could log in to the AWS Console and stop the running Task, but it is much quicker to use the AWS Command Line Interface (CLI) instead. We need two AWS CLI commands to do this.
- aws ecs list-tasks – lists the ECS Tasks running in an ECS Cluster (see the AWS CLI documentation for details)
- aws ecs stop-task – stops a running ECS Task (also covered in the AWS CLI documentation)
First, run the following command to get a list of running ECS Tasks on our cluster.
aws ecs list-tasks --cluster dokuwiki-cluster
If we have set up the AWS CLI correctly with an IAM Access Key and Secret Key, then we should get a response like this when we run the command:
{ "taskArns": [ "arn:aws:ecs:us-east-1:816649246361:task/dokuwiki-cluster/ef120a2a79fe4e4e8efb70a6623d886e" ] }
Note: The output you get will not match mine exactly. The identifiers will be different.
Next, we need to stop the ECS Task we saw when we ran the aws ecs list-tasks command. The aws ecs stop-task command takes the name of our ECS Cluster and the identifier of the Task we want to stop. Run the following command, substituting the ECS Task ID you got when you ran the first command.
aws ecs stop-task --cluster dokuwiki-cluster --task ef120a2a79fe4e4e8efb70a6623d886e
If we ran the command correctly then AWS will stop the task and return all of the parameters of the stopped task. Hit the q key to get back to the command prompt.
Now that we have run the command to stop the Task, run the following command again to see whether the Task was actually stopped.
aws ecs list-tasks --cluster dokuwiki-cluster
If we run the aws ecs list-tasks command fast enough, we may not see any Tasks in the list. However, if we wait 15 seconds and run it again, we should see another Task listed with a new Task ID.
{ "taskArns": [ "arn:aws:ecs:us-east-1:816649246361:task/dokuwiki-cluster/91e4b6e4c0084d0493756cfe0c4d7898" ] }
Note that the ID at the end of the task is different this time because the ECS Service created a new ECS Task.
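You can also watch the ECS Service replace the Task by inspecting its event log with the aws ecs describe-services command; the service name appears in the apply output from earlier:

aws ecs describe-services --cluster dokuwiki-cluster --services dokuwiki-service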
- Check the DokuWiki Site to Confirm the Content We Changed is Still Loading from EFS Storage
After confirming that a new ECS Task has started, reload the DokuWiki page in your browser to see if the content you changed previously is still there. You may find that the first time you reload the page you get an error message. This is expected, because it takes a minute for the ECS Service to start a new ECS Task running the DokuWiki container. However, if you wait 30 seconds or so and reload the page, you should find that the content you changed in DokuWiki is still there. A successful test is evidence that our content is now stored on the EFS storage volume instead of on the ECS Task.
Step 5: Roll Back the Changes Made by Terraform – terraform destroy
Now that we have deployed and validated our infrastructure, it is time to remove it. Fortunately, Terraform tracked all the changes it made to our infrastructure in a state file and can use this information to roll back all the changes it made.
Run the following command to instruct terraform to roll back or destroy all the changes made to our infrastructure:
terraform destroy
When you run the terraform destroy command, Terraform will generate a new plan listing the resources that will be removed. Let’s go through the output of the destroy command.
Note: The list of resources was removed from the sample output for the sake of brevity.
module.dokuwiki.random_id.index: Refreshing state... [id=m4g]
module.dokuwiki.data.aws_vpc.vpc: Reading...
module.dokuwiki.aws_iam_role.ecs_task_role: Refreshing state... [id=dokuwiki-ecstaskrole]
. . .

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  - destroy
The terraform destroy command refreshes the state of all the resources first; most of these messages were removed from the sample output for the sake of brevity. After it finishes refreshing the state of all the resources, it tells you what it will do. This time around, the symbol changes to "-" (destroy), indicating that any resource marked with a minus symbol will be destroyed.
Terraform will perform the following actions:

  # module.dokuwiki.aws_cloudwatch_log_group.ecs-log-group will be destroyed
  - resource "aws_cloudwatch_log_group" "ecs-log-group" {
      . . .
    }

  # module.dokuwiki.aws_iam_role.ecs_task_role will be destroyed
  - resource "aws_iam_role" "ecs_task_role" {
      . . .
    }

. . .

Plan: 0 to add, 0 to change, 25 to destroy.

Changes to Outputs:
  - alb_dns_name     = "dokuwiki-alb-979498190.us-east-1.elb.amazonaws.com"
  - ecs_cluster_name = "dokuwiki-cluster"

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value:
Note: Some resources and resource attributes were removed from the sample output for the sake of brevity.
As you continue to review the output, you should notice that every single resource now has a minus symbol next to it, indicating that if you approve the operation, Terraform will remove all the resources. If you scroll down to the end of the output, you’ll see that it will destroy 25 resources, exactly the same number that Terraform created when we ran the apply command.
Terraform is asking you to confirm that you want it to carry out all the actions that are listed in the plan. Go ahead and type yes at the prompt and hit enter. When you do this, Terraform will start destroying all of the resources in AWS on your behalf.
As Terraform works through the process of carrying out the planned changes to our infrastructure, it lists out what it is doing.
Note: Some of the output was omitted for the sake of brevity.
module.dokuwiki.module.efs.aws_efs_backup_policy.policy[0]: Destroying... [id=fs-02fc0e0a740ddd00e]
module.dokuwiki.module.sg_alb.module.sg.aws_security_group_rule.egress_rules[0]: Destroying... [id=sgrule-1334013127]
module.dokuwiki.module.efs.aws_efs_mount_target.default[0]: Destroying... [id=fsmt-03bb888c2575f56e4]
. . .

Destroy complete! Resources: 25 destroyed.
It should take around 3 minutes for Terraform to destroy all 25 resources! Again, this is a huge time savings if you consider how long it would take to destroy all of these resources by clicking around in the AWS Console.
After the destroy process finishes, you will find that if you reload the DokuWiki browser tab it will no longer load, because the Application Load Balancer (ALB) created by Terraform no longer exists.
Closing Remarks
We covered a lot of ground in this post.
- We started by looking at some ways to make our ECS DokuWiki deployment more robust using an ECS Service, an Application Load Balancer (ALB), and an EFS volume.
- We listed some of the benefits of Infrastructure as Code (IaC) when compared with “Click-Ops.”
- We went over some of the benefits of Terraform.
- We described the setup requirements for running Terraform with AWS including installing Git, the AWS CLI, and Terraform.
- We pulled the source code for the Terraform deployment from GitLab.
- We reviewed the code, ran a terraform init, ran a terraform plan, and then deployed the code using terraform apply.
- We tested to confirm that the Terraform deployment was successful using the AWS CLI.
- We then used Terraform to destroy our deployment so that we wouldn’t have to pay for resources in AWS that we were no longer using.
Thanks for reading!