Automation framework using DevOps methodology and the Akamai provider for Terraform to build self-service auto-management solutions
Published on April 09, 2023 by Dai Tran
project application-security content-delivery-network network-security self-service blog
15 min read
The framework architecture is built upon the following building blocks and tools:
These building blocks are integrated to create the following types of workflows:
Akamai service owners and engineers have access to various building blocks to set up, configure, and maintain the framework infrastructure. This includes the initial setup, such as account creation and access permissions, as well as ongoing configuration fine-tuning and support. These workflows are enabled by the following types of CI/CD pipelines:
Automation developers develop and update the Akamai automation and infrastructure code base, which triggers a development CI/CD pipeline to package and publish the code base to the software Artifactory and Docker repositories. They also develop the CI/CD pipelines used by Akamai service owners, engineers, and end users. These workflows are enabled by the following types of CI/CD pipelines:
Akamai service consumers/end users raise and submit configuration management requests against the automation configuration data repositories, i.e. the self-service portals, via GitHub feature branches and pull requests. The submission triggers a CI/CD pipeline that retrieves the runtime credentials, such as Akamai API clients and AWS service account credentials, from the vault, pulls the automation code from the software Artifactory repository, and runs the Terraform code (`terraform plan`) within a Docker container hosted by a CI agent to generate a Terraform plan for review. Configuration management approvers and/or peer reviewers are then notified of these requests and conduct the approval process. The approval leads to the configuration data being merged into the repository, which triggers a configuration deployment/update CI/CD pipeline that retrieves the runtime credentials from the vault, pulls the automation code from the Artifactory repository, runs the code (`terraform apply`) within a Docker container hosted by a CI agent, and uses the merged configuration data to deploy the requested configuration changes onto the Akamai Staging and Production networks. State locking is acquired before and released after the Terraform operations `terraform plan` and `terraform apply`.
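To make the shape of these pipeline stages concrete, here is a minimal Python sketch of a plan/apply stage. It is an illustration only, not the framework's actual implementation: it assumes the CI agent has already injected vault-provided credentials and the Artifactory code bundle, and all variable and file names are placeholders.

```python
import os
import subprocess
import tarfile

# Hypothetical inputs injected by the CI agent / vault integration
# (placeholder names, not the framework's real variables).
EDGERC_CONTENT = os.environ["AKAMAI_EDGERC"]                           # Akamai API client credentials
CODE_BUNDLE = os.environ.get("CODE_BUNDLE", "automation-code.tar.gz")  # pulled from Artifactory
WORKDIR = os.environ.get("TF_WORKDIR", "network-list")                 # which configuration to run

def run(cmd, cwd):
    """Run a command inside the Docker container and fail the stage on error."""
    print("+ " + " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)

def main(action: str = "plan") -> None:
    # Unpack the automation code pulled from the Artifactory repository.
    with tarfile.open(CODE_BUNDLE) as bundle:
        bundle.extractall(".")

    # Write the Akamai credentials where the Akamai provider expects them.
    with open(os.path.expanduser("~/.edgerc"), "w") as edgerc:
        edgerc.write(EDGERC_CONTENT)

    run(["terraform", "init", "-input=false"], cwd=WORKDIR)
    if action == "plan":
        # CI stage: produce a plan for approvers and peer reviewers.
        run(["terraform", "plan", "-input=false", "-out=tfplan"], cwd=WORKDIR)
    else:
        # CD stage: apply the approved, merged configuration data.
        run(["terraform", "apply", "-input=false", "-auto-approve"], cwd=WORKDIR)

if __name__ == "__main__":
    main(os.environ.get("TF_ACTION", "plan"))
```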
The pipeline builds a Docker image that is used as a Docker container in the other pipelines. This Docker image is versioned and pushed to an internal Docker registry.
The pipeline is integrated with the Terraform Docker GitHub repository, which has the following structure:
- `requirements` consists of text files that store the versions of the tools: Akamai CLI, Akamai CLI Terraform, AWS CLI, the internal root CA bundle, required OS packages, PyPI packages, Terraform, Terraform providers (including the Akamai provider), and the Terraform linting and code security scanners
- `scripts` contains a shell script to install the list of Terraform providers whose versions are fetched from the Terraform provider text file above (a Python sketch of this kind of installer follows at the end of this pipeline description)

The CI phase of the pipeline, triggered when change makers introduce updates to the code base via a change branch (`feature/*`, `bugfix/*`, `hotfix/*`), performs the following build steps:

- Build the Docker image and tag it as `<docker_registry_hostname>/terraform:<branch_name>-<yyyymmdd>.<hhmmss>`
- Install the Terraform providers into `.terraform.d/plugins/registry.terraform.io` located in the login user's home directory

In the CD phase, a pull request is created for one of three scenarios, and the merge of this pull request performs a further set of build steps.
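To make the provider installer in `scripts` more concrete, here is a minimal Python sketch of the same idea. It is an illustration only: the repository's real script is a shell script, and the providers file name, its `namespace/name=version` line format, and the target OS/architecture are assumptions. The sketch resolves each provider's download URL via the public Terraform Registry API and unpacks the binary into the filesystem-mirror layout under `~/.terraform.d/plugins/registry.terraform.io/<namespace>/<name>/<version>/<os>_<arch>`.

```python
import io
import json
import os
import urllib.request
import zipfile

# Assumed file listing "namespace/name=version" pairs, e.g. "akamai/akamai=5.0.0".
PROVIDERS_FILE = "requirements/terraform-providers.txt"
PLUGIN_DIR = os.path.expanduser("~/.terraform.d/plugins/registry.terraform.io")
OS_NAME, ARCH = "linux", "amd64"

def install(namespace: str, name: str, version: str) -> None:
    """Resolve the provider download URL via the Terraform Registry API and
    unpack the provider binary into Terraform's filesystem mirror layout."""
    meta_url = (f"https://registry.terraform.io/v1/providers/"
                f"{namespace}/{name}/{version}/download/{OS_NAME}/{ARCH}")
    with urllib.request.urlopen(meta_url) as resp:
        meta = json.load(resp)

    target = os.path.join(PLUGIN_DIR, namespace, name, version, f"{OS_NAME}_{ARCH}")
    os.makedirs(target, exist_ok=True)
    with urllib.request.urlopen(meta["download_url"]) as resp:
        with zipfile.ZipFile(io.BytesIO(resp.read())) as archive:
            archive.extractall(target)
    for member in os.listdir(target):
        # The provider binary must be executable.
        os.chmod(os.path.join(target, member), 0o755)
    print(f"installed {namespace}/{name} {version} -> {target}")

if __name__ == "__main__":
    with open(PROVIDERS_FILE) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            source, version = line.split("=")
            namespace, name = source.split("/")
            install(namespace, name, version)
```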
This pipeline is integrated with the automation code GitHub repository, which has the following structure:

- `appsec` is used to store Terraform code for Akamai WAF configuration management
- `network-list` is used to store Terraform code for Akamai Network List (IP/Geo Firewall) configuration management
- `property` is used to store Terraform code for Akamai property (CDN) configuration management
- `scripts` is used to store some Python scripts that automate some pipeline tasks

The pipeline consists of the following build steps to eventually package the Terraform code into a tar.gz compressed file and push it to an Artifactory repository:
- Run `terraform init` and `terraform validate`, then perform Terraform linting and code security scanning
- Package the Terraform code with `find`, `tar`, and `gzip`, and push it to the Artifactory using the `curl` command (a Python sketch of the equivalent packaging-and-push logic follows below). This is executed in the CD phase only.
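The pipeline performs this packaging step with `find`, `tar`, `gzip`, and `curl`; purely for illustration, the same logic sketched in Python might look like the following, where the Artifactory URL, API-key variable, and version scheme are assumptions.

```python
import os
import tarfile
import urllib.request

# Assumed settings for the sketch (not the framework's real values).
ARTIFACTORY_URL = "https://artifactory.example.com/artifactory/akamai-automation"
API_KEY = os.environ["ARTIFACTORY_API_KEY"]
VERSION = os.environ.get("BUILD_VERSION", "0.0.0-dev")
CODE_DIRS = ["appsec", "network-list", "property", "scripts"]

def package(archive_name: str) -> str:
    """Bundle the automation code directories into a tar.gz archive."""
    with tarfile.open(archive_name, "w:gz") as tar:
        for directory in CODE_DIRS:
            tar.add(directory)
    return archive_name

def push(archive_name: str) -> None:
    """Upload the archive to Artifactory with an HTTP PUT, as `curl -T` would."""
    with open(archive_name, "rb") as fh:
        request = urllib.request.Request(
            f"{ARTIFACTORY_URL}/{archive_name}",
            data=fh.read(),
            method="PUT",
            headers={"X-JFrog-Art-Api": API_KEY},
        )
    with urllib.request.urlopen(request) as resp:
        print(f"uploaded {archive_name}: HTTP {resp.status}")

if __name__ == "__main__":
    push(package(f"akamai-automation-{VERSION}.tar.gz"))
```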
This pipeline is used to manage AWS services as code through a list of CloudFormation YAML templates stored in the AWS IaC GitHub repository. These templates define the following resources in a CloudFormation nested stack across the `development` and `production` AWS accounts:
- `PipelineRole` has only enough permissions to do its pipeline work, including passing `CloudFormationRole` to CloudFormation via the action `iam:PassRole` (illustrated in the sketch below)
- `CloudFormationRole` has all the required permissions to manage S3 buckets, DynamoDB tables, VPC interface endpoints, and security groups
- AWS Direct Connect

This pipeline makes use of other AWS infrastructure services, such as AWS VPC, routing, KMS, and SSM, which are managed centrally by different platform pipelines.
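As a hedged illustration of how the two roles interact (not taken from the article's templates), the sketch below assumes a job that already holds `PipelineRole` credentials and asks CloudFormation to deploy a stack using `CloudFormationRole` as the stack's service role; that hand-off is precisely what the `iam:PassRole` permission allows. The account ID, template path, and stack name are placeholders.

```python
import boto3

# Placeholder identifiers for the sketch.
ACCOUNT_ID = "123456789012"
CLOUDFORMATION_ROLE_ARN = f"arn:aws:iam::{ACCOUNT_ID}:role/CloudFormationRole"

# The default session is expected to carry PipelineRole credentials, e.g. the
# AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY / AWS_SESSION_TOKEN exported by the
# assume-role step shown later in this article.
cloudformation = boto3.client("cloudformation")

with open("templates/root-stack.yaml") as template:   # hypothetical template path
    template_body = template.read()

# PipelineRole needs iam:PassRole on CloudFormationRole for this call to succeed.
# CloudFormation then uses CloudFormationRole to create the nested stack's
# resources (S3 buckets, DynamoDB tables, VPC interface endpoints, security groups).
cloudformation.create_stack(
    StackName="akamai-automation-backend",
    TemplateBody=template_body,
    RoleARN=CLOUDFORMATION_ROLE_ARN,
    Capabilities=["CAPABILITY_NAMED_IAM"],
)
```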
Each Akamai configuration, such as `network list` (IP/Geo firewalling) and `property`, has its own onboarding and OOB sync pipeline to onboard the configuration onto the self-service platform and to allow for OOB changes made via Akamai Control Center in case of emergencies.
The network list onboarding and OOB sync pipeline consists of the following build steps:

1. Prepare `terraform.tfvars` of the `id-fetch` module to retrieve the network list configurations/contents
2. Run `terraform apply` in the `id-fetch` module to create a local state
3. Run `terraform output -json > netlist_name_id.json`
4. Load `netlist_name_id.json` into a JSON object through which the network list names and IDs are iterated. They are then put together in Terraform import commands with the format `f"terraform import 'akamai_network_list.network_lists[\"{netlist_name}\"]' {netlist_id}\n"` and written into the file `import.sh` (a Python sketch of this step follows after the step list)
5. Run the `import.sh` command to create the local state
6. Run `terraform show -json > terraform.tfstate.json` and `jq . terraform.tfstate.json` to show the network list contents
7. Retrieve the AWS service account credentials `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` from the vault
8. Assume `PipelineRole` and update the env variables `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_SESSION_TOKEN`:

   ```
   aws sts assume-role \
     --role-arn arn:aws:iam::<aws account id>:role/PipelineRole \
     --role-session-name spmkeeper@gmail.com > authz.json
   export AWS_ACCESS_KEY_ID=$(cat authz.json | jq -r ".Credentials.AccessKeyId")
   export AWS_SECRET_ACCESS_KEY=$(cat authz.json | jq -r ".Credentials.SecretAccessKey")
   export AWS_SESSION_TOKEN=$(cat authz.json | jq -r ".Credentials.SessionToken")
   ```

9. Generate `import.sh` and `staterm.sh`. The `staterm.sh` consists of the `terraform state rm` commands, one per line, with this format: `f"terraform state rm 'akamai_network_list.network_lists[\"{netlist_name}\"]'\n"`
10. Run `terraform init` with the AWS S3 backend configurations. Note: to make this command work, the `S3BackendRole` needs to be configured to allow `PipelineRole` to assume it.

    ```
    terraform init -backend-config="bucket=<S3 bucket>" \
      -backend-config="key=<terraform state file as S3 object>" \
      -backend-config="encrypt=true" \
      -backend-config="region=<AWS region>" \
      -backend-config="dynamodb_table=<AWS DynamoDB table>" \
      -backend-config="kms_key_id=<AWS KMS key id>" \
      -backend-config="role_arn=<arn:aws:iam::<aws account id>:role/S3BackendRole>" \
    ```

11. Run `import.sh`. If the task is OOB syncing, run `staterm.sh` then `import.sh`. These commands will create/update the state on S3.
12. Update `terraform.tfvars` with what was retrieved in step 4 via a feature branch. Merge the feature branch into the main branch, which triggers the CD pipeline to complete the final config sync.
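The import-script generation in step 4 (and the `staterm.sh` format in step 9) is quoted with Python f-strings, so a small helper script presumably does this work. Here is a minimal sketch of that idea; the file names follow the article, but the shape of the `terraform output -json` result, in particular an output called `network_lists` that maps names to IDs, is an assumption.

```python
import json

# Produced earlier by `terraform output -json > netlist_name_id.json`.
# Assumption: the id-fetch module exposes an output named "network_lists"
# whose value maps each network list name to its ID.
with open("netlist_name_id.json") as fh:
    outputs = json.load(fh)

name_to_id = outputs["network_lists"]["value"]

with open("import.sh", "w") as import_sh, open("staterm.sh", "w") as staterm_sh:
    for netlist_name, netlist_id in name_to_id.items():
        # One `terraform import` command per network list (resource address
        # format quoted in step 4).
        import_sh.write(
            f"terraform import 'akamai_network_list.network_lists[\"{netlist_name}\"]' {netlist_id}\n"
        )
        # staterm.sh is only run in the OOB sync case: it removes the resources
        # from the state before import.sh re-imports them (format from step 9).
        staterm_sh.write(
            f"terraform state rm 'akamai_network_list.network_lists[\"{netlist_name}\"]'\n"
        )
```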
The Akamai property onboarding and OOB sync pipeline consists of the following build steps:

1. Export the property configuration into `property-snippets` and auto-generate Terraform code (`property.tf` and `variables.tf`) and `import.sh`: `akamai-terraform --edgerc $HOME/.edgerc --section default --version <production network active version> <akamai property name>`
2. Run `import.sh` to create the local state
3. Run `terraform show -json > ./terraform.tfstate.json`
4. Use the `jq -r` command to extract `property name`, `edge hostnames`, `hostnames`, `product id`, `rule format`, and `staging network active version`
5. Retrieve the AWS service account credentials `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` from the vault
6. Assume `PipelineRole` and update the env variables `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_SESSION_TOKEN`:

   ```
   aws sts assume-role \
     --role-arn arn:aws:iam::<aws account id>:role/PipelineRole \
     --role-session-name spmkeeper@gmail.com > authz.json
   export AWS_ACCESS_KEY_ID=$(cat authz.json | jq -r ".Credentials.AccessKeyId")
   export AWS_SECRET_ACCESS_KEY=$(cat authz.json | jq -r ".Credentials.SecretAccessKey")
   export AWS_SESSION_TOKEN=$(cat authz.json | jq -r ".Credentials.SessionToken")
   ```

7. Re-export the property into `property-snippets` and auto-generate Terraform code (`property.tf` and `variables.tf`) and `import.sh`: `akamai-terraform --edgerc $HOME/.edgerc --section default --version <production network active version> <akamai property name>`
8. Keep `variables.tf`, and rename the auto-generated `import.sh` and `property.tf` to `import.sh.origin` and `property.tf.origin`
9. Use `import.sh.origin` to build a new `import.sh` that has the command `terraform import akamai_property.property <property id,contract id,group id,version no.>`
10. Run `terraform init` with the AWS S3 backend configurations. Note: to make this command work, the `S3BackendRole` needs to be configured to allow `PipelineRole` to assume it.

    ```
    terraform init -backend-config="bucket=<S3 bucket>" \
      -backend-config="key=<terraform state file as S3 object>" \
      -backend-config="encrypt=true" \
      -backend-config="region=<AWS region>" \
      -backend-config="dynamodb_table=<AWS DynamoDB table>" \
      -backend-config="kms_key_id=<AWS KMS key id>" \
      -backend-config="role_arn=<arn:aws:iam::<aws account id>:role/S3BackendRole>" \
    ```

11. Generate the `staterm.sh` that will be used for the OOB sync case: run `terraform state list > property_state_list.txt`, check each listed resource against `akamai_edge_hostname.edge_hostnames` or `akamai_property.property` using regex, and write `staterm.sh` with lines in the format `terraform state rm 'akamai_edge_hostname.edge_hostnames<>'` or `terraform state rm 'akamai_property.property<>'` to remove those resources (a Python sketch of this step follows after the step list)
12. If the task is OOB syncing, run `staterm.sh` and `import.sh`. Otherwise, only run `import.sh`
13. Create a new branch, copy the `property-snippets` directory into this branch, extract the active version from the state and replace the version in `terraform.tfvars` with it, and finally perform `git add`, `git commit`, and `git push` to push the updates to the new branch
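For the `staterm.sh` generation in step 11, a minimal Python sketch of the described approach is shown below; the resource-address prefixes come from the article, while the exact matching rules and file handling are assumptions.

```python
import re

# Resource address prefixes named in step 11.
PATTERN = re.compile(r"^(akamai_edge_hostname\.edge_hostnames|akamai_property\.property)")

# Produced by `terraform state list > property_state_list.txt`.
with open("property_state_list.txt") as fh:
    addresses = [line.strip() for line in fh if line.strip()]

# Write one `terraform state rm` command per matching resource; staterm.sh is
# only executed in the OOB sync case, before import.sh re-imports the resources.
with open("staterm.sh", "w") as staterm_sh:
    for address in addresses:
        if PATTERN.match(address):
            staterm_sh.write(f"terraform state rm '{address}'\n")
```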
After the onboarding, the network list configurations are managed via `terraform.tfvars`. To make changes, a change maker creates a feature branch and modifies the content of `terraform.tfvars`. That triggers a CI pipeline that runs `terraform plan` to show the planned changes. The merge of the pull request from the feature branch to the main branch triggers the CD/implementation pipeline that applies the changes to the Akamai networks.
After the onboarding, the CDN ACL configurations are managed via the JSON files in `property-snippets`. To make changes, a change maker creates a feature branch and modifies the content of the relevant JSON files. That triggers a CI pipeline that runs `terraform plan` to show the planned changes. The merge of the pull request from the feature branch to the main branch triggers the CD/implementation pipeline that applies the changes to the Akamai networks.