Custom Workflows
Custom workflows can be defined to override the default commands that Atlantis runs.
Usage
Custom workflows can be specified in the Server-Side Repo Config or in the Repo-Level atlantis.yaml files.
Notes:
- If you want to allow repos to select their own workflows, they must have the `allowed_overrides: [workflow]` setting (see the example below). See server-side repo config use cases for more details.
- If, in addition, you also want to allow repos to define their own workflows, they must have the `allow_custom_workflows: true` setting. See server-side repo config use cases for more details.
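For example, a server-side repos.yaml that enables both settings might look like the following. This is a minimal sketch; the repo `id` pattern is a placeholder, and the workflows themselves would be defined as shown in the use cases below.

```yaml
# repos.yaml (server-side) -- sketch only; adjust the repo id to your setup
repos:
  - id: /.*/
    # allow repos to pick any server-defined workflow in their atlantis.yaml
    allowed_overrides: [workflow]
    # additionally allow repos to define their own workflows
    allow_custom_workflows: true
```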
Use Cases
.tfvars files
TIP
Before creating custom workflows for .tfvars files, consider using Atlantis's automatic env/{workspace}.tfvars feature. If you structure your files as env/staging.tfvars, env/production.tfvars, etc., Atlantis will automatically include them based on the workspace without any configuration. See Using Atlantis - Automatic Environment Variable Files for details.
Given the structure:
.
└── project1
├── main.tf
├── production.tfvars
    └── staging.tfvars

If you wanted Atlantis to automatically run plan with `-var-file staging.tfvars` and `-var-file production.tfvars`, you could define two workflows:
# repos.yaml or atlantis.yaml
workflows:
staging:
plan:
steps:
- init
- plan:
extra_args: ["-var-file", "staging.tfvars"]
# NOTE: no need to define the apply stage because it will default
# to the normal apply stage.
production:
plan:
steps:
- init
- plan:
extra_args: ["-var-file", "production.tfvars"]
apply:
steps:
- apply:
extra_args: ["-var-file", "production.tfvars"]
import:
steps:
- init
- import:
extra_args: ["-var-file", "production.tfvars"]
state_rm:
steps:
- init
- state_rm:
extra_args: ["-lock=false"]

Then in your repo-level atlantis.yaml file, you would reference the workflows:
# atlantis.yaml
version: 3
projects:
# If two or more projects have the same dir and workspace, they must also have
# a 'name' key to differentiate them.
- name: project1-staging
dir: project1
workflow: staging
- name: project1-production
dir: project1
workflow: production
workflows:
# If you didn't define the workflows in your server-side repos.yaml config,
# you would define them here instead.

When you want to apply the plans, you can comment:
atlantis apply -p project1-staging

and

atlantis apply -p project1-production

where `-p` refers to the project name.
Adding extra arguments to Terraform commands
If you need to append flags to terraform plan or apply temporarily, you can append flags on a comment following --, for example commenting:
atlantis plan -- -lock=false

If you always need to do this for a project's init, plan or apply commands, then you must define a custom workflow and set the `extra_args` key for the command you need to modify.
# atlantis.yaml or repos.yaml
workflows:
myworkflow:
plan:
steps:
- init:
extra_args: ["-lock=false"]
- plan:
extra_args: ["-lock=false"]
apply:
steps:
- apply:
extra_args: ["-lock=false"]

If policy checking is enabled, `extra_args` can also be used to change the default behaviour of conftest.
workflows:
myworkflow:
policy_check:
steps:
- show
- policy_check:
extra_args: ["--all-namespaces"]

Custom init/plan/apply Commands
If you want to customize terraform init, plan or apply in ways that aren't supported by extra_args, you can completely override those commands.
In this example, we're not using any of the built-in commands and are instead using our own.
# atlantis.yaml or repos.yaml
workflows:
myworkflow:
plan:
steps:
# If you want to hide command output from Atlantis's PR comment, use
# the output option on the run step's expanded form.
- run:
command: terraform init -input=false
output: hide
# If you're using workspaces you need to select the workspace using the
# $WORKSPACE environment variable.
- run: terraform workspace select $WORKSPACE
# You MUST output the plan using -out $PLANFILE because Atlantis expects
# plans to be in a specific location.
- run: terraform plan -input=false -refresh -out $PLANFILE
apply:
steps:
# Again, you must use the $PLANFILE environment variable.
- run: terraform apply $PLANFILE

CDKTF
Here are the requirements to enable CDKTF:

- A custom image with CDKTF installed
- Add `**/cdk.tf.json` to the list of Atlantis autoplan files.
- Set the `atlantis-include-git-untracked-files` flag so that the Terraform files dynamically generated by CDKTF will be added to the Atlantis modified file list.
- Use `pre_workflow_hooks` to run `cdktf synth`.
- Optional: There isn't a requirement to use a repo atlantis.yaml, but one can be leveraged if needed (see the sketch below).
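If you do opt for a repo-level atlantis.yaml, a minimal sketch might look like this. The project name and the `ci-cdktf.out/stacks/eks` directory are assumptions matching the repo structure shown further below; adjust them to your own stacks.

```yaml
# atlantis.yaml (optional for CDKTF) -- sketch only
version: 3
projects:
  - name: eks
    # directory containing the cdk.tf.json generated by `cdktf synth`
    dir: ci-cdktf.out/stacks/eks
```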
Custom Image
# Dockerfile
FROM ghcr.io/runatlantis/atlantis:v0.19.7
USER root
RUN apk add npm && npm i -g cdktf-cli

Server Config
# env variables
ATLANTIS_AUTOPLAN_FILE_LIST="**/*.tf,**/*.tfvars,**/*.tfvars.json,**/cdk.tf.json"
ATLANTIS_INCLUDE_GIT_UNTRACKED_FILES=true

OR
atlantis server --config config.yaml
# config.yaml
autoplan-file-list: "**/*.tf,**/*.tfvars,**/*.tfvars.json,**/cdk.tf.json"
include-git-untracked-files: true

Server Repo Config
Use pre_workflow_hooks
atlantis server --repo-config="repos.yaml"
# repos.yaml
repos:
- id: /.*cdktf.*/
pre_workflow_hooks:
- run: npm i && cdktf get && cdktf synth --output ci-cdktf.out

Note: don't use the default cdktf.out directory that CDKTF uses, as this should be in the .gitignore list of the repo, so that locally generated files are not checked in.
Repo Structure
This is the git repo structure after running cdktf synth. The cdk.tf.json files contain the Terraform configuration that atlantis can run.
$ tree --gitignore
.
├── cdktf.json
├── ci-cdktf.out
│ ├── manifest.json
│ └── stacks
│ └── eks
│       └── cdk.tf.json

Workflow
- Container orchestrator (k8s/fargate/ecs/etc) uses the custom Docker image of Atlantis with `cdktf` installed, with `--autoplan-file-list` set to trigger on `cdk.tf.json` files and `--include-git-untracked-files` set to include the CDKTF dynamically generated Terraform files in the Atlantis plan.
- PR branch is pushed up containing `cdktf` code changes.
- Atlantis checks out the branch in the repo.
- Atlantis runs the `npm i && cdktf get && cdktf synth` command in the repo root as a step in `pre_workflow_hooks`, generating the `cdk.tf.json` Terraform files.
- Atlantis detects the `cdk.tf.json` untracked files in a number of directories.
- Atlantis then runs `terraform` workflows in the respective directories as usual.
Terragrunt
Atlantis supports running custom commands in place of the default Atlantis commands. We can use this functionality to enable Terragrunt.
You can either use your repo's atlantis.yaml file or the Atlantis server's repos.yaml file.
Given a directory structure:
.
└── live
├── prod
│ └── terragrunt.hcl
└── staging
    └── terragrunt.hcl

If using the server repos.yaml file, you would use the following config:
# repos.yaml
# Specify TERRAGRUNT_TFPATH environment variable to accommodate setting --default-tf-version
# Generate json plan via terragrunt for policy checks
repos:
- id: "/.*/"
workflow: terragrunt
workflows:
terragrunt:
plan:
steps:
- env:
name: TERRAGRUNT_TFPATH
command: 'echo "terraform${ATLANTIS_TERRAFORM_VERSION}"'
- env:
# Reduce Terraform suggestion output
name: TF_IN_AUTOMATION
value: 'true'
- run:
# Allow for targeted plans/applies as not supported for Terraform wrappers by default
command: terragrunt plan -input=false $(printf '%s' $COMMENT_ARGS | sed 's/,/ /g' | tr -d '\\') -no-color -out $PLANFILE
output: hide
- run: |
terragrunt show $PLANFILE
apply:
steps:
- env:
name: TERRAGRUNT_TFPATH
command: 'echo "terraform${ATLANTIS_TERRAFORM_VERSION}"'
- env:
# Reduce Terraform suggestion output
name: TF_IN_AUTOMATION
value: 'true'
- run: terragrunt apply -input=false $PLANFILE
import:
steps:
- env:
name: TERRAGRUNT_TFPATH
command: 'echo "terraform${DEFAULT_TERRAFORM_VERSION}"'
- env:
name: TF_VAR_author
command: 'git show -s --format="%ae" $HEAD_COMMIT'
# Allow for imports as not supported for Terraform wrappers by default
- run: terragrunt import -input=false $(printf '%s' $COMMENT_ARGS | sed 's/,/ /' | tr -d '\\')
state_rm:
steps:
- env:
name: TERRAGRUNT_TFPATH
command: 'echo "terraform${DEFAULT_TERRAFORM_VERSION}"'
# Allow for state removals as not supported for Terraform wrappers by default
- run: terragrunt state rm $(printf '%s' $COMMENT_ARGS | sed 's/,/ /' | tr -d '\\')

If using the repo's atlantis.yaml file, you would use the following config:
version: 3
projects:
- dir: live/staging
workflow: terragrunt
- dir: live/prod
workflow: terragrunt
workflows:
terragrunt:
plan:
steps:
- env:
name: TERRAGRUNT_TFPATH
command: 'echo "terraform${ATLANTIS_TERRAFORM_VERSION}"'
- env:
# Reduce Terraform suggestion output
name: TF_IN_AUTOMATION
value: 'true'
- run:
command: terragrunt plan -input=false -out=$PLANFILE
output: strip_refreshing
apply:
steps:
- env:
name: TERRAGRUNT_TFPATH
command: 'echo "terraform${ATLANTIS_TERRAFORM_VERSION}"'
- env:
# Reduce Terraform suggestion output
name: TF_IN_AUTOMATION
value: 'true'
- run: terragrunt apply $PLANFILE

NOTE: If using the repo's atlantis.yaml file, you will need to specify each directory that is a Terragrunt project.
WARNING
Atlantis will need to have the terragrunt binary in its PATH. If you're using Docker you can build your own image, see Customization.
If you don't want to create/manage the repo's atlantis.yaml file yourself, you can use the tool terragrunt-atlantis-config to generate it.
The terragrunt-atlantis-config tool is a community project and not maintained by the Atlantis team.
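For example, the tool could be run in a pre_workflow_hook so the file is regenerated on every pull request. This is only a sketch; the exact flags are an assumption, so check the terragrunt-atlantis-config README for the supported invocation.

```yaml
# repos.yaml -- sketch only; verify the exact terragrunt-atlantis-config flags
repos:
  - id: /.*/
    pre_workflow_hooks:
      - run: terragrunt-atlantis-config generate --output atlantis.yaml
```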
Running custom commands
Atlantis supports running completely custom commands. In this example, we want to run a script after every apply:
# repos.yaml or atlantis.yaml
workflows:
myworkflow:
apply:
steps:
- apply
- run: ./my-custom-script.sh

Notes
- We don't need to write a `plan` key under `myworkflow`. If `plan` isn't set, Atlantis will use the default plan workflow, which is what we want in this case.
- A custom command will only terminate if all output file descriptors are closed. Therefore a custom command can only be sent to the background (e.g. for an SSH tunnel during the terraform run) when its output is redirected to a different location. For example, Atlantis will execute a custom script containing the following code to create an SSH tunnel correctly: `ssh -f -M -S /tmp/ssh_tunnel -L 3306:database:3306 -N bastion 1>/dev/null 2>&1`. Without the redirect, the script would block the Atlantis workflow (see the sketch below).
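To make the second note concrete, here is a sketch of a workflow that opens a tunnel before planning. `open-tunnel.sh` is a hypothetical helper script containing the redirected ssh command shown above; the socket path and host are placeholders.

```yaml
# repos.yaml or atlantis.yaml -- sketch only
workflows:
  tunnelworkflow:
    plan:
      steps:
        # hypothetical script wrapping:
        #   ssh -f -M -S /tmp/ssh_tunnel -L 3306:database:3306 -N bastion 1>/dev/null 2>&1
        - run: ./open-tunnel.sh
        - init
        - plan
        # close the control-master connection once the plan has finished
        - run: ssh -S /tmp/ssh_tunnel -O exit bastion
```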
Custom Backend Config
If you need to specify the -backend-config flag to terraform init you'll need to use a custom workflow. In this example, we're using custom backend files to configure two remote states, one for each environment. We're then using .tfvars files to load different variables for each environment.
# repos.yaml or atlantis.yaml
workflows:
staging:
plan:
steps:
- run: rm -rf .terraform
- init:
extra_args: [-backend-config=staging.backend.tfvars]
- plan:
extra_args: [-var-file=staging.tfvars]
production:
plan:
steps:
- run: rm -rf .terraform
- init:
extra_args: [-backend-config=production.backend.tfvars]
- plan:
extra_args: [-var-file=production.tfvars]

NOTE
We have to use a custom run step to rm -rf .terraform because otherwise Terraform will complain in-between commands since the backend config has changed.
You would then reference the workflows in your repo-level atlantis.yaml:
version: 3
projects:
- name: staging
dir: .
workflow: staging
- name: production
dir: .
workflow: production

Add directory and repo context for AWS resources using default tags
This is only available in AWS provider version 5.62.0 and higher.
This configuration will create the following tags:

- `repository` equal to `github.com/<owner>/<repo>`, which can be changed for GitLab or other VCS providers
- `repository_dir` equal to the relative directory

Other default tags can be added, such as one for the workspace. See below for more available environment variables.
workflows:
terraform:
plan:
steps:
# These env vars TF_AWS_DEFAULT_TAGS_ will work for aws provider 5.62.0+
# https://github.com/hashicorp/terraform-provider-aws/releases/tag/v5.62.0
- &env_default_tags_repository
env:
name: TF_AWS_DEFAULT_TAGS_repository
command: 'echo "github.com/${BASE_REPO_OWNER}/${BASE_REPO_NAME}"'
- &env_default_tags_repository_dir
env:
name: TF_AWS_DEFAULT_TAGS_repository_dir
command: 'echo "${REPO_REL_DIR}"'
apply:
steps:
- *env_default_tags_repository
- *env_default_tags_repository_dir

NOTE:
Appending tags to every resource may regenerate data sources such as `aws_iam_policy_document`, which will cause many resources to be modified. See the known issue in aws provider #29421.

To run a local plan outside of Atlantis, the same environment variables will need to be created:
tfvars () {
  export terraform_repository=$(git config --get remote.origin.url | sed 's,^git@,,g' | tr ':' '/' | sed 's,.git$,,g')
  export terraform_repository_dir=$(git rev-parse --show-prefix | sed 's,\/$,,g')
}

tfvars
export TF_AWS_DEFAULT_TAGS_repository=$terraform_repository
export TF_AWS_DEFAULT_TAGS_repository_dir=$terraform_repository_dir
terraform plan

If a colon is used in the tag name, use the `env` command instead of `export`:

tfvars
env \
  TF_AWS_DEFAULT_TAGS_org:repository=$terraform_repository \
  TF_AWS_DEFAULT_TAGS_org:repository_dir=$terraform_repository_dir \
  terraform plan
Reference
Workflow
plan:
apply:
import:
state_rm:

| Key | Type | Default | Required | Description |
|---|---|---|---|---|
| plan | Stage | steps: [init, plan] | no | How to plan for this project. |
| apply | Stage | steps: [apply] | no | How to apply for this project. |
| import | Stage | steps: [init, import] | no | How to import for this project. |
| state_rm | Stage | steps: [init, state_rm] | no | How to run state rm for this project. |
Stage
steps:
- run: custom-command
- init
- plan:
extra_args: [-lock=false]

| Key | Type | Default | Required | Description |
|---|---|---|---|---|
| steps | array[Step] | [] | no | List of steps for this stage. If the steps key is empty, no steps will be run for this stage. |
Step
Built-In Commands
Steps can be a single string for a built-in command.
- init
- plan
- apply
- import
- state_rm

| Key | Type | Default | Required | Description |
|---|---|---|---|---|
| init/plan/apply/import/state_rm | string | none | no | Use a built-in command without additional configuration. Only init, plan, apply, import and state_rm are supported |
Built-In Command With Extra Args
A map from string to extra_args for a built-in command with extra arguments.
- init:
extra_args: [arg1, arg2]
- plan:
extra_args: [arg1, arg2]
- apply:
extra_args: [arg1, arg2]
- import:
extra_args: [arg1, arg2]
- state_rm:
extra_args: [arg1, arg2]

| Key | Type | Default | Required | Description |
|---|---|---|---|---|
| init/plan/apply/import/state_rm | map[extra_args -> array[string]] | none | no | Use a built-in command and append extra_args. Only init, plan, apply, import and state_rm are supported as keys and only extra_args is supported as a value |
Custom run Command
A custom command can be written in 2 ways
Compact:
- run: custom-command arg1 arg2

| Key | Type | Default | Required | Description |
|---|---|---|---|---|
| run | string | none | no | Run a custom command |
Full example:
- run:
command: custom-command arg1 arg2
shell: sh
shellArgs:
- "--debug"
- "-c"
output: show

Full example, filtering output and masking matching text (mySecret: "foo" -> mySecret: "<redacted>"):
- run:
command: custom-command arg1 arg2
shell: sh
shellArgs:
- "--debug"
- "-c"
output:
- strip_refreshing
- filter_regex: "((?i)secret:\\s\")[^\"]*"

| Key | Type | Default | Required | Description |
|---|---|---|---|---|
| run | map[string -> string] | none | no | Run a custom command |
| run.command | string | none | yes | Shell command to run |
| run.shell | string | "sh" | no | Name of the shell to use for command execution |
| run.shellArgs | string or []string | "-c" | no | Command line arguments to be passed to the shell. Cannot be set without shell |
| run.output | string or []string or []any | "show" | no | How to post-process the output of this command when posted in the PR comment. The options are:<br>`show` - preserve the full output<br>`hide` - hide output from comment (still visible in the real-time streaming output)<br>`strip_refreshing` - hide all output up until and including the last line containing "Refreshing...". This matches the behavior of the built-in plan command<br>`filter_regex: "<regex_pattern>"` - masks sensitive text in Atlantis comments by replacing regex matches with `<redacted>`. Can be used multiple times (processed in order). Only filters inline comments - full plan links still show unfiltered results. |
Native Environment Variables
`run` steps in the main `workflow` are executed with the following environment variables; a short example using some of them appears after these notes. Note: these variables are not available to `pre` or `post` workflows.

- `WORKSPACE` - The Terraform workspace used for this project, ex. `default`. NOTE: if the step is executed before `init` then Atlantis won't have switched to this workspace yet.
- `ATLANTIS_TERRAFORM_VERSION` - The version of Terraform used for this project, ex. `0.11.0`.
- `DIR` - Absolute path to the current directory.
- `PLANFILE` - Absolute path to the location where Atlantis expects the plan to either be generated (by plan) or already exist (if running apply). Can be used to override the built-in `plan`/`apply` commands, ex. `run: terraform plan -out $PLANFILE`.
- `SHOWFILE` - Absolute path to the location where Atlantis expects the plan in JSON format to either be generated (by show) or already exist (if running policy checks). Can be used to override the built-in `plan`/`apply` commands, ex. `run: terraform show -json $PLANFILE > $SHOWFILE`.
- `POLICYCHECKFILE` - Absolute path to the location of policy check output if Atlantis runs policy checks. See policy checking for information on the data structure.
- `BASE_REPO_NAME` - Name of the repository that the pull request will be merged into, ex. `atlantis`.
- `BASE_REPO_OWNER` - Owner of the repository that the pull request will be merged into, ex. `runatlantis`.
- `HEAD_REPO_NAME` - Name of the repository that is getting merged into the base repository, ex. `atlantis`.
- `HEAD_REPO_OWNER` - Owner of the repository that is getting merged into the base repository, ex. `acme-corp`.
- `HEAD_BRANCH_NAME` - Name of the head branch of the pull request (the branch that is getting merged into the base).
- `HEAD_COMMIT` - The sha256 that points to the head of the branch that is being pull requested into the base. If the pull request is from Bitbucket Cloud the string will only be 12 characters long because Bitbucket Cloud truncates its commit IDs.
- `BASE_BRANCH_NAME` - Name of the base branch of the pull request (the branch that the pull request is getting merged into).
- `PROJECT_NAME` - Name of the project configured in `atlantis.yaml`. If no project name is configured this will be an empty string.
- `PULL_NUM` - Pull request number or ID, ex. `2`.
- `PULL_URL` - Pull request URL, ex. `https://github.com/runatlantis/atlantis/pull/2`.
- `PULL_AUTHOR` - Username of the pull request author, ex. `acme-user`.
- `REPO_REL_DIR` - The relative path of the project in the repository. For example, if your project is in `dir1/dir2/` then this will be set to `"dir1/dir2"`. If your project is at the root this will be `"."`.
- `USER_NAME` - Username of the VCS user running the command, ex. `acme-user`. During an autoplan, the user will be the Atlantis API user, ex. `atlantis`.
- `COMMENT_ARGS` - Any additional flags passed in the comment on the pull request. Flags are separated by commas and every character is escaped, ex. `atlantis plan -- arg1 arg2` will result in `COMMENT_ARGS=\a\r\g\1,\a\r\g\2`.
- A custom command will only terminate if all output file descriptors are closed. Therefore a custom command can only be sent to the background (e.g. for an SSH tunnel during the terraform run) when its output is redirected to a different location. For example, Atlantis will execute a custom script containing the following code to create an SSH tunnel correctly: `ssh -f -M -S /tmp/ssh_tunnel -L 3306:database:3306 -N bastion 1>/dev/null 2>&1`. Without the redirect, the script would block the Atlantis workflow.
- If a workflow step returns a non-zero exit code, the workflow will stop.
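As an illustration, here is a sketch that uses a few of the variables listed above in run steps; the echoed message is just an example.

```yaml
workflows:
  myworkflow:
    plan:
      steps:
        - init
        # the plan must be written to $PLANFILE so Atlantis can find it
        - run: terraform plan -input=false -out $PLANFILE
        # print some context about what was planned and for which pull request
        - run: echo "Planned $BASE_REPO_OWNER/$BASE_REPO_NAME//$REPO_REL_DIR for PR $PULL_NUM ($PULL_URL)"
```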
Environment Variable env Command
The env command allows you to set environment variables that will be available to all steps defined below the env step.
You can set hard coded values via the value key, or set dynamic values via the command key which allows you to run any command and uses the output as the environment variable value.
- env:
name: ENV_NAME
value: hard-coded-value
- env:
name: ENV_NAME_2
command: 'echo "dynamic-value-$(date)"'
- env:
name: ENV_NAME_3
command: echo ${DIR%$REPO_REL_DIR}
shell: bash
shellArgs:
- "--verbose"
- "-c"

| Key | Type | Default | Required | Description |
|---|---|---|---|---|
| env | map[string -> string] | none | no | Set environment variables for subsequent steps |
| env.name | string | none | yes | Name of the environment variable |
| env.value | string | none | no | Set the value of the environment variable to a hard-coded string. Cannot be set at the same time as command |
| env.command | string | none | no | Set the value of the environment variable to the output of a command. Cannot be set at the same time as value |
| env.shell | string | "sh" | no | Name of the shell to use for command execution. Cannot be set without command |
| env.shellArgs | string or []string | "-c" | no | Command line arguments to be passed to the shell. Cannot be set without shell |
Notes
- `env` commands can use any of the built-in environment variables available to `run` commands.
Multiple Environment Variables multienv Command
The `multienv` command allows you to set a dynamic number of environment variables that will be available to all steps defined below the `multienv` step.
Compact:
- multienv: custom-command

| Key | Type | Default | Required | Description |
|---|---|---|---|---|
| multienv | string | none | no | Run a custom command and add printed environment variables |
Full:
- multienv:
command: custom-command
shell: bash
shellArgs:
- "--verbose"
- "-c"
output: show

| Key | Type | Default | Required | Description |
|---|---|---|---|---|
| multienv | map[string -> string] | none | no | Run a custom command and add printed environment variables |
| multienv.command | string | none | yes | Name of the custom script to run |
| multienv.shell | string | "sh" | no | Name of the shell to use for command execution |
| multienv.shellArgs | string or []string | "-c" | no | Command line arguments to be passed to the shell. Cannot be set without shell |
| multienv.output | string | "show" | no | Setting output to "hide" will suppress the message about added environment variables |
The output of the command execution must have the following format: EnvVar1Name=value1,EnvVar2Name=value2,EnvVar3Name=value3
The name-value pairs in the output are added as environment variables if the command execution is successful; otherwise the workflow execution is interrupted with an error and the error message is returned.
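For illustration, assuming a hypothetical script `print-env.sh` that prints output in that format (for example `TF_VAR_author=jane,DEPLOY_ENV=staging`), a workflow could consume it like this:

```yaml
workflows:
  myworkflow:
    plan:
      steps:
        # hypothetical script printing e.g. "TF_VAR_author=jane,DEPLOY_ENV=staging"
        - multienv: ./print-env.sh
        - init
        # TF_VAR_author and DEPLOY_ENV are now set for this and later steps
        - plan
```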
Notes
- `multienv` commands can use any of the built-in environment variables available to `run` commands.