
How To Use GitOps on AWS In Your Organization: A Complete Guide

GitOps has changed how we handle infrastructure by providing a set of practices to manage it as part of the delivery pipeline, with Git as the source control mechanism. GitOps can be adapted to almost any infrastructure configuration requirement, with Git serving as the single source of truth for declarative infrastructure configurations. Moreover, GitOps is well suited for cloud infrastructure management.

In this article, let's dig into how best to apply these practices to facilitate GitOps on AWS.

How to Get Started with GitOps on AWS

GitOps is well suited for managing Amazon Elastic Kubernetes Service (EKS), since it was initially developed with Kubernetes in mind. However, GitOps can be extended to any type of infrastructure management, whether that is GitOps for ECS, a simpler container management experience, or other resources in AWS.

GitOps and Infrastructure as Code (IaC) go hand in hand, as IaC is the basis for creating declarative infrastructure configurations. It enables users to codify their infrastructure and keep all infrastructure changes and configurations as code in a version-controlled Git repository. Codifying the infrastructure is the starting point of GitOps: it gives you a base to build upon and lets you integrate infrastructure into a delivery pipeline as a core part of the development and delivery process. To get started with AWS GitOps, you will need knowledge of Git, AWS, and how to define declarative infrastructure (Infrastructure as Code).
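As a simple illustration, here is a minimal sketch of declarative infrastructure using the AWS CDK in Python; the stack and bucket names are assumptions for illustration only:

```python
# A minimal sketch of declarative infrastructure with the AWS CDK (Python).
# The stack name and bucket construct ID are illustrative assumptions.
import aws_cdk as cdk
from aws_cdk import aws_s3 as s3


class InfraStack(cdk.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        # The desired state is declared in code, versioned in Git,
        # and reconciled by CloudFormation on every deploy.
        s3.Bucket(self, "ArtifactBucket", versioned=True)


app = cdk.App()
InfraStack(app, "gitops-infra")
app.synth()
```

Because the entire definition lives in Git, every infrastructure change goes through the same commit, review, and rollback workflow as application code.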

Let’s see how we can start with GitOps by creating a GitOps delivery pipeline using AWS tools and services.

The GitOps Delivery Pipeline on AWS

GitOps delivery pipelines closely resemble normal software delivery pipelines, with the notable addition of automated infrastructure provisioning and management. When it comes to AWS GitOps, the steps can vary depending on requirements. In general, however, we can break down a GitOps-based delivery pipeline for AWS EKS or AWS ECS into the following distinct steps:

  1. Source Code (VCS) - Managed by Git, this is where all the declarative infrastructure code is stored and managed. Developers push infrastructure and application changes to a Git repository at the beginning of any pipeline.
  2. Build and Test - Next, the container will be built and tested. 
  3. Publish - Then, the container will be published to a container registry before deploying in an EKS cluster or any container service.
  4. Deployment - This is where infrastructure changes will be applied, and the containers will be deployed to the production EKS cluster or ECS.

The primary tool that ties all these services together is AWS CodePipeline, which combines Continuous Integration and Continuous Delivery tooling and allows users to create multi-stage GitOps delivery pipelines. The primary benefit of using native services is deeper integration with AWS itself, making it easy to configure and secure any AWS-related service. If we plan to use only AWS services for the declarative infrastructure, we can use AWS CloudFormation or the AWS Cloud Development Kit (AWS CDK), which lets you provision AWS resources in your preferred programming language.
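To make this concrete, here is a hedged sketch of such a pipeline using CDK Pipelines in Python; the repository name and branch are assumptions, not prescriptions:

```python
# A sketch of a GitOps-style pipeline with CDK Pipelines (Python).
# "infra-repo" and the "main" branch are illustrative assumptions.
from aws_cdk import Stack, aws_codecommit as codecommit, pipelines
from constructs import Construct


class GitOpsPipelineStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Source stage: the CodeCommit repo holding the declarative infrastructure code.
        repo = codecommit.Repository(self, "InfraRepo", repository_name="infra-repo")

        # Build/synth stage: install dependencies and synthesize the
        # CloudFormation templates that the deploy stages will apply.
        pipelines.CodePipeline(
            self,
            "GitOpsPipeline",
            synth=pipelines.ShellStep(
                "Synth",
                input=pipelines.CodePipelineSource.code_commit(repo, "main"),
                commands=["pip install -r requirements.txt", "npx cdk synth"],
            ),
        )
```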

In our pipeline, the code for both of these can be stored in a CodeCommit repo and directly included as part of the GitOps pipeline. The declarative infrastructure enabled by these tools eliminates the need for manual intervention in infrastructure configuration changes, leading to fewer misconfiguration incidents.

Also, it is important to keep the GitOps pipeline as simple as possible without sacrificing functionality, so that users can easily modify and extend it as requirements change.

A simple pipeline also makes it easier to extend the GitOps pipeline's functionality by integrating services like CloudWatch for monitoring, AWS Config for Policy as Code, AWS X-Ray for distributed tracing, etc.
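For instance, here is a hedged boto3 sketch of routing pipeline failure events to an SNS topic with EventBridge; the rule name and topic ARN are assumptions for illustration:

```python
# A sketch of routing pipeline failures to an SNS topic via EventBridge.
# The rule name and topic ARN are illustrative assumptions.
import json

import boto3

events = boto3.client("events")

# Match CodePipeline executions that end in a FAILED state.
events.put_rule(
    Name="gitops-pipeline-failures",
    EventPattern=json.dumps({
        "source": ["aws.codepipeline"],
        "detail-type": ["CodePipeline Pipeline Execution State Change"],
        "detail": {"state": ["FAILED"]},
    }),
)

# Send matching events to a (hypothetical) SNS topic for alerting.
events.put_targets(
    Rule="gitops-pipeline-failures",
    Targets=[{"Id": "notify", "Arn": "arn:aws:sns:us-east-1:123456789012:gitops-alerts"}],
)
```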

AWS GitOps Delivery Workflow

Now that we understand the basics of a GitOps delivery pipeline based on AWS services, let us look at a simple workflow for rolling out infrastructure changes through a GitOps pipeline. This example uses CodePipeline as the CI/CD tool and CloudFormation stacks for declarative infrastructure.

  1. A developer creates or updates a CloudFormation template.
  2. The changes are committed and pushed to the AWS CodeCommit repository. Users can implement a verification process, such as peer review of pull requests, before changes are merged to the main branch and the pipeline is triggered.
  3. AWS CodePipeline monitors the repository; when a new commit is detected, the pipeline is triggered.
  4. The new changes are pulled, and AWS CodeBuild builds and validates the new version of the CloudFormation template.
  5. The pipeline then applies the changes in the new template to the relevant resources.
  6. Finally, the pipeline verifies that the changes were applied correctly and, in case of failure, rolls back to the previous configuration; a sketch of this apply-and-rollback step follows below.
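Under the hood, steps 4-6 map naturally onto CloudFormation change sets. Here is a hedged boto3 sketch, assuming a stack named gitops-demo-stack and a template file template.yaml:

```python
# A hedged sketch of applying infrastructure changes via a CloudFormation
# change set; stack, change set, and file names are illustrative assumptions.
import boto3

cfn = boto3.client("cloudformation")

with open("template.yaml") as f:
    template_body = f.read()

# Create a change set so pending changes can be reviewed before applying.
cfn.create_change_set(
    StackName="gitops-demo-stack",
    TemplateBody=template_body,
    ChangeSetName="pipeline-change-set",
)
cfn.get_waiter("change_set_create_complete").wait(
    StackName="gitops-demo-stack", ChangeSetName="pipeline-change-set"
)

# Execute the change set; by default, CloudFormation rolls the stack back
# to the previous configuration if the update fails.
cfn.execute_change_set(
    StackName="gitops-demo-stack", ChangeSetName="pipeline-change-set"
)
```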

Which Tools to Use for Each Step of the GitOps Pipeline

One major advantage of facilitating GitOps on AWS is that you are not limited to native tools. AWS-provided tools offer a more streamlined experience with deeper integration. However, because AWS is the leading cloud provider, many third-party tools and services provide near-native integration with AWS and can sometimes offer functionality that is not available in the native tools.

Tool selection is entirely dependent on user preferences and project requirements. For the previously mentioned stages of the pipeline, the following are some of the available tools and services from AWS and third parties to facilitate GitOps on AWS.

Source Code Management

The version control system is the core component that powers the pipeline. Any Git-based repository service can be used for source code management.

  • GitHub
  • AWS CodeCommit
  • GitLab
  • Bitbucket

Build and Test 

These tools are used to power the pipeline, especially the continuous integration part where the application or the container will be built and tested. They can be further integrated with test automation frameworks such as Selenium for testing.

  • AWS CodeBuild
  • GitHub Actions
  • GitLab Pipelines
  • Bitbucket Pipelines
  • Jenkins/JenkinsX
  • CircleCI

Publish

Users can use a container registry or an artifact/package storage service to publish packages or containers. Container registries are vital for easy deployment of containers, especially to managed services like Amazon EKS.

  • Amazon Elastic Container Registry (ECR)
  • Docker Hub
  • JFrog Container Registry
  • GitHub Package Registry
  • JFrog Artifactory
  • VMware Harbor
  • Sonatype Nexus

Deployment

Continuous delivery must be deeply integrated with Infrastructure as Code tools. Therefore, users need to consider both IaC and CD tools at the deployment stage.

IaC Tools

  • AWS CloudFormation
  • Terraform
  • Ansible
  • Puppet/Chef

Continuous Delivery Tools

  • AWS CodePipeline
  • ArgoCD
  • Spinnaker
  • GitLab
  • Bitbucket Pipelines

If you don’t want to create and maintain a whole DevOps toolchain and integrate multiple different tools to work together, sign up and check out Microtica. You connect your Git account, create pipelines to build and test the infrastructure code, and deploy infrastructure on your own AWS account. Additionally, you can deploy your apps on Kubernetes and deliver them together with your infrastructure. All in one platform.

Best Practices when Adopting GitOps on AWS

Now that we have an idea of how to implement GitOps on AWS, let's look at some best practices for adopting it.

  • Standardize infrastructure configurations

As with any code, infrastructure declarations must adhere to a standardized way of development. Everything from strict naming conventions to how secrets are managed must follow proper guidelines. Implementing Tag Policies, utilizing services like AWS Config for policy enforcement, and using AWS Organizations to manage policies across AWS accounts are some of the ways to standardize AWS resources.
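As one hedged example, the CDK can apply a standard tag set to every resource in an app; the tag keys and values below are illustrative and should match your own Tag Policies:

```python
# A minimal sketch, assuming the AWS CDK (Python): apply a standard tag set
# to every resource in the app. Tag keys and values are illustrative.
import aws_cdk as cdk

app = cdk.App()

# Tags added at the app level cascade to all stacks and resources beneath it.
cdk.Tags.of(app).add("team", "platform")
cdk.Tags.of(app).add("environment", "production")
cdk.Tags.of(app).add("managed-by", "gitops")

app.synth()
```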

  • Separate application code from infrastructure code

Keeping application and infrastructure code in independent Git repositories allows users to implement proper access control and eliminates the chance of code contamination. This separation ensures that the configurations that power GitOps on AWS stay intact, especially when dealing with infrastructure configurations like manifests.

  • Introduce multiple repositories depending on the requirement

There is no strict requirement to have only a single infrastructure repository. There can be multiple repositories per project, team, etc., depending on organizational requirements. Whether it's a repo for ECS GitOps or one holding EKS configurations, with separate repos the responsible teams can manage their development independently and either bring everything together for a release or release separately.

  • Verify and Test Infrastructure Configurations

Verification and testing are not only for applications; infrastructure changes also need to be verified and tested. So ensure each pull request is verified by other team members through an infrastructure code review, test the infrastructure configurations in staging environments, and use the built-in validation features of IaC tools before applying the configurations. Validating CloudFormation templates or manual reviews can be integrated as part of the overall pipeline.
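A minimal sketch of such a validation step with boto3, assuming a template file named template.yaml:

```python
# A sketch of template validation with boto3; "template.yaml" is an
# illustrative file name.
import boto3

cfn = boto3.client("cloudformation")

with open("template.yaml") as f:
    # validate_template raises a client error for malformed templates,
    # so a CI step like this can fail the pipeline early.
    result = cfn.validate_template(TemplateBody=f.read())

print(result.get("Parameters", []))  # the template's declared parameters, useful for review
```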

  • Integrate Monitoring into Infrastructure Changes

Monitoring can help to detect configuration drift as well as eliminate any shadow IT resources created in AWS. AWS Config, CloudTrail, and CloudWatch are excellent tools for monitoring infrastructure changes.
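As an illustrative sketch, CloudFormation's own drift detection can be triggered from a scheduled job with boto3 (the stack name is an assumption):

```python
# A sketch of drift detection with boto3; "gitops-demo-stack" is an
# illustrative stack name.
import time

import boto3

cfn = boto3.client("cloudformation")

# Kick off drift detection for the stack; it runs asynchronously.
detection_id = cfn.detect_stack_drift(StackName="gitops-demo-stack")["StackDriftDetectionId"]

# Poll until the detection run completes.
while True:
    status = cfn.describe_stack_drift_detection_status(StackDriftDetectionId=detection_id)
    if status["DetectionStatus"] != "DETECTION_IN_PROGRESS":
        break
    time.sleep(5)

print(status.get("StackDriftStatus"))  # e.g. IN_SYNC or DRIFTED
```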

Conclusion

Adopting GitOps on AWS is the ideal way to manage infrastructure at scale in a cloud-based development process. By implementing a CI/CD pipeline for GitOps, users can bring the same automated, agile development practices to infrastructure configuration. This ultimately leads to faster deployments and fewer bottlenecks caused by slow infrastructure changes.