Ensuring CIS Compliance in AWS

2018-07-13

the problem

How do we validate our cloud security compliance? How do we know that we didn’t just roll out a change to our infrastructure with terraform that changed our security profile?

We’ve got our ‘infrastructure as code’ codified and checked in to gitlab. We’ve got our CI/CD pipeline rolling out our AWS infrastructure as merge requests happen, so it makes sense that we’d also test the security of the changes we just deployed.

prowler

Prowler is, as the docs state:

Prowler is a tool that provides automated auditing and hardening guidance of an AWS account.
It is based on AWS-CLI commands. It follows guidelines present in the CIS Amazon
Web Services Foundations Benchmark at:
https://d0.awsstatic.com/whitepapers/compliance/AWS_CIS_Foundations_Benchmark.pdf

Basically, it’s the AWS CIS Foundations Benchmark implemented as scripts driven by the AWS CLI. When you run it, it looks at your AWS account and checks whether you’ve done things like:

  • enable MFA on the root account
  • set a complex password policy with expiration
  • set up CloudWatch to watch for specific security events
  • etc.

All we have to do is run prowler every time we make a change to our AWS infrastructure (and/or on a schedule) and we’ll have taken a huge step toward automating our audit process. Using Gitlab CI, this is a fairly simple task.
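Before wiring it into CI, it’s worth a dry run from a workstation. A minimal sketch (assuming the toniblyx/prowler repo and AWS credentials already exported in the environment; the region is only an example):

# grab prowler and run just section 1 of the CIS benchmark (the IAM checks)
git clone https://github.com/toniblyx/prowler && cd prowler
./prowler -r us-east-1 -f us-east-1 -c check1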

running prowler in CI

Naturally, we’re already managing our AWS infrastructure ‘as code’ using terraform, and using Gitlab CI to test and apply our terraform changes. If you’re not, please see our previous article on how to Manage AWS with Gitlab and Terraform. The Prowler repo contains a Dockerfile for building a docker image that runs prowler. My .gitlab-ci.yml to add it to my ‘infrastructure’ CI pipeline looks like this:

---
stages:
  - plan
  - apply
  - audit

cache:
  paths:
    - .terraform
  key: "$CI_BUILD_REPO"

plan:
  image:
    name: hashicorp/terraform:0.11.3
    entrypoint: ["/bin/sh", "-c"]
  stage: plan
  script:
    - terraform init -backend=true -get=true -input=false
    - terraform plan -input=false -out=planfile
  when: always
  artifacts:
    paths:
      - planfile

apply:
  image:
    name: hashicorp/terraform:0.11.3
    entrypoint: ["/bin/sh", "-c"]
  allow_failure: true
  stage: apply
  script:
    - terraform init -backend=true -get=true -input=false
    - terraform apply -auto-approve
  dependencies:
    - plan

prowler:cis:1:
  image:
    name: aethereal/prowler:latest
  stage: audit
  script:
    - prowler -r ${AWS_DEFAULT_REGION} -f ${AWS_DEFAULT_REGION} -c check1
  dependencies:
    - apply

prowler:cis:2:
  image:
    name: aethereal/prowler:latest
  stage: audit
  script:
    - prowler -r ${AWS_DEFAULT_REGION} -f ${AWS_DEFAULT_REGION} -c check2
  dependencies:
    - apply

prowler:cis:3:
  image:
    name: aethereal/prowler:latest
  stage: audit
  script:
    - prowler -r ${AWS_DEFAULT_REGION} -f ${AWS_DEFAULT_REGION} -c check3
  dependencies:
    - apply

prowler:cis:4:
  image:
    name: aethereal/prowler:latest
  stage: audit
  script:
    - prowler -r ${AWS_DEFAULT_REGION} -f ${AWS_DEFAULT_REGION} -c check4
  dependencies:
    - apply

The *plan* and *apply* stages in the pipeline plan and apply our infrastructure change; each of those jobs uses the hashicorp/terraform image. The *audit* stage then runs the prowler docker image against the different sections of the CIS audit.

the results

Here’s the resulting CI pipeline:

[screenshot: CI pipeline with failing prowler audit jobs]

As you can see, prowler has run through the first four sections of the CIS hardening guidelines and has identified some areas in need of improvement.

Let’s take a look at *prowler:cis:4*:

[screenshot: prowler:cis:4 job output]

Prowler has detected that we have a few users who need to rotate their access tokens. I’ll have to go have a chat with them. ;)

conclusion

By combining a few off-the-shelf, free and open source products, we’re able to create an automated cloud infrastructure management and audit pipeline.

Terraforming Kubernetes

2017-11-18

Terraform has a really nice Kubernetes interface. Gitlab has a docker registry and integrated CI. These features, combined, have enabled us to merge our kubernetes configs with the other terraform configuration that manages our infrastructure. This allows ops to manage applications and their deployment in the same single repository that describes ‘all the things’… load balancers, DNS, deployed applications, etc.

Here’s how I’m able to describe the connection to Kubernetes and the docker secrets in the Kubernetes cluster:

# initialize our provider
provider "kubernetes" {
  host                   = "api.aethereal.engineering"
  config_context_cluster = "aethereal.engineering"
}

# manage our docker credentials for gitlab in kubernetes
resource "kubernetes_secret" "docker-registry" {
  metadata {
    name = "docker-registry"
  }

  data {
    ".dockercfg" = <<EOF
{
  "registry.gitlab.com": {
    "username": "${var.docker_user}",
    "password": "${var.docker_pw}",
    "email": "${var.docker_email}",
    "auth": "${base64encode(format("%s:%s", var.docker_user, var.docker_pw))}"
  }
}
EOF
  }

  type = "kubernetes.io/dockercfg"
}

# variables
variable "docker_user" {
  default = ""
}

variable "docker_pw" {
  default     = ""
  description = "password or API token"
}

variable "docker_email" {
  default     = ""
  description = "email address associated with the registry credentials"
}

These stanzas initialize the kubernetes API connection and create a docker registry secret which allows kubernetes to pull from the docker registry where our site’s image is hosted. In this case, we’re using the gitlab docker registry, which is integrated with our source code management system (gitlab).
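The registry credentials themselves never need to live in the repo. One way to supply them is Terraform’s standard TF_VAR_ environment variable convention, which maps nicely onto gitlab secret variables; a sketch with placeholder values:

export TF_VAR_docker_user="registry-user"
export TF_VAR_docker_pw="registry-password-or-api-token"
export TF_VAR_docker_email="ops@example.com"
terraform plan    # the kubernetes_secret above picks these up via var.docker_*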

Next, I define my pod and service with references back to the kubernetes cluster provider above.

# manage my loadbalancer
resource "kubernetes_service" "www" {
  metadata {
    name = "www"
  }

  spec {
    selector {
      app = "${kubernetes_pod.www.metadata.0.labels.app}"
    }

    port {
      port        = 80
      target_port = 80
    }

    type = "LoadBalancer"
  }
}

# manage my pod
resource "kubernetes_pod" "www" {
  metadata {
    name = "www"

    labels {
      app = "www"
    }
  }

  spec {
    image_pull_secrets {
      name = "docker-registry"
    }

    container {
      image = "registry.gitlab.com/aethereal/www:1.0.5" # <-- change your app version here.
      name  = "www"
    }
  }
}

# add a 'www' CNAME in the 'aethereal.io' zone pointing to the load balancer that kubernetes created.
resource "aws_route53_record" "www" {
  type    = "CNAME"
  zone_id = "${aws_route53_zone.io.zone_id}"
  name    = "www"
  ttl     = "3600"
  records = ["${kubernetes_service.www.load_balancer_ingress.0.hostname}"]
}

I can run this through terraform and it will connect to my kubernetes cluster and create a pod, plus a load-balanced service for that pod listening on port 80. Kubernetes handles deploying the new container and creating and configuring the load balancer for me. The last terraform stanza creates a ‘www’ CNAME so that my ‘www’ service is addressable via Route 53 DNS.

That’s it. My whole workflow to push a new version of an app is:

  1. tag a new version of ‘www’ in the ‘www’ repository
    • gitlab CI creates a new docker image and stores it in the integrated registry
  2. increment the version of my application in my ‘infrastructure’ repo which hosts my terraform and push to master on ‘infrastructure’
    • gitlab CI runs terraform and updates (or creates) my kubernetes pods and service definitions for me
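In shell terms, that flow looks roughly like this (a sketch; the 1.0.6 tag and the commit message are just examples):

# in the 'www' repo: tag a release; gitlab CI builds and pushes the docker image
git tag 1.0.6
git push origin 1.0.6

# in the 'infrastructure' repo: bump the image tag on the kubernetes_pod resource,
# then commit and push to master; gitlab CI runs terraform for you
git commit -am "www: 1.0.5 -> 1.0.6"
git push origin master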

This workflow is great because:

  • my dev team can simply push and tag a new release of code.
  • my ops team can push the new version of code and manage the infrastructure/rollback/etc in the ‘infrastructure’ repo.
  • neither team requires actual access to AWS or Kubernetes to deploy code as the credentials are stored in secret variables in gitlab.

Hope this helps!

A Kubernetes development environment

2017-11-12

Setting up a kubernetes cluster, while richly rewarding, can be a daunting task. There are lots of moving parts – master nodes, load balancers, etcd, etc. Lucky for us, the kubernetes project has minikube. Minikube lets anyone with the curiosity or the need install an itty-bitty, single-node kubernetes cluster on their laptop, complete with dashboard. This makes it very simple for developers to mock up a ‘full stack’ on their local machines, alleviating the need for a shared ‘dev’ environment, saving money and reducing operational complexity.

If you’d like to play with minikube, we’d recommend the getting started guide.
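If you just want a quick taste first, a minimal session looks roughly like this (assuming minikube and kubectl are already installed):

minikube start        # boot the single-node cluster locally
kubectl get nodes     # confirm the node is up and Ready
minikube dashboard    # open the kubernetes dashboard in your browser
minikube stop         # shut it down when you're done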

Have fun!

Static site deployment with Gitlab Pages

2017-11-12

As you may already know, we heart gitlab. It tracks our source code changes, our bugs and issues. It tests our code with gitlab-ci and it even manages our website. Gitlab pages is a fantastic way to host a static website without ever spinning up a server.

Jekyll is a simple, blog-aware, static site generator. We use it to generate this site. As we push blog posts, content and images to our ‘www’ repository in gitlab, the site is re-generated using the latest content and pushed into gitlab pages. The actual ‘guts’ of that pipeline is a simple YAML file:

image: ruby:2.3

variables:
  JEKYLL_ENV: production

before_script:
  - bundle install

test:
  stage: test
  script:
  - cd site
  - bundle exec jekyll build -d ../test
  artifacts:
    paths:
    - test
  except:
  - master

pages:
  stage: deploy
  script:
  - cd site
  - bundle exec jekyll build -d ../public
  artifacts:
    paths:
    - public
  only:
  - master

This pipeline tests that the site builds any time we push to a branch other than ‘master’ and publishes the site any time we push to the ‘master’ branch. The new, static site contents are then available for all to see at ‘http://aethereal.io’.
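You can also preview the same build locally before pushing (assuming the same Gemfile the CI job uses):

cd site
bundle install
bundle exec jekyll serve    # builds the site and serves it at http://localhost:4000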

Managing AWS with Gitlab CI and Terraform

2017-11-12

Terraform is a declarative language that allows you to express ‘I want this’ to your cloud provider. Here’s an example of how I manage my MX records and the “A” record in AWS for the aethereal.io site and domain:

provider "aws" {
}

# root level route 53 resources
resource "aws_route53_zone" "io" {
  name          = "aethereal.io."
  comment       = "HostedZone created by Route53 Registrar"
  force_destroy = "false"
}

resource "aws_route53_record" "main" {
  type    = "A"
  zone_id = "${aws_route53_zone.io.zone_id}"
  name    = "${aws_route53_zone.io.name}"
  ttl     = "3600"
  records = ["52.167.214.135"]
}

# MX records
resource "aws_route53_record" "io-mx" {
  zone_id = "${aws_route53_zone.io.zone_id}"
  name    = "${aws_route53_zone.io.name}"
  type    = "MX"
  ttl     = "3600"

  records = [
    "1 ASPMX.L.GOOGLE.COM",
    "5 ALT1.ASPMX.L.GOOGLE.COM",
    "5 ALT2.ASPMX.L.GOOGLE.COM",
    "10 ALT3.ASPMX.L.GOOGLE.COM",
    "10 ALT4.ASPMX.L.GOOGLE.COM",
  ]
}

Naturally, we store all of this in gitlab. The next natural progression is to create a CI/CD pipeline to test and run our ‘infrastructure as code’. Here’s a little example to get you started. Drop this yaml file into the base of your gitlab repository, add your AWS credentials as ‘secret variables’ in your gitlab project, and you’re in business.

---
image: jonatanblue/gitlab-ci-terraform:latest

stages:
  - plan
  - apply

cache:
  paths:
    - .terraform
  key: "$CI_BUILD_REPO"

plan:
  stage: plan
  script:
    - terraform init -backend=true -get=true -input=false
    - terraform plan -out planfile
  when: always
  artifacts:
    paths:
      - planfile

apply:
  stage: apply
  script:
    - terraform init -backend=true -get=true -input=false
    - terraform apply
  when: manual
  dependencies:
    - plan

This gitlab pipeline uses a docker image, courtesy of jonatanblue, that already has terraform installed and is well suited for running terraform in gitlab-ci. There are two stages:

  • plan, where we review what terraform proposes to change
  • apply, where we can manually apply the proposed changes

This process works well for us. We can accept merge requests from developers or other team members for changes to our AWS infrastructure, and our ops team can decide whether or not to approve them. Another nice aspect is that it’s the CI/CD pipeline, not individual users, that has access to make changes, so we don’t have to hand out AWS credentials just so people can look around… it’s all in the code.
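For reference, the credentials the pipeline uses are just the standard AWS environment variables that the terraform AWS provider reads; a sketch of the secret variables we set in the gitlab project (values are placeholders):

AWS_ACCESS_KEY_ID=AKIA...              # access key, e.g. for a dedicated CI user
AWS_SECRET_ACCESS_KEY=...              # its secret key
AWS_DEFAULT_REGION=us-east-1           # example region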

Hope this helps!

Deploy a Kubernetes cluster to AWS

2017-11-12

Containers are great. Deploying and maintaining the infrastructure that runs them sucks.

That’s why we use kops to manage our clusters in AWS. The name ‘kops’ is a conflation of ‘kubernetes’ and ‘ops’, and it does exactly that: it stands up your kubernetes infrastructure in less than 10 minutes. Kops also handles upgrading kubernetes and resizing your clusters.

If you’re looking to get your feet wet, just follow the directions here to get started.
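For a rough sense of what that looks like, here’s a sketch of standing up a first cluster (the cluster name, state bucket, and zone are placeholders, and the S3 bucket for kops state has to exist already):

export KOPS_STATE_STORE=s3://my-kops-state-bucket
kops create cluster --name=k8s.example.com --zones=us-east-1a --yes
kops validate cluster        # wait until the masters and nodes report Ready

# resizing later is more of the same (upgrades follow a similar edit/update pattern):
kops edit ig nodes           # bump minSize/maxSize on the instance group
kops update cluster --yes    # apply the change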

What's in the secret sauce?

2017-11-10

There is no secret sauce.
