[Docker] Manage kube-bench execution results with AWS Security Hub

This article is for day 14 of **AWS Advent Calendar 2020**.

Introduction

On December 4, 2020, it was announced that Aqua Security's kube-bench has been added as a third-party partner product that integrates with AWS Security Hub.

:rocket: AWS Security Hub adds open source tool integrations with Kube-bench and Cloud Custodian https://aws.amazon.com/jp/about-aws/whats-new/2020/12/aws-security-hub-adds-open-source-tool-integration-with-kube-bench-and-cloud-custodian/

With this integration, you can now centrally manage in AWS Security Hub the results of the CIS Kubernetes Benchmark and CIS Amazon EKS Benchmark checks that kube-bench runs.

In this article, I run kube-bench on an EKS cluster and import the results into Security Hub.

What is CIS Benchmark?

The CIS Benchmarks are published by the Center for Internet Security (CIS), a non-profit organization in the United States. They are hardening guidelines for various OSs, servers, cloud environments, and so on. CIS has issued more than 140 CIS Benchmarks, and compliance frameworks such as PCI DSS refer to them as industry-recognized system-hardening standards.

The CIS Benchmarks can be downloaded in PDF format from the CIS site: https://www.cisecurity.org/cis-benchmarks/

What is kube-bench?

kube-bench is a Go application that checks whether the target environment complies with the recommendations described in the CIS Kubernetes Benchmark. It is developed by Aqua Security and published as OSS.

In addition to the CIS Kubernetes Benchmark, it also supports checks against the CIS Amazon Elastic Kubernetes Service (EKS) Benchmark and the CIS Google Kubernetes Engine (GKE) Benchmark.

What is AWS Security Hub?

AWS Security Hub (https://aws.amazon.com/jp/security-hub/) is a service that aggregates security data from across your AWS environment and manages it centrally. Beyond AWS services such as Amazon GuardDuty, Inspector, and Macie, it integrates with many third-party security products: a supported product can send its findings to Security Hub and, conversely, receive Security Hub data.

Internally, findings are managed in a JSON-based format called the AWS Security Finding Format (ASFF), so you can also import your own data as long as it conforms to this format.
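To illustrate ASFF, here is a minimal finding built by hand. All identifiers, the account number, the cluster name, and the title are hypothetical sample values; only the ProductArn matches the kube-bench product ARN used later in this article. Actually importing it would require the securityhub:BatchImportFindings permission.

```python
import json
from datetime import datetime, timezone

now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

# Minimal set of required ASFF top-level fields (values are hypothetical).
finding = {
    "SchemaVersion": "2018-10-08",
    "Id": "example-finding-001",
    "ProductArn": "arn:aws:securityhub:ap-northeast-1::product/aqua-security/kube-bench",
    "GeneratorId": "kube-bench",
    "AwsAccountId": "123456789012",
    "Types": ["Software and Configuration Checks"],
    "CreatedAt": now,
    "UpdatedAt": now,
    "Severity": {"Label": "MEDIUM"},
    "Title": "3.2.9 Ensure that the --event-qps argument is set appropriately",
    "Description": "kube-bench check result (WARN)",
    "Resources": [
        {"Type": "Other",
         "Id": "arn:aws:eks:ap-northeast-1:123456789012:cluster/my-cluster"},
    ],
}

print(json.dumps(finding, indent=2))
# Import sketch (needs credentials and the IAM policy shown later):
#   boto3.client("securityhub").batch_import_findings(Findings=[finding])
```

kube-bench builds findings like this for you when run with the --asff flag; the sketch just shows the shape of the data that ends up in Security Hub.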

Prerequisites

This walkthrough runs the CIS Amazon EKS Benchmark v1.0 check in an EKS cluster. Steps such as building the EKS cluster and enabling Security Hub are omitted.

Enable integration

Search for kube-bench on the Integrations page of the Security Hub console and click **Accept findings**. Information about the IAM policy needed to send findings to Security Hub is displayed. Select **Accept findings** again on the confirmation screen, and the integration status becomes enabled.

IAM role settings

To run kube-bench in an EKS cluster, the pod it runs in must have permission to send check results to Security Hub. There are two ways to grant a pod access to AWS resources: IAM Roles for Service Accounts (IRSA) or the node group's IAM role.

IRSA lets you associate an IAM role with a Kubernetes service account, so you can grant the Security Hub permission only to the pod that kube-bench launches. If you instead attach the permission to the node group's IAM role, note that every pod started on that group's nodes is granted access to Security Hub.

This time we will use IRSA. If you haven't created an IAM OIDC provider for the cluster yet, create one first.

$ eksctl utils associate-iam-oidc-provider --cluster {CLUSTER_NAME} --approve --region ap-northeast-1

Then create an IAM role with the following policy attached. Adjust the region as appropriate.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "securityhub:BatchImportFindings",
            "Resource": [
                "arn:aws:securityhub:ap-northeast-1::product/aqua-security/kube-bench"
            ]
        }
    ]
}

Add the following trust policy to the IAM role. Replace <ACCOUNT_ID> and <OIDC_PROVIDER_ID> with your own values, and adjust the region. The policy assumes kube-bench runs in the kube-bench namespace.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/oidc.eks.ap-northeast-1.amazonaws.com/id/<OIDC_PROVIDER_ID>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringLike": {
          "oidc.eks.ap-northeast-1.amazonaws.com/id/<OIDC_PROVIDER_ID>:sub": "system:serviceaccount:kube-bench:*",
          "oidc.eks.ap-northeast-1.amazonaws.com/id/<OIDC_PROVIDER_ID>:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}

Creating a container image

Build the kube-bench container image and push it to ECR. First, create an ECR repository to store the image.

$ aws ecr create-repository --repository-name k8s/kube-bench --image-tag-mutability MUTABLE
{
    "repository": {
        "repositoryArn": "arn:aws:ecr:ap-northeast-1:123456789012:repository/k8s/kube-bench",
        "registryId": "123456789012",
        "repositoryName": "k8s/kube-bench",
        "repositoryUri": "123456789012.dkr.ecr.ap-northeast-1.amazonaws.com/k8s/kube-bench",
        "createdAt": 1607747704.0,
        "imageTagMutability": "MUTABLE",
        "imageScanningConfiguration": {
            "scanOnPush": false
        },
        "encryptionConfiguration": {
            "encryptionType": "AES256"
        }
    }
}

Clone the kube-bench source code from GitHub.

$ git clone https://github.com/aquasecurity/kube-bench.git
Cloning into 'kube-bench'...
remote: Enumerating objects: 4115, done.
remote: Total 4115 (delta 0), reused 0 (delta 0), pack-reused 4115
Receiving objects: 100% (4115/4115), 7.69 MiB | 5.28 MiB/s, done.
Resolving deltas: 100% (2644/2644), done.

$ cd kube-bench

To integrate the execution results with Security Hub, edit cfg/eks-1.0/config.yaml before building the image: set the AWS account, the region, and the ARN of the cluster to check.

cfg/eks-1.0/config.yaml


---
AWS_ACCOUNT: "123456789012"
AWS_REGION: "ap-northeast-1"
CLUSTER_ARN: "arn:aws:eks:ap-northeast-1:123456789012:cluster/{YOUR_CLUSTER_NAME}"

Build the container image and push it to ECR.

$ aws ecr get-login-password | docker login --username AWS --password-stdin https://123456789012.dkr.ecr.ap-northeast-1.amazonaws.com
Login Succeeded

$ docker build -t k8s/kube-bench .
...
Successfully built 6ad073f96455
Successfully tagged k8s/kube-bench:latest

$ docker tag k8s/kube-bench:latest 123456789012.dkr.ecr.ap-northeast-1.amazonaws.com/k8s/kube-bench:latest
$ docker push 123456789012.dkr.ecr.ap-northeast-1.amazonaws.com/k8s/kube-bench:latest
The push refers to repository [123456789012.dkr.ecr.ap-northeast-1.amazonaws.com/k8s/kube-bench]
bd6c279efeaa: Pushed 
3032a8c3bf7a: Pushed 
b0efa7564210: Pushed 
fe4cf80d4f2c: Pushed 
31c3d3db74eb: Pushed 
d2e36eff2b5d: Pushed 
f4666769fca7: Pushed 
latest: digest: sha256:da95de0edccad7adb6fd6c80137a65b5458142efe14efa94ac44cc5c6ce6b2ef size: 1782

Run kube-bench

Create a service account

Create a service account associated with the IAM role using the following manifest file.

sa.yaml


apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-bench-sa
  namespace: kube-bench
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/<IAM_ROLE_NAME>

Create the kube-bench namespace and apply the manifest.

$ kubectl create ns kube-bench
namespace/kube-bench created

$ kubectl apply -f sa.yaml
serviceaccount/kube-bench-sa created

$ kubectl get sa -n kube-bench
NAME            SECRETS   AGE
default         1         6m32s
kube-bench-sa   1         11s

Job execution

Open job-eks.yaml from the cloned repository and edit image and command. Adding the --asff flag to command sends the results to Security Hub. Also set the service account name created earlier. The edited file looks like this:

job-eks.yaml


---
apiVersion: batch/v1
kind: Job
metadata:
  name: kube-bench
spec:
  template:
    spec:
      hostPID: true
      serviceAccountName: kube-bench-sa
      containers:
        - name: kube-bench
          image: 123456789012.dkr.ecr.region.amazonaws.com/k8s/kube-bench:latest
          command: ["kube-bench", "node", "--benchmark", "eks-1.0", "--asff"]
          volumeMounts:
            - name: var-lib-kubelet
              mountPath: /var/lib/kubelet
              readOnly: true
            - name: etc-systemd
              mountPath: /etc/systemd
              readOnly: true
            - name: etc-kubernetes
              mountPath: /etc/kubernetes
              readOnly: true
      restartPolicy: Never
      volumes:
        - name: var-lib-kubelet
          hostPath:
            path: "/var/lib/kubelet"
        - name: etc-systemd
          hostPath:
            path: "/etc/systemd"
        - name: etc-kubernetes
          hostPath:
            path: "/etc/kubernetes"

Run kube-bench as a Kubernetes Job and make sure it completes successfully.

$ kubectl apply -f job-eks.yaml -n kube-bench
job.batch/kube-bench created

$ kubectl get all -n kube-bench                                                                                                                                                    
NAME                   READY   STATUS      RESTARTS   AGE
pod/kube-bench-q4sbd   0/1     Completed   0          55s

NAME                   COMPLETIONS   DURATION   AGE
job.batch/kube-bench   1/1           5s         55s

If the following message appears in the pod log and the job ends in an error, check that the contents of cfg/eks-1.0/config.yaml are correct.

failed to output to ASFF: finding publish failed: MissingEndpoint: 'Endpoint' configuration is required for this service
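One way to catch such mismatches before rebuilding the image is to sanity-check the three keys. The stdlib-only sketch below (the config text is an inline sample with hypothetical values) verifies that CLUSTER_ARN embeds the same account and region as AWS_ACCOUNT and AWS_REGION:

```python
import re

# Sample contents of cfg/eks-1.0/config.yaml (hypothetical values).
config_text = """\
AWS_ACCOUNT: "123456789012"
AWS_REGION: "ap-northeast-1"
CLUSTER_ARN: "arn:aws:eks:ap-northeast-1:123456789012:cluster/my-cluster"
"""

# Pull out the quoted value of each top-level key.
values = dict(re.findall(r'^(\w+):\s*"([^"]+)"', config_text, re.M))

# The EKS cluster ARN embeds region and account; they must match the other keys.
arn = re.match(r"arn:aws:eks:([a-z0-9-]+):(\d+):cluster/.+", values["CLUSTER_ARN"])
assert arn, "CLUSTER_ARN is not a valid EKS cluster ARN"
assert arn.group(1) == values["AWS_REGION"], "region mismatch"
assert arn.group(2) == values["AWS_ACCOUNT"], "account mismatch"
print("config.yaml looks consistent")
```

Replace config_text with the real file contents (or read the file) to check your own configuration.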

Check Security Hub detection results

Normally, kube-bench writes its check results to the pod log.

Sample output:
$ kubectl logs -f pod/kube-bench-2j2ss -n kube-bench
[INFO] 3 Worker Node Security Configuration
[INFO] 3.1 Worker Node Configuration Files
[PASS] 3.1.1 Ensure that the proxy kubeconfig file permissions are set to 644 or more restrictive (Scored)
[PASS] 3.1.2 Ensure that the proxy kubeconfig file ownership is set to root:root (Scored)
[PASS] 3.1.3 Ensure that the kubelet configuration file has permissions set to 644 or more restrictive (Scored)
[PASS] 3.1.4 Ensure that the kubelet configuration file ownership is set to root:root (Scored)
[INFO] 3.2 Kubelet
[PASS] 3.2.1 Ensure that the --anonymous-auth argument is set to false (Scored)
[PASS] 3.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Scored)
[PASS] 3.2.3 Ensure that the --client-ca-file argument is set as appropriate (Scored)
[PASS] 3.2.4 Ensure that the --read-only-port argument is set to 0 (Scored)
[PASS] 3.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Scored)
[PASS] 3.2.6 Ensure that the --protect-kernel-defaults argument is set to true (Scored)
[PASS] 3.2.7 Ensure that the --make-iptables-util-chains argument is set to true (Scored) 
[PASS] 3.2.8 Ensure that the --hostname-override argument is not set (Scored)
[WARN] 3.2.9 Ensure that the --event-qps argument is set to 0 or a level which ensures appropriate event capture (Scored)
[PASS] 3.2.10 Ensure that the --rotate-certificates argument is not set to false (Scored)
[PASS] 3.2.11 Ensure that the RotateKubeletServerCertificate argument is set to true (Scored)

== Remediations node ==
3.2.9 If using a Kubelet config file, edit the file to set eventRecordQPS: to an appropriate level.
If using command line arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service


== Summary node ==
14 checks PASS
0 checks FAIL
1 checks WARN
0 checks INFO

== Summary total ==
14 checks PASS
0 checks FAIL
1 checks WARN
0 checks INFO

When you send the results to Security Hub with the --asff flag, only the number of findings imported into Security Hub appears in the log.

$ kubectl logs -f pod/kube-bench-q4sbd -n kube-bench
2020/12/12 14:51:07 Number of findings that were successfully imported:1

In this environment, you can confirm that one finding was imported.

Note that only checks whose result is FAIL or WARN are sent to Security Hub. (If every check passes, no findings are generated on the Security Hub side.)
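If you later want to pull only these findings programmatically, a GetFindings filter like the following narrows results to active kube-bench findings whose compliance status is FAILED or WARNING. This is a sketch: the field names follow the Security Hub finding-filter schema, and the product name value assumes the integration labels its findings "Kube-bench" as shown in the console.

```python
import json

# Filter for the Security Hub GetFindings API. Pass it as Filters, e.g.:
#   boto3.client("securityhub").get_findings(Filters=kube_bench_filter)
kube_bench_filter = {
    "ProductName": [{"Value": "Kube-bench", "Comparison": "EQUALS"}],
    "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
    # Entries within one key are OR'ed: FAILED or WARNING checks.
    "ComplianceStatus": [
        {"Value": "FAILED", "Comparison": "EQUALS"},
        {"Value": "WARNING", "Comparison": "EQUALS"},
    ],
}

print(json.dumps(kube_bench_filter, indent=2))
```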

Checking the findings in the Security Hub console confirms that the one finding sent was imported successfully.

Leverage custom insights

Security Hub has a feature called custom insights that lets you aggregate findings using grouping criteria and filters for each resource or environment. To create one, simply set the grouping condition and filters on the Insights page of the console and save.

For example, if you use Security Hub in a multi-account configuration, you can use custom insights in the master account to aggregate findings per member AWS account.

The following is an example of a custom insight with the grouping condition set to AWS account ID and a filter on the product name Kube-bench. Clicking an account ID immediately shows the kube-bench results for that account. Even with a single account, you can set the grouping condition to resource ID and use it to list findings per EKS cluster.

In an environment with multiple accounts and clusters, custom insights make the results much easier to manage, so I recommend them.
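The same insight can also be created through the API instead of the console. Below is a sketch of CreateInsight parameters that group kube-bench findings by AWS account ID; the insight name is illustrative, and the product name value assumes the "Kube-bench" label shown in the console.

```python
import json

# Parameters for the Security Hub CreateInsight API. Invoke with, e.g.:
#   boto3.client("securityhub").create_insight(**insight_params)
insight_params = {
    "Name": "kube-bench findings by account",      # illustrative name
    "GroupByAttribute": "AwsAccountId",            # one row per member account
    "Filters": {
        "ProductName": [{"Value": "Kube-bench", "Comparison": "EQUALS"}],
        "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
    },
}

print(json.dumps(insight_params, indent=2))
```

Setting GroupByAttribute to "ResourceId" instead gives the per-cluster view described above for single-account setups.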

reference

Integrating kube-bench with AWS Security Hub: https://github.com/aquasecurity/kube-bench/blob/master/docs/asff.md
Managing custom insights: https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-custom-insights.html

That's all. I hope you find it useful as a reference.
