policy: CIS
title: Configuration Recommendations for an EKS System
id: cis_eks
version: '1.0.1'
source: https://www.cisecurity.org/cis-benchmarks/
levels:
- id: level_1
- id: level_2
inherits_from:
- level_1
controls:
- id: '1'
title: '1 Control Plane Components'
description: |-
Security is a shared responsibility between AWS and the Amazon EKS customer. The shared responsibility model describes this as security of the cloud and security in the cloud:
Security of the cloud – AWS is responsible for protecting the infrastructure that runs AWS services in the AWS Cloud. For Amazon EKS, AWS is responsible for the Kubernetes control plane, which includes the control plane nodes and etcd database. Third-party auditors regularly test and verify the effectiveness of our security as part of the AWS compliance programs. To learn about the compliance programs that apply to Amazon EKS, see AWS Services in Scope by Compliance Program.
Security in the cloud – Your responsibility includes the following areas.
- The security configuration of the data plane, including the configuration of the security groups that allow traffic to pass from the Amazon EKS control plane into the customer VPC
- The configuration of the worker nodes and the containers themselves
- The worker node guest operating system (including updates and security patches)
- Amazon EKS follows the shared responsibility model for CVEs and security patches on managed node groups. Because managed nodes run the Amazon
EKS-optimized AMIs, Amazon EKS is responsible for building patched versions of these AMIs when bugs or issues are reported and a fix can be published. However, customers are responsible for deploying these patched AMI versions to their managed node groups.
- Other associated application software:
- Setting up and managing network controls, such as firewall rules
- Managing platform-level identity and access management, either with or in addition to IAM
- The sensitivity of your data, your company’s requirements, and applicable laws and regulations
AWS is responsible for securing the control plane, though you might be able to configure certain options based on your requirements.
notes: |-
As noted, evaluating the security of the control plane is not the
responsibility of customers or deployers, but of Amazon instead. Thus, this
profile will not implement controls from Section 1.
status: not applicable
rules: []
- id: '2'
title: '2 Control Plane Configuration'
description: |-
This section contains recommendations for Amazon EKS control plane logging
configuration. Customers are able to configure logging for the control plane
in Amazon EKS.
levels:
- level_1
status: manual
controls:
- id: '2.1'
title: '2.1 Logging'
status: manual
levels:
- level_1
controls:
- id: 2.1.1
title: '2.1.1 Enable audit Logs'
description: |-
Description:
The audit logs are part of the EKS managed Kubernetes control plane logs that are managed
by Amazon EKS. Amazon EKS is integrated with AWS CloudTrail, a service that provides a
record of actions taken by a user, role, or an AWS service in Amazon EKS. CloudTrail
captures all API calls for Amazon EKS as events. The calls captured include calls from the
Amazon EKS console and code calls to the Amazon EKS API operations.
Rationale:
Exporting logs and metrics to a dedicated, persistent datastore such as CloudTrail ensures
availability of audit data following a cluster security event, and provides a central location
for analysis of log and metric data collated from multiple sources.
# TODO: Evaluate if it's feasible to do an automated check for this
# control. This might require that the Compliance Operator acquire the
# ability to probe AWS endpoints (like the EKS one) to be able to do such
# checks.
notes: |-
Automating this check requires access to AWS endpoints, the aws or
eksctl client, the EKS cluster name, and the cluster region. It's not
feasible to automate this check until we find a way to incorporate the
information we need and use it to check AWS using the API. For now,
this must be checked manually.
status: manual
levels:
- level_1
rules:
- audit_logging
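# A manual check can be sketched with the AWS CLI; the cluster name and
# region below are placeholders, not values from this profile:
#   aws eks describe-cluster --name <cluster-name> --region <region> \
#     --query 'cluster.logging.clusterLogging'
# The "audit" log type should appear in an entry with "enabled": true.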
- id: '3'
title: '3 Worker Nodes'
description: |-
This section consists of security recommendations for the components that run on Amazon
EKS worker nodes.
levels:
- level_2
status: automated
controls:
- id: '3.1'
title: '3.1 Worker Node Configuration Files'
description: |-
This section covers recommendations for configuration files on Amazon EKS worker nodes.
status: automated
levels:
- level_2
controls:
- id: 3.1.1
title: >-
3.1.1 Ensure that the kubeconfig file permissions are set to 644 or
more restrictive
description: |-
Description:
If kubelet is running, and if it is using a file-based kubeconfig file, ensure that the
kubeconfig file has permissions of 644 or more restrictive.
Rationale:
The kubelet kubeconfig file controls various parameters of the kubelet service in the
worker node. You should restrict its file permissions to maintain the integrity of the file.
The file should be writable by only the administrators on the system.
It is possible to run kubelet with the kubeconfig parameters configured as a Kubernetes
ConfigMap instead of a file. In this case, there is no kubeconfig file.
status: automated
notes: |-
The default permissions are acceptable based on the control's description
and rationale.
rules:
- file_permissions_worker_kubeconfig
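# A minimal on-node spot check; the path below assumes the EKS-optimized
# AMI layout and is illustrative only:
#   stat -c '%a %U %G' /var/lib/kubelet/kubeconfig
# Permissions of 644 or stricter satisfy this control.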
- id: 3.1.2
title: >-
3.1.2 Ensure that the kubelet kubeconfig file ownership is set to
root:root
description: |-
Description:
If kubelet is running, ensure that the file ownership of its kubeconfig file is set to
root:root.
Rationale:
The kubeconfig file for kubelet controls various parameters for the kubelet service in the
worker node. You should set its file ownership to maintain the integrity of the file. The file
should be owned by root:root.
status: automated
rules:
- file_owner_worker_kubeconfig
- file_groupowner_worker_kubeconfig
- id: 3.1.3
title: >-
3.1.3 Ensure that the kubelet configuration file has permissions set to 644 or more restrictive
description: |-
Description:
Ensure that if the kubelet refers to a configuration file with the --config argument, that file
has permissions of 644 or more restrictive.
Rationale:
The kubelet reads various parameters, including security settings, from a config file
specified by the --config argument. If this file is specified you should restrict its file
permissions to maintain the integrity of the file. The file should be writable by only the
administrators on the system.
status: automated
levels:
- level_1
rules:
- file_permissions_kubelet_conf
- id: 3.1.4
title: >-
3.1.4 Ensure that the kubelet configuration file ownership is set to root:root
description: |-
Description:
Ensure that if the kubelet refers to a configuration file with the --config argument, that file
is owned by root:root.
Rationale:
The kubelet reads various parameters, including security settings, from a config file
specified by the --config argument. If this file is specified you should set its file
ownership to maintain the integrity of the file. The file should be owned by
root:root.
status: automated
levels:
- level_1
rules:
- file_groupowner_kubelet_conf
- file_owner_kubelet_conf
- id: '3.2'
title: >-
3.2 Kubelet
description: |-
This section contains recommendations for kubelet configuration.
Kubelet settings may be configured using arguments on the running kubelet executable, or
they may be taken from a Kubelet config file. If both are specified, the executable argument
takes precedence.
To find the Kubelet config file, run the following command:
ps -ef | grep kubelet | grep config
If the --config argument is present, this gives the location of the Kubelet config file. This
config file could be in JSON or YAML format depending on your distribution.
status: automated
levels:
- level_2
controls:
- id: 3.2.1
title: >-
3.2.1 Ensure that the --anonymous-auth argument is set to false
description: |-
Description:
Disable anonymous requests to the Kubelet server.
Rationale:
When enabled, requests that are not rejected by other configured authentication methods
are treated as anonymous requests. These requests are then served by the Kubelet server.
You should rely on authentication to authorize access and disallow anonymous requests.
status: automated
levels:
- level_1
rules:
- kubelet_anonymous_auth
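# A minimal on-node spot check; the config path below assumes the
# EKS-optimized AMI layout and is illustrative only:
#   grep -A 3 '"anonymous"' /etc/kubernetes/kubelet/kubelet-config.json
# The authentication.anonymous.enabled field should be false.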
- id: 3.2.2
title: >-
3.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow
description: |-
Description:
Do not allow all requests. Enable explicit authorization.
Rationale:
Kubelets, by default, allow all authenticated requests (even anonymous ones) without
needing explicit authorization checks from the apiserver. You should restrict this behavior
and only allow explicitly authorized requests.
status: automated
levels:
- level_1
rules:
- kubelet_authorization_mode
- id: 3.2.3
title: >-
3.2.3 Ensure that the --client-ca-file argument is set as appropriate
description: |-
Description:
Enable Kubelet authentication using certificates.
Rationale:
The connections from the apiserver to the kubelet are used for fetching logs for pods,
attaching (through kubectl) to running pods, and using the kubelet’s port-forwarding
functionality. These connections terminate at the kubelet’s HTTPS endpoint. By default, the
apiserver does not verify the kubelet’s serving certificate, which makes the connection
subject to man-in-the-middle attacks, and unsafe to run over untrusted and/or public
networks. Enabling Kubelet certificate authentication ensures that the apiserver can
authenticate the Kubelet before submitting any requests.
status: automated
levels:
- level_1
rules:
- kubelet_configure_client_ca
- id: 3.2.4
title: >-
3.2.4 Ensure that the --read-only-port is secured
description: |-
Description:
Disable the read-only port.
Rationale:
The Kubelet process provides a read-only API in addition to the main Kubelet API.
Unauthenticated access is provided to this read-only API, which could expose
potentially sensitive information about the cluster.
status: automated
levels:
- level_1
rules:
- kubelet_read_only_port_secured
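# A minimal on-node spot check; the config path below assumes the
# EKS-optimized AMI layout and is illustrative only:
#   grep readOnlyPort /etc/kubernetes/kubelet/kubelet-config.json
# The readOnlyPort field should be 0 or absent (it defaults to disabled
# in the KubeletConfiguration file).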
- id: 3.2.5
title: >-
3.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0
description: |-
Description:
Do not disable timeouts on streaming connections.
Rationale:
Setting idle timeouts ensures that you are protected against Denial-of-Service attacks,
inactive connections, and running out of ephemeral ports.
Note: By default, --streaming-connection-idle-timeout is set to 4 hours which might be
too high for your environment. Setting this as appropriate would additionally ensure that
such streaming connections are timed out after serving legitimate use cases.
status: automated
levels:
- level_1
rules:
- kubelet_enable_streaming_connections
- id: 3.2.6
title: >-
3.2.6 Ensure that the --protect-kernel-defaults argument is set to true
description: |-
Description:
Protect tuned kernel parameters from overriding kubelet default kernel parameter values.
Rationale:
Kernel parameters are usually tuned and hardened by the system administrators before
putting the systems into production. These parameters protect the kernel and the system.
Your kubelet kernel defaults that rely on such parameters should be appropriately set to
match the desired secured system state. Ignoring this could potentially lead to running
pods with undesired kernel behavior.
status: automated
levels:
- level_1
rules:
- kubelet_enable_protect_kernel_defaults
- id: 3.2.7
title: >-
3.2.7 Ensure that the --make-iptables-util-chains argument is set to true
description: |-
Description:
Allow Kubelet to manage iptables.
Rationale:
Kubelets can automatically manage the required changes to iptables based on how you
choose your networking options for the pods. It is recommended to let kubelets manage
the changes to iptables. This ensures that the iptables configuration remains in sync with
the pod networking configuration. Manually configuring iptables alongside dynamic pod
network configuration changes might hamper communication between pods/containers and
with the outside world, and might leave iptables rules that are too restrictive or too open.
status: automated
levels:
- level_1
rules:
- kubelet_enable_iptables_util_chains
- id: 3.2.8
title: >-
3.2.8 Ensure that the --hostname-override argument is not set
description: |-
Description:
Do not override node hostnames.
Rationale:
Overriding hostnames could potentially break TLS setup between the kubelet and the
apiserver. Additionally, with overridden hostnames, it becomes increasingly difficult to
associate logs with a particular node and process them for security analytics. Hence, you
should set up your kubelet nodes with resolvable FQDNs and avoid overriding the
hostnames with IPs.
status: automated
levels:
- level_1
rules:
- kubelet_disable_hostname_override
- id: 3.2.9
title: >-
3.2.9 Ensure that the --eventRecordQPS argument is set to 0 or a level which ensures appropriate event capture
# TODO(jaosorior): I'm unsure how to check this as the value is not
# set in EKS, but the result of evaluating the dynamic config
# endpoint gives out a correct value... so it seems that it
# just passes by default.
description: |-
Description:
Security relevant information should be captured. The --eventRecordQPS flag on the
Kubelet can be used to limit the rate at which events are gathered. Setting this too low
could result in relevant events not being logged, however the unlimited setting of 0 could
result in a denial of service on the kubelet.
Rationale:
It is important to capture all events and not restrict event creation. Events are an important
source of security information and analytics that ensure that your environment is
consistently monitored using the event data.
status: pending
levels:
- level_2
rules: []
- id: 3.2.10
title: >-
3.2.10 Ensure that the --rotate-certificates argument is not set to false
description: |-
Description:
Enable kubelet client certificate rotation.
Rationale:
The --rotate-certificates setting causes the kubelet to rotate its client certificates by
creating new CSRs as its existing credentials expire. This automated periodic rotation
ensures that there is no downtime due to expired certificates, thus addressing
availability in the CIA security triad.
status: automated
levels:
- level_2
rules:
- kubelet_enable_client_cert_rotation
- kubelet_enable_cert_rotation
- id: 3.2.11
title: >-
3.2.11 Ensure that the RotateKubeletServerCertificate argument is set to true
description: |-
Description:
Enable kubelet server certificate rotation.
Rationale:
RotateKubeletServerCertificate causes the kubelet to both request a serving certificate
after bootstrapping its client credentials and rotate the certificate as its existing credentials
expire. This automated periodic rotation ensures that there is no downtime due to
expired certificates, thus addressing availability in the CIA security triad.
Note: This recommendation only applies if you let kubelets get their certificates from the
API server. In case your kubelet certificates come from an outside authority/tool (e.g.
Vault) then you need to take care of rotation yourself.
status: automated
levels:
- level_1
rules:
- kubelet_enable_server_cert_rotation
- id: '4'
title: >-
4 Policies
description: |-
This section contains recommendations for various Kubernetes policies which
are important to the security of Amazon EKS customer environment.
status: manual
levels:
- level_2
controls:
- id: '4.1'
title: >-
4.1 RBAC and Service Accounts
status: pending
levels:
- level_2
controls:
- id: 4.1.1
title: >-
4.1.1 Ensure that the cluster-admin role is only used where required
status: manual
description: |-
Description:
The RBAC role cluster-admin provides wide-ranging powers over the
environment and should be used only where and when needed.
Rationale:
Kubernetes provides a set of default roles where RBAC is used. Some
of these roles such as cluster-admin provide wide-ranging privileges
which should only be applied where absolutely necessary. Roles such
as cluster-admin allow super-user access to perform any action on any
resource. When used in a ClusterRoleBinding, it gives full control
over every resource in the cluster and in all namespaces. When used
in a RoleBinding, it gives full control over every resource in the
rolebinding's namespace, including the namespace itself.
notes: |-
This check is manual because it requires deployment-specific knowledge
about users and their authorization.
levels:
- level_1
rules: []
- id: 4.1.2
title: >-
4.1.2 Minimize access to secrets
status: manual
description: |-
Description:
The Kubernetes API stores secrets, which may be service account
tokens for the Kubernetes API or credentials used by workloads in the
cluster. Access to these secrets should be restricted to the smallest
possible group of users to reduce the risk of privilege escalation.
Rationale:
Inappropriate access to secrets stored within the Kubernetes cluster
can allow for an attacker to gain additional access to the Kubernetes
cluster or external resources whose credentials are stored as
secrets.
notes: |-
This check is manual because it requires deployment-specific knowledge
about users and their authorization.
levels:
- level_1
rules: []
- id: 4.1.3
title: >-
4.1.3 Minimize wildcard use in Roles and ClusterRoles
status: manual
description: |-
Description:
Kubernetes Roles and ClusterRoles provide access to resources based
on sets of objects and actions that can be taken on those objects. It
is possible to set either of these to be the wildcard "*" which
matches all items. Use of wildcards is not optimal from a security
perspective as it may allow for inadvertent access to be granted when
new resources are added to the Kubernetes API either as CRDs or in
later versions of the product.
Rationale:
The principle of least privilege recommends that users are provided
only the access required for their role and nothing more. The use of
wildcard rights grants is likely to provide excessive rights to the
Kubernetes API.
notes: |-
This check is manual because it requires deployment-specific knowledge
to audit effectively. The Compliance Operator also uses wildcards for
the compliance-operator roles.
levels:
- level_1
rules: []
- id: 4.1.4
title: >-
4.1.4 Minimize access to create pods
status: manual
description: |-
Description:
The ability to create pods in a namespace can provide a number of
opportunities for privilege escalation, such as assigning privileged
service accounts to these pods or mounting hostPaths with access to
sensitive data (unless Pod Security Policies are implemented to
restrict this access). As such, access to create new pods should be
restricted to the smallest possible group of users.
Rationale:
The ability to create pods in a cluster opens up possibilities for
privilege escalation and should be restricted, where possible.
notes: |-
This check is manual because it requires deployment-specific knowledge
about users and their authorization.
levels:
- level_1
rules: []
- id: 4.1.5
title: >-
4.1.5 Ensure that default service accounts are not actively used.
status: manual
description: |-
Description:
The default service account should not be used to ensure that rights
granted to applications can be more easily audited and reviewed.
Rationale:
Kubernetes provides a default service account which is used by
cluster workloads where no specific service account is assigned to
the pod. Where access to the Kubernetes API from a pod is required, a
specific service account should be created for that pod, and rights
granted to that service account. The default service account should
be configured such that it does not provide a service account token
and does not have any explicit rights assignments.
notes: |-
This check is manual because it requires deployment-specific knowledge
about each namespace in the deployment.
levels:
- level_1
rules: []
- id: 4.1.6
title: >-
4.1.6 Ensure that Service Account Tokens are only mounted where necessary
status: manual
description: |-
Description:
Service account tokens should not be mounted in pods except where
the workload running in the pod explicitly needs to communicate with
the API server.
Rationale:
Mounting service account tokens inside pods can provide an avenue for
privilege escalation attacks where an attacker is able to compromise
a single pod in the cluster. Avoiding mounting these tokens removes
this attack avenue.
notes: |-
This check is manual because it requires deployment-specific knowledge
about each namespace and pod in the deployment.
levels:
- level_1
rules: []
- id: '4.2'
title: >-
4.2 Pod Security Policies
status: pending
levels:
- level_2
controls:
- id: 4.2.1
title: >-
4.2.1 Minimize the admission of privileged containers
status: pending
levels:
- level_1
rules: []
- id: 4.2.2
title: >-
4.2.2 Minimize the admission of containers wishing to share the host process ID namespace
status: pending
levels:
- level_1
rules: []
- id: 4.2.3
title: >-
4.2.3 Minimize the admission of containers wishing to share the host IPC namespace
status: pending
levels:
- level_1
rules: []
- id: 4.2.4
title: >-
4.2.4 Minimize the admission of containers wishing to share the host network namespace
status: pending
levels:
- level_1
rules: []
- id: 4.2.5
title: >-
4.2.5 Minimize the admission of containers with allowPrivilegeEscalation
status: pending
levels:
- level_1
rules: []
- id: 4.2.6
title: >-
4.2.6 Minimize the admission of root containers
status: pending
levels:
- level_2
rules: []
- id: 4.2.7
title: >-
4.2.7 Minimize the admission of containers with the NET_RAW capability
status: pending
levels:
- level_1
rules: []
- id: 4.2.8
title: >-
4.2.8 Minimize the admission of containers with added capabilities
status: pending
levels:
- level_1
rules: []
- id: 4.2.9
title: >-
4.2.9 Minimize the admission of containers with capabilities assigned
status: pending
levels:
- level_2
rules: []
- id: '4.3'
title: >-
4.3 CNI Plugin
status: pending
levels:
- level_2
controls:
- id: 4.3.1
title: >-
4.3.1 Ensure latest CNI version is used
status: pending
levels:
- level_1
rules: []
- id: 4.3.2
title: >-
4.3.2 Ensure that all Namespaces have Network Policies defined
description: |-
Description:
Use network policies to isolate traffic in your cluster network.
Rationale:
Running different applications on the same Kubernetes cluster creates a risk of one
compromised application attacking a neighboring application. Network segmentation is
important to ensure that containers can communicate only with those they are supposed
to. A network policy is a specification of how selections of pods are allowed to
communicate with each other and other network endpoints.
Network Policies are namespace scoped. When a network policy is introduced to a given
namespace, all traffic not allowed by the policy is denied. However, if there are no
network policies in a namespace, all traffic will be allowed into and out of the pods
in that namespace.
notes: |-
We can verify this with the Compliance Operator by checking that all
namespaces have at least one NetworkPolicy defined.
status: automated
levels:
- level_2
rules:
- configure_network_policies_namespaces
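# For illustration, a default-deny NetworkPolicy of the kind this control
# expects in every namespace:
#   apiVersion: networking.k8s.io/v1
#   kind: NetworkPolicy
#   metadata:
#     name: default-deny-all
#   spec:
#     podSelector: {}
#     policyTypes: ["Ingress", "Egress"]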
- id: '4.4'
title: >-
4.4 Secrets Management
status: pending
levels:
- level_2
controls:
- id: 4.4.1
title: >-
4.4.1 Prefer using secrets as files over secrets as environment variables
status: pending
levels:
- level_2
rules: []
- id: 4.4.2
title: >-
4.4.2 Consider external secret storage
status: pending
levels:
- level_2
rules: []
- id: '4.5'
title: >-
4.5 Extensible Admission Control
status: pending
levels:
- level_2
controls:
- id: 4.5.1
title: >-
4.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller
status: pending
levels:
- level_2
rules: []
- id: '4.6'
title: >-
4.6 General Policies
status: pending
levels:
- level_2
controls:
- id: 4.6.1
title: >-
4.6.1 Create administrative boundaries between resources using namespaces
status: pending
levels:
- level_1
rules: []
- id: 4.6.2
title: >-
4.6.2 Apply Security Context to Your Pods and Containers
status: pending
levels:
- level_2
rules: []
- id: 4.6.3
title: >-
4.6.3 The default namespace should not be used
status: pending
levels:
- level_2
rules: []
- id: '5'
title: >-
5 Managed services
status: pending
description: |-
This section consists of security recommendations for Amazon EKS. These
recommendations are applicable to configurations that Amazon EKS customers
own and manage.
levels:
- level_2
controls:
- id: '5.1'
title: >-
5.1 Image Registry and Image Scanning
status: manual
description: |-
This section contains recommendations relating to container image
registries and securing images in those registries, such as Amazon
Elastic Container Registry (ECR).
levels:
- level_2
controls:
- id: 5.1.1
title: >-
5.1.1 Ensure Image Vulnerability Scanning using Amazon ECR image scanning or a third party provider
# NOTE(lbragstad): We're unable to automate this check because it
# requires access to an AWS endpoint. We should re-evaluate this if it
# becomes possible to write rules that use network connections.
status: manual
description: |-
Description:
Scan images being deployed to Amazon EKS for vulnerabilities.
Rationale:
Vulnerabilities in software packages can be exploited by hackers or
malicious users to obtain unauthorized access to local cloud resources.
Amazon ECR and other third party products allow images to be scanned
for known vulnerabilities.
levels:
- level_1
rules:
- image_scanning
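# A manual check can be sketched with the AWS CLI (illustrative only):
#   aws ecr describe-repositories \
#     --query 'repositories[*].[repositoryName,imageScanningConfiguration.scanOnPush]'
# Repositories should report scanOnPush as true, or an equivalent
# third-party scanner should cover them.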
- id: 5.1.2
title: >-
5.1.2 Minimize user access to Amazon ECR
# NOTE(lbragstad): We're unable to automate this check because it
# requires access to an AWS endpoint. We should re-evaluate this if it
# becomes possible to write rules that use network connections.
status: manual
description: |-
Description:
Restrict user access to Amazon ECR, limiting interaction with build
images to only authorized personnel and service accounts.
Rationale:
Weak access control to Amazon ECR may allow malicious users to replace
built images with vulnerable containers.
levels:
- level_1
rules:
- registry_access
- id: 5.1.3
title: >-
5.1.3 Minimize cluster access to read-only for Amazon ECR
# NOTE(lbragstad): We're unable to automate this check because it
# requires access to an AWS endpoint. We should re-evaluate this if it
# becomes possible to write rules that use network connections.
status: manual
description: |-
Description:
Configure the Cluster Service Account to only allow read-only access to
Amazon ECR.
Rationale:
The Cluster Service Account does not require administrative access to
Amazon ECR, only requiring pull access to containers to deploy onto
Amazon EKS. Restricting permissions follows the principles of least
privilege and prevents credentials from being abused beyond the
required role.
levels:
- level_1
rules:
- read_only_registry_access
- id: 5.1.4
title: >-
5.1.4 Minimize Container Registries to only those approved
# NOTE(lbragstad): We're unable to automate this check because it
# requires access to an AWS endpoint. We should re-evaluate this if it
# becomes possible to write rules that use network connections.
status: manual
description: |-
Description:
Use approved container registries.
Rationale:
Allowing unrestricted access to external container registries
provides the opportunity for malicious or unapproved containers to be
deployed into the cluster. Allowlisting only approved container
registries reduces this risk.
levels:
- level_2
rules:
- approved_registries
- id: '5.2'
title: >-
5.2 Identity and Access Management (IAM)
status: manual
levels:
- level_1
controls:
- id: 5.2.1
title: >-
5.2.1 Prefer using dedicated EKS Service Accounts
status: manual
description: |-
Description:
Kubernetes workloads should not use cluster node service accounts to
authenticate to Amazon EKS APIs. Each Kubernetes workload that needs
to authenticate to other AWS services using AWS IAM should be
provisioned with a dedicated Service account.
Rationale:
Manual approaches for authenticating Kubernetes workloads running on
Amazon EKS against AWS APIs are: storing service account keys as a
Kubernetes secret (which introduces manual key rotation and potential
for key compromise); or use of the underlying nodes' IAM instance
role, which violates the principle of least privilege on a
multi-tenanted node: when only one pod needs access to a service,
every other pod on that node using the same role gains that access as
well.
notes: |-
This must be checked manually since it requires deployment-specific
knowledge and requires network access to the AWS IAM endpoint.
Re-evaluate this if or when we have the ability to use network
connections in rules.
levels:
- level_1
rules:
- dedicated_service_accounts
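# NOTE: As a hedged sketch of the dedicated Service account approach
# above, IAM Roles for Service Accounts (IRSA) binds an IAM role to a
# Kubernetes Service account via an annotation. The names and role ARN
# below are placeholders:
#
#   apiVersion: v1
#   kind: ServiceAccount
#   metadata:
#     name: my-workload          # hypothetical name
#     namespace: default
#     annotations:
#       eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-workload-role
#
# Pods using this Service account receive only that role's permissions,
# rather than the node's instance role.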
- id: '5.3'
title: >-
5.3 AWS Key Management Service (KMS)
status: manual
levels:
- level_1
controls:
- id: 5.3.1
title: >-
5.3.1 Ensure Kubernetes Secrets are encrypted using Customer Master Keys (CMKs) managed in AWS KMS
status: manual
description: |-
Description:
Encrypt Kubernetes secrets, stored in etcd, using secrets encryption
feature during Amazon EKS cluster creation.
Rationale:
Kubernetes can store secrets that pods can access via a mounted
volume. By default, Kubernetes secrets are only base64-encoded, not
encrypted; encrypting them is the recommended approach. Amazon EKS
clusters version 1.13 and higher support encrypting your
Kubernetes secrets using AWS Key Management Service (KMS) Customer
Managed Keys (CMK). The only requirement is to enable the encryption
provider support during EKS cluster creation. Use AWS Key Management
Service (KMS) keys to provide envelope encryption of Kubernetes
secrets stored in Amazon EKS. Implementing envelope encryption is
considered a security best practice for applications that store
sensitive data and is part of a defense in depth security strategy.
Application-layer Secrets Encryption provides an additional layer of
security for sensitive data, such as user defined Secrets and Secrets
required for the operation of the cluster, such as service account
keys, which are all stored in etcd. Using this functionality, you can
use a key, that you manage in AWS KMS, to encrypt data at the
application layer. This protects against attackers in the event that
they manage to gain access to etcd.
notes: |-
This check is manual because it requires deployment-specific knowledge
and access to the AWS KMS endpoint. Re-evaluate this if or when we have
the ability to use network connections in rules.
levels:
- level_1
rules:
- secret_encryption
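# NOTE: A hedged example of enabling secrets encryption at cluster
# creation with an eksctl ClusterConfig (cluster name, region, and key
# ARN are placeholders):
#
#   apiVersion: eksctl.io/v1alpha5
#   kind: ClusterConfig
#   metadata:
#     name: example-cluster
#     region: us-east-1
#   secretsEncryption:
#     keyARN: arn:aws:kms:us-east-1:111122223333:key/0000-example
#
# etcd then stores Kubernetes secrets envelope-encrypted with the CMK.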
- id: '5.4'
title: >-
5.4 Cluster Networking
status: pending
levels:
- level_2
controls:
- id: 5.4.1
title: >-
5.4.1 Restrict Access to the Control Plane Endpoint
status: manual
description: |-
Description:
Enable Endpoint Private Access to restrict access to the cluster's
control plane to only an allowlist of authorized IPs.
Rationale:
Authorized networks are a way of specifying a restricted range of IP
addresses that are permitted to access your cluster's control plane.
Amazon EKS uses both Transport Layer Security (TLS) and
authentication to provide secure access to your cluster's control
plane from the public internet. This provides you the flexibility to
administer your cluster from anywhere; however, you might want to
further restrict access to a set of IP addresses that you control.
You can set this restriction by specifying an authorized network.
Restricting access to an authorized network can provide additional
security benefits for your container cluster, including:
* Better protection from outsider attacks: Authorized networks
provide an additional layer of security by limiting external
access to a specific set of addresses you designate, such as
those that originate from your premises. This helps protect
access to your cluster in the case of a vulnerability in the
cluster's authentication or authorization mechanism.
* Better protection from insider attacks: Authorized networks help
protect your cluster from accidental leaks of master certificates
from your company's premises. Leaked certificates used from
outside Amazon EC2 and outside the authorized IP ranges (for
example, from addresses outside your company) are still denied
access.
notes: |-
This check is manual because it requires deployment-specific knowledge
and access to the AWS EKS endpoint. Re-evaluate this if or when we have
the ability to use network connections in rules.
levels:
- level_1
rules:
- control_plane_access
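# NOTE: A hedged eksctl sketch of restricting the public endpoint to an
# allowlist of CIDRs (the address range below is a placeholder):
#
#   vpc:
#     publicAccessCIDRs:
#       - 203.0.113.0/24        # e.g. your office network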
- id: 5.4.2
title: >-
5.4.2 Ensure clusters are created with Private Endpoint Enabled and Public Access Disabled
status: manual
description: |-
Description:
Disable access to the Kubernetes API from outside the node network if
it is not required.
Rationale:
In a private cluster, the master node has two endpoints, a private
and public endpoint. The private endpoint is the internal IP address
of the master, behind an internal load balancer in the master's VPC
network. Nodes communicate with the master using the private
endpoint. The public endpoint enables the Kubernetes API to be
accessed from outside the master's VPC network. Although the
Kubernetes API requires an authorized token to perform sensitive
actions, a vulnerability could potentially expose the Kubernetes API
publicly with unrestricted access. Additionally, an attacker may be
able to identify the current cluster and Kubernetes API version and
determine whether it is vulnerable to an attack. Unless required,
disabling the public endpoint helps prevent such threats by requiring
the attacker to be on the master's VPC network to perform any attack
on the Kubernetes API.
notes: |-
This check is manual because it requires deployment-specific knowledge
and access to AWS endpoints. Re-evaluate this if or when we have the
ability to use network connections in rules.
levels:
- level_2
rules:
- endpoint_configuration
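# NOTE: A hedged eksctl sketch of enabling the private endpoint and
# disabling public access:
#
#   vpc:
#     clusterEndpoints:
#       privateAccess: true
#       publicAccess: false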
- id: 5.4.3
title: >-
5.4.3 Ensure clusters are created with Private Nodes
status: manual
description: |-
Description:
Disable public IP addresses for cluster nodes, so that they only have
private IP addresses. Private Nodes are nodes with no public IP
addresses.
Rationale:
Disabling public IP addresses on cluster nodes restricts access to
only internal networks, forcing attackers to obtain local network
access before attempting to compromise the underlying Kubernetes
hosts.
notes: |-
This check is manual because it requires deployment-specific knowledge
and access to AWS endpoints. Re-evaluate this if or when we have the
ability to use network connections in rules.
levels:
- level_1
rules:
- private_nodes
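# NOTE: A hedged eksctl sketch of a node group whose nodes get only
# private IP addresses (the node group name is a placeholder):
#
#   nodeGroups:
#     - name: private-workers
#       privateNetworking: true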
- id: 5.4.4
title: >-
5.4.4 Ensure Network Policy is Enabled and set as appropriate
status: manual
description: |-
Description:
Use Network Policy to restrict pod to pod traffic within a cluster
and segregate workloads.
Rationale:
By default, all pod to pod traffic within a cluster is allowed.
Network Policy creates a pod-level firewall that can be used to
restrict traffic between sources. Pod traffic is restricted by having
a Network Policy that selects it (through the use of labels). Once
there is any Network Policy in a namespace selecting a particular
pod, that pod will reject any connections that are not allowed by any
Network Policy. Other pods in the namespace that are not selected by
any Network Policy will continue to accept all traffic. Network
Policies are managed via the Kubernetes Network Policy API and
enforced by a network plugin; simply creating the resource without a
compatible network plugin to implement it will have no effect. EKS
supports Network Policy enforcement through the use of Calico.
notes: |-
This check is manual because it requires deployment-specific knowledge.
levels:
- level_1
rules:
- configure_network_policy
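# NOTE: A hedged example of a default-deny ingress Network Policy; once
# a compatible plugin such as Calico enforces it, pods in the namespace
# only accept traffic that another policy explicitly allows:
#
#   apiVersion: networking.k8s.io/v1
#   kind: NetworkPolicy
#   metadata:
#     name: default-deny-ingress
#   spec:
#     podSelector: {}           # selects every pod in the namespace
#     policyTypes:
#       - Ingress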
- id: 5.4.5
title: >-
5.4.5 Encrypt traffic to HTTPS load balancers with TLS certificates
status: manual
description: |-
Description:
Encrypt traffic to HTTPS load balancers using TLS certificates.
Rationale:
Encrypting traffic between users and your Kubernetes workload is
fundamental to protecting data sent over the web.
notes: |-
This check is manual because auditing it requires access to the
Kubernetes API or to the AWS EKS configuration to inspect load
balancer and TLS settings.
levels:
- level_2
rules:
- configure_tls
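# NOTE: A hedged sketch of terminating TLS on an AWS load balancer via
# Service annotations (the Service name and certificate ARN are
# placeholders):
#
#   apiVersion: v1
#   kind: Service
#   metadata:
#     name: my-service          # hypothetical name
#     annotations:
#       service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:111122223333:certificate/0000-example
#       service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
#   spec:
#     type: LoadBalancer
#     ports:
#       - port: 443
#         targetPort: 8080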
- id: '5.5'
title: >-
5.5 Authentication and Authorization
status: manual
levels:
- level_2
controls:
- id: 5.5.1
title: >-
5.5.1 Manage Kubernetes RBAC users with AWS IAM Authenticator for Kubernetes
status: manual
description: |-
Description:
Amazon EKS uses IAM to provide authentication to your Kubernetes
cluster through the AWS IAM Authenticator for Kubernetes. You can
configure the stock kubectl client to work with Amazon EKS by
installing the AWS IAM Authenticator for Kubernetes and modifying
your kubectl configuration file to use it for authentication.
Rationale:
On- and off-boarding users is often difficult to automate and prone
to error. Using a single source of truth for user permissions reduces
the number of locations from which an individual must be off-boarded,
and prevents users from gaining unique permission sets that increase
the cost of audit.
notes: |-
This check is manual because it requires deployment-specific knowledge
about the namespaces to audit it effectively.
levels:
- level_2
rules:
- iam_integration
- id: '5.6'
title: >-
5.6 Other Cluster Configurations
status: manual
levels:
- level_1
controls:
- id: 5.6.1
title: >-
5.6.1 Consider Fargate for running untrusted workloads
status: manual
description: |-
Description:
It is best practice to restrict or fence off untrusted workloads when
running in a multi-tenant environment.
Rationale:
AWS Fargate is a technology that provides on-demand, right-sized
compute capacity for containers. With AWS Fargate, you no longer have
to provision, configure, or scale groups of virtual machines to run
containers. This removes the need to choose server types, decide when
to scale your node groups, or optimize cluster packing. You can
control which pods start on Fargate and how they run with Fargate
profiles, which are defined as part of your Amazon EKS cluster.
Amazon EKS integrates Kubernetes with AWS Fargate by using
controllers that are built by AWS using the upstream, extensible
model provided by Kubernetes. These controllers run as part of the
Amazon EKS managed Kubernetes control plane and are responsible for
scheduling native Kubernetes pods onto Fargate. The Fargate
controllers include a new scheduler that runs alongside the default
Kubernetes scheduler in addition to several mutating and validating
admission controllers. When you start a pod that meets the criteria
for running on Fargate, the Fargate controllers running in the
cluster recognize, update, and schedule the pod onto Fargate. Each
pod running on Fargate has its own isolation boundary and does not
share the underlying kernel, CPU resources, memory resources, or
elastic network interface with another pod.
notes: |-
This check is manual because it requires access to the AWS Fargate
endpoint to audit and remediate.
levels:
- level_2
rules:
- fargate
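# NOTE: A hedged eksctl sketch of a Fargate profile that schedules pods
# in a given namespace onto Fargate (profile and namespace names are
# placeholders):
#
#   fargateProfiles:
#     - name: fp-untrusted
#       selectors:
#         - namespace: untrusted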