Configuring Kubernetes Add-ons
kube-aws has built-in support for several Kubernetes add-ons that are known to require additional configuration beforehand.
cluster-autoscaler
cluster-autoscaler (CA) is an add-on which automatically scales your k8s cluster in/out by adding/removing worker nodes according to per-node resource utilization.
To enable cluster-autoscaler, add the below settings to your cluster.yaml:
addons:
  clusterAutoscaler:
    enabled: true

worker:
  nodePools:
  - name: scaled
    autoScalingGroup:
      minSize: 1
      maxSize: 10
    autoscaling:
      clusterAutoscaler:
        enabled: true
  - name: notScaled
    autoScalingGroup:
      minSize: 2
      maxSize: 4
The above example configuration would:
- By addons.clusterAutoscaler.enabled:
  - Provide controller nodes the IAM permissions required to call the AWS APIs used by CA
  - Create a k8s deployment to run CA on one of the controller nodes, so that CA can utilize those IAM permissions
- By worker.nodePools[0].autoscaling.clusterAutoscaler.enabled:
  - If there are unschedulable, pending pod(s) requesting more capacity, CA will add nodes to the scaled node pool, up to the max size 10 (see the example deployment below)
  - If there are no unschedulable, pending pod(s) waiting for more capacity and one or more nodes are under-utilized, CA will remove node(s), down to the min size 1
- The second node pool, notScaled, is scaled manually by YOU, because autoscaling is not turned on for it (= missing autoscaling.clusterAutoscaler.enabled)
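For reference, scale-out can be observed by creating a deployment whose aggregate resource requests do not fit on the existing nodes of the scaled pool. The manifest below is only an illustrative sketch; the name, image, replica count, and resource requests are placeholders to be tuned to your instance types:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ca-scale-out-demo
spec:
  # With enough replicas and per-pod requests, some pods stay pending,
  # which triggers cluster-autoscaler to add nodes to the "scaled" pool
  replicas: 20
  selector:
    matchLabels:
      app: ca-scale-out-demo
  template:
    metadata:
      labels:
        app: ca-scale-out-demo
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
        resources:
          requests:
            cpu: 500m
            memory: 256Mi

Deleting the deployment afterwards leaves under-utilized nodes, which CA then removes down to the pool's min size.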
kube2iam / kiam
kube2iam and kiam are add-ons which provide IAM credentials for target IAM roles to pods running inside a Kubernetes cluster, based on pod annotations. To allow kube2iam or kiam running on worker and controller nodes to assume the target roles, you need the following configurations.
The IAM roles associated with worker and controller nodes require an IAM policy statement like:
{ "Action": "sts:AssumeRole", "Resource": "*", "Effect": "Allow" }
To add the policy to controller nodes, set experimental.kube2IamSupport.enabled or experimental.kiamSupport.enabled to true in your cluster.yaml (but not both). For worker nodes, it is worker.nodePools[].kube2IamSupport.enabled or worker.nodePools[].kiamSupport.enabled.

The target IAM roles need their trust relationships changed to allow the kube-aws worker/controller IAM roles to assume them.
As CloudFormation generates unpredictable role names containing random IDs by default, it is recommended to make them predictable first, so that you can easily automate configuring the trust relationships afterwards. To make worker/controller role names predictable, set controller.iam.role.name for controller nodes and worker.nodePools[].iam.role.name for worker nodes. Each iam.role.name becomes the suffix of the resulting worker/controller role name.

Please be aware that configuring the target roles' trust relationships is out of scope for kube-aws; see the relevant part of the kube2iam doc or the kiam doc for more information. Basically, you need to point Principal to the ARN of a resulting worker/controller IAM role, which would look like arn:aws:iam::<your aws account id>:role/<stack-name>-<managed iam role name>.
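For example, a target role's trust relationship could contain a statement like the following sketch, where the account ID, stack name, and role name suffix are placeholders for your own values:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<your aws account id>:role/<stack-name>-myworkerrole"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}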
Finally, an example cluster.yaml usable with kube2iam would look like:
# for controller nodes
controller:
  iam:
    role:
      name: mycontrollerrole

experimental:
  kube2IamSupport:
    enabled: true

# for worker nodes
worker:
  nodePools:
  - name: mypool
    iam:
      role:
        name: myworkerrole
    kube2IamSupport:
      enabled: true
See the relevant GitHub issues for kube2iam and kiam for more information.
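Once the cluster and a target role are configured, a pod requests the target role through the iam.amazonaws.com/role annotation. The following is a minimal, illustrative sketch; the pod name, image, and role value are placeholders (kube2iam accepts either the role name or its full ARN, depending on how it is configured):

apiVersion: v1
kind: Pod
metadata:
  name: aws-cli-demo
  annotations:
    # kube2iam/kiam intercept calls to the EC2 metadata endpoint and serve
    # temporary credentials for the role named here
    iam.amazonaws.com/role: my-target-role
spec:
  containers:
  - name: aws-cli
    image: amazon/aws-cli
    command: ["aws", "sts", "get-caller-identity"]

Note that kiam additionally requires the pod's namespace to whitelist permitted roles via its own namespace annotation; see the kiam doc.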
You can also reference the controller and worker IAM roles from a separate CloudFormation stack that provides the roles to assume:
...
Parameters:
  KubeAWSStackName:
    Type: String
Resources:
  IAMRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Action: sts:AssumeRole
            Principal:
              Service: ec2.amazonaws.com
          - Effect: Allow
            Action: sts:AssumeRole
            Principal:
              AWS:
                Fn::ImportValue: !Sub "${KubeAWSStackName}-ControllerIAMRoleArn"
          - Effect: Allow
            Action: sts:AssumeRole
            Principal:
              AWS:
                Fn::ImportValue: !Sub "${KubeAWSStackName}-NodePool<Node Pool Name>WorkerIAMRoleArn"
...
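Such a stack can be created with the standard CloudFormation tooling; for example (the stack name, template file, and parameter value below are placeholders):

aws cloudformation deploy \
  --stack-name my-assumable-roles \
  --template-file assumable-roles.yaml \
  --parameter-overrides KubeAWSStackName=<your kube-aws stack name> \
  --capabilities CAPABILITY_IAM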