# Getting Started
## Cluster Operations
### Creating
There are a few different ways to create a Kubernetes cluster with the
Cluster API driver for Magnum; this section covers the most common ones.
!!! note "Notes about deployment speed"

    The Cluster API driver for Magnum is designed to be fast: it is capable
    of deploying a Kubernetes cluster in under 5 minutes. However, several
    factors can slow down the deployment process:

    * **Operating system image size**

        The operating system image averages around 4 GB. It needs to be
        downloaded to each node before the cluster deploys, and the download
        speed depends on the network connection. The compute service caches
        images locally, so the initial cluster deployment is slower than
        subsequent deployments.

    * **Network connectivity**

        When the cluster comes up, it needs to pull all of its container
        images from the container registry. By default, it pulls all images
        from the upstream registries. If you have a slow network connection,
        you can use a local registry to speed up the deployment process; see
        the [Labels](labels.md#images) section for how to point the cluster
        at it.

        [Atmosphere](https://github.com/vexxhost/atmosphere) deploys a local
        registry by default and includes several other optimizations that
        bring deployment time down to around 5 minutes.
You can create clusters using several different methods, all of which
ultimately go through the Magnum API: the OpenStack CLI, the OpenStack
Horizon dashboard, Terraform, Ansible, or the Magnum API directly.
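If you want to drive the Magnum API directly, a cluster is created by `POST`ing a JSON body to the `/v1/clusters` endpoint. The sketch below builds such a body in Python; the field names follow the Magnum API, but the helper itself and all the values are illustrative, and authentication (a Keystone token in the `X-Auth-Token` header) is left out:

```python
import json


def build_cluster_request(name, cluster_template_id, node_count=2,
                          master_count=3, keypair=None, labels=None):
    """Build the JSON body for a POST to the Magnum /v1/clusters endpoint.

    Only a handful of commonly used fields are shown; see the Magnum API
    reference for the full list of accepted attributes.
    """
    body = {
        "name": name,
        "cluster_template_id": cluster_template_id,
        "node_count": node_count,
        "master_count": master_count,
    }
    if keypair is not None:
        body["keypair"] = keypair
    if labels is not None:
        body["labels"] = labels
    return body


# Illustrative example: roughly the request the CLI would make.
payload = build_cluster_request(
    name="test-cluster",
    cluster_template_id="k8s-v1.27.4",  # template name or UUID
    node_count=4,
    keypair="admin_key",
)
print(json.dumps(payload, indent=2))
```

You would send this body to the Magnum endpoint from your Keystone service catalog; the tab-based methods below do all of this for you.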
=== "OpenStack CLI"
The OpenStack CLI is the easiest way to create a Kubernetes cluster from
your terminal directly. You can use the `openstack coe cluster create`
command to create a Kubernetes cluster with the Cluster API driver for Magnum.
Before you get started, you'll have to make sure that you have the cluster
templates you want to use available in your environment. You can create
them using the OpenStack CLI:
```bash
export OS_DISTRO=ubuntu # you can change this to "flatcar" if you want to use Flatcar
for version in v1.24.16 v1.25.12 v1.26.7 v1.27.4; do \
[[ "${OS_DISTRO}" == "ubuntu" ]] && IMAGE_NAME="ubuntu-2204-kube-${version}" || IMAGE_NAME="flatcar-kube-${version}"; \
curl -LO https://object-storage.public.mtl1.vexxhost.net/swift/v1/a91f106f55e64246babde7402c21b87a/magnum-capi/${IMAGE_NAME}.qcow2; \
openstack image create ${IMAGE_NAME} --disk-format=qcow2 --container-format=bare --property os_distro=${OS_DISTRO} --file=${IMAGE_NAME}.qcow2; \
openstack coe cluster template create \
--image $(openstack image show ${IMAGE_NAME} -c id -f value) \
--external-network public \
--dns-nameserver 8.8.8.8 \
--master-lb-enabled \
--master-flavor m1.medium \
--flavor m1.medium \
--network-driver calico \
--docker-storage-driver overlay2 \
--coe kubernetes \
--label kube_tag=${version} \
k8s-${version};
done;
```
Once you've got a cluster template, you can create a cluster using the
OpenStack CLI:
```console
$ openstack coe cluster create --cluster-template <cluster-template-name> <cluster-name>
```
You'll be able to view the status of the deployment using the OpenStack CLI:
```console
$ openstack coe cluster show <cluster-name>
```
=== "OpenStack Horizon"
The OpenStack Horizon dashboard is the easiest way to create a Kubernetes
using a simple web interface. In order to get started, you can review the
list of current cluster templates in your environment by navigating using
the left sidebar to *Project* > *Container Infra* > *Cluster Templates*.

In order to launch an new cluster, you will need to navigate to *Project* >
*Container Infra* > *Clusters* and click on the *Launch Cluster* button.

There is a set of required fields that you will need to fill out in order
to launch a cluster, the first of which are related to it's basic
configuration, the required fields are:
* **Cluster Name**
The name of the cluster that will be created.
* **Cluster Template**
The cluster template that will be used to create the cluster.
* **Keypair**
The SSH key pair that will be used to access the cluster.
In this example, we're going to create a cluster with the name of
`test-cluster`, running Kuberentes 1.27.3 so using the `k8s-v1.27.3`
cluster template, and using the `admin_key` SSH key pair.

The next step is deciding on the size of the cluster and selecting if auto
scaling will be enabled for the cluster. The required fields are:
* **Number of Master Nodes**
The number of master nodes that will be created in the cluster.
* **Flavor of Master Nodes**
The flavor of the master nodes that will be created in the cluster.
* **Number of Worker Nodes**
The number of worker nodes that will be created in the cluster.
* **Flavor of Worker Nodes**
The flavor of the worker nodes that will be created in the cluster.
In addition, if you want to enable auto scaling, you will need to provide the
following information:
* **Auto-scale Worker Nodes**
Whether or not to enable auto scaling for the worker nodes.
* **Minimum Number of Worker Nodes**
The minimum number of worker nodes that will be created in the cluster,
the auto scaler will not scale below this number even if the cluster is
under utilized.
* **Maximum Number of Worker Nodes**
The maximum number of worker nodes that will be created in the cluster,
the auto scaler will not scale above this number even if the cluster is
over utilized.
In this example, we're going to create a cluster with 3 master node and 4
worker nodes, using the `m1.medium` flavor for both the master and worker
nodes, and we will enable auto scaling with a minimum of 2 worker nodes and
a maximum of 10 worker nodes.

The next step is managing the network configuration of the cluster. The
required fields are:
* **Enable Load Balancer for Master Nodes**
This is required to be **enabled** for the Cluster API driver for Magnum
to work properly.
* **Create New Network**
This will determine if a new network will be created for the cluster or if
an existing network will be used. It's useful to use an existing network
if you want to attach the cluster to an existing network with other
resources.
* **Cluster API**
This setting controls if the API will get a floating IP address assigned
to it. You can set this to _Accessible on private network only_ if you
are using an existing network and don't want to expose the API to the
public internet. Otherwise, you should set it to _Accessible on the public
internet_ to allow access to the API from the external network.
In this example, we're going to make sure we have the load balancer enabled
for the master nodes, we're going to create a new network for the cluster,
and we're going to make sure that the API is accessible on the public internet.

For the next step, we need to decide if we want to enable auto-healing for
the cluster which automatically detects nodes that are unhealthy and
replaces them with new nodes. The required fields are:
* **Automatically Repair Unhealthy Nodes**
Whether or not to enable auto-healing for the cluster.
In this example, we're going to enable auto-healing for the cluster since it
will help keep the cluster healthy.

Finally, you can override labels for the cluster in the _Advanced_ section,
we do not recommend changing these unless you know what you're doing. Once
you're ready, you can click on the _Submit_ button to create the cluster.
The page will show your cluster being created.

If you click on the cluster, you'll be able to track the progress of the
cluster creation, more specifically in the _Status Reason_ field, seen below:

Once the cluster is created, you'll be able to see the cluster details,
including the health status as well:

At this point, you should have a ready cluster and you can proceed to the
[Accessing](#accessing) section to learn how to access the cluster.
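Whichever method you use, cluster creation is asynchronous: the cluster reports `CREATE_IN_PROGRESS` until it settles at `CREATE_COMPLETE` (or `CREATE_FAILED`). If you are scripting against the driver, a small polling helper like the following sketch can be handy. The function is illustrative, not part of the driver; it takes any callable that returns the current Magnum status string, for example one that shells out to `openstack coe cluster show <cluster-name> -f value -c status`:

```python
import time


def wait_for_cluster(get_status, timeout=1800, interval=30):
    """Poll get_status() until the cluster settles or the timeout expires.

    get_status is any zero-argument callable returning the Magnum status
    string for the cluster (e.g. "CREATE_IN_PROGRESS").
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status == "CREATE_COMPLETE":
            return True
        if status == "CREATE_FAILED":
            raise RuntimeError("cluster creation failed")
        time.sleep(interval)
    raise TimeoutError("timed out waiting for cluster")
```

A `timeout` well above the typical 5-minute deployment leaves headroom for the slow-image and slow-network cases described in the note above.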
### Accessing
In order to access the Kubernetes cluster, you will need a `KUBECONFIG` file
generated by the Cluster API driver for Magnum. There are several ways to
request one; we cover a few of them in this section.

=== "OpenStack CLI"

    You can use the OpenStack CLI to request a `KUBECONFIG` file for a
    Kubernetes cluster with the `openstack coe cluster config` command:

    ```console
    $ openstack coe cluster config <cluster-name>
    ```
### Upgrading
The Cluster API driver for Magnum supports upgrading Kubernetes clusters to
any patch release within the same minor series or to the next minor release.
The upgrade is performed in-place: the nodes of the existing cluster are
replaced in a rolling fashion rather than a new cluster being created.

!!! note

    You must have an operating system image for the new Kubernetes version
    available in Glance before upgrading the cluster. See the [Images
    documentation](images.md) for more information.

In order to upgrade a cluster, you must have a cluster template pointing at
the image for the new Kubernetes version, with its `kube_tag` label updated
to the new Kubernetes version.
=== "OpenStack CLI"
Once you have this cluster template, you can trigger an upgrade by using the
OpenStack CLI:
```console
$ openstack coe cluster upgrade <cluster-name> <cluster-template-name>
```
### Node group role
Roles can be used to show the purpose of a node group, and multiple node groups can be given the same role if they share a common purpose.
```console
$ openstack coe nodegroup create kube test-ng --node-count 1 --role test
```
When listing node groups, the role may be used as a filter:
```console
$ openstack coe nodegroup list kube --role test
+--------------------------------------+---------+-----------+--------------------------------------+------------+-----------------+------+
| uuid                                 | name    | flavor_id | image_id                             | node_count | status          | role |
+--------------------------------------+---------+-----------+--------------------------------------+------------+-----------------+------+
| c8acbb1f-2fa3-4d1f-b583-9a2df1e269d7 | test-ng | m1.medium | ef107f29-8f26-474e-8f5f-80d269c7d2cd | 1          | CREATE_COMPLETE | test |
+--------------------------------------+---------+-----------+--------------------------------------+------------+-----------------+------+
```
The node group role defaults to "worker" if unset, and the only reserved
role is "master". Role information is surfaced within Kubernetes as node
labels.
```console
$ kubectl get nodes
NAME                                          STATUS   ROLES                  AGE     VERSION
kube-7kjbp-control-plane-vxtrz-nhjr2          Ready    control-plane,master   3d      v1.25.3
kube-7kjbp-default-worker-infra-hnk8x-v6cp9   Ready    worker                 2d19h   v1.25.3
kube-7kjbp-test-ng-infra-b8yux-3v6fd          Ready    test                   5m      v1.25.3
```
This information can be used for scheduling, using a node selector.
```yaml
nodeSelector:
  # node-role.kubernetes.io/ROLE_NAME: ""
  node-role.kubernetes.io/test: ""
```
The label `node.cluster.x-k8s.io/nodegroup` is also available for selecting a specific node group.
```yaml
nodeSelector:
  # node.cluster.x-k8s.io/nodegroup: "NODEGROUP_NAME"
  node.cluster.x-k8s.io/nodegroup: "test-ng"
```
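As a fuller example, here is a minimal Pod manifest that pins a workload to the `test-ng` node group created above; the Pod name and container image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nodegroup-demo   # illustrative name
spec:
  nodeSelector:
    node.cluster.x-k8s.io/nodegroup: "test-ng"
  containers:
    - name: demo
      image: nginx:stable   # illustrative image
```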
## Reference
The following blog post may also be useful:

- [Kubernetes Cluster Autoscaler with Magnum CAPI Driver](https://satishdotpatel.github.io/kubernetes-cluster-autoscaler-with-magnum-capi/) by Satish Patel