chore: Kubernetes not k8s or kubernetes (#25)

Rémy-Christophe Schermesser, commit 94786096ab (parent c3e9b6399c)

## Introduction
In this section we will learn what a pod is, deploy your first container, configure Kubernetes, and interact with Kubernetes on the command line.
The base job of Kubernetes is to schedule `pods`. Kubernetes will choose how and where to schedule them. You can also see a `pod` as an object that requests some CPU and RAM. Kubernetes will take those requirements into account in its scheduling.
But it works under the base assumption that a `pod` can be killed at any time. So keep in mind that a `pod` is **mortal** and it **will** be destroyed at some point.
It's a stateless python JSON API that answers on:
* `/info`
* `/health`
Here is our first manifest for Kubernetes:
```yml
apiVersion: v1
kind: Pod
metadata:
  name: simple-pod
spec:
  containers:
    - name: simple-pod
      image: mhausenblas/simpleservice:0.5.0
```
A Kubernetes manifest describes a desired state. We do not write the steps to reach this state; Kubernetes handles that for us.
Let's have a look at the fields:

* `apiVersion`: the version of the Kubernetes API we will be using, `v1` here
* `kind`: what resource this object represents
* `metadata`: some metadata about this `pod`, more on it later
* `spec`: specification of the desired behavior of this pod
  * `containers`: the list of containers in this pod
    * `name`: the name of the container
    * `image`: which image to start
Let's `apply` this manifest to Kubernetes. This will tell Kubernetes to create the `pod` and run it.
```bash
$ kubectl apply -f 05-pods/01-simple-pod.yml
pod "simple-pod" created
```
We could also have used `kubectl create -f ...`, but it's better to have a declarative approach in Kubernetes rather than an imperative one, [see](https://medium.com/bitnami-perspectives/imperative-declarative-and-a-few-kubectl-tricks-9d6deabdde).
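A quick way to feel the difference is to run both commands twice (a sketch, reusing the pod above):

```bash
# `apply` is idempotent: re-running it leaves the existing pod as-is
kubectl apply -f 05-pods/01-simple-pod.yml

# `create` is imperative: it fails with "AlreadyExists" if the pod is already there
kubectl create -f 05-pods/01-simple-pod.yml
```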
## `kubectl get`
Now list all the `pods` running in Kubernetes. `get` is the `ls` of Kubernetes.
```bash
$ kubectl get pod
```

You can also `curl` the pod directly on its IP, for example on the `/info` route:

```bash
$ curl 172.17.0.4:9876/info
{"host": "172.17.0.4:9876", "version": "0.5.0", "from": "172.17.0.1"}
```
Kubernetes has a useful add-on, a web dashboard. It's included by default in minikube. You can start it with:
```bash
minikube dashboard
```
## Exercises
1. Deploy a `pod` containing nginx. The image name is `nginx`, see: <https://hub.docker.com/_/nginx/>
2. Do you think you can access the pod `simple-service` from outside of Kubernetes, *without changing the manifest*?
## Clean up

## Introduction
In this section we will learn how to name things in Kubernetes, and how to find them again.
Labels are the way to organize objects in Kubernetes. They are lists of key/value pairs.
Annotations are a way to mark objects in Kubernetes; they are also lists of key/value pairs.
They seem the same. The major difference is that you can query Kubernetes based on labels.
On the other hand, annotations are not limited to the character set allowed for labels.
Valid label keys have two segments: an optional prefix and a name, separated by a slash `/`. The name segment is required and must be 63 characters or less, beginning and ending with an alphanumeric character `[a-z0-9A-Z]`, with dashes `-`, underscores `_`, dots `.`, and alphanumerics between. The prefix is optional. If specified, the prefix must be a DNS subdomain: a series of DNS labels separated by dots `.`, not longer than 253 characters in total, followed by a slash `/`.
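For example, a `metadata` block with valid labels and annotations could look like this (the keys are made up for illustration):

```yml
metadata:
  name: simple-pod
  labels:
    app: simple-pod               # name-only key
    example.com/team: backend     # key with a DNS-subdomain prefix
  annotations:
    example.com/build-info: "branch=main commit=abc123"
```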
```bash
$ kubectl get pods -l app=nginx
NAME      READY     STATUS    RESTARTS   AGE
nginx     1/1       Running   0          1m
```
We call those queries `selectors` in the Kubernetes jargon. We will use them later on with deployments.
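As a sketch, here is what selector queries look like with `kubectl` (the label values are assumed):

```bash
# equality-based selector on two labels at once
kubectl get pods -l app=nginx,env=prod

# set-based selector: pods whose label `env` is either `prod` or `staging`
kubectl get pods -l 'env in (prod,staging)'
```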
## Exercises

In this section you will learn how to deploy a stateless application with multiple replicas and scale it.
Managing pods manually is doable, but what if you want to deploy the same one multiple times?
Of course you can copy/paste the yaml files and `apply` them. But remember, pods are **mortal**, so Kubernetes can kill them whenever it feels like it.
So you can either watch your Kubernetes cluster yourself and recreate pods when needed, or you can use a `deployment`.
## First deployment
Let's have a look at the fields of the manifest (a full sketch follows the list):

* `apiVersion`: the version of the Kubernetes API we will be using, `apps/v1` here
* `kind`: a `deployment` has the kind `Deployment`
* `spec`:
  * `replicas`: the number of pods this `deployment` will create
  * `template`: the template of the pods
    * `spec`: the `spec` of the pods
      * `containers`:
        * `image`: the name of the image; here we will use version "0.4.0"
        * `ports`: the list of ports to expose internally in the Kubernetes cluster
          * `containerPort`: the kind of port we want to expose, here a `containerPort`. So our container will expose one port, `9876`, in the cluster.
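Putting those fields together, the manifest looks roughly like this (a sketch: the labels and selector are inferred from the `service` section later, the container name from the `kubectl set image` command below):

```yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: simple-deployment
  template:
    metadata:
      labels:
        app: simple-deployment
    spec:
      containers:
        - name: simple-service
          image: mhausenblas/simpleservice:0.4.0
          ports:
            - containerPort: 9876
```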
Apply the deployment:
```bash
$ kubectl get deployment
NAME                DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
simple-deployment   2         2         2            2           2m
```
Firstly, Kubernetes created a `deployment`. We see a lot of 2s. It is the number of replicas that are available. Let's have a look at the pods we have running:
```bash
$ kubectl get pod
NAME                                 READY     STATUS    RESTARTS   AGE
[...]
simple-deployment-5f7c895db4-wt9j7   1/1       Running   0          1m
```
The `deployment` created 2 pods for us, the number we put in `replicas`. We see that the pods have unique names, prefixed with the name of the deployment, `simple-deployment`.
Did Kubernetes create something else for us? Let's have a look:
```bash
$ kubectl get all
```

We see 3 things; you might also have a `ClusterIP` section, ignore it for now:

* `pod`: named `pod/[...]`
* `deployment`: named `deployment.apps/[...]`
* `replicaset`: named `replicaset.apps/[...]`
So in fact Kubernetes created more kinds of objects than expected.
We won't go into the details of what a [`ReplicaSet`](https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/) is; just keep in mind that it ensures that a specified number of pods are running at any time.
## Scale up
Do not forget the `kubectl logs [...]` command.
Change the number of replicas back to `2`, reapply, and see what happens.
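Editing the manifest and re-applying is the declarative way to scale. For a quick experiment, the imperative `kubectl scale` achieves the same result:

```bash
kubectl scale deployment simple-deployment --replicas=2
```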
We know how to scale a deployment up and down, but how can we deploy a new version of the application? To achieve this, we need to tell Kubernetes to update the image we are using in our `deployment`:
```bash
$ kubectl set image deployment/simple-deployment simple-service=mhausenblas/simpleservice:0.5.0
```

Remember the command `kubectl describe deployment`.
1. Deploy multiple nginx. The image name is `nginx`, see: <https://hub.docker.com/_/nginx/>
2. Play with the scaling up/down & the deployment of new versions
3. Do you think you can access your `deployment` of nginx from outside of Kubernetes, *without changing the manifest*?
## Clean up

```bash
$ kubectl apply -f 08-service/01-simple-deployment.yml
deployment.apps "simple-deployment" created
```
Now start another container. We will use it to see what we can access internally inside Kubernetes:
Apply the pod:
```bash
root@bash:/# apt update && apt install dnsutils curl
[...]
```
You now have a shell inside a Kubernetes pod running in your cluster. Leave this console open so you can type commands.
Try to curl one of the pods created by the deployment above. How can you access the `deployment` **without** targeting a specific `pod`?
Ok, now let's create our first service (08-service/03-internal-service.yml):
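A minimal sketch of such an internal `service` (the name and ports are assumptions):

```yml
apiVersion: v1
kind: Service
metadata:
  name: simple-internal-service
spec:
  ports:
    - port: 80
      targetPort: 9876
  selector:
    app: simple-deployment
```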
The `selector` part:

```yml
selector:
  app: simple-deployment
```
is central to Kubernetes. It is with those fields that you will tell Kubernetes which pods to expose through this `service`.
Apply the service:
Try to curl the `/info` url; remember the `ports` we chose in the `service`.
Can you access this service from the outside of Kubernetes?
The answer is no, it's not possible. To do this you need an `ingress`. Ingress means "entering into".
Let's have a look at the manifest:

* `annotations`:
  * `nginx.ingress.kubernetes.io/ssl-redirect`: to fix a redirect, see [this issue](https://github.com/kubernetes/ingress-nginx/issues/1567). This is only used if you use the nginx ingress controller.
* `spec`:
  * `backend`: the default backend to redirect all the requests to
    * `serviceName`: the name of the Kubernetes `service` to redirect the traffic to
    * `servicePort`: the port of the service
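A sketch of such an ingress, using the API version current for the Kubernetes releases this course targets (the service name and port are assumptions, matching the service sketch above):

```yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: simple-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  backend:
    serviceName: simple-internal-service
    servicePort: 80
```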
Apply the ingress:
With this manifest we have a `deployment` that manages pods, a `service` that gives access to the pods, and an `ingress` that gives access to the service from outside the cluster.
## Global overview
You have seen a lot of different `kinds` of Kubernetes objects; let's take a step back and see how they interact with each other:
(Diagram: outside traffic enters through the `ingress`, which routes it to the `service`, which load-balances across the pods managed by the `deployment`.)

Wait a bit and access the logs of the pod created by the cron.
## `Job`
If you need to run a one-time job, you can use the `Job` in Kubernetes. In fact, the `CronJob` starts a `Job` for you at each scheduled interval.
```yml
apiVersion: batch/v1
kind: Job
```
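The rest of the manifest follows the same shape as a pod `spec`; a minimal sketch (the name, image, and command are placeholders):

```yml
apiVersion: batch/v1
kind: Job
metadata:
  name: simple-job
spec:
  template:
    spec:
      containers:
        - name: simple-job
          image: busybox
          command: ["/bin/sh", "-c", "echo hello from the job"]
      restartPolicy: Never   # a Job requires Never or OnFailure here
```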
## Introduction
To be able to handle pods correctly, Kubernetes needs to know how they are behaving. For this, it needs a way to learn the state of the containers running inside a given pod. The creators of Kubernetes went for "black box" testing: Kubernetes doesn't interact directly with the containers; instead, each container declares how Kubernetes can learn its state.
You can define two probes for Kubernetes to know the state of your container: "liveness" and "readiness" probes.
## Liveness probe
The liveness probe is there to detect if a container is still alive: that is, the container is not in a broken state, a deadlock, or anything related. This is always useful. It helps Kubernetes know if your container is alive, so it can take decisions based on that, like restarting it.
## Readiness probe
The readiness probe is there to detect if a container is ready to serve traffic. It is useful for controlling when your container starts receiving external traffic from Kubernetes, most of the time for an API.
## Defining a probe
Both liveness and readiness probes have the same configuration. You have three ways to define a probe:
### Exec probe
The `exec` probe lets you configure a command that Kubernetes will run in your container. If the command exits with a non-zero status, the probe is considered unhealthy:
```yaml
livenessProbe:
  exec:
    command:           # healthy while this command exits 0
      - cat
      - /tmp/healthy
  initialDelaySeconds: 5
  periodSeconds: 5
```

We will see later what `initialDelaySeconds` and `periodSeconds` mean.
### HTTP probe
The `http` probe lets you configure an HTTP endpoint that Kubernetes will call in your container. If this endpoint returns a non-2XX status, the probe is considered unhealthy:
```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 3
  periodSeconds: 5
```

The `http` probe has two mandatory fields, `path` and `port`, and one optional `httpHeaders`.
### TCP probe
The `tcp` probe lets you configure a TCP port that Kubernetes will try to connect to. If it cannot establish a connection, the probe is considered unhealthy:
```yaml
livenessProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 20
```
The `tcp` probe has one mandatory field, `port`. It represents which TCP port Kubernetes will try to connect to.
### `initialDelaySeconds` and `periodSeconds`
The `periodSeconds` field specifies that Kubernetes should perform the probe every `N` seconds. The `initialDelaySeconds` field tells Kubernetes to wait `N` seconds before performing the first probe.
If we take the example:
```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 3
  periodSeconds: 5
```
This probe waits 3 seconds before the first probing. The probing is an HTTP call to `http://localhost:8080/healthz`. After the initial 3-second wait, a probe runs every 5 seconds.
## Impact of probes on Kubernetes
### Liveness probe impact
Look and apply the file [01-liveness-probe.yml](./01-liveness-probe.yml).
Run `kubectl get pods -w` and see what is happening.
The `liveness` of this pod fails (`exit 1`), so Kubernetes detects that the pod is not alive anymore and restarts it. So this probe is key for Kubernetes to know if a pod should be restarted or not.
### Readiness probe impact
```bash
$ kubectl get pods
NAME                                    READY   STATUS    RESTARTS   AGE
readiness-deployment-5dd7f6ff87-jsm9f   0/1     Running   0          2m17s
readiness-deployment-5dd7f6ff87-wnrmg   0/1     Running   0          2m17s
```
If you try to access the service `readiness-service` with `kubectl port-forward service/readiness-service 8000:80`, it won't work: Kubernetes sees that none of the pods are ready, so it won't send traffic to them.
The readiness probe is also used when you do rolling updates: Kubernetes will wait for the pods running the new version to be ready before sending traffic to them.
### Good practices

When you declare a `pod` you can declare what resources it will use. Those resources are how much CPU and RAM your pods will use.
Specifying those will help Kubernetes schedule your pods more efficiently on the available nodes.
For each resource you can define the `limits` and the `requests`.
Specifying `123Mi` (or `128974848`, which means 128974848 bytes) will give that amount of RAM to your container.
## `requests` vs `limits`
Kubernetes lets you configure `requests` and `limits` for each resource.
They are set at the container level:
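Here is a sketch of what that looks like (the values are just examples):

```yml
resources:
  requests:
    cpu: 100m        # 100 millicpu
    memory: 123Mi
  limits:
    cpu: 200m
    memory: 256Mi
```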
Let's see each of those in detail.
### `requests`
The `requests` numbers help Kubernetes schedule your pod on a node where the resources are available.
Let's take an example. You have two nodes:
* node 1 has `100m` CPU available, and `1Gi` RAM available
* node 2 has `1000m` CPU available, and `100Mi` RAM available
1. You start a pod with only a CPU request of `10m`. It can be scheduled on any node, so Kubernetes will take either of them.
1. You start a pod with only a CPU request of `500m`. It can be scheduled only on node 2. Node 1 has only `100m` CPU available.
1. You start a pod with a CPU request of `500m` and a RAM request of `500Mi`. Kubernetes will not be able to schedule your pod, as no node satisfies both requests at the same time. The pod will stay in the state `Unschedulable`.
### `limits`
The `limits` is the maximum utilization of a resource that Kubernetes will allow for a given pod. If your pod goes above this limit, it will be restarted.
## Note on resources
If running all your programs requires more processing power than you have available, they will simply run slower.
On the other hand, if running all your programs requires more RAM than you have available, your computer will randomly kill processes that ask for too much RAM (at least on Linux).
In Kubernetes it's a bit the same. You can ask for `limits` that are higher than the node's CPU, but not than its RAM.
Furthermore, you can ask for less than one CPU. But what does that mean? When you ask for 100 millicpu, Kubernetes will give you 100 milliseconds of CPU time per second. If your container wants to use more, it'll be throttled for the remaining 900 milliseconds of that second.
## Good practices
Only the `requests` are taken into account for scheduling. So it's possible for a Kubernetes node to go into ["overcommit"](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/resource-qos.md#qos-classes): the sum of resources in use can be above the physical resources of the node. In this case, Kubernetes will look for pods to terminate. Its algorithm is to look at pods that use more than they requested and terminate them. Pods with no `requests` at all are always using more than they requested, so they will be the first to be terminated. Other candidates are the pods that have gone over their request but are still under their limit.
Unless your applications are designed to use multiple cores, it is usually a best practice to keep the CPU request at "1" or below.

The first two are pod affinity/anti-affinity, the last one is node affinity.
Disclaimer: The `affinity` feature is very powerful, and we will only have a look at a part of it, the inter-pod (anti-)affinity.
Pod affinity and anti-affinity are declared at the pod level and hint the Kubernetes scheduler to run (or not run) some pods on the same node.
Running related pods on the same host can improve network performance. A use case would be having your application and its cache on the same node, to reduce latency.
Hinting Kubernetes to run pods on different nodes is a good way to improve fail-over. For example, if you have 3 nodes and an application replicated 3 times, it would be unwise to have all the pods running on the same node. With pod anti-affinity you can ask Kubernetes to schedule one pod on each node.
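As a sketch, a pod anti-affinity asking Kubernetes not to co-locate two replicas on the same node could look like this (the label is an assumption):

```yml
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - simple-deployment
        # one pod per hostname, i.e. per node
        topologyKey: kubernetes.io/hostname
```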
### Common configuration
Delete the pods, and try to use `preferredDuringSchedulingIgnoredDuringExecution`.

### Node affinity
Node affinity is very close to pod affinity. Instead of specifying a `podAffinity` you define a `nodeAffinity`. As above, for a complete overview of all the options, have a look at the [specs](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/nodeaffinity.md).
Each resource in Kubernetes can have labels, even nodes. You can see them with `kubectl`:
```bash
kubectl get nodes --show-labels
```
## Introduction
In Kubernetes pods are mortal and can be terminated at any time. When a pod is terminated it is called a [“disruption”](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/).
Disruptions can either be voluntary or involuntary. Involuntary means that it was not something anyone could expect (hardware failure for example). Voluntary means it was initiated by someone or something, like the upgrade of a node, a new deployment, etc.
Defining a “Pod Disruption Budget” helps Kubernetes manage your pods when a voluntary disruption happens: Kubernetes will try to ensure that not too many pods matching a given selector are unavailable at the same time.
## PDB
A PDB is composed of two configurations:
* the `selector` to know on which pods to apply this PDB
* a number, either `minAvailable` or `maxUnavailable`. It can either be a fixed number like `2` in the sketch below, or a percentage like `20%`
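A sketch of a PDB with a fixed `minAvailable` (the names are assumptions; `policy/v1beta1` is the API group current for the Kubernetes releases this course targets):

```yml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: simple-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: simple-deployment
```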
If you want to see the effect of a PDB, you will need a multi-node Kubernetes cluster. As those lines are written, `minikube` is a single-node cluster. To get a local multi-node cluster you can install [kind](https://github.com/kubernetes-sigs/kind).
Use the [configuration file](./kind.yml) provided to create your cluster:

Stop the load generator, again wait a bit, and see how it changes the pods and the HPA.
VPA means `Vertical Pod Autoscaler`. It automatically finds the right resources for pods in a replication controller, deployment, or replica set, based on observed CPU utilization.
This feature is in beta; understand that you can install it in your cluster, but it's not integrated into the standard Kubernetes source code.
If you are adventurous you can try it by:

In this section you will learn how to deploy a stateful application, mysql in this example.
As you know, a `pod` is mortal, meaning it can be destroyed by Kubernetes at any time, and with it its local data, memory, etc. So it's perfect for stateless applications. Of course, in the real world we need a way to store our data, and we need this data to be persistent in time.
So how can we deploy a stateful application with a persistent storage in Kubernetes? Let's deploy a mysql.
## Volumes
We need to review what a volume is before continuing with the deployment of our mysql. As stated above, the disk of a pod is destroyed with it, so the data is lost. For a database, it would be nice if we could keep the data between restarts of the pods. Here comes the `volume`.
We can see a `pod` as something that requests CPU & RAM. We can see a `volume` as something that requests storage on disk. Kubernetes handles a lot of different kinds of volumes - 26 as this hands-on is written - from local disk storage to S3.
Here we will use `PersistentVolumeClaim`, it's an abstraction over the hard drives of the Kubernetes nodes - a fancy name for local hard drive.
Let's create the volume where our mysql data will be stored.
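A minimal sketch of such a claim (the name and size are assumptions):

```yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce    # mounted read-write by a single node
  resources:
    requests:
      storage: 1Gi
```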
```bash
kubectl apply -f 10-volumes/03-simple-mysql-deployment.yml
```
There is a bunch of parameters we haven't seen yet (a sketch of the full manifest follows the list):

* `strategy`: the strategy for updating the pods
  * `type`: `Recreate`. This instructs Kubernetes not to use rolling updates. Rolling updates would not work here, as you cannot have more than one pod running at a time.
* `env`: the list of environment variables to pass to the container
  * `name`: the name of the env variable
  * `value`: the value of the env variable
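A sketch of how these parameters fit into the deployment manifest (the image, password, and claim name are assumptions; the claim matches the sketch above):

```yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-mysql
spec:
  selector:
    matchLabels:
      app: simple-mysql
  strategy:
    type: Recreate               # no rolling updates: at most one pod at a time
  template:
    metadata:
      labels:
        app: simple-mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.6
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password    # placeholder value, do not use in production
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: mysql-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
```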

## Introduction
In this section you will get an overview of other useful Kubernetes features, in order of complexity.
## Namespace
`Namespaces` are the way to support multiple virtual clusters in Kubernetes.
They are intended for use in environments with many users spread across multiple teams, or projects. For clusters with a few to tens of users, you should not need to create or think about `namespaces` at all. Start using `namespaces` when you need the features they provide.
By default, all objects are in the `default` namespace. There is a "hidden" `namespace` where Kubernetes runs services for itself.
Try:
```bash
$ kubectl get all --namespace=kube-system
```
## `kubeval`
It is a tool to validate your Kubernetes YAML files: <https://github.com/garethr/kubeval>
The easiest integration is with `docker run`; if your files are in the directory `kubernetes`:
```bash
docker run -it -v `pwd`/kubernetes:/kubernetes garethr/kubeval kubernetes/**/*
```
## Helm
It is a package manager for Kubernetes: <https://helm.sh/>.
It contains multiple ready-to-use Kubernetes manifests for projects, for example [mysql](https://github.com/helm/charts/tree/master/stable/mysql).
## Stateful Set

## Introduction
This section is a summary, a cheat sheet, of good practices for Kubernetes. It is mostly a summary of previous sections.
## Cheat Sheet
YAML can be a [tricky](https://docs.saltstack.com/en/latest/topics/troubleshooting/yaml_idiosyncrasies.html) format.
We recommend using [`yamllint`](https://github.com/adrienverge/yamllint). Compared to other YAML linters, it has the nice feature of supporting multiple documents in a single file. The file [yamllint](./yamllint) is a good configuration for this tool.
You can also use Kubernetes-specific linters. [kube-score](https://github.com/zegl/kube-score) lints your manifests and enforces good practices. [kubeval](https://github.com/instrumenta/kubeval) also lints the manifests, but only checks whether they are valid.
In Kubernetes 1.13 the option [`--dry-run`](https://kubernetes.io/blog/2019/01/14/apiserver-dry-run-and-kubectl-diff/) appeared on `kubectl`. You can also use this feature to check whether your YAML files are valid for Kubernetes.
### Handle `SIGTERM` signal in your applications
Kubernetes sends this signal when it wants to stop a container. You should listen to it and react appropriately for your application (close connections, save a state, etc.).
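For example, a container whose entrypoint is a shell script could trap the signal (a sketch, not production code):

```bash
#!/bin/sh

shutdown() {
  echo "SIGTERM received, closing connections and saving state..."
  # application-specific cleanup goes here
  exit 0
}
trap shutdown TERM

# main loop of the application
while true; do
  sleep 1
done
```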
### Probes
Define liveness and readiness probes for your deployments.

### PDB

Specify a [PDB](../15-pdb) for your deployments.
## Other good practices
Not directly related to Kubernetes, but still useful:
1. If you are in the cloud, use [`terraform`](https://www.terraform.io/) to configure your clusters.

### What this is
This is a hands-on course to get started with Kubernetes. It starts with the basics and moves up in complexity.

At the end of this course, you should be able to deploy an API in Kubernetes that is accessible from the outside.
### What it's *not*
This is not a course on how to install, manage or deploy a Kubernetes cluster.
Neither is it a course to understand how Kubernetes works internally.
However, if you're interested in this topic, see [Kubernetes The Hard Way](https://github.com/kelseyhightower/kubernetes-the-hard-way).
## What is Kubernetes? What is it used for?
Kubernetes is an open-source system for managing containerized applications across multiple hosts, providing basic mechanisms for deployment, maintenance, and scaling of applications.
Kubernetes has a number of features. It can be seen as:
* a container platform,
* a microservices platform,
* a portable cloud platform, and a lot more.
Kubernetes provides a container-centric management environment. It orchestrates computing, networking, and storage infrastructure on behalf of user workloads. This provides much of the simplicity of Platform as a Service (PaaS) with the flexibility of Infrastructure as a Service (IaaS), and enables portability across infrastructure providers.
## Glossary
* **YAML (yml)**
A markup language that relies on whitespace indentation. All Kubernetes configuration is written using YAML.
You will feel the pain of missing tabs and spaces. Feel free to use a linter, such as <http://www.yamllint.com/>.
Docker uses the resource isolation features of the Linux kernel, such as cgroups.
* **kubectl**
The standard CLI to interact with Kubernetes. We use it a lot in this course.
* **minikube**
A local Kubernetes cluster, useful for testing. We use it a lot in this course.
* **Manifest**
Kubernetes configuration files are called *manifests*. This is a reference to the list or invoice of the passengers or goods being carried by a commercial vehicle or ship (from [wiktionary](https://en.wiktionary.org/wiki/manifest#Noun)).
* **(Kubernetes) objects**
Kubernetes contains a number of abstractions that represent the state of your system: deployed containerized applications and workloads, their associated network and disk resources, and other information about what your cluster is doing. These abstractions are called *objects*, and are represented by a *kind* in the Kubernetes API.
* **(Kubernetes) node**
A node is a worker machine in Kubernetes.
A worker machine may be a VM or physical machine, depending on the cluster. It has the necessary services to run the workloads and is managed by the master components. The services on a node include Docker, `kubelet` and `kube-proxy`.
* **(Kubernetes) cluster**
A set of machines, called nodes, that run containerized applications managed by Kubernetes.
A cluster has several worker nodes and at least one master node.
* **(Kubernetes) master**
The *master* is responsible for managing the cluster. It coordinates all activities in your cluster, such as scheduling applications, maintaining applications' desired state, scaling applications, and rolling out new updates.
A Kubernetes master automatically handles the scheduling of your services across nodes in the cluster. The master's automatic scheduling takes the available resources of each node into account.
## The base building block: pods
