chore: change kubernetes to k8s (#13)

Stéphanie 5 years ago committed by Rémy-Christophe Schermesser
parent 8f3ee5bd3f
commit 695781a252

@ -4,8 +4,8 @@
In this section we will learn how to name things in k8s, and how to find them again.
Labels are the way to organize objects in k8s. Labels are a list of key/value pairs.
Annotations are a way to mark objects in k8s; they are also a list of key/value pairs.
They seem the same. The major difference is that you can query k8s based on labels.
On the other hand, annotations are not limited to the characters allowed in labels.
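As an illustrative sketch (names and values are assumptions), here is a pod carrying one label and one annotation:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: my-app             # labels can be used in queries
  annotations:
    buildVersion: "1.2.3"   # annotations are informational only
spec:
  containers:
    - name: nginx
      image: nginx
```

You can then query pods by label, for example with `kubectl get pods -l app=my-app`, while the annotation only shows up in `kubectl describe pod my-pod`.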
@ -55,4 +55,4 @@ Nothing to see here.
```bash
kubectl delete pod --all
```

@ -42,7 +42,7 @@ Wait a bit and access the logs of the pod created by the cron.
## `Job`
If you need to run a one-time job, you can use the `Job` in k8s. In fact, the `CronJob` will start a `Job` for you at the scheduled interval.
```yml
apiVersion: batch/v1

@ -2,17 +2,17 @@
## Introduction
To be able to handle pods correctly, K8s needs to know how they are behaving. For this it needs a way to know the state of the containers running inside a given pod. The creators of k8s decided on a "black box" approach: "black box" testing means that k8s doesn't interact directly with the containers; each container declares how k8s can know its state.
You can define two probes for k8s to know the state of your container: "liveness" and "readiness" probes.
## Liveness probe
The liveness probe is here to detect if a container is still alive, meaning the container is not in a broken state, a deadlock, or anything similar. This is always useful. It helps k8s know whether your container is alive, so it can take decisions based on that, like restarting it.
## Readiness probe
The readiness probe is here to detect if a container is ready to serve traffic. It is useful for controlling when your container will receive external traffic sent by k8s, most of the time when it is an API.
## Defining a probe
@ -24,7 +24,7 @@ Both liveness and readiness probes have the same configuration. You have three w
### Exec probe
The `exec` probe lets you configure a command that k8s will run in your container. If the command exits with a non-zero status, the probe will be considered unhealthy:
```yaml
livenessProbe:
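  # Hypothetical completion of this truncated snippet: the probe runs `cat /tmp/healthy`
  # in the container and is healthy only while that file exists.
  exec:
    command:
      - cat
      - /tmp/healthy
  initialDelaySeconds: 5   # illustrative values
  periodSeconds: 5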
@ -41,7 +41,7 @@ We will see later what `initialDelaySeconds` and `periodSeconds` means.
### HTTP probe
The `http` probe lets you configure an HTTP endpoint that k8s will call in your container. If this endpoint returns a non-2XX status, the probe will be considered unhealthy:
```yaml
livenessProbe:
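  # Hypothetical completion matching the example discussed later in this section:
  # call http://localhost:8080/healthz, first probe after 3s, then every 5s.
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 3
  periodSeconds: 5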
@ -63,7 +63,7 @@ The `http` probe has two mandatory fields `path` and `port` and one optional `ht
### TCP probe
The `tcp` probe lets you configure a TCP port that k8s will try to connect to. If it does not manage to establish a connection, the probe will be considered unhealthy:
```yaml
livenessProbe:
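  # Hypothetical completion: try to open a TCP connection on the given port.
  tcpSocket:
    port: 8080
  initialDelaySeconds: 15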
@ -73,11 +73,11 @@ livenessProbe:
periodSeconds: 20
```
The `tcp` probe has one mandatory field, `port`. It represents which TCP port k8s will try to connect to.
### `initialDelaySeconds` and `periodSeconds`
The `periodSeconds` field specifies that k8s should perform the probe every `N` seconds. The `initialDelaySeconds` field tells k8s that it should wait `N` seconds before performing the first probe.
If we take the example:
@ -92,7 +92,7 @@ livenessProbe:
This probe will wait 3 seconds before the first check. The check is an HTTP call to `http://localhost:8080/healthz`. After the initial 3-second wait, the probe runs every 5 seconds.
## Impact of probes on k8s
### Liveness probe impact
@ -100,7 +100,7 @@ Look and apply the file [01-liveness-probe.yml](./01-liveness-probe.yml).
Run `kubectl get pods -w` and see what is happening.
The `liveness` probe of this pod fails (`exit 1`), so k8s detects that the pod is not alive anymore and restarts it. This probe is therefore key for k8s to know whether a pod should be restarted or not.
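For reference, a pod whose liveness probe always fails might look like the following sketch (name, image and timings are illustrative, not the content of the actual file):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
    - name: app
      image: busybox
      args: ["sh", "-c", "sleep 3600"]
      livenessProbe:
        exec:
          command: ["sh", "-c", "exit 1"]  # always unhealthy, so k8s keeps restarting the pod
        initialDelaySeconds: 3
        periodSeconds: 5
```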
### Readiness probe impact
@ -118,9 +118,9 @@ readiness-deployment-5dd7f6ff87-jsm9f 0/1 Running 0 2m17s
readiness-deployment-5dd7f6ff87-wnrmg 0/1 Running 0 2m17s
```
If you try to access the service `readiness-service` with `kubectl port-forward service/readiness-service 8000:80`, it won't work: k8s sees that none of the pods are ready, so it won't send traffic to them.
The readiness probe is also used when you do rolling updates: k8s will wait for the pods with the new version to be ready before sending traffic to them.
### Good practices

@ -4,7 +4,7 @@
When you declare a `pod` you can specify what resources it will use: how much CPU and RAM.
Specifying those will help k8s schedule your pods more efficiently on the available nodes.
For each resource you can define the `limits` and the `requests`.
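As a sketch, `requests` and `limits` are declared per container; the names and values below are purely illustrative:

```yaml
containers:
  - name: api              # hypothetical container
    image: my-api:1.0
    resources:
      requests:
        cpu: 100m
        memory: 123Mi
      limits:
        cpu: 200m
        memory: 256Mi
```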
@ -18,24 +18,24 @@ Specifying `123Mi` (or `128974848`, which means 128974848 bytes), will give that
## `requests` vs `limits`
k8s lets you configure the `requests` and the `limits` for each resource.
### `requests`
The `requests` is the value that helps k8s schedule your pod on a node where the resources are available.
Let's take an example. You have 3 nodes:
* node 1 has `100m` CPU available, and `1Gi` RAM available
* node 2 has `1000m` CPU available, and `100Mi` RAM available
1. You start a pod with only a CPU request of `10m`. It can be scheduled on any node, so k8s will take either of them.
1. You start a pod with only a CPU request of `500m`. It can be scheduled only on node 2. Node 1 has only `100m` CPU available.
1. You start a pod with a CPU request of `500m` and a RAM request of `500Mi`. k8s will not be able to schedule your pod as no node has both requests at the same time. The pod will be in the state `Unschedulable`.
### `limits`
The `limits` is the maximum utilization of a resource that k8s will allow for a given pod. If your pod goes above this limit, it will be restarted.
## Good practices

@ -14,7 +14,7 @@ Pod affinity and anti-affinity are declared at the pod level and hints k8s sched
Running certain pods on the same host can improve network performance. A use case would be to have your application and its cache on the same node, to reduce latency.
Hinting k8s to run multiple pods on different nodes is a good way to improve fail-over. For example, if you have 3 nodes and an application replicated 3 times, it would be unwise to have all the pods running on the same node. With pod anti-affinity you can ask k8s to schedule one pod on each node.
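For instance, a pod anti-affinity rule that spreads pods carrying the label `app: my-app` across nodes could look like this sketch (the label and topology key are assumptions):

```yaml
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: my-app
        topologyKey: kubernetes.io/hostname
```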
### Common configuration
@ -81,7 +81,7 @@ kubectl get nodes --show-labels
You can add labels to a node with:
```bash
kubectl label nodes [nodeName] gpu=yes
```
Here we add the label `gpu` with value `yes` to the given node.
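Pods can then target that node, for example with a `nodeSelector` (a minimal sketch):

```yaml
spec:
  nodeSelector:
    gpu: "yes"
```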

@ -2,11 +2,11 @@
## Introduction
In k8s pods are mortal and can be terminated at any time. When a pod is terminated it is called a [“disruption”](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/).
Disruptions can either be voluntary or involuntary. Involuntary means that it was not something anyone could expect (hardware failure for example). Voluntary means it was initiated by someone or something, like the upgrade of a node, a new deployment, etc.
Defining a “Pod Disruption Budget” helps k8s manage your pods when a voluntary disruption happens. k8s will try to ensure that not too many pods, matching a given selector, are unavailable at the same time.
## PDB
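A minimal `PodDisruptionBudget` manifest might look like this sketch (the name, selector and threshold are assumptions):

```yaml
apiVersion: policy/v1beta1   # policy/v1 on newer clusters
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 2            # keep at least 2 pods available during voluntary disruptions
  selector:
    matchLabels:
      app: my-app
```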

@ -82,7 +82,7 @@ kubectl apply -f 10-volumes/03-simple-mysql-deployment.yml
There are a few parameters we haven't seen yet (a sketch follows the list below):
* `strategy`: the strategy of updates of the pods
* `type`: `Recreate`. This instructs k8s to not use rolling updates. Rolling updates will not work, as you cannot have more than one Pod running at a time.
* `env`: the list of environment variables to pass to the container
* `name`: the name of the env variable
* `value`: the value of the env variable
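As an illustrative excerpt (the image and values are assumptions; see `10-volumes/03-simple-mysql-deployment.yml` for the real manifest):

```yaml
spec:
  strategy:
    type: Recreate           # no rolling update: at most one MySQL pod at a time
  template:
    spec:
      containers:
        - name: mysql
          image: mysql:5.6   # hypothetical tag
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password
```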

@ -2,7 +2,7 @@
## Introduction
This section is a cheat sheet of good practices for k8s. It is mostly a summary of previous sections.
## Cheat Sheet
@ -27,13 +27,13 @@ YAML can be a [tricky](https://docs.saltstack.com/en/latest/topics/troubleshooti
We recommend using [`yamllint`](https://github.com/adrienverge/yamllint). Compared to other YAML linters, it has the nice feature of supporting multiple documents in a single file. The file [yamllint](./yamllint) is a good configuration for this tool.
You can also use k8s-specific linters. [kube-score](https://github.com/zegl/kube-score) lints your manifests and enforces good practices. [kubeval](https://github.com/instrumenta/kubeval) also lints the manifests, but only checks that they are valid.
In k8s 1.13 the option [`--dry-run`](https://kubernetes.io/blog/2019/01/14/apiserver-dry-run-and-kubectl-diff/) appeared on `kubectl`. You can also use this feature to check whether your YAML files are valid for k8s.
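For example, assuming your manifests live in a `manifests/` directory (paths and flags are illustrative):

```bash
yamllint -c yamllint manifests/
kube-score score manifests/*.yml
kubeval manifests/*.yml
kubectl apply --dry-run -f manifests/
```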
### Handle `SIGTERM` signal in your applications
k8s sends this signal when it wants to stop a container. You should listen for it and react according to your application's needs (close connections, save state, etc.).
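A minimal shell sketch of the idea (a real application would close connections and save its state in its own language):

```bash
#!/bin/sh
# React to SIGTERM: clean up, then exit with a zero status
graceful_shutdown() {
  echo "SIGTERM received, shutting down"
  # close connections, flush buffers, save state...
  exit 0
}
trap graceful_shutdown TERM

# Main loop standing in for the real application
while true; do sleep 1; done
```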
### Probes

@ -105,25 +105,25 @@ fi
### What this is
This is a hands on to start using kubernetes (k8s). It starts from the basics and moves up in complexity.
At the end of this hands on you should be able to deploy an API in k8s that is accessible from the outside.
### What this is *not*
This is not a hands on about how to install/manage/deploy a k8s cluster.
Nor is this a hands on to understand how k8s works internally.
If this topic interests you, see [Kubernetes the hard way](https://github.com/kelseyhightower/kubernetes-the-hard-way).
## What is k8s? What is it used for?
k8s is an open source system for managing containerized applications across multiple hosts, providing basic mechanisms for deployment, maintenance, and scaling of applications.
k8s has a number of features. It can be thought of as:
* a container platform,
* a microservices platform,
* a portable cloud platform and a lot more.
k8s provides a container-centric management environment. It orchestrates computing, networking, and storage infrastructure on behalf of user workloads. This provides much of the simplicity of Platform as a Service (PaaS) with the flexibility of Infrastructure as a Service (IaaS), and enables portability across infrastructure providers.
## Glossary
@ -154,31 +154,31 @@ The standard cli to interact with k8s, we will use it a lot.
* **minikube**
A local k8s, useful for testing. We will use it during this hands on.
* **manifest**
k8s configuration files are called `manifests`, in reference to the `manifest` of a ship: a list or invoice of the passengers or goods being carried by a commercial vehicle or ship (from [wiktionary](https://en.wiktionary.org/wiki/manifest#Noun)).
* **(k8s) objects**
k8s contains a number of abstractions that represent the state of your system: deployed containerized applications and workloads, their associated network and disk resources, and other information about what your cluster is doing. These abstractions are called `objects` and are represented by a `kind` in the k8s API.
* **(k8s) cluster**
A set of machines, called nodes, that run containerized applications managed by k8s.
A cluster has several worker nodes and at least one master node.
* **(k8s) master**
The master is responsible for managing the cluster. It coordinates all activities in your cluster, such as scheduling applications, maintaining applications' desired state, scaling applications, and rolling out new updates.
The k8s master automatically handles scheduling your services across the nodes in the cluster. The master's automatic scheduling takes into account the available resources on each node.
* **(k8s) node**
A node is a worker machine in k8s.
A worker machine may be a VM or a physical machine, depending on the cluster. It has the services necessary to run pods and is managed by the master components. The services on a node include Docker, `kubelet` and `kube-proxy`.
