Merge branch 'master' into fix/doc

Rémy-Christophe Schermesser 2019-05-23 11:43:22 +02:00 committed by GitHub
commit 0c51e21fa6
11 changed files with 50 additions and 26 deletions


@@ -10,11 +10,7 @@ But it has a base assumption that a `pod` can be killed whenever it wants to. So
## First pod
-Let's start to deploy the docker image [mhausenblas/simpleservice](https://hub.docker.com/r/mhausenblas/simpleservice/). It's a stateless python JSON API that answers on:
-* `/env`
-* `/info`
-* `/health`
+Let's start to deploy the docker image [mhausenblas/simpleservice](https://hub.docker.com/r/mhausenblas/simpleservice/). It's a stateless python JSON API that answers on multiple endpoints. In this hands-on we will only use the `/health`.
Here is our first manifest for Kubernetes:


@@ -24,6 +24,8 @@ secret "mysecret" created
You can reference a secret from a pod, either per env variable or mounting a volume containing a secret.
## Reference the secret by mounting it as a volume
Here we mount the secret `mysecret` to the path `/etc/foo` inside the pod:
```yml
@@ -45,6 +47,17 @@ spec:
secretName: mysecret
```
You can look up the secrets in the pod by connecting to the pod:
```sh
$ kubectl exec -ti redis-with-volume-secrets /bin/bash
root@redis-with-volume-secrets:/data# cd /etc/foo/
root@redis-with-volume-secrets:/etc/foo# ls
password username
```
## Reference the secret by using environmental variables
Here we bind the value `username` from the secret `mysecret` to the env variable `SECRET_USERNAME`,
`password` from the secret `mysecret` to the env variable `SECRET_PASSWORD`:
@@ -70,6 +83,16 @@ spec:
key: password
```
You can look up the secrets in the pod by connecting to the pod:
```sh
$ kubectl exec -ti redis-with-env-secrets /bin/bash
root@redis-with-env-secrets:/data# echo $SECRET_USERNAME
admin
root@redis-with-env-secrets:/data# echo $SECRET_PASSWORD
1f2d1e2e67df
```
Careful, if you change a secret after starting the pods, it won't update the pods. So you need to restart them.
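Since these examples run bare pods (not managed by a deployment), one way to pick up a changed secret is to delete the pods and re-apply their manifests. A hedged sketch, reusing the pod names from above (the manifest file name is an assumption):

```sh
# Recreate the pods so they read the updated secret
$ kubectl delete pod redis-with-volume-secrets redis-with-env-secrets
$ kubectl apply -f <your-pod-manifest>.yml
```

For pods managed by a deployment, `kubectl rollout restart deployment <name>` achieves the same without manual deletion.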
## Clean up


@@ -10,7 +10,8 @@ spec:
    livenessProbe:
      exec:
        command:
-         - exit
-         - "1"
+         - /bin/bash
+         - -c
+         - "exit 1"
      initialDelaySeconds: 5
      periodSeconds: 5


@@ -21,8 +21,9 @@ spec:
    readinessProbe:
      exec:
        command:
-         - exit
-         - "1"
+         - /bin/bash
+         - -c
+         - "exit 1"
      initialDelaySeconds: 5
      periodSeconds: 5
---


@@ -8,11 +8,11 @@ You can define two probes for Kubernetes to know the state of your container: "l
## Liveness probe
-The liveness probe is here to detect if a container is still alive. Meaning, if the container is not in a broken state, in a dead lock, or anything related. This is always usefull. It helps Kubernetes to know if your container is alive or not and so it can take decision based on that, like restarting it.
+The liveness probe is here to detect if a container is still alive. Meaning, if the container is not in a broken state, in a dead lock, or anything related. This is always useful. It helps Kubernetes to know if your container is alive or not and so it can take decision based on that, like restarting it.
## Readiness probe
-The readiness probe is here to detect if a container is ready to serve traffic. It is usefull to configure when your container will receive external traffic sent by Kubernetes. Most of the time, when it's an API.
+The readiness probe is here to detect if a container is ready to serve traffic. It is useful to configure when your container will receive external traffic sent by Kubernetes. Most of the time, when it's an API.
## Defining a probe
@@ -24,7 +24,7 @@ Both liveness and readiness probes have the same configuration. You have three w
### Exec probe
-The `exec` probe let you configure a command that Kubernetes will run in your container. If the command exits with a non zero status the probe will be considered unhealthy:
+The `exec` probe lets you configure a command that Kubernetes will run in your container. If the command exits with a non zero status the probe will be considered unhealthy:
```yml
livenessProbe:
@@ -41,7 +41,7 @@ We will see later what `initialDelaySeconds` and `periodSeconds` means.
### HTTP probe
-The `http` probe let you configure a HTTP endpoint that Kubernetes will call in your container. If this endpoint returns a non 2XX status the probe will be considered unhealthy:
+The `http` probe lets you configure a HTTP endpoint that Kubernetes will call in your container. If this endpoint returns a non 2XX status the probe will be considered unhealthy:
```yml
livenessProbe:
@@ -57,13 +57,13 @@ livenessProbe:
The `http` probe has two mandatory fields `path` and `port` and one optional `httpHeaders`.
-* `path`: let you configure which http path the probe should call.
-* `port`: let you configure which port the probe should connect to.
-* `httpHeaders`: let you configure http headers the probe should send with its call.
+* `path`: lets you configure which http path the probe should call.
+* `port`: lets you configure which port the probe should connect to.
+* `httpHeaders`: lets you configure http headers the probe should send with its call.
### TCP probe
-The `tcp` probe let you configure a TCP port that Kubernetes will try to connect to. If it does not manage to establish a connection the probe will be considered unhealthy:
+The `tcp` probe lets you configure a TCP port that Kubernetes will try to connect to. If it does not manage to establish a connection the probe will be considered unhealthy:
```yml
livenessProbe:


@@ -10,7 +10,7 @@ For each resource you can define the `limits` and the `requests`.
## Resources definition
-The CPU resource is measured in a number of CPU the pod will use for a given amount of time. It can be inferior to 0.
+The CPU resource is measured in a number of CPU the pod will use for a given amount of time. It cannot be inferior to 0.
Specifying `0.5` (or `500m`, which means 500 millicpu), will give half of a CPU to your pod.
The RAM resource is measured in the number of bytes of RAM the pod will use.
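As an illustration (these numbers are ours, not from the repo's manifests), a container requesting half a CPU and 256Mi of RAM, capped at one full CPU and 512Mi, would carry a block like this in its container spec:

```yml
resources:
  requests:
    cpu: 500m      # 0.5 CPU, same as "0.5"
    memory: 256Mi
  limits:
    cpu: "1"
    memory: 512Mi
```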


@@ -54,7 +54,7 @@ spec:
In english words, this configuration means that we want to ensure that pods with the label `run=nginx` will not run on node with the same hostname (`kubernetes.io/hostname`).
-You also have `preferredDuringSchedulingIgnoredDuringExecution` to not require but only hints the scheduler. Carefull the configuration for this is different:
+You also have `preferredDuringSchedulingIgnoredDuringExecution` to not require but only hints the scheduler. Be careful the configuration for this is different:
```yml
apiVersion: v1


@@ -34,7 +34,7 @@ If you want to see the effect of a PDB, you will need a multi-node Kubernetes. A
Use the [configuration file](kind.yml) provided to create your cluster:
```sh
-kind create cluster --config kind.yml
+kind create cluster --config 14-pdb/kind.yml
```
Review and apply the manifests in [01-pdb.yml](01-pdb.yml). Why did we specify a soft anti-affinity?


@@ -11,12 +11,12 @@ spec:
    - name: nginx
      image: nginx
      volumeMounts:
-       - name: data
+       - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: debian
      image: debian
      volumeMounts:
-       - name: data
+       - name: shared-data
          mountPath: /pod-data
      command: ["/bin/sh"]
      args: ["-c", "echo Hello from the debian container > /pod-data/index.html"]


@@ -47,7 +47,7 @@ Let's review some parameters:
Apply it:
```sh
-kubectl apply -f 10-volumes/01-simple-mysql-pv.yml
+kubectl apply -f 01-simple-mysql-pv.yml
```
Now that we have a storage, we need to claim it, make it available for our pods. So we need a `PersistentVolumeClaim`. It is a request for storage by a user. It is similar to a pod. Pods consume node resources and `PersistentVolumeClaim` consume `PersistentVolume` resources.
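For illustration only (the repo's actual 02-simple-mysql-pvc.yml may differ), a minimal claim for 1Gi of the storage above could look like:

```yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce   # volume can be mounted read-write by a single node
  resources:
    requests:
      storage: 1Gi
```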
@@ -68,7 +68,7 @@ spec:
The manifest is pretty similar to the `PersistentVolume`:
```sh
-kubectl apply -f 10-volumes/02-simple-mysql-pvc.yml
+kubectl apply -f 02-simple-mysql-pvc.yml
```
## Stateful application
@@ -76,7 +76,7 @@ kubectl apply -f 10-volumes/02-simple-mysql-pvc.yml
Now let's create the `deployment` of mysql:
```sh
-kubectl apply -f 10-volumes/03-simple-mysql-deployment.yml
+kubectl apply -f 03-simple-mysql-deployment.yml
```
There is a bunch of parameters we haven't seen yet:
@@ -97,7 +97,7 @@ There is a bunch of parameters we haven't seen yet:
Let's finish by creating a `service` to have stable DNS entry inside our cluster.
```sh
-kubectl apply -f 10-volumes/04-simple-mysql-service.yml
+kubectl apply -f 04-simple-mysql-service.yml
```
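As a sketch of what such a service could look like (the name and labels here are assumptions, not necessarily the repo's actual 04-simple-mysql-service.yml):

```yml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
  clusterIP: None   # headless service: stable DNS entry, no load-balanced virtual IP
```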
Finally let's access the mysql:


@@ -52,6 +52,9 @@ open https://download.docker.com/mac/stable/Docker.dmg
Install minikube and the "ingress" and "metrics-server" addons:
```sh
$ brew install kubectl
[...]
$ brew cask install minikube
[...]