chore: cleanup md, remove empty sections (#32)

Rémy-Christophe Schermesser committed 348d5f2bd6 (parent 9b721db650) 5 years ago via GitHub.

@@ -10,8 +10,7 @@ But it has a base assumption that a `pod` can be killed whenever it wants to. So
## First pod
-Let's start to deploy this docker image <https://hub.docker.com/r/mhausenblas/simpleservice/>.
-It's a stateless python JSON API that answers on:
+Let's start to deploy the docker image [mhausenblas/simpleservice](https://hub.docker.com/r/mhausenblas/simpleservice/). It's a stateless python JSON API that answers on:
* `/env`
* `/info`
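For orientation, a minimal pod manifest for this image could look like the sketch below. This is a hedged illustration, not necessarily the exact content of `05-pods/01-simple-pod.yml`; the image tag is assumed, and 9876 is the port `simpleservice` documents:

```yml
apiVersion: v1
kind: Pod
metadata:
  name: simple-pod
spec:
  containers:
    - name: simple-service
      image: mhausenblas/simpleservice:0.5.0  # tag assumed for illustration
      ports:
        - containerPort: 9876  # the port the JSON API listens on
```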
@@ -49,7 +48,7 @@ $ kubectl apply -f 05-pods/01-simple-pod.yml
pod "simple-pod" created
```
-We also could have used the `kubectl create -f ...`. But it's better to have a declarative approach in Kubernetes rather than an imperative one, [see]( https://medium.com/bitnami-perspectives/imperative-declarative-and-a-few-kubectl-tricks-9d6deabdde).
+We could also have used `kubectl create -f ...`. But it's better to have a declarative approach in Kubernetes rather than an imperative one, [see](https://medium.com/bitnami-perspectives/imperative-declarative-and-a-few-kubectl-tricks-9d6deabdde).
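The difference shows up on the second run; a quick sketch, assuming the manifest above:

```sh
# Imperative: errors with "AlreadyExists" if the pod was created before
kubectl create -f 05-pods/01-simple-pod.yml

# Declarative: describes the desired state, safe to re-run after editing the file
kubectl apply -f 05-pods/01-simple-pod.yml
```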
## `kubectl get`
@@ -107,7 +106,7 @@ minikube dashboard
## Exercises
-1. Deploy a `pod` containing nginx. The image name is `nginx`, see: <https://hub.docker.com/_/nginx/>
+1. Deploy a `pod` containing nginx. The image name is `nginx`, see: https://hub.docker.com/_/nginx/.
2. Do you think you can access the pod `simple-service` from outside of Kubernetes, *without changing the manifest*?
## Clean up
@@ -118,7 +117,7 @@ kubectl delete pod --all
## Answers
-For 2. Nop, the pod is only visible from the inside of the cluster
+For 2), no, the pod is only visible from the inside of the cluster.
## Links

@@ -16,12 +16,12 @@ Valid label values must be 63 characters or less and must be empty or begin and
## Labels in action
-Apply the pod `06-label-annotation/01-simple-pod.yml`. It is the same as `05-pods/01-simple-pod.yml` but with 2 labels:
+Apply the manifest [01-simple-pod.yml](01-simple-pod.yml). It is the same as [05-pods/01-simple-pod.yml](../05-pods/01-simple-pod.yml) but with 2 labels:
* `env`: `production`
* `tier`: `backend`
-Apply the pod `06-label-annotation/02-nginx.yml`. It is a simple nginx with 2 labels:
+Apply the manifest [02-nginx.yml](02-nginx.yml). It is a simple nginx with 2 labels:
* `env`: `production`
* `tier`: `frontend`
@@ -47,10 +45,6 @@ nginx 1/1 Running 0 1m
These queries are called `selectors` in Kubernetes jargon. We will use them later on with deployments.
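For example, you can already query pods by label with `kubectl get` and the `-l` flag, using the labels applied above:

```sh
# All production pods
kubectl get pods -l env=production

# Only the production frontends (here: the nginx pod)
kubectl get pods -l env=production,tier=frontend
```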
-## Exercises
-Nothing to see here.
## Clean up
```sh

@@ -146,8 +146,8 @@ Remember the command `kubectl describe deployment`.
## Exercises
-1. Deploy multiple nginx. The image name is `nginx`, see: <https://hub.docker.com/_/nginx/>
-2. Play with the scaling up/down & the deployment of new versions
+1. Deploy multiple nginx. The image name is `nginx`, see: https://hub.docker.com/_/nginx/.
+2. Play with the scaling up/down & the deployment of new versions.
3. Do you think you can access your `deployment` of nginx from outside of Kubernetes, *without changing the manifest*?
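For exercise 2 above, here is one possible way to play with it, assuming both the deployment and its container are named `nginx` (hypothetical names):

```sh
# Scale up, then back down
kubectl scale deployment nginx --replicas=5
kubectl scale deployment nginx --replicas=2

# Roll out a new version by changing the image, and watch it happen
kubectl set image deployment/nginx nginx=nginx:1.17
kubectl rollout status deployment/nginx
```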
## Clean up
@@ -158,7 +158,7 @@ kubectl delete deployment,rs,pod --all
## Answers
-For 3. Nop, same as the pod. A `deployment` only creates pods, it doesn't do anything else
+For 3), nope, same as the pod. A `deployment` only creates pods; it doesn't do anything else.
## Links

@@ -52,7 +52,7 @@ root@bash:/# apt update && apt install dnsutils curl
You now have a shell inside a Kubernetes pod running in your cluster. Leave this console open so you can type commands.
Try to curl one of the pods created by the deployment above. How can you access the `deployment` **without** targeting a specific `pod`?
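One way to try it from the shell you just opened. The IP below is only an example; look up one of yours with `kubectl get pods -o wide`, and 9876 is the port `simpleservice` listens on:

```sh
# Pod IPs are reachable from inside the cluster
curl http://172.17.0.5:9876/info  # replace with one of your pod IPs
```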
-Ok, now let's create our first service (08-service/03-internal-service.yml):
+Ok, now let's create our first service [03-internal-service.yml](03-internal-service.yml):
```yml
apiVersion: v1
@@ -197,10 +197,10 @@ You have seen a lot different `kind` of Kubernetes, let's take a step back and s
## Exercises
1. Deploy an nginx and expose it internally
-1. Read [this](https://kubernetes.io/docs/concepts/services-networking/ingress/#simple-fanout) and modify the ingress to have:
+2. Read [this](https://kubernetes.io/docs/concepts/services-networking/ingress/#simple-fanout) and modify the ingress to have:
  * `/simple` that goes to the `simple-service`
  * `/nginx` that goes to your nginx deployment
-1. Change the `selector` in your `simple-service` look at what is happening
+3. Change the `selector` in your `simple-service` and look at what is happening
## Clean up
@@ -210,7 +210,7 @@ kubectl delete ingress,service,deployment,rs,pod --all
## Answers
-For 2. You need to add the metadata `nginx.ingress.kubernetes.io/rewrite-target: /` to the ingress:
+For 2), you need to add the metadata `nginx.ingress.kubernetes.io/rewrite-target: /` to the ingress:
* Don't forget to create 2 deployments and 2 services.
* You can either change your `/etc/hosts` to add the name resolution for `foo.bar.com`, or use `curl http://YOUR-IP -H "Host: foo.bar.com"`
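Putting the answer together, the fanout ingress could look roughly like the sketch below. The host, service names and ports are assumptions for illustration, not the repository's exact answer; on older clusters the `apiVersion` is `extensions/v1beta1` with a slightly different `backend` syntax:

```yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fanout
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: foo.bar.com
      http:
        paths:
          - path: /simple
            pathType: Prefix
            backend:
              service:
                name: simple-service  # assumed service name
                port:
                  number: 80
          - path: /nginx
            pathType: Prefix
            backend:
              service:
                name: nginx  # assumed service name
                port:
                  number: 80
```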

@@ -72,10 +72,6 @@ spec:
Careful: if you change a secret after the pods have started, the running pods won't pick up the change. You need to restart them.
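One common way to restart them, assuming the pods belong to a deployment named `my-app` (a hypothetical name; requires kubectl 1.15+):

```sh
# Rolling restart: replaces the pods one by one, so they re-read the secret
kubectl rollout restart deployment my-app
```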
-## Exercices
-Nothing to see here.
## Clean up
```sh

@@ -96,7 +96,7 @@ This probe will wait 3 seconds before doing the first probing. The probing will
### Liveness probe impact
-Look and apply the file [01-liveness-probe.yml](./01-liveness-probe.yml).
+Look and apply the file [01-liveness-probe.yml](01-liveness-probe.yml).
Run `kubectl get pods -w` and see what is happening.
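For reference, a container-level fragment declaring a liveness probe that always fails can look like this sketch (the real [01-liveness-probe.yml](01-liveness-probe.yml) may differ in details):

```yml
livenessProbe:
  exec:
    command: ["/bin/sh", "-c", "exit 1"]  # always fails, on purpose
  initialDelaySeconds: 3
  periodSeconds: 3
```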
@@ -104,7 +104,7 @@ The `liveness` of this pod fails (`exit 1`), so Kubernetes detects that the pod
### Readiness probe impact
-Look and apply the file [02-readiness-probe.yml](./02-readiness-probe.yml).
+Look and apply the file [02-readiness-probe.yml](02-readiness-probe.yml).
Run `kubectl get pods -w` and see what is happening.
Run `kubectl get deployments -w` and see what is happening.
@@ -128,10 +128,6 @@ Most of time having the readiness and liveness probe to be the same is enough. I
Another tip: your probes should not call the dependent services of your application, to prevent cascading failures.
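For instance, probe an endpoint that only checks the application itself; `/health` and the port are hypothetical here:

```yml
readinessProbe:
  httpGet:
    path: /health  # must not call databases or downstream services
    port: 9876
  initialDelaySeconds: 3
  periodSeconds: 5
```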
-## Exercices
-Nothing to see here.
## Clean up
```sh

@@ -98,10 +98,10 @@ We will use other options, but ignore those. You can find more information on th
### CPU
First, let's try to declare a CPU `requests` that is too high compared to what your cluster has available.
-Review and apply the file [01-cpu-requests.yml](./01-cpu-requests.yml). Look at the pod created. What can you see?
+Review and apply the file [01-cpu-requests.yml](01-cpu-requests.yml). Look at the pod created. What can you see?
Second, let's try to have a pod consuming more CPU resources than allowed.
-Review and apply the file [02-cpu-limits.yml](./02-cpu-limits.yml). Look at the pod created. What can you see? Look at the logs of the pods:
+Review and apply the file [02-cpu-limits.yml](02-cpu-limits.yml). Look at the pod created. What can you see? Look at the logs of the pods:
```sh
kubectl logs -f cpu-limits
@@ -116,10 +116,10 @@ kubectl top pod cpu-limits
### RAM
First, let's try to declare a memory `requests` that is too high compared to what your cluster has available.
-Review and apply the file [03-ram-requests.yml](./03-ram-requests.yml). Look at the pod created. What can you see?
+Review and apply the file [03-ram-requests.yml](03-ram-requests.yml). Look at the pod created. What can you see?
Second, let's try to have a pod consuming more RAM resources than allowed.
-Review and apply the file [04-ram-limits.yml](./04-ram-limits.yml). Look at the pod created. What can you see?
+Review and apply the file [04-ram-limits.yml](04-ram-limits.yml). Look at the pod created. What can you see?
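For reference, a container-level fragment combining both notions might look like this; the values are purely illustrative, not recommendations:

```yml
resources:
  requests:       # what the scheduler reserves for the container
    cpu: 100m
    memory: 64Mi
  limits:         # the most the container may consume (CPU is throttled, RAM overuse gets it killed)
    cpu: 500m
    memory: 128Mi
```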
## Clean up

@@ -139,10 +139,6 @@ Review and apply the manifests in [02-node-affinity.yml](02-node-affinity.yml).
Describe the pods: what can you see? How do you explain it?
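To give a feel for the syntax, a hard node affinity rule placed under a pod's `spec` looks roughly like this sketch; the `disktype: ssd` label is a hypothetical example, as in the upstream documentation:

```yml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: disktype   # hypothetical node label
              operator: In
              values:
                - ssd
```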
-## Exercices
-Nothing to see here.
## Clean up
```sh

@@ -31,13 +31,13 @@ A PDB is composed of two configurations:
If you want to see the effect of a PDB, you will need a multi-node Kubernetes cluster. As these lines are written, `minikube` is a single-node cluster. To get a multi-node cluster locally, you can install [kind](https://github.com/kubernetes-sigs/kind).
-Use the [configuration file](./kind.yml) provided to create your cluster:
+Use the [configuration file](kind.yml) provided to create your cluster:
```sh
kind create cluster --config kind.yml
```
-Review and apply the manifests in [01-pdb.yml](./01-pdb.yml). Why did we specify a soft anti-affinity?
+Review and apply the manifests in [01-pdb.yml](01-pdb.yml). Why did we specify a soft anti-affinity?
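As a sketch of those two configurations, a PDB looks roughly like this; names and values are illustrative, see [01-pdb.yml](01-pdb.yml) for the real one:

```yml
apiVersion: policy/v1beta1  # policy/v1 on Kubernetes >= 1.21
kind: PodDisruptionBudget
metadata:
  name: my-pdb
spec:
  minAvailable: 1       # alternatively maxUnavailable, but not both
  selector:
    matchLabels:
      app: my-app       # hypothetical label
```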
In a terminal run the command:
@@ -57,10 +53,6 @@ This command will remove, drain, the node `kind-worker2` from the cluster. Watch
What do you see? How can you explain this?
-## Exercices
-Nothing to see here.
## Clean up
```sh

@@ -43,13 +43,13 @@ This image computes the square root of numbers:
?>
```
-First, you need to active the [`metric-server`](https://github.com/kubernetes-incubator/metrics-server/) on minikube:
+First, you need to activate the [metric-server](https://github.com/kubernetes-incubator/metrics-server/) on minikube:
```sh
minikube addons enable metrics-server
```
-Review and apply the file [01-hpa.yml](./01-hpa.yml).
+Review and apply the file [01-hpa.yml](01-hpa.yml).
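An HPA manifest is short; here is a sketch of the general shape (the target deployment name is an assumption, not necessarily what [01-hpa.yml](01-hpa.yml) contains):

```yml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deployment               # assumed target
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50  # scale out above 50% average CPU
```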
Now let's generate some load on our service:
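One common way to do that, along the lines of the official HPA walkthrough; `my-service` is a placeholder for whatever your service is called:

```sh
kubectl run -ti load-generator --rm --image=busybox --restart=Never \
  -- /bin/sh -c "while true; do wget -q -O- http://my-service; done"
```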
@@ -119,10 +119,6 @@ The second part is a regular deployment, that have undersized CPU `requests`.
After applying those manifests, look at the resource requests for the deployment.
-## Exercices
-Nothing to see here.
## Clean up
```sh

@@ -33,7 +33,7 @@ Istio, the service mesh tool, installs a sidecar container to do its job: https:
## Exercises
-Review and apply the file [01-sidecar.yml](./01-sidecar.yml). Connect to the `nginx` container and look at the file system in `/usr/share/nginx/html`.
+Review and apply the file [01-sidecar.yml](01-sidecar.yml). Connect to the `nginx` container and look at the file system in `/usr/share/nginx/html`.
This exercise is taken from the [official Kubernetes documentation](https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/#creating-a-pod-that-runs-two-containers).
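For reference, the shape of such a pod is two containers sharing an `emptyDir` volume; a sketch close to the documentation's example, with illustrative names:

```yml
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  volumes:
    - name: shared-data
      emptyDir: {}    # scratch volume shared by both containers
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: sidecar
      image: debian
      volumeMounts:
        - name: shared-data
          mountPath: /pod-data
      command: ["/bin/sh", "-c", "echo Hello from the sidecar > /pod-data/index.html && sleep 3600"]
```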

@@ -133,10 +133,6 @@ kubectl delete service,deployment --all
Recreate them, reconnect to mysql and see if you still have the database `testing` you created.
-## Exercices
-Nothing to see here.
## Clean up
```sh

@@ -46,7 +46,7 @@ spec:
storage: 1Gi
```
-As you can see the manifest is very close to the one of a deployment. Apply the manigest [01-statefulset.yml](./01-statefulset.yml).
+As you can see, the manifest is very close to that of a deployment. Apply the manifest [01-statefulset.yml](01-statefulset.yml).
Look at the pods generated and see how they are created. Connect to one of the pods:
@@ -56,10 +56,6 @@ kubectl exec -ti web-0 /bin/bash
Write a file in the volume `www`. Terminate the same pod. See what happens. Reconnect to the pod, look at volume `www`. What can you see?
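One way to run the experiment, assuming the volume `www` is mounted at `/usr/share/nginx/html` as in the upstream StatefulSet example (adjust the path to your manifest):

```sh
kubectl exec -ti web-0 -- /bin/sh -c "echo hello > /usr/share/nginx/html/index.html"
kubectl delete pod web-0   # the statefulset recreates web-0, reattaching the same volume
kubectl exec -ti web-0 -- cat /usr/share/nginx/html/index.html  # should print: hello
```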
-## Exercises
-Nothing to see here.
## Clean up
```sh

@@ -46,10 +46,6 @@ It contains multiple, ready to use, Kubernetes manifest for projects, for exampl
[Kube State Metrics](https://github.com/kubernetes/kube-state-metrics) is a service you can install on your Kubernetes clusters to get metrics from their state. It's very useful for a production cluster, as you can measure and put alerts on the state of your applications: when pods are evicted, whether your deployments are fully rolled out, etc.
-## Exercises
-Nothing to see here.
## Clean up
```sh

@@ -266,7 +266,7 @@ See the dedicated [README](99-good-practices).
## Links
-* <http://kubernetesbyexample.com/>
-* <https://kubernetes.io/docs/home/>
-* <https://kubernetes.io/docs/reference/kubectl/cheatsheet/>
-* <https://hub.docker.com/r/mhausenblas/simpleservice/>
+* http://kubernetesbyexample.com/
+* https://kubernetes.io/docs/home/
+* https://kubernetes.io/docs/reference/kubectl/cheatsheet/
+* https://hub.docker.com/r/mhausenblas/simpleservice/
