From 1f92693148d378e07b444e2e26cc79507e4de01c Mon Sep 17 00:00:00 2001
From: Samik Malhotra <72279316+Samikmalhotra@users.noreply.github.com>
Date: Sun, 25 Sep 2022 19:24:03 +0530
Subject: [PATCH] Replace katacoda with suitable alternatives (#143)

* feat: replace katacoda with docker as reqd

* fix: teardown time of play with k8s lab
---
 .../containerization_with_docker.md  | 4 ++--
 .../orchestration_with_kubernetes.md | 8 +++-----
 2 files changed, 5 insertions(+), 7 deletions(-)

diff --git a/courses/level102/containerization_and_orchestration/containerization_with_docker.md b/courses/level102/containerization_and_orchestration/containerization_with_docker.md
index 0bc23ff..f50adcd 100644
--- a/courses/level102/containerization_and_orchestration/containerization_with_docker.md
+++ b/courses/level102/containerization_and_orchestration/containerization_with_docker.md
@@ -83,7 +83,7 @@ The official [docker github](https://github.com/docker/labs) provides labs at se

 3. [Creating and containerizing a basic Flask app](https://github.com/docker/labs/blob/master/beginner/chapters/webapps.md)

-Here is another [beginner level lab](https://www.katacoda.com/courses/docker/2) from Katacoda for dockerizing a node js application. You don’t even need a local setup for this and it’s easy to follow along.
+Here is another [beginner-level lab](https://github.com/docker/awesome-compose/tree/master/react-express-mongodb) for dockerizing a MERN (MongoDB + Express + React + Node) application, and it’s easy to follow along.

 ## Advanced features of Docker

@@ -95,6 +95,6 @@ Docker networks facilitate the interaction between containers running on the sam

 **Volumes**

-Apart from images, containers and networks, Docker also provides the option to create and mount volumes within containers. Generally, data within docker containers is non-persistent i.e once you kill the container the data is lost. Volumes are used for storing persistent data in containers. This [KataKoda lab](https://www.katacoda.com/courses/docker/persisting-data-using-volumes) is a great place to start playing with volumes.
+Apart from images, containers and networks, Docker also provides the option to create and mount volumes within containers. Generally, data within docker containers is non-persistent, i.e. once you kill the container the data is lost. Volumes are used for storing persistent data in containers. This [Docker lab](https://dockerlabs.collabnix.com/beginners/volume/creating-volume-mount-from-dockercli.html) is a great place to start playing with volumes.

 [In the next section](https://linkedin.github.io/school-of-sre/level102/containerization_and_orchestration/orchestration_with_kubernetes/) we see how container deployments are orchestrated with Kubernetes.
diff --git a/courses/level102/containerization_and_orchestration/orchestration_with_kubernetes.md b/courses/level102/containerization_and_orchestration/orchestration_with_kubernetes.md
index a9b0862..a84af8b 100644
--- a/courses/level102/containerization_and_orchestration/orchestration_with_kubernetes.md
+++ b/courses/level102/containerization_and_orchestration/orchestration_with_kubernetes.md
@@ -76,9 +76,9 @@ This workflow might help you understand the working on components better:

 ### Prerequisites

-The best way to start this exercise is to use a [Katacoda kubernetes playground](https://www.katacoda.com/courses/kubernetes/playground). A single node kubernetes cluster is already set up for you here for quick experimentation. You can also use this to play with docker.
+The best way to start this exercise is to use the [Play with Kubernetes lab](https://labs.play-with-k8s.com/).

-The environment gets torn down after 10 mins. So make sure that you save your files if you want to resume them. For persistent kubernetes clusters, you can set it up either in your local (using [minikube](https://minikube.sigs.k8s.io/docs/start/)) or you can create a [kubernetes cluster in Azure](https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough-portal), GCP or any other cloud provider.
+The environment gets torn down after 4 hours, so make sure you save your files if you want to resume later. For persistent Kubernetes clusters, you can either set one up locally (using [minikube](https://minikube.sigs.k8s.io/docs/start/)) or create a [kubernetes cluster in Azure](https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough-portal), GCP or any other cloud provider.

 Knowledge of YAML is nice to have for understanding the manifest files.

@@ -143,7 +143,6 @@ Here, this is 10.244.1.3

 A container is created within the pod, but the pod itself is the same. You can verify this by checking the pod start time in the describe command output; it would show a much older time.

-You can actually see the nginx container by doing `docker ps` on the node01 terminal (if you’re using Katacoda).

 What if we want to change the image to 1.20.1 for 1000 nginx pods? Stepping a little back, what if we want to create 1000 nginx pods? Of course, we can write a script, but Kubernetes already offers a resource type called “deployment” to manage large-scale deployments better.

@@ -222,8 +221,7 @@ It is possible to have a public IP instead (i.e an actual external load balancer
 The above exercises give a pretty good exposure to using Kubernetes to manage large-scale deployments. Trust me, the process is very similar to the above for operating 1000 deployments and containers too! While a Deployment object is good enough for managing stateless applications, Kubernetes provides other resources like Job, DaemonSet, CronJob, StatefulSet, etc. to manage special use cases.

 **Additional labs:**
-https://www.katacoda.com/lizrice/scenarios/kube-web
-https://www.katacoda.com/courses/kubernetes (Huge number of free follow-along exercises to play with Kubernetes)
+https://kubernetes.courselabs.co/ (Huge number of free follow-along exercises to play with Kubernetes)

 ## Advanced topics
 More often than not, microservices orchestrated with Kubernetes contain dozens of instances of resources like deployments, services and configs. The manifests for these applications can be auto-generated with Helm templates and passed on as Helm charts. Similar to how we have PyPI for Python packages, there are remote repositories like Bitnami where Helm charts (e.g. for setting up a production-ready Prometheus or Kafka with a single click) can be downloaded and used. [This is a good place to begin](https://www.digitalocean.com/community/tutorials/an-introduction-to-helm-the-package-manager-for-kubernetes).
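For readers who want to try the volumes topic covered by the replacement Docker lab linked in this patch without the hosted environment, here is a minimal sketch of that workflow. It is not part of the patched course material; the volume and container names are illustrative, and it assumes a local Docker installation.

```sh
# Create a named volume (the name "demo-data" is illustrative)
docker volume create demo-data

# Mount the volume into an nginx container and write a file into the mounted path
docker run -d --name demo-nginx -v demo-data:/usr/share/nginx/html nginx:1.20.1
docker exec demo-nginx sh -c 'echo "hello from a volume" > /usr/share/nginx/html/index.html'

# Remove the container; the volume (and the file inside it) outlives the container
docker rm -f demo-nginx

# Attach the same volume to a fresh container -- the data is still there
docker run -d --name demo-nginx-2 -p 8080:80 -v demo-data:/usr/share/nginx/html nginx:1.20.1
curl http://localhost:8080
```

Running `docker volume inspect demo-data` shows where the volume's data lives on the host.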