diff --git a/courses/level102/networking/infrastructure-features.md b/courses/level102/networking/infrastructure-features.md
index e4d3ebe..1c6a76a 100644
--- a/courses/level102/networking/infrastructure-features.md
+++ b/courses/level102/networking/infrastructure-features.md
@@ -113,7 +113,7 @@ implemented in different ways
 1. Hardware load balancers: A LB device is placed inline of the
 traffic flow, and looks at the layer 3 and layer 4 information in an
 incoming packet. Then determine the set of real hosts, to which the connections
-are to be redirected. As covered in the [Scale](http://athiagar-ld2:8000/linux_networking/Phase_2/scale/#load-balancer) topic, these load balancers can be set up in two ways,
+are to be redirected. As covered in the [Scale](https://linkedin.github.io/school-of-sre/level102/networking/scale/#load-balancer) topic, these load balancers can be set up in two ways,
 
 - Single-arm mode: In this mode, the load balancer handles only the
 incoming requests to the VIP. The response from the server goes directly
@@ -140,7 +140,7 @@ and outgoing traffic.
 2. DNS based load balancer: Here the DNS servers keep a check of the
 health of the real servers and resolve the domain in such a way that the
 client can connect to different servers in that cluster. This part was
-explained in detail in the deployment at [scale](http://athiagar-ld2:8000/linux_networking/Phase_2/scale/#dns-based-load-balancing) section.
+explained in detail in the deployment at [scale](https://linkedin.github.io/school-of-sre/level102/networking/scale/#dns-based-load-balancing) section.
 
 3. IPVS based load balancing: This is another means, where an IPVS
 server presents itself as the service endpoint to the clients. Upon
diff --git a/courses/level102/networking/scale.md b/courses/level102/networking/scale.md
index d1c4e2c..55bc7c2 100644
--- a/courses/level102/networking/scale.md
+++ b/courses/level102/networking/scale.md
@@ -34,7 +34,7 @@ in this case.
 It requires planning to decide how much server loss can be
 handled without overloading other servers. Based on this, the service
 can be distributed across many cabinets. These calculations may vary,
 depending upon the resiliency in the ToR design, which will be covered
-in [ToR connectivity](http://athiagar-ld2:8000/linux_networking/Phase_2/infrastructure-features/#dual-tor) section.
+in [ToR connectivity](https://linkedin.github.io/school-of-sre/level102/networking/infrastructure-features/#dual-tor) section.
 
 #### Site failures