> *Some of the aspects to consider are whether the underlying data
centre infrastructure supports ToR resiliency, i.e. features like link
bundling (bonds), BGP, support for anycast services, load balancers,
firewalls, and Quality of Service.*
As seen in previous sections, deploying applications at scale requires
certain capabilities from the infrastructure. This section covers the
different options available and their suitability.
### ToR connectivity
Since the ToR is one of the most frequent points of failure (considering the scale of deployment), there are different options available for connecting servers to the ToR. We will look at them in detail below.
#### Single ToR
This is the simplest of all the options: a single NIC of the server is
connected to one ToR. The advantage of this approach is that a minimal
number of switch ports is used, allowing the DC fabric to support rapid
growth of the server infrastructure. (Note: ports are used efficiently
not only at the ToR, but at the upper switching layers of the DC fabric
as well.) On the downside, the servers can become unreachable if there
is an issue with the ToR, link or NIC. This impacts stateful apps more,
as their existing connections get abruptly disconnected.
![Single ToR design](./media/Single ToR.png)
Fig 4: Single ToR design
#### Dual ToR
In this option, each server is connected to two ToRs in the same
cabinet. This can be set up in active/passive mode, thereby providing
resiliency during ToR/link/NIC failures. The resiliency can be achieved
either at layer 2 or at layer 3.
##### Layer 2
In this case, both links are bundled together as a [bond](https://en.wikipedia.org/wiki/Link_aggregation) on the
server side (with one NIC taking the active role and the other being
passive). On the switch side, these two links are made part of a
[multi-chassis LAG](https://en.wikipedia.org/wiki/Multi-chassis_link_aggregation_group) (similar to bonding, but spread across switches). The
prerequisite here is that both ToRs must be part of the same layer 2
domain. The IP address is configured on the bond interface on the
server side and on the SVI on the switch side.
![Dual ToR layer 2 setup](./media/Dual ToR.png)
Note: In this setup, ToR 2's role is only to provide resiliency.
Fig 5: Dual ToR layer 2 setup
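To make this more concrete, below is a minimal sketch of how such an active/passive bond could be created on a Linux server using iproute2. The interface names (`eth0` towards ToR 1, `eth1` towards ToR 2) and the `10.1.1.10/24` address are illustrative assumptions, not values from the setup above; the matching multi-chassis LAG configuration on the ToRs is vendor-specific and not shown.

```shell
# Create an active-backup bond with MII link monitoring every 100 ms.
ip link add bond0 type bond mode active-backup miimon 100

# Enslave the two NICs (they must be down first); the first NIC
# enslaved becomes the active one by default.
ip link set eth0 down
ip link set eth0 master bond0   # link towards ToR 1 (active)
ip link set eth1 down
ip link set eth1 master bond0   # link towards ToR 2 (passive)

# Bring the bond up and assign the server's IP address to it.
ip link set bond0 up
ip addr add 10.1.1.10/24 dev bond0
```

If the active link or ToR fails, the bonding driver fails over to the passive NIC without the server's IP address changing.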
##### Layer 3
In this case, both links are configured as separate layer 3
interfaces, and resiliency is achieved by setting up a routing protocol
(like BGP), with one link given a higher preference over the other.
Here the two ToRs can be set up independently, in layer 3 mode. The
servers need a virtual address, to which the services are bound, and
this address is advertised over both links.
![Dual ToR layer 3 setup](./media/Dual ToR BGP.png)
Note: In this setup, ToR 2's role is only to provide resiliency.
Fig 6: Dual ToR layer 3 setup
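As an illustration, below is a minimal sketch of the server-side configuration using [FRRouting](https://frrouting.org/). The AS numbers, neighbor addresses and the `192.0.2.10` virtual address are illustrative assumptions; AS-path prepending is used here as one common way of making the path via ToR 2 less preferred. The virtual address is first bound to the loopback interface (e.g. `ip addr add 192.0.2.10/32 dev lo`) so it can be advertised over both uplinks.

```
! frr.conf (sketch): peer with both ToRs, advertise the service's
! virtual address, and prepend the AS path towards ToR 2 so that
! remote routers prefer the path via ToR 1.
router bgp 65001
 neighbor 10.1.0.1 remote-as 65000
 neighbor 10.2.0.1 remote-as 65000
 address-family ipv4 unicast
  network 192.0.2.10/32
  neighbor 10.2.0.1 route-map TOR2-BACKUP out
 exit-address-family
!
route-map TOR2-BACKUP permit 10
 set as-path prepend 65001 65001
```

Outbound preference can be expressed in the same way, for example with an inbound route-map that sets a lower local preference on routes learned from ToR 2.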
Though the resiliency is better with dual ToR, the drawback is the
number of ports consumed. As the access ports on the ToR double, the
number of ports required at the spine layer also doubles, and this
keeps cascading to the higher layers. For example, a cabinet of 40
servers needs 40 access ports with a single ToR, but 80 with dual ToR,
and the uplink capacity towards the spine grows proportionally.
Type | Single ToR | Dual ToR (layer 2) | Dual ToR (layer 3)
---- | ---------- | ------------------ | ------------------
Resiliency (ToR/link/NIC failure) | No | Yes | Yes
ToR port usage | 1x | 2x | 2x
ToR setup | Single switch | Both ToRs in the same layer 2 domain (MLAG) | Independent ToRs, layer 3 mode
are to be redirected. As covered in the [Scale](https://linkedin.github.io/school-of-sre/level102/networking/scale/#load-balancer) topic, these load balancers can be set up in two ways, both of which are explained in detail in the deployment at [scale](https://linkedin.github.io/school-of-sre/level102/networking/scale/#dns-based-load-balancing) section.