Common use of Network Latency Clause in Contracts

Network Latency. As the primary locus of data moves from disk to flash or even DRAM, the network is becoming the primary source of latency in remote data accesses. Network latency is an expression of how much time it takes for a packet of data to get from one point to another. Several factors contribute to latency, including not only the time it takes for a packet to travel in the cable, but also the time the equipment/switch takes to transmit, receive, and forward the packet. Total packet latency is the sum of all of the path latencies and all of the switch latencies encountered along the route (often measured as RTT, Round Trip Time). A packet that travels over N paths will pass through N − 1 switches. The value of N for any given packet will vary depending on the amount of locality that can be exploited in an application’s communication pattern, the topology of the network, the routing algorithm, and the size of the network. However, when it comes to typical-case latency in a large-scale data centre network, path latency is a very small part of total latency. Total latency is dominated by the switch latency, which includes delays due to buffering, routing algorithm complexity, arbitration, flow control, switch traversal, and the load congestion for a particular switch egress port. Note that these delays are incurred at every switch in the network and hence are multiplied by the hop count. One of the suitable ways to reduce hop count is to increase the radix of the switches. Increased switch radix means fewer switches for a network of a given size and therefore a reduced CapEx cost. Reduced hop count and fewer switches also lead to reduced power consumption as well as reduced latency. For all-electrical switches, there is a fundamental trade-off due to the poor scaling of both signal pins and per-pin bandwidth. Namely, one could choose to utilize more pins per port, which results in a lower radix but with higher bandwidth per port.
The other option is to use fewer pins per port, which would increase the switch radix, but the bandwidth of each port would suffer. Photonics may lead to a better option: the bandwidth advantage of spatial/spectral division multiplexing and the tighter signal packaging density of optics mean that high-radix switches are feasible without a corresponding degradation of port bandwidth.
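The latency model described above can be sketched in a few lines of code. This is an illustrative sketch only, not part of the clause: the per-link and per-switch latency figures and the tree-style hop-count model are hypothetical assumptions chosen to show how switch latency dominates total latency and how a higher switch radix reduces hop count.

```python
import math

def total_latency_ns(num_links, link_latency_ns, switch_latency_ns):
    """Total one-way latency for a route with N links and N - 1 switches,
    as described in the clause: sum of path latencies plus switch latencies."""
    num_switches = num_links - 1
    return num_links * link_latency_ns + num_switches * switch_latency_ns

def hops_for_network(num_endpoints, radix):
    """Rough hop count for a tree-like topology built from switches of a
    given radix (an assumed model for illustration): higher radix means
    fewer switch levels, hence fewer hops end to end."""
    levels = math.ceil(math.log(num_endpoints, radix))
    return 2 * levels  # up to a common ancestor switch and back down

# Hypothetical figures: 100 ns per link (path latency), 500 ns per switch.
hops_lo = hops_for_network(10_000, 16)  # radix-16 switches -> more hops
hops_hi = hops_for_network(10_000, 64)  # radix-64 switches -> fewer hops
print(hops_lo, total_latency_ns(hops_lo, 100, 500))  # 8 links, 7 switches
print(hops_hi, total_latency_ns(hops_hi, 100, 500))  # 6 links, 5 switches
```

With these assumed numbers the switch latencies account for most of the total, and moving from radix-16 to radix-64 switches cuts the hop count and hence the total latency, mirroring the trade-off discussed in the clause.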

Appears in 1 contract

Sources: Grant Agreement

Network Latency. As the primary locus of data moves from disk to flash or even DRAM, the network is becoming the primary source of latency in remote data access. Network latency is an expression of how much time it takes for a packet of data to get from one point to another. Several factors contribute to network latency, including not only the time it takes for a packet to travel in the cable, but also the time the equipment/switch uses to transmit, receive, buffer, and forward the packet. Total packet latency is the sum of all of the path latencies and all of the switch latencies encountered along the route (usually reported as RTT, Round Trip Time). A packet that travels over N links will pass through N − 1 switches. The value of N for any given packet will vary depending on the amount of locality that can be exploited in an application’s communication pattern, the topology of the network, the routing algorithm, and the size of the network. However, when it comes to typical-case latency in a large-scale data centre network, path latency is a very small part of total latency. Total latency is dominated by the switch latency, which includes delays due to buffering, routing algorithm complexity, arbitration, flow control, switch traversal, and the load congestion for a particular switch egress port. Note that these delays are incurred at every switch in the network, and hence these delays are multiplied by the hop count. One of the possible ways to reduce hop count is to increase the radix of the switches. Increased switch radix also means fewer switches for a network of a given size and therefore a reduced CapEx cost. Reduced hop count and fewer switches also lead to reduced power consumption. For all-electrical switches, there is a fundamental trade-off due to the poor scaling of both signal pins and per-pin bandwidth. For example, one could choose to utilize more pins per port, which results in a lower radix, but with a higher bandwidth per port. Another option is to use fewer pins per port, which would increase the switch radix, but the bandwidth of each port would suffer. Photonics may lead to a better solution, namely the bandwidth advantage due to spatial/spectrum division multiplexing and the tighter signal packaging density of optics, i.e., high-radix switches are feasible without a corresponding degradation of port bandwidth.

Appears in 1 contract

Sources: Grant Agreement