Patent application number | Description | Published |
--- | --- | --- |
20090303882 | Mechanism for implementing load balancing in a network - A mechanism is disclosed for enabling load balancing to be achieved in a network. In one implementation, load balancing is implemented on a “per flow” basis. At the time that a new flow starts, a path is selected. Packets associated with the flow are thereafter sent along that particular path. As the packets associated with the flow are forwarded along the particular path, a congestion metric is determined for the particular path as well as for a set of one or more other paths. Based at least partially upon the congestion metrics, a determination is made as to whether the flow should be moved. If so, then the flow is moved to an alternate path. By determining the congestion metrics for the multiple paths, and by moving the flow in response, it is possible to adapt to changing traffic conditions to keep the loads on the paths relatively balanced. | 12-10-2009 |
20090304007 | Mechanism for determining a congestion metric for a path in a network - A mechanism is disclosed for determining a congestion metric for a path in a network. In one implementation, a congestion metric for a path includes one or more latency values and one or more latency variation values. A latency value for a path may be determined by exchanging latency packets with another component. For example, to determine the latency for a particular path, a first component may send a latency request packet to a second component via the particular path. In response, the second component may send a latency response packet back to the first component. Based upon timestamp information in the latency response packet, the latency on the particular path may be determined. From a plurality of such latencies, a latency variation may be determined. Taken individually or together, the latency value(s) and the latency variation value(s) provide an indication of how congested the particular path currently is. | 12-10-2009 |
20090316584 | Mechanism for Enabling Load Balancing to be Achieved in a Loop-Free Switching Path, Reverse Path Learning Network - A mechanism is disclosed for enabling load balancing to be achieved in a loop-free switching path, reverse path learning network, such as an Ethernet network. The network is divided into a plurality of virtual networks, with each virtual network providing a different path through the network. When it comes time to send a set of information through the network, one of the plurality of virtual networks, and hence, one of the plurality of paths, is selected. The set of information is then updated to indicate the selected virtual network, and sent into the network to be transported along the selected path. With multiple paths, and with the ability to select between the multiple paths, it is possible to balance the load imposed on the multiple paths. | 12-24-2009 |
20090319634 | Mechanism for enabling memory transactions to be conducted across a lossy network - A network interface is disclosed for enabling remote programmed I/O to be carried out in a “lossy” network (one in which packets may be dropped). The network interface: (1) receives a plurality of memory transaction messages (MTMs); (2) determines that they are destined for a particular remote node; (3) determines a transaction type for each MTM; (4) composes, for each MTM, a network packet to encapsulate at least a portion of that MTM; (5) assigns a priority to each network packet based upon the transaction type of the MTM that it is encapsulating; (6) sends the network packets into a lossy network destined for the remote node; and (7) ensures that at least a subset of the network packets are received by the remote node in a proper sequence. By doing this, the network interface makes it possible to carry out remote programmed I/O, even across a lossy network. | 12-24-2009 |
20100098073 | Mechanism for Enabling Layer Two Host Addresses to be Shielded from the Switches in a Network - A mechanism is disclosed that enables layer two host addresses (e.g., MAC addresses) to be shielded from a network. In one implementation, the mechanism updates each packet sent by the hosts into the network to indicate that the source layer two (L2) address for that packet is a shared L2 address instead of the actual L2 address of the sending host. By doing so, the mechanism exposes only the shared L2 address to the network, and shields the actual L2 addresses of the hosts from the network. The effect of this is that the switches in the network will need to store only the shared L2 address in their forwarding tables, not the actual L2 addresses of the hosts. By reducing the number of L2 addresses that need to be stored in the forwarding tables of the switches, the mechanism improves the scalability of the network. | 04-22-2010 |
20100205502 | ENABLING MEMORY TRANSACTIONS ACROSS A LOSSY NETWORK - Methods and systems for enabling remote programmed I/O to be carried out across a “lossy” network are provided. According to one embodiment, a node maps a portion of a remote memory of a remote node into its physical address space. Memory transaction messages (MTMs) conforming to a processor bus protocol are received by a network interface of the node. The MTMs destined for the remote node are encapsulated within network packets. Each network packet is assigned a sending priority based upon a transaction type of the encapsulated MTM and based upon ordering rules associated with the processor bus protocol. The network packets are organized into groups based upon sending priority and transmitted to the remote node via a lossy network according to the sending priorities. It is ensured that a particular subset of the network packets having a particular sending priority is received by the remote node in a proper sequence. | 08-12-2010 |
20100290343 | PERFORMING RATE LIMITING WITHIN A NETWORK - Methods and systems for performing rate limiting are provided. According to one embodiment, multiple paths are provided between each pair of multi-path load balancing (MPLB) components within a Layer 2 network by establishing overlapping loop-free topologies in which each MPLB component is reachable by any other via each overlapping topology. A first MPLB component receives packets associated with a flow sent by a source component at a particular rate. The first MPLB component forwards the packets to a second MPLB component along a particular path in a network. A congestion metric for the particular path is determined. Based upon the congestion metric for the particular path, it is determined whether the particular path has reached a congestion threshold. In response to an affirmative determination, the source component is instructed to limit the rate at which it sends packets associated with the flow. | 11-18-2010 |
20100296392 | DETERMINING LINK FAILURE WITHIN A NETWORK - Methods and systems for determining link failure in a network are provided. According to one embodiment, multiple paths are provided between each pair of multi-path load balancing (MPLB) components within a Layer 2 network by establishing overlapping loop-free topologies in which each MPLB component is reachable by any other via each loop-free topology. A first MPLB component sends latency requests to a second MPLB component via a particular path. Responsive thereto, the first MPLB component receives latency responses. Based on timestamp information in the latency responses, an estimated latency between the first and second MPLB components is determined. A link failure timeout period is derived based upon the estimated latency. An additional latency request is sent. If an additional latency response is not received by the first MPLB component prior to expiration of the link failure timeout period, then it is concluded that a link failure has occurred. | 11-25-2010 |
20100309811 | DETERMINING A CONGESTION METRIC FOR A PATH IN A NETWORK - Methods and systems for determining a congestion metric for a path in a network are provided. According to one embodiment, multiple paths are provided between each pair of multi-path load balancing (MPLB) components within a Layer 2 network by establishing overlapping loop-free topologies in which each MPLB component is reachable by any other via each of the overlapping topologies. A first MPLB component associated with a first network device sends a latency request packet, including a first timestamp provided by a first clock associated with the first MPLB component, to a second MPLB component associated with a second network device via the path. Responsive thereto, the first MPLB component receives, from the second MPLB component, a latency response packet, including a second timestamp provided by a second clock associated with the second MPLB component. The first MPLB component derives a one-way latency value for the path based upon the timestamps. | 12-09-2010 |
20110078331 | MECHANISM FOR ENABLING LAYER TWO HOST ADDRESSES TO BE SHIELDED FROM THE SWITCHES IN A NETWORK - Methods and systems for shielding layer two host addresses (e.g., MAC addresses) from a network are provided. According to one embodiment, a border component of a network of switches receives a first packet intended for a first host having a first L2 address and a first L3 address associated therewith. The first packet includes the first L3 address and a substitute L2 address as destination addresses. The substitute L2 address is associated with a communication channel of the border component. A data structure including information regarding an association between the first L3 address and the first L2 address is accessed by the border component. A determination is made that the destination L2 address for the first packet should be the first L2 address. A first updated packet is derived from the first packet by replacing the substitute L2 address with the first L2 address and sent to the first host. | 03-31-2011 |
20110235639 | MECHANISM FOR ENABLING LAYER TWO HOST ADDRESSES TO BE SHIELDED FROM THE SWITCHES IN A NETWORK - Methods and systems for shielding layer two host addresses (e.g., MAC addresses) from a network are provided. A border component interposed between a network of switches and multiple local hosts receives from a first local host a first packet destined for a first destination host. The first local host has a first layer 2 (L2) address and a first layer 3 (L3) address associated therewith. The first packet includes the first L2 address as a source L2 address for the first packet, and includes the first L3 address as a source L3 address for the first packet. The border component shields the first L2 address from the network of switches by replacing the source L2 address for the first packet with a substitute L2 address associated with a communication channel of the border component before sending the first packet to the network of switches. | 09-29-2011 |
20130121152 | ADAPTIVE LOAD BALANCING - Methods and systems for performing load balancing within an Ethernet network are provided. According to one embodiment, a set of virtual networks, into which a network has been logically divided that can be used by a first component is maintained. Each of the virtual networks is a loop-free switching path, reverse path learning network and provides a path through the network between the first component and a second component. A packet destined for the second component is received by the first component. On a packet-by-packet basis or on a per flow basis, the first component dynamically selects a particular path by selecting a virtual network for transporting the received packet that tends to balance traffic load across the virtual networks. The first component causes the received packet to be transported through the network to the second component via the particular path. | 05-16-2013 |
20130155862 | PERFORMING RATE LIMITING WITHIN A NETWORK - Methods and systems for performing rate limiting are provided. According to one embodiment, information is maintained regarding a set of virtual networks into which a network has been logically divided. Each virtual network comprises a loop-free switching path, reverse path learning network and provides a path through the network between a first and second component thereby collectively providing multiple paths between the first and second components. Packets are received by the first component that are associated with a flow sent by a source component. The packets are forwarded by the first component to the second component along a particular path defined by the set of virtual networks. A congestion metric is determined for the particular path and based thereon it is determined whether a congestion threshold has been reached. Responsive to an affirmative determination, the source component is instructed to limit the rate at which the packets are sent. | 06-20-2013 |
20130308640 | MECHANISM FOR ENABLING LAYER TWO HOST ADDRESSES TO BE SHIELDED FROM THE SWITCHES IN A NETWORK - Methods and systems for shielding layer two host addresses (e.g., MAC addresses) from a network are provided. A border component interposed between a network of switches and multiple local hosts receives from a first local host a first packet destined for a first destination host. The first local host has a first layer 2 (L2) address and a first layer 3 (L3) address associated therewith. The first packet includes the first L2 address as a source L2 address for the first packet, and includes the first L3 address as a source L3 address for the first packet. The border component shields the first L2 address from the network of switches by replacing the source L2 address for the first packet with a substitute L2 address before sending the first packet to the network of switches. | 11-21-2013 |
20140029429 | ADAPTIVE LOAD BALANCING - Methods and systems for performing load balancing within an Ethernet network are provided. According to one embodiment, a set of paths is maintained by a first component of multiple components coupled in communication with a network. Each path is a loop-free switching path, reverse path learning network, and the first component and a second component of the multiple components are connected through each path. A packet destined for the second component is received by the first component. On a packet-by-packet basis or on a per flow basis, the first component dynamically selects a particular path of the multiple paths for transporting the received packet in a manner that tends to balance traffic load across the set of paths. The first component causes the received packet to be transported through the network to the second component via the particular path. | 01-30-2014 |
20140177442 | PERFORMING RATE LIMITING WITHIN A NETWORK - Methods and systems for performing rate limiting are provided. According to one embodiment, information is maintained regarding a set of virtual networks into which a network has been logically divided. Each virtual network comprises a loop-free switching path, reverse path learning network and provides a path through the network between a first and second network device thereby collectively providing multiple paths between the first and second network devices. Packets are received by the first device that are associated with a flow sent by a source network device. The packets are forwarded by the first device to the second device via a particular path of the multiple paths. A congestion metric is determined for the particular path and based thereon it is determined whether a congestion threshold has been reached. Responsive to an affirmative determination, the source device is instructed to reduce the rate at which the packets are sent. | 06-26-2014 |
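Several of the abstracts above (20090303882, 20090304007, 20100309811) describe the same measurement loop: derive a path's latency from timestamped request/response packets, combine multiple latency samples into a congestion metric consisting of a latency value and a latency variation value, and move a flow only when an alternate path looks sufficiently less congested. The sketch below is illustrative only, not the claimed implementations: the function names, the use of mean and standard deviation as the two metric components, the assumption of synchronized clocks, and the 1.2x hysteresis margin (to avoid flapping flows between paths) are all assumptions of this example.

```python
import statistics

def one_way_latency(send_ts: float, recv_ts: float) -> float:
    """Latency for one latency-request/response exchange, computed from
    the sender's timestamp and the responder's timestamp.
    Assumes the two components' clocks are synchronized."""
    return recv_ts - send_ts

def congestion_metric(latencies: list[float]) -> tuple[float, float]:
    """Combine latency samples for a path into (latency, variation).
    Taken together, these indicate how congested the path currently is."""
    return statistics.fmean(latencies), statistics.pstdev(latencies)

def should_move_flow(current: list[float],
                     alternate: list[float],
                     margin: float = 1.2) -> bool:
    """Decide whether to move a flow to the alternate path.
    The flow moves only if the current path's combined score exceeds
    the alternate's by more than `margin` (hysteresis, to keep a flow
    from oscillating between two similarly loaded paths)."""
    cur_lat, cur_var = congestion_metric(current)
    alt_lat, alt_var = congestion_metric(alternate)
    return (cur_lat + cur_var) > margin * (alt_lat + alt_var)

# Example: a path with ~11 ms samples vs. an alternate at a steady 2 ms.
samples_current = [10.0, 12.0, 11.0]
samples_alternate = [2.0, 2.0, 2.0]
```

With these sample lists, `should_move_flow(samples_current, samples_alternate)` is true (the congested path scores roughly 11.8 against 2.0), while the reverse call is false, so a flow already on the fast path stays put.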