Patent application number | Description | Published |
20080198746 | SWITCH FABRIC END-TO-END CONGESTION AVOIDANCE MECHANISM - Aspects of a switch fabric end-to-end congestion avoidance mechanism are presented. Aspects of a system for end-to-end congestion avoidance in a switch fabric may include at least one circuit that enables reception of a congestion notification message that specifies a traffic flow identifier. The circuitry may enable increase or decrease of a current rate for transmission of data link layer (DLL) protocol data units (PDU) associated with the specified traffic flow identifier as a response to the reception of the congestion notification message. | 08-21-2008 |
20080205403 | Network packet processing using multi-stage classification - Methods and systems for processing packets in a data network using multi-stage classification are disclosed. An example method for processing packets includes receiving a data packet at a first processing stage and examining the packet at the first processing stage to determine a first attribute of the packet. Based on the first attribute, a first classification is assigned to the packet. In the example method, the packet and the first classification are communicated from the first processing stage to a second processing stage and the packet is examined at the second processing stage to determine a second attribute of the packet. Based on the second attribute, a second classification is assigned to the packet. The example method further includes processing the packet based on the first classification and the second classification. | 08-28-2008 |
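The staged classification above lends itself to a short sketch. This is an illustrative reconstruction, not the patented implementation: the attribute names (`ethertype`, `proto`) and the fast-path/slow-path outcome are assumptions.

```python
# Hypothetical two-stage packet classifier: stage 1 inspects one attribute,
# stage 2 inspects another, and final processing uses both classifications.
def classify_stage1(pkt: dict) -> str:
    # First attribute: EtherType (0x0800 = IPv4) -- an assumed example field.
    return "ip" if pkt.get("ethertype") == 0x0800 else "non-ip"

def classify_stage2(pkt: dict) -> str:
    # Second attribute: IP protocol number (6 = TCP) -- also an assumption.
    return "tcp" if pkt.get("proto") == 6 else "other"

def process(pkt: dict) -> str:
    c1 = classify_stage1(pkt)  # stage 1 assigns the first classification
    c2 = classify_stage2(pkt)  # stage 2 assigns the second classification
    # The processing decision depends on both classifications together.
    return "fast-path" if (c1, c2) == ("ip", "tcp") else "slow-path"
```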
20080229056 | METHOD AND APPARATUS FOR DUAL-HASHING TABLES - Methods and apparatus for dual hash tables are disclosed. An example method includes logically dividing a hash table data structure into a first hash table and a second hash table, where the first hash table and the second hash table are substantially logically equivalent. The example method further includes receiving a key and a corresponding data value, applying a first hash function to the key to produce a first index to a first bucket in the first hash table, and applying a second hash function to the key to produce a second index to a second bucket in the second hash table. In the example method the key and the data value are inserted in one of the first hash table and the second hash table based on the first index and the second index. | 09-18-2008 |
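A minimal Python sketch of the dual-hash idea, assuming chained buckets and a load-balancing insertion rule (place the entry in whichever table's candidate bucket is currently shorter). The salted-SHA-256 hash functions and bucket count are stand-ins, not the ones in the application.

```python
import hashlib

NUM_BUCKETS = 8  # buckets per logical table; illustrative size

def _hash(key: str, salt: str) -> int:
    # Stand-in hash function; the salt makes the two functions independent.
    digest = hashlib.sha256((salt + key).encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_BUCKETS

class DualHashTable:
    def __init__(self):
        # Two logically equivalent tables, each a list of chained buckets.
        self.tables = ([[] for _ in range(NUM_BUCKETS)],
                       [[] for _ in range(NUM_BUCKETS)])

    def insert(self, key, value):
        i0 = _hash(key, "h0")  # first hash function -> first table index
        i1 = _hash(key, "h1")  # second hash function -> second table index
        b0, b1 = self.tables[0][i0], self.tables[1][i1]
        # Insert into whichever candidate bucket is less occupied.
        target = b0 if len(b0) <= len(b1) else b1
        target.append((key, value))

    def lookup(self, key):
        # A key may live in either table, so probe both candidate buckets.
        for t, salt in ((0, "h0"), (1, "h1")):
            for k, v in self.tables[t][_hash(key, salt)]:
                if k == key:
                    return v
        return None
```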
20080298397 | COMMUNICATION FABRIC BANDWIDTH MANAGEMENT - Methods and apparatus for communication fabric bandwidth management are disclosed. An example method includes receiving data at a first network entity, where the data is received from a second network entity. The example method further includes, at the first network entity, queuing the received data in a data queue associated with the second network entity. The example method still further includes determining that an amount of queued data in the data queue associated with the second network entity exceeds a first threshold. In response to the first threshold being exceeded, a first control message is communicated from the first network entity to the second network entity. In the example method, in response to the first control message, a data rate at which the second network entity sends data to the first network entity is reduced. | 12-04-2008 |
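The per-source queue accounting and threshold-triggered control message can be sketched in a few lines. Entity names, the callback interface, and the single-threshold simplification are assumptions for illustration.

```python
# Hypothetical receiver-side sketch: queue data per source entity, and when a
# source's queue exceeds a threshold, send it a rate-reduction control message.
class FabricReceiver:
    def __init__(self, threshold, send_control):
        self.queues = {}                   # per-source data queues
        self.threshold = threshold
        self.send_control = send_control   # callback toward the source entity

    def receive(self, source, data):
        q = self.queues.setdefault(source, [])
        q.append(data)
        # Threshold exceeded: ask the source to reduce its sending rate.
        if len(q) > self.threshold:
            self.send_control(source, "reduce-rate")
```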
20090003212 | DATA SWITCHING FLOW CONTROL WITH VIRTUAL OUTPUT QUEUING - Methods and apparatus for data switching are disclosed. An example method includes receiving a data traffic flow at a data ingress module and buffering the data traffic flow in a virtual output queue included in the data ingress module, where the virtual output queue is associated with a data egress module. The example method also includes communicating the data traffic flow to the data egress module via a fabric egress queue included in a data-switch fabric. The example method further includes monitoring data occupancy in the fabric egress queue and determining, based on the data occupancy, that a change in congestion state in the fabric egress queue has occurred. The example method still further includes, in response to the change in congestion state, communicating a flow control message to the data ingress module and, in response to the flow control message, modifying communication of the data traffic flow. | 01-01-2009 |
20090201806 | ROUTING FAILOVER WITH ACCURATE MULTICAST DELIVERY - A node comprising: an ingress port configured to receive data; a plurality of egress ports configured to transmit data; a routing table configured to provide, at least in part, both a preferred routing path and a recovery routing path; a data tag engine configured to read a tag, associated with the data, that indicates the routing state of the data and, based at least in part upon the tag, determine whether to use the preferred routing path or the recovery routing path for a selected path, and determine if the tag is to be modified to indicate a change in the routing state of the data; and a routing engine configured to utilize the selected path to determine the egress port from which to transmit the data. | 08-13-2009 |
20090207833 | EFFICIENT KEY SEQUENCER - A method includes determining a plurality of fields of a packet associated with a routing of the packet, wherein each field of the plurality of fields includes one or more bits. The method further includes arranging the bits of the plurality of fields into a plurality of ordered partitions of a search sequence, the search sequence being associated with a plurality of searches, wherein the searches are based on the bits included in one or more of the ordered partitions. The method also includes providing, to a routing table including routing information associated with the routing of the packet, one or more of the ordered partitions of the search sequence, wherein the routing table is structured based on the search sequence; receiving, based on the plurality of searches, the routing information associated with the routing of the packet from the routing table; and routing the packet based on the routing information. | 08-20-2009 |
20090207848 | FLEXIBLE BUFFER ALLOCATION ENTITIES FOR TRAFFIC AGGREGATE CONTAINMENT - An apparatus comprising a plurality of physical ingress ports configured to receive data, each data having a data type; a plurality of physical egress ports configured to transmit data; a memory configured to buffer data that has been received; a plurality of virtual routing devices, wherein each of the virtual routing devices is associated with a particular data type and each of the virtual routing devices is configured to: virtually buffer data associated with the respective data type, and regulate the quality of service provided to the respective data type; and a data manager configured to manage the receipt and transmission of data. | 08-20-2009 |
20090213868 | SEPARATION OF FABRIC AND PACKET PROCESSING SOURCE IN A SYSTEM - An apparatus may include a port interface that is arranged and configured to receive a packet from an ingress port, a traffic management module being operatively coupled to the port interface and that is arranged and configured to manage routing of the packet to a destination, and a packet processing engine that is arranged and configured to perform packet processing on the packet and to associate a tag with the packet, where the tag includes a packet processing source field, a destination field, and a fabric source field. | 08-27-2009 |
20090259810 | ACCESS CONTROL LIST RULE COMPRESSION USING METER RE-MAPPING - A system may include a content addressable memory (CAM) that is configured to include multiple services, receive a key, where the key includes source port information and IP information related to a packet received on one of multiple ports, and output a match index value in response to a search of the CAM using the key. The system may include a policy memory module that is configured to receive the match index value and to output meter controls and a meter address based on the match index value, a port meter map module that is configured to receive the source port information and to output a mask value and a per port meter value, and a remapping module that is configured to receive the meter address, receive the mask value and the per port meter value, and modify the meter address based on those values. | 10-15-2009 |
20090276604 | ASSIGNING MEMORY FOR ADDRESS TYPES - Various example implementations are disclosed. According to one example, an integrated circuit may include a key extractor, a translation table block, and a memory assigner. The key extractor may be configured to receive data, extract key-related information from the data, and send the key-related information to a first memory device. The translation table block may be configured to update a mapping table based on a memory assigner assigning physical portions of the first memory device to each of a plurality of address types, receive an index from the first memory device in response to the key extractor sending the key-related information to the first memory device, and send a data request to a second memory device based on the received index, the data request identifying a physical portion of the second memory device. | 11-05-2009 |
20100054126 | METER-BASED HIERARCHICAL BANDWIDTH SHARING - Example methods and apparatus for hierarchical bandwidth management are disclosed. An example method includes receiving a data packet included in a first data traffic flow and determining if a first rate of traffic of the first data traffic flow is less than or equal to a first threshold. In the event the first rate of traffic is less than or equal to the first threshold, the example method includes marking the data packet with a first marker type. In the event the first rate of traffic is greater than the first threshold, the example method includes marking the data packet with a second marker type. The method further includes receiving a second data traffic flow having a second rate of traffic and combining the first data traffic flow and the second data traffic flow to produce a third data traffic flow. In the event the data packet is marked with the first marker type, the data packet is forwarded in the third data flow. The example method also includes determining whether a third rate of traffic of the third data traffic flow is less than or equal to a second threshold. In the event the data packet is marked with the second marker type and the third rate of traffic is less than or equal to the second threshold, the example method includes changing the second marker type to the first marker type and forwarding the data packet in the third data flow. The example method still further includes, in the event the data packet is marked with the second marker type and the third rate of traffic is greater than the second threshold, discarding the data packet. | 03-04-2010 |
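The marking and forwarding decision tree described above condenses into two small functions. This is a sketch under assumed names; real meters would measure rates over time rather than take them as arguments.

```python
# Hypothetical two-stage marking sketch: a first-stage (per-flow) meter marks
# packets green or yellow, and a second-stage (combined-flow) decision either
# forwards, upgrades, or discards them.
GREEN, YELLOW = "green", "yellow"

def micro_mark(rate, threshold):
    # First marker type if the flow is within its threshold, else second.
    return GREEN if rate <= threshold else YELLOW

def macro_decide(marker, combined_rate, threshold):
    # Green packets always forward. Yellow packets are upgraded to green and
    # forwarded if the combined flow has headroom; otherwise they are dropped.
    if marker == GREEN:
        return "forward"
    if combined_rate <= threshold:
        return "forward"   # marker changed to green, packet forwarded
    return "drop"
```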
20100054127 | AGGREGATE CONGESTION DETECTION AND MANAGEMENT - Example embodiments of methods and apparatus for aggregate congestion detection and management are disclosed. An example method includes receiving a data packet, the data packet being associated with a respective destination data queue. The example method also includes determining an average queue utilization for the destination queue and determining a first aggregate utilization for a first set of data queues, the first set of data queues including the destination queue. The example method further includes determining, based on the average queue utilization and the first aggregate utilization, one or more probabilities associated with the data packet. The example method still further includes, in accordance with the one or more probabilities, randomly marking the packet to indicate a congestion state or randomly determining whether to drop the data packet. The example method also includes dropping the packet if a determination to drop the packet is made. | 03-04-2010 |
20100094982 | GENERIC OFFLOAD ARCHITECTURE - A system comprising an ingress device configured to receive and process data, wherein the ingress device comprises a plurality of processing stages configured to process the data, wherein a configurable subset of the stages comprises a selectable tap point, and wherein the ingress device is further configured to, upon reaching a selected tap point, suspend processing and send at least a portion of the data to another device; an offload engine device configured to receive data from the ingress device, after the selected tap point has been reached, and to provide additional processing of the data, which the ingress device is not configured to provide; an egress device configured to transmit the data that has been additionally processed by the offload engine device. | 04-15-2010 |
20100097934 | NETWORK SWITCH FABRIC DISPERSION - Methods and apparatus for communicating data traffic using switch fabric dispersion are disclosed. An example apparatus includes a first tier of switch elements; and a second tier of switch elements operationally coupled with the first tier of switch elements. In the example apparatus, the first tier of switch elements is configured to receive a data packet from a source. The first tier of switch elements is also configured to route the data packet to the second tier of switch elements in accordance with a dispersion function, where the dispersion function is based on a dispersion tag associated with the data packet. The first tier of switch elements is still further configured to transmit the data packet to a destination for the data packet after receiving it from the second tier of switch elements. In the example apparatus the second tier of switch elements is configured to receive the data packet from the first tier of switch elements and route the data packet, based on a destination address of the data packet, back to the first tier of switch elements for transmission to the destination. | 04-22-2010 |
20100172260 | METHOD AND SYSTEM FOR TRANSMISSION CONTROL PROTOCOL (TCP) TRAFFIC SMOOTHING - Various aspects of a method and system for transmission control protocol (TCP) traffic smoothing are presented. Traffic smoothing may comprise a method for controlling data transmission in a communications system that further comprises scheduling the timing of transmission of information from a TCP offload engine (TOE) based on a traffic profile. Traffic smoothing may comprise transmitting information from a TOE at a rate that is either greater than, approximately equal to, or less than the rate at which the information was generated. Some conventional network interface cards (NICs) that utilize TOEs may not provide a mechanism that enables traffic shaping. By not providing a mechanism for traffic shaping, there may be a greater probability of lost packets in the network. | 07-08-2010 |
20100271946 | METER-BASED HIERARCHICAL BANDWIDTH SHARING - Example methods and apparatus for hierarchical bandwidth management are disclosed. An example method includes using dual-token bucket meters (two-rate three-color meters) to meter bandwidth usage by individual microflows and associated macroflows (combinations of microflows). The dual-token bucket meters are used to locally and finally mark the packets using a three-color marking approach. In the example method, forwarding and discard decisions for packets processed using such techniques are made based on the final marking. | 10-28-2010 |
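A simplified two-rate three-color (dual-token-bucket) meter in the spirit of RFC 2698, as a sketch: the parameter names (`cir`, `cbs`, `pir`, `pbs`) follow the RFC, and the size-unit accounting and refill model are assumptions, not details from the application.

```python
# Illustrative two-rate three-color meter: a committed bucket (cir/cbs) and a
# peak bucket (pir/pbs). Packets within both rates are green, within peak only
# are yellow, and beyond peak are red.
class TwoRateThreeColorMeter:
    def __init__(self, cir, cbs, pir, pbs):
        self.cir, self.pir = cir, pir    # committed / peak refill rates
        self.cbs, self.pbs = cbs, pbs    # bucket capacities
        self.tc, self.tp = cbs, pbs      # current token counts (start full)

    def tick(self, dt):
        # Refill both buckets at their configured rates, capped at capacity.
        self.tc = min(self.cbs, self.tc + self.cir * dt)
        self.tp = min(self.pbs, self.tp + self.pir * dt)

    def color(self, size):
        if self.tp < size:
            return "red"        # exceeds even the peak rate
        self.tp -= size
        if self.tc < size:
            return "yellow"     # within peak but exceeds committed rate
        self.tc -= size
        return "green"          # within committed rate
```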
20100318821 | SCALABLE, DYNAMIC POWER MANAGEMENT SCHEME FOR SWITCHING ARCHITECTURES UTILIZING MULTIPLE BANKS - According to one general aspect, a method may include receiving data from a network device. In some embodiments, the method may include writing the data to a memory bank that is part of a plurality of at least single-ported memory banks that have been grouped to act as a single at least dual-ported aggregated memory element. In various embodiments, the method may include monitoring the usage of the plurality of memory banks. In one embodiment, the method may include, based upon a predefined set of criteria, placing a memory bank that meets the predefined criteria in a low-power mode. | 12-16-2010 |
20110002222 | METER-BASED HIERARCHICAL BANDWIDTH SHARING - Example methods and apparatus for hierarchical bandwidth management are disclosed. An example method includes receiving a data packet included in a first data traffic flow having a first rate of traffic. The example method further includes marking the data packet with a first marker type if the first rate of traffic is less than or equal to a first threshold, otherwise marking the data packet with a second marker type. The example method also includes combining the first data traffic flow with a second data traffic flow having a second rate of traffic to produce a third data traffic flow having a third rate of traffic. The example method still further includes, if the data packet is marked with the first marker type, forwarding the data packet in the third data flow. The example method yet further includes, if the data packet is marked with the second marker type and the third rate of traffic is less than or equal to a second threshold, forwarding the data packet in the third data flow, otherwise, discarding the packet. | 01-06-2011 |
20110013627 | FLOW BASED PATH SELECTION RANDOMIZATION - Methods and apparatus for randomizing selection of a next-hop path/link in a network are disclosed. An example method includes receiving, at the network device, a data packet. The example method further includes generating a first hash key based on the data packet and generating a first hash value from the first hash key using a first hash function. The example method also includes generating a second hash key based on the data packet and the first hash value and generating a second hash value from the second hash key using a second hash function. The example method still further includes selecting a next-hop path based on the second hash value. | 01-20-2011 |
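The chained two-stage hashing above can be sketched as follows. SHA-256 stands in for the (unspecified) hardware hash functions, and the salts distinguishing the two stages are assumptions.

```python
import hashlib

def _h(data: bytes, salt: bytes) -> int:
    # Stand-in hash function; the salt makes the two stages independent.
    return int.from_bytes(hashlib.sha256(salt + data).digest()[:4], "big")

def select_next_hop(flow_key: bytes, num_paths: int) -> int:
    # Stage 1: first hash value from a key built from the packet fields.
    first = _h(flow_key, b"h1")
    # Stage 2: the second key folds in the first hash value, so the final
    # selection depends on both hash functions.
    second_key = flow_key + first.to_bytes(4, "big")
    return _h(second_key, b"h2") % num_paths
```

Because both stages hash only flow-derived data, packets of the same flow always pick the same path (no reordering), while distinct flows are spread pseudo-randomly.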
20110013638 | NODE BASED PATH SELECTION RANDOMIZATION - Methods and apparatus for randomizing selection of a next-hop path/link in a network are disclosed. An example method includes randomly selecting one or more path-selection randomization operations to be applied to data packets processed in the network device. The example method further includes receiving a data packet and applying, by the network device, the one or more path-selection randomization operations to the data packet. The example method also includes determining a next-hop path for the data packet based on the one or more path-selection randomization operations and transmitting the data packet to a next-hop network device using the determined next-hop path. | 01-20-2011 |
20110013639 | FLOW BASED PATH SELECTION RANDOMIZATION USING PARALLEL HASH FUNCTIONS - Methods and apparatus for randomizing selection of a next-hop path/link in a network are disclosed. An example method includes receiving, at the network device, a data packet. The example method further includes generating a first hash key based on the data packet and generating a first hash value from the first hash key using a first hash function. The example method also includes generating a second hash key based on the data packet and generating a second hash value from the second hash key using a second hash function. The method still further includes combining the first hash value and the second hash value to produce a combined hash value and selecting a next-hop path based on the combined hash value. | 01-20-2011 |
20110029796 | System and Method for Adjusting an Energy Efficient Ethernet Control Policy Using Measured Power Savings - A system and method for adjusting an energy efficient Ethernet (EEE) control policy using measured power savings. An EEE-enabled device can be designed to report EEE event data. This reported EEE event data can be used to quantify the actual EEE benefits of the EEE-enabled device, debug the EEE-enabled device, and adjust the EEE control policy. | 02-03-2011 |
20110051602 | DYNAMIC LOAD BALANCING - Methods and apparatus for dynamic load balancing are disclosed. An example method includes receiving, at a network device, a data packet to be sent via an aggregation group, where the aggregation group comprises a plurality of aggregate members. The example method further includes determining, based on the data packet, a flow identifier of a flow to which the data packet belongs and determining a state of the flow. The example method also includes determining, based on the flow identifier and the state of the flow, an assigned member of the plurality of aggregate members for the flow and communicating the packet via the assigned member. | 03-03-2011 |
20110051603 | DYNAMIC LOAD BALANCING USING QUALITY/LOADING BANDS - Methods and apparatus for dynamic load balancing using quality/loading bands are disclosed. An example method includes determining, by a network device, respective quality metrics for each of a plurality of members of an aggregation group of the network device, the respective quality metrics representing respective data traffic loading for each member of the aggregation group. The example method further includes grouping the plurality of aggregation group members into a plurality of loading/quality bands based on their respective quality metrics. The example method also includes selecting members of the aggregation group for transmitting packets from a loading/quality band corresponding with members of the aggregation group having lower data traffic loading relative to the other members of the aggregation group. | 03-03-2011 |
20110051735 | DYNAMIC LOAD BALANCING USING VIRTUAL LINK CREDIT ACCOUNTING - Methods and apparatus for dynamic load balancing using virtual link credit accounting are disclosed. An example method includes receiving, at a network device, a data packet to be communicated using an aggregation group, the aggregation group including a plurality of virtual links having a common destination. The example method further includes determining a hash value based on the packet and determining an assigned virtual link of the plurality of virtual links based on the hash value. The example method also includes reducing a number of available transmission credits for the aggregation group and reducing a number of available transmission credits for the assigned virtual link. The example method still further includes communicating the packet to another network device using the assigned virtual link. | 03-03-2011 |
20110058477 | INTELLIGENT CONGESTION FEEDBACK APPARATUS AND METHOD - Apparatus and methods for intelligent congestion feedback are disclosed. An example apparatus includes a data interface configured to receive data packets from a source endpoint via an intermediate node. The data packets include a field indicating whether data congestion for data being sent to the destination endpoint is occurring. The example apparatus also includes a timer. The example apparatus further includes a feedback loop interface configured to selectively enable a feedback loop to the source endpoint and to transmit congestion notification (CN) messages to the source endpoint over the feedback loop. Upon receiving a data packet indicating that congestion has occurred due to the data packets from the source endpoint to the destination endpoint, the destination endpoint is configured to set the timer to a preset time value, start the timer counting down from the preset time value to zero, enable the feedback loop, and transmit the CN messages. | 03-10-2011 |
20120124093 | INTELLIGENT NETWORK INTERFACE CONTROLLER - A network interface device includes a security database and a security services engine. The security database is configured to store patterns corresponding to predetermined malware. The security services engine is configured to compare data to be transmitted through a network to the patterns stored in the security database, and the security database is configured to receive updated patterns from the network. | 05-17-2012 |
20120195192 | DYNAMIC MEMORY BANDWIDTH ALLOCATION - Methods and apparatus for dynamic bandwidth allocation are disclosed. An example method includes determining, by a network device, at least one of a congestion state of a packet memory buffer of the network device and a congestion state of an external packet memory that is operationally coupled with the network device. The example method further includes dynamically adjusting, by the network device, respective bandwidth allocations for read and write operations between the network device and the external packet memory, the dynamic adjusting being based on the determined congestion state of the packet memory buffer and/or the determined congestion state of the external packet memory. | 08-02-2012 |
20120230194 | Hash-Based Load Balancing in Large Multi-Hop Networks with Randomized Seed Selection - Methods and apparatus for improving hash-based load balancing with randomized seed selection are disclosed. The methods and apparatus described herein increase the number of unique fields in a hash key before the hash key is presented to a hash function. The methods include selecting one or more seed values based on the output of a first arbitrary function having a first set of packet fields as input. The one or more seed values are combined with a second set of packet fields. A second arbitrary function generates a hash value based on the one or more seed values and the second set of packet fields. The hash value is applied as input to a hash function in a member selection module. The method enables per-flow randomization attributes based on per-packet attributes to perform aggregate member selection while remaining deterministic from a root-node or network perspective. | 09-13-2012 |
20120230225 | Hash-Based Load Balancing with Per-Hop Seeding - Methods and apparatus for improving hash-based load balancing with per-hop seeding are disclosed. The methods and apparatus described herein provide a set of techniques that enable nodes to perform differing mathematical transformations when selecting a destination link. The techniques include manipulation of seeds, hash configuration mode randomization at a per node basis, per node/microflow basis or per microflow basis, seed index generation, and member selection. A node can utilize any, or all, of the techniques presented in this disclosure simultaneously to improve traffic distribution and avoid path starvation with a degree of determinism. | 09-13-2012 |
20120287946 | Hash-Based Load Balancing with Flow Identifier Remapping - Methods and apparatus for improving hash-based load balancing using flow identifier remapping are disclosed. The node-based remapping of flow identifiers introduces additional information into the hash function by injecting new values into the hash key on a per node basis. The methods and apparatus described herein perform a remapping operation on a fixed per-flow attribute such as one or more packet fields. Upon receipt of a packet, a set of the packet fields is selected as a hash key. From these selected packet fields, one or more fields are selected and remapped using a remapping operation. A transformed hash key is formed using the one or more remapped values along with other packet fields. The transformed hash key is then presented as an input to an arbitrary hash function. The hash function generates a hash value that is then used for path selection. | 11-15-2012 |
20120307828 | Method and System of Frame Forwarding with Link Aggregation in Distributed Ethernet Bridges - Embodiments relate to forwarding of packets in link aggregation environments. A method for forwarding a packet through an extended switch including a first port extender and a second port extender directly or indirectly communicatively coupled to respectively a first interface and a second interface of a controlling bridge includes associating a first port extender interface of the first port extender with a global namespace or an interface-specific namespace. The method further includes receiving a packet through the first port extender interface, marking the received packet with an indication of the namespace configuration of the first port extender interface, processing the marked packet in the controlling bridge based at least in part upon the indication, and transmitting the processed packet out of the controlling bridge. | 12-06-2012 |
20130003549 | Resilient Hashing for Load Balancing of Traffic Flows - Methods, systems, and computer program product embodiments for managing traffic flows among a plurality of available member resources in a communications device are disclosed. Embodiments include configuring a flow table containing a plurality of mappings, where each of the mappings specifies a relationship between one of a range of index values and at least one of the plurality of available member resources of an aggregated resource, assigning, using the flow table, respective traffic flows to at least one of the plurality of available member resources, and, responsive to a change in the plurality of available member resources, changing the plurality of mappings. | 01-03-2013 |
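The index-to-member flow table and its resilience property can be sketched like this. The table size, round-robin fill, and remapping rule are illustrative assumptions; the point is that only entries mapped to a removed member change, so surviving flows keep their assignments.

```python
# Hypothetical resilient-hashing flow table: hash index -> member mapping.
def build_flow_table(members, size):
    # Fill the table round-robin; a flow's hash index selects its member.
    return [members[i % len(members)] for i in range(size)]

def fail_member(table, dead):
    # Remap only the entries that pointed at the failed member; all other
    # entries (and therefore their flows) are left untouched.
    alive = sorted(set(table) - {dead})
    return [alive[i % len(alive)] if m == dead else m
            for i, m in enumerate(table)]
```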
20130121153 | DYNAMIC LOAD BALANCING USING QUALITY/LOADING BANDS - Methods and apparatus for load balancing data traffic are disclosed. An example method includes determining a respective quality metric for each of a plurality of members of an aggregation group of the network device, each respective quality metric representing respective data traffic loading for each member of the plurality of aggregation group members. The example method also includes grouping the plurality of aggregation group members into a plurality of loading/quality bands based on their respective quality metrics. The example method further includes selecting members of the aggregation group for transmitting packets from a loading/quality band corresponding with members of the aggregation group having lower data traffic loading relative to other members of the aggregation group. | 05-16-2013 |
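One way to read the banding step, as a hypothetical sketch: bucket members by load and draw transmission candidates from the least-loaded band. The band width and integer load units are assumptions.

```python
# Illustrative loading/quality banding: members whose loads fall in the same
# band are treated as equally good, and candidates come from the best band.
def band_of(load, band_width):
    return load // band_width

def eligible_members(loads, band_width):
    """Return indices of aggregation members in the least-loaded band."""
    best = min(band_of(l, band_width) for l in loads)
    return [i for i, l in enumerate(loads) if band_of(l, band_width) == best]
```

A final per-packet selection (e.g. a hash over the flow key) would then pick one member from the eligible set.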
20130155859 | System and Method for Hierarchical Adaptive Dynamic Egress Port and Queue Buffer Management - A system and method for hierarchical adaptive dynamic egress port and queue buffer management. Efficient utilization of buffering resources in a commodity shared memory buffer switch is key to minimizing packet loss. Efficient utilization of buffering resources is enabled through adaptive queue limits that are derived from an adaptive port limit. | 06-20-2013 |
20130173908 | Hash Table Organization - Disclosed are various embodiments for improving hash table utilization. A key corresponding to a data item to be inserted into a hash table can be transformed to improve the entropy of the key space and the resultant hash codes that can be generated. Transformation data can be inserted into the key in various ways, which can result in a greater degree of variance in the resultant hash code calculated based upon the transformed key. | 07-04-2013 |
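A toy illustration of key transformation before hashing, assuming XOR mixing of per-table transformation data (a salt) into the key; the actual insertion scheme in the application may differ.

```python
import hashlib

def transform_key(key: bytes, salt: bytes) -> bytes:
    # Mix the transformation data (salt) cyclically into the key bytes, so
    # similar keys diverge before the hash code is computed.
    return bytes(b ^ salt[i % len(salt)] for i, b in enumerate(key))

def bucket(key: bytes, salt: bytes, num_buckets: int) -> int:
    tkey = transform_key(key, salt)
    return int.from_bytes(hashlib.sha256(tkey).digest()[:4], "big") % num_buckets
```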
20130308496 | System and Method for Generic Multi-Domain Network Pruning - A system and method for generic multi-domain network pruning. A generic mechanism that can control network pruning can be applied to a multi-domain context. In one embodiment, the pruning mechanism is implemented using a network pruning control table that can be accessed using a source domain identifier and a destination domain identifier. The source domain identifier is shared by network traffic that is received from any of a first plurality of network devices that are included in a source network domain and the destination network domain identifier is shared by network traffic destined to any of a second plurality of network devices that are included in a destination network domain. | 11-21-2013 |
20130322271 | SYSTEM FOR PERFORMING DATA CUT-THROUGH - A system transfers data. The system includes an ingress node transferring data at a determined bandwidth. The ingress node includes a buffer and operates based on a monitored node parameter. The system includes a controller in communication with the ingress node. The controller is configured to allocate, based on the monitored node parameter, an amount of the determined bandwidth for directly transferring data to bypass the buffer of the ingress node. | 12-05-2013 |
20130322457 | MULTI-HOMING IN AN EXTENDED BRIDGE - Disclosed are various embodiments for multi-homing in an extended bridge, including both multi-homing of port extenders and multi-homing of end stations. In various embodiments, a controlling bridge device receives a packet via an ingress virtual port and determines a destination virtual port link aggregation group based at least in part on a destination media access control (MAC) address of an end station in the packet. The controlling bridge device selects one of multiple egress virtual ports of the destination virtual port link aggregation group. The end station of the extended bridge is reachable through any of the egress virtual ports of the destination virtual port link aggregation group. The controlling bridge device forwards the packet through the selected egress virtual port, and the forwarded packet includes an identifier of a destination virtual port to which the end station is connected. | 12-05-2013 |
20130336332 | SCALING OUTPUT-BUFFERED SWITCHES - The systems and methods described herein allow for the scaling of output-buffered switches by decoupling the data path from the control path. Some embodiments of the invention include a switch with a memory management unit (MMU), in which the MMU enqueues data packets to an egress queue at a rate that is less than the maximum ingress rate of the switch. Other embodiments include switches that employ pre-enqueue work queues, with an arbiter that selects a data packet for forwarding from one of the pre-enqueue work queues to an egress queue. | 12-19-2013 |
20130343193 | Switch Fabric End-To-End Congestion Avoidance Mechanism - Aspects of a switch fabric end-to-end congestion avoidance mechanism are presented. Aspects of a system for end-to-end congestion avoidance in a switch fabric may include at least one circuit that enables reception of a congestion notification message that specifies a traffic flow identifier. The circuitry may enable increase or decrease of a current rate for transmission of data link layer (DLL) protocol data units (PDU) associated with the specified traffic flow identifier as a response to the reception of the congestion notification message. | 12-26-2013 |
20140022895 | Reducing Store And Forward Delay In Distributed Systems - Processing techniques in a network switch help reduce latency in the delivery of data packets to a recipient. The processing techniques include speculative flow status messaging, for example. The speculative flow status messaging may alert an egress tile or output port of an incoming packet before the incoming packet is fully received. The processing techniques may also include implementing a separate accelerated credit pool which provides controlled push capability for the ingress tile or input port to send packets to the egress tile or output port without waiting for a bandwidth credit from the egress tile or output port. | 01-23-2014 |
20140043974 | LOW-LATENCY SWITCHING - Disclosed are systems and methods for cut-through switching in port-speed-mismatched networks. Specifically, systems and methods are described in which data packets from an ingress device are paced, thereby matching the data rate of the ingress device with the data rate of the egress device. | 02-13-2014 |
20140064079 | ADAPTIVE CONGESTION MANAGEMENT - A computer-implemented method for implementing a congestion management policy, the method including determining a minimum congestion state for a first queue, based on a minimum guarantee use count of the first queue, determining a shared congestion state for the first queue, based on a shared buffer use count and a shared buffer congestion threshold, wherein the shared buffer congestion threshold is further based on an amount of remaining buffer memory, and determining a global congestion state based on a global shared buffer use count. In certain aspects, the method further includes implementing a congestion management policy based on the minimum congestion state, the shared congestion state and the global congestion state. Systems and computer-readable media are also provided. | 03-06-2014 |
20140086258 | Buffer Statistics Tracking - The systems and methods disclosed herein allow for a switch (in a packet-switching network) to track buffer statistics, and trigger an event, such as a hardware interrupt or a system snapshot, in response to the buffer statistics reaching a threshold that may indicate an impending problem. Since the switch itself triggers the event to alert the network administrator, the network administrator no longer needs to sift through mountains of data to identify potential problems. Also, since the switch triggers the event prior to a problem arising, the network administrator can provide remedial action prior to a problem occurring. This type of event-triggering mechanism makes the administration of packet-switching networks more manageable. | 03-27-2014 |
20140086262 | SCALABLE EGRESS PARTITIONED SHARED MEMORY ARCHITECTURE - Disclosed are various embodiments that provide an architecture of memory buffers for a network component configured to process packets. A network component may receive a packet, the packet being associated with a control structure, packet data, an input port set, and an output port set. The network component determines one of a plurality of control structure memory partitions for writing the control structure, the one of the plurality of control structure memory partitions being determined based at least upon the input port set and the output port set; and determines one of a plurality of packet data memory partitions for writing the packet data, the one of the plurality of packet data memory partitions being determined independently of the input port set. | 03-27-2014 |
20140098816 | MULTICAST SWITCHING FOR DISTRIBUTED DEVICES - A system for multicast switching for distributed devices may include an ingress node including an ingress memory and an egress node including an egress memory, where the ingress node is communicatively coupled to the egress node. The ingress node may be operable to receive a portion of a multicast frame over an ingress port, bypass the ingress memory and provide the portion to the egress node when the portion satisfies an ingress criterion, otherwise receive and store the entire frame in the ingress memory before providing the frame to the egress node. The egress node may be operable to receive the portion from the ingress node, bypass the egress memory for the portion and provide the portion to the first egress port when an egress criterion is satisfied, otherwise receive and store the entire multicast frame in the egress memory before providing the multicast frame to an egress port. | 04-10-2014 |
20140098818 | Internal Cut-Through For Distributed Switches - Processing techniques in a network switch help reduce latency in the delivery of data packets to a recipient. The processing techniques include internal cut-through. The internal cut-through may bypass input port buffers by directly forwarding packet data that has been received to an output port. At the output port, the packet data is buffered for processing and communication out of the switch. | 04-10-2014 |
20140112348 | TRAFFIC FLOW MANAGEMENT WITHIN A DISTRIBUTED SYSTEM - Various methods and systems are provided for traffic flow management within a distributed system. In one example, among others, a distributed system includes egress ports supported by nodes of the distributed system, cut-through tokens (c-tokens) including an indication of eligibility of the corresponding egress port to handle cut-through traffic, and a cut-through control ring to pass the c-tokens between the nodes. In another example, a method includes determining whether an egress port is available to handle cut-through traffic based upon a corresponding c-token, claiming the egress port for transmission of at least a portion of a packet, and routing the portion to the claimed egress port for transmission. In another example, a distributed system includes a first node configured to modify an eligibility indication of a c-token before transmission to a second node configured to route at least a portion of a packet based at least in part upon the eligibility indication. | 04-24-2014 |
20140126395 | SWITCH STATE REPORTING - Disclosed are various embodiments that relate to a network switch. The network switch obtains a network state metric, the network state metric quantifying a network traffic congestion associated with a switch. The network switch identifies a synchronous time stamp associated with the network state metric and generates a network state reporting message, the network state reporting message comprising the network state metric and the synchronous time stamp. The network state reporting message may be transmitted to a monitoring system. | 05-08-2014 |
20140126396 | Annotated Tracing Driven Network Adaptation - Network devices add annotation information to network packets as they travel through the network devices. The network devices may be switches, routers, bridges, hubs, or any other network device. The annotation information may be information specific to the network devices, as opposed to simply the kinds of information available at application servers that receive the network packets. As just a few examples, the annotation information may include switch buffer levels, routing delay, routing parameters affecting the packet, switch identifiers, power consumption, and heat, moisture, or other environmental data. | 05-08-2014 |
20140126573 | Annotated Tracing for Data Networks - Network devices add annotation information to network packets as they travel through the network devices. The network devices may be switches, routers, bridges, hubs, or any other network device. The annotation information may be information specific to the network devices, as opposed to simply the kinds of information available at application servers that receive the network packets. As just a few examples, the annotation information may include switch buffer levels, routing delay, routing parameters affecting the packet, switch identifiers, power consumption, and heat, moisture, or other environmental data. | 05-08-2014 |
20140133314 | FORENSICS FOR NETWORK SWITCHING DIAGNOSIS - A method for diagnosing performance of a network switch device includes a processor monitoring data generated by a sensor associated with a network switch device, the data related to states or attributes of the network switch device. The processor detects a determined condition in the operation of the network switch device related to the state or attribute. The processor generates an event trigger in response to detecting the determined condition and executes a forensic command in response to the event trigger. Executing the command includes sending information relevant to the determined condition for aggregation in computer storage and for analysis. | 05-15-2014 |
20140133483 | Distributed Switch Architecture Using Permutation Switching - A distributed switch architecture using permutation switching. In one embodiment, the distributed switch architecture facilitates connections between a plurality of ingress nodes and a plurality of egress nodes, wherein each of the plurality of ingress nodes and plurality of egress nodes are coupled to a plurality of ports (e.g., 40 gigabit Ethernet (GbE), 100 GbE, etc.). A plurality of crossbar switch modules are provided that are configured for coupling to a single output from each of the plurality of ingress nodes, and for coupling to a single input from each of the plurality of egress nodes. Permutations of connections for a crossbar switch module are defined by a permutation connection set that is stored in a permutation engine. Each permutation connection in the permutation connection set can be designed to couple one of the outputs from the plurality of ingress nodes to one of the inputs from the plurality of egress nodes, wherein the permutation connection set can ensure that each of the plurality of ingress nodes has an opportunity to connect with each of the plurality of egress nodes. | 05-15-2014 |
20140146666 | DEADLOCK RECOVERY FOR DISTRIBUTED DEVICES - A system for deadlock recovery of distributed devices may include a processor and memory. The processor may transmit packets to a device, receive a pause message indicating that the packet transmission should be paused, and initiate a timer and pause the packet transmission in response to receiving the pause message. The processor may enter a deadlock recovery state if the timer reaches a timeout before a resume message is received that indicates that the packet transmission can resume. The processor may, while in the deadlock recovery state, drop packets that have a packet age that is greater than a threshold, and may exit the deadlock recovery state upon dropping a packet that has a packet age less than the threshold, or upon receiving the resume message. The processor may re-initiate the timer if the resume message has not been received, otherwise the processor may resume the packet transmission. | 05-29-2014 |
20140185628 | DEADLINE AWARE QUEUE MANAGEMENT - A method for managing data traffic operating on a deadline is provided. The method includes receiving, on an intermediate node, a packet having one or more traffic characteristics. The method also includes evaluating, on the intermediate node, the one or more traffic characteristics to determine a priority of the packet. The method also includes selecting one of multiple queues on the intermediate node based on the determined priority. The method also includes processing, on the intermediate node, the packet based on the determined priority. The method also includes enqueuing the processed packet into the selected queue. The method further includes outputting the queued packet from the selected queue. | 07-03-2014 |
20140201354 | NETWORK TRAFFIC DEBUGGER - Disclosed are various embodiments that relate to a network switch. The switch determines whether a network packet is associated with a packet processing context, the packet processing context specifying a condition of handling network packets processed in the switch. The switch determines debug metadata for the network packet in response to the network packet being associated with the packet processing context, and the debug metadata is stored in a capture buffer. | 07-17-2014 |
20140211639 | Network Tracing for Data Centers - Network devices facilitate network tracing using tracing packets that travel through the network devices. The network devices may be switches, routers, bridges, hubs, or any other network device. The network tracing may include sending tracing packets down each of multiple routed paths between a source and a destination, at each hop through the network, or through a selected subset of the paths between a source and a destination. The network devices may add tracing information to the tracing packets, which an analysis system may review to determine characteristics of the network and the characteristics of the potentially many paths between a source and a destination. | 07-31-2014 |
20140219087 | Packet Marking For Flow Management, Including Deadline Aware Flow Management - Network devices facilitate flow management through packet marking. The network devices may be switches, routers, bridges, hubs, or any other network device. The packet marking may include analyzing received packets to determine when the received packets meet a marking criterion, and then applying a configurable marking function to mark the packets in a particular way. The marking capability may facilitate deadline aware end-to-end flow management, as one specific example. More generally, the marking capability may facilitate traffic management actions such as visibility actions and flow management actions. | 08-07-2014 |
20140233382 | Oversubscription Monitor - Aspects of oversubscription monitoring are described. In one embodiment, oversubscription monitoring includes accumulating an amount of data that arrives at a network component over at least one epoch of time. Further, a core processing rate at which data can be processed by the network component is calculated. Based on the amount of data and the core processing rate, it is determined whether the network component is operating in an oversubscribed region of operation. In one embodiment, when the network component is operating in the oversubscribed region of operation, certain quality of service metrics are monitored. Using the monitored metrics, a network operation display object may be generated for identifying or troubleshooting network errors during an oversubscribed region of operation of the network component. | 08-21-2014 |
20140241160 | Scalable, Low Latency, Deep Buffered Switch Architecture - A switch architecture includes an ingress module, ingress fabric interface module, and a switch fabric. The switch fabric communicates with egress fabric interface modules and egress modules. The architecture implements multiple layers of congestion management. The congestion management may include fast acting link level flow control and more slowly acting end-to-end flow control. The switch architecture simultaneously provides high scalability, with low latency and low frame loss. | 08-28-2014 |
20140254357 | FACILITATING NETWORK FLOWS - Disclosed are various embodiments for facilitating network flows in a networked environment. In various embodiments, a switch transmits data using an egress port that comprises an egress queue. The switch sets a congestion notification threshold for the egress queue. The switch generates a drain rate metric based at least in part on a drain rate for the egress queue, and the congestion notification threshold is adjusted based at least in part on the drain rate metric. | 09-11-2014 |
20140254385 | FACILITATING NETWORK FLOWS - In various embodiments, a system includes a switch comprising a resource that is shared between multiple objects. The switch comprises circuitry that determines a congestion metric for the switch in response to an amount of use of the resource by the objects. The circuitry determines a feedback parameter that is responsive to the congestion metric. The circuitry generates a congestion notification message that comprises a congestion feedback value responsive to the feedback parameter. | 09-11-2014 |
20140293786 | Path Resolution for Hierarchical Load Distribution - Network devices perform multiple stage path resolution. The path resolution may be ECMP resolution. Any particular stage of the multiple stage path resolution may be skipped under certain conditions. Further, the network devices facilitate fast, efficient redistribution of traffic when a next hop goes down, without reassigning traffic that was going to other unaffected next hops, using multiple stage ECMP resolution. | 10-02-2014 |
20140293825 | TIMESTAMPING DATA PACKETS - Disclosed are various embodiments for providing a data packet with timestamp information. A data packet is generated such that it comprises a payload and a header. The payload comprises a first timestamp field that comprises data indicating when a network device processed the data packet. The payload also comprises a body data field and a body data protocol field. The body data protocol field comprises data identifying a protocol used by body data in the body data field. The header comprises a payload protocol field that comprises data identifying that the payload comprises timestamp data. | 10-02-2014 |
20140310362 | Congestion Management in Overlay Networks - A system forwards congestion management messages to a source host updating the source address in the management message. The system may determine that the congestion management message was triggered responsive to an initial communication that was previously forwarded by the system. The system may use header translation within a single addressing scheme and/or may translate the congestion management message into a different type to support forwarding to the source of the initial communication. The system may use portions of the payload of the congestion management message to determine the source of the initial communication and to derive a different header for the translated congestion management message. | 10-16-2014 |
20140362858 | Efficient Management of Linked-Lists Traversed by Multiple Processes - A network device, such as a switch, implements enhanced linked-list processing features. The processing features facilitate packet manipulation actions performed, e.g., by hardware or software processes. Hardware processes may run for egress ports, for example, to traverse the linked-lists to apply the packet manipulation actions on packets before sending packets out of the ports. | 12-11-2014 |
20150016258 | Path Aggregation Group Monitor - A network device monitors a path aggregation group. The network device may monitor path selection for network traffic (e.g., packets) communicated through the path aggregation group. During a monitoring period, the network device may obtain a path selection indication that a network packet has been selected for communication through the path aggregation group and specifically a first path in the path aggregation group. The network device may update a path entry associated with the first path in the path aggregation group. | 01-15-2015 |
20150089047 | CUT-THROUGH PACKET MANAGEMENT - Disclosed are various embodiments that relate to identifying a source of corruption in a network made up of multiple network nodes. A network node is configured to provide corruption source identification while handling packets according to a cut-through scheme. According to some embodiments, a network node may perform a running error detection operation on a cut-through packet and then insert a debug indicator into the cut-through packet. In other embodiments, the network node may process some packets according to a cut-through scheme while processing other packets according to a store-and-forward scheme to detect packet corruption in a network. | 03-26-2015 |
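Several of the mechanisms cataloged above lend themselves to short illustrative sketches. The adaptive-limit idea in 20130155859, where per-queue limits are derived from an adaptive per-port limit that tracks remaining shared buffer, can be approximated as follows. The proportional rule and the `alpha` scaling factors are illustrative assumptions, not the patented formula:

```python
# Illustrative sketch of hierarchical adaptive buffer limits.
# The port limit scales with the currently free shared buffer, and each
# queue limit is derived as a fraction of its parent port's limit.
# The alpha factors and the proportional rule are assumptions.

def port_limit(free_buffer_cells: int, alpha_port: float) -> int:
    """Adaptive port limit: a multiple of the free shared-buffer space."""
    return int(alpha_port * free_buffer_cells)

def queue_limit(port_lim: int, alpha_queue: float) -> int:
    """Adaptive queue limit derived from the parent port's limit."""
    return int(alpha_queue * port_lim)

def admit(queue_use: int, port_use: int, free_buffer_cells: int,
          alpha_port: float = 2.0, alpha_queue: float = 0.5) -> bool:
    """Admit a cell only if both the queue and its port are under limit."""
    p_lim = port_limit(free_buffer_cells, alpha_port)
    return port_use < p_lim and queue_use < queue_limit(p_lim, alpha_queue)
```

The self-tuning behavior falls out of the structure: as the shared buffer drains, `free_buffer_cells` shrinks and both limits tighten; as it empties out, the limits relax, so no static per-queue partitioning is needed.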
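The key-transformation idea in 20130173908, inserting transformation data into a key so that low-entropy key spaces spread across buckets, can be sketched with a simple multiplicative bit-mixer. The specific constant and mixing steps are illustrative assumptions, not the claimed transformation:

```python
# Illustrative sketch of transforming a key to improve hash entropy.
# Sequential or strided keys often collide when bucketed directly;
# mixing high bits into low bits before bucketing spreads them out.
# The multiplier constant and xor-shift steps are assumptions.

def transform_key(key: int) -> int:
    """Mix a 32-bit key so clustered key spaces spread across buckets."""
    key ^= key >> 16                      # fold high bits into low bits
    key = (key * 0x9E3779B1) & 0xFFFFFFFF # multiplicative scramble
    key ^= key >> 13                      # fold again after multiply
    return key
```

For example, keys that are all multiples of 64 land in a single bucket of a 64-bucket table when hashed directly, but spread across many buckets after transformation.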
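The fast-redistribution property described in 20140293786, re-resolving only the flows whose next hop failed while leaving other flows on their original paths, can be illustrated with a two-level lookup. The table layout and the CRC-based hash are assumptions for the sketch, not the patented design:

```python
# Illustrative sketch of multi-stage path resolution with member failover.
# Stage 1 hashes the flow to a primary ECMP member; stage 2 runs only if
# that member is down, rehashing among the survivors. Flows whose primary
# member is alive are never reassigned. Hash choice is an assumption.
import zlib

def resolve(flow_key: bytes, members: list, alive: set) -> str:
    """Return the next hop for a flow, skipping dead members."""
    primary = members[zlib.crc32(flow_key) % len(members)]
    if primary in alive:
        return primary                    # unaffected flows keep their path
    survivors = sorted(alive)             # second stage: survivors only
    return survivors[zlib.crc32(flow_key + b"fallback") % len(survivors)]
```

Because the second stage runs only for flows that hashed to the failed member, a next-hop failure disturbs only the traffic that must move, unlike a naive rehash over the shrunken member list.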