Patent application number | Description | Published |
20130297798 | TWO LEVEL PACKET DISTRIBUTION WITH STATELESS FIRST LEVEL PACKET DISTRIBUTION TO A GROUP OF SERVERS AND STATEFUL SECOND LEVEL PACKET DISTRIBUTION TO A SERVER WITHIN THE GROUP - A method, in one or more network elements that are in communication between clients that transmit packets and servers, of distributing the packets among the servers which are to process the packets. Stickiness of flows to servers assigned to process them is provided. A packet of a flow is received at a static first level packet distribution module. A group of servers is statically selected for the packet of the flow with the first level module. State that assigns the packet of the flow to the selected group of servers is not used. The packet of the flow is distributed to a distributed stateful second level packet distribution system. A server of the selected group is statefully selected with the second level system by accessing state that assigns processing of packets of the flow to the selected server. The packet of the flow is distributed to the selected server. | 11-07-2013 |
20130301641 | METHOD AND APPARATUS FOR PACKET CLASSIFICATION - In one aspect, the present invention reduces the amount of low-latency memory needed for rules-based packet classification by representing a packet classification rules database in compressed form. A packet processing rules database, e.g., an ACL database comprising multiple ACEs, is preprocessed to obtain corresponding rule fingerprints. These rule fingerprints are much smaller than the rules and are easily accommodated in on-chip or other low-latency memory that is generally available to the classification engine in limited amounts. The rules database in turn can be stored in off-chip or other higher-latency memory, as initial matching operations involve only the packet key of the subject packet and the fingerprint database. The rules database is accessed for full packet classification only if a tentative match is found between the packet key and an entry in the fingerprint database. Thus, the present invention also advantageously minimizes accesses to the rules database. | 11-14-2013 |
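The prefilter idea above can be sketched as follows, assuming (for brevity) exact-match rules keyed by a packet 5-tuple string; real ACL matching is far richer, and all names here are illustrative rather than taken from the patent.

```python
import hashlib

rules_db = {  # "off-chip": full rules, higher-latency in a real device
    "tcp:10.0.0.1:80": "permit",
    "udp:10.0.0.2:53": "deny",
}

def fp(key: str) -> int:
    # 16-bit fingerprint: much smaller than a full rule, fits in "on-chip" memory
    return int(hashlib.sha256(key.encode()).hexdigest(), 16) & 0xFFFF

fingerprints = {fp(k) for k in rules_db}  # preprocessed fingerprint database

def classify(packet_key: str) -> str:
    if fp(packet_key) not in fingerprints:
        return "no-match"            # fast path: the rules database is never touched
    # Tentative match: fingerprints can collide (false positive), so confirm
    # against the full rules database exactly once.
    return rules_db.get(packet_key, "no-match")
```

The common case (no tentative match) is resolved entirely from the small fingerprint set, which is what minimizes accesses to the higher-latency rules database.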
20140119193 | METHOD FOR DYNAMIC LOAD BALANCING OF NETWORK FLOWS ON LAG INTERFACES - A method is implemented by a network element to improve load sharing for a link aggregation group by redistributing data flows to less congested ports in a set of ports associated with the link aggregation group. The network element receives a data packet in a data flow at an ingress port of the network element. A load sharing process is performed to select an egress port of the network element. A check is made whether the selected egress port is congested. A check is made whether a time since a previous data packet in the data flow was received exceeds a threshold value. A less congested egress port is identified in the set of ports. A flow table is updated to bind the data flow to the less congested egress port and the data packet is forwarded to the less congested egress port. | 05-01-2014
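The rebinding decision above can be sketched as a single function: move a flow only when its current port is congested and the flow has been idle long enough that moving it is unlikely to reorder packets. Port names, the utilization model, and the threshold are assumptions for illustration.

```python
import zlib

PORTS = ["p1", "p2", "p3"]   # member ports of the LAG
flow_table = {}               # flow -> (bound egress port, last-seen time)
GAP_THRESHOLD = 0.5           # seconds since the previous packet of the flow
CONGESTED = 0.8               # utilization above which a port counts as congested

def select_egress(flow: str, load: dict, now: float) -> str:
    default = PORTS[zlib.crc32(flow.encode()) % len(PORTS)]
    port, last_seen = flow_table.get(flow, (default, None))
    # Rebind only if the current port is congested AND the inter-packet gap
    # exceeds the threshold, so in-flight packets are unlikely to be reordered.
    if load.get(port, 0) > CONGESTED and (last_seen is None or now - last_seen > GAP_THRESHOLD):
        port = min(PORTS, key=lambda p: load.get(p, 0))  # least congested port
    flow_table[flow] = (port, now)                        # bind flow to the port
    return port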
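The rebinding decision above can be sketched as a single function: move a flow only when its current port is congested and the flow has been idle long enough that moving it is unlikely to reorder packets. Port names, the utilization model, and the threshold are assumptions for illustration.

```python
import zlib

PORTS = ["p1", "p2", "p3"]   # member ports of the LAG
flow_table = {}               # flow -> (bound egress port, last-seen time)
GAP_THRESHOLD = 0.5           # seconds since the previous packet of the flow
CONGESTED = 0.8               # utilization above which a port counts as congested

def select_egress(flow: str, load: dict, now: float) -> str:
    default = PORTS[zlib.crc32(flow.encode()) % len(PORTS)]
    port, last_seen = flow_table.get(flow, (default, None))
    # Rebind only if the current port is congested AND the inter-packet gap
    # exceeds the threshold, so in-flight packets are unlikely to be reordered.
    if load.get(port, 0) > CONGESTED and (last_seen is None or now - last_seen > GAP_THRESHOLD):
        port = min(PORTS, key=lambda p: load.get(p, 0))  # least congested port
    flow_table[flow] = (port, now)                        # bind flow to the port
    return port
```

The gap check is what makes the rebalancing safe: a long-idle flow can change ports without its packets arriving out of order.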
20140169166 | PACKET TRAIN GENERATION FOR ESTIMATING AVAILABLE NETWORK BANDWIDTH - Aspects of a high-precision packet train generation process are distributed among several distinct processing elements. In some embodiments a control processor configures a packet-processing unit with a packet train context that includes details such as the number of packets to be generated and the headers to be included in the packets. The packet-processing unit takes a packet to be used in the packet train and recirculates it a number of times, as specified by the packet train context. The recirculated packets, with the appropriate headers inserted, are forwarded to a traffic-shaping queue in queuing hardware. The traffic-shaping queue is configured to output the forwarded packets with a constant inter-packet gap. Thus, the generation of the multiple packets in the packet train is handled by the packet-processing unit, while the precise inter-packet timing is provided by the traffic-shaping queue in the queuing hardware. | 06-19-2014 |
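The division of labor above can be modeled in software: a small "context" tells the packet processor how many copies to generate and which header to insert, while a shaping stage assigns each copy a departure time a constant gap apart. Field names are illustrative, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class TrainContext:
    num_packets: int   # how many copies the recirculation loop produces
    header: bytes      # header inserted on each recirculated copy
    gap_ns: int        # constant inter-packet gap enforced by the shaping queue

def generate_train(seed_packet: bytes, ctx: TrainContext):
    # Packet-processing unit: recirculate the seed packet num_packets times,
    # inserting the configured header on each copy.
    train = [ctx.header + seed_packet for _ in range(ctx.num_packets)]
    # Traffic-shaping queue: schedule departures exactly gap_ns apart.
    departures = [i * ctx.gap_ns for i in range(ctx.num_packets)]
    return list(zip(departures, train))
```

In the hardware design the control processor only writes the context; neither packet replication nor precise timing burdens it.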
20140195545 | HIGH PERFORMANCE HASH-BASED LOOKUP FOR PACKET PROCESSING IN A COMMUNICATION NETWORK - The present invention relates to methods and apparatus for performing a lookup on a hash table stored in external memory. An index table stored in local memory is used to perform an enhanced lookup on the hash table stored in external memory. The index table stores signature patterns that are derived from the hash keys stored in the hash entries. Using the stored signature patterns, the packet processing node predicts which hash key is likely to store the desired data. The prediction may yield a false positive, but will never yield a false negative. Thus, the hash table is accessed only once during a data lookup. | 07-10-2014 |
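The signature-index idea above can be sketched as two tables: a small index in "local memory" holding short signatures, and the full hash table in "external memory" that is touched only when a signature matches. The 8-bit signature width and bucket count are illustrative choices, not taken from the patent.

```python
import hashlib

BUCKETS = 16
index_table = [[] for _ in range(BUCKETS)]  # "local memory": (signature, slot)
hash_table = []                              # "external memory": (key, value)

def _digest(key: str) -> int:
    return int(hashlib.sha256(key.encode()).hexdigest(), 16)

def insert(key: str, value) -> None:
    hash_table.append((key, value))
    h = _digest(key)
    index_table[(h >> 8) % BUCKETS].append((h & 0xFF, len(hash_table) - 1))

def lookup(key: str):
    h = _digest(key)
    for s, slot in index_table[(h >> 8) % BUCKETS]:
        if s == (h & 0xFF):              # signature predicts the matching entry
            k, v = hash_table[slot]      # the external-memory access
            if k == key:                 # a signature hit can be a false positive
                return v
    return None

# Example entries
insert("alpha", 1)
insert("beta", 2)
```

Because a key's signature is derived deterministically from the key itself, a present key always produces a signature hit (no false negatives); in the common collision-free case the external table is read exactly once.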
20140369204 | METHODS OF LOAD BALANCING USING PRIMARY AND STAND-BY ADDRESSES AND RELATED LOAD BALANCERS AND SERVERS - A first data packet of a data flow may be addressed to a primary address and include information for the data flow and a bucket ID may be computed based on the information. Responsive to the bucket ID mapping to first and second servers and the first data packet being addressed to the primary address, the first data packet may be transmitted to the first server. A second data packet may be received addressed to a stand-by address and including the information for the data flow, and a bucket ID may be computed based on the information with the bucket IDs for the first and second packets being the same. Responsive to the bucket ID for the second data packet mapping to first and second servers and the second data packet being addressed to the stand-by address, the second data packet may be transmitted to the second server. | 12-18-2014 |
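The primary/stand-by mechanism above reduces to: hash the flow information to a bucket, then let the destination address (primary vs. stand-by) pick the first or second server of that bucket's pair. Addresses and server names below are illustrative.

```python
import zlib

PRIMARY, STANDBY = "10.0.0.100", "10.0.0.101"   # load balancer's two addresses
bucket_map = {0: ("server-A", "server-B"), 1: ("server-C", "server-D")}

def bucket_id(flow_info: str) -> int:
    # Same flow information always yields the same bucket ID.
    return zlib.crc32(flow_info.encode()) % len(bucket_map)

def forward(dst_addr: str, flow_info: str) -> str:
    first, second = bucket_map[bucket_id(flow_info)]
    # The bucket is fixed by the flow; the address selects the server.
    return first if dst_addr == PRIMARY else second
```

Two packets of the same flow compute the same bucket ID, so switching the destination address deterministically redirects the flow's bucket from its first server to its second.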
20140372567 | METHODS OF FORWARDING DATA PACKETS USING TRANSIENT TABLES AND RELATED LOAD BALANCERS - Methods may be provided to forward data packets to a plurality of servers with each server being identified by a respective server identification (ID). A non-initial data packet of a data flow may be received, with the non-initial data packet including information for the data flow, and a bucket ID for the non-initial data packet may be computed as a function of the information for the data flow. Responsive to the bucket ID for the data packet mapping to first and second server identifications (IDs) of respective first and second servers and responsive to the non-initial data packet being a non-initial data packet for the data flow, the non-initial data packet may be transmitted to one of the first and second servers using one of the first and second server IDs based on a flow identification of the data flow being included in a transient table for the bucket ID. | 12-18-2014 |
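The transient-table decision above can be sketched for a single bucket in transition: non-initial packets of flows recorded in the transient table go to one server of the pair, all other ongoing flows to the other. Names and the single-bucket layout are illustrative, not from the patent.

```python
import zlib

NUM_BUCKETS = 1                                      # one bucket keeps the sketch small
bucket_servers = {0: ("new-server", "old-server")}   # bucket currently in transition
transient = {0: {"flow-X"}}   # per-bucket flow IDs known to live on the new server

def bucket_id(flow_info: str) -> int:
    return zlib.crc32(flow_info.encode()) % NUM_BUCKETS

def forward_non_initial(flow_id: str, flow_info: str) -> str:
    b = bucket_id(flow_info)
    new_srv, old_srv = bucket_servers[b]
    # Flows listed in the transient table started after the transition began,
    # so they live on the new server; older flows stay pinned to the old one.
    return new_srv if flow_id in transient.get(b, set()) else old_srv
```

The transient table is only needed while a bucket maps to two servers; once the old server's flows drain, the bucket collapses back to a single server ID.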
20140372616 | METHODS OF FORWARDING/RECEIVING DATA PACKETS USING UNICAST AND/OR MULTICAST COMMUNICATIONS AND RELATED LOAD BALANCERS AND SERVERS - Data packets may be forwarded to servers identified by respective server IDs. A mapping table includes bucket IDs identifying respective buckets. The mapping table maps: a first bucket ID to a first server ID as a current server ID; a second bucket ID to a second server ID as a current server ID; and the first bucket ID to a third server ID as an old server ID. A data packet of a data flow may be received, and a bucket ID may be computed for the data packet. Responsive to computing the first bucket ID as the bucket ID for the data flow and responsive to the mapping table mapping the first bucket ID to the first server ID as the current server ID and to the third server ID as the old server ID, the data packet may be transmitted to the first server and/or to the third server. | 12-18-2014
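The current/old mapping above can be sketched as follows: a bucket in transition yields two destinations (e.g. delivered via multicast) so in-flight flows on the old server survive, while a stable bucket yields one. Server names and the table layout are assumptions for illustration.

```python
import zlib

mapping = {
    0: {"current": "server-1", "old": "server-3"},  # bucket in transition
    1: {"current": "server-2"},                      # stable bucket
}

def destinations(flow_info: str):
    entry = mapping[zlib.crc32(flow_info.encode()) % len(mapping)]
    # Return the current server, plus the old server if the bucket still has one.
    return [s for s in (entry.get("current"), entry.get("old")) if s]
```

Dropping the "old" entry once the transition completes reverts that bucket to plain unicast forwarding.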
20150016255 | REMOVING LEAD FILTER FROM SERIAL MULTIPLE-STAGE FILTER USED TO DETECT LARGE FLOWS IN ORDER TO PURGE FLOWS FOR PROLONGED OPERATION - A network device to detect large flows includes a card to receive packets of flows. The device includes a large flow detection module including a serial multiple-stage filter module with a series of filter modules, including a lead filter module and a tail filter module. Each filter module includes counters. The serial filter module is to serially increment the counters to reflect the flows, and is to increment counters that correspond to flows of subsequent filter modules only after all counters that correspond to the flows of all prior filter modules have been incremented serially up to maximum values. The serial filter module is to detect flows that correspond to counters of the tail filter module that have been incremented up to maximum values as the large flows. The large flow detection module includes a lead filter removal module to remove the lead filter module from the start of the series. | 01-15-2015
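The serial staging above can be sketched in a few lines. Real multi-stage filters use hashed counter arrays shared across flows; the per-flow dicts, stage count, and saturation value here are simplifications for illustration.

```python
MAX = 3                              # saturation value for each counter
stages = [dict() for _ in range(3)]  # lead, middle, tail filter stages

def observe(flow: str) -> None:
    # Increment the flow's counter in the first stage that is not yet
    # saturated; later stages advance only after all prior stages saturate.
    for stage in stages:
        c = stage.get(flow, 0)
        if c < MAX:
            stage[flow] = c + 1
            return

def is_large(flow: str) -> bool:
    # A flow is detected as large once its tail-stage counter saturates.
    return stages[-1].get(flow, 0) >= MAX

def purge_lead() -> None:
    # Remove the lead filter from the start of the series (appending a fresh
    # stage at the tail) so stale counts age out during prolonged operation.
    stages.pop(0)
    stages.append({})
```

Removing the lead stage demotes every flow by one stage's worth of progress, which is what purges flows that stopped short of the large-flow threshold.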
20150078159 | PACKET TRAIN GENERATION FOR ESTIMATING AVAILABLE NETWORK BANDWIDTH - Aspects of a high-precision packet train generation process are distributed among several distinct processing elements. In some embodiments a control processor configures a packet-processing unit with a packet train context that includes details such as the number of packets to be generated and the headers to be included in the packets. The packet-processing unit takes a packet to be used in the packet train and recirculates it a number of times, as specified by the packet train context. The recirculated packets, with the appropriate headers inserted, are forwarded to a traffic-shaping queue in queuing hardware. The traffic-shaping queue is configured to output the forwarded packets with a constant inter-packet gap. Thus, the generation of the multiple packets in the packet train is handled by the packet-processing unit, while the precise inter-packet timing is provided by the traffic-shaping queue in the queuing hardware. | 03-19-2015 |