

Queuing arrangement

Subclass of:

370 - Multiplex communications

370351000 - PATHFINDING OR ROUTING

370389000 - Switching a message which includes an address header

Patent class list (only non-empty classes are listed)

Deeper subclasses:

Class / Patent application number | Description | Number of patent applications / Date published
370413000 Having both input and output queuing 19
370417000 Having output queuing only 19
370415000 Having input queuing only 12
Entries
Document | Title | Date
20090059941DYNAMIC DATA FILTERING - Networks, systems and methods for dynamically filtering data are disclosed. Streams of data may be buffered or stored in a queue when inbound rates exceed distribution or publication limitations. Inclusive messages in the queue may be removed, replaced or aggregated, reducing the number of messages to be published when distribution limitations are no longer exceeded.03-05-2009
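The aggregation step in this abstract can be sketched as a small coalescing queue: when inbound rates exceed the publish limit, a newer "inclusive" message replaces any queued message with the same key instead of piling up behind it. All names below are illustrative, not taken from the patent.

```python
from collections import OrderedDict

class CoalescingQueue:
    """Hypothetical sketch of inclusive-message coalescing: a newer
    update for a key supersedes the queued one, so the backlog shrinks
    while distribution limits are exceeded."""

    def __init__(self):
        self._pending = OrderedDict()  # key -> latest message

    def offer(self, key, message):
        # An inclusive update replaces any queued message for the key.
        self._pending.pop(key, None)
        self._pending[key] = message

    def publish_batch(self, limit):
        # Drain at most `limit` messages, oldest pending key first.
        batch = []
        while self._pending and len(batch) < limit:
            batch.append(self._pending.popitem(last=False))
        return batch
```

With this rule, three inbound updates for two keys publish as only two messages once the limit allows.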
20120163396QUEUE SPEED-UP BY USING MULTIPLE LINKED LISTS - One embodiment of the present invention provides a switch that includes a transmission mechanism configured to transmit frames stored in a queue, and a queue management mechanism configured to store frames associated with the queue in a number of sub-queues which allow frames in different sub-queues to be retrieved independently, thereby facilitating parallel processing of the frames stored in the sub-queues.06-28-2012
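The sub-queue idea above can be sketched as one logical queue spread across N independent lists, with a sequence number preserving logical order so independently drained sub-queues can be merged back. This is a toy illustration under assumed names, not the patented implementation.

```python
from collections import deque

class MultiListQueue:
    """Hypothetical sketch: frames of one logical queue are spread
    round-robin over sub-queues that can be drained independently
    (e.g. by parallel workers)."""

    def __init__(self, n_sub):
        self.subs = [deque() for _ in range(n_sub)]
        self._seq = 0

    def enqueue(self, frame):
        sub = self.subs[self._seq % len(self.subs)]
        sub.append((self._seq, frame))  # tag with logical order
        self._seq += 1

    def dequeue_sub(self, i):
        # Each sub-queue is independently retrievable.
        return self.subs[i].popleft()

def drain_in_order(q):
    # Merge whatever remains in the sub-queues back into logical order.
    items = [item for sub in q.subs for item in sub]
    return [frame for _, frame in sorted(items)]
```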
20080285579Digital Broadcast Network Best Effort Services - In accordance with an embodiment, a best-effort service is divided into packets for best-effort digital broadcast transmission. The packets are encapsulated with an encapsulation protocol that uses a packet order defining field. The encapsulated packets are inserted into an unused portion of a slot of a digital broadcast transmission frame. Then, the encapsulated packets are repeatedly inserted into the unused portion of the slot of the digital broadcast transmission frame in a packet-carousel fashion. And the transmission frame is digitally broadcast. In accordance with an embodiment, a digital broadcast transmission is received. Encapsulated packets that have been repeatedly broadcast in a packet-carousel fashion are accessed from a best-effort portion of a digital broadcast transmission frame slot. And a best-effort service is composed from the encapsulated packets by combining the encapsulated packets in an order based on a packet order defining field of the encapsulated packets.11-20-2008
20110188510DATA CONVERSION DEVICE AND DATA CONVERSION METHOD - A data conversion device includes a receiving unit that receives first data and second data, transmitting after a start of the first data, transmitted from the first device to the second device, a transmitting unit that transmits the received first data and second data to a third device, and a control unit that controls a time point of transmitting the second data from the transmitting unit to lengthen a time interval between transmission of the first data and second data from the transmitting unit than a first time interval between transmission of the first data from the transmitting unit and reception of response data to the first data by the receiving unit when the first time interval is longer than a time interval between the transmission of the first data and second data from the first device to the second device.08-04-2011
20080259947Method and System for High-Concurrency and Reduced Latency Queue Processing in Networks - A method and a system for controlling a plurality of queues of an input port in a switching or routing system. The method supports the regular request-grant protocol along with speculative transmission requests in an integrated fashion. Each regular scheduling request or speculative transmission request is stored in request order using references to minimize memory usage and operation count. Data packet arrival and speculation event triggers can be processed concurrently to reduce operation count and latency. The method supports data packet priorities using a unified linked list for request storage. A descriptor cache is used to hide linked list processing latency and allow central scheduler response processing with reduced latency.10-23-2008
20100008377QUEUE MANAGEMENT BASED ON MESSAGE AGE - A system for managing inbound messages in a server complex including one or more message consumers. The system includes a server configured to receive the inbound messages from a first peripheral device and to transmit messages to one or more of the plurality of message consumers. The system also includes an inbound message queue coupled to the server, the inbound message queue configured to store inbound message and discard at least one message when an age of the message exceeds an expiration time.01-14-2010
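The age-based discard described above is easy to sketch: on dequeue, any message older than the expiration time is dropped rather than delivered. The clock is injected to keep the example deterministic; names are assumptions, not from the patent.

```python
class AgingQueue:
    """Hypothetical sketch of age-based discard: messages whose age
    exceeds `expiry` are silently dropped on retrieval."""

    def __init__(self, expiry, clock):
        self.expiry = expiry
        self.clock = clock          # injected time source
        self.items = []             # (arrival_time, message)

    def put(self, message):
        self.items.append((self.clock(), message))

    def get(self):
        while self.items:
            arrived, msg = self.items.pop(0)
            if self.clock() - arrived <= self.expiry:
                return msg
            # else: message expired; discard and try the next one
        return None
```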
20090122804MEDIA ACCESS CONTROL APPARATUS AND METHOD FOR GUARANTEEING QUALITY OF SERVICE IN WIRELESS LAN - A media access control (MAC) apparatus and corresponding methods for guaranteeing quality-of-service in a wireless local area network (LAN) are presented. The MAC method includes the steps of extracting, performing, determining, a first transmitting step, and a second transmitting step. The extracting step includes extracting a user priority from a frame received from an upper layer and separately storing a voice frame and a non-voice frame according to an access category (AC). The performing step includes independently performing backoff operations for the voice frame and the non-voice frame. The determining step includes determining whether the backoff operations for the voice frame and the non-voice frame have simultaneously ended. The first transmitting step includes transmitting the voice frame having a higher priority first and performing the backoff operation for the non-voice frame if the backoff operations have simultaneously ended. The second transmitting step includes transmitting a frame whose backoff operation ends if the backoff operations have not simultaneously ended.05-14-2009
20100158031METHODS AND APPARATUS FOR TRANSMISSION OF GROUPS OF CELLS VIA A SWITCH FABRIC - In one embodiment, a method can include receiving at an egress schedule module a request to schedule transmission of a group of cells from an ingress queue through a switch fabric of a multi-stage switch. The ingress queue can be associated with an ingress stage of the multi-stage switch. The egress schedule module can be associated with an egress stage of the multi-stage switch. The method can also include determining, in response to the request, that an egress port at the egress stage of the multi-stage switch is available to transmit the group of cells from the multi-stage switch.06-24-2010
20130077636Time-Preserved Transmissions In Asynchronous Virtual Machine Replication - The method includes determining a timestamp corresponding to a received data packet associated with the virtual machine and releasing the data packet from a buffer based on the timestamp and a time another data packet is released from the buffer.03-28-2013
20100046536METHODS AND SYSTEMS FOR AGGREGATING ETHERNET COMMUNICATIONS - Methods and systems for aggregating Ethernet communications are disclosed. A disclosed apparatus includes a first Ethernet port to communicate with a second Ethernet port of a first device, a third Ethernet port to communicate with a fourth Ethernet port of a second device, a fifth Ethernet port to receive Ethernet frames, and a switching portion to direct nth ones of the frames to a first queue associated with the second port, direct n−1 frames preceding each of the nth ones of the frames to a second queue associated with the fourth port, and select a value of n based on a ratio of a first non-zero data rate of the first device for a first communication link in a first direction and a second non-zero data rate of the second device for a second communication link in the first direction, and based on a remaining capacity of the first queue.02-25-2010
20130089106SYSTEM AND METHOD FOR DYNAMIC SWITCHING OF A TRANSMIT QUEUE ASSOCIATED WITH A VIRTUAL MACHINE - Methods and systems for managing multiple transmit queues of a networking device of a host machine in a virtual machine system. The networking device includes multiple transmit queues that are used by multiple guests of the virtual machine system for the transmission of packets in a data communication. A hypervisor of the virtual machine system manages the switching from one or more transmit queues (i.e., old transmit queues) to one or more other queues (i.e., new transmit queues) by managing a flow of packets in the virtual machine system to maintain a proper sequence of packets and avoid a need to re-order the transmitted packets at a destination.04-11-2013
20130089107Method and Apparatus for Multimedia Queue Management - Methods and systems for a multimedia queue management solution that maintains graceful Quality of Experience (QoE) degradation are provided. The method selects a frame from all weighted queues based on a gradient function indicating a network performance rate change and a distortion rate caused by the frame and its related frames in the queue, drops the selected frame and all its related frames, and continues to drop similarly chosen frames until the network performance rate change caused by the dropped frames and their related frames meets a predetermined performance metric. A frame gradient is a distortion rate divided by a network performance rate change caused by the frame and its related frames, and a distortion rate is based on a sum of each individual frame distortion rate when the frame and its related frames are replaced by other frames derived from the remaining frames based on a replacement method.04-11-2013
20090190604Method and System for Dynamically Adjusting Acknowledgement Filtering for High-Latency Environments - A system and method for adjusting the filtering of acknowledgments (ACKS) in a TCP environment. State variables are used to keep track of, first, the number of times an ACK has been promoted into (a variable which can be stored on a per-packet basis along with the session ID), and second, the number of times an ACK is allowed to be promoted into (which can be global, or can be stored per-session).07-30-2009
20100040077METHOD, DEVICE AND SOFTWARE APPLICATION FOR SCHEDULING THE TRANSMISSION OF DATA STREAM PACKETS - The invention relates to a method for transmitting over a data communication network data packets of a data stream to a receiving device, characterized in that it comprises the steps of: selecting a data packet from a buffer memory containing data packets to be transmitted (…)02-18-2010
20100040076NETWORK DEVICE AND METHOD FOR PROCESSING DATA PACKETS - A network device for processing data packets receives data packets from networks connected to the network device, searches a rule table for data packet matching conditions corresponding to the data packets, and transmits the data packets to corresponding data packet targets. The network device further retrieves matching actions corresponding to the data packets, transmits the data packets and the corresponding matching actions to the user daemon thread module, and further transmits the data packets to corresponding daemon threads according to the corresponding matching actions.02-18-2010
20100027556HYBRID COMMUNICATIONS LINK - A hybrid communications link includes a slow, reliable communications link and a fast unreliable communications link. Communication via the hybrid communications link selectively uses both the slow, reliable communications link and the fast, unreliable communications link.02-04-2010
20090034550METHOD AND SYSTEM FOR ROUTING FIBRE CHANNEL FRAMES - A method and system for transmitting frames using a fibre channel switch element is provided. The switch element includes a port having a receive segment and a transmit segment, wherein the fibre channel switch element determines if a port link has been reset; determines if a flush state has been enabled for the port; and removes frames from a buffer, if the flush state has been enabled for the port. For a flush state operation, frames are removed from a receive buffer of the fibre channel port as if it is a typical fibre channel frame transfer. The removed frames are sent to a processor for analysis. The method also includes, setting a control bit for activating frame removal from the transmit buffer; and diverting frames that are waiting in the transmit buffer and have not been able to move from the transmit buffer.02-05-2009
20100091783METHOD AND SYSTEM FOR WEIGHTED FAIR QUEUING - A system for scheduling data for transmission in a communication network includes a credit distributor and a transmit selector. The communication network includes a plurality of children. The transmit selector is communicatively coupled to the credit distributor. The credit distributor operates to grant credits to at least one of eligible children and children having a negative credit count. Each credit is redeemable for data transmission. The credit distributor further operates to affect fairness between children with ratios of granted credits, maintain a credit balance representing a total amount of undistributed credits available, and deduct the granted credits from the credit balance. The transmit selector operates to select at least one eligible and enabled child for dequeuing, bias selection of the eligible and enabled child to an eligible and enabled child with positive credits, and add credits to the credit balance corresponding to an amount of data selected for dequeuing.04-15-2010
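The credit distributor / transmit selector pairing above can be sketched in a few lines: credits are granted in the ratio of per-child weights and redeemed byte-for-byte on transmission, with selection biased toward children holding positive credits. This is a minimal sketch under assumed names, not the patented design.

```python
class CreditScheduler:
    """Hypothetical sketch of credit-based weighted fair queuing:
    grant credits in weight ratio, redeem them for bytes sent."""

    def __init__(self, weights):
        self.weights = weights                   # child -> weight
        self.credits = {c: 0 for c in weights}   # child -> credit count
        self.queues = {c: [] for c in weights}   # child -> packet sizes

    def enqueue(self, child, size):
        self.queues[child].append(size)

    def grant(self, total):
        # Distribute `total` credits in the ratio of the weights.
        wsum = sum(self.weights.values())
        for c, w in self.weights.items():
            self.credits[c] += total * w // wsum

    def select(self):
        # Bias selection toward eligible children with positive credits.
        eligible = [c for c in self.queues if self.queues[c]]
        positive = [c for c in eligible if self.credits[c] > 0]
        pool = positive or eligible
        if not pool:
            return None
        child = max(pool, key=lambda c: self.credits[c])
        size = self.queues[child].pop(0)
        self.credits[child] -= size  # redeem credits for bytes sent
        return child
```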
20090316711PACKET SWITCHING - In an embodiment, an apparatus is provided that may include an integrated circuit including switch circuitry to determine, at least in part, an action to be executed involving a packet. This determination may be based, at least in part, upon flow information determined, at least in part, from the packet, and packet processing policy information. The circuitry may examine the policy information to determine whether a previously-established packet processing policy has been established that corresponds, at least in part, to the flow information. If the circuitry determines, at least in part, that the policy has not been established and the packet is a first packet in a flow corresponding at least in part to the flow information, the switch circuitry may request that at least one switch control program module establish, at least in part, a new packet processing policy corresponding, at least in part, to the flow information.12-24-2009
20090304017APPARATUS AND METHOD FOR HIGH-SPEED PACKET ROUTING SYSTEM - An apparatus and method for packet routing in a high-speed packet routing system. The apparatus includes an input unit and a control unit. The input unit temporarily stores an input packet and outputs the temporarily stored input packet to an output port determined by a previous router. The control unit determines an output port of a next router for the input packet.12-10-2009
20090304016METHOD AND SYSTEM FOR EFFICIENTLY USING BUFFER SPACE - A method and system for transferring iSCSI protocol data units (“PDUs”) to a host system is provided. The system includes a host bus adapter with a TCP/IP offload engine. The HBA includes a direct memory access engine operationally coupled to a pool of small buffers and a pool of large buffers, wherein an incoming PDU size is compared to the size of a small buffer and, if the PDU fits in the small buffer, the PDU is placed in the small buffer. Otherwise, the incoming PDU size is compared to a large buffer size and, if the incoming PDU size is less than the large buffer size, the incoming PDU is placed in the large buffer. If the incoming PDU size is greater than a large buffer, then the incoming PDU is placed in more than one large buffer and a pointer to a list of large buffers storing the incoming PDU is placed in a small buffer.12-10-2009
20090304015Method and devices for installing packet filters in a data transmission - A method is described for associating a data packet (DP) with a packet bearer (PB) in a user equipment (UE…)12-10-2009
20090304014METHOD AND APPARATUS FOR LOCAL ADAPTIVE PROVISIONING AT A NODE12-10-2009
20120219010Port Packet Queuing - A port queue includes a first memory portion having a first memory access time and a second memory portion having a second memory access time. The first memory portion includes a cache row. The cache row includes a plurality of queue entries. A packet pointer is enqueued in the port queue by writing the packet pointer in a queue entry in the cache row in the first memory. The cache row is transferred to a packet vector in the second memory. A packet pointer is dequeued from the port queue by reading a queue entry from the packet vector stored in the second memory.08-30-2012
20130070778WEIGHTED DIFFERENTIAL SCHEDULER - A method for managing packets, including: identifying a first plurality of packets from a first packet source having a first weight; identifying a second plurality of packets from a second packet source having a second weight; obtaining a first weight ratio based on the first weight and the second weight; obtaining an error threshold and a first error value corresponding to the second packet source, where the error threshold exceeds the first error value; forwarding a first packet from the first packet source in response to the error threshold exceeding the first error value; incrementing the first error value by the first weight ratio; forwarding a first packet from the second packet source, after incrementing the first error value and in response to the first error value exceeding the error threshold; and decrementing the first error value.03-21-2013
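The error-accumulator rule in this abstract resembles a Bresenham-style scheduler. The sketch below uses an integer-scaled variant (error threshold equal to the first weight, increment equal to the second weight) to avoid floating-point drift; that scaling, and the decrement amount, are assumptions the abstract leaves open.

```python
def weighted_schedule(get1, get2, w1, w2, n):
    """Hypothetical sketch of error-threshold weighted scheduling:
    forward from source 1 while the error value is under the
    threshold, otherwise forward from source 2 and decrement."""
    error, out = 0, []
    for _ in range(n):
        if error < w1:            # error threshold (assumed = w1)
            out.append(get1())    # forward from the first source
            error += w2           # increment by the weight ratio step
        else:
            out.append(get2())    # threshold reached: second source
            error -= w1           # decrement the error value
    return out
```

With weights 3:1 this interleaves three packets from the first source for every one from the second.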
20130070777Reordering Network Traffic - Impairment units and methods for impairing network traffic. An impairment unit may receive packets from a network and determine an impairment class of each packet from a plurality of impairment classes. Input logic may determine whether or not each received packet will be reordered. A received packet not to be reordered may be stored in a normal traffic FIFO queue uniquely associated with the impairment class of the received packet. A received packet to be reordered may be stored in a reorder traffic FIFO queue uniquely associated with the impairment class of the received packet. Output logic may select a sequence of packets from head ends of the plurality of normal traffic FIFO queues and the plurality of reorder traffic FIFO queues to provide outgoing traffic. A transmitter may transmit the outgoing traffic to the network.03-21-2013
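A per-class pair of FIFOs, as described above, can be sketched simply: packets flagged for reordering are held in a reorder FIFO and released after the normal traffic, producing out-of-order output. The release rule used here (hold until drained) is an assumption; the patent's output logic is more general.

```python
from collections import deque

class ReorderImpairer:
    """Hypothetical sketch of one impairment class: a normal FIFO and
    a reorder FIFO whose packets are released late."""

    def __init__(self):
        self.normal = deque()
        self.reorder = deque()

    def receive(self, pkt, do_reorder):
        (self.reorder if do_reorder else self.normal).append(pkt)

    def transmit(self):
        # Drain normal traffic first, then the held-back packets.
        out = list(self.normal) + list(self.reorder)
        self.normal.clear()
        self.reorder.clear()
        return out
```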
20130070779Interleaving Data Packets In A Packet-Based Communication System - In one embodiment, the present invention includes a method for receiving a first portion of a first packet at a first agent and determining whether the first portion is an interleaved portion based on a value of an interleave indicator. The interleave indicator may be sent as part of the first portion. In such manner, interleaved packets may be sent within transmission of another packet, such as a lengthy data packet, providing improved processing capabilities. Other embodiments are described and claimed.03-21-2013
20100091785PACKET PROCESSING APPARATUS - A packet processing apparatus includes a packet buffer with a queue for storing packets. An actual queue length/position discriminator acquires, at every sampling period, the latest actual queue length indicating the occupancy status of the queue, determines the positional relationship of the actual queue length to a random early detection interval, and outputs the positional relationship as position information. A discard probability computation processor calculates, at every sampling period, a packet discard probability based on the position information. A packet discard processor discards, at every sampling period and in accordance with the discard probability, packets that are not yet stored in the queue. If it is judged from the position information that the actual queue length is within the random early detection interval, the discard probability computation processor calculates an average queue length, and then calculates the discard probability from the ratio of a discard target to a reception target.04-15-2010
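The "random early detection interval" logic above follows the classic RED shape: no drops below the lower bound, certain drops above the upper bound, and a linear ramp in between. The sketch below shows that ramp; parameter names are conventional RED names, not taken from the patent.

```python
import random

def red_drop_probability(avg_qlen, min_th, max_th, max_p):
    """Hypothetical sketch of the RED interval: probability is 0 below
    min_th, 1 at or above max_th, and ramps linearly up to max_p
    inside the interval."""
    if avg_qlen < min_th:
        return 0.0
    if avg_qlen >= max_th:
        return 1.0
    return max_p * (avg_qlen - min_th) / (max_th - min_th)

def should_discard(avg_qlen, min_th, max_th, max_p, rng=random.random):
    # Discard an arriving packet with the computed probability.
    return rng() < red_drop_probability(avg_qlen, min_th, max_th, max_p)
```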
20130058357DISTRIBUTED NETWORK VIRTUALIZATION APPARATUS AND METHOD - Some embodiments provide a distributed control system for controlling managed switching elements of a network. The distributed control system comprises a first network virtualizer for converting a first set of input logical forwarding plane data to a first set of output physical control plane data. It also includes a second network virtualizer for converting a second set of input logical forwarding plane data to a second set of output physical control plane data. In some embodiments, the physical control plane data is translated into physical forwarding behaviors that direct the forwarding of data by the managed switching elements.03-07-2013
20130058356METHOD AND APPARATUS FOR USING A NETWORK INFORMATION BASE TO CONTROL A PLURALITY OF SHARED NETWORK INFRASTRUCTURE SWITCHING ELEMENTS - Some embodiments provide a program for managing several switching elements. The program receives, at a network information base (NIB) data structure that stores data for managing the several switching elements, a request to modify data stored in at least one particular switching element. The program modifies at least a first set of data tuples stored in the NIB for managing the particular switching element. The program sends a request to the particular switching element to modify at least a second set of data tuples for managing the particular switching element's operation.03-07-2013
20130058358NETWORK CONTROL APPARATUS AND METHOD WITH QUALITY OF SERVICE CONTROLS - A control application of some embodiments allows a user to enable a logical switching element for Quality of Service (QoS). QoS in some embodiments is a technique to apply to a particular logical port of a logical switching element such that the switching element can guarantee a certain level of performance to network data that a machine sends through the particular logical port. The control application of some embodiments receives user inputs that specify a particular logical switch to enable for QoS. The control application may additionally receive performance constraints data. The control application in some embodiments formats the user inputs into logical control plane data. The control application in some embodiments then converts the logical control plane data into logical forwarding data that specify QoS functions.03-07-2013
20090268747COMMUNICATION APPARATUS - To provide a communication apparatus which is capable of voluntarily controlling, according to its own reception capability, data transmission traffic, while reducing the burden for the control. The communication apparatus includes: a communication unit (…)10-29-2009
20090238199WIDEBAND UPSTREAM PROTOCOL - Some embodiments of the present invention may include a method to stream packets into a queue for an upstream transmission, send multiple requests for upstream bandwidth to transmit data from the queue and receiving multiple grants to transmit data, and transmit data from the queue to the upstream as grants are received. Another embodiment may provide a network comprising a cable modem termination system (CMTS), and a cable modem wherein the cable modem may transmit data to the CMTS with a streaming protocol that sends multiple requests for upstream bandwidth to transmit data and receives multiple grants to transmit data, and transmits data to the CMTS as grants are received.09-24-2009
20090238198Packing Switching System and Method - A packing switching system and method is disclosed. A pipelined processor processes image pixels to generate a number of bit streams. Subsequently, a packing unit packs the bit streams into packets in a way that the bit stream or streams with minimum pixel order number are packed before other bit stream or streams.09-24-2009
20090238197Ethernet Virtualization Using Assisted Frame Correction - A method for Ethernet virtualization using assisted frame correction. The method comprises receiving at a host adapter data packets from a network, storing the received data packets in host memory, storing the received data packets in a hardware queue located on the host adapter, setting a status indicator reflecting the status of the data packets based on results of the checking, and sending the status indicator to the host memory.09-24-2009
20090016370Creating a Telecommunications Channel from Multiple Channels that Have Differing Signal-Quality Guarantees - A technique is disclosed that enables the adaptive pooling of M transmission paths that offer a first signal-quality guarantee, or no guarantee at all, with N transmission paths that offer a second signal-quality guarantee. Through this adaptive pooling, a telecommunications channel is created that meets the quality of service or waveform quality required for a packet stream being transmitted, while not excessively exceeding the required quality. The technique adaptively recaptures any excess signal quality from one path and uses it to boost the quality of an inferior path. A node of the illustrative embodiment selects the paths to handle a current segment of source packets, based on one or more parameters that are disclosed herein. The node adapts to changing conditions by adjusting the transmission characteristics for each successive segment of packets from the source packet stream.01-15-2009
20080310439COMMUNICATING PRIORITIZED MESSAGES TO A DESTINATION QUEUE FROM MULTIPLE SOURCE QUEUES USING SOURCE-QUEUE-SPECIFIC PRIORITY VALUES - There is disclosed a method, apparatus and computer program for communicating messages between a first messaging system and a second messaging system. The messaging system comprises a set of source queues with each source queue owning messages retrievable in priority order. It is determined that a message should be transferred from the first messaging system to the second messaging system. A source queue is selected which contains a message having at least an equal highest priority when compared with messages on the source queues. A message having the at least equal highest priority from the selected source queue of the first messaging system is then transferred to a target queue at the second messaging system.12-18-2008
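The selection step above (pick the source queue whose head message has the joint-highest priority) can be sketched directly. Each source queue keeps its messages in priority order; a transfer pops from whichever queue currently has the best head. Names are illustrative.

```python
class PrioritySourceQueues:
    """Hypothetical sketch: several source queues, each holding
    (priority, message) pairs in priority order; transfers pick the
    queue with the highest-priority head message."""

    def __init__(self):
        self.queues = {}  # name -> list of (priority, message)

    def put(self, qname, priority, message):
        q = self.queues.setdefault(qname, [])
        q.append((priority, message))
        q.sort(key=lambda pm: -pm[0])  # keep priority order per queue

    def transfer_one(self):
        # Select a source queue with an at-least-equal-highest head.
        candidates = {n: q for n, q in self.queues.items() if q}
        if not candidates:
            return None
        best = max(candidates, key=lambda n: candidates[n][0][0])
        return candidates[best].pop(0)
```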
20100172364FLEXIBLE QUEUE AND STREAM MAPPING SYSTEMS AND METHODS - A system processes data corresponding to multiple data streams. The system includes multiple queues that store the data, stream-to-queue logic, dequeue logic, and queue-to-stream logic. Each of the queues is assigned to one of the streams based on a predefined queue-to-stream assignment. The stream-to-queue logic identifies which of the queues has data to be processed. The dequeue logic processes data in the identified queues. The queue-to-stream logic identifies which of the streams correspond to the identified queues.07-08-2010
20110096790SIGNAL PROCESSING CIRCUIT, INTERFACE UNIT, FRAME TRANSMISSION APPARATUS, AND SEGMENT DATA READING METHOD - A signal processing circuit for controlling reading of segment data from a buffer in which a plurality of segment data generated by dividing a frame and received via a plurality of switches which direct each of the segment data to a designated destination are stored, comprises: a start detecting unit which detects a starting segment representing the first transmitted segment data to the switch among the segment data received after the buffer has emptied; a transmission time acquiring unit which acquires a transmission time at which the starting segment was transmitted to the switch; and a read timing control unit which determines, based on the transmission time, a read timing for reading the segment data from the buffer.04-28-2011
20080247409Queuing and Scheduling Architecture Using Both Internal and External Packet Memory for Network Appliances - Enhanced memory management schemes are presented to extend the flexibility of using either internal or external packet memory within the same network device. In the proposed schemes, the user can choose either static or dynamic schemes, both or which are capable of using both internal and external memory, depending on the deployment scenario and applications. This gives the user flexible choices when building unified wired and wireless networks that are either low-cost or feature-rich, or a combination of both. A method for buffering packets in a network device, and a network device including processing logic capable of performing the method are presented. The method includes initializing a plurality of output queues, determining to which of the plurality of output queues a packet arriving at the network device is destined, storing the packet in one or more buffers, where the one or more buffers is selected from a packet memory group including an internal packet memory and an external packet memory, and enqueuing the one or more buffers to the destined output queue.10-09-2008
20100128735PROCESSING OF PARTIAL FRAMES AND PARTIAL SUPERFRAMES - A system determines when to send out a partial data unit or when to complete a data unit before sending it. The system may identify a data unit, determine whether the data unit is a partial data unit, increase a partial count when the data unit is the partial data unit, determine whether the partial count is greater than a threshold, and fill a subsequent data unit with data to form a complete data unit when the partial count is greater than the threshold. The system may, alternatively or additionally, determine a schedule of flush events for a queue, identify whether the queue includes information associated with a partial data unit, identify whether the queue should be flushed based on the schedule of flush events and whether the queue includes information associated with the partial data unit, wait for additional data when the queue should not be flushed, and send out the partial data unit when the queue should be flushed.05-27-2010
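The partial-count rule in this abstract can be sketched as a small gate: partial data units pass through until a threshold number of them have gone out in a row, after which the next one is padded to a complete unit. Parameter names and the zero-byte fill are assumptions for illustration.

```python
class PartialFrameGate:
    """Hypothetical sketch: allow partial units through until the
    partial count exceeds a threshold, then fill the next unit out
    to full size before sending it."""

    def __init__(self, full_size, threshold, fill=b"\x00"):
        self.full_size = full_size
        self.threshold = threshold
        self.fill = fill
        self.partial_count = 0

    def emit(self, unit):
        if len(unit) < self.full_size:
            self.partial_count += 1
            if self.partial_count > self.threshold:
                # Too many partials in a row: complete this unit.
                unit = unit + self.fill * (self.full_size - len(unit))
                self.partial_count = 0
        else:
            self.partial_count = 0
        return unit
```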
20100189122EFFICIENTLY STORING TRANSPORT STREAMS - Described are computer-based methods and apparatuses, including computer program products, for efficiently storing transport streams. A first sequence of one or more packets associated with the first transport stream is received, the first sequence comprising one or more data packets. A storage packet is generated by selecting one or more packets from the first sequence, the storage packet comprising a packet header and the one or more data packets. One or more null packet insertion locations are identified in a second sequence of one or more packets associated with a second transport stream. Null packet insertion information is generated based on the one or more null packet insertion locations, the information including data indicative of a reconstruction parameter related to reconstructing the second sequence from the storage packet by inserting one or more null packets that are not stored in the storage packet, wherein the packet header includes the null packet insertion information. The storage packet is stored.07-29-2010
20090086746DIRECT MESSAGING IN DISTRIBUTED MEMORY SYSTEMS - A system and method for sending a cache line of data in a single message is described. An instruction issued by a processor in a multiprocessor system includes an address of a message payload and an address of a destination. Each address is translated to a physical address and sent to a scalability interface associated with the processor and in communication with a system interconnect. Upon translation the payload of the instruction is written to the scalability interface and thereafter communicated to the destination. According to one embodiment, the translation of the payload address is accomplished by the processor while in another embodiment the translation occurs at the scalability interface.04-02-2009
20090022171INTERRUPT COALESCING SCHEME FOR HIGH THROUGHPUT TCP OFFLOAD ENGINE - An interrupt coalescing scheme for a high-throughput TCP offload engine, and a method thereof, are disclosed. An interrupt descriptor queue is used, in which the TCP offload engine saves TCP connection information and interrupt information in an interrupt event descriptor per interrupt. Meanwhile, the software processes interrupts by reading interrupt event descriptors asynchronously. The software may process multiple interrupt event descriptors in one interrupt context.01-22-2009
20110286469Packet retransmission control system, method and program - A lower layer retransmission control unit performs the following processing. When transmitting a transmission packet, it gives the packet a sequence number indicating the transmission order. From a receiving device that receives the transmission packet as a reception packet, it receives an ACK packet indicating the sequence number of the reception packet. It refers to the sequence number of the received ACK packet to determine whether or not ACK packets are received in sequence-number order. After transmitting first to third transmission packets, if it receives a first ACK packet and then a third ACK packet without receiving a second ACK packet, it performs fast retransmission control processing. Specifically, it determines whether or not the second ACK packet is received before a fast retransmission determination period passes after the reception time of the third ACK packet. If the second ACK packet is not received within the fast retransmission determination period, it retransmits the second transmission packet.11-24-2011
20110286468PACKET BUFFERING DEVICE AND PACKET DISCARDING METHOD - A packet buffering device includes: a queue for temporarily holding an arriving packet; a residence time predicting unit which predicts a length of time during which the arriving packet will reside in the queue; and a packet discarding unit which discards the arriving packet when the length of time predicted by the residence time predicting unit exceeds a first reference value.11-24-2011
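The discard test described in the entry above — predict how long an arriving packet would reside in the queue and drop it when the prediction exceeds a reference value — can be sketched as follows. The class name, the constant-drain-rate assumption, and the prediction formula (queued bytes divided by drain rate) are illustrative, not taken from the application.

```python
from collections import deque

class PredictiveDropQueue:
    """Drops arriving packets whose predicted residence time is too long."""

    def __init__(self, drain_rate_bps, max_residence_s):
        self.queue = deque()
        self.queued_bytes = 0
        self.drain_rate_bps = drain_rate_bps    # assumed constant drain rate
        self.max_residence_s = max_residence_s  # the "first reference value"

    def predicted_residence(self):
        # Time an arriving packet would wait behind everything queued now.
        return (self.queued_bytes * 8) / self.drain_rate_bps

    def enqueue(self, packet: bytes) -> bool:
        if self.predicted_residence() > self.max_residence_s:
            return False                        # discard on arrival
        self.queue.append(packet)
        self.queued_bytes += len(packet)
        return True

    def dequeue(self) -> bytes:
        packet = self.queue.popleft()
        self.queued_bytes -= len(packet)
        return packet
```

With an 8 kbit/s drain rate and a 1-second bound, a third packet arriving behind 1200 queued bytes (1.2 s predicted residence) is discarded while earlier arrivals are accepted.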
20090201942METHOD AND APPARATUS FOR MARKING AND SCHEDULING PACKETS FOR TRANSMISSION - A method and system for profile-marking and scheduling of packets are disclosed. Using a dual-rate scheduler, the profile state of a packet being scheduled for transmission by a flow traffic descriptor is determined based on the traffic rate of the flow traffic descriptor, which is associated with the queue that the packet belongs to. The profile state of the packet is marked prior to the transmission of the packet.08-13-2009
20090003370SYSTEM AND METHOD FOR IMPROVED PERFORMANCE BY A DVB-H RECEIVER - A system and method for improved performance by a DVB-H receiver is described that allows good Internet Protocol (IP) packets in a Multiprotocol Encapsulation-Forward Error Correction (MPE-FEC) frame to be salvaged even when there are other IP packets in the frame that may have bytes in error after the performance of MPE-FEC operations. To achieve this, the system and method provides a means for ascertaining where IP packets loaded into a memory begin and end in a manner that can be relied upon even when individual bytes of the IP packets, such as certain bytes of the IP packet header used to determine total packet length, may be in error.01-01-2009
20090141732METHODS AND APPARATUS FOR DIFFERENTIATED SERVICES OVER A PACKET-BASED NETWORK - Methods and apparatus for the provision of differentiated services in a packet-based network may be provided in a communications device such as a switch or router having input ports and output ports. Each output port is associated with a set of configurable queues that store incoming data packets from one or more input ports. A scheduling mechanism retrieves data packets from individual queues in accord with a specified configuration, providing both pure priority and proportionate de-queuing to achieve a guaranteed QoS over a connectionless network.06-04-2009
20100034212METHODS AND APPARATUS FOR PROVIDING MODIFIED TIMESTAMPS IN A COMMUNICATION SYSTEM - Methods and apparatus for providing modified timestamps in a communication system. In an aspect, a method includes receiving one or more packets associated with a selected destination, computing an average relative delay associated with each packet, determining a modified timestamp associated with each packet based on the average relative delay associated with each packet, and outputting the one or more packets and their associated modified timestamps. In an aspect, an apparatus is provided for generating modified timestamps. The apparatus includes a packet receiver configured to receive one or more packets associated with a selected destination and processing logic configured to compute an average relative delay associated with each packet, determine a modified timestamp associated with each packet based on the average relative delay associated with each packet, and output the one or more packets and their associated modified timestamps.02-11-2010
20090168791TRANSMISSION DEVICE AND RECEPTION DEVICE - A transmission device (07-02-2009
20100091782Method and System for Controlling a Delay of Packet Processing Using Loop Paths - A method and system for introducing controlled delay of packet processing at a network device using one or more delay loop paths (DLPs). For each packet received at the network device, a determination will be made as to whether or not packet processing should be delayed. If delay is chosen, a DLP will be selected according to a desired delay for the packet. The desired delay value is used to determine a time value and inserts the time value in the DLP ahead of the packet. Upon completion of a DLP delay, a packet will be returned for processing, an additional delay, or some other action. One or more DLPs may be enabled with packet queues, and may be used advantageously by devices, for which in-order processing of packets may be desired or required.04-15-2010
20100091784FILTERING OF REDUNDANT FRAMES IN A NETWORK NODE - A method of filtering redundant frames including a MAC source address, a frame ID and a CRC value, in a network node with two ports each including a transmitting device and a receiving device, is provided. The transmitting device includes a transmission list in which frames to be transmitted are stored. The receiving device includes a receiving memory for storing a received frame. For filtering redundant frames in a network node, a first frame is received by one of the two ports. After reception of the MAC source address and the frame ID of the first frame in the transmission list of the port, a second frame with the same MAC source address and frame ID is sought. If the second frame is present, the first frame is neither forwarded to a local application nor forwarded to send to other ports, and the second frame is not sent.04-15-2010
20090003369Method and receiver for determining a jitter buffer level - The invention relates to a method and a receiver having control logic means for determining a target packet level of a jitter buffer adapted to receive packets with digitized signal samples, which packets are subject to delay jitter, from a packet data network. According to the invention, the jitter buffer is made adaptive to current network conditions, i.e., the nature and magnitude of the jitter observed by the receiver, by collecting statistical measures that describe these conditions. The target buffer level is determined with regard to the effect of packet losses in terms of duration of the discontinued playback of the true signal. This effect is derived from statistical measures of the network conditions as perceived by the receiving side and as reflected by a probability mass function which is continuously updated with packet inter-arrival times. The target buffer level is the result of minimization of a cost function which weights the internal buffer delay and an expected length of buffer underflow.01-01-2009
20080267206CAM BASED SYSTEM AND METHOD FOR RE-SEQUENCING DATA PACKETS - An embodiment of the system operates in a parallel packet switch architecture having at least one egress adapter arranged to receive data packets issued from a plurality of ingress adapters and switched through a plurality of independent switching planes. Each received data packet belongs to one sequence of data packets among a plurality of sequences where the data packets are numbered with a packet sequence number (PSN) assigned according to at least a priority level of the data packet. Each data packet received by the at least one egress adapter has a source identifier to identify the ingress adapter from which it is issued. The system for restoring the sequences of the received data packets operates within the egress adapter and comprises a buffer for temporarily storing each received data packet at an allocated packet buffer location, a controller, and determination means coupled to storing means and extracting means.10-30-2008
20080267204Compact Load Balanced Switching Structures for Packet Based Communication Networks - A switching node is disclosed for the routing of packetized data employing a multi-stage packet based routing fabric combined with a plurality of memory switches employing memory queues. The switching node allowing reduced throughput delays, dynamic provisioning of bandwidth and packet prioritization.10-30-2008
20100128736PACKET PROCESSING APPARATUS, NETWORK EQUIPMENT AND PACKET PROCESSING METHOD - A packet processing apparatus includes a static pattern matcher comparing pattern information defining a packet to be filtered with a value regarding at least a part of a received packet, the pattern information being stored by a pattern information manager. A frequency calculator calculates the frequency of matching by the static pattern matcher. A dynamic pattern matcher compares the frequency with a preset comparison value, and a processing determiner determines the processing to apply to the received packet based upon the dynamic pattern match.05-27-2010
20080291935Methods, Systems, and Computer Program Products for Selectively Discarding Packets - A method, system, and computer program product are provided for selectively discarding packets in a network device. The method includes receiving an upstream bandwidth saturation indicator for a queue in the network device, and identifying one or more codecs employed in packets in the queue when the upstream bandwidth saturation indicator indicates saturation. The method further includes determining a packet discarding policy based on the one or more codecs, and discarding packets in accordance with the packet discarding policy.11-27-2008
20080291936TEMPORARY BLOCK FLOW CONTROL IN WIRELESS COMMUNICATION DEVICE - A wireless communication device (11-27-2008
20080291933METHOD AND APPARATUS FOR PROCESSING PACKETS - A computer implemented method, apparatus, and computer usable program code for processing packets for transmission. A set of interface specific network buffers is identified from a plurality of buffers containing data for a packet received for transmission. A data structure describing the set of interface specific network buffers within the plurality of buffers is created, wherein a section in the data structure for an interface specific network buffer in the set of interface specific network buffers includes information about a piece of data in interface specific network buffer, wherein the data structure is used to process the packet for transmission.11-27-2008
20080205423Method and Apparatus for Communicating Variable-Sized Packets in a Communications Network - Methods and apparatus for managing a packet buffer memory are disclosed. One method includes providing a memory arranged as a plurality of cells identified by cell id, each cell having a granularity of k individual memory addresses. A cell list indexed by cell id is provided. The cell list includes a free cell list identifying cells available for storing data as a linked list, wherein a beginning of the free cell list identifies a starting cell id. A cell list indexed by cell id is provided. The free cell list includes cells available for storing data as a linked list, wherein a beginning of the free cell list identifies a starting cell id. Each portion of a packet is stored in cells indicated by and in a sequence indicated by traversing the cell list beginning with the starting cell id.08-28-2008
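The cell-list arrangement in the entry above — a memory carved into fixed-granularity cells, a cell list indexed by cell id whose free portion forms a linked list, and packets stored by traversing from a starting cell id — could be sketched as below. The class and method names are illustrative; note how the same cell list serves both as the free list and as each stored packet's chain of next-cell pointers.

```python
class CellBuffer:
    """Packet buffer built from fixed-size cells chained through a cell list."""

    def __init__(self, num_cells, cell_size):
        self.cell_size = cell_size
        self.memory = [None] * num_cells
        # cell_list[i] = id of the cell following cell i (None at the tail).
        # Initially every cell is free, chained 0 -> 1 -> ... -> None.
        self.cell_list = [i + 1 for i in range(num_cells - 1)] + [None]
        self.free_head = 0                 # starting cell id of the free list

    def store_packet(self, packet: bytes):
        first = prev = None
        # Carve the packet into cell-sized portions, each stored in a cell
        # popped off the free list and linked to the previous one.
        for off in range(0, len(packet), self.cell_size):
            cell = self.free_head
            if cell is None:
                raise MemoryError("out of cells")
            self.free_head = self.cell_list[cell]
            self.memory[cell] = packet[off:off + self.cell_size]
            if prev is None:
                first = cell
            else:
                self.cell_list[prev] = cell
            prev = cell
        self.cell_list[prev] = None
        return first                       # traverse from here to read back

    def read_packet(self, first) -> bytes:
        parts, cell = [], first
        while cell is not None:
            parts.append(self.memory[cell])
            cell = self.cell_list[cell]
        return b"".join(parts)
```

A real implementation would also return a packet's cells to the free list after transmission; that step is omitted here for brevity.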
20100118884Method for Resolving Mutex Contention in a Network System - A method of resolving mutex contention within a network interface unit is disclosed. The method includes providing a plurality of memory access channels, and moving a thread via at least one of the plurality of memory access channels, the plurality of memory access channels allowing the thread to be moved while avoiding mutex contention.05-13-2010
20100118883SYSTEMS AND METHODS FOR QUEUE MANAGEMENT IN PACKET-SWITCHED NETWORKS - This disclosure relates to methods and systems for queuing traffic in packet-switched networks. In one of many possible embodiments, a queue management system includes a plurality of queues and a priority module configured to assign incoming packets to the queues based on priorities associated with the incoming packets. The priority module is further configured to drop at least one of the packets already contained in the queues. The priority module is configured to operate across multiple queues when determining which of the packets contained in the queues to drop. Some embodiments provide for hybrid queue management that considers both classes and priorities of packets.05-13-2010
20120033680SYSTEMS AND METHODS FOR RECEIVE AND TRANSMISSION QUEUE PROCESSING IN A MULTI-CORE ARCHITECTURE - Described herein is a method and system for directing outgoing data packets from packet engines to a transmit queue of a NIC in a multi-core system, and a method and system for directing incoming data packets from a receive queue of the NIC to the packet engines. Packet engines store outgoing traffic in logical transmit queues in the packet engines. An interface module obtains the outgoing traffic and stores it in a transmit queue of the NIC, after which the NIC transmits the traffic from the multi-core system over a network. The NIC receives incoming traffic and stores it in a NIC receive queue. The interface module obtains the incoming traffic and applies a hash to a tuple of each obtained data packet. The interface module then stores each data packet in the logical receive queue of a packet engine on the core identified by the result of the hash.02-09-2012
20090207850System and method for data packet transmission and reception - A system transmits a data packet from a transmitting apparatus to a receiving apparatus. The receiving apparatus includes a receive buffer, and a size specifying information transmitting unit that transmits size specifying information to the transmitting apparatus. The transmitting apparatus includes a transmit buffer, a credit storage unit that stores, as a credit, a value corresponding to a total size of all data packets stored in the receive buffer, a credit adding unit that adds a credit to the stored credit on transmitting a data packet, a credit subtracting unit that specifies a size of a read-out data packet on receiving the size specifying information, subtracts a credit corresponding to the specified size from a stored credit, and a transmission controlling unit that controls data packet transmission based on a credit stored in the credit storage unit.08-20-2009
20090285230DELAY VARIATION BUFFER CONTROL TECHNIQUE - A delay variation buffer controller allowing proper cell delay variation control reflecting an actual network operation status is disclosed. A detector detects an empty status of the data buffer when data is read out from the data buffer at intervals of a controllable time period. A counter counts the number of contiguous times the empty status was detected. A proper time period is calculated depending on a value of the counter at a time when the empty status is not detected and the value of the counter is not zero. A timing corrector corrects the controllable time period to match the proper time period, and sets the controllable time period to a predetermined value when the empty status is not detected and the value of the counter is zero.11-19-2009
20090046735METHOD FOR PROVIDING PRIORITIZED DATA MOVEMENT BETWEEN ENDPOINTS CONNECTED BY MULTIPLE LOGICAL CHANNELS - A data network and a method for providing prioritized data movement between endpoints connected by multiple logical channels. Such a data network may include a first node comprising a first plurality of first-in, first-out (FIFO) queues arranged for high priority to low priority data movement operations; and a second node operatively connected to the first node by multiple control and data channels, and comprising a second plurality of FIFO queues arranged in correspondence with the first plurality of FIFO queues for high priority to low priority data movement operations via the multiple control and data channels; wherein an I/O transaction is accomplished by one or more control channels and data channels created between the first node and the second node for moving commands and data for the I/O transaction during the data movement operations, in the order from high priority to low priority.02-19-2009
20120106567MLPPP OCCUPANCY BASED ROUND ROBIN - Embodiments of the invention are directed to providing a method for selecting a link for transmitting a data packet, from links of a Multi-Link Point-to-Point Protocol (MLPPP) bundle, by compiling a list of links having a minimum queue depth and selecting the link in a round robin manner from the list. Some embodiments of the invention further provide for a flag to indicate if the selected link has been assigned to a transmitter so that an appropriate link will be selected even if link queue depth status is not current.05-03-2012
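The selection rule in the entry above — compile the list of links whose queue depth equals the bundle minimum, then choose among only those links in round-robin fashion — can be sketched as below. The class shape and field names are illustrative, not from the application.

```python
class MlpppBundle:
    """Occupancy-based round robin over the links of an MLPPP bundle."""

    def __init__(self, queue_depths):
        self.queue_depths = queue_depths  # current queue depth per link index
        self.last_selected = -1

    def select_link(self):
        # Step 1: compile the list of links at the minimum queue depth.
        min_depth = min(self.queue_depths)
        candidates = [i for i, d in enumerate(self.queue_depths)
                      if d == min_depth]
        # Step 2: round robin among the candidates -- take the first one
        # strictly after the last selection, wrapping to the start.
        for i in candidates:
            if i > self.last_selected:
                self.last_selected = i
                return i
        self.last_selected = candidates[0]
        return candidates[0]
```

With depths `[2, 0, 0, 1]` the scheduler alternates between links 1 and 2 (the two at the minimum depth) and never selects the deeper links 0 and 3.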
20090168792Method and Apparatus for Data Traffic Smoothing - A method and device for data traffic smoothing are provided. Arriving data packets are buffer-stored and passed on by taking account of an overhead of management information which is attached to the data packet in a protocol conversion process, which is carried out later. This protocol conversion process is carried out at a later time, for example by a DSL modem. The data transmission rate measured from the point of view of the network element carrying out the data traffic smoothing is not the criterion to be adjusted, but the data transmission rate after protocol conversion. A quality of service both for low and high data packet lengths is ensured, and the bandwidth of a DSL connection can therefore be exploited fully both for the VOIP and for data transmission.07-02-2009
20090274161NETWORK ROUTING METHOD AND SYSTEM UTILIZING LABEL-SWITCHING TRAFFIC ENGINEERING QUEUES - The present invention is directed to a scalable packet-switched network routing method and system that utilizes modified traffic engineering mechanisms to prioritize tunnel traffic and non-tunnel traffic. The method includes the steps of receiving a request to establish a traffic engineering tunnel across the packet-switched network. Then at a router traversed by the traffic engineering tunnel, a queue for packets carried inside the traffic engineering tunnel is created. Subsequently, bandwidth for the queue is reserved in accordance with the request to establish the traffic engineering tunnel, wherein the queue created for packets carried inside the traffic engineering tunnel is given priority over other traffic at the router and the reserved bandwidth for the queue can only be used by packets carried inside the traffic engineering tunnel.11-05-2009
20090003371METHOD FOR TRANSMITTING PACKET AND NETWORK SYSTEM THEREOF - A method for transmitting packets and a network system thereof are provided. In the present invention, each packet entering the network system is given an assigning tag indicating the arrival time of the packet, and at least two queues in a node of the network system are used for respectively sorting the local packets of the node and the relayed packets of the preceding node. The order of packet transmission is decided by comparing the assigning tags of the two packets positioned at first order in the different queues. Therefore, a condition of First-In First-Out (FIFO) is satisfied in the network system, and the sequence for transmitting packets is arbitrated fairly.01-01-2009
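The arbitration in the entry above — local and relayed packets sit in separate FIFOs, and the head whose assigning tag marks the earlier arrival is transmitted first — can be sketched as below. The function name and the `(arrival_tag, payload)` entry shape are illustrative assumptions.

```python
from collections import deque

def next_packet(local_queue: deque, relay_queue: deque):
    """Pop whichever queue head arrived earlier (smaller assigning tag).

    Each entry is (arrival_tag, payload); comparing the tags of the two
    heads yields node-wide first-in first-out order.
    """
    if not local_queue:
        return relay_queue.popleft() if relay_queue else None
    if not relay_queue:
        return local_queue.popleft()
    if local_queue[0][0] <= relay_queue[0][0]:
        return local_queue.popleft()
    return relay_queue.popleft()
```

Interleaving the two queues by tag yields a single globally time-ordered stream, even though local and relayed packets are sorted separately.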
20090279558Network routing apparatus for enhanced efficiency and monitoring capability - According to an embodiment of the invention, a network device such as a router or switch provides efficient data packet handling capability. The network device includes one or more input ports for receiving data packets to be routed, as well as one or more output ports for transmitting data packets. The network device includes an integrated port controller integrated circuit for routing packets. The integrated circuit includes an interface circuit, a received packets circuit, a buffer manager circuit for receiving data packets from the received packets circuit and transmitting data packets in one or more buffers and reading data packets from the one or more buffers. The integrated circuit also includes a rate shaper counter for storing credit for a traffic class, so that the integrated circuit can support input and/or output rate shaping. The integrated circuit may be associated with an IRAM, a CAM, a parameter memory configured to hold routing and/or switching parameters, which may be implemented as a PRAM, and an aging RAM, which stores aging information. The aging information may be used by a CPU coupled to the integrated circuit via a system interface circuit to remove entries from the CAM and/or the PRAM when an age count exceeds an age limit threshold for the entries.11-12-2009
20100085980RECEIVER DEVICE, TRANSMISSION SYSTEM, AND PACKET TRANSMISSION METHOD - In a transmission system of transferring a packet input from a first device to a second device via a network, a receiver device comprises a storage module configured to successively accumulate received packets, which are transferred over multiple transmission paths, in correlation to each of the multiple transmission paths, a packet selector configured to sequentially perform a packet selection process with respect to each of the received packets accumulated in the storage module, where after elapse of a predetermined time period since a receipt time of a first packet received by the receiver device, the packet selection process respectively reads out one packet for each of the multiple transmission paths among the received identical packets, which are accumulated in correlation to each of the multiple transmission paths, and selects one packet with higher reliability out of the read-out packets, and an output module configured to output the packet selected by the packet selector to the second device.04-08-2010
20090262749ESTABLISHING OPTIMAL LATENCY IN STREAMING DATA APPLICATIONS THAT USE DATA PACKETS - Embodiments for an apparatus and method are provided that can build latency in streaming applications that use data packets. In an embodiment, a system has an under-run forecasting mechanism, a statistics monitoring mechanism, and a playback queuing mechanism. The under-run forecasting mechanism determines an estimate of when a supply of data packets to convert will be exhausted. The statistics monitoring mechanism measures the arrival fluctuations of the supply of data packets. The playback queuing mechanism can build the latency.10-22-2009
20090262748RELAYING APPARATUS AND PACKET RELAYING APPARATUS - Each transmission port module includes a plurality of queues in association with combinations of a priority and a VLAN number. An accumulated-amount storage unit stores a total size of packets accumulated in queues associated with the same priority. A threshold storage unit stores a threshold of a total packet accumulated amount for each queue. When a packet is received, whether to discard the packet is determined based on a total packet accumulated amount stored in the accumulated-amount storage unit in association with a priority set for the packet and the threshold stored in the threshold storage unit in association with a storage-destination queue of the packet.10-22-2009
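The discard decision in the entry above — an arriving packet is dropped when the total bytes already accumulated across queues of the packet's priority would exceed the threshold stored for its storage-destination queue — could be sketched as follows. The class name, the `(priority, vlan)` keying, and the byte-counting are illustrative assumptions.

```python
class PriorityAccounting:
    """Per-priority accumulated-amount accounting with per-queue thresholds."""

    def __init__(self, thresholds):
        self.thresholds = thresholds  # threshold keyed by (priority, vlan)
        self.accumulated = {}         # total queued bytes keyed by priority

    def should_discard(self, priority, vlan, size):
        # Compare the priority-wide total against the threshold of the
        # specific storage-destination queue (priority, vlan).
        total = self.accumulated.get(priority, 0)
        return total + size > self.thresholds[(priority, vlan)]

    def enqueue(self, priority, vlan, size):
        if self.should_discard(priority, vlan, size):
            return False
        self.accumulated[priority] = self.accumulated.get(priority, 0) + size
        return True
```

Because the accumulated amount is shared per priority while thresholds are per queue, a queue with a low threshold can be starved by sibling queues of the same priority; that is exactly the behavior the shared counter encodes.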
20090262747RELAYING APPARATUS AND PACKET RELAYING METHOD - A packet storing unit stores relay instructions for received packets in different queues depending on priority and a VLAN number. DRR schedulers take out relay instructions from respective queues through a DRR technique. A priority control transmission scheduler transmits the packets to another apparatus according to the relay instructions in a descending order of priority.10-22-2009
20090279559Method and apparatus for aggregating input data streams - A method and apparatus aggregate a plurality of input data streams from first processors into one data stream for a second processor, the circuit and the first and second processors being provided on an electronic circuit substrate. The aggregation circuit includes (a) a plurality of ingress data ports, each ingress data port adapted to receive an input data stream from a corresponding first processor, each input data stream formed of ingress data packets, each ingress data packet including priority factors coded therein, (b) an aggregation module coupled to the ingress data ports, adapted to analyze and combine the plurality of input data steams into one aggregated data stream in response to the priority factors, (c) a memory coupled to the aggregation module, adapted to store analyzed data packets, and (d) an output data port coupled to the aggregation module, adapted to output the aggregated data stream to the second processor.11-12-2009
20090285232Service Interface for QoS-Driven HPNA Networks - An in-band signaling model media control (MC) terminal for an HPNA network includes a frame classification entity (FCE) and a frame scheduling entity (FSE) and provides end-to-end Quality of Service (QoS) by passing the QoS requirements from higher layers to the lower layers of the HPNA network. The FCE is located at an LLC sublayer of the MC terminal, and receives a data frame from a higher layer of the MC terminal that is part of a QoS stream. The FCE classifies the received data frame for a MAC sublayer of the MC terminal based on QoS information contained in the received data frame, and associates the classified data frame with a QoS stream queue corresponding to a classification of the data frame. The FSE is located at the MAC sublayer of the MC terminal, and schedules transmission of the data frame to a destination for the data frame based on a QoS requirement associated with the QoS stream.11-19-2009
20090285231PRIORITY SCHEDULING USING PER-PRIORITY MEMORY STRUCTURES - A system schedules traffic flows on an output port using circular memory structures. The circular memory structures may include rate wheels that include a group of sequentially arranged slots. The traffic flows may be assigned to different rate wheels on a per-priority basis.11-19-2009
20090285228MULTI-STAGE MULTI-CORE PROCESSING OF NETWORK PACKETS - Techniques for multi-stage multi-core processing of network packets are described herein. In one embodiment, work units are received within a network element, each work unit representing a packet of different flows to be processed in multiple processing stages. Each work unit is identified by a work unit identifier that uniquely identifies a flow in which the associated packet belongs and a processing stage that the associated packet is to be processed. The work units are then dispatched to multiple core logic, such that packets of different flows can be processed concurrently by multiple core logic and packets of an identical flow in different processing stages can be processed concurrently by multiple core logic, in order to determine whether the packets should be transmitted to one or more application servers of a datacenter. Other methods and apparatuses are also described.11-19-2009
20090290592RING BUFFER OPERATION METHOD AND SWITCHING DEVICE - A buffer operation method, for use with a buffer organized as a plurality of sections, two or more continuous ones of the sections being defined as a monitor block, the method including: receiving a data packet and dividing the same into a plurality of divisions; storing the divisions in a given one of the sections; moving, in the case where the given section is behind the monitor block, the monitor block so that a tail end thereof corresponds to the given section; monitoring whether the plurality of divisions required for reassembly of the packet are stored in the monitor block; and transferring, once all the required plurality of divisions are collected in the monitor block, the same from the buffer for subsequent reassembly of the packet.11-26-2009
20080212600Router and queue processing method thereof - A queue processing method and a router perform cache update and queue processing based upon whether or not the packet capacity stored in the queue exceeds a rising threshold, or whether the packet capacity stored in the queue is below a falling threshold after the packet capacity stored in the queue has exceeded the rising threshold. By using two caches, this queue processing method and router make it possible to eliminate the overhead associated with the update of flow information while concomitantly removing the inequality of packet flows via RED queue management.09-04-2008
20100103946PACKET CAPTURING DEVICE - A next sequence number, which is a sequence number that a next packet should have, is compared with the sequence number of a current packet, and an identifier of a previous packet is compared with the identifier of the current packet. A delay is judged to be a simple delay when the next sequence number matches the sequence number of the current packet and the identifier of the previous packet is followed by the identifier of the current packet. Otherwise, the delay is judged to be caused by a retransmission when the identifier of the previous packet is not followed by the identifier of the current packet.04-29-2010
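The two comparisons described in the entry above reduce to a small decision function. The function and field names are illustrative; "identifier" is modeled here as a counter that increments by one between consecutively sent packets, so a consecutive identifier signals a simple delay and a gap signals a retransmission.

```python
def classify_delay(next_seq, prev_ident, cur_seq, cur_ident):
    """Classify a late packet as a simple delay or a retransmission."""
    if cur_seq != next_seq:
        # The current packet is not the one we were waiting for at all.
        return "not-delayed"
    if cur_ident == prev_ident + 1:
        # Expected sequence number AND consecutive identifier: the packet
        # was merely held up in transit.
        return "simple-delay"
    # Expected sequence number but a non-consecutive identifier: the
    # original was lost and this copy was sent again later.
    return "retransmission"
```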
20120294314DUAL-ROLE MODULAR SCALED-OUT FABRIC COUPLER CHASSIS - A scaled-out fabric coupler (SFC) chassis includes a plurality of root fabric cards installed on the one side of the SFC chassis. Each root fabric card has a plurality of electrical connectors. A plurality of line cards is installed on the opposite side of the SFC chassis. Each line card is one of two types of line cards. One of the two types of line cards is a switch-based network line card having network ports for connecting to servers and switches. The other of the two types of line cards is a leaf fabric card having fabric ports for connecting to a fabric port of a network element. Each of the two types of the line cards has electrical connectors that mate with one electrical connector of each root fabric card installed in the chassis.11-22-2012
20100202469QUEUE MANAGEMENT SYSTEM AND METHODS - A system and method are provided for managing a queue of packets transmitted from a sender to a receiver across a communications network. The sender has a plurality of sender states and a queue manager situated in between the sender and receiver may have a corresponding plurality of queue manager states. The queue manager has one or more queue management parameters which may have distinct predetermined values for each of the queue manager states. When the queue manager detects an event that is indicative of a change in the sender's state, the queue manager may change its state correspondingly.08-12-2010
20100061392METHOD, DEVICE AND SYSTEM OF SCHEDULING DATA TRANSPORT OVER A FABRIC - Embodiments of the invention provide systems, devices and methods to schedule data transport across a fabric, e.g., prior to actual transmission of the data across the fabric. In some demonstrative embodiments, a packet switch may include an input controller to schedule transport of at least one data packet to an output controller over a fabric based on permission information received from the output controller. Other embodiments are described and claimed.03-11-2010
20090296729DATA OUTPUT APPARATUS, COMMUNICATION APPARATUS AND SWITCH APPARATUS - A data communication apparatus has a data retainer, a retain state manager, a guaranteed bandwidth manager, a surplus bandwidth manager managing outputting of output data having a destination retained in the data retainer to an output line on a per-destination basis when the output data is outputted to the output line with the use of a surplus bandwidth that is a surplus over a sum of guaranteed bandwidths, and a scheduler scheduling outputting of data retained in the data retainer to the output line, based on results of management by the guaranteed bandwidth manager and the surplus bandwidth manager and a retain state managed by the retain state manager. The apparatus manages the bandwidth with improved accuracy at the time of communication using a surplus bandwidth.12-03-2009
20090046734Method for Traffic Management, Traffic Prioritization, Access Control, and Packet Forwarding in a Datagram Computer Network - The invention provides an enhanced datagram packet switched computer network. The invention processes network datagram packets in network devices as separate flows, based on the source-destination address pair in the datagram packet. As a result, the network can control and manage each flow of datagrams in a segregated fashion. The processing steps that can be specified for each flow include traffic management, flow control, packet forwarding, access control, and other network management functions. The ability to control network traffic on a per flow basis allows for the efficient handling of a wide range and a large variety of network traffic, as is typical in large-scale computer networks, including video and multimedia traffic. The amount of buffer resources and bandwidth resources assigned to each flow can be individually controlled by network management. In the dynamic operation of the network, these resources can be varied—based on actual network traffic loading and congestion encountered. The invention also teaches an enhanced datagram packet switched computer network which can selectively control flows of datagram packets entering the network and traveling between network nodes. This new network access control method also interoperates with existing media access control protocols, such as used in the Ethernet or 802.3 local area network. An aspect of the invention is that it does not require any changes to existing network protocols or network applications.02-19-2009
20090168790Dynamically adjusted credit based round robin scheduler - A credit based queue scheduler dynamically adjusts credits depending upon at least a moving average of incoming packet size to alleviate the impact of traffic burstiness and packet size variation, and increase the performance of the scheduler by lowering latency and jitter. For the case when no service differentiation is required, the credit is adjusted by computing a weighted moving average of incoming packets for the entire scheduler. For the case when differentiation is required, the credit for each queue is determined by a product of a sum of credits given to all queues and priority levels of each queue.07-02-2009
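The credit-adjustment idea in the entry above (the no-differentiation case) can be sketched in Python: a deficit-round-robin loop whose per-round credit tracks a weighted moving average of incoming packet sizes. Class, parameter, and variable names here are illustrative, not taken from the patent.

```python
from collections import deque

class AdaptiveCreditScheduler:
    """Round-robin scheduler whose per-round credit adapts to an EWMA
    of incoming packet sizes (sketch of the no-differentiation case)."""

    def __init__(self, num_queues, alpha=0.25, initial_credit=512):
        self.queues = [deque() for _ in range(num_queues)]
        self.credits = [0.0] * num_queues
        self.alpha = alpha              # EWMA smoothing factor (assumed)
        self.avg_size = initial_credit  # moving average of packet size

    def enqueue(self, qid, packet_size):
        # Update the weighted moving average of incoming packet size.
        self.avg_size = (1 - self.alpha) * self.avg_size + self.alpha * packet_size
        self.queues[qid].append(packet_size)

    def round(self):
        """One scheduling round; returns a list of (qid, size) sent."""
        sent = []
        quantum = self.avg_size         # credit adapts to recent traffic
        for qid, q in enumerate(self.queues):
            if not q:
                self.credits[qid] = 0.0  # idle queues accumulate no credit
                continue
            self.credits[qid] += quantum
            while q and q[0] <= self.credits[qid]:
                size = q.popleft()
                self.credits[qid] -= size
                sent.append((qid, size))
        return sent
```

Because the quantum follows the measured average packet size, a burst of large packets raises the credit so that heads of queues are not starved, which is the latency effect the abstract describes.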
20080267205Traffic management device and method thereof - A traffic management device and the method thereof are disclosed. The traffic management device includes a control logic unit, a first counting unit, and a second counting unit. The traffic management method follows the dual leaky bucket mechanism. A first count value and a second count value are generated by the first counting unit and the second counting unit, respectively, such that the control logic unit controls the average rate by checking whether the first count value falls within the range of a first threshold and controls the peak rate by checking whether the second count value falls within the range of a second threshold. When both conditions are satisfied, packets in the queue are transmitted. Thus, the network flow is controlled effectively.10-30-2008
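The dual leaky bucket mechanism named in the entry above is commonly realized as two token buckets, one policing the average rate and one the peak rate, with a packet transmitted only when both conform. A minimal sketch, with illustrative names and a caller-supplied clock for determinism:

```python
class DualLeakyBucket:
    """Dual token-bucket policer: one bucket bounds the average rate,
    the other bounds the peak rate; both must conform to transmit."""

    def __init__(self, avg_rate, avg_burst, peak_rate, peak_burst):
        self.avg_rate, self.avg_burst = avg_rate, avg_burst
        self.peak_rate, self.peak_burst = peak_rate, peak_burst
        self.avg_tokens = avg_burst    # buckets start full
        self.peak_tokens = peak_burst
        self.last = 0.0                # time of last check

    def conforms(self, size, now):
        """Refill both buckets for elapsed time; transmit only if the
        packet fits in both (the 'both conditions satisfied' check)."""
        elapsed = now - self.last
        self.last = now
        self.avg_tokens = min(self.avg_burst,
                              self.avg_tokens + elapsed * self.avg_rate)
        self.peak_tokens = min(self.peak_burst,
                               self.peak_tokens + elapsed * self.peak_rate)
        if size <= self.avg_tokens and size <= self.peak_tokens:
            self.avg_tokens -= size
            self.peak_tokens -= size
            return True
        return False
```

The small peak bucket refuses back-to-back packets even while the large average bucket still has tokens, which is what separates peak-rate from average-rate control.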
20090168793Prioritising Data Transmission - Transmitting from a mobile terminal to a telecommunication network data stored in a plurality of queues, each queue having a respective transmission priority, includes setting the data in each of said queues to be either primary data or secondary data, or a combination of primary data and secondary data. The data may be transmitted from the queues in an order in dependence upon the priority of the queue and whether the data in that queue are primary data or secondary data. Resources for data transmission may be allocated such that the primary data of each of said queues are transmitted at a minimum predetermined rate and such that the secondary data of each of said queues are transmitted at a maximum predetermined rate, greater than said minimum predetermined rate.07-02-2009
20080279207Method and apparatus for improving performance in a network using a virtual queue and a switched poisson process traffic model - A method for improving network performance using a virtual queue is disclosed. The method includes measuring characteristics of a packet arrival process at a network element, establishing a virtual queue for packets arriving at the network element, and modeling the packet arrival process based on the measured characteristics and a computed performance of the virtual queue.11-13-2008
20080304503TRAFFIC MANAGER AND METHOD FOR PERFORMING ACTIVE QUEUE MANAGEMENT OF DISCARD-ELIGIBLE TRAFFIC - A traffic manager and a method are described herein that are capable of performing an active queue management of discard-eligible traffic for a shared memory device (with a per-CoS switching fabric) that provides fair per-class backpressure indications.12-11-2008
20080285580ROUTER APPARATUS - A router apparatus allocates a queue in a storage device and transmits transmission-target data after temporarily storing the transmission-target data in the queue. The router apparatus determines whether the size of a usable data area assigned to the queue is equal to or greater than a threshold, and supplements, on the basis of the determination, the data area of the supplementation-target queue having the usable data area whose size is smaller than the threshold with a data area of a queue other than the supplementation-target queue.11-20-2008
20090154483A 3-LEVEL QUEUING SCHEDULER SUPPORTING FLEXIBLE CONFIGURATION AND ETHERCHANNEL - In one embodiment, a scheduler for a queue hierarchy only accesses sub-groups of bucket nodes in order to determine the best eligible queue bucket to transmit next. Etherchannel address hashing is performed after scheduling so that an Etherchannel queue including a single queue in the hierarchy is implemented to guarantee quality of service.06-18-2009
20090190605DYNAMIC COLOR THRESHOLD IN A QUEUE - A network device for dynamically allocating memory locations to a plurality of queues. The network device includes means for determining an amount of memory buffers that is associated with a port, for assigning a fixed allocation of memory buffers to each of a plurality of queues associated with the port and for sharing remaining memory buffers among the plurality of queues. The remaining memory buffers are used by each of the plurality of queues after the fixed allocation of memory buffers assigned to the queue is used. The network device also includes means for setting a limit threshold for each of the plurality of queues. The limit threshold determines how much of the remaining memory buffer may be used by each of the plurality of queues. The network device further includes means for defining at least one color threshold for packets including a specified color marking and for defining a virtual maximum threshold. When one of the plurality of queues requests access to the remaining memory buffers and the remaining memory buffers are less than the limit threshold for the queue, the virtual maximum threshold is defined for the queue. The virtual maximum threshold replaces the limit threshold and packets associated with the at least one color threshold are processed in proportion with other color thresholds based on the virtual maximum threshold ceiling.07-30-2009
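The fixed-plus-shared buffer scheme underlying the entry above (before the color/virtual-maximum refinement) can be sketched as follows; all names and counts are illustrative, and buffers are modeled as unit-sized slots:

```python
class SharedBufferPort:
    """Sketch: each queue first consumes its fixed buffer allocation,
    then draws from a shared pool up to a per-queue limit threshold."""

    def __init__(self, num_queues, fixed, shared_total, limit):
        self.fixed = fixed              # fixed allocation per queue
        self.limit = limit              # per-queue cap on shared usage
        self.shared_free = shared_total # remaining shared buffers
        self.used = [0] * num_queues        # fixed buffers in use
        self.shared_used = [0] * num_queues # shared buffers in use

    def admit(self, qid):
        """Admit one buffer's worth of data for queue qid, or refuse."""
        if self.used[qid] < self.fixed:
            self.used[qid] += 1         # still within fixed allocation
            return True
        if self.shared_used[qid] < self.limit and self.shared_free > 0:
            self.shared_used[qid] += 1  # borrow from the shared pool
            self.shared_free -= 1
            return True
        return False                    # limit threshold or pool exhausted
```

The patent's dynamic color threshold then scales per-color drop points against a virtual maximum when the shared pool runs low; the sketch stops at the limit-threshold admission decision.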
20080240140Network interface with receive classification - A network interface that provides improved processing of received packets in a networked computer by classifying packets as they are received. Further, both the characteristics used by the network interface to classify packets and the processing performed on those packets once classified may be programmed. The network interface contains multiple receive queues and one type of processing that may be performed is assigning packets to queues based on classification. A network stack within an operating system of the networked computer can route packets classified by the network interface to application level destinations with reduced processing. Additionally, the priority with which packets of certain classifications are processed may be used to allocate processing power to certain types of packets. As a specific example, a computer subjected to a particular type of denial of service attack sometimes called a “SYN attack” may lower the priority of processing SYN packets to reduce the effect of such an attack.10-02-2008
20080240139Method and Apparatus for Operating Fast Switches Using Slow Schedulers - The invention includes an apparatus and method for switching packets through a switching fabric. The apparatus includes a plurality of input ports and output ports for receiving arriving packets and transmitting departing packets, a switching fabric for switching packets from the input ports to the output ports, and a plurality of schedulers controlling switching of packets through the switching fabric. The switching fabric includes a plurality of virtual output queues associated with a respective plurality of input-output port pairs. One of the schedulers is active during each of a plurality of timeslots. The one of the schedulers active during a current timeslot provides a packet schedule to the switching fabric for switching packets through the switching fabric during the current timeslot. The packet schedule is computed by the one of the schedulers active during the current timeslot using packet departure information for packets departing during previous timeslots during which the one of the schedulers was active and packet arrival information for packets arriving during previous timeslots during which the one of the schedulers was active.10-02-2008
20090116503METHODS AND SYSTEMS FOR PERFORMING TCP THROTTLE - The present invention relates to systems and methods of accelerating network traffic. The method includes receiving a plurality of network packets and setting a threshold for a buffer. The threshold indicates a low water mark for the buffer. The method further includes storing the plurality of network packets in the buffer at least until the buffer's capacity is full, removing packets from the buffer, and transmitting the removed packets via a downstream link to an associated destination. Furthermore, the method includes that in response to removing packets from the buffer such that the buffer's capacity falls below the threshold, receiving additional packets and storing the additional packets at least until the buffer's capacity is full.05-07-2009
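The fill-then-drain behavior with a low-water-mark threshold described in the entry above can be sketched as a small buffer class; the names are illustrative:

```python
from collections import deque

class ThrottleBuffer:
    """Sketch: accept packets until the buffer is full, drain them
    downstream, and resume accepting only once occupancy falls below
    a low-water-mark threshold."""

    def __init__(self, capacity, threshold):
        self.capacity = capacity
        self.threshold = threshold   # low water mark
        self.buf = deque()
        self.accepting = True

    def offer(self, pkt):
        """Store an arriving packet; refuse input while throttled."""
        if not self.accepting:
            return False
        self.buf.append(pkt)
        if len(self.buf) >= self.capacity:
            self.accepting = False   # buffer full: stop taking input
        return True

    def drain(self):
        """Remove one packet for downstream transmission."""
        pkt = self.buf.popleft()
        if len(self.buf) < self.threshold:
            self.accepting = True    # below low water mark: resume
        return pkt
```

The gap between `capacity` and `threshold` provides hysteresis, so the buffer does not flap between accepting and refusing on every packet.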
20090161685METHOD, SYSTEM AND NODE FOR BACKPRESSURE IN MULTISTAGE SWITCHING NETWORK - The present invention provides a backpressure method, system, and intermediate stage switching node for a multistage switching network. The method includes: (i) the intermediate stage switching node receives first backpressure information; and (ii) the intermediate stage switching node sends at least part of the first backpressure information to an upper stage switching node, wherein there is no response sent by the intermediate stage switching node to at least part of the first backpressure information.06-25-2009
20080317057Methods for Processing Two Data Frames With Scalable Data Utilization - The present invention provides a framework for the processing of blocks between two data frames, in particular its application to motion estimation calculations, in which a balance among the performance of a motion search algorithm, the size of on-chip memory to store the reference data, and the required data transfer bandwidth between on-chip and external memory can be optimized in a scalable manner, such that the total system cost with a hierarchical embedded memory structure can be optimized in a flexible manner. The scope of the present invention is not limited to digital video encoding, in which the motion vector is part of the information to be encoded, but is applicable to any other implementation in which the difference between any two data frames is to be computed.12-25-2008
20090080451PRIORITY SCHEDULING AND ADMISSION CONTROL IN A COMMUNICATION NETWORK - Techniques for performing priority scheduling and admission control in a communication network are described. In an aspect, data flows may be prioritized, and packets for data flows with progressively higher priority levels may be placed at points progressively closer to the head of a queue and may then experience progressively shorter queuing delays. In another aspect, a packet for a terminal may be transferred from a source cell to a target cell due to handoff and may be credited for the amount of time the packet has already waited in a queue at the source cell. In yet another aspect, all priority and non-priority data flows may be admitted if cell loading is light, only priority data flows may be admitted if the cell loading is heavy, and all priority data flows and certain non-priority data flows may be admitted if the cell loading is moderate.03-26-2009
20110222552THREAD SYNCHRONIZATION IN A MULTI-THREAD NETWORK COMMUNICATIONS PROCESSOR ARCHITECTURE - Described embodiments provide a packet classifier for a network processor that generates tasks corresponding to each received packet. The packet classifier includes a scheduler to generate contexts corresponding to tasks received by the packet classifier from a plurality of processing modules of the network processor. A multi-thread instruction engine processes threads of instructions, each thread of instructions corresponding to a context received from the scheduler. A thread status manager maintains a thread status table having N entries to track up to N active threads. Each status entry includes a valid status indicator, a sequence value, and a thread indicator. A sequence counter generates a sequence value for each thread and is incremented when processing of a thread is started, and is decremented when a thread is completed, by the multi-thread instruction engine. Instructions are processed by the multi-thread instruction engine in the order in which the threads were started.09-15-2011
20090097494PACKET FORWARDING METHOD AND DEVICE - A packet forwarding mechanism using a packet map is disclosed. The method includes the packet map storing a packet forwarding information of each packet, where the packet map uses a single bit to indicate whether a packet is forwarding through a specific output port. In this way, the packet forwarding information can be stored in a very simple form such that less memory space is required for storing the packet forwarding information.04-16-2009
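The one-bit-per-output-port representation described in the entry above is a plain bitmask; a minimal sketch with illustrative names:

```python
class PacketMap:
    """Sketch: per-packet forwarding information as a bitmask with one
    bit per output port, so each (packet, port) decision costs one bit."""

    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.map = {}                     # packet id -> port bitmask

    def mark(self, pkt_id, port):
        """Record that pkt_id is to be forwarded through `port`."""
        self.map[pkt_id] = self.map.get(pkt_id, 0) | (1 << port)

    def forwards_to(self, pkt_id, port):
        """Test the single bit for this packet/port pair."""
        return bool(self.map.get(pkt_id, 0) & (1 << port))

    def ports(self, pkt_id):
        """List all output ports whose bit is set for pkt_id."""
        m = self.map.get(pkt_id, 0)
        return [p for p in range(self.num_ports) if (m >> p) & 1]
```

A 64-port switch needs only 64 bits of forwarding state per packet, which is the memory saving the abstract claims.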
20090097493QUEUING MIXED MESSAGES FOR CONFIGURABLE SEARCHING - The present invention provides a method and an apparatus for forming a queue that enables a real time search of a first and a second plurality of messages which enter the queue in a linear order. The method comprises providing a sequential data structure to populate the queue with the first and second plurality of messages. The method comprises using the sequential data structure to selectively configure the queue for traversing in a search order different than the linear order in which the first and second plurality of messages reach the queue.04-16-2009
20110228794System and Method for Pseudowire Packet Cache and Re-Transmission - Disclosed is an apparatus that includes an ingress node configured to couple to an egress node and transmit a plurality of packets to one or more egress nodes, wherein at least some of the plurality of packets are cached before transmission and wherein the ingress node is further configured to retransmit a packet from the cached packets based on a request from one of the one or more egress nodes.09-22-2011
20090219942Transmission of Data Packets of Different Priority Levels Using Pre-Emption - A method for transmitting data packets of at least two different priority levels via one or more bearer channels is described. The method comprises the steps of fragmenting a data packet into a plurality of corresponding code words, each code word comprising a sync code, with the sync code being adapted for indicating a priority level of the corresponding data packet, and of transmitting the code words via the one or more bearer channels. In case high priority code words corresponding to a high priority data packet arrive during transmission of low priority code words corresponding to a low priority data packet, the following steps are performed: interrupting transmission of low priority code words, transmitting the high priority code words corresponding to the high priority data packet, and resuming the transmission of the low priority code words via the one or more bearer channels.09-03-2009
20090245271SIGNAL PACKET RELAY DEVICE - A packet-signaling relay device selectively relays incoming signal packets, and includes a random number generation unit which generates a random number, a delete threshold generation unit which generates a delete threshold based on an objective delete probability, a comparison unit which compares the random number and the delete threshold to generate a comparison result, and a delete determination unit which generates a delete/storage determination result based on the comparison result. The packet-signaling relay device further includes a packet receiving-and-storing unit which is responsive to the comparison result to selectively delete or store incoming signal packets, and a sending unit for sending the signal packets stored in the packet receiving-and-storing unit.10-01-2009
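The delete/store decision in the entry above, comparing a random number against a threshold derived from an objective delete probability, can be sketched as follows; the class name and the injectable random source are illustrative:

```python
import random

class RandomDropRelay:
    """Sketch: delete an incoming signal packet when a uniform random
    draw falls below the delete threshold (the target drop
    probability); otherwise store it for relaying."""

    def __init__(self, drop_probability, rng=random.random):
        self.threshold = drop_probability   # delete threshold in [0, 1]
        self.rng = rng                      # random number generation unit
        self.stored = []                    # receiving-and-storing unit

    def receive(self, pkt):
        """Comparison + delete determination for one incoming packet."""
        if self.rng() < self.threshold:
            return False                    # deleted
        self.stored.append(pkt)             # stored for later sending
        return True
```

Over many packets the fraction deleted converges to `drop_probability`, which is why comparing one uniform draw per packet against a fixed threshold realizes the objective delete probability.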
20090257441PACKET FORWARDING APPARATUS AND METHOD FOR DISCARDING PACKETS - The packet forwarding apparatus of the present invention includes a packet buffer for temporarily storing packets to be forwarded, a timer for measuring the time of every predetermined unit period, a plurality of first queues corresponding to each of a plurality of address groups that form the packet buffer, a plurality of second queues that are provided corresponding to the property of the packets, a first controller for executing the writing of the packets, and a second controller for executing the discarding of the packets. According to this invention, through managing the first queues and the second queues, packets in the packet buffer can be discarded without the packets being read from the packet buffer.10-15-2009
20100061391METHODS AND APPARATUS RELATED TO A LOW COST DATA CENTER ARCHITECTURE - In one embodiment, an apparatus can include a first edge device that can have a packet processing module. The first edge device can be configured to receive a packet. The packet processing module of the first edge device can be configured to produce cells based on the packet. A second edge device can have a packet processing module configured to reassemble the packet based on the cells. A multi-stage switch fabric can be coupled to the first edge device and the second edge device. The multi-stage switch fabric can define a single logical entity. The multi-stage switch fabric can have switch modules. Each switch module from the switch modules can have a shared memory device. The multi-stage switch fabric can be configured to switch the cells so that the cells are sent to the second edge device.03-11-2010
20110228793CUSTOMIZED CLASSIFICATION OF HOST BOUND TRAFFIC - A network device component receives traffic, determines whether the traffic is host bound traffic or non-host bound traffic, and classifies, based on a user-defined classification scheme, the traffic when the traffic is host bound traffic. The network device component also assigns, based on the classification, the classified host bound traffic to a queue associated with network device component for forwarding the classified host bound traffic to a host component of the network device.09-22-2011
20100150164FLOW-BASED QUEUING OF NETWORK TRAFFIC - A method is provided for queuing packets. A packet may be received and its flow identified. It may then be determined whether a flow queue has been assigned to the identified flow. The identified flow may be dynamically assigning to an available flow queue when it is determined that a flow queue has not been assigned to the identified flow. The packet may be enqueued into the available flow queue.06-17-2010
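The identify-flow, assign-queue-on-first-sight, enqueue sequence in the entry above can be sketched directly; the flow key (source/destination pair) and names are illustrative:

```python
from collections import deque

class FlowQueuer:
    """Sketch: identify each packet's flow, dynamically bind previously
    unseen flows to an available flow queue, then enqueue the packet."""

    def __init__(self, num_queues):
        self.queues = [deque() for _ in range(num_queues)]
        self.assignment = {}              # flow key -> queue index
        self.free = list(range(num_queues))

    def enqueue(self, packet):
        """Returns the queue index the packet was enqueued into."""
        flow = (packet['src'], packet['dst'])   # illustrative flow key
        qid = self.assignment.get(flow)
        if qid is None:                   # first packet of this flow
            if not self.free:
                raise RuntimeError("no available flow queue")
            qid = self.free.pop(0)        # dynamic assignment
            self.assignment[flow] = qid
        self.queues[qid].append(packet)
        return qid
```

A real implementation would also reclaim queues from idle flows; the sketch stops at the dynamic-assignment step the abstract describes.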
20100278190HIERARCHICAL PIPELINED DISTRIBUTED SCHEDULING TRAFFIC MANAGER - A hierarchical pipelined distributed scheduling traffic manager includes multiple hierarchical levels to perform hierarchical winner selection and propagation in a pipeline including selecting and propagating winner queues of a lower level to subsequent levels to determine one final winning queue. The winner selection and propagation is performed in parallel between the levels to reduce the time required in selecting the final winning queue. In some embodiments, the hierarchical traffic manager is separated into multiple separate sliced hierarchical traffic managers to distributively process the traffic.11-04-2010
20100158033COMMUNICATION APPARATUS IN LABEL SWITCHING NETWORK - In a label switching network using a plurality of labels including first and second labels, a communication apparatus receives a packet having the plurality of labels, and determines an output destination of the packet in accordance with the first label of the plurality of labels. Additionally, the communication apparatus sorts the packet to one of a plurality of packet queues in accordance with a combination of the first and the second labels of the plurality of labels, and reads and multiplexes packets from the plurality of packet queues.06-24-2010
20100158032SCHEDULING AND QUEUE MANAGEMENT WITH ADAPTIVE QUEUE LATENCY - The invention relates to a scheduler for a TCP/IP based data communication system and a method for the scheduler. The communication system comprises a TCP/IP transmitter and a receiving unit (UE). The scheduler is associated with a Node comprising a rate measuring device for measuring a TCP/IP data rate from the TCP/IP transmitter and a queue buffer device for buffering data segments from the TCP/IP transmitter. The scheduler is arranged to receive information from the rate measuring device regarding the TCP/IP data rate and is arranged to adapt the permitted queue latency to a minimum value when the TCP/IP transmitter is in a slow start mode and to increase the permitted queue latency when the TCP/IP rate has reached a threshold value.06-24-2010
20090316712Method and apparatus for minimizing clock drift in a VoIP communications network - A method and apparatus for minimizing clock drift between un-synchronized clocks which may occur at opposing ends of a communication link established in, for example, a Voice over Internet Protocol (VoIP) communications network, especially for use with, for example, a FAX or modem terminal device. The illustrative system employs two or more clocks, wherein at least one of these clocks operates at an intentionally higher frequency than the nominal clock frequency (e.g., 8 kHz) and wherein at least one of these clocks operates at an intentionally lower frequency than the nominal clock frequency. In operation, the illustrative system alternately chooses one of the clocks, in order to attempt to match the clock of the far-end terminal device on average. The state and/or history of the receiving device's associated jitter buffer may be advantageously used to determine which clock to select.12-24-2009
20100020816Connectionless packet data transport over a connection-based point-to-point link - A multiple processor device generates a control packet for at least one connectionless-based packet in partial accordance with a control packet format of the connection-based point-to-point link and partially not in accordance with the control packet format. For instance, the multiple processor device generates the control packet to include, in noncompliance with the control packet format, one or more of an indication that at least one connectionless-based packet is being transported, an indication of a virtual channel of a plurality of virtual channels associated with the at least one connectionless-based packet, an indication of an amount of data included in the associated data packet, status of the at least one connectionless-based packet, and an error status indication. The multiple processor device then generates the associated data packet in accordance with a data packet format of the connection-based point-to-point link, wherein the data packet includes at least a portion of the at least one connectionless-based packet.01-28-2010
20100183021Method and Apparatus for Queuing Data Flows - In a data system, such as a cable modem termination system, different-priority flows are scheduled to be routed to their logical destinations by factoring both the priority level and the time spent in queue. The time that each packet of each flow spends waiting for transmission is normalized such that the waiting times of all flows are equalized with respect to each other. A latency scaling parameter is calculated.07-22-2010
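One way to combine priority level and time-in-queue as the entry above describes is to scale each flow's waiting time by a per-priority latency parameter and serve the flow with the largest scaled wait. This is a hedged sketch of that idea, not the patent's exact normalization; all names are illustrative:

```python
def pick_next(flows, now):
    """Sketch: choose the flow whose head-of-line waiting time, scaled
    by a per-priority latency parameter, is largest. A larger scale
    means a higher-priority flow needs less waiting to win.

    `flows` maps flow name -> (head enqueue time, latency scale)."""
    best, best_score = None, float('-inf')
    for name, (enqueue_time, scale) in flows.items():
        score = (now - enqueue_time) * scale   # normalized waiting time
        if score > best_score:
            best, best_score = name, score
    return best
```

With this rule a low-priority flow still wins eventually once its raw wait grows large enough, which equalizes effective waiting times across priorities rather than starving the low end.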
20100189123Reordering Packets - There are disclosed processes and apparatus for reordering packets. The system includes a plurality of source processors that transmit the packets to a destination processor via multiple communication fabrics. The source processors and the destination processor are synchronized together. Time stamp logic at each source processor operates to include a time stamp parameter with each of the packets transmitted from the source processors. The system also includes a plurality of memory queues located at the destination processor. An enqueue processor operates to store a memory pointer and an associated time stamp parameter for each of the packets received at the destination processor in a selected memory queue. A dequeue processor determines a selected memory pointer associated with a selected time stamp parameter and operates to process the selected memory pointer to access a selected packet for output in a reordered packet stream.07-29-2010
20090116504PACKET PROCESSING APPARATUS FOR REALIZING WIRE-SPEED, AND METHOD THEREOF - Provided are a packet processing apparatus for realizing a wire-speed, and a method thereof. The packet processing apparatus realizes a wire-speed by making an inputted packet be processed in another packet processing apparatus instead of processing the inputted packet for itself. The packet processing apparatus for realizing a wire-speed by having an inputted packet processed in a packet processor of another packet processing apparatus by making an inputted packet detour a packet processor into a detour path, includes: a packet classifier for classifying and storing the inputted packet in a multi-queue based on a priority; a queue manager for including the multi-queue, determining a detour packet among packets stored in the multi-queue and marking the packet as a detour packet; and a packet scheduler for transmitting the packet designated as the detour packet to the detour path. The apparatus is used for a packet communication system.05-07-2009
20090316713COMMUNICATION APPARATUS IN LABEL SWITCHING NETWORK - In a label switching network using a plurality of labels, a communication apparatus receives signaling information for setting a first label switching tunnel. This signaling information includes one or more values of one or more labels representing one or more pseudowires accommodated in a first label switching tunnel, the bandwidth information of the pseudowire, and the bandwidth-sharing identifier. A bandwidth management table holding correspondence relationships between values of a plurality of labels, the bandwidth information, and the bandwidth-sharing identifiers are generated. One or more second label switching tunnels may be accommodated instead of one or more pseudowires.12-24-2009
20090316714PACKET RELAY APPARATUS - In a packet relay apparatus equipped with a hierarchical bandwidth control function, a queuing unit of a bandwidth controller for controlling a bandwidth of a packet to be transmitted recognizes user information for identifying a user from VLAN ID of a received Tag-VLAN packet, acquires queue information representative of a queue position by referring to a priority mapping table by using a user priority order in the packet, and queues the packet to the queue identified by the user information and queue information. Bandwidth control can therefore be performed without searching a QoS information management table.12-24-2009
20100195662METHOD FOR SUB-PACKET GENERATION WITH ADAPTIVE BIT INDEX - A method of generating a sub-packet in consideration of an offset is disclosed. A method of generating a sub-packet in consideration of an offset, for re-transmission of a packet from systematic bits and parity bits stored in a circular buffer includes turbo-coding an input bitstream at a predetermined code rate and generating and storing the systematic bits and the parity bits in the circular buffer, and deciding a starting position of the sub-packet in the circular buffer in consideration of the offset for puncturing at least a portion of the systematic bits of the circular buffer. Accordingly, it is possible to efficiently decide the starting position of the sub-packet to be transmitted adaptively with respect to a variable packet length, improving a coding gain, reducing complexity and reducing a calculation amount.08-05-2010
20130215904VIRTUAL MEMORY PROTOCOL SEGMENTATION OFFLOADING - Methods and systems for a more efficient transmission of network traffic are provided. According to one embodiment, a user process of a host processor requests a network driver to store payload data within a system memory. The network driver stores (i) payload buffers each containing therein at least a subset of the payload data and (ii) buffer descriptors each containing therein information indicative of a starting address of a corresponding payload buffer within a user memory space. A network processor transmits onto a network the payload data within multiple transport layer protocol packets by (i) causing a network interface to retrieve the payload data from the payload buffers by performing direct virtual memory addressing of the user memory space using the buffer descriptors and information contained within a translation data structure stored within the system memory; and (ii) segmenting the payload data across the transport layer protocol packets.08-22-2013
20100226384Method for reliable transport in data networks - Rapid and reliable network data delivery uses state sharing to combine multiple flows into one meta-flow at an intermediate network stack meta-layer, or shim layer. Copies of all packets of the meta-flow are buffered using a common wait queue having an associated retransmit timer, or set of timers. The timers may have fixed or dynamic timeout values. The meta-flow may combine multiple distinct data flows to multiple distinct destinations and/or from multiple distinct sources. In some cases, only a subset of all packets of the meta-flow are buffered.09-09-2010
20110058569NETWORK ON CHIP INPUT/OUTPUT NODES - The present invention relates to a torus network comprising a matrix of infrastructure routers, each of which is connected to two other routers belonging to the same row and to two other routers belonging to the same column; and input/output routers, each of which is connected by two internal inputs to two other routers belonging either to the same row, or to the same column, and comprising an external input for supplying the network with data. Each input/output router is devoid of queues for its internal inputs and comprises queues assigned to its external input managed by an arbiter which is configured to also manage the queues of an infrastructure router connected to the input/output router.03-10-2011
20110058568PACKET-BASED COMMUNICATION SYSTEM AND METHOD - A system and method for facilitating communication of packets between one or more applications residing on a first computing device and at least one second computing device. The system comprises a connection manager adapted to receive packets from the at least one second computing device, and a packet cache for storing packets received by the connection manager. The connection manager, upon receiving a packet from a second computing device, transmits the packet to the packet cache for storage and notifies each of the applications of receipt of the packet. Subsequently, the packet is retrievable from the packet cache by a notified application, and verification that the packet is intended for communication to the notified application is made.03-10-2011
20100232447QUALITY OF SERVICE QUEUE MANAGEMENT IN HOME MESH NETWORK - An embodiment is a technique to perform queue management. A packet is received from an upper layer or a classifier in a multi-hop mesh network. The packet has a packet type classified by the classifier. The received packet is enqueued into one of a plurality of buffers organized according to the packet type.09-16-2010
20100238947DATA TRANSFER SYSTEM AND METHOD - A transmission source bridge collects packets sent from nodes connected to a serial bus in accordance with the IEEE1394 Standards into one packet, in the order they are to be transmitted, and then sends it onto an ATM network; a transmission destination bridge receives this packet, divides it into a plurality of smaller packets, and transfers them, in the order they were sent, to nodes connected to the serial bus in accordance with the IEEE1394 Standards.09-23-2010
20100238946APPARATUS FOR PROCESSING PACKETS AND SYSTEM FOR USING THE SAME - An apparatus processes a packet and classifies the packet as a processed fast path packet or a slow path packet, wherein the processed fast path packet is forwarded to a fast path forwarding queue directly or is forwarded to a fast path output queue through a packet direct memory access controller. The apparatus not only improves the packet processing performance but also guarantees the quality of service.09-23-2010
20100220742SYSTEM AND METHOD FOR ROUTER QUEUE AND CONGESTION MANAGEMENT - In a multi-QOS level queuing structure, packet payload pointers are stored in multiple queues and packet payloads in a common memory pool. Algorithms control the drop probability of packets entering the queuing structure. Instantaneous drop probabilities are obtained by comparing measured instantaneous queue size with calculated minimum and maximum queue sizes. Non-utilized common memory space is allocated simultaneously to all queues. Time averaged drop probabilities follow a traditional Weighted Random Early Discard mechanism. Algorithms are adapted to a multi-level QOS structure, floating point format, and hardware implementation. Packet flow from a router egress queuing structure into a single egress port tributary is controlled by an arbitration algorithm using a rate metering mechanism. The queuing structure is replicated for each egress tributary in the router system.09-02-2010
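The instantaneous drop decision above follows the classic Weighted Random Early Discard shape: never drop below the minimum queue size, always drop above the maximum, and ramp linearly in between. A minimal sketch (parameter names are assumptions; the patent's floating-point format and multi-QOS adaptations are omitted):

```python
def drop_probability(qsize, min_th, max_th, max_p=1.0):
    # Below the minimum threshold: never drop; above the maximum:
    # always drop; in between: linear ramp up to max_p (WRED shape).
    if qsize <= min_th:
        return 0.0
    if qsize >= max_th:
        return 1.0
    return max_p * (qsize - min_th) / (max_th - min_th)
```

A packet would then be dropped with this probability on arrival, with the time-averaged variant substituting an averaged queue size for the instantaneous one.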
20080273545Channel service manager with priority queuing - A system and method are provided for prioritizing network processor information flow in a channel service manager (CSM). The method receives a plurality of information streams on a plurality of input channels, and selectively links input channels to CSM channels. The information streams are stored, and the stored information streams are mapped to a processor queue in a group of processor queues. Information streams are supplied from the group of processor queues to a network processor in an order responsive to a ranking of the processor queues inside the group. More explicitly, selectively linking input channels to CSM channels includes creating a fixed linkage between each input port and an arbiter in a group of arbiters, and scheduling information streams in response to the ranking of the arbiter inside the group. Finally, a CSM channel is selected for each information stream scheduled by an arbiter.11-06-2008
20100246590DYNAMIC ASSIGNMENT OF DATA TO SWITCH-INGRESS BUFFERS - Embodiments of a system that includes a switch and a buffer-management technique for storing signals in the system are described. In this system, data cells are dynamically assigned from a host buffer to at least a subset of switch-ingress buffers in the switch based at least in part on the occupancy of the switch-ingress buffers. This buffer-management technique may reduce the number of switch-ingress buffers relative to the number of input and output ports to the switch, which in turn may overcome the limitations posed by the amount of memory available on chips, thereby facilitating large switches.09-30-2010
20080267203DYNAMIC MEMORY QUEUE DEPTH ALGORITHM - A method of modifying a priority queue configuration of a network switch is described. The method comprises configuring a priority queue configuration, monitoring a network parameter, and adjusting the priority queue configuration based on the monitored network parameter.10-30-2008
20090161684System and Method for Dynamically Allocating Buffers Based on Priority Levels - Methods and systems consistent with the present invention provide dynamic buffer allocation to a plurality of queues of differing priority levels. Each queue is allocated fixed minimum number of buffers that will not be de-allocated during buffer reassignment. The rest of the buffers are intelligently and dynamically assigned to each queue depending on their current need. The system then monitors and learns the incoming traffic pattern and resulting drops in each queue due to traffic bursts. Based on this information, the system readjusts allocation of buffers to each traffic class. If a higher priority queue does not need the buffers, it gradually relinquishes them. These buffers are then assigned to other queues based on the input traffic pattern and resultant drops. These buffers are aggressively reclaimed and reassigned to higher priority queues when needed. In this way, methods and systems consistent with the present invention dynamically balance requirements of the higher priority queues versus optimal allocation.06-25-2009
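The reallocation idea above, a fixed never-reclaimed minimum per queue plus a demand-driven split of the remainder, can be sketched as below. The proportional-to-drops policy is an assumption standing in for the patent's traffic-learning mechanism.

```python
def reallocate(total, minimums, demand):
    # Each queue keeps its fixed minimum allocation; the spare buffers
    # are split in proportion to observed demand (e.g. recent drops).
    alloc = list(minimums)
    spare = total - sum(minimums)
    total_demand = sum(demand)
    if total_demand > 0:
        for i in range(len(minimums)):
            alloc[i] += spare * demand[i] // total_demand
    return alloc
```

With 100 buffers, minimums of 10 each, and drops only in the last two queues, the idle queue keeps just its minimum while the busy queues share the spare capacity.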
20130128895FRAME TRANSMISSION AND COMMUNICATION NETWORK - Exemplary embodiments are directed to a communication network interconnecting a plurality of synchronized nodes, where regular frames including time-critical data are transmitted periodically or cyclically, and sporadic frames are transmitted non-periodically or occasionally. For example, each node can transmit a regular frame at the beginning of a transmission period common to, and synchronized among, all nodes. Another node then receives regular frames from its first neighboring node, and forwards the frames within the same transmission period and with the shortest delay, to a second neighboring node. Furthermore, each node actively delays transmission of any sporadic frame, whether originating from an application hosted by the node itself or whether received from a neighboring node, until forwarding of all received regular frames is completed.05-23-2013
20080291934Variable Dynamic Throttling of Network Traffic for Intrusion Prevention - Methods, apparatus, and computer program products for variable dynamic throttling of network traffic for intrusion prevention are disclosed that include initializing, as throttling parameters, a predefined time interval, a packet count, a packet count threshold, a throttle rate, a keepers count, and a discards count; starting a timer, the timer remaining on no longer than the predefined time interval; maintaining, while the timer is on, statistics including the packet count, the keepers count, and the discards count; for each data communications packet received by the network host, determining, in dependence upon the statistics and the throttle rate, whether to discard the packet and determining whether the packet count exceeds the packet count threshold; and if the packet count exceeds the packet count threshold: resetting the statistics, incrementing the throttle rate, and restarting the timer.11-27-2008
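The throttling loop described above, maintaining packet/keeper/discard counts per timer interval and raising the throttle rate when the count threshold is exceeded, might be sketched like this. The drop-N-of-every-10 policy and all names are illustrative assumptions, and the real timer is replaced by the interval restart on threshold crossing.

```python
class Throttle:
    """Sketch of variable dynamic throttling (not the patented scheme)."""

    def __init__(self, threshold, rate=0):
        self.threshold = threshold   # packet-count threshold per interval
        self.rate = rate             # drop `rate` out of every 10 packets
        self.count = self.keepers = self.discards = 0

    def on_packet(self):
        self.count += 1
        drop = (self.count % 10) < self.rate
        if drop:
            self.discards += 1
        else:
            self.keepers += 1
        if self.count > self.threshold:
            # Interval overloaded: reset statistics, throttle harder,
            # and restart the (simulated) timer interval.
            self.count = self.keepers = self.discards = 0
            self.rate = min(10, self.rate + 1)
        return drop
```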
20090034548Hardware Queue Management with Distributed Linking Information - A network element including a processor with logic for managing packet queues by way of packet descriptor index values that are mapped to addresses in the memory space of the packet descriptors. A linking memory is implemented in the same integrated circuit as the processor, and has entries corresponding to the descriptor index values. Each entry can store the next descriptor index in a packet queue, to form a linked list of packet descriptors. Queue manager logic receives push and pop requests from host applications, and updates the linking memory to maintain the queue. The queue manager logic also maintains a queue control register for each queue, including head and tail descriptor index values.02-05-2009
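The linking-memory structure above, one next-index slot per descriptor plus head/tail registers per queue, is a standard way to build many linked-list queues over a fixed descriptor pool. A software model of the idea (an assumption-level sketch, not the patented hardware):

```python
class QueueManager:
    """Queues as linked lists over a shared linking memory:
    link[i] holds the index of the descriptor that follows i."""
    NIL = -1

    def __init__(self, num_descriptors):
        self.link = [self.NIL] * num_descriptors  # on-chip linking memory
        self.head = {}   # per-queue head descriptor index
        self.tail = {}   # per-queue tail descriptor index

    def push(self, q, idx):
        # Append descriptor `idx` to the tail of queue `q`.
        self.link[idx] = self.NIL
        if self.head.get(q, self.NIL) == self.NIL:
            self.head[q] = idx
        else:
            self.link[self.tail[q]] = idx
        self.tail[q] = idx

    def pop(self, q):
        # Remove and return the head descriptor index, or NIL if empty.
        idx = self.head.get(q, self.NIL)
        if idx != self.NIL:
            self.head[q] = self.link[idx]
        return idx
```

Because the linking memory is indexed by descriptor number, no per-queue storage is needed beyond the head and tail registers.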
20080285578CONTENT-BASED ROUTING OF INFORMATION CONTENT - A system to route media information content may include a router that analyzes predetermined content of a plurality of data packets of the media information content and prioritizes forwarding the plurality of data packets from the router based on applying at least one rule to the predetermined content.11-20-2008
20080279208SYSTEM AND METHOD FOR BUFFERING DATA RECEIVED FROM A NETWORK - A system for buffering data received from a network comprises a network socket, a plurality of buffers, a buffer pointer pool, receive logic, and packet delivery logic. The buffer pointer pool has a plurality of entries respectively pointing to the buffers. The receive logic is configured to pull an entry from the pool and to perform a bulk read of the network socket. The entry points to one of the buffers, and the receive logic is further configured to store data from the bulk read to the one buffer based on the entry. The packet delivery logic is configured to read, based on the entry, the one buffer and to locate a missing packet sequence in response to a determination, by the packet delivery logic, that the one buffer is storing an incomplete packet sequence. The packet delivery logic is further configured to form a complete packet sequence based on the incomplete packet sequence and the missing packet sequence.11-13-2008
20090109988Video Decoder with an Adjustable Video Clock - A method, an apparatus, and logic encoded in a computer-readable medium to carry out a method. The method includes receiving packets containing compressed video information, storing the received packets in a buffer memory, timestamping the received packets according to an adjustable clock; and removing packets from the buffer for decoding and playout of the video information, the removing according to playback order and at a time determined by the adjustable clock. The method includes adjusting the adjustable clock from time to time according to a measure of the amount of time that the packets reside in the buffer memory, such that time latency caused by the buffer memory is limited. An overrun or an underrun of the buffer memory is unlikely.04-30-2009
20100278189Methods and Apparatus for Providing Dynamic Data Flow Queues - A network system and method capable of creating separate output queues on demand to improve overall network routing performance are disclosed. The network system, in one embodiment, includes a classifier, an egress queuing device and a processor. The classifier provides a result of classification for an incoming data flow in accordance with a set of predefined application policies. The egress queuing device is an egress per flow queue (“PFQ”) wherein a separately dedicated queue can be dynamically allocated within the egress PFQ in accordance with the result of classification. The processor is configured to establish a temporary circuit connection between the classifier and the egress queuing device for facilitating routing process.11-04-2010
20100296518Single DMA Transfers from Device Drivers to Network Adapters - Methods and arrangements of data communications are discussed. Embodiments include transformations, code, state machines or other logic to provide data communications. An embodiment may involve receiving from a protocol stack a request for a buffer to hold data. The data may consist of all or part of a payload of a packet. The embodiment may also involve allocating space in a buffer for the data and for a header of a packet. The protocol stack may store the data in a portion of the buffer and hand down the buffer to a network device driver. The embodiment may also involve the network device driver transferring the entire packet from the buffer to a communications adapter in a single direct memory access (DMA) operation.11-25-2010
20100322264METHOD AND APPARATUS FOR MESSAGE ROUTING TO SERVICES - An approach is provided for message routing to services. A publish request associated with a service is received from a user equipment. A query is generated to determine a plurality of locations of the service. Each location corresponds respectively to a plurality of clusters. Transmission of the query is initiated to a home locator. The locations from the home locator are received. One of the locations is selected. Transmission of the publish request to the selected location is initiated.12-23-2010
20100246591ENABLING LONG-TERM COMMUNICATION IDLENESS FOR ENERGY EFFICIENCY - A network adapter comprises a controller to change to a first mode from a second mode based on a number of transmit packets, sizes of received packets, and intervals between arrivals of the received packets. In one embodiment, the network controller further comprises a memory to buffer received packets, where the received packets are buffered for a longer period in the first mode than in the second mode.09-30-2010
20110019685METHOD AND SYSTEM FOR PACKET PREEMPTION FOR LOW LATENCY - Latency requirements for Ethernet link partners, comprising PHY devices and memory buffers, may be determined for packets pending transmission. Transmission may be interrupted for a first packet having greater latency than a second packet, and the second packet may be transmitted. The second packet may be interrupted for transmission of a third or more packets. Packets are inspected for marks and/or for OSI layer …01-27-2011
20090141731BANDWIDTH ADMISSION CONTROL ON LINK AGGREGATION GROUPS - A device may receive a bandwidth (B) available on each link of a link aggregation group (LAG) that includes a number (N) of links, assign a primary LAG link and a redundant LAG link to a virtual local area network (VLAN), set an available bandwidth for primary link booking to (B−B/N), and set an available bandwidth for redundant link booking to (B/N).06-04-2009
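The admission-control arithmetic in the entry above is simple enough to state directly: with N links of bandwidth B each, primary booking on a link is capped at B − B/N, leaving B/N of headroom for traffic failed over from another link, whose redundant booking is capped at B/N.

```python
def lag_booking_limits(B, N):
    # Primary-link booking leaves B/N headroom to absorb a failed
    # link's redundant traffic; redundant booking is capped at B/N.
    primary_limit = B - B / N
    redundant_limit = B / N
    return primary_limit, redundant_limit
```

For a five-link LAG of 10 Gb/s links, a VLAN's primary link may be booked to 8 Gb/s and its redundant link to 2 Gb/s.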
20110026539Forwarding Cells of Partitioned Data Through a Three-Stage Clos-Network Packet Switch with Memory at Each Stage - Examples are disclosed for forwarding cells of partitioned data through a three-stage memory-memory-memory (MMM) input-queued Clos-network (IQC) packet switch. In some examples, each module of the three-stage MMM IQC packet switch includes a virtual queue and a manager that are configured in cooperation with one another to forward a cell from among cells of partitioned data through at least a portion of the switch. The cells of partitioned data may have been partitioned and stored at an input port for the switch and have a destination of an output port for the switch.02-03-2011
20090034551SYSTEM AND METHOD FOR RECEIVE QUEUE PROVISIONING - Systems and methods that provide receive queue provisioning are provided. In one embodiment, a communications system may include, for example, a first queue pair (QP), a second QP, a general pool and a resource manager. The first QP may be associated with a first connection and with at least one of a first limit value and an out-of-order threshold. The first QP may include, for example, a first send queue (SQ). The second QP may be associated with a second connection and with a second limit value. The second QP may include, for example, a second SQ. The general pool may include, for example, a shared receive queue (SRQ) that is shared by the first QP and the second QP. The resource manager may provide, for example, provisioning for the SRQ and may manage the first limit value and the second limit value.02-05-2009
20100172363SYSTEMS AND METHODS FOR CONGESTION CONTROL USING RANDOM EARLY DROP AT HEAD OF BUFFER - A system selectively drops data from a queue. The system includes queues that temporarily store data, a dequeue engine that dequeues data from the queues, and a drop engine that operates independently from the dequeue engine. The drop engine selects one of the queues to examine, determines whether to drop data from a head of the examined queue, and marks the data based on a result of the determination.07-08-2010
20110116511Directly Providing Data Messages To A Protocol Layer - In one embodiment, the present invention provides for a layered communication protocol for a serial link, in which a link layer is to receive and forward a message to a protocol layer coupled to the link layer with a minimal amount of buffering and without maintenance of a single resource buffer for adaptive credit pools where all message classes are able to consume credits. By performing a message decode, the link layer is able to steer non-data messages and data messages to separate structures within the protocol layer. Credit accounting for each message type can be handled independently where the link layer is able to return credits immediately for non-data messages. In turn, the protocol layer includes a shared buffer to store all data messages received from the link layer and return credits to the link layer for these messages when the data is removed from the shared buffer. Other embodiments are described and claimed.05-19-2011
20110243150Facilitating Communication Of Routing Information - In certain embodiments, facilitating communication of routing information includes receiving, at a shim, incoming messages communicating routing information from a first protocol point of one or more protocol points operating according to a routing protocol. The shim belongs to an internal region separate from an external region, and a transport layer is disposed between the shim and the protocol points. The incoming messages are processed and sent to siblings that belong to the internal region. Each sibling implements a state machine for the routing protocol. Outgoing messages are received from a first sibling. The outgoing messages are processed and sent to a second protocol point of the one or more protocol points.10-06-2011
20090323710METHOD FOR PROCESSING INFORMATION FRAGMENTS AND A DEVICE HAVING INFORMATION FRAGMENT PROCESSING CAPABILITIES - A device and method for processing information fragments, the method includes: receiving multiple information fragments from multiple communication paths; wherein each information fragment is associated with a cyclic serial number indicative of a generation time of the information fragment; storing the multiple information fragments in multiple input queues, each input queue being associated with a communication path out of the multiple communication paths; determining whether at least one serial number associated with at least one valid information fragment positioned at a head of one of the multiple input queues is located within a pre-rollout serial number range; mapping, in response to the determination, serial numbers associated with each of the valid information fragments positioned at the heads of the multiple input queues to at least one serial number range that differs from the pre-rollout serial number range; and sending to an output queue information fragment metadata associated with a minimal-valued serial number out of the serial numbers associated with each of the valid information fragments positioned at the heads of the multiple input queues.12-31-2009
20120134370ASYNCHRONOUS COMMUNICATION IN AN UNSTABLE NETWORK - Embodiments are directed to promptly reestablishing communication between nodes in a dynamic computer network and dynamically maintaining an address list in an unstable network. A computer system sends a message to other message queuing nodes in a network, where each node in the message queuing network includes a corresponding persistent unique global identifier. The computer system maintains a list of unique global identifiers and the current network addresses of those network nodes from which the message queuing node has received a message or to which the message queuing node has sent a message. The computer system goes offline for a period of time and upon coming back online, sends an announcement message to each node maintained in the list indicating that the message queuing node is ready for communication in the message queuing network, where each message includes the destination node's globally unique identifier and the node's current network address.05-31-2012
20110085567METHOD OF DATA DELIVERY ACROSS A NETWORK - The present invention relates to a method of sorting data packets in a multi-path network having a plurality of ports; a plurality of network links; and a plurality of network elements, each network element having at least first and second separately addressable buffers in communication with a network link and the network links interconnecting the network elements and connecting the network elements to the ports, the method comprising: sorting data packets with respect to their egress port or ports such that at a network element a first set of data packets intended for the same egress port are queued in said first buffer and at least one other data packet intended for an egress port other than the egress port of the first set of data packets is queued separately in said second buffer whereby said at least one other data packet is separated from any congestion associated with the first set of data packets. The present invention further relates to a method of data delivery in a multi-path network comprising the sorting of data packets in accordance with a first aspect of the present invention. The present invention further relates to a multi-path network operable to sort data packets and operable to deliver data in a multi-path network.04-14-2011
20110085566METHOD FOR COMMUNICATING IN A NETWORK AND RADIO STATIONS ASSOCIATED - The present invention relates to a method for communicating in a network comprising a primary station and at least one secondary station, said secondary station comprising a buffer containing data packets to be transmitted to the primary station, the method comprising the step of the secondary station transmitting an indication of the buffer status to the primary station, said indication comprising information about history of said buffer.04-14-2011
20100014539Packet Relay Device And Queue Scheduling Method - Each of the plurality of queues stores packet data of a received packet. The read concession assignor assigns one of the plurality of queues with a read concession for a predefined time period. The overdraft storage stores an overdraft amount in connection with each of the plurality of queues. The read adequacy determiner determines, in accordance with an overdraft amount stored in connection with one queue out of the plurality of queues, whether to read packet data from the one queue. The overdraft updater updates at least one of a first overdraft amount stored in connection with a first queue and a second overdraft amount stored in connection with a second queue different from the first queue upon reading packet data from the first queue during a time period while the second queue is assigned with the read concession.01-21-2010
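The overdraft bookkeeping above might look like the following sketch: reading from a queue that does not currently hold the read concession charges that queue's overdraft and credits the concession holder, and a queue whose overdraft is too large is denied further reads. All names and the settlement rule are assumptions.

```python
class OverdraftScheduler:
    """Sketch of overdraft-based read arbitration between queues."""

    def __init__(self, num_queues, limit):
        self.overdraft = [0] * num_queues
        self.limit = limit   # maximum tolerated overdraft per queue

    def may_read(self, q):
        # Read adequacy: a queue may be read while under its limit.
        return self.overdraft[q] < self.limit

    def read(self, q, size, concession_holder):
        # Reading outside one's concession period charges the reader
        # and credits the queue holding the concession.
        if q != concession_holder:
            self.overdraft[q] += size
            self.overdraft[concession_holder] -= size
```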
20100054269METHOD FOR TRANSFERRING DATA PACKETS TO A SHARED RESOURCE, AND RELATED DEVICE AND COMPUTER SOFTWARE - The invention relates to a method for transferring data packets to a shared resource (…).03-04-2010
20100061390METHODS AND APPARATUS FOR DEFINING A FLOW CONTROL SIGNAL RELATED TO A TRANSMIT QUEUE - In one embodiment, a processor-readable medium can store code representing instructions that when executed by a processor cause the processor to receive a value representing a congestion level of a receive queue and a value representing a state of a transmit queue. At least a portion of the transmit queue can be defined by a plurality of packets addressed to the receive queue. A rate value for the transmit queue can be defined based on the value representing the congestion level of the receive queue and the value representing the state of the transmit queue. The processor-readable medium can store code representing instructions that when executed by the processor cause the processor to define a suspension time value for the transmit queue based on the value representing the congestion level of the receive queue and the value representing the state of the transmit queue.03-11-2010
20100054268Method of Tracking Arrival Order of Packets into Plural Queues - In PCI-Express and alike communications systems, it is often desirable to keep track of order of arrival into different queues of packets that will later compete for servicing by a downstream resource of limited bandwidth. Use of time stamping to determine order of arrival can be a problem because time of arrival between different packets entering respective ones of plural queues can vary greatly and thus the number of bits consumed for accurately time stamping each packet can become significant. Disclosed are systems and methods for tracking the arrival orders of packets into plural queues by means of travel-along dynamic counts rather than by means of high precision time stamps. A machine system that keeps track of relative arrival orders of data blocks in different ones of plural queues comprises a first count associater that associates with a first data block in a first of the plural queues, a first count of how many earlier arrived and still pending data blocks await in a second of the plural queues; and a count updater that updates the first count in response to one or more of said earlier arrived data blocks departing from the second queue.03-04-2010
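The travel-along count idea above can be modeled for two queues as follows: each arriving block records how many still-pending blocks sit in the other queue (all of which arrived earlier), and every departure from that queue decrements the positive counts. The head with a positive count in front of it is then the younger one. The class and method names are assumptions.

```python
from collections import deque

class OrderTracker:
    """Arrival-order tracking via counts instead of timestamps."""

    def __init__(self):
        # Each entry is [payload, count-of-earlier-pending-in-other-queue].
        self.q = [deque(), deque()]

    def arrive(self, which, payload):
        other = 1 - which
        self.q[which].append([payload, len(self.q[other])])

    def depart(self, which):
        payload, _ = self.q[which].popleft()
        # One earlier arrival is gone: shrink the other queue's counts.
        for entry in self.q[1 - which]:
            if entry[1] > 0:
                entry[1] -= 1
        return payload

    def older_queue(self):
        # Index of the queue whose head arrived first.
        if not self.q[0]:
            return 1
        if not self.q[1]:
            return 0
        # If queue 1's head still counts pending earlier arrivals in
        # queue 0, then queue 0's head is older.
        return 0 if self.q[1][0][1] > 0 else 1
```

Serving `older_queue()` first reproduces global arrival order without any per-packet timestamp.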
20100061389METHODS AND APPARATUS RELATED TO VIRTUALIZATION OF DATA CENTER RESOURCES - In one embodiment, an apparatus includes a switch core that has a multi-stage switch fabric. A first set of peripheral processing devices coupled to the multi-stage switch fabric by a set of connections that have a protocol. Each peripheral processing device from the first set of peripheral processing devices is a storage node that has virtualized resources. The virtualized resources of the first set of peripheral processing devices collectively define a virtual storage resource interconnected by the switch core. A second set of peripheral processing devices coupled to the multi-stage switch fabric by a set of connections that have the protocol. Each peripheral processing device from the second set of peripheral processing devices is a compute node that has virtualized resources. The virtualized resources of the second set of peripheral processing devices collectively define a virtual compute resource interconnected by the switch core.03-11-2010
20110176553SYSTEM AND METHOD FOR SEAMLESS SWITCHING THROUGH BUFFERING - A method of preparing data streams to facilitate seamless switching between such streams by a switching device to produce an output data stream without any switching artifacts. Bi-directional switching between any plurality of data streams is supported. The data streams are divided into segments, wherein the segments include synchronized starting points and end points. The data rate is increased before an end point of a segment, to create switch gaps between the segments. Increasing the data rate can include increasing a bandwidth of the plurality of data streams, for example by multiplexing, or compressing the data.07-21-2011
20100135312Method for Scoring Queued Frames for Selective Transmission Through a Switch - A method includes determining a priority of each of a plurality of frames, wherein the priority is a function of an initial value dependent on content of each said frame and one or more adjustment values independent of content of each said frame, and selecting the frame with the highest determined priority for transmission through the device prior to transmission of any other of the frames. A system includes a receiving port configured to receive frames and assign an initial priority to each frame, a queue configured to insert queue entries associated with received frames on the queue, each queue entry being inserted at a queue position based on the initial priority assigned to the queue entry, the queue further configured to reorder queue entries based on readjusted priorities of the queue entries; and a transmitter switch configured to transmit the frame having the highest priority before transmitting any other frame.06-03-2010
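The scoring rule above, a content-based initial value plus content-independent adjustments, can be sketched with queue age as the adjustment. The weight and the tuple layout are assumptions for illustration.

```python
def select_frame(frames, age_weight=1):
    """frames: list of (initial_priority, age) pairs.
    Returns the index of the frame with the highest adjusted score."""
    best_index, best_score = 0, None
    for i, (initial, age) in enumerate(frames):
        # Effective priority = content-based value + age adjustment.
        score = initial + age_weight * age
        if best_score is None or score > best_score:
            best_index, best_score = i, score
    return best_index
```

With frames scored (5, 0), (3, 4) and (5, 1), the long-waiting middle frame wins despite its lower initial priority.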
20110149988COMMUNICATION CONTROLLER - A switch includes: a packet buffer including a plurality of segments for temporarily storing a packet to be relayed, using a segment of a fixed size as a unit of management of a storage area; an input processor configured to receive a packet to be relayed from an external source, refer to an offset, indicating a location in a segment where a free area starts, store a first packet, starting at the location in the segment indicated by the offset, update the location of start indicated by the offset in accordance with the packet size, and store a second packet, starting at the location of start in the segment thus updated; and an output processor configured to read the first and second packets and send the packets to a communication node.06-23-2011
20090028171FIFO BUFFER WITH ADAPTIVE THRESHOLD LEVEL - A system comprising a FIFO data buffer having a programmable threshold level, which is initially set to a worst case scenario level, so that the FIFO data buffer does not empty of data. The system also comprises a hardware device which is configured to adjust the threshold level in the FIFO data buffer to a level equal to the current threshold level minus the amount of remaining data in the FIFO data buffer at the time new data enters the FIFO data buffer. The hardware device is also configured to adjust the threshold level to the initial threshold level if the FIFO data buffer underflows. The hardware device may be coupled to the FIFO data buffer, implemented in the FIFO data buffer, or implemented in the display subsystem. The system may be implemented in a mobile communications device.01-29-2009
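The threshold adjustment above reduces to two rules: on each refill, lower the threshold by the data still remaining (the buffer was refilled earlier than necessary by exactly that amount), and on underflow, snap back to the conservative initial level. A sketch under those stated assumptions:

```python
class AdaptiveFifoThreshold:
    """FIFO refill threshold that adapts toward the latest safe level."""

    def __init__(self, initial_threshold):
        self.initial = initial_threshold   # worst-case (safe) level
        self.threshold = initial_threshold

    def on_refill(self, remaining):
        if remaining == 0:
            # Underflow occurred: fall back to the safe initial level.
            self.threshold = self.initial
        else:
            # New threshold = current threshold minus data still present
            # when the new data arrived.
            self.threshold = max(0, self.threshold - remaining)
```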
20090213865Techniques for channel access and transmit queue selection - Various embodiments are disclosed for techniques to perform channel access decisions and to select a transmit queue. These decisions may be performed, for example, based upon the age and number of packets in a queue. These techniques may allow a node to improve the length of data bursts transmitted by the node, although the invention is not limited thereto.08-27-2009
20090213864INBOUND BLOCKING OF DATA RECEIVED FROM A LAN IN A MULTI-PROCESSOR VIRTUALIZATION ENVIRONMENT - An incoming LAN traffic management system comprising: an I/O adapter configured to receive incoming packets from an Ethernet; a plurality of hosts coupled to the I/O adapter and each having a host buffer; a data router configured to block information received by the I/O adapter into memory locations from an SBAL associated with at least one of the plurality of hosts and in accordance with blocking parameters for the at least one of the plurality of hosts, the data router including an expiration engine configured to expire the SBAL before it is full if at least one predetermined threshold is exceeded.08-27-2009
20090213863ROUTER, NETWORK COMPRISING A ROUTER, METHOD FOR ROUTING DATA IN A NETWORK - A router for a network is arranged for guiding data traffic from one of a first plurality Ni of inputs (I) to one or more of a second plurality No of outputs (O). The inputs each have a third plurality m of input queues for buffering data. The third plurality m is greater than 1, but less than the second plurality No. The router comprises a first selection facility for writing data received at an input to a selected input queue of said input, and a second selection facility for providing data from an input queue to a selected output. Pairs of packets having mutually different destinations Oj and Ok are arranged in the same queue for a total number of Nj,k inputs characterized in that Nj,k08-27-2009
20100124234Method for scheduling packets of a plurality of flows and system for carrying out the method - The invention concerns a method for scheduling packets belonging to a plurality of flows received at a router. A system for carrying out the method is also provided. According to the invention, a single packet queue is used for storing said packets, said single packet queue being adapted to be divided into a variable number of successive sections which are created and updated dynamically as a function of each received packet, each section being of variable size and a section load threshold for each flow of said plurality of flows being allocated to each section. The method further comprises insertion (S05-20-2010
20110069716METHOD AND APPARATUS FOR QUEUING VARIABLE SIZE DATA PACKETS IN A COMMUNICATION SYSTEM - Variable size data packets are queued in a communication system by generating from each data packet a record portion of predetermined fixed size containing information about each packet and storing only data portions of the packets in independent memory locations in a first memory. The record portions are only stored in one or more managed queues in a second memory having fixed size memory locations equal in size to the size of the record portions. The first memory is larger than the second memory; and the memory locations in the first memory are arranged in blocks having a plurality of different sizes. The memory locations are allocated to the data portions according to the size of the data portions.03-24-2011
20120134371QUEUE SCHEDULING METHOD AND APPARATUS - A queue scheduling method and apparatus are disclosed in the embodiments of the present invention. The method comprises: indexing one or more queues using a first circular linked list; accessing each queue via the front pointer of the first circular linked list, and treating the value obtained by subtracting the size of the unit to be scheduled at the head of a queue from that queue's weight middle value as the queue's residual weight middle value; when the weight middle value of a queue in the first circular linked list is less than the unit to be scheduled at its head, deleting the queue from the first circular linked list and updating its weight middle value with the sum of a set weight value and the queue's residual weight middle value; and linking the queue deleted from the first circular linked list into a second circular linked list. The present invention enables the scheduling to support any number of queues, and supports expanding the number of queues without changing the hardware implementation logic core.05-31-2012
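The scheme resembles deficit round robin. A rough sketch of one pass over the first list (function and variable names are my own; the abstract's "weight middle value" is `middle` here):

```python
from collections import deque

def schedule_round(queues, set_weights, middle):
    """One pass of the first-list scheduling: the head unit's size is
    subtracted from each queue's weight middle value; a queue whose middle
    value no longer covers its head unit is replenished with its set weight
    and moved to a second list."""
    served, second_list = [], []
    first_list = deque(range(len(queues)))
    while first_list:
        i = first_list.popleft()
        q = queues[i]
        while q and middle[i] >= q[0]:
            middle[i] -= q[0]  # residual weight middle value
            served.append((i, q.popleft()))
        if q:  # middle value < head unit: defer to the second list
            middle[i] += set_weights[i]  # set weight + residual
            second_list.append(i)
    return served, second_list
```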
20110149989INSTRUCTION SET FOR PROGRAMMABLE QUEUING - A traffic manager includes an execution unit that is responsive to instructions related to queuing of data in memory. The instructions may be provided by a network processor that is programmed to generate such instructions, depending on the data. Examples of such instructions include (1) writing of data units (of fixed size or variable size) without linking to a queue, (2) re-sequencing of the data units relative to one another without moving the data units in memory, and (3) linking the previously-written data units to a queue. The network processor and traffic manager may be implemented in a single chip.06-23-2011
20100309928ASYNCHRONOUS COMMUNICATION IN AN UNSTABLE NETWORK - Embodiments are directed to promptly reestablishing communication between nodes in a dynamic computer network and dynamically maintaining an address list in an unstable network. A computer system sends a message to other message queuing nodes in a network, where each node in the message queuing network includes a corresponding persistent unique global identifier. The computer system maintains a list of unique global identifiers and the current network addresses of those network nodes from which the message queuing node has received a message or to which the message queuing node has sent a message. The computer system goes offline for a period of time and upon coming back online, sends an announcement message to each node maintained in the list indicating that the message queuing node is ready for communication in the message queuing network, where each message includes the destination node's globally unique identifier and the node's current network address.12-09-2010
20080253387Method and apparatus for improving SIP server performance - A method and apparatus for improving SIP server performance is disclosed. The apparatus comprises an enqueuer, which determines whether a request packet entering the server is a new request or a retransmitted request (and, if retransmitted, how many times) and enqueues the request packet into different queues based on the results of that determination, and a dequeuer, which dequeues the packets in the queues for processing based on a scheduling policy. The apparatus may further include a policy controller that communicates with the server, enqueuer, dequeuer, queues and user to dynamically and automatically set (or set based on the user's instructions) the scheduling policy, the number of different queues, each queue's capacity, scheduling, etc., based on the network and/or server load and/or on different server applications.10-16-2008
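One plausible reading of the enqueuer (a sketch under my own assumptions, not the patent's actual queue mapping) routes requests by retransmission count, so the dequeuer's policy can favor one class over another:

```python
def enqueue_request(packet, queues):
    """Hypothetical sketch: a new SIP request goes to queue 0, and the n-th
    retransmission goes to queue n, capped at the last available queue."""
    n = min(packet.get("retransmissions", 0), len(queues) - 1)
    queues[n].append(packet)
    return n
```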
20080205422Method And Structure To Support System Resource Access Of A Serial Device Implementing A Lite-Weight Protocol - On-chip resources of a serial buffer are accessed using priority packets of a Lite-weight protocol. A priority packet path is provided on the serial buffer to support priority packets. Normal data packets are processed on a normal data packet path, which operates in parallel with the priority packet path. The system resources of the serial buffer can be accessed in response to the priority packets, without blocking the flow of normal data packets. Thus, normal data packets may flow through the serial buffer with the maximum bandwidth supported by the serial interface. The Lite-weight protocol also supports read accesses to queues of the serial buffer (which reside on the normal data packet path). The Lite-weight protocol also supports doorbell commands for status/error reporting.08-28-2008
20100329275Multiple Processes Sharing a Single Infiniband Connection - A compute node with multiple transfer processes that share an Infiniband connection to send and receive messages across a network. Transfer processes are first associated with an Infiniband queue pair (QP) connection. Then send message commands associated with a transfer process are issued. This causes an Infiniband message to be generated and sent, via the QP connection, to a remote compute node corresponding to the QP. Send message commands associated with another process are also issued. This causes another Infiniband message to be generated and sent, via the same QP connection, to the same remote compute node. As mentioned, multiple processes may receive network messages received via a shared QP connection. A transfer process on a receiving compute node receives a network message through a QP connection using a receive queue. A second transfer process receives another message through the same QP connection using another receive queue.12-30-2010
20110134933CLASSES OF SERVICE FOR NETWORK ON CHIPS - A method includes a local switch receiving a first set of upstream packets and a first set of local packets, each assigned a first class of service. The local switch inserts, according to a first insertion rate, a local packet between subsets of the first set of upstream packets to obtain an ordered set of first class packets. The local switch also receives a second set of upstream packets and a second set of local packets, each assigned a second class. The local switch inserts, according to a second insertion rate, a local packet between subsets of the second set of upstream packets to obtain an ordered set of second class packets. The method includes for each timeslot, selecting a class, and forwarding a packet from the selected class of service to a downstream switch. The switches are interconnected in a daisy chain topology on a single chip.06-09-2011
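The insertion step of this abstract can be sketched as follows (a simplified reading where `rate` means "one local packet after every `rate` upstream packets"; the names are mine):

```python
def merge_with_insertion_rate(upstream, local, rate):
    """Sketch of one class's insertion: a local packet is inserted between
    subsets of upstream packets according to the insertion rate, yielding
    the ordered set forwarded downstream."""
    ordered, li = [], 0
    for count, pkt in enumerate(upstream, start=1):
        ordered.append(pkt)
        if count % rate == 0 and li < len(local):
            ordered.append(local[li])
            li += 1
    ordered.extend(local[li:])  # leftover local packets trail the stream
    return ordered
```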
20110261831Dynamic Priority Queue Level Assignment for a Network Flow - Forwarding a flow in a network includes receiving the flow at a switch, determining an optimized priority queue level of the flow at the switch, and forwarding the flow via the switch using an optimized priority queue level of the flow at the switch. The flow passes through a plurality of switches, including the switch, in the network, and the optimized priority queue level of the flow at the switch is different from a priority queue level of the flow at a second switch of the plurality of switches. The second switch routes the flow at the second switch using the different priority queue level for the flow.10-27-2011
20080219279SCALABLE AND CONFIGURABLE QUEUE MANAGEMENT FOR NETWORK PACKET TRAFFIC QUALITY OF SERVICE - Various embodiments are directed to scalable and configurable queue management for network packet traffic Quality of Service (QoS). In one or more embodiments, the queue management may be implemented by a network processor comprising a queue manager to assert interrupts indicating that one or more queues require service, and a core processor to apply an interrupt mask to a status register value identifying the one or more queues that require service and to provide service during a particular service cycle to only those queues that are not masked out. Other embodiments are described and claimed.09-11-2008
20110075678NETWORK INTERFACE SYSTEM WITH FILTERING FUNCTION - A network interface system for transferring a data packet between a host system and a network includes multiple matchers and multiple queues. The matchers match the data packet with multiple rules from the host system to generate multiple matching results and allocate a transferring priority to the data packet according to the rules. The queues correspond to the matchers respectively. A queue of the queues stores information indicating the transferring priority for the data packet according to the matching results and priorities of matchers.03-31-2011
20110075679PACKET TRANSMISSION DEVICE AND PACKET TRANSMISSION METHOD - Provided are a packet transmission device and a packet transmission method which can effectively use a radio band while suppressing a processing overhead. A packet transmission device (03-31-2011
20110176554PACKET RELAY APPARATUS AND METHOD OF RELAYING PACKET - A packet relay apparatus with a multi-stage queue is provided. The packet relay apparatus includes a receiver that receives a packet, and a determiner that determines whether to drop the received packet without storing it into any queue of the multi-stage queue. The determiner determines to drop the received packet at a latter stage based on former-stage queue information, representing the state of the queue at a former stage to which the received packet belongs, and latter-stage queue information, representing the state of the queue at the latter stage to which the received packet belongs.07-21-2011
20100067538METHOD AND SYSTEM FOR FRAME CLASSIFICATION - The present invention provides a method and a device for classifying data frames. The method is typically carried out by a communication device in a wireless network with quality of service capability. It comprises the step of comparing data in a frame to data in a plurality of classifier entries, wherein the order of comparison of the classifier entries with a frame is a function of a quality of service priority level, and the step of classifying a frame for which a match is found as a function of a parameter associated with the matching classifier entry.03-18-2010
20090285229METHOD FOR SCHEDULING OF PACKETS IN TDMA CHANNELS - The method of the invention is implemented in an ad hoc communications network employing at least two-hop routing, in which each node has an omnidirectional send/receive capability. Each node keeps a near neighbour database (NND) that is updated by received messages. Each Othernode in the network whose message was received by Mynode within a time period T is a candidate for becoming a relay for transmitting Mynode's messages. The probability of an Othernode becoming a relay for Mynode is higher the more candidates Othernode has in its NND. The probability of the Othernode becoming a relay is also higher the larger its distance from Mynode.11-19-2009
20090175286SWITCHING METHOD - A method of switching data packets between an input and a plurality of outputs of a switching device. The switching device comprises a memory arranged to store a plurality of data structures, each data structure being associated with one of said outputs. The method comprises receiving a first data packet at said input, and storing said first data packet in a data structure associated with an output from which said data packet is to be transmitted. If said first data packet is intended to be transmitted from a plurality of said outputs, indication data is stored in each data structure associated with an output from which said first data packet is to be transmitted, but said first data packet is stored in only one of said data structures. The first data packet is transmitted from said data structure to the or each output from which the first data packet is to be transmitted.07-09-2009
20100020815DATA TRANSMISSION METHOD FOR HSDPA - In the data transmission method of an HSDPA system according to the present invention, a transmitter transmits Data Blocks each composed of one or more data units originated from a same logical channel, and a receiver receives the Data Block through a HS-DSCH and distributes the Data Block to a predetermined reordering buffer. Since each Data Block is composed of the MAC-d PDUs originated from the same logical channel, it is possible to monitor the in-sequence delivery of the data units, resulting in reduction of undesirable queuing delay caused by logical channel multiplexing.01-28-2010
20100020814Ethernet Switching - A method in an Ethernet switch (01-28-2010
20120307838METHOD AND SYSTEM FOR TEMPORARY DATA UNIT STORAGE ON INFINIBAND HOST CHANNEL ADAPTOR - A method for temporary storage of data units including receiving a first data unit to store in a hardware linked list queue on a communications adapter, reading a first index value from the first data unit, determining that the first index value does match an existing index value of a first linked list, and storing the first data unit in the hardware linked list queue as a member of the first linked list. The method further includes receiving a second data unit, reading a second index value from the second data unit, determining that the second index value does not match any existing index value, allocating space in the hardware linked list queue for a second linked list, and storing the second data unit in the second linked list.12-06-2012
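A software sketch of the index-matched storage rule in this abstract (the dict-based structure and names are my own; the patent describes a hardware linked list queue):

```python
class HardwareLinkedListQueue:
    """Sketch: a data unit whose index value matches an existing linked
    list joins that list; otherwise space for a new linked list is
    allocated and the unit becomes its first member."""

    def __init__(self):
        self.linked_lists = {}

    def store(self, unit):
        index = unit["index"]  # read the index value from the data unit
        if index in self.linked_lists:  # matches an existing linked list
            self.linked_lists[index].append(unit)
        else:  # allocate space for a new linked list
            self.linked_lists[index] = [unit]
        return len(self.linked_lists[index])
```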
20110142064DYNAMIC RECEIVE QUEUE BALANCING - A method according to one embodiment includes the operations of assigning a network application to at least one first core processing unit, from among a plurality of core processing unit. The method of this embodiment also includes the operations of assigning a first receive queue to said first core processing unit, wherein the first receive queue is configured to receive packet flow associated with the network application; defining a high threshold for the first receive queue; and monitoring the packet flow in the first receive queue and comparing a packet flow level in the first receive queue to the high threshold; wherein if the packet flow level exceeds the threshold based on the comparing, generating a queue status message indicating that the packet flow level in the first queue has exceeded the queue high threshold.06-16-2011
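The monitoring step can be sketched minimally (the message shape is illustrative, not from the patent):

```python
def monitor_receive_queue(level, high_threshold):
    """Compare the packet flow level in the first receive queue to its
    high threshold and emit a queue status message when it is exceeded."""
    if level > high_threshold:
        return {"event": "queue_high_threshold_exceeded", "level": level}
    return None  # flow level within bounds; no status message
```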
20110110380Hiding System Latencies in a Throughput Networking Systems - A method for addressing system latency within a network system which includes providing a network interface and moving data within each of the plurality of memory access channels independently and in parallel to and from a memory system so that one or more of the plurality of memory access channels operate efficiently in the presence of arbitrary memory latencies across multiple requests is disclosed. The network interface includes a plurality of memory access channels.05-12-2011
20120099603METHOD AND APPARATUS FOR SCHEDULING IN A PACKET BUFFERING NETWORK - A system and method that can be deployed to schedule links in a switch fabric. The operation uses two functional elements: one that updates a priority link list, and one that then selects a link using that list.04-26-2012
20110158251CONTENT DISTRIBUTION METHOD AND CONTENT RECEPTION DEVICE - Both a first method, in which a packet used to transmit distribution content is divided into two or more containers and the divided containers are transmitted at the same time, and a second method, in which the divided containers are transmitted at different times, are executed in a content distribution system. A download terminal selectively executes high-speed downloading by the first method and low-speed downloading by the second method, thus allowing the user to use a download service according to the terminal performance.06-30-2011
20110158250Assigning Work From Multiple Sources to Multiple Sinks Given Assignment Constraints - A method and apparatus for assigning work, such as data packets, from a plurality of sources, such as data queues in a network processing device, to a plurality of sinks, such as processor threads in the network processing device. In a given processing period, sinks that are available to receive work are identified and sources qualified to send work to the available sinks are determined taking into account any assignment constraints. A single source is selected from an overlap of the qualified sources and sources having work available. This selection may be made using a hierarchical source scheduler for processing subsets of supported sources simultaneously in parallel. A sink to which work from the selected source may be assigned is selected from available sinks qualified to receive work from the selected source.06-30-2011
20110158249Assignment Constraint Matrix for Assigning Work From Multiple Sources to Multiple Sinks - An assignment constraint matrix method and apparatus used in assigning work, such as data packets, from a plurality of sources, such as data queues in a network processing device, to a plurality of sinks, such as processor threads in the network processing device. The assignment constraint matrix is implemented as a plurality of qualifier matrixes adapted to operate simultaneously in parallel. Each of the plurality of qualifier matrixes is adapted to determine sources in a subset of supported sources that are qualified to provide work to a set of sinks based on assignment constraints. The determination of qualified sources may be based on sink availability information that may be provided for a set of sinks on a single chip or distributed on multiple chips.06-30-2011
20110158248DYNAMIC PRIORITIZED FAIR SHARE SCHEDULING SCHEME IN OVER-SUBSCRIBED PORT SCENARIO - A network device receives initial policer limits for a plurality of over-subscribing ingress ports, where the initial policer limits are based on existing bandwidth limits for an over-subscribed egress port associated with the over-subscribing ingress ports. The network device receives a high threshold watermark and a low threshold watermark for bandwidth usage of the over-subscribed egress port, and identifies a queue, associated with the over-subscribed egress port, with values outside the high threshold watermark or the low threshold watermark. The network device reduces the initial policer limits for the plurality of over-subscribing ingress ports when the queue has values above the high threshold watermark, and increases the initial policer limits for the plurality of over-subscribing ingress ports when the queue has values below the low threshold watermark.06-30-2011
20110317713Control Plane Packet Processing and Latency Control - A switch resource receives control plane packets and data packets. The control plane packets indicate how to configure the network in which the switch resource resides. The switch resource includes a classifier. The classifier classifies the control plane packets based on priority and stores the control plane packets into different packet priority queues. The switch resource also includes a flow controller. The forwarding manager selectively forwards the control plane packets stored in the control plane packet priority queues to a control plane packet processing environment depending on a completion status of processing previously forwarded control plane packets by a packet processing thread. The control plane packet processing environment includes a monitor resource that generates one or more interrupts to an operating system to ensure further forwarding of the packets downstream to the packet processing thread for timely processing.12-29-2011
20120002677Arbitration method, arbiter circuit, and apparatus provided with arbiter circuit - An arbitration method includes a first process to perform a path control to transfer data from physically plural input ports logically having plural virtual channels to an arbitrary one of the plural output ports, wherein only one channel is selectable at one input port at an arbitrary point in time, by performing an arbitration among the channels of each of the plural input ports according to an arbitrary arbitration algorithm other than a time-division algorithm, and a second process to perform an arbitration among the plural input ports according to the arbitrary arbitration algorithm. The arbitrary arbitration algorithm used in the first and second processes is switched to the time-division algorithm for a predetermined time in response to a trigger.01-05-2012
20120008636DYNAMICALLY ADJUSTED CREDIT BASED ROUND ROBIN SCHEDULER - A credit based queue scheduler dynamically adjusts credits depending upon at least a moving average of incoming packet size to alleviate the impact of traffic burstiness and packet size variation, and increase the performance of the scheduler by lowering latency and jitter. For the case when no service differentiation is required, the credit is adjusted by computing a weighted moving average of incoming packets for the entire scheduler. For the case when differentiation is required, the credit for each queue is determined by a product of a sum of credits given to all queues and priority levels of each queue.01-12-2012
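The credit adjustment described here can be sketched with an exponentially weighted moving average (the smoothing factor `alpha` and all names are assumptions of mine, not parameters from the patent):

```python
class AdaptiveCreditScheduler:
    """Sketch: the per-round credit tracks a weighted moving average of
    incoming packet sizes; in the differentiated case each queue's credit
    is the product of the total credit and its priority share."""

    def __init__(self, alpha=0.25, initial_avg=512.0):
        self.alpha = alpha
        self.avg = initial_avg  # moving average of incoming packet size

    def on_arrival(self, size):
        # weighted moving average over all incoming packets (no
        # differentiation: one credit for the entire scheduler)
        self.avg = (1 - self.alpha) * self.avg + self.alpha * size

    def credit_for(self, priority, priority_sum):
        # differentiated case: total credit times this queue's priority share
        return self.avg * priority / priority_sum
```

Tracking the average packet size keeps the credit large enough to send a whole typical packet per round, which is how such schemes reduce latency and jitter under bursty, variable-size traffic.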
20120008637DIFFERENTIAL FRAME BASED SCHEDULING FOR INPUT QUEUED SWITCHES - A differential frame-based scheduling scheme is employed for input queued (IQ) switches with virtual output queues (VOQ). Differential scheduling adjusts previous scheduling based on a traffic difference in two consecutive frames. To guarantee quality of service (QoS) with low complexity, the adjustment first reserves some slots for each port pair in each frame, then releases surplus allocations and supplements deficit allocations according to a dichotomy order, designed for high throughput, low jitter, fairness, and low computational complexity.01-12-2012
20090086747Queuing Method - A method of queuing data packets, said data packets comprising data packets of a first packet type and data packets of a second packet type. The method comprises grouping received packets of said first and second packet types into an ordered series of groups, each group comprising at least one packet, maintaining a group counter indicating the number of groups at the beginning of the series of groups comprising only packets of the second packet type, and transmitting a packet. A packet of the second packet type is available for transmission if but only if the group counter is indicative that the number of groups at the beginning of the series of groups comprising only packets of the second packet type is greater than zero.04-02-2009
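The group-counter rule in this abstract can be sketched directly (class and method names are mine):

```python
from collections import deque

class GroupedQueue:
    """Sketch: received packets are grouped into an ordered series, and a
    second-type packet may be transmitted only while the count of leading
    groups made up solely of second-type packets is greater than zero."""

    def __init__(self):
        self.groups = deque()  # each group: list of (packet_type, payload)

    def add_group(self, packets):
        self.groups.append(list(packets))

    def group_counter(self):
        count = 0
        for group in self.groups:
            if all(ptype == 2 for ptype, _ in group):
                count += 1
            else:
                break  # a first-type packet ends the leading run
        return count

    def second_type_available(self):
        return self.group_counter() > 0
```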
20120207175Dynamic load balancing for port groups - In one embodiment, a method includes receiving a packet at an input port of a network device, the input port having a plurality of queues with at least one queue for each output port at the network device, identifying a port group for transmitting the packet from the network device, the port group having a plurality of members each associated with one of the output ports, and selecting one of the queues based on utilization of the members. An apparatus for load balancing is also disclosed.08-16-2012
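Queue selection by member utilization might look like the following sketch, using queue depth as the utilization proxy (an assumption on my part; the patent does not specify the metric):

```python
def select_queue(group_members, queue_depth):
    """Among the port-group members, pick the output queue of the member
    with the lowest utilization (here: the shallowest queue)."""
    return min(group_members, key=lambda member: queue_depth[member])
```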
20120063467HIERARCHICAL PACKET SCHEDULING - A packet scheduler may include logic configured to receive packet information. The packet scheduler may include logic to receive an operating parameter associated with a downstream device that operates with cell-based traffic. The packet scheduler may include logic to perform a packet-to-cell transformation to produce an output based on the operating parameter. The packet scheduler may include logic to use the output to compensate for the downstream device.03-15-2012
20120027024Zero-Setting Network Quality Service System - A zero-setting QoS system designed with priority session and bandwidth technologies in an undifferentiated network, such that packets for universal or dedicated networks can obtain priority transmission services. As a QoS system with priority levels, network packets are received at the inlet and 802.1q and 802.1p tags are loaded onto them, so that the packets can be transmitted by priority level; the 802.1q and 802.1p tags are then removed at the outlet of the system, enabling easy operation in an undifferentiated network environment. The system's rapid transmission capability therefore lets it allocate and transmit network packets within a shorter response time and by better priority levels.02-02-2012
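The tag-at-inlet, strip-at-outlet mechanism relies on the standard 802.1Q frame format: a 4-byte tag (TPID 0x8100 plus a TCI whose top three bits carry the 802.1p priority) inserted after the two MAC addresses. A sketch (function names are mine):

```python
import struct

def add_vlan_tag(frame, vlan_id, pcp):
    """Insert an 802.1Q tag (TPID 0x8100) after the destination and source
    MAC addresses; the 802.1p priority (PCP) occupies the TCI's top 3 bits."""
    tci = ((pcp & 0x7) << 13) | (vlan_id & 0x0FFF)
    return frame[:12] + struct.pack("!HH", 0x8100, tci) + frame[12:]

def strip_vlan_tag(frame):
    # remove the 4-byte tag at the outlet, restoring the original frame
    return frame[:12] + frame[16:]
```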
20120300787 APPARATUS AND A METHOD OF RECEIVING AND STORING DATA PACKETS CONTROLLED BY A CENTRAL CONTROLLER - An assembly and a method where a number of receiving units receive and store data in a number of queues de-queued by a plurality of processors/processes. If a selected queue for one processor has a fill level exceeding a limit, the packet is forwarded to a queue of another processor which is instructed to not de-queue that queue until the queue with the exceeded fill level has been emptied. Thus, load balancing between processes/processors may be obtained while maintaining an ordering between packets.11-29-2012
20110103395COMPUTING THE BURST SIZE FOR A HIGH SPEED PACKET DATA NETWORKS WITH MULTIPLE QUEUES - A communications method is provided. The method includes processing multiple packet queues for a high speed packet data network and associating one or more arrays for the multiple packet queues. The method also includes generating an index for the arrays, where the index is associated with a time stamp in order to determine a burst size for the high speed packet data network.05-05-2011
20120120966Method and Apparatus for Allocating and Prioritizing Data Transmission - The subject matter disclosed herein describes a method to allocate and prioritize data communications on an industrial control network. A transmission schedule including multiple priority windows and multiple queues is established. Each queue is assigned to at least one priority window, and each priority window may have multiple queues assigned thereto. A control device communicating on the control network transmits data packets according to the transmission schedule. Within each priority window, data packets corresponding to one of the queues assigned to the priority window may be transmitted. The data packets may be transmitted at any point during the priority window, but will only be transmitted if no data packet from a higher queue is waiting to be transmitted.05-17-2012
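The priority-window rule above (a queue assigned to the current window may send, but only if no higher queue is waiting) can be sketched as follows, with lower index meaning higher priority (my convention, not the patent's):

```python
def transmit_in_window(window_queues, queues):
    """Pick a packet for the current priority window: the highest-priority
    non-empty queue wins, and only if it is assigned to this window."""
    for rank, queue in enumerate(queues):
        if queue:  # highest-priority queue with a packet waiting
            if rank in window_queues:
                return queue.pop(0)
            return None  # a higher queue outside the window blocks sending
    return None
```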
20120120965LOCK-LESS AND ZERO COPY MESSAGING SCHEME FOR TELECOMMUNICATION NETWORK APPLICATIONS - A computer-implemented system and method for a lock-less, zero data copy messaging mechanism in a multi-core processor for use on a modem in a telecommunications network are described herein. The method includes, for each of a plurality of processing cores, acquiring a kernel to user-space (K-U) mapped buffer and corresponding buffer descriptor, inserting a data packet into the buffer; and inserting the buffer descriptor into a circular buffer. The method further includes creating a frame descriptor containing the K-U mapped buffer pointer, inserting the frame descriptor onto a frame queue specified by a dynamic PCD rule mapping IP addresses to frame queues, and creating a buffer descriptor from the frame descriptor.05-17-2012
20100246592LOAD BALANCING METHOD FOR NETWORK INTRUSION DETECTION - A load balancing method for network intrusion detection includes the following steps. Data packets are received from a client; the data packets include a protocol type and a protocol property. An intrusion detection procedure is loaded on a receiving end. A corresponding request queue is set for each intrusion detection procedure; the request queue is used for storing the data packets. The data packets are processed by a separation procedure and are categorized into data packets of a chain type and data packets of a non-chain type according to the protocol type. The data packets of the chain type are processed by a first distribution procedure, and the data packets of the non-chain type are processed by a second distribution procedure. The distribution procedures distribute the data packets to the corresponding request queues according to the protocol property. The corresponding intrusion detection procedure is performed on the data packets in each request queue.09-30-2010
20120128007DISTRIBUTED SCHEDULING FOR VARIABLE-SIZE PACKET SWITCHING SYSTEM - Scheduling methods and apparatus are provided for an input-queued switch. The exemplary distributed scheduling process achieves 100% throughput for any admissible Bernoulli arrival traffic. The exemplary distributed scheduling process includes scheduling variable size packets. The exemplary distributed scheduling process may be easily implemented with a low-rate control or by sacrificing the throughput by a small amount. Simulation results also showed that this distributed scheduling process can provide very good delay performance for different traffic patterns. The exemplary distributed scheduling process may therefore be a good candidate for large-scale high-speed switching systems.05-24-2012
20110182299LIMITING TRANSMISSION RATE OF DATA - An improved solution for limiting the transmission rate of data over a network is provided according to an aspect of the invention. In particular, the transmission rate for a port is limited by rate limiting one of a plurality of queues (e.g., class/quality of service queues) for the port, and directing all data (e.g., packets) for transmission through the port to the single rate limited queue. In this manner, the transmission rate for the port can be effectively limited to accommodate, for example, a lower transmission rate for a port on a destination node.07-28-2011
20120134369Programmable Queuing Instruction Set - A traffic manager includes an execution unit that is responsive to instructions related to queuing of data in memory. The instructions may be provided by a network processor that is programmed to generate such instructions, depending on the data. Examples of such instructions include (1) writing of data units (of fixed size or variable size) without linking to a queue, (2) re-sequencing of the data units relative to one another without moving the data units in memory, and (3) linking the previously-written data units to a queue. The network processor and traffic manager may be implemented in a single chip.05-31-2012
20120170590Bandwidth Arrangement Method and Transmitter Thereof - A bandwidth arranging method includes the following steps of: registering isochronous packets of N isochronous streams, where N is a natural number greater than 1; segmenting an isochronous transmission period into M sub-periods, where M is a natural number greater than 1; arranging operation of transmitting each of the N isochronous streams in one of the M sub-periods and allocating corresponding bandwidth according to bandwidth requirement information corresponding to each of the N isochronous streams; arranging the isochronous packets into M output queues corresponding to the respective M sub-periods; and outputting isochronous packets stored in the M output queues in the respective M sub-periods.07-05-2012
20090129400PARSING AND FLAGGING DATA ON A NETWORK - Described are computer-based methods and apparatuses, including computer program products, for parsing, flagging, and/or reconstructing data on a network. Data packets associated with user requests are distributed among a plurality of data centers for processing. The data packets are captured at the data centers for fraud detection. The captured data packets are preprocessed at the data center. The preprocessing includes disregarding data packets that are not applicable to fraud detection. The preprocessing includes indicating if data packets are applicable to fraud detection. The indicating of the applicable data packets includes parsing the data packets using particular rules optimized for fraud detection. The data packets are processed at each data center to reconstruct part of the data associated with a user. The processing of the data packets includes reconstructing the data packets based on customer information from network information and/or cookie information. The reconstructed data packets are transmitted to a central processing center (e.g., central data center). The central processing center receives reconstructed data packets from the plurality of data centers and unifies the reconstructed data packets into data associated with a user.05-21-2009
20100008376METHODS, SYSTEMS AND COMPUTER PROGRAM PRODUCTS FOR PACKET PRIORITIZATION BASED ON DELIVERY TIME EXPECTATION - Methods, systems and computer program products for packet prioritization based on delivery time expectation. Exemplary embodiments include receiving a packet for routing, estimating a TimeToDestination for the packet, the estimating performed by an Internet Control Message Protocol, reading a TimeToDeliver field from the Internet Protocol header of each packet to extract data on when the packet needs to be at the destination, determining a MaxQueueDelay for the packet, the MaxQueueDelay calculated by subtracting the TimeToDeliver from the TimeToDestination, passing a lower priority packet if the lower priority packet has a lower MaxQueueDelay, and decrementing the TimeToDeliver by an amount of time the network router has had the packet in the queue before passing the packet to a next router, thereby communicating to the next router how much time is left before the packet must be delivered.01-14-2010
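The slack computation in the abstract above can be sketched briefly. The abstract's subtraction order is ambiguous; here MaxQueueDelay is modeled as the delivery budget (TimeToDeliver) minus the estimated transit time (TimeToDestination), i.e. how long the packet can sit in a queue and still arrive on time. This is an interpretation, with hypothetical field names.

```python
def max_queue_delay(time_to_deliver, time_to_destination):
    # Queuing slack: delivery budget minus estimated remaining transit
    # time (one reading of the abstract's subtraction; units e.g. ms).
    return time_to_deliver - time_to_destination

def next_to_send(queue):
    # Pass the packet with the least slack first, regardless of nominal
    # priority, as in the "pass a lower priority packet" step above.
    return min(queue, key=lambda p: max_queue_delay(p["ttd"], p["est"]))
```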
20090073999Adaptive Low Latency Receive Queues - A receive queue provided in a computer system holds work completion information and message data together. An InfiniBand hardware adapter sends a single CQE+message data to the computer system that includes the completion information and data. This information is sufficient for the computer system to receive and process the data message, thereby providing a highly scalable low latency receiving mechanism.03-19-2009
20090034549Managing Free Packet Descriptors in Packet-Based Communications - A network element including a processor with logic for managing packet queues including a queue of free packet descriptors. Upon the transmission of a packet by a host application, the packet descriptor for the transmitted packet is added to the free packet descriptor queue. If the new free packet descriptor resides in on-chip memory, relative to queue manager logic, it is added to the head of the free packet descriptor queue; if the new free packet descriptor resides in external memory, it is added to the tail of the free packet descriptor queue. Upon a packet descriptor being requested, by a host application, to be associated with valid data to be added to an active packet queue, the queue manager logic pops the packet descriptor currently at the head of the free descriptor queue. In this manner, packet descriptors in on-chip memory are preferentially used relative to packet descriptors in external memory, thus improving system performance.02-05-2009
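The head/tail policy in the abstract above reduces to a double-ended queue: freed on-chip descriptors go to the head, freed external-memory descriptors to the tail, and allocation always pops the head, so on-chip descriptors are reused preferentially. A minimal sketch with hypothetical names:

```python
from collections import deque

class FreeDescriptorQueue:
    """Free packet-descriptor queue that prefers on-chip descriptors."""

    def __init__(self):
        self._queue = deque()

    def release(self, descriptor, on_chip):
        # On-chip descriptors join the head, external-memory ones the tail.
        if on_chip:
            self._queue.appendleft(descriptor)
        else:
            self._queue.append(descriptor)

    def allocate(self):
        # Always pop the head, so on-chip descriptors are handed out first.
        return self._queue.popleft() if self._queue else None
```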
20120230346Ethernet Switching - A scheduler in an Ethernet switch and method for scheduling and queuing received unicast packets. The scheduler determines a destination address and a traffic priority of a received packet, and searches for a stored association between the destination address and an interfacing port of the Ethernet switch. When a stored association is found, the received packet is scheduled and queued in one of the priority buffers of the output buffer in an associated interfacing port according to the received packet's traffic priority. When no association is found, the scheduler floods the received unicast packet in a flooding buffer in every interfacing outgoing port of the Ethernet switch. The flooded packet may be scheduled as low priority traffic, or may be prioritized in relation to other flooded unicast packets based on each flooded unicast packet's traffic priority.09-13-2012
20120230345Systems and Methods of QoS for Single Stream ICA - The present solution provides quality of service (QoS) for a stream of protocol data units via a single transport layer connection. A device receives via a single transport layer connection a plurality of packets carrying a plurality of protocol data units. Each protocol data unit identifies a priority. The device may include a filter for determining an average priority for a predetermined window of protocol data units and an engine for assigning the average priority as a connection priority of the single transport layer connection. The device transmits via the single transport layer connection the packets carrying those protocol data units within the predetermined window of protocol data units while the connection priority of the single transport layer connection is assigned the average priority for the predetermined window of protocol data units.09-13-2012
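The filter described above, which assigns the connection the average priority of a sliding window of protocol data units, can be sketched in a few lines (window size and numeric priority scale are illustrative assumptions):

```python
def connection_priority(pdu_priorities, window=8):
    """Average priority of the most recent `window` protocol data units,
    assigned as the priority of the single transport connection."""
    recent = pdu_priorities[-window:]
    return sum(recent) / len(recent)
```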
20110122886METHOD AND DEVICES FOR INSTALLING PACKET FILTERS IN A DATA TRANSMISSION - A method is described for associating a data packet (DP) with a packet bearer (PB) in a user equipment (UE).05-26-2011
20110122885Controlling Packet Filter Installation in a User Equipment - A communication system includes a user equipment (UE).05-26-2011
20110122884ZERO COPY TRANSMISSION WITH RAW PACKETS - A system for providing a zero copy transmission with raw packets includes an operating system that receives an application request pertaining to a data packet to be transmitted over a network, where the data packet has already gone through networking stack processing invoked by the application. The operating system queries a driver of a network device on whether the network device has a zero copy capability. Based on the query response of the driver, the operating system determines whether a zero copy transmission should be used for the data packet. If not, the operating system copies the data packet from the application memory to a kernel buffer, and notifies the driver about the data packet in the kernel buffer. If so, the operating system refrains from copying the data packet to the kernel buffer, and notifies the driver about the data packet in the application memory.05-26-2011
20110122883SETTING AND CHANGING QUEUE SIZES IN LINE CARDS - A device may include a first line card and a second line card. The first line card may include a memory including queues. In addition, the first line card may include a processor. The processor may identify, among the queues, a queue whose size is to be modified, change the size of the identified queue, receive a packet, insert a header cell associated with the packet in the identified queue, identify a second line card from which the packet is to be sent to another device in a network, remove the header cell from the identified queue, and forward the header cell to the second line card. The second line card may receive the header cell from the first line card, and send the packet to the other device in the network.05-26-2011
20080298381APPARATUS FOR QUEUE MANAGEMENT OF A GLOBAL LINK CONTROL BYTE IN AN INPUT/OUTPUT SUBSYSTEM - Apparatus for communicating global link control words (LCW) between chips. A queue stores LCWs and has an input for receiving an LCW from a previous chip, and an output for outputting a stored LCW to a subsequent chip. A management circuit compares an incoming LCW with a previously stored LCW, and a combiner circuit combines the incoming LCW with a previously stored LCW and stores the combined LCW in the queue when the management circuit determines that the incoming LCW can be combined with the previously stored LCW.12-04-2008
20080298380Transmit Scheduling - There are disclosed apparatus and methods for scheduling packet transmission. At least one scheduled traffic queue holds a plurality of scheduled packets, each scheduled packet having an associated scheduled transmit time. At least one unscheduled traffic queue holds a plurality of unscheduled packets. A packet selector causes transmission of scheduled packets from the scheduled traffic queue at the associated scheduled transmit time, while causing transmission of unscheduled packets from the unscheduled traffic queue during the time intervals between transmissions of scheduled packets.12-04-2008
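The selector described above can be sketched with a simplified discrete-time model: in each time slot a scheduled packet due at that slot is sent, and otherwise an unscheduled packet fills the gap. One packet per slot is an illustrative simplification, not the application's timing model.

```python
def transmit_order(scheduled, unscheduled, horizon):
    """Interleave scheduled packets (list of (slot, packet) pairs) with
    gap-filling unscheduled packets over `horizon` time slots."""
    due = dict(scheduled)          # slot -> scheduled packet
    pending = list(unscheduled)    # FIFO of unscheduled packets
    order = []
    for slot in range(horizon):
        if slot in due:
            order.append(due[slot])       # scheduled traffic at its time
        elif pending:
            order.append(pending.pop(0))  # fill the gap
    return order
```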
20120327951Channel Service Manager with Priority Queuing - A system and method are provided for prioritizing network processor information flow in a channel service manager (CSM). The method receives a plurality of information streams on a plurality of input channels, and selectively links input channels to CSM channels. The information streams are stored, and the stored information streams are mapped to a processor queue in a group of processor queues. Information streams are supplied from the group of processor queues to a network processor in an order responsive to a ranking of the processor queues inside the group. More explicitly, selectively linking input channels to CSM channels includes creating a fixed linkage between each input port and an arbiter in a group of arbiters, and scheduling information streams in response to the ranking of the arbiter inside the group. Finally, a CSM channel is selected for each information stream scheduled by an arbiter.12-27-2012
20120327950Method for Transmitting Data Packets - Method for transmitting data packets in an Ethernet automation network, wherein the method comprises receiving a first data packet having a first priority by a transmitter, starting a transmit operation to send the first data packet from the transmitter to a receiver, receiving a second data packet having a second priority at an instant in time by the transmitter, where the second priority is higher than the first priority, and where the second data packet is to be transmitted to the receiver. The method further comprises aborting the transmit operation of the first data packet within one of the data frames of the first data packet which is located in the transmit operation at the time of the reception of the second data packet, and thereupon transmitting the second data packet from the transmitter to the receiver.12-27-2012
20120327949DISTRIBUTED PROCESSING OF DATA FRAMES BY MULTIPLE ADAPTERS USING TIME STAMPING AND A CENTRAL CONTROLLER - An apparatus and a method where a plurality of physically separate data receiving/analyzing elements receive data packets and time stamp these. A controlling unit determines a storing address for each data packet based on at least the time stamp, where the controlling unit does not perform the determination of the address until a predetermined time delay has elapsed after the time of receipt.12-27-2012
20110032947RESOURCE ARBITRATION - A circuit includes queue buffers, a bid masking circuit, and a priority selection circuit. Each of the queue buffers carries packets of a respective message class selected from a set of message classes and asserts a respective bid signal indicating that the queue buffer carries a packet that is available for transmission. The bid masking circuit produces a masked vector of bid signals by selectively masking one or more of the bid signals asserted by the queue buffers based on credit available to transmit the packets and on cyclical masking of one or more of the bid signals asserted by ones of the queue buffers selected for packet transmission. The priority selection circuit selects respective ones of the queue buffers from which packets are transmitted based on the masked vector of bid signals produced by the bid masking circuit.02-10-2011
20100232448Scalable Interface for Connecting Multiple Computer Systems Which Performs Parallel MPI Header Matching - An interface device for a compute node in a computer cluster which performs Message Passing Interface (MPI) header matching using parallel matching units. The interface device comprises a memory that stores posted receive queues and unexpected queues. The posted receive queues store receive requests from a process executing on the compute node. The unexpected queues store headers of send requests (e.g., from other compute nodes) that do not have a matching receive request in the posted receive queues. The interface device also comprises a plurality of hardware pipelined matcher units. The matcher units perform header matching to determine if a header in the send request matches any headers in any of the plurality of posted receive queues. Matcher units perform the header matching in parallel. In other words, the plural matching units are configured to search the memory concurrently to perform header matching.09-16-2010
20100232446QUEUE SHARING WITH FAIR RATE GUARANTEE - In one embodiment, separate rate meters are maintained for each flow, and a flow's meter is increased at a target rate while a packet from the flow occupies the head of a shared transmit queue. The meter value is decreased by the packet length when a packet is enqueued or dropped. The next packet that occupies the head of the shared transmit queue is dropped if the meter value corresponding to the flow is greater than a threshold.09-16-2010
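The per-flow metering described above can be sketched as follows: each flow's meter grows at its target rate while its packet holds the head of the shared queue, shrinks by packet length on enqueue or drop, and a head-of-line packet is dropped when its flow's meter exceeds the threshold. This is an illustrative model with hypothetical units, not the application's hardware design.

```python
class FairShareQueue:
    """Shared transmit queue with per-flow rate meters."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.meters = {}   # flow -> meter value
        self.queue = []    # shared FIFO of (flow, packet_length)

    def enqueue(self, flow, length):
        # Meter decreases by packet length on enqueue (also on drop).
        self.meters[flow] = self.meters.get(flow, 0) - length
        self.queue.append((flow, length))

    def head_wait(self, flow, rate, seconds):
        # Credit accrued while this flow's packet occupies the head.
        self.meters[flow] = self.meters.get(flow, 0) + rate * seconds

    def dequeue(self):
        """Pop the head packet; returns (flow, length, dropped)."""
        flow, length = self.queue.pop(0)
        dropped = self.meters.get(flow, 0) > self.threshold
        return flow, length, dropped
```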
20110255551METHOD AND SYSTEM FOR WEIGHTED FAIR QUEUING - A system for scheduling data for transmission in a communication network includes a credit distributor and a transmit selector. The communication network includes a plurality of children. The transmit selector is communicatively coupled to the credit distributor. The credit distributor operates to grant credits to at least one of eligible children and children having a negative credit count. Each credit is redeemable for data transmission. The credit distributor further operates to affect fairness between children with ratios of granted credits, maintain a credit balance representing a total amount of undistributed credits available, and deduct the granted credits from the credit balance. The transmit selector operates to select at least one eligible and enabled child for dequeuing, bias selection of the eligible and enabled child to an eligible and enabled child with positive credits, and add credits to the credit balance corresponding to an amount of data selected for dequeuing.10-20-2011
20120327948ADJUSTMENT OF NEGATIVE WEIGHTS IN WEIGHTED ROUND ROBIN SCHEDULING - In one embodiment, a network processor services a plurality of queues having data using weighted round robin scheduling. Each queue is assigned an initial weight based on the queue's priority. During each cycle, an updated weight is generated for each queue by adding the corresponding initial weight to a corresponding previously generated decremented weight. Further, each queue outputs as many packets as it can without exceeding its updated weight. As each packet gets transmitted, the updated weight is decremented based on the number of blocks in that packet. If, after those packets are transmitted, the decremented weight is still positive and the queue still has data, then one more packet is transmitted, no matter how many blocks are in the packet. When a decremented weight becomes negative, the weights of the remaining queues are increased to restore the priorities of the queues as set by the initial weights.12-27-2012
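The per-cycle budget logic in the abstract above can be sketched as follows: each queue adds its initial weight to the carried-over decremented weight, sends packets while they fit the budget, and, if the budget is still positive, sends one more packet even if that drives the weight negative. The negative-weight rebalancing of the remaining queues is omitted from this simplified sketch; packets are modeled as block counts and all names are illustrative.

```python
def wrr_cycle(queues, weights, state):
    """One weighted-round-robin cycle. `queues` are lists of packet sizes
    (in blocks), `weights` the initial per-queue weights, and `state` the
    carried-over decremented weights. Returns the (queue, size) sends."""
    sent = []
    for i, q in enumerate(queues):
        budget = state[i] + weights[i]          # updated weight
        while q and budget >= q[0]:             # send while within budget
            budget -= q[0]
            sent.append((i, q.pop(0)))
        if q and budget > 0:
            # Still positive and data remains: one more packet goes out,
            # no matter how many blocks it has (may drive budget negative).
            budget -= q[0]
            sent.append((i, q.pop(0)))
        state[i] = budget                       # decremented weight
    return sent
```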
20120287940REDUCING DATA TRANSFER FOR MATCHING PATTERNS - A device may receive a packet, obtain data from the packet, store the data in a memory, and send a request to match a portion of the data to a set of patterns, the request identifying the portion in the memory. In addition, the device may access the portion in the memory based on the request, compare the accessed portion to the set of patterns, generate a result by comparing the accessed portion to the set of patterns, and output the result.11-15-2012
20100202470Dynamic Queue Memory Allocation With Flow Control - A method in an Ethernet controller for allocating memory space in a buffer memory between a transmit queue (TXQ) and a receive queue (RXQ) includes allocating initial memory space in the buffer memory to the RXQ and the TXQ; defining a RXQ high watermark and a RXQ low watermark; receiving an ingress data frame; determining if a memory usage in the RXQ exceeds the RXQ high watermark; if the RXQ high watermark is not exceeded, storing the ingress data frame in the RXQ; if the RXQ high watermark is exceeded, determining if there are unused memory space in the TXQ; if there are no unused memory space in the TXQ, transmitting a pause frame to halt further ingress data frame; if there are unused memory space in the TXQ, allocating unused memory space in the TXQ to the RXQ; and storing the ingress data frame in the RXQ.08-12-2010
20120243551Efficient Processing of Compressed Communication Traffic - A method for processing communication traffic includes receiving an incoming stream of compressed data conveyed by a sequence of data packets, each containing a respective portion of the compressed data. The respective portion of the compressed data contained in the first packet is stored in a buffer, having a predefined buffer size. Upon receiving a subsequent packet, at least a part of the compressed data stored in the buffer and the respective portion of the compressed data contained in the subsequent packet are decompressed, thereby providing decompressed data. A most recent part of the decompressed data that is within the buffer size is recompressed and stored in the buffer.09-27-2012
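The bounded-history scheme above can be sketched with `zlib`: on each packet, decompress the stored history plus the new data, then keep only a recompressed copy of the most recent window of plaintext as the new history. For simplicity this sketch treats each packet's payload as independently compressed, which differs from a real continuous stream; the function and parameter names are illustrative.

```python
import zlib

def update_history(buffer, buffer_size, packet_payload):
    """Decompress stored history plus the new packet, then recompress and
    store only the most recent `buffer_size` bytes of plaintext.
    Returns (new_buffer, full_decompressed_data)."""
    history = zlib.decompress(buffer) if buffer else b""
    plain = history + zlib.decompress(packet_payload)
    recent = plain[-buffer_size:]            # bounded history window
    return zlib.compress(recent), plain
```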
20120243550TECHNIQUES TO UTILIZE QUEUES FOR NETWORK INTERFACE DEVICES - Techniques to allocate packets for processing among multiple processor(s). Other embodiments are also disclosed and/or claimed.09-27-2012
20080247410Creating A Low Bandwidth Channel Within A High Bandwidth Packet Stream - Creating a low-bandwidth channel in a high-bandwidth channel. By taking advantage of extra bandwidth in a high-bandwidth channel, a low-bandwidth channel is created by inserting extra packets. When an inter-packet gap of the proper duration is detected, the extra packet is inserted and any incoming packets on the high-bandwidth channel are stored in an elastic buffer. Observing inter-packet gaps, minimal latency is introduced in the high-bandwidth channel when there is no extra packet in the process of being sent, and the effects of sending a packet on the low-bandwidth channel are absorbed and distributed among other passing traffic.10-09-2008
20080232386PRIORITY BASED BANDWIDTH ALLOCATION WITHIN REAL-TIME AND NON-REAL-TIME TRAFFIC STREAMS - A method and system for transmitting packets in a packet switching network. Packets received by a packet processor may be prioritized based on the urgency to process them. Packets that are urgent to be processed may be referred to as real-time packets. Packets that are not urgent to be processed may be referred to as non-real-time packets. Real-time packets have a higher priority to be processed than non-real-time packets. A real-time packet may either be discarded or transmitted into a real-time queue based upon its value priority, the minimum and maximum rates for that value priority and the current real-time queue congestion conditions. A non-real-time packet may either be discarded or transmitted into a non-real-time queue based upon its value priority, the minimum and maximum rates for that value priority and the current real-time and non-real-time queue congestion conditions.09-25-2008
20080225874Stateful packet filter and table management method thereof - A stateful packet filter and a table management method thereof are disclosed. The stateful packet filter includes an index buffer storing a session table index address from a session table, which is searched for determining a session of a received packet when a packet is received; and a table manager updating a state table by using the session table index address, stored in the index buffer, as a state table address value.09-18-2008
20080225873RELIABLE NETWORK PACKET DISPATCHER WITH INTERLEAVING MULTI-PORT CIRCULAR RETRY QUEUE - Disclosed is a method and apparatus for managing network data packet transmission. A retry buffer is maintained that includes a single first in, first out retransmission retry buffer. A first data packet is inserted into the retry buffer in response to transmitting the first data packet to a remote node. A determination that a second data packet is not able to be transmitted to the remote node causes the second data packet to be inserted into the retry buffer. A third data packet is retrieved from the retry buffer and a determination that it is not to be transmitted to the remote node causes the third data packet to be reinserted into the retry buffer.09-18-2008
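The single first-in, first-out retry buffer described above can be sketched as a plain FIFO: packets enter when transmitted (pending acknowledgment) or when they cannot be sent, and a retrieved packet that still cannot go out is reinserted at the tail. A minimal illustrative sketch with hypothetical names:

```python
from collections import deque

class RetryQueue:
    """Single FIFO retransmission retry buffer."""

    def __init__(self):
        self._fifo = deque()

    def insert(self, packet):
        self._fifo.append(packet)

    def retry(self, can_transmit):
        """Pop the oldest packet; reinsert it at the tail if it cannot be
        sent now. Returns (packet, sent)."""
        if not self._fifo:
            return None, False
        packet = self._fifo.popleft()
        if can_transmit(packet):
            return packet, True
        self._fifo.append(packet)   # reinserted for a later retry
        return packet, False
```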
20080225872DYNAMICALLY DEFINING QUEUES AND AGENTS IN A CONTACT CENTER - In one embodiment, an automatic call distributor apparatus is provided, including a network interface operable to receive a request for a service, and a processor operable to assign the request to a queue and to associate a number of resources with the queue based upon a determination of at least one dynamic parameter of the queue. Advantageously, resources may be allocated to queues in a flexible, efficient, and dynamic manner.09-18-2008
20130142204COMMUNICATION METHOD AND APPARATUS FOR THE EFFICIENT AND RELIABLE TRANSMISSION OF TT ETHERNET MESSAGES - The goal of the present invention is to improve the useful data efficiency and reliability in the use of commercially available ETHERNET controllers, in a distributed real time computer system, by a number of node computers communicating via one or more communication channels by means of TT ETHERNET messages. To achieve this goal, a distinction is made between the node computer send time (KNSZPKT) and the network send time (NWSZPKT) of a message. The KNSZPKT must wait for the NWSZPKT, so that under all circumstances, the start of the message has arrived in the TT star coupler at the NWSZPKT, interpreted by the clock in the TT star coupler. The TT star coupler is modified, so that a message arriving from a node computer is delayed in an intelligent port of the TT star coupler until the NWSZPKT can send it precisely at the NWSZPKT into the TT network.06-06-2013
20130114621System and Method for Computer Originated Audio File Transmission - A system and method for computer originated audio file transmission includes a server having a communications module operable to communicate with a terminal unit. The server may also include a storage module operable to store at least one file. A processor may be provided to separate the file into a plurality of packets. In accordance with one embodiment of the present invention, the communications module is operable to send an initial burst of packets to the terminal unit, wherein the initial burst of packets includes at least two of the plurality of packets. In accordance with another embodiment of the present invention, the communications module is further operable to send additional packets of the plurality of packets at a predetermined rate, until each of the plurality of packets has been sent to the terminal unit.05-09-2013
20130100960SYSTEM AND METHOD FOR DYNAMIC SWITCHING OF A RECEIVE QUEUE ASSOCIATED WITH A VIRTUAL MACHINE - Methods and systems for managing multiple receive queues of a networking device of a host machine in a virtual machine system. The networking device includes multiple receive queues that are used to receive packets intended for a guest of the virtual machine system and pass the packets to the intended virtual machine. A hypervisor of the virtual machine system manages the switching from one or more receive queues (i.e., old receive queues) to one or more other receive queues (i.e., new receive queues) by managing the provisioning of packets from the receive queues to one or more virtual machines in the virtual machine system.04-25-2013
20130128896NETWORK SWITCH WITH EXTERNAL BUFFERING VIA LOOPAROUND PATH - Described embodiments process data packets received by a network switch coupled to an external buffering device. The network switch determines a queue of an internal buffer of the network switch associated with a flow of the received packet and determines whether the received packet should be forwarded to the external buffering device. If the received packet should be forwarded to the external buffering device, the network switch sets an external buffering active indicator indicating that the network switch is in an external buffering mode for the flow, tags the received packet with metadata, and forwards the packet to the external buffering device. The external buffering device stores the forwarded packet in a queue of a memory of the external buffering device corresponding to the tagged metadata of the forwarded packet. The network switch processes packets stored in the internal buffer of the network switch.05-23-2013
20110222553THREAD SYNCHRONIZATION IN A MULTI-THREAD NETWORK COMMUNICATIONS PROCESSOR ARCHITECTURE - Described embodiments provide a packet classifier for a network processor that generates tasks corresponding to each received packet. The packet classifier includes a scheduler to generate a thread of contexts for each task received by the packet classifier from a plurality of processing modules of the network processor. The scheduler includes one or more output queues to temporarily store contexts. Each thread corresponds to an order of instructions applied to the corresponding packet, and includes an identifier of a corresponding one of the output queues. The scheduler sends the contexts to a multi-thread instruction engine that processes the threads. An arbiter selects one of the output queues in order to provide output packets to the multi-thread instruction engine, the output packets associated with a corresponding thread of contexts. Each output queue transmits output packets corresponding to a given thread contiguously in the order in which the threads started.09-15-2011
20080198866Hybrid Method and Device for Transmitting Packets - A method for transmitting packets, the method includes receiving multiple packets at multiple queues. The method is characterized by dynamically defining fixed priority queues and weighted fair queuing queues, and scheduling a transmission of packets in response to a status of the multiple queues and in response to the definition. A device for transmitting packets, the device includes multiple queues adapted to receive multiple packets. The device includes a circuit that is adapted to dynamically define fixed priority queues and weighted fair queuing queues out of the multiple queues and to schedule a transmission of packets in response to a status of the multiple queues and in response to the definition.08-21-2008
20130136141WRR SCHEDULER CONFIGURATION FOR OPTIMIZED LATENCY, BUFFER UTILIZATION - A method includes receiving network information for calculating weighted round-robin (WRR) weights, calculating WRR weights associated with queues based on the network information, and determining whether a highest common factor (HCF) exists in relation to the calculated WRR weights. The method further includes reducing the calculated WRR weights in accordance with the HCF, when it is determined that the HCF exists, and performing a WRR scheduling of packets, stored in the queues, based on the reduced WRR weights.05-30-2013
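The highest-common-factor reduction described above is a straightforward gcd step: dividing all WRR weights by their HCF preserves the relative service ratios while shortening each scheduling round. A minimal sketch:

```python
from functools import reduce
from math import gcd

def reduce_wrr_weights(weights):
    """Divide the calculated WRR weights by their highest common factor,
    preserving the service ratios with shorter rounds (lower latency and
    buffer utilization)."""
    hcf = reduce(gcd, weights)
    return [w // hcf for w in weights] if hcf > 1 else list(weights)
```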
20120275464SYSTEM AND METHOD FOR DYNAMICALLY ALLOCATING BUFFERS BASED ON PRIORITY LEVELS - Methods and systems consistent with the present invention provide dynamic buffer allocation to a plurality of queues of differing priority levels. Each queue is allocated fixed minimum number of buffers that will not be de-allocated during buffer reassignment. The rest of the buffers are intelligently and dynamically assigned to each queue depending on their current need. The system then monitors and learns the incoming traffic pattern and resulting drops in each queue due to traffic bursts. Based on this information, the system readjusts allocation of buffers to each traffic class. If a higher priority queue does not need the buffers, it gradually relinquishes them. These buffers are then assigned to other queues based on the input traffic pattern and resultant drops. These buffers are aggressively reclaimed and reassigned to higher priority queues when needed.11-01-2012
20110310909PACKET SWITCHING - In an embodiment, an apparatus is provided that may include an integrated circuit including switch circuitry to determine, at least in part, an action to be executed involving a packet. This determination may be based, at least in part, upon flow information determined, at least in part, from the packet, and packet processing policy information. The circuitry may examine the policy information to determine whether a previously-established packet processing policy has been established that corresponds, at least in part, to the flow information. If the circuitry determines, at least in part, that the policy has not been established and the packet is a first packet in a flow corresponding at least in part to the flow information, the switch circuitry may request that at least one switch control program module establish, at least in part, a new packet processing policy corresponding, at least in part, to the flow information.12-22-2011
20110317712Recovering Data From A Plurality of Packets - A method includes receiving a plurality of packets at an integrated processor block of a network on a chip device. The plurality of packets includes a first packet that includes an indication of a start of data associated with a pixel shader application. The method includes recovering the data from the plurality of packets. The method also includes storing the recovered data in a dedicated packet collection memory within the network on the chip device. The method further includes retaining the data stored in the dedicated packet collection memory during an interruption event. Upon completion of the interruption event, the method includes copying packets stored in the dedicated packet collection memory prior to the interruption event to an inbox of the network on the chip device for processing.12-29-2011
20120002678PRIORITIZATION OF DATA PACKETS - A method of operating a telecommunications node.01-05-2012
20120020371MULTITHREADED, SUPERSCALAR SCHEDULING IN A TRAFFIC MANAGER OF A NETWORK PROCESSOR - Described embodiments schedule packets for transmission by a network processor. A traffic manager generates a scheduling hierarchy having a root scheduler and N levels. The network processor generates tasks corresponding to received packets. The traffic manager enqueues tasks in an associated queue. The queue has a corresponding level M, with a corresponding parent scheduler at each of M−1 levels in the scheduling hierarchy, where M is less than or equal to N. In a single scheduling cycle, a parent scheduler selects a child node to transmit one or more tasks, and the child node responds whether the scheduling is accepted, and if so, with a number of tasks for scheduling. Starting at the parent scheduler and iteratively repeating at each level until reaching the root scheduler, statistics corresponding to the selected node are updated. Output packets corresponding to the scheduled tasks are transmitted, thereby achieving a superscalar task scheduling throughput.01-26-2012
20120020370ROOT SCHEDULING ALGORITHM IN A NETWORK PROCESSOR - Described embodiments provide for arbitrating between nodes of scheduling hierarchy of a network processor. A traffic manager generates a tree scheduling hierarchy having a root scheduler and N scheduler levels. The network processor generates tasks corresponding to received packets. The traffic manager queues the received task in an associated queue of the scheduling hierarchy. The root scheduler performs smooth deficit weighted round robin (SDWRR) arbitration between each child node of the root scheduler. The SDWRR arbitration includes checking one or more status indicators of each child node of the given scheduler and selecting, based on the status indicators, a first active child node of the scheduler and updating the one or more status indicators corresponding to the selected child node. Thus, a task is scheduled for transmission by the traffic manager every cycle of the network processor.01-26-2012
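The SDWRR arbitration described in this abstract (check each child's status indicators, select the first active child, and update its indicators) can be approximated with a per-child deficit counter. This is a hedged sketch: the field names and the one-task-per-charge deficit policy are assumptions, and the "smooth" interleaving property of SDWRR is only approximated here.

```python
def sdwrr_select(children):
    """Pick the first active child with remaining deficit; start a new
    round (restoring each deficit to its weight) when all are exhausted.

    Each child is a dict: {"active": bool, "weight": int, "deficit": int}.
    """
    for _ in range(2):  # at most one deficit refresh per call
        for child in children:
            if child["active"] and child["deficit"] > 0:
                child["deficit"] -= 1  # charge one scheduled task
                return child
        # No eligible child: begin a new round by restoring deficits.
        for child in children:
            child["deficit"] = child["weight"]
    return None  # no active children at all
```

Because a selection is made on every call whenever any child is active, this matches the abstract's claim that a task can be scheduled every cycle.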
20120020369SCHEDULING HIERARCHY IN A TRAFFIC MANAGER OF A NETWORK PROCESSOR - Described embodiments provide for dynamically constructing a scheduling hierarchy of a network processor. A traffic manager generates a tree scheduling hierarchy having a root scheduler and N scheduler levels. The network processor generates tasks corresponding to received packets. The traffic manager queues the received task in the associated queue, the queue having a corresponding parent scheduler at each of one or more next levels of the scheduling hierarchy up to the root scheduler. A parent scheduler selects, starting at the root scheduler and iteratively repeating at each of the corresponding N scheduling levels until a queue is selected, a child node to transmit at least one task. The traffic manager forms output packets for transmission based on the at least one task from the selected queue.01-26-2012
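The top-down selection described above (starting at the root scheduler and iteratively picking a child at each level until a leaf queue is reached) can be sketched as a simple tree walk. The per-node selection policy here (first child with pending tasks) is an assumption for illustration; the abstract does not specify it.

```python
def has_tasks(node):
    """True if this node, or any descendant queue, has pending tasks."""
    if "tasks" in node:          # leaf queue
        return bool(node["tasks"])
    return any(has_tasks(c) for c in node["children"])

def select_queue(root):
    """Walk from the root scheduler down to a leaf queue, picking a
    child with pending work at each scheduling level."""
    node = root
    while "children" in node:    # still at a scheduler level
        eligible = [c for c in node["children"] if has_tasks(c)]
        if not eligible:
            return None          # nothing to schedule anywhere below
        node = eligible[0]
    return node                  # a leaf queue was selected
```

Each node is a plain dict: schedulers carry a "children" list, leaf queues carry a "tasks" list; a real traffic manager would replace the first-eligible policy with a weighted arbiter at each level.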
20120020368DYNAMIC UPDATING OF SCHEDULING HIERARCHY IN A TRAFFIC MANAGER OF A NETWORK PROCESSOR - Described embodiments provide for dynamically controlling a scheduling rate of each node in a scheduling hierarchy of a network processor. A traffic manager generates a tree scheduling hierarchy having a root scheduler and N scheduler levels. The network processor generates tasks corresponding to received packets. A traffic manager enqueues received tasks in a queue of the scheduling hierarchy associated with a data flow. The queue has a parent scheduler at each level of the hierarchy up to the root scheduler. The traffic manager maintains one or more scheduling data structures for each node in the scheduling hierarchy. If the traffic manager receives a rate reduction request corresponding to a given node of the scheduling hierarchy, the traffic manager updates one or more indicators in the scheduling data structure corresponding to the given node and removes the given node from the scheduling hierarchy, thereby reducing the scheduling rate of the node.01-26-2012
20120020367SPECULATIVE TASK READING IN A TRAFFIC MANAGER OF A NETWORK PROCESSOR - Described embodiments provide for scheduling packets for transmission by a network processor. The network processor generates tasks corresponding to received packets associated with a data flow. A traffic manager of the network processor receives tasks provided by a processing module of the network processor and generates a tree scheduling hierarchy having one or more scheduling levels. Each received task is queued in a queue of the scheduling hierarchy associated with the received task, the queue having a corresponding parent scheduler in each level of the scheduling hierarchy, forming a branch of the scheduling hierarchy. A parent scheduler selects a child node to transmit a task. A task read module determines a thread corresponding to the selected child node to read corresponding packet data from a shared memory. The traffic manager forms one or more output tasks for transmission based on the packet data corresponding to the thread.01-26-2012
20120020366PACKET DRAINING FROM A SCHEDULING HIERARCHY IN A TRAFFIC MANAGER OF A NETWORK PROCESSOR - Described embodiments provide for restructuring a scheduling hierarchy of a network processor having a plurality of processing modules and a shared memory. The scheduling hierarchy schedules packets for transmission. The network processor generates tasks corresponding to each received packet associated with a data flow. A traffic manager receives tasks provided by one of the processing modules and determines a queue of the scheduling hierarchy corresponding to the task. The queue has a parent scheduler at each of one or more next levels of the scheduling hierarchy up to a root scheduler, forming a branch of the hierarchy. The traffic manager determines if the queue and one or more of the parent schedulers of the branch should be restructured. If so, the traffic manager drops subsequently received tasks for the branch, drains all tasks of the branch, and removes the corresponding nodes of the branch from the scheduling hierarchy.01-26-2012
20120294315PACKET BUFFER COMPRISING A DATA SECTION AND A DATA DESCRIPTION SECTION - The present invention relates to a data buffer memory.11-22-2012
20130201995SYSTEM AND METHOD FOR PERFORMING PACKET QUEUING ON A CLIENT DEVICE USING PACKET SERVICE CLASSIFICATIONS - A client device having a networking layer and a network driver layer for transmitting network packets comprising: a plurality of transmit queues configured at the network layer, each of the transmit queues having different packet service classifications associated therewith, packets being queued in one of the transmit queues according to traffic service classifications assigned to the packets; a classifier module for classifying packets according to the different packet service classifications, wherein a packet to be transmitted is stored in one of the transmit queues based on the packet service classifications; and a network layer packet scheduler for scheduling packets for transmission from each of the transmit queues at the networking layer, the network layer packet scheduler scheduling packets for transmission according to the packet service classifications.08-08-2013
20130201996SCHEDULING PACKET TRANSMISSION ON A CLIENT DEVICE USING PACKET CLASSIFICATIONS INCLUDING HIGH PRIORITY NETWORK CONTROL PACKETS - A method comprising: configuring a plurality of transmit queues, each of the transmit queues having different packet service classifications associated therewith, the packet service classifications specifying a relative priority for packets stored within each respective queue, at least one of the transmit queues having a packet service classification assigned to network control packets being assigned a highest priority relative to the other transmit queues; classifying packets according to the different packet service classifications, wherein a packet to be transmitted is stored in one of the transmit queues based on the packet service classifications, and wherein network control packets are stored in the queue associated with network control packets; and scheduling packets for transmission from each of the transmit queues, wherein packets are scheduled for transmission according to the packet service classifications and wherein network control packets are prioritized for transmission above all other packet service classifications.08-08-2013
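The classify-then-schedule flow in these two abstracts (per-service-class transmit queues, with network control packets prioritized above all other classes) can be sketched with a strict-priority dequeue. The class names and the strict-priority policy are assumptions for illustration; the abstracts only require that network control rank highest.

```python
# Service classes ordered highest-priority first; network control on top.
SERVICE_CLASSES = ["network_control", "voice", "video", "best_effort"]

class ClassifiedTransmitQueues:
    """One transmit queue per packet service classification, drained in
    strict priority order with network control always served first."""
    def __init__(self):
        self.queues = {sc: [] for sc in SERVICE_CLASSES}

    def enqueue(self, packet):
        """Classify the packet and store it in the matching queue."""
        sc = packet.get("service_class")
        if sc not in self.queues:
            sc = "best_effort"   # unknown classifications get best effort
        self.queues[sc].append(packet)

    def dequeue(self):
        """Serve the highest-priority non-empty queue, or None if idle."""
        for sc in SERVICE_CLASSES:
            if self.queues[sc]:
                return self.queues[sc].pop(0)
        return None
```

A real scheduler would typically temper strict priority for the lower classes (e.g. with weighted fair queuing) to avoid starvation, but the network-control-first property shown here is the one both abstracts emphasize.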
20130208731POSTED AND UNENCUMBERED QUEUES - In one aspect, techniques are provided for adding a packet to a queue. A packet may be received. A determination may be made if the packet is encumbered or unencumbered. The packet may be added to a posted queue, to an encumbered queue, or to an unencumbered queue based on the determination. In another aspect, techniques are provided for de-queuing a packet in a posted queue. A posted packet may be de-queued and encumbered queues associated with the packet may be added to unencumbered queues.08-15-2013
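The enqueue/de-queue flow in this abstract (packets blocked behind an outstanding posted packet sit in encumbered queues, and are released to unencumbered queues when that posted packet is de-queued) can be sketched as below. The dict layout and the "blocked_on" field are assumptions introduced for illustration.

```python
from collections import deque

class OrderingQueues:
    """Sketch of posted / encumbered / unencumbered queuing."""
    def __init__(self):
        self.posted = deque()
        self.unencumbered = deque()
        # Maps a posted packet id -> deque of packets encumbered by it.
        self.encumbered = {}

    def enqueue(self, packet):
        """Route a packet to the posted, encumbered, or unencumbered queue."""
        if packet.get("posted"):
            self.posted.append(packet)
        elif packet.get("blocked_on") is not None:
            self.encumbered.setdefault(
                packet["blocked_on"], deque()).append(packet)
        else:
            self.unencumbered.append(packet)

    def dequeue_posted(self):
        """De-queue a posted packet; packets encumbered by it become
        unencumbered, per the second aspect of the abstract."""
        pkt = self.posted.popleft()
        self.unencumbered.extend(self.encumbered.pop(pkt["id"], ()))
        return pkt
```

This mirrors ordering rules found in transaction protocols where non-posted requests must not pass earlier posted writes; de-queuing the posted packet lifts the encumbrance.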
