Patent application number | Description | Published |
--- | --- | --- |
20100293312 | Network Communications Processor Architecture - Described embodiments provide a system having a plurality of processor cores and common memory in direct communication with the cores. A task source core communicates with a task destination core by generating a task message for the task destination core. The task source core transmits the task message directly to a receiving processing core adjacent to the task source core. If the receiving processing core is not the task destination core, the receiving processing core passes the task message unchanged to a processing core adjacent to the receiving processing core. If the receiving processing core is the task destination core, the task destination core processes the message. | 11-18-2010 |
20120155495 | PACKET ASSEMBLY MODULE FOR MULTI-CORE, MULTI-THREAD NETWORK PROCESSORS - Described embodiments provide for processing received data packets into packet reassemblies for transmission as output packets of a network processor. A packet assembler determines the packet reassembly associated with each data portion and enqueues an identifier for the data portion in an input queue corresponding to that packet reassembly. A state data entry corresponding to each packet reassembly identifies whether the packet reassembly is actively being processed by the packet assembler. Iteratively, until an eligible data portion is selected, the packet assembler selects a given data portion from a non-empty input queue and determines whether the selected data portion corresponds to a reassembly that is actively being processed. If the reassembly is active, the packet assembler marks the selected data portion as ineligible for selection. Otherwise, the packet assembler selects the data portion for processing and modifies the packet reassembly based on the selected data portion. | 06-21-2012 |
20120300772 | SHARING OF INTERNAL PIPELINE RESOURCES OF A NETWORK PROCESSOR WITH EXTERNAL DEVICES - Described embodiments provide a system having at least two network processors that each have a plurality of processing modules. The processing modules process a packet in a task pipeline by transmitting task messages to other processing modules on a task ring, the task messages related to desired processing of the packet. A series of tasks within a network processor may result in no processing or reduced processing by certain processing modules, creating a virtual pipeline that depends on the packet received by the network processor. At least two of the network processors communicate tasks. This communication allows for the extension of the virtual pipeline of one network processor to at least two network processors. | 11-29-2012 |
20130028264 | PACKET REASSEMBLY PROCESSING - Described embodiments provide a reassembly system for processing an asynchronous transfer mode (ATM) cell of data into an ATM adaptation layer (AAL) packet. A preprocessor module identifies a first conversation identification of one or more minipackets in the ATM cell and reassembles the one or more minipackets having the first conversation identification into a portion of the AAL packet. The preprocessor determines whether a trigger has occurred. In response to the trigger, the preprocessor sends a portion of the reassembled minipackets having the first conversation identification to a destination processor. | 01-31-2013 |
20130097345 | ADDRESS LEARNING AND AGING FOR NETWORK BRIDGING IN A NETWORK PROCESSOR - Described embodiments process received data packets that include a source address and at least one destination address. If the destination address is stored in a memory of an I/O adapter, the received data packet is processed in accordance with bridging rules associated with each destination address stored in the I/O adapter memory. If the destination address is not stored in the I/O adapter memory, the I/O adapter sends a task message to a processor to determine whether the destination address is stored in an address table in a shared memory of the network processor. The I/O adapter memory has lower access latency than the address table. If the destination address is stored in the address table, the received data packet is processed in accordance with the bridging rules stored in the address table, and the bridging rules stored in the I/O adapter memory are updated. | 04-18-2013 |
20130128896 | NETWORK SWITCH WITH EXTERNAL BUFFERING VIA LOOPAROUND PATH - Described embodiments process data packets received by a network switch coupled to an external buffering device. The network switch determines a queue of an internal buffer of the network switch associated with a flow of the received packet and determines whether the received packet should be forwarded to the external buffering device. If the received packet should be forwarded to the external buffering device, the network switch sets an external buffering active indicator indicating that the network switch is in an external buffering mode for the flow, tags the received packet with metadata, and forwards the packet to the external buffering device. The external buffering device stores the forwarded packet in a queue of a memory of the external buffering device corresponding to the tagged metadata of the forwarded packet. The network switch processes packets stored in the internal buffer of the network switch. | 05-23-2013 |
20130142205 | Hierarchical Self-Organizing Classification Processing in a Network Switch - Described embodiments process data packets received by a switch coupled to a network processor. The switch determines whether one or more rules for classifying and processing the received packet are stored in an internal classification database of the switch. If one or more rules are stored in the internal database, the switch updates statistics corresponding to each of the rules and classifies and processes the received packet in accordance with the rules. If no associated rules are stored in the internal database, the switch tags the received packet with metadata and forwards the packet to the network processor. The network processor determines one or more rules for classifying and processing the forwarded packet in a classification database of the network processor and updates statistics corresponding to each rule. The network processor classifies and processes the packet in accordance with the rules and updates the internal database of the switch. | 06-06-2013 |
20140153575 | PACKET DATA PROCESSOR IN A COMMUNICATIONS PROCESSOR ARCHITECTURE - Described embodiments provide a network processor having a hardware accelerator that identifies a received packet and, based on a flow identification associated with the received packet, might pre-fetch pre-established portions of data from the received packet into local data memory (e.g., local data cache) for processing by a general-purpose processor core. In addition to the packet data, the software necessary for the general-purpose processor core to process the data might also be pre-fetched into local instruction memory (e.g., local instruction cache). The flow identification might be used to select different portions of the packet and different software to be pre-fetched. | 06-05-2014 |
20140258375 | SYSTEM AND METHOD FOR LARGE OBJECT CACHE MANAGEMENT IN A NETWORK - Aspects of the disclosure pertain to a system and method for large object cache management in a network. A proxy server of the present disclosure implements a token-based policing mechanism via a cache tracking table to evaluate objects for potential inclusion in a cache controlled by the proxy server and to age out objects already stored in the cache. Use of the tracking table and token-based policing mechanism by the proxy server promotes efficient usage of caching resources and efficient handling of client requests for large data objects, such as video files. | 09-11-2014 |
20140351519 | SYSTEM AND METHOD FOR PROVIDING CACHE-AWARE LIGHTWEIGHT PRODUCER CONSUMER QUEUES - Aspects of the disclosure pertain to a system and method for providing cache-aware lightweight producer consumer queues. The system is a multiprocessor system configured to specify separate cache attributes for inner (e.g., local) cache and outer (e.g., shared) cache, reducing system overhead. Separate cache attributes are specified such that shared variables are cacheable only in a cache level shared by multiple processors. | 11-27-2014 |
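
Several of the abstracts above describe mechanisms concrete enough to sketch in code. The C fragments that follow are hedged readings of each abstract: every name, size, and threshold is invented for illustration and is not taken from the patents. First, the task-ring delivery of 20100293312, where a task message hops from core to adjacent core until it reaches the task destination core:

```c
/* Hypothetical model of the task ring in 20100293312: each core forwards a
 * task message to its ring neighbor until the destination core is reached. */
#include <stdio.h>

#define NUM_CORES 8

struct task_msg {
    int dest_core;          /* task destination core */
    const char *payload;
};

/* The task destination core consumes the task. */
static void process_task(int core, const struct task_msg *msg) {
    printf("core %d processing task: %s\n", core, msg->payload);
}

/* Each hop either consumes the message or passes it unchanged to the
 * adjacent core on the ring. */
static void ring_deliver(int source_core, struct task_msg *msg) {
    int core = (source_core + 1) % NUM_CORES;   /* first receiving core */
    while (core != msg->dest_core)
        core = (core + 1) % NUM_CORES;          /* pass along unchanged */
    process_task(core, msg);
}

int main(void) {
    struct task_msg msg = { .dest_core = 5, .payload = "classify packet" };
    ring_deliver(1, &msg);   /* core 1 is the task source core */
    return 0;
}
```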
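The selection loop of 20120155495 might look roughly like the following, assuming per-reassembly ring-buffer input queues and a simple active flag; the queue depth and scan order are assumptions:

```c
/* Hypothetical sketch of the selection loop in 20120155495: data portions are
 * drawn from per-reassembly input queues, skipping reassemblies that are
 * already being actively processed. */
#include <stdbool.h>
#include <stdio.h>

#define NUM_REASM 4
#define QUEUE_DEPTH 16

struct reasm_state {
    bool active;                 /* actively processed by the assembler */
    int  queue[QUEUE_DEPTH];     /* identifiers of pending data portions */
    int  head, tail;             /* ring-buffer indices */
};

static struct reasm_state reasm[NUM_REASM];

static bool queue_empty(const struct reasm_state *r) {
    return r->head == r->tail;
}

/* Iterate until an eligible data portion is found: one whose reassembly is
 * not currently active. Returns the reassembly index, or -1 if nothing is
 * eligible. (The completion path that clears r->active is omitted.) */
static int select_portion(int *portion_id) {
    for (int i = 0; i < NUM_REASM; i++) {
        struct reasm_state *r = &reasm[i];
        if (queue_empty(r) || r->active)
            continue;            /* ineligible: empty queue or active reassembly */
        *portion_id = r->queue[r->head];
        r->head = (r->head + 1) % QUEUE_DEPTH;
        r->active = true;        /* mark reassembly as actively processed */
        return i;
    }
    return -1;
}

int main(void) {
    /* Enqueue one data portion for reassembly 2, then select it. */
    reasm[2].queue[reasm[2].tail] = 42;
    reasm[2].tail = (reasm[2].tail + 1) % QUEUE_DEPTH;

    int portion;
    int idx = select_portion(&portion);
    if (idx >= 0)
        printf("processing portion %d of reassembly %d\n", portion, idx);
    return 0;
}
```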
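For 20120300772, one plausible reading of a virtual pipeline is a per-pipeline participation mask consulted by each processing module; the mask encoding and the hand-off to the second network processor are assumptions:

```c
/* Illustrative sketch for 20120300772: a task carries a virtual-pipeline id;
 * each processing module consults a per-pipeline participation mask and either
 * processes the task or passes it through unchanged on the task ring. A final
 * hop hands the task to a second network processor, extending the pipeline
 * across chips. */
#include <stdint.h>
#include <stdio.h>

#define NUM_MODULES 6

struct task {
    uint8_t pipeline_id;     /* selects the virtual pipeline */
    const char *packet;
};

/* Bit i set => module i participates in this virtual pipeline. */
static const uint32_t pipeline_mask[] = {
    [0] = 0x2B,   /* modules 0, 1, 3, 5 */
    [1] = 0x05,   /* modules 0, 2: reduced processing */
};

static void run_pipeline(struct task *t, int chip) {
    uint32_t mask = pipeline_mask[t->pipeline_id];
    for (int m = 0; m < NUM_MODULES; m++) {
        if (mask & (1u << m))
            printf("chip %d module %d processes %s\n", chip, m, t->packet);
        /* otherwise the task passes through on the task ring unchanged */
    }
}

int main(void) {
    struct task t = { .pipeline_id = 1, .packet = "pkt-A" };
    run_pipeline(&t, 0);  /* first network processor */
    run_pipeline(&t, 1);  /* task communicated to second network processor */
    return 0;
}
```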
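The reassembly-and-trigger flow of 20130028264 can be sketched with a per-conversation buffer; the size-threshold trigger below is an assumed stand-in for whatever trigger the patent contemplates:

```c
/* Rough sketch of the reassembly flow in 20130028264: minipackets sharing a
 * conversation identification are appended to a per-conversation buffer, and
 * a trigger causes the reassembled portion to be sent to the destination
 * processor. */
#include <stdio.h>
#include <string.h>

#define MAX_CONV 8
#define BUF_SIZE 256
#define TRIGGER_BYTES 48   /* assumed trigger condition */

struct conversation {
    char   buf[BUF_SIZE];
    size_t len;
};

static struct conversation conv[MAX_CONV];

static void send_to_destination(int conv_id, struct conversation *c) {
    printf("conversation %d: sending %zu reassembled bytes\n", conv_id, c->len);
    c->len = 0;   /* portion handed off; restart reassembly */
}

/* Preprocessor step: identify the conversation id, reassemble, check trigger. */
static void handle_minipacket(int conv_id, const char *data, size_t n) {
    struct conversation *c = &conv[conv_id];
    if (c->len + n <= BUF_SIZE) {
        memcpy(c->buf + c->len, data, n);
        c->len += n;
    }
    if (c->len >= TRIGGER_BYTES)       /* trigger occurred */
        send_to_destination(conv_id, c);
}

int main(void) {
    char cell[48] = {0};               /* an ATM cell payload is 48 bytes */
    handle_minipacket(3, cell, sizeof cell);
    return 0;
}
```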
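The two-level address lookup of 20130097345 maps naturally onto a small fast table backed by a larger shared table; the table sizes and round-robin replacement are assumptions:

```c
/* Sketch of the two-level lookup in 20130097345: a small, low-latency table in
 * the I/O adapter is consulted first; on a miss, the larger address table in
 * shared memory is searched and, on a hit there, the adapter table is updated
 * (learned into) for future packets. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define ADAPTER_SLOTS 4      /* small, low-latency I/O adapter memory */
#define SHARED_SLOTS  64     /* larger address table in shared memory */

struct entry { uint64_t mac; int rule; bool valid; };

static struct entry adapter_tbl[ADAPTER_SLOTS];
static struct entry shared_tbl[SHARED_SLOTS];
static int adapter_next;     /* naive round-robin replacement (assumed) */

static bool lookup(struct entry *tbl, int n, uint64_t mac, int *rule) {
    for (int i = 0; i < n; i++)
        if (tbl[i].valid && tbl[i].mac == mac) { *rule = tbl[i].rule; return true; }
    return false;
}

/* Returns the bridging rule for a destination address, mirroring the
 * fast-path/slow-path split in the abstract. */
static int bridge_lookup(uint64_t dest_mac) {
    int rule;
    if (lookup(adapter_tbl, ADAPTER_SLOTS, dest_mac, &rule))
        return rule;                         /* fast path: adapter memory hit */

    /* Slow path: in the patent this is a task message to a processor, which
     * searches the shared-memory address table. */
    if (lookup(shared_tbl, SHARED_SLOTS, dest_mac, &rule)) {
        adapter_tbl[adapter_next] = (struct entry){ dest_mac, rule, true };
        adapter_next = (adapter_next + 1) % ADAPTER_SLOTS;
        return rule;
    }
    return -1;                               /* unknown address: e.g., flood */
}

int main(void) {
    shared_tbl[0] = (struct entry){ 0x001122334455ULL, 7, true };
    printf("first lookup: rule %d\n", bridge_lookup(0x001122334455ULL));
    printf("second lookup (now cached): rule %d\n", bridge_lookup(0x001122334455ULL));
    return 0;
}
```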
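For 20130128896, the external-buffering decision can be modeled as a per-flow depth check; the congestion threshold and the metadata layout are assumptions:

```c
/* Sketch of the loop-around buffering decision in 20130128896: when a flow's
 * internal queue is congested, the switch sets the flow's external-buffering
 * indicator, tags the packet with metadata naming its queue, and forwards it
 * to the external buffering device. */
#include <stdbool.h>
#include <stdio.h>

#define NUM_FLOWS 4
#define INTERNAL_LIMIT 8     /* assumed congestion threshold */

struct flow_state {
    int  internal_depth;     /* packets queued in the internal buffer */
    bool external_active;    /* external buffering active indicator */
};

static struct flow_state flows[NUM_FLOWS];

struct packet { int flow; int len; };
struct tagged_packet { struct packet pkt; int queue_id; };  /* metadata tag */

static void forward_external(struct tagged_packet tp) {
    printf("flow %d: packet sent to external queue %d\n",
           tp.pkt.flow, tp.queue_id);
}

static void enqueue(struct packet p) {
    struct flow_state *f = &flows[p.flow];
    /* Once external buffering is active, the flow's later packets also go
     * around the loop so per-flow ordering is preserved. */
    if (f->external_active || f->internal_depth >= INTERNAL_LIMIT) {
        f->external_active = true;
        forward_external((struct tagged_packet){ p, p.flow });
    } else {
        f->internal_depth++;   /* normal path: internal buffer */
    }
}

int main(void) {
    for (int i = 0; i < 10; i++)
        enqueue((struct packet){ .flow = 1, .len = 64 });
    return 0;
}
```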
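The miss-and-install pattern of 20130142205 is essentially rule caching with statistics; the database sizes and replacement policy below are assumptions:

```c
/* Sketch of the self-organizing split in 20130142205: the switch keeps a small
 * internal classification database; misses are punted to the network
 * processor, which classifies the packet, updates its statistics, and installs
 * the rule back into the switch's database. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SWITCH_RULES 4
#define NP_RULES     32

struct rule { uint32_t match; int action; uint64_t hits; bool valid; };

static struct rule switch_db[SWITCH_RULES];   /* internal database */
static struct rule np_db[NP_RULES];           /* network processor database */
static int switch_next;                       /* naive replacement (assumed) */

static struct rule *find(struct rule *db, int n, uint32_t key) {
    for (int i = 0; i < n; i++)
        if (db[i].valid && db[i].match == key) return &db[i];
    return NULL;
}

static int classify(uint32_t key) {
    struct rule *r = find(switch_db, SWITCH_RULES, key);
    if (r) { r->hits++; return r->action; }   /* hit: update statistics */

    /* Miss: in the patent the packet is tagged with metadata and forwarded
     * to the network processor. */
    r = find(np_db, NP_RULES, key);
    if (!r) return -1;                        /* no rule anywhere: drop */
    r->hits++;
    switch_db[switch_next] = *r;              /* update the switch's database */
    switch_next = (switch_next + 1) % SWITCH_RULES;
    return r->action;
}

int main(void) {
    np_db[0] = (struct rule){ .match = 0xC0A80001, .action = 2, .valid = true };
    printf("action %d (via network processor)\n", classify(0xC0A80001));
    printf("action %d (now in switch database)\n", classify(0xC0A80001));
    return 0;
}
```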
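The flow-directed pre-fetch of 20140153575 is sketched with a per-flow table of packet offsets and handler functions. __builtin_prefetch is a GCC/Clang builtin standing in for the accelerator's pre-fetch into local data and instruction memory, and the table layout is an assumption:

```c
/* Sketch of the flow-directed pre-fetch in 20140153575: the flow
 * identification selects which packet bytes and which handler code to pull
 * toward the core before processing begins. */
#include <stddef.h>
#include <stdio.h>

struct packet { unsigned flow_id; const unsigned char *data; size_t len; };

typedef void (*handler_fn)(const unsigned char *hdr, size_t n);

static void handle_ipv4(const unsigned char *hdr, size_t n) {
    (void)hdr;
    printf("ipv4 handler saw %zu pre-fetched bytes\n", n);
}

/* Per-flow description of which portion of the packet matters and which
 * software processes it. */
static const struct { size_t off, len; handler_fn fn; } flow_tbl[] = {
    { 0, 20, handle_ipv4 },   /* flow 0: IPv4 header */
};

static void dispatch(const struct packet *p) {
    size_t off = flow_tbl[p->flow_id].off;
    size_t len = flow_tbl[p->flow_id].len;
    handler_fn fn = flow_tbl[p->flow_id].fn;

    __builtin_prefetch(p->data + off);    /* pre-fetch packet data */
    __builtin_prefetch((const void *)fn); /* pre-fetch handler code; the
                                             function-to-object pointer cast
                                             is a common but non-ISO extension */
    fn(p->data + off, len);
}

int main(void) {
    unsigned char buf[64] = {0};
    struct packet p = { .flow_id = 0, .data = buf, .len = sizeof buf };
    dispatch(&p);
    return 0;
}
```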
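The token-based policing of 20140258375 can be read as admission-by-tokens with periodic aging; the admission threshold and the aging rule are assumptions:

```c
/* Sketch of the token-based policing idea in 20140258375: a tracking table
 * accrues tokens per object as requests arrive; an object is admitted to the
 * cache only after earning enough tokens, and tokens drain over time to age
 * entries out. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TRACK_SLOTS 8
#define ADMIT_TOKENS 3      /* assumed admission threshold */

struct tracked { uint64_t obj_id; int tokens; bool cached; bool valid; };

static struct tracked track[TRACK_SLOTS];

static struct tracked *slot_for(uint64_t obj_id) {
    for (int i = 0; i < TRACK_SLOTS; i++)
        if (track[i].valid && track[i].obj_id == obj_id) return &track[i];
    for (int i = 0; i < TRACK_SLOTS; i++)
        if (!track[i].valid) {
            track[i] = (struct tracked){ obj_id, 0, false, true };
            return &track[i];
        }
    return NULL;    /* table full; a real policy would evict (assumed) */
}

/* Called on each client request for a large object. */
static void on_request(uint64_t obj_id) {
    struct tracked *t = slot_for(obj_id);
    if (!t) return;
    if (++t->tokens >= ADMIT_TOKENS && !t->cached) {
        t->cached = true;
        printf("object %llu admitted to cache\n", (unsigned long long)obj_id);
    }
}

/* Periodic aging pass: drain tokens; age out objects that fall to zero. */
static void age_out(void) {
    for (int i = 0; i < TRACK_SLOTS; i++)
        if (track[i].valid && --track[i].tokens <= 0) {
            if (track[i].cached)
                printf("object %llu aged out\n",
                       (unsigned long long)track[i].obj_id);
            track[i].valid = false;
        }
}

int main(void) {
    for (int i = 0; i < 3; i++) on_request(9001);  /* popular video object */
    age_out(); age_out(); age_out();               /* no further requests */
    return 0;
}
```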
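Finally, the idea in 20140351519 of keeping shared variables out of private caches can be echoed portably by isolating the shared indices of a single-producer/single-consumer queue on their own cache lines. The patent's actual mechanism is cache-attribute (inner vs. outer cacheability) configuration, which plain C cannot express, so this is only an analogy:

```c
/* Cache-aware SPSC queue sketch in the spirit of 20140351519: the shared head
 * and tail indices live on separate cache lines so producer and consumer do
 * not false-share. */
#include <stdalign.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define QSIZE 64    /* power of two; indices run free and wrap via modulo */

struct spsc_queue {
    alignas(64) atomic_uint head;   /* advanced by consumer, own cache line */
    alignas(64) atomic_uint tail;   /* advanced by producer, own cache line */
    alignas(64) int slots[QSIZE];
};

static bool spsc_push(struct spsc_queue *q, int v) {
    unsigned t = atomic_load_explicit(&q->tail, memory_order_relaxed);
    unsigned h = atomic_load_explicit(&q->head, memory_order_acquire);
    if (t - h == QSIZE) return false;           /* full */
    q->slots[t % QSIZE] = v;
    atomic_store_explicit(&q->tail, t + 1, memory_order_release);
    return true;
}

static bool spsc_pop(struct spsc_queue *q, int *v) {
    unsigned h = atomic_load_explicit(&q->head, memory_order_relaxed);
    unsigned t = atomic_load_explicit(&q->tail, memory_order_acquire);
    if (t == h) return false;                   /* empty */
    *v = q->slots[h % QSIZE];
    atomic_store_explicit(&q->head, h + 1, memory_order_release);
    return true;
}

int main(void) {
    static struct spsc_queue q;
    spsc_push(&q, 123);
    int v;
    if (spsc_pop(&q, &v)) printf("popped %d\n", v);
    return 0;
}
```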