Hasting
Joseph Hasting US
Patent application number | Description | Published |
---|---|---|
20110225376 | MEMORY MANAGER FOR A NETWORK COMMUNICATIONS PROCESSOR ARCHITECTURE - Described embodiments provide a memory manager for a network processor having a plurality of processing modules and a shared memory. The memory manager allocates blocks of the shared memory to requesting ones of the plurality of processing modules. A free block list tracks availability of memory blocks of the shared memory. A reference counter maintains, for each allocated memory block, a reference count indicating a number of access requests to the memory block by ones of the plurality of processing modules. The reference count is located with data at the allocated memory block. For subsequent access requests to a given memory block concurrent with processing of a prior access request to the memory block, a memory access accumulator (i) accumulates an incremental value corresponding to the subsequent access requests, (ii) updates the reference count associated with the memory block, and (iii) updates the memory block with the accumulated result. | 09-15-2011 |
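The abstract above describes a free-block list, a reference count stored with each allocated block's data, and an accumulator that folds concurrent access requests into one reference-count update. A minimal Python sketch of that scheme (class and method names are illustrative, not from the patent):

```python
class MemoryManager:
    """Sketch of the described scheme: a free-block list plus a
    reference count kept together with each allocated block's data."""

    def __init__(self, num_blocks):
        # Free-block list tracks which shared-memory blocks are available.
        self.free_list = list(range(num_blocks))
        # Each allocated block stores its reference count with its data.
        self.blocks = {}

    def alloc(self, data=None):
        """Allocate a block to a requesting processing module."""
        block = self.free_list.pop(0)
        self.blocks[block] = {"refcount": 1, "data": data}
        return block

    def accumulate_accesses(self, block, pending_requests):
        """Memory-access accumulator: fold several access requests that
        arrived during a prior access into one incremental update."""
        increment = len(pending_requests)
        self.blocks[block]["refcount"] += increment
        return self.blocks[block]["refcount"]

    def release(self, block):
        """Drop one reference; return the block to the free list when
        no processing module still holds it."""
        entry = self.blocks[block]
        entry["refcount"] -= 1
        if entry["refcount"] == 0:
            del self.blocks[block]
            self.free_list.append(block)
```

Batching the increments means the shared reference count is written once per accumulation window rather than once per request, which is the contention the abstract targets.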
Joseph Hasting, Orefield, PA US
Patent application number | Description | Published |
---|---|---|
20120131283 | MEMORY MANAGER FOR A NETWORK COMMUNICATIONS PROCESSOR ARCHITECTURE - Described embodiments provide a network processor having a plurality of processing modules coupled to a system cache and a shared memory. A memory manager allocates blocks of the shared memory to a requesting one of the processing modules. The allocated blocks store data corresponding to packets received by the network processor. The memory manager maintains a reference count for each allocated memory block indicating a number of processing modules accessing the block. One of the processing modules reads the data stored in the allocated memory blocks, stores the read data to corresponding entries of the system cache and operates on the data stored in the system cache. Upon completion of operation on the data, the processing module requests to decrement the reference count of each memory block. Based on the reference count, the memory manager invalidates the entries of the system cache and deallocates the memory blocks. | 05-24-2012 |
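This variant couples the reference counts to a system cache: modules read block data into cache entries, and when the last reference to a block is dropped the manager invalidates its cache entry and frees the block. A rough sketch under those assumptions (all names are illustrative):

```python
class CacheAwareManager:
    """Sketch: reference-counted blocks whose system-cache entries are
    invalidated when the last processing module releases the block."""

    def __init__(self, num_blocks):
        self.free_list = list(range(num_blocks))
        self.refcount = {}   # block -> number of modules accessing it
        self.cache = {}      # system-cache entries keyed by block

    def alloc(self, readers):
        """Allocate a block for packet data; `readers` is how many
        processing modules will access it."""
        block = self.free_list.pop(0)
        self.refcount[block] = readers
        return block

    def read_into_cache(self, block, data):
        """A module reads block data into its system-cache entry."""
        self.cache[block] = data
        return self.cache[block]

    def decrement(self, block):
        """A module finished operating on the data: on the last
        reference, invalidate the cache entry and deallocate."""
        self.refcount[block] -= 1
        if self.refcount[block] == 0:
            self.cache.pop(block, None)
            del self.refcount[block]
            self.free_list.append(block)
```

The point of the design is that deallocation and cache invalidation happen together, so a stale cache entry can never outlive its backing memory block.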
Joseph R. Hasting, Orefield, PA US
Patent application number | Description | Published |
---|---|---|
20130086332 | Task Queuing in a Multi-Flow Network Processor Architecture - Described embodiments generate tasks corresponding to each packet received by a network processor. A destination processing module receives a task and determines, based on the task size, a queue in which to store the task, and whether the task is larger than space available within a current memory block of the queue. If the task is larger, an address of a next memory block in a memory is determined, and the address is provided to a source processing module of the task. The source processing module writes the task to the memory based on a provided offset address and the address of the next memory block, if provided. If a task is written to more than one memory block, the destination processing module preloads the address of the next memory block to a local memory to process queued tasks without stalling to retrieve the address of the next memory block. | 04-04-2013 |
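The queuing logic above hinges on one check: does the task fit in the space left in the current memory block, and if not, link a next block and hand its address to the source module, preloading it so the consumer never stalls on an address fetch. A small sketch of that spill logic (block size and names are illustrative):

```python
BLOCK_SIZE = 64  # bytes per queue memory block (illustrative value)

class TaskQueue:
    """Sketch: tasks are written at an offset into fixed-size blocks;
    a task that spills past the current block gets the next block's
    address, which is also preloaded for the consuming module."""

    def __init__(self):
        self.current_block = 0
        self.offset = 0            # write offset within current block
        self.preloaded_next = None # next-block address kept locally

    def enqueue(self, task_size):
        """Return (block, offset, next_block) for the source module's
        write; next_block is None when the task fits in place."""
        block, offset = self.current_block, self.offset
        if offset + task_size > BLOCK_SIZE:
            next_block = self.current_block + 1
            # Preload the next-block address so queued tasks can be
            # processed without stalling to fetch the link.
            self.preloaded_next = next_block
            self.current_block = next_block
            self.offset = (offset + task_size) - BLOCK_SIZE
            return block, offset, next_block
        self.offset += task_size
        return block, offset, None
```

Example: with 64-byte blocks, a 40-byte task lands at offset 0 of block 0; a second 40-byte task starts at offset 40, spills 16 bytes into block 1, and the source module is handed block 1's address for the tail of the write.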
Joseph Roy Hasting, Orefield, PA US
Patent application number | Description | Published |
---|---|---|
20110019814 | VARIABLE SIZED HASH OUTPUT GENERATION USING A SINGLE HASH AND MIXING FUNCTION - A system and circuit for generating a variable sized hash output using a single hash and mixing function are disclosed. In one embodiment, a system for generating a variable sized hash output data includes a hash function module for generating an N bit hash result data by processing an M bit input data. The system also includes a mixing function module including a plurality of logic gates which implement a set of reversible arithmetic functions for generating an N bit hash output data by processing the N bit hash result data using the set of reversible arithmetic functions, where a subset of the N bit hash output data is used as the variable sized hash output data, and a size of the subset of the N bit hash output data is less than N bits. | 01-27-2011 |
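The idea in this abstract is to run one fixed hash, pass its N-bit result through reversible arithmetic mixing so every output bit depends on the whole result, then truncate to the requested width. A hedged sketch with N = 32, using SHA-256 as a stand-in hash and a multiply-xor-shift mixer (the patent's actual hash and mixing functions are not specified here; these choices are assumptions):

```python
import hashlib

MASK32 = 0xFFFFFFFF

def mix32(x):
    """Reversible arithmetic mixing: each step (xor-shift, odd-constant
    multiply mod 2^32) is invertible, in the spirit of the described
    mixing-function module."""
    x = (x ^ (x >> 16)) & MASK32
    x = (x * 0x45D9F3B) & MASK32   # odd multiplier, so invertible
    x = (x ^ (x >> 16)) & MASK32
    return x

def variable_hash(data, out_bits):
    """Single hash + mixing; a subset of the mixed N-bit result
    (here N = 32) serves as the variable-sized hash output."""
    assert 1 <= out_bits <= 32
    n_bit_result = int.from_bytes(hashlib.sha256(data).digest()[:4], "big")
    mixed = mix32(n_bit_result)
    return mixed & ((1 << out_bits) - 1)
```

Because every requested width is a truncation of the same mixed result, a 12-bit output is, by construction, the low 12 bits of the 20-bit output for the same input; the mixing step is what keeps those low bits well distributed.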
William Hasting, Cincinnati, OH US
Patent application number | Description | Published |
---|---|---|
20120156029 | LOW-DUCTILITY TURBINE SHROUD FLOWPATH AND MOUNTING ARRANGEMENT THEREFOR - A turbine flowpath apparatus is provided for a gas turbine engine having a centerline axis. The apparatus includes: an annular flowpath member of low-ductility material, the flowpath member having a flowpath surface and an opposed back surface, and having a cross-sectional shape comprising a generally cylindrical forward section and an aft section that extends aft and radially outward at a non-perpendicular, non-parallel angle to the centerline axis; an annular stationary structure surrounding the flowpath member; and an annular centering spring disposed between the stationary structure and the flowpath member, the centering spring urging the flowpath member towards a centered position within the stationary structure. | 06-21-2012 |