Patent application number | Description | Published |
20080320226 | Apparatus and Method for Improved Data Persistence within a Multi-node System - Improved access to retained data useful to a system is accomplished by managing data flow through cache associated with the processor(s) of a multi-node system. A data management facility operable with the processors and memory array directs the flow of data from the processors to the memory array by determining the path along which data evicted from a level of cache close to one of the processors is to return to a main memory and directing evicted data to be stored, if possible, in a horizontally associated cache. | 12-25-2008 |
20090006693 | Apparatus and Method for Fairness Arbitration for a Shared Pipeline in a Large SMP Computer System - A modification of rank priority arbitration for access to computer system resources through a shared pipeline that provides more equitable arbitration by allowing a higher ranked requester access to the shared resource ahead of a lower ranked requester only one time. If multiple requests are active at the same time, the rank priority logic will first select the highest priority active request and grant it access to the resource. It will also set a ‘blocking latch’ to prevent that higher priority requester from re-gaining access to the resource until the rest of the outstanding lower priority active requesters have had a chance to access the resource. | 01-01-2009
20090164874 | Collecting Failure Information On Error Correction Code (ECC) Protected Data - Methods and means of error correction code (ECC) debugging may comprise detecting whether a bit error has occurred; determining which bit or bits were in error; and using the bit error information for debug. The method may further comprise comparing ECC syndromes against one or more ECC syndrome patterns. The method may allow for accumulating bit error information, comparing error bit failures against a pattern, trapping data, counting errors, determining pick/drop information, or stopping the machine for debug. | 06-25-2009 |
20090193192 | Method and Process for Expediting the Return of Line Exclusivity to a Given Processor Through Enhanced Inter-node Communications - Cache coherency latency is reduced through a method and apparatus that expedites the return of line exclusivity to a given processor in a multi-node data handling system through enhanced inter-node communications. | 07-30-2009 |
20130036341 | COLLECTING FAILURE INFORMATION ON ERROR CORRECTION CODE (ECC) PROTECTED DATA - Methods and means of error correction code (ECC) debugging may comprise detecting whether a bit error has occurred; determining which bit or bits were in error; and using the bit error information for debug. The method may further comprise comparing ECC syndromes against one or more ECC syndrome patterns. The method may allow for accumulating bit error information, comparing error bit failures against a pattern, trapping data, counting errors, determining pick/drop information, or stopping the machine for debug. | 02-07-2013 |
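The fairness arbitration scheme of application 20090006693 above lends itself to a short software model. The sketch below is a hypothetical simplification (the class and its method names are illustrative, not from the patent): the highest-ranked active requester wins, and a blocking latch then keeps it out of arbitration until every lower-ranked requester it beat has had a turn at the resource.

```python
class FairnessArbiter:
    """Toy model of rank-priority arbitration with a blocking latch
    (after application 20090006693). Lower index = higher rank."""

    def __init__(self):
        self.blocked = set()   # requesters latched out of arbitration
        self.waiting = set()   # lower-ranked requesters still owed a turn

    def grant(self, active):
        """Pick one winner from a set of active requester indices."""
        eligible = sorted(r for r in active if r not in self.blocked)
        if not eligible:
            # Every active requester is latched out: reset and retry.
            self.blocked.clear()
            self.waiting.clear()
            eligible = sorted(active)
        winner = eligible[0]
        self.waiting.discard(winner)
        lower = {r for r in active if r > winner}
        if lower:
            # Latch the winner until the lower-ranked requesters it
            # beat have each had a chance at the resource.
            self.blocked.add(winner)
            self.waiting |= lower
        if not self.waiting:
            self.blocked.clear()
        return winner
```

With two requesters contending every cycle, grants alternate 0, 1, 0, 1 rather than the highest rank winning every time, matching the "only one time" behavior the abstract describes.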
Patent application number | Description | Published |
20110320695 | MITIGATING BUSY TIME IN A HIGH PERFORMANCE CACHE - Various embodiments of the present invention mitigate busy time in a hierarchical store-through memory cache structure. In one embodiment, a cache directory associated with a memory cache is divided into a plurality of portions each associated with a portion memory cache. Simultaneous cache lookup operations and cache write operations between the plurality of portions of the cache directory are supported. Two or more store commands are simultaneously processed in a shared cache pipeline communicatively coupled to the plurality of portions of the cache directory. | 12-29-2011 |
20110320697 | DYNAMICALLY SUPPORTING VARIABLE CACHE ARRAY BUSY AND ACCESS TIMES - Various embodiments of the present invention manage access to a cache memory. In one or more embodiments, a request for a targeted interleave within a cache memory is received. The request is associated with an operation of a given type. The targeted interleave is determined to be available, and the request is granted in response. In response to granting the request, a first interleave availability table associated with a first busy time of the cache memory is updated based on the operation associated with the request, and a second interleave availability table associated with a second busy time of the cache memory is likewise updated. | 12-29-2011
20130060997 | MITIGATING BUSY TIME IN A HIGH PERFORMANCE CACHE - Various embodiments of the present invention mitigate busy time in a hierarchical store-through memory cache structure. In one embodiment, a cache directory associated with a memory cache is divided into a plurality of portions each associated with a portion memory cache. Simultaneous cache lookup operations and cache write operations between the plurality of portions of the cache directory are supported. Two or more store commands are simultaneously processed in a shared cache pipeline communicatively coupled to the plurality of portions of the cache directory. | 03-07-2013 |
20130061002 | PERFORMANCE OPTIMIZATION AND DYNAMIC RESOURCE RESERVATION FOR GUARANTEED COHERENCY UPDATES IN A MULTI-LEVEL CACHE HIERARCHY - A cache includes a cache pipeline, a request receiver configured to receive off-chip coherency requests from an off-chip cache, and a plurality of state machines coupled to the request receiver. The cache also includes an arbiter, coupled between the plurality of state machines and the cache pipeline, that is configured to give priority to off-chip coherency requests, as well as a counter configured to count the number of coherency requests sent from the cache pipeline to a lower level cache. The cache pipeline is halted from sending coherency requests when the counter exceeds a predetermined limit. | 03-07-2013
20130339593 | REDUCING PENALTIES FOR CACHE ACCESSING OPERATIONS - A computer program product for reducing penalties for cache accessing operations is provided. The computer program product includes a tangible storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method. The method includes respectively associating platform registers with cache arrays, loading control information and data of a store operation to be executed with respect to one or more of the cache arrays into the one or more of the platform registers respectively associated with the one or more of the cache arrays, and, based on the one or more of the cache arrays becoming available, committing the data from the one or more of the platform registers using the control information from the same platform registers to the one or more of the cache arrays. | 12-19-2013 |
20130339606 | REDUCING STORE OPERATION BUSY TIMES - A computer product for reducing store operation busy times is provided. The associated method includes associating first and second platform registers with a cache array, determining that first and second store operations target the same wordline of the cache array, loading control information and data of the store operations into the platform registers, and delaying a commit of the first store operation until the loading of the second platform register is complete. The method further includes committing the data from the platform registers, using the control information from the platform registers, to the wordline of the cache array at the same time, thereby reducing a busy time of the wordline of the cache array. | 12-19-2013
20130339607 | REDUCING STORE OPERATION BUSY TIMES - A computer product for reducing store operation busy times is provided. The computer product includes a tangible storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method. The method includes associating first and second platform registers with a cache array, determining that first and second store operations target a same wordline of the cache array, loading control information and data of the first and second store operation into the first and second platform registers and delaying a commit of the first store operation until the loading of the second platform register is complete. The method further includes committing the data from the first and second platform registers using the control information from the first and second platform registers to the wordline of the cache array at a same time to thereby reduce a busy time of the wordline of the cache array. | 12-19-2013 |
20130339622 | CACHE COHERENCY PROTOCOL FOR ALLOWING PARALLEL DATA FETCHES AND EVICTION TO THE SAME ADDRESSABLE INDEX - A technique for cache coherency is provided. A cache controller selects a first set from multiple sets in a congruence class based on a cache miss for a first transaction, and places a lock on the entire congruence class in which the lock prevents other transactions from accessing the congruence class. The cache controller designates in a cache directory the first set with a marked bit indicating that the first transaction is working on the first set, and the marked bit for the first set prevents the other transactions from accessing the first set within the congruence class. The cache controller removes the lock on the congruence class based on the marked bit being designated for the first set, and resets the marked bit for the first set to an unmarked bit based on the first transaction completing work on the first set in the congruence class. | 12-19-2013 |
20130339623 | CACHE COHERENCY PROTOCOL FOR ALLOWING PARALLEL DATA FETCHES AND EVICTION TO THE SAME ADDRESSABLE INDEX - A technique for cache coherency is provided. A cache controller selects a first set from multiple sets in a congruence class based on a cache miss for a first transaction, and places a lock on the entire congruence class in which the lock prevents other transactions from accessing the congruence class. The cache controller designates in a cache directory the first set with a marked bit indicating that the first transaction is working on the first set, and the marked bit for the first set prevents the other transactions from accessing the first set within the congruence class. The cache controller removes the lock on the congruence class based on the marked bit being designated for the first set, and resets the marked bit for the first set to an unmarked bit based on the first transaction completing work on the first set in the congruence class. | 12-19-2013 |
20130339808 | BITLINE DELETION - Embodiments relate to a method including detecting a first error when reading a first cache line, recording a first address of the first error, detecting a second error when reading a second cache line, and recording a second address of the second error. Embodiments also include comparing the first and second bitline addresses, comparing the first and second wordline addresses, activating a bitline delete mode based on the first and second bitline addresses matching and the first and second wordline addresses not matching, detecting a third error when reading a third cache line, recording a third bitline address of the third error, comparing the second bitline address to the third bitline address, and deleting a location corresponding to the third cache line from available cache locations based on the activated bitline delete mode and the third bitline address matching the second bitline address. | 12-19-2013
20130339809 | BITLINE DELETION - Embodiments relate to a computer system for bitline deletion, the system including a cache controller and cache. The system is configured to perform a method including detecting a first error when reading a first cache line, recording a first address of the first error, detecting a second error when reading a second cache line, recording a second address of the second error, comparing the first and second bitline addresses, comparing the first and second wordline addresses, activating a bitline delete mode based on matching first and second bitline addresses and non-matching first and second wordline addresses, detecting a third error when reading a third cache line, recording a third bitline address of the third error, comparing the second bitline address to the third bitline address, and deleting a location corresponding to the third cache line based on the activated bitline delete mode and matching third and second bitline addresses. | 12-19-2013
20130339822 | BAD WORDLINE/ARRAY DETECTION IN MEMORY - A technique for error detection is provided. A controller is configured to detect errors by using error correcting code (ECC), and a cache includes independent ECC words for storing data. The controller detects the errors in the ECC words for a wordline that is read. The controller detects a first error in a first ECC word on the wordline and a second error in a second ECC word on the wordline. The controller determines that the wordline is a failing wordline based on detecting the first error in the first ECC word and the second error in the second ECC word. | 12-19-2013 |
20130339823 | BAD WORDLINE/ARRAY DETECTION IN MEMORY - A technique for error detection is provided. A controller is configured to detect errors by using error correcting code (ECC), and a cache includes independent ECC words for storing data. The controller detects the errors in the ECC words for a wordline that is read. The controller detects a first error in a first ECC word on the wordline and a second error in a second ECC word on the wordline. The controller determines that the wordline is a failing wordline based on detecting the first error in the first ECC word and the second error in the second ECC word. | 12-19-2013 |
20140095795 | REDUCING PENALTIES FOR CACHE ACCESSING OPERATIONS - A computer program product for reducing penalties for cache accessing operations is provided. The computer program product includes a tangible storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method. The method includes respectively associating platform registers with cache arrays, loading control information and data of a store operation to be executed with respect to one or more of the cache arrays into the one or more of the platform registers respectively associated with the one or more of the cache arrays, and, based on the one or more of the cache arrays becoming available, committing the data from the one or more of the platform registers using the control information from the same platform registers to the one or more of the cache arrays. | 04-03-2014 |
20140095839 | MONITORING PROCESSING TIME IN A SHARED PIPELINE - A pipelined processing device includes: a pipeline controller configured to receive at least one instruction associated with an operation from each of a plurality of subcontrollers, and input the at least one instruction into a pipeline; and a pipeline counter configured to receive an active time value from each of the plurality of subcontrollers, the active time value indicating at least a portion of a time taken to process the at least one instruction, the pipeline controller configured to route the active time value to a shared pipeline storage for performance analysis. | 04-03-2014 |
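The bitline-deletion flow described in applications 20130339808 and 20130339809 above can likewise be sketched as a small state machine. The model below is illustrative only (the class, its fields, and the error-reporting interface are assumptions, not taken from the patents): two errors on the same bitline but different wordlines suggest a bad bitline rather than a bad cache line and arm a delete mode, after which a third error on that bitline deletes the affected cache location from use.

```python
class BitlineDeleteMonitor:
    """Toy model of the bitline-delete flow in applications
    20130339808/20130339809. Names and interface are illustrative."""

    def __init__(self):
        self.prev = None            # (wordline, bitline) of the last error
        self.delete_mode = False    # armed after two same-bitline errors
        self.deleted = set()        # cache locations removed from use

    def record_error(self, wordline, bitline, location):
        """Process one ECC error; return True if `location` was deleted."""
        if self.delete_mode and self.prev and bitline == self.prev[1]:
            # Third error on the same bitline: delete the cache location.
            self.deleted.add(location)
            return True
        if self.prev is not None:
            prev_wl, prev_bl = self.prev
            # Same bitline but different wordlines points at a failing
            # bitline rather than a single bad line: arm delete mode.
            if bitline == prev_bl and wordline != prev_wl:
                self.delete_mode = True
        self.prev = (wordline, bitline)
        return False
```

Feeding it errors at (wordline 3, bitline 7), (wordline 9, bitline 7), and (wordline 4, bitline 7) arms delete mode on the second error and deletes the third location.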
Patent application number | Description | Published |
20120014788 | DIFFUSER USING DETACHABLE VANES - A system, in certain embodiments, includes a plurality of detachable, three-dimensional diffuser vanes attached to a diffuser plate of a centrifugal compressor. In certain embodiments, the detachable, three-dimensional diffuser vanes may be attached to the diffuser plate using threaded fasteners. In addition, dowel pins may be used to align the detachable, three-dimensional diffuser vanes with respect to the diffuser plate. However, in other embodiments, the detachable, three-dimensional diffuser vanes may include a tab configured to fit securely within a groove in the diffuser plate. In addition, the tabs of the detachable, three-dimensional diffuser vanes may include indentations that mate with extensions extending from the diffuser plate, wherein the tabs may slide into slots between the extensions and the grooves of the diffuser plate. | 01-19-2012
20120014801 | DIFFUSER HAVING DETACHABLE VANES WITH POSITIVE LOCK - A system, in certain embodiments, includes a centrifugal compressor diffuser that includes an elliptical plate including multiple vane receptacles disposed about an axis of the plate and multiple detachable vanes attached to the plate. Each vane receptacle includes a first two dimensional (2D) projection along a plane of the elliptical plate and each detachable vane includes a second two dimensional (2D) projection along a base portion of the vane, where each detachable vane is disposed in a respective vane receptacle with the first and second 2D projections blocking movement of the detachable vane in at least a first axial direction relative to the elliptical plate. In certain embodiments, the first and second 2D projections may include a first tab to fit in a recess between a pair of second tabs, respectively, or vice versa. However, in other embodiments, the first and second 2D projections may include alternative mating surfaces. | 01-19-2012 |
20130315741 | DIFFUSER HAVING DETACHABLE VANES WITH POSITIVE LOCK - A system, in certain embodiments, includes a centrifugal compressor diffuser that includes an elliptical plate including multiple vane receptacles disposed about an axis of the plate and multiple detachable vanes attached to the plate. Each vane receptacle includes a first two dimensional (2D) projection along a plane of the elliptical plate and each detachable vane includes a second two dimensional (2D) projection along a base portion of the vane, where each detachable vane is disposed in a respective vane receptacle with the first and second 2D projections blocking movement of the detachable vane in at least a first axial direction relative to the elliptical plate. In certain embodiments, the first and second 2D projections may include a first tab to fit in a recess between a pair of second tabs, respectively, or vice versa. However, in other embodiments, the first and second 2D projections may include alternative mating surfaces. | 11-28-2013 |
20140186173 | DIFFUSER USING DETACHABLE VANES - A system, in certain embodiments, includes a plurality of detachable, three-dimensional diffuser vanes attached to a diffuser plate of a centrifugal compressor. In certain embodiments, the detachable, three-dimensional diffuser vanes may be attached to the diffuser plate using threaded fasteners. In addition, dowel pins may be used to align the detachable, three-dimensional diffuser vanes with respect to the diffuser plate. However, in other embodiments, the detachable, three-dimensional diffuser vanes may include a tab configured to fit securely within a groove in the diffuser plate. In addition, the tabs of the detachable, three-dimensional diffuser vanes may include indentations that mate with extensions extending from the diffuser plate, wherein the tabs may slide into slots between the extensions and the grooves of the diffuser plate. | 07-03-2014