Patent application number | Description | Published |
20090198867 | METHOD FOR CHAINING MULTIPLE SMALLER STORE QUEUE ENTRIES FOR MORE EFFICIENT STORE QUEUE USAGE - A computer-implemented method, processor chip, data processing system, and computer program product process information in a store cache of a data processing system. The store cache receives a first entry that includes a first address indicating a first segment of a cache line. The store cache then receives a second entry that includes a second address indicating a second segment of the same cache line. Responsive to the first segment not being equal to the second segment, the first entry is chained to the second entry. (See the sketch after this table.) | 08-06-2009 |
20100262735 | TECHNIQUES FOR TRIGGERING A BLOCK MOVE USING A SYSTEM BUS WRITE COMMAND INITIATED BY USER CODE - A technique for triggering a system bus write command with user code includes identifying a specific store-type instruction in a user instruction sequence. The specific store-type instruction is converted into a specific request-type command, which is configured to include core permission controls (stored in core configuration registers of a processor core by a trusted kernel) and user-created data (stored in a cache memory). Slave devices are configured, through register space accessible only by the trusted kernel, with respective slave permission controls. The specific request-type command is then transmitted from the cache memory via a system bus. The slave devices that receive the specific request-type command over the system bus process it when the core permission controls are the same as their respective slave permission controls. (See the sketch after this table.) | 10-14-2010 |
20100268884 | Updating Partial Cache Lines in a Data Processing System - A processing unit for a data processing system includes a processor core having one or more execution units for processing instructions and a register file for storing data accessed while processing the instructions. The processing unit also includes a multi-level cache hierarchy coupled to and supporting the processor core. The multi-level cache hierarchy includes at least one upper level of cache memory with a lower access latency and at least one lower level of cache memory with a higher access latency. Responsive to receipt of a memory access request that hits only a partial cache line in the lower level cache memory, the lower level cache memory sources the partial cache line to the upper level cache memory to service the memory access request. The upper level cache memory services the memory access request without caching the partial cache line. (See the sketch after this table.) | 10-21-2010 |
20100268887 | INFORMATION HANDLING SYSTEM WITH IMMEDIATE SCHEDULING OF LOAD OPERATIONS IN A DUAL-BANK CACHE WITH DUAL DISPATCH INTO WRITE/READ DATA FLOW - An information handling system (IHS) includes a processor with a cache memory system. The processor includes a processor core with an L1 cache memory that couples to an L2 cache memory. The processor includes an arbitration mechanism that controls load and store requests to the L2 cache memory. The arbitration mechanism includes control logic that enables a load request to interrupt a store request that the L2 cache memory is currently servicing. The L2 cache memory includes dual data banks so that one bank may perform a load operation while the other bank performs a store operation. The cache system provides dual dispatch points into the data flow to the dual cache banks of the L2 cache memory. (The dual-bank arbitration shared with the next entry is sketched after this table.) | 10-21-2010 |
20100268890 | INFORMATION HANDLING SYSTEM WITH IMMEDIATE SCHEDULING OF LOAD OPERATIONS IN A DUAL-BANK CACHE WITH SINGLE DISPATCH INTO WRITE/READ DATA FLOW - An information handling system (IHS) includes a processor with a cache memory system. The processor includes a processor core with an L1 cache memory that couples to an L2 cache memory. The processor includes an arbitration mechanism that controls load and store requests to the L2 cache memory. The arbitration mechanism includes control logic that enables a load request to interrupt a store request that the L2 cache memory is currently servicing. The L2 cache memory includes dual data banks so that one bank may perform a load operation while the other bank performs a store operation. The cache system provides a single dispatch point into the data flow to the dual cache banks of the L2 cache memory. | 10-21-2010 |
20130205096 | FORWARD PROGRESS MECHANISM FOR STORES IN THE PRESENCE OF LOAD CONTENTION IN A SYSTEM FAVORING LOADS BY STATE ALTERATION - A multiprocessor data processing system includes a plurality of cache memories. One of the cache memories issues a read-type operation for a target cache line. While waiting for receipt of the target cache line, the cache memory monitors for a competing store-type operation directed at the target cache line. In response to receiving the target cache line, the cache memory installs the target cache line and sets its coherency state based on whether the competing store-type operation was detected. (See the sketch after this table.) | 08-08-2013 |
20130205099 | FORWARD PROGRESS MECHANISM FOR STORES IN THE PRESENCE OF LOAD CONTENTION IN A SYSTEM FAVORING LOADS BY STATE ALTERATION - A multiprocessor data processing system includes a plurality of cache memories. One of the cache memories issues a read-type operation for a target cache line. While waiting for receipt of the target cache line, the cache memory monitors for a competing store-type operation directed at the target cache line. In response to receiving the target cache line, the cache memory installs the target cache line and sets its coherency state based on whether the competing store-type operation was detected. | 08-08-2013 |
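
Application 20090198867 in the table above chains smaller store queue entries that address different segments of the same cache line. The following is a minimal behavioral sketch of that idea, not the patented hardware: the names (StoreEntry, StoreCache, receive), the line size, and the segment granularity are illustrative assumptions.

```python
# Minimal behavioral model of chaining store queue entries that target
# different segments of the same cache line (after application 20090198867).
# Names, line size, and segment granularity are illustrative assumptions.

LINE_BYTES = 128      # assumed cache line size
SEGMENT_BYTES = 32    # assumed store queue entry (segment) granularity


class StoreEntry:
    def __init__(self, address, data):
        self.address = address
        self.data = data
        self.line = address // LINE_BYTES                        # which cache line
        self.segment = (address % LINE_BYTES) // SEGMENT_BYTES   # which segment
        self.chained_to = None                                   # link to a later entry


class StoreCache:
    def __init__(self):
        self.entries = []

    def receive(self, entry):
        """Enter a store; chain it to an earlier entry for the same cache line
        when the two entries cover different segments."""
        for earlier in reversed(self.entries):
            if (earlier.line == entry.line
                    and earlier.segment != entry.segment
                    and earlier.chained_to is None):
                earlier.chained_to = entry   # chain the smaller entries together
                break
        self.entries.append(entry)

    def gathered_lines(self):
        """Count the distinct cache lines covered by the queued stores; a chain
        of entries represents a single line's worth of work."""
        return len({e.line for e in self.entries})


if __name__ == "__main__":
    cache = StoreCache()
    cache.receive(StoreEntry(address=0x1000, data=b"\xaa" * SEGMENT_BYTES))
    cache.receive(StoreEntry(address=0x1020, data=b"\xbb" * SEGMENT_BYTES))
    first = cache.entries[0]
    print("chained:", first.chained_to is not None)   # True: same line, different segment
    print("lines covered:", cache.gathered_lines())   # 1
```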
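Application 20100262735 gates a request-type command on matching permission controls held by the core and by each slave device. A rough software analogue is shown below; the dataclass fields and the match rule (simple equality of a permission value) are assumptions made for illustration, not the patent's bus protocol.

```python
# Rough analogue of permission-matched command processing (after 20100262735).
# Field names and the equality-based match rule are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class RequestCommand:
    core_permissions: int   # set by a trusted kernel in core configuration registers
    payload: bytes          # user-created data held in the cache


@dataclass
class SlaveDevice:
    name: str
    slave_permissions: int  # configured through kernel-only register space
    received: list

    def snoop(self, cmd: RequestCommand):
        # A slave processes the command only when the core permission controls
        # are the same as its own slave permission controls.
        if cmd.core_permissions == self.slave_permissions:
            self.received.append(cmd.payload)


def broadcast(cmd: RequestCommand, slaves):
    """Model the system bus: every slave sees the command, but only
    permission-matched slaves act on it."""
    for slave in slaves:
        slave.snoop(cmd)


if __name__ == "__main__":
    slaves = [SlaveDevice("dma0", 0x3, []), SlaveDevice("accel0", 0x7, [])]
    broadcast(RequestCommand(core_permissions=0x3, payload=b"block"), slaves)
    for s in slaves:
        print(s.name, "processed" if s.received else "ignored")
```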
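For application 20100268884, the key behavior is that a lower-level cache holding only part of a line forwards that partial line upward so the request can be serviced, while the upper level deliberately does not cache it. The toy model below assumes byte-addressed lines and dictionary-backed caches; those structural choices are illustrative only.

```python
# Toy model of servicing a request from a partial cache line held in a
# lower-level cache (after 20100268884). The structures are illustrative only.

class LowerLevelCache:
    """Slower cache that may hold only some bytes of a line."""
    def __init__(self):
        self.partials = {}  # line address -> {offset: byte}

    def source_partial(self, line_addr):
        return self.partials.get(line_addr)


class UpperLevelCache:
    """Faster cache; in this model it never installs partial lines."""
    def __init__(self, lower):
        self.lines = {}     # line address -> full line data
        self.lower = lower

    def load(self, line_addr, offset):
        if line_addr in self.lines:                  # full-line hit
            return self.lines[line_addr][offset]
        partial = self.lower.source_partial(line_addr)
        if partial is not None and offset in partial:
            # Service the request from the sourced partial line
            # WITHOUT caching it in the upper level.
            return partial[offset]
        raise KeyError("miss: a fuller model would go to memory")


if __name__ == "__main__":
    l2 = LowerLevelCache()
    l2.partials[0x40] = {0: 0xAB, 1: 0xCD}    # only two bytes of the line are present
    l1 = UpperLevelCache(lower=l2)
    print(hex(l1.load(0x40, 1)))               # 0xcd, served from the partial line
    print("cached in upper level:", 0x40 in l1.lines)   # False
```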
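Applications 20100268887 and 20100268890 both describe arbitration in which a load may interrupt a store the L2 is currently servicing, with two data banks letting a load and a store proceed in parallel. The scheduler below is a simplified software stand-in: the bank selection by an address bit, the "interrupt" as re-queuing the store, and the class names are assumptions, and it does not distinguish the dual- versus single-dispatch variants.

```python
# Simplified stand-in for dual-bank cache arbitration in which a load request
# may interrupt an in-progress store (after 20100268887/20100268890).
# Bank selection and the preemption rule are illustrative assumptions.

from collections import deque


class Bank:
    def __init__(self, name):
        self.name = name
        self.active_store = None      # store currently being serviced

    def start_store(self, store):
        self.active_store = store

    def service_load(self, load):
        interrupted = None
        if self.active_store is not None:
            # Control logic lets the load interrupt the store in progress;
            # the store is re-queued to finish later.
            interrupted = self.active_store
            self.active_store = None
        print(f"{self.name}: load {hex(load)} serviced"
              + (f", store {hex(interrupted)} interrupted" if interrupted is not None else ""))
        return interrupted


class DualBankL2:
    def __init__(self):
        self.banks = [Bank("bank0"), Bank("bank1")]
        self.pending_stores = deque()

    def bank_for(self, addr):
        return self.banks[(addr >> 7) & 1]   # assumed: pick a bank by one line-address bit

    def store(self, addr):
        self.bank_for(addr).start_store(addr)

    def load(self, addr):
        requeue = self.bank_for(addr).service_load(addr)
        if requeue is not None:
            self.pending_stores.append(requeue)


if __name__ == "__main__":
    l2 = DualBankL2()
    l2.store(0x1000)      # goes to bank0
    l2.load(0x1080)       # bank1: load proceeds while bank0's store continues
    l2.load(0x1000)       # bank0: load interrupts the in-progress store
    print("stores waiting to resume:", [hex(a) for a in l2.pending_stores])
```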
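Applications 20130205096 and 20130205099 share one idea: while a cache waits for a requested line, it watches for a competing store-type operation, and the coherency state it installs the line with depends on whether that competition was seen. The snippet below expresses only that decision; the two state names and the monitoring interface are assumptions, not the patent's coherence protocol.

```python
# Behavioral sketch: install a received cache line with a coherency state chosen
# by whether a competing store was observed while waiting
# (after 20130205096/20130205099). The state names are illustrative assumptions.

from enum import Enum


class CoherencyState(Enum):
    SHARED = "shared"          # assumed state when a competing store was seen
    EXCLUSIVE = "exclusive"    # assumed state when no competition was seen


class CacheMemory:
    def __init__(self):
        self.lines = {}                     # address -> (data, CoherencyState)
        self.competing_store_seen = set()   # addresses with observed contention

    def issue_read(self, addr):
        """Issue a read-type operation; monitoring starts for this address."""
        self.competing_store_seen.discard(addr)

    def snoop_store(self, addr):
        """Record that another cache issued a store-type operation for addr."""
        self.competing_store_seen.add(addr)

    def receive_line(self, addr, data):
        """Install the target line; its state depends on detected contention."""
        contended = addr in self.competing_store_seen
        state = CoherencyState.SHARED if contended else CoherencyState.EXCLUSIVE
        self.lines[addr] = (data, state)
        return state


if __name__ == "__main__":
    cache = CacheMemory()
    cache.issue_read(0x80)
    cache.snoop_store(0x80)                  # competing store arrives while waiting
    print(cache.receive_line(0x80, b"x"))    # CoherencyState.SHARED
    cache.issue_read(0xC0)
    print(cache.receive_line(0xC0, b"y"))    # CoherencyState.EXCLUSIVE
```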
Patent application number | Description | Published |
20080201533 | REDUCING NUMBER OF REJECTED SNOOP REQUESTS BY EXTENDING TIME TO RESPOND TO SNOOP REQUEST - A cache, system, and method for reducing the number of rejected snoop requests. An incoming snoop request is entered in the first available latch in a pipeline of latches in a stall/reorder unit if the stall/reorder unit is not full. The entered snoop request is dispatched to a selector upon entering the bottom latch in the pipeline. The stall/reorder unit is not informed for several clock cycles after the dispatch whether the dispatched snoop request was accepted by an arbitration mechanism. Upon dispatching the snoop request, a copy of it is stored in the top latch of an overrun pipeline of latches in the stall/reorder unit. By maintaining this information about the snoop request, the snoop request can be dispatched to the selector again if the earlier dispatch was rejected, increasing the chance that the snoop request will ultimately be accepted. (See the sketch after this table.) | 08-21-2008 |
20080201534 | REDUCING NUMBER OF REJECTED SNOOP REQUESTS BY EXTENDING TIME TO RESPOND TO SNOOP REQUEST - A cache, system, and method for reducing the number of rejected snoop requests. A “stall/reorder unit” in a cache receives a snoop request from an interconnect. The snoop request is entered in the first available latch of the stall/reorder unit unless the stall/reorder unit is full, in which case the new snoop request is transmitted to a second unit configured to request a retry of the new snoop request. Snoop requests have a higher priority than requests from processors and are selected by the arbitration mechanism over processor requests unless the arbitration mechanism issues a “stall request” to the stall/reorder unit. Because snoop requests have a higher priority than processor requests, the number of rejected snoop requests is reduced; because the arbitration mechanism can issue a stall request, the processor is not starved. | 08-21-2008 |
20080229022 | EFFICIENT SYSTEM BOOTSTRAP LOADING - An efficient system for bootstrap loading scans cache lines into a cache store queue during a scan phase, and then transmits the cache lines from the cache store queue to a cache memory array during a functional phase. Scan circuitry stores a given cache line in a set of latches associated with one of a plurality of cache entries in the cache store queue, and passes the cache line from the latch set to the associated cache entry. The cache lines may be scanned in from test software that is external to the computer system. Read/claim dispatch logic dispatches store instructions for the cache entries to read/claim machines, which write the cache lines to the cache memory array without obtaining write permission after evaluating a mode bit indicating that the entries in the cache store queue are scanned cache lines. In the illustrative embodiment, the cache memory is an L2 cache. (See the sketch after this table.) | 09-18-2008 |
20080235459 | PROCESSOR, METHOD, AND DATA PROCESSING SYSTEM EMPLOYING A VARIABLE STORE GATHER WINDOW - A processor includes at least one instruction execution unit that executes store instructions to obtain store operations, and a store queue coupled to the instruction execution unit. The store queue includes a queue entry in which the store queue gathers multiple store operations during a store gathering window to obtain the data portion of a write transaction directed to lower level memory. In addition, the store queue includes dispatch logic that varies the size of the store gathering window to optimize store performance for different store behaviors and workloads. (See the sketch after this table.) | 09-25-2008 |
20090006916 | METHOD FOR CACHE CORRECTION USING FUNCTIONAL TESTS TRANSLATED TO FUSE REPAIR - A method of correcting defects in a storage array of a microprocessor, such as a cache memory, by operating the microprocessor to carry out a functional test procedure that utilizes the cache memory, collecting fault data in a trace array during the functional test procedure, identifying the location of the defect in the cache memory using the fault data, and repairing the defect by setting a fuse to reroute access requests for that location to a redundant array. The fault data may include an error syndrome and a failing address. The functional test procedure creates random cache access sequences that cause varying loads of traffic in the cache memory, using a test pattern based on a random seed. The functional test procedure may be carried out after completion of a non-functional built-in self-test of the microprocessor, which sets some of the fuses. (See the sketch after this table.) | 01-01-2009 |
20090083579 | METHOD FOR CACHE CORRECTION USING FUNCTIONAL TESTS TRANSLATED TO FUSE REPAIR - A method of correcting defects in a storage array of a microprocessor, such as a cache memory, by operating the microprocessor to carry out a functional test procedure that utilizes the cache memory, collecting fault data in a trace array during the functional test procedure, identifying the location of the defect in the cache memory using the fault data, and repairing the defect by setting a fuse to reroute access requests for that location to a redundant array. The fault data may include an error syndrome and a failing address. The functional test procedure creates random cache access sequences that cause varying loads of traffic in the cache memory, using a test pattern based on a random seed. The functional test procedure may be carried out after completion of a non-functional built-in self-test of the microprocessor, which sets some of the fuses. | 03-26-2009 |
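
Applications 20080201533 and 20080201534 in the table above describe a stall/reorder unit that buffers incoming snoop requests, keeps a copy of each dispatched request in an overrun pipeline, and re-dispatches it if the arbitration mechanism rejects it; snoops also outrank processor requests unless a stall request is raised. The queue-based model below is a loose software analogue, and its fixed depths, method names, and accept/reject interface are assumptions.

```python
# Loose software analogue of a stall/reorder unit that re-dispatches rejected
# snoop requests and prioritizes snoops over processor requests
# (after 20080201533/20080201534). Depths and interfaces are assumptions.

from collections import deque


class StallReorderUnit:
    def __init__(self, depth=4):
        self.pipeline = deque(maxlen=depth)   # incoming snoop requests
        self.overrun = deque(maxlen=depth)    # copies of dispatched snoops

    def enter(self, snoop):
        if len(self.pipeline) == self.pipeline.maxlen:
            return False          # full: the source must be asked to retry
        self.pipeline.append(snoop)
        return True

    def dispatch(self):
        if not self.pipeline:
            return None
        snoop = self.pipeline.popleft()
        self.overrun.append(snoop)   # keep a copy in case the arbiter rejects it
        return snoop

    def result(self, snoop, accepted):
        self.overrun.remove(snoop)
        if not accepted:
            self.pipeline.appendleft(snoop)   # will be dispatched again later


def arbitrate(snoop, processor_request, stall_requested):
    """Snoops win over processor requests unless a stall request is raised."""
    if snoop is not None and not stall_requested:
        return snoop
    return processor_request


if __name__ == "__main__":
    unit = StallReorderUnit()
    unit.enter("snoop-A")
    s = unit.dispatch()
    print("granted:", arbitrate(s, "proc-load", stall_requested=False))  # snoop wins
    unit.result(s, accepted=False)            # rejected: it will be dispatched again
    print("re-queued:", list(unit.pipeline))  # ['snoop-A']
```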
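Application 20080229022 loads a cache by scanning lines into the store queue and then, during the functional phase, writing them into the array without obtaining write permission because a mode bit marks them as scanned lines. A compact model of that two-phase flow follows; the mode-bit flag and the method names are invented for the example.

```python
# Compact model of scan-phase loading followed by functional-phase writes that
# skip write-permission requests when a mode bit marks the entries as scanned
# lines (after 20080229022). Names and the flag are illustrative assumptions.

class CacheStoreQueue:
    def __init__(self):
        self.entries = []          # (line address, data) pairs
        self.scan_mode = False     # mode bit: entries were scanned in

    def scan_in(self, addr, data):
        """Scan phase: external test software deposits lines via latches."""
        self.scan_mode = True
        self.entries.append((addr, data))


class L2Array:
    def __init__(self):
        self.lines = {}

    def write(self, addr, data, need_permission):
        if need_permission:
            raise RuntimeError("would first obtain write permission via coherence")
        self.lines[addr] = data    # scanned lines are written directly


def functional_phase(queue, array):
    """Read/claim machines drain the queue; the mode bit lets them skip
    obtaining write permission for scanned entries."""
    for addr, data in queue.entries:
        array.write(addr, data, need_permission=not queue.scan_mode)
    queue.entries.clear()


if __name__ == "__main__":
    queue, array = CacheStoreQueue(), L2Array()
    queue.scan_in(0x0, b"boot code")
    functional_phase(queue, array)
    print(array.lines)   # {0: b'boot code'}
```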
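Application 20080235459 varies the length of the window during which a store queue entry keeps gathering stores before the write transaction is sent to lower-level memory. The sketch below uses a simple adaptive rule (grow the window when recent stores gathered well, shrink it otherwise); the rule and every threshold in it are assumptions, since the abstract does not specify the adaptation policy.

```python
# Sketch of a store queue entry with a variable store gather window
# (after 20080235459). The adaptation rule and its thresholds are assumed,
# not taken from the patent.

class GatheringStoreQueue:
    def __init__(self, window=4, min_window=1, max_window=16):
        self.window = window          # cycles an entry stays open for gathering
        self.min_window = min_window
        self.max_window = max_window

    def run_entry(self, store_arrival_cycles):
        """Gather stores that arrive within the current window, then adapt the
        window size based on how well this entry gathered."""
        gathered = [c for c in store_arrival_cycles if c < self.window]
        if len(gathered) >= 3:
            # Dense store traffic: a longer window gathers more per transaction.
            self.window = min(self.window * 2, self.max_window)
        elif len(gathered) <= 1:
            # Sparse traffic: close entries sooner to reduce store latency.
            self.window = max(self.window // 2, self.min_window)
        return len(gathered)


if __name__ == "__main__":
    q = GatheringStoreQueue()
    # Four stores arrive back to back: the entry gathers all of them and grows.
    print(q.run_entry([0, 1, 2, 3]), "gathered; window ->", q.window)
    # Only one store falls inside the window: the window shrinks again.
    print(q.run_entry([0, 20]), "gathered; window ->", q.window)
```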
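The two fuse-repair applications (20090006916 and 20090083579) boil down to: run a functional test that exercises the cache with pseudo-random traffic from a seed, capture the failing address and error syndrome, and set a fuse so accesses to the failing row go to a redundant row instead. The model below captures only that redirection step; the dictionary-based fuse map and the traffic generator are simplifications for illustration.

```python
# Illustrative model of repairing a defective cache row by rerouting its
# accesses to a redundant row via a "fuse" map (after 20090006916/20090083579).
# The dictionary fuse map and the traffic generator are simplifications.

import random


class RepairableCache:
    def __init__(self, rows=16, redundant_rows=2, defective=()):
        self.rows = [0] * rows
        self.redundant = [0] * redundant_rows
        self.defective = set(defective)   # rows that corrupt data
        self.fuse_map = {}                # failing row -> redundant row index

    def access(self, row, value):
        """Write then read back a row, honoring any fuse redirection."""
        if row in self.fuse_map:
            idx = self.fuse_map[row]
            self.redundant[idx] = value
            return self.redundant[idx]
        self.rows[row] = value
        # A defective row reads back with a flipped bit in this model.
        return self.rows[row] ^ 1 if row in self.defective else self.rows[row]

    def functional_test(self, seed):
        """Pseudo-random access sequence derived from a seed; returns the
        first failing row, or None if every row reads back correctly."""
        rng = random.Random(seed)
        for row in rng.sample(range(len(self.rows)), len(self.rows)):
            value = rng.randrange(1, 256)
            if self.access(row, value) != value:
                return row            # failing address captured by the trace array
        return None

    def set_fuse(self, failing_row):
        self.fuse_map[failing_row] = len(self.fuse_map)   # next free redundant row


if __name__ == "__main__":
    cache = RepairableCache(defective={5})
    bad_row = cache.functional_test(seed=42)
    print("failing row:", bad_row)                          # 5
    cache.set_fuse(bad_row)
    print("repaired:", cache.functional_test(seed=42) is None)   # True
```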