Look-ahead

Subclass of:

711 - Electrical computers and digital processing systems: memory

711100000 - STORAGE ACCESSING AND CONTROL

711117000 - Hierarchical memories

711118000 - Caching

Patent class list (only non-empty classes are listed)

Deeper subclasses:

Entries
Document - Title - Date
20090198906 - Techniques for Multi-Level Indirect Data Prefetching - A technique for performing data prefetching using multi-level indirect data prefetching includes determining a first memory address of a pointer associated with a data prefetch instruction. Content that is included in a first data block (e.g., a first cache line of a memory) at the first memory address is then fetched. A second memory address is then determined based on the content at the first memory address. Content that is included in a second data block (e.g., a second cache line) at the second memory address is then fetched (e.g., from the memory or another memory). A third memory address is then determined based on the content at the second memory address. Finally, a third data block (e.g., a third cache line) that includes another pointer or data at the third memory address is fetched (e.g., from the memory or another memory). (08-06-2009)
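The pointer-chasing idea in this abstract can be sketched in a few lines of C. This is a minimal illustration only, not the publication's design: real implementations do the chasing in dedicated hardware at cache-line granularity, and the helper name `prefetch_line` and the chain depth are assumptions.

```c
#include <stddef.h>

/* Minimal software sketch of multi-level indirect (pointer-chasing) prefetch.
 * Hardware overlaps these steps; here the chain is walked explicitly. */
static inline void prefetch_line(const void *addr) {
    __builtin_prefetch(addr, 0 /* read */, 3 /* keep in cache */);
}

/* Walk a pointer chain `levels` deep: the content at each address names the
 * next address, and every data block touched on the way is prefetched. */
void indirect_prefetch(void *const *first_addr, int levels) {
    const void *p = (const void *)first_addr;
    for (int i = 0; i < levels && p != NULL; i++) {
        prefetch_line(p);             /* fetch the block holding the pointer */
        p = *(const void *const *)p;  /* next memory address from content    */
    }
    if (p != NULL)
        prefetch_line(p);             /* final data block */
}
```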
20100011170 - CACHE MEMORY DEVICE - A cache memory device includes an address generation unit, a data memory, a tag memory, and a hit judging unit. The address generation unit generates a prefetch index address included in a prefetch address based on an input address supplied from a higher-level device. The tag memory stores a plurality of tag addresses corresponding to a plurality of line data stored in the data memory. Further, the tag memory comprises a memory component that is configured to receive the prefetch index address and an input index address included in the input address in parallel and to output a first tag address in accordance with the input index address and a second tag address in accordance with the prefetch index address in parallel. The hit judging unit performs cache hit judgment of the input address and the prefetch address based on the first tag address and the second tag address. (01-14-2010)
20130031313 - CACHE ARRANGEMENT - A first cache arrangement including an input configured to receive a memory request from a second cache arrangement; a first cache memory for storing data; an output configured to provide a response to the memory request for the second cache arrangement; and a first cache controller; the first cache controller configured such that for the response to the memory request output by the output, the cache memory includes no allocation for data associated with the memory request. (01-31-2013)
20130031312 - CACHE MEMORY CONTROLLER - A cache memory controller including: a pre-fetch requester configured to issue pre-fetch requests, each pre-fetch request having one of a plurality of different qualities of service. (01-31-2013)
20100077154 - METHOD AND SYSTEM FOR OPTIMIZING PROCESSOR PERFORMANCE BY REGULATING ISSUE OF PRE-FETCHES TO HOT CACHE SETS - A method for pre-fetching data. The method includes obtaining a pre-fetch request. The pre-fetch request identifies new data to pre-fetch from memory and store in a cache. The method further includes identifying a set in the cache to store the new data and identifying a value of a hotness indicator for the set. The hotness indicator value defines a number of replacements of at least one line in the set. The method further includes determining whether the value of the hotness indicator exceeds a predefined threshold, and storing the new data in the set when the value of the hotness indicator does not exceed the predefined threshold. (03-25-2010)
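A per-set replacement counter of the kind this abstract calls a hotness indicator might look like the following C sketch. The cache geometry, line size, and threshold value are assumptions for illustration; the abstract says only "predefined threshold".

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_SETS      1024   /* assumed cache geometry (64-byte lines below) */
#define HOT_THRESHOLD 8      /* assumed value; the abstract leaves it open   */

static uint16_t replacements[NUM_SETS];   /* per-set hotness indicators */

static unsigned set_index(uintptr_t addr) { return (addr >> 6) % NUM_SETS; }

/* Called whenever a line in a set is replaced. */
void note_replacement(uintptr_t addr) { replacements[set_index(addr)]++; }

/* Gate the pre-fetch: skip it when it would land in a hot (thrashing) set. */
bool should_store_prefetch(uintptr_t addr) {
    return replacements[set_index(addr)] <= HOT_THRESHOLD;
}
```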
20090019229 - Data Prefetch Throttle - A system and method taught herein control data prefetching for a data cache by tracking prefetch hits and overall hits for the data cache. Data prefetching for the data cache is disabled based on the tracking of prefetch hits and data prefetching is enabled for the data cache based on the tracking of overall hits. For example, in one or more embodiments, a cache controller is configured to track a prefetch hit rate reflecting the percentage of hits on the data cache that involve prefetched data lines and disable data prefetching if the prefetch hit rate falls below a defined threshold. The cache controller also tracks an overall hit rate reflecting the overall percentage of data cache hits (versus misses) and enables data prefetching if the overall hit rate falls below a defined threshold. (01-15-2009)
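The hysteresis this abstract describes, disable on a weak prefetch hit rate, re-enable on a weak overall hit rate, is easy to show in C. The two thresholds below are assumptions; the abstract says only "defined threshold".

```c
#include <stdbool.h>

#define PREFETCH_HIT_MIN 0.20  /* assumed: disable if <20% of hits are prefetched lines */
#define OVERALL_HIT_MIN  0.90  /* assumed: re-enable if overall hit rate drops below 90% */

struct throttle {
    unsigned hits, misses, prefetch_hits;
    bool prefetch_enabled;
};

void on_cache_access(struct throttle *t, bool hit, bool line_was_prefetched) {
    if (hit) { t->hits++; if (line_was_prefetched) t->prefetch_hits++; }
    else       t->misses++;

    double overall = (double)t->hits / (t->hits + t->misses + 1);
    double from_pf = (double)t->prefetch_hits / (t->hits + 1);

    if (t->prefetch_enabled && from_pf < PREFETCH_HIT_MIN)
        t->prefetch_enabled = false;  /* prefetches are not earning their keep */
    else if (!t->prefetch_enabled && overall < OVERALL_HIT_MIN)
        t->prefetch_enabled = true;   /* misses rising: give prefetch another try */
}
```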
20130042074 - Prefetch Unit - In one embodiment, a processor comprises a prefetch unit coupled to a data cache. The prefetch unit is configured to concurrently maintain a plurality of separate, active prefetch streams. Each prefetch stream is either software initiated via execution by the processor of a dedicated prefetch instruction or hardware initiated via detection of a data cache miss by one or more load/store memory operations. The prefetch unit is further configured to generate prefetch requests responsive to the plurality of prefetch streams to prefetch data into the data cache. (02-14-2013)
20120166734 - PRE-FETCHING IN A STORAGE SYSTEM - A storage system, a non-transitory computer readable medium and a method of pre-fetching. The method may include determining, by a pre-fetch module of the storage system, to fetch a certain data portion from a data storage device of the storage system to a cache memory of the storage system; wherein the certain data portion belongs to a certain statistical segment that belongs to at least one logical volume; determining, by a pre-fetch module of the storage system, to pre-fetch at least one additional data portion to the cache memory based upon input/output (I/O) activity statistics associated with the certain statistical segment; wherein the I/O activity statistics comprise timing information related to I/O activities; fetching the certain data portion; and pre-fetching the at least one additional data portion if it is determined to pre-fetch the at least one additional data portion. (06-28-2012)
20100268892 - Data Prefetcher - In an embodiment, a processor comprises a data cache and a prefetch unit coupled to the data cache. The prefetch unit is configured to detect one or more prefetch streams corresponding to load operations that miss the data cache, and comprises a memory configured to store data corresponding to potential prefetch streams. The prefetch unit is configured to confirm a prefetch stream in response to N or more demand accesses to addresses in the prefetch stream, where N is a positive integer greater than one and is dependent on a prefetch pattern being detected. The prefetch unit comprises a plurality of stream engines, each stream engine configured to generate prefetches for a different prefetch stream assigned to that stream engine. The prefetch unit is configured to assign the confirmed prefetch stream to one of the plurality of stream engines. (10-21-2010)
20090307433 - Cache memory system - Systems and methods for pre-fetching data are disclosed that use a cache memory for storing a copy of data stored in a system memory and a mechanism to initiate a pre-fetch of data from the system memory into the cache memory. The system further comprises an event monitor for monitoring events that is connected to a path on which signals representing an event are transmitted between one or more event generating modules and a processor. In some embodiments, the event monitor initiates a pre-fetch of a portion of data in response to the event monitor detecting an event indicating the availability of the portion of data in the system memory. (12-10-2009)
20130073810 - Memory Sharing Between Embedded Controller and Central Processing Unit Chipset - An embedded controller includes a microcontroller core and memory control circuitry. The memory control circuitry is configured to communicate with a Central Processing Unit (CPU) chipset over a first Serial Peripheral Interface (SPI), for which bus arbitration is not supported, at a first clock rate, to communicate with a memory over a second SPI at a second, fixed clock rate, to relay memory transactions between the CPU chipset and the memory over the first and second SPIs, to identify time intervals in which no memory transactions are relayed on the second SPI and to retrieve from the memory information for operating the microcontroller core during the identified time intervals. (03-21-2013)
20090271576 - DATA PROCESSOR - There is a need for providing a data processor capable of easily prefetching data from a wide range. A central processing unit is capable of performing a specified instruction that adds an offset to a value of a register to generate an effective address for data. This register can be assigned an intended value in accordance with execution of an instruction. A buffer maintains part of instruction streams and data streams stored in memory. The buffer includes cache memories for storing the instruction stream and the data stream. From the memory, the buffer prefetches a data stream containing data corresponding to an effective address designated by the specified instruction stored in the cache memory. A data prefetch operation is easy because a data stream is prefetched by finding the specified instruction from the fetched instruction stream. Data can be prefetched from a wider range than the use of a PC-relative load instruction. (10-29-2009)
20130067170 - Browser Predictive Caching - A method and computer readable medium are disclosed for predictive caching of web pages for display through a screen of a mobile computing device. A load request is received at a mobile computing device, where the load request includes a current timestamp and an address. The address points to a remote server storing a current copy of the address content. The mobile computing device determines whether an existing copy of the address content is pre-cached on the mobile computing device. The mobile computing device determines whether a difference between the current timestamp and a pre-cache timestamp is greater than a heuristic timeliness value. If it is, the mobile computing device pre-caches the current copy of the address content from the remote server at the address on the mobile computing device. The mobile computing device then provides the current copy of the address content for display on its screen. (03-14-2013)
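The freshness test at the heart of this abstract, comparing the timestamp difference against a heuristic timeliness value, reduces to a few lines. The 300-second value below is an assumption; the abstract does not specify one.

```c
#include <stdbool.h>
#include <stddef.h>
#include <time.h>

#define TIMELINESS_SECS 300  /* assumed heuristic timeliness value */

struct precache_entry {
    const char *address;       /* URL whose content was pre-cached      */
    time_t      pre_cache_ts;  /* timestamp taken when it was pre-cached */
};

/* Pre-cache again only when no copy exists or the existing copy is stale. */
bool needs_refresh(const struct precache_entry *e, time_t now) {
    return e == NULL || difftime(now, e->pre_cache_ts) > TIMELINESS_SECS;
}
```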
20110022806 - METHOD AND SYSTEM OF NUMERICAL ANALYSIS FOR CONTINUOUS DATA - A method of numerical analysis for continuous data includes: providing a temporary storage block; fetching a plurality of data units sequentially from the continuous data to store in the temporary storage block; conducting an analysis step in which the first of the data units is analyzed based on all the data units stored in the temporary storage block and the analysis result is recorded; determining whether the end of the continuous data has been reached, if so, the method terminates; and if not, removing the first of the data units, fetching the next data unit from the continuous data and returning to the analysis step. The above-mentioned method can be implemented in hardware with less temporary storage space and read/write overheads. A system of numerical analysis for continuous data is also disclosed. (01-27-2011)
20120198176 - PREFETCHING OF NEXT PHYSICALLY SEQUENTIAL CACHE LINE AFTER CACHE LINE THAT INCLUDES LOADED PAGE TABLE ENTRY - A microprocessor includes a translation lookaside buffer, a request to load a page table entry into the microprocessor generated in response to a miss of a virtual address in the translation lookaside buffer, and a prefetch unit. The prefetch unit receives a physical address of a first cache line that includes the requested page table entry and responsively generates a request to prefetch into the microprocessor a second cache line that is the next physically sequential cache line to the first cache line. (08-02-2012)
20120203975 - AUTOMATIC DETERMINATION OF READ-AHEAD AMOUNT - Read-ahead of data blocks in a storage system is performed based on a policy. The policy is stochastically selected from a plurality of policies according to probabilities. The probabilities are calculated based on past performances, also referred to as rewards. Policies which induce better performance may be given precedence over other policies. However, the other policies may also be utilized, to reevaluate them. A balance between exploration of different policies and exploitation of previously discovered good policies may be achieved. (08-09-2012)
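One simple way to get the exploration/exploitation balance the abstract describes is to pick each policy with probability proportional to its running reward. This sketch is an assumption about the selection rule; the abstract commits only to stochastic selection driven by past performance.

```c
#include <stdlib.h>

#define NUM_POLICIES 4

/* Running reward (past performance) per read-ahead policy; kept positive. */
static double reward[NUM_POLICIES] = {1.0, 1.0, 1.0, 1.0};

/* Pick a policy with probability proportional to its reward, so good
 * policies dominate while weaker ones keep being re-evaluated. */
int pick_policy(void) {
    double total = 0.0;
    for (int i = 0; i < NUM_POLICIES; i++) total += reward[i];
    double r = ((double)rand() / RAND_MAX) * total;
    for (int i = 0; i < NUM_POLICIES; i++) {
        if (r < reward[i]) return i;
        r -= reward[i];
    }
    return NUM_POLICIES - 1;
}

/* Fold the observed performance of policy `p` back into its reward. */
void update_reward(int p, double observed) {
    reward[p] = 0.9 * reward[p] + 0.1 * observed;  /* exponential moving average */
}
```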
20110219195 - PRE-FETCHING OF DATA PACKETS - Some of the embodiments of the present disclosure provide a method comprising receiving a data packet, and storing the received data packet in a memory; generating a descriptor for the data packet, the descriptor including information for fetching at least a portion of the data packet from the memory; and in advance of a processing core requesting the at least a portion of the data packet to execute a processing operation on the at least a portion of the data packet, fetching the at least a portion of the data packet to a cache based at least in part on information in the descriptor. Other embodiments are also described and claimed. (09-08-2011)
20110283067 - Target Memory Hierarchy Specification in a Multi-Core Computer Processing System - Target memory hierarchy specification in a multi-core computer processing system is provided including a system for implementing prefetch instructions. The system includes a first core processor, a dedicated cache corresponding to the first core processor, and a second core processor. The second core processor includes instructions for executing a prefetch instruction that specifies a memory location and the dedicated local cache corresponding to the first core processor. Executing the prefetch instruction includes retrieving data from the memory location and storing the retrieved data on the dedicated local cache corresponding to the first core processor. (11-17-2011)
20110219196 - MEMORY HUB WITH INTERNAL CACHE AND/OR MEMORY ACCESS PREDICTION - A computer system includes a memory hub for coupling a processor to a plurality of synchronous dynamic random access memory ("SDRAM") devices. The memory hub includes a processor interface coupled to the processor and a plurality of memory interfaces coupled to respective SDRAM devices. The processor interface is coupled to the memory interfaces by a switch. Each of the memory interfaces includes a memory controller, a cache memory, and a prediction unit. The cache memory stores data recently read from or written to the respective SDRAM device so that it can be subsequently read by the processor with relatively little latency. The prediction unit prefetches data from an address from which a read access is likely based on a previously accessed address. (09-08-2011)
20100011169 - CACHE MEMORY - Disclosed are a cache memory, a design structure, and a corresponding method for improving cache performance, comprising one or more cache lines of equal size, each cache line adapted to store a cache block of data from a main memory in response to an access request from a processor; and a predict buffer, of size equal to the size of the cache lines, configured to store a next block of data from said main memory in response to a predict-fetch signal generated using at least one previous access request. (01-14-2010)
20100268893 - Data Prefetcher that Adjusts Prefetch Stream Length Based on Confidence - In an embodiment, a processor comprises a data cache and a prefetch unit coupled to the data cache. The prefetch unit is configured to identify a prefetch stream in cache misses from the data cache, and the prefetch unit is configured to issue prefetches predicted by the prefetch stream to prefetch data into the data cache. More particularly, the prefetch unit implements one or more stream engines that generate prefetches for respective prefetch streams. Each stream engine is configured to maintain limit data that indicates a number of prefetches that are permitted to be outstanding beyond a most recent demand access. The stream engine is configured to increase the limit responsive to the number of demand accesses that consume prefetched data at least equaling the limit. (10-21-2010)
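The limit-raising rule in the last two sentences can be captured in a handful of lines. The cap value is an assumption; the abstract only states that the limit grows when consumed demand accesses reach it.

```c
struct stream_engine {
    unsigned limit;     /* prefetches allowed beyond the most recent demand access */
    unsigned consumed;  /* demand accesses that consumed prefetched data           */
};

#define LIMIT_MAX 16    /* assumed cap; not specified in the abstract */

/* Called when a demand access consumes a line this stream prefetched. */
void on_prefetch_consumed(struct stream_engine *s) {
    if (++s->consumed >= s->limit && s->limit < LIMIT_MAX) {
        s->limit++;     /* confidence grew: run further ahead of demand */
        s->consumed = 0;
    }
}
```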
20100115206 - STORAGE DEVICE PREFETCH SYSTEM USING DIRECTED GRAPH CLUSTERS - A system analyzes access patterns in a storage system. Logic circuitry in the system identifies different address regions of contiguously accessed memory locations. A statistical record identifies a number of storage accesses to the different address regions and a historical record identifies previous address regions accessed prior to the address regions currently being accessed. The logic circuitry is then used to prefetch data from the different address regions according to the statistical record and the historical record. (05-06-2010)
20100268894 - Prefetch Unit - In one embodiment, a processor comprises a prefetch unit coupled to a data cache. The prefetch unit is configured to concurrently maintain a plurality of separate, active prefetch streams. Each prefetch stream is either software initiated via execution by the processor of a dedicated prefetch instruction or hardware initiated via detection of a data cache miss by one or more load/store memory operations. The prefetch unit is further configured to generate prefetch requests responsive to the plurality of prefetch streams to prefetch data into the data cache. (10-21-2010)
20090150618 - STRUCTURE FOR HANDLING DATA ACCESS - A design structure embodied in a machine readable storage medium for designing, manufacturing, and/or testing a design is provided. The design structure generally includes a computer system that includes a CPU, a storage device, circuitry for providing a speculative access threshold corresponding to a selected percentage of the total number of accesses to the storage device that can be speculatively issued, and circuitry for intermixing demand accesses and speculative accesses in accordance with the speculative access threshold. (06-11-2009)
20120110269 - PREFETCH INSTRUCTION - Techniques are disclosed relating to prefetching data from memory. In one embodiment, an integrated circuit may include a processor containing an execution core and a data cache. The execution core may be configured to receive an instance of a prefetch instruction that specifies a memory address from which to retrieve data. In response to the instance of the instruction, the execution core retrieves data from the memory address and stores it in the data cache, regardless of whether the data corresponding to that particular memory address is already stored in the data cache. In this manner, the data cache may be used as a prefetch buffer for data in memory buffers where coherence has not been maintained. (05-03-2012)
20120036327 - DYNAMIC LOOK-AHEAD EXTENT MIGRATION FOR TIERED STORAGE ARCHITECTURES - A method for migrating extents between extent pools in a tiered storage architecture maintains a data access profile for an extent over a period of time. Using the data access profile, the method generates an extent profile graph that predicts data access rates for the extent into the future. The slope of the extent profile graph is calculated and used to determine whether the extent will reach a migration threshold within a specified "look-ahead" time. If so, the method calculates a migration window that allows the extent to be migrated prior to reaching the migration threshold. In certain embodiments, the method determines the overall performance impact on the source extent pool and destination extent pool during the migration window. If the overall performance impact is below a designated impact threshold, the method migrates the extent during the migration window. A corresponding apparatus and computer program product are also disclosed herein. (02-09-2012)
20090187715 - Prefetch Termination at Powered Down Memory Bank Boundary in Shared Memory Controller - A prefetch scheme in a shared memory multiprocessor disables the prefetch when an address falls within a powered down memory bank. A register stores a bit corresponding to each independently powered memory bank to determine whether that memory bank is prefetchable. When a memory bank is powered down, all bits corresponding to the pages in this row are masked so that they appear as non-prefetchable pages to the prefetch access generation engine, preventing an access to any page in this memory bank. A powered down status bit corresponding to the memory bank is used for masking the output of the prefetch enable register. The prefetch enable register is unmodified. This also seamlessly restores the prefetch property of the memory banks when the corresponding memory row is powered up. (07-23-2009)
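The masking trick here, leave the enable register alone and gate its output with the power status, is a pair of bitwise operations. The bank count is an assumption for illustration.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_BANKS 32         /* assumed number of independently powered banks */

static uint32_t prefetch_enable;  /* 1 bit per bank: pages are prefetchable    */
static uint32_t powered_down;     /* 1 bit per bank: bank currently powered off */

/* The enable register itself is never modified; the powered-down status only
 * masks its output, so enables reappear as soon as the bank powers back up. */
bool bank_prefetchable(unsigned bank) {
    uint32_t visible = prefetch_enable & ~powered_down;
    return (visible >> bank) & 1u;
}

void set_bank_power(unsigned bank, bool on) {
    if (on) powered_down &= ~(1u << bank);
    else    powered_down |=  (1u << bank);
}
```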
20110173398 - TWO DIFFERENT PREFETCHING COMPLEMENTARY ENGINES OPERATING SIMULTANEOUSLY - A prefetch system improves a performance of a parallel computing system. The parallel computing system includes a plurality of computing nodes. A computing node includes at least one processor and at least one memory device. The prefetch system includes at least one stream prefetch engine and at least one list prefetch engine. The prefetch system operates those engines simultaneously. After the at least one processor issues a command, the prefetch system passes the command to a stream prefetch engine and a list prefetch engine. The prefetch system operates the stream prefetch engine and the list prefetch engine to prefetch data that will be needed by the processor in subsequent clock cycles, in response to the passed command. (07-14-2011)
20110173397 - PROGRAMMABLE STREAM PREFETCH WITH RESOURCE OPTIMIZATION - A stream prefetch engine performs data retrieval in a parallel computing system. The engine receives a load request from at least one processor. The engine evaluates whether a first memory address requested in the load request is present and valid in a table. The engine checks whether there exists valid data corresponding to the first memory address in an array if the first memory address is present and valid in the table. The engine increments a prefetching depth of a first stream that the first memory address belongs to and fetches a cache line associated with the first memory address from the at least one cache memory device if there is not yet valid data corresponding to the first memory address in the array. The engine determines whether prefetching of additional data is needed for the first stream within its prefetching depth. The engine prefetches the additional data if the prefetching is needed. (07-14-2011)
20090094417 - System and Method for Dynamically Inserting Prefetch Tags by the Web Server - A method and system for embedding prefetch tags in the HTML of a user-requested webpage so that, after delivery of the user-requested webpage to the user, the proxy can cache webpages that the user is likely to request. After the browser issues a request for a webpage to the proxy, the proxy passes the request to the web server. The web server obtains the webpage and embeds prefetch tags into the HTML of the webpage. The selection of prefetch tags is determined by a personalization database or log/statistics database in the web server. The web server sends the user-requested webpage back to the user through the proxy. The proxy reads the prefetch tags and prefetches the webpages identified in the prefetch tags. The webpages identified in the prefetch tags are stored in the proxy cache memory so that they can be quickly sent to the user upon request. (04-09-2009)
20090271577 - PEER-TO-PEER NETWORK CONTENT OBJECT INFORMATION CACHING - In a peer-to-peer network system, a local node communicates with a remote node on which detailed information about content objects resides and optionally, the content objects reside. The local node uses caching, message request resizing and predictive message requesting to speed response time to user requests and internal control node requests. (10-29-2009)
20090287884 - INFORMATION PROCESSING SYSTEM AND INFORMATION PROCESSING METHOD - An information processing system performs a prefetch for predicting data that is likely to be accessed by a central processing unit, reading the predicted data from a main memory, and storing the data in a cache area in advance. The information processing system includes a usage information storage unit that stores therein usage information indicating whether prefetched data has been accessed; and a usage information writing unit that writes the usage information of the prefetched data in the usage information storage unit. (11-19-2009)
20090254711 - Reducing Cache Pollution of a Software Controlled Cache - Reducing cache pollution of a software controlled cache is provided. A request is received to prefetch data into the software controlled cache. A first designator for a first cache access is set to a first value. If there is a second cache access to prefetch, a determination is made as to whether data associated with the second cache access exists in the software controlled cache. If the data is in the software controlled cache, a determination is made as to whether a second value of a second designator is greater than the first value of the first cache access. If the second value fails to be greater than the first value, the positions of the first cache access and the second cache access in a cache line are swapped. The first value is decremented by a predetermined amount and the second value is replaced to equal the first value. (10-08-2009)
20120297144 - DYNAMIC HIERARCHICAL MEMORY CACHE AWARENESS WITHIN A STORAGE SYSTEM - A computing device-implemented method for implementing dynamic hierarchical memory cache (HMC) awareness within a storage system is described. Specifically, when performing dynamic read operations within a storage system, a data module evaluates a data prefetch policy according to a strategy of determining if data exists in a hierarchical memory cache and thereafter amending the data prefetch policy, if warranted. The system then uses the data prefetch policy to perform a read operation from the storage device to minimize future data retrievals from the storage device. Further, in a distributed storage environment that includes multiple storage nodes cooperating to satisfy data retrieval requests, dynamic hierarchical memory cache awareness can be implemented for every storage node without degrading the overall performance of the distributed storage environment. (11-22-2012)
20120297142 - DYNAMIC HIERARCHICAL MEMORY CACHE AWARENESS WITHIN A STORAGE SYSTEM - Described is a system and computer program product for implementing dynamic hierarchical memory cache (HMC) awareness within a storage system. Specifically, when performing dynamic read operations within a storage system, a data module evaluates a data prefetch policy according to a strategy of determining if data exists in a hierarchical memory cache and thereafter amending the data prefetch policy, if warranted. The system then uses the data prefetch policy to perform a read operation from the storage device to minimize future data retrievals from the storage device. Further, in a distributed storage environment that includes multiple storage nodes cooperating to satisfy data retrieval requests, dynamic hierarchical memory cache awareness can be implemented for every storage node without degrading the overall performance of the distributed storage environment. (11-22-2012)
20110208919 - CACHING BASED ON SPATIAL DISTRIBUTION OF ACCESSES TO DATA STORAGE DEVICES - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for quantifying a spatial distribution of accesses to storage systems and for determining spatial locality of references to storage addresses in the storage systems, are described. In one aspect, a method includes determining a measure of spatial distribution of accesses to a data storage system based on multiple distinct groups of accesses to the data storage system, and adjusting a caching policy used for the data storage system based on the determined measure of spatial distribution. (08-25-2011)
20110173396 - Performing High Granularity Prefetch from Remote Memory into a Cache on a Device without Change in Address - Provided is a method, which may be performed on a computer, for prefetching data over an interface. The method may include receiving a first data prefetch request for first data of a first data size stored at a first physical address corresponding to a first virtual address. The first data prefetch request may include second data specifying the first virtual address and third data specifying the first data size. The first virtual address and the first data size may define a first virtual address range. The method may also include converting the first data prefetch request into a first data retrieval request. To convert the first data prefetch request into a first data retrieval request, the first virtual address specified by the second data may be translated into the first physical address. The method may further include issuing the first data retrieval request at the interface, receiving the first data at the interface and storing at least a portion of the received first data in a cache. Storing may include setting each of one or more cache tags associated with the at least a portion of the received first data to correspond to the first physical address. (07-14-2011)
20110010506 - DATA PREFETCHER WITH MULTI-LEVEL TABLE FOR PREDICTING STRIDE PATTERNS - A data prefetcher includes a table of entries to maintain a history of load operations. Each entry stores a tag and a corresponding next stride. The tag comprises a concatenation of first and second strides. The next stride comprises the first stride. The first stride comprises a first cache line address subtracted from a second cache line address. The second stride comprises the second cache line address subtracted from a third cache line address. The first, second and third cache line addresses each comprise a memory address of a cache line implicated by respective first, second and third temporally preceding load operations. Control logic calculates a current stride by subtracting a previous cache line address from a new load cache line address, looks up in the table a concatenation of a previous stride and the current stride, and prefetches a cache line using the hitting table entry's next stride. (01-13-2011)
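A table keyed by the concatenation of two successive strides might be organized as below. The table size and the 16-bit stride fields are assumptions; the abstract specifies only that the tag is the concatenation of the two preceding strides and that the stored next stride is the first of the pair.

```c
#include <stdbool.h>
#include <stdint.h>

#define TABLE_SIZE 256       /* assumed table capacity */

struct entry { uint32_t tag; int32_t next_stride; bool valid; };
static struct entry table[TABLE_SIZE];

/* Tag = concatenation of the two preceding strides (16 bits each here). */
static uint32_t make_tag(int32_t s1, int32_t s2) {
    return ((uint32_t)(uint16_t)s1 << 16) | (uint16_t)s2;
}

/* Train on three successive loads a1, a2, a3: s1 = a2-a1, s2 = a3-a2.
 * Per the abstract the stored next stride is s1 (a repeating s1,s2 pattern). */
void train(int32_t s1, int32_t s2) {
    struct entry *e = &table[make_tag(s1, s2) % TABLE_SIZE];
    e->tag = make_tag(s1, s2); e->next_stride = s1; e->valid = true;
}

/* On a new load: look up (previous stride, current stride) and report the
 * predicted next cache-line stride on a table hit. */
bool predict(int32_t prev_stride, int32_t cur_stride, int32_t *next) {
    uint32_t tag = make_tag(prev_stride, cur_stride);
    struct entry *e = &table[tag % TABLE_SIZE];
    if (e->valid && e->tag == tag) { *next = e->next_stride; return true; }
    return false;
}
```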
20080244188 - INFORMATION RECORDING APPARATUS AND CONTROL METHOD THEREOF - According to one embodiment, an information recording apparatus has a control unit configured to control mutual transfer of information between each of a disc-shaped recording medium, a cache memory, and a non-volatile memory and the outside, control mutual transfer of information between the disc-shaped recording medium, the cache memory, and the non-volatile memory, and control to set a substituting region corresponding to a defect region generated in the disc-shaped recording medium in the non-volatile memory. (10-02-2008)
20080250208 - System and Method for Improving the Page Crossing Performance of a Data Prefetcher - A system and method for improving the page crossing performance of a data prefetcher is presented. A prefetch engine tracks times at which a data stream terminates due to a page boundary. When a certain percentage of data streams terminate at page boundaries, the prefetch engine sets an aggressive profile flag. In turn, when the data prefetch engine receives a real address that corresponds to the beginning/end of a new page, and the aggressive profile flag is set, the prefetch engine uses an aggressive startup profile to generate and schedule prefetches on the assumption that the real address is highly likely to be the continuation of a long data stream. As a result, the system and method minimize latency when crossing real page boundaries when a program is predominantly accessing long streams. (10-09-2008)
20120297143 - DATA SUPPLY DEVICE, CACHE DEVICE, DATA SUPPLY METHOD, AND CACHE METHOD - A data supply device includes an output unit, a fetch unit including a storage region for storing data and configured to supply data stored in the storage region to the output unit, and a prefetch unit configured to request, from an external device, data to be transmitted to the output unit. The fetch unit is configured to store data received from the external device in a reception region, which is a portion of the storage region, and, according to a request from the prefetch unit, to assign, as a transmission region, the reception region where data corresponding to the request is stored. The output unit is configured to output data stored in the region assigned as the transmission region by the fetch unit. (11-22-2012)
20080313408 - LOW LATENCY MEMORY ACCESS AND SYNCHRONIZATION - A low latency memory system access is provided in association with a weakly-ordered multiprocessor system. Each processor in the multiprocessor shares resources, and each shared resource has an associated lock within a locking device that provides support for synchronization between the multiple processors in the multiprocessor and the orderly sharing of the resources. A processor only has permission to access a resource when it owns the lock associated with that resource, and an attempt by a processor to own a lock requires only a single load operation, rather than a traditional atomic load followed by store, such that the processor only performs a read operation and the hardware locking device performs a subsequent write operation rather than the processor. A simple prefetching scheme for non-contiguous data structures is also disclosed. A memory line is redefined so that, in addition to the normal physical memory data, every line includes a pointer that is large enough to point to any other line in the memory, wherein the pointers determine which memory line to prefetch rather than some other predictive algorithm. This enables hardware to effectively prefetch memory access patterns that are non-contiguous, but repetitive. (12-18-2008)
20080250209 - Tagged sequential read operations - In some embodiments, a storage device comprises a processor, a memory module communicatively connected to the processor, and logic instructions in the memory module which, when executed by the processor, configure the processor to receive a read input/output operation and to prefetch disk data into cache memory in response to a prefetch tag embedded in the read input/output operation. (10-09-2008)
20120144123 - READ-AHEAD PROCESSING IN NETWORKED CLIENT-SERVER ARCHITECTURE - Various embodiments for read-ahead processing in a networked client-server architecture by a processor device are provided. Read messages are grouped by a plurality of unique sequence identifications (IDs), where each of the sequence IDs corresponds to a specific read sequence, consisting of all read and read-ahead requests related to a specific storage segment that is being read sequentially by a thread of execution in a client application. The storage system uses the sequence ID value in order to identify and filter read-ahead messages that are obsolete when received by the storage system, as the client application has already moved to read a different storage segment. Basically, a message is discarded when its sequence ID value is less recent than the most recent value already seen by the storage system. The sequence IDs are used by the storage system to determine corresponding read-ahead data to be loaded into a read-ahead cache. (06-07-2012)
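The discard rule in this abstract, drop any read-ahead whose sequence ID is older than the most recent one seen, is a one-comparison filter. The monotonically increasing 64-bit ID is an assumption about representation.

```c
#include <stdbool.h>
#include <stdint.h>

/* Most recent sequence ID the storage system has seen for this client stream.
 * IDs are assumed to increase monotonically as the client starts new sequences. */
static uint64_t latest_seq_id;

/* A read-ahead message is obsolete when the client has already moved on to a
 * more recent sequence (i.e., a different storage segment). */
bool accept_read_ahead(uint64_t msg_seq_id) {
    if (msg_seq_id < latest_seq_id)
        return false;              /* discard: belongs to an abandoned sequence */
    latest_seq_id = msg_seq_id;
    return true;
}
```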
20120144125 - Instruction for Pre-Fetching Data and Releasing Cache Lines - A prefetch data machine instruction having an M field performs a function on a cache line of data specifying an address of an operand. The operation comprises either prefetching a cache line of data from memory to a cache, or reducing the access ownership of the cache line in the cache to store-and-fetch or fetch-only, or a combination thereof. The address of the operand is either based on a register value or the program counter value pointing to the prefetch data machine instruction. (06-07-2012)
20080270706 - Block Reallocation Planning During Read-Ahead Processing - A data storage system pre-fetches data blocks from a mass storage device, then determines whether reallocation of the pre-fetched blocks would improve access to them. If access would be improved, the pre-fetched blocks are written to different areas of the mass storage device. Several different implementations of such data storage systems are described. (10-30-2008)
20110271058 - Method, system and apparatus for identifying a cache line - A method of identifying a cache line of a cache memory ... (11-03-2011)
20110208918 - MOVE ELIMINATION AND NEXT PAGE PREFETCHER - Methods and apparatus relating to hardware move elimination and/or next page prefetching are described. In some embodiments, logic may provide hardware move elimination based on stored data. In an embodiment, a next page prefetcher is disclosed. Other embodiments are also described and claimed. (08-25-2011)
20090138661 - PREFETCH INSTRUCTION EXTENSIONS - A computer system and method. In one embodiment, a computer system comprises a processor and a cache memory. The processor executes a prefetch instruction to prefetch a block of data words into the cache memory. In one embodiment, the cache memory comprises a plurality of cache levels. The processor selects one of the cache levels based on a value of a prefetch instruction parameter indicating the temporal locality of data to be prefetched. In a further embodiment, individual words are prefetched from non-contiguous memory addresses. A single execution of the prefetch instruction allows the processor to prefetch multiple blocks into the cache memory. The number of data words in each block, the number of blocks, an address interval between each data word of each block, and an address interval between each block to be prefetched are indicated by parameters of the prefetch instruction. (05-28-2009)
20110225371 - DATA PREFETCH FOR SCSI REFERRALS - A method for communication between an initiator system and a storage cluster. The method comprises receiving an initial I/O request from the initiator system to a first storage system; providing a referral response from the first storage system to the initiator system, the referral response providing information for directing the initiator system to a second storage system; notifying the second storage system regarding the referral response via a prefetch notice, the prefetch notice including an operation type and address information for accessing requested data; when the initial I/O request is a read request, prefetching at least a portion of the requested data stored in the second storage system into a cache; receiving a second I/O request from the initiator system to the second storage system; and providing to the initiator system the portion of the prefetched data from the cache of the second storage system. (09-15-2011)
20090198909 - Jump Starting Prefetch Streams Across Page Boundaries - A method, processor, and data processing system for enabling utilization of a single prefetch stream to access data across a memory page boundary. A prefetch engine includes an active streams table in which information for one or more scheduled prefetch streams is stored. The prefetch engine also includes a victim table for storing a previously active stream whose next prefetch crosses a memory page boundary. The scheduling logic issues a prefetch request with a real address to fetch data from the lower level memory. Then, responsive to detecting that the real address of the stream's next sequential prefetch crosses the memory page boundary, the prefetch engine determines when the first prefetch stream can continue across the page boundary of the first memory page (via an effective address comparison). The prefetch engine automatically reinserts the first prefetch stream into the active streams table to jump-start prefetching across the page boundary. (08-06-2009)
20090198908 - METHOD FOR ENABLING DIRECT PREFETCHING OF DATA DURING ASYNCHRONOUS MEMORY MOVE OPERATION - While an AMM operation is ongoing, a prefetch request for data from the source effective address or the destination effective address triggers a cache injection by the AMM mover (or memory controller) of relevant data from the stream of data being moved in the physical memory. The memory controller forwards the first prefetched line to the prefetch engine and L1 cache. The memory controller also forwards the next cache lines in the sequence of data to the L2 cache and a subsequent set of cache lines to the L3 cache. The memory controller then forwards the remaining data to the destination memory location. Quick access to prefetch data is enabled by buffering the stream of data in the upper caches rather than placing all the moved data within the memory. Also, the memory controller does not overrun the upper caches, by placing moved data into only a subset of the available cache lines of the upper level cache. (08-06-2009)
20090198907 - Dynamic Adjustment of Prefetch Stream Priority - A method, processor, and data processing system for dynamically adjusting a prefetch stream priority based on the consumption rate of the data by the processor. The method includes a prefetch engine issuing a prefetch request of a first prefetch stream to fetch one or more data from the memory subsystem. The first prefetch stream has a first assigned priority that determines a relative order for scheduling prefetch requests of the first prefetch stream relative to other prefetch requests of other prefetch streams. Based on receipt of a processor demand for the data before the data returns to the cache, or return of the data a long time before receiving the processor demand, logic of the prefetch engine dynamically changes the first assigned priority to a second (higher or lower) priority, which priority is subsequently utilized to schedule and issue a next prefetch request of the first prefetch stream. (08-06-2009)
20090198903 - DATA PROCESSING SYSTEM, PROCESSOR AND METHOD THAT VARY AN AMOUNT OF DATA RETRIEVED FROM MEMORY BASED UPON A HINT - In at least one embodiment, a processor detects during execution of program code whether a load instruction within the program code is associated with a hint. In response to detecting that the load instruction is not associated with a hint, the processor retrieves a full cache line of data from the memory hierarchy into the processor in response to the load instruction. In response to detecting that the load instruction is associated with a hint, a processor retrieves a partial cache line of data into the processor from the memory hierarchy in response to the load instruction. (08-06-2009)
20090198905 - Techniques for Prediction-Based Indirect Data Prefetching - A technique for data prefetching using indirect addressing includes monitoring data pointer values, associated with an array, in an access stream to a memory. The technique determines whether a pattern exists in the data pointer values. A prefetch table is then populated with respective entries that correspond to respective array address/data pointer pairs based on a predicted pattern in the data pointer values. Respective data blocks (e.g., respective cache lines) are then prefetched (e.g., from the memory or another memory) based on the respective entries in the prefetch table. (08-06-2009)
20090198904 - Techniques for Data Prefetching Using Indirect Addressing with Offset - A technique for performing data prefetching using indirect addressing includes determining a first memory address of a pointer associated with a data prefetch instruction. Content at the first memory address, included in a first data block (e.g., a first cache line) of a memory, is then fetched. An offset is then added to the content of the memory at the first memory address to provide a first offset memory address. A second memory address is then determined based on the first offset memory address. A second data block (e.g., a second cache line) that includes data at the second memory address is then fetched (e.g., from the memory or another memory). A data prefetch instruction may be indicated by a unique operational code (opcode), a unique extended opcode, or a field (including one or more bits) in an instruction. (08-06-2009)
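The address arithmetic described here, fetch the block holding the pointer, add an offset to its content, then fetch the block at the resulting address, is short enough to show directly. This is an illustrative software rendering under assumed types; in hardware both fetches would be prefetches, whereas in this C sketch the dereference itself is an ordinary load.

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of indirect data prefetching with offset. */
void prefetch_indirect_offset(const uintptr_t *pointer_addr, ptrdiff_t offset) {
    __builtin_prefetch(pointer_addr, 0, 3);               /* first data block  */
    uintptr_t target = *pointer_addr + (uintptr_t)offset; /* offset memory addr */
    __builtin_prefetch((const void *)target, 0, 3);       /* second data block */
}
```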
20090063777 - CACHE SYSTEM - A cache system includes a tag memory having a tag indicating whether data is obtained by prefetch access, a prefetch reliability storage unit having prefetch reliability of each processor, and a tag comparator configured to compare the tag with an access address, instruct the prefetch reliability storage unit to decrease the prefetch reliability if a cache miss occurs for the tag indicating the prefetch access, and erase information indicating the prefetch access and instruct the prefetch reliability storage unit to increase the prefetch reliability if a cache hit occurs for the tag indicating the prefetch access. (03-05-2009)
20110145508 - AUTOMATIC DETERMINATION OF READ-AHEAD AMOUNT - Read-ahead of data blocks in a storage system is performed based on a policy. The policy is stochastically selected from a plurality of policies according to probabilities. The probabilities are calculated based on past performances, also referred to as rewards. Policies which induce better performance may be given precedence over other policies. However, the other policies may also be utilized, to reevaluate them. A balance between exploration of different policies and exploitation of previously discovered good policies may be achieved. (06-16-2011)
20110145507 - METHOD OF REDUCING RESPONSE TIME FOR DELIVERY OF VEHICLE TELEMATICS SERVICES - A method of operating a predictive data cache includes receiving a request for telematics service from a telematics service requester, determining the subject matter of the request, querying a predictive data cache to determine if the predictive data cache includes a service response to the subject matter of the request and, if the predictive data cache includes the service response, then providing the service response to the requester and updating the predictive data cache using the subject matter of the request. The subject matter can include one or more of: an event description, an event period, or an event location based on the request. (06-16-2011)
20120144124 - METHOD AND APPARATUS FOR MEMORY ACCESS UNITS INTERACTION AND OPTIMIZED MEMORY SCHEDULING - A method and an apparatus for modulating the prefetch training of a memory-side prefetch unit (MS-PFU) are described. An MS-PFU trains on memory access requests it receives from processors and their processor-side prefetch units (PS-PFUs). In the method and apparatus, an MS-PFU modulates its training based on one or more of a PS-PFU memory access request, a PS-PFU memory access request type, memory utilization, or the accuracy of MS-PFU prefetch requests. (06-07-2012)
20090055595 - ADJUSTING PARAMETERS USED TO PREFETCH DATA FROM STORAGE INTO CACHE - Provided are a method, system, and article of manufacture for adjusting parameters used to prefetch data from storage into cache. Data units are added from a storage to a cache, wherein requested data from the storage is returned from the cache. A degree of prefetch is processed indicating a number of data units to prefetch into the cache. A trigger distance is processed indicating a prefetched trigger data unit in the cache. The number of data units indicated by the degree of prefetch is prefetched in response to processing the trigger data unit. The degree of prefetch and the trigger distance are adjusted based on a rate at which data units are accessed from the cache. (02-26-2009)
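A degree/trigger-distance scheme of the kind this abstract describes might be sketched as follows. The rate thresholds and the doubling/halving rule are assumptions; the abstract says only that both parameters are adjusted "based on a rate at which data units are accessed".

```c
#include <stdio.h>

#define FAST_RATE  1000.0  /* units/s; assumed tuning points */
#define SLOW_RATE   100.0
#define MAX_DEGREE     64

struct prefetch_params {
    unsigned degree;   /* data units fetched per trigger               */
    unsigned trigger;  /* trigger distance from end of prefetched range */
};

/* Stand-in for the real cache-fill machinery. */
static void issue_prefetch(unsigned first_unit, unsigned count) {
    printf("prefetch units [%u..%u]\n", first_unit, first_unit + count - 1);
}

/* Accessing the trigger data unit fires the next batch; both parameters
 * then adapt to the observed access rate. */
void on_unit_access(struct prefetch_params *p, unsigned pos,
                    unsigned prefetched_end, double access_rate) {
    if (pos == prefetched_end - p->trigger)
        issue_prefetch(prefetched_end + 1, p->degree);

    if (access_rate > FAST_RATE && p->degree < MAX_DEGREE) {
        p->degree *= 2;                /* fast consumer: fetch more, earlier */
        p->trigger++;
    } else if (access_rate < SLOW_RATE && p->degree > 1) {
        p->degree /= 2;                /* slow consumer: back off */
        if (p->trigger > 0) p->trigger--;
    }
}
```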
20090063778 - Storage System and Storage System Control Method - A storage system of the present invention improves the response performance of sequential access to data, the data arrangement of which is expected to be sequential. Data to be transmitted via streaming delivery is stored in a storage section. A host sends data read out from the storage section to respective user machines. A prefetch section reads out from the storage section ahead of time the data to be read out by the host, and stores it in a cache memory. A fragmentation detector detects the extent of fragmentation of the data arrangement in accordance with the cache hit rate. The greater the extent of the fragmentation, the smaller the prefetch quantity calculated by a prefetch quantity calculator. A prefetch operation controller halts a prefetch operation when the extent of data arrangement fragmentation is great, and restarts a prefetch operation when the extent of fragmentation decreases. (03-05-2009)
20090106498 - COHERENT DRAM PREFETCHER - A system and method for obtaining coherence permission for speculative prefetched data. A memory controller stores an address of a prefetch memory line in a prefetch buffer. Upon allocation of an entry in the prefetch buffer a snoop of all the caches in the system occurs. Coherency permission information is stored in the prefetch buffer. The corresponding prefetch data may be stored elsewhere. During a subsequent memory access request for a memory address stored in the prefetch buffer, both the coherency information and prefetched data may be already available and the memory access latency is reduced. (04-23-2009)
20090210630 - Method and Apparatus for Prefetching Data from a Data Structure - A method, apparatus, and computer instructions for providing hardware assistance to prefetch data during execution of code by a processor in the data processing system. In response to loading of an instruction in the code into a cache, a determination is made, by the processor unit, as to whether metadata for a prefetch is present for the instruction. In response to metadata being present for the instruction, data is selectively prefetched from within a data structure, using the metadata, into the cache in the processor. (08-20-2009)
20090222629 - MEMORY SYSTEM - A memory system includes a controller that reads out data written in a nonvolatile second storing area (from which data is read out and in which data is written in a page unit) to a first storing area serving as a cache memory included in a semiconductor memory, and transfers the data to the host apparatus. The controller performs, when a readout request from the host apparatus satisfies a predetermined condition, at least one of a first pre-fetch, for reading out to the first storing area data from a terminal end of the logical address range designated by the readout request being currently processed to a boundary of a logical address aligned in the page unit, and a second pre-fetch, for reading out data from the boundary of the logical address aligned in the page unit to a next boundary of the logical address. (09-03-2009)
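The two pre-fetch ranges in this abstract are plain alignment arithmetic: round the end of the requested range up to the next page boundary, then optionally take one more full page. The 4 KiB page size below is an assumption; flash page sizes are device-specific.

```c
#include <stdint.h>

#define PAGE_SIZE 4096u   /* assumed page unit; device-specific in practice */

/* First pre-fetch: from the terminal end of the requested logical address
 * range up to the next page-aligned boundary (zero length if already aligned). */
void first_prefetch_range(uint32_t req_end, uint32_t *start, uint32_t *len) {
    uint32_t boundary = (req_end + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
    *start = req_end;
    *len   = boundary - req_end;
}

/* Second pre-fetch: from that boundary to the next one, i.e. one full page. */
void second_prefetch_range(uint32_t boundary, uint32_t *start, uint32_t *len) {
    *start = boundary;
    *len   = PAGE_SIZE;
}
```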
20100161906 - PRE-FETCHING VIRTUAL ENVIRONMENT IN A VIRTUAL UNIVERSE BASED ON PREVIOUS TRAVERSALS - An approach is provided for pre-fetching of virtual content in a virtual universe based on previous traversals. In one embodiment, there is a pre-fetching tool, including a ranking component configured to rank each of a plurality of parcels of locations previously visited by an avatar according to predefined ranking criteria. The pre-fetching tool further includes a pre-fetching component configured to pre-fetch a virtual content of said parcels of locations based on the ranking. (06-24-2010)
20090276577 - Adaptive caching for high volume extract transform load process - A method, system, and medium related to a mechanism to cache key-value pairs of a lookup process during an extract transform load process of a manufacturing execution system. The method includes preloading a cache with a subset of a set of key-value pairs stored in source data; receiving a request of a key-value pair; determining whether the requested key-value pair is in the preloaded cache; retrieving the requested key-value pair from the preloaded cache if the requested key-value pair is in the preloaded cache; queuing the requested key-value pair in an internal data structure if the requested key-value pair is not in the preloaded cache until a threshold number of accumulated requested key-value pairs are queued in the internal data structure; and executing a query of the source data for all of the accumulated requested key-value pairs. (11-05-2009)
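The preload-then-batch flow this abstract lays out can be sketched as below. The batch threshold, array-based cache, and linear search are simplifying assumptions for illustration; a real ETL lookup would use a hash map and an actual source-data query.

```c
#include <stddef.h>
#include <stdio.h>
#include <string.h>

#define BATCH_THRESHOLD 64      /* assumed; the abstract says only "threshold" */
#define MAX_PRELOAD   10000

struct kv { const char *key; const char *value; };

static struct kv   preload[MAX_PRELOAD];      /* subset of source key-value pairs */
static size_t      preload_n;
static const char *pending[BATCH_THRESHOLD];  /* queued cache misses */
static size_t      pending_n;

/* Stand-in for one combined query against the source data. */
static void run_batched_query(const char **keys, size_t n) {
    printf("querying source for %zu accumulated keys\n", n);
    (void)keys;
}

/* Serve from the preloaded cache; queue misses until one batched query for
 * all of them is worthwhile. Callers retry after the batch resolves. */
const char *lookup(const char *key) {
    for (size_t i = 0; i < preload_n; i++)
        if (strcmp(preload[i].key, key) == 0)
            return preload[i].value;

    pending[pending_n++] = key;
    if (pending_n == BATCH_THRESHOLD) {
        run_batched_query(pending, pending_n);
        pending_n = 0;
    }
    return NULL;
}
```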
20090077321 - Microprocessor with Improved Data Stream Prefetching - A microprocessor coupled to a system memory by a bus includes an instruction decode unit that decodes an instruction that specifies a data stream in the system memory and a stream prefetch priority. The microprocessor also includes a load/store unit that generates load/store requests to transfer data between the system memory and the microprocessor. The microprocessor also includes a stream prefetch unit that generates a plurality of prefetch requests to prefetch the data stream from the system memory into the microprocessor. The prefetch requests specify the stream prefetch priority. The microprocessor also includes a bus interface unit (BIU) that generates transaction requests on the bus to transfer data between the system memory and the microprocessor in response to the load/store requests and the prefetch requests. The BIU prioritizes the bus transaction requests for the prefetch requests relative to the bus transaction requests for the load/store requests based on the stream prefetch priority. (03-19-2009)
20100153653 - SYSTEM AND METHOD FOR PREFETCHING DATA - The present disclosure is directed towards a prefetch controller configured to communicate with a prefetch cache in order to increase system performance. In some embodiments, the prefetch controller may include an instruction lookup table (ILT) configured to receive a first tuple including a first instruction ID and a first missed data address. The prefetch controller may further include a tuple history queue (THQ) configured to receive an instruction/stride tuple, the instruction/stride tuple generated by subtracting a last data access address from the first missed data address. The prefetch controller may further include a sequence prediction table (SPT) in communication with the tuple history queue (THQ) and the instruction lookup table. The prefetch controller may also include an adder in communication with the instruction lookup table (ILT) and the sequence prediction table (SPT) configured to generate a predicted prefetch address and to provide the predicted prefetch address to a prefetch cache. Numerous other embodiments are also within the scope of the present disclosure. (06-17-2010)
20100250859 - PREFETCHING OF NEXT PHYSICALLY SEQUENTIAL CACHE LINE AFTER CACHE LINE THAT INCLUDES LOADED PAGE TABLE ENTRY - A microprocessor includes a cache memory, a load unit, and a prefetch unit, coupled to the load unit. The load unit is configured to receive a load request that includes an indicator that the load request is loading a page table entry. The prefetch unit is configured to receive from the load unit a physical address of a first cache line that includes the page table entry specified by the load request. The prefetch unit is further configured to responsively generate a request to prefetch into the cache memory a second cache line. The second cache line is the next physically sequential cache line to the first cache line. In an alternate embodiment, the second cache line is the previous physically sequential cache line to the first cache line rather than the next physically sequential cache line to the first cache line. (09-30-2010)
20100241811 - Multiprocessor Cache Prefetch With Off-Chip Bandwidth Allocation - Technologies are generally described for allocating available prefetch bandwidth among processor cores in a multiprocessor computing system. The prefetch bandwidth associated with an off-chip memory interface of the multiprocessor may be determined, partitioned, and allocated across multiple processor cores. (09-23-2010)
20100070716 - PROCESSOR AND PREFETCH SUPPORT PROGRAM - A processor loads a program from a main memory, detects a register updating instruction, and registers the address of the register updating instruction in a register-producer table storing unit. Moreover, the processor loads the program to detect a memory access instruction, compares a register number utilized by the detected memory access instruction with a register-producer table to specify an address generation instruction, and rewrites an instruction corresponding to the address generation instruction. (03-18-2010)
20100199045 - STORE-TO-LOAD FORWARDING MECHANISM FOR PROCESSOR RUNAHEAD MODE OPERATION - A system and method to optimize runahead operation for a processor without use of a separate explicit runahead cache structure. Rather than simply dropping store instructions in a processor runahead mode, store instructions write their results in an existing processor store queue, although store instructions are not allowed to update processor caches and system memory. Use of the store queue during runahead mode to hold store instruction results allows more recent runahead load instructions to search retired store queue entries in the store queue for matching addresses to utilize data from the retired, but still searchable, store instructions. Retired store instructions may be either runahead store instructions that have retired, or store instructions that retired before entering runahead mode. (08-05-2010)
20100191918Cache Controller Device, Interfacing Method and Programming Method Using the Same - Disclosed are a cache controller device, an interfacing method and a programming method using the same. The cache controller device, which prefetches and supplies data distributed in a memory to a main processor, includes: a cache temporarily storing data in a memory block having a limited size; a cache controller circularly reading out the data from the memory block to a cache memory, or transferring the data from the cache memory to the cache; and a memory input/output controller controlling prefetching the data to the cache, or transferring the data from the cache to a memory.07-29-2010
20110238923COMBINED L2 CACHE AND L1D CACHE PREFETCHER - A microprocessor includes a first-level cache memory, a second-level cache memory, and a data prefetcher that detects a predominant direction and pattern of recent memory accesses presented to the second-level cache memory and prefetches cache lines into the second-level cache memory based on the predominant direction and pattern. The data prefetcher also receives from the first-level cache memory an address of a memory access received by the first-level cache memory, wherein the address implicates a cache line. The data prefetcher also determines one or more cache lines indicated by the pattern beyond the implicated cache line in the predominant direction. The data prefetcher also causes the one or more cache lines to be prefetched into the first-level cache memory.09-29-2011
20130132680ADAPTIVE DATA PREFETCH - A method, apparatus and product for data prefetching. The method comprises: prefetching data associated with a load instruction of a computer program, wherein the prefetching is performed in anticipation of performing the load instruction, whereby the data is retained in the cache; detecting whether the prefetched data is invalidated after the prefetching commenced and prior to performing the load instruction; and adaptively determining whether to modify the data prefetching operation associated with the load instruction in response to the detection.05-23-2013
20130132681TEMPORAL STANDBY LIST - In one embodiment, a memory management system temporarily maintains a memory page at an artificially high priority level 05-23-2013
20090037663PROCESSOR EQUIPPED WITH A PRE-FETCH FUNCTION AND PRE-FETCH CONTROL METHOD - A processor equipped with a pre-fetch function comprises: first layer cache memory having a first line size; second layer cache memory that lies below the first layer cache memory and that has a second line size different from the first line size; and a pre-fetch control unit for issuing a pre-fetch request from the first layer cache memory to the second layer cache memory so as to pre-fetch a block equivalent to the first line size for each second line size.02-05-2009
20100306477STORE PREFETCHING VIA STORE QUEUE LOOKAHEAD - Systems and methods for efficient handling of store misses. A processor comprises a store queue that stores data for committed store instructions. Coupled to the store queue is a cache responsible for ensuring consistent ordering of store operations for all consumers, which may be accomplished by requiring that a corresponding cache line be in an exclusive state before executing a store operation. In response to a first committed store instruction missing in the cache, the store queue is configured to convey to the cache a second entry of the plurality of queue entries as a speculative prefetch instruction. This second entry corresponds to a committed store instruction that follows in program order the first committed store instruction of a given thread. If the prefetch instruction misses in the cache, the latency for acquiring a corresponding cache line overlaps with the latency of the first store instruction.12-02-2010
20130013867DATA PREFETCHER MECHANISM WITH INTELLIGENT DISABLING AND ENABLING OF A PREFETCHING FUNCTION - A data prefetcher includes a controller to control operation of the data prefetcher. The controller receives data associated with cache misses and data associated with events that do not rely on a prefetching function of the data prefetcher. The data prefetcher also includes a counter to maintain a count associated with the data prefetcher. The count is adjusted in a first direction in response to detection of a cache miss, and in a second direction in response to detection of an event that does not rely on the prefetching function. The controller disables the prefetching function when the count reaches a threshold value.01-10-2013
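A minimal sketch of the counter scheme this abstract describes, assuming a saturating count, a threshold of 256, and re-enabling only once misses pull the count back to zero (the re-enable rule is an assumption; the abstract specifies only the disable side):

```c
#include <stdio.h>
#include <stdbool.h>

#define DISABLE_THRESHOLD 256   /* assumed threshold value */

struct prefetcher_ctl {
    int  count;
    bool enabled;
};

/* A cache miss indicates the prefetching function is needed:
 * move the count away from the disable threshold. */
static void on_cache_miss(struct prefetcher_ctl *p)
{
    if (p->count > 0 && --p->count == 0)
        p->enabled = true;
}

/* An event that does not rely on prefetching moves the count toward
 * the threshold; on reaching it, the prefetching function is disabled. */
static void on_non_prefetch_event(struct prefetcher_ctl *p)
{
    if (p->count < DISABLE_THRESHOLD && ++p->count == DISABLE_THRESHOLD)
        p->enabled = false;
}

int main(void)
{
    struct prefetcher_ctl p = { 0, true };
    for (int i = 0; i < DISABLE_THRESHOLD; i++)
        on_non_prefetch_event(&p);
    printf("enabled after quiet phase: %d\n", p.enabled);   /* 0 */
    on_cache_miss(&p);
    printf("count after a miss: %d\n", p.count);            /* 255 */
    return 0;
}
```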
20090187714MEMORY HUB AND ACCESS METHOD HAVING INTERNAL PREFETCH BUFFERS - A memory module includes a memory hub coupled to several memory devices. The memory hub includes history logic that predicts, on the basis of read memory requests, the addresses in the memory devices from which data are likely to be subsequently read. The history logic applies prefetch suggestions corresponding to the predicted addresses to a memory sequencer, which uses the prefetch suggestions to generate prefetch requests that are coupled to the memory devices. Data read from the memory devices responsive to the prefetch suggestions are stored in a prefetch buffer. Tag logic stores prefetch addresses corresponding to addresses from which data have been prefetched. The tag logic compares the memory request addresses to the prefetch addresses to determine if the requested read data are stored in the prefetch buffer. If so, the requested data are read from the prefetch buffer. Otherwise, the requested data are read from the memory devices.07-23-2009
20110040941Microprocessor with Improved Data Stream Prefetching - A microprocessor coupled to a system memory by a bus includes an instruction decode unit that decodes an instruction that specifies a data stream in the system memory and a stream prefetch priority. The microprocessor also includes a load/store unit that generates load/store requests to transfer data between the system memory and the microprocessor. The microprocessor also includes a stream prefetch unit that generates a plurality of prefetch requests to prefetch the data stream from the system memory into the microprocessor. The prefetch requests specify the stream prefetch priority. The microprocessor also includes a bus interface unit (BIU) that generates transaction requests on the bus to transfer data between the system memory and the microprocessor in response to the load/store requests and the prefetch requests. The BIU prioritizes the bus transaction requests for the prefetch requests relative to the bus transaction requests for the load/store requests based on the stream prefetch priority.02-17-2011
20100153654DATA PROCESSING METHOD AND DEVICE - In a data-processing method, first result data may be obtained using a plurality of configurable coarse-granular elements, the first result data may be written into a memory that includes spatially separate first and second memory areas and that is connected via a bus to the plurality of configurable coarse-granular elements, the first result data may be subsequently read out from the memory, and the first result data may be subsequently processed using the plurality of configurable coarse-granular elements. In a first configuration, the first memory area may be configured as a write memory, and the second memory area may be configured as a read memory. Subsequent to writing to and reading from the memory in accordance with the first configuration, the first memory area may be configured as a read memory, and the second memory area may be configured as a write memory.06-17-2010
20110145509CACHE DIRECTED SEQUENTIAL PREFETCH - A technique for performing stream detection and prefetching within a cache memory simplifies stream detection and prefetching. A bit in a cache directory or cache entry indicates that a cache line has not been accessed since being prefetched and another bit indicates the direction of a stream associated with the cache line. A next cache line is prefetched when a previously prefetched cache line is accessed, so that the cache always attempts to prefetch one cache line ahead of accesses, in the direction of a detected stream. Stream detection is performed in response to load misses tracked in the load miss queue (LMQ). The LMQ stores an offset indicating a first miss at the offset within a cache line. A next miss to the line sets a direction bit based on the difference between the first and second offsets and causes prefetch of the next line for the stream.06-16-2011
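The LMQ-based detection reduces to comparing two miss offsets within the same line. A rough C sketch, with the LMQ entry shape, a 128-byte line, and the prefetch() hook all assumed:

```c
#include <stdio.h>
#include <stdint.h>

#define LINE_SIZE 128u   /* assumed line size */

/* Assumed shape of a load-miss-queue entry for one outstanding line. */
struct lmq_entry {
    uint64_t line_addr;      /* line base address */
    unsigned first_offset;   /* offset of the first miss within the line */
    int      dir;            /* +1 ascending, -1 descending, 0 unknown */
};

static void prefetch(uint64_t line_addr)
{
    printf("prefetch 0x%llx\n", (unsigned long long)line_addr);
}

/* Second miss to the same line: derive the stream direction from the
 * two offsets and prefetch the next line in that direction. */
static void on_second_miss(struct lmq_entry *e, unsigned second_offset)
{
    e->dir = (second_offset > e->first_offset) ? +1 : -1;
    prefetch(e->line_addr + (int64_t)e->dir * LINE_SIZE);
}

int main(void)
{
    struct lmq_entry e = { 0x1000, 8, 0 };
    on_second_miss(&e, 72);   /* ascending stream: prefetches 0x1080 */
    return 0;
}
```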
20110246722ADAPTIVE BLOCK PRE-FETCHING METHOD AND SYSTEM - A method and system may include fetching a first pre-fetched data block having a first length greater than the length of a first requested data block, storing the first pre-fetched data block in a cache, and then fetching a second pre-fetched data block having a second length, greater than the length of a second requested data block, if data in the second requested data block is not entirely stored in a valid part of the cache. The first and second pre-fetched data blocks may be associated with a storage device over a channel. Other embodiments are described and claimed.10-06-2011
20090100231CACHE MEMORY SYSTEM, AND CONTROL METHOD THEREFOR - A cache memory system which readily accepts software control for processing includes: a cache memory provided between a processor and memory; and a TAC (Transfer and Attribute Controller) for controlling the cache memory. The TAC receives a command which indicates a transfer and an attribute operation of cache data and a target for the operation, resulting from the execution of a predetermined instruction by the processor, and requests the operation indicated by the command, on the target address, from the cache memory.04-16-2009
20100100687Method and Apparatus For Increasing Performance of HTTP Over Long-Latency Links - The invention increases performance of HTTP over long-latency links by pre-fetching objects concurrently via aggregated and flow-controlled channels. An agent and gateway together assist a Web browser in fetching HTTP contents faster from Internet Web sites over long-latency data links. The gateway and the agent coordinate the fetching of selective embedded objects in such a way that an object is ready and available on a host platform before the resident browser requires it. The seemingly instantaneous availability of objects to a browser enables it to finish processing one object and request the next without much waiting. Without this instantaneous availability of an embedded object, a browser must wait for its request and the corresponding response to traverse a long-delay link.04-22-2010
20090216956SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR ENHANCING TIMELINESS OF CACHE PREFETCHING - A system, method, and computer program product for enhancing timeliness of cache memory prefetching in a processing system are provided. The system includes a stride pattern detector to detect a stride pattern for a stride size in an amount of bytes as a difference between successive cache accesses. The system also includes a confidence counter. The system further includes eager prefetching control logic for performing a method when the stride size is less than a cache line size. The method includes adjusting the confidence counter in response to the stride pattern detector detecting the stride pattern, comparing the confidence counter to a confidence threshold, and requesting a cache prefetch in response to the confidence counter reaching the confidence threshold. The system may also include selection logic to select between the eager prefetching control logic and standard stride prefetching control logic.08-27-2009
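In essence: sub-line strides repeat, confidence accumulates, and a prefetch fires at the threshold. A compact sketch under assumed constants (64-byte lines, confidence threshold of 2); the eager aspect is modeled by prefetching the line the stride will reach a few accesses ahead:

```c
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

#define LINE_SIZE      64
#define CONF_THRESHOLD 2     /* assumed confidence threshold */

struct stride_detector {
    uint64_t last_addr;
    int64_t  last_stride;
    int      confidence;
};

static void request_prefetch(uint64_t addr)
{
    printf("prefetch 0x%llx\n", (unsigned long long)addr);
}

/* Feed every cache access through the detector.  When the same
 * sub-line stride repeats often enough, request the line the stride
 * will reach a few accesses from now (the "eager" variant). */
static void on_access(struct stride_detector *d, uint64_t addr)
{
    int64_t stride = (int64_t)(addr - d->last_addr);
    if (stride != 0 && stride == d->last_stride && llabs(stride) < LINE_SIZE) {
        if (++d->confidence >= CONF_THRESHOLD) {
            request_prefetch((addr + stride * CONF_THRESHOLD)
                             & ~(uint64_t)(LINE_SIZE - 1));
            d->confidence = 0;
        }
    } else {
        d->confidence = 0;
    }
    d->last_stride = stride;
    d->last_addr   = addr;
}

int main(void)
{
    struct stride_detector d = { 0, 0, 0 };
    for (uint64_t a = 0x2000; a < 0x2100; a += 24)   /* 24-byte stride */
        on_access(&d, a);
    return 0;
}
```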
20120203974AUTOMATIC DETERMINATION OF READ-AHEAD AMOUNT - Read-ahead of data blocks in a storage system is performed based on a policy. The policy is stochastically selected from a plurality of policies in respect to probabilities. The probabilities are calculated based on past performances, also referred to as rewards. Policies which induce better performance may be given precedence over other policies. However, the other policies may also be utilized so that they can be reevaluated. A balance between exploration of different policies and exploitation of previously discovered good policies may be achieved.08-09-2012
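The balance between exploitation and exploration can be modeled by sampling policies in proportion to their accumulated rewards. A toy illustration (proportional sampling is one plausible reading; the abstract does not fix the probability rule):

```c
#include <stdio.h>
#include <stdlib.h>

#define NPOLICIES 3

/* Past performance ("reward") per read-ahead policy; updated elsewhere. */
static double reward[NPOLICIES] = { 1.0, 1.0, 1.0 };

/* Pick a policy with probability proportional to its accumulated
 * reward, so good policies dominate but the others are still tried. */
static int pick_policy(void)
{
    double total = 0.0;
    for (int i = 0; i < NPOLICIES; i++)
        total += reward[i];
    double r = ((double)rand() / RAND_MAX) * total;
    for (int i = 0; i < NPOLICIES; i++) {
        if (r < reward[i])
            return i;
        r -= reward[i];
    }
    return NPOLICIES - 1;
}

int main(void)
{
    reward[1] = 8.0;   /* policy 1 has performed best so far */
    int counts[NPOLICIES] = { 0 };
    for (int i = 0; i < 1000; i++)
        counts[pick_policy()]++;
    for (int i = 0; i < NPOLICIES; i++)
        printf("policy %d chosen %d times\n", i, counts[i]);
    return 0;
}
```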
20080320229PRE-FETCH CONTROL APPARATUS - A pre-fetch control apparatus is equipped with a next-line pre-fetch control apparatus 12-25-2008
20080320228METHOD AND APPARATUS FOR EFFICIENT REPLACEMENT ALGORITHM FOR PRE-FETCHER ORIENTED DATA CACHE - Disclosed are a method and apparatus for replacing pre-fetched data in a pre-fetch cache. In one embodiment, each line of the pre-fetch cache will be accessed at most M times. A line accessed M times can be evicted from the cache without any performance loss. In this embodiment, a counter is added to each pre-fetch data line to track how many times it has been accessed. In another embodiment, a displacement bit is added to each pre-fetch data line, and when a defined portion of the data line is accessed, this bit is set to a given value, indicating that the line can be evicted.12-25-2008
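The first embodiment amounts to a per-line counter plus a replacement policy that prefers exhausted lines. A small sketch with M = 2 and an assumed 4-way set:

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define MAX_ACCESSES 2   /* M: assumed expected accesses per prefetched line */

/* Assumed shape of a prefetch-cache line with the added counter. */
struct pf_line {
    uint64_t tag;
    bool     valid;
    int      accesses;
};

/* On a hit, bump the counter; after M accesses the line has served
 * its purpose and can be evicted without performance loss. */
static void on_hit(struct pf_line *l)
{
    if (++l->accesses >= MAX_ACCESSES)
        l->valid = false;            /* immediately evictable */
}

/* Replacement: prefer invalid ways, i.e. lines already accessed M times. */
static int pick_victim(struct pf_line *set, int ways)
{
    for (int w = 0; w < ways; w++)
        if (!set[w].valid)
            return w;
    return 0;   /* all ways still live; a real design would fall back to LRU */
}

int main(void)
{
    struct pf_line set[4] = { { 0x1, true, 0 }, { 0x2, true, 0 },
                              { 0x3, true, 0 }, { 0x4, true, 0 } };
    on_hit(&set[2]);
    on_hit(&set[2]);                                   /* line 2 exhausted */
    printf("victim way: %d\n", pick_victim(set, 4));   /* 2 */
    return 0;
}
```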
20080256302Programmable Data Prefetching - A method, computer program product, and system are provided for prefetching data into a cache memory. As a program is executed an object identifier is obtained of a first object of the program. A lookup operation is performed on a data structure to determine if the object identifier is present in the data structure. Responsive to the object identifier being present in the data structure, a referenced object identifier is retrieved that is referenced by the object identifier. Then, the data associated with the referenced object identifier is prefetched from main memory into the cache memory.10-16-2008
20110264864Prefetch Unit - In one embodiment, a processor comprises a prefetch unit coupled to a data cache. The prefetch unit is configured to concurrently maintain a plurality of separate, active prefetch streams. Each prefetch stream is either software initiated via execution by the processor of a dedicated prefetch instruction or hardware initiated via detection of a data cache miss by one or more load/store memory operations. The prefetch unit is further configured to generate prefetch requests responsive to the plurality of prefetch streams to prefetch data into the data cache.10-27-2011
20100293340Wake-and-Go Mechanism with System Bus Response - A wake-and-go mechanism is provided for a data processing system. The wake-and-go mechanism is configured to issue a look-ahead load command on a system bus to read a data value from a target address and perform a comparison operation to determine whether the data value at the target address indicates that an event for which a thread is waiting has occurred. In response to the comparison resulting in a determination that the event has not occurred, the wake-and-go engine populates a wake-and-go storage array with the target address and snoops the target address on the system bus without data exclusivity. In response to the comparison resulting in a determination that the event has occurred, the wake-and-go engine issues a load command on the system bus to read the data value from the target address with data exclusivity.11-18-2010
20100293339DATA PROCESSING SYSTEM, PROCESSOR AND METHOD FOR VARYING A DATA PREFETCH SIZE BASED UPON DATA USAGE - A method of data processing in a processor includes maintaining a usage history indicating demand usage of prefetched data retrieved into cache memory. An amount of data to prefetch by a data prefetch request is selected based upon the usage history. The data prefetch request is transmitted to a memory hierarchy to prefetch the selected amount of data into cache memory.11-18-2010
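One way to turn a demand-usage history into a prefetch size is to scale a maximum size by the fraction of previously prefetched lines that were actually used. The ratio rule below is an assumption; the abstract only requires that the amount be selected from the history:

```c
#include <stdio.h>

/* Assumed bookkeeping: how much of the prefetched data saw demand use. */
struct usage_history {
    unsigned lines_prefetched;
    unsigned lines_used;        /* demand hits on prefetched lines */
};

/* Select the amount of data for the next data prefetch request by
 * scaling a maximum prefetch size by the observed usage ratio. */
static unsigned select_prefetch_lines(const struct usage_history *h,
                                      unsigned max_lines)
{
    if (h->lines_prefetched == 0)
        return 1;                       /* no history yet: be conservative */
    unsigned n = (max_lines * h->lines_used) / h->lines_prefetched;
    return n ? n : 1;
}

int main(void)
{
    struct usage_history h = { .lines_prefetched = 16, .lines_used = 12 };
    printf("prefetch %u lines\n", select_prefetch_lines(&h, 8));   /* 6 */
    return 0;
}
```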
20120311270TIMING-AWARE DATA PREFETCHING FOR MICROPROCESSORS - A method and apparatus for prefetching data from memory for a multicore data processor. A prefetcher issues a plurality of requests to prefetch data from a memory device to a memory cache. Consecutive cache misses are recorded in response to at least two of the plurality of requests. A time between the cache misses is determined and a timing of a further request to prefetch data from the memory device to the memory cache is altered as a function of the determined time between the two cache misses.12-06-2012
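The core of the timing-aware idea: if misses arrive every T cycles and memory takes L cycles, issuing the next prefetch T - L cycles from now lands the data just in time. A sketch with assumed cycle bookkeeping:

```c
#include <stdio.h>
#include <stdint.h>

/* Assumed state: when the last miss occurred and the observed gap
 * between the last two consecutive misses. */
struct miss_timing {
    uint64_t last_miss_cycle;
    uint64_t inter_miss_cycles;
};

static void on_cache_miss(struct miss_timing *t, uint64_t now)
{
    if (t->last_miss_cycle)
        t->inter_miss_cycles = now - t->last_miss_cycle;
    t->last_miss_cycle = now;
}

/* Schedule the next prefetch so it completes just before the demand
 * access is expected; mem_latency is the assumed fetch latency. */
static uint64_t next_prefetch_issue_cycle(const struct miss_timing *t,
                                          uint64_t now, uint64_t mem_latency)
{
    if (t->inter_miss_cycles > mem_latency)
        return now + (t->inter_miss_cycles - mem_latency); /* delay issue */
    return now;                                            /* issue at once */
}

int main(void)
{
    struct miss_timing t = { 0, 0 };
    on_cache_miss(&t, 1000);
    on_cache_miss(&t, 1600);   /* 600 cycles between the two misses */
    printf("issue at cycle %llu\n",   /* 1600 + (600 - 200) = 2000 */
           (unsigned long long)next_prefetch_issue_cycle(&t, 1600, 200));
    return 0;
}
```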
20110035555METHOD AND APPARATUS FOR AFFINITY-GUIDED SPECULATIVE HELPER THREADS IN CHIP MULTIPROCESSORS - Apparatus, system and methods are provided for performing speculative data prefetching in a chip multiprocessor (CMP). Data is prefetched by a helper thread that runs on one core of the CMP while a main program runs concurrently on another core of the CMP. Data prefetched by the helper thread is provided to the helper core. For one embodiment, the data prefetched by the helper thread is pushed to the main core. It may or may not be provided to the helper core as well. A push of prefetched data to the main core may occur during a broadcast of the data to all cores of an affinity group. For at least one other embodiment, the data prefetched by a helper thread is provided, upon request from the main core, to the main core from the helper core's local cache.02-10-2011
20100122037DEVICE AND METHOD FOR GENERATING CACHE USER INITIATED PRE-FETCH REQUESTS - A method for generating cache user initiated pre-fetch requests, the method comprises initiating a sequence of user initiated pre-fetch requests; the method being characterized by: determining the timing of user initiated pre-fetch requests of the sequence of user initiated pre-fetch requests in response to: the timing of an occurrence of a last triggering event, a user initiated pre-fetch sequence delay period and a user initiated pre-fetch sequence rate.05-13-2010
20100122036METHODS AND APPARATUSES FOR IMPROVING SPECULATION SUCCESS IN PROCESSORS - Methods and apparatuses are disclosed for improving speculation success in processors. In some embodiments, the method may include executing a plurality of threads of program code, the plurality of threads comprising a first speculative load request, setting an indicator bit corresponding to a cache line in response to the first speculative load request, and in the event that a second speculative load request from the plurality of threads refers to a first cache line with the indicator bit set, determining if a second cache line is available.05-13-2010
20120151150Cache Line Fetching and Fetch Ahead Control Using Post Modification Information - A method is provided for performing cache line fetching and/or cache fetch ahead in a processing system including at least one processor core and at least one data cache operatively coupled with the processor. The method includes the steps of: retrieving post modification information from the processor core and a memory address corresponding thereto; and the processing system performing, as a function of the post modification information and the memory address retrieved from the processor core, cache line fetching and/or cache fetch ahead control in the processing system.06-14-2012
20100030973CACHE DIRECTED SEQUENTIAL PREFETCH - A technique for performing stream detection and prefetching within a cache memory simplifies stream detection and prefetching. A bit in a cache directory or cache entry indicates that a cache line has not been accessed since being prefetched and another bit indicates the direction of a stream associated with the cache line. A next cache line is prefetched when a previously prefetched cache line is accessed, so that the cache always attempts to prefetch one cache line ahead of accesses, in the direction of a detected stream. Stream detection is performed in response to load misses tracked in the load miss queue (LMQ). The LMQ stores an offset indicating a first miss at the offset within a cache line. A next miss to the line sets a direction bit based on the difference between the first and second offsets and causes prefetch of the next line for the stream.02-04-2010
20100023701CACHE LINE DUPLICATION IN RESPONSE TO A WAY PREDICTION CONFLICT - Embodiments of the present invention provide a system that handles way mispredictions in a multi-way cache. The system starts by receiving requests to access cache lines in the multi-way cache. For each request, the system makes a prediction of a way in which the cache line resides based on a corresponding entry in the way prediction table. The system then checks for the presence of the cache line in the predicted way. Upon determining that the cache line is not present in the predicted way, but is present in a different way, and hence the way was mispredicted, the system increments a corresponding record in a conflict detection table. Upon detecting that a record in the conflict detection table indicates that a number of mispredictions equals a predetermined value, the system copies the corresponding cache line from the way where the cache line actually resides into the predicted way.01-28-2010
20090006762METHOD AND APPARATUS OF PREFETCHING STREAMS OF VARYING PREFETCH DEPTH - A method and apparatus for prefetching streams of varying prefetch depth dynamically change the depth of prefetching so that both the number of concurrent streams and the hit rate of a single stream are optimized. The method and apparatus in one aspect monitor a plurality of load requests from a processing unit for data in a prefetch buffer, determine an access pattern associated with the plurality of load requests and adjust a prefetch depth according to the access pattern.01-01-2009
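A possible shape of the depth-adjustment loop: deepen streams that keep missing, shallow streams that are far ahead, and cap every stream by the buffer share left by the other streams. All constants and the adjustment rule are assumptions:

```c
#include <stdio.h>

#define MIN_DEPTH    1
#define MAX_DEPTH    8
#define BUFFER_LINES 32   /* assumed prefetch buffer capacity */

/* Per-stream statistics gathered from the monitored load requests. */
struct stream {
    int depth;     /* lines prefetched ahead for this stream */
    int hits;      /* loads served from the prefetch buffer */
    int misses;    /* loads the stream failed to cover */
};

/* Periodically adjust depth from the observed access pattern. */
static void adjust_depth(struct stream *s, int active_streams)
{
    int budget = BUFFER_LINES / (active_streams ? active_streams : 1);
    if (s->misses > s->hits && s->depth < MAX_DEPTH)
        s->depth++;                           /* running behind: go deeper */
    else if (s->hits > 4 * s->misses && s->depth > MIN_DEPTH)
        s->depth--;                           /* comfortably ahead: back off */
    if (s->depth > budget)
        s->depth = budget;                    /* don't starve other streams */
    s->hits = s->misses = 0;
}

int main(void)
{
    struct stream s = { 2, 1, 3 };
    adjust_depth(&s, 4);                  /* budget 32/4 = 8 */
    printf("new depth: %d\n", s.depth);   /* 3 */
    return 0;
}
```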
20090172293METHODS FOR PREFETCHING DATA IN A MEMORY STORAGE STRUCTURE - A method includes detecting a cache miss. The method further includes, in response to detecting the cache miss, traversing a plurality of linked memory nodes in a memory storage structure being used to store data to determine if the memory storage structure is a binary tree. The method further includes, in response to determining that the memory storage structure is a binary tree, prefetching data from the memory storage structure. An associated machine readable medium is also disclosed.07-02-2009
20090106499Processor with prefetch function - Non-speculatively prefetched data is prevented from being discarded from a cache memory before being accessed. In a cache memory including a cache control unit for reading data from a main memory into the cache memory and registering the data in the cache memory upon reception of a fill request from a processor and for accessing the data in the cache memory upon reception of a memory instruction from the processor, a cache line of the cache memory includes a registration information storage unit for storing information indicating whether the registered data is written into the cache line in response to the fill request and whether the registered data is accessed by the memory instruction. The cache control unit sets information in the registration information storage unit for performing a prefetch based on the fill request and resets the information for accessing the cache line based on the memory instruction.04-23-2009
20120011325METHODS AND SYSTEMS FOR CACHING DATA USING BEHAVIORAL EVENT CORRELATIONS - A method is disclosed including a client accessing a cache for a value of an object based on an object identification (ID), initiating a request to a cache loader if the cache does not include a value for the object, the cache loader performing a lookup in an object table for the object ID corresponding to the object, the cache loader retrieving, from an execution context table, a vector of execution context IDs that correspond to the object IDs looked up in the object table, and the cache loader performing an execution context lookup in an execution context table for every retrieved execution context ID in the vector to retrieve object IDs from an object vector.01-12-2012
20120072674DOUBLE-BUFFERED DATA STORAGE TO REDUCE PREFETCH GENERATION STALLS - A prefetch unit includes a program prefetch address generator that receives memory read requests and in response to addresses associated with the memory read request generates prefetch addresses and stores the prefetch addresses in slots of the prefetch unit buffer. Each slot includes a buffer for storing a prefetch address, two data buffers for storing data that is prefetched using the prefetch address of the slot, and a data buffer selector for alternating the functionality of the two data buffers. A first buffer is used to hold data that is returned in response to a received memory request, and a second buffer is used to hold data from a subsequent prefetch operation having a subsequent prefetch address, such that the data in the first buffer is not overwritten even when the data in the first buffer is still in the process of being read out.03-22-2012
20110066812TRANSFER REQUEST BLOCK CACHE SYSTEM AND METHOD - The present invention is directed to a transfer request block (TRB) cache system and method. A cache is used to store plural TRBs, and a mapping table is utilized to store corresponding TRB addresses in a system memory. A cache controller pre-fetches the TRBs and stores them in the cache according to the content of the mapping table.03-17-2011
20110066811STORE AWARE PREFETCHING FOR A DATASTREAM - A system and method for efficient data prefetching. A data stream stored in lower-level memory comprises a contiguous block of data used in a computer program. A prefetch unit in a processor detects a data stream by identifying a sequence of storage accesses referencing contiguous blocks of data in a monotonically increasing or decreasing manner. After a predetermined training period for a given data stream, the prefetch unit prefetches a portion of the given data stream from memory without write permission, in response to an access that does not request write permission. Also, after the training period, the prefetch unit prefetches a portion of the given data stream from lower-level memory with write permission, in response to determining there has been a prior access to the given data stream that requests write permission subsequent to a number of cache misses reaching a predetermined threshold.03-17-2011
20100095070INFORMATION PROCESSING APPARATUS AND CACHE MEMORY CONTROL METHOD - An information processing apparatus includes a main memory and a processor. The processor includes: a cache memory that stores data fetched to the cache memory; an instruction processing unit that accesses a part of the data in the cache memory sub block by sub block; an entry holding unit that holds a plurality of entries including a plurality of block addresses and access history information; and a controller that controls fetching of data in a block indicated by one of the entries from the main memory to the cache memory during the access by the instruction processing unit to sub blocks of data in a block indicated by another of the entries immediately preceding the one of the entries, in accordance with the order of the access from the instruction processing unit to sub blocks in the block indicated by the other of the entries and the access history information associated with the one of the entries.04-15-2010
20120166733APPARATUS AND METHOD FOR IMPROVING DATA PREFETCHING EFFICIENCY USING HISTORY BASED PREFETCHING - An apparatus and method are described for performing history-based prefetching. For example a method according to one embodiment comprises: determining if a previous access signature exists in memory for a memory page associated with a current stream; if the previous access signature exists, reading the previous access signature from memory; and issuing prefetch operations using the previous access signature.06-28-2012
20120317364CACHE PREFETCHING FROM NON-UNIFORM MEMORIES - An apparatus is disclosed for performing cache prefetching from non-uniform memories. The apparatus includes a processor configured to access multiple system memories with different respective performance characteristics. Each memory stores a respective subset of system memory data. The apparatus includes caching logic configured to determine a portion of the system memory to prefetch into the data cache. The caching logic determines the portion to prefetch based on one or more of the respective performance characteristics of the system memory that stores the portion of data.12-13-2012
20120185651MEMORY-ACCESS CONTROL CIRCUIT, PREFETCH CIRCUIT, MEMORY APPARATUS AND INFORMATION PROCESSING SYSTEM - Disclosed herein is a memory-access control circuit including: a prefetch-size-changing-command detection section configured to detect a command to change a prefetch size of data transferred from a memory to a prefetch buffer; a transfer-state monitoring section configured to monitor a state of transferring data between the memory and the prefetch buffer; and a prefetch-size changing section configured to immediately change the prefetch size in the prefetch buffer when the command to change the prefetch size is detected and no state of transferring data between the memory and the prefetch buffer is being monitored and to change the prefetch size in the prefetch buffer after completion of the state of transferring data between the memory and the prefetch buffer when the command to change the prefetch size is detected and the state of transferring data between the memory and the prefetch buffer is being monitored.07-19-2012
20120226872PREFETCHING CONTENT OF A DIRECTORY BY EXECUTING A DIRECTORY ACCESS COMMAND - In response to a request to access a directory, a directory access command is invoked and executed, where the executed directory access command accesses the directory and prefetches content of the directory.09-06-2012
20080301375Method, Apparatus, and Program to Efficiently Calculate Cache Prefetching Patterns for Loops - A mechanism is provided that identifies instructions that access storage and may be candidates for cache prefetching. The mechanism augments these instructions so that any given instance of the instruction operates in one of four modes, namely normal, unexecuted, data gathering, and validation. In the normal mode, the instruction merely performs the function specified in the software runtime environment. An instruction in unexecuted mode, upon the next execution, is placed in data gathering mode. When an instruction in the data gathering mode is encountered, the mechanism of the present invention collects data to discover potential fixed storage access patterns. When an instruction is in validation mode, the mechanism of the present invention validates the presumed fixed storage access patterns.12-04-2008
20120265941Prefetching Irregular Data References for Software Controlled Caches - Prefetching irregular memory references into a software controlled cache is provided. A compiler analyzes source code to identify at least one of a plurality of loops that contain an irregular memory reference. The compiler determines if the irregular memory reference within the at least one loop is a candidate for optimization. Responsive to an indication that the irregular memory reference may be optimized, the compiler determines if the irregular memory reference is valid for prefetching. Responsive to an indication that the irregular memory reference is valid for prefetching, a store statement for an address of the irregular memory reference is inserted into the at least one loop. A runtime library call is inserted into a prefetch runtime library for the irregular memory reference. Data associated with the irregular memory reference is prefetched into the software controlled cache when the runtime library call is invoked.10-18-2012
20110131380ALTERING PREFETCH DEPTH BASED ON READY DATA - A system comprises a controller and a buffer accessible to the controller. The controller is configured to prefetch data from a storage medium in advance of such prefetch data being requested by a host device, some of such prefetch data being retrieved from the storage medium and stored in the buffer ready for access by the host device (“ready data”) and a remainder of such prefetch data in process of being retrieved from the storage medium but not yet stored in the buffer (“not ready data”). The controller alters a depth of the prefetch data based on a ratio of the ready data to a combined total of the ready data and not ready data.06-02-2011
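A sketch of one plausible depth policy driven by the ready/not-ready ratio (the abstract specifies the ratio as the input but not the direction of adjustment): deepen when data is arriving well ahead of the host, back off when the medium is the bottleneck:

```c
#include <stdio.h>

/* Alter the prefetch depth from the ratio of ready data to the
 * combined total of ready and not-ready data; the 75% and 25%
 * thresholds are assumptions. */
static int next_depth(int depth, int ready, int not_ready,
                      int min_depth, int max_depth)
{
    int total = ready + not_ready;
    if (total == 0)
        return depth;
    if (4 * ready >= 3 * total && depth < max_depth)   /* >= 75% ready */
        return depth + 1;                              /* stay further ahead */
    if (4 * ready <= total && depth > min_depth)       /* <= 25% ready */
        return depth - 1;                              /* medium lagging */
    return depth;
}

int main(void)
{
    printf("%d\n", next_depth(4, 9, 1, 1, 16));   /* 90% ready -> 5 */
    printf("%d\n", next_depth(4, 1, 9, 1, 16));   /* 10% ready -> 3 */
    return 0;
}
```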
20120239885MEMORY HUB WITH INTERNAL CACHE AND/OR MEMORY ACCESS PREDICTION - A computer system includes a memory hub for coupling a processor to a plurality of synchronous dynamic random access memory (“SDRAM”) devices. The memory hub includes a processor interface coupled to the processor and a plurality of memory interfaces coupled to respective SDRAM devices. The processor interface is coupled to the memory interfaces by a switch. Each of the memory interfaces includes a memory controller, a cache memory, and a prediction unit. The cache memory stores data recently read from or written to the respective SDRAM device so that it can be subsequently read by the processor with relatively little latency. The prediction unit prefetches data from an address from which a read access is likely based on a previously accessed address.09-20-2012
20110238922BOUNDING BOX PREFETCHER - A data prefetcher in a microprocessor having a cache memory receives memory accesses each to an address within a memory block. The access addresses are non-monotonically increasing or decreasing as a function of time. As the accesses are received, the prefetcher maintains a largest address and a smallest address of the accesses and counts of changes to the largest and smallest addresses and maintains a history of recently accessed cache lines implicated by the access addresses within the memory block. The prefetcher also determines a predominant access direction based on the counts and determines a predominant access pattern based on the history. The prefetcher also prefetches into the cache memory, in the predominant access direction according to the predominant access pattern, cache lines of the memory block which the history indicates have not been recently accessed.09-29-2011
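The bounding-box state per memory block is just the two edge line indices, their change counts, and an access-history bitmap. A condensed sketch assuming 64-byte lines in a 4 KB block:

```c
#include <stdio.h>
#include <stdint.h>

#define LINES_PER_BLOCK 64   /* assumed 4 KB block of 64 B lines */

/* Per-block state tracked by the bounding-box prefetcher (assumed). */
struct bbox {
    uint64_t block_base;
    unsigned min_line, max_line;   /* smallest/largest line touched */
    int      min_changes, max_changes;
    uint64_t history;              /* bit i set = line i recently accessed */
};

/* Accesses may arrive non-monotonically; only the box edges and the
 * history bitmap are updated. */
static void on_access(struct bbox *b, uint64_t addr)
{
    unsigned line = (unsigned)((addr - b->block_base) / 64) % LINES_PER_BLOCK;
    if (line < b->min_line) { b->min_line = line; b->min_changes++; }
    if (line > b->max_line) { b->max_line = line; b->max_changes++; }
    b->history |= 1ULL << line;
}

/* Predominant direction: whichever edge of the box moves more often.
 * Prefetch the next not-yet-accessed line past that edge. */
static int predicted_next_line(const struct bbox *b)
{
    int dir  = (b->max_changes >= b->min_changes) ? +1 : -1;
    int next = (dir > 0) ? (int)b->max_line + 1 : (int)b->min_line - 1;
    if (next >= 0 && next < LINES_PER_BLOCK && !(b->history & (1ULL << next)))
        return next;
    return -1;   /* nothing left to prefetch in this block */
}

int main(void)
{
    struct bbox b = { 0x10000, LINES_PER_BLOCK - 1, 0, 0, 0, 0 };
    uint64_t accesses[] = { 0x10080, 0x10040, 0x10140, 0x10100 };
    for (int i = 0; i < 4; i++)
        on_access(&b, accesses[i]);
    printf("next line index: %d\n", predicted_next_line(&b));   /* 6 */
    return 0;
}
```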
20110238921ANTICIPATORY RESPONSE PRE-CACHING - Interaction between a client and a service in which the service responds to requests from the client. In addition to responding to specific client requests, the service also anticipates or speculates about what the client may request in the future. Rather than await the client request (that may or may not ultimately be made), the service provides the unrequested anticipatory data to the client in the same data stream as the response data that actually responds to the specific client requests. The client may then use the anticipatory data to fully or partially respond to future requests from the client, if the client does make the request anticipated by the service. Thus, in some cases, latency may be reduced when responding to requests in which anticipatory data has already been provided. The service may give priority to the actual requested data, and gives secondary priority to the anticipatory data.09-29-2011
20120278560PRE-FETCHING IN A STORAGE SYSTEM THAT MAINTAINS A MAPPING TREE - A storage system, a non-transitory computer readable medium and a method for pre-fetching. The method may include presenting, by a storage system and to at least one host computer, a logical address space; determining, by a fetch module, to fetch a certain data portion from a data storage device to a cache memory of the storage system; determining, by a pre-fetch module, whether to pre-fetch at least one additional data portion from at least one data storage device to the cache memory based upon at least one characteristic of a mapping tree that maps one or more contiguous ranges of addresses related to the logical address space and one or more contiguous ranges of addresses related to the physical address space; and pre-fetching the at least one additional data portion if it is determined to pre-fetch the at least one additional data portion.11-01-2012
20120096227CACHE PREFETCH LEARNING - An apparatus generally having a processor, a cache and a circuit is disclosed. The processor may be configured to generate (i) a plurality of access addresses and (ii) a plurality of program counter values corresponding to the access addresses. The cache may be configured to present in response to the access addresses (i) a plurality of data words and (ii) a plurality of address information corresponding to the data words. The circuit may be configured to record a plurality of events in a file in response to a plurality of cache misses. A first of the events in the file due to a first of the cache misses generally includes (i) a first of the program counter values, (ii) a first of the address information and (iii) a first time to prefetch a first of the data words from a memory to the cache.04-19-2012
20120331235MEMORY MANAGEMENT APPARATUS, MEMORY MANAGEMENT METHOD, CONTROL PROGRAM, AND RECORDING MEDIUM - There is provided a memory management apparatus including a data input/output part for requesting to read data in units of blocks in a first size from a first storage medium and storing the data read from the first storage medium into a second storage medium, a data creating part for creating prefetch data obtained by converting a history of the request to read the data from the first storage medium into data of which read position and size are indicated in units of blocks in a second size, the request being issued by the data input/output part in response to a request from a program to be prefetched, and a prefetching part for requesting the data input/output part to prefetch the data of the program from the first storage medium to the second storage medium based on the prefetch data.12-27-2012
20130019065Mobile Memory Cache Read Optimization - A method for enabling cache read optimization for mobile memory devices is described. The method includes receiving one or more access commands, at a memory device from a host, the one or more access commands instructing the memory device to access at least two data blocks. The at least two data blocks are accessed. The method includes generating, by the memory device, pre-fetch information for the at least two data blocks based at least in part on an order of accessing the at least two data blocks. Apparatus and computer readable media are also described.01-17-2013
20130024627PREFETCHING DATA TRACKS AND PARITY DATA TO USE FOR DESTAGING UPDATED TRACKS - Provided are a computer program product, system, and method for prefetching data tracks and parity data to use for destaging updated tracks. A write request is received including at least one updated track to the group of tracks. The at least one updated track is stored in a first cache device. A prefetch request is sent to the at least one sequential access storage device to prefetch tracks in the group of tracks to a second cache device. A read request is generated to read the prefetch tracks following the sending of the prefetch request. The read prefetch tracks returned to the read request from the second cache device are stored in the first cache device. New parity data is calculated from the at least one updated track and the read prefetch tracks.01-24-2013
20130024626PREFETCHING SOURCE TRACKS FOR DESTAGING UPDATED TRACKS IN A COPY RELATIONSHIP - A point-in-time copy relationship associates tracks in a source storage with tracks in a target storage. The target storage stores the tracks in the source storage as of a point-in-time. A write request is received including an updated source track for a point-in-time source track in the source storage in the point-in-time copy relationship. The point-in-time source track was in the source storage at the point-in-time the copy relationship was established. The updated source track is stored in a first cache device. A prefetch request is sent to the source storage to prefetch the point-in-time source track in the source storage subject to the write request to a second cache device. A read request is generated to read the source track in the source storage following the sending of the prefetch request. The read source track is copied to a corresponding target track in the target storage.01-24-2013
20080229027PREFETCH CONTROL DEVICE, STORAGE DEVICE SYSTEM, AND PREFETCH CONTROL METHOD - A prefetch control device controls prefetching of read-out data into a cache memory, which improves the efficiency of data reading from a storage device by caching data passed between the storage device and a computing device. The device determines whether data read out from the storage device to the computing device is sequentially accessed data, decides a prefetch amount for the read-out data in accordance with a predetermined condition if the read-out data is determined to be sequentially accessed data, and prefetches the read-out data of the prefetch amount.09-18-2008
20130103907MEMORY MANAGEMENT DEVICE, MEMORY MANAGEMENT METHOD, CONTROL PROGRAM, AND RECORDING MEDIUM - A memory management device includes a prefetch execution unit which performs prefetching of data from a first memory unit and moves the data to a second memory unit, and an initial data preservation unit which preserves, as the initial data to be stored in the second memory unit when a system including the first and second memory units is started, data including at least a part of the data items that were placed in the second memory unit before the prefetch execution unit performed the prefetching, together with data including the data prefetched by the prefetch execution unit.04-25-2013
20130103908PREVENTING UNINTENDED LOSS OF TRANSACTIONAL DATA IN HARDWARE TRANSACTIONAL MEMORY SYSTEMS - A method and apparatus are disclosed for implementing early release of speculatively read data in a hardware transactional memory system. A processing core comprises a hardware transactional memory system configured to receive an early release indication for a specified word of a group of words in a read set of an active transaction. The early release indication comprises a request to remove the specified word from the read set. In response to the early release request, the processing core removes the group of words from the read set only after determining that no word in the group other than the specified word has been speculatively read during the active transaction.04-25-2013
20130124803PREFETCHING SOURCE TRACKS FOR DESTAGING UPDATED TRACKS IN A COPY RELATIONSHIP - A point-in-time copy relationship associates tracks in a source storage with tracks in a target storage. The target storage stores the tracks in the source storage as of a point-in-time. A write request is received including an updated source track for a point-in-time source track in the source storage in the point-in-time copy relationship. The point-in-time source track was in the source storage at the point-in-time the copy relationship was established. The updated source track is stored in a first cache device. A prefetch request is sent to the source storage to prefetch the point-in-time source track in the source storage subject to the write request to a second cache device. A read request is generated to read the source track in the source storage following the sending of the prefetch request. The read source track is copied to a corresponding target track in the target storage.05-16-2013
20130145102MULTI-LEVEL INSTRUCTION CACHE PREFETCHING - One embodiment of the present invention sets forth an improved way to prefetch instructions in a multi-level cache. The fetch unit initiates a prefetch operation to transfer one of a set of multiple cache lines, based on a function of a pseudorandom number generator and the sector corresponding to the current instruction L1 cache line. The fetch unit selects a prefetch target from the set of multiple cache lines according to some probability function. If the current instruction L1 cache 06-06-2013
20130145101Method and Apparatus for Controlling an Operating Parameter of a Cache Based on Usage - A method and apparatus are provided for controlling power consumed by a cache. The method comprises monitoring usage of a cache and providing a cache usage signal responsive thereto. The cache usage signal may be used to vary an operating parameter of the cache. The apparatus comprises a cache usage monitor and a controller. The cache usage monitor is adapted to monitor a cache and provide a cache usage signal responsive thereto. The controller is adapted to vary the operating parameter of the cache in response to the cache usage signal.06-06-2013
20100281224PREFETCHING CONTENT FROM INCOMING MESSAGES - A method, system, and computer program product for prefetching content from incoming messages. A computer receives an incoming message containing one or more resource links. The computer may then determine if the resource links contained in the incoming message are likely to be accessed. In response to determining that one or more of the resource links are likely to be accessed, the logic determines if the target content of the resource link has previously been cached, and if any previously cached data is current. In response to determining that the requested content has not previously been cached, or is not current, the logic begins downloading the requested content for local consumption. When the cached content is requested, the cached data is provided to the user. Upon receiving requests for the cached content from other connected client terminals, the cached content may also be served to the other requesting client terminals.11-04-2010
20110219194DATA RELAYING APPARATUS AND METHOD FOR RELAYING DATA BETWEEN DATA - A data relaying apparatus and method capable of relaying data in a highly efficient manner are provided. Data of a predetermined read-ahead size is acquired from the storage apparatus from a top address indicated by a data read request to temporarily store the data as temporary storage data and, each time a subsequent data read request is made, data of a transmission data size corresponding to a type of the subsequent data read request is read out sequentially from a top position of the temporary storage data to relay the data to a data processing apparatus.09-08-2011
20080201530SYSTEM AND STORAGE MEDIUM FOR MEMORY MANAGEMENT - Systems and a storage medium for memory management are provided. A system includes a tag controlled buffer in communication with a memory device, including multiple pages divided into individually addressable lines. The tag controlled buffer includes a prefetch buffer with at least one of the individually addressable lines from the memory device and a tag cache in communication with the prefetch buffer. The tag cache includes at least one tag associated with one of the pages in the memory device. Each tag includes a reference history field and a pointer to a line in the prefetch buffer that is from the associated page. The reference history field includes information about how the lines from the associated page have been accessed in the past and is utilized to determine which lines in the associated page should be added to the prefetch buffer when the tag is added to the tag cache.08-21-2008
20130185518DETERMINING DATA CONTENTS TO BE LOADED INTO A READ-AHEAD CACHE IN A STORAGE SYSTEM - Read messages are issued by a client for data stored in a storage system of the networked client-server architecture. A client agent mediates between the client and the storage system. Each sequence of read requests generated by a single thread of execution in the client to read a specific data segment in the storage is defined as a client read session. Each read request sent from the client agent to the storage system includes positions and a size for reading. A read-ahead cache is maintained for each client read session. The read-ahead cache is partitioned into two buffers. Data is loaded into the logical buffers according to the changes of the positions in the read requests of the client read session, and loading of new data into the buffers is triggered by the read requests' positions exceeding a position threshold in the data covered by the second logical buffer.07-18-2013
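The two-buffer scheme can be driven entirely by the request position: once reads cross into the second buffer, the first is recycled for the range beyond it. A sketch with assumed 4 KB logical buffers and a load_buffer() stand-in for the storage read:

```c
#include <stdio.h>
#include <stdint.h>

#define BUF_SIZE 4096u   /* assumed size of each logical buffer */

/* Read-ahead cache split into two logical buffers covering consecutive
 * ranges of the data segment; shape and names are assumptions. */
struct ra_cache {
    uint64_t buf_start[2];   /* segment offset covered by each buffer */
    int      front;          /* buffer currently being consumed */
};

static void load_buffer(struct ra_cache *c, int which, uint64_t start)
{
    c->buf_start[which] = start;
    printf("load %u bytes at offset %llu\n", BUF_SIZE,
           (unsigned long long)start);
}

/* On each read request: once the position crosses into the second
 * buffer, the first is stale, so reuse it for the range after the
 * second buffer and swap the two roles. */
static void on_read(struct ra_cache *c, uint64_t pos)
{
    int back = 1 - c->front;
    if (pos >= c->buf_start[back]) {
        load_buffer(c, c->front, c->buf_start[back] + BUF_SIZE);
        c->front = back;
    }
}

int main(void)
{
    struct ra_cache c = { { 0, BUF_SIZE }, 0 };
    on_read(&c, 100);    /* still in buffer 0: nothing to do */
    on_read(&c, 5000);   /* crossed the threshold: buffer 0 reloaded at 8192 */
    return 0;
}
```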
20130151787Mechanism for Using a GPU Controller for Preloading Caches - Provided are a method and system for preloading a cache on a graphics processing unit. The method includes receiving a command message, the command message including data related to a portion of memory. The method also includes interpreting the command message, identifying policy information of the cache, identifying a location and size of the portion of memory, and creating a fetch message including data related to contents of the portion, wherein the fetch message causes the cache to preload data of the portion of memory.06-13-2013
20130185517TECHNIQUES FOR IMPROVING THROUGHPUT AND PERFORMANCE OF A DISTRIBUTED INTERCONNECT PERIPHERAL BUS CONNECTED TO A HOST CONTROLLER - A method for accelerating execution of read operations in a distributed interconnect peripheral bus, the distributed interconnect peripheral bus is coupled to a host controller being connected to a universal serial bus (USB) device. The method comprises synchronizing on at least one ring assigned to the USB device; pre-fetching transfer request blocks (TRBs) maintained in the at least one ring, wherein the TRBs are saved in a host memory; saving the pre-fetched TRBs in an internal cache memory; upon reception of a TRB read request from the host controller, serving the request by transferring the requested TRB from the internal cache memory to the host controller; and sending a TRB read completion message to the host controller.07-18-2013
20130185516Use of Loop and Addressing Mode Instruction Set Semantics to Direct Hardware Prefetching - Systems and methods for prefetching cache lines into a cache coupled to a processor. A hardware prefetcher is configured to recognize a memory access instruction as an auto-increment-address (AIA) memory access instruction, infer a stride value from an increment field of the AIA instruction, and prefetch lines into the cache based on the stride value. Additionally or alternatively, the hardware prefetcher is configured to recognize that prefetched cache lines are part of a hardware loop, determine a maximum loop count of the hardware loop, and a remaining loop count as a difference between the maximum loop count and a number of loop iterations that have been completed, select a number of cache lines to prefetch, and truncate an actual number of cache lines to prefetch to be less than or equal to the remaining loop count, when the remaining loop count is less than the selected number of cache lines.07-18-2013
20130185515Utilizing Negative Feedback from Unexpected Miss Addresses in a Hardware Prefetcher - Systems and methods for populating a cache using a hardware prefetcher are disclosed. A method for prefetching cache entries includes determining an initial stride value based on at least a first and second demand miss address in the cache, verifying the initial stride value based on a third demand miss address in the cache, prefetching a predetermined number of cache entries based on the verified initial stride value, determining an expected next miss address in the cache based on the verified initial stride value and addresses of the prefetched cache entries; and confirming the verified initial stride value based on comparing the expected next miss address to a next demand miss address in the cache. If the verified initial stride value is confirmed, additional cache entries are prefetched. If the verified initial stride value is not confirmed, further prefetching is stalled and an alternate stride value is determined.07-18-2013
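The train/verify/confirm progression with negative feedback reduces to a small state machine. A sketch in which PREFETCH_RUN and the expected-miss arithmetic are assumptions consistent with, but not taken from, the abstract:

```c
#include <stdio.h>
#include <stdint.h>

#define PREFETCH_RUN 4   /* assumed number of entries prefetched per step */

enum pf_state { TRAIN, VERIFY, CONFIRMED };

struct stride_pf {
    enum pf_state state;
    uint64_t last_miss;
    int64_t  stride;
    uint64_t expected_next_miss;
};

static void on_demand_miss(struct stride_pf *p, uint64_t addr)
{
    int64_t delta = (int64_t)(addr - p->last_miss);
    switch (p->state) {
    case TRAIN:   /* two misses set the stride, a third verifies it */
        if (p->stride != 0 && delta == p->stride) {
            printf("verified stride %lld: prefetch %d entries\n",
                   (long long)delta, PREFETCH_RUN);
            /* expected next demand miss: first address past the run */
            p->expected_next_miss = addr + (PREFETCH_RUN + 1) * delta;
            p->state = VERIFY;
        }
        p->stride = delta;
        break;
    case VERIFY:  /* confirm the prediction against the next real miss */
        if (addr == p->expected_next_miss) {
            printf("confirmed: prefetch additional entries\n");
            p->expected_next_miss = addr + (PREFETCH_RUN + 1) * p->stride;
            p->state = CONFIRMED;
        } else {
            /* negative feedback: stall prefetching and retrain for an
             * alternate stride starting from this unexpected miss */
            p->state  = TRAIN;
            p->stride = 0;
        }
        break;
    case CONFIRMED:
        break;   /* steady-state prefetching elided */
    }
    p->last_miss = addr;
}

int main(void)
{
    struct stride_pf p = { TRAIN, 0, 0, 0 };
    uint64_t misses[] = { 0x100, 0x140, 0x180, 0x2c0 };   /* stride 0x40 */
    for (int i = 0; i < 4; i++)
        on_demand_miss(&p, misses[i]);
    return 0;
}
```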
20130191603Method And Apparatus For Accessing Physical Memory From A CPU Or Processing Element In A High Performance Manner - A method and apparatus are described herein for accessing a physical memory location referenced by a physical address with a processor. The processor fetches/receives instructions with references to virtual memory addresses and/or references to physical addresses. Translation logic translates the virtual memory addresses to physical addresses and provides the physical addresses to a common interface. Physical addressing logic decodes references to physical addresses and provides the physical addresses to a common interface based on a memory type stored by the physical addressing logic.07-25-2013
20130191601APPARATUS, SYSTEM, AND METHOD FOR MANAGING A CACHE - An apparatus, system, and method are disclosed for managing a cache. A cache interface module provides access to a plurality of virtual storage units of a solid-state storage device over a cache interface. At least one of the virtual storage units comprises a cache unit. A cache command module exchanges cache management information for the at least one cache unit with one or more cache clients over the cache interface. A cache management module manages the at least one cache unit based on the cache management information exchanged with the one or more cache clients.07-25-2013
20130191602CALCULATING READ OPERATIONS AND FILTERING REDUNDANT READ REQUESTS IN A STORAGE SYSTEM - Read messages are issued by a client for data stored in a storage system of the networked client-server architecture. A client agent mediates between the client and the storage system. Each sequence of read requests generated by a single thread of execution in the client to read a specific data segment in the storage is defined as a client read session. The client agent maintains a read-ahead cache for each client read session and generates read-ahead requests to load data into the read-ahead cache. Each read request and read-ahead request sent from the client agent to the storage system includes positions and a size for reading and a sequence ID value. The storage system filters and modifies incoming read request and read-ahead requests based on sequence ID values, positions and sizes of the incoming read request and read-ahead requests.07-25-2013
20120030431PREDICTIVE SEQUENTIAL PREFETCHING FOR DATA CACHING - A system for prefetching memory in caching systems includes a processor that generates requests for data. A cache of a first level stores memory lines retrieved from a lower level memory in response to references to addresses generated by the processor's requests for data. A prefetch buffer is used to prefetch an adjacent memory line from the lower level memory in response to a request for data. The adjacent memory line is a memory line that is adjacent to a first memory line that is associated with an address of the request for data. An indication that a memory line associated with an address associated with the requested data has been prefetched is stored. A prefetched memory line is transferred to the cache of the first level in response to the stored indication that a memory line associated with an address associated with the requested data has been prefetched.02-02-2012
20120066456DIRECT MEMORY ACCESS CACHE PREFETCHING - An apparatus having a first cache and a controller is disclosed. The first cache may be configured to assert a first signal after receiving given information in response to being ready to receive additional information. The controller may be configured to (i) fetch the given information from a memory to the first cache and (ii) prefetch first information in a direct memory access transfer from the memory to the first cache in response to the assertion of the first signal.03-15-2012
20130205095PROCESSING READ REQUESTS BY A STORAGE SYSTEM - Read messages are issued by a client for data stored in a storage system. A client agent mediates between the client and the storage system. Each sequence of read requests generated by a single thread of execution in the client to read a specific data segment in the storage is defined as a client read session. Each read request sent from the client agent to the storage system includes a position and a size for reading. The read-ahead cache and a current sequence ID value for each client read session are maintained. For each incoming read request, the storage system determines whether to further process the read request based on a sequence ID value of the read request, and the source from which to obtain data for the read request, and which of the data to load into the read-ahead cache according to data positions of the read request.08-08-2013
20120072673SPECULATION-AWARE MEMORY CONTROLLER ARBITER - A memory arbiter minimizes latency of memory accesses in a system having multiple processors. The memory arbiter improves overall system performance by managing the memory requests from each processor individually before those requests are sent to a central memory arbiter for handling memory requests for the shared resources from the multiple processors. The local memory arbiter buffers the memory requests from a local processor, analyzes the buffered memory requests, and optimizes the requests by reordering commands according to a rule set, and by performing write merging and prefetch squashing in certain conditions.03-22-2012
20120072672PREFETCH ADDRESS HIT PREDICTION TO REDUCE MEMORY ACCESS LATENCY - A prefetch unit receives a memory read request having an associated address for accessing data that is stored in memory. A next predicted address is determined in response to a prefetch address stored in a slot of an array for storing portions of predicted addresses and associated with a slot in accordance with an order in which a prefetch FIFO counter is modified to select the slots of the array. Data is prefetched from a lower-level hierarchical memory in accordance with the next predicted address, and the prefetched data is provisioned to minimize the read time for reading it. The provisioned prefetched data is read out when the address of the memory request is associated with the next predicted address.03-22-2012
20130212334Determining Optimal Preload Distance at Runtime - A run-time delay of a memory is measured, a run-time duration of a routine is determined, and an optimal run-time preload distance for the routine is determined based on the measured run-time memory delay and the determined run-time duration of the routine. Optionally, the run-time duration of the routine can be determined by measuring a run-time duration, and optionally the run-time duration can be determined based on a database of run-time delay for operations of the routine. Optionally, the optimal run-time preload distance is used in performing a loop of the routines.08-15-2013
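The distance computation itself is one line: round the measured memory delay up to a whole number of routine (loop-iteration) durations so the preloaded data is never late. A sketch with hypothetical nanosecond inputs:

```c
#include <stdio.h>

/* Optimal preload (prefetch) distance: the number of loop iterations
 * ahead to issue a preload so the data arrives just in time.  Both
 * inputs are measured at run time per the abstract. */
static unsigned preload_distance(unsigned mem_delay_ns,
                                 unsigned iter_duration_ns)
{
    if (iter_duration_ns == 0)
        return 1;
    /* ceil(delay / iteration time): round up so data is never late */
    return (mem_delay_ns + iter_duration_ns - 1) / iter_duration_ns;
}

int main(void)
{
    /* e.g. a measured 120 ns memory delay and a 35 ns loop body */
    printf("preload %u iterations ahead\n", preload_distance(120, 35)); /* 4 */
    return 0;
}
```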

Patent applications in class Look-ahead