
Entry replacement strategy

Subclass of:

711 - Electrical computers and digital processing systems: memory

711/100 - STORAGE ACCESSING AND CONTROL

711/117 - Hierarchical memories

711/118 - Caching

Patent class list (only non-empty classes are listed)

Deeper subclasses:

Class / Description / Number of patent applications
711/135 / Cache flushing / 75
711/136 / Least recently used / 73
711/134 / Combined replacement modes / 14
Entries
Document / Title / Date published
20110179227CACHE MEMORY AND METHOD FOR CACHE ENTRY REPLACEMENT BASED ON MODIFIED ACCESS ORDER - A cache memory and method for controlling the cache memory. The cache memory selects, from an access address, a unique set from among a plurality of sets, each access set including a plurality of cache entries. Each cache entry holds unit data for caching. The cache memory holds, for each of the cache entries, order data that indicates an access order of the cache entries in each set, and replaces a cache entry that is oldest in the access order. The cache memory modifies the order data regardless of an actual access order, and selects, based on the modified order data, a cache entry to be replaced.07-21-2011
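
A minimal sketch of the general mechanism this abstract describes, assuming a software model of one set: per-way order data drives an oldest-first replacement, and the order can be overwritten independently of real accesses (for example, to mark a streaming line for early eviction). Class and method names are illustrative, not taken from the application.

```python
class CacheSet:
    """One set of a set-associative cache with explicit per-way order data."""

    def __init__(self, num_ways):
        self.entries = [None] * num_ways      # unit data held by each way
        self.order = list(range(num_ways))    # order[w] = age rank of way w (0 = newest)

    def touch(self, way):
        """Normal access: make `way` the newest entry in the set."""
        old_rank = self.order[way]
        for w in range(len(self.order)):
            if self.order[w] < old_rank:
                self.order[w] += 1            # everything newer ages by one
        self.order[way] = 0

    def force_order(self, way, rank):
        """Modify the order data regardless of the actual access order,
        e.g. mark a streaming line as oldest so it is replaced first."""
        self.order[way] = rank

    def victim(self):
        """Select the way that the (possibly modified) order marks oldest."""
        return max(range(len(self.order)), key=lambda w: self.order[w])
```
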
20120246411CACHE EVICTION USING MEMORY ENTRY VALUE - Embodiments are directed to efficiently determining which cache entries are to be evicted from memory and to incorporating a probability of reuse estimation in a cache entry eviction determination. A computer system with multiple different caches accesses a cache entry. The computer system determines an entry cost value for the accessed cache entry. The entry cost value indicates an amount of time the computer system is slowed down by to load the cache entry into cache memory. The computer system determines an opportunity cost value for the computing system caches. The opportunity cost value indicates an amount of time by which the computer system is slowed down while performing other operations that could have used the cache entry's cache memory space. Upon determining that the entry cost value is lower than the opportunity cost value, the computer system probabilistically evicts the cache entry from cache memory.09-27-2012
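
The eviction test above compares two time costs; a toy version of that comparison might look as follows, with the linear probability ramp being an assumption (the abstract only says eviction is probabilistic once the entry cost falls below the opportunity cost).

```python
import random

def should_evict(entry_cost_ms, opportunity_cost_ms):
    """entry_cost_ms: slowdown to reload this entry into cache later.
    opportunity_cost_ms: slowdown other work suffers while this entry
    occupies the space. Evict probabilistically only when reloading is
    the cheaper of the two."""
    if entry_cost_ms >= opportunity_cost_ms:
        return False                                   # worth keeping
    # Assumed ramp: the cheaper the reload, the likelier the eviction.
    evict_probability = 1.0 - entry_cost_ms / opportunity_cost_ms
    return random.random() < evict_probability
```
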
20090113133 Synchronous Memory Having Shared CRC and Strobe Pin - A memory system having a memory element chip (DRAM) and a memory controller chip having a plurality of drivers, receivers, and latches for transferred data. For writes, clocks, write data, and write CRC (cyclic redundancy check) data are transferred to the DRAM from the memory controller and latched for error checking. Reads are clocked, and the read data is received and transferred to a read data latch which also receives a clocked read strobe for verification of data integrity from the DRAM. Each chip has a bi-functional pin that acts as a shared CRC pin during write and as a shared strobe pin during read. Data transfers with the CRC signal and DQS signal are transferred across two paths. 04-30-2009
20130042073Hybrid Automatic Repeat Request Combiner and Method for Storing Hybrid Automatic Repeat Request Data - The invention provides a method for storing hybrid automatic repeat request (HARQ) data, the method including: when receiving new data of a coded block, a HARQ processor writing the new data into a high rate buffer memory (Cache) and a channel decoder; the Cache writing the new data into a data memory of the Cache or an external memory; and when receiving retransmitted data of the coded block, the HARQ processor obtaining a previous data corresponding to the retransmitted data from the data memory of the Cache or the external memory through the Cache, combining the retransmitted data and the previous data, and writing the combined data to the Cache and the channel decoder; the Cache writing the combined data into the data memory of the Cache or the external memory. The invention also provides a HARQ combiner.02-14-2013
20090157971Integration of Secure Data Transfer Applications for Generic IO Devices - Techniques are presented for sending an application instruction from a hosting digital appliance to a portable medium, where the instruction is structured as one or more units whose size is a first size, or number of bytes. After flushing the contents of a cache, the instruction is written to the cache, where the cache is structured as logical blocks having a size that is a second size that is larger (in terms of number of bytes) than the first size. In writing the instruction (having a command part and, possibly, a data part), the start of the instruction is aligned with one of the logical block boundaries in the cache and the instruction is padded out with dummy data so that it fills an integral number of the cache blocks. When a response from a portable device to an instruction is received at a hosting digital appliance, the cache is similarly flushed prior to receiving the response. The response is then stored to align with a logical block boundary of the cache.06-18-2009
20090157973 Storage controller for handling data stream and method thereof - A storage controller for handling a data stream having a data integrity field (DIF), and a method thereof. The storage controller comprises a host-side I/O controller for receiving a data stream from a host entity, a device-side I/O controller for connecting to a physical storage device, and a central processing circuitry having at least one DIF I/O interface for handling DIF data so as to reduce the number of memory accesses to the main memory of the storage controller. 06-18-2009
20090157972Hash Optimization System and Method - A computer implemented method, apparatus and program product automatically optimizes hash function operation by recognizing when a first hash function results in an unacceptable number of cache misses, and by dynamically trying another hash function to determine which hash function results in the most cache hits. In this manner, hardware optimizes hash function operation in the face of changing loads and associated data flow patterns.06-18-2009
20120191918TECHNIQUES FOR DIRECTORY SERVER INTEGRATION - Techniques for directory server integration are disclosed. In one particular exemplary embodiment, the techniques may be realized as a method for directory server integration comprising setting one or more parameters determining a range of permissible expiration times for a plurality of cached directory entries, creating, in electronic storage, a cached directory entry from a directory server, assigning a creation time to the cached directory entry, and assigning at least one random value to the cached directory entry, the random value determining an expiration time for the cached directory entry within the range of permissible expiration times, wherein randomizing the expiration time for the cached directory entry among the range of permissible expiration times for a plurality of cached directory entries reduces an amount of synchronization required between cache memory and the directory server at a point in time.07-26-2012
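
The randomized-expiration idea lends itself to a short sketch; the TTL bounds and dictionary layout here are assumptions, not values from the application.

```python
import random
import time

TTL_MIN_S, TTL_MAX_S = 300, 600   # assumed permissible expiration range

def new_directory_cache_entry(name, attributes):
    """Assign a creation time and a random expiration inside the permitted
    range, so entries cached together do not all expire (and need
    re-synchronization with the directory server) at the same instant."""
    now = time.time()
    return {
        "name": name,
        "attributes": attributes,
        "created": now,
        "expires": now + random.uniform(TTL_MIN_S, TTL_MAX_S),
    }

def is_expired(entry):
    return time.time() >= entry["expires"]
```
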
20130061001SYSTEM REFRESH IN CACHE MEMORY - System refresh in a cache memory that includes generating a refresh time period (RTIM) pulse at a centralized refresh controller of the cache memory and activating a refresh request at the centralized refresh controller based on generating the RTIM pulse. The refresh request is associated with a single cache memory bank of the cache memory. A refresh grant is received and transmitted to a bank controller. The bank controller is associated with and localized at the single cache memory bank of the cache memory.03-07-2013
20090271575CACHE MEMORY, SYSTEM, AND METHOD OF STORING DATA - A cache memory according to the present invention is a cache memory that has a set associative scheme and includes: a plurality of ways, each way being made up of entries, each entry holding data and a tag; a first holding unit operable to hold, for each way, a priority attribute that indicates a type of data to be preferentially stored in that way; a second holding unit which is included at least in a first way among the ways, and is operable to hold, for each entry of the first way, a data attribute that indicates a type of data held in that entry; and a control unit operable to perform replace control on the entries by prioritizing a way whose priority attribute held by the first holding unit matches a data attribute outputted from a processor, wherein when a cache miss occurs and in the case where (i) valid data is held in an entry of the first way among entries that belong to a set selected based on an address outputted from the processor, (ii) all of the following attributes match: the data attribute of the entry; the data attribute outputted from the processor; and the priority attribute of the first way, and (iii) an entry of a way other than the first way does not hold valid data, the entry being one of the entries that belong to the selected set, the control unit is further operable to store data into the entry of the way other than the first way.10-29-2009
20090271574METHOD FOR IMPROVING FREQUENCY-BASED CACHING ALGORITHMS BY MAINTAINING A STABLE HISTORY OF EVICTED ITEMS - The invention provides a method for improving frequency-based caching algorithms by maintaining a stable history of evicted items. One embodiment involves a process for caching data in a cache memory including logical pages including, upon detecting that a first page is being evicted from the cache memory, performing an addition process by adding metadata of the first page to a stable history list. Upon detecting a cache miss for a second page, if the stable history list contains metadata for the second page, then removing the second page metadata from the stable history list and applying a promotion determination for the second page to determine a priority value for the second page metadata and placing the second page in the cache memory based on the priority data. Upon detecting that metadata of a third page is to be evicted from the stable history list, applying an eviction determination to evict metadata of the third page from the stable history list based on a predetermined caching rule.10-29-2009
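
A compact illustration of a frequency-based cache with a stable history of evicted items; the capacities and the "resume old priority plus one" promotion rule are illustrative choices, not taken from the application.

```python
from collections import OrderedDict

class HistoryCache:
    """Frequency-based cache that keeps metadata of evicted pages in a
    bounded, stable history list."""

    def __init__(self, capacity, history_capacity):
        self.capacity = capacity
        self.cache = {}                  # page -> priority (frequency count)
        self.history = OrderedDict()     # evicted page -> old priority (metadata only)
        self.history_capacity = history_capacity

    def access(self, page):
        if page in self.cache:
            self.cache[page] += 1                         # ordinary frequency hit
            return
        # Miss: the promotion determination consults the stable history.
        priority = self.history.pop(page, 0) + 1
        if len(self.cache) >= self.capacity:
            victim = min(self.cache, key=self.cache.get)   # evict lowest frequency
            self.history[victim] = self.cache.pop(victim)  # remember its metadata
            if len(self.history) > self.history_capacity:
                self.history.popitem(last=False)           # age out oldest metadata
        self.cache[page] = priority
```
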
20110138129CACHE MANAGEMENT FOR A NUMBER OF THREADS - The illustrative embodiments provide a method, a computer program product, and an apparatus for managing a cache. A probability of a future request for data to be stored in a portion of the cache by a thread is identified for each of the number of threads to form a number of probabilities. The data is stored with a rank in a number of ranks in the portion of the cache responsive to receiving the future request from the thread in the number of threads for the data. The rank is selected using the probability in the number of probabilities for the thread.06-09-2011
20110191544Data Storage and Access - A data cache wherein contents of the cache are arranged and organised according to a hierarchy. When a member of a first hierarchy is accessed, all contents of that member are copied to the cache. The cache may be arranged according to folders which contain data or blocks of data. A process for caching data using such an arrangement is also provided for.08-04-2011
20110197033 Cache Used Both as Cache and Staging Buffer - In one embodiment, a cache comprises a data memory comprising a plurality of data entries, each data entry having capacity to store a cache block of data, and a cache control unit coupled to the data memory. The cache control unit is configured to dynamically allocate a given data entry in the data memory to store a cache block being cached or to store data that is not being cached but is being staged for retransmission on an interface to which the cache is coupled. 08-11-2011
20130219124EFFICIENT DISCARD SCANS - A plurality of tracks is examined for meeting criteria for a discard scan. In lieu of waiting for a completion of a track access operation, at least one of the plurality of tracks is marked for demotion. An additional discard scan may be subsequently performed for tracks not previously demoted. The discard and additional discard scans may proceed in two phases.08-22-2013
20110197032CACHE COORDINATION BETWEEN DATA SOURCES AND DATA RECIPIENTS - A data recipient configured to access a data source may exhibit improved performance by caching data items received from the data source. However, the cache may become stale unless the data recipient is informed of data source updates. Many subscription mechanisms are specialized for the particular data recipient and/or data source, which may cause an affinity of the data recipient for the data source, thereby reducing scalability of the data sources and/or data recipients. A cache synchronization service may accept requests from data recipients to subscribe to the data source, and may promote cache freshness by notifying subscribers when particular data items are updated at the data source. Upon detecting an update of the data source involving one or more data items, the cache synchronization service may request each subscriber of the data source to remove the stale cached representation of the updated data item(s) from its cache.08-11-2011
20090300289Reducing back invalidation transactions from a snoop filter - In one embodiment, the present invention includes a method for receiving an indication of a pending capacity eviction from a caching agent, determining whether an invalidating writeback transaction from the caching agent is likely for a cache line associated with the pending capacity eviction, and if so moving a snoop filter entry associated with the cache line from a snoop filter to a staging area. Other embodiments are described and claimed.12-03-2009
20120297140EXPANDABLE DATA CACHE - A method and system for cache management in a storage device is disclosed. A portion of unused memory in the storage device is used for temporary data cache so that two levels of cache may be used (such as a permanent data cache and a temporary data cache). The storage device may manage the temporary data cache in order to maintain clean entries in the temporary data cache. In this way, the storage area associated with the temporary data cache may be immediately reclaimed and retasked for a different purpose without the need for extraneous copy operations.11-22-2012
20120297141IMPLEMENTING TRANSACTIONAL MECHANISMS ON DATA SEGMENTS USING DISTRIBUTED SHARED MEMORY - Systems, Methods, and Computer Program Products are provided for implementing transactional mechanisms by a plurality of procedures on data segments by using distributed shared memory (DSM) agents in a clustered file system (CFS). A new data segment is allocated and an associated cache data segment and metadata data segments, which are allocated for the new data segment and loaded into a cache and modified during the allocating of the new data segment, are added to a list of data segments modified within an associated transaction. The DSM agents assign an exclusive permission to the new data segment.11-22-2012
20080282037Method and apparatus for controlling cache - A cache controller controls at least one cache. The cache includes ways including a plurality of blocks that stores therein entry data. A writing unit writes degradation data to a failed block. The degradation data indicates that the failed block is in a degradation state. A reading unit reads entry data from a block. A determining unit determines, if the entry data obtained by the reading unit includes the degradation data, that the block is in the degradation state.11-13-2008
20080288723STORAGE DEVICE AND STORAGE DEVICE DATA LIFE CYCLE CONTROL METHOD - A storage device including a control part which performs control by extracting a life tag specifying a retention term during which the data is to be retained in the second volume having the quicker access time than the first volume, the control part managing the retention term of the corresponding data as specified by the life tag, and an elapsed term which has elapsed since the corresponding data was stored. A storage part manages update segment control information, and when the elapsed term of certain data exceeds the retention term of the certain data, the storage part nullifies the certain data in the second volume.11-20-2008
20080313407LATENCY-AWARE REPLACEMENT SYSTEM AND METHOD FOR CACHE MEMORIES - A method for replacing cache lines in a computer system having a non-uniform set associative cache memory is disclosed. The method incorporates access latency as an additional factor into the existing ranking guidelines for replacement of a line, the higher the rank of the line the sooner that it is likely to be evicted from the cache. Among a group of highest ranking cache lines in a cache set, the cache line chosen to be replaced is one that provides the lowest latency access to a requesting entity, such as a processor. The distance separating the requesting entity from the memory partition where the cache line is stored most affects access latency.12-18-2008
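
A sketch of the selection rule described above: among the highest-ranked (most evictable) lines, prefer the one whose bank is closest to the requester. The top_k cutoff and the tuple layout are assumed.

```python
def choose_victim(ways, top_k=4):
    """ways: list of (rank, latency) per cache way, where a higher rank
    means closer to eviction and latency is the access time from the
    requesting processor to the bank holding that line. Among the top_k
    highest-ranked lines, replace the lowest-latency one so the freed
    slot is the most useful to the requester."""
    by_rank = sorted(range(len(ways)), key=lambda w: ways[w][0], reverse=True)
    candidates = by_rank[:top_k]
    return min(candidates, key=lambda w: ways[w][1])
```
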
20100262785Method and System for an Extensible Caching Framework - Systems and methods which provide an extensible caching framework are disclosed. These systems and methods may provide a caching framework which can evaluate individual parameters of a request for a particular piece of content. Modules capable of evaluating individual parameters of an incoming request may be added and removed from this framework. When a request for content is received, parameters of the request can be evaluated by the framework and a cache searched for responsive content based upon this evaluation. If responsive content is not found in the cache, responsive content can be generated and stored in the cache along with associated metadata and a signature formed by the caching framework. This signature may aid in locating this content when a request for similar content is next received.10-14-2010
20090083492COST-CONSCIOUS PRE-EMPTIVE CACHE LINE DISPLACEMENT AND RELOCATION MECHANISMS - A hardware based method for determining when to migrate cache lines to the cache bank closest to the requesting processor to avoid remote access penalty for future requests. In a preferred embodiment, decay counters are enhanced and used in determining the cost of retaining a line as opposed to replacing it while not losing the data. In one embodiment, a minimization of off-chip communication is sought; this may be particularly useful in a CMP environment.03-26-2009
20090193195CACHE THAT STORES DATA ITEMS ASSOCIATED WITH STICKY INDICATORS - Data items are stored in a cache of the storage system, where the data items are for a snapshot volume. Sticky indicators are associated with the data items in the cache, where the sticky indicators delay removal of corresponding data items from the cache. Data items of the cache are sacrificed according to a replacement algorithm that takes into account the sticky indicators associated with the data items.07-30-2009
20090024800METHOD AND SYSTEM FOR USING UPPER CACHE HISTORY INFORMATION TO IMPROVE LOWER CACHE DATA REPLACEMENT - A system for managing data in a plurality of storage locations. In response to a least recently used algorithm wanting to move data from a cache to a storage location, an aging table is searched for an associated entry for the data. In response to finding the associated entry for the data in the aging table, an indicator is enabled on the data. In response to determining that the indicator is enabled on the data, the data is kept in the cache despite the least recently used algorithm wanting to move the data to the storage location.01-22-2009
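
The aging-table reprieve can be modeled in a few lines; the one-reprieve rule and the data layout are assumptions layered on the abstract's description.

```python
from collections import OrderedDict

class AgingLRU:
    """LRU cache that consults an aging table before demoting an entry."""

    def __init__(self, capacity, aging_table):
        self.capacity = capacity
        self.aging_table = aging_table     # keys with useful upper-cache history
        self.entries = OrderedDict()       # key -> [value, keep_indicator]

    def put(self, key, value):
        self.entries[key] = [value, key in self.aging_table]
        self.entries.move_to_end(key)
        while len(self.entries) > self.capacity:
            victim, entry = next(iter(self.entries.items()))  # LRU end
            if entry[1]:
                entry[1] = False                  # indicator enabled: keep it,
                self.entries.move_to_end(victim)  # but spare it only once
            else:
                self.entries.popitem(last=False)  # demote to the storage location
```
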
20120079205METHOD AND APPARATUS FOR REDUCING PROCESSOR CACHE POLLUTION CAUSED BY AGGRESSIVE PREFETCHING - A method and apparatus for controlling a first and second cache is provided. A cache entry is received in the first cache, and the entry is identified as having an untouched status. Thereafter, the status of the cache entry is updated to accessed in response to receiving a request for at least a portion of the cache entry, and the cache entry is subsequently cast out according to a preselected cache line replacement algorithm. The cast out cache entry is stored in the second cache according to the status of the cast out cache entry.03-29-2012
20080263283System and Method for Tracking Changes in L1 Data Cache Directory - Method, system and computer program product for tracking changes in an L1 data cache directory. A method for tracking changes in an L1 data cache directory determines if data to be written to the L1 data cache is to be written to an address to be changed from an old address to a new address. If it is determined that the data to be written is to be written to an address to be changed, a determination is made if the data to be written is associated with the old address or the new address. If it is determined that the data is to be written to the new address, the data is allowed to be written to the new address following a prescribed delay after the address to be changed is changed. The method is preferably implemented in a system that provides a Store Queue (STQU) design that includes a Content Addressable Memory (CAM)-based store address tracking mechanism that includes early and late write CAM ports. The method eliminates time windows and the need for an extra copy of the L1 data cache directory.10-23-2008
20110231613REMOTE STORAGE CACHING - Disclosed is a storage system. A network interface device (NIC) receives network storage commands from a host. The NIC may cache the data to/from the storage commands in a solid-state disk. The NIC may respond to future network storage command by supplying the data from the solid-state disk rather than initiating a network transaction.09-22-2011
20090204766METHOD, SYSTEM, AND COMPUTER PROGRAM PRODUCT FOR HANDLING ERRORS IN A CACHE WITHOUT PROCESSOR CORE RECOVERY - A method for handling errors in a cache memory without processor core recovery includes receiving a fetch request for data from a processor and simultaneously transmitting fetched data and a parity matching the parity of the fetched data to the processor. The fetched data is received from a higher-level cache into a low level cache of the processor. Upon determining that the fetched data failed an error check indicating that the fetched data is corrupted, the method includes requesting an execution pipeline to discontinue processing and flush its contents, and initiating a clean up sequence, which includes sending an invalidation request to the low level cache causing the low level cache to remove lines associated with the corrupted data, and requesting the execution pipeline to restart. The execution pipeline accesses a copy of the requested data from a higher-level storage location.08-13-2009
20080294847Cache control device and computer-readable recording medium storing cache control program - A cache control device controlling a cache memory having ways based on an access request includes an error number count memory unit that counts the total number of errors occurred in response to the access request regardless of in which way they occur, a degeneration information memory unit that stores cache line degeneration information indicating degeneration of a specific cache line, a degeneration information writing unit that writes, when the counted number of errors reaches a predetermined upper limit number, the cache line degeneration information into the degeneration information memory unit for a cache line, error in which causes the number to reach the predetermined upper limit number, and a replace control unit that performs, in response to a replace request to the cache line corresponding to the cache line degeneration information stored in the degeneration information memory unit, a replace control to exclude the cache line from replace candidates.11-27-2008
20090204765DATA BLOCK FREQUENCY MAP DEPENDENT CACHING - A method for increasing the performance and utilization of cache memory by combining the data block frequency map generated by data de-duplication mechanism and page prefetching and eviction algorithms like Least Recently Used (LRU) policy. The data block frequency map provides weight directly proportional to the frequency count of the block in the dataset. This weight is used to influence the caching algorithms like LRU. Data blocks that have lesser frequency count in the dataset are evicted before those with higher frequencies, even though they may not have been the topmost blocks for page eviction by caching algorithms. The method effectively combines the weight of the block in the frequency map and its eviction status by caching algorithms like LRU to get an improved performance and utilization of the cache memory.08-13-2009
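
One way to combine an LRU order with a de-duplication frequency map, as the abstract describes; the scoring formula is an illustrative combination of the two signals, not the patented one.

```python
def pick_eviction_victim(lru_queue, frequency_map):
    """lru_queue: block keys ordered oldest-first by the base LRU policy.
    frequency_map: block -> frequency count from the de-duplication map.
    Blocks with low frequency counts are evicted ahead of high-frequency
    blocks even when LRU alone would not pick them."""
    n = len(lru_queue)

    def eviction_score(indexed_key):
        i, key = indexed_key
        recency_pressure = (n - i) / n          # 1.0 at the LRU end
        weight = frequency_map.get(key, 1)      # higher count = more valuable
        return recency_pressure / weight

    return max(enumerate(lru_queue), key=eviction_score)[1]
```
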
20090222626Systems and Methods for Cache Line Replacements - A system for determining a cache line to replace is described. In one embodiment, the system includes a cache comprising a plurality of cache lines. The system further includes an identifier configured to identify a cache line for replacement. The system also includes a control logic configured to determine a value of the identifier selected from an incrementer, a cache maintenance instruction, or remains the same.09-03-2009
20090254710DEVICE AND METHOD FOR CONTROLLING CACHE MEMORY - A cache memory control device according to an embodiment of the present invention comprises: a refill counter that counts a refill request, and a cache-capacity determining unit that determines cache capacity. The cache-capacity determining unit transmits a cache-capacity-decrease command signal to the cache memory when a count value is equal to or smaller than a first threshold value or is smaller than the first threshold value, and the cache-capacity determining unit transmits a cache-capacity-increase command signal to the cache memory when the count value is equal to or larger than a second threshold value, which is larger than the first threshold value, or when the count value is larger than the second threshold value.10-08-2009
20120036326EFFICIENTLY SYNCHRONIZING WITH SEPARATED DISK CACHES - In a method of synchronizing with a separated disk cache, the separated cache is configured to transfer cache data to a staging area of a storage device. An atomic commit operation is utilized to instruct the storage device to atomically commit the cache data to a mapping scheme of the storage device.02-09-2012
20100153651Efficient use of memory and accessing of stored records - Memory is used, including by receiving at a processor an indication that a first piece of metadata associated with a set of backup data is required during a block based backup and/or restore. The processor is used to retrieve from a metadata store a set of metadata that includes the first piece of metadata and one or more additional pieces of metadata included in the metadata store in an adjacent location that is adjacent to a first location in which the first piece of metadata is stored in the metadata store, without first determining whether the one or more additional pieces of metadata are currently required. The retrieved set of metadata is stored in a cache.06-17-2010
20100161905Latency Reduction for Cache Coherent Bus-Based Cache - In one embodiment, a system comprises a plurality of agents coupled to an interconnect and a cache coupled to the interconnect. The plurality of agents are configured to cache data. A first agent of the plurality of agents is configured to initiate a transaction on the interconnect by transmitting a memory request, and other agents of the plurality of agents are configured to snoop the memory request from the interconnect. The other agents provide a response in a response phase of the transaction on the interconnect. The cache is configured to detect a hit for the memory request and to provide data for the transaction to the first agent prior to the response phase and independent of the response.06-24-2010
20120198174APPARATUS, SYSTEM, AND METHOD FOR MANAGING EVICTION OF DATA - An apparatus, system, and method are disclosed for managing eviction of data. A cache write module stores data on a non-volatile storage device sequentially using a log-based storage structure having a head region and a tail region. A direct cache module caches data on the non-volatile storage device using the log-based storage structure. The data is associated with storage operations between a host and a backing store storage device. An eviction module evicts data of at least one region in succession from the log-based storage structure starting with the tail region and progressing toward the head region.08-02-2012
20120198173 ROUTER AND MANY-CORE SYSTEM - According to one embodiment, a router manages routing of a packet transferred between a plurality of cores and at least one cache memory that the cores can access. The router includes an analyzer, a packet memory and a controller. The analyzer determines whether the packet is a read-packet or a write-packet. The packet memory stores at least part of the write-packet issued by one of the cores. The controller stores cache data of the write-packet and a cache address in the packet memory when the analyzer determines that the packet is the write-packet. The cache address indicates an address in which the cache data is stored. The controller outputs the cache data stored in the packet memory to the core issuing a read-request as response data corresponding to the read-packet when the analyzer determines that the packet is the read-packet and the cache address corresponding to the read-request is stored in the packet memory. 08-02-2012
20100153650Victim Cache Line Selection - A cache memory includes a cache array including a plurality of congruence classes each containing a plurality of cache lines, where each cache line belongs to one of multiple classes which include at least a first class and a second class. The cache memory also includes a cache directory of the cache array that indicates class membership. The cache memory further includes a cache controller that selects a victim cache line for eviction from a congruence class. If the congruence class contains a cache line belonging to the second class, the cache controller preferentially selects as the victim cache line a cache line of the congruence class belonging to the second class based upon access order. If the congruence class contains no cache line belonging to the second class, the cache controller selects as the victim cache line a cache line belonging to the first class based upon access order.06-17-2010
20100235582METHOD AND MECHANISM FOR DELAYING WRITING UPDATES TO A DATA CACHE - A novel and useful mechanism and method for writing data updates to a data cache subsystem of a storage controller. Updates received by the storage controller requiring storage allocation on a repository volume are delayed prior to being written to the data cache subsystem. The delay is based on the storage utilization of the repository volume. As the utilization of the repository volume increases, the cache write delay increases, thereby limiting the possibility that there will still be any updates in the data cache subsystem waiting to be destaged to the repository volume when the repository volume is fully utilized. When the repository volume is fully utilized all writes to the data cache of updates that will cause destage of tracks in the repository volume are stopped, thereby causing an infinite delay.09-16-2010
20110113201GARBAGE COLLECTION IN A CACHE WITH REDUCED COMPLEXITY - Garbage collection associated with a cache with reduced complexity. In an embodiment, a relative rank is computed for each cache item based on relative frequency of access and relative non-idle time of cache entry compared to other entries. Each item having a relative rank less than a threshold is considered a suitable candidate for replacement. Thus, when a new item is to be stored in a cache, an entry corresponding to an identified item is used for storing the new item.05-12-2011
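
A sketch of the relative-rank computation: each item is ranked against the most-accessed and least-idle entries, and items below a threshold become replacement candidates. The equal weighting of the two measures is an assumption; the abstract only says the rank is based on both.

```python
def replacement_candidates(items, threshold=0.5):
    """items: key -> (access_count, non_idle_seconds). Each item is ranked
    relative to the most-accessed and least-idle entries; everything whose
    relative rank falls below the threshold is a candidate for replacement."""
    max_freq = max(f for f, _ in items.values()) or 1
    max_busy = max(b for _, b in items.values()) or 1
    ranks = {
        key: 0.5 * (freq / max_freq) + 0.5 * (busy / max_busy)
        for key, (freq, busy) in items.items()
    }
    return [key for key, rank in ranks.items() if rank < threshold]
```
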
20090113132PREFERRED WRITE-MOSTLY DATA CACHE REPLACEMENT POLICIES - A computer-implemented method of cache replacement includes steps of: determining whether each cache block in a cache memory is a read or a write block; augmenting metadata associated with each cache block with an indicator of the type of access; receiving an access request resulting in a cache miss, the cache miss indicating that a cache block will need to be replaced; examining the indicator in the metadata of each cache block for determining a probability that said cache block will be replaced; and selecting for replacement the cache block with the highest probability of replacement.04-30-2009
20090037661Cache mechanism for managing transient data - A system and method are provided for managing transient data in cache memory. The method accepts a segment of data and stores the segment in a cache line. In response to accepting a read-invalidate command for the cache line, the segment is both read from the cache line and the cache line made invalid. If, prior to accepting the read-invalidate command, the segment in the cache line is modified, the modified segment is not stored in a backup storage memory as a result of subsequently accepting the read-invalidate command. In one aspect, the segment is initially identified as transient data, and the read-invalidate command is used in response to identifying the segment as transient data.02-05-2009
20100281223SELECTIVELY SECURING DATA AND/OR ERASING SECURE DATA CACHES RESPONSIVE TO SECURITY COMPROMISING CONDITIONS - Techniques are generally described for methods, systems, data processing devices and computer readable media configured to decrypt data to be stored in a data cache when a particular condition indicative of user authentication or data security has occurred. The described techniques may also be arranged to terminate the storage of decrypted data in the cache when a particular condition that may compromise the security of the data is detected. The describe techniques may further be arranged to erase the decrypted data stored in the cache when a particular condition that may compromise the security of the data is detected.11-04-2010
20100325362System and Method For Providing Conditional access to Server-based Applications From Remote Access Devices - Systems and methods are provided for providing users at remote access devices with conditional access to server-based applications. Requests for access to server-based applications (e.g., requests to launch or obtain data associated with the server-based applications) by remote access devices may be prevented or allowed based on device compliance with one or more policies including whether data-retention prevention code can be downloaded to and operational on the remote access devices. The data-retention prevention code may be used to both determine whether data can be automatically deleted from a cache or file directory at the remote access device and to delete potentially retention-sensitive data once the data is downloaded to the remote access device from the server-based application.12-23-2010
20100325361METHOD FOR CONTROLLING CACHE - A computer-implemented method, apparatus, and computer program-product for controlling cache. The method includes the steps of assigning a value corresponding to a transaction to a memory object that is created while a computer application is processing the transaction; adding the assigned value as a transaction flag value to a flag area of a cache array in accordance with the storage of the memory object in the cache; registering the corresponding transaction flag value as a victim candidate at the completion of the transaction; and in response to eviction of a cache line, preferentially evicting a cache line having the transaction flag value registered as the victim candidate.12-23-2010
20100235583ADAPTIVE DISPLAY CACHING - Apparatus, systems, and methods may operate to send a window copy message including changed window identification information to a remote node when metadata associated with a changed foreground window at a local node has been cached, and otherwise, to locally cache the window metadata and send the window metadata and window pixel data to the remote node. When a preselected minimum bandwidth connection is not available between the local node and the remote node, additional operations may include sending a rectangle paint message including changed rectangle identification information to the remote node when rectangle metadata associated with a changed rectangle of a designated minimum size at the local node has been cached, and otherwise, to locally cache the rectangle metadata and send the rectangle metadata and rectangle pixel data to the remote node. Additional apparatus, systems, and methods are disclosed.09-16-2010
20110010505RESOURCE MANAGEMENT CACHE TO MANAGE RENDITIONS - A resource management cache of a computing device receives a request for an item. The item may include any type of content, such as an image or a video. A rendition for the item is determined. The item may be stored in a plurality of renditions for retrieval. The resource management cache can send one or more requests to one or more sources for the rendition. The sources may include remote sources and also a local source. If a source responds with an indication the rendition is available, the rendition is sent to and received at the computing device. If no sources respond with an indication the rendition is available, the resource management cache may send a message asking if a source can generate the rendition from another rendition of the item. The rendition may be generated and it is sent to and received at the resource management cache.01-13-2011
20100138613Data Caching - The invention relates to a method for improving caching efficiency in a computing device. It utilises metadata, that describes attributes of the data to which it relates, to determine an appropriate caching strategy for the data. The caching strategy may be based on the type of the data, and/or on the expected access of the data.06-03-2010
20090313438 DISTRIBUTED CACHE ARRANGEMENT - Systems and methods that aggregate memory capacity of multiple computers into a single unified cache, via a layering arrangement. Such layering arrangement is scalable to a plurality of machines and includes a data manager component, an object manager component and a distributed object manager component, which can be implemented in a modular fashion. Moreover, the layering arrangement can provide for an explicit cache tier (e.g., cache-aside architecture) that applications are aware of, wherein decisions about which objects to put in or remove from the cache are made explicitly by such applications (as opposed to an implicit cache, wherein applications do not know of the existence of the cache). 12-17-2009
20100077152PRIMARY-SECONDARY CACHING SCHEME TO ENSURE ROBUST PROCESSING TRANSITION DURING MIGRATION AND/OR FAILOVER - Scores are maintained usable by a behavioral targeting service. Each of a plurality of scoring engine partitions is provided events (first events) for at least one of the particular non-overlapping subsets of the users, and at least one particular scoring engine partition is also provided events (second events) for at least an additional one of said particular non-overlapping subsets of the users. The event indications are processed to determine updated scoring data indicative of behavior of the users represented by the detected events relative to the at least one online service and the updated scoring data are written to a persistent scoring engine storage. The particular scoring engine provides updated scores to the persistent scoring engine storage according to a first writeback caching scheme for updated scores determined from the first events and according to a second writeback caching scheme for updated scores determined from the second events. The time-to-live parameters are controlled for the first writeback caching scheme independently of controlling time-to-live parameters for the second writeback caching scheme.03-25-2010
20100037026Cache Refill Control - A method and a device are disclosed for a cache memory refill control.02-11-2010
20100057995CONTENT REPLACEMENT AND REFRESH POLICY IMPLEMENTATION FOR A CONTENT DISTRIBUTION NETWORK - A method for replacing, refreshing, and managing content in a communication network is provided. The method defines an object policy mechanism that applies media replacement policy rules to defined classes of stored content objects. The object policy mechanism may classify stored content objects into object groups or policy targets. The object policy mechanism may also define metric thresholds and event triggers as policy conditions. The object policy mechanism may further apply replacement policy algorithms or defined policy actions against a class of stored content objects. The media replacement policy rules are enforced at edge content storage repositories in the communication network. A computing device for carrying out the method, and a method for creating, reading, updating, and deleting policy elements and managing policy engine operations, are also provided.03-04-2010
20090216954APPARATUS, SYSTEM, AND METHOD FOR SELECTING A SPACE EFFICIENT REPOSITORY - An apparatus, system, and method are disclosed for selecting a space efficient repository. A cache receives write data. A destage module destages the data sequentially to a coarse grained repository such as a stride level repository and destages a directory entry for the data to a coarse grained directory such as a stride level directory if the data satisfies a repository policy. In addition, the destage module destages the data to a fine grained repository such as a track level repository overwriting an existing data instance and destages the directory entry to a fine grained directory such as a track level directory if the data does not satisfy the repository policy.08-27-2009
20100057994DEVICE AND METHOD FOR CONTROLLING CACHES - Device and method for controlling caches, comprising a decoder configured to decode additional information of datasets retrievable from a memory, wherein the decoded additional information is configured to control whether particular ones of the datasets are to be stored in a cache.03-04-2010
20100250857CACHE CONTROLLING APPARATUS, INFORMATION PROCESSING APPARATUS AND COMPUTER-READABLE RECORDING MEDIUM ON OR IN WHICH CACHE CONTROLLING PROGRAM IS RECORDED - A technique for managing a cache memory for temporarily retaining data read out from a main memory so as to be used by a processing section is disclosed. The cache memory is managed using a tag memory and utilized by a write-through method. The cache controlling apparatus includes a supervising section adapted to supervise accessing time to the cache memory, and a refreshing section adapted to read out data on one or more cache lines of the cache memory from the main memory again in response to a result of the supervision by the supervising section and retain the read out data into the cache memory.09-30-2010
20110087844CONTENT NETWORK GLOBAL REPLACEMENT POLICY - This invention is related to content delivery systems and methods. In one aspect of the invention, a content provider controls a replacement process operating at an edge server. The edge server services content providers and has a data store for storing content associated with respective ones of the content providers. A content provider sets a replacement policy at the edge server that controls the movement of content associated with the content provider, into and out of the data store. In another aspect of the invention, a content delivery system includes a content server storing content files, an edge server having cache memory for storing content files, and a replacement policy module for managing content stored within the cache memory. The replacement policy module can store portions of the content files at the content server within the cache memory, as a function of a replacement policy set by a content owner.04-14-2011
20120303902APPARATUS AND METHOD FOR MANAGING DATA STORAGE - An apparatus for controlling a log-structured data storage system, operable with a first log-structured data storage area for storing data, comprises a metadata storage component for controlling the first log-structured data storage area and comprising a second log-structured data storage area for storing metadata; and means for nesting the second log-structured data storage area for storing metadata within the first log-structured data storage area. The apparatus may further comprise at least a third log-structured data storage area for storing further metadata, and means for nesting the at least a third log-structured data storage area within the second log-structured data storage area.11-29-2012
20110060881Asynchronous Cache Refresh for Systems with a Heavy Load - A method and system to refresh a data entry in a cache before the data entry expires. The system includes a client computing system coupled to a server via a network connection. In response to a request for data access, the client computing system locates a data entry in a cache and determines whether the data entry in the cache has exceeded a refresh timeout since a last update of the data entry. If the data entry in the cache has exceeded the refresh timeout, the client computing system retrieves the data entry found in the cache in response to the request without waiting for the data entry to be refreshed, and requests a refresh of the data entry from the server via the network connection.03-10-2011
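
The serve-stale-then-refresh behavior reduces request latency under load; a simplified single-process sketch (locking and error handling omitted, names assumed) might look like this.

```python
import threading
import time

def get(cache, key, refresh_timeout_s, fetch_from_server):
    """Return the cached entry immediately; if it is older than the
    refresh timeout, start a background refresh instead of blocking the
    caller. The entry is assumed present; a miss path would fetch inline."""
    value, last_update = cache[key]
    if time.time() - last_update > refresh_timeout_s:
        def refresh():
            cache[key] = (fetch_from_server(key), time.time())
        threading.Thread(target=refresh, daemon=True).start()
    return value      # possibly slightly stale, but never delayed
```
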
20110153949DELAYED REPLACEMENT OF CACHE ENTRIES - A cache entry replacement unit can delay replacement of more valuable entries by replacing less valuable entries. When a miss occurs, the cache entry replacement unit can determine a cache entry for replacement (“a replacement entry”) based on a generic replacement technique. If the replacement entry is an entry that should be protected from replacement (e.g., a large page entry), the cache entry replacement unit can determine a second replacement entry. The cache entry replacement unit can “skip” the first replacement entry by replacing the second replacement entry with a new entry, if the second replacement entry is an entry that should not be protected (e.g., a small page entry). The first replacement entry can be skipped a predefined number of times before the first replacement entry is replaced with a new entry.06-23-2011
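
A sketch of skip-based protection: protected entries (e.g., large-page entries) are spared a bounded number of times before the generic choice stands. MAX_SKIPS and the walk order are assumptions.

```python
MAX_SKIPS = 3   # assumed bound on how often a protected entry is spared

def select_victim(entries):
    """entries: dicts with 'protected' (e.g. a large-page entry) and
    'skips', listed in the order the generic replacement technique would
    pick them. Protected entries are skipped until their skip budget is
    used up, after which the generic choice stands."""
    for entry in entries:
        if not entry["protected"] or entry["skips"] >= MAX_SKIPS:
            return entry
        entry["skips"] += 1          # spare it this time, remember the skip
    return entries[0]                # everything protected and spared: fall back
```
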
20120203973SELECTIVE CACHE-TO-CACHE LATERAL CASTOUTS - A data processing system includes first and second processing units and a system memory. The first processing unit has first upper and first lower level caches, and the second processing unit has second upper and lower level caches. In response to a data request, a victim cache line to be castout from the first lower level cache is selected, and the first lower level cache selects between performing a lateral castout (LCO) of the victim cache line to the second lower level cache and a castout of the victim cache line to the system memory based upon a confidence indicator associated with the victim cache line. In response to selecting an LCO, the first processing unit issues an LCO command on the interconnect fabric and removes the victim cache line from the first lower level cache, and the second lower level cache holds the victim cache line.08-09-2012
20120203972MEMORY MANAGEMENT FOR OBJECT ORIENTED APPLICATIONS DURING RUNTIME - Memory management for object oriented applications during run time includes loading an object oriented application into a computer memory. The object oriented application includes a plurality of nodes in a classification tree, the nodes including key value pairs. The nodes are aggregated in the classification tree by a computer. The aggregating includes eliminating redundant keys and creating a composite node. The composite node is loaded into the computer memory. The plurality of nodes in the classification tree are removed from the computer memory in response to loading the composite node into the computer memory.08-09-2012
20090164733APPARATUS AND METHOD FOR CONTROLLING THE EXCLUSIVITY MODE OF A LEVEL-TWO CACHE - A method of controlling the exclusivity mode of a level-two cache includes generating level-two cache exclusivity control information at a processor in response to an exclusivity mode indicator, and utilizing the level-two cache exclusivity control information to configure the exclusivity mode of the level-two cache.06-25-2009
20080320226Apparatus and Method for Improved Data Persistence within a Multi-node System - Improved access to retained data useful to a system is accomplished by managing data flow through cache associated with the processor(s) of a multi-node system. A data management facility operable with the processors and memory array directs the flow of data from the processors to the memory array by determining the path along which data evicted from a level of cache close to one of the processors is to return to a main memory and directing evicted data to be stored, if possible, in a horizontally associated cache.12-25-2008
20080320227Cache memory device and cache memory control method - A cache memory device that includes a cache which stores data and tag information specifying an address of stored data, includes a detection unit that detects an error by reading out the tag information when a writing/readout request of desired data occurs to the cache, a search unit that searches the tag information for an address of the desired data when no error is detected in the tag information as a result of error detection by the detection unit, a memory unit that stores an address of data that is to be replaced by the desired data, the address being contained in the tag information, when the address of the desired data is not contained in the tag information as a result of search by the search unit, and a control unit that requests an external unit to replace data with a use of the address stored by the memory unit.12-25-2008
20110055488HORIZONTALLY-SHARED CACHE VICTIMS IN MULTIPLE CORE PROCESSORS - A processor includes multiple processor core units, each including a processor core and a cache memory. Victim lines evicted from a first processor core unit's cache may be stored in another processor core unit's cache, rather than written back to system memory. If the victim line is later requested by the first processor core unit, the victim line is retrieved from the other processor core unit's cache. The processor has low latency data transfers between processor core units. The processor transfers victim lines directly between processor core units' caches or utilizes a victim cache to temporarily store victim lines while searching for their destinations. The processor evaluates cache priority rules to determine whether victim lines are discarded, written back to system memory, or stored in other processor core units' caches. Cache priority rules can be based on cache coherency data, load balancing schemes, and architectural characteristics of the processor.03-03-2011
20110016276SYSTEM AND METHOD FOR CACHE MANAGEMENT - Aspects of the invention relate to improvements to the Least Recently Used (LRU) cache replacement method. Weighted LRU (WLRU) and Compact Weighted LRU (CWLRU) are CPU cache replacement methods that have superior hit rates to LRU replacement for programs with poor locality, such as network protocols and applications. WLRU assigns weights to cache lines and makes replacement decision by comparing weights. When a cache line is first brought into the cache, it is assigned an initial weight. Weights of cache lines in WLRU increase when hit and decrease when not hit. Weights in WLRU also have upper limits, and the weight of a cache line never increases beyond the upper limit. CWLRU is a more space-efficient implementation of WLRU. Compared to WLRU, CWLRU uses fewer bits per cache line to store the weight.01-20-2011
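
The WLRU rules (initial weight, capped increase on hit, decay otherwise, lowest weight replaced) map naturally onto a per-set model; all numeric parameters here are assumed tunings, not values from the application.

```python
INITIAL_WEIGHT = 2   # all numeric values are assumed tunings
HIT_INCREMENT = 2
DECAY = 1
WEIGHT_CAP = 7       # a line's weight never increases beyond this limit

class WLRUSet:
    """One cache set under weighted LRU (WLRU)."""

    def __init__(self, num_ways):
        self.tags = [None] * num_ways
        self.weights = [0] * num_ways

    def access(self, tag):
        if tag in self.tags:                  # hit: weight rises, capped
            way = self.tags.index(tag)
            self.weights[way] = min(self.weights[way] + HIT_INCREMENT, WEIGHT_CAP)
        else:                                 # miss: replace the lowest weight
            victim = min(range(len(self.tags)), key=lambda w: self.weights[w])
            self.tags[victim] = tag
            self.weights[victim] = INITIAL_WEIGHT
        for w in range(len(self.tags)):       # lines that were not hit decay
            if self.tags[w] != tag:
                self.weights[w] = max(self.weights[w] - DECAY, 0)
```
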
20100293335Cache Management - A method for cache management in an environment based on Common Information Model is described. Cache elements in the cache are associated with a time attribute and historical data. Cache elements having a time attribute lying in a certain range are polled for from the server and updated at predetermined time points. A new time attribute is calculated for each cache element based on its historical data and this new time attribute assists in adapting the polling frequency for the cache element to its importance and change characteristics. Asynchronous notifications from the server preempt the polling based on the time attribute for a cache element and instead, polling for the cache element is based on the asynchronous notification. A system for cache management includes a client and a server, the client having a cache that is managed based on each cache element's importance and change characteristics.11-18-2010
20100293336SYSTEM AND METHOD OF INCREASING CACHE SIZE - A system and method for increasing cache size is provided. Generally, the system contains a storage device having storage blocks therein and a memory. A processor is also provided, which is configured by the memory to perform the steps of: categorizing storage blocks within the storage device as within a first category of storage blocks if the storage blocks that are available to the system for storing data when needed; categorizing storage blocks within the storage device as within a second category of storage blocks if the storage blocks contain application data therein; and categorizing storage blocks within the storage device as within a third category of storage blocks if the storage blocks are storing cached data and are available for storing application data if no first category of storage blocks are available to the system.11-18-2010
20110138131Probabilistic Offload Engine For Distributed Hierarchical Object Storage Devices - A method and system having a probabilistic offload engine for distributed hierarchical object storage devices is disclosed. According to one embodiment, a system comprises a first storage system and a second storage system in communication with the first storage system. The first storage system and the second storage system are key/value based object storage devices that store and serve objects. The first storage system and the second storage system execute a probabilistic algorithm to predict access patterns. The first storage system and the second storage system execute a probabilistic algorithm to predict access patterns and minimize data transfers between the first storage system and the second storage system.06-09-2011
20110138130PROCESSOR AND METHOD OF CONTROL OF PROCESSOR - A processor includes: a processing unit that has a first unit; a second unit that holds part of the data held by the first unit; a third unit that receives from the processing unit a first request including first attribute information for obtaining a first logical value and a second request including second attribute information for obtaining a second logical value and that holds the first request until receiving a completion notification of the first request or holds the second request until receiving a completion notification of the second request; and a control unit that receives the first and second requests from the third unit and, replaces the first attribute information by the second attribute information when data of the addresses corresponding to the first and second request are not in the second unit, and supplies the completion notification for the second request to the first unit.06-09-2011
20120311269NON-UNIFORM MEMORY-AWARE CACHE MANAGEMENT - An apparatus is disclosed for caching memory data in a computer system with multiple system memories. The apparatus comprises a data cache for caching memory data. The apparatus is configured to determine a retention priority for a cache block stored in the data cache. The retention priority is based on a performance characteristic of a system memory from which the cache block is cached.12-06-2012
20110119449APPLICATION INFORMATION CACHE - A request for application information can be received from an application running in a process. The application information can be requested from an information repository, and received back from the repository in a first format. The application information can be converted to a second format, and passed to the application in the second format. In addition, the application information can be saved in the second format in a cache in the process. Also, when application information has been cached in response to a request for the information for a first user object, and a subsequent request for the application information for a second user object is received, it can be determined whether the second user object is authorized to access the application information. If so, then the application information can be fetched from the cache and returned for use by the second user object.05-19-2011
20110153950Cache memory, cache memory system, and method program for using the cache memory - A cache memory includes: a plurality of MSHRs (Miss Status/Information Holding Registers); a memory access identification unit that identifies a memory access included in an accepted memory access request; and a memory access association unit that associates a given memory access with the MSHR that is used when the memory access turns out to be a cache miss and determines, on the basis of the association, a candidate for the MSHR to be used by the memory access identified by the memory access identification unit.06-23-2011
20100082907System For And Method Of Data Cache Management - The present invention provides a system for and a method of data cache management. In accordance with an embodiment of the present invention, a method of cache management is provided. A request for access to data is received. A sample value is assigned to the request, the sample value being randomly selected according to a probability distribution. The sample value is compared to another value. The data is selectively stored in the cache based on results of the comparison.04-01-2010
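Read as an admission policy, this lends itself to a few lines of code: draw a random sample value and store the data only when the sample passes a comparison. A minimal sketch; the uniform distribution, the fixed threshold, and the names are assumptions, not details from the application:

    import random

    def maybe_cache(cache, key, data, admit_probability=0.1):
        sample = random.random()        # sample value drawn for this request
        if sample < admit_probability:  # compared against another value
            cache[key] = data           # selectively stored on success
            return True
        return False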
20100030972Device, system and method of accessing data stored in a memory - Device, system and method of accessing data stored in a memory. For example, a device may include a memory to store a plurality of data items to be accessed by a processor; a cache manager to manage a cache within the memory, the cache including a plurality of pointer entries, wherein each pointer entry includes an identifier of a respective data item and a pointer to an address of the data item; and a search module to receive from the cache manager an identifier of a requested data item, search the plurality of pointer entries for the identifier of the requested data item and, if a pointer entry is detected to include an identifier of a respective data item that matches the identifier of the requested data item, provide the cache manager with the pointer from the detected entry. Other embodiments are described and claimed.02-04-2010
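The pointer-entry structure amounts to an identifier-to-address index in front of the memory. A minimal sketch under that reading (class and method names are hypothetical):

    class PointerCache:
        def __init__(self):
            self.entries = {}   # identifier -> address of the data item

        def insert(self, identifier, address):
            self.entries[identifier] = address   # one pointer entry per item

        def search(self, identifier):
            # Search module: return the pointer when an entry's identifier
            # matches the requested identifier, else None.
            return self.entries.get(identifier)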
20100023698Enhanced Coherency Tracking with Implementation of Region Victim Hash for Region Coherence Arrays - A method and system for precisely tracking lines evicted from a region coherence array (RCA) without requiring eviction of the lines from a processor's cache hierarchy. The RCA is a set-associative array which contains region entries consisting of a region address tag, a set of bits for the region coherence state, and a line-count for tracking the number of region lines cached by the processor. Tracking of the RCA is facilitated by a non-tagged hash table of counts represented by a Region Victim Hash (RVH). When a region is evicted from the RCA, and lines from the evicted region still reside in the processor's caches (i.e., the region's line-count is non-zero), the RCA line-count is added to the corresponding RVH count. The RVH count is decremented by the value of the region line count following a subsequent processor cache eviction/invalidation of the region previously evicted from the RCA.01-28-2010
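The RVH bookkeeping can be illustrated with a small counter table: on RCA eviction of a region whose line-count is non-zero, add that count to the hashed bucket; on a later cache eviction or invalidation of the region's lines, decrement it. A Python sketch with assumed names and an arbitrary bucket count:

    class RegionVictimHash:
        def __init__(self, buckets=64):
            self.counts = [0] * buckets   # non-tagged: only counts, no region tags
            self.buckets = buckets

        def _index(self, region_address):
            return hash(region_address) % self.buckets

        def on_rca_eviction(self, region_address, line_count):
            # Region leaves the RCA while line_count of its lines are still cached.
            if line_count > 0:
                self.counts[self._index(region_address)] += line_count

        def on_cache_eviction(self, region_address, line_count):
            # Later eviction/invalidation of the region's lines drains the count.
            i = self._index(region_address)
            self.counts[i] = max(0, self.counts[i] - line_count)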
20090172290Replacing stored content to make room for additional content - The storage of items, such as media items, in a playback device may be managed automatically without user intervention in some embodiments. An algorithm, based on heuristics, may predict which items are most likely to be used or played in the future and, based on that determination, may select the least likely to be used items for replacement. In addition, replacement may be affected by the size of space needed for additional storage versus the size of particular candidates for replacement.07-02-2009
20120047330I/O EFFICIENCY OF PERSISTENT CACHES IN A STORAGE SYSTEM - A system and method are disclosed for improving the efficiency of a storage system. At least one application-oriented property is associated with data to be stored on a storage system. Based on the at least one application-oriented property, a manner of implementing at least one caching function in the storage system is determined. Data placement and data movement are controlled in the storage system to implement the at least one caching function.02-23-2012
20120011323MEMORY SYSTEM AND MEMORY MANAGEMENT METHOD INCLUDING THE SAME - A multi-processor system includes a first processor, a second processor communicable with the first processor, a first non-volatile memory for storing first codes and second codes to respectively boot the first and second processors, the first memory communicable with the first processor, a second volatile memory designated for the first processor, a third volatile memory designated for the second processor, and a fourth volatile memory shared by the first and second processors.01-12-2012
20120159078Protecting Data During Different Connectivity States - Aspects of the subject matter described herein relate to data protection. In aspects, during a backup cycle, backup copies may be created for files that are new or that have changed since the last backup. If external backup storage is not available, the backup copies may be stored in a cache located on the primary storage. If backup storage is available, the backup copies may be stored in the backup storage device and backup copies that were previously stored in the primary storage may be copied to the backup storage. The availability of the backup storage may be detected and used to seamlessly switch between backing up files locally and remotely as availability of the backup storage changes.06-21-2012
20120166732CONTENT CACHING DEVICE, CONTENT CACHING METHOD, AND COMPUTER READABLE MEDIUM - A first acquisition unit acquires each of the resources defined by the scenario, from locations depending on identifiers of the resources. A judging unit, when a resource having the same identifier and structure as the acquired resource exists in the cache storage, erases the resource, the identifier thereof, and the receipt time information from the cache storage, and when it does not exist, stores the acquired resource in association with the identifier thereof and the receipt time information of the bookmark instruction in the cache storage. A second acquisition unit, when the identifiers of the resources specified by a first scenario exist in the cache storage, acquires the resources from the cache storage according to the receipt time information corresponding to the first scenario and the identifiers of the resources, and when they do not exist, acquires the resources from locations depending on the identifiers.06-28-2012
20120221798Computer Cache System With Stratified Replacement - Methods for selecting a line to evict from a data storage system are provided. A computer system implementing a method for selecting a line to evict from a data storage system is also provided. The methods include selecting an uncached class line for eviction prior to selecting a cached class line for eviction.08-30-2012
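A stratified victim search reduces to: consider uncached-class lines for eviction first, and fall back to cached-class lines only when that stratum is empty. A sketch; the LRU tie-break within a stratum and the field names are assumptions:

    def select_victim(lines):
        # Stratum 1: uncached-class lines are considered for eviction first.
        uncached = [l for l in lines if l["cls"] == "uncached"]
        pool = uncached if uncached else lines   # stratum 2: cached-class fallback
        # LRU within the chosen stratum is an assumed tie-break, not from the abstract.
        return min(pool, key=lambda l: l["last_used"])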
20120131280SYSTEMS AND METHODS FOR BACKING UP STORAGE VOLUMES IN A STORAGE SYSTEM - Systems and methods for backing up storage volumes are provided. One system includes a primary side, a secondary side, and a network coupling the primary and secondary sides. The secondary side includes a first and a second VTS, each including a cache and storage tape. The first VTS is configured to store a first portion of a group of storage volumes in its cache and migrate the remaining portion to its storage tape. The second VTS is configured to store the remaining portion of the storage volumes in its cache and migrate the first portion to its storage tape. One method includes receiving multiple storage volumes from a primary side, storing the storage volumes in the caches of the first and second VTS, migrating a portion of the storage volumes from the cache to storage tape in the first VTS, and migrating the remaining portion of the storage volumes from the cache to storage tape in the second VTS.05-24-2012
20120215985CACHE AND A METHOD FOR REPLACING ENTRIES IN THE CACHE - A processor is provided. The processor includes a cache, the cache having a plurality of entries, each of the plurality of entries having a tag array and a data array, and a remapper configured to create at least one identifier, each identifier being unique to a process of the processor, and to assign a respective identifier to the tag array for the entries related to a respective process, the remapper further configured to determine a replacement value for the entries related to each identifier.08-23-2012
20120179875USING EPHEMERAL STORES FOR FINE-GRAINED CONFLICT DETECTION IN A HARDWARE ACCELERATED STM - A method and apparatus for fine-grained filtering in a hardware accelerated software transactional memory system is herein described. A data object, which may have an arbitrary size, is associated with a filter word. The filter word is in a first default state when no access, such as a read, from the data object has occurred during a pendency of a transaction. Upon encountering a first access, such as a first read, from the data object, access barrier operations including an ephemeral/private store operation to set the filter word to a second state are performed. Upon a subsequent/redundant access, such as a second read, the access barrier operations are elided to accelerate the subsequent access, based on the filter word being set to the second state to indicate a previous access occurred.07-12-2012
20120233408INTELLIGENT WRITE CACHING FOR SEQUENTIAL TRACKS - Write caching for sequential tracks is performed by a processor device in a computing storage environment for destaging data from nonvolatile storage (NVS) to a storage unit. If a first track is determined to be sequential, and an earlier track is also determined to be sequential, a temporal bit associated with the earlier track is cleared to allow for destage of data of the earlier track. If a temporal bit for one of a plurality of additional tracks in one of a plurality of strides in a modified cache is determined to be not set, a stride associated with the one of the plurality of additional tracks is selected for a destage operation. If the NVS exceeds a predetermined storage threshold, a predetermined one of the plurality of strides is selected for the destage operation.09-13-2012
20090070533Content network global replacement policy - This invention is related to content delivery systems and methods. In one aspect of the invention, a content provider controls a replacement process operating at an edge server. The edge server services content providers and has a data store for storing content associated with respective ones of the content providers. A content provider sets a replacement policy at the edge server that controls the movement of content associated with the content provider, into and out of the data store. In another aspect of the invention, a content delivery system includes a content server storing content files, an edge server having cache memory for storing content files, and a replacement policy module for managing content stored within the cache memory. The replacement policy module can store portions of the content files at the content server within the cache memory, as a function of a replacement policy set by a content owner.03-12-2009
20120185650CACHE DEVICE, DATA MANAGEMENT METHOD, PROGRAM, AND CACHE SYSTEM - A deleted-cache determining part determines which cache data is to be deleted from a data storing part when the sum of the amount of data recorded to the data storing part, the amount of cache data stored in the data storing part, and the amount of buffer data stored in the storing part is equal to or greater than a predetermined threshold, and an accumulated-data control part deletes the cache data so determined from the data storing part.07-19-2012
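The deletion trigger is a simple sum-against-threshold test over recorded, cache, and buffer data. A sketch of that test; the oldest-first choice of deletion candidate and the data layout are assumptions, since the abstract leaves the selection rule open:

    def enforce_threshold(store, threshold):
        # store["recorded"] and store["buffer"] are byte counts;
        # store["cache"] maps an insertion timestamp to a cached item's size.
        total = store["recorded"] + sum(store["cache"].values()) + store["buffer"]
        while total >= threshold and store["cache"]:
            victim = min(store["cache"])          # oldest entry stands in for the
            total -= store["cache"].pop(victim)   # 'deleted cache determining part'
        return total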
20120084513CIRCUIT AND METHOD FOR DETERMINING MEMORY ACCESS, CACHE CONTROLLER, AND ELECTRONIC DEVICE - A memory access determination circuit includes a counter that switches between a first reference value and a second reference value in accordance with a control signal to generate a count value based on the first reference value or the second reference value. A controller performs a cache determination based on an address that corresponds to the count value and outputs the control signal in accordance with the cache determination. A changing unit changes the second reference value in accordance with the cache determination.04-05-2012
20120260044SYSTEMS AND METHODS FOR DESTAGING STORAGE TRACKS FROM CACHE - A system includes a cache and a processor coupled to the cache. The cache stores data in multiple storage tracks and each storage track includes an associated multi-bit counter. The processor is configured to perform the following method. One method includes writing data to the plurality of storage tracks and incrementing the multi-bit counter on each respective storage track a predetermined amount each time the processor writes to a respective storage track. The method further includes scanning each of the storage tracks in each of multiple scan cycles, decrementing each multi-bit counter each scan cycle, and destaging each storage track whose count reaches zero.10-11-2012
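The scan-cycle mechanics resemble a multi-bit CLOCK-style sweep: each write increments a track's counter up to its maximum, each scan cycle decrements every counter, and a track whose count reaches zero is destaged. A sketch with an assumed 2-bit counter width:

    def on_write(track, increment=1, max_count=3):
        # A write bumps the track's counter, capped by the counter width
        # (max_count=3 models an assumed 2-bit counter).
        track["count"] = min(max_count, track["count"] + increment)

    def scan_cycle(tracks):
        # One sweep: decrement every counter; tracks reaching zero are destaged.
        destaged = []
        for t in tracks:
            if t["count"] > 0:
                t["count"] -= 1
                if t["count"] == 0:
                    destaged.append(t)   # hand off to the destage path (not modeled)
        return destaged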
20090019227Method and Apparatus for Refetching Data - Methods and apparatus for refetching data to store in a cache are disclosed. According to one aspect of the present invention, a method includes identifying a speculative set that identifies at least a first element that is associated with a cache. The first element has at least a first representation in the cache that is suitable for updating. The method also includes issuing a request to obtain the first element from a data source, opening a channel to the data source, obtaining the first element from the data source using the channel, and closing the channel. Finally, the method includes updating the first representation associated with the first element in the cache.01-15-2009
20090006761Cache pollution avoidance - Embodiments of the present invention are directed to a scheme in which information as to the future behavior of particular software is used in order to optimize cache management and reduce cache pollution. Accordingly, a certain type of data can be defined as “short life data” by using knowledge of the expected behavior of particular software. Short life data can be a type of data which, according to the ordinary expected operation of the software, is not expected to be used by the software often in the future. Data blocks which are to be stored in the cache can be examined to determine if they are short life data blocks. If the data blocks are in fact short life data blocks they can be stored only in a particular short life area of the cache.01-01-2009
20080301373TECHNIQUE FOR CACHING DATA TO BE WRITTEN TO MAIN MEMORY - A memory apparatus having a cache memory including cache segments, which memorizes validity data indicative of whether or not each of the sectors contained in each cache segment is a valid sector containing valid data, and a cache controlling component for controlling access to the cache memory. The cache controlling component includes a detecting component for detecting, when writing a cache segment back to the main memory, areas having consecutive invalid sectors by accessing the validity data corresponding to the cache segment, and a write-back controlling component issuing a read command to the main memory for reading data into each detected area, making the area a valid sector, and writing the data in the cache segment back to the main memory.12-04-2008
20110004730CACHE MEMORY DEVICE, PROCESSOR, AND CONTROL METHOD FOR CACHE MEMORY DEVICE - A cache memory device that connects an instruction controlling unit outputting a memory access request for requesting data and a storage device storing data. The cache memory device includes: a data memory unit that holds data for each cache line; a tag memory unit that holds, for each cache line linked with a cache line of the data memory unit, tag addresses specifying storage positions of data covered by the memory access request at the storage device and status data indicating states of the data of the data memory unit corresponding to the tag addresses; a search unit that searches for a cache line of the tag memory unit corresponding to an index address included in the memory access request; a comparison unit that compares a tag address held in the found cache line of the tag memory unit with a tag address included in the memory access request and, when the two do not match, detects a cache miss and reads out the status data of the found cache line; and a controlling unit that, when the comparison unit detects a cache miss, requests the data covered by the memory access request from the storage device and, when the cache line storing the requested data is not present in the data memory unit, stops the supply of a clock to the data memory unit based on the status data that the comparison unit read out.01-06-2011
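The search/compare path is the classic indexed tag probe: the index address selects a line, the stored tag is compared with the request's tag, and a mismatch yields a cache miss plus the line's status data. A direct-mapped Python sketch (one line per index; field names and the 6-bit index width are assumptions):

    def lookup(tag_array, data_array, address, index_bits=6):
        # Split the address into index (line selector) and tag (comparand).
        index = address & ((1 << index_bits) - 1)
        tag = address >> index_bits
        entry = tag_array[index]
        if entry is not None and entry["tag"] == tag:
            return ("hit", data_array[index])
        # Mismatch (or empty line) is a cache miss; the status data guides the
        # controller, e.g. whether the clock to the data memory unit can stop.
        status = entry["status"] if entry is not None else "invalid"
        return ("miss", status)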
20110131379PROCESSOR AND METHOD FOR WRITEBACK BUFFER REUSE - A processor may include a writeback cache configured to perform a first writeback operation to store corresponding writeback data back to a lower-level memory upon eviction of the writeback data, and a writeback buffer configured to store the writeback data after the writeback data has been evicted from the writeback cache and before the writeback data has been sent to the lower-level memory. After the writeback data has been sent from the writeback buffer to the lower-level memory, and before the lower-level memory has acknowledged completion of the first writeback operation, the writeback cache may perform a second writeback operation to store different writeback data in the writeback buffer in response to eviction of the different writeback data, such that a total size of the writeback data for the concurrently outstanding writeback operations exceeds a total size of writeback data that the writeback buffer is capable of concurrently storing.06-02-2011
20120272009METHODS AND APPARATUS FOR HANDLING A CACHE MISS - In a first aspect, a first method is provided. The first method includes the steps of (1) providing a cache having a plurality of cache entries, each entry adapted to store data, wherein the cache is adapted to be accessed by hardware and software in a first operational mode; (2) determining an absence of desired data in one of the plurality of cache entries; (3) determining a status based on a current operational mode and a value of hint-lock bits associated with the plurality of cache entries; and (4) determining availability of at least one of the cache entries based on the status, wherein availability of a cache entry indicates that data stored in the cache entry can be replaced. Numerous other aspects are provided.10-25-2012
20120324169INFORMATION PROCESSING DEVICE AND METHOD, AND PROGRAM - Provided is an information processing device including a holding portion of a cache link that is formed such that, when clusters are recorded on a predetermined recording medium by a FAT file system and a FAT formed by link information of the clusters is also recorded on the predetermined recording medium by the system, an entry is arranged for each of the clusters located at a predetermined interval, the entry being formed by information including the link information extracted from the FAT, an information update portion that, when updating the cache link after data is additionally written to the clusters on the recording medium, updates the information for an update target entry among entries forming the cache link, and a configuration conversion portion that removes the update target entry updated from an original position in the cache link, and connects it to an endmost position of the cache link.12-20-2012
20110238919CONTROL OF PROCESSOR CACHE MEMORY OCCUPANCY - Techniques are described for controlling processor cache memory within a processor system. Cache occupancy values for each of a plurality of entities executing in the processor system can be calculated. A cache replacement algorithm uses the cache occupancy values when making subsequent cache line replacement decisions. In some variations, entities can have occupancy profiles specifying a maximum cache quota and/or a minimum cache quota which can be adjusted to achieve desired performance criteria. Related methods, systems, and articles are also described.09-29-2011
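An occupancy-aware replacement decision can be sketched by checking the candidates' owners against their quota profiles: over-quota entities are evicted from first, and entities below their minimum quota are shielded when possible. The exact priority order here is an assumption the abstract does not pin down:

    def choose_replacement(candidates, occupancy, profiles):
        # First choice: a line whose owning entity exceeds its maximum quota.
        for line in candidates:
            owner = line["entity"]
            if occupancy[owner] > profiles[owner]["max_quota"]:
                return line
        # Otherwise avoid entities still below their minimum quota when possible.
        safe = [l for l in candidates
                if occupancy[l["entity"]] >= profiles[l["entity"]]["min_quota"]]
        return safe[0] if safe else candidates[0]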
20120278558Structure-Aware Caching - Techniques for structure-aware caching are provided. The techniques include decomposing a response from an origin server into one or more independently addressable objects, using a domain specific language to navigate the response to identify the one or more addressable objects and create one or more access paths to the one or more objects, and selecting a route to an object by navigating an internal structure of a cached object to discover one or more additional independently addressable objects.11-01-2012
20110307666DATA CACHING METHOD - Data caching for use in a computer system including a lower cache memory and a higher cache memory. The higher cache memory receives a fetch request and then determines the state of the entry to be replaced next. If the state of the entry to be replaced next indicates that the entry is exclusively owned or modified, the state is changed such that a following cache access is processed at a higher speed than it would be if the state stayed unchanged.12-15-2011
20110320731ON DEMAND ALLOCATION OF CACHE BUFFER SLOTS - Dynamic allocation of cache buffer slots includes receiving a request to perform an operation that requires a storage buffer slot, the storage buffer slot residing in a level of storage. The dynamic allocation of cache buffer slots also includes determining availability of the storage buffer slot for the cache index as specified by the request. Upon determining the storage buffer slot is not available, the dynamic allocation of cache buffer slots includes evicting data stored in the storage buffer slot, and reserving the storage buffer slot for data associated with the request.12-29-2011
20110320730NON-BLOCKING DATA MOVE DESIGN - A mechanism for data buffering is provided. A portion of a cache is allocated as buffer regions, and another portion of the cache is designated as random access memory (RAM). One of the buffer regions is assigned to a processor. A data block is stored to the one of the buffer regions of the cache according to an instruction of the processor. The data block is then stored from the one of the buffer regions of the cache to the memory.12-29-2011
20120102271CACHE MEMORY SYSTEM AND CACHE MEMORY CONTROL METHOD - The number of ways of address arrays (04-26-2012
20120102270Methods and Apparatuses for Idle-Prioritized Memory Ranks - Embodiments of an apparatus to reduce memory power consumption are presented. In one embodiment, the apparatus comprises a cache memory, a memory, and a control unit. In one embodiment, the memory includes a plurality of memory ranks. The control unit is operable to select one or more memory ranks among the plurality of memory ranks to be idle-prioritized memory ranks such that access frequency to the idle-prioritized memory ranks is reduced.04-26-2012
20130013865DEDUPLICATION OF VIRTUAL MACHINE FILES IN A VIRTUALIZED DESKTOP ENVIRONMENT - Techniques for deduplication of virtual machine files in a virtualized desktop environment are described, including receiving data into a page cache, the data being received from a virtual machine and indicating a write operation, and deduplicating the data in the page cache prior to committing the data to storage, the data being deduplicated in-band and in substantially real-time.01-10-2013
20120151149Method and Apparatus for Caching Prefetched Data - A method is provided for performing caching in a processing system including at least one data cache. The method includes the steps of: determining whether each of at least a subset of cache entries stored in the data cache comprises data that has been loaded using fetch ahead (FA); associating an identifier with each cache entry in the subset of cache entries, the identifier indicating whether the cache entry comprises data that has been loaded using FA; and implementing a cache replacement policy for controlling replacement of at least a given cache entry in the data cache with a new cache entry as a function of the identifier associated with the given cache entry.06-14-2012
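One plausible use of the FA identifier, sketched below, is to prefer evicting prefetched-but-never-referenced entries before demand-loaded ones; the abstract only says replacement is a function of the identifier, so that policy direction is an assumption:

    def pick_victim(entries):
        # Entries loaded by fetch-ahead (FA) that were never actually
        # referenced are assumed to be wasted prefetches: evict them first.
        unused_fa = [e for e in entries if e["fa"] and not e["referenced"]]
        pool = unused_fa if unused_fa else entries
        return min(pool, key=lambda e: e["last_used"])  # LRU fallback (assumed)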
20110161597Combined Memory Including a Logical Partition in a Storage Memory Accessed Through an IO Controller - A computer system having a combined memory. A first logical partition of the combined memory is a main memory region in a storage memory. A second logical partition of the combined memory is a direct memory region in a main memory. A memory controller comprising a storage controller is configured to receive a memory access request including a real address from a processor and determine whether the real address is for the first logical partition or for the second logical partition. If the address is for the first logical partition, the storage controller communicates with an IO controller in the storage memory to service the memory access request. If the address is for the direct memory region, the memory controller services the memory access request in a conventional manner.06-30-2011
20130024622EVENT-DRIVEN REGENERATION OF PAGES FOR WEB-BASED APPLICATIONS - Systems and methods for invalidating and regenerating pages. In one embodiment, a method can include detecting content changes in a content database including various objects. The method can include causing an invalidation generator to generate an invalidation based on the modification and communicating the invalidation to a dependency manager. A cache manager can be notified that pages in a cache might be invalidated based on the modification via a page invalidation notice. In one embodiment, a method can include receiving a page invalidation notice and sending a page regeneration request to a page generator. The method can include regenerating the cached page. The method can include forwarding the regenerated page to the cache manager replacing the cached page with the regenerated page. In one embodiment, a method can include invalidating a cached page based on a content modification and regenerating pages which might depend on the modified content.01-24-2013
20130173862METHOD FOR CLEANING CACHE OF PROCESSOR AND ASSOCIATED PROCESSOR - A method for cleaning a cache of a processor includes: generating a specific command according to a request, wherein the specific command includes an operation command, a first field and a second field; obtaining an offset and a starting address according to the first field and the second field; selecting a specific segment from the cache according to the starting address and the offset; and cleaning data stored in the specific segment.07-04-2013
20080222362Method and Apparatus for Execution of a Process - Techniques are provided for enabling execution of a process employing a cache. Method steps can include obtaining a first probability of accessing a given artifact in a state S09-11-2008
20130145099Method, System and Server of Removing a Distributed Caching Object - The present disclosure discloses a method, a system and a server of removing a distributed caching object. In one embodiment, the method receives a removal request, where the removal request includes an identifier of an object. The method may further apply consistent hashing to the identifier of the object to obtain a hash result value of the identifier, locate a corresponding cache server based on the hash result value, and designate the corresponding cache server as the present cache server. In some embodiments, the method determines whether the present cache server is in an active status and has an active period greater than an expiration period associated with the object. Additionally, in response to determining that the present cache server is in an active status and has an active period greater than the expiration period associated with the object, the method removes the object from the present cache server. By comparing an active period of a located cache server with an expiration period associated with an object, the exemplary embodiments precisely locate the cache server that includes the object to be removed and perform a removal operation, thus saving the other cache servers from wasting resources performing removal operations and hence improving the overall performance of the distributed cache system.06-06-2013
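Locating the "present cache server" via consistent hashing and then gating removal on its active period can be sketched as follows; the ring construction with virtual nodes, MD5 as the hash, and the server-record layout are all illustrative assumptions:

    import hashlib
    from bisect import bisect

    def _h(value):
        # Hash a string to a large integer (MD5 is an arbitrary choice here).
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    class ConsistentRing:
        def __init__(self, servers, vnodes=100):
            # Place each server at several virtual points on the ring.
            self.ring = sorted((_h(f"{s}#{i}"), s)
                               for s in servers for i in range(vnodes))
            self.keys = [k for k, _ in self.ring]

        def locate(self, object_id):
            # First ring point at or after the object's hash, wrapping around.
            i = bisect(self.keys, _h(object_id)) % len(self.ring)
            return self.ring[i][1]

    def remove_object(ring, servers, object_id, expiration_period):
        server = ring.locate(object_id)          # the 'present cache server'
        info = servers[server]
        if info["active"] and info["active_period"] > expiration_period:
            info["store"].pop(object_id, None)   # remove only where the object can live
            return True
        return False                             # inactive or too-young server: skip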
20120254547MANAGING METADATA FOR DATA IN A COPY RELATIONSHIP - Provided are a computer program product, system, and method for managing metadata for data in a copy relationship copied from a source storage to a target storage. Information is maintained on a copy relationship of source data in the source storage and target data in the target storage. The source data is copied from the source storage to the cache to copy to target data in the target storage indicated in the copy relationship. Target metadata is generated for the target data comprising the source data copied to the cache. An access request to requested target data comprising the target data in the cache is processed and access is provided to the requested target data in the cache. A determination is made as to whether the requested target data in the cache has been destaged to the target storage. The target metadata for the requested target data in the target storage is discarded in response to determining that the requested target data in the cache has not been destaged to the target storage.10-04-2012
20080215818STRUCTURE FOR SILENT INVALID STATE TRANSITION HANDLING IN AN SMP ENVIRONMENT - A design structure embodied in a machine readable storage medium for designing, manufacturing, and/or testing a design can be provided. The design structure includes a symmetric multiprocessing (SMP) system. The system includes a plurality of nodes. Each of the nodes includes a node controller and a plurality of processors cross-coupled to one another. The system also includes at least one cache directory coupled to each node controller, and, invalid state transition logic coupled to each node controller. The invalid state transition logic includes program code enabled to identify an invalid state transition for a cache line in a local node, to evict a corresponding cache directory entry for the cache line, and to forward an invalid state transition notification to a node controller for a home node for the cache line in order for the home node to evict a corresponding cache directory entry for the cache line.09-04-2008
20130145100MANAGING METADATA FOR DATA IN A COPY RELATIONSHIP - Provided is a method for managing metadata for data in a copy relationship copied from a source storage to a target storage. Information is maintained on a copy relationship of source data in the source storage and target data in the target storage. The source data is copied from the source storage to the cache to copy to target data in the target storage indicated in the copy relationship. Target metadata is generated for the target data comprising the source data copied to the cache. An access request to requested target data comprising the target data in the cache is processed and access is provided to the requested target data in the cache. The target metadata for the requested target data in the target storage is discarded in response to determining that the requested target data in the cache has not been destaged to the target storage.06-06-2013
20100281222CACHE SYSTEM AND CONTROLLING METHOD THEREOF - A cache system and a method for controlling the cache system are provided. The cache system includes a plurality of caches, a buffer module, and a migration selector. Each of the caches is accessed by a corresponding processor. Each of the caches includes a plurality of cache sets and each of the cache sets includes a plurality of cache lines. The buffer module is coupled to the caches for receiving and storing data evicted due to a conflict miss from a source cache line of a source cache set of a source cache among the caches. The migration selector is coupled to the caches and the buffer module. The migration selector selects, from all the cache sets, a destination cache set of a destination cache among the caches according to a predetermined condition and causes the evicted data to be sent from the buffer module to the destination cache set.11-04-2010
20130151785DIRECTORY REPLACEMENT METHOD AND DEVICE - The present invention provides a directory replacement method and device. An HA receives a data access request including a first address from a first CA; if a designated storage where a directory is located is entirely occupied by the directory, and a first directory entry corresponding to the first address is not in the directory, the HA selects a second directory entry from the directory, deletes it, and adds the first directory entry into the directory. Before the HA replaces the directory entry in the directory, regardless of the share status (for example, I status, S status, or A status) of the cache line corresponding to the address in the directory entry to be replaced, the HA does not need to request a corresponding CA to perform an invalidating operation on the data, but directly replaces the directory entry in the directory, thereby improving replacement efficiency.06-13-2013
20130151784DYNAMIC PRIORITIZATION OF CACHE ACCESS - Some embodiments of the inventive subject matter are directed to determining that a memory access request results in a cache miss and determining an amount of cache resources used to service cache misses within a past period in response to determining that the memory access request results in the cache miss. Some embodiments are further directed to determining that servicing the memory access request would increase the amount of cache resources used to service cache misses within the past period to exceed a threshold. In some embodiments, the threshold corresponds to reservation of a given amount of cache resources for potential cache hits. Some embodiments are further directed to rejecting the memory access request in response to the determining that servicing the memory access request would increase the amount of cache resources used to service cache misses within the past period to exceed the threshold.06-13-2013
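The past-period accounting can be sketched as a sliding-window budget: track the cost of recently serviced misses and reject a new miss when servicing it would push the window total past the threshold reserved away from potential hits. The window length, cost units, and names below are assumptions:

    from collections import deque
    import time

    class MissBudget:
        def __init__(self, window_s=1.0, budget=100):
            self.window_s = window_s   # length of the 'past period'
            self.budget = budget       # miss resources before hits are crowded out
            self.events = deque()      # (timestamp, cost) of serviced misses

        def try_service_miss(self, cost=1):
            now = time.monotonic()
            # Drop miss records that fall outside the sliding window.
            while self.events and now - self.events[0][0] > self.window_s:
                self.events.popleft()
            used = sum(c for _, c in self.events)
            if used + cost > self.budget:
                return False           # reject: reserve resources for potential hits
            self.events.append((now, cost))
            return True                # service the miss and record its cost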
20130138891ALLOCATION ENFORCEMENT IN A MULTI-TENANT CACHE MECHANISM - Systems and methods for cache optimization are provided. The method comprises monitoring cache access rates for a plurality of cache tenants sharing the same cache mechanism having an amount of data storage space, wherein a first cache tenant having a first cache size is allocated a first cache space within the data storage space, and wherein a second cache tenant having a second cache size is allocated a second cache space within the data storage space. The method further comprises determining cache profiles for at least the first cache tenant and the second cache tenant according to data collected during the monitoring; analyzing the cache profiles for the plurality of cache tenants to determine an expected cache usage model for the cache mechanism; and analyzing the cache usage model and factors related to cache efficiency or performance for the one or more cache tenants to dictate one or more occupancy constraints.05-30-2013
20120260043FABRICATING KEY FIELDS - Exemplary methods, computer systems, and computer program products for fabricating key fields by a processor device in a computer environment are provided. In one embodiment, the computer environment is configured for, as an alternative to reading Count-Key-Data (CKD) data in order to change the key field, providing a hint to fabricate a new key field, thereby overwriting a previous key field and updating the CKD data.10-11-2012
20130159630SELECTIVE CACHE FOR INTER-OPERATIONS IN A PROCESSOR-BASED ENVIRONMENT - The present invention provides embodiments of methods and apparatuses for selective caching of data for inter-operations in a heterogeneous computing environment. One embodiment of a method includes allocating a portion of a first cache for caching for two or more processing elements and defining a replacement policy for the allocated portion of the first cache. The replacement policy restricts access to the first cache to operations associated with more than one of the processing elements.06-20-2013
20130191598DEVICE, SYSTEM AND METHOD OF ACCESSING DATA STORED IN A MEMORY - Device, system and method of accessing data stored in a memory. For example, a device may include a memory to store a plurality of data items to be accessed by a processor; a cache manager to manage a cache within the memory, the cache including a plurality of pointer entries, wherein each pointer entry includes an identifier of a respective data item and a pointer to an address of the data item; and a search module to receive from the cache manager an identifier of a requested data item, search the plurality of pointer entries for the identifier of the requested data item and, if a pointer entry is detected to include an identifier of a respective data item that matches the identifier of the requested data item, provide the cache manager with the pointer from the detected entry. Other embodiments are described and claimed.07-25-2013
20130198460INFORMATION PROCESSING DEVICE, MEMORY MANAGEMENT METHOD, AND COMPUTER-READABLE RECORDING MEDIUM - An information processing device includes a memory and a processor coupled to the memory, wherein the processor executes a process comprising: selecting, from the memory, data included in the same file as deletion-target data when deleting cached data from the memory; and deleting the deletion-target data and the selected data from the memory.08-01-2013
20130198461MANAGING TRACK DISCARD REQUESTS TO INCLUDE IN DISCARD TRACK MESSAGES - Provided is a method for managing track discard requests. A backup copy of a track in a cache is maintained in a cache backup device. A track discard request is generated to discard tracks in the cache backup device removed from the cache. Track discard requests are queued in a discard track queue. If a predetermined number of track discard requests are queued in the discard track queue while processing in a discard multi-track mode, one discard multiple tracks message is sent to the cache backup device indicating the tracks indicated in the queued predetermined number of track discard requests, to instruct the cache backup device to discard the tracks indicated in the discard multiple tracks message. If a predetermined number of periods of inactivity occur while processing in the discard multi-track mode, processing of the track discard requests is switched to a discard single-track mode.08-01-2013
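The multi-track batching amounts to queueing discard requests and emitting one message for the whole batch once a count threshold is reached, with an inactivity path that drains the queue. A sketch; the thresholds and the flush-on-idle behavior standing in for the mode switch are assumptions:

    class DiscardBatcher:
        def __init__(self, send, batch_size=8, max_idle=4):
            self.send = send              # callback taking a list of tracks
            self.batch_size = batch_size  # 'predetermined number' of queued requests
            self.max_idle = max_idle      # inactivity periods before leaving batch mode
            self.queue = []
            self.idle_periods = 0

        def discard(self, track):
            self.queue.append(track)
            self.idle_periods = 0
            if len(self.queue) >= self.batch_size:
                self._flush()             # one discard-multiple-tracks message

        def on_idle_period(self):
            self.idle_periods += 1
            if self.idle_periods >= self.max_idle:
                self._flush()             # stand-in for switching to single-track mode

        def _flush(self):
            if self.queue:
                self.send(self.queue)     # the message names every queued track
                self.queue = []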
20130205093MANAGEMENT OF POINT-IN-TIME COPY RELATIONSHIP FOR EXTENT SPACE EFFICIENT VOLUMES - A storage controller receives a request to establish a point-in-time copy operation by placing a space efficient source volume in a point-in-time copy relationship with a space efficient target volume, wherein subsequent to being established the point-in-time copy operation is configurable to consistently copy the space efficient source volume to the space efficient target volume at a point in time. A determination is made as to whether any track of an extent is staging into a cache from the space efficient target volume or destaging from the cache to the space efficient target volume. In response to a determination that at least one track of the extent is staging into the cache from the space efficient target volume or destaging from the cache to the space efficient target volume, release of the extent from the space efficient target volume is avoided.08-08-2013
