
Associative

Subclass of:

711 - Electrical computers and digital processing systems: memory

711100000 - STORAGE ACCESSING AND CONTROL

711117000 - Hierarchical memories

711118000 - Caching

Patent class list (only non-empty classes are listed)

Deeper subclasses:

Class / Patent application number | Description | Number of patent applications / Date published
711128000 | Associative | 89
20090144503METHOD AND SYSTEM FOR INTEGRATING SRAM AND DRAM ARCHITECTURE IN SET ASSOCIATIVE CACHE - A method of integrating a hybrid architecture in a set associative cache having a first type of memory structure for one or more ways in each congruence class, and a second type of memory structure for the remaining ways of the congruence class, includes determining whether a memory access request results in a cache hit or a cache miss; in the event of a cache miss, determining whether LRU way of the first type memory structure is also the LRU way of the entire congruence class, and if not, then copying the contents of the LRU way of the first type memory structure into the LRU way of the entire congruence class, and filling the LRU way of the first type memory structure with a new cache line in the event of a cache miss; and updating LRU bits, depending upon the results of the memory access request.06-04-2009
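The two-tier miss handling described in this abstract can be illustrated with a short simulation. The sketch below is illustrative only: it assumes two first-type (SRAM-like) ways out of eight and models the LRU bits as an access-ordered list; the names and sizes are not from the patent.

```python
# Minimal sketch of the hybrid-cache miss handling described above.
# Assumption: ways 0..FAST_WAYS-1 are the first (SRAM-like) memory type,
# the remaining ways the second (DRAM-like) type. LRU is modelled as an
# access-ordered list of way indices (front = least recently used).

FAST_WAYS = 2
TOTAL_WAYS = 8

class CongruenceClass:
    def __init__(self):
        self.tags = [None] * TOTAL_WAYS          # tag stored in each way
        self.lru_order = list(range(TOTAL_WAYS)) # front = LRU way

    def _touch(self, way):
        self.lru_order.remove(way)
        self.lru_order.append(way)               # most recently used at back

    def access(self, tag):
        if tag in self.tags:                     # cache hit
            self._touch(self.tags.index(tag))
            return "hit"
        # Cache miss: LRU way among the fast (first-type) ways, and the
        # LRU way of the entire congruence class.
        fast_lru = next(w for w in self.lru_order if w < FAST_WAYS)
        class_lru = self.lru_order[0]
        if fast_lru != class_lru:
            # Demote: copy the fast LRU line into the class-wide LRU way.
            self.tags[class_lru] = self.tags[fast_lru]
            self._touch(class_lru)
        # Fill the fast LRU way with the new line; update LRU state.
        self.tags[fast_lru] = tag
        self._touch(fast_lru)
        return "miss"

cc = CongruenceClass()
for t in ["a", "b", "c", "a", "d"]:
    print(t, cc.access(t))
```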
20090193193TRANSLATION TABLE COHERENCY MECHANISM USING CACHE WAY AND SET INDEX WRITE BUFFERS - Systems and/or methods are presented that provide for recording transactions that occur during a write process in an organized, self-aggregated manner for the purpose of recovering the transactions in the event of a power loss. By implementing an organization that reflects the cache architecture that is organized according to the cache way and set index of each transaction, the amount of time and effort required to recover the modified data from a sudden loss of power event is minimized. In this regard, the cache way and set index cache architecture provides for a post-power loss search operation that is limited to identifying duplicate locations within the cache-line and keeping only the most recent modification. By providing for such pre-organization in terms of self-aggregation by cache way and set index recording, the overall search process is greatly reduced and flexibility can be implemented in cache-line eviction processing in the event that the cache is determined to be full.07-30-2009
20120246410CACHE MEMORY AND CACHE SYSTEM - A cache memory has one or a plurality of ways having a plurality of cache lines including a tag memory which stores a tag address, a first dirty bit memory which stores a first dirty bit, a valid bit memory which stores a valid bit, and a data memory which stores data. The cache memory has a line index memory which stores a line index for identifying the cache line. The cache memory has a DBLB management unit having a plurality of lines including a row memory which stores first bit data identifying the way and second bit data identifying the line index, a second dirty bit memory which stores a second dirty bit of bit unit corresponding to writing of a predetermined unit into the data memory, and a FIFO memory which stores FIFO information prescribing a registered order. Data in a cache line of a corresponding way is written back on the basis of the second dirty bit.09-27-2012
20130036270Data processing apparatus and method for powering down a cache - A data processing apparatus is provided comprising a processing device, and an N-way set associative cache for access by the processing device, each way comprising a plurality of cache lines for temporarily storing data for a subset of memory addresses of a memory device, and a plurality of dirty fields, each dirty field being associated with a way portion and being set when the data stored in that way portion is dirty data. Dirty way indication circuitry is configured to generate an indication of the degree of dirty data stored in each way. Further, staged way power down circuitry is responsive to at least one predetermined condition, to power down at least a subset of the ways of the N-way set associative cache in a plurality of stages, the staged way power down circuitry being configured to reference the dirty way indication circuitry in order to seek to power down ways with less dirty data before ways with more dirty data. This approach provides a particularly quick and power efficient technique for powering down the cache in a plurality of stages.02-07-2013
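The dirtiness-ordered power-down can be sketched as a simple sort over per-way dirty counts. The function below is a minimal illustration; the per-way counts and the ways-per-stage parameter are assumptions, not the patent's mechanism.

```python
# Sketch of the staged power-down order described above: ways holding
# less dirty data are powered down (after cleaning) before ways holding
# more. The per-way dirty counts stand in for the dirty fields
# associated with each way portion; all names here are illustrative.

def staged_power_down(dirty_portions_per_way, ways_per_stage=1):
    """Yield ways in power-down order, least dirty first."""
    order = sorted(range(len(dirty_portions_per_way)),
                   key=lambda w: dirty_portions_per_way[w])
    for i in range(0, len(order), ways_per_stage):
        # A real controller would write back the dirty portions of each
        # way in the stage before removing its power.
        yield order[i:i + ways_per_stage]

dirty = [5, 0, 2, 7]   # number of dirty way-portions in ways 0..3
for stage in staged_power_down(dirty, ways_per_stage=2):
    print("power down ways:", stage)   # [1, 2] first, then [0, 3]
```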
20090157968Cache Memory with Extended Set-associativity of Partner Sets - A cache memory including a plurality of sets of cache lines, and providing an implementation for increasing the associativity of selected sets of cache lines including the combination of providing a group of parameters for determining the worthiness of a cache line stored in a basic set of cache lines, providing a partner set of cache lines, in the cache memory, associated with the basic set, applying the group of parameters to determine the worthiness level of a cache line in the basic set and responsive to a determination of a worthiness in excess of a predetermined level, for a cache line, storing said worthiness level cache line in said partner set.06-18-2009
20090125684Timer Device For Monitoring Expiration Of Products - A timer device adapted to monitor time periods associated with foods and other substances which may have a finite time during which they are suitable for use, comprising a timer unit (…)05-14-2009
20090119458OPPORTUNISTIC BLOCK TRANSMISSION WITH TIME CONSTRAINTS - A technique for determining a data window size allows a set of predicted blocks to be transmitted along with requested blocks. A stream enabled application executing in a virtual execution environment may use the blocks when needed.05-07-2009
20080313406METHODS AND SYSTEMS FOR PORTING SYSPROF - Embodiments of the present invention provide a system profiler that can be used on any processor architecture. In particular, instead of copying an entire stack every time, the stack is divided into blocks of a fixed size. For each block, a hash value is computed. As stack blocks are sent out of the kernel, the hash value and a copy of the block contents is kept in a user space cache. In the kernel, the hash codes of sent stack blocks are tracked in a table. During system profiling, the kernel module sampling the call stack determines if that stack block was previously sent by checking for the hash value in the kernel table. If the hash matches an entry in the kernel table, then only the hash value is sent. If the hash value is not in the table, the entire block and the hash value is sent.12-18-2008
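The block-hashing idea lends itself to a compact sketch: cut the stack into fixed-size blocks, hash each one, and transmit contents only for hashes not seen before. The block size and the SHA-1 hash below are illustrative choices, not taken from the patent.

```python
# Sketch of the profiler's stack-block deduplication: a block's contents
# are sent only the first time its hash appears; afterwards the hash
# alone suffices, since the user-space cache keeps the contents.

import hashlib

BLOCK_SIZE = 64
sent_hashes = set()        # kernel-side table of already-sent blocks

def sample_stack(stack_bytes):
    out = []
    for i in range(0, len(stack_bytes), BLOCK_SIZE):
        block = stack_bytes[i:i + BLOCK_SIZE]
        h = hashlib.sha1(block).digest()
        if h in sent_hashes:
            out.append(("hash-only", h))       # block already cached
        else:
            sent_hashes.add(h)
            out.append(("block", h, block))    # send contents plus hash
    return out

stack = bytes(range(128))
print(sum(m[0] == "block" for m in sample_stack(stack)))  # 2: all new
print(sum(m[0] == "block" for m in sample_stack(stack)))  # 0: all cached
```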
20090006756CACHE MEMORY HAVING CONFIGURABLE ASSOCIATIVITY - A processor cache memory subsystem includes a cache memory having a configurable associativity. The cache memory may operate in a fully associative addressing mode and a direct addressing mode with reduced associativity. The cache memory includes a data storage array including a plurality of independently accessible sub-blocks for storing blocks of data. For example each of the sub-blocks implements an n-way set associative cache. The cache memory subsystem also includes a cache controller that may programmably select a number of ways of associativity of the cache memory. When programmed to operate in the fully associative addressing mode, the cache controller may disable independent access to each of the independently accessible sub-blocks and enable concurrent tag lookup of all independently accessible sub-blocks, and when programmed to operate in the direct addressing mode, the cache controller may enable independent access to one or more subsets of the independently accessible sub-blocks.01-01-2009
20100211744METHODS AND APPARATUS FOR LOW INTRUSION SNOOP INVALIDATION - Efficient techniques are described for tracking a potential invalidation of a data cache entry in a data cache for which coherency is required. Coherency information is received that indicates a potential invalidation of a data cache entry. The coherency information in association with the data cache entry is retained to track the potential invalidation to the data cache entry. The retained coherency information is kept separate from state bits that are utilized in cache access operations. An invalidate bit, associated with a data cache entry, may be utilized to represent a potential invalidation of the data cache entry. The invalidate bit is set in response to the coherency information, to track the potential invalidation of the data cache entry. A valid bit associated with the data cache entry is set in response to the active invalidate bit and a memory synchronization command. The set invalidate bit is cleared after the valid bit has been cleared.08-19-2010
20100077149Method and Apparatus for Managing Cache Reliability - A method of configuring a cache includes identifying a plurality of cache configurations of a configurable cache for a processor-executable application unit. Each configuration has an associated error rate. A selected configuration is selected based at least in part on the associated error rate. The configurable cache is configured in accordance with the selected configuration for execution of the application-unit.03-25-2010
20100077148Method and Apparatus for Configuring a Unified Cache - A method of configuring a unified cache includes identifying unified cache way assignment combinations for an application unit. Each combination has an associated error rate. A combination is selected based at least in part on the associated error rate. The unified cache is configured in accordance with the selected combination for execution of the application-unit.03-25-2010
20100011166Data Cache Virtual Hint Way Prediction, and Applications Thereof - A virtual hint based data cache way prediction scheme, and applications thereof. In an embodiment, a processor retrieves data from a data cache based on a virtual hint value or an alias way prediction value and forwards the data to dependent instructions before a physical address for the data is available. After the physical address is available, the physical address is compared to a physical address tag value for the forwarded data to verify that the forwarded data is the correct data. If the forwarded data is the correct data, a hit signal is generated. If the forwarded data is not the correct data, a miss signal is generated. Any instructions that operate on incorrect data are invalidated and/or replayed.01-14-2010
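A minimal model of way prediction with late verification follows: data from the predicted way is forwarded at once, and a hit or miss signal is produced only when the physical tag arrives. The class and field names are invented for illustration and do not reflect the patent's actual design.

```python
# Sketch of hint-based way prediction with late physical verification:
# the predicted way's data is forwarded speculatively to dependents,
# then confirmed (hit) or flagged for invalidation/replay (miss) once
# the physical address tag is available.

class PredictedSet:
    def __init__(self, n_ways):
        self.phys_tags = [None] * n_ways
        self.data = [None] * n_ways

    def read(self, predicted_way, phys_tag_when_ready):
        forwarded = self.data[predicted_way]   # forwarded speculatively
        # ... dependent instructions may already consume `forwarded` ...
        if self.phys_tags[predicted_way] == phys_tag_when_ready:
            return forwarded, "hit"            # speculation was correct
        return forwarded, "miss"               # invalidate/replay consumers

s = PredictedSet(4)
s.phys_tags[1], s.data[1] = 0xABC, "payload"
print(s.read(1, 0xABC))   # ('payload', 'hit')
print(s.read(1, 0xDEF))   # ('payload', 'miss') -> replay needed
```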
20090006757HIERARCHICAL CACHE TAG ARCHITECTURE - An apparatus, system, and method are disclosed. In one embodiment, the apparatus includes a cache memory coupled to a processor. The apparatus additionally includes a tag storage structure that is coupled to the cache memory. The tag storage structure can store a tag associated with a location in the cache memory. The apparatus additionally includes a cache of cache tags coupled to the processor. The cache of cache tags can store a smaller subset of the tags stored in the tag storage structure.01-01-2009
20110296112Reducing Energy Consumption of Set Associative Caches by Reducing Checked Ways of the Set Association - Mechanisms for accessing a set associative cache of a data processing system are provided. A set of cache lines, in the set associative cache, associated with an address of a request are identified. Based on a determined mode of operation for the set, the following may be performed: determining if a cache hit occurs in a preferred cache line without accessing other cache lines in the set of cache lines; retrieving data from the preferred cache line without accessing the other cache lines in the set of cache lines, if it is determined that there is a cache hit in the preferred cache line; and accessing each of the other cache lines in the set of cache lines to determine if there is a cache hit in any of these other cache lines only in response to there being a cache miss in the preferred cache line(s).12-01-2011
20090292880CACHE MEMORY SYSTEM - A cache memory system controlled by an arbiter includes a memory unit having a cache memory whose capacity is changeable, and an invalidation processing unit that requests invalidation of data stored at a position where invalidation is performed when the capacity of the cache memory is changed in accordance with a change instruction. The invalidation processing unit includes an increasing/reducing processing unit that sets an index to be invalidated in accordance with a capacity before change and a capacity after change and requests the arbiter to invalidate the set index, and an index converter that selects either an index based on the capacity before change or an index based on the capacity after change associated with an access address from the arbiter, and the capacity of the cache memory can be changed while maintaining the number of ways of the cache memory.11-26-2009
20080235454Method and Apparatus for Repairing a Processor Core During Run Time in a Multi-Processor Data Processing System - A data processing system includes multiple processors each having multiple processor cores. A core checkstop from a particular processor core indicates that a memory array associated with the particular core exhibits an error. In response to the core checkstop, the system migrates the workload of the particular processor core to another processor core. The system also removes the particular processor core from the current configuration of the system. In response to the core checkstop and error, the system initializes the particular processor core if the error is in a processor memory array associated with the particular core. The system then attempts correction of the error with array built-in self test (ABIST) circuitry. If the ABIST succeeds in correcting the error, the initialization of the particular processor core completes and the system returns the particular processor core to the current processor configuration. However, if the ABIST does not succeed in correcting the error, then the system removes the portion of the processor memory array including the error from future use.09-25-2008
20100100684SET ASSOCIATIVE CACHE APPARATUS, SET ASSOCIATIVE CACHE METHOD AND PROCESSOR SYSTEM - A set associative cache memory includes a tag memory configured to store tags which are predetermined high-order bits of an address, a tag comparator configured to compare a tag in a request address (RA) with the tag stored in the tag memory and a data memory configured to incorporate way information obtained through a comparison by the tag comparator in part of a column address.04-22-2010
20090144504STRUCTURE FOR IMPLEMENTING REFRESHLESS SINGLE TRANSISTOR CELL eDRAM FOR HIGH PERFORMANCE MEMORY APPLICATIONS - A design structure embodied in a machine readable medium used in a design process includes a cache structure having a cache tag array associated with an eDRAM data cache comprising a plurality of cache lines, the cache tag array having an address tag, a valid bit and an access bit corresponding to each of the plurality of cache lines; and each access bit configured to indicate whether the corresponding cache line has been accessed as a result of a read or a write operation during a defined assessment period, which is smaller than the retention time of data in the eDRAM data cache; wherein, for any of the cache lines not accessed as a result of a read or a write operation during the defined assessment period, the individual valid bit associated therewith is set to a logic state that indicates the data in the associated cache line is invalid.06-04-2009
20080209128METHOD AND APPARATUS FOR DETECTING A CACHE WRAP CONDITION - A method and apparatus for detecting a cache wrap condition in a computing environment having a processor and a cache. A cache wrap condition is detected when the entire contents of a cache have been replaced, relative to a particular starting state. A set-associative cache is considered to have wrapped when all of the sets within the cache have been replaced. The starting point for cache wrap detection is the state of the cache sets at the time of the previous cache wrap. The method and apparatus is preferably implemented in a snoop filter having filter mechanisms that rely upon detecting the cache wrap condition. These snoop filter mechanisms requiring this information are operatively coupled with cache wrap detection logic adapted to detect the cache wrap event, and perform an indication step to the snoop filter mechanisms. In the various embodiments, cache wrap detection logic is implemented using registers and comparators, loadable counters, or a scoreboard data structure.08-28-2008
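A scoreboard variant of the detection logic can be sketched directly: set a bit per cache set on replacement, and signal a wrap when every bit is set, taking the wrap point as the new starting state. The per-set granularity and sizes here are assumptions for illustration.

```python
# Sketch of scoreboard-based cache wrap detection: starting from the
# state at the previous wrap, a bit is set whenever a set is replaced;
# when every bit is set, the entire cache contents have been replaced
# and a wrap is signalled (e.g. to a snoop filter).

class WrapDetector:
    def __init__(self, n_sets):
        self.n_sets = n_sets
        self.replaced = [False] * n_sets   # scoreboard since last wrap

    def on_replacement(self, set_index):
        self.replaced[set_index] = True
        if all(self.replaced):
            self.replaced = [False] * self.n_sets  # new starting state
            return True                            # cache wrap detected
        return False

wd = WrapDetector(4)
events = [0, 1, 1, 2, 3]          # replacements by set index
print([wd.on_replacement(s) for s in events])
# [False, False, False, False, True] -> wrap on the last replacement
```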
20090300288Write Combining Cache with Pipelined Synchronization - Systems and methods for pipelined synchronization in a write-combining cache are described herein. An embodiment to transmit data to a memory to enable pipelined synchronization of a cache includes obtaining a plurality of synchronization events for transactions with said memory, calculating one or more matches between said events and said data stored in one or more cache-lines of said cache, storing event time stamps of events associated with said matches, generating one or more priority values based on said event time stamps, concurrently transmitting said data to said memory based on said priority values.12-03-2009
20080244182Memory content inverting to minimize NTBI effects - In general, in one aspect, the disclosure describes an apparatus that includes a memory device having a plurality of memory cells. An inverter is used to invert data and tag information destined for the memory device. A register is used to capture the inverted data and tag information. A write inverted value logic is used to determine when to enable writing the inverted data and tag information from the register to the memory device. When inverted data and tag information is written to a memory cell the memory cell is invalidated.10-02-2008
20080282035Methods and apparatus for structure layout optimization for multi-threaded programs - A computer-implemented method for performing structure layout optimization of a data structure in a multi-threaded environment is provided. The method includes determining a set of code concurrency values. The method also includes calculating a set of cycle gain values. The method further includes employing the set of cycle gain values and the set of code concurrency values to create a field layout graph, which is configured to illustrate relationship between a set of data fields of the data structure. The method yet also includes employing a cluster algorithm to the field layout graph to create a set of clusters. Each cluster of the set of clusters is employed to generate a cache line.11-13-2008
20080235455CACHE ARCHITECTURE FOR A PROCESSING UNIT PROVIDING REDUCED POWER CONSUMPTION IN CACHE OPERATION - A cache memory processing system is disclosed that is coupled to a main memory and a processing unit. The cache memory processing system includes an input, a low order bit data path, a high order bit data path and an output. The input is for receiving input data that includes at least one low order input bit and at least one high order input bit. The low order bit data path is for processing the at least one low order input bit and providing at least one low order output bit. The high order bit data path for processing the at least one high order input bit and providing at least one high order output bit. The high order bit data path includes at least one exclusive or gate. The output is for providing the at least one low order output bit and the at least one high order output bit.09-25-2008
20120144121Demand Based Partitioning of Microprocessor Caches - Associativity of a multi-core processor cache memory to a logical partition is managed and controlled by receiving a plurality of unique logical processing partition identifiers into registration of a multi-core processor, each identifier being associated with a logical processing partition on one or more cores of the multi-core processor; responsive to a shared cache memory miss, identifying a position in a cache directory for data associated with the address, the shared cache memory being multi-way set associative; associating a new cache line entry with the data and one of the registered unique logical processing partition identifiers; modifying the cache directory to reflect the association; and caching the data at the new cache line entry, wherein the shared cache memory is effectively shared on a line-by-line basis among the plurality of logical processing partitions of the multi-core processor.06-07-2012
20080270703Method and system for managing memory transactions for memory repair - In one embodiment, a controller for an associative memory having n ways contains circuitry for sending a request to search an indexed location in each of the n ways for a tag, wherein the tag and an index that is used to denote the indexed location form a memory address. The controller also contains circuitry, responsive to the request, for sending a set of n validity values, each validity value indicating, for a respective way, whether the indexed location is a valid location or a defective location. Additionally, the controller contains circuitry for receiving a hit signal that indicates whether a match to the tag was found at any of the indexed locations, wherein no hit is ever received for a defective location.10-30-2008
20100146212Accessing a cache memory with reduced power consumption - In one embodiment, a cache memory includes a data array having N ways and M sets and at least one fill buffer coupled to the data array, where the data array is segmented into multiple array portions such that only one of the portions is to be accessed to seek data for a memory request if the memory request is predicted to hit in the data array. Other embodiments are described and claimed.06-10-2010
20090031082Accessing a Cache in a Data Processing Apparatus - A data processing apparatus is provided having processing logic for performing a sequence of operations, and a cache having a plurality of segments for storing data values for access by the processing logic. The processing logic is arranged, when access to a data value is required, to issue an access request specifying an address in memory associated with that data value, and the cache is responsive to the address to perform a lookup procedure during which it is determined whether the data value is stored in the cache. Indication logic is provided which, in response to an address portion of the address, provides for each of at least a subset of the segments an indication as to whether the data value is stored in that segment. The indication logic has guardian storage for storing guarding data, and hash logic for performing a hash operation on the address portion in order to reference the guarding data to determine each indication. Each indication indicates whether the data value is either definitely not stored in the associated segment or is potentially stored within the associated segment, and the cache is then operable to use the indications produced by the indication logic to affect the lookup procedure performed in respect of any segment whose associated indication indicates that the data value is definitely not stored in that segment. This technique has been found to provide a particularly power efficient mechanism for accessing the cache.01-29-2009
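The guarding-data scheme is, in essence, a per-segment Bloom-style filter: it can say "definitely not here" but only ever "possibly here" otherwise, so lookups can skip segments guaranteed to miss. The sketch below assumes 256 guard bits per segment and Python's built-in hash, both illustrative choices.

```python
# Sketch of the per-segment indication logic: a hash of the address
# portion indexes the guarding data; a clear bit means the data value
# is definitely not in that segment, a set bit means it may be.

GUARD_BITS = 256

class SegmentGuard:
    def __init__(self):
        self.bits = 0                      # guarding data for one segment

    def note_fill(self, addr_portion):
        self.bits |= 1 << (hash(addr_portion) % GUARD_BITS)

    def maybe_present(self, addr_portion):
        return bool(self.bits & (1 << (hash(addr_portion) % GUARD_BITS)))

guards = [SegmentGuard() for _ in range(4)]   # one guard per cache segment
guards[2].note_fill(0x1000)
# Only segments whose guard answers "possibly present" need a real lookup.
print([g.maybe_present(0x1000) for g in guards])  # [False, False, True, False]
```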
20090063774High Performance Pseudo Dynamic 36 Bit Compare - A cache memory high performance pseudo dynamic address compare path divides the address into two or more address segments. Each segment is separately compared in a comparator comprised of static logic elements. The output of each of these static comparators is then combined in a dynamic logic circuit to generate a dynamic late select output.03-05-2009
20090144505Memory Device - The present invention relates to a memory device, in particular, to a memory device comprising a cache memory with a predetermined amount of cache sets, each cache set comprising a predetermined amount of cache lines. Each cache line is operable to indicate a cache data injection into the particular cache line triggered by a bus-actor.06-04-2009
20090198900Microprocessor Having a Power-Saving Instruction Cache Way Predictor and Instruction Replacement Scheme - Microprocessor having a power-saving instruction cache way predictor and instruction replacement scheme. In one embodiment, the processor includes a multi-way set associative cache, a way predictor, a policy counter, and a cache refill circuit. The policy counter provides a signal to the way predictor that determines whether the way predictor operates in a first mode or a second mode. Following a cache miss, the cache refill circuit selects a way of the cache and compares a layer number associated with a dataram field of the way to a way set layer number. The cache refill circuit writes a block of data to the field if the layer number is not equal to the way set layer number. If the layer number is equal to the way set layer number, the cache refill circuit repeats the above steps for additional ways until the block of memory is written to the cache.08-06-2009
20090198899SYSTEM AND METHOD FOR TRANSACTIONAL CACHE - A computer-implemented method and system to support a transactional caching service comprises configuring a transactional cache that is associated with one or more transactions and one or more work spaces; maintaining an internal mapping between the one or more transactions and the one or more work spaces in a transactional decorator; getting a transaction with one or more operations; using the internal mapping in the transactional decorator to find a work space for the transaction; and applying the one or more operations of the transaction to the workspace associated with the transaction.08-06-2009
20090100229Method for Increasing Cache Directory Associativity Classes in a System With a Register Space Memory - In a method of managing a cache directory in a memory system, an original system address is presented to the cache directory when corresponding associativity data is allocated to an associativity class in the cache directory. The original system address is normalized by removing address space corresponding to a memory hole, thereby generating a normalized address. The normalized address is stored in the cache directory. The normalized address is de-normalized, thereby generating a de-normalized address, when the associativity data is cast out of the cache directory to make room for new associativity data. The de-normalized address is sent to the memory system for coherency management.04-16-2009
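Normalization around a memory hole reduces to subtracting the hole size from addresses above it, and de-normalization to adding it back on castout. The hole boundaries in this sketch are made-up values.

```python
# Sketch of the address normalization described above: addresses above a
# memory hole are shifted down before being stored in the directory, and
# shifted back up when the associativity data is cast out.

HOLE_START = 0x00A0_0000
HOLE_SIZE  = 0x0060_0000   # address space with no real memory behind it

def normalize(system_addr):
    assert not (HOLE_START <= system_addr < HOLE_START + HOLE_SIZE)
    if system_addr >= HOLE_START + HOLE_SIZE:
        return system_addr - HOLE_SIZE    # squeeze the hole out
    return system_addr

def denormalize(norm_addr):
    if norm_addr >= HOLE_START:
        return norm_addr + HOLE_SIZE      # re-insert the hole on castout
    return norm_addr

addr = 0x0123_4567
assert denormalize(normalize(addr)) == addr
print(hex(normalize(addr)))  # 0xc34567
```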
20090077319Arithmetic Processing Device and Electronic Appliance Using Arithmetic Processing Device - A CPU incorporating a cache memory is provided, in which a high processing speed and low power consumption are realized at the same time. A CPU incorporating an associative cache memory including a plurality of sets is provided, which includes a means for observing a cache memory area which does not contribute to improving processing performance of the CPU in accordance with an operating condition, and changing such a cache memory area to a resting state dynamically. By employing such a structure, a high-performance and low-power consumption CPU can be provided.03-19-2009
20100250856METHOD FOR WAY ALLOCATION AND WAY LOCKING IN A CACHE - A system and method for data allocation in a shared cache memory of a computing system are contemplated. Each cache way of a shared set-associative cache is accessible to multiple sources, such as one or more processor cores, a graphics processing unit (GPU), an input/output (I/O) device, or multiple different software threads. A shared cache controller enables or disables access separately to each of the cache ways based upon the corresponding source of a received memory request. One or more configuration and status registers (CSRs) store encoded values used to alter accessibility to each of the shared cache ways. The control of the accessibility of the shared cache ways via altering stored values in the CSRs may be used to create a pseudo-RAM structure within the shared cache and to progressively reduce the size of the shared cache during a power-down sequence while the shared cache continues operation.09-30-2010
20090327611DOMAIN-BASED CACHE MANAGEMENT, INCLUDING DOMAIN EVENT BASED PRIORITY DEMOTION - Domain-based cache management methods and systems, including domain event based priority demotion (“EPD”). In EPD, priorities of cached data blocks are demoted upon one or more domain events, such as upon encoding of one or more macroblocks of a video frame. New data blocks may be written over lowest priority cached data blocks. New data blocks may initially be assigned a highest priority. Alternatively, or additionally, one or more new data blocks may initially be assigned one of a plurality of higher priorities based on domain-based information, such as a relative position of a requested data block within a video frame, and/or a relative direction associated with a requested data block. Domain-based cache management may be implemented with one or more other cache management techniques, such as least recently used techniques. Domain-based cache management may be implemented in associative caches, including set associative caches and fully associative caches, and may be implemented with indirect indexing.12-31-2009
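EPD can be modelled in a few lines: every domain event demotes all cached blocks by one priority level, new blocks enter at the highest priority, and the lowest-priority block is overwritten first. The capacity and priority depth below are illustrative assumptions.

```python
# Sketch of event-based priority demotion (EPD): priorities of cached
# blocks are demoted upon a domain event (e.g. a macroblock being
# encoded); new blocks are written over the lowest-priority block.

MAX_PRIO = 3

class EPDCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = {}                   # key -> priority

    def on_domain_event(self):
        for k in self.blocks:              # demote everything one level
            self.blocks[k] = max(0, self.blocks[k] - 1)

    def insert(self, key, prio=MAX_PRIO):
        if key not in self.blocks and len(self.blocks) >= self.capacity:
            victim = min(self.blocks, key=self.blocks.get)  # lowest priority
            del self.blocks[victim]
        self.blocks[key] = prio

c = EPDCache(capacity=2)
c.insert("mb0"); c.insert("mb1")
c.on_domain_event()                        # both demoted to priority 2
c.insert("mb2")                            # evicts one priority-2 block
print(c.blocks)                            # {'mb1': 2, 'mb2': 3}
```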
20110113198Selective searching in shared cache - The present invention discloses a method comprising: sending request for data to a memory controller; arranging the request for data by order of importance or priority; identifying a source of the request for data; if the source is an input/output device, masking off P ways in a cache; and allocating ways in filling the cache. The method further includes extending cache allocation logic to control a tag comparison operation by using a bit to provide a hint from IO devices that certain ways will not have requested data.05-12-2011
20110238918CACHE WRITE INTEGRITY LOGGING - An apparatus, as well as systems, methods, and articles can operate to record the address of write operations to a memory cached by a non-volatile cache prior to executing an operating system cache driver. In an embodiment, a non-volatile cache may be implemented by creating a device option read only memory (ROM), or modifying the associated computer basic input-output system (BIOS) to trap software interrupts associated with disk and other media access requests. Associated addresses, such as logical block addresses, can be stored in a log for data that is modified. The resulting log can be stored in a non-volatile medium, including the cache itself. If the available log space is not large enough to record all write activity prior to loading operating system drivers, a flag may be set to indicate the overrun condition.09-29-2011
20120198171Cache Pre-Allocation of Ways for Pipelined Allocate Requests - This invention is a data processing system with a data cache. The cache controller responds to a cache miss requiring allocation by pre-allocating a way in the set to an allocation request according to said least recently used indication of said ways and then update the least recently used indication of remaining ways of the set. This permits read allocate requests to the same set to proceed without introducing processing stalls due to way contention. This also allows multiple outstanding allocate requests to the same set and way combination. The cache also compares the address of a newly received allocation request to stall this allocation request if the address matches an address of any pending allocation request.08-02-2012
20100281219MANAGING CACHE LINE ALLOCATIONS FOR MULTIPLE ISSUE PROCESSORS - An apparatus having a cache configured as N-way associative and a controller circuit is disclosed. The controller circuit may be configured to (i) detect one of a cache hit and a cache miss in response to each of a plurality of access requests to the cache, (ii) detect a collision among the access requests, (iii) queue at least two first requests of the access requests that establish a speculative collision, the speculative collision occurring where the first requests access a given congruence class in the cache and (iv) delay a line allocation to the cache caused by a cache miss of a given one of the first requests while the given congruence class has at least N outstanding line fills in progress.11-04-2010
20110010503CACHE MEMORY - A cache memory for operating in accordance with a multi-way set associative system, the cache memory includes an identification information storage for storing an identification information for identifying a requesting element of a memory access request corresponding to a cache block specified by a received memory access request, a replacement cache block candidate determinator for determining, upon an occurrence of a cache miss corresponding to the memory access request, a candidate of the cache block for replacing, on the basis of the identification information attached to the memory access request and the identification information stored in the identification information storage corresponding to the cache block specified by the memory access request, and a replacement cache block selector for selecting a replacement cache block from the candidate.01-13-2011
20110010502Cache Implementing Multiple Replacement Policies - In an embodiment, a cache stores tags for cache blocks stored in the cache. Each tag may include an indication identifying which of two or more replacement policies supported by the cache is in use for the corresponding cache block, and a replacement record indicating the status of the corresponding cache block in the replacement policy. Requests may include a replacement attribute that identifies the desired replacement policy for the cache block accessed by the request. If the request is a miss in the cache, a cache block storage location may be allocated to store the corresponding cache block. The tag associated with the cache block storage location may be updated to include the indication of the desired replacement policy, and the cache may manage the block in accordance with the policy. For example, in an embodiment, the cache may support both an LRR and an LRU policy.01-13-2011
20110040940Dynamic cache sharing based on power state - The present invention discloses a method comprising: sending cache request; monitoring power state; comparing said power state; allocating cache resources; filling cache; updating said power state; repeating said sending, said monitoring, said comparing, said allocating, said filling, and said updating until workload is completed.02-17-2011
20110055485EFFICIENT PSEUDO-LRU FOR COLLIDING ACCESSES - An apparatus for allocating entries in a set associative cache memory includes an array that provides a first pseudo-least-recently-used (PLRU) vector in response to a first allocation request from a first functional unit. The first PLRU vector specifies a first entry from a set of the cache memory specified by the first allocation request. The first vector is a tree of bits comprising a plurality of levels. Toggling logic receives the first vector and toggles predetermined bits thereof to generate a second PLRU vector in response to a second allocation request from a second functional unit generated concurrently with the first allocation request and specifying the same set of the cache memory specified by the first allocation request. The second vector specifies a second entry different from the first entry from the same set. The predetermined bits comprise bits of a predetermined one of the levels of the tree.03-03-2011
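The toggling trick can be shown on an 8-way PLRU tree: the first allocation follows the tree bits, and the concurrent second allocation follows the same bits with one level flipped, which necessarily lands on a different way. Flipping the leaf level here is an illustrative choice of "predetermined level".

```python
# Sketch of dual allocation from one PLRU read: 8 ways -> 7 tree bits
# (index 0 = root, 1..2 = middle level, 3..6 = leaf level); each bit
# points toward the pseudo-least-recently-used side of its subtree.

N_WAYS = 8

def plru_victim(bits):
    node = 0
    for _ in range(3):                     # log2(8) tree levels
        node = 2 * node + 1 + bits[node]   # follow bit toward LRU side
    return node - (N_WAYS - 1)             # leaf index -> way number

def toggled(bits, level_start, level_len):
    out = list(bits)
    for i in range(level_start, level_start + level_len):
        out[i] ^= 1                        # flip the predetermined level
    return out

bits = [0, 1, 0, 1, 0, 1, 0]
first = plru_victim(bits)
second = plru_victim(toggled(bits, 3, 4))  # leaf level toggled
print(first, second)                       # two different ways, same set
assert first != second                     # flipped leaf -> sibling way
```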
20100100685EFFECTIVE ADDRESS CACHE MEMORY, PROCESSOR AND EFFECTIVE ADDRESS CACHING METHOD - An effective address cache memory includes a TLB effective page memory configured to retain entry data including an effective page tag of predetermined high-order bits of an effective address of a process, and output a hit signal when the effective page tag matches the effective page tag from a processor; a data memory configured to retain cache data with the effective page tag or a page offset as a cache index; and a cache state memory configured to retain a cache state of the cache data stored in the data memory, in a manner corresponding to the cache index.04-22-2010
20110078382Adaptive Linesize in a Cache - A mechanism is provided in a cache for emulating larger linesize in a substrate with smaller linesize using gang fetching and gang replacement. Gang fetching fetches multiple lines on a cache miss to ensure that all smaller lines that make up the larger line are resident in cache at the same time. Gang replacement evicts all smaller lines in cache that would have been evicted had the cache linesize been larger. The mechanism provides adaptive linesize using set dueling by dynamically selecting between multiple linesizes depending on which linesize performs the best at runtime. Set dueling dedicates a portion of sets of the cache to always use smaller linesize and dedicates one or more portions of the sets of cache to always emulate larger linesizes. One or more counters keep track of which linesize has the best performance. The cache uses that linesize for the remainder of the sets.03-31-2011
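Set dueling as described can be sketched with a single saturating counter: the dedicated sets vote via their misses, and follower sets adopt whichever linesize policy is currently winning. The set-selection rule and counter width below are assumptions.

```python
# Sketch of set dueling between two linesize policies: sets 0 mod 16
# always use small lines, sets 1 mod 16 always emulate large lines, and
# all remaining "follower" sets use whichever policy misses less, as
# tracked by the saturating PSEL counter.

SMALL, LARGE = "small-lines", "emulate-large-lines"
psel = 512                                  # 10-bit saturating counter

def set_policy(set_index):
    if set_index % 16 == 0:  return SMALL   # dedicated to small linesize
    if set_index % 16 == 1:  return LARGE   # dedicated to large linesize
    return SMALL if psel < 512 else LARGE   # followers use the winner

def on_miss(set_index):
    global psel
    if set_index % 16 == 0:                 # small-linesize set missed
        psel = min(1023, psel + 1)          # vote for the large policy
    elif set_index % 16 == 1:               # large-linesize set missed
        psel = max(0, psel - 1)             # vote for the small policy

for s in [0, 0, 0]:                         # small-line sets keep missing
    on_miss(s)
print(set_policy(5))                        # followers emulate large lines
```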
20090216953METHODS AND SYSTEMS FOR DYNAMIC CACHE PARTITIONING FOR DISTRIBUTED APPLICATIONS OPERATING ON MULTIPROCESSOR ARCHITECTURES - Software, systems and methods are described which provide cache management capabilities. The number of cache sets to be used in each partition of the cache memory space is based on a number of cache pages in each partition and an associativity level associated with the set associative cache. The cache sets can be numbered based on the partition number, a total number of partitions and a cache page index. Cache management according to these exemplary embodiments reduces problems associated with cache thrashing in multiprocessor environments sharing common data structures in set associative caches.08-27-2009
20100070714Network On Chip With Caching Restrictions For Pages Of Computer Memory - A network on chip (‘NOC’) that includes integrated processor (‘IP’) blocks, routers, memory communications controllers, and network interface controllers, each IP block adapted to a router through a memory communications controller and a network interface controller, a multiplicity of computer processors, each computer processor implementing a plurality of hardware threads of execution; and computer memory, the computer memory organized in pages and operatively coupled to one or more of the computer processors, the computer memory including a set associative cache, the cache comprising cache ways organized in sets, the cache being shared among the hardware threads of execution, each page of computer memory restricted for caching by one replacement vector of a class of replacement vectors to particular ways of the cache, each page of memory further restricted for caching by one or more bits of a replacement vector classification to particular sets of ways of the cache.03-18-2010
20110072216Memory control device and memory control method - Address information of target data is stored in an ELA register at the start of a cache excluding process performed by BackEviction, and a request processing unit continuously re-executes a data acquiring process while an address of data requested to be acquired by a processor is present in the ELA register. The address information of the target data is stored in an EWB buffer at the start of autonomous move-out performed by a processor, and the cache excluding process performed by BackEviction is stopped when the address of data subjected to BackEviction is present in the EWB buffer.03-24-2011
20130159629METHOD AND SYSTEM FOR HASH KEY MEMORY FOOTPRINT REDUCTION - A system and method are disclosed for storing data in a hash table. The method includes receiving data, determining a location identifier for the data wherein the location identifier identifies a location in the hash table for storing the data and the location identifier is derived from the data, compressing the data by extracting the location identifier; and storing the compressed data in the identified location of the hash table.06-20-2013
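The footprint reduction follows from the index being derived from the key itself: the indexed bits need not be stored, and the full key is rebuilt from the entry's index plus the stored remainder. The 10-bit index below is an illustrative width.

```python
# Sketch of hash-key compression: the table index (location identifier)
# is extracted from the low key bits, only the remaining bits are
# stored, and the original key is reconstructed losslessly on lookup.

INDEX_BITS = 10

def split_key(key):
    index = key & ((1 << INDEX_BITS) - 1)   # location identifier
    rest = key >> INDEX_BITS                # compressed key actually stored
    return index, rest

def rebuild_key(index, rest):
    return (rest << INDEX_BITS) | index

table = {}
key = 0xDEAD_BEEF
idx, rest = split_key(key)
table[idx] = rest                           # store only the compressed part
assert rebuild_key(idx, table[idx]) == key
print(f"stored {rest:#x} at index {idx:#x}, key recovered intact")
```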
20100037025METHOD AND APPARATUS FOR DETECTING A DATA ACCESS VIOLATION - Machine-readable media, methods, apparatus and system for detecting a data access violation are described. In some embodiments, current memory access information related to a current memory access to a memory address by a current user thread may be obtained. It may be determined whether a cache includes a cache entry associated with the memory address. If the cache includes the cache entry associated with the memory address, then, an access history stored in the cache entry and the current memory access information may be analyzed to detect if there is at least one of an actual violation and a potential violation of accessing the memory address.02-11-2010
20120203971Network On Chip With Caching Restrictions For Pages Of Computer Memory - A network on chip (‘NOC’) that includes integrated processor (‘IP’) blocks, routers, memory communications controllers, and network interface controllers, each IP block adapted to a router through a memory communications controller and a network interface controller, a multiplicity of computer processors, each computer processor implementing a plurality of hardware threads of execution; and computer memory, the computer memory organized in pages and operatively coupled to one or more of the computer processors, the computer memory including a set associative cache, the cache comprising cache ways organized in sets, the cache being shared among the hardware threads of execution, each page of computer memory restricted for caching by one replacement vector of a class of replacement vectors to particular ways of the cache, each page of memory further restricted for caching by one or more bits of a replacement vector classification to particular sets of ways of the cache.08-09-2012
20120203970SOFTWARE AND HARDWARE MANAGED DUAL RULE BANK CACHE FOR USE IN A PATTERN MATCHING ACCELERATOR - A pattern matching accelerator (PMA) for assisting software threads to find the presence and location of strings in an input data stream that match a given pattern. The patterns are defined using regular expressions that are compiled into a data structure comprised of rules subsequently processed by the PMA. The patterns to be searched in the input stream are defined by the user as a set of regular expressions. The patterns to be searched are grouped in pattern context sets. The sets of regular expressions which define the pattern context sets are compiled to generate a rules structure used by the PMA hardware. The rules are compiled before search run time and stored in main memory, in rule cache memory within the PMA or a combination thereof. For each input character, the PMA executes the search and returns the search results.08-09-2012
20110055486RESISTIVE MEMORY DEVICES AND RELATED METHODS OF OPERATION - A method of processing data in a resistive memory device comprises performing a write operation to store data into a resistive memory of the resistive memory device and to store program information of the data into a cache memory. The method further comprises performing a first read operation to read the program information from the cache memory during a program-to-active time, and a second read operation to read the data from the resistive memory after the program-to-active time if the program information is not read from the cache memory during the program-to-active time.03-03-2011
20100180083Cache Memory Having Enhanced Performance and Security Features - A cache memory having enhanced performance and security feature is provided. The cache memory includes a data array storing a plurality of data elements, a tag array storing a plurality of tags corresponding to the plurality of data elements, and an address decoder which permits dynamic memory-to-cache mapping to provide enhanced security of the data elements, as well as enhanced performance. The address decoder receives a context identifier and a plurality of index bits of an address passed to the cache memory, and determines whether a matching value in a line number register exists. The line number registers allow for dynamic memory-to-cache mapping, and their contents can be modified as desired. Methods for accessing and replacing data in a cache memory are also provided, wherein a plurality of index bits and a plurality of tag bits at the cache memory are received. The plurality of index bits are processed to determine whether a matching index exists in the cache memory and the plurality of tag bits are processed to determine whether a matching tag exists in the cache memory, and a data line is retrieved from the cache memory if both a matching tag and a matching index exist in the cache memory. A random line in the cache memory can be replaced with a data line from a main memory, or evicted without replacement, based on the combination of index and tag misses, security contexts and protection bits. User-defined and/or vendor-defined replacement procedures can be utilized to replace data lines in the cache memory.07-15-2010
20100030970Adaptive Spill-Receive Mechanism for Lateral Caches - Improving cache performance in a data processing system is provided. A cache controller monitors a counter associated with a cache. The cache controller determines whether the counter indicates that a plurality of non-dedicated cache sets within the cache should operate as spill cache sets or receive cache sets. The cache controller sets the plurality of non-dedicated cache sets to spill an evicted cache line to an associated cache set in another cache in the event of a cache miss in response to an indication that the plurality of non-dedicated cache sets should operate as the spill cache sets. The cache controller sets the plurality of non-dedicated cache sets to receive an evicted cache line from another cache set in the event of the cache miss in response to an indication that the plurality of non-dedicated cache sets should operate as the receive cache sets.02-04-2010
20100023697Testing Real Page Number Bits in a Cache Directory - Testing real page number bits in a cache directory is provided. A specification of a cache to be tested is retrieved in order to test the real page number bits of the cache directory associated with the cache. A range within a real page number address of the cache directory is identified for performing page allocations using the specification of the cache. A random value x is generated that identifies a portion of the real page number bits to be tested. A first random value y is generated that identifies a first congruence class from a set of congruence classes within the portion of the cache to be tested. Responsive to the first congruence class failing to be allocated a predetermined number of times, one page size of memory for the first congruence class is allocated and a first allocation value is incremented by a value of 1.01-28-2010
20090172289CACHE MEMORY HAVING SECTOR FUNCTION - A cache memory having a sector function, operating in accordance with a set associative system, and performing a cache operation to replace data in a cache block in the cache way corresponding to a replacement cache way determined upon an occurrence of a cache miss comprises: storing sector ID information in association with each of the cache ways in the cache block specified by a memory access request; determining, upon the occurrence of the cache miss, replacement way candidates, in accordance with sector ID information attached to the memory access request and the stored sector ID information; selecting and outputting a replacement way from the replacement way candidates; and updating the stored sector ID information in association with each of the cache ways in the cache block specified by the memory access request, to the sector ID information attached to the memory access request.07-02-2009
20090172288Processor having a cache memory which is comprised of a plurality of large scale integration - To provide an easy way to constitute a processor from a plurality of LSIs, the processor includes: a first LSI containing a processor; a second LSI having a cache memory; and information transmission paths connecting the first LSI to a plurality of the second LSIs, in which the first LSI contains an address information issuing unit which broadcasts, to the second LSIs, via the information transmission paths, address information of data, the second LSI includes: a partial address information storing unit which stores a part of address information; a partial data storing unit which stores data that is associated with the address information; and a comparison unit which compares the address information broadcast with the address information stored in the partial address information storing unit to judge whether a cache hit occurs, and the comparison units of the plurality of the second LSIs are connected to the information transmission paths.07-02-2009
20090172287DATA BUS EFFICIENCY VIA CACHE LINE USURPATION - Embodiments of the current invention permit a user to allocate cache memory to main memory more efficiently. The processor or a user allocates the cache memory and associates the cache memory to the main memory location, but suppresses or bypasses reading the main memory data into the cache memory. Some embodiments of the present invention permit the user to specify how many cache lines are allocated at a given time. Further, embodiments of the present invention may initialize the cache memory to a specified pattern. The cache memory may be zeroed or set to some desired pattern, such as all ones. Alternatively, a user may determine the initialization pattern through the processor.07-02-2009
20120151146Demand Based Partitioning of Microprocessor Caches - Associativity of a multi-core processor cache memory to a logical partition is managed and controlled by receiving a plurality of unique logical processing partition identifiers into registration of a multi-core processor, each identifier being associated with a logical processing partition on one or more cores of the multi-core processor; responsive to a shared cache memory miss, identifying a position in a cache directory for data associated with the address, the shared cache memory being multi-way set associative; associating a new cache line entry with the data and one of the registered unique logical processing partition identifiers; modifying the cache directory to reflect the association; and caching the data at the new cache line entry, wherein said shared cache memory is effectively shared on a line-by-line basis among said plurality of logical processing partitions of said multi-core processor.06-14-2012
20110066810PERSISTENT CACHEABLE HIGH VOLUME MANUFACTURING (HVM) INITIALIZATION CODE - A persistent cacheable high volume manufacturing (HVM) initialization code is generally presented. In this regard, an apparatus is introduced comprising a processing unit, a unified cache, a unified cache controller, and a control register to selectively mask off access by the unified cache controller to portions of the unified cache. Other embodiments are also described and claimed.03-17-2011
20120317361STORAGE EFFICIENT SECTORED CACHE - Technologies are generally described for a system for copying particular data in a particular sector of a particular block from a memory into a cache. In some examples, the cache includes a tag array and a data array. In some examples, a processor may be adapted to copy data in the particular sector from the memory into a way of the data array starting at a start sector. In some examples, the processor may be adapted to update the tag array to identify the particular sector. In some examples, the processor may be adapted to update the tag array to identify the way in the data array. In some examples, the processor may be adapted to update the tag array to identify the start sector.12-13-2012
20120124293PREVENTING UNINTENDED LOSS OF TRANSACTIONAL DATA IN HARDWARE TRANSACTIONAL MEMORY SYSTEMS - A method and apparatus are disclosed for implementing early release of speculatively read data in a hardware transactional memory system. A processing core comprises a hardware transactional memory system configured to receive an early release indication for a specified word of a group of words in a read set of an active transaction. The early release indication comprises a request to remove the specified word from the read set. In response to the early release request, the processing core removes the group of words from the read set only after determining that no word in the group other than the specified word has been speculatively read during the active transaction.05-17-2012
20120131279MEMORY ELEMENTS FOR PERFORMING AN ALLOCATION OPERATION AND RELATED METHODS - Apparatus for memory elements and related methods for performing an allocate operation are provided. An exemplary memory element includes a plurality of way memory elements and a replacement module coupled to the plurality of way memory elements. Each way memory element is configured to selectively output data bits maintained at an input address. The replacement module is configured to enable output of the data bits maintained at the input address of a way memory element of the plurality of way memory elements for replacement in response to an allocate instruction including the input address.05-24-2012
20120179874SCALABLE CLOUD STORAGE ARCHITECTURE - A virtual storage module operable to run in a virtual machine monitor may include a wait-queue operable to store incoming block-level data requests from one or more virtual machines. In-memory metadata may store information associated with data stored in local persistent storage that is local to a host computer hosting the virtual machines. The data stored in local persistent storage replicates a subset of data in one or more virtual disks provided to the virtual machines. The virtual disks are mapped to remote storage accessible via a network connecting the virtual machines and the remote storage. A cache handling logic may be operable to handle the block-level data requests by obtaining the information in the in-memory metadata and making I/O requests to the local persistent storage or the remote storage or a combination of the local persistent storage and the remote storage to service the block-level data requests.07-12-2012
20100011165CACHE MANAGEMENT SYSTEMS AND METHODS - A multi mode cache system that uses a direct mapped cache scheme for some addresses and an associative cache scheme for other addresses.01-14-2010
20120084512FAST UNALIGNED CACHE ACCESS SYSTEM AND METHOD - A cache unit includes multiple memory towers, which can be independently addressed. Cache lines are divided among multiple towers. Furthermore, physical lines of the memory towers are shared by multiple cache lines. Because each tower can be addressed independently and the cache lines are split among the towers, unaligned cache access can be performed. Furthermore, power can be conserved because not all the memory towers of the cache unit need to be activated during some memory access operations.04-05-2012
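Independent tower addressing is what makes the unaligned case cheap: a read of T consecutive words touches each tower exactly once, possibly at two different row addresses. The striping rule below (word i in tower i mod T) is an assumed layout, not necessarily the patent's.

```python
# Sketch of unaligned access over independently addressed towers: word i
# of the array lives in tower i % T at row i // T, so any span of T
# consecutive words maps to one access per tower, each tower receiving
# its own row address.

T = 4                                       # number of memory towers
ROWS = 8
towers = [[f"w{r * T + t}" for r in range(ROWS)] for t in range(T)]

def unaligned_read(start_word):
    """Read T consecutive words, one per tower, independently addressed."""
    out = []
    for i in range(T):
        word = start_word + i
        tower, row = word % T, word // T    # each tower gets its own row
        out.append(towers[tower][row])
    return out

print(unaligned_read(0))   # aligned:   ['w0', 'w1', 'w2', 'w3']
print(unaligned_read(6))   # unaligned: ['w6', 'w7', 'w8', 'w9']
```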
20090019226METHODS AND SYSTEMS FOR PROVIDING A LEVEL OF ACCESS TO A COMPUTING DEVICE - A method for responding to read requests for a data block of a storage device, the storage device providing access to a hardened appliance and providing unrestricted access to a computing device, includes the step of executing a computing device in a requested one of a plurality of execution modes. A process intercepts a read request for a first data set stored in a data block of a storage device associated with the computing device. The read request is responded to with a second data set, the second data set stored in a cache and representing an unmodified version of the first data set presently stored in the data block of the storage device.01-15-2009
20080301372Memory access control apparatus and memory access control method - A memory access control apparatus includes an MIB for storing information on a plurality of requests and processing the requests in parallel. Upon receipt of a memory access request, the MIB selects a request for a data block to be processed corresponding to the same set of a data block to be processed in response to the memory access request, and outputs a WAY assigned to the selected request to a replace-WAY selecting unit. The replace-WAY selecting unit excludes the WAY output from the MIB, and selects a WAY to be assigned to the memory access request based on a predetermined algorithm.12-04-2008
20080301371Memory Cache Control Arrangement and a Method of Performing a Coherency Operation Therefor - A memory cache control arrangement for performing a coherency operation on a memory cache comprises a receive processor for receiving an address group indication for an address group comprising a plurality of addresses associated with a main memory. The address group indication may indicate a task identity and an address range corresponding to a memory block of the main memory. A control unit processes each line of a group of cache lines sequentially. Specifically, it determines whether each cache line is associated with an address of the address group by evaluating a match criterion. If the match criterion is met, a coherency operation is performed on the cache line. If a conflict exists between the coherency operation and another memory operation, the coherency means inhibits the coherency operation. The invention allows a reduced duration of a cache coherency operation; the duration is further independent of the size of the main memory address space covered by the coherency operation.12-04-2008
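A small illustrative C sketch of the sequential sweep the abstract describes: the cost scales with the number of cache lines processed, not with the size of the address range; the fields and match criterion shown are assumed:

    /* Walk a fixed group of cache lines and apply a coherency operation
     * to each line matching the address group (task id + address range). */
    #include <stdbool.h>
    #include <stdint.h>

    #define LINES 1024

    typedef struct { bool valid; uint8_t task; uint64_t addr; } cline_t;
    static cline_t lines[LINES];

    void coherency_sweep(uint8_t task, uint64_t lo, uint64_t hi,
                         void (*op)(cline_t *))
    {
        for (int i = 0; i < LINES; i++) {      /* cost ~ cache size */
            cline_t *l = &lines[i];
            if (l->valid && l->task == task && l->addr >= lo && l->addr < hi)
                op(l);                         /* e.g. clean or invalidate */
        }
    }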
20120239884MEMORY CONTROL DEVICE, MEMORY DEVICE, MEMORY CONTROL METHOD, AND PROGRAM - There is provided a memory control device including a device driver that executes writing or reading of data to/from a main storage unit and temporary writing or reading of data to/from a cache unit including a plurality of cache blocks, and a control unit that issues an instruction for writing or reading of data of a file system to/from the main storage unit or the cache unit to the device driver. The control unit may notify priority information about a priority for data storage into a logical block to which the cache block is associated to the device driver.09-20-2012
20120272007CACHE MEMORY WITH DYNAMIC LOCKSTEP SUPPORT - Cache storage may be partitioned in a manner that dedicates a first portion of the cache to lockstep mode execution, while providing a second (or remaining) portion for non-lockstep execution mode(s). For example, in embodiments that employ cache storage organized as a set associative cache, partitioning may be achieved by reserving a subset of the ways in the cache for use when operating in lockstep mode. Some or all of the remaining ways are available for use when operating in non-lockstep execution mode(s). In some embodiments, a subset of the cache sets, rather than cache ways, may be reserved in a like manner, though for concreteness, much of the description that follows emphasizes way-partitioned embodiments.10-25-2012
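As a hedged illustration, a C sketch of way-partitioned allocation in which a fixed way mask is reserved for lockstep mode; the 8-way geometry and the masks are assumptions, not values from the application:

    /* Allocation masks the candidate ways by the current execution mode. */
    #include <stdbool.h>
    #include <stdint.h>

    #define WAYS 8
    #define LOCKSTEP_MASK  0x03u   /* ways reserved for lockstep (assumed) */
    #define NORMAL_MASK    0xFCu   /* remaining ways                       */

    int pick_alloc_way(bool lockstep_mode, uint8_t valid_mask)
    {
        uint8_t allowed = lockstep_mode ? LOCKSTEP_MASK : NORMAL_MASK;
        for (int w = 0; w < WAYS; w++)      /* prefer an invalid allowed way */
            if ((allowed & (1u << w)) && !(valid_mask & (1u << w)))
                return w;
        for (int w = 0; w < WAYS; w++)      /* else evict any allowed way */
            if (allowed & (1u << w))
                return w;
        return -1;                          /* unreachable while allowed != 0 */
    }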
20110307664Cache device for coupling to a memory device and a method of operation of such a cache device - A cache device is provided for use in a data processing apparatus to store data values for access by an associated master device. Each data value has an associated memory location in a memory device, and the memory device is arranged as a plurality of blocks of memory locations, with each block having to be activated before any data value stored in that block can be accessed. The cache device comprises regular access detection circuitry for detecting occurrence of a sequence of accesses to data values whose associated memory locations follow a regular pattern. Upon detection of such a sequence of accesses by the regular access detection circuitry, an allocation policy employed by the cache to determine a selected cache line into which to store a data value is altered, with the aim of increasing the likelihood that when an evicted data value output by the cache is subsequently written to the memory device, the associated memory location resides within an already activated block of memory locations. Hence, detecting regular access patterns and altering the allocation policy accordingly enables reuse of already activated blocks within the memory device, significantly improving memory utilisation and thereby giving rise to both performance improvements and power consumption reductions.12-15-2011
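A purely illustrative C sketch of the altered allocation policy: once a regular pattern is detected, prefer a victim whose writeback lands in an already activated memory block; block_is_active is a stand-in stub for a memory-controller hint, not a disclosed interface:

    /* Bank-aware victim selection, enabled only under a regular pattern. */
    #include <stdbool.h>
    #include <stdint.h>

    #define WAYS 4

    static bool block_is_active(uint64_t addr)
    {
        return (addr >> 12) == 0x40;   /* stub: pretend one block is open */
    }

    int pick_victim(const uint64_t wb_addr[WAYS], bool regular_pattern,
                    int lru_way)
    {
        if (regular_pattern)
            for (int w = 0; w < WAYS; w++)
                if (block_is_active(wb_addr[w]))
                    return w;          /* eviction reuses the open block */
        return lru_way;                /* otherwise keep the normal policy */
    }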
20110161595CACHE MEMORY POWER REDUCTION TECHNIQUES - Methods and apparatus to provide for power consumption reduction in memories (such as cache memories) are described. In one embodiment, a virtual tag is used to determine whether to access a cache way. The virtual tag access and comparison may be performed earlier in the read pipeline than the actual tag access or comparison. In another embodiment, a speculative way hit may be used based on pre-ECC partial tag match to wake up a subset of data arrays. Other embodiments are also described.06-30-2011
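As an illustration of the pre-ECC partial-tag idea (an assumed sketch, not the disclosed circuit), compare a few low tag bits per way and wake only the matching ways' data arrays:

    /* Partial-tag compare gates which data arrays are powered up; the
     * full tag compare later resolves the real hit among woken ways. */
    #include <stdint.h>

    #define WAYS 4
    #define PARTIAL_BITS 6
    #define PARTIAL_MASK ((1u << PARTIAL_BITS) - 1)

    /* Returns a bitmask of ways whose data array should be woken. */
    uint8_t speculative_wake(const uint32_t tags[WAYS], uint32_t req_tag)
    {
        uint8_t wake = 0;
        for (int w = 0; w < WAYS; w++)
            if ((tags[w] & PARTIAL_MASK) == (req_tag & PARTIAL_MASK))
                wake |= 1u << w;   /* possible hit: wake this way only */
        return wake;               /* 0 means a guaranteed miss, no wakeups */
    }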
20130024620METHOD AND APPARATUS FOR ADAPTIVE CACHE FRAME LOCKING AND UNLOCKING - Most recently accessed frames are locked in a cache memory. The most recently accessed frames are likely to be accessed by a task again in the near future and may be locked at the beginning of a task switch or interrupt to improve cache performance. The list of most recently used frames is updated as a task executes and may be embodied as a list of frame addresses or a flag associated with each frame. The list of most recently used frames may be separately maintained for each task if multiple tasks may interrupt each other. An adaptive frame unlocking mechanism is also disclosed that automatically unlocks frames that may cause a significant performance degradation for a task. The adaptive frame unlocking mechanism monitors a number of times a task experiences a frame miss and unlocks a given frame if the number of frame misses exceeds a predefined threshold.01-24-2013
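A simplified, assumption-heavy C sketch of the adaptive mechanism; unlike the abstract, which can unlock individual frames, this variant unlocks all of a task's frames once its miss count crosses a threshold:

    /* Lock the MRU frames at a task switch; unlock adaptively on misses. */
    #include <stdbool.h>

    #define FRAMES 64
    #define MISS_THRESHOLD 32     /* assumed tuning value */

    static bool locked[FRAMES];
    static int  miss_count;

    void lock_mru_frames(const int mru[], int n)   /* called at task switch */
    {
        for (int i = 0; i < n; i++)
            locked[mru[i]] = true;
        miss_count = 0;
    }

    void on_frame_miss(void)
    {
        if (++miss_count >= MISS_THRESHOLD) {      /* locking now hurts */
            for (int f = 0; f < FRAMES; f++)
                locked[f] = false;                 /* unlock everything */
            miss_count = 0;
        }
    }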
20080222361PIPELINED TAG AND INFORMATION ARRAY ACCESS WITH SPECULATIVE RETRIEVAL OF TAG THAT CORRESPONDS TO INFORMATION ACCESS - A cache design is described in which corresponding accesses to tag and information arrays are phased in time, and in which tags are retrieved (typically speculatively) from a tag array without benefit of an effective address calculation subsequently used for a corresponding retrieval from an information array. In some exploitations, such a design may allow cycle times (and throughput) of a memory subsystem to more closely match demands of some processor and computation system architectures. In some cases, phased access can be described as pipelined tag and information array access, though strictly speaking, indexing into the information array need not depend on results of the tag array access. Our techniques seek to allow early (indeed speculative) retrieval from the tag array without delays that would otherwise be associated with calculation of an effective address eventually employed for a corresponding retrieval from the information array. Speculation can be resolved using the eventually calculated effective address or using separate functionality. In some embodiments, we use calculated effective addresses for way selection based on tags retrieved from the tag array.09-11-2008
20130097386CACHE MEMORY SYSTEM FOR TILE BASED RENDERING AND CACHING METHOD THEREOF - A cache memory system and a caching method for tile-based rendering may be provided. Each of the cache lines in the cache memory system may include delayed-replacement information. The delayed-replacement information may indicate whether texture data referred to at a position of an edge of a tile is included in a cache line. When a cache line corresponding to an access-requested address is absent in the cache memory system, the cache memory system may select and remove a victim cache line from an associative cache unit, based on the delayed-replacement information.04-18-2013
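An illustrative sketch, under assumed 4-way geometry, of victim selection that skips lines flagged for delayed replacement unless no other candidate exists; not the applicant's actual method:

    /* Victim selection honoring a per-line "delayed replacement" flag. */
    #include <stdbool.h>

    #define WAYS 4

    typedef struct { bool valid; bool delay_replace; } meta_t;

    int pick_victim(const meta_t set[WAYS])
    {
        for (int w = 0; w < WAYS; w++)        /* invalid line first */
            if (!set[w].valid)
                return w;
        for (int w = 0; w < WAYS; w++)        /* then non-delayed lines */
            if (!set[w].delay_replace)
                return w;
        return 0;                             /* all delayed: fall back */
    }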
20130097385DUAL-GRANULARITY STATE TRACKING FOR DIRECTORY-BASED CACHE COHERENCE - A system and method of providing directory cache coherence are disclosed. The system and method may include tracking the coherence state of at least one cache block contained within a region using a global directory, providing at least one item of region-level sharing information about the at least one cache block in the global directory, and providing at least one item of block-level sharing information about the at least one cache block in the global directory. The tracking of the provided region-level sharing information and the provided block-level sharing information may organize the coherence state of the at least one cache block and the region.04-18-2013
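As a hedged sketch of what a dual-granularity entry might look like (field names and sizes are assumptions), a region-level sharer vector can filter probes before consulting the per-block vectors:

    /* One coarse sharer vector for the region, one fine vector per block. */
    #include <stdint.h>

    #define BLOCKS_PER_REGION 16

    typedef struct {
        uint64_t region_addr;                       /* region base            */
        uint32_t region_sharers;                    /* coarse: any block, one
                                                       bit per sharer         */
        uint32_t block_sharers[BLOCKS_PER_REGION];  /* fine: per cache block  */
    } dir_entry_t;

    /* A block's sharer set is bounded above by the region-level set, so a
     * coherence probe can be filtered first at region granularity. */
    uint32_t probe_targets(const dir_entry_t *e, int block)
    {
        return e->region_sharers & e->block_sharers[block];
    }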
20130124802Class Dependent Clean and Dirty Policy - A method for cleaning dirty data in an intermediate cache is disclosed. A dirty data notification, including a memory address and a data class, is transmitted by a level 2 (L2) cache to frame buffer logic when dirty data is stored in the L2 cache. The data classes may include evict first, evict normal and evict last. In one embodiment, data belonging to the evict first data class is raster operations data with little reuse potential. The frame buffer logic uses a notification sorter to organize dirty data notifications, where an entry in the notification sorter stores the DRAM bank page number, a first count of cache lines that have resident dirty data and a second count of cache lines that have resident evict_first dirty data associated with that DRAM bank. The frame buffer logic transmits dirty data associated with an entry when the first count reaches a threshold.05-16-2013
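A minimal illustrative C sketch of a notification sorter keyed by DRAM bank page: counts accumulate per page and a flush is triggered at a threshold; the entry count and threshold are assumed tuning values:

    /* Aggregate dirty-data notifications by DRAM bank page. */
    #include <stdint.h>

    #define SORTER_ENTRIES 16
    #define FLUSH_THRESHOLD 8      /* assumed tuning value */

    typedef struct {
        uint32_t bank_page;        /* DRAM bank page number       */
        int dirty_lines;           /* resident dirty cache lines  */
        int evict_first_lines;     /* subset tagged "evict first" */
    } sorter_entry_t;

    static sorter_entry_t sorter[SORTER_ENTRIES];

    void note_dirty(uint32_t bank_page, int is_evict_first)
    {
        sorter_entry_t *e = 0;
        for (int i = 0; i < SORTER_ENTRIES; i++)      /* find matching entry */
            if (sorter[i].dirty_lines && sorter[i].bank_page == bank_page)
                { e = &sorter[i]; break; }
        if (!e)
            for (int i = 0; i < SORTER_ENTRIES; i++)  /* else take a free one */
                if (!sorter[i].dirty_lines)
                    { e = &sorter[i]; break; }
        if (!e)
            return;    /* sorter full: a real design would flush an entry */
        e->bank_page = bank_page;
        e->dirty_lines++;
        e->evict_first_lines += is_evict_first;
        if (e->dirty_lines >= FLUSH_THRESHOLD)        /* worth one row hit */
            e->dirty_lines = e->evict_first_lines = 0;  /* flush would go here */
    }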
20130238860Administering Registered Virtual Addresses In A Hybrid Computing Environment Including Maintaining A Watch List Of Currently Registered Virtual Addresses By An Operating System - Administering registered virtual addresses in a hybrid computing environment that includes a host computer and an accelerator, the accelerator architecture optimized, with respect to the host computer architecture, for speed of execution of a particular class of computing functions, the host computer and the accelerator adapted to one another for data communications by a system level message passing module, where administering registered virtual addresses includes maintaining, by an operating system, a watch list of ranges of currently registered virtual addresses; upon a change in physical to virtual address mappings of a particular range of virtual addresses falling within the ranges included in the watch list, notifying the system level message passing module by the operating system of the change; and updating, by the system level message passing module, a cache of ranges of currently registered virtual addresses to reflect the change in physical to virtual address mappings.09-12-2013
20130132677OPTIMIZING DATA CACHE WHEN APPLYING USER-BASED SECURITY - A secure caching system and caching method include receiving a user request for data, the request containing a security context, and searching a cache for the requested data based on the user request and the received security context. If the requested data is found in cache, returning the cached data in response to the user request. If the requested data is not found in cache, obtaining the requested data from a data source, storing the obtained data in the cache and associating the obtained data with the security context, and returning the requested data in response to the user request. The search for the requested data can include searching for a security list that has the security context as a key, the security list including an address in the cache of the requested data.05-23-2013
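As an illustration only, a tiny C sketch in which the cache key pairs the data key with the security context, so a lookup under a different context misses; the hash and table shape are assumptions:

    /* Security-context-aware cache lookup over a small hash table. */
    #include <string.h>

    #define SLOTS 256

    typedef struct {
        char ctx[32];              /* security context, e.g. a role id      */
        char key[32];              /* data key                              */
        const void *value;         /* cached payload (NULL = empty slot)    */
    } slot_t;

    static slot_t cache[SLOTS];

    static unsigned slot_for(const char *ctx, const char *key)
    {
        unsigned h = 5381;         /* djb2 over both components */
        for (const char *p = ctx; *p; p++) h = h * 33 + (unsigned char)*p;
        for (const char *p = key; *p; p++) h = h * 33 + (unsigned char)*p;
        return h % SLOTS;
    }

    const void *secure_get(const char *ctx, const char *key)
    {
        slot_t *s = &cache[slot_for(ctx, key)];
        if (s->value && !strcmp(s->ctx, ctx) && !strcmp(s->key, key))
            return s->value;       /* hit only under the same context */
        return 0;                  /* miss: caller fetches from the source */
    }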
20100287339DEMAND BASED PARTITIONING OR MICROPROCESSOR CACHES - Associativity of a multi-core processor cache memory to a logical partition is managed and controlled by receiving a plurality of unique logical processing partition identifiers into registration of a multi-core processor, each identifier being associated with a logical processing partition on one or more cores of the multi-core processor; responsive to a shared cache memory miss, identifying a position in a cache directory for data associated with the address, the shared cache memory being multi-way set associative; associating a new cache line entry with the data and one of the registered unique logical processing partition identifiers; modifying the cache directory to reflect the association; and caching the data at the new cache line entry, wherein said shared cache memory is effectively shared on a line-by-line basis among said plurality of logical processing partitions of said multi-core processor.11-11-2010
20130151781Cache Implementing Multiple Replacement Policies - In an embodiment, a cache stores tags for cache blocks stored in the cache. Each tag may include an indication identifying which of two or more replacement policies supported by the cache is in use for the corresponding cache block, and a replacement record indicating the status of the corresponding cache block in the replacement policy. Requests may include a replacement attribute that identifies the desired replacement policy for the cache block accessed by the request. If the request is a miss in the cache, a cache block storage location may be allocated to store the corresponding cache block. The tag associated with the cache block storage location may be updated to include the indication of the desired replacement policy, and the cache may manage the block in accordance with the policy. For example, in an embodiment, the cache may support both an LRR and an LRU policy.06-13-2013
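A hedged C sketch of per-block policy tags: every block ages on each access to its set, an LRU block resets its record on a hit while an LRR block does not, and the oldest record is evicted; the encoding is assumed:

    /* Each tag carries a policy indication plus a replacement record. */
    #define WAYS 8
    enum { POLICY_LRU, POLICY_LRR };

    typedef struct {
        unsigned char policy;   /* replacement policy governing this block    */
        unsigned char record;   /* age since last use (LRU) or fill (LRR)     */
    } tag_meta_t;

    void touch_set(tag_meta_t set[WAYS], int hit_way)
    {
        for (int w = 0; w < WAYS; w++)
            if (set[w].record < 255)
                set[w].record++;                /* everyone ages */
        if (hit_way >= 0 && set[hit_way].policy == POLICY_LRU)
            set[hit_way].record = 0;            /* LRU: refresh on use */
        /* LRR blocks keep aging from their fill time even when reused */
    }

    int pick_victim(const tag_meta_t set[WAYS])
    {
        int victim = 0;
        for (int w = 1; w < WAYS; w++)
            if (set[w].record > set[victim].record)
                victim = w;                     /* oldest record loses */
        return victim;
    }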
20120284462METHOD AND APPARATUS FOR SAVING POWER BY EFFICIENTLY DISABLING WAYS FOR A SET-ASSOCIATIVE CACHE - A method and apparatus for disabling ways of a cache memory in response to history-based usage patterns is herein described. Way-predicting logic keeps track of cache accesses to the ways and determines whether accesses to some ways are to be disabled to save power, based upon way power signals having a logical state representing a predicted miss to the way. One or more counters associated with the ways count accesses, wherein a power signal is set to the logical state representing a predicted miss when one of the one or more counters reaches a saturation value. Control logic adjusts the one or more counters associated with the ways according to the accesses.11-08-2012
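An illustrative sketch, with an assumed counter width, of the saturating-counter mechanism: a way whose counter saturates is predicted to miss and its power signal disables it until a hit resets the count (a real design would also re-enable periodically):

    /* Saturating per-way counters drive the way power signals. */
    #include <stdbool.h>
    #include <stdint.h>

    #define WAYS 4
    #define SATURATE 15            /* 4-bit counter, assumed width */

    static uint8_t miss_ctr[WAYS];

    bool way_enabled(int w)        /* power signal for way w */
    {
        return miss_ctr[w] < SATURATE;
    }

    void on_lookup(int hit_way)    /* -1 if the access missed every way */
    {
        for (int w = 0; w < WAYS; w++) {
            if (w == hit_way)
                miss_ctr[w] = 0;               /* hit: re-enable fully */
            else if (miss_ctr[w] < SATURATE)
                miss_ctr[w]++;                 /* count toward disable */
        }
    }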
20120030430CACHE CONTROL APPARATUS, AND CACHE CONTROL METHOD - A cache control apparatus according to the present invention includes a cache allocation control unit which allocates each of a plurality of ways included in a cache memory to one or more of tasks to be executed by a plurality of processors. In the case where a group of ways includes an unallocated way that is not allocated to any of the tasks and a way allocated to one or more of the tasks which is to be executed by one of the processors, the cache allocation control unit allocates the unallocated way included in the group to the one or more of the tasks to be executed by the one of the processors.02-02-2012
20130262772ON-DEMAND ALLOCATION OF CACHE MEMORY FOR USE AS A PRESET BUFFER - A data processing system comprises data processing circuitry, a cache memory, and memory access circuitry. The memory access circuitry is operative to assign a memory address region to be allocated in the cache memory with a predefined initialization value. Subsequently, a portion of the cache memory is allocated to the assigned memory address region only after the data processing circuitry first attempts to perform a memory access on a memory address within the assigned memory address region. The allocated portion of the cache memory is then initialized with the predefined initialization value.10-03-2013
20110138127AUTOMATIC DETECTION OF STRESS CONDITION - Stress of a computerized system may be detected based on actions performed by the computerized system with respect to a storage system, such as one comprising secondary storage. The stress detection may be performed automatically based on an age of data blocks accessed in the storage system. The detection may take into account the number of accesses to the storage system by the computerized system. Stress detection may be performed automatically based on a decision procedure.06-09-2011
20130275683Programmably Partitioning Caches - Agents may be assigned to discrete portions of a cache. In some cases, more than one agent may be assigned to the same cache portion. The size of the portion, the assignment of agents to the portion and the number of agents may be programmed dynamically in some embodiments.10-17-2013
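As a final illustration (names and sizes assumed), a programmable agent-to-way-mask table captures the dynamic assignment the abstract describes; giving two agents the same mask shares a partition:

    /* Programmable table mapping each agent id to a way mask. */
    #include <stdint.h>

    #define AGENTS 8

    static uint16_t way_mask[AGENTS];     /* per-agent allowed ways */

    void program_partition(int agent, uint16_t mask)
    {
        way_mask[agent] = mask;           /* dynamic repartitioning */
    }

    uint16_t allowed_ways(int agent)
    {
        return way_mask[agent];           /* consulted on every fill */
    }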

Patent applications in class Associative