Multiple caches

Subclass of:

711 - Electrical computers and digital processing systems: memory

711100000 - STORAGE ACCESSING AND CONTROL

711117000 - Hierarchical memories

711118000 - Caching

Patent class list (only non-empty classes are listed)

Deeper subclasses:

Class        Description                                     Number of patent applications
711122000    Hierarchical caches                             216
711124000    Cross-interrogating                             20
711121000    Private caches                                  17
711120000    Parallel caches                                 17
711123000    User data cache and instruction data cache      13
Entries
Document    Title    Date
20100082904Apparatus and method to harden computer system - In some embodiments, a non-volatile cache memory may include a segmented non-volatile cache memory configured to be located between a system memory and a mass storage device of an electronic system and a controller coupled to the segmented non-volatile cache memory, wherein the controller is configured to control utilization of the segmented non-volatile cache memory. The segmented non-volatile cache memory may include a file cache segment, the file cache segment to store complete files in accordance with a file cache policy, and a block cache segment, the block cache segment to store one or more blocks of one or more files in accordance with a block cache policy, wherein the block cache policy is different from the file cache policy. The controller may be configured to utilize the file cache segment in accordance with information related to the block cache segment and to utilize the block cache segment in accordance with information related to the file cache segment. Other embodiments are disclosed and claimed.04-01-2010
20120246406EFFECTIVE PREFETCHING WITH MULTIPLE PROCESSORS AND THREADS - A processing system includes a memory and a first core configured to process applications. The first core includes a first cache. The processing system includes a mechanism configured to capture a sequence of addresses of the application that miss the first cache in the first core and to place the sequence of addresses in a storage array; and a second core configured to process at least one software algorithm. The at least one software algorithm utilizes the sequence of addresses from the storage array to generate a sequence of prefetch addresses. The second core issues prefetch requests for the sequence of the prefetch addresses to the memory to obtain prefetched data and the prefetched data is provided to the first core if requested.09-27-2012
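
As a concrete illustration of the capture-and-replay idea in 20120246406 above, here is a minimal C sketch: one thread records addresses that missed its cache into a shared array, and a helper thread replays them as prefetch requests. The ring size, the use of GCC/Clang's __builtin_prefetch, and all names are illustrative assumptions, not details from the application.

```c
/* Sketch of the capture-and-replay scheme in 20120246406. One core logs
 * miss addresses into a shared ring ("storage array"); a helper replays
 * them as prefetches. __builtin_prefetch is a GCC/Clang extension. */
#include <stdatomic.h>
#include <stddef.h>

#define RING 1024
static void *miss_ring[RING];            /* the shared storage array */
static atomic_size_t head, tail;

void record_miss(void *addr)             /* first core: log a cache miss */
{
    size_t h = atomic_load_explicit(&head, memory_order_relaxed);
    miss_ring[h % RING] = addr;
    atomic_store_explicit(&head, h + 1, memory_order_release);
}

void prefetch_step(void)                 /* second core: replay one miss */
{
    size_t t = atomic_load_explicit(&tail, memory_order_relaxed);
    if (t == atomic_load_explicit(&head, memory_order_acquire))
        return;                          /* nothing new to prefetch */
    __builtin_prefetch(miss_ring[t % RING]);   /* request the line early */
    atomic_store_explicit(&tail, t + 1, memory_order_relaxed);
}

int main(void)
{
    int x = 42;
    record_miss(&x);                     /* producer side ... */
    prefetch_step();                     /* ... consumer side */
    return 0;
}
```
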
20130086322SYSTEMS AND METHODS FOR MULTITENANCY DATA - Systems and methods are provided to support multitenant data in an EclipseLink environment. EclipseLink supports shared multitenant tables using tenant discriminator columns, allowing an application to be re-used for multiple tenants and have all their data co-located. Tenants can share the same schema transparently, without affecting one another and can use non-multitenant entity types as per usual. This functionality is flexible enough to allow for its usage at an Entity Manager Factory level or with individual Entity Manager's based on the application's needs. Support for multitenant entities can be done though the usage of a multitenant annotation or xml element configured in an eclipselink-orm.xml mapping file. The multitenant annotation can be used on an entity or mapped superclass and is used in conjunction with a tenant discriminator column or xml element.04-04-2013
20130086323EFFICIENT CACHE MANAGEMENT IN A CLUSTER - A content management system has at least two content server computers, a cache memory corresponding to each content server, the cache memory having a page cache to store cache objects for pages displayed by the content server, a dependency cache to store dependency information for the cache objects, and a notifier cache to replicate changes in dependency information to other caches.04-04-2013
20130042066STORAGE CACHING - The present disclosure provides a method for processing a storage operation in a system with an added level of storage caching. The method includes receiving, in a storage cache, a read request from a host processor that identifies requested data and determining whether the requested data is in a cache memory of the storage cache. If the requested data is in the cache memory of the storage cache, the requested data may be obtained from the storage cache and sent to the host processor. If the requested data is not in the cache memory of the storage cache, the read request may be sent to a host bus adapter operatively coupled to a storage system. The storage cache is transparent to the host processor and the host bus adapter.02-14-2013
20130042065CUSTOM CACHING - Methods and systems are presented for custom caching. Application threads define caches. The caches may be accessed through multiple index keys, which are mapped to multiple application thread-defined keys. Methods provide for the each index key and each application thread-defined key to be symmetrical. The index keys are used for loading data from one or more data sources into the cache stores on behalf of the application threads. Application threads access the data from the cache store by providing references to the caches and the application-supplied keys. Some data associated with some caches may be shared from the cache store by multiple application threads. Additionally, some caches are exclusively accessed by specific application threads.02-14-2013
20100042785ADVANCED PROCESSOR WITH FAST MESSAGING NETWORK TECHNOLOGY - An advanced processor comprises a plurality of multithreaded processor cores each having a data cache and instruction cache. A data switch interconnect is coupled to each of the processor cores and configured to pass information among the processor cores. A messaging network is coupled to each of the processor cores and a plurality of communication ports. In one aspect of an embodiment of the invention, the data switch interconnect is coupled to each of the processor cores by its respective data cache, and the messaging network is coupled to each of the processor cores by its respective message station. Advantages of the invention include the ability to provide high bandwidth communications between computer systems and memory in an efficient and cost-effective manner.02-18-2010
20100106911METHODS AND SYSTEMS FOR COMMUNICATION BETWEEN STORAGE CONTROLLERS - Methods and systems for communication between two storage controllers. A first storage controller specifies a special frame indicator in a frame of a protocol that is also used by a first storage controller to send a storage command to a storage device. The first storage controller transmits the frame to a second storage controller such that the frame comprises data in a payload field of the frame.04-29-2010
20120215982Partial Line Cache Write Injector for Direct Memory Access Write - A cache within a computer system receives a partial write request and identifies a cache hit of a cache line. The cache line corresponds to the partial write request and includes existing data. In turn, the cache receives partial write data and merges the partial write data with the existing data into the cache line. In one embodiment, the existing data is “modified” or “dirty.” In another embodiment, the existing data is “shared.” In this embodiment, the cache changes the state of the cache line to indicate the storing of the partial write data into the cache line.08-23-2012
20130046935SHARED COPY CACHE ACROSS NETWORKED DEVICES - A copy cache feature that can be shared across networked devices is provided. Content added to copy cache through a “copy”, a “like”, or similar command through one device may be forwarded to a server providing cloud-based services to a user and/or another device associated with the user such that the content can be inserted into the same or other files on other computing devices by the user. In addition to seamless movement of copy cache content across devices, the content may be made available in a context-based manner and/or sortable manner.02-21-2013
20130046934SYSTEM CACHING USING HETEROGENOUS MEMORIES - A caching circuit includes tag memories for storing tagged addresses of a first cache. On-chip data memories are arranged in the same die as the tag memories, and the on-chip data memories form a first sub-hierarchy of the first cache. Off-chip data memories are arranged in a different die as the tag memories, and the off-chip data memories form a second sub-hierarchy of the first cache. Sources (such as processors) are arranged to use the tag memories to service first cache requests using the first and second sub-hierarchies of the first cache.02-21-2013
20090307430SHARING AND PERSISTING CODE CACHES - Computer code from an application program comprising a plurality of modules that each comprise a separately loadable file is code cached in a shared and persistent caching system. A shared code caching engine receives native code comprising at least a portion of a single module of the application program, and stores runtime data corresponding to the native code in a cache data file in the non-volatile memory. The engine then converts cache data file into a code cache file and enables the code cache file to be pre-loaded as a runtime code cache. These steps are repeated to store a plurality of separate code cache files at different locations in non-volatile memory.12-10-2009
20120191912STORING DATA ON STORAGE NODES - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for storing data on storage nodes. In one aspect, a method includes receiving a file to be stored across a plurality of storage nodes each including a cache. The file is stored by storing portions of the file each on a different storage node. A first portion is written to a first storage node's cache until determining that the first storage node's cache is full. A different second storage node is selected in response to determining that the first storage node's cache is full. For each portion of the file, a location of the portion is recorded, the location indicating at least a storage node storing the portion.07-26-2012
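
The placement loop described in 20120191912 above can be sketched in a few lines of C: write portions into the current node's cache until it is full, then select the next node, recording each portion's location. The node count, capacities, and round-robin selection are invented for illustration.

```c
/* Sketch of the placement loop in 20120191912: fill a node's cache,
 * switch nodes when it is full, record each portion's location. */
#include <stdio.h>

#define NODES  4
#define CAP    3                       /* chunks per node cache (toy) */
#define CHUNKS 10

int main(void)
{
    int used[NODES] = {0};
    int location[CHUNKS];              /* chunk -> storage node */
    int n = 0;                         /* current node */

    for (int c = 0; c < CHUNKS; c++) {
        if (used[n] == CAP)            /* cache full: select another node */
            n = (n + 1) % NODES;
        used[n]++;
        location[c] = n;               /* record where this portion went */
        printf("portion %d -> node %d\n", c, location[c]);
    }
    return 0;
}
```
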
20130073808METHOD AND NODE ENTITY FOR ENHANCING CONTENT DELIVERY NETWORK - The present invention provides a method and a caching node entity for ensuring at least a predetermined number of a content object to be kept stored in a network, comprising a plurality of cache nodes for storing copies of content objects. The present invention makes use of ranking states values, deletable or non-deletable, which when assigned to copies of content objects are indicating whether a copy is either deletable or non-deletable. At least one copy of each content object is assigned the value non-deletable. The value for a copy of a content object changing from deletable to non-deletable in one cache node of the network, said copy being a candidate for the value non-deletable, if a certain condition is fulfilled.03-21-2013
20090013130MULTIPROCESSOR SYSTEM AND OPERATING METHOD OF MULTIPROCESSOR SYSTEM - According to one aspect of embodiments, a multiprocessor system includes a plurality of processors, cache memories corresponding respectively to the processors, and a cache access controller. The cache access controller accesses at least one of the cache memories except one of the cache memories corresponding to one of the processors that issued the indirect access instruction in response to an indirect access instruction from each of the processors. Accordingly, even when one processor accesses data stored in a cache memory of another processor, data transfer between the cache memories is not required. Therefore, latency of an access to the data shared by the plurality of processors can be reduced. Moreover, since the communication between the cache memories is performed only at the time of executing the indirect access instructions, the bus traffic between the cache memories can be reduced.01-08-2009
20090106494ALLOCATING SPACE IN DEDICATED CACHE WAYS - A system comprises a processor core and a cache coupled to the core and comprising at least one cache way dedicated to the core, where the cache way comprises multiple cache lines. The system also comprises a cache controller coupled to the cache. Upon receiving a data request from the core, the cache controller determines whether the cache has a predetermined amount of invalid cache lines. If the cache does not have the predetermined amount of invalid cache lines, the cache controller is adapted to allocate space in the cache for new data, where the space is allocable in the at least one cache way dedicated to the core.04-23-2009
20110219189STORAGE SYSTEM AND REMOTE COPY CONTROL METHOD FOR STORAGE SYSTEM - A storage system maintains consistency of the stored contents between volumes even when a plurality of remote copying operations are executed asynchronously. A plurality of primary storage control devices and a plurality of secondary storage control devices are connected by a plurality of paths, and remote copying is performed asynchronously between respective first volumes and second volumes. Write data transferred from the primary storage control device to the secondary storage control device is held in a write data storage portion. Update order information, including write times and sequential numbers, is managed by update order information management portions. An update control portion collects update order information from each update order information management portion, determines the time at which update of each second volume is possible, and notifies each-update portion. By this means, the stored contents of each second volume can be updated up to the time at which update is possible.09-08-2011
20110219188CACHE AS POINT OF COHERENCE IN MULTIPROCESSOR SYSTEM - In a multiprocessor system, a conflict checking mechanism is implemented in the L2 cache memory. Different versions of speculative writes are maintained in different ways of the cache. A record of speculative writes is maintained in the cache directory. Conflict checking occurs as part of directory lookup. Speculative versions that do not conflict are aggregated into an aggregated version in a different way of the cache. Speculative memory access requests do not go to main memory.09-08-2011
20110191541TECHNIQUES FOR DISTRIBUTED CACHE MANAGEMENT - Techniques for distributed cache management are provided. A server having backend resource includes a global cache and a global cache agent. Individual clients each have client cache agents and client caches. When data items associated with the backend resources are added, modified, or deleted in the client caches, the client cache agents report the changes to the global cache agent. The global cache agent records the changes and notifies the other client cache agents to update a status of the changes within their client caches. When the changes are committed to the backend resource each of the statuses in each of the caches are updated accordingly.08-04-2011
20080263279DESIGN STRUCTURE FOR EXTENDING LOCAL CACHES IN A MULTIPROCESSOR SYSTEM - A design structure embodied in a machine readable storage medium for designing, manufacturing, and/or testing a design for caching data in a multiprocessor system is provided. The design structure includes a multiprocessor system, which includes a first processor including a first cache associated therewith, a second processor including a second cache associated therewith, and a main memory to store data required by the first processor and the second processor, the main memory being controlled by a memory controller that is in communication with each of the first processor and the second processor through a bus, wherein the second cache associated with the second processor is operable to cache data from the main memory corresponding to a memory access request of the first processor.10-23-2008
20120144117RECOMMENDATION BASED CACHING OF CONTENT ITEMS - Content item recommendations are generated for users based on metadata associated with the content items and a history of content item usage associated with the users. Each content item recommendation identifies a user and a content item and includes a score that indicates how likely the user is to view the content item. Based on the content item recommendations, and constraints of one or more caches, the content items are selected for storage in one or more caches. The constraints may include users that are associated with each cache, the geographical location of each cache, the size of each cache, and/or costs associated with each cache such as bandwidth costs. The content items stored in a cache are recommended to users associated with the cache.06-07-2012
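
One plausible reading of the selection step in 20120144117 above is a greedy fill: rank candidate content items by recommendation score and admit them until the cache's size constraint is reached. The following C sketch assumes made-up scores, sizes, and a single capacity constraint; the application also names per-cache users, geography, and bandwidth costs as inputs.

```c
/* Greedy score-based admission, one reading of 20120144117: admit the
 * best-scored items until the cache's size constraint is hit. */
#include <stdio.h>
#include <stdlib.h>

struct item { const char *name; double score; int size; };

static int by_score_desc(const void *a, const void *b)
{
    double d = ((const struct item *)b)->score
             - ((const struct item *)a)->score;
    return (d > 0) - (d < 0);
}

int main(void)
{
    struct item items[] = {            /* invented recommendations */
        { "clip-a", 0.9, 40 }, { "clip-b", 0.7, 30 },
        { "clip-c", 0.6, 50 }, { "clip-d", 0.2, 10 },
    };
    int capacity = 80, used = 0;       /* cache size constraint */

    qsort(items, 4, sizeof items[0], by_score_desc);
    for (int i = 0; i < 4; i++)
        if (used + items[i].size <= capacity) {
            used += items[i].size;
            printf("cache %s (score %.1f)\n", items[i].name, items[i].score);
        }
    return 0;
}
```
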
20080270701Cluster-type storage system and managing method of the cluster-type storage system - A storage system 10-30-2008
20110208914STORAGE SYSTEM AND METHOD OF OPERATING THEREOF - There are provided a storage system, storage control unit and method of operating thereof. A storage system comprises a permanent storage subsystem comprising a first cache memory and a non-volatile storage medium, and a storage control unit operatively coupled to said subsystem and to a second cache memory operable to cache “dirty” data pending to be written to the permanent storage subsystem and to enable, responsive to at least one command by the control storage unit, destaging said “dirty” data or part thereof to the permanent storage subsystem. The storage control unit is operable to determine achievement of a “writing criterion”, to provide, upon achieving, at least one command to the permanent storage subsystem requiring flushing destaged data or part thereof from the first cache memory to the non-volatile storage medium, and to provide at least one command to the second cache memory requiring reclassification of the “washed” data or a respective part thereof into the “clean” data, wherein the storage control unit is further operable to configure the “writing criterion” responsive to indicating one or more predefined events during an operation of the storage system.08-25-2011
20090182942Extract Cache Attribute Facility and Instruction Therefore - A facility and cache machine instruction of a computer architecture for specifying a target cache level and a target cache attribute of interest for obtaining a cache attribute of one or more target caches. The requested cache attribute of the target cache(s) is saved in a register.07-16-2009
20090198896DUAL WRITING DEVICE AND ITS CONTROL METHOD - A first storage system sets a storage area of a first disk drive as a first volume, and misrepresents an identifier of the storage system and an identifier of the first volume. A second storage system sets a storage area of a second disk drive as a second volume, and misrepresents an identifier of the storage system and an identifier of the second volume. The first storage system copies data of the first volume in the second volume. The second storage system copies data of the second volume in the first volume. A management computer determines whether there is a match between the data of the first volume and the data of the second volume. When it is determined that there is no match, the host computer accesses only one of the first volume and the second volume that stores the latest data.08-06-2009
20090198895CONTROL METHOD, MEMORY, AND PROCESSING SYSTEM UTILIZING THE SAME - A control method for a memory is provided. The memory includes a plurality of storage units, each storing a plurality of bits. In a read mode, a read command is provided to the memory. The value of a most significant bit (MSB) of each storage unit is obtained and recorded. The value of the most significant bits is output. The value of a neighboring bit of each storage unit is obtained and recorded. The neighboring bit neighbors the most significant bit. The value of the neighboring bits is output.08-06-2009
20090055589Cache memory system for a data processing apparatus - A data processing apparatus is provided having a cache memory 02-26-2009
20090055590Storage system having function to backup data in cache memory - A storage system comprises a plurality of control modules having a plurality of cache memories respectively. One or more dirty data elements out of a plurality of dirty data elements stored in a first cache memory in a first control module are copied to a second cache memory in a second control module. The one or more dirty data elements stored in the second cache memory are backed up to a non-volatile storage resource. The dirty data elements backed up from the first cache memory to the non-volatile storage resource are dirty data elements other than the one or more dirty data elements of which copying has completed, out of the plurality of dirty data elements.02-26-2009
20120079200UNIFIED STREAMING MULTIPROCESSOR MEMORY - One embodiment of the present invention sets forth a technique for providing a unified memory for access by execution threads in a processing system. Several logically separate memories are combined into a single unified memory that includes a single set of shared memory banks, an allocation of space in each bank across the logical memories, a mapping rule that maps the address space of each logical memory to its partition of the shared physical memory, a circuitry including switches and multiplexers that supports the mapping, and an arbitration scheme that allocates access to the banks.03-29-2012
20090100227Processor architecture with wide operand cache - A programmable processor and method for improving the performance of processors by expanding at least two source operands, or a source and a result operand, to a width greater than the width of either the general purpose register or the data path width. The present invention provides operands which are substantially larger than the data path width of the processor by using the contents of a general purpose register to specify a memory address at which a plurality of data path widths of data can be read or written, as well as the size and shape of the operand. In addition, several instructions and apparatus for implementing these instructions are described which obtain performance advantages if the operands are not limited to the width and accessible number of general purpose registers.04-16-2009
20090204763SYSTEM AND METHOD FOR AVOIDING DEADLOCKS WHEN PERFORMING STORAGE UPDATES IN A MULTI-PROCESSOR ENVIRONMENT - A system and method for avoiding deadlocks when performing storage updates in a multi-processor environment. The system includes a processor having a local cache, a store queue having a temporary buffer with capability to reject exclusive cross-interrogates (XI) while an interrogated cache line is owned exclusive and is to be stored, and a mechanism for performing a method. The method includes setting the processor into a slow mode. A current instruction that includes a data store having one or more target lines is received. The current instruction is executed, with the executing including storing results associated with the data store into the temporary buffer. The store queue is prevented from rejecting an exclusive XI corresponding to the target lines of the current instruction. Each target line is acquired with a status of exclusive ownership, and the contents from the temporary buffer are written to each target line after instruction completion.08-13-2009
20090204762Self Test Apparatus for Identifying Partially Defective Memory - A computing system is provided which includes a processor having a cache memory. The cache memory includes a plurality of independently configurable subdivisions, each subdivision including a memory array. A service element (SE) of the computing system is operable to cause a built-in-self-test (BIST) to be executed to test the cache memory, the BIST being operable to determine whether any of the subdivisions is defective. When it is determined that one of the subdivisions of the cache memory determined defective by the BIST is non-repairable, the SE logically deletes the defective subdivision from the system configuration, and the SE is operable to permit the processor to operate without the logically deleted subdivision. The SE is further operable to determine that the processor is defective when a number of the defective subdivisions exceeds a threshold.08-13-2009
20090210624 3-Dimensional L2/L3 Cache Array to Hide Translation (TLB) Delays - Embodiments of the invention provide a look-aside-look-aside buffer (LLB) configured to retain a portion of the real addresses in a translation look-aside buffer (TLB) to allow prefetching of data from a cache. A subset of real address bits associated with an effective address may be retrieved relatively quickly from the LLB, thereby allowing access to the cache before the complete address translation is available and reducing cache access latency.08-20-2009
20100241808CACHE-LINE AWARE COLLECTION FOR RUNTIME ENVIRONMENTS - Target data is allocated into caches of a shared-memory multiprocessor system during a runtime environment. The target data includes a plurality of data items that are allocated onto separate cache lines. Each data item is allocated on a separate cache line regardless of the size of the cache line of the system. The data items become members of a wrapper types when data items are value types. The runtime environment maintains a set of wrapper types of various sizes that are of typical cache line sizes. Garbage data is inserted into the cache line in cases where data items are reference types and data is stored on a managed heap. The allocation also configures garbage collectors in the runtime environment not to slide multiple data items onto the same cache line. Other examples are included where a developer can augment the runtime environment to be aware of cache line sizes.09-23-2010
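
The core trick behind 20100241808 above, allocating each data item on its own cache line, looks like this in plain C. The 64-byte line size and the per-thread-counter use case are assumptions for the sketch.

```c
/* One item per cache line, the layout trick behind 20100241808.
 * Assumes a 64-byte line; each counter then occupies a full line. */
#include <stdalign.h>
#include <stdio.h>

#define LINE 64

struct padded_counter {
    alignas(LINE) long value;          /* the actual data item */
    char pad[LINE - sizeof(long)];     /* fill the rest of the line */
};

static struct padded_counter counters[4];   /* e.g., one per thread */

int main(void)
{
    for (int i = 0; i < 4; i++)
        counters[i].value = i;         /* each write hits its own line */
    printf("one counter spans %zu bytes\n", sizeof(struct padded_counter));
    return 0;
}
```

Padding each counter to a full line keeps writes by different threads from invalidating one another's cache lines, the false-sharing problem the allocation scheme is designed to avoid.
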
20100211743INFORMATION PROCESSING APPARATUS AND METHOD OF CONTROLLING SAME - Disclosed is an information processing apparatus equipped with first and second CPUs, as well as a method of controlling this apparatus. When the first CPU launches an operating system for managing a virtual memory area that includes a first cache area for a device, the first CPU generates specification data, which indicates the corresponding relationship between the first cache and a second cache for the device and provided in a main memory, and transfers the specification data to the second CPU. In accordance with the specification data, the second CPU transfers data, which has been stored in the device, to a physical memory corresponding to a cache to which the first CPU refers. As a result, the first CPU accesses the first cache area is thereby capable of accessing the device at high speed.08-19-2010
20100211742CONVEYING CRITICAL DATA IN A MULTIPROCESSOR SYSTEM - A system for conveying critical and non-critical words of multiple cache lines includes a first node interface of a first processing node receiving, from a first processor, a first request identifying a critical word of a first cache line and a second request identifying a critical word of a second cache line. The first node interface conveys requests corresponding to the first and second requests to a second node interface of a second processing node. The second node interface receives the corresponding requests and conveys the critical words of the first and second cache lines to the first processing node before conveying non-critical words of the first and second cache lines.08-19-2010
20090327610Method and System for Conducting Intensive Multitask and Multiflow Calculation in Real-Time - The system for conducting intensive multitask and multistream calculation in real time comprises a central processor core (SPP) for supporting the system software and comprising a control unit (ESCU) for assigning threads of an application, the non-critical threads being run by the central processor core (SPP), whereas the intensive or specialized threads are assigned to an auxiliary processing part (APP) comprising a set of N auxiliary calculation units (APU12-31-2009
20090138659MECHANISM TO ACCELERATE REMOVAL OF STORE OPERATIONS FROM A QUEUE - A processor includes at least one processing core. The processing core includes a memory cache, a store queue, and a post-retirement store queue. The processing core retires a store in the store queue and conveys the store to the memory cache and the post-retirement store queue, in response to retiring the store. In one embodiment, the store queue and/or the post-retirement store queue is a first-in, first-out queue. In a further embodiment, to convey the store to the memory cache, the processing core obtains exclusive access to a portion of the memory cache targeted by the store. The processing core buffers the store in a coalescing buffer and merges with the store, one or more additional stores and/or loads targeted to the portion of the memory cache targeted by the store prior to writing the store to the memory cache.05-28-2009
20090037658Providing an inclusive shared cache among multiple core-cache clusters - In one embodiment, the present invention includes a method for receiving requested data from a system interconnect interface in a first scalability agent of a multi-core processor including a plurality of core-cache clusters, storing the requested data in a line of a local cache of a first core-cache cluster including a requester core, and updating a cluster field and a core field in a vector of a tag array for the line. Other embodiments are described and claimed.02-05-2009
20130138885DYNAMIC PROCESS/OBJECT SCOPED MEMORY AFFINITY ADJUSTER - An apparatus, method, and program product for optimizing a multiprocessor computing system by sampling memory reference latencies and adjusting components of the system in response thereto. During execution of processes the computing system, memory reference sampling of memory locations from shared memory of the computing system referenced in the executing processes is performed. Each sampled memory reference collected from sampling is associated with a latency and a physical memory location in the shared memory. Each sampled memory reference is analyzed to identify segments of memory locations in the shared memory corresponding to a sub-optimal latency, and based on the analyzed sampled memory references, the physical location of the one or more identified segments, the processor on which one or more processes referencing the identified segments, and/or a status associated with the one or more identified segments is dynamically adjusted to thereby optimize memory access for the multiprocessor computing system.05-30-2013
20110131376METHOD AND APPARATUS FOR TILE MAPPING TECHNIQUES - An approach for improving tile-map caching techniques is provided. Whether a tile object is stored in a first cache that is configured to store a plurality of tile objects associated with a map is determined. It is also determined whether a resource locator associated with the tile object is stored in a second cache, if the tile object is not in the first cache. The tile object is retrieved based on the resource locator if the resource locator is stored in the second cache.06-02-2011
20100332755METHOD AND APPARATUS FOR USING A SHARED RING BUFFER TO PROVIDE THREAD SYNCHRONIZATION IN A MULTI-CORE PROCESSOR SYSTEM - An apparatus and method for improving synchronization between threads in a multi-core processor system are provided. An apparatus includes a memory, a first processor core, and a second processor core. The memory includes a shared ring buffer for storing data units, and stores a plurality of shared variables associated with accessing the shared ring buffer. The first processor core runs a first thread and has a first cache associated therewith. The first cache stores a first set of local variables associated with the first processor core. The first thread controls insertion of data items into the shared ring buffer using at least one of the shared variables and the first set of local variables. The second processor core runs a second thread and has a second cache associated therewith. The second cache stores a second set of local variables associated with the second processor core. The second thread controls extraction of data items from the shared ring buffer using at least one of the shared variables and the second set of local variables.12-30-2010
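
The shared-versus-local variable split described in 20100332755 above is the classic cached-index single-producer/single-consumer ring. In this C sketch, head and tail are the shared variables, while each side keeps a local cached copy of the other side's index so it only touches the shared cache line when the buffer looks full or empty. Sizes and names are illustrative, not from the application.

```c
/* Cached-index SPSC ring, matching the variable split in 20100332755:
 * head/tail are the shared variables; cached_tail/cached_head are the
 * per-core local copies that keep most operations off shared lines. */
#include <stdatomic.h>
#include <stdbool.h>

#define SLOTS 256
static int ring[SLOTS];
static atomic_uint head, tail;         /* shared variables */
static unsigned cached_tail;           /* producer-local copy of tail */
static unsigned cached_head;           /* consumer-local copy of head */

bool insert(int v)                     /* first thread: insert data item */
{
    unsigned h = atomic_load_explicit(&head, memory_order_relaxed);
    if (h - cached_tail == SLOTS) {    /* looks full: refresh local copy */
        cached_tail = atomic_load_explicit(&tail, memory_order_acquire);
        if (h - cached_tail == SLOTS)
            return false;              /* really full */
    }
    ring[h % SLOTS] = v;
    atomic_store_explicit(&head, h + 1, memory_order_release);
    return true;
}

bool extract(int *v)                   /* second thread: extract data item */
{
    unsigned t = atomic_load_explicit(&tail, memory_order_relaxed);
    if (t == cached_head) {            /* looks empty: refresh local copy */
        cached_head = atomic_load_explicit(&head, memory_order_acquire);
        if (t == cached_head)
            return false;              /* really empty */
    }
    *v = ring[t % SLOTS];
    atomic_store_explicit(&tail, t + 1, memory_order_release);
    return true;
}

int main(void)
{
    int v = 0;
    insert(7);
    return (extract(&v) && v == 7) ? 0 : 1;
}
```
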
20110213931NON BLOCKING REHASHING - An apparatus and a method operating on data at a server node of a data grid system with distributed cache is described. A coordinator receives a request to change a topology of a cache cluster from a first group of cache nodes to a second group of cache nodes. The request includes a cache node joining or leaving the first group. A key for the second group is rehashed without blocking access to the first group while rehashing.09-01-2011
20110082980HIGH PERFORMANCE UNALIGNED CACHE ACCESS - A cache memory device and method for operating the same. One embodiment of the cache memory device includes an address decoder decoding a memory address and selecting a target cache line. A first cache array is configured to output a first cache entry associated with the target cache line, and a second cache array coupled to an alignment unit is configured to output a second cache entry associated with the alignment cache line. The alignment unit coupled to the address decoder selects either the target cache line or a neighbor cache line proximate the target cache line as an alignment cache line output. Selection of either the target cache line or a neighbor cache line is based on an alignment bit in the memory address. A tag array cache is split into even and odd cache lines tags, and provides one or two tags for every cache access.04-07-2011
20120303898MANAGING UNMODIFIED TRACKS MAINTAINED IN BOTH A FIRST CACHE AND A SECOND CACHE - Provided are a computer program product, system, and method for managing unmodified tracks maintained in both a first cache and a second cache. The first cache has unmodified tracks in the storage subject to Input/Output (I/O) requests. Unmodified tracks are demoted from the first cache to a second cache. An inclusive list indicates unmodified tracks maintained in both the first cache and a second cache. An exclusive list indicates unmodified tracks maintained in the second cache but not the first cache. The inclusive list and the exclusive list are used to determine whether to promote to the second cache an unmodified track demoted from the first cache.11-29-2012
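
A toy model of the two lists in 20120303898 above: the inclusive list marks unmodified tracks present in both caches, the exclusive list marks tracks present only in the second cache, and together they decide whether a track demoted from the first cache still needs to be copied down. Track ids and the plain-bitmap representation are assumptions.

```c
/* Toy model of the inclusive/exclusive lists in 20120303898. */
#include <stdbool.h>
#include <stdio.h>

#define TRACKS 16
static bool inclusive[TRACKS];   /* track in BOTH first and second cache */
static bool exclusive[TRACKS];   /* track in second cache only */

void demote_from_first(int t)
{
    if (inclusive[t]) {          /* second cache already holds it: */
        inclusive[t] = false;    /* no copy needed, just re-list it */
        exclusive[t] = true;
    } else if (!exclusive[t]) {  /* not in the second cache at all: */
        printf("promote track %d to second cache\n", t);
        exclusive[t] = true;
    }                            /* else: already exclusive, nothing to do */
}

int main(void)
{
    inclusive[3] = true;         /* track 3 currently cached in both */
    demote_from_first(3);        /* silent: no copy required */
    demote_from_first(7);        /* prints: track must be copied down */
    return 0;
}
```
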
20130159626OPTIMIZED EXECUTION OF INTERLEAVED WRITE OPERATIONS IN SOLID STATE DRIVES - A method for data storage includes receiving a plurality of data items for storage in a memory, including at least first data items that are associated with a first data source and second data items that are associated with a second data source, such that the first and second data items are interleaved with one another over time. The first data items are de-interleaved from the second data items, by identifying a respective data source with which each received data item is associated. The de-interleaved first data items and the de-interleaved second data items are stored in the memory.06-20-2013
20100262781Loading Data to Vector Renamed Register From Across Multiple Cache Lines - A load instruction that accesses data cache may be off natural alignment, which causes a cache line crossing to complete the access. The illustrative embodiments provide a mechanism for loading data across multiple cache lines without the need for an accumulation register or collection point for partial data access from a first cache line while waiting for a second cache line to be accessed. Because the accesses to separate cache lines are concatenated within the vector rename register without the need for an accumulator, an off-alignment load instruction is completely pipeline-able and flushable with no cleanup consequences.10-14-2010
20100332756PROCESSING OUT OF ORDER TRANSACTIONS FOR MIRRORED SUBSYSTEMS - Methods and apparatus relating to processing out of order transactions for mirrored subsystems are described. In one embodiment, a device (that is mirroring data from another device) includes a cache to track out of order write operations prior to writing the data from the write operations to memory. A register may be used to track the state of the cache and cause acknowledgement of commitment of the data to memory once all cache entries, as recorded at a select point by the register, are emptied or otherwise invalidated. Other embodiments are also disclosed.12-30-2010
20110060879SYSTEMS AND METHODS FOR PROCESSING MEMORY REQUESTS - A processing system is provided. The processing system includes a first processing unit coupled to a first memory and a second processing unit coupled to a second memory. The second memory comprises a coherent memory and a private memory that is private to the second processing unit.03-10-2011
20100293332CACHE ENUMERATION AND INDEXING - In response to a request including a state object, which can indicate a state of an enumeration of a cache, the enumeration can be continued by using the state object to identify and send cache data. Also, an enumeration of cache units can be performed by traversing a data structure that includes object nodes, which correspond to cache units, and internal nodes. An enumeration state stack can indicate a current state of the enumeration, and can include state nodes that correspond to internal nodes in the data structure. Additionally, a cache index data structure can include a higher level table and a lower level table. The higher level table can have a leaf node pointing to the lower level table, and the lower level table can have a leaf node pointing to one of the cache units. Moreover, the lower level table can be associated with a tag.11-18-2010
20100293331STORAGE SYSTEM AND DATA MANAGEMENT METHOD - A storage system, which is coupled to a computer, includes a storage device, a controller, a plurality of cache memory units, and a connecting unit. Each of the plurality of cache memory units includes: a cache memory for storing data; an auxiliary storage device for holding a content of data even after shutdown of power; and a cache controller for controlling an input/output of data to/from the cache memory and the auxiliary storage device. The cache controller store data stored in the cache memory, which is divided into a plurality of parts, into a plurality of the auxiliary storage devices included in the plurality of cache memory units.11-18-2010
20120311266MULTIPROCESSOR AND IMAGE PROCESSING SYSTEM USING THE SAME - To provide a multiprocessor capable of easily sharing data and buffering data to be transferred.12-06-2012
20110082981MULTIPROCESSING CIRCUIT WITH CACHE CIRCUITS THAT ALLOW WRITING TO NOT PREVIOUSLY LOADED CACHE LINES - Data is processed using a first and second processing circuit (04-07-2011
20100030964METHOD AND SYSTEM FOR SECURING INSTRUCTION CACHES USING CACHE LINE LOCKING - A method and system is provided for securing micro-architectural instruction caches (I-caches). Securing an I-cache involves providing security critical instructions to indicate a security critical code section; and implementing an I-cache locking policy to prevent unauthorized eviction and replacement of security critical instructions in the I-cache. Securing the I-cache may further involve dynamically partitioning the I-cache into multiple logical partitions, and sharing access to the I-cache by an I-cache mapping policy that provides access to each I-cache partition by only one logical processor.02-04-2010
20100023694Memory access system, memory control apparatus, memory control method and program - A memory control apparatus disposed in a memory access system having a bus, a single storage unit with a bank structure and a bus arbitrating unit, includes: an access-request accepting means for accepting sequential access requests for data located at sequential addresses in the storage unit, sequential access requests for data located at discrete addresses in the storage unit as sequential access requests, or access requests for data located at sequential addresses in the storage unit which cannot be made into a single access request as sequential access requests; and an access-request rearranging means for rearranging sequential access requests accepted by the access-request accepting means in an order of banks of the storage unit within a range of access requests relating to either a data write request output from one of data processing units or a data read request output therefrom to control an access control of the storage unit.01-28-2010
20110153941Multi-Autonomous System Anycast Content Delivery Network - A content delivery network includes first and second sets of cache servers, a domain name server, and an anycast island controller. The first set of cache servers is hosted by a first autonomous system and the second set of cache servers is hosted by a second autonomous system. The cache servers are configured to respond to an anycast address for the content delivery network, to receive a request for content from a client system, and provide the content to the client system. The first and second autonomous systems are configured to balance the load across the first and second sets of cache servers, respectively. The domain name server is configured to receive a request from a requestor for a cache server address, and provide the anycast address to the requestor in response to the request. The anycast island controller is configured to receive load information from each of the cache servers, determine an amount of requests to transfer from the first autonomous system to the second autonomous system; send an instruction to the first autonomous system to transfer the amount of requests to the second autonomous system.06-23-2011
20110167223BUFFER MEMORY DEVICE, MEMORY SYSTEM, AND DATA READING METHOD - Memory access is accelerated by performing a burst read without any problems caused due to rewriting of data. A buffer memory device reads, in response to a read request from a processor, data from a main memory including cacheable and uncacheable areas. The buffer memory device includes an attribute obtaining unit which obtains the attribute of the area indicated by a read address included in the read request; an attribute determining unit which determines whether or not the attribute obtained by the attribute obtaining unit is burst-transferable; a data reading unit which performs a burst read of data including data held in the area indicated by the read address, when determined that the attribute obtained by the attribute obtaining unit is burst-transferable; and a buffer memory which holds the data burst read by the data reading unit.07-07-2011
20120059994USING A MIGRATION CACHE TO CACHE TRACKS DURING MIGRATION - Provided are a method, system, and computer program product for using a migration cache to cache tracks during migration. Indication is made in an extent list of tracks in an extent in a source storage subject to Input/Output (I/O) requests. A migration operation is initiated to migrate the extent from the source storage to a destination storage. In response to initiating the migration operation, a determination is made of a first set of tracks in the extent in the source storage indicated in the extent list. A determination is also made of a second set of tracks in the extent. The tracks in the source storage in the first set are copied to a migration cache, wherein updates to the tracks in the migration cache during the migration operation are applied to the migration cache. The tracks in the second set are copied directly from the source storage to the destination storage without buffering in the migration cache. The tracks in the first set are copied from the migration cache to the destination storage. The migration operation is completed in response to copying the first set of tracks from the migration cache to the destination storage and copying the second set of tracks from the source storage to the destination storage, wherein after the migration the tracks in the extent are located in the destination storage.03-08-2012
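
The split copy in 20120059994 above can be paraphrased as three passes, sketched below in C: stage the busy (I/O-active) tracks through a migration cache, copy the idle tracks directly, then drain the migration cache, which by then holds any updates that arrived mid-migration. The track array and busy flags are invented stand-ins for the extent list.

```c
/* Three-pass rendering of the split copy in 20120059994. */
#include <stdbool.h>
#include <stdio.h>

#define TRACKS 8
static bool busy[TRACKS];        /* stand-in for the extent list of
                                    tracks subject to I/O requests */

int main(void)
{
    busy[1] = busy[4] = true;

    for (int t = 0; t < TRACKS; t++)      /* first set: stage via cache */
        if (busy[t])
            printf("track %d: source -> migration cache\n", t);

    for (int t = 0; t < TRACKS; t++)      /* second set: copy directly */
        if (!busy[t])
            printf("track %d: source -> destination\n", t);

    for (int t = 0; t < TRACKS; t++)      /* drain cache (with updates) */
        if (busy[t])
            printf("track %d: migration cache -> destination\n", t);
    return 0;
}
```
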
20120159072MEMORY SYSTEM - According to one embodiment, a memory system includes a chip including a cell array and first and second caches configured to hold data read out from the cell array; an interface configured to manage a first and a second addresses; a controller configured to issue a readout request to the interface; and a buffer configured to hold the data from the chip. The interface transfers the data in the first cache to the buffer without reading out the data from the cell array if the readout address matches the first address, transfers the data in the second cache to the buffer without reading out the data from the cell array if the readout address matches the second address, and reads out the data from the cell array and transfers the data to the buffer if the readout address does not match either one of the first or second address.06-21-2012
20110072212CACHE MEMORY CONTROL APPARATUS AND CACHE MEMORY CONTROL METHOD - A cache memory controller searches a second cache tag memory holding cache state information that indicates whether any of the multi-processor cores holds, within its own first cache memory, a registered address of the information. When a target address coincides with the obtained registered address, the cache memory controller determines, based on the cache state information, whether an invalidation request or a data request to the processor core including the block is necessary. Once it determines that such an invalidation request or data request is necessary, the cache memory controller determines whether a retry of the instruction is necessary based on a comparison result of the first cache tag memory.03-24-2011
20120137073Extract Cache Attribute Facility and Instruction Therefore - A facility and cache machine instruction of a computer architecture for specifying a target cache level and a target cache attribute of interest for obtaining a cache attribute of one or more target caches. The requested cache attribute of the target cache(s) is saved in a register.05-31-2012
20120173819Accelerating Cache State Transfer on a Directory-Based Multicore Architecture - Technologies are generally described herein for accelerating a cache state transfer in a multicore processor. The multicore processor may include first, second, and third tiles. The multicore processor may initiate migration of a thread executing on the first core at the first tile from the first tile to the second tile. The multicore processor may determine block addresses of blocks to be transferred from a first cache at the first tile to a second cache at the second tile, and identify that a directory at the third tile corresponds to the block addresses. The multicore processor may update the directory to reflect that the second cache shares the blocks. The multicore processor may transfer the blocks from the first cache in the first tile to the second cache in the second tile effective to complete the migration of the thread from the first tile to the second tile.07-05-2012
20120084510Computing Machine and Computing System - According to one embodiment, a computing machine includes a virtual machine operated on a virtual machine monitor, the computing machine includes a first memory device, and a second memory device. The virtual machine monitor is configured to assign a part of a region of the first memory device as a third memory device to the virtual machine and to assign a part of a region of the second memory device as a fourth memory device to the virtual machine. The virtual machine comprises a first cache control module configured to use the fourth memory device as a read cache of the third memory device.04-05-2012
20080301368Recording controller and recording control method - Upon retrieving, after occurrence of replacement of a first cache, move out (MO) data that is a write back target, a second cache determines, based on data that is set in a control flag of a register, whether a new registration process of move in (MI) data with respect to a recording position of the MO data is completed. Upon determining that the new registration process is not completed, the second cache cancels the new registration process to ensure that a request of the new registration process is not output to a pipeline.12-04-2008
20100100681System on a chip for networking - A system on a chip for network devices. In one implementation, the system on a chip may include (integrated onto a single integrated circuit), a processor and one or more I/O devices for networking applications. For example, the I/O devices may include one or more network interface circuits for coupling to a network interface. In one embodiment, coherency may be enforced within the boundaries of the system on a chip but not enforced outside of the boundaries.04-22-2010
20120331231METHOD AND APPARATUS FOR SUPPORTING MEMORY USAGE THROTTLING - An apparatus for providing system memory usage throttling within a data processing system having multiple chiplets is disclosed. The apparatus includes a system memory, a memory access collection module, a memory credit accounting module and a memory throttle counter. The memory access collection module receives a first set of signals from a first cache memory within a chiplet and a second set of signals from a second cache memory within the chiplet. The memory credit accounting module tracks the usage of the system memory on a per user virtual partition basis according to the results of cache accesses extracted from the first and second set of signals from the first and second cache memories within the chiplet. The memory throttle counter for provides a throttle control signal to prevent any access to the system memory when the system memory usage has exceeded a predetermined value.12-27-2012
20120324166COMPUTER-IMPLEMENTED METHOD OF PROCESSING RESOURCE MANAGEMENT - A computer-implemented method for managing processing resources of a computerized system having at least a first processor and a second processor, each of the processors operatively interconnected to a memory storing a set of data to be processed by a processor, the method comprising: monitoring data accessed by the first processor while executing; and if the second processor is at a shorter distance than the first processor from the monitored data, instructing to interrupt execution at the first processor and resume the execution at the second processor.12-20-2012
20120096225DYNAMIC CACHE CONFIGURATION USING SEPARATE READ AND WRITE CACHES - Data from storage devices is stored in a read cache, having a read cache size, and a write cache, having a write cache size. The read cache and the write cache are separate caches. Cache configuration of the read cache and the write cache are automatically and dynamically adjusted based, at least in part, upon cache performance parameters. Cache performance parameters include one or more of preference scores, frequency of read and write operations, read and write performance of a storage device, localization information, and contiguous read and write performance. Dynamic cache configuration includes one or more of adjusting read cache size and/or write cache size and adjusting read cache block size and/or write cache block size.04-19-2012
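
As one possible realization of the adjustment described in 20120096225 above, this C sketch shifts capacity between the separate read and write caches toward whichever side dominates the observed traffic. The 10%-step heuristic and the 2:1 trigger ratio are invented; a real policy would also weigh the other parameters the application lists (localization, device performance, preference scores).

```c
/* Invented rebalancing heuristic in the spirit of 20120096225. */
#include <stdio.h>

int main(void)
{
    int read_size = 512, write_size = 512;  /* MB, illustrative */
    long reads = 9000, writes = 1000;       /* observed operation counts */

    int step = (read_size + write_size) / 10;
    if (reads > 2 * writes && write_size > step) {
        read_size  += step;                 /* read-heavy: grow read cache */
        write_size -= step;
    } else if (writes > 2 * reads && read_size > step) {
        write_size += step;                 /* write-heavy: grow write cache */
        read_size  -= step;
    }
    printf("read cache %d MB, write cache %d MB\n", read_size, write_size);
    return 0;
}
```
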
20130013862EFFICIENT HANDLING OF MISALIGNED LOADS AND STORES - A system and method for efficiently handling misaligned memory accesses within a processor. A processor comprises a load-store unit (LSU) with a banked data cache (d-cache) and a banked store queue. The processor generates a first address corresponding to a memory access instruction identifying a first cache line. The processor determines the memory access is misaligned which crosses over a cache line boundary. The processor generates a second address identifying a second cache line logically adjacent to the first cache line. If the instruction is a load instruction, the LSU simultaneously accesses the d-cache and store queue with the first and the second addresses. If there are two hits, the data from the two cache lines are simultaneously read out. If the access is a store instruction, the LSU separates associated write data into two subsets and simultaneously stores these subsets in separate cache lines in separate banks of the store queue.01-10-2013
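
The address arithmetic implied by 20130013862 above, deciding whether an access crosses a line boundary and deriving the adjacent line's address, is short enough to show directly. The 64-byte line size and the example address are assumptions.

```c
/* Boundary-crossing check implied by 20130013862; 64-byte lines assumed. */
#include <stdint.h>
#include <stdio.h>

#define LINE 64u

int main(void)
{
    uintptr_t addr = 0x103C;                   /* example access address */
    unsigned  size = 8;                        /* bytes requested */

    uintptr_t first = addr & ~(uintptr_t)(LINE - 1);
    uintptr_t last  = (addr + size - 1) & ~(uintptr_t)(LINE - 1);

    if (first != last)                         /* misaligned: two lines */
        printf("access both line 0x%lx and line 0x%lx\n",
               (unsigned long)first, (unsigned long)last);
    else
        printf("access fits in line 0x%lx\n", (unsigned long)first);
    return 0;
}
```
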
20110246720STORAGE SYSTEM WITH MULTIPLE CONTROLLERS - A first controller, and a second controller coupled to the first controller via a first path are provided. The first controller includes a first relay circuit which is a circuit that controls data transfer, and a first processor coupled to the first relay circuit via a first second path. The second controller includes a second relay circuit which is a circuit that controls data transfer, and is coupled to the first relay circuit via the first path, and a second processor coupled to the second relay circuit via a second second path. The first processor is coupled to the second relay circuit not via the first relay circuit but via a first third path, and accesses the second relay circuit via the first third path during an I/O process. The second processor is coupled to the first relay circuit not via the second relay circuit but via a second third path, and accesses the first relay circuit via the second third path during an I/O process.10-06-2011
20110271056MULTITHREADED CLUSTERED MICROARCHITECTURE WITH DYNAMIC BACK-END ASSIGNMENT - A multithreaded clustered microarchitecture with dynamic back-end assignment is presented. A processing system may include a plurality of instruction caches and front-end units each to process an individual thread from a corresponding one of the instruction caches, a plurality of back-end units, and an interconnect network to couple the front-end and back-end units. A method may include measuring a performance metric of a back-end unit, comparing the measurement to a first value, and reassigning, or not, the back-end unit according to the comparison. Computer systems according to embodiments of the invention may include: a random access memory; a system bus; and a processor having a plurality of instruction caches, a plurality of front-end units each to process an individual thread from a corresponding one of the instruction caches; a plurality of back-end units; and an interconnect network coupled to the plurality of front-end units and the plurality of back-end units.11-03-2011
20110231612PRE-FETCHING FOR A SIBLING CACHE - One embodiment provides a system that pre-fetches into a sibling cache. During operation, a first thread executes in a first processor core associated with a first cache, while a second thread associated with the first thread simultaneously executes in a second processor core associated with a second cache. During execution, the second thread encounters an instruction that triggers a request to a lower-level cache which is shared by the first cache and the second cache. The system responds to this request by directing a load fill which returns from the lower-level cache in response to the request to the first cache, thereby reducing cache misses for the first thread.09-22-2011
20130138884LOAD DISTRIBUTION SYSTEM - Exemplary embodiments of the invention provide load distribution among storage systems using solid state memory (e.g., flash memory) as expanded cache area. In accordance with an aspect of the invention, a system comprises a first storage system and a second storage system. The first storage system changes a mode of operation from a first mode to a second mode based on load of process in the first storage system. The load of process in the first storage system in the first mode is executed by the first storage system. The load of process in the first storage system in the second mode is executed by the first storage system and the second storage system.05-30-2013
20130185511Hybrid Write-Through/Write-Back Cache Policy Managers, and Related Systems and Methods - Embodiments disclosed in the detailed description include hybrid write-through/write-back cache policy managers, and related systems and methods. A cache write policy manager is configured to determine whether at least two caches among a plurality of parallel caches are active. If all of one or more other caches are not active, the cache write policy manager is configured to instruct an active cache among the parallel caches to apply a write-hack cache policy. In this manner, the cache write policy manager may conserve power and/or increase performance of a singly active processor core. If any of the one or more other caches are active, the cache write policy manager is configured to instruct an active cache among the parallel caches to apply a write-through cache policy. In this manner, the cache write policy manager facilitates data coherency among the parallel caches when multiple processor cores are active.07-18-2013
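
The policy rule in 20130185511 above reduces to a small decision function: write-back while a single cache is active (no coherence partner to keep updated), write-through as soon as two or more are active so the others observe stores. This C sketch is a simplified rendering; the active[] bookkeeping is invented.

```c
/* Direct rendering of the policy rule in 20130185511. */
#include <stdio.h>

enum policy { WRITE_BACK, WRITE_THROUGH };

enum policy choose_policy(const int active[], int ncaches)
{
    int n = 0;
    for (int i = 0; i < ncaches; i++)
        n += active[i];                    /* count active parallel caches */
    return (n >= 2) ? WRITE_THROUGH : WRITE_BACK;
}

int main(void)
{
    int active[4] = { 1, 0, 0, 0 };        /* one cache active */
    printf("%s\n", choose_policy(active, 4) == WRITE_BACK
                       ? "write-back" : "write-through");
    active[2] = 1;                         /* a second core wakes up */
    printf("%s\n", choose_policy(active, 4) == WRITE_BACK
                       ? "write-back" : "write-through");
    return 0;
}
```
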
20110314225COMPUTATIONAL RESOURCE ASSIGNMENT DEVICE, COMPUTATIONAL RESOURCE ASSIGNMENT METHOD AND COMPUTATIONAL RESOURCE ASSIGNMENT PROGRAM - In a multi-core processor system, cache memories are provided respectively for a plurality of processors. An assignment management unit manages assignment of tasks to the processors. A cache status calculation unit calculates a cache usage status such as a memory access count and a cache hit ratio, with respect to each task. A first processor handles a plurality of first tasks that belong to a first process. If computation amount of the first process exceeds a predetermined threshold value, the assignment management unit refers to the cache usage status to preferentially select, as a migration target task, one of the plurality of first tasks whose memory access count is smaller or whose cache hit ratio is higher. Then, the assignment management unit newly assigns the migration target task to a second processor handling another process different from the first processor.12-22-2011
20120290793EFFICIENT TAG STORAGE FOR LARGE DATA CACHES - An apparatus, method, and medium are disclosed for implementing data caching in a computer system. The apparatus comprises a first data cache, a second data cache, and cache logic. The cache logic is configured to cache memory data in the first data cache. Caching the memory data in the first data cache comprises storing the memory data in the first data cache and storing in the second data cache, but not in the first data cache, tag data corresponding to the memory data.11-15-2012
20130205087FORWARD PROGRESS MECHANISM FOR STORES IN THE PRESENCE OF LOAD CONTENTION IN A SYSTEM FAVORING LOADS - A multiprocessor data processing system includes a plurality of cache memories including a cache memory. In response to the cache memory detecting a storage-modifying operation specifying a same target address as that of a first read-type operation being processed by the cache memory, the cache memory provides a retry response to the storage-modifying operation. In response to completion of the read-type operation, the cache memory enters a referee mode. While in the referee mode, the cache memory temporarily dynamically increases priority of any storage-modifying operation targeting the target address in relation to any second read-type operation targeting the target address.08-08-2013
20120303899MANAGING TRACK DISCARD REQUESTS TO INCLUDE IN DISCARD TRACK MESSAGES - Provided are a computer program product, system, and method for managing track discard requests to include in discard track messages. A backup copy of a track in a cache is maintained in the cache backup device. A track discard request is generated to discard tracks in the cache backup device removed from the cache. Track discard requests are queued in a discard track queue. In response to detecting that a predetermined number of track discard requests are queued in the discard track queue while processing in a discard multi-track mode, one discard multiple tracks message is sent indicating the tracks indicated in the queued predetermined number of track discard requests to the cache backup device instructing the cache backup device to discard the tracks indicated in the discard multiple tracks message. In response to determining a predetermined number of periods of inactivity while processing in the discard multi-track mode, processing the track discard requests is switched to a discard single track mode.11-29-2012
