Caching

Subclass of:

711 - Electrical computers and digital processing systems: memory

711100000 - STORAGE ACCESSING AND CONTROL

711117000 - Hierarchical memories

Patent class list (only non-empty classes are listed)

Deeper subclasses:

Class / Patent application number | Description | Number of patent applications / Date published
711141000 Coherency 466
711119000 Multiple caches 361
711133000 Entry replacement strategy 288
711137000 Look-ahead 149
711125000 Instruction data cache 134
711130000 Shared cache 104
711128000 Associative 86
711129000 Partitioned cache 58
711126000 User data cache 24
711138000 Cache bypassing 17
711140000 Cache pipelining 14
711131000 Multiport cache 10
711132000 Stack cache 10
711127000 Interleaved 8
Entries
Document | Title | Date
20090172283Reducing minimum operating voltage through hybrid cache design - Methods and apparatus to reduce minimum operating voltage through a hybrid cache design are described. In one embodiment, a cache with different size bit cells may be used, e.g., to reduce minimum operating voltage of an integrated circuit device that includes the cache and possibly other logic (such as a processor). Other embodiments are also described.07-02-2009
20100082903NON-VOLATILE SEMICONDUCTOR MEMORY DRIVE, INFORMATION PROCESSING APPARATUS AND DATA ACCESS CONTROL METHOD OF THE NON-VOLATILE SEMICONDUCTOR MEMORY DRIVE - According to one embodiment, a non-volatile semiconductor memory drive stores an address table in a non-volatile semiconductor memory in predetermined units that are storage units of data in the non-volatile semiconductor memory, manages a second address table associating a logical address with a physical address with respect to each part of the address table stored in the non-volatile semiconductor memory, and temporarily stores each part of the address table which has been read in the predetermined units from the non-volatile semiconductor memory in the cache memory based on the second address table.04-01-2010
20110202725 SOFTWARE-ACCESSIBLE HARDWARE SUPPORT FOR DETERMINING SET MEMBERSHIP - A method and processor supporting architected instructions for tracking and determining set membership, such as by implementing Bloom filters. The apparatus includes storage arrays (e.g., registers) and an execution core configured to store an indication that a given value is a member of a set, including by executing an architected instruction having an operand specifying the given value, wherein executing comprises applying a hash function to the value to determine an index into one of the storage arrays and setting a bit of the storage array corresponding to the index. An architected query instruction is later executed to determine if a query value is not a member of the set, including by applying the hash function to the query value to determine an index into the storage array and determining whether a bit at the index of the storage array is set. 08-18-2011
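
This is a hardware Bloom filter exposed through the instruction set: an insert hashes a value to a bit index and sets the bit, and a query that finds any clear bit proves the value was never inserted. A minimal software sketch of that behavior in Python (the array size, the SHA-256-based index derivation, and all names are illustrative assumptions, not the patented hardware):

import hashlib

class BloomStorageArray:
    """Software model of the architected set-membership instructions."""
    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = [False] * num_bits

    def _indices(self, value):
        # Stand-in for the hardware hash function: derive several
        # storage-array indices from one operand value.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{value}".encode()).digest()
            yield int.from_bytes(digest[:4], "little") % self.num_bits

    def insert(self, value):
        # "Store an indication that a given value is a member of a set":
        # set the bit at each hashed index.
        for idx in self._indices(value):
            self.bits[idx] = True

    def maybe_contains(self, value):
        # Query: any clear bit proves the value is NOT a member; all set
        # means "possibly a member" (false positives are allowed).
        return all(self.bits[idx] for idx in self._indices(value))

f = BloomStorageArray()
f.insert(0xDEADBEEF)
print(f.maybe_contains(0xDEADBEEF), f.maybe_contains(0xBAD))  # True False
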
20110202724IOMMU Architected TLB Support - Embodiments allow a smaller, simpler hardware implementation of an input/output memory management unit (IOMMU) having improved translation behavior that is independent of page table structures and formats. Embodiments also provide device-independent structures and methods of implementation, allowing greater generality of software (fewer specific software versions, in turn reducing development costs).08-18-2011
20120246405DELAYED FREEING OF DATA STORAGE BLOCKS - A memory block that includes a physical storage page holding data of a data storage application in a page buffer can be cached in a page buffer upon the memory block being designated for a change in status from a used status to a shadow status. Upon occurrence of a trigger event, all pages stored in the page buffer can be processed in a first batch process that can include converting each of the pages in the page buffer from the used status to the shadow status and emptying the page buffer. Upon receiving a call to free the pages in the page buffer from the shadow status to a free status without the trigger event occurring, the pages in the page buffer can be converted from the used status directly to the free status in a second batch process. Related methods, systems, and articles of manufacture are also disclosed.09-27-2012
20100077144TRANSPARENT RESOURCE ADMINISTRATION USING A READ-ONLY DOMAIN CONTROLLER - A domain controller hierarchy in accordance with implementations of the present invention involves one or more local domain controllers, such as one or more read-only local domain controllers in communication with one or more writable hub domain controllers. The local domain controllers include a resource manager, such as a Security Account Manager (“SAM”), that manages resources and/or other accounts information received from the writable hub domain controller. When a local user attempts to change the resource at the local domain controller, however, the resource manager chains the request, along with any appropriate identifiers for the request, to the writable hub domain controller, where the request is processed. If appropriate, the hub domain controller sends a response that the resource has been updated as requested and also sends a copy of the updated resource to be cached at the local domain controller.03-25-2010
20100077143 Monitoring a data processing apparatus and summarising the monitoring data - A data processing apparatus is disclosed that comprises monitoring circuitry for monitoring accesses to a plurality of addressable locations within said data processing apparatus that occur between start and end events, said monitoring circuitry comprising: an address location store for storing data identifying said plurality of addressable locations to be monitored, and a monitoring data store; said monitoring circuitry being responsive to detection of said start event to detect accesses to said plurality of addressable locations and to store monitoring data relating to a summary of said detected accesses in said monitoring data store; said monitoring circuitry being responsive to detection of said end event to stop collecting said monitoring data; and said monitoring circuitry being responsive to detection of a flush event to output said stored monitoring data and to flush said monitoring data store. 03-25-2010
20100077142EFFICIENTLY CREATING A SNAPSHOT OF A LARGE CONSISTENCY GROUP - Preparation of a snapshot for data storage includes receiving a first command to prepare to create a snapshot of a set of data stored on at least one source storage volume in a data storage system. The data storage system is prepared to expedite creation of the snapshot in response to the first command. A second command to create the snapshot is received subsequent to the first command. The snapshot is created, in response to the second command, by copying the set of data onto at least one target storage volume at an event time.03-25-2010
20130086321 FILE BASED CACHE LOADING - A method for loading a cache is disclosed. Data in a computer file is stored on a storage device. The computer file is associated with a computer program. The first step is to determine which logical memory blocks on the storage device correspond to the computer files (…) 04-04-2013
20130086320MULTICAST WRITE COMMANDS - Techniques for implementing a multicast write command are described. A data block may be destined for multiple targets. The targets may be included in a list. A multicast write command may include the list. Write commands may be sent to each target in the list.04-04-2013
20130080704MANAGEMENT OF POINT-IN-TIME COPY RELATIONSHIP FOR EXTENT SPACE EFFICIENT VOLUMES - A storage controller receives a request to establish a point-in-time copy operation by placing a space efficient source volume in a point-in-time copy relationship with a space efficient target volume, wherein subsequent to being established the point-in-time copy operation is configurable to consistently copy the space efficient source volume to the space efficient target volume at a point in time. A determination is made as to whether any track of an extent is staging into a cache from the space efficient target volume or destaging from the cache to the space efficient target volume. In response to a determination that at least one track of the extent is staging into the cache from the space efficient target volume or destaging from the cache to the space efficient target volume, release of the extent from the space efficient target volume is avoided.03-28-2013
20130036268Implementing Vector Memory Operations - In one embodiment, the present invention includes an apparatus having a register file to store vector data, an address generator coupled to the register file to generate addresses for a vector memory operation, and a controller to generate an output slice from one or more slices each including multiple addresses, where the output slice includes addresses each corresponding to a separately addressable portion of a memory. Other embodiments are described and claimed.02-07-2013
20130036267PLACEMENT OF DATA IN SHARDS ON A STORAGE DEVICE - A method, system and computer program product for placing data in shards on a storage device may include determining placement of a data set in one of a plurality of shards on the storage device. Each one of the shards may include a different at least one performance feature. Each different at least one performance feature may correspond to a different at least one predetermined characteristic associated with a particular set of data. The data set is cached in the one of the plurality of shards on the storage device that includes the at least one performance feature corresponding to the at least one predetermined characteristic associated with the data set being cached.02-07-2013
20130042064SYSTEM FOR DYNAMICALLY ADAPTIVE CACHING - The present disclosure is directed to a system for dynamically adaptive caching. The system includes a storage device having a physical capacity for storing data received from a host. The system may also include a control module for receiving data from the host and compressing the data to a compressed data size. Alternatively, the data may also be compressed by the storage device. The control module may be configured for determining an amount of available space on the storage device and also determining a reclaimed space, the reclaimed space being according to a difference between the size of the data received from the host and the compressed data size. The system may also include an interface module for presenting a logical capacity to the host. The logical capacity has a variable size and may include at least a portion of the reclaimed space.02-14-2013
20100042784METHOD FOR COMMUNICATION BETWEEN TWO MEMORY-RELATED PROCESSES IN A COMPUTER SYSTEM, CORRESPONDING SOFTWARE PRODUCT, COMPUTER SYSTEM AND PRINTING SYSTEM - For optimized communication between two memory-related processes in a computer system, a synchronization function is coupled with an operating system function such that it withholds an output of an operating system message that signals a data end of a file in a memory region of the computer system. It can thus be avoided that a memory read process interrupts the reading of the file because a memory write process has not yet written all data of the file into the corresponding memory region.02-18-2010
20100042783DATA VAULTING IN EMERGENCY SHUTDOWN - A method for data storage includes accepting write commands belonging to a storage operation invoked by a host computer, and caching the write commands in a volatile memory that is powered by external electrical power. A current execution status of the storage operation is also cached in the volatile memory.02-18-2010
20100241806DATA BACKUP METHOD AND INFORMATION PROCESSING APPARATUS - An information processing apparatus includes, a first storage unit, a second storage unit in which data stored in the first storage unit is backed up, and a memory controller that controls data backup operation. The memory controller divides a transfer source storage area into portions, and provides two transfer destination areas, each of the two transfer destination areas being divided into portions, backs up data in a direction from a beginning address of each divided area of the transfer source storage area to an end address thereof in one of the transfer destination areas provided for each divided area of the transfer source storage area, and backs up data in a direction from the end address of each divided area of the transfer source storage area to the beginning address thereof in the other transfer destination storage area.09-23-2010
20090157963Contiguously packed data - Data for data elements (e.g., pixels) can be stored in an addressable storage unit that can store a number of bits that is not a whole number multiple of the number of bits of data per data element. Similarly, a number of the data elements can be transferred per unit of time over a bus, where the width of the bus is not a whole number multiple of the number of bits of data per data element. Data for none of the data elements is stored in more than one of the storage units or transferred in more than one unit of time. Also, data for multiple data elements is packaged contiguously in the storage unit or across the width of the bus.06-18-2009
20100332753 WAIT LOSS SYNCHRONIZATION - Synchronizing threads on loss of memory access monitoring. Using a processor-level instruction included as part of an instruction set architecture for a processor, a read or write monitor (detecting writes, or reads and writes respectively, by other agents) is set on a first set of one or more memory locations, and a read or write monitor is set on a second set of one or more different memory locations. A processor-level instruction is executed, which causes the processor to suspend executing instructions and optionally to enter a low power mode pending loss of a read or write monitor for the first or second set of one or more memory locations. A conflicting access is detected on the first or second set of one or more memory locations, or a timeout is detected. As a result, the method includes resuming execution of instructions. 12-30-2010
20100106910 CACHE MEMORY AND METHOD OF CONTROLLING THE SAME - It is an object of the present invention to reduce the output of a WAIT signal used to maintain data consistency, so that subsequent memory accesses are processed effectively in the case of a miss hit in a cache memory having a multi-stage pipeline structure. A cache memory according to the present invention performs update processing of a tag memory and a data memory and decides whether or not there is a subsequent memory access upon a decision by a hit decision unit that an input address is a miss hit. Upon deciding that there is a subsequent memory access, a controller outputs a WAIT signal to the processor to generate a pipeline stall in the processor's pipeline processing, while the controller does not output a WAIT signal upon deciding that there is no subsequent memory access. 04-29-2010
20130046933STORING DATA IN ANY OF A PLURALITY OF BUFFERS IN A MEMORY CONTROLLER - A memory controller containing one or more ports coupled to a buffer selection logic and a plurality of buffers. Each buffer is configured to store write data associated with a write request and each buffer is also coupled to the buffer selection logic. The buffer selection logic is configured to store write data associated with a write request from at least one of the ports in any of the buffers based on a priority of the buffers for each one of the ports.02-21-2013
20090119455 METHOD FOR CACHING CONTENT DATA PACKAGES IN CACHING NODES - A method for caching content data packages in caching nodes 05-07-2009
20090119454Method and Apparatus for Video Motion Process Optimization Using a Hierarchical Cache - There are provided method and apparatus for video motion process optimization using a hierarchical cache. A storage method for a video motion process includes configuring a hierarchical cache to have one or more levels, each of the levels of the hierarchical cache corresponding to a respective one of a plurality of levels of a calculation hierarchy associated with calculating sample values for the video motion process. The method also includes storing a particular value for a sample relating to the video motion process in a corresponding level of the hierarchical cache based on which of the plurality of levels of the calculation hierarchy the particular value corresponds to, when the particular value is non-existent in the hierarchical cache.05-07-2009
20090307429Storage system, storage subsystem and storage control method - Proposed is a storage system capable of preventing the compression of a cache memory caused by data remaining in a cache memory of a storage subsystem without being transferred to a storage area of an external storage, and maintaining favorable I/O processing performance of the storage subsystem. In this storage system where an external storage is connected to the storage subsystem and the storage subsystem provides a storage area of the external storage as its own storage area, provided is a volume for saving dirty data remaining in a cache memory of the storage subsystem without being transferred to the external volume. The storage system recognizes the compression of the cache memory, and eliminates the overload of the cache memory by saving dirty data in a save volume.12-10-2009
20090307428 INCREASING REMOTE DESKTOP PERFORMANCE WITH VIDEO CACHING - Described techniques improve remote desktop responsiveness by caching an image of a desktop when the host operating system running on the remote desktop server stores graphics output in video memory. Once cached, a Tile Desktop Manager may prioritize the scanning of regions or tiles of the cached image based on data received from the operating system. Once regions or tiles that have changed are detected, the changed tiles are copied from the cached desktop image and transmitted to the remote desktop client. The cached desktop image is refreshed based on a feedback loop. 12-10-2009
20120191910PROCESSING CIRCUIT AND METHOD FOR READING DATA - A processing circuit includes a processing unit and a data buffer. When the processing unit receives a load instruction and determines that the load instruction has a load-use condition, the processing unit stores specific data into the data buffer, where the specific data is loaded by executing the load instruction.07-26-2012
20090094416SYSTEM AND METHOD FOR CACHING POSTING LISTS - A method of caching posting lists to a search engine cache calculates the ratios between the frequencies of the query terms in a past query log and the sizes of the posting lists for each term, and uses these ratios to determine which posting lists should be cached by sorting the ratios in decreasing order and storing to the cache those posting lists corresponding to the highest ratio values. Further, a method of finding an optimal allocation between two parts of a search engine cache evaluates a past query stream based on a relationship between various properties of the stream and the total size of the cache, and uses this information to determine the respective sizes of both parts of the cache.04-09-2009
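
The caching rule in the first method is a greedy ratio heuristic: rank each term by its query-log frequency divided by its posting-list size, then cache lists in decreasing ratio order until the cache is full. A sketch under those assumptions (the frequency and size maps and the byte budget are hypothetical inputs):

def select_posting_lists(freq, size, cache_bytes):
    # freq[term]: occurrences of the term in the past query log
    # size[term]: posting-list size in bytes
    ranked = sorted(freq, key=lambda t: freq[t] / size[t], reverse=True)
    chosen, used = [], 0
    for term in ranked:                      # highest ratio first
        if used + size[term] <= cache_bytes:
            chosen.append(term)
            used += size[term]
    return chosen

print(select_posting_lists({"cache": 90, "the": 500, "zebra": 2},
                           {"cache": 10, "the": 400, "zebra": 1}, 50))
# ['cache', 'zebra'] -- "the" is frequent, but its huge list has a poor ratio
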
20130060999SYSTEM AND METHOD FOR INCREASING READ AND WRITE SPEEDS OF HYBRID STORAGE UNIT - The present invention is to provide a system for increasing read and write speeds of a hybrid storage unit, which includes a cache controller connected to the hybrid storage unit and a computer respectively, and stores forward and backward mapping tables each including a plurality of fields. The hybrid storage unit is composed of at least one regular storage unit (e.g., an HDD) having a plurality of regular sections corresponding to forward fields respectively, and at least one high-speed storage unit (e.g., an SSD) having a plurality of high-speed storage sections corresponding to backward fields respectively with higher read and write speeds than the regular storage unit. The cache controller can make the high-speed storage section corresponding to each backward field correspond to the regular section corresponding to the forward field, thus allowing the computer to rapidly read and write data from and into the hybrid storage unit.03-07-2013
20110066807Protection Against Cache Poisoning - Protecting computers against cache poisoning, including a cache-entity table configured to maintain a plurality of associations between a plurality of data caches and a plurality of entities, where each of the caches is associated with a different one of the entities, and a cache manager configured to receive data that is associated with any of the entities and store the received data in any of the caches that the cache-entity table indicates is associated with the entity, and receive a data request that is associated with any of the entities and retrieve the requested data from any of the caches that the cache-entity table indicates is associated with the requesting entity, where any of the cache-entity table and cache manager are implemented in either of computer hardware and computer software embodied in a computer-readable medium.03-17-2011
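
The mechanism is isolation by construction: the cache-entity table binds each entity to its own cache, and a lookup only ever consults the requesting entity's cache, so an entry poisoned through one entity can never be served to another. A minimal sketch (class and method names are assumptions):

class EntityCacheManager:
    def __init__(self):
        self._caches = {}   # cache-entity table: entity -> private cache

    def store(self, entity, key, value):
        # Received data is stored only in the cache associated with
        # the entity it arrived through.
        self._caches.setdefault(entity, {})[key] = value

    def lookup(self, entity, key):
        # A request reads only the requesting entity's own cache, so an
        # entry poisoned via resolver-B is invisible to resolver-A.
        return self._caches.get(entity, {}).get(key)

mgr = EntityCacheManager()
mgr.store("resolver-A", "example.com", "93.184.216.34")
mgr.store("resolver-B", "example.com", "6.6.6.6")   # poisoned reply
print(mgr.lookup("resolver-A", "example.com"))      # 93.184.216.34
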
20090300286 METHOD FOR COORDINATING UPDATES TO DATABASE AND IN-MEMORY CACHE - A computer method and system of caching. In a multi-threaded application, different threads execute respective transactions accessing a data store (e.g. a database) from a single server. The method and system represent the status of data store transactions using respective certain parameters (e.g., Futures). 12-03-2009
20090276571Enhanced Direct Memory Access - A method for facilitating direct memory access in a computing system in response to a request to transfer data is provided. The method comprises selecting a thread for transferring the data, wherein the thread executes on a processing core within the computing system; providing the thread with the request, wherein the request comprises information for carrying out a data transfer; and transferring the data according to the request. The method may further comprise: coordinating the request with a memory management unit, such that virtual addresses may be used to transfer data; invalidating a cache line associated with the source address or flushing a cache line associated with the destination address, if requested. Multiple threads can be selected to transfer data based on their proximity to the destination address.11-05-2009
20120226864TIERED DATA MANAGEMENT METHOD AND SYSTEM FOR HIGH PERFORMANCE DATA MONITORING - A method for managing memory in a system for an application, comprising: assigning a first block (i.e., a big block) of the memory to the application when the application is initiated, the first block having a first size, the first block being assigned to the application until the application is terminated; dividing the first block into second blocks (i.e., intermediate blocks), each second block having a same second size, a second block of the second blocks for containing data for one or more components of a single data structure to be accessed by one thread of the application at a time; and, dividing the second block into third blocks (i.e., small blocks), each third block having a same third size, a third block of the third blocks for containing data for a single component of the single data structure.09-06-2012
20120226863 INFORMATION PROCESSING DEVICE, MEMORY ACCESS CONTROL DEVICE, AND ADDRESS GENERATION METHOD THEREOF - An information processing device according to the present invention includes an operation unit that outputs an access request, a storage unit including a plurality of connection ports and a plurality of memories capable of a simultaneous parallel process that has an access unit of a plurality of word lengths for the connection ports, and a memory access control unit that distributes a plurality of access addresses corresponding to the access request received for each processing cycle from the operation unit, and generates an address in a port including a discontinuous word by one access unit for each of the connection ports. 09-06-2012
20120226862EVENT TRANSPORT SYSTEM - A method for communicating events from an event source to an event consumer is disclosed herein. In one embodiment, such a method includes monitoring an event generation rate associated with an event source. The method further determines if the event generation rate exceeds a threshold rate. Upon receiving an event from the event source, the method generates a condensed version of the event if the event generation rate exceeds the threshold rate. The method then communicates the condensed version to an event consumer. A corresponding system and computer program product are also disclosed.09-06-2012
20120226861STORAGE CONTROLLER AND METHOD OF CONTROLLING STORAGE CONTROLLER - Provided is a storage controller and method of controlling same which, if part of a storage area of a local memory is used as cache memory, enable an access conflict for access to a parallel bus connected to the local memory to be avoided.09-06-2012
20120226860 COMPUTER SYSTEM AND DATA MIGRATION METHOD - A path is formed between a host computer and storage apparatuses without depending on the configuration of the host computer and a network, and a plurality of volumes having a copy function are migrated between storage apparatuses while keeping the latest data. 09-06-2012
20130067168CACHING FOR A FILE SYSTEM - Aspects of the subject matter described herein relate to caching data for a file system. In aspects, in response to requests from applications and storage and cache conditions, cache components may adjust throughput of writes from cache to the storage, adjust priority of I/O requests in a disk queue, adjust cache available for dirty data, and/or throttle writes from the applications.03-14-2013
20120117325METHOD AND DEVICE FOR PROCESSING DATA CACHING - The present invention discloses a method and device for processing data caching, wherein the method includes: storing cached data into a memory; after reading out the cached data from a memory space address for storing the cached data in the memory, judging whether the cached data that have been read out are the same as the cached data to be written before the storing, if so, then deciding that the memory space for storing the cached data in the memory is normal; if not, then deciding that the memory space for storing the cached data in the memory is abnormal; and when the cached data is stored during the subsequent data caching process, storing the cached data only into the memory spaces in normal state in the memory.05-10-2012
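
The method is a write/read-back/compare health check: after writing cached data, read it from the same memory space and compare; a mismatch marks that space abnormal, and subsequent caching uses only spaces still marked normal. A toy model, with a simulated faulty slot standing in for bad memory (all names are assumptions):

class FaultySlots:
    """Simulated memory in which some slots silently corrupt writes."""
    def __init__(self, n, bad=()):
        self._data = [None] * n
        self._bad = set(bad)

    def write(self, i, value):
        # A bad slot flips the low bit, standing in for a hardware fault.
        self._data[i] = value ^ 1 if i in self._bad else value

    def read(self, i):
        return self._data[i]

class CacheHealthChecker:
    def __init__(self, mem, n):
        self.mem = mem
        self.normal = [True] * n      # per-space state: normal/abnormal

    def store(self, i, value):
        if not self.normal[i]:
            return False              # abnormal spaces are never reused
        self.mem.write(i, value)
        if self.mem.read(i) != value: # read back and compare
            self.normal[i] = False    # mismatch: judge the space abnormal
            return False
        return True                   # match: judge the space normal

mem = FaultySlots(4, bad=[2])
chk = CacheHealthChecker(mem, 4)
print(chk.store(0, 0xAB), chk.store(2, 0xAB))   # True False
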
20120117324VIRTUAL CACHE WINDOW HEADERS FOR LONG TERM ACCESS HISTORY - A method of virtual cache window headers for long term access history is disclosed. The method may include steps (A) to (C). Step (A) may receive a request at a circuit from a host to access an address in a memory. The circuit generally controls the memory and a cache. Step (B) may update the access history in a first of the headers in response to the request. The headers may divide an address space of the memory into a plurality of windows. Each window generally includes a plurality of subwindows. Each subwindow may be sized to match one of a plurality of cache lines in the cache. A first of the subwindows in a first of the windows may correspond to the address. Step (C) may copy data from the memory to the cache in response to the access history.05-10-2012
20110022800SYSTEM AND A METHOD FOR SELECTING A CACHE WAY - A method for selecting a cache way, the method includes: selecting an initially selected cache way out of multiple cache ways of a cache module for receiving a data unit; the method being characterized by including: searching, if the initially selected cache way is locked, for an unlocked cache way, out of at least one group of cache ways that are located at predefined offsets from the first cache way.01-27-2011
20090235026DATA TRANSFER CONTROL DEVICE AND DATA TRANSFER CONTROL METHOD - A disclosed data transfer control device includes a main memory unit; a cache memory unit; a command generation unit configured to generate a command to read out data from the main memory unit in accordance with a first address input to the command generation unit; and a storage unit configured to store an information item indicating whether the first address and data corresponding to the first address are stored in the cache memory unit. In the data transfer control device, when the information item stored in the storage unit indicates that there are no data corresponding to the first address in the cache memory unit, the command generation unit generates the command based on the first address before output of data corresponding to a second address that is input immediately before the first address is input.09-17-2009
20090049244Data Displacement Bypass System - A data displacement bypass system is disclosed, wherein the data displacement bypass system comprises a CPU (Central Processing Unit), a first memory, a plurality of address lines, a plurality of data lines, an OE (Output Enable) line, a CS (Chip Select) line and a data displacement unit. The CPU could output a plurality of address characters, an OE signal and a CS signal, and receive a plurality of data characters. The first memory and the data displacement unit could output the plurality of data characters according to the plurality of address characters, the OE signal and the CS signal received by the first memory and the data displacement unit, wherein the data displacement unit could govern the plurality of data characters inputting to the CPU by outputting high or low voltage.02-19-2009
20130166844STORAGE IN TIERED ENVIRONMENT FOR COLDER DATA SEGMENTS - Exemplary embodiments for storing data by a processor device in a computing environment are provided. In one embodiment, by way of example only, from a plurality of available data segments, a data segment having a storage activity lower than a predetermined threshold is identified as a colder data segment. A chunk of storage is located to which the colder data segment is assigned. The colder data segment is compressed. The colder data segment is migrated to the chunk of storage. A status of the chunk of storage is maintained in a compression data segment bitmap.06-27-2013
20080294846DYNAMIC OPTIMIZATION OF CACHE MEMORY - The present invention includes dynamically analyzing look-up requests from a cache look-up algorithm to look-up data block tags corresponding to blocks of data previously inserted into a cache memory, to determine a cache related parameter. After analysis of a specific look-up request, a block of data corresponding to the tag looked up by the look-up request may be accessed from the cache memory or from a mass storage device.11-27-2008
20090204761PSEUDO-LRU CACHE LINE REPLACEMENT FOR A HIGH-SPEED CACHE - Embodiments of the present invention provide a system that replaces an entry in a least-recently-used way in a skewed-associative cache. The system starts by receiving a cache line address. The system then generates two or more indices using the cache line address. Next, the system generates two or more intermediate indices using the two or more indices. The system then uses at least one of the two or more indices or the two or more intermediate indices to perform a lookup in one or more lookup tables, wherein the lookup returns a value which identifies a least-recently-used way. Next, the system replaces the entry in the least-recently-used way.08-13-2009
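
For context, the classic tree pseudo-LRU that such schemes approximate keeps ways-1 bits per set, each internal bit pointing toward the less recently used half of its subtree; the patent's contribution is driving this from lookup tables over skewed-associative indices, which the sketch below abstracts away. A minimal tree-PLRU for one set (names are assumptions):

class TreePLRU:
    """Tree pseudo-LRU over one cache set; ways must be a power of two."""
    def __init__(self, ways):
        self.bits = [0] * (ways - 1)    # internal tree nodes, root at 0

    def victim(self):
        # Follow the bits down from the root; each bit points toward
        # the pseudo-least-recently-used half of its subtree.
        node = 0
        while node < len(self.bits):
            node = 2 * node + 1 + self.bits[node]
        return node - len(self.bits)    # leaf position = way number

    def touch(self, way):
        # On an access, flip every bit on the root path to point away
        # from this way, making it most recently used.
        node = way + len(self.bits)
        while node > 0:
            parent = (node - 1) // 2
            self.bits[parent] = 0 if node == 2 * parent + 2 else 1
            node = parent

plru = TreePLRU(4)
w = plru.victim()          # 0 on a cold set
plru.touch(w)
print(w, plru.victim())    # 0 2 -- next victim comes from the other half
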
20110283065Information Processing Apparatus and Driver - According to one embodiment, an information processing apparatus includes a memory includes a buffer area, a first storage, a second storage and a driver. The buffer area is reserved in order to transfer data between the driver and a host system that requests for data writing and data reading. The driver is configured to write data into the second storage and read data from the second storage using the first external storage as a cache for the second storage. The driver is further configured to reserve a cache area in the memory, between a buffer area and the first external storage, and between a buffer area and the second storage.11-17-2011
20130091328STORAGE SYSTEM - A storage system in an embodiment of this invention comprises a non-volatile storage area for storing write data from a host, a cache area capable of temporarily storing the write data before storing the write data in the non-volatile storage area, and a controller that determines whether to store the write data in the cache area or to store the write data in the non-volatile storage area without storing the write data in the cache area, and stores the write data in the determined area.04-11-2013
20090157964EFFICIENT DATA STORAGE IN MULTI-PLANE MEMORY DEVICES - A method for data storage includes initially storing a sequence of data pages in a memory that includes multiple memory arrays, such that successive data pages in the sequence are stored in alternation in a first number of the memory arrays. The initially-stored data pages are rearranged in the memory so as to store the successive data pages in the sequence in a second number of the memory arrays, which is less than the first number. The rearranged data pages are read from the second number of the memory arrays.06-18-2009
20110302371 METHOD OF OPERATING A COMPUTING DEVICE TO PERFORM MEMOIZATION - This invention relates to a method (…) 12-08-2011
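
The abstract is truncated, but the title names the technique: memoization, caching a function's results keyed by its arguments so repeated calls are answered from the cache. A generic illustration of memoization, not the patented method's specifics:

import functools

def memoize(func):
    cache = {}                            # results keyed by argument tuple
    @functools.wraps(func)
    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)     # compute once on first call...
        return cache[args]                # ...answer repeats from the cache
    return wrapper

@memoize
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(80))   # fast: only a linear number of distinct calls is computed
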
20110289275Fast Hit Override - In one embodiment, a cache comprises a tag memory and a comparator. The tag memory is configured to store tags of cache blocks stored in the cache, and is configured to output at least one tag responsive to an index corresponding to an input address. The comparator is coupled to receive the tag and a tag portion of the input address, and is configured to compare the tag to the tag portion to generate a hit/miss indication. The comparator comprises dynamic circuitry, and is coupled to receive a control signal which, when asserted, is defined to force a first result on the hit/miss indication independent of whether or not the tag portion matches the tag. The comparator also comprises circuitry coupled to receive the control signal and configured to inhibit a state change on an output of the dynamic circuitry during an evaluate phase of the dynamic circuitry to produce the first result responsive to an assertion of the control signal.11-24-2011
20090157962CACHE INJECTION USING CLUSTERING - A method and system for cache injection using clustering are provided. The method includes receiving an input/output (I/O) transaction at an input/output device that includes a system chipset or input/output (I/O) hub. The I/O transaction includes an address. The method also includes looking up the address in a cache block indirection table. The cache block indirection table includes fields and entries for addresses and cluster identifiers (IDs). In response to a match resulting from the lookup, the method includes multicasting an injection operation to processor units identified by the cluster ID.06-18-2009
20110296111INTERFACE FOR ACCESSING AND MANIPULATING DATA - A system and method for an interface for accessing and manipulating data to allow access to data on a storage module on a network based system. The data is presented as a virtual disk for the local system through a hardware interface that emulates a disk interface. The system and method incorporates features to improve the retrieval and storage performance of frequently access data such as partition information, operating system files, or file system related information through the use of local caching and difference calculations. This system and method may be used to replace some, or all, of the fixed storage in a device. The system and method may provide both online and offline access to the data.12-01-2011
20110296110Critical Word Forwarding with Adaptive Prediction - In an embodiment, a system includes a memory controller, processors and corresponding caches. The system may include sources of uncertainty that prevent the precise scheduling of data forwarding for a load operation that misses in the processor caches. The memory controller may provide an early response that indicates that data should be provided in a subsequent clock cycle. An interface unit between the memory controller and the caches/processors may predict a delay from a currently-received early response to the corresponding data, and may speculatively prepare to forward the data assuming that it will be available as predicted. The interface unit may monitor the delays between the early response and the forwarding of the data, or at least the portion of the delay that may vary. Based on the measured delays, the interface unit may modify the subsequently predicted delays.12-01-2011
20110296109CACHE CONTROL FOR ADAPTIVE STREAM PLAYER - An adaptive stream player that has control over whether a retrieved stream is cached in a local stream cache. For at least some of the stream portions requested by the player, before going out over the network, a cache control component first determines whether or not an acceptable version of the stream portion is present in a stream cache. If there is an acceptable version in the stream cache, that version is provided rather than having to request the stream portion of the network. For stream portions received over the network, the cache control component decides whether or not to cache that stream portion. Thus, the cache control component allows the adaptive stream player to work in offline scenarios and also allows the adaptive stream player to have rewind, pause, and other controls that use cached content.12-01-2011
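
The control flow described: before going to the network for a stream portion, check the local stream cache and serve an acceptable cached version if present; on a network fetch, decide whether to cache the received portion. A sketch in which "acceptable" means "at least the requested bitrate" (that acceptance rule, and all names, are assumptions):

class StreamCacheControl:
    def __init__(self, fetch_from_network):
        self._fetch = fetch_from_network
        self._cache = {}    # (stream_id, chunk_index) -> (bitrate, data)

    def get_chunk(self, stream_id, index, min_bitrate):
        key = (stream_id, index)
        cached = self._cache.get(key)
        if cached and cached[0] >= min_bitrate:
            return cached[1]     # offline/rewind path: no network request
        bitrate, data = self._fetch(stream_id, index, min_bitrate)
        if cached is None or bitrate > cached[0]:
            self._cache[key] = (bitrate, data)   # keep the best version
        return data

ctl = StreamCacheControl(lambda s, i, b: (b, f"{s}:{i}@{b}"))
ctl.get_chunk("movie", 0, 800)   # miss: fetched over the network, cached
ctl.get_chunk("movie", 0, 400)   # hit: the 800 kbps copy is acceptable
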
20110296108Methods to Estimate Existing Cache Contents for Better Query Optimization - A method for estimating contents of a cache determines table descriptors referenced by a query, and scans each page header stored in the cache for the table descriptor. If the table descriptor matches any of the referenced table descriptors, a page count value corresponding to the matching referenced table descriptor is increased. Alternatively, a housekeeper thread periodically performs the scan and stores the page count values in a central lookup table accessible by threads during a query run. Alternatively, each thread independently maintains a hash table with page count entries corresponding to table descriptors for each table in the database system. A thread increases or decreases the page count value when copying or removing pages from the cache. A page count value for each referenced table descriptor is determined from a sum of the values in the hash tables. A master thread performs bookkeeping and prevents hash table overflows.12-01-2011
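
The first variant is a straight scan: walk every page header stored in the cache and increment a counter whenever the header's table descriptor matches one referenced by the query. A sketch with dictionaries standing in for the buffer-cache structures (assumed shapes):

def estimate_cached_pages(cache_page_headers, referenced_descriptors):
    # Scan each page header stored in the cache; count pages whose
    # table descriptor matches one referenced by the query.
    counts = {d: 0 for d in referenced_descriptors}
    for header in cache_page_headers:
        d = header["table_descriptor"]
        if d in counts:
            counts[d] += 1
    return counts    # per-table cached-page estimates for the optimizer

pages = [{"table_descriptor": t} for t in (7, 7, 9, 7, 3)]
print(estimate_cached_pages(pages, (7, 3)))   # {7: 3, 3: 1}
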
20110296107Latency-Tolerant 3D On-Chip Memory Organization - A mechanism is provided within a 3D stacked memory organization to spread or stripe cache lines across multiple layers. In an example organization, a 128B cache line takes eight cycles on a 16B-wide bus. Each layer may provide 32B. The first layer uses the first two of the eight transfer cycles to send the first 32B. The next layer sends the next 32B using the next two cycles of the eight transfer cycles, and so forth. The mechanism provides a uniform memory access.12-01-2011
20120191911SYSTEM AND METHOD FOR INCREASING CACHE SIZE - A system and method for increasing cache size is provided. Generally, the system contains a memory and a processor. The processor is configured by the memory to perform the steps of: categorizing storage blocks within a storage device as within a first category of storage blocks if the storage blocks that are available to the system for storing data when needed; categorizing storage blocks within the storage device as within a second category of storage blocks if the storage blocks contain application data therein; and categorizing storage blocks within the storage device as within a third category of storage blocks if the storage blocks are storing cached data and are available for storing application data if no first category of storage blocks are available to the system.07-26-2012
20090157961TWO-SIDED, DYNAMIC CACHE INJECTION CONTROL - A method, system, and computer program product for two-sided, dynamic cache injection control are provided. An I/O adapter generates an I/O transaction in response to receiving a request for the transaction. The transaction includes an ID field and a requested address. The adapter looks up the address in a cache translation table stored thereon, which includes mappings between addresses and corresponding address space identifiers (ASIDs). The adapter enters an ASID in the ID field when the requested address is present in the cache translation table. IDs corresponding to device identifiers, address ranges and pattern strings may also be entered. The adapter sends the transaction to one of an I/O hub and system chipset, which in turn, looks up the ASID in a table stored thereon and injects the requested address and corresponding data in a processor complex when the ASID is present in the table, indicating that the address space corresponding to the ASID is actively running on a processor in the complex. The ASIDs are dynamically determined and set in the adapter during execution of an application in the processor complex.06-18-2009
20090164726Programmable Address Processor for Graphics Applications - Methods and systems for processing memory lookup requests are provided. In an embodiment, an address processing unit includes an instructions module configured to store instructions to be executed to complete a primary memory lookup request and a logic unit coupled to the instructions module. The primary memory lookup request is associated with a desired address. Based on an instruction stored in the instructions module, the logic unit is configured to generate a secondary memory lookup request that requests the desired address.06-25-2009
20110191540PROCESSING READ AND WRITE REQUESTS IN A STORAGE CONTROLLER - Provided are a method, system, and computer program product for processing read and write requests in a storage controller. A host adaptor in the storage controller receives a write request from a host system for a storage address in a storage device. The host adaptor sends write information indicating the storage address updated by the write request to a device adaptor in the storage controller. The host adaptor writes the write data to a cache in the storage controller. The device adaptor indicates the storage address indicated in the write information to a modified storage address list stored in the device adaptor, wherein the modified storage address list indicates modified data in the cache for storage addresses in the storage device.08-04-2011
20110191539Coprocessor session switching - A data processing apparatus is provided, configured to carry out data processing operations on behalf of a main data processing apparatus, comprising a coprocessor core configured to perform the data processing operations and a reset controller configured to cause the coprocessor core to reset. The coprocessor core performs its data processing in dependence on current configuration data stored therein, the current configuration data being associated with a current processing session. The reset controller is configured to receive pending configuration data from the main data processing apparatus, the pending configuration data associated with a pending processing session, and to store the pending configuration data in a configuration data queue. The reset controller is configured, when the coprocessor core resets, to transfer the pending configuration data from the configuration data queue to be stored in the coprocessor core, replacing the current configuration data.08-04-2011
20100274969ACTIVE-ACTIVE SUPPORT OF VIRTUAL STORAGE MANAGEMENT IN A STORAGE AREA NETWORK ("SAN") - Methods and apparatuses are provided for active-active support of virtual storage management in a storage area network (“SAN”). When a storage manager (that manages virtual storage volumes) of the SAN receives data to be written to a virtual storage volume from a computer server, the storage manager determines whether the writing request may result in updating a mapping of the virtual storage volume to a storage system. When the writing request does not involve updating the mapping, which happens most of the time, the storage manager simply writes the data to the storage system based on the existing mapping. Otherwise, the storage manager sends an updating request to another storage manager for updating a mapping of the virtual storage volume to a storage volume. Subsequently, the storage manager writes the data to the corresponding storage system based on the mapping that has been updated by the another storage manager.10-28-2010
20100115203MUTABLE OBJECT CACHING - In one embodiment, a method for caching mutable objects comprises adding to a cache a first cache entry that includes a first object and a first key. Assigning a unique identification to the first object. Adding an entry to an instance map for the first object. The entry includes the unique identification and the first object. Creating a data structure that represents the first object. The data structure includes information relevant to the current state of the first object. A second cache entry is then added to the cache. The second cache entry includes the data structure and the unique identification. Updating the first cache entry to replace the first object with the unique identification.05-06-2010
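
Read literally, the scheme creates two cache entries per object: after adding (key, object), the object gets a unique ID, an instance-map entry, and a state-snapshot entry keyed by the ID, and the first entry is then updated to hold the ID instead of the object. A sketch of that sequence (the names and the vars()-based snapshot are assumptions):

import itertools

class MutableObjectCache:
    _ids = itertools.count(1)

    def __init__(self):
        self.cache = {}          # key -> unique id, and unique id -> state
        self.instance_map = {}   # unique id -> live object

    def put(self, key, obj):
        self.cache[key] = obj              # first cache entry: key, object
        uid = next(self._ids)              # assign a unique identification
        self.instance_map[uid] = obj       # instance-map entry
        self.cache[uid] = dict(vars(obj))  # second entry: current state
        self.cache[key] = uid              # update first entry: object -> id
        return uid

    def get(self, key):
        return self.instance_map[self.cache[key]]   # id -> live object

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

c = MutableObjectCache()
c.put("origin", Point(0, 0))
c.get("origin").x = 5    # mutate the live object through the instance map
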
20100115202METHODS AND SYSTEMS FOR MICROCODE PATCHING - Methods and systems for performing microcode patching are presented. In one embodiment, a data processing system comprises a cache memory and a processor. The cache memory comprises a plurality of cache sections. The processor sequesters one or more cache sections of the cache memory and stores processor microcode therein. In one embodiment, the processor executes the microcode in the one or more cache sections.05-06-2010
20090063771STRUCTURE FOR REDUCING COHERENCE ENFORCEMENT BY SELECTIVE DIRECTORY UPDATE ON REPLACEMENT OF UNMODIFIED CACHE BLOCKS IN A DIRECTORY-BASED COHERENT MULTIPROCESSOR - A design structure embodied in a machine readable storage medium for designing, manufacturing, and/or testing a design to reduce the number of memory directory updates during block replacement in a system having a directory-based cache is provided. The design structure may be implemented to utilize a read/write bit to determine the accessibility of a cache line and limit memory directory updates during block replacement to regions that are determined to be readable and writable by multiple processors.03-05-2009
20090150614NON-VOLATILE CACHE IN DISK DRIVE EMULATION - A method and apparatus for deferring media writes for emulation drives are provided. By deferring media writes using non-volatile storage, the performance penalty associated with RMW operations may be minimized. Deferring writes may allow the RMW operations to be done while the disk drive is idle. Further, deferring writes may also allow data blocks to be accumulated over time, allowing a full (4K) disk drive block size to be written with a simple write operation, thus making a RMW unnecessary.06-11-2009
20110197029HARDWARE ACCELERATION OF A WRITE-BUFFERING SOFTWARE TRANSACTIONAL MEMORY - A method and apparatus for accelerating a software transactional memory (STM) system is described herein. Annotation field are associated with lines of a transactional memory. An annotation field associated with a line of the transaction memory is initialized to a first value upon starting a transaction. In response to encountering a read operation in the transaction, then annotation field is checked. If the annotation field includes a first value, the read is serviced from the line of the transaction memory without having to search an additional write space. A second and third value in the annotation field potentially indicates whether a read operation missed the transactional memory or a tentative value is stored in a write space. Additionally, an additional bit in the annotation field, may be utilized to indicate whether previous read operations have been logged, allowing for subsequent redundant read logging to be reduced.08-11-2011
20120036325MEMORY COMPRESSION POLICIES - Techniques are disclosed for managing memory within a virtualized system that includes a memory compression cache. Generally, the virtualized system may include a hypervisor configured to use a compression cache to temporarily store memory pages that have been compressed to conserve memory space. A “first-in touch-out” (FITO) list may be used to manage the size of the compression cache by monitoring the compressed memory pages in the compression cache. Each element in the FITO list corresponds to a compressed page in the compression cache. Each element in the FITO list records a time at which the corresponding compressed page was stored in the compression cache (i.e. an age). A size of the compression cache may be adjusted based on the ages of the pages in the compression cache.02-09-2012
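
A FITO ("first-in touch-out") list is a FIFO of (insertion time, page) records: pages leave when touched back out of the compression cache, so the age of the oldest record shows whether compressed pages linger unused (the cache can shrink) or cycle quickly (it could grow). A sketch; the threshold policy in resize_hint is an assumption:

import collections, time

class CompressionCache:
    def __init__(self):
        self.pages = {}                   # page id -> compressed bytes
        self.fito = collections.deque()   # (insert time, page id), oldest left

    def insert(self, page_id, compressed):
        self.pages[page_id] = compressed
        self.fito.append((time.monotonic(), page_id))

    def touch_out(self, page_id):
        # Page decompressed back into guest memory: drop it from the
        # cache and from the FITO list.
        self.pages.pop(page_id, None)
        self.fito = collections.deque(e for e in self.fito if e[1] != page_id)

    def resize_hint(self, max_age_s=60.0):
        # If the oldest compressed page has sat untouched past the
        # threshold, the cache is oversized; a young oldest entry means
        # pages cycle quickly and the cache could grow.
        if not self.fito:
            return "ok"
        oldest_age = time.monotonic() - self.fito[0][0]
        return "shrink" if oldest_age > max_age_s else "grow-or-keep"
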
20120036324METHOD AND SYSTEM FOR REVISITING PRIOR NAVIGATED PAGES AND PRIOR EDITS - A system and method for navigating or editing may include storing multiple forward or redo stacks and a single back or undo stack. The forward or undo stacks may include separate stacks for each page from which navigation occurs to a page of lower hierarchical level or for each operation for which another operation is subsequently performed. Positions of references in the forward or redo stacks may be modified in response to navigations or edits to place a last navigated page or operation at the top of the stack. The timing of such movement of references may be optimized.02-09-2012
20100293330DISPLAYING TRANSITION IMAGES DURING A SLIDE TRANSITION - One or more transition images are displayed during a transition period between a display of slides within a presentation. The displayed transition images include images of different slides that are contained within the presentation. The transition images provide the audience with a glimpse of slides that are displayed within the presentation. For example, the transition images may include images from previous and future slides that are contained within the presentation. The transition images may also be cached in order to more efficiently display the transition images during the transition period.11-18-2010
20090287883 Least recently used ADC - Over the last 75 years, analog-to-digital converters have revolutionized the signal processing industry. As transistor sizes shrank, higher bit resolutions were achieved, but FLASH and other full-blown, faster ADC implementations have always consumed relatively high power. Conventionally, as the analog signal enters the ADC front end, conversion is initiated from the beginning for every sample; ADC conversion, especially in FLASH ADCs, is a highly mathematical number-system problem. With faster, low-power, partitioned ADCs, better solutions can be built for many expanding signal processing fields: rather than brute-force, start-from-the-beginning conversion of every analog sample, cache principles can be applied when the signal does not change abruptly, as is done in this invention. The approach is to use a smaller ADC in the upfront signal path and store full-blown conversions as cached values, then serve subsequent samples from that cached value set. There must be a balance between the number of cache entries, the power consumed, and the backend full-blown ADC; when cache hits are frequent, the backend ADC is rarely engaged in conversion, which is desirable. 11-19-2009
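
Stripped of the hardware, the proposal is a least-recently-used conversion cache in front of a rarely engaged full converter: quantize the incoming sample coarsely, and on a hit skip the full conversion. A software model only (the bin width, cache depth, and stand-in 12-bit converter are assumptions):

from collections import OrderedDict

class CachedADC:
    """Least-recently-used cache in front of a slow, full-blown ADC."""
    def __init__(self, lsb=0.001, entries=16):
        self.lsb = lsb                    # coarse front-end resolution
        self.entries = entries
        self.cache = OrderedDict()        # coarse code -> full 12-bit code
        self.full_conversions = 0

    def _full_adc(self, v):
        self.full_conversions += 1        # the expensive backend path
        return max(0, min(4095, round(v / 3.3 * 4095)))

    def sample(self, v):
        coarse = round(v / self.lsb)      # fast front-end quantization
        if coarse in self.cache:          # cache hit: skip full conversion
            self.cache.move_to_end(coarse)
            return self.cache[coarse]
        code = self._full_adc(v)          # miss: engage the backend ADC
        self.cache[coarse] = code
        if len(self.cache) > self.entries:
            self.cache.popitem(last=False)   # evict least recently used
        return code

adc = CachedADC()
for i in range(1000):
    adc.sample(1.65 + 0.001 * (i % 3))   # slowly varying input
print(adc.full_conversions)              # 3 -- everything else was a hit
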
20100100680STORAGE APPARATUS AND CACHE CONTROL METHOD - The object of the present invention is to provide a storage apparatus capable of optimizing the cache-resident area in a case where cache residence control in units of LUs is employed to a storage apparatus that virtualizes the capacity by acquiring only a cache area of a size that is the same as the physical capacity assigned to the LU. In the storage apparatus, in a case where an LU that is a logical space resident in the cache memory is configured by a set of pages acquired by dividing a pool volume as a physical space created by using a plurality of storage devices in a predetermined size, when the LU to be resident in the cache memory is created, a capacity corresponding to the size of the LU is not initially acquired in the cache memory, a cache capacity that is the same as the physical capacity allocated to a new page is acquired in the cache memory each time when the page is newly allocated, and the new page is resident in the cache memory.04-22-2010
20090049245Memory device and method with on-board cache system for facilitating interface with multiple processors, and computer system using same - A memory device includes an on-board cache system that facilitates the ability of the memory device to interface with a plurality of processors operating in a parallel processing manner. The cache system operates in a manner that can be transparent to a memory controller to which the memory device is connected. Alternatively, the memory controller can control the operation of the cache system.02-19-2009
20110271055SYSTEM AND METHOD FOR LOW-LATENCY DATA COMPRESSION/DECOMPRESSION - A compression technique includes storing respective fixed-size symbols for each of a plurality of words in a data block, e.g., a cache line, into a symbol portion of a compressed data block, e.g., a compressed cache line, where each of the symbols provides information about a corresponding one of the words in the data block. Up to a first plurality of data segments are stored in a data portion of the compressed data block, each data segment corresponds to a unique one of the symbols in the compressed data block and a unique one of the words in the cache line. Up to a second plurality of dictionary entries are stored in the data portion of the compressed cache line. The dictionary entries can correspond to multiple ones of the symbols.11-03-2011
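
A software model in the spirit of the abstract: one fixed-size symbol per 32-bit word of the line (zero, dictionary reference, or literal), with dictionary entries and literal words packed into the data portion. The exact symbol encoding and dictionary policy here are assumptions, not the patented format:

def compress_line(words, max_dict=4):
    # One fixed-size symbol per 32-bit word; dictionary entries and
    # literal words go into the data portion of the compressed line.
    dictionary, symbols, literals = [], [], []
    for w in words:
        if w == 0:
            symbols.append(("ZERO", 0))              # common special case
        elif w in dictionary:
            symbols.append(("DICT", dictionary.index(w)))
        elif len(dictionary) < max_dict:
            dictionary.append(w)                      # first occurrence
            symbols.append(("DICT", len(dictionary) - 1))
        else:
            symbols.append(("LIT", len(literals)))    # incompressible word
            literals.append(w)
    return symbols, dictionary, literals

def decompress_line(symbols, dictionary, literals):
    lookup = {"ZERO": lambda i: 0,
              "DICT": lambda i: dictionary[i],
              "LIT": lambda i: literals[i]}
    return [lookup[kind](i) for kind, i in symbols]

line = [0, 0xCAFE, 0, 0xCAFE, 0xBEEF, 7, 7, 0xCAFE]
packed = compress_line(line)
assert decompress_line(*packed) == line   # round-trips losslessly
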
20090292879Nodma cache - A NoDMA cache including a super page field. The super page field indicates when a set of pages contain protected information. The NoDMA cache is used by a computer system to deny I/O device access to protected information in system memory.11-26-2009
20090300287Method and apparatus for controlling cache memory - An apparatus for controlling a cache memory that stores therein data transferred from a main storing unit includes a computing processing unit that executes a computing process using data, a connecting unit that connects an input portion and an output portion of the cache memory, a control unit that causes data in the main storing unit to be transferred to the output portion of the cache memory through the connecting unit when the data in the main storing unit is input from the input portion of the cache memory into the cache memory, and a transferring unit that transfers data transferred by the control unit to the output portion of the cache memory, to the computing processing unit.12-03-2009
20110173390STORAGE MANAGEMENT METHOD AND STORAGE MANAGEMENT SYSTEM - There is provided a storage management system capable of utilizing division management with enhanced flexibility and of enhancing security of the entire system, by providing functions by program products in each division unit of a storage subsystem. The storage management system has a program-product management table stored in a shared memory in the storage subsystem and showing presence or absence of the program products, which provide management functions of respective resources to respective SLPRs. At the time of executing the management functions by the program products in the SLPRs of users in accordance with instructions from the users, the storage management system is referred to and execution of the management function having no program product is restricted.07-14-2011
20130219121METHOD AND APPARATUS FOR IMPLEMENTING A TRANSACTIONAL STORE SYSTEM USING A HELPER THREAD - A method, apparatus, and computer readable article of manufacture for executing a transaction by a processor apparatus that includes a plurality of hardware threads. The method includes the steps of: executing, by the processor apparatus using the plurality of hardware threads, a main software thread for executing the transaction and a helper software thread for executing a barrier function; and deciding, by the processor apparatus, whether or not the barrier function is required to be executed when the main software thread encounters a transactional load or store operation that requires the main software thread to read or write data.08-22-2013
20080235451NON-VOLATILE MEMORY DEVICE AND ASSOCIATED PROGRAMMING METHOD - A non-volatile memory device having a memory array is configured to prevent power voltage noise generation during programming, thereby improving reliability. An associated programming method of the non-volatile memory device includes storing data input from an external source to a cache register. The stored data is moved to a main register. The cache register is cleared and the data stored in the main register is programmed to the memory cell array.09-25-2008
20080235450Updating Entries Cached by a Network Processor - Machine-readable media, methods, and apparatus are described for updating network processor cache entries in corresponding local memories based upon information stored in corresponding buffers for the microengines. A control plane of the network processor identifies each microengine having an updated entry stored in its corresponding local memory, and stores information in the corresponding buffer for each identified microengine to indicate that the entry has been updated in the external memory.09-25-2008
20090049247MEDIA CACHE CONTROL INTERFACE - The apparent time required to rip a media work into a visible store is substantially reduced. When the media work is played, its content is cached onto a persistent, fast access storage media. If the user subsequently decides to rip the media work, the content of the cache is copied to a visible store in substantially less time than would be required to play the media work and convert it. The user thus perceives that the media work is ripped in a substantially shorter time, compared to that required for ripping the media work in a conventional manner. The ripping process may encode or transform the format of the content to a desired format for use within the visible store. Constraints may be imposed by the user to limit the cache, or the caching process may be hidden from the user.02-19-2009
20090182941Web Server Cache Pre-Fetching - A method and apparatus for a server that includes a file processor that interprets each requested data file, such as a web page, requested by a client in a process analogous to that of a browser application or other requesting application. The file processor initiates the loading of each referenced data item within the requested document in anticipation that the client will make the same requests upon receiving the requested data file. Each referenced data item is loaded into the server cache. When the client browser application requests these referenced data items they can be returned to the client browser application without accessing a slower persistent data storage. The requested data items are loaded from the server cache, which has a faster access time than the persistent data storage.07-16-2009
20090164727Handling of hard errors in a cache of a data processing apparatus - A data processing apparatus and method are provided for handling hard errors occurring in a cache of the data processing apparatus. The cache storage comprises data storage having a plurality of cache lines for storing data values, and address storage having a plurality of entries, with each entry identifying for an associated cache line an address indication value, and each entry having associated error data. In response to an access request, a lookup procedure is performed to determine with reference to the address indication value held in at least one entry of the address storage whether a hit condition exists in one of the cache lines. Further, error detection circuitry determines with reference to the error data associated with the at least one entry of the address storage whether an error condition exists for that entry. Additionally, cache location avoid storage is provided having at least one record, with each record being used to store a cache line identifier identifying a specific cache line. On detection of the error condition, one of the records in the cache location avoid storage is allocated to store the cache line identifier for the specific cache line associated with the entry for which the error condition was detected. Further, the error detection circuitry causes a clean and invalidate operation to be performed in respect of the specific cache line, and the access request is then re-performed. The cache access circuitry is arranged to exclude any specific cache line identified in the cache location avoid storage from the lookup procedure. This provides a very simple and effective mechanism for handling hard errors that manifest themselves within a cache during use, so as to ensure correct operation of the cache in the presence of such hard errors. Further, the technique can be employed not only in association with write through caches but also write back caches, thus providing a very flexible solution.06-25-2009
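A condensed sketch of the avoid-storage flow in 20090164727, assuming a direct-mapped cache and a stand-in for the real error detection; the sizing of the avoid storage and the method names are illustrative only.

```python
class AvoidingCache:
    """Cache lookup that excludes lines recorded in cache location avoid
    storage; error detection is a stand-in for real ECC checks."""
    def __init__(self, n_lines, n_avoid_records=2):
        self.lines = [None] * n_lines      # each entry: (tag, data) or None
        self.avoid = set()                 # cache line identifiers to skip
        self.max_avoid = n_avoid_records

    def lookup(self, index, tag):
        if index in self.avoid:            # excluded from the lookup procedure
            return None
        entry = self.lines[index]
        if entry is None or entry[0] != tag:
            return None                    # miss
        if self.error_detected(index):     # hard error on this entry
            if len(self.avoid) < self.max_avoid:
                self.avoid.add(index)      # allocate an avoid record
            self.lines[index] = None       # clean-and-invalidate the line
            return self.lookup(index, tag) # re-perform the access request
        return entry[1]

    def error_detected(self, index):
        return False                       # replace with ECC/parity logic

cache = AvoidingCache(n_lines=16)
cache.lines[3] = ("tagA", b"payload")
assert cache.lookup(3, "tagA") == b"payload"
```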
20090164725Method and Apparatus for Fast Processing Memory Array - The illustrative embodiments described herein provide a computer implemented method, apparatus, and computer program product for increasing efficiency associated with data access. In one illustrative embodiment a memory chip is presented comprising a plurality of memory units for storing data; a plurality of processing units for processing the data; and a word line and a bit line external to the plurality of memory units, wherein the plurality of processing units directly access the word line and the bit line in accessing the data.06-25-2009
20090138658Cache memory system for a data processing apparatus - A data processing apparatus is provided having a cache memory comprising a data storage array and a tag array and a cache controller coupled to the cache memory responsive to a cache access request from processing circuitry to perform cache look ups. The cache memory is arranged such that it has a first memory cell group configured to operate in a first voltage domain and a second memory cell group configured to operate in a second voltage domain that is different from the first voltage domain. A corresponding data processing method is also provided.05-28-2009
20090138657DATA BACKUP SYSTEM FOR LOGICAL VOLUME MANAGER AND METHOD THEREOF - A data backup system for a logical volume manager (LVM) and a method thereof, capable of realizing data backup in the LVM having a battery backed cache memory (BBCM). The data backup system includes a physical storage device, a BBCM, an LVM, and a data backup function. The physical storage device is used to store data of the LVM. The BBCM is used to provide a plurality of index regions and a plurality of data regions. The LVM is used to manage data save position of the physical storage device. The data backup function is used to look up whether the BBCM saves the data to be backed up by the logical volume. If the BBCM has the data, the BBCM reads out the data to be backed up, and writes the data into a snapshot volume (SV).05-28-2009
20090055588Performing Useful Computations While Waiting for a Line in a System with a Software Implemented Cache - Mechanisms for performing useful computations during a software cache reload operation are provided. With the illustrative embodiments, in order to perform software caching, a compiler takes original source code, and while compiling the source code, inserts explicit cache lookup instructions into appropriate portions of the source code where cacheable variables are referenced. In addition, the compiler inserts a cache miss handler routine that is used to branch execution of the code to a cache miss handler if the cache lookup instructions result in a cache miss. The cache miss handler, prior to performing a wait operation for waiting for the data to be retrieved from the backing store, branches execution to an independent subroutine identified by a compiler. The independent subroutine is executed while the data is being retrieved from the backing store such that useful work is performed.02-26-2009
20090144500STORE PERFORMANCE IN STRONGLY ORDERED MICROPROCESSOR ARCHITECTURE - Apparatus and methods relating to store operations are disclosed. In one embodiment, a first storage unit is to store data. A second storage unit is to store the data only after it has become detectable by a bus agent. Moreover, the second storage unit may store an index field for each data value to be stored within the second storage unit. Other embodiments are also disclosed.06-04-2009
20090144499Preemptive write-inhibition for thin provisioning storage subsystem - Write requests from host computers are processed in relation to a thin provisioning storage subsystem. A write request is received from a host computer. The write request identifies a first virtual disk that has been previously assigned to the host computer. It is determined whether the first virtual disk has to be allocated additional physical storage locations of the thin provisioning storage subsystem for storing data associated with the write request. In response to determining that the virtual disk has to be allocated additional physical storage locations, the following is performed. First, a quantity of free space remaining unallocated within physical storage locations of the thin provisioning storage subsystem is determined. Second, where the quantity of free space remaining unallocated within the physical storage locations satisfies a policy threshold associated with a second virtual disk, the second virtual disk is write-inhibited. The first and second virtual disks can be different.06-04-2009
20110225367MEMORY CACHE DATA CENTER - A data center system includes a memory cache coupled to a data center controller. The memory cache includes volatile memory and stores data that is persisted in a database in a different, remotely located data center system rather than in the data center system itself. The data center controller reads data from the memory cache and writes data to the memory cache.09-15-2011
20090024794Enhanced Access To Data Available In A Cache - Enhanced access to data available in a cache is provided. In one embodiment, a cache maintaining copies of source data is formed as a volatile memory. On receiving a request directed to the cache for a copy of a data element, the requested copy maintained in the cache is sent as a response to the request. In another embodiment used in the context of applications accessing databases in a navigational model, a cache maintains rows of data accessed by different user applications on corresponding connections. Applications may send requests directed to the cache to retrieve copies of the rows, populated potentially by other applications, while the cache restricts access to rows populated by other applications when processing requests directed to the source database system. In another embodiment, an application may direct requests to retrieve data elements caused to be populated by activity on different connections established by the same application.01-22-2009
20090024795METHOD AND APPARATUS FOR CACHING DATA - A relay unit inputs data and an index. A cache management unit determines whether or not a space area to cache data exists. In the case where there is a space area, the cache management unit caches data. An identifier generating unit generates an identifier corresponding to contents of the cached data. The identifier is registered in a cache data table in association with the data. The identifier is registered in a cache index table in association with the index. In the case where there is no space area, the cache management unit secures a space area. The cache management unit unregisters an identifier associated with the data which was cached in the secured area.01-22-2009
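The two-table bookkeeping in 20090024795 might look roughly like the following sketch, where a content hash stands in for the generated identifier; the eviction policy and the table names are assumptions.

```python
import hashlib

class DedupCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data_table = {}     # identifier -> cached data
        self.index_table = {}    # index -> identifier

    def put(self, index, data):
        if len(self.data_table) >= self.capacity:
            # secure a space area: evict one entry and unregister its identifier
            victim, _ = self.data_table.popitem()
            self.index_table = {i: d for i, d in self.index_table.items()
                                if d != victim}
        ident = hashlib.sha1(data).hexdigest()   # identifier from the contents
        self.data_table[ident] = data            # register in cache data table
        self.index_table[index] = ident          # register in cache index table

    def get(self, index):
        ident = self.index_table.get(index)
        return None if ident is None else self.data_table.get(ident)

c = DedupCache(capacity=8)
c.put("page/1", b"hello")
c.put("page/2", b"hello")                        # same contents, same identifier
assert c.get("page/1") == c.get("page/2") == b"hello"
assert len(c.data_table) == 1                    # stored only once
```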
20110225368Apparatus and Method For Context-Aware Mobile Data Management - A context of a mobile device is determined. A context preference of a user associated with the mobile device is determined. The context of the mobile device and the user context preference are transmitted to another node and responsively returned data is received. Available free space in the mobile device is determined. All data whose timestamp is within a predetermined threshold is cached. The data is cached in at least a portion of the free space.09-15-2011
20110225366Dual-Mode, Dual-Display Shared Resource Computing - A dual-mode, dual-display shared resource computing (SRC) device is usable to stream SRC content from a host SRC device while in an on-line mode and maintain functionality with the content during an off-line mode. Such remote SRC devices can be used to maintain multiple user-specific caches and to back-up cached content for multi-device systems.09-15-2011
20090210623SYSTEM AND ARTICLE OF MANUFACTURE FOR STORING DATA - Provided are a system, and article of manufacture, wherein a first storage unit is coupled to a second storage unit. The first storage unit and the second storage unit are detected. A determination is made that the first storage unit is capable of responding to a write operation faster than the second storage unit, and that the second storage unit is capable of responding to a read operation at least as fast as the first storage unit. Data is written to the first storage unit. A transfer of the data is initiated from the first storage unit to the second storage unit. The data is read from the second storage unit, in response to a read request directed at both the first and the second storage units.08-20-2009
20090198894Method Of Updating IC Instruction And Data Cache - A method of updating a cache in an integrated circuit is provided. The integrated circuit incorporates the cache, memory and a memory interface connected to the cache and memory. Following a cache miss, the method fetches, using the memory interface, first data associated with the cache miss and second data from the memory, where the second data is stored in the memory adjacent the first data, and updates the cache with the fetched first and second data via the memory interface. The cache includes instruction and data cache, the method performing arbitration between instruction cache misses and data cache misses such that the fetching and updating are performed for data cache misses before instruction cache misses.08-06-2009
20090198893Microprocessor systems - A memory management arrangement includes a memory management unit 08-06-2009
20090198892RAPID CACHING AND DATA DELIVERY SYSTEM AND METHOD - The initial systems analysis of a new data source fully defines each data element and also designs, tests and encodes complete data integration instructions for each data element. A metadata cache stores the data element definition and data element integration instructions. The metadata cache enables a comprehensive view of data elements in an enterprise data architecture. When data is requested that includes data elements defined in a metadata cache, the metadata cache and associated software modules automatically generate database elements to fully integrate the requested data elements into existing databases.08-06-2009
20090055587Adaptive Caching of Input / Output Data - To improve caching techniques and realize greater hit rates within available memory, the present invention utilizes an entropy signature from the compressed data blocks to supply a bias to pre-fetching operations. The method of the present invention for caching data involves detecting a data I/O request, relative to a data object, and then selecting appropriate I/O to cache, wherein said selecting can occur with or without user input, or with or without application or operating system prior knowledge. Such selecting may occur dynamically or manually. The method further involves estimating an entropy of a first data block to be cached in response to the data I/O request; selecting a compressor using a value of the entropy of the data block from the estimating step, wherein each compressor corresponds to one of a plurality of ranges of entropy values relative to an entropy watermark; and storing the data block in a cache in compressed form from the selected compressor, or in uncompressed form if the value of the entropy of the data block from the estimating step falls in a first range of entropy values relative to the entropy watermark. The method can also include the step of prefetching a data block using gap prediction with an applied entropy bias, wherein the data block is the same as the first data block to be cached or is a separate second data block. The method can also involve the following additional steps: adaptively adjusting the plurality of ranges of entropy values; scheduling a flush of the data block from the cache; and suppressing operating system flushes in conjunction with the foregoing scheduling step.02-26-2009
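A toy version of the entropy-driven compressor selection in 20090055587: estimate Shannon entropy per block and map entropy ranges to compressors. The thresholds and the choice of zlib/bz2 are illustrative, not the patent's.

```python
import math, zlib, bz2

def entropy(block: bytes) -> float:
    """Shannon entropy in bits per byte of the block."""
    if not block:
        return 0.0
    counts = {}
    for b in block:
        counts[b] = counts.get(b, 0) + 1
    n = len(block)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def select_compressor(block: bytes):
    """Map entropy ranges to compressors; the cut-offs are invented."""
    h = entropy(block)
    if h > 7.5:                        # near-random data: compressing it
        return "raw", block            # wastes cycles, store uncompressed
    if h > 4.0:
        return "zlib", zlib.compress(block)
    return "bz2", bz2.compress(block)  # low entropy: heavier compressor pays off

tag, packed = select_compressor(b"aaaaabbbbbccccc" * 100)
print(tag, len(packed))
```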
20120079199INTELLIGENT WRITE CACHING FOR SEQUENTIAL TRACKS - Method, system, and computer program product embodiments for, in a computing storage environment for destaging data from nonvolatile storage (NVS) to a storage unit, write caching for sequential tracks by a processor device are provided. If a first track is determined to be sequential, and an earlier track is also determined to be sequential, a temporal bit associated with the earlier track is cleared to allow for destage of data of the earlier track. If a temporal bit for one of a plurality of additional tracks in one of a plurality of strides in a modified cache is determined to be not set, a stride associated with the one of the plurality of additional tracks is selected for a destage operation. If the NVS exceeds a predetermined storage threshold, a predetermined one of the plurality of strides is selected for the destage operation.03-29-2012
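The temporal-bit handling in 20120079199 might be sketched as follows, with tracks and strides reduced to dictionaries and the stride size chosen arbitrarily.

```python
class WriteCache:
    """Sketch of temporal-bit bookkeeping for sequential destage."""
    def __init__(self, stride_size=8):
        self.temporal = {}             # track id -> temporal bit
        self.stride_size = stride_size

    def write(self, track, prev_track):
        self.temporal[track] = 1       # freshly written tracks start "hot"
        sequential = prev_track is not None and track == prev_track + 1
        if sequential and self.temporal.get(prev_track):
            self.temporal[prev_track] = 0  # earlier sequential track may destage

    def pick_destage_stride(self):
        # a stride qualifies once some track in it has its temporal bit clear
        for track, bit in self.temporal.items():
            if bit == 0:
                return track // self.stride_size
        return None

wc = WriteCache()
wc.write(10, None)
wc.write(11, 10)       # sequential: track 10's temporal bit is cleared
assert wc.pick_destage_stride() == 10 // 8
```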
20090083488Enabling Speculative State Information in a Cache Coherency Protocol - In one embodiment, the present invention includes a method for receiving a bus message in a first cache corresponding to a speculative access to a portion of a second cache by a second thread, and dynamically determining in the first cache if an inter-thread dependency exists between the second thread and a first thread associated with the first cache with respect to the portion. Other embodiments are described and claimed.03-26-2009
20080263278CACHE RECONFIGURATION BASED ON RUN-TIME PERFORMANCE DATA OR SOFTWARE HINT - A method for reconfiguring a cache memory is provided. The method in one aspect may include analyzing one or more characteristics of an execution entity accessing a cache memory and reconfiguring the cache based on the one or more characteristics analyzed. Examples of analyzed characteristics include, but are not limited to, data structure used by the execution entity, expected reference pattern of the execution entity, type of an execution entity, heat and power consumption of an execution entity, etc. Examples of cache attributes that may be reconfigured include, but are not limited to, associativity of the cache memory, amount of the cache memory available to store data, coherence granularity of the cache memory, line size of the cache memory, etc.10-23-2008
20110231610MEMORY SYSTEM - According to one embodiment, free blocks included in a nonvolatile semiconductor memory are classified into a plurality of free block management lists. When a free block is acquired at normal priority, it is acquired from a free block management list in which the number of free blocks is larger than a first threshold. When a free block is acquired at high priority, it is acquired from a free block management list irrespective of the first threshold.09-22-2011
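A minimal sketch of the threshold rule in 20110231610, with two free block management lists and an invented threshold; the criteria by which blocks are classified into lists are not modeled.

```python
class FreeBlockManager:
    """Multiple free-block lists; normal-priority requests must respect a
    per-list threshold, high-priority requests ignore it."""
    def __init__(self, lists, threshold):
        self.lists = lists             # list of lists of free block ids
        self.threshold = threshold     # the "first threshold"

    def acquire(self, high_priority=False):
        for blocks in self.lists:
            if blocks and (high_priority or len(blocks) > self.threshold):
                return blocks.pop()    # high priority ignores the threshold
        return None

fbm = FreeBlockManager([[1, 2], [3, 4, 5, 6]], threshold=2)
assert fbm.acquire() in (3, 4, 5, 6)   # normal: only the longer list qualifies
assert fbm.acquire(high_priority=True) in (1, 2)  # short list now usable
```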
20110231611STORAGE APPARATUS AND CACHE CONTROL METHOD - The cache-resident area is optimized where cache residence control in units of LUs is applied to a storage apparatus that virtualizes capacity, by acquiring only a cache area of the same size as the physical capacity assigned to the LU. An LU to be resident in cache memory is configured as a set of pages obtained by dividing a pool volume, a physical space created using a plurality of storage devices, into a predetermined size. When the LU to be resident in the cache memory is created, a capacity corresponding to the size of the LU is not acquired in the cache memory up front; instead, a cache capacity equal to the physical capacity allocated to a new page is acquired in the cache memory each time a page is newly allocated, and the new page is made resident in the cache memory.09-22-2011
20090100224CACHE MANAGEMENT - Systems, methods and computer readable media for cache management. Cache management can operate to organize pages into files and score the respective files stored in a cache memory. The organized pages can be stored to an optical storage media based upon the organization of the files and based upon the score associated with the files.04-16-2009
20090100225Data processing apparatus and shared memory accessing method - The present invention provides a data processing apparatus for processing data by causing a plurality of function blocks to share a single shared memory, the data processing apparatus including: a memory controller configured to cause the plurality of function blocks to write and read data to and from the shared memory in response to requests from any one of the function blocks; a cache memory; and a companding section configured to compress the data to be written to the cache memory while expanding the data read therefrom.04-16-2009
20090198891Issuing Global Shared Memory Operations Via Direct Cache Injection to a Host Fabric Interface - A data processing system enables global shared memory (GSM) operations across multiple nodes with a distributed EA-to-RA mapping of physical memory. Each node has a host fabric interface (HFI), which includes HFI windows that are assigned to at most one locally-executing task of a parallel job. The tasks perform parallel job execution, but map only a portion of the effective addresses (EAs) of the global address space to the local, real memory of the task's respective node. The HFI window tags all outgoing GSM operations (of the local task) with the job ID, and embeds the target node and HFI window IDs of the node at which the EA is memory mapped. The HFI window also enables processing of received GSM operations with valid EAs that are homed to the local real memory of the receiving node, while preventing processing of other received operations without a valid EA-to-RA local mapping.08-06-2009
20090248982CACHE CONTROL APPARATUS, INFORMATION PROCESSING APPARATUS, AND CACHE CONTROL METHOD - A cache control apparatus determines whether or not to adopt data acquired by a speculative fetch, a memory fetch request output before it becomes clear whether the data requested by a CPU is stored in the CPU's cache. It does so by monitoring the status of the speculative fetch and a time period obtained by adding two intervals: the time from when the speculative fetch is output to when it reaches a memory controller, and the time from completion of writing data to memory, as specified by a data write command issued before the speculative fetch for the same address as that for which the speculative fetch is issued, to when a response to the data write command is returned.10-01-2009
20090254709Prediction Mechanism for Subroutine Returns in Binary Translation Sub-Systems of Computers - A sequence of input language (IL) instructions of a guest system is converted, for example by binary translation, into a corresponding sequence of output language (OL) instructions of a host system, which executes the OL instructions. In order to determine the return address after any IL call to a subroutine at a target entry address P, the corresponding OL return address is stored in an array at a location determined by an index calculated as a function of P. After completion of execution of the OL translation of the IL subroutine, execution is transferred to the address stored in the array at the location where the OL return address was previously stored. A confirm instruction block is included in each OL call site to determine whether the transfer was to the correct or incorrect call site, and a back-up routine is included to handle the cases of incorrect call sites.10-08-2009
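A sketch of the return-address array from 20090254709; the hash function and array size are placeholders, and the confirm-block check at the call site is only noted in a comment.

```python
class ReturnTargetArray:
    """The OL return address is stored at an index hashed from the IL
    subroutine entry address P, and looked up again on return."""
    def __init__(self, size=256):
        self.size = size
        self.slots = [None] * size

    def _index(self, p):
        return hash(p) % self.size     # stand-in for the hash function of P

    def record_call(self, p, ol_return_address):
        self.slots[self._index(p)] = ol_return_address

    def predicted_return(self, p):
        return self.slots[self._index(p)]

rta = ReturnTargetArray()
rta.record_call(p=0x401000, ol_return_address=0x7F001234)
# After the translated subroutine finishes, control transfers to this
# address; a confirm block at the call site must still verify that the
# transfer reached the correct call site and fall back otherwise.
assert rta.predicted_return(0x401000) == 0x7F001234
```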
20090254706METHOD AND SYSTEM FOR APPROXIMATING OBJECT SIZES IN AN OBJECT-ORIENTED SYSTEM - A method and system for increasing a system's performance and achieving improved memory utilization by approximating the memory sizes that will be required for data objects that can be deserialized and constructed in a memory cache. The method and system may use accurate calculations or measurements of similar objects to calibrate the approximate memory sizes.10-08-2009
20090254708METHOD AND APPARATUS FOR DELIVERING AND CACHING MULTIPLE PIECES OF CONTENT - Aspects relate to systems and methods for providing the ability to customize content delivery. A device can cache multiple presentations. The device can establish a cache depth upon initiation of the subscription service. The device can provide an interface to select a cache depth. The cache depth can be the number of presentations the device will maintain on the device at a given time.10-08-2009
20080320222Adaptive caching in broadcast networks - Adaptive caching techniques are described. In an implementation, a head end defines a plurality of cache periods having associated criteria. Request data for content is obtained and utilized to associate the content with the defined cache periods based on a comparison of the request data with the associated criteria. Then, the content is cached at the head end for the associated cache period.12-25-2008
20090254707Partial Content Caching - A network device, known as an appliance, is located in the data path between a client and a server. The appliance includes a cache that is used to cache static and near-static cacheable content items. When a request is received, the appliance determines whether any portion of the requested data is available in its cache; if so, that portion can be serviced by the appliance. If any portion of the requested content is dynamic and cannot be serviced by the cache, the dynamic portion is generated by the appliance or obtained from another source such as an application server. The appliance integrates the content retrieved from the cache, the dynamically generated content, and the content received from other sources to generate a response to the original content request. The present invention thus implements partial content caching for content that has a cached portion and a portion to be dynamically generated.10-08-2009
20090276575INFORMATION PROCESSING APPARATUS AND COMPILING METHOD - According to one embodiment, an information processing apparatus includes a processor, a cache, and a cache controller. The processor is configured to output a memory access request for accessing an entity of a variable stored in a variable-storage region provided in a memory by using a first or a second memory address. Both the first and second memory addresses are allocated to the variable-storage region. The cache is configured to store some of the data items stored in the memory. The cache controller is configured to access the memory or the cache by using a memory address designating the variable-storage region, in accordance with whichever of the first and second memory addresses is included in a memory access request coming from the processor.11-05-2009
20090276572Memory Management Among Levels of Cache in a Memory Hierarchy - Methods, apparatus, and product for memory management among levels of cache in a memory hierarchy in a computer with a processor operatively coupled through two or more levels of cache to a main random access memory, caches closer to the processor in the hierarchy characterized as higher in the hierarchy, including: identifying a line in a first cache that is preferably retained in the first cache, the first cache backed up by at least one cache lower in the memory hierarchy, the lower cache implementing an LRU-type cache line replacement policy; and updating LRU information for the lower cache to indicate that the line has been recently accessed.11-05-2009
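The idea in 20090276572, telling a lower-level LRU cache that a line should be retained, can be shown with an ordinary LRU where the update simply marks the line most recently used. This is a reduction of the patent's LRU-information update; the names are invented here.

```python
from collections import OrderedDict

class LRUCache:
    """Plain LRU cache standing in for the cache lower in the hierarchy."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()

    def touch(self, line):
        # marking the line most-recently-used protects it from replacement
        if line in self.lines:
            self.lines.move_to_end(line)

    def insert(self, line, data):
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)  # evict the true LRU line
        self.lines[line] = data

lower = LRUCache(capacity=2)
lower.insert("A", 1)
lower.insert("B", 2)
lower.touch("A")        # upper level: line A is preferably retained
lower.insert("C", 3)    # B, not A, is evicted
assert "A" in lower.lines and "B" not in lower.lines
```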
20090150615METHOD AND STRUCTURE FOR PRODUCING HIGH PERFORMANCE LINEAR ALGEBRA ROUTINES USING STREAMING - A method (and structure) for executing a linear algebra subroutine on a computer having a cache, includes streaming data for matrices involved in processing the linear algebra subroutine such that data is processed using data for a first matrix stored in the cache as a matrix format and data from a second matrix and a third matrix is stored in a memory device at a higher level than the cache, the streaming providing data from the higher level as the streaming data is required for the processing.06-11-2009
20100161903APPARATUS, METHOD, COMPUTER PROGRAM AND MOBILE TERMINAL FOR PROCESSING INFORMATION - An apparatus for processing information, includes a memory storing a plurality of content items different in type and metadata containing time information of the content items, a cache processor for fetching from the memory the content item and the metadata of the content item to be displayed on a display and storing the fetched content item and the metadata thereof on a cache memory, a display controller for displaying on the display the metadata of the content items from the cache memory arranged in accordance with the time information and a selection operator selecting metadata corresponding to a content item desired to be processed, out of the metadata displayed, and a content processor for fetching from the cache memory a content item corresponding to the metadata selected by the selection operator by referencing the cache memory in response to the selected metadata, and for performing a process responsive to the fetched content item.06-24-2010
20100191911System-On-A-Chip Having an Array of Programmable Processing Elements Linked By an On-Chip Network with Distributed On-Chip Shared Memory and External Shared Memory - An integrated circuit having an array of programmable processing elements and a memory interface linked by an on-chip communication network. Each processing element includes a plurality of processing cores and a local memory. The memory interface block is operably coupled to external memory and to the on-chip communication network. The memory interface supports accessing the external memory in response to messages communicated from the processing elements of the array over the on-chip communication network. A portion of the local memory for a plurality of the processing elements of the array as well as a portion of the external memory are both allocated to store data shared by a plurality of processing elements of the array during execution of programmed operations distributed thereon.07-29-2010
20120198156SELECTIVE CACHE ACCESS CONTROL APPARATUS AND METHOD THEREOF - A data processor is disclosed that definitively determines that an effective address being calculated and decoded will be associated with an address range that includes a memory local to a data processor unit, and disables a cache access based upon a comparison between a portion of a base address and a corresponding portion of an effective address input operand. Access to the local memory can be accomplished through a first port of the local memory when it is definitively determined that the effective address will be associated with the address range. Access to the local memory cannot be accomplished through the first port of the local memory when it is not definitively determined that the effective address will be associated with the address range.08-02-2012
20100262780APPARATUS AND METHODS FOR RENDERING A PAGE - Aspects relate to apparatus and methods for rendering a page on a computing device, such as a web page. The apparatus and methods include receiving a request for a requested instance of a page and determining if the requested instance of the page corresponds to a document object model (DOM) for the page stored in a memory. Further, the apparatus and methods include retrieving a dynamic portion of the DOM corresponding to the requested instance if the requested instance of the page corresponds to the DOM stored in the memory. The dynamic portion may be unique to the requested instance of the page. Moreover, the apparatus and methods include storing the dynamic portion of the DOM corresponding to the requested instance of the page in a relationship with the static portion of the DOM.10-14-2010
20100161902METHOD, SYSTEM, AND PROGRAM FOR AN ADAPTOR TO READ AND WRITE TO SYSTEM MEMORY - Provided are a method, system, and program for an adaptor to read and write to system memory. A plurality of blocks of data to write to storage are received at an adaptor. The blocks of data are added to a buffer in the adaptor. A determination is made of pages in a memory device and I/O requests are generated to write the blocks in the buffer to the determined pages, wherein two I/O requests are generated to write to one block split between two pages in the memory device. The adaptor executes the generated I/O requests to write the blocks in the buffer to the determined pages in the memory device.06-24-2010
20100217936SYSTEMS AND METHODS FOR PROCESSING ACCESS CONTROL LISTS (ACLS) IN NETWORK SWITCHES USING REGULAR EXPRESSION MATCHING LOGIC - A network node, such as an Ethernet switch, is configured to monitor packet traffic using regular expressions corresponding to Access Control List (ACL) rules. In one embodiment, the regular expressions are expressed in the form of a state machine. In one embodiment, as packets are passed through the network node, an access control module accesses the packets and traverses the state machine according to certain qualification content of the packets in order to determine if respective packets should be permitted to pass through the network switch.08-26-2010
20100217934METHOD, APPARATUS AND SYSTEM FOR OPTIMIZING IMAGE RENDERING ON AN ELECTRONIC DEVICE - Portable electronic devices typically have reduced computing resources, including reduced screen size. The method, apparatus and system of the present specification provides, amongst other things, an intermediation server configured to access network content that is requested by a portable electronic device and to analyze the content including analyzing images in that content. The intermediation server is further configured to accommodate the computing resources of the portable electronic device as part of fulfilling content requests from the portable electronic device.08-26-2010
20120198159INFORMATION PROCESSING DEVICE - An information processing device of the invention includes a measurement section which detects the changes in the uses of a built-in memory and an external memory, and a control section which monitors the measurement result from the measurement section, changes the configuration of the built-in memory, transfers the data stored in the built-in memory and the external memory, and changes the external memory area and the built-in memory area used by the CPU and other bus master devices, wherein it is possible to detect the changes in the memory utilization efficiency that cannot be predicted by static analysis, and to maintain an optimal memory configuration.08-02-2012
20100250850PROCESSOR AND METHOD FOR EXECUTING LOAD OPERATION AND STORE OPERATION THEREOF - A processor and a method for executing load operation and store operation thereof are provided. The processor includes a data cache and a store buffer. When executing a store operation, if the address of the store operation is the same as the address of an existing entry in the store buffer, the data of the store operation is merged into the existing entry. When executing a load operation, if there is a memory dependency between an existing entry in the store buffer and the load operation, and the existing entry includes the complete data required by the load operation, the complete data is provided by the existing entry alone. If the existing entry does not include the complete data, the complete data is generated by assembling the existing entry and a corresponding entry in the data cache.09-30-2010
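The merge-and-forward behavior described in 20100250850 can be sketched at byte granularity: stores to a matching address merge into the existing entry, a fully covered load is served by the entry alone, and a partially covered load is assembled over the data-cache word. The word size and data structures are simplifications.

```python
class StoreBuffer:
    """Entries hold per-byte pending store data, keyed by word address."""
    def __init__(self):
        self.entries = {}   # word address -> {byte offset: value}

    def store(self, addr, offset, byte):
        self.entries.setdefault(addr, {})[offset] = byte  # merge into entry

    def load_word(self, addr, data_cache, word_bytes=4):
        pending = self.entries.get(addr, {})
        if len(pending) == word_bytes:                    # complete coverage:
            return bytes(pending[i] for i in range(word_bytes))  # entry alone
        base = bytearray(data_cache.get(addr, bytes(word_bytes)))
        for i, b in pending.items():                      # assemble the entry
            base[i] = b                                   # over the cache word
        return bytes(base)

sb = StoreBuffer()
cache = {0x100: b"\x01\x02\x03\x04"}
sb.store(0x100, 0, 0xFF)                                  # partial store
assert sb.load_word(0x100, cache) == b"\xff\x02\x03\x04"  # assembled result
```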
20100153644ON DEMAND JAVA APPLICATION MANAGER - A system, method and program product for providing an on demand Java application manager. A system is provided that includes: a bootstrap system for setting up a cache within a local storage and pointing to at least one application at a network resource; a class loader that loads class files for a selected application into the JVM in an on demand fashion, wherein the class loader searches for a requested class file initially in the cache and if not present downloads the requested class file from the network resource to the cache; and a disk management system that manages storage space in the cache, wherein the disk management system includes a facility for discarding class files from the cache.06-17-2010
20100228917DEVICE MANAGEMENT APPARATUS, DEVICE INITIALIZATION METHOD, AND DEVICE SYSTEM - A device management apparatus that executes an initialization processing to a device that stores user data includes a first initialization processing section for executing a first initialization processing in which a progress status of an initialization is notified to another device management apparatus every time when the initialization equivalent to a processing unit of the initialization processing is executed to the device, a second initialization processing section for executing a second initialization processing in which a progress status of an initialization is notified to the another device management apparatus every time when the initialization for the predetermined number of processing units is executed to the device, a monitoring unit for monitoring a status of access to the device and an operation state of the device, and a changeover section for changing over the first initialization processing and the second initialization processing based on a monitoring result.09-09-2010
20100217935SYSTEM ON CHIP AND ELECTRONIC SYSTEM HAVING THE SAME - An electronic system includes a system on chip (SOC). The SOC includes at least one internal memory that operates selectively as a cache memory or a tightly-coupled memory (TCM). The SOC may include a microprocessor, an internal memory, and a selecting circuit. The selecting circuit may be configured to set the internal memory to one of a TCM mode or a cache memory mode in response to a memory selecting signal.08-26-2010
20090043965EARLY DATA RETURN INDICATION MECHANISM - One embodiment of a method is disclosed. The method generates requests waiting for data to be loaded into a data cache including a first level cache (FLC). The method further receives the requests from instruction sources, schedules the requests, and then passes the requests on to an execution unit having the data cache. Further, the method checks the contents of the data cache, replays the requests if the data is not located in the data cache, and stores the requests that are replay safe. The method further detects the readiness of the data a number of bus clocks before the data is ready to be transmitted to a processor, and transmits an early data ready indication to the processor to drain the requests from a resource scheduler.02-12-2009
20100250852USER TERMINAL APPARATUS AND CONTROL METHOD THEREOF, AS WELL AS PROGRAM - A user terminal apparatus and a control method therefor constitute part of a thin client system that transfers data to a file server and stores the data therein. The system aggregates user data in the file server by controlling writing into a secondary storage device of the user terminal and controlling writing out to an external storage medium, to prevent loss and leakage of confidential information.09-30-2010
20100250851MULTI-PROCESSOR SYSTEM DEVICE AND METHOD DECLARING AND USING VARIABLES - A method of declaring and using variables includes: determining whether variables are independent variables or common variables, declaring and storing the independent variables in a plurality of data structures respectively corresponding to the plurality of processors, declaring and storing the common variables in a shared memory area, allowing each one of the plurality of processors to simultaneously use the independent variables in a corresponding one of the plurality of data structures, and allowing only one of the plurality of processors at a time to use the common variables in the shared memory area.09-30-2010
20090100226Cache memory device and microprocessor - A cache controller is connected to a processor and a main memory. The cache controller is also connected to a cache memory that can read and write at a higher speed than the main memory. The cache memory is provided with a plurality of cache lines that include a tag area storing an address on the main memory, a capacity area storing a capacity value of a cache block, and a cache block. When a read request is issued from the processor to the main memory, the cache controller checks whether the requested data is present in the cache memory or not. A cache capacity determination unit determines a capacity value for the cache block and supplies it to the capacity area.04-16-2009
20100241807VIRTUALIZED DATA STORAGE SYSTEM CACHE MANAGEMENT - Virtual storage arrays consolidate branch data storage at data centers connected via wide area networks. Virtual storage arrays appear to storage clients as local data storage; however, virtual storage arrays actually store data at the data center. The virtual storage arrays overcomes bandwidth and latency limitations of the wide area network by predicting and prefetching storage blocks, which are then cached at the branch location. Virtual storage arrays leverage an understanding of the semantics and structure of high-level data structures associated with storage blocks to predict which storage blocks are likely to be requested by a storage client in the near future. Virtual storage arrays determine the association between requested storage blocks and corresponding high-level data structure entities to predict additional high-level data structure entities that are likely to be accessed. From this, the virtual storage array identifies the additional storage blocks for prefetching.09-23-2010
20100241805IMAGE FORMING APPARATUS, AND CONTROL METHOD AND PROGRAM THEREOF - An object attribute is determined with respect to an object and a determination is performed as to whether or not to execute image cache processing in response to the object attribute. By switching processing in accordance with this, execution of time-consuming image specifying processing is kept to a necessary minimum and performance reductions can be avoided. Furthermore, cache registration is avoided for images having low reusability, which achieves improvements in cache usage efficiency and improvements in cache search efficiency, thereby enabling performance to be improved.09-23-2010
20100211741Shared Composite Data Representations and Interfaces - Embodiments described herein provide information management features and functionality that can be used to manage information of distinct information sources, but are not so limited. In an embodiment, a computing environment includes a client that can be used to access data from distinct sources and generate a data composition representing aspects of accessed and other data and/or relationships of the distinct sources. In one embodiment, a client can include data composition and conflict resolution presentation features that can be used to manage one or more data compositions and/or source interrelationships. Other embodiments are available.08-19-2010
20110022798METHOD AND SYSTEM FOR CACHING TERMINOLOGY DATA - A method for caching terminology data, including steps of: receiving a terminology request; determining that the terminology request is related to at least one uncached terminology concept; retrieving a complete concept set of the terminology concept as a cache unit, wherein the complete concept set includes the terminology concept, all other terminology concepts which are directly correlated or indirectly correlated through a non-transitive relationship to the terminology concept, properties of each terminology concept, and the non-transitive relationship between each terminology concept; retrieving transitive relationship information for the complete concept set, the transitive relationship information at least including identifiers of terminology concepts which are correlated through the transitive relationship to each terminology concept in the complete concept set; and caching the cache unit and the transitive relationship information of the cache unit. A corresponding device caches terminology data.01-27-2011
20110113195SYSTEMS AND METHODS FOR AVOIDING PERFORMANCE DEGRADATION DUE TO DISK FRAGMENTATION IN A NETWORK CACHING DEVICE - Storage space on one or more hard disks of a network caching appliance is divided into a plurality S of stripes. Each stripe is a physically contiguous section of the disk(s), and is made up of a plurality of sectors. Content, whether in the form of objects or otherwise (e.g., byte-cache stream information), is written to the stripes one at a time, and when the entire storage space has been written the stripes are recycled as a whole, one at a time. In the event of a cache hit, if the subject content is stored on one of the oldest D stripes, the subject content is rewritten to the stripe currently being written, where 1≦D≦(S−1).05-12-2011
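The stripe recycling and rewrite-on-hit policy of 20110113195 in miniature; the stripe count, depth D, and per-stripe capacity are arbitrary, and Python objects stand in for on-disk sectors.

```python
class StripedCache:
    """S stripes written one at a time and recycled as a whole; a hit on
    one of the D oldest stripes rewrites the content forward."""
    def __init__(self, S=8, D=2, stripe_capacity=4):
        self.S, self.D, self.cap = S, D, stripe_capacity
        self.stripes = [[] for _ in range(S)]
        self.current = 0            # index of the stripe being written

    def write(self, obj):
        if len(self.stripes[self.current]) >= self.cap:
            self.current = (self.current + 1) % self.S  # advance, recycling
            self.stripes[self.current] = []             # the oldest stripe
        self.stripes[self.current].append(obj)

    def read(self, obj):
        for age in range(self.S):   # age 0 = current stripe, S-1 = oldest
            idx = (self.current - age) % self.S
            if obj in self.stripes[idx]:
                if age >= self.S - self.D:   # hit in one of the D oldest
                    self.stripes[idx].remove(obj)
                    self.write(obj)          # rewrite to the current stripe
                return obj
        return None

sc = StripedCache()
sc.write("obj")
assert sc.read("obj") == "obj"
```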
20090276574ARITHMETIC DEVICE, ARITHMETIC METHOD, HARD DISC CONTROLLER, HARD DISC DEVICE, PROGRAM CONVERTER, AND COMPILER - This arithmetic device includes: a first memory to store a first program; a first arithmetic module to read the first program from the first memory to execute the first program; a second memory to store a second program which is embedded in processing of the first program and called from the first arithmetic module and executed, and whose access speed is lower than the first memory; a third memory storing data temporarily and whose access speed is higher than the second memory; a second arithmetic module to read the second program from the second memory and store in a third memory; and a third arithmetic module to read the second program from the third memory to execute the second program, in accordance with a call from the first arithmetic module to execute the first program.11-05-2009
20080301367WAVEFORM CACHING FOR DATA DEMODULATION AND INTERFERENCE CANCELLATION AT A NODE B - The present patent application discloses a method and apparatus for using external and internal memory for cancelling traffic interference comprising storing data in an external memory; and processing the data samples on an internal memory, wherein the external memory is low bandwidth memory; and the internal memory is high bandwidth on board cache. The present method and apparatus also comprises caching portions of the data on the internal memory, filling the internal memory by reading the newest data from the external memory and updating the internal memory; and writing the older data back to the external memory from the internal memory, wherein the data is incoming data samples.12-04-2008
20110113196AVOIDING MEMORY ACCESS LATENCY BY RETURNING HIT-MODIFIED WHEN HOLDING NON-MODIFIED DATA - A microprocessor is configured to communicate with other agents on a system bus and includes a cache memory and a bus interface unit coupled to the cache memory and to the system bus. The bus interface unit receives from another agent coupled to the system bus a transaction to read data from a memory address, determines whether the cache memory is holding the data at the memory address in an exclusive state (or a shared state in certain configurations), and asserts a hit-modified signal on the system bus and provides the data on the system bus to the other agent when the cache memory is holding the data at the memory address in an exclusive state. Thus, the delay of an access to the system memory by the other agent is avoided.05-12-2011
20100199043METHODS AND MECHANISMS FOR PROACTIVE MEMORY MANAGEMENT - A proactive, resilient and self-tuning memory management system and method that result in actual and perceived performance improvements in memory management, by loading and maintaining data that is likely to be needed into memory, before the data is actually needed. The system includes mechanisms directed towards historical memory usage monitoring, memory usage analysis, refreshing memory with highly-valued (e.g., highly utilized) pages, I/O pre-fetching efficiency, and aggressive disk management. Based on the memory usage information, pages are prioritized with relative values, and mechanisms work to pre-fetch and/or maintain the more valuable pages in memory. Pages are pre-fetched and maintained in a prioritized standby page set that includes a number of subsets, by which more valuable pages remain in memory over less valuable pages. Valuable data that is paged out may be automatically brought back, in a resilient manner. Benefits include significantly reducing or even eliminating disk I/O due to memory page faults.08-05-2010
20110113197QUEUE ARRAYS IN NETWORK DEVICES - A queue descriptor including a head pointer pointing to the first element in a queue and a tail pointer pointing to the last element in the queue is stored in memory. In response to a command to perform an enqueue or dequeue operation with respect to the queue, fetching from the memory to a cache only one of either the head pointer or tail pointer and returning to the memory from the cache portions of the queue descriptor modified by the operation.05-12-2011
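A sketch of the partial descriptor fetch in 20110113197: the descriptor's head and tail indices live in "memory", each operation fetches only the pointer it needs into the "cache", and only the modified portion is written back. The ring-buffer model is an assumption.

```python
class QueueManager:
    def __init__(self, ring_size=16):
        self.ring = [None] * ring_size
        self.mem_desc = {"head": 0, "tail": 0}  # queue descriptor in memory
        self.cache = {}                         # partially fetched descriptor

    def enqueue(self, element):
        if "tail" not in self.cache:            # fetch only the tail pointer
            self.cache["tail"] = self.mem_desc["tail"]
        t = self.cache["tail"]
        self.ring[t % len(self.ring)] = element
        self.cache["tail"] = t + 1
        self.mem_desc["tail"] = t + 1           # write back modified portion

    def dequeue(self):
        if "head" not in self.cache:            # fetch only the head pointer
            self.cache["head"] = self.mem_desc["head"]
        h = self.cache["head"]
        element = self.ring[h % len(self.ring)]
        self.cache["head"] = h + 1
        self.mem_desc["head"] = h + 1           # write back modified portion
        return element

q = QueueManager()
q.enqueue("a"); q.enqueue("b")
assert q.dequeue() == "a"
```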
20090276573Transient Transactional Cache - In one embodiment, a processor comprises an execution core, a level 1 (L1) data cache coupled to the execution core and configured to store data, and a transient/transactional cache (TTC) coupled to the execution core. The execution core is configured to generate memory read and write operations responsive to instruction execution, and to generate transactional read and write operations responsive to executing transactional instructions. The L1 data cache is configured to cache memory data accessed responsive to memory read and write operations to identify potentially transient data and to prevent the identified transient data from being stored in the L1 data cache. The TTC is also configured to cache transaction data accessed responsive to transactional read and write operations to track transactions. Each entry in the TTC is usable for transaction data and for transient data.11-05-2009
20090327609Performance based cache management - Methods and apparatus to manage cache memory are disclosed. In one embodiment, an electronic device comprises a first processing unit, a first cache memory, a first cache controller, and a power management module, wherein the power management module determines at least one operating parameter for the cache memory and passes the at least one operating parameter for the cache memory to a cache controller. Further, the first cache controller manages the cache memory according to the at least one operating parameter, and the power management module evaluates operating data for the cache memory from the cache controller and generates at least one modified operating parameter for the cache memory based on that operating data.12-31-2009
20090327608Accelerated resume from hibernation in a cached disk system - Various embodiments of the invention use a non-volatile (NV) memory to store hiberfile data before entering a hibernate state, and retrieve the data upon resume from hibernation. Unlike conventional systems, the reserve space in the NV memory (i.e., the erased blocks available to be used while in the run-time mode) may be used to store hiberfile data. Further, a write-through cache policy may be used to assure that all of the hiberfile data saved in cache will also be stored on the disk drive during the hibernation, so that if the cache and the disk drive are separated during hibernation, the full correct hiberfile data will still be available for a resume operation.12-31-2009
20090327607Apparatus and method for cache utilization - In some embodiments, an electronic system may include a cache located between a mass storage and a system memory, and code stored on the electronic system to prevent storage of stream data in the cache and to send the stream data directly between the system memory and the mass storage based on a comparison of first metadata of a first request for first information and pre-boot stream information stored in a previous boot context. Other embodiments are disclosed and claimed.12-31-2009
20080320223Cache controller and cache control method - A cache controller that writes data to a cache memory, includes a first buffer unit that retains data flowing in from outside to be written to the cache memory, a second buffer unit that retains a data piece to be currently written to the cache memory, among pieces of the data retained in the first buffer unit, and a write controlling unit that controls writing of the data piece retained in the second buffer unit to the cache memory.12-25-2008
20090177840System and Method for Servicing Inquiry Commands About Target Devices in Storage Area Network - Inquiry data received from sequential target devices is stored in a cache memory. In one embodiment, the cache memory is coupled to a router. In one embodiment, when the router receives from a host an inquiry command about a target, the router first checks to see if the inquiry command can be serviced from the cache. If so, the inquiry data about the target is retrieved from the cache and returned to the host. If not, the router checks to see if the target is busy. If not busy, the router routes the inquiry command to the target and stores the inquiry data returned by the target in the cache. If the target is busy, the router places the inquiry command in a queue. When the target becomes available, the router forwards the inquiry command to the target for processing, thereby keeping the inquiry command from timing out.07-09-2009
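The routing decision in 20090177840 might look like this sketch, with a trivial Target stand-in; the queueing details and any cache invalidation are omitted or invented.

```python
from collections import deque

class Target:
    def __init__(self):
        self.busy = False
    def handle_inquiry(self):
        return b"VENDOR MODEL REV"       # stand-in for real inquiry data

class InquiryRouter:
    def __init__(self, targets):
        self.targets = targets           # target id -> Target
        self.cache = {}                  # target id -> cached inquiry data
        self.pending = {t: deque() for t in targets}

    def inquiry(self, host, target_id):
        if target_id in self.cache:                # serviced from the cache
            return self.cache[target_id]
        target = self.targets[target_id]
        if target.busy:
            self.pending[target_id].append(host)   # queue instead of timing out
            return None
        data = target.handle_inquiry()             # route command to the target
        self.cache[target_id] = data               # store returned inquiry data
        return data

    def on_target_available(self, target_id):
        while self.pending[target_id]:             # forward queued inquiries
            self.pending[target_id].popleft()
            self.cache[target_id] = self.targets[target_id].handle_inquiry()

router = InquiryRouter({"disk0": Target()})
assert router.inquiry("hostA", "disk0") == b"VENDOR MODEL REV"
assert router.inquiry("hostB", "disk0") == b"VENDOR MODEL REV"  # cache hit
```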
20090210622COMPRESSED CACHE IN A CONTROLLER PARTITION - A method of extending functionality of a data storage facility by adding to the primary storage system new functions using extension function subsystems is disclosed. One example of extending the functionality includes compressing and caching data in a data storage facility to improve storage and access performance of the data storage facility. A primary storage system queries a data storage extension system for availability of data tracks. If the primary storage system does not receive a response or the data tracks from the data storage extension system, it continues caching by fetching data tracks from a disk storage system. The storage extension system manages compression/decompression of data tracks in response to messages from the primary storage system. Data tracks transferred from the data storage extension system to the primary storage system are marked as stale at the data storage extension system and are made available for deletion.08-20-2009
20090164728SEMICONDUCTOR MEMORY DEVICE AND SYSTEM USING SEMICONDUCTOR MEMORY DEVICE - A semiconductor memory device includes a data storage region which includes a plurality of unit data regions storing data, an information storage region which includes a plurality of unit information regions each storing information related to the data stored in associated one of the unit data regions, and an address generation circuit which generates an address designating one of the unit data regions and one of the unit information regions associated with each other.06-25-2009
20120198158Multi-Channel Cache Memory - A cache memory including: a plurality of parallel input ports configured to receive, in parallel, memory access requests wherein each parallel input port is operable to receive a memory access request for any one of a plurality of processing units; and a plurality of cache blocks wherein each cache block is configured to receive memory access requests from a unique one of the plurality of input ports such that there is a one-to-one mapping between the plurality of parallel input ports and the plurality of cache blocks and wherein each of the plurality of cache blocks is configured to serve a unique portion of an address space of the memory.08-02-2012
20120198157GUEST INSTRUCTION TO NATIVE INSTRUCTION RANGE BASED MAPPING USING A CONVERSION LOOK ASIDE BUFFER OF A PROCESSOR - A method for translating instructions for a processor. The method includes accessing a plurality of guest instructions that comprise multiple guest branch instructions, and assembling the plurality of guest instructions into a guest instruction block. The guest instruction block is converted into a corresponding native conversion block. The native conversion block is stored into a native cache. A mapping of the guest instruction block to corresponding native conversion block is stored in a conversion look aside buffer. Upon a subsequent request for a guest instruction, the conversion look aside buffer is indexed to determine whether a hit occurred, wherein the mapping indicates whether the guest instruction has a corresponding converted native instruction in the native cache. The converted native instruction is forwarded for execution in response to the hit.08-02-2012
20090113130SYSTEM AND METHOD FOR UPDATING DIRTY DATA OF DESIGNATED RAW DEVICE - A system and method for updating dirty data of a designated raw device is applied in the Linux system. A format of a command parameter for updating the dirty data of the designated raw device is determined, to obtain the command parameter with the correct format and transmit it into the Kernel of the Linux system. Then, a data structure of the designated raw device is sought based on the command parameter, to obtain a fast search tree of the designated raw device. Finally, all dirty data pages of the designated raw device are found by the fast search tree, and then are updated into a magnetic disk in a synchronous or asynchronous manner. Therefore, the dirty data of an individual raw device can be updated and written into the magnetic disk without interrupting the normal operation of the system, thereby ensuring secure, convenient, and highly efficient update of the dirty data.04-30-2009
20100306469PROCESSING METHOD AND APPARATUS - A processing apparatus externally receives a processing request and executes the requested processing. The processing apparatus transmits the result of the processing to a processing request source if a connection to the processing request source is maintained until the requested processing is executed. The processing apparatus stores the result of executing the processing in a memory if the connection to the processing request source is disconnected before the end of the requested processing. If, when a processing request is received, the requested processing has already been executed and its result is stored in the memory, the processing apparatus transmits the stored processing result to the processing request source.12-02-2010
20100281216METHOD AND APPARATUS FOR DYNAMICALLY SWITCHING CACHE POLICIES - A method implements a cache-policy switching module in a storage system. The storage system includes a cache memory to cache storage data. The cache memory uses a first cache configuration. The cache-policy switching module emulates the caching of the storage data with a plurality of cache configurations. Upon a determination that one of the plurality of cache configurations performs better than the first cache configuration, the cache-policy switching module automatically applies the better performing cache configuration to the cache memory for caching the storage data.11-04-2010
20100325357SYSTEMS AND METHODS FOR INTEGRATION BETWEEN APPLICATION FIREWALL AND CACHING - The present invention is directed towards systems and methods for integrating cache managing and application firewall processing in a networked system. In various embodiments, an integrated cache/firewall system comprises an application firewall operating in conjunction with a cache managing system in operation on an intermediary device. In various embodiments, the application firewall processes a received HTTP response to a request by a networked entity serviced by the intermediary device. The application firewall generates metadata from the HTTP response and stores the metadata in cache with the HTTP response. When a subsequent request hits in the cache, the metadata is identified to a user session associated with the subsequent request. In various embodiments, the application firewall can modify a cache-control header of the received HTTP response, and can alter the cookie-setting header of the cached HTTP response. The system and methods can significantly reduce processing time associated with application firewall processing of web content exchanged over a network.12-23-2010
20130013859Structure-Based Adaptive Document Caching - Techniques for generating, updating, and transmitting a structure-based data representation of a document are described herein. The structure-based adaptive document caching techniques may effectively eliminate redundancy in data transmission by exploiting structures of the document to be transmitted. The described techniques partition a document into a sequence of structures, differentiate between cache-worthy structures and cache-unworthy structures, and generate a structure-based data representation of the document. The techniques may transmit updated structures and instructions, instead of all data of the document, to update previously cached structures at a client device, thereby resulting in higher cache hit rates.01-10-2013
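A plausible rendering in C of the cache-worthiness check behind this idea: fingerprint each structure and resend only structures the client does not already hold. The FNV-1a hash and the flat fingerprint list are assumptions; the application does not specify a hash.

    #include <stdint.h>
    #include <stddef.h>

    /* FNV-1a: a cheap, stable fingerprint for one document structure. */
    static uint64_t fnv1a(const char *p, size_t n) {
        uint64_t h = 1469598103934665603ULL;
        for (size_t i = 0; i < n; i++) {
            h ^= (unsigned char)p[i];
            h *= 1099511628211ULL;
        }
        return h;
    }

    /* A structure need not be resent when the client already holds an
     * entry with the same fingerprint; only changed structures travel. */
    static int must_transmit(uint64_t fp, const uint64_t *client_fps, size_t n) {
        for (size_t i = 0; i < n; i++)
            if (client_fps[i] == fp) return 0;    /* cached: send a reference */
        return 1;                                 /* new or changed: send data */
    }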
20100332754System and Method for Caching Multimedia Data - Systems and methods are provided for caching media data to thereby enhance media data read and/or write functionality and performance. A multimedia apparatus comprises a cache buffer configured to be coupled to a storage device, wherein the cache buffer stores multimedia data, including video and audio data, read from the storage device. A cache manager is coupled to the cache buffer and is configured to cause the storage device to enter a reduced power consumption mode when the amount of data stored in the cache buffer reaches a first level.12-30-2010
20110010500Novel Context Instruction Cache Architecture for a Digital Signal Processor - Improved thrashing-aware and self-configuring cache architectures that reduce cache thrashing without increasing cache size or degrading cache hit access time, for a DSP. In one example embodiment, this is accomplished by selectively caching only the instructions having a higher probability of recurrence to considerably reduce cache thrashing.01-13-2011
20110022799METHOD TO SPEED UP ACCESS TO AN EXTERNAL STORAGE DEVICE AND AN EXTERNAL STORAGE SYSTEM - A method to speed up access to an external storage device comprises the steps of: 01-27-2011
20110029734Controller Integration - Roughly described, a data processing system comprises a central processing unit and a split network interface functionality, the split network interface functionality comprising: a first sub-unit collocated with the central processing unit and configured to at least partially form a series of network data packets for transmission to a network endpoint by generating data link layer information for each of those packets; and a second sub-unit external to the central processing unit and coupled to the central processing unit via an interconnect, the second sub-unit being configured to physically signal the series of network data packets over a network.02-03-2011
20100161901Correction of incorrect cache accesses - The application describes a data processor operable to process data, and comprising: a cache in which a storage location of a data item within said cache is identified by an address, said cache comprising a plurality of storage locations and said data processor comprising a cache directory operable to store a physical address indicator for each storage location comprising stored data; a hash value generator operable to generate a generated hash value from at least some of said bits of said address, said generated hash value having fewer bits than said address; a buffer operable to store a plurality of hash values relating to said plurality of storage locations within said cache; wherein in response to a request to access said data item said data processor is operable to compare said generated hash value with at least some of said plurality of hash values stored within said buffer and, in response to a match, to indicate an indicated storage location of said data item; and said data processor is operable to access one of said physical address indicators stored within said cache directory corresponding to said indicated storage location and, in response to said accessed physical address indicator not indicating said address, said data processor is operable to invalidate said indicated storage location within said cache.06-24-2010
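The hash-filtered lookup with invalidation on a false match can be sketched as below; the 8-bit XOR-fold hash, the 4-way layout, and the 64-byte lines are assumed parameters, not taken from the application.

    #include <stdbool.h>
    #include <stdint.h>

    enum { WAYS = 4, LINE_SHIFT = 6 };    /* 4 ways, 64-byte lines (assumed) */

    static uint8_t  hash_buf[WAYS];       /* short hashes: fast to compare   */
    static uint32_t full_tag[WAYS];       /* cache directory (physical tags) */
    static bool     valid[WAYS];

    /* Hypothetical 8-bit hash: XOR-fold the line address. */
    static uint8_t addr_hash(uint32_t addr) {
        addr >>= LINE_SHIFT;
        addr ^= addr >> 8;
        addr ^= addr >> 16;
        return (uint8_t)addr;
    }

    /* Returns the way holding 'addr', or -1. A way whose hash matches but
     * whose full tag does not is an incorrect access and is invalidated,
     * mirroring the correction step in the abstract. */
    static int lookup(uint32_t addr) {
        uint8_t  h   = addr_hash(addr);
        uint32_t tag = addr >> LINE_SHIFT;
        for (int w = 0; w < WAYS; w++) {
            if (!valid[w] || hash_buf[w] != h) continue;   /* fast filter */
            if (full_tag[w] == tag) return w;              /* true hit    */
            valid[w] = false;        /* false hash match: invalidate line */
        }
        return -1;
    }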
20110035550Sharing Memory Resources of Wireless Portable Electronic Devices - It is not uncommon for two or more wireless-enabled devices to spend most of their time in close proximity to one another. For example, a person may routinely carry a personal digital assistant (PDA) and a portable digital audio/video player, or a cellphone and a PDA, or a smartphone and a gaming device. When it is desirable to increase the memory storage capacity of a first such device, it may be possible to use memory on one or more of the other devices to temporarily store data from the first device.02-10-2011
20110040938ELECTRONIC APPARATUS AND METHOD OF CONTROLLING THE SAME - Disclosed are an electronic apparatus and a method of controlling the same, the electronic apparatus comprising: a nonvolatile memory unit in which an application is stored; a volatile memory unit in which data based on execution of the application is stored; and a controller which stops supplying power to the volatile memory unit when the electronic apparatus is turned off if a remaining capacity of the volatile memory unit reaches a threshold value for initializing the volatile memory unit, and keeps the power supplied to the volatile memory unit to make the volatile memory unit retain the data based on the execution of the application even when the electronic apparatus is turned off if the remaining capacity does not reach the threshold value. With this, a memory leak that may be generated when using an STR mode can be effectively prevented.02-17-2011
20090049243Caching Dynamic Content - Aspects of the subject matter described herein relate to caching dynamic content. In aspects, caching components on a requesting entity and on a content server cache requested content. When a request for content similar to cached content is received, the requesting entity sends a request for the content and an identifier of similar cached content to the content server. The content server obtains the requested content and determines the differences between the requested content and the cached content. The content server then sends the differences to the requesting entity. The requesting entity uses the differences and its cached content to construct the requested content and provides the requested content.02-19-2009
20100153645Cache control apparatus and method - A cache control apparatus and method are provided. The cache control apparatus may include a parameter input unit to receive a first parameter corresponding to a block-level cache in a main memory, a cache index extraction unit to extract a cache index from the first parameter, a cache tag extraction unit to extract a cache tag from the first parameter, and a comparison unit to determine whether a cache hit occurs using the cache index and the cache tag.06-17-2010
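The index/tag split the abstract describes is the standard one; a minimal C sketch follows, with the geometry (64-byte blocks, 256 sets) and all names being illustrative assumptions rather than the application's actual parameters.

    #include <stdint.h>

    /* Assumed geometry (not from the application): 64-byte blocks, 256 sets. */
    enum { OFFSET_BITS = 6, INDEX_BITS = 8, NUM_SETS = 1 << INDEX_BITS };

    typedef struct { uint32_t index, tag; } cache_key;

    /* Extraction units: split the first parameter into index and tag. */
    static cache_key extract(uint32_t param) {
        cache_key k;
        k.index = (param >> OFFSET_BITS) & (NUM_SETS - 1);
        k.tag   = param >> (OFFSET_BITS + INDEX_BITS);
        return k;
    }

    /* Comparison unit: a hit needs a valid line whose stored tag matches. */
    static int is_hit(const uint32_t tags[NUM_SETS],
                      const uint8_t valid[NUM_SETS], cache_key k) {
        return valid[k.index] && tags[k.index] == k.tag;
    }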
20110145500SEMICONDUCTOR DEVICE AND DATA PROCESSING SYSTEM - A high-speed, low-cost data processing system capable of ensuring expandability of memory capacity and having excellent usability while keeping constant latency is provided. The data processing system is configured to include a data processing device, a volatile memory, and a non-volatile memory. As the data processing device, the volatile memory, and the non-volatile memory are connected in series and the number of connection signals is reduced, the speed is increased while keeping expandability of memory capacity. The data processing device measures latency and performs a latency correcting operation to keep the latency constant. When data in the non-volatile memory is transferred to the volatile memory, error correction is performed to improve reliability. The data processing system formed of this plurality of chips is configured as a data processing system module in which the chips are stacked on one another and are connected by a ball grid array (BGA) or a technology of wiring these chips.06-16-2011
20110145499ASYNCHRONOUS FILE OPERATIONS IN A SCALABLE MULTI-NODE FILE SYSTEM CACHE FOR A REMOTE CLUSTER FILE SYSTEM - Asynchronous file operations in a scalable multi-node file system cache for a remote cluster file system, is provided. One implementation involves maintaining a scalable multi-node file system cache in a local cluster file system, and caching local file data in the cache by fetching file data on demand from the remote cluster file system into the cache over the network. The local file data corresponds to file data in the remote cluster file system. Local file information is asynchronously committed from the cache to the remote cluster file system over the network.06-16-2011
20110145498INSTRUMENTATION OF HARDWARE ASSISTED TRANSACTIONAL MEMORY SYSTEM - Monitoring performance of one or more architecturally significant processor caches coupled to a processor. The methods include executing an application on one or more processors coupled to one or more architecturally significant processor caches, where the application utilizes the architecturally significant portions of the architecturally significant processor caches. The methods further include at least one of generating metrics related to performance of the architecturally significant processor caches; implementing one or more debug exceptions related to performance of the architecturally significant processor caches; or implementing one or more transactional breakpoints related to performance of the architecturally significant processor caches as a result of utilizing the architecturally significant portions of the architecturally significant processor caches.06-16-2011
20100241804METHOD AND SYSTEM FOR FAST RETRIEVAL OF RANDOM UNIVERSALLY UNIQUE IDENTIFIERS THROUGH CACHING AND PARALLEL REGENERATION - In general, the invention relates to a system that includes a UUID cache and a UUID caching mechanism. The UUID caching mechanism is configured to, using a first thread, monitor the number of UUIDs stored in the UUID cache, determine that the number of UUIDs stored in the UUID cache is less than a first threshold, request a first set of UUIDs from a UUID generator, receive the first set of UUIDs from the UUID generator, and store the first set of UUIDs received from the UUID generator in the UUID cache. The UUID caching mechanism is further configured to provide a second set of UUIDs to a first application using a second thread, where at least one of the UUIDs in the second set of UUIDs is from the first set of UUIDs, and where the first thread and the second thread execute concurrently.09-23-2010
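The low-watermark refill logic might look like the following single-threaded C sketch; the capacities are invented, and where the sketch calls refill_if_low() inline, the patent runs the monitor on its own thread concurrently with the consumer.

    #include <stddef.h>

    enum { CAP = 1024, LOW_WATERMARK = 256, BATCH = 512 };

    static unsigned long long uuids[CAP];
    static size_t count;

    /* Stand-in for the UUID generator service. */
    static unsigned long long next_uuid(void) {
        static unsigned long long n = 1;
        return n++;            /* sequential here; unique/random in practice */
    }

    /* Monitor duty: top the cache up when it drops below the watermark. */
    static void refill_if_low(void) {
        if (count >= LOW_WATERMARK) return;
        size_t want = (BATCH < CAP - count) ? BATCH : CAP - count;
        for (size_t i = 0; i < want; i++)
            uuids[count++] = next_uuid();
    }

    /* Consumer duty: hand a UUID to an application. */
    static unsigned long long take_uuid(void) {
        refill_if_low();       /* the patent runs this on a separate thread */
        return uuids[--count];
    }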
20100228918CONFIGURABLE LOGIC INTEGRATED CIRCUIT HAVING A MULTIDIMENSIONAL STRUCTURE OF CONFIGURABLE ELEMENTS - Programming of modules which can be reprogrammed during operation is described. Partitioning of code sequences is also described.09-09-2010
20110078379STORAGE CONTROL UNIT AND DATA MANAGEMENT METHOD - An I/O processor determines whether or not the amount of dirty data on a cache memory exceeds a threshold value and, if the determination is that this threshold value has been exceeded, writes a portion of the dirty data of the cache memory to a storage device. If a power source monitoring and control unit detects a voltage abnormality of the supplied power, the power monitoring and control unit maintains supply of power using power from a battery, so that a processor receives supply of power from the battery and saves the dirty data stored on the cache memory to a non-volatile memory.03-31-2011
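A compact C sketch of the two duties described, threshold-driven destage and battery-backed save; the page counts are assumed values and the actual disk and NVRAM I/O is elided.

    #include <stddef.h>

    enum { DIRTY_THRESHOLD = 3072, DESTAGE_BATCH = 256 };   /* pages, assumed */

    static size_t dirty_pages;   /* dirty data currently in cache memory */

    static void write_pages_to_disk(size_t n) { (void)n; /* disk I/O elided   */ }
    static void save_pages_to_nvram(size_t n) { (void)n; /* NVRAM copy elided */ }

    /* I/O-processor duty: keep dirty data at or below the threshold. */
    static void destage_if_needed(void) {
        if (dirty_pages <= DIRTY_THRESHOLD) return;
        size_t n = dirty_pages - DIRTY_THRESHOLD;
        if (n > DESTAGE_BATCH) n = DESTAGE_BATCH;
        write_pages_to_disk(n);
        dirty_pages -= n;
    }

    /* Power-monitor duty: on a voltage abnormality, save everything dirty
     * to non-volatile memory while running from the battery. */
    static void on_voltage_abnormality(void) {
        save_pages_to_nvram(dirty_pages);
        dirty_pages = 0;
    }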
20110078378METHOD FOR GENERATING PROGRAM AND METHOD FOR OPERATING SYSTEM - An information processing apparatus sequentially selects a function whose execution frequency is high as a selected function that is to be stored in an internal memory, in a source program having a hierarchy structure. The information processing apparatus allocates the selected function to a memory area of the internal memory, allocates a function that is not the selected function and is called from the selected function to an area close to the memory area of the internal memory, and generates an internal load module. The information processing apparatus allocates a remaining function to an external memory coupled to a processor and generates an external load module. Then, a program executed by the processor having the internal memory is generated. By allocating the function with a high execution frequency to the internal memory, it is possible to execute the program at high speed, which may improve performance of a system.03-31-2011
20100191910APPARATUS AND CIRCUITRY FOR MEMORY-BASED COLLECTION AND VERIFICATION OF DATA INTEGRITY INFORMATION - Apparatus and circuitry are provided for supporting collection and/or verification of data integrity information. A circuitry in a storage controller is provided for creating and/or verifying a Data Integrity Block (“DIB”). The circuitry comprises a processor interface for coupling with the processor of the storage controller. The circuitry also comprises a memory interface for coupling with a cache memory of the storage controller. By reading a plurality of Data Integrity Fields (“DIFs”) from the cache memory through the memory interface based on information received from the processor, the DIB is created such that each DIF in the DIB corresponds to a respective data block.07-29-2010
20110087840EFFICIENT LINE AND PAGE ORGANIZATION FOR COMPRESSION STATUS BIT CACHING - One embodiment of the present invention sets forth a technique for performing a memory access request to compressed data within a virtually mapped memory system comprising an arbitrary number of partitions. A virtual address is mapped to a linear physical address, specified by a page table entry (PTE). The PTE is configured to store compression attributes, which are used to locate compression status for a corresponding physical memory page within a compression status bit cache. The compression status bit cache operates in conjunction with a compression status bit backing store. If compression status is available from the compression status bit cache, then the memory access request proceeds using the compression status. If the compression status bit cache misses, then the miss triggers a fill operation from the backing store. After the fill completes, memory access proceeds using the newly filled compression status information.04-14-2011
20110087839APPARATUSES, METHODS AND SYSTEMS FOR A SMART ADDRESS PARSER - The apparatus, methods and systems for a smart address parser (hereinafter, “SAP”) described herein implement a text parser whereby users may enter a text string, such as manually via an input field. The SAP processes the input address string to extract address elements for storage, display, reporting, and/or use in a wide variety of back-end applications. In various embodiments and implementations, the SAP may facilitate: separation and identification of address components regardless of the order in which they are supplied in the input address string; supplementation of missing address information; correction and/or recognition of misspelled terms, abbreviations, alternate names, and/or the like variants of address elements; recognition of unique addresses based on minimal but sufficient input identifiers; and/or the like.04-14-2011
20100131711SERIAL INTERFACE CACHE CONTROLLER, CONTROL METHOD AND MICRO-CONTROLLER SYSTEM USING THE SAME - A serial interface cache controller, control method and micro-controller system using the same. The controller includes L rows of address tags, wherein each row of address tags includes an M-bit block tag and an N-bit valid area tag. The M-bit block tag records an address block of T-byte data stored in an internal cache memory, and the N-bit valid area tag records valid bit sectors in the address block. Each valid bit sector has the size of T/N bytes. The controller needs to read only T/N bytes of data from an external memory to the internal cache memory each time, without reading the T-byte data of the whole address block. Because the T-byte data of the whole address block does not need to be read by the micro-controller, the waiting time of the micro-controller may be shortened, and the performance can be increased.05-27-2010
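Reading the row layout literally (an M-bit block tag plus an N-bit per-sector valid bitmap), a C sketch of the hit test and sector fill could look like this; T = 256, N = 8, and the direct-mapped row selection are assumptions.

    #include <stdbool.h>
    #include <stdint.h>

    /* Assumed sizes: T = 256-byte blocks split into N = 8 sectors of
     * T/N = 32 bytes, and L = 16 tag rows selected direct-mapped. */
    enum { L_ROWS = 16, N_SECTORS = 8, SECTOR_BYTES = 32 };

    typedef struct {
        uint32_t block_tag;    /* M-bit tag: which address block is cached */
        uint8_t  valid_mask;   /* N-bit area tag: one bit per T/N sector   */
    } tag_row;

    static tag_row rows[L_ROWS];

    /* Hit iff the row tracks this block AND the sector's valid bit is set. */
    static bool sector_hit(uint32_t block, unsigned sector) {
        const tag_row *r = &rows[block % L_ROWS];
        return r->block_tag == block && (r->valid_mask & (1u << sector));
    }

    /* On a miss, only one T/N-byte sector is fetched over the serial
     * interface, never the whole T-byte block. */
    static void fill_sector(uint32_t block, unsigned sector) {
        tag_row *r = &rows[block % L_ROWS];
        if (r->block_tag != block) {          /* new block: restart the row */
            r->block_tag  = block;
            r->valid_mask = 0;
        }
        /* ... read SECTOR_BYTES from the external memory here ... */
        r->valid_mask |= (uint8_t)(1u << sector);
    }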
20100262779Program And Data Annotation For Hardware Customization And Energy Optimization - Technologies are generally described herein for supporting program and data annotation for hardware customization and energy optimization. A code block to be annotated may be examined and a hardware customization may be determined to support a specified quality of service level for executing the code block with reduced energy expenditure. Annotations may be determined as associated with the determined hardware customization. An annotation may be provided to indicate using the hardware customization while executing the code block. Examining the code block may include one or more of performing a symbolic analysis, performing an empirical observation of an execution of the code block, performing a statistical analysis, or any combination thereof. A data block to be annotated may also be examined. One or more additional annotations to be associated with the data block may be determined.10-14-2010
20090049246APPARATUS AND METHOD OF CACHING FRAME - An apparatus and method of caching a frame is provided. The method of caching a frame includes receiving information on a frame to be cached from a main storage unit, setting an initial value of a specified mode using the received information, and caching the frame from the main storage unit using the specified mode.02-19-2009
20110252199Data Placement Optimization Using Data Context Collected During Garbage Collection - Mechanisms are provided for data placement optimization during runtime of a computer program. The mechanisms detect cache misses in a cache of the data processing system and collect cache miss information for objects of the computer program. Data context information is generated for an object in an object access sequence of the computer program. The data context information identifies one or more additional objects accessed as part of the object access sequence in association with the object. The cache miss information is correlated with the data context information of the object. Data placement optimization is performed on the object, in the object access sequence, with which the cache miss information is associated. The data placement optimization places connected objects in the object access sequence in close proximity to each other in a memory structure of the data processing system.10-13-2011
20100064105Leveraging Synchronous Communication Protocols to Enable Asynchronous Application and Line-of-Business Behaviors - Methods and systems of leveraging synchronous communication protocols to enable asynchronous application and line of business behaviors. An application platform may be provided and configured to provide a pending state for any synchronous operation. The pending state may indicate that the operation has not been completed yet. For an application which may know how to track an operation that has a pending state, the application may control when the operation enters and exits the pending state. The application may communicate to the application platform to hold off on other operations dependent upon the pending operation when the pending operation is not complete. For an application which does not know how to track an operation that has a pending state, the application platform may ignore the pending state of the operation and proceed to other operations. Accordingly, the synchronous user experience is preserved where a straightforward, down-level user interface and experience is appropriate. The user interface and experience is also extended when an application knows how to interpret and present the asynchronous nature of various underlying systems.03-11-2010
20090216948METHOD FOR SUBSTANTIALLY UNINTERRUPTED CACHE READOUT - A memory device capable of sequentially outputting multiple pages of cached data while mitigating any interruption typically caused by fetching and transferring operations. The memory device outputs cached data from a first page while data from a second page is fetched into sense amplifier circuitry. When the outputting of the first page reaches a predetermined transfer point, a portion of the fetched data from the second page is transferred into the cache at the same time the remainder of the cached first page is being output. The remainder of the second page is transferred into the cache after all of the data from the first page is output while the outputting of the first portion of the second page begins with little or no interruption.08-27-2009
20090216947SYSTEM, METHOD AND PROCESSOR FOR ACCESSING DATA AFTER A TRANSLATION LOOKASIDE BUFFER MISS - Data is accessed in a multi-level hierarchical memory system. A request for data is received, including a virtual address for accessing the data. A translation buffer is queried to obtain an absolute address corresponding to the virtual address. Responsive to the translation buffer not containing an absolute address corresponding to the virtual address, the absolute address is obtained from a translation unit. A directory look-up is performed with the absolute address to determine whether a matching absolute address exists in the directory. A fetch request for the requested data is sent to a next level in the multi-level hierarchical memory system. Processing of the fetch request by the next level occurs in parallel with the directory lookup. The requested data is received in the primary cache to make the requested data available to be written to the primary cache.08-27-2009
20100070708ARITHMETIC PROCESSING APPARATUS AND METHOD - An apparatus includes a TLB storing a part of a TSB area included in a memory accessed by the apparatus. The TSB area stores an address translation pair for translating a virtual address into a physical address. The apparatus further includes a cache memory that temporarily stores the pair; a storing unit that stores a starting physical address of the pair stored in the memory unit; a calculating unit that calculates, based on the starting physical address and a virtual address to be converted, a TSB pointer used in obtaining from the TSB area a corresponding address translation pair corresponding to the virtual address to be converted; and an obtaining unit that obtains the corresponding pair from the TSB area using the TSB pointer calculated and stores the corresponding pair in the cache memory, if the corresponding pair is not retrieved from the TLB or the cache memory.03-18-2010
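The TSB pointer computation is essentially an index into a physically addressed table; a C sketch under assumed geometry (8 KiB pages, a 512-entry TSB of 16-byte pairs) follows. The sizes are illustrative, not the application's.

    #include <stdint.h>

    /* Assumed geometry: 8 KiB pages, a 512-entry TSB of 16-byte pairs. */
    enum { PAGE_SHIFT = 13, TSB_ENTRIES = 512, ENTRY_BYTES = 16 };

    /* TSB pointer = stored base physical address + (virtual page number
     * masked to the table size) * entry size. The obtaining unit reads
     * the translation pair at this address and installs it in the cache. */
    static uint64_t tsb_pointer(uint64_t tsb_base_pa, uint64_t va) {
        uint64_t vpn = va >> PAGE_SHIFT;
        uint64_t idx = vpn & (TSB_ENTRIES - 1);
        return tsb_base_pa + idx * ENTRY_BYTES;
    }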
20110153938SYSTEMS AND METHODS FOR MANAGING STATIC PROXIMITY IN MULTI-CORE GSLB APPLIANCE - The present invention is directed towards systems and methods for providing static proximity load balancing via a multi-core intermediary device. An intermediary device providing global server load balancing identifies a size of a location database comprising static proximity information. The intermediary device stores the location database to an external storage of the intermediary device responsive to determining the size of the location database is greater than a predetermined threshold. A first packet processing engine on the device receives a domain name service request for a first location, determines that proximity information for the first location is not stored in a first memory cache, transmits a request to a second packet processing engine for proximity information of the first location, and transmits a request to the external storage for proximity information of the first location responsive to the second packet processing engine not having the proximity information.06-23-2011
20080256295Method of Increasing Boot-Up Speed - There is provided a method of increasing boot-up speed in a computer system.10-16-2008
20100057993INTER-FRAME TEXEL CACHE - Methods, apparatuses, and systems are presented for caching. A cache memory area may be used for storing data from memory locations in an original memory area. The cache memory area may be used in conjunction with a repeatedly updated record of storage associated with the cache memory area. The repeatedly updated record of storage can thus provide a history of data storage associated with the cache memory area. The cache memory area may be loaded with entries previously stored in the cache memory area, by utilizing the repeatedly updated record of storage. In this manner, the record may be used to “warm up” the cache memory area, loading it with data entries that were previously cached and may be likely to be accessed again if repetition of memory accesses exists in the span of history captured by the repeatedly updated record of storage.03-04-2010
20110153935NUMA-AWARE SCALING FOR NETWORK DEVICES - The present disclosure describes a method and apparatus for network traffic processing in a non-uniform memory access architecture system. The method includes allocating a Tx/Rx Queue pair for a node, the Tx/Rx Queue pair allocated in a local memory of the node. The method further includes routing network traffic to the allocated Tx/Rx Queue pair. The method may include designating a core in the node for network traffic processing. Of course, many alternatives, variations and modifications are possible without departing from this embodiment.06-23-2011
20110153939SEMICONDUCTOR DEVICE, CONTROLLER ASSOCIATED THEREWITH, SYSTEM INCLUDING THE SAME, AND METHODS OF OPERATION - In one embodiment, the semiconductor device includes a data control unit configured to selectively process data for writing to a memory. The data control unit is configured to enable a processing function from a group of processing functions based on a mode register command during a write operation, the group of processing functions including at least three processing functions. The enabled processing function may be performed based on a signal received over a single pin associated with the group of processing functions. In another embodiment, the semiconductor device includes a data control unit configured to process data read from a memory. The data control unit is configured to enable a processing function from a group of processing functions based on a mode register command during a read operation. Here, the group of processing functions includes at least two processing functions.06-23-2011
20110153940METHOD AND APPARATUS FOR COMMUNICATING DATA BETWEEN PROCESSORS IN MOBILE TERMINAL - A data communication method between processors in a portable terminal and an apparatus thereof are provided. The method includes storing data to be transmitted from a first processor to a second processor in a transmission buffer, determining a size of a free space in a shared memory, sequentially transmitting the data stored in the transmission buffer to the shared memory in units of the size of the free space to the shared memory, and reading out the data transmitted to the shared memory and storing the read data in a reception buffer by a second processor.06-23-2011
20110153937SYSTEMS AND METHODS FOR MAINTAINING TRANSPARENT END TO END CACHE REDIRECTION - The present disclosure presents systems and methods for maintaining original source and destination IP addresses of a request while performing intermediary cache redirection. An intermediary receives a request from a client destined to a server identifying a client IP address as a source IP address and a server IP address as a destination IP address. The intermediary transmits the request to a cache server, the request maintaining the original IP addresses and identifying a MAC address of the cache server as the destination MAC address. The intermediary receives the request back from the cache server responsive to a cache miss, the received request maintaining the original source and destination IP addresses. The intermediary identifies that this request is coming from the cache server via one or more data link layer properties of the transport layer connection. The intermediary then transmits to the server the request identifying the client IP address as the source IP address and the server IP address as the destination IP address.06-23-2011
20110153936Aggregate Symmetric Multiprocessor System - An aggregate symmetric multiprocessor (SMP) data processing system includes a first SMP computer including at least first and second processing units and a first system memory pool and a second SMP computer including at least third and fourth processing units and second and third system memory pools. The second system memory pool is a restricted access memory pool inaccessible to the fourth processing unit and accessible to at least the second and third processing units, and the third system memory pool is accessible to both the third and fourth processing units. An interconnect couples the second processing unit in the first SMP computer for load-store coherent, ordered access to the second system memory pool in the second SMP computer, such that the second processing unit in the first SMP computer and the second system memory pool in the second SMP computer form a synthetic third SMP computer.06-23-2011
20110078376METHODS AND APPARATUS FOR OBTAINING INTEGRATED CONTENT FROM MULTIPLE NETWORKS - A method and apparatus for obtaining location content from multiple networks is disclosed. The method may comprise: obtaining coarse location content at a wireless communication device (WCD) from a first network using a first protocol, wherein the coarse location content includes information defining locations of geographic coverage regions for one or more second networks which use a second protocol, obtaining WCD location information, determining from the WCD location information and the coarse location content if the WCD is within the geographic coverage region of a second network, accessing the determined second network using the second protocol, receiving from the accessed second network fine location content, and generating an integrated location content item by combining the coarse location content with the fine location content.03-31-2011
20120303897CONFIGURABLE SET ASSOCIATIVE CACHE WAY ARCHITECTURE - System and method for dynamically configuring a set associative cache way architecture based on an application is disclosed. In one embodiment, a memory size required for the application is determined by a cache controller. Further, a required cache way size and a required number of cache ways in a set associative cache way are computed based on the determined memory size. Furthermore, the set associative cache way architecture is configured to power off selected areas of the set associative cache way based on the computed required cache way size and the required number of cache ways for running the application.11-29-2012
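The way-count computation reduces to a ceiling division clamped to the hardware's way count; a C sketch with assumed sizes (16 KiB ways, 8 ways maximum, neither taken from the application):

    #include <stdint.h>

    enum { WAY_SIZE = 16 * 1024, MAX_WAYS = 8 };   /* assumed hardware sizes */

    /* Ways the application needs: ceil(required_memory / way_size),
     * clamped to what the hardware provides; at least one way stays on. */
    static unsigned ways_needed(uint32_t required_bytes) {
        uint32_t n = (required_bytes + WAY_SIZE - 1) / WAY_SIZE;
        if (n == 0) n = 1;
        return (n > MAX_WAYS) ? MAX_WAYS : (unsigned)n;
    }

    /* Power gating: one enable bit per way; cleared ways are powered off. */
    static uint8_t way_enable_mask(unsigned ways) {
        return (uint8_t)((1u << ways) - 1);
    }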
20120303896INTELLIGENT CACHING - Intelligent caching includes defining a cache policy for a data source, selecting parameters of data in the data source to monitor, the parameters forming a portion of the cache policy, and monitoring the data source for an event based on the cache policy. Upon an occurrence of an event, the intelligent caching also includes retrieving target data subject to the cache policy from a first location and moving the target data to a second location.11-29-2012
20120303895HANDLING HIGH PRIORITY REQUESTS IN A SEQUENTIAL ACCESS STORAGE DEVICE HAVING A NON-VOLATILE STORAGE CACHE - Provided are a computer program product, system, and method for handling high priority requests in a sequential access storage device. Received modified tracks for write requests are cached in a non-volatile storage device integrated with the sequential access storage device. A destage request is added to a request queue for a received write request having modified tracks for the sequential access storage medium cached in the non-volatile storage device. A read request indicating a priority is received. A determination is made of a priority of the read request as having a first priority or a second priority. The read request is added to the request queue in response to determining that the determined priority is the first priority. The read request is processed at a higher priority than the read and destage requests in the request queue in response to determining that the determined priority is the second priority.11-29-2012
20110072211Hardware For Parallel Command List Generation - A method for providing state inheritance across command lists in a multi-threaded processing environment. The method includes receiving an application program that includes a plurality of parallel threads; generating a command list for each thread of the plurality of parallel threads; causing a first command list associated with a first thread of the plurality of parallel threads to be executed by a processing unit; and causing a second command list associated with a second thread of the plurality of parallel threads to be executed by the processing unit, where the second command list inherits from the first command list state associated with the processing unit.03-24-2011
20130159625INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD - An information processing device includes an internal memory which is capable of performing processing faster than an external memory, and a memory controller which controls data transfer between the internal memory and the external memory. The memory controller controls a first data transfer and a second data transfer. The first data transfer is a data transfer from the external memory to the internal memory, and the second data transfer is a data transfer from the internal memory to the external memory. The second data transfer transfers a part of the data area of the internal memory transferred in the first data transfer, and the data area which is read out in a non-continuous way from the internal memory is transferred in place to the external memory in the second data transfer.06-20-2013
20130159624STORING THE MOST SIGNIFICANT AND THE LEAST SIGNIFICANT BYTES OF CHARACTERS AT NON-CONTIGUOUS ADDRESSES - In an embodiment, an indicator is set to indicate that all of a plurality of most significant bytes of characters in a character array are zero. A first index and an input character are received. The input character comprises a first most significant byte and a first least significant byte. The first most significant byte is stored at a first storage location and the first least significant byte is stored at a second storage location, wherein the first storage location and the second storage location have non-contiguous addresses. If the first most significant byte does not equal zero, the indicator is set to indicate that at least one of a plurality of most significant bytes of the characters in the character array is non-zero. The character array comprises the first most significant byte and the first least significant byte.06-20-2013
20100262778Empirically Based Dynamic Control of Transmission of Victim Cache Lateral Castouts - In response to a data request, a victim cache line is selected for castout from a lower level cache, and a target lower level cache of one of the plurality of processing units is selected. A determination is made whether the selected target lower level cache has provided more than a threshold number of retry responses to lateral castout (LCO) commands of the first lower level cache, and if so, a different target lower level cache is selected. The first processing unit thereafter issues a LCO command on the interconnect fabric. The LCO command identifies the victim cache line to be castout and indicates that the target lower level cache is an intended destination of the victim cache line. In response to a successful coherence response to the LCO command, the victim cache line is removed from the first lower level cache and held in the second lower level cache.10-14-2010
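The retry-threshold target selection might be sketched as follows in C; the cache count, threshold value, and round-robin fallback are assumptions chosen for illustration.

    enum { NUM_TARGETS = 8, RETRY_THRESHOLD = 4 };

    static unsigned retries[NUM_TARGETS];   /* retry responses per target */

    /* Pick a target lower-level cache for the castout, skipping any that
     * has retried our LCO commands more than the threshold allows. */
    static int pick_lco_target(int preferred) {
        int t = preferred;
        for (int i = 0; i < NUM_TARGETS; i++) {
            if (retries[t] <= RETRY_THRESHOLD)
                return t;
            t = (t + 1) % NUM_TARGETS;      /* select a different target */
        }
        return -1;                          /* all saturated: no castout */
    }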
20100262777STORAGE APPARATUS AND METHOD FOR ELIMINATING REDUNDANT DATA STORAGE USING STORAGE APPARATUS - A storage apparatus 10-14-2010
20120203967REDUCING INTERPROCESSOR COMMUNICATIONS PURSUANT TO UPDATING OF A STORAGE KEY - Processing within a multiprocessor computer system is facilitated by: deciding by a processor, pursuant to processing of a request to update a previous storage key to a new storage key, whether to purge the previous storage key from, or update the previous storage key in, local processor cache of the multiprocessor computer system. The deciding includes comparing a bit value(s) of one or more required components of the previous storage key to respective predefined allowed stale value(s) for the required component(s), and leaving the previous storage key in local processor cache if the bit value(s) of the required component(s) in the previous storage key equals the respective predefined allowed stale value(s) for the required component(s). By selectively leaving the previous storage key in local processor cache, interprocessor communication pursuant to processing of the request to update the previous storage key to the new storage key is minimized.08-09-2012
20090240887INFORMATION PROCESSING UNIT, PROGRAM, AND INSTRUCTION SEQUENCE GENERATION METHOD - An information processing unit includes at least one cache memory provided between an instruction execution section and a storage section and a control section controlling content of address information based on a result of comparison processing between an address requested by a hardware prefetch request issuing section for memory access and address information held in an address information holding section, wherein when the control section causes the address information holding section to hold address information or address information in the address information holding section is updated, overwrite processing on the address information is inhibited for a predetermined time.09-24-2009
20080307162PRELOAD CONTROLLER, PRELOAD CONTROL METHOD FOR CONTROLLING PRELOAD OF DATA BY PROCESSOR TO TEMPORARY MEMORY, AND PROGRAM - A preload controller for controlling a bus access device that reads out data from a main memory via a bus and transfers the readout data to a temporary memory, including a first acquiring device to acquire access hint information which represents a data access interval to the main memory, a second acquiring device to acquire system information which represents a transfer delay time in transfer of data via the bus by the bus access device, a determining device to determine a preload unit count based on the data access interval represented by the access hint information and the transfer delay time represented by the system information, and a management device to instruct the bus access device to read out data for the preload unit count from the main memory and to transfer the readout data to the temporary memory ahead of a data access of the data.12-11-2008
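One plausible reading of "determine a preload unit count based on the data access interval and the transfer delay time" is a ceiling ratio, so the prefetch stream stays ahead of consumption; the formula below is an interpretation, not the patent's stated rule.

    /* With a data access every 'interval' cycles and a bus transfer delay
     * of 'delay' cycles, issuing ceil(delay / interval) preload units keeps
     * the temporary memory filled faster than the processor drains it. */
    static unsigned preload_unit_count(unsigned interval_cycles,
                                       unsigned delay_cycles) {
        if (interval_cycles == 0)
            interval_cycles = 1;            /* guard against division by 0 */
        return (delay_cycles + interval_cycles - 1) / interval_cycles;
    }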
20080229019METHOD AND SYSTEM FOR EFFICIENT FRAGMENT CACHING - Methods for serving data include maintaining an incomplete version of an object at a server and at least one fragment at the server. In response to a request for the object from a client, the incomplete version of the object, an identifier for a fragment comprising a portion of the object, and a position for the fragment within the object are sent to the client. After receiving the incomplete version of the object, the identifier, and the position, the client requests the fragment from the server using the identifier. The object is constructed by including the fragment in the incomplete version of the object in a location specified by the position.09-18-2008
20080229018Save data discrimination method, save data discrimination apparatus, and a computer-readable medium storing a save data discrimination program - A save data discrimination method saves calculation results including an element which is periodically saved when a computer executes a program repeating the same arithmetic process. The method includes analyzing a loop structure of the program from a source code of the program to detect a main loop of the arithmetic process repeated in the program and a sub-loop included in the main loop, determining a point of entrance to the main loop as a checkpoint that is a point for saving data of the calculation results, and analyzing the contents of the arithmetic process described in the main loop to identify reference-first elements, which are elements only referred to and elements defined after being referred to, as data to be saved at the checkpoint determined at the point of entrance.09-18-2008
20110161584SYSTEM AND METHOD FOR INQUIRY CACHING IN A STORAGE AREA NETWORK - A system and method for servicing an inquiry command from a host device requesting inquiry data about a sequential device on a storage area network. The inquiry data may be cached by a circuitry coupled to the host device and the sequential device. The circuitry may reside in a router. In some embodiments, depending upon whether the sequential device is available to process the inquiry command, the circuitry may forward the inquiry command to the sequential device or process the inquiry command itself, utilizing a cached version of the inquiry data. The cached version may include information indicating that the sequential device is not available. In some embodiments, regardless whether the sequential device is available, the circuitry may process the inquiry command and return the inquiry data from a cache memory.06-30-2011
20080313402VIRTUAL PERSONAL VIDEO RECORDER - The claimed subject matter provides a system and/or method that manages media content. The disclosed system includes a component that synchronizes with a multimedia player that is in communication with the component. The component upon synchronization automatically determines an amount of storage space available on the handheld device and based at least in part on this available space, the component substitutes a first media presentation persisted on the storage space with a second media presentation retrieved from a media storage farm.12-18-2008
20080270700DYNAMIC, ON-DEMAND STORAGE AREA NETWORK (SAN) CACHE - Disclosed are apparatus and methods for facilitating caching in a storage area network (SAN). In general, data transfer traffic between one or more hosts and one or more memory portions in one or more storage device(s) is redirected to one or more cache modules. One or more network devices (e.g., switches) of the SAN can be configured to redirect data transfer for a particular memory portion of one or more storage device(s) to a particular cache module. As needed, data transfer traffic for any number of memory portions and storage devices can be identified for or removed from being redirected to a particular cache module. Also, any number of cache modules can be utilized for receiving redirected traffic so that such redirected traffic is divided among such cache modules in any suitable proportion for enhanced flexibility.10-30-2008
20080256296INFORMATION PROCESSING APPARATUS AND METHOD FOR CACHING DATA - A processor is provided with a register and operates to: determine whether a first tag address matches a second tag address, the first tag address being derived from a target main memory address that is to be accessed for obtaining target data subjected to a computation, the second tag address being one of the tag addresses stored in the local memory; start copying data stored in at least one of the cache lines assigned with a line number that matches a target line number that is derived from the target main memory address into the register before completing the determination of a match between the first tag address and the second tag address; and access the register to obtain the data copied from the local memory when it is determined that the first tag address matches the second tag address.10-16-2008
20110055481CACHE MEMORY CONTROLLING APPARATUS - An apparatus for controlling a cache memory includes: a data receiving unit to receive a sensor ID and data detected by the sensor; an attribute information acquiring unit to acquire attribute information corresponding to the sensor ID from an attribute information memory, the attribute information memory storing the attribute information of the sensor mapped to the sensor ID; a sensor information memory to store information of a storage period, the sensor information memory including a cache memory storing the attribute information; and a cache memory control unit to acquire the attribute information from the attribute information acquiring unit when the attribute information is not stored in the cache memory, and store the acquired attribute information corresponding to the sensor ID in the cache memory during the storage period.03-03-2011
20110055480METHOD FOR PRELOADING CONFIGURATIONS OF A RECONFIGURABLE HETEROGENEOUS SYSTEM FOR INFORMATION PROCESSING INTO A MEMORY HIERARCHY - A method for preloading into a hierarchy of memories, bitstreams representing the configuration information for a reconfigurable processing system including several processing units. The method includes an off-execution step of determining tasks that can be executed on a processing unit subsequently to the execution of a given task. The method also includes, during execution of the given task, computing a priority for each of the tasks that can be executed. The priority depends on information relating to the current execution of the given task. The method also includes, during execution of the given task, sorting the tasks that can be executed in the order of their priorities. The method also includes, during execution of the given task, preloading into the memory, bitstreams representing the information of the configurations for the execution of the tasks that can be executed, while favoring the tasks whose priority is the highest.03-03-2011
20110055479Thread Compensation For Microarchitectural Contention - A thread (or other resource consumer) is compensated for contention for system resources in a computer system having at least one processor core, a last level cache (LLC), and a main memory. In one embodiment, at each descheduling event of the thread following an execution interval, an effective CPU time is determined. The execution interval is a period of time during which the thread is being executed on the central processing unit (CPU) between scheduling events. The effective CPU time is a portion of the execution interval that excludes delays caused by contention for microarchitectural resources, such as time spent repopulating lines from the LLC that were evicted by other threads. The thread may be compensated for microarchitectural contention by increasing its scheduling priority based on the effective CPU time.03-03-2011
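The effective-CPU-time computation can be approximated in C by subtracting an estimated refill cost from the execution interval; the per-miss cycle cost is an assumed constant, and the scheduler would derive a priority boost from the result.

    #include <stdint.h>

    enum { CYCLES_PER_REFILL = 200 };   /* assumed cost of one LLC refill */

    /* Effective CPU time: the execution interval minus an estimate of the
     * time spent repopulating LLC lines evicted by other threads. */
    static uint64_t effective_cpu_cycles(uint64_t interval_cycles,
                                         uint64_t contention_misses) {
        uint64_t lost = contention_misses * CYCLES_PER_REFILL;
        return (lost >= interval_cycles) ? 0 : interval_cycles - lost;
    }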
20110258392METHOD AND SYSTEM FOR PROVIDING DIGITAL RIGHTS MANAGEMENT FILES USING CACHING - A method for providing DRM files using caching includes identifying DRM files to be displayed in a file list in response to a request, decoding a number of first DRM files from among the identified DRM files and caching the first DRM files in a first memory space, and reading the first DRM files in the first memory space in response to the request. Then, a system displays the first DRM files as a file list in a display area. The second DRM files from among the identified DRM files other than the first DRM files are not initially decoded, and file data related to the second DRM files are cached in a second memory space. DRM files from among the second DRM files are subsequently decoded in response to a subsequent command.10-20-2011
20110258391APPARATUS, SYSTEM, AND METHOD FOR DESTAGING CACHED DATA - An apparatus, system, and method are disclosed for destaging cached data. A controller detects one or more write requests to store data in a backing store. The cache controller sends the write requests to a storage controller for a nonvolatile solid-state storage device. The storage controller receives the write requests and caches the data associated with the write requests in the nonvolatile solid-state storage device by appending the data to a log of the nonvolatile solid-state storage device. The log includes a sequential, log-based structure preserved in the nonvolatile solid-state storage device. The cache controller receives at least a portion of the data from the storage controller in a cache log order and destages the data to the backing store in the cache log order. The cache log order comprises an order in which the data was appended to the log of the nonvolatile solid-state storage device.10-20-2011
20110119444ADAPTIVE CACHING OF DATA - Data access is facilitated by employing local caches and an adaptive caching strategy. Specific data is stored in each local cache and consistency is maintained between the caches. To maintain consistency, adaptive caching structures are used. The members of an adaptive caching structure are selected based on a sharing context, such as those members having a chosen association identifier or those members not having the chosen association identifier.05-19-2011
20120151142TRANSFER OF BUS-BASED OPERATIONS TO DEMAND-SIDE MACHINES - An L2 cache, method and computer program product for transferring an inbound bus operation to a processor side handling machine. The method includes a bus operation handling machine accepting the inbound bus operation received over a system interconnect, the bus operation handling machine identifying a demand operation of the processor side handling machine that will complete the bus operation, the bus operation handling machine sending the identified demand operation to the processor side handling machine, and the processor side handling machine performing the identified demand operation.06-14-2012
20110138123Managing Data Storage as an In-Memory Database in a Database Management System - System, method, computer program product embodiments and combinations and sub-combinations thereof for managing data storage as an in-memory database in a database management system (DBMS) are provided. In an embodiment, a specialized database type is provided as a parameter of a native DBMS command. A database hosted entirely in-memory of the DBMS is formed when the specialized database type is specified.06-09-2011
20120311265Read and Write Aware Cache - A mechanism is provided in a cache for providing a read and write aware cache. The mechanism partitions a large cache into a read-often region and a write-often region. The mechanism considers read/write frequency in a non-uniform cache architecture replacement policy. A frequently written cache line is placed in one of the farther banks. A frequently read cache line is placed in one of the closer banks. The size ratio between the read-often and write-often regions may be static or dynamic. The boundary between the read-often region and the write-often region may be distinct or fuzzy.12-06-2012
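The placement rule is a frequency comparison; a C sketch with an assumed even split between near and far banks (the abstract leaves the ratio open):

    #include <stdint.h>

    enum { NEAR_BANKS = 4, FAR_BANKS = 4 };   /* assumed bank split */

    typedef struct { uint32_t reads, writes; } line_stats;

    /* NUCA placement: frequently written lines go to the farther banks,
     * frequently read lines to the closer (lower-latency) banks. */
    static int choose_bank(const line_stats *s, uint32_t set_index) {
        if (s->writes > s->reads)
            return NEAR_BANKS + (int)(set_index % FAR_BANKS); /* write-often */
        return (int)(set_index % NEAR_BANKS);                 /* read-often  */
    }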
20120311264DATA MANAGEMENT METHOD, DEVICE, AND DATA CHIP - The present invention discloses a data management method, device and data chip. The data management method includes: receiving write data of a write request; writing the write data according to a current data management mode, where when the data management mode is a first mode, the write data of the write request is stored in an on-chip cache and when the data management mode is a second mode, the write data of the write request is stored in the on-chip cache and an off-chip memory chip; and receiving a read request of the write data, searching for the write data in the on-chip cache according to the read request, and if the write data cannot be obtained from the on-chip cache, obtaining the write data from the off-chip memory chip, thereby reducing power consumption for data access to external memory chips.12-06-2012
20120311263SECTOR-BASED WRITE FILTERING WITH SELECTIVE FILE AND REGISTRY EXCLUSIONS - A method includes mounting a persistent volume of a data storage device of an electronic device. The persistent volume is based on a protected volume stored at the data storage device. The method also includes accessing the persistent volume to enable servicing access to the data storage device of the electronic device.12-06-2012
20120311262MEMORY CELL PRESETTING FOR IMPROVED MEMORY PERFORMANCE - Memory cell presetting for improved performance including a system that includes a memory, a cache, and a memory controller. The memory includes memory lines made up of memory cells. The cache includes cache lines that correspond to a subset of the memory lines. The memory controller is in communication with the memory and the cache. The memory controller is configured to perform a method that includes scheduling a request to set memory cells of a memory line to a common specified state in response to a cache line attaining a dirty state.12-06-2012
20120311261STORAGE SYSTEM AND STORAGE CONTROL METHOD - A storage system is provided with a memory region, a cache memory region, and a processor. The memory region stores time relation information that indicates the time relationship between a data element that has been stored into the cache memory region and is to be written to a logical region, and a snapshot acquisition point of time of the primary volume. The processor judges whether or not a data element that has been stored into the cache memory region is a snapshot configuration element, based on the time relation information for the data element that is to be written to the logical region of the write destination that conforms to the write request that specifies the primary volume. In the case in which the result of the judgment is positive, the processor saves the data element to the secondary volume for holding a snapshot image in which the snapshot configuration element is a configuration element, and a data element of a write target is then stored into the cache memory region.12-06-2012
20100191909Administering Registered Virtual Addresses In A Hybrid Computing Environment Including Maintaining A Cache Of Ranges Of Currently Registered Virtual Addresses - Administering registered virtual addresses in a hybrid computing environment that includes a host computer, an accelerator, the accelerator architecture optimized, with respect to the host computer architecture, for speed of execution of a particular class of computing functions, the host computer and the accelerator adapted to one another for data communications by a system level message passing module, where administering registered virtual addresses includes maintaining a cache of ranges of currently registered virtual addresses, the cache including entries associating a range of currently registered virtual addresses, a handle representing physical addresses mapped to the range of currently registered virtual addresses, and a counter; determining whether to register ranges of virtual addresses in dependence upon the cache of ranges of currently registered virtual addresses; and determining whether to deregister ranges of virtual addresses in dependence upon the cache of ranges of currently registered virtual addresses.07-29-2010
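The range cache described in this abstract pairs each registered virtual-address range with a handle and a counter. A small Python sketch of one plausible structure follows; the names, the covering-range test, and the cold-range threshold are assumptions:

```python
# Illustrative cache of registered virtual-address ranges. Each entry
# associates a range, a handle for the mapped physical memory, and a
# use counter that informs deregistration decisions.

class RegistrationCache:
    def __init__(self):
        self.entries = []  # list of {start, end, handle, counter} records

    def _find(self, start, end):
        for e in self.entries:
            if e["start"] <= start and end <= e["end"]:
                return e
        return None

    def lookup(self, start, end):
        e = self._find(start, end)
        if e is None:
            return None
        e["counter"] += 1          # count uses of the registered range
        return e["handle"]

    def register(self, start, end, handle):
        # Register only if no current entry already covers the range.
        if self._find(start, end) is None:
            self.entries.append(
                {"start": start, "end": end, "handle": handle, "counter": 1})

    def deregister_cold(self, min_count):
        # Deregister ranges whose use counter fell below a threshold.
        keep = [e for e in self.entries if e["counter"] >= min_count]
        dropped = [e for e in self.entries if e["counter"] < min_count]
        self.entries = keep
        return [e["handle"] for e in dropped]

cache = RegistrationCache()
cache.register(0x1000, 0x2000, handle="h1")
print(cache.lookup(0x1800, 0x1900))  # 'h1': covered by the registered range
```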
20110010499STORAGE SYSTEM, METHOD OF CONTROLLING STORAGE SYSTEM, AND METHOD OF CONTROLLING CONTROL APPARATUS - A storage system including a storage has a first power supplier for supplying electric power, a second power supplier for supplying electric power when the first power supplier is not supplying power to the storage system, a cache memory for storing data sent from a host, a non-volatile memory for storing the data held in the cache memory, and a controller. The controller writes the data held in the cache memory into the non-volatile memory while the second supplier is supplying power to the storage system, and, when the first supplier restores power to the storage system, stops the writing and deletes data stored in the non-volatile memory until the free space of the non-volatile memory is not less than the volume of the data held in the cache memory.01-13-2011
20110078377SOCIAL NETWORKING UTILIZING A DISPERSED STORAGE NETWORK - Social networking data is received at the dispersed storage processing unit, the social networking data associated with at least one of a plurality of user devices. Dispersed storage metadata associated with the social networking data is generated. A full record and at least one partial record are generated based on the social networking data and further based on the dispersed storage metadata. The full record is stored in a dispersed storage network. The partial record is pushed to at least one other of the plurality of user devices via the data network.03-31-2011
20120151143TECHNIQUES FOR MANAGING DATA IN A STORAGE CONTROLLER - A technique for limiting an amount of write data stored in a cache memory includes determining a usable region of a non-volatile storage (NVS), determining an amount of write data in a current write request for the cache memory, and determining a failure boundary associated with the current write request. A count of the write data associated with the failure boundary is maintained. The current write request for the cache memory is rejected when a sum of the count of the write data associated with the failure boundary and the write data in the current write request exceeds a determined percentage of the usable region of the NVS.06-14-2012
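The rejection test in this abstract lends itself to a short sketch. The following Python is a minimal illustration under assumed names and an assumed 25% limit; the application itself leaves the percentage as a determined value:

```python
# Sketch of per-failure-boundary write limiting for cache/NVS. A write
# is rejected when its boundary's tracked write data plus the new
# request would exceed a fixed share of the usable NVS region.

class WriteLimiter:
    def __init__(self, usable_nvs_bytes, max_fraction=0.25):
        self.usable = usable_nvs_bytes
        self.max_fraction = max_fraction   # assumed 25% share per boundary
        self.per_boundary = {}             # boundary id -> bytes of cached writes

    def try_write(self, boundary_id, nbytes):
        current = self.per_boundary.get(boundary_id, 0)
        if current + nbytes > self.max_fraction * self.usable:
            return False   # reject: boundary would exceed its NVS share
        self.per_boundary[boundary_id] = current + nbytes
        return True

    def destaged(self, boundary_id, nbytes):
        # Called when write data for the boundary is hardened to disk.
        self.per_boundary[boundary_id] = max(
            0, self.per_boundary.get(boundary_id, 0) - nbytes)

limiter = WriteLimiter(usable_nvs_bytes=1 << 30)   # 1 GiB usable NVS
print(limiter.try_write("rank-0", 200 << 20))  # True: 200 MiB fits
print(limiter.try_write("rank-0", 100 << 20))  # False: would pass 25%
```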
20090292878METHOD AND SYSTEM FOR PROVIDING DIGITAL RIGHTS MANAGEMENT FILES USING CACHING - A method for providing DRM files using caching includes identifying DRM files to be displayed in a file list in response to a request, decoding a number of first DRM files from among the identified DRM files and caching the first DRM files in a first memory space, and reading the first DRM files in the first memory space in response to the request. Then, a system displays the first DRM files as a file list in a display area. The second DRM files from among the identified DRM files other than the first DRM files are not initially decoded, and file data related to the second DRM files are cached in a second memory space. DRM files from among the second DRM files are subsequently decoded in response to a subsequent command.11-26-2009
20090292877EVENT SERVER USING CACHING - An event server adapted to receive events from an input stream and produce an output event stream. The event server uses a processor using code in an event processing language to process the events. The event server obtains input events from and/or produces output events to a cache.11-26-2009
20110197028Channel Controller For Multi-Channel Cache - Disclosed herein is a channel controller for a multi-channel cache memory, and a method that includes receiving a memory address associated with a memory access request to a main memory of a data processing system; translating the memory address to form a first access portion identifying at least one partition of a multi-channel cache memory, and at least one further access portion, where the at least one partition includes at least one channel; and applying the at least one further access portion to the at least one channel of the multi-channel cache memory.08-11-2011
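One plausible reading of the address translation step, sketched in Python; the field widths and bit layout are illustrative assumptions, since the abstract only states that the address is split into a partition-identifying portion and at least one further access portion:

```python
# Hypothetical bit-slicing of a memory address into a partition
# selector, a channel selector, and a residual tag for a multi-channel
# cache. All field widths are assumptions for illustration.

PARTITION_BITS = 2   # 4 partitions
CHANNEL_BITS = 1     # 2 channels per partition
LINE_BITS = 6        # 64-byte cache lines

def translate(addr):
    line_offset = addr & ((1 << LINE_BITS) - 1)
    partition = (addr >> LINE_BITS) & ((1 << PARTITION_BITS) - 1)
    channel = (addr >> (LINE_BITS + PARTITION_BITS)) & ((1 << CHANNEL_BITS) - 1)
    tag = addr >> (LINE_BITS + PARTITION_BITS + CHANNEL_BITS)
    return partition, channel, tag, line_offset

p, c, tag, off = translate(0x1A2B3C)
print(f"partition={p} channel={c} tag={tag:#x} offset={off:#x}")
```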
20110264859MEMORY SYSTEM - A memory system according to an embodiment of the present invention comprises: a data managing unit 10-27-2011
20100030963MANAGING STORAGE OF CACHED CONTENT - A method of controlling storage of content on a storage device includes communicating with a storage device configured to cache content; and determining a storage cost for caching a first set of data objects on the storage device. The determining is based, at least in part, on characteristics of the first set of data objects and on characteristics of the storage device. Also provided is a storage system that includes a storage device capable of caching media content, a storage device agent and a cache manager. The storage device agent is operative to communicate with the storage device and with the cache manager, and to provide a storage cost to the cache manager. The storage device agent determines the storage cost for caching a data object on the storage device based, at least in part, on characteristics of the data object and on characteristics of the storage device.02-04-2010
20100023693Method and system for tiered distribution in a content delivery network - A tiered distribution service is provided in a content delivery network (CDN) having a set of surrogate origin (namely, “edge”) servers organized into regions and that provide content delivery on behalf of participating content providers, wherein a given content provider operates an origin server. According to the invention, a cache hierarchy is established in the CDN comprising a given edge server region and either (a) a single parent region, or (b) a subset of the edge server regions. In response to a determination that a given object request cannot be serviced in the given edge region, instead of contacting the origin server, the request is provided to either the single parent region or to a given one of the subset of edge server regions for handling, preferably as a function of metadata associated with the given object request. The given object request is then serviced, if possible, by a given CDN server in either the single parent region or the given subset region. The original request is only forwarded on to the origin server if the request cannot be serviced by an intermediate node.01-28-2010
20100023692MODULAR THREE-DIMENSIONAL CHIP MULTIPROCESSOR - A chip multiprocessor die supports optional stacking of additional dies. The chip multiprocessor includes a plurality of processor cores, a memory controller, and stacked cache interface circuitry. The stacked cache interface circuitry is configured to attempt to retrieve data from a stacked cache die if the stacked cache die is present but not if the stacked cache die is absent. In one implementation, the chip multiprocessor die includes a first set of connection pads for electrically connecting to a die package and a second set of connection pads for communicatively connecting to the stacked cache die if the stacked cache die is present. Other embodiments, aspects and features are also disclosed.01-28-2010
20100023691SYSTEM AND METHOD FOR IMPROVING A BROWSING RATE IN A HOME NETWORK - A system and method for improving a browsing rate in a Universal Plug and Play (UPnP) Audio/Video (AV) home network. A control point predicts browse data using a pre-fetching operation and pre-fetches and stores the predicted browse data, which is temporarily stored in a cache implemented in the control point. Accordingly, when a user has selected a corresponding container, the control point displays the pre-fetched browse data. The user can directly use the browse data and experiences a fast response.01-28-2010
20100023690CACHING DYNAMIC CONTENTS AND USING A REPLACEMENT OPERATION TO REDUCE THE CREATION/DELETION TIME ASSOCIATED WITH HTML ELEMENTS - An event to delete a structured object of a Web page rendered in a browser can be detected. The structured object comprises an HTML element set that was dynamically created for the Web page. The structured object can be placed in a cache without deleting memory allocations for the structured object. An event to dynamically create a new object of the Web page can be detected. The cache can be queried to find an object with structure equivalent to that of the new object. The found object can be taken from the cache and used as the new object after content of the cached object is replaced with that needed for the new object. Memory allocation and deallocation costs that would otherwise be needed to dispose of a dynamic HTML element set and to create a new HTML element set are thus saved using the cache.01-28-2010
20110307661MULTI-PROCESSOR CHIP WITH SHARED FPGA EXECUTION UNIT AND A DESIGN STRUCTURE THEREOF - An integrated circuit chip having plural processors with a shared field programmable gate array (FPGA) unit, a design structure thereof, and method for allocating the shared FPGA unit. A method includes storing a plurality of data that define a plurality of configurations of a field programmable gate array (FPGA), wherein the FPGA is arranged in the execution pipeline of at least one processor; selecting one of the plurality of data; and programming the FPGA based on the selected one of the plurality of data.12-15-2011
20120042125Systems and Methods for Efficient Sequential Logging on Caching-Enabled Storage Devices - A computer-implemented method for efficient sequential logging on caching-enabled storage devices may include 1) identifying a storage device with a cache, 2) allocating space on the storage device for a sequential log, 3) calculating a target size for the sequential log based at least in part on an input/output load directed to the sequential log, and then 4) restricting the sequential log to a portion of the allocated space corresponding to the target size. Various other methods, systems, and computer-readable media are also disclosed.02-16-2012
20120151141DETERMINING SERVER WRITE ACTIVITY LEVELS TO USE TO ADJUST WRITE CACHE SIZE - Provided are a computer program product, system, and method for determining server write activity levels to use to adjust write cache size. Information on server write activity to the cache is gathered. The gathered information on write activity is processed to determine a server write activity level comprising one of multiple write activity levels indicating a level of write activity. The determined server write activity level is transmitted to a storage server having a write cache, wherein the storage server uses the determined server write activity level to determine whether to adjust a size of the storage server write cache.06-14-2012
20120151140SYSTEMS AND METHODS FOR DESTAGING STORAGE TRACKS FROM CACHE - Systems and methods for destaging storage tracks from cache are provided. One system includes a cache and a processor coupled to the cache. The cache stores data in multiple storage tracks and each storage track includes an associated multi-bit counter. The processor is configured to perform the following method. One method includes writing data to the plurality of storage tracks and incrementing the multi-bit counter on each respective storage track a predetermined amount each time the processor writes to a respective storage track. The method further includes scanning each of the storage tracks in each of multiple scan cycles, decrementing each multi-bit counter each scan cycle, and destaging each storage track whose counter holds a zero count. Also provided are physical computer storage mediums including a computer program product for performing the above method.06-14-2012
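A minimal Python sketch of the counter-driven scan, assuming a saturating counter and a decrement of one per scan cycle (both assumptions; the abstract specifies only a predetermined increment per write and a per-cycle decrement):

```python
# Sketch of multi-bit-counter destage scanning: writes bump a track's
# counter, each scan cycle decrements it, and a dirty track is
# destaged once its counter reaches zero.

class Track:
    def __init__(self):
        self.counter = 0
        self.dirty = False

class DestageScanner:
    def __init__(self, num_tracks, counter_max=3):
        self.tracks = [Track() for _ in range(num_tracks)]
        self.counter_max = counter_max   # assumed 2-bit saturating counter

    def write(self, idx):
        t = self.tracks[idx]
        t.dirty = True
        t.counter = min(t.counter + 1, self.counter_max)

    def scan_cycle(self):
        destaged = []
        for i, t in enumerate(self.tracks):
            if not t.dirty:
                continue
            if t.counter == 0:
                destaged.append(i)   # counter reached zero: destage track
                t.dirty = False
            else:
                t.counter -= 1       # decrement once per scan cycle
        return destaged

s = DestageScanner(num_tracks=4)
s.write(1); s.write(1)   # track 1 written twice -> counter 2
s.write(3)               # track 3 written once  -> counter 1
print(s.scan_cycle())    # [] : counters decremented to 1 and 0
print(s.scan_cycle())    # [3]: track 3 hits zero first
print(s.scan_cycle())    # [1]
```

Recently written tracks thus survive more scan cycles before destage, which keeps hot tracks in cache longer.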
20120210064EXTENDER STORAGE POOL SYSTEM - Various embodiments for managing data in a computing storage environment by a processor device are provided. In one such embodiment, by way of example only, an extender storage pool system is configured for at least one of a source and a target storage pool to expand an available storage capacity for the at least one of the source and the target storage pool. A most recent snapshot of the data is sent to the extender storage pool system. The most recent snapshot of the data is stored on the extender storage pool system as a last replicated snapshot of the data.08-16-2012
20120047328DATA DE-DUPLICATION FOR SERIAL-ACCESS STORAGE MEDIA - Data storage and retrieval methods and apparatus are provided for facilitating data de-duplication for serial-access storage media such as tape. During data storage, input data is divided into a succession of chunks and, for each chunk, a corresponding data item is written to the storage media. The data item comprises the chunk data itself where it is the first occurrence of that data, and otherwise comprises a chunk-data identifier identifying that chunk of subject data. To facilitate reconstruction of the original data on read-back from the storage media a cache (02-23-2012
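The write-side record format can be illustrated with a short sketch; the SHA-256 chunk hashing and the record tuples below are assumptions, not the application's on-tape format:

```python
# Sketch of dedup-on-write for serial-access media: the first
# occurrence of a chunk is written verbatim, repeats are written as a
# small back-reference to the earlier chunk. Read-back keeps a cache
# of already-seen chunks to resolve the references.

import hashlib

def dedup_stream(chunks):
    seen = {}   # chunk digest -> record index of first occurrence
    out = []
    for i, chunk in enumerate(chunks):
        digest = hashlib.sha256(chunk).hexdigest()
        if digest in seen:
            out.append(("ref", seen[digest]))   # back-reference record
        else:
            seen[digest] = i
            out.append(("data", chunk))         # first occurrence: raw data
    return out

def restore_stream(records):
    chunks, cache = [], {}
    for i, (kind, payload) in enumerate(records):
        chunk = cache[payload] if kind == "ref" else payload
        cache[i] = chunk        # cache each chunk for later references
        chunks.append(chunk)
    return b"".join(chunks)

data = [b"aaaa", b"bbbb", b"aaaa", b"cccc", b"bbbb"]
records = dedup_stream(data)
assert restore_stream(records) == b"aaaabbbbaaaaccccbbbb"
print([k for k, _ in records])  # ['data', 'data', 'ref', 'data', 'ref']
```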
20120210067MIRRORING DEVICE AND MIRRORING RECOVERY METHOD - To provide a mirroring device that does not need a dedicated contention control function for the restoring process and does not halt other access commands during the restoring process of a mirror.08-16-2012
20120210065TECHNIQUES FOR MANAGING MEMORY IN A MULTIPROCESSOR ARCHITECTURE - Techniques for managing memory in a multiprocessor architecture are presented. Each processor of the multiprocessor architecture includes its own local memory. When data is to be removed from a particular local memory or written to storage that data is transitioned to another local memory associated with a different processor of the multiprocessor architecture. If the data is then requested from the processor, which originally had the data, then the data is acquired from a local memory of the particular processor that received and now has the data.08-16-2012
20120210066SYSTEMS AND METHODS FOR A FILE-LEVEL CACHE - A multi-level cache comprises a plurality of cache levels, each configured to cache I/O request data pertaining to I/O requests of a different respective type and/or granularity. The multi-level cache may comprise a file-level cache that is configured to cache I/O request data at a file-level of granularity. A file-level cache policy may comprise file selection criteria to distinguish cacheable files from non-cacheable files. The file-level cache may monitor I/O requests within a storage stack, and may service I/O requests from a cache device.08-16-2012
20130013860MEMORY CELL PRESETTING FOR IMPROVED MEMORY PERFORMANCE - Memory cell presetting for improved performance including a method for using a computer system to identify a region in a memory. The region includes a plurality of memory cells characterized by a write performance characteristic that has a first expected value when a write operation changes a current state of the memory cells to a desired state of the memory cells and a second expected value when the write operation changes a specified state of the memory cells to the desired state of the memory cells. The second expected value is closer than the first expected value to a desired value of the write performance characteristic. The plurality of memory cells in the region are set to the specified state, and the data is written into the plurality of memory cells responsive to the setting.01-10-2013
20110167222UNBOUNDED TRANSACTIONAL MEMORY SYSTEM AND METHOD - An unbounded transactional memory system which can process overflow data. The unbounded transactional memory system may include a host processor, a memory, and a memory processor. The host processor may include an execution unit to perform a transaction, and a cache to temporarily store data. The memory processor may store overflow data in overflow storage included in the memory in response to an overflow event in which the overflow data is generated in the cache during the transaction.07-07-2011
20120017048INTER-FRAME TEXEL CACHE - Methods, apparatuses, and systems are presented for caching. A cache memory area may be used for storing data from memory locations in an original memory area. The cache memory area may be used in conjunction with a repeatedly updated record of storage associated with the cache memory area. The repeatedly updated record of storage can thus provide a history of data storage associated with the cache memory area. The cache memory area may be loaded with entries previously stored in the cache memory area, by utilizing the repeatedly updated record of storage. In this manner, the record may be used to “warm up” the cache memory area, loading it with data entries that were previously cached and may be likely to be accessed again if repetition of memory accesses exists in the span of history captured by the repeatedly updated record of storage.01-19-2012
20120017045MULTI-RESOLUTION CACHE MONITORING - Multi-resolution cache monitoring devices and methods are provided. Multi-resolution cache devices illustratively have a cache memory, an interface, an information unit, and a processing unit. The interface receives a request for data that may be included in the cache memory. The information unit has state information for the cache memory. The state information is organized in a hierarchical structure. The processing unit searches the hierarchical structure for the requested data.01-19-2012
20120017046UNIFIED MANAGEMENT OF STORAGE AND APPLICATION CONSISTENT SNAPSHOTS - A storage management application of a storage array is operable to create a new volume on the storage device array, and to automatically configure, responsive to user selection of an application protection profile, data protection services for application data to be stored on the volume, and/or, responsive to user selection of an application performance profile, application specific performance parameters. The application protection profile specifies scheduling and replication of snapshots for application data to be stored on the volume, and the application performance profile specifies performance parameters such as setting a block size, enabling or modifying a data caching algorithm, turning on or modifying data compression, etc. The scheduling, replication and/or application performance may be managed by a daemon associated with the storage management application which communicates with an agent associated with an application server on which the application executes.01-19-2012
20120117323STORE QUEUE SUPPORTING ORDERED AND UNORDERED STORES - Some described embodiments provide a system that performs stores in a memory system. During operation, the system receives a store for a first thread. The system then creates an entry for the store in a store queue for the first thread. While creating the entry, the system requests a store-mark for a cache line for the store, wherein the store-mark for the cache line indicates that one or more store queue entries are waiting to be committed to the cache line. The system then receives a response to the request for the store-mark, wherein the response indicates that the cache line for the store is store-marked. Upon receiving the response, the system updates a set of ordered records for the first thread by inserting data for the store in the set of ordered records, wherein the set of ordered records include store-marked stores for the first thread.05-10-2012
20120159071STORAGE SUBSYSTEM AND ITS LOGICAL UNIT PROCESSING METHOD - When a command to restore a logical unit is issued after a command to delete the logical unit, the logical unit is restored easily.06-21-2012
20120072666INTEGRATED CIRCUIT COMPRISING TRACE LOGIC AND METHOD FOR PROVIDING TRACE INFORMATION - An integrated circuit comprises trace logic for operably coupling to at least one memory element and for providing trace information for a signal processing system. The trace logic comprises trigger detection logic for detecting at least one trace trigger, memory access logic arranged to perform, upon detection of the at least one trace trigger, at least one read operation for at least one memory location of the at least one memory element associated with the at least one detected trigger, memory content message generation logic arranged to generate at least one memory content message comprising information relating to a result of the at least one read operation performed by the memory access logic, and output logic for outputting the at least one memory content message.03-22-2012
20120072665Caching of a Site Model in a Hierarchical Modeling System for Network Sites - Disclosed are various embodiments for caching of a hierarchical model of a network site. Upon receiving a request to resolve a network site, a hierarchical site model associated with a network site is retrieved. A directory model associated with the network site is also retrieved. A caching process is initiated that retrieves at least a subset of page models and loads them into a cache. The caching process is executed in parallel with the processing of the hierarchical site model.03-22-2012
20110066809XML PROCESSING DEVICE, XML PROCESSING METHOD, AND XML PROCESSING PROGRAM - Provided is an XML processing device capable of describing, in a conventional XML processing language, a method of processing XML that is input asynchronously. The XML processing device converts, according to a predetermined rule, XML input asynchronously from outside and outputs the result. The XML processing device is characterized by including an XML conversion module which performs XML conversion of the input XML according to the rule, an output destination interpretation module which interprets an output destination described in the converted XML, and an output distribution module which outputs the XML to the output destination interpreted by the output destination interpretation module.03-17-2011
20110066808Apparatus, System, and Method for Caching Data on a Solid-State Storage Device - An apparatus, system, and method are disclosed for caching data on a solid-state storage device. The solid-state storage device maintains metadata pertaining to cache operations performed on the solid-state storage device, as well as storage operations of the solid-state storage device. The metadata indicates what data in the cache is valid, as well as information about what data in the nonvolatile cache has been stored in a backing store. A backup engine works through units in the nonvolatile cache device and backs up the valid data to the backing store. During grooming operations, the groomer determines whether the data is valid and whether the data is discardable. Data that is both valid and discardable may be removed during the grooming operation. The groomer may also determine whether the data is cold in determining whether to remove the data from the cache device. The cache device may present to clients a logical space that is the same size as the backing store. The cache device may be transparent to the clients.03-17-2011
20110066806System and method for memory bandwidth friendly sorting on multi-core architectures - In some embodiments, the invention involves utilizing a tree merge sort in a platform to minimize cache reads/writes when sorting large amounts of data. An embodiment uses blocks of pre-sorted data residing in “leaf nodes” residing in memory storage. A pre-sorted block of data from each leaf node is read from memory and stored in faster cache memory. A tree merge sort is performed on the nodes that are cache resident until a block of data migrates to a root node. Sorted blocks reaching the root node are written to memory storage in an output list until all pre-sorted data blocks have been moved to cache and merged upward to the root. The completed output list in memory storage is a list of the fully sorted data. Other embodiments are described and claimed.03-17-2011
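A heap can stand in for the merge tree to show the data flow from pre-sorted leaf blocks up to the root; this Python sketch uses assumed block sizes and does not model the cache residency that the abstract targets:

```python
# Illustrative k-way tree merge of pre-sorted leaf lists. Each leaf is
# consumed block by block, mimicking block-wise loads from memory
# storage into cache, and merged values migrate to the root in order.

import heapq
from itertools import chain

def tree_merge(leaf_nodes, block_size=4):
    iters = []
    for leaf in leaf_nodes:
        blocks = [leaf[i:i + block_size]
                  for i in range(0, len(leaf), block_size)]
        iters.append(chain.from_iterable(blocks))   # one block at a time
    heap = []
    for idx, it in enumerate(iters):
        first = next(it, None)
        if first is not None:
            heapq.heappush(heap, (first, idx))
    output = []
    while heap:
        value, idx = heapq.heappop(heap)   # value reaches the "root"
        output.append(value)               # append to the output list
        nxt = next(iters[idx], None)
        if nxt is not None:
            heapq.heappush(heap, (nxt, idx))
    return output

leaves = [[1, 4, 9, 12], [2, 3, 10, 11], [0, 5, 6, 7]]
print(tree_merge(leaves))  # [0, 1, 2, 3, 4, 5, 6, 7, 9, 10, 11, 12]
```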
20130013861CACHING PERFORMANCE OPTIMIZATION - A method for managing data storage is described. The method includes receiving data from an external host at a peripheral storage device, detecting a file system type of the external host, and adapting a caching policy for transmitting the data to a memory accessible by the storage device, wherein the caching policy is based on the detected file system type. The detection of the file system type can be based on the received data. The detection bases can include a size of the received data. In some implementations, the detection of the file system type can be based on accessing the memory for file system type indicators that are associated with a unique file system type. Adapting the caching policy can reduce a number of data transmissions to the memory. The detected file system type can be a file allocation table (FAT) system type.01-10-2013
20110107030SELF-ORGANIZING METHODOLOGY FOR CACHE COOPERATION IN VIDEO DISTRIBUTION NETWORKS - A content distribution network (CDN) comprising content storage nodes (CSNs) or caches having storage space that preferentially stores more popular content objects.05-05-2011
20100095064Pattern Matching Technique - A method, system and program are disclosed for accelerating data storage in a cache appliance that transparently monitors NFS and CIFS traffic between clients and NAS subsystems and caches files in a cache memory by using a perfect hashing memory index technique to rapidly detect predetermined patterns in received packet payloads and retrieve matching patterns from memory by generating a data structure pointer and index offset to directly address the pattern in the datagram memory, thereby accelerating evaluation of the packet with the matching pattern by the host processor.04-15-2010
20110099332METHOD AND SYSTEM OF OPTIMAL CACHE ALLOCATION IN IPTV NETWORKS - In an IPTV network, one or more caches may be provided at the network nodes for storing video content in order to reduce bandwidth requirements. Cache functions such as cache effectiveness and cacheability may be defined and optimized to determine the optimal size and location of cache memory and to determine optimal partitioning of cache memory for the unicast services of the IPTV network.04-28-2011
20090132764POWER CONSERVATION VIA DRAM ACCESS - Power conservation via DRAM access reduction is provided by a buffer/mini-cache selectively operable in a normal mode and a buffer mode. In the buffer mode, entered when CPUs begin operating in low-power states, non-cacheable accesses (such as generated by a DMA device) matching specified physical address ranges, or having specific characteristics of the accesses themselves, are processed by the buffer/mini-cache, instead of by a memory controller and DRAM. The buffer/mini-cache processing includes allocating lines when references miss, and returning cached data from the buffer/mini-cache when references hit. Lines are replaced in the buffer/mini-cache according to one of a plurality of replacement policies, including ceasing replacement when there are no available free lines. In the normal mode, entered when CPUs begin operating in high-power states, the buffer/mini-cache operates akin to a conventional cache and non-cacheable accesses are not processed therein.05-21-2009
20120317359PROCESSING A REQUEST TO RESTORE DEDUPLICATED DATA - For a restore request, at least a portion of a recipe that refers to chunks is read. Based on the recipe portion, a container having plural chunks is retrieved. From the recipe portion, it is identified which of the plural chunks of the container to save, where some of the identified chunks need not, at the time of the identifying, be immediately communicated to a requester. The identified chunks are stored in a memory area from which chunks are read for the restore operation.12-13-2012
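The container-at-a-time retrieval can be sketched as follows; the recipe and container layouts are assumed for illustration:

```python
# Sketch of a recipe-driven restore: fetch a whole container per miss,
# and keep every chunk of that container the recipe will need, even
# chunks not yet due to be sent to the requester.

def restore(recipe, containers):
    # recipe: ordered list of (container_id, chunk_id) pairs
    # containers: container_id -> {chunk_id: chunk bytes}
    needed = set(recipe)     # chunks the recipe refers to
    saved, output = {}, []   # saved: memory area holding useful chunks
    for key in recipe:
        if key not in saved:
            container_id, _ = key
            # One container retrieval; save all recipe-relevant chunks.
            for c_id, c_data in containers[container_id].items():
                if (container_id, c_id) in needed:
                    saved[(container_id, c_id)] = c_data
        output.append(saved[key])   # serve the chunk from the memory area
    return b"".join(output)

containers = {"c1": {0: b"he", 1: b"llo"}, "c2": {0: b" world"}}
recipe = [("c1", 0), ("c1", 1), ("c2", 0), ("c1", 0)]
print(restore(recipe, containers))  # b'hello worldhe'
```

Saving the later-needed chunks up front is what spares the restore a second retrieval of container "c1" for the final recipe entry.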
20120124290Integrated Memory Management and Memory Management Method - An integrated memory management device according to an example of the invention comprises an acquiring unit acquiring a read destination logical address from a processor, an address conversion unit converting the read destination logical address into a read destination physical address of a non-volatile main memory, an access unit reading, from the non-volatile main memory, data that corresponds to the read destination physical address and has a size that is equal to a block size or an integer multiple of the page size of the non-volatile main memory, and a transmission unit transferring the read data to a cache memory of the processor having a cache size that depends on the block size or the integer multiple of the page size of the non-volatile main memory.05-17-2012
20090089505STEERING DATA UNITS TO A CONSUMER - A computer system may comprise a second device operating as a producer that may steer data units to a first device operating as a consumer. A processing core of the first device may wake-up the second device after generating a first data unit. The second device may generate steering values after retrieving a first data unit directly from the cache of the first device. The second device may populate a flow table with a plurality of entries using the steering values. The second device may receive a packet over a network and store the packet directly into the cache of the first device using a first steering value. The second device may direct an interrupt signal to the processing core of the first device using a second steering value.04-02-2009
20120221792OPPORTUNISTIC BLOCK TRANSMISSION WITH TIME CONSTRAINTS - A technique for determining a data window size allows a set of predicted blocks to be transmitted along with requested blocks. A stream enabled application executing in a virtual execution environment may use the blocks when needed.08-30-2012
20100088470OPTIMIZING INFORMATION LIFECYCLE MANAGEMENT FOR FIXED STORAGE - The method may query the disk drive for a size, where the size may be a total number of logical blocks on the disk drive. The drive may return a size response, where the size includes the total number of logical blocks on the disk drive. The number of usage blocks necessary to represent the number of logical blocks on the disk drive may then be determined, and usage data may be stored in the usage blocks. The data may be stored in the buffer of the disk drive. The data may also be stored in the DDF of a RAID drive. The data may be used to permit incremental backups of disk drives by backing up only the blocks that are indicated as having been changed. In addition, information about the access to the drive may be collected and stored for later analysis.04-08-2010
20120131277ACTIVE MEMORY PROCESSOR SYSTEM - In general, the present invention relates to data cache processing. Specifically, the present invention relates to a system that provides reconfigurable dynamic cache which varies the operation strategy of cache memory based on the demand from the applications originating from different external general processor cores, along with functions of a virtualized hybrid core system. The system includes receiving a data request, selecting an operational mode based on the data request and a predefined selection algorithm, and processing the data request based on the selected operational mode.05-24-2012
20120131278DATA STORAGE APPARATUS AND METHODS - Data storage apparatus and methods are disclosed. A disclosed example data storage apparatus comprises a cache layer and a processor in communication with the cache layer. The processor is to dynamically enable or disable the cache layer via a cache layer enable line based on a data store access type.05-24-2012
20120131276INFORMATION APPARATUS AND METHOD FOR CONTROLLING THE SAME - An object is to efficiently set configurations of a storage apparatus. Provided is an information apparatus communicably coupled to a storage apparatus 10, which validates a script executed by the storage apparatus 10 for setting a configuration of the storage apparatus 10. The information apparatus generates the configurations that the storage apparatus 10 would have after each command described in the script is executed sequentially, and performs consistency validation on the script by determining whether or not each command described in the script is normally executable when the command is executed on the assumption that the storage apparatus 10 has the configuration in effect immediately before the execution.05-24-2012
20120215980RESTORING DATA BACKED UP IN A CONTENT ADDRESSED STORAGE (CAS) SYSTEM - In one example, a method of restoring data backed up in a content addressed storage system may include retrieving a recipe and appended storage addresses from a first storage node of content addressed storage, where the recipe may include instructions for generating a data structure from two or more data pieces, and the two or more data pieces may be resident in locations identified by the appended storage addresses. The example method may further include populating a cache with the appended storage addresses for the two or more data pieces. As well the method may further include retrieving, and populating the cache with, the two or more data pieces without looking up a storage address for any of the two or more data pieces in an index, and restoring the data structure using the retrieved two or more data pieces in the cache.08-23-2012
20120166727WEATHER ADAPTIVE ENVIRONMENTALLY HARDENED APPLIANCES - Embodiments of the present invention provide a method, system and computer program product for weather adaptive environmentally hardened appliances. In an embodiment of the invention, a method for weather adaptation of an environmentally hardened computing appliance includes determining a location of an environmentally hardened computing appliance. Thereafter, a weather forecast including a temperature forecast can be retrieved for a block of time at the location. As a result, a cache policy for a cache of the environmentally hardened computing appliance can be adjusted to account for the weather forecast.06-28-2012
20120215981RECYCLING OF CACHE CONTENT - A method of operating a storage system comprises detecting a cut in an external power supply, switching to a local power supply, preventing receipt of input/output commands, copying content of cache memory to a local storage device and marking the content of the cache memory that has been copied to the local storage device. When a resumption of the external power supply is detected, the method continues by charging the local power supply, copying the content of the local storage device to the cache memory, processing the content of the cache memory with respect to at least one storage volume and receiving input/output commands. When detecting a second cut in the external power supply, the system switches to the local power supply, prevents receipt of input/output commands, and copies to the local storage device only the content of the cache memory that is not marked as present.08-23-2012
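The marking scheme can be shown with a small sketch (structure assumed): on each power cut, only unmarked pages are copied to the local device, and a page loses its mark when it is rewritten:

```python
# Sketch of cache-content recycling across power cuts. Marked pages
# are known to match their copy on the local storage device, so a
# second power cut copies only the pages modified since the first.

class CacheRecycler:
    def __init__(self):
        self.cache = {}        # page id -> data
        self.local_store = {}  # page id -> data persisted locally
        self.marked = set()    # cache pages matching the local copy

    def write(self, page, data):
        self.cache[page] = data
        self.marked.discard(page)   # page now diverges from local copy

    def on_power_cut(self):
        for page, data in self.cache.items():
            if page not in self.marked:      # skip already-saved pages
                self.local_store[page] = data
                self.marked.add(page)

    def on_power_restored(self):
        self.cache.update(self.local_store)  # reload cache content
        # Marks stay valid until a page is modified again.

r = CacheRecycler()
r.write(1, "A"); r.write(2, "B")
r.on_power_cut()              # first cut: copies pages 1 and 2
r.on_power_restored()
r.write(2, "B2")              # only page 2 becomes unmarked
r.on_power_cut()              # second cut: copies page 2 only
print(sorted(r.local_store))  # [1, 2]
```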
20120215979CACHE FOR STORING MULTIPLE FORMS OF INFORMATION AND A METHOD FOR CONTROLLING A CACHE STORING MULTIPLE FORMS OF INFORMATION - A cache is provided, including a data array having a plurality of entries configured to store a plurality of different types of data, and a tag array having a plurality of entries and configured to store a tag of the data stored at a corresponding entry in the data array and further configured to store an identification of the type of data stored in the corresponding entry in the data array.08-23-2012
20100205375METHOD, APPARATUS, AND SYSTEM OF FORWARD CACHING FOR A MANAGED CLIENT - A method, apparatus, and system are disclosed for forward caching for a managed client. A storage module stores a software image on a storage device of a backend server. The backend server provides virtual disk storage on the storage device through a first intermediate network point for a plurality of diskless data processing devices. Each diskless data processing device communicates directly with the first intermediate network point. The storage module caches an image instance of the software image at the first intermediate network point. A tracking module detects an update to the software image on the storage device. The storage module copies the updated software image to the first intermediate network point as an updated image instance.08-12-2010
20120137072HYBRID ACTIVE MEMORY PROCESSOR SYSTEM - In general, the present invention relates to data cache processing. Specifically, the present invention relates to a system that provides reconfigurable dynamic cache which varies the operation strategy of cache memory based on the demand from the applications originating from different external general processor cores, along with functions of a virtualized hybrid core system. The system includes receiving a data request, selecting an operational mode based on the data request and a predefined selection algorithm, and processing the data request based on the selected operational mode. The present invention is further configured to enable processing core and memory utilization by external systems through virtualization.05-31-2012
20100174867USING DIFFERENT ALGORITHMS TO DESTAGE DIFFERENT TYPES OF DATA FROM CACHE - Provided are a method, system, and article of manufacture for using different algorithms to destage different types of data from cache. A first destaging algorithm is used to destage a first type of data to a storage for a first duration. A second destaging algorithm is used to destage a second type of data to the storage for a second duration.07-08-2010
20120173818DETECTING ADDRESS CONFLICTS IN A CACHE MEMORY SYSTEM - A cache memory includes a data array that stores memory blocks, a directory of contents of the data array, and a cache controller that controls access to the data array. The cache controller includes an address conflict detection system having a set-associative array configured to store at least tags of memory addresses of in-flight memory access transactions. The address conflict detection system accesses the set-associative array to detect if a target address of an incoming memory access transaction conflicts with that of an in-flight memory access transaction and determines whether to allow the incoming memory access transaction to proceed based upon the detection.07-05-2012
20100299480Method And System Of Executing Stack-based Memory Reference Code - A method and system of executing stack-based memory reference code. At least some of the illustrated embodiments are methods comprising waking a computer system from a reduced power operational state in which a memory controller loses at least some configuration information, executing memory reference code that utilizes a stack (wherein the memory reference code configures the main memory controller), and passing control of the computer system to an operating system. The time between executing a first instruction after waking the computer system and passing control to the operating system takes less than 200 milliseconds.11-25-2010
20100049920DYNAMICALLY ADJUSTING WRITE CACHE SIZE - A storage system includes a backend storage unit for storing electronic information; a controller unit for controlling reading and writing to the backend storage unit; and at least one of a cache and a non-volatile storage for storing the electronic information during at least one of the reading and the writing; the controller unit executing machine readable and machine executable instructions including instructions for: testing whether the frequency of a non-volatile-storage-full condition has occurred above an upper threshold frequency value or below a lower threshold frequency value; if the frequency of the condition has crossed a threshold frequency value, then calculating a new size; calculating an expected average response time for the new size; comparing the actual response time to the expected response time; and adjusting or not adjusting the size of the non-volatile storage to minimize the response time.02-25-2010
20100281217SYSTEM AND METHOD FOR PERFORMING ENTITY TAG AND CACHE CONTROL OF A DYNAMICALLY GENERATED OBJECT NOT IDENTIFIED AS CACHEABLE IN A NETWORK - The present invention is directed towards a method and system for modifying by a cache responses from a server that do not identify a dynamically generated object as cacheable to identify the dynamically generated object to a client as cacheable in the response. In some embodiments, such as an embodiment handling HTTP requests and responses for objects, the techniques of the present invention insert an entity tag, or “etag” into the response to provide cache control for objects provided without entity tags and/or cache control information from an originating server. This technique of the present invention provides an increase in cache hit rates by inserting information, such as entity tag and cache control information for an object, in a response to a client to enable the cache to check for a hit in a subsequent request.11-04-2010
20100274970Robust Domain Name Resolution - A recursive DNS nameserver system and related domain name resolution techniques are disclosed. The DNS nameservers utilize a local cache having previously retrieved domain name resolution to avoid recursive resolution processes and the attendant DNS requests. If a matching record is found with a valid (not expired) TTL field, the nameserver returns the cached domain name information to the client. If the TTL for the record in the cache has expired and the nameserver is unable to resolve the domain name information using DNS requests to authoritative servers, the recursive DNS nameserver returns to the cache and accesses the resource record having an expired TTL. The nameserver generates a DNS response to the client device that includes the domain name information from the cached resource record. In various embodiments, subscriber information is utilized to resolve the requested domain name information in accordance with user-defined preferences.10-28-2010
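The fallback behavior reduces to a small amount of control flow. A Python sketch, with an assumed 300-second TTL and a stubbed upstream resolver:

```python
# Sketch of stale-record fallback in a recursive DNS resolver: serve
# the cache while the TTL is valid, try upstream on expiry, and fall
# back to the expired record if upstream resolution fails.

import time

class FallbackResolver:
    def __init__(self, upstream):
        self.upstream = upstream   # callable: name -> address or None
        self.cache = {}            # name -> (address, expiry timestamp)

    def resolve(self, name):
        entry = self.cache.get(name)
        now = time.time()
        if entry and now < entry[1]:
            return entry[0]            # fresh cache hit, no recursion
        address = self.upstream(name)  # recursive resolution attempt
        if address is not None:
            self.cache[name] = (address, now + 300)  # assumed 300 s TTL
            return address
        if entry is not None:
            return entry[0]            # upstream failed: serve stale record
        return None

resolver = FallbackResolver(upstream=lambda name: None)  # upstream is down
resolver.cache["example.com"] = ("93.184.216.34", 0)     # expired entry
print(resolver.resolve("example.com"))  # served despite the expired TTL
```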
20120233405Caching Method and System for Video Coding - A method of caching reference data in a reference data cache is provided that includes receiving an address of a reference data block in the reference data cache, wherein the address includes an x coordinate and a y coordinate of the reference data block in a reference block of pixels and a reference block identifier specifying which of a plurality of reference blocks of pixels includes the reference data block, computing an index of a set of cache lines in the reference data cache using bits from the x coordinate and bits from the y coordinate, using the index and a tag comprising the reference block identifier to determine whether the reference data block is in the set of cache lines, and retrieving the reference data block from reference data storage when the reference data block is not in the set of cache lines.09-13-2012
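One plausible instance of the x/y bit-slicing, in Python; the bit widths and the concatenation order are assumptions, since the abstract states only that the set index combines bits from both coordinates and that the reference block identifier belongs to the tag:

```python
# Sketch of 2-D set indexing for a video reference-data cache: the set
# index is built from low bits of the block's x and y coordinates, and
# the reference-picture identifier goes into the tag.

X_BITS = 3   # low bits of x used for the index (assumed width)
Y_BITS = 3   # low bits of y used for the index (assumed width)

def set_index(x, y):
    x_part = x & ((1 << X_BITS) - 1)
    y_part = y & ((1 << Y_BITS) - 1)
    return (y_part << X_BITS) | x_part          # 6-bit index: 64 sets

def tag(x, y, ref_id):
    # The remaining coordinate bits plus the reference block identifier
    # distinguish blocks that map to the same set.
    return (ref_id, x >> X_BITS, y >> Y_BITS)

# Horizontally adjacent blocks fall in different sets, which helps
# motion-compensation reads that walk across a frame:
print(set_index(16, 9), set_index(17, 9))  # 8 9
```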
20120254539SYSTEMS AND METHODS FOR MANAGING CACHE DESTAGE SCAN TIMES - A system includes a cache and a processor. The processor is configured to utilize a first thread to continually determine a desired scan time for scanning the plurality of storage tracks in the cache and utilize a second thread to continually control an actual scan time of the plurality of storage tracks in the cache based on the continually determined desired scan time. One method includes utilizing a first thread to continually determine a desired scan time for scanning the plurality of storage tracks in the cache and utilizing a second thread to continually control an actual scan time of the plurality of storage tracks in the cache based on the continually determined desired scan time.10-04-2012
20120254538STORAGE APPARATUS AND COMPUTER PROGRAM PRODUCT - According to an embodiment, a storage apparatus includes a storage unit configured to store a plurality of pieces of data; a communication unit configured to communicate with a plurality of external devices each of which includes a first cache memory in which at least part of the plurality of pieces of data are stored; a write unit configured to write data into the storage unit when the communication unit receives a write request, transmitted from one of the plurality of external devices, to write the data; and a controller configured to control the communication unit such that the data is transmitted to another external device different from the one external device that has issued the write request.10-04-2012
20120185649VOLUME RECORD DATA SET OPTIMIZATION APPARATUS AND METHOD - A method for optimizing a plurality of volume records stored in cache may include monitoring a volume including multiple data sets, wherein each data set is associated with a volume record, and each volume record is stored in a volume record data set. The method may include tracking read and write operations to each of the data sets over a period of time. The method may further include reorganizing the volume records in the volume record data set such that volume records for data sets with a larger number of read operations relative to write operations are grouped together, and volume records for data sets with a smaller number of read operations relative to write operations are grouped together. A corresponding apparatus and computer program product are also disclosed.07-19-2012
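The grouping step amounts to ordering records by their read share. A Python sketch under assumed record and statistics shapes:

```python
# Sketch of reorganizing volume records so read-heavy data sets
# cluster at one end of the volume record data set and write-heavy
# data sets at the other. The ratio-based sort key is an assumption.

def reorganize(volume_records, stats):
    # stats: record name -> (reads, writes) observed over the period
    def read_ratio(name):
        reads, writes = stats.get(name, (0, 0))
        total = reads + writes
        return reads / total if total else 0.0
    # Sorting by read ratio places similarly-accessed records in
    # adjacent slots, grouping read-heavy and write-heavy records.
    return sorted(volume_records, key=read_ratio, reverse=True)

records = ["DS.A", "DS.B", "DS.C", "DS.D"]
stats = {"DS.A": (10, 90), "DS.B": (95, 5),
         "DS.C": (80, 20), "DS.D": (5, 95)}
print(reorganize(records, stats))  # ['DS.B', 'DS.C', 'DS.A', 'DS.D']
```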
20120185648STORAGE IN TIERED ENVIRONMENT FOR COLDER DATA SEGMENTS - Exemplary method, system, and computer program embodiments for storing data by a processor device in a computing environment are provided. In one embodiment, by way of example only, from a plurality of available data segments, a data segment having a storage activity lower than a predetermined threshold is identified as a colder data segment. A chunk of storage is located to which the colder data segment is assigned. The colder data segment is compressed. The colder data segment is migrated to the chunk of storage. A status of the chunk of storage is maintained in a compression data segment bitmap.07-19-2012
20120233406STORAGE APPARATUS, AND CONTROL METHOD AND CONTROL APPARATUS THEREFOR - A control apparatus, coupled to a storage medium via communication links, controls data write operations to the storage medium. A cache memory is configured to store a temporary copy of first data written in the storage medium. A processor receives second data with which the first data in the storage medium is to be updated, and determines whether the received second data coincides with the first data, based on comparison data read out of the storage medium, when no copy of the first data is found in the cache memory. When the second data is determined to coincide with the first data, the processor determines not to write the second data into the storage medium.09-13-2012
20120265937DISTRIBUTED STORAGE NETWORK INCLUDING MEMORY DIVERSITY - A dispersed storage (DS) unit includes a processing module and a plurality of hard drives. The processing module is operable to maintain states for at least some of the plurality of hard drives. The processing module is further operable to receive a memory access request regarding an encoded data slice and identify a hard drive of the plurality of hard drives based on the memory access request. The processing module is further operable to determine a state of the hard drive. When the hard drive is in a read state and the memory access request is a write request, the processing module is operable to queue the write request, change from the read state to a write state in accordance with a state transition process, and, when in the write state, perform the write request to store the encoded data slice in the hard drive.10-18-2012
20120239882CONTROL APPARATUS AND METHOD, AND STORAGE APPARATUS - In a storage apparatus, in the case where a data block to be written to a storage medium is a zero data block containing only zero data, a zero data information memory stores zero data identification information indicating that the data block is a zero data block. A control apparatus receives a data block from an access requesting apparatus in association with a write request issued by the access requesting apparatus for writing the data block a specified number of times to a predetermined storage area of the storage medium, and when determining that the data block is a zero data block containing only zero data, sets zero data identification information in the zero data information memory, and when completing the setting of the zero data identification information, sends the access requesting apparatus a completion notice of the writing to the storage medium.09-20-2012
20120331230CONTROL BLOCK LINKAGE FOR DATABASE CONVERTER HANDLING - A system to load a plurality of converter pages of a datastore into a database cache, the plurality of converter pages comprising a plurality of converter inner pages, and a plurality of converter leaf pages, to allocate a control block in the database cache for each of the plurality of converter inner pages, the control block of a converter inner page comprising a pointer to a control block of a parent converter inner page and a pointer to a control block of each child converter page of the converter inner page, and to allocate a control block in the database cache for each of the plurality of converter leaf pages, the control block of a converter leaf page comprising a pointer to a control block of a parent converter inner page.12-27-2012
20120324164Programmable Memory Address - A method includes storing defined memory address segments and defined memory address segment attributes for a processor. The processor is operated in accordance with the defined memory address segments and defined memory address segment attributes.12-20-2012
20120324165MEMORY CONTROL DEVICE AND MEMORY CONTROL METHOD - According to one embodiment, a memory control device includes: a buffer memory; a cache memory performing caching for the buffer memory on a unit-data-by-unit-data basis; and an adding module adding ByteECC data to the unit data.12-20-2012
20120324163STORAGE CONTROL APPARATUS AND STORAGE CONTROL METHOD - In one of the storage control apparatuses in a remote copy system which performs asynchronous remote copy between the storage control apparatuses, virtual logical volumes complying with Thin Provisioning are adopted as journal volumes to which journals are written. The controller in the one storage control apparatus assigns an actual area, based on the storage apparatus, that is smaller than would be assigned to the entire area of the journal volume, and adds journals to the assigned actual area. If a new journal cannot be added, the controller performs wraparound, that is, overwrites the oldest journal in the assigned actual area with the new journal.12-20-2012
20120272002Detection and Control of Resource Congestion by a Number of Processors - In an embodiment, a system includes a resource. The system also includes a first processor having a load/store functional unit. The load/store functional unit is to attempt to access the resource based on access requests. The first processor includes a congestion detection logic to detect congestion of access of the resource based on a consecutive number of negative acknowledgements received in response to the access requests prior to receipt of a positive acknowledgment in response to one of the access requests within a first time period.10-25-2012
20110238915STORAGE SYSTEM - A switch device includes interfaces connected to a host, a first storage device, and a second storage device having a cache memory, and a processor executing a process of receiving, from the host, a copy command indicating that target data stored in the first storage device is to be copied to the second storage device; transmitting a read-out command for reading the target data stored in the first storage device in accordance with the copy command; receiving the target data corresponding to the transmitted read-out command from the first storage device; and transmitting, to the second storage device, a write command for writing the target data together with release information indicating that the target data is releasable from the cache memory.09-29-2011
20120278555OPPORTUNISTIC BLOCK TRANSMISSION WITH TIME CONSTRAINTS - A technique for determining a data window size allows a set of predicted blocks to be transmitted along with requested blocks. A stream enabled application executing in a virtual execution environment may use the blocks when needed.11-01-2012
20120089781MECHANISM FOR RETRIEVING COMPRESSED DATA FROM A STORAGE CLOUD - A cloud storage appliance receives one or more read requests for data stored in a storage cloud. The cloud storage appliance determines, for a time period, a total amount of bandwidth that will be used to retrieve the requested data from the storage cloud. The cloud storage appliance then determines an amount of remaining bandwidth for the time period. The cloud storage appliance retrieves the requested data from the storage cloud in the time period to satisfy the one or more read requests. The cloud storage appliance additionally retrieves a quantity of unrequested data from the storage cloud in the time period, wherein the quantity of retrieved unrequested data is based on the amount of remaining bandwidth for the time period.04-12-2012
20110276761ACCELERATING SOFTWARE LOOKUPS BY USING BUFFERED OR EPHEMERAL STORES - A method and apparatus for accelerating lookups in an address based table is herein described. When an address and value pair is added to an address based table, the value is privately stored in the address to allow for quick and efficient local access to the value. In response to the private store, a cache line holding the value is transitioned to a private state, to ensure the value is not made globally visible. Upon eviction of the privately held cache line, the information is not written-back to ensure locality of the value. In one embodiment, the address based table includes a transactional write buffer to hold addresses, which correspond to tentatively updated values during a transaction. Accesses to the tentative values during the transaction may be accelerated through use of annotation bits and private stores as discussed herein. Upon commit of the transaction, the values are copied to the location to make the updates globally visible.11-10-2011
20120331229LOAD BALANCING BASED UPON DATA USAGE - A method of load balancing can include segmenting data from a plurality of servers into usage patterns determined from accesses to the data. Items of the data can be cached in one or more servers of the plurality of servers according to the usage patterns. Each of the plurality of servers can be designated to cache items of the data of a particular usage pattern. A reference to an item of the data cached in one of the plurality of servers can be updated to specify the server of the plurality of servers within which the item is cached.12-27-2012
20120096224OPPORTUNISTIC BLOCK TRANSMISSION WITH TIME CONSTRAINTS - A technique for determining a data window size allows a set of predicted blocks to be transmitted along with requested blocks. A stream enabled application executing in a virtual execution environment may use the blocks when needed.04-19-2012
20120096223LOW-POWER AUDIO DECODING AND PLAYBACK USING CACHED IMAGES - A particular method includes loading one or more memory images into a multi-way cache. The memory images are associated with an audio decoder, and the multi-way cache is accessible to a processor. Each of the memory images is sized not to exceed a page size of the multi-way cache.04-19-2012
20120331227FACILITATING IMPLEMENTATION, AT LEAST IN PART, OF AT LEAST ONE CACHE MANAGEMENT POLICY - An embodiment may include circuitry to facilitate implementation, at least in part, of at least one cache management policy. The at least one policy may be based, at least in part, upon respective priorities of respective classifications of respective network traffic. The at least one policy may concern, at least in part, caching of respective information associated, at least in part, with the respective network traffic belonging to the respective classifications. Many alternatives, variations, and modifications are possible.12-27-2012
20120331228DYNAMIC CONTENT CACHING - A system for caching content including a server supplying at least one of static and non-static content elements, content distinguishing functionality operative to categorize elements of the non-static content as being either dynamic content elements or pseudodynamic content elements, and caching functionality operative to cache the pseudodynamic content elements. The static content elements are content elements which are identified by at least one of the server and metadata associated with the content elements as being expected not to change; the non-static content elements are content elements which are not identified by the server and/or by metadata associated with the content elements as being static content elements; the pseudodynamic content elements are non-static content elements which, based on observation, are not expected to change; and the dynamic content elements are non-static content elements which are not pseudodynamic.12-27-2012
20110320717STORAGE CONTROL APPARATUS, STORAGE SYSTEM AND METHOD - A storage control apparatus includes a memory configured to store access management information concerning access from a host to each of a plurality of logical volumes, and a controller configured to refer to the access management information read from the memory, when receiving an entirety of updated data from the host, to set a write mode for data transfer from each of the plurality of logical volumes to the corresponding physical volume on the basis of the access management information to one of a difference data write mode in which difference data indicating a difference between an entirety of data stored in a storage apparatus and the entirety of updated data is written into a storage apparatus and an entire data write mode in which the entirety of updated data is written into the storage apparatus.12-29-2011
20110320716LOADING AND UNLOADING A MEMORY ELEMENT FOR DEBUG - A method of debugging a memory element is provided. The method includes initializing a line fetch controller with at least one of write data and read data; utilizing at least two separate clocks for performing at least one of write requests and read requests based on the at least one of the write data and the read data; and debugging the memory element based on the at least one of write requests and read requests.12-29-2011
20110320715IDENTIFYING TRENDING CONTENT ITEMS USING CONTENT ITEM HISTOGRAMS - Within a content item set, particular content items may be identified as trending, based on changes in a frequency of references to the content items. For example, users of a social network may reference web resources by posting the uniform resource locators (URLs) thereof in messages, and trending web resources may be identified by detecting changes in the frequencies of such references. These trends may be tracked by counting such references in content item histograms, and by computing trend scores at the time of detecting each reference to a content item. Trending content items may then be identified at a second time by comparing the trend scores after decaying the trend scores of respective content items, based on the period between the second time and the last reference time of the last detected reference to the content item.12-29-2011
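The decayed trend score described above can be sketched as follows, assuming an exponential decay with a tunable half-life (the class name and parameter value are hypothetical):

    import time

    HALF_LIFE = 3600.0  # seconds; assumed tuning parameter

    class TrendCounter:
        """Per-content-item counter whose score decays between references."""
        def __init__(self):
            self.score = 0.0
            self.last_ref = None

        def reference(self, now=None):
            # decay the old score to the current time, then count this reference
            now = time.time() if now is None else now
            if self.last_ref is not None:
                self.score *= 0.5 ** ((now - self.last_ref) / HALF_LIFE)
            self.score += 1.0
            self.last_ref = now

        def current(self, now=None):
            now = time.time() if now is None else now
            if self.last_ref is None:
                return 0.0
            return self.score * 0.5 ** ((now - self.last_ref) / HALF_LIFE)

A burst of recent references then outscores the same number of references spread over a long period, which is what makes an item register as trending.
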
20110320714MAINFRAME STORAGE APPARATUS THAT UTILIZES THIN PROVISIONING - Each actual page inside a pool is configured from a plurality of actual tracks, and each virtual page inside a virtual volume is configured from a plurality of virtual tracks. A storage control apparatus of a mainframe system has management information that includes information denoting a track in which there exists a user record, which is a record including user data (the data used by a host apparatus of a mainframe system). Based on the management information, a controller identifies an actual page that is configured only from tracks that do not comprise the user record, and cancels the allocation of the identified actual page to the virtual page.12-29-2011
20120290790METHOD, SERVER, COMPUTER PROGRAM AND COMPUTER PROGRAM PRODUCT FOR CACHING - Presented is a method comprising the steps of: determining, in a caching server of a telecommunication network, a user profile to analyse; obtaining, in the caching server, a group of user profiles; obtaining correlation measurements for each user profile in the group of user profiles in relation to the user profile to analyse; and calculating a content caching priority for at least one piece of content of a content history associated with the group of user profiles, taking the correlation measurements into account. A corresponding server, computer program and computer program product are also provided.11-15-2012
20120290789PREFERENTIALLY ACCELERATING APPLICATIONS IN A MULTI-TENANT STORAGE SYSTEM VIA UTILITY DRIVEN DATA CACHING - A system may include multi-tenant electronic storage for hosting a plurality of applications having heterogeneous Input/Output (I/O) characteristics, relative importance levels, and Service-Level Objectives (SLOs). The system may also include a management interface for managing the multi-tenant electronic storage, where the management interface is configured to receive a storage resource arbitration policy based on at least one of a workload type, an SLO, or a priority for an application. The system may further include control programming configured to receive an association of a particular I/O stream with a particular application generating the I/O stream, where the association of the I/O stream with the application was determined by analyzing at least one I/O characteristic of the I/O stream, and determine at least one of a cache size or a caching policy for the application based on the association of the I/O stream with the application and the storage resource arbitration policy.11-15-2012
20120290792MEDIA DEVICE WITH INTELLIGENT CACHE UTILIZATION - A portable media device and a method for operating a portable media device are disclosed. According to one aspect, a battery-powered portable media device can manage use of a mass storage device to efficiently utilize battery power. By providing a cache memory and loading the cache memory so as to provide skip support, battery power for the portable media device can be conserved (i.e., efficiently consumed). According to another aspect, a portable media device can operate efficiently in a seek mode. The seek mode is an operational mode of the portable media device in which the portable media device automatically scans through media items to assist a user in selecting a desired one of the media items.11-15-2012
20120290791PROCESSOR AND METHOD FOR EXECUTING LOAD OPERATION THEREOF - A processor and a method for executing load operation and store operation thereof are provided. The processor includes a data cache and a store buffer. When executing a store operation, if the address of the store operation is the same as the address of an existing entry in the store buffer, the data of the store operation is merged into the existing entry. When executing a load operation, if there is a memory dependency between an existing entry in the store buffer and the load operation, and the existing entry includes the complete data required by the load operation, the complete data is provided by the existing entry alone. If the existing entry does not include the complete data, the complete data is generated by assembling the existing entry and a corresponding entry in the data cache.11-15-2012
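The merge-and-forward behaviour described in this abstract can be sketched at byte granularity as follows; this is a simplified software model of the hardware structures, with hypothetical names:

    class StoreBuffer:
        def __init__(self):
            self.entries = {}            # line address -> {offset: byte}

        def store(self, line, offset, data):
            entry = self.entries.setdefault(line, {})
            for i, b in enumerate(data): # merge new bytes into the existing entry
                entry[offset + i] = b

        def load(self, line, offset, length, cache_line):
            entry = self.entries.get(line, {})
            out = []
            for i in range(offset, offset + length):
                if i in entry:           # newest data comes from the store buffer
                    out.append(entry[i])
                else:                    # fall back to the data cache copy
                    out.append(cache_line[i])
            return bytes(out)

    sb = StoreBuffer()
    sb.store(line=0x40, offset=2, data=b"\xaa\xbb")
    print(sb.load(0x40, 0, 4, cache_line=bytes(64)))  # b'\x00\x00\xaa\xbb'

When the buffer entry covers every requested byte, the load is served by the entry alone; otherwise the result is assembled from buffer bytes and cache bytes, as in the abstract.
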
20100131710METHOD AND APPARATUS FOR SHARING CONTENT BETWEEN PORTALS - A method and apparatus for enabling a first portal to receive and present or otherwise use content from a second portal. The first portal comprises an indication to a location within the second portal. During execution of the first portal, the indication, such as a shortcut, is parsed, a connection between the first portal and the second portal is created, and requests and responses related to the content are exchanged between the first and the second portal. The shortcuts enable the loose coupling between the portals and avoid the need for managing multiple versions of the component providing the data or tight coupling. In addition, the method and apparatus enable the execution of a non-executable unit of the second portal from an environment of the first portal. The method and apparatus can be used in a transitive manner, such that a first portal will use content from a second portal, which in turn uses content from a third portal.05-27-2010
20080229017Systems and Methods of Providing Security and Reliability to Proxy Caches - The present solution provides a variety of techniques for accelerating and optimizing network traffic, such as HTTP based network traffic. The solution described herein provides techniques in the areas of proxy caching, protocol acceleration, domain name resolution acceleration as well as compression improvements. In some cases, the present solution provides various prefetching and/or prefreshening techniques to improve intermediary or proxy caching, such as HTTP proxy caching. In other cases, the present solution provides techniques for accelerating a protocol by improving the efficiency of obtaining and servicing data from an originating server to clients. In still other cases, the present solution accelerates domain name resolution. As every HTTP access starts with a URL that includes a hostname that must be resolved via domain name resolution into an IP address, the present solution helps accelerate HTTP access. In some cases, the present solution improves compression techniques by prefetching non-cacheable and cacheable content to use for compressing network traffic, such as HTTP. The acceleration and optimization techniques described herein may be deployed on the client as a client agent or as part of a browser, as well as on any type and form of intermediary device, such as an appliance, proxying device or any type of interception caching and/or proxying device.09-18-2008
20110246719PROVISIONING A DISK OF A CLIENT FOR LOCAL CACHE - Embodiments provide systems, methods, apparatuses and computer program products configured to provide alternative desktop computing solutions. Embodiments generally provide client devices configured with a local cache storing a common base image, with access to a user overlay on a remote storage device. Embodiments provide methods for provisioning a local disk of a client for use as the local cache with minimal IT administrator input.10-06-2011
20080222357Low power computer with main and auxiliary processors - A processing device comprises a processor, low power nonvolatile memory that communicates with the processor, and high power nonvolatile memory that communicates with the processor. The processing device manages data using a cache hierarchy comprising a high power (HP) nonvolatile memory level for data in the high power nonvolatile memory and a low power (LP) nonvolatile memory level for data in the low power nonvolatile memory. The LP nonvolatile memory level has a higher level in the cache hierarchy than the HP nonvolatile memory level.09-11-2008
20130179638Streaming Translation in Display Pipe - In an embodiment, a display pipe includes one or more translation units corresponding to images that the display pipe is reading for display. Each translation unit may be configured to prefetch translations ahead of the image data fetches, which may prevent translation misses in the display pipe (at least in most cases). The translation units may maintain translations in first-in, first-out (FIFO) fashion, and the display pipe fetch hardware may inform the translation unit when a given translation or translations are no longer needed. The translation unit may invalidate the identified translations and prefetch additional translations for virtual pages that are contiguous with the most recently prefetched virtual page.07-11-2013
20130091329REDUCED LATENCY MEMORY COLUMN REDUNDANCY REPAIR - A memory column redundancy mechanism includes a memory having a number of data output ports each configured to output one data bit of a data element. The memory also includes a number of memory columns each connected to a corresponding respective data port. Each memory column includes a plurality of bit cells that are coupled to a corresponding sense amplifier that may differentially output a respective data bit from the plurality of bit cells on an output signal and a complemented output signal. The memory further includes an output selection unit that may select as the output data bit for a given data output port, one of the output signal of the sense amplifier associated with the given data output port or the complemented output signal of the sense amplifier associated with an adjacent data output port dependent upon a respective shift signal for each memory column.04-11-2013
20130097381MANAGEMENT APPARATUS, MANAGEMENT METHOD, AND PROGRAM - There is provided a management apparatus including a management unit that manages, based on execution control information indicating an execution sequence of a plurality of applications, an execution area and a cache area of a recording medium which temporarily stores the applications when the applications are executed.04-18-2013
20130097380METHOD FOR MAINTAINING MULTIPLE FINGERPRINT TABLES IN A DEDUPLICATING STORAGE SYSTEM - A system and method for managing multiple fingerprint tables in a deduplicating storage system. A computer system includes a storage medium, a first fingerprint table comprising a first plurality of entries, and a second fingerprint table comprising a second plurality of entries. Each of the first plurality of entries and the second plurality of entries are configured to store fingerprint related data corresponding to data stored in the storage medium. A storage controller is configured to select the first fingerprint table for storage of entries corresponding to data stored in the data storage medium that has been deemed more likely to be successfully deduplicated than other data stored in the data storage medium; and select the second fingerprint table for storage of entries corresponding to data stored in the data storage medium that has been deemed less likely to be successfully deduplicated than other data stored in the storage medium.04-18-2013
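A toy illustration of the two-table split, assuming SHA-256 fingerprints and a dedup-likelihood hint supplied by the caller; the class and method names are hypothetical:

    import hashlib

    class FingerprintIndex:
        """Fingerprints of blocks likely to deduplicate go to a 'hot' table
        (e.g. kept in fast storage); the rest go to a larger 'cold' table."""
        def __init__(self):
            self.hot, self.cold = {}, {}

        @staticmethod
        def fingerprint(block: bytes) -> str:
            return hashlib.sha256(block).hexdigest()

        def insert(self, block, location, likely_dedup):
            table = self.hot if likely_dedup else self.cold
            table[self.fingerprint(block)] = location

        def lookup(self, block):
            fp = self.fingerprint(block)
            # consult the hot table first; fall back to the cold table
            return self.hot.get(fp) or self.cold.get(fp)

    idx = FingerprintIndex()
    idx.insert(b"block-data", location="seg1:42", likely_dedup=True)
    print(idx.lookup(b"block-data"))   # seg1:42
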
20130097379STORAGE SYSTEM AND METHOD OF CONTROLLING STORAGE SYSTEM - Provided is a storage system for storing data requested by a host computer to be written, the storage system comprising: at least one processor, a cache memory and a cache controller. The cache memory includes a first memory which can be accessed by way of either access that can specify an access range by a line or access that continuously performs a read and a write. The cache controller includes a second memory which has a higher flexibility than the first memory in specifying an access range. The cache controller determines an address of an access destination upon reception of a request for an access to the cache memory from the at least one processor, and switches a request for an access to a specific address into an access to a corresponding address in the second memory.04-18-2013
20130103904SYSTEM AND METHOD TO REDUCE MEMORY ACCESS LATENCIES USING SELECTIVE REPLICATION ACROSS MULTIPLE MEMORY PORTS - In one embodiment, a system comprises multiple memory ports distributed into multiple subsets, each subset identified by a subset index and each memory port having an individual wait time. The system further comprises a first address hashing unit configured to receive a read request including a virtual memory address associated with a replication factor and referring to graph data. The first address hashing unit translates the replication factor into a corresponding subset index based on the virtual memory address, and converts the virtual memory address to a hardware based memory address that refers to graph data in the memory ports within a subset indicated by the corresponding subset index. The system further comprises a memory replication controller configured to direct read requests for the hardware based memory address to the memory port, within the subset indicated by the corresponding subset index, that has the lowest individual wait time.04-25-2013
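The port-selection step might look like the sketch below, assuming the replication factor selects among the first R subsets and per-port wait times are known; all names, and the use of a simple modulo in place of the address hash, are assumptions for illustration:

    # Pick the subset via a cheap hash of the virtual address, then read from
    # the port in that subset with the lowest individual wait time.
    def pick_port(vaddr, replication, subsets, wait_times):
        """subsets: list of lists of port ids; wait_times: port id -> wait."""
        subset_index = vaddr % replication     # stand-in for the address hash
        candidates = subsets[subset_index]
        return min(candidates, key=lambda port: wait_times[port])

    subsets = [[0, 1], [2, 3], [4, 5], [6, 7]]
    waits = {0: 5, 1: 2, 2: 9, 3: 1, 4: 4, 5: 4, 6: 0, 7: 8}
    print(pick_port(0x0DEAD000, replication=2,
                    subsets=subsets, wait_times=waits))   # 1 (subset 0, lowest wait)
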
20130103903Methods And Apparatus For Reusing Prior Tag Search Results In A Cache Controller - Methods and apparatus are provided for reusing prior tag search results in a cache controller. A cache controller is disclosed that receives an incoming request for an entry in the cache having a first tag; determines if there is an existing entry in a buffer associated with the cache having the first tag; and reuses a tag access result from the existing entry in the buffer having the first tag for the incoming request. An indicator can be maintained in the existing entry to indicate whether the tag access result should be retained. Tag access results can optionally be retained in the buffer after completion of a corresponding request. The tag access result can be reused by (i) reallocating the existing entry to the incoming request if the indicator in the existing entry indicates that the tag access result should be retained; and/or (ii) copying the tag access result from the existing entry to a buffer entry allocated to the incoming request if a hazard is detected.04-25-2013
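In software-level pseudocode, the reuse path described above might look like this; the real mechanism operates on hardware buffer entries, so the structure here is purely illustrative and the names are hypothetical:

    class TagResultBuffer:
        def __init__(self):
            self.entries = {}                 # tag -> retained tag access result

        def complete(self, tag, result, retain):
            if retain:                        # indicator says: keep for reuse
                self.entries[tag] = result
            else:
                self.entries.pop(tag, None)

        def lookup(self, tag, tag_array_search):
            if tag in self.entries:           # hit: reuse the prior search result
                return self.entries[tag]
            return tag_array_search(tag)      # miss: do the full tag access

    buf = TagResultBuffer()
    buf.complete(tag=0x1A, result=("hit", 3), retain=True)
    print(buf.lookup(0x1A, tag_array_search=lambda t: ("miss", None)))  # ('hit', 3)
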
20130124799SELF-DISABLING WORKING SET CACHE - A method to monitor the behavior of a working set cache of a full data set at run time and determine whether it provides a performance benefit is disclosed. An effectiveness metric of the working set cache is tracked over a period of time by efficiently computing the amount of physical memory consumption the cache saves and comparing this to a straightforward measure of its overhead. If the effectiveness metric is determined to be on the ineffective side of a selected threshold amount, the working set cache is disabled. The working set cache can be re-enabled in response to a predetermined event.05-16-2013
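A minimal sketch of the self-disabling logic, assuming the effectiveness metric is the ratio of memory saved to cache overhead and that the threshold is 1.0; both specifics are assumptions, not taken from the application:

    class WorkingSetCache:
        THRESHOLD = 1.0           # assumed: must save at least as much as it costs

        def __init__(self):
            self.enabled = True
            self.bytes_saved = 0      # memory the cache avoided materialising
            self.overhead_bytes = 0   # memory spent on the cache itself

        def record(self, saved, overhead):
            self.bytes_saved += saved
            self.overhead_bytes += overhead

        def reevaluate(self):
            if self.overhead_bytes == 0:
                return
            effectiveness = self.bytes_saved / self.overhead_bytes
            if effectiveness < self.THRESHOLD:
                self.enabled = False  # cache costs more than it saves: disable
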
20110276760NON-COMMITTING STORE INSTRUCTIONS - Techniques relating to a processor that supports a non-committing store instruction that is executable during a scouting thread to provide data to a subsequently executed load instruction. The processor may include a memory access unit configured to perform an instance of the non-committing store instruction by storing a value in an entry of a store buffer without committing the instance of the non-committing store instruction. In response to subsequently receiving an instance of a load instruction of the scouting thread that specifies a load from the memory address, the memory access unit is configured to perform the instance of the load instruction by retrieving the value. The memory access unit may retrieve the value from the store buffer or from a cache of the processor.11-10-2011
20130132673STORAGE SYSTEM, STORAGE APPARATUS AND METHOD OF CONTROLLING STORAGE SYSTEM - A storage system enables a core storage apparatus to execute processing that requires data consistency to be secured, while providing high write performance to a host computer.05-23-2013
20130179637DATA STORAGE BACKUP WITH LESSENED CACHE POLLUTION - Control of the discard of data from cache during backup of the data. In a computer-implemented system comprising primary data storage; cache; backup data storage; and at least one processor, the processor is configured to identify data stored in the primary data storage for backup to the backup data storage, where the identified data is placed in the cache in the form of portions of the data, and where the portions of data are to be backed up from the cache to the backup storage. Upon backup of each portion of the identified data from the cache to the backup storage, the processor marks the backed up portion of the identified data for discard from the cache. Thus, the backed up data is discarded from the cache right away, lessening cache pollution.07-11-2013
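The mark-for-discard idea can be illustrated as below; discarding immediately after the backup write is a simplification of marking the page for discard, and all names are hypothetical:

    from collections import OrderedDict

    # As each portion of the backup stream is written to backup storage, its
    # cache page is dropped, so one-pass backup data does not crowd out the
    # normal working set.
    def backup(portions, cache: OrderedDict, write_to_backup):
        for key, data in portions:
            cache[key] = data            # staged in cache on the way to backup
            write_to_backup(key, data)
            del cache[key]               # discard right after the portion is backed up

    cache = OrderedDict()
    backup([("blk0", b"a"), ("blk1", b"b")], cache, lambda k, d: None)
    print(len(cache))                    # 0 -- no backup-induced cache pollution
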
20130145094Information Processing Apparatus and Driver - According to one embodiment, an information processing apparatus includes a memory that comprises a buffer area, a first external storage, a second external storage and a driver. The driver is configured to control the first and second external storages in units of predetermined blocks. The driver comprises a cache reservation module configured to (i) reserve a cache area in the memory, the cache area being logically between the buffer area and the first external storage and between the buffer area and the second external storage and (ii) manage the cache area. The cache area operates as a primary cache for the second external storage and a cache for the first external storage. Part or the entire first external storage is used as a secondary cache for the second external storage. The buffer area is used to transfer data between the driver and a host system that requests data reads/writes.06-06-2013
20110219187CACHE DIRECTORY LOOKUP READER SET ENCODING FOR PARTIAL CACHE LINE SPECULATION SUPPORT - In a multiprocessor system, with conflict checking implemented in a directory lookup of a shared cache memory, a reader set encoding permits dynamic recordation of read accesses. The reader set encoding includes an indication of a portion of a line read, for instance by indicating boundaries of read accesses. Different encodings may apply to different types of speculative execution.09-08-2011
20110238914STORAGE APPARATUS AND DATA PROCESSING METHOD FOR THE SAME - The present invention aims for efficient use of storage capacity in a storage system by reducing the amount of time taken for processing, including redundancy removal and data compression, executed on transferred data.09-29-2011
20130151774Controlling a Storage System - A method, computer-readable storage medium and computer system for controlling a storage system, the storage system comprising a plurality of logical storage volumes, the method comprising: monitoring, for each of the logical storage volumes, one or more load parameters; receiving, for each of the logical storage volumes, one or more load parameter threshold values; comparing, for each of the logical storage volumes, the monitored load parameter values of said logical storage volume with the corresponding one or more load parameter threshold values; and, in case at least one of the monitored load parameter values of one of the logical storage volumes violates the load parameter threshold value it is compared with, automatically executing a corrective action.06-13-2013
20130151776RAPID MEMORY BUFFER WRITE STORAGE SYSTEM AND METHOD - Efficient and convenient storage systems and methods are presented. In one embodiment a storage system includes a host for processing information, a memory controller and a memory. The memory controller controls communication of the information between the host and the memory, wherein the memory controller routes data rapidly to a buffer of the memory without buffering in the memory controller. The memory stores the information. The memory includes a buffer for temporarily storing the data while corresponding address information is determined.06-13-2013
20130151775Information Processing Apparatus and Driver - According to one embodiment, an information processing apparatus includes a memory that includes a buffer area, a first external storage, a second external storage and a driver. Controlling the first and second external storages, the driver comprises a cache reservation module configured to reserve a cache area in the memory. The cache area is logically between the buffer area and the first external storage and between the buffer area and the second external storage. The driver is configured to use the cache area, secured in the memory by the cache reservation module, as a primary cache for the second external storage and a cache for the first external storage, and to use part or all of the first external storage as a secondary cache for the second external storage. The buffer area is reserved in order to transfer data between the driver and a host system that requests data writes and data reads.06-13-2013
20130151773DETERMINING AVAILABILITY OF DATA ELEMENTS IN A STORAGE SYSTEM - Data elements are stored at a plurality of nodes. Each data element is a member data element of one of a plurality of layouts. Each layout indicates a unique subset of nodes. All member data elements of the layout are stored on each node in the unique subset of nodes. A stored dependency list includes every layout that has member data elements. The dependency list is used to determine availability of data elements based on ability to access data from nodes from the plurality of nodes.06-13-2013
20100299479OBSCURING MEMORY ACCESS PATTERNS - For each memory location in a set of memory locations associated with a thread, setting an indication associated with the memory location to request a signal if data from the memory location is evicted from a cache; and in response to the signal, reloading the set of memory locations into the cache.11-25-2010
20100318740Method and System for Storing Real Time Values - A <<dynamic data cache module>> is inserted between the archiving subsystem (e.g. relational database writing API) and the tag data flow from the acquisition server. Client data requests must then always be routed through the <<dynamic data cache module>>. The <<dynamic data cache module>> is able to manage tag data that is not only coming from real-time acquisition (i.e. keeping the last n values of tag data in the cache) but also <> of data in a different time span. For this usage, the cache will be size-limited and a least recently used (LRU) algorithm may be used to free up space when needed.12-16-2010
20130185510CACHING SOURCE BLOCKS OF DATA FOR TARGET BLOCKS OF DATA - Provided is a method for processing a read operation for a target block of data. A read operation for the target block of data in target storage is received, wherein the target block of data is in an instant virtual copy relationship with a source block of data in source storage. It is determined that the target block of data in the target storage is not consistent with the source block of data in the source storage. The source block of data is retrieved from the source storage into a cache. The data in the source block of data in the cache is synthesized to make the data appear to be retrieved from the target storage. The target block of data is marked as read from the source storage. In response to the read operation completing, the target block of data that was read from the source storage is demoted.07-18-2013
20130185509COMPUTING MACHINE MIGRATION - Systems and methods for migration between computing machines are disclosed. The source and target machines can be either physical or virtual; the source can also be a machine image. The target machine is connected to a snapshot or image of the source machine file system, and a redo-log file is created on the file system associated with the target machine. The target machine begins operation by reading data directly from the snapshot or image of the source machine file system. Thereafter, all writes are made to the redo-log file, and subsequent reads are made from the redo-log file if it contains data for the requested sector or from the snapshot or image if it does not. The source machine continues to be able to run separately and simultaneously after the target machine begins operation.07-18-2013
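The read/write discipline described above reduces to a few lines; this sketch models sectors as dictionary keys and is purely illustrative (class and attribute names are hypothetical):

    class MigratedDisk:
        def __init__(self, snapshot):
            self.snapshot = snapshot     # read-only image of the source disk
            self.redo_log = {}           # sector -> data written after migration

        def read(self, sector):
            if sector in self.redo_log:  # sector modified since migration began
                return self.redo_log[sector]
            return self.snapshot[sector] # untouched sector: read the snapshot

        def write(self, sector, data):
            self.redo_log[sector] = data # the snapshot itself is never modified

    disk = MigratedDisk(snapshot={0: b"boot", 1: b"data"})
    disk.write(1, b"new!")
    print(disk.read(0), disk.read(1))    # b'boot' b'new!'

Because all writes land only in the redo log, the source machine's file system stays untouched and the source can keep running separately, as the abstract notes.
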
20130185508SYSTEMS AND METHODS FOR MANAGING CACHE ADMISSION - A cache layer leverages a logical address space and storage metadata of a storage layer (e.g., virtual storage layer) to cache data of a backing store. The cache layer maintains access metadata to track data characteristics of logical identifiers in the logical address space, including accesses pertaining to data that is not in the cache. The access metadata may be separate and distinct from the storage metadata maintained by the storage layer. The cache layer determines whether to admit data into the cache using the access metadata. Data may be admitted into the cache when the data satisfies cache admission criteria, which may include an access threshold and/or a sequentiality metric. Time-ordered history of the access metadata is used to identify important/useful blocks in the logical address space of the backing store that would be beneficial to cache.07-18-2013
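A rough sketch of admission control combining an access threshold with a sequentiality check; the threshold, the window size, and this particular definition of the sequentiality metric are assumptions for illustration, not the application's own:

    ACCESS_THRESHOLD = 3     # assumed tuning parameters
    SEQ_WINDOW = 8

    access_counts = {}       # logical id -> accesses seen (kept even for uncached ids)
    recent_ids = []          # sliding window of recently accessed logical ids

    def should_admit(logical_id):
        access_counts[logical_id] = access_counts.get(logical_id, 0) + 1
        recent_ids.append(logical_id)
        del recent_ids[:-SEQ_WINDOW]     # keep only the last SEQ_WINDOW accesses

        # sequentiality metric: the whole window is a +1 run (e.g. a streaming scan)
        steps = [b - a for a, b in zip(recent_ids, recent_ids[1:])]
        sequential = bool(steps) and all(s == 1 for s in steps)

        return access_counts[logical_id] >= ACCESS_THRESHOLD and not sequential

    for blk in [7, 7, 7]:
        admitted = should_admit(blk)
    print(admitted)          # True -- third access to block 7, not a sequential run
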
20110314224Apparatus and method for handling access operations issued to local cache structures within a data processing apparatus - An apparatus and method are provided for handling access operations issued to local cache structures within a data processing apparatus. The data processing apparatus comprises a plurality of processing units each having a local cache structure associated therewith. Shared access coordination circuitry is also provided for coordinating the handling of shared access operations issued to any of the local cache structures. For a shared access operation, the access control circuitry associated with the local cache structure to which that shared access operation is issued will perform a local access operation to that local cache structure, and in addition will issue a shared access signal to the shared access coordination circuitry. For a local access operation, the access control circuitry would normally perform a local access operation on the associated local cache structure, and not notify the shared access coordination circuitry. However, if an access operation extension value is set, then the access control circuitry treats such a local access operation as a shared access operation. Such an approach ensures correct operation even after an operating system and/or an application program are migrated from one processing unit to another.12-22-2011
20110314223SYSTEM FOR PROTECTING AGAINST CACHE RESTRICTION VIOLATIONS IN A MEMORY - An apparatus comprising a plurality of tag circuits, a plurality of compare circuits and a processing circuit. The plurality of tag circuits may each be configured to store memory mapping data. The plurality of compare circuits may each be configured to generate a respective compare result in response to a match between the memory mapping data of a respective one of the tag circuits and a respective one of a plurality of tag fields. The processing circuit may be configured to receive each of the compare results from the plurality of compare circuits. The processing circuit may also be configured to count occurrences of the matches. If more than one match is identified within a predetermined time, the processing circuit may invalidate the memory mapping data and the tag field, and may also re-fetch the memory mapping data.12-22-2011
20130191596ADJUSTMENT OF DESTAGE RATE BASED ON READ AND WRITE RESPONSE TIME REQUIREMENTS - A storage controller that includes a cache receives a command from a host, wherein a set of criteria corresponding to read and write response times for executing the command have to be satisfied. The storage controller determines ranks of a first type and ranks of a second type corresponding to a plurality of volumes coupled to the storage controller, wherein the command is to be executed with respect to the ranks of the first type. Destage rate corresponding to the ranks of the first type are adjusted to be less than a default destage rate corresponding to the ranks of the second type, wherein the set of criteria corresponding to the read and write response times for executing the command are satisfied.07-25-2013
20130191595METHOD AND APPARATUS FOR STORING DATA - Embodiments of the present invention provide a method and an apparatus for storing data, which relate to the field of data processing. In the present invention, the operation of a device is divided into different load modes during service processing, and the manner in which various data are stored in a Cache is dynamically adjusted, so that nodes with different characteristics in the device can control operations on the Cache, thus achieving lower power consumption and optimum performance of a large-capacity system under a heavy load.07-25-2013
20110320718READING OR WRITING TO MEMORY - To increase the efficiency of a running application, it is determined, for each block size, whether using a cache or using storage directly is more efficient; the determined memory type is then used for data streams having the corresponding block size.12-29-2011
20120005429REUSING STYLE SHEET ASSETS - In a first embodiment of the present invention, a method is provided comprising: parsing a document, wherein the document contains at least one reference to a style sheet; for each referenced style sheet: determining if a ruleset corresponding to the referenced style sheet is contained in a first local cache; if the ruleset corresponding to the style sheet is contained in the first local cache, retrieving the ruleset from the first local cache; if the referenced style sheet is not contained in the first local cache, parsing the referenced style sheet to derive a ruleset; and applying the ruleset(s) to the document to derive a layout for displaying the document.01-05-2012
20120017047DATA VAULTING IN EMERGENCY SHUTDOWN - A data storage apparatus includes a processor, a write cache in operable communication with the processor, an auxiliary storage device in operable communication with the write cache, and a temporary power source in electrical communication with each of the processor, write cache, and auxiliary storage device for supplying power in the event of a loss of primary, external power. The auxiliary storage device is dimensioned to have sufficient size for holding dirty pages cached in the write cache, and the temporary power source is configured with sufficient energy for, subsequent to the loss of the external power, powering the processor, the write cache, and the auxiliary storage device for an entire duration of a backup process.01-19-2012
20120023294MEMORY DEVICE AND METHOD HAVING ON-BOARD PROCESSING LOGIC FOR FACILITATING INTERFACE WITH MULTIPLE PROCESSORS, AND COMPUTER SYSTEM USING SAME - A memory device includes an on-board processing system that facilitates the ability of the memory device to interface with a plurality of processors operating in a parallel processing manner. The processing system includes circuitry that performs processing functions on data stored in the memory device in an indivisible manner. More particularly, the system reads data from a bank of memory cells or cache memory, performs a logic function on the data to produce results data, and writes the results data back to the bank or the cache memory. The logic function may be a Boolean logic function or some other logic function.01-26-2012
20120030427Cache Control Method, Node Apparatus, Manager Apparatus, and Computer System - Disclosed is a computer system that includes a first apparatus, which stores data and metadata in a storage, and multiple units of a second apparatus, which store a copy of data and metadata in the first apparatus in a cache. The first apparatus acquires throughput achieved when the units of the second apparatus access the data in the storage as first access information, acquires throughput achieved when the units of the second apparatus access data thereof as second access information, and selects either a first judgment mode or a second judgment mode in accordance with the first access information and the second access information. This reduces the amount of network traffic for metadata acquisition, thereby increasing the speed of data access.02-02-2012
20130198455CACHE MEMORY GARBAGE COLLECTOR - A method for managing objects stored in a cache memory of a processing unit. The cache memory includes a set of entries corresponding to an object. The method includes: checking, for each entry of at least a subset of entries of the set of entries of the cache memory, whether an object corresponding to each entry includes one or more references to one or more other objects stored in the cache memory and storing the references; determining among the objects stored in the cache memory, which objects are not referenced by other objects, based on the stored references; marking entries as checked to distinguish entries corresponding to objects determined as being not referenced from other entries of the checked entries, and casting out, according to the marking, entries corresponding to objects determined as being not referenced.08-01-2013
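The collection pass described above amounts to a reference scan over cache entries; a single-pass sketch with hypothetical structures:

    # Gather every cached object's outgoing references, mark entries that no
    # other cached object refers to, and cast those entries out.
    def collect(cache):
        """cache: object id -> set of ids of other cached objects it references."""
        referenced = set()
        for refs in cache.values():               # check and store the references
            referenced.update(r for r in refs if r in cache)

        unreferenced = set(cache) - referenced    # nothing in the cache points here
        for obj in unreferenced:                  # cast out unreferenced entries
            del cache[obj]
        return unreferenced

    cache = {"a": {"b"}, "b": set(), "c": set()}
    print(collect(cache))   # {'a', 'c'} (order may vary); 'b' survives because 'a' referred to it
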
20130198454CACHE DEVICE FOR CACHING - A cache device for caching scalable data structures in a cache memory exhibits a displacement strategy, in accordance with which scaling-down of one or more scalable files in the cache memory is provided for the purpose of freeing up storage space.08-01-2013
20130198453HYBRID STORAGE DEVICE INCLUDING NON-VOLATILE MEMORY CACHE HAVING RING STRUCTURE - A storage device is provided. The storage device has a storage region configured in a ring structure and divided into a reading cache region and a writing cache region, thereby reducing power consumption and increasing the speed of the storage device.08-01-2013
20130198456Fast Cache Reheat - Embodiments of the present invention allow for fast cache reheat by periodically storing a snapshot of information identifying the contents of the cache at the time of the snapshot, and then using the information from the last snapshot to restore the contents of the cache following an event that causes loss or corruption of cache contents such as a loss of power or system reset. Since there can be a time gap between the taking of a snapshot and such an event, the actual contents of the cache, and hence the corresponding data stored in a data store, may have changed since the last snapshot was taken. Thus, the information stored at the last snapshot is used to retrieve current data from the data store for use in restoring the contents of the cache.08-01-2013
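A minimal sketch of snapshot-and-reheat, assuming a JSON snapshot of the cached keys and a throwaway file path; note that reheat re-reads from the data store because, as the abstract points out, cached contents may be stale by restore time (all names are hypothetical):

    import json, time

    def snapshot(cache, path):
        # periodically persist only the identifiers of what the cache holds
        with open(path, "w") as f:
            json.dump({"taken": time.time(), "keys": list(cache)}, f)

    def reheat(path, fetch_from_store):
        with open(path) as f:
            meta = json.load(f)
        # fetch current data from the store: it may have changed since the snapshot
        return {key: fetch_from_store(key) for key in meta["keys"]}

    snapshot({"k1": b"x", "k2": b"y"}, "/tmp/cache.snap")
    warm = reheat("/tmp/cache.snap", fetch_from_store=lambda k: f"fresh-{k}")
    print(warm)   # {'k1': 'fresh-k1', 'k2': 'fresh-k2'}
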
20130212331Techniques for Storing Data and Tags in Different Memory Arrays - A memory controller includes logic circuitry to generate a first data address identifying a location in a first external memory array for storing first data, a first tag address identifying a location in a second external memory array for storing a first tag, a second data address identifying a location in the second external memory array for storing second data, and a second tag address identifying a location in the first external memory array for storing a second tag. The memory controller includes an interface that transfers the first data address and the first tag address for a first set of memory operations in the first and the second external memory arrays. The interface transfers the second data address and the second tag address for a second set of memory operations in the first and the second external memory arrays.08-15-2013
