Class / Patent application number | Description | Number of patent applications / Date published |
711158000 | Prioritizing | 84 |
20080215832 | DATA BUS BANDWIDTH SCHEDULING IN AN FBDIMM MEMORY SYSTEM OPERATING IN VARIABLE LATENCY MODE - A method and system for scheduling the servicing of data requests, using the variable latency mode, in an FBDIMM memory sub-system. A scheduling algorithm pre-computes return time data for data connected to all DRAM buffer chips and stores the return time data in a table. The return time data is expressed as a set of data return time binary vectors with one bit equal to “1” in each vector. For each received data request, the memory controller retrieves the appropriate return time vector. Additionally, the scheduling algorithm utilizes an updated history vector representing a compilation of data return time vectors of all executing requests to determine whether the received request presents a conflict to the executing requests. By computing and utilizing a score for each request, the scheduling algorithm re-orders and schedules the execution of selected requests to preserve as much data bus bandwidth as possible, while avoiding conflict. | 09-04-2008 |
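The bit-vector conflict test this abstract describes can be sketched in a few lines. This is a minimal illustration, not the patented method: the request records, the `vector` field name, and the lowest-set-bit tie-breaking rule are assumptions for the example.

```python
def has_conflict(return_vector: int, history: int) -> bool:
    # A request conflicts when its single "1" return-time bit is already
    # claimed by an executing request in the compiled history vector.
    return (return_vector & history) != 0

def pick_request(requests, history: int):
    # Among conflict-free requests, prefer the earliest return slot
    # (lowest set bit) to preserve data-bus bandwidth.
    free = [r for r in requests if not has_conflict(r["vector"], history)]
    return min(free, key=lambda r: r["vector"]) if free else None
```

After a request is scheduled, its return-time vector would be OR-ed into the history vector so later requests see the claimed slot.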
20080244201 | METHOD FOR DIGITAL STORAGE OF DATA ON A DATA MEMORY WITH LIMITED AVAILABLE STORAGE SPACE - Upon transfer, the most important data in a first memory of a data processing system are stored in a second data memory with limited capacity. The demarcation between important (and still storable) data on the one hand and less important (and therefore no longer storable) data on the other is made dependent on the available storage volume (SV) of the target data memory. This ensures that an optimal amount of the most important data is stored on the target data memory. | 10-02-2008 |
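The importance cutoff driven by the available storage volume (SV) amounts to a greedy fill. A minimal sketch, assuming hypothetical `(name, importance, size)` tuples:

```python
def select_for_transfer(items, storage_volume):
    # items: (name, importance, size) tuples; greedily keep the most
    # important entries that still fit in the available storage volume (SV).
    chosen, used = [], 0
    for name, importance, size in sorted(items, key=lambda i: -i[1]):
        if used + size <= storage_volume:
            chosen.append(name)
            used += size
    return chosen
```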
20080288729 | METHODS AND APPARATUS FOR PREDICTIVE DOCUMENT RENDERING - A system receives a document having a predefined format to be rendered on the computer system. The document is comprised of a plurality of objects. The system identifies at least one correlation between at least two objects within the plurality of objects, and assigns a weight to the correlation. The system determines a logical relationship between at least two objects within the plurality of objects. The logical relationship is determined according to the weight of at least one correlation. The logical relationship is associated with an order in which at least one object is rendered on the computer system. | 11-20-2008 |
20090019243 | DRAM Power Management in a Memory Controller - A memory controller uses a power- and performance-aware scheduler which reorders memory commands based on power priorities. Selected memory ranks of the memory device are then powered down based on rank localities of the reordered commands. The highest power priority may be given to memory commands having the same rank as the last command sent to the memory device. Any memory commands having the same power priority can be further sorted based on one or more performance criteria such as an expected latency of the memory commands and an expected ratio of read and write memory commands. To optimize the power-down function, the power-down command is only sent when the selected memory rank is currently idle, the selected memory rank is not already powered down, none of the reordered memory commands correspond to the selected rank, and a currently pending memory command cannot be issued in the current clock cycle. | 01-15-2009 |
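The two ideas in this abstract, rank-locality reordering and the guarded power-down decision, can be sketched as follows; the command dictionaries, latency tie-breaker, and function signatures are illustrative assumptions:

```python
def reorder_by_power_priority(commands, last_rank):
    # Highest power priority goes to commands targeting the same rank as
    # the last command sent; ties are broken by expected latency.
    return sorted(commands,
                  key=lambda c: (c["rank"] != last_rank, c["latency"]))

def may_power_down(rank, idle, powered_down, pending_ranks, can_issue_now):
    # Power-down is sent only when the rank is idle, not already down,
    # unreferenced by the reordered commands, and no pending command can
    # be issued in the current clock cycle.
    return (idle and not powered_down
            and rank not in pending_ranks
            and not can_issue_now)
```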
20090049256 | MEMORY CONTROLLER PRIORITIZATION SCHEME - A system includes a processor coupled to a memory through a memory controller. The memory controller includes first and second queues. The memory controller receives memory requests from the processor, assigns a priority to each request, stores each request in the first queue, and schedules processing of the requests based on their priorities. The memory controller changes the priority of a request in the first queue in response to a trigger, sends a next scheduled request from the first queue to the second queue in response to detecting the next scheduled request has the highest priority of any request in the first queue, and sends requests from the second queue to the memory. The memory controller changes the priority of different types of requests in response to different types of triggers. The memory controller maintains a copy of each request sent to the second queue in the first queue. | 02-19-2009 |
20090083501 | CANCELLATION OF INDIVIDUAL LOGICAL VOLUMES IN PREMIGRATION CHAINS - Provided are techniques for cancellation of premigration of a member in a chain. A set of premigration messages are received, wherein a separate premigration message is received for each logical volume in a chain of logical volumes. While processing the premigration messages in order of receipt of each of the premigration messages, a cancel message indicating that premigration of a logical volume in the chain is to be cancelled is received. In response to determining that the logical volume whose premigration is to be cancelled has not already been transferred to physical storage media, premigration of the logical volume is cancelled by removing a premigration message for that logical volume from the set of premigration messages and premigration of each other logical volume in the chain of logical volumes is continued in order of receipt. | 03-26-2009 |
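The cancellation rule, drop the message only if the volume has not yet reached physical media, while the rest of the chain keeps its receipt order, can be sketched like this (volume identifiers and the in-place list update are assumptions for the example):

```python
def cancel_premigration(pending, transferred, volume):
    # Cancellation succeeds only if the logical volume has not already
    # been transferred to physical storage media; the other volumes in
    # the chain keep their order of receipt.
    if volume in transferred:
        return False
    pending[:] = [v for v in pending if v != volume]
    return True
```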
20090164740 | DEVICE AND METHOD FOR EXTRACTING MEMORY DATA - A device and method for extracting data stored in a volatile memory are provided. In particular, a memory-data extracting device and method for ensuring integrity of data extracted from a volatile memory installed in a computer are provided. A memory-data extracting module extracts data stored in a memory. A module loader loads the memory-data extracting module in a kernel region of the memory and sets a priority of the loaded memory-data extracting module to be higher than priorities of kernel processes loaded in the memory. Task switching can be prevented in the course of extracting memory data by loading a process for extracting memory data in a kernel region and setting a priority of the loaded process to be higher than priorities of other kernel processes, thereby ensuring the integrity of data extracted from the volatile memory. | 06-25-2009 |
20090164741 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - An information processing apparatus and an information processing method are capable of correctly selecting data to be deleted, without a user having to perform a troublesome operation. In a backup operation, a determination is made for each image file as to whether a predetermined condition is satisfied. If the condition is satisfied, image files are backed up, and storage priority levels defined for these image files are reduced in accordance with a rule predefined by a user. The storage priority level is a measure indicating the priority of keeping an image file in a storage unit. The higher the storage priority, the lower the probability that image files are deleted. The storage priority levels are changed depending on whether image files have been backed up and depending on the number of times image files were backed up. | 06-25-2009 |
20090172315 | PRIORITY AWARE SELECTIVE CACHE ALLOCATION - A method and apparatus are described herein for providing priority-aware, consumption-guided, dynamic probabilistic allocation for a cache memory. Utilization of a sample size of a cache memory is measured for each priority level of a computer system. Allocation probabilities for each priority level are updated based on the measured consumption/utilization, i.e. allocation is reduced for priority levels consuming too much of the cache and allocation is increased for priority levels consuming too little of the cache. Each incoming allocation request is assigned a priority level. An allocation probability associated with the priority level is compared with a randomly generated number. If the number is less than the allocation probability, then a fill to the cache is performed normally. In contrast, a spatially or temporally limited fill is performed if the random number is greater than the allocation probability. | 07-02-2009 |
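Both halves of this scheme, the probabilistic fill decision and the consumption-driven probability update, are short enough to sketch. The random draw is passed in explicitly here to keep the example deterministic; the step size and dictionary shapes are assumptions:

```python
def decide_fill(priority, alloc_prob, draw):
    # Normal cache fill when the random draw is below the priority level's
    # allocation probability; otherwise a spatially/temporally limited fill.
    return "normal" if draw < alloc_prob[priority] else "limited"

def update_probs(alloc_prob, utilization, target, step=0.1):
    # Shrink allocation for levels consuming more than their target share
    # of the sampled cache, grow it for levels consuming less.
    for level, used in utilization.items():
        if used > target[level]:
            alloc_prob[level] = max(0.0, alloc_prob[level] - step)
        elif used < target[level]:
            alloc_prob[level] = min(1.0, alloc_prob[level] + step)
```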
20090172316 | MULTI-LEVEL PAGE-WALK APPARATUS FOR OUT-OF-ORDER MEMORY CONTROLLERS SUPPORTING VIRTUALIZATION TECHNOLOGY - The invention relates generally to computer memory access. Embodiments of the invention provide a multi-level page-walk apparatus and method that enable I/O devices to execute multi-level page-walks with an out-of-order memory controller. In embodiments of the invention, the multi-level page-walk apparatus includes a demotion-based priority grant arbiter, a page-walk tracking queue, a page-walk completion queue, and a command packetizer. | 07-02-2009 |
20090177853 | SYSTEM AND METHODS FOR MEMORY EXPANSION - This document discusses, among other things, an example system and methods for memory expansion. An example embodiment includes detecting a memory command directed to a logical rank and a number of physical ranks mapped to the logical rank. The example embodiment may also include issuing the memory command to the number of physical ranks based on determining that the memory command is to be issued to the number of physical ranks. | 07-09-2009 |
20090193203 | System to Reduce Latency by Running a Memory Channel Frequency Fully Asynchronous from a Memory Device Frequency - A memory system is provided that reduces latency by running a memory channel fully asynchronous from a memory device frequency. The memory system comprises a memory hub device integrated in a memory module. The memory hub device comprises a command queue that receives a memory access command from an external memory controller via a memory channel at a first operating frequency. The memory system also comprises a memory hub controller integrated in the memory hub device. The memory hub controller reads the memory access command from the command queue at a second operating frequency. By receiving the memory access command at the first operating frequency and reading the memory access command at the second operating frequency, an asynchronous boundary is implemented. The first operating frequency is a maximum designed operating frequency of the memory channel, and the first operating frequency is independent of the second operating frequency. | 07-30-2009 |
20090193204 | SYSTEM AND METHOD OF ACCESSING MEMORY WITHIN AN INFORMATION HANDLING SYSTEM - A system and method of accessing memory within an information handling system are disclosed. In one form, a method of accessing memory can include detecting a first operating value of a first memory access node accessible to a first processor, and initiating operation of the first memory access node to a first data rate value. The method can also include initiating operation of a second memory access node to a second data rate value. In one form, the second data rate value can be different from the first data rate value. The method can also include enabling a first application access to either the first memory access node or the second memory access node via an operating system enabled by the processor. | 07-30-2009 |
20090240902 | Computer system and command execution frequency control method - A computer system of the present invention can adjust the execution frequencies of a command issued from a host and a command issued from a storage. An external manager disposed in the host configures a priority for a host command issued from a command issuing module inside the host. An internal manager disposed in the storage configures a priority for an internal command issued from a command issuing module inside the storage. The internal manager adjusts the execution frequency of the host command and the execution frequency of the internal command based on the host command priority and the internal command priority. | 09-24-2009 |
20090248999 | MEMORY CONTROL APPARATUS, MEMORY CONTROL METHOD AND INFORMATION PROCESSING SYSTEM - A memory control apparatus, a memory control method and an information processing system are disclosed. Fetch response data retrieved from a main storage unit is received, while bypassing a storage unit, by a first port in which the received fetch response data can be set. The fetch response data retrieved from the main storage unit, if unable to be set in the first port, is set in a second port through the storage unit. A transmission control unit performs a priority control operation to send out, in accordance with a predetermined priority, the fetch response data set in the first port or the second port to the processor. As a result, the latency from the arrival of the fetch response data to its transmission toward the processor in response to a fetch request is shortened. | 10-01-2009 |
20090292886 | REACTIVE PLACEMENT CONTROLLER FOR INTERFACING WITH BANKED MEMORY STORAGE - An invention is provided for a reactive placement controller for interfacing with a banked memory storage. The reactive placement controller includes a read/write module, which is coupled to a command control module for a banked memory device. A command queue is included that comprises a plurality of queue entries coupled in series, with a top queue entry coupled to the read/write module. Each queue entry is capable of storing a memory command. Each queue entry includes its own queue control logic that functions to control storage of new memory commands into the command queue to reduce latency of commands in the command queue. | 11-26-2009 |
20090327624 | INFORMATION PROCESSING APPARATUS, CONTROLLING METHOD THEREOF, AND PROGRAM - An information processing apparatus controls writing to a disk. A command reception section receives from a host apparatus a write command and a control command controlling a cache about the write command. A queue storage section stores a queue for the write command and the control command received by the command reception section. A control section determines which of a first write command for data of a file and a second write command for metadata corresponding to the file the write command stored in the queue is, groups, when the control command is received by the command reception section, at least one first write command and at least one second write command that have been received and stored in the queue, assigns an execution sequence numbers to the first write command and the second write command in the group such that data write of the first write command to the disk is executed in priority to the data write of the second write command, and controls execution of the first write command and the second write command according to the assigned execution sequence numbers. | 12-31-2009 |
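The sequence-number assignment, data writes of a group ordered ahead of metadata writes, reduces to a stable partition. A minimal sketch with hypothetical command records (the `kind` and `id` fields are assumptions):

```python
def assign_sequence(group):
    # File-data writes receive earlier execution sequence numbers than
    # metadata writes, so data reaches the disk before its metadata.
    data = [c for c in group if c["kind"] == "data"]
    meta = [c for c in group if c["kind"] == "meta"]
    return {c["id"]: seq for seq, c in enumerate(data + meta)}
```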
20100023712 | Storage subsystem and method of executing commands by controller - A storage subsystem capable of processing time-critical control commands while suppressing deterioration of the system performance to a minimum. When various commands are received in a multiplex manner via the same port from plural host devices, the channel adapter of the storage subsystem extracts commands of a first kind from the received commands. Then, the adapter executes the extracted commands of the first kind with high priority within a given unit time until a given number of guaranteed activations is reached. At the same time, commands of a second kind are enqueued in a queue of commands. After the commands of the first kind are executed as many as the number of guaranteed activations, the commands of the second kind are executed in the unit time. | 01-28-2010 |
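The guaranteed-activation scheme per unit time can be sketched as a simple split; treating leftover first-kind commands as carry-over to the next unit time is an assumption of this example, not stated in the abstract:

```python
def schedule_unit_time(first_kind, second_kind, guaranteed):
    # Time-critical first-kind commands run with high priority up to the
    # guaranteed-activation count; enqueued second-kind commands then fill
    # the rest of the unit time, and leftover first-kind commands wait.
    run_now = first_kind[:guaranteed] + second_kind
    carry_over = first_kind[guaranteed:]
    return run_now, carry_over
```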
20100138618 | Priority Encoders - A priority encoder and a processing device having the priority encoder. The priority encoder includes a port selector for generating a plurality of prioritized read requests based on a plurality of write requests from a plurality of processing devices and a predetermined priority assigned to each of the plurality of processing devices, one of the plurality of processing devices being selected based on the plurality of prioritized read requests; and a port latch for holding the values of the prioritized read requests to enable one of a plurality of communication ports unless the prioritized read requests are changed, each communication port for communicating with one of the processing devices to read data from the processing device. | 06-03-2010 |
20100161918 | Third dimensional memory with compress engine - An integrated circuit and method for modifying data by compressing the data in third dimensional memory technology is disclosed. In a specific embodiment, an integrated circuit is configured to perform compression of data disposed in third dimensional memory. For example, the integrated circuit can include a third dimensional memory array configured to store an input independent of storing a compressed copy of the input, a processor configured to compress the input to form the compressed copy of the input, and a controller configured to control access between the processor and the third dimensional memory array. The third dimension memory array can include one or more layers of non-volatile re-writeable two-terminal cross-point memory arrays fabricated back-end-of-the-line (BEOL) over a logic layer fabricated front-end-of-the-line (FEOL). The logic layer includes active circuitry for data operations (e.g., read and write operations) and data compression operations on the third dimension memory array. | 06-24-2010 |
20100325375 | DATA-ACCESS CONTROL DEVICE AND DATA-ACCESS CONTROL METHOD - A memory control unit sequentially performs access requests to a plurality of banks A to D for a high-speed module | 12-23-2010 |
20110010512 | METHOD FOR CONTROLLING STORAGE SYSTEM HAVING MULTIPLE NON-VOLATILE MEMORY UNITS AND STORAGE SYSTEM USING THE SAME - A method for controlling a storage system and the storage system using this method are disclosed. In the storage system, at least two memory units share an I/O bus. The shared I/O bus transfers information for each memory unit to execute an operation. The operation has at least one high priority cycle and at least one low priority cycle. When a low priority cycle is overlapped with a high priority cycle, the low priority cycle is suspended, and the high priority cycle is operated first. After the high priority cycle is finished, the suspended low priority cycle is then resumed. By doing so, the shared I/O bus may be used by one memory unit during a busy cycle for another memory unit, during which the latter memory unit does not use the I/O bus. Therefore, the I/O bus can be more efficiently used. | 01-13-2011 |
20110131385 | DATA PROCESSING CIRCUIT WITH ARBITRATION BETWEEN A PLURALITY OF QUEUES - Requests from a plurality of different agents | 06-02-2011 |
20110179240 | ACCESS SCHEDULER - Embodiments of the present invention provide a system for scheduling memory accesses for one or more memory devices. This system includes a set of queues configured to store memory access requests, wherein each queue is associated with at least one memory bank or memory device in the one or more memory devices. The system also includes a set of hierarchical levels configured to select memory access requests from the set of queues to send to the one or more memory devices, wherein each level in the set of hierarchical levels is configured to perform a different selection operation. | 07-21-2011 |
20110185134 | TEMPORARY STATE SERVICE PROTOCOL - A temporary state service protocol is utilized by clients to temporarily store and access data within a temporary data store between different requests. Each client associated with a web page can create data in the data store independently from other clients for the same web page. An Application Programming Interface (API) is used to manage and interact with the data in the data store. The procedures in temporary state service protocol allow clients to add, modify, retrieve, and delete data in the data store. The clients may also use the API to place and remove virtual locks on instances of the data. | 07-28-2011 |
20110197038 | SERVICING LOW-LATENCY REQUESTS AHEAD OF BEST-EFFORT REQUESTS - The invention relates to a method of controlling access of a System-on-Chip to an off-chip memory, wherein the System-on-Chip comprises a plurality of agents which need access to the memory. The method comprises: i) receiving low-priority requests (CBR, BER) for access to the memory; ii) receiving high-priority requests (LLR) for access to the memory; iii) distinguishing between first-subtype requests (CBR) and second-subtype requests (BER) in the low-priority requests (CBR, BER), wherein the first-subtype requests (CBR) require a latency-rate guarantee, and iv) arbitrating between the high-priority requests (LLR) and the low-priority requests (CBR, BER) such that the high-priority requests (LLR) are serviced with the highest priority, while guaranteeing the latency-rate guarantee for the first-subtype requests (CBR), wherein the high-priority requests (LLR) are serviced before the second-subtype requests (BER) if there are no first-subtype requests (CBR) to be serviced for guaranteeing the latency-rate guarantee. The invention further relates to a memory controller for use in a System-on-Chip connected to an off-chip memory, wherein the System-on-Chip comprises a plurality of agents, which need access to the memory, wherein the memory controller is configured for carrying such method. The invention also relates to a System-on-Chip comprising such memory controller. With the invention the high-priority requests (LL-requests) get a better service, i.e. a smaller average latency, at the expense of the second-subtype requests. | 08-11-2011 |
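The three-class arbitration rule, LL-requests first, except when a CBR latency-rate guarantee is about to be violated, with BE-requests last, can be sketched directly. The list-based queues and the boolean guarantee flag are simplifying assumptions; a real controller would derive the flag from a latency-rate accounting mechanism:

```python
def arbitrate(llr, cbr, ber, cbr_guarantee_at_risk):
    # CBR wins only when its latency-rate guarantee would otherwise be
    # violated; otherwise LLR is serviced first and BER last.
    if cbr and cbr_guarantee_at_risk:
        return cbr[0]
    for queue in (llr, cbr, ber):
        if queue:
            return queue[0]
    return None
```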
20110208926 | LOW LATENCY REQUEST DISPATCHER - A first-in-first-out (FIFO) queue optimized to reduce latency in dequeuing data items from the FIFO. In one implementation, a FIFO queue additionally includes buffers connected to the output of the FIFO queue and bypass logic. The buffers act as the final stages of the FIFO queue. The bypass logic causes input data items to bypass the FIFO and to go straight to the buffers when the buffers are able to receive data items and the FIFO queue is empty. In a second implementation, arbitration logic is coupled to the queue. The arbitration logic controls a multiplexer to output a predetermined number of data items from a number of final stages of the queue. In this second implementation, the arbitration logic gives higher priority to data items in later stages of the queue. | 08-25-2011 |
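The first implementation's bypass behavior can be modeled with two lists standing in for the queue stages and the output buffers; the buffer capacity and refill-on-dequeue behavior are assumptions of this sketch:

```python
class BypassFifo:
    def __init__(self, buffer_capacity=2):
        self.fifo = []                 # main FIFO stages
        self.buffer = []               # final output-buffer stages
        self.capacity = buffer_capacity

    def enqueue(self, item):
        # Bypass the FIFO entirely when it is empty and a buffer slot is
        # free, avoiding the latency of traversing the queue stages.
        if not self.fifo and len(self.buffer) < self.capacity:
            self.buffer.append(item)
        else:
            self.fifo.append(item)

    def dequeue(self):
        if self.buffer:
            item = self.buffer.pop(0)
            if self.fifo:              # refill the freed buffer stage
                self.buffer.append(self.fifo.pop(0))
            return item
        return None
```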
20110213938 | SIMULTANEOUS PERSONAL SENSING AND DATA STORAGE - A personal sensing device that may be used for storing personal data and sensed data arbitrates and prioritizes competing requests for memory access from sensing, wireless, and wired interfaces. The personal sensing device enables power efficiency with burst-writes to the memory at higher data rates then an incoming sensor data stream without risk of data loss. Sensing operations coordinated by reconfigurable control logic are partitioned from storage operations coordinated by a multi-port memory controller. The interface between the functional partitioning uses message passing, status/control registers and buffering to reduce or eliminate system interdependencies. | 09-01-2011 |
20110238934 | ASYNCHRONOUSLY SCHEDULING MEMORY ACCESS REQUESTS - A data processing system employs a scheduler to schedule pending memory access requests and a memory controller to service scheduled pending memory access requests. The memory access requests are asynchronously scheduled with respect to the clocking of the memory. The scheduler is operated using a clock signal with a frequency different from the frequency of the clock signal used to operate the memory controller. The clock signal used to clock the scheduler can have a lower frequency than the clock used by a memory controller. As a result, the scheduler is able to consider a greater number of pending memory access requests when selecting the next pending memory access request to be submitted to the memory for servicing and thus the resulting sequence of selected memory access requests is more likely to be optimized for memory access throughput. | 09-29-2011 |
20110264873 | External Memory Controller - A memory controller to provide memory access services in an adaptive computing engine is provided. The controller comprises: a network interface configured to receive a memory request from a programmable network; and a memory interface configured to access a memory to fulfill the memory request from the programmable network, wherein the memory interface receives and provides data for the memory request to the network interface, the network interface configured to send data to and receive data from the programmable network. | 10-27-2011 |
20120005439 | COMPUTER SYSTEM HAVING A CACHE MEMORY AND CONTROL METHOD OF THE SAME - A computer includes a memory that stores data, a cache memory that stores a copy of the data, a directory storage unit that stores directory information related to the data and includes information indicating that the data is copied to the cache memory, a directory cache storage unit that stores a copy of the directory information stored in the directory storage unit, and a control unit that controls storage of data in the directory cache storage unit, manages the data copied from the memory to the cache memory by dividing the data into an exclusive form and a shared form, and sets a priority of storage of the directory information related to the data fetched in the exclusive form in the directory cache storage unit higher than a priority of storage of the directory information related to the data fetched in the shared form in the directory cache storage unit. | 01-05-2012 |
20120017055 | Method and device for scheduling queues based on chained list - The present invention discloses a method for scheduling queues based on a chained list. The method includes the following steps: setting the number of addresses in a queuing chained list not less than the number of queues, and partitioning the queuing chained list into different queuing sub-chained lists according to the priorities of the queues, wherein the number of the addresses in each queuing sub-chained list is not less than the total number of the queues whose priorities correspond to that sub-chained list; setting for each queue a queuing chained list identifier indicating whether the queue has queued in the queuing chained list; before a queue satisfying the queuing criteria is added to the queuing chained list, determining from its queuing chained list identifier whether it has already queued; if it has, it is not added again; otherwise, the queue is added to the end of the queuing sub-chained list corresponding to its priority, and its queuing chained list identifier is modified to indicate that it has queued in the queuing chained list. The present invention also discloses a device for scheduling queues based on a chained list. The present invention ensures impartiality when queues having the same priority are scheduled. | 01-19-2012 |
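The enqueue step above, check the per-queue flag, append to the tail of the matching priority sub-list, then set the flag, can be sketched with dictionaries standing in for the chained list (the data shapes are assumptions of this example):

```python
def enqueue_once(sub_lists, queued_flag, queue_id, priority):
    # The per-queue flag guarantees at most one entry in the chained list;
    # appending to the tail of the priority's sub-list keeps scheduling
    # fair among queues of equal priority.
    if queued_flag.get(queue_id):
        return False
    sub_lists.setdefault(priority, []).append(queue_id)
    queued_flag[queue_id] = True
    return True
```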
20120089794 | METHODS AND DEVICES FOR DETERMINING QUALITY OF SERVICES OF STORAGE SYSTEMS - Methods and systems for allowing access to computer storage systems. Multiple requests from multiple applications can be received and processed efficiently to allow traffic from multiple customers to access the storage system concurrently. | 04-12-2012 |
20120137091 | SELECTING A MEMORY FOR STORAGE OF AN ENCODED DATA SLICE IN A DISPERSED STORAGE NETWORK - A method begins by a processing module receiving an encoded data slice for storage. The method continues with the processing module obtaining metadata associated with the encoded data slice and interpreting the metadata to determine whether the encoded data slice is to be stored in a first access speed memory or a second access speed memory, wherein the first access speed memory has a higher data access rate than the second access speed memory. The method continues with the processing module storing the encoded data slice in a memory device of the first access speed memory when the encoded data slice is to be stored in the first access speed memory and storing the encoded data slice in a memory device of the second access speed memory when the encoded data slice is to be stored in the second access speed memory. | 05-31-2012 |
20120159094 | ASSIGNING READ REQUESTS BASED ON BUSYNESS OF DEVICES - Techniques are provided for assigning read requests to storage devices in a manner that reduces the likelihood that any storage device will become overloaded or underutilized. Specifically, a read-request handler assigns read requests that are directed to each particular item among the storage devices that have copies of the item based on how busy each of those storage devices is. Consequently, even though certain storage devices may have copies of the same item, there may be times during which one storage device is assigned a disproportionate number of the reads of the item because the other storage device is busy with read requests for other items, and there may be other times during which other storage device is assigned a disproportionate number of the reads of the item because the one storage device is busy with read request for other items. Various techniques for estimating the busyness of storage devices are provided, including fraction-based estimates, interval-based estimates, and the response-time-based estimates. Techniques for smoothing those estimates, and for handicapping devices, are also provided. | 06-21-2012 |
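The core assignment rule, route each read to the least-busy replica, plus one of the smoothing techniques the abstract mentions, can be sketched as follows; the exponential smoothing constant is an illustrative choice, not taken from the patent:

```python
def assign_read(replicas, busyness):
    # Route the read to the least-busy device holding a copy of the item.
    return min(replicas, key=lambda dev: busyness[dev])

def smooth_busyness(previous, sample, alpha=0.3):
    # Exponentially smoothed busyness estimate, one way to damp noisy
    # per-interval samples before comparing devices.
    return (1 - alpha) * previous + alpha * sample
```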
20120185656 | Systems and Methods for Scheduling a Memory Command for Execution Based on a History of Previously Executed Memory Commands - A memory system is operated by maintaining a queue of memory commands to be executed, maintaining a list of previously executed memory commands, comparing local information associated with the commands to be executed with local information associated with the list of previously executed commands, and selecting one of the commands for execution from the queue of memory commands to be executed based on a result of the comparison. | 07-19-2012 |
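One plausible reading of "comparing local information" is a locality score against the executed-command history; the bank/row fields and the highest-score selection rule below are assumptions for illustration:

```python
def select_next(queue, history):
    # Score each pending command by locality overlap (e.g. same bank and
    # row) with previously executed commands, then pick the best match.
    def score(cmd):
        return sum(h["bank"] == cmd["bank"] and h["row"] == cmd["row"]
                   for h in history)
    return max(queue, key=score)
```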
20120203986 | APPARATUS, SYSTEM, AND METHOD FOR MANAGING OPERATIONS FOR DATA STORAGE MEDIA - An apparatus, system, and method are disclosed for managing operations for data storage media. An adjustment module interrupts or otherwise adjusts execution of an executing operation on the data storage media. A schedule module executes a pending operation on the data storage media in response to adjusting execution of the executing operation. The pending operation comprises a higher execution priority than the executing operation. The schedule module finishes execution of the executing operation in response to completing execution of the pending operation. | 08-09-2012 |
20120215997 | MANAGING BUFFER CONDITIONS - Systems and techniques include, in some implementations, a computer implemented method storing a portion of data elements present in a first buffer in a second buffer in response to detecting an overflow condition of the first buffer, wherein the data elements in the first buffer are sorted according to a predetermined order, and inserting a proxy data element in the first buffer to represent the portion of data elements stored to the second buffer. | 08-23-2012 |
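The overflow handling, spill a portion of the sorted first buffer to the second buffer and leave a proxy element in its place, can be sketched with plain lists; the spill point, proxy representation, and capacity argument are assumptions of this example:

```python
def handle_overflow(first, second, capacity):
    # On overflow, spill the tail of the sorted first buffer into the
    # second buffer and insert one proxy element representing the spill.
    if len(first) <= capacity:
        return
    spill = first[capacity - 1:]
    del first[capacity - 1:]
    second.extend(spill)
    first.append(("proxy", spill[0]))  # proxy marks the smallest spilled key
```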
20120221810 | REQUEST MANAGEMENT SYSTEM AND METHOD - A request management system includes a request priority queue module prioritizing requests to be placed in queues based on priorities, and a request priority rule module setting an order of placement of the requests in the queues. The request management system further includes a computerized request monitoring and management module dynamically managing processing of a request from the prioritized requests based on a request processing statistic, the priorities and the order of placement. | 08-30-2012 |
20120297155 | STORAGE SYSTEM AND METHOD OF EXECUTING COMMANDS BY CONTROLLER - A storage subsystem capable of processing time-critical control commands while keeping degradation of system performance to a minimum. When various commands are received in a multiplexed manner via the same port from plural host devices, the channel adapter of the storage subsystem extracts commands of a first kind from the received commands. Then, the adapter executes the extracted commands of the first kind with high priority within a given unit time until a given number of guaranteed activations is reached. At the same time, commands of a second kind are enqueued in a command queue. After as many commands of the first kind as the number of guaranteed activations have been executed, the commands of the second kind are executed within the unit time. | 11-22-2012 |
20120311277 | MEMORY CONTROLLERS WITH DYNAMIC PORT PRIORITY ASSIGNMENT CAPABILITIES - A programmable integrated circuit may have a memory controller that interfaces between master modules and system memory. The memory controller may receive memory access requests from the masters via ports that have associated priority values and fulfill the memory access requests by configuring system memory to respond to the memory access requests. To dynamically modify the associated priority values while the memory controller receives and fulfills the memory access requests, a priority value update module may be provided that dynamically updates priority values for the memory controller ports. The priority value update module may provide the updated priority values with update registers that are updated based on an update signal and a system clock. The priority values may be provided by shift registers, memory mapped registers, or provided by masters along with each memory access request. | 12-06-2012 |
20120317379 | STORAGE ARCHITECTURE FOR BACKUP APPLICATION - Aspects of the subject matter described herein relate to a storage architecture. In aspects, an address provided by a data source is translated into a logical storage address of virtual storage. This logical storage address is translated into an identifier that may be used to store data on or retrieve data from a storage system. The address space of the virtual storage is divided into chunks that may be streamed to the storage system. | 12-13-2012 |
20130007386 | MEMORY ARBITER WITH LATENCY GUARANTEES FOR MULTIPLE PORTS - Memory arbiter with latency guarantees for multiple ports. A method of controlling access to an electronic memory includes measuring a latency value indicative of a time difference between origination of an access request from a port of a plurality of ports and a response from the electronic memory. The method also includes calculating a difference between the latency value for the port and a target value associated with the port. The method further includes calculating a running sum of differences for the port covering each of a plurality of access requests. Further, the method includes determining a delta of a priority value for the port based on the running sum of differences. Moreover, the method includes prioritizing the access by the plurality of ports according to associated priority values. | 01-03-2013 |
20130013872 | External Memory Controller Node - A memory controller to provide memory access services in an adaptive computing engine is provided. The controller comprises: a network interface configured to receive a memory request from a programmable network; and a memory interface configured to access a memory to fulfill the memory request from the programmable network, wherein the memory interface receives and provides data for the memory request to the network interface, the network interface configured to send data to and receive data from the programmable network. | 01-10-2013 |
20130111159 | Digital Signal Processing Data Transfer | 05-02-2013 |
20130138900 | INFORMATION PROCESSING DEVICE AND COMPUTER PROGRAM PRODUCT - According to an embodiment, an information processing device includes a first storage unit and a second storage unit having power consumption different from that of the first storage unit. The information processing device also includes a control unit configured to determine a priority of information that is to be stored in the first storage unit or the second storage unit. The control unit is configured to store the information into the first storage unit or into the second storage unit based on the determined priority. | 05-30-2013 |
20130198465 | CONNECTION APPARATUS, STORAGE APPARATUS, AND COMPUTER-READABLE RECORDING MEDIUM HAVING CONNECTION REQUEST TRANSMISSION CONTROL PROGRAM RECORDED THEREIN - A connection apparatus that connects a plurality of storage units and a controller that establishes connection with the respective storage units in response to a connection request issued from each of the plurality of storage units and accesses the storage units includes a processor; and a memory, wherein the processor transmits a connection request selected based on priority information that represents priority associated with the connection among a plurality of received connection requests to the controller, the priority information being stored in the memory, and changes priority information included in a connection request received from a certain storage unit among the plurality of storage units so that the priority information has higher priority than the priority information included in connection requests received from the other storage units for a period where a connection request is successively received from the certain storage unit and a predetermined condition is satisfied. | 08-01-2013 |
20130282994 | SYSTEMS, METHODS AND DEVICES FOR MANAGEMENT OF VIRTUAL MEMORY SYSTEMS - Systems, methods and devices for management of instances of virtual memory components for storing computer readable information for use by at least one first computing device, the system comprising at least one physical computing device, each physical computing device being communicatively coupled over a network and comprising: a physical memory component, a computing processor component, an operating system, a virtual machine monitor, and virtual memory storage appliances; at least one of the virtual memory storage appliances being configured to (a) accept memory instructions from the at least one first computing device, (b) instantiate instances of at least one virtual memory component, (c) allocate memory resources from at least one physical memory component for use by any one of the at least one virtual memory components, optionally according to a pre-defined policy; and (d) implement memory instructions on the at least one physical memory component. | 10-24-2013 |
20130290656 | Concurrent Request Scheduling | 10-31-2013 |
20130326166 | ADAPTIVE RESOURCE MANAGEMENT OF A DATA PROCESSING SYSTEM - A method for resource management of a data processing system is described herein. According to one embodiment, a token is periodically pushed into a memory usage queue, where the token includes a timestamp indicating time entering the memory usage queue. The memory usage queue stores a plurality of memory page identifiers (IDs) identifying a plurality of memory pages currently allocated to a plurality of programs running within the data processing system. In response to a request to reduce memory usage, a token is popped from the memory usage queue. A timestamp of the popped token is then compared with current time to determine whether a memory usage reduction action should be performed. | 12-05-2013 |
20130326167 | REPRIORITIZING PENDING DISPERSED STORAGE NETWORK REQUESTS - A method begins by a dispersed storage (DS) processing module monitoring processing status of a plurality of pending dispersed storage network (DSN) access requests, where fewer than a desired number of DS units have favorably responded to a set of access requests. The method continues with the DS processing module interpreting the processing status of the plurality of pending DSN access requests to detect a processing anomaly. The method continues with the DS processing module reprioritizing further processing of at least one of the plurality of pending DSN access requests having the processing anomaly and another one or more of the plurality of pending DSN access requests. The method continues with the DS processing module sending notice of the reprioritized further processing to one or more DS units. | 12-05-2013 |
20130339641 | INTEGRATED CIRCUIT CHIP AND MEMORY DEVICE - A memory device includes a pad that provides an interface with the exterior, a first setting unit that generates a termination setting signal for setting the pad for a purpose of termination data strobe using a first specific code of a mode register set operation, a second setting unit that generates a mask setting signal for setting the pad for a purpose of data mask using a second specific code of the mode register set operation, and a third setting unit that generates a write inversion setting signal for setting the pad for a purpose of write data bus inversion using a third specific code of the mode register set operation. When a setting signal with a higher priority is activated, a setting signal with a lower priority is deactivated regardless of a value of the corresponding code. | 12-19-2013 |
20140047201 | MEMORY-ACCESS-RESOURCE MANAGEMENT - The present application is directed to a memory-access-multiplexing memory controller that can multiplex memory accesses from multiple hardware threads, cores, and processors according to externally specified policies or parameters, including policies or parameters set by management layers within a virtualized computer system. A memory-access-multiplexing memory controller provides, at the physical-hardware level, a basis for ensuring rational and policy-driven sharing of the memory-access resource among multiple hardware threads, cores, and/or processors. | 02-13-2014 |
20140068204 | Low Power, Area-Efficient Tracking Buffer - A tracking buffer apparatus is disclosed. A tracking buffer apparatus includes lookup logic configured to locate entries having a transaction identifier corresponding to a received request. The lookup logic is configured to determine which of the entries having the same transaction identifier has a highest priority and thus cause a corresponding entry from a data buffer to be provided. When information is written into the tracking buffer, write logic writes a corresponding transaction identifier to the first free entry. The write logic also writes priority information in the entry based on other entries having the same transaction identifier. The entry currently being written may be assigned a lower priority than all other entries having the same transaction identifier. The priority information for entries having a common transaction identifier with one currently being read are updated responsive to the read operation. | 03-06-2014 |
20140082307 | SYSTEM AND METHOD TO ARBITRATE ACCESS TO MEMORY - Arbitrating memory access to main memory between a central processing unit (CPU) and a peripheral device. The memory accesses to and from the main memory by the CPU and by the peripheral device are prioritized according to a CPU priority level and a peripheral device priority level, respectively. An arbitration module is provided externally to the CPU, to the peripheral device and to the memory controller. The arbitration module receives the peripheral device priority level. When the CPU priority level and the peripheral device priority level are both set at the highest available priority level, the arbitration module outputs to the memory controller a new CPU priority level less than the highest available priority level. | 03-20-2014 |
20140082308 | STORAGE CONTROL DEVICE AND METHOD FOR CONTROLLING STORAGE DEVICES - According to an aspect of the present invention, provided is a storage control device including a processor. The processor monitors a load value of a first storage device or a second storage device during copy processing in which a copy of data stored in the first storage device is stored in the second storage device. The processor controls, in a case where the load value exceeds a predetermined threshold, the first storage device and the second storage device so that input/output processing to/from the second storage device is executed with priority over the copy processing. | 03-20-2014 |
20140082309 | MEMORY CONTROL DEVICE, INFORMATION PROCESSING APPARATUS, AND MEMORY CONTROL METHOD - Accesses to a memory divided into a plurality of units of operation are controlled. First and second units of operation from among the plurality of units of operation constitute a memory mirror. A reception circuit receives a plurality of read requests including bank identification information corresponding to both a first bank included in a first unit of operation and a second bank included in a second unit of operation, respectively. A determination circuit determines an access target of each read access so that the plurality of read accesses based on the plurality of read requests are made to the first and second units of operation alternately. The control circuit controls each read request so that each read access is made to a unit of operation determined as the access target. | 03-20-2014 |
20140122815 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND CONTROL SYSTEM - An information processing device includes a storage device that stores information, and a controller that adjusts the read time consumed per unit amount of data read according to the priority of the information to be read from the storage device and a permitted read time during which reading of information from the storage device is permitted. The permitted read time varies according to the processing time of another control operation different from the read control. | 05-01-2014 |
20140173224 | SEQUENTIAL LOCATION ACCESSES IN AN ACTIVE MEMORY DEVICE - Embodiments relate to sequential location accesses in an active memory device that includes memory and a processing element. An aspect includes a method for sequential location accesses that includes receiving from the memory a first group of data values associated with a queue entry at the processing element. A tag value associated with the queue entry and specifying a position from which to extract a first subset of the data values is read. The queue entry is populated with the first subset of the data values starting at the position specified by the tag value. The processing element determines whether a second subset of the data values in the first group of data values is associated with a subsequent queue entry, and populates a portion of the subsequent queue entry with the second subset of the data values. | 06-19-2014 |
20140173225 | REDUCING MEMORY ACCESS TIME IN PARALLEL PROCESSORS - Apparatus, computer readable medium, and method of servicing memory requests are presented. A first plurality of memory requests are associated together, wherein each of the first plurality of memory requests is generated by a corresponding one of a first plurality of processors, and wherein each of the first plurality of processors is executing a first same instruction. A second plurality of memory requests are associated together, wherein each of the second plurality of memory requests is generated by a corresponding one of a second plurality of processors, and wherein each of the second plurality of processors is executing a second same instruction. A determination is made to service the first plurality of memory requests before the second plurality of memory requests and the first plurality of memory requests is serviced before the second plurality of memory requests. | 06-19-2014 |
20140189267 | METHOD AND APPARATUS FOR MANAGING MEMORY SPACE - Embodiments of the present invention relate to a method, apparatus and computer product for managing memory space. In one aspect of the present invention, there is provided a method for managing memory space that is organized into pages, the pages being divided into a plurality of page sets, each page set being associated with one of a plurality of upper-layer systems, by: performing state monitoring to the plurality of upper-layer systems to assign priorities to the plurality of upper-layer systems; and determining an order of releasing the pages of the memory space based on the priorities of the plurality of upper-layer systems with the page sets as units. Other aspects and embodiments of invention are also disclosed. | 07-03-2014 |
20140201477 | METHODS AND APPARATUS TO MANAGE WORKLOAD MEMORY ALLOCATION - Methods, articles of manufacture, and apparatus are disclosed to manage workload memory allocation. An example method includes identifying a primary memory and a secondary memory associated with a platform, the secondary memory having first performance metrics different from second performance metrics of the primary memory, identifying access metrics associated with a plurality of data elements invoked by a workload during execution on the platform, prioritizing a list of the plurality of data elements based on the access metrics associated with corresponding ones of the plurality of data elements, and reallocating a first one of the plurality of data elements from the primary memory to the secondary memory based on the priority of the first one of the plurality of data elements. | 07-17-2014 |
20140223115 | MANAGING OUT-OF-ORDER MEMORY COMMAND EXECUTION FROM MULTIPLE QUEUES WHILE MAINTAINING DATA COHERENCY - Responsive to selecting a particular queue from among at least two queues to place an incoming event into within a particular entry from among multiple entries ordered upon arrival of the particular queue each comprising a separate collision vector, a memory address for the incoming event is compared with each queued memory address for each queued event in the other entries in the at least one other queue. Responsive to the memory address for the incoming event matching at least one particular queued memory address for at least one particular queued event in the at least one other queue, at least one particular bit is set in a particular collision vector for the particular entry in at least one bit position from among the bits corresponding with at least one row entry position of the at least one particular queued memory address within the other entries. | 08-07-2014 |
20140223116 | METHODS FOR SEQUENCING MEMORY ACCESS REQUESTS - Memory access requests are successively received in a memory request queue of a memory controller. Any conflicts or potential delays between temporally proximate requests that would occur if the memory access requests were to be executed in the received order are detected, and the received order of the memory access requests is rearranged to avoid or minimize the conflicts or delays and to optimize the flow of data to and from the memory data bus. The memory access requests are executed in the reordered sequence, while the originally received order of the requests is tracked. After execution, data read from the memory device by the execution of the read-type memory access requests are transferred to the respective requestors in the order in which the read requests were originally received. | 08-07-2014 |
20140258649 | CONTROL OF PAGE ACCESS IN MEMORY - The present techniques provide systems and methods of controlling access to more than one open page in a memory component, such as a memory bank. Several components may request access to the memory banks. A controller can receive the requests and open or close the pages in the memory bank in response to the requests. In some embodiments, the controller assigns priority to some components requesting access, and assigns a specific page in a memory bank to the priority component. Further, additional available pages in the same memory bank may also be opened by other priority components, or by components with lower priorities. The controller may conserve power, or may increase the efficiency of processing transactions between components and the memory bank by closing pages after time outs, after transactions are complete, or in response to a number of requests received by masters. | 09-11-2014 |
20140310485 | Data Creation Device and Computer-Readable Medium Storing Data Creation Program - A field for which a first priority level that has been set is the same as a second priority level that is stored in a priority level storage portion is specified as an update field from among a plurality of fields that have been defined in a content that is stored in a data storage portion. At least one character that is contained in the update field is updated in the specified order. The second priority level is lowered by one level in a case where a character rollover has occurred during the updating of the at least one character. Printable data for the content in which the at least one character has been updated is created in a case where the character rollover has not occurred during the updating of the at least one character. The update field is specified every time the second priority level that is stored in the priority level storage portion is lowered. | 10-16-2014 |
20140325166 | POWER SAVING MODE HYBRID DRIVE ACCESS MANAGEMENT - A hybrid drive includes multiple parts: a performance part (e.g., a flash memory device) and a base part (e.g., a magnetic or other rotational disk drive). A drive access system, which is typically part of an operating system of a computing device, issues input/output (I/O) commands to the hybrid drive to store data to and retrieve data from the hybrid drive. The drive access system supports multiple priority levels and obtains priority levels for groups of data identified by logical block addresses (LBAs). The LBAs read while the device is operating in a power saving mode are assigned a priority level that is at least the lowest of the multiple priority levels supported by the device, increasing the likelihood that LBAs read while the device is operating in the power saving mode are stored in the performance part of the hybrid drive. | 10-30-2014 |
20140372715 | PAGE-BASED COMPRESSED STORAGE MANAGEMENT - A memory is made up of multiple pages, and different pages can have different priority levels. A set of memory pages having at least similar priority levels are identified and compressed into an additional set of memory pages having at least similar priority levels. The additional set of memory pages are classified as being the same type of page as the set of memory pages that was compressed (e.g., as memory pages that can be repurposed). Thus, a particular set of memory pages can be compressed into a different set of memory pages of the same type and corresponding to at least similar priority levels. However, due to the compression, the quantity of memory pages into which the set of memory pages is compressed is reduced, thus increasing the amount of data that can be stored in the memory. | 12-18-2014 |
20150046665 | Data Storage System with Stale Data Mechanism and Method of Operation Thereof - Systems, methods and/or devices are used to enable a stale data mechanism. In one aspect, the method includes (1) receiving a write command specifying a logical address to which to write, (2) determining whether a stale flag corresponding to the logical address is set, (3) in accordance with a determination that the stale flag is not set, setting the stale flag and releasing the write command to be processed, and (4) in accordance with a determination that the stale flag is set, detecting an overlap, wherein the overlap indicates two or more outstanding write commands are operating on the same memory space. | 02-12-2015 |
20150052318 | MEMORY APPARATUSES, COMPUTER SYSTEMS AND METHODS FOR ORDERING MEMORY RESPONSES - Memory apparatuses that may be used for receiving commands and ordering memory responses are provided. One such memory apparatus includes response logic that is coupled to a plurality of memory units by a plurality of channels and may be configured to receive a plurality of memory responses from the plurality of memory units. Ordering logic may be coupled to the response logic and be configured to cause the plurality of memory responses in the response logic to be provided in an order based, at least in part, on a system protocol. For example, the ordering logic may enforce bus protocol rules on the plurality of memory responses stored in the response logic to ensure that responses are provided from the memory apparatus in a correct order. | 02-19-2015 |
20150058582 | SYSTEM AND METHOD FOR CONTROLLING A REDUNDANCY PARITY ENCODING AMOUNT BASED ON DEDUPLICATION INDICATIONS OF ACTIVITY - According to one embodiment, a method includes determining, using a processor, which physical blocks are priority physical blocks based on at least one of: a number of application blocks referencing the physical block, and a number of accesses to the physical block, creating a reference to each priority physical block, and outputting the reference. According to another embodiment, a method includes receiving a reference to one or more priority physical blocks in a storage pool, and adjusting an amount of redundancy parity encoding for each of the one or more priority physical blocks based on the reference. | 02-26-2015 |
20150067282 | COPY CONTROL APPARATUS AND COPY CONTROL METHOD - A copy control apparatus includes a processor. The processor is configured to record, in update location information, an update count for each of sectional areas obtained by sectioning a copy-source area. The update count indicates a number of updates of data in a sectional area. The update count can indicate more than two values. The processor is configured to perform first copy of copying data in the copy-source area to a copy-destination area based on the update location information. The processor is configured to defer the first copy for data in a sectional area for which an update count indicating more than a predetermined number is recorded in the update location information. | 03-05-2015 |
20150074359 | ELECTRONIC APPARATUS, CONTROL METHOD THEREFOR, AND COMPUTER PROGRAM PRODUCT - An electronic apparatus includes: a main storage unit; a first storage unit that stores multiple pieces of first setting information for the main storage unit; a second storage unit that stores second setting information, the second setting information being setting information for the main storage unit and corresponding to at least some of the multiple pieces of first setting information; a setting unit that sets the second setting information with a higher priority than the first setting information; and a control unit that controls the main storage unit based on information set by the setting unit. | 03-12-2015 |
20150074360 | SCHEDULER FOR MEMORY - A scheduler controls execution in a memory of operation requests received in an input request set (IRS) by providing a corresponding output request set (ORS). The scheduler includes zone standby units having a one-to-one relationship with corresponding zones such that each zone standby unit stores an operation request. The scheduler also includes an output processing unit that determines a processing sequence for the operation requests stored in the zone standby units to provide the ORS. | 03-12-2015 |
20150081990 | Intelligent Partitioning of External Memory Devices - Multiple memory devices, such as hard drives, can be combined and logical partitions can be formed between the drives to allow a user to control regions on the drives that will be used for storing content, and also to provide redundancy of stored content in the event that one of the drives fails. Priority levels can be assigned to content recordings such that higher value content can be stored in more locations and easily accessible locations within the utilized drives. Users can control and organize how recorded content is stored between the drives such that an external drive may be removed from a first gateway device and attached to a second gateway device without losing the ability to access the recorded content from the first gateway device at a later time. In this manner, a user is provided with the ability to transport an external drive containing stored content recordings between multiple different gateway devices such that the recordings may be accessed at different locations or user premises. | 03-19-2015 |
20150106578 | SYSTEMS, METHODS AND DEVICES FOR IMPLEMENTING DATA MANAGEMENT IN A DISTRIBUTED DATA STORAGE SYSTEM - Systems, methods and devices for monitoring data transactions in a data storage system, the data storage system being in network communication with a plurality of storage resources and comprising at least a data analysis module and a logging module, and receiving at the data analysis module at least one data transaction for data in the data storage system, each data transaction having at least one data-related characteristic; storing in the logging module the at least one data-related characteristic and a data transaction identifier that relates the data transaction to the associated at least one data-related characteristic in the logging module; analyzing at the data analysis module at least one data-related characteristic related to a first data transaction to determine if the first data transaction shares at least one data-related characteristic with other data transactions; and, in cases where the first data transaction shares at least one data-related characteristic with at least one other data transaction, logically linking the first data transaction with the other data transactions. | 04-16-2015 |
20150121020 | STORAGE APPARATUS, METHOD OF CONTROLLING STORAGE APPARATUS, AND COMPUTER-READABLE RECORDING MEDIUM HAVING STORED THEREIN STORAGE APPARATUS CONTROL PROGRAM - A storage apparatus includes a processor. The processor calculates an upper limit of an input/output processing amount, which is determined based on priority levels set to a plurality of storage devices, for each storage device. The processor schedules an execution sequence of processes relating to input/output requests received from information processing apparatuses based on processing amounts relating to the input/output requests and the upper limits. The processor executes the processes relating to the input/output requests in the scheduled execution sequence. The processor is configured to determine, for each storage device, whether or not a processing amount of the storage device exceeds a processing bandwidth of the each storage device for a first predetermined time. The processor changes the upper limit for each storage device in a predetermined bandwidth accommodation unit in a case where the processing amount for each storage device is determined to exceed the processing bandwidth. | 04-30-2015 |
20160062938 | OPENING A DATA SET - A method of and system for opening a data set is disclosed. The method and system may include structuring a storage facility to have address spaces. The address spaces may include a first address space having an open manager. The open manager may be configured and arranged to manage activities associated with an open request in response to receiving the open request. The method and system may include performing pseudo-opens associated with the open request in the address spaces. The method and system may include performing a batch-open utilizing the pseudo-opens and a resource used to complete the open request. | 03-03-2016 |
20160092379 | PRIORITY FRAMEWORK FOR A COMPUTING DEVICE - A framework for propagating priorities to a memory subsystem in a computing system environment is disclosed herein. By way of example, a memory access handler is provided for managing memory access requests and determining associated priorities. The memory access handler includes logic configured for propagating memory requests and the associated priorities to lower levels of a computer hierarchy. A memory subsystem receives the memory access requests and the priorities. | 03-31-2016 |
20160139815 | JUST-IN-TIME REMOTE DATA STORAGE ALLOCATION - A just-in-time storage allocation is initiated for storage at a remote storage device having storage disks. Each of multiple containers comprises a grouping of one or more of the storage disks. The just-in-time storage allocation includes an application profile that includes a priority criteria for the storage of either a priority of performance over efficiency or a priority of efficiency over performance. A determination is made of whether at least one container of the multiple containers satisfies the priority criteria based on at least one attribute of the at least one container. The storage is allocated in the at least one container, in response to the at least one container satisfying the priority criteria. | 05-19-2016 |
20160170659 | METHOD AND APPARATUS FOR ADAPTIVELY MANAGING DATA IN A MEMORY BASED FILE SYSTEM | 06-16-2016 |
20160179714 | TRACE BUFFER BASED REPLAY FOR CONTEXT SWITCHING | 06-23-2016 |
20160188246 | STORAGE APPARATUS, AND COMPUTER-READABLE RECORDING MEDIUM HAVING STORED THEREIN STORAGE APPARATUS CONTROL PROGRAM - A storage apparatus includes a processor, in which the processor determines presence/absence of an input/output request that is in a standby state for each of a plurality of storage devices, arranges first storage devices, for which the input/output request that is in the standby state is determined to be present, among the plurality of storage devices in order determined based on priority levels set according to processing bandwidth values of the first storage devices, and executes bandwidth accommodation from at least one second storage device having a bandwidth to spare among the plurality of storage devices to the first storage devices in order of the arrangement of the first storage devices. Accordingly, even when the bandwidth accommodation is performed, an occurrence of bandwidth reversal between storage devices having mutually-different priority levels can be suppressed. | 06-30-2016 |
20220137883 | APPARATUS AND METHOD FOR PROCESSING DATA IN MEMORY SYSTEM - A memory system includes at least one memory device and a controller coupled with the at least one memory device via plural communication lines. The at least one memory device includes plural units, each unit including plural memory cells, each memory cell capable of storing multi-bit data. The controller determines a hierarchy used for determining an access sequence for the plural communication lines, the plural units, and plural bits of the multi-bit data, and accesses memory cells included in the at least one memory device based on the hierarchy for a read or write operation regarding transmitted data. | 05-05-2022 |
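Several entries in the listing above (e.g., 20150121020 and 20160188246) describe lending spare I/O bandwidth, in a fixed accommodation unit, from idle storage devices to devices whose standby requests exceed their upper limit, visiting needy devices in priority order. The following is a minimal illustrative sketch of that general idea, not the patented implementations; all names, the fixed `UNIT`, and the data layout are assumptions.

```python
# Illustrative sketch of priority-based bandwidth accommodation between
# storage devices. Names and the accommodation unit are assumptions.
from dataclasses import dataclass

UNIT = 10  # predetermined bandwidth accommodation unit (MB/s), assumed


@dataclass
class Device:
    name: str
    priority: int     # higher value = higher priority level
    upper_limit: int  # current I/O upper limit (MB/s)
    demand: int       # measured processing amount (MB/s)
    pending: bool     # has requests waiting in a standby state

    @property
    def spare(self) -> int:
        # Bandwidth to spare below the current upper limit.
        return max(self.upper_limit - self.demand, 0)


def accommodate(devices: list[Device]) -> None:
    """Move one accommodation unit from a device with spare bandwidth to
    each device whose standby demand exceeds its upper limit, visiting
    needy devices in descending priority order."""
    needy = sorted(
        (d for d in devices if d.pending and d.demand > d.upper_limit),
        key=lambda d: d.priority,
        reverse=True,
    )
    donors = [d for d in devices if not d.pending and d.spare >= UNIT]
    for dev in needy:
        for donor in donors:
            if donor.spare >= UNIT:
                donor.upper_limit -= UNIT
                dev.upper_limit += UNIT
                break


devices = [
    Device("ssd0", priority=2, upper_limit=100, demand=120, pending=True),
    Device("hdd0", priority=1, upper_limit=100, demand=30, pending=False),
]
accommodate(devices)
print([(d.name, d.upper_limit) for d in devices])
# ssd0 borrows one unit from idle hdd0: [('ssd0', 110), ('hdd0', 90)]
```

Sorting the needy devices by priority before lending is what suppresses the "bandwidth reversal" problem mentioned in entry 20160188246: a lower-priority device cannot receive an accommodation unit before a higher-priority device with waiting requests has been served.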