Class / Patent application number | Description | Number of patent applications / Date published |
711148000 | Plural shared memories | 84 |
20080209136 | SYSTEM AND METHOD OF STORAGE SYSTEM ASSISTED I/O FENCING FOR SHARED STORAGE CONFIGURATION - Systems and methods for improved I/O fencing for shared storage in a clustered or grid computing environment. I/O fencing is performed with aid from the storage system and an I/O fencing management client process. The client process detects changes in the operational status of any of the clustered computing nodes. Upon sensing a change from a functional state to a dysfunctional state, the management client process effectuates reconfiguration of the storage system to disallow potentially destructive access by the dysfunctional node to the shared storage volumes. Upon sensing resumption of a functional status for the dysfunctional node, the client effectuates reconfiguration of the storage system to again allow desired access to the shared storage volumes by the now functional node. The client and storage system may share access to a database maintained by the client indicating the shared volumes a node may access and the initiators associated with each node. | 08-28-2008 |
20080215825 | Memory Share by a Plurality of Processors - A method and an apparatus for having a memory shared by a plurality of processors are disclosed. The digital processing apparatus in accordance with an embodiment of the present invention comprises a memory, a main processor connected to one side of the memory through a first memory bus, and application processors in a quantity of n connected in parallel to the other side of the memory through a second memory bus. Each application processor performs at least one predetermined function. The main processor is connected in parallel to the n application processors through a control bus, and delivers a control signal to at least one application processor through the control bus. With the present invention, the structure of a digital processing apparatus can be simplified, and the cost and size of a digital processing apparatus can be minimized. | 09-04-2008 |
20080222366 | MEMORY SHARING SYSTEM - A memory-use-information memory area stores therein a program ID, a request-source memory address, and a request memory size, which constitute information for uniquely identifying a program file loaded into a storage area for virtual machine-A or a storage area for virtual machine-B, in association with a physical memory address. A memory reservation section uses, as the retrieval key, the program ID, request-source memory address, and request memory size of a program file corresponding to a memory reservation request to search the memory-use-information memory area. When an entry that matches said retrieval key exists, the memory reservation section allows sharing of the memory area between a plurality of virtual machines. | 09-11-2008 |
20080270711 | METHOD AND APPARATUS FOR DATA TRANSMISSION BETWEEN PROCESSORS USING MEMORY REMAPPING - Provided are a method and apparatus for efficiently transferring a massive amount of multimedia data between two processors. The apparatus includes a first local switch, which connects a virtual page of a first processor element to a shared memory page, a second local switch, which connects a virtual page of a second processor element to the shared memory page, a shared page switch, which connects a predetermined shared memory page of a shared physical memory to the first or second local switch, and a switch manager, which remaps a certain shared memory page of the shared physical memory that stores data of a task performed by the first processor element to the virtual page of the second processor element. Accordingly, since memory remapping is used, the massive amount of multimedia data can be transmitted by changing a method of mapping a memory, unlike a case when multimedia data is transmitted by using a memory bus. | 10-30-2008 |
20080276048 | Addressing and Command Protocols for Non-Volatile Memories Utilized in Recording Usage Counts - Electrical interfaces, addressing schemes, and command protocols allow for communications with memory modules in computing devices such as imaging and printing devices. Memory modules may be assigned an address through a set of discrete voltages. One, multiple, or all of the memory modules may be addressed with a single command, which may be an increment counter command, a write command, or a punch out bit field. The status of the memory modules may be determined by sampling a single signal that may be at a low, high, or intermediate voltage level. | 11-06-2008 |
20080301379 | Shared memory architecture - Disclosed herein is an apparatus which may comprise a plurality of nodes. In one example embodiment, each of the plurality of nodes may include one or more central processing units (CPUs), a random access memory device, and a parallel link input/output port. The random access memory device may include a local memory address space and a global memory address space. The local memory address space may be accessible to the one or more CPUs of the node that comprises the random access memory device. The global memory address space may be accessible to CPUs of all the nodes. The parallel link input/output port may be configured to send data frames to, and receive data frames from, the global memory address space comprised by the random access memory device(s) of the other nodes. | 12-04-2008 |
20080320239 | Data storage system - A data storage system is provided. The data storage system includes a first storage module for storing a first data, a second storage module for storing a second data, a control module and a processing module. The control module generates a first control signal and a second control signal, and accesses the first data and the second data according to the first control signal and the second control signal. The processing module is coupled to the first storage module, the second storage module and the control module, and controls the first storage module and the second storage module to transmit the first data and the second data to the control module according to the first control signal and the second control signal respectively, wherein the processing module bypasses the second storage module when receiving the first control signal. | 12-25-2008 |
20090006772 | Memory Chip for High Capacity Memory Subsystem Supporting Replication of Command Data - A memory module contains a first interface for receiving data access commands and a second interface for re-transmitting data access commands to other memory modules, the second interface propagating multiple copies of received data access commands to multiple other memory modules. The memory module is preferably used in a high-capacity memory subsystem organized in a tree configuration in which data accesses are interleaved. Preferably, the memory module has multiple operating modes, one of which supports multiple replication of commands and another of which supports conventional daisy-chaining. | 01-01-2009 |
20090019237 | MULTIPATH ACCESSIBLE SEMICONDUCTOR MEMORY DEVICE HAVING CONTINUOUS ADDRESS MAP AND METHOD OF PROVIDING THE SAME - A semiconductor memory device for use in a multiprocessor system includes at least two shared memory areas and a row decoder. The at least two shared memory areas are accessible in common by multiple processors of the multiprocessor system through different ports, and assigned based on predetermined memory capacity to a portion of a memory cell array. The row decoder is configured to form a continuous address map for remaining memory portions of the at least two shared memory areas to be dedicated to one port. Each remaining memory portion does not include a corresponding data transfer portion within each shared memory area. | 01-15-2009 |
20090037668 | PROTECTED PORTION OF PARTITION MEMORY FOR COMPUTER CODE - A system comprises a plurality of computing nodes and a plurality of separate memory devices. A separate memory device is associated with each computing node. The separate memory devices are configured as partition memory in which memory accesses are interleaved across multiple of such memory devices. A protected portion of the partition memory is reserved for use by complex management (CM) code that coordinates partitions implemented on the system. The protected portion of partition memory is restricted from access by operating systems running in the partitions. | 02-05-2009 |
20090063784 | System for Enhancing the Memory Bandwidth Available Through a Memory Module - A memory system is provided that enhances the memory bandwidth available through a memory module. The memory system includes a memory hub device integrated in a memory module. The memory system includes a first memory device data interface integrated in the memory hub device that communicates with a first set of memory devices integrated in the memory module. The memory system also includes a second memory device data interface integrated in the memory hub device that communicates with a second set of memory devices integrated in the memory module. In the memory system, the first set of memory devices are separate from the second set of memory devices. In the memory system, the first and second set of memory devices are communicated with by the memory hub device via the separate first and second memory device data interfaces. | 03-05-2009 |
20090063785 | Buffered Memory Module Supporting Double the Memory Device Data Width in the Same Physical Space as a Conventional Memory Module - A memory system is provided that enhances the memory bandwidth available through a memory module. The memory system includes a memory hub device integrated into a memory module, a first memory device data interface integrated that communicates with a first set of memory devices and a second memory device data interface integrated that communicates with a second set of memory devices. In the memory system, the first set of memory devices are spaced in a first plane and coupled to a substrate of the memory module and the second set of memory devices are spaced in a second plane above the first plane and coupled to the substrate. In the memory system, data buses of the first set of memory devices are coupled to the substrate separately from data buses of the second set of memory devices. | 03-05-2009 |
20090063786 | Daisy-chain memory configuration and usage - Daisy-chain memory configuration and usage is disclosed. According to one configuration, a memory system includes a controller and corresponding string of multiple successive memory devices coupled in a daisy-chain manner. The controller communicates commands over the serial control link to configure a first memory device to write a block of data to a second memory device in the chain. For example, the controller initiates copying a block of data by communicating over the daisy-chain control link to configure a first memory device of the multiple memory devices to be a source for outputting data, communicating over the daisy-chain control link to configure a second memory device to be a destination for receiving data, and communicating over the daisy-chain control link to initiate a transfer of the data from the first memory device to the second memory device. | 03-05-2009 |
20090089513 | ADDRESSING MULTI-CORE ADVANCED MEMORY BUFFERS - In some embodiments a method of addressing advanced memory buffers identifies whether a dual inline memory module includes more than one advanced memory buffer. If the dual inline memory module includes more than one advanced memory buffer, then each of the advanced memory buffers of the dual inline memory module is addressed separately, and an address is computed for a next dual inline memory module. Other embodiments are described and claimed. | 04-02-2009 |
20090187718 | INFORMATION PROCESSING SYSTEM AND INFORMATION PROCESSING METHOD - An ascending ordered list without duplication is generated based on a value list divided and held by multiple memory modules. An information processing system has multiple PMMs, and the PMMs are interconnected via a data transmission path. The memory in the PMM has a list of values, which are ordered in ascending or descending order without duplication. The PMM determines, for a storage value in the value list (LOCAL_LIST) held by the PMM, whether or not the memory module is a representative module representing one or more memory modules holding the storage value based on rankings determined for the individual PMMs and the value lists received from the other PMMs, and if the memory module is determined to be the representative module (RV- | 07-23-2009 |
20090216958 | Hardware accelerator interface - A data processing system in the form of an integrated circuit | 08-27-2009 |
20090228662 | MULTI-CHANNEL MEMORY STORAGE DEVICE AND CONTROL METHOD THEREOF - The present invention discloses a multi-channel memory storage device and control method thereof. The method arranges physical locations for a file's data stored in the storage device. The storage device includes a plurality of memories. The major feature of the method is to decide whether the data is written to a single memory or parallel memories according to the size of the data. | 09-10-2009 |
20090307436 | Hypervisor Page Fault Processing in a Shared Memory Partition Data Processing System - Hypervisor page fault processing logic is provided for a shared memory partition data processing system. The logic, responsive to an executing virtual processor of the shared memory partition data processing system encountering a hypervisor page fault, allocates an input/output (I/O) paging request to the virtual processor from an I/O paging request pool and increments an outstanding I/O paging request count for the virtual processor. A determination is then made whether the outstanding I/O paging request count for the virtual processor is at a predefined threshold, and if not, the logic places the virtual processor in a wait state with interrupt wake-up reasons enabled based on the virtual processor's state, otherwise, it places the virtual processor in a wait state with interrupt wake-up reasons disabled. | 12-10-2009 |
20100023704 | VIRTUALIZABLE ADVANCED SYNCHRONIZATION FACILITY - A system and method for executing a transaction in a transactional memory system is disclosed. The system includes a processor of a plurality of processors coupled to shared memory, wherein the processor is configured to execute a section of code, including a plurality of memory access operations to the shared memory, as an atomic transaction relative to the execution of the plurality of processors. According to embodiments, the processor is configured to determine whether the memory access operations include any of a set of disallowed instructions, wherein the set includes one or more instructions that operate differently in a virtualized computing environment than in a native computing environment. If any of the memory access operations are ones of the disallowed instructions, then the processor aborts the transaction. | 01-28-2010 |
20100077156 | PROCESSOR, PROCESSING SYSTEM, DATA SHARING PROCESSING METHOD, AND INTEGRATED CIRCUIT FOR DATA SHARING PROCESSING - A processing device that processes data with use of one or more data blocks shared with a plurality of external processing devices. The device includes: a processor; a shared data storage unit that stores, respectively in one or more storage areas thereof, one or more data blocks to be shared with one or more external processing devices; an output unit that outputs, when the processor makes an access request to write data in a part of one of the data blocks, a block identifier identifying the one of the data blocks, and the data pertaining to the access request; and an input unit that judges whether to share external data outputted from one of the external processing devices, based on a block identifier outputted from the one of the external processing devices, and only when judging affirmatively, causes the shared data storage unit to store the external data. | 03-25-2010 |
20100082909 | MEMORY SYSTEM AND CONTROL METHOD - A control method is provided for a memory system that includes a plurality of processing apparatuses, each having a comparison data holding area and a replacement data holding area, and a plurality of storage units, each having a readout data holding area rewritably holding readout data and a memory unit shared by the plurality of processing apparatuses. The control method includes issuing an exclusive control instruction to exclusively access one of the memory units from one of the processing apparatuses, sending comparison data to one of the plurality of storage units from the comparison data holding area of the one of the processing apparatuses when the exclusive control instruction is executed, and comparing the comparison data sent from the one of the processing apparatuses with the readout data in the storage unit. | 04-01-2010 |
20100211748 | Memory System With Point-to-Point Request Interconnect - A memory system includes a memory controller with a plurality N of memory-controller blocks, each of which conveys independent transaction requests over external request ports. The request ports are coupled, via point-to-point connections, to between one and N memory devices, each of which includes N independently addressable memory blocks. All of the external request ports are connected to respective external request ports on the memory device or devices used in a given configuration. The number of request ports per memory device and the data width of each memory device changes with the number of memory devices such that the ratio of the request-access granularity to the data granularity remains constant irrespective of the number of memory devices. | 08-19-2010 |
20100223432 | MEMORY SHARING AMONG COMPUTER PROGRAMS - A physical memory location is shared among multiple programs. In one embodiment, multiple memory units are scanned to detect duplicated contents in the memory units. The memory units are used by programs running on a computer system. A data structure is used to identify memory units of identical contents. To improve performance, an additional data structure can be used to identify memory units of identical contents. Memory units that are identified to have identical contents can share the same physical memory space. | 09-02-2010 |
20100299486 | Electronic Devices and Methods for Storing Data in a Memory - An electronic device containing a memory having a plurality of memory modules. Each memory module includes a plurality of memory devices. The electronic device also contains a data bus having a number of lines for transferring data from and to the memory devices. The data bus is configured to have at least two sub-sets of lines coupled to different memory modules. A method including reading a data word from memory devices of different memory modules through a data bus using different subsets of lines of the data bus for each memory module. | 11-25-2010 |
20100325369 | BROADCAST RECEIVING APPARATUS AND METHOD FOR MANAGING MEMORY THEREOF - A broadcast receiving apparatus and a method for managing a memory are provided. The method for managing a memory includes setting a part of a memory to be a first memory area to be used for a first operating system; setting a portion of the memory which is not set as the first memory area to be a second memory area; and if a second operating system uses the memory, expanding the first memory area to include at least part of the second memory area. Therefore, the broadcast receiving apparatus uses a plurality of operating systems. | 12-23-2010 |
20100332770 | Concurrency Control Using Slotted Read-Write Locks - A system and method for concurrency control may use slotted read-write locks. A slotted read-write lock is a lock data structure associated with a shared memory area, wherein the slotted read-write lock indicates whether any thread has a read-lock and/or a write-lock for the shared memory area. Multiple threads may concurrently have the read-lock but only one thread can have the write-lock at any given time. The slotted read-write lock comprises multiple slots, each associated with a single thread. To acquire the slotted read-write lock for reading, a thread assigned to a slot performs a store operation to the slot and then attempts to determine that no other thread holds the slotted read-write lock for writing. To acquire the slotted read-write lock for writing, a thread assigned to a slot sets its write-bit and then attempts to determine that the write-lock is not held. | 12-30-2010 |
20100332771 | PRIVATE MEMORY REGIONS AND COHERENCE OPTIMIZATIONS - Private or shared read-only memory regions. One embodiment may be practiced in a computing environment including a plurality of agents. A method includes acts for declaring one or more memory regions private to a particular agent or shared read only amongst agents by having software utilize processor level instructions to specify to hardware the private or shared read only memory address regions. The method includes an agent executing a processor level instruction to specify one or more memory regions as private to the agent or shared read-only amongst a plurality of agents. As a result of an agent executing such a processor level instruction, a hardware component monitors the one or more memory regions for conflicting accesses or prevents conflicting accesses on the one or more memory regions. | 12-30-2010 |
20110010508 | Memory system and information processing device - A memory system includes a first memory that is used as a main memory of a target device, a second memory that has an access speed lower than that of the first memory, a securing section that secures a predetermined area of the first memory as a temporary storage area of the second memory, and a memory control section that receives an instruction to write data into the second memory, temporarily stores the data into the first memory and also transfers the stored data from the first memory to the second memory. | 01-13-2011 |
20110016278 | Independent Threading of Memory Devices Disposed on Memory Modules - A memory module includes a substrate having signal lines thereon that form a control path and a plurality of data paths. A plurality of memory devices are mounted on the substrate. Each memory device is coupled to the control path and to a distinct data path. The memory module includes control circuitry to enable each memory device to process a distinct respective memory access command in a succession of memory access commands and to output data on the distinct data path in response to the processed memory access command. | 01-20-2011 |
20110055490 | Memory Sharing Arrangement - A digital system is provided with a memory interposer module configured to be coupled between a processor module and a memory module. The memory interposer module has a memory controller configured to couple to the memory module. It also includes a first memory emulator configured to couple to the processor module via a connector, wherein the first memory emulator is configured to emulate the memory module. There is an arbiter coupled between the memory controller and the memory emulator. A second memory emulator is connected to the arbiter, wherein the second memory emulator is also configured to emulate the memory module. Each memory emulator is operable to stall a memory request when a conflict occurs. | 03-03-2011 |
20110119453 | METHOD AND SYSTEM FOR IMPLEMENTING MULTI-CONTROLLER SYSTEMS - A method for implementing a high-availability system that includes a plurality of controllers that each includes a shared memory. The method includes storing in the shared memory, by each controller, status data related to each of a plurality of failure modes, and calculating, by each controller, an availability score based on the status data. The method also includes determining, by each controller, one of the plurality of controllers having a highest availability score, and identifying the one of the plurality of controllers having the highest availability score as a master controller. | 05-19-2011 |
20110131383 | MODULAR COMMAND STRUCTURE FOR MEMORY AND MEMORY SYSTEM - A system including a memory system and a memory controller is connected to a host system. The memory system has at least one memory device storing data. The controller translates the requests from the host system to one or more separable commands interpretable by the at least one memory device. Each command has a modular structure including an address identifier for one of the at least one memory devices and a command identifier representing an operation to be performed by the one of the at least one memory devices. The at least one memory device and the controller are in a series-connection configuration for communication such that only one memory device is in communication with the controller for input into the memory system. The memory system can include a plurality of memory devices connected to a common bus. | 06-02-2011 |
20110161602 | LOCK-FREE CONCURRENT OBJECT DICTIONARY - An object storage system comprises one or more computer processors or threads that can concurrently access a shared memory, the shared memory comprising an array of equally-sized cells. In one embodiment, each cell is of the size used by the processors to represent a pointer, e.g., 64 bits. Using an algorithm performing only one memory write, and using a hardware-provided transactional operation, such as a compare-and-swap instruction, to implement the memory write, concurrent access is safely accommodated in a lock-free manner. | 06-30-2011 |
20110208922 | POOL OF DEVICES PROVIDING OPERATING SYSTEM REDUNDANCY - Systems, methods, and computer program products for providing operating system (O/S) redundancy in a computing system are provided. One system includes a host computing device, a plurality of memory devices, and a sub-loader coupled between the host computing device and the plurality of memory devices. Each memory device stores a respective O/S and the sub-loader is configured such that the plurality of memory devices appear transparent to the host computing device. One method includes designating a first logical unit device as a primary logical unit device and subsequently determining that the first logical unit device is unresponsive. The designation is removed from the first logical unit device and a second logical unit device is designated as a new primary logical unit device. One computer program product includes instructions for performing the above method. | 08-25-2011 |
20110238928 | MEMORY SYSTEM - According to one embodiment, a memory system includes a memory that includes a plurality of parallel operation elements, each of which stores write data from a host device and on each of which read/write is individually performed; a control unit that performs the read/write on the parallel operation elements simultaneously; and a required-performance measuring unit that measures a required performance from the host device. The control unit changes the number of simultaneous executions of the read/write of the parallel operation elements based on the required performance measured by the required-performance measuring unit. | 09-29-2011 |
20120047334 | FLEXIBLE SELECTION COMMAND FOR NON-VOLATILE MEMORY - Some embodiments of the invention pertain to a memory system containing multiple memory devices, in which one or multiple ones of the memory devices may flexibly be selected at one time for a common operation to be performed by all the selected devices concurrently. | 02-23-2012 |
20120137083 | SEMICONDUCTOR MEMORY DEVICE - In a semiconductor memory device, an update data control circuit is provided, which selectively couples a physical address input data line or an effective address input data line to a common input data line coupled to a physical address cell that stores a physical address page number. A control terminal of an update circuit of the physical address cell is coupled to a page size cell that stores page size information via an update control circuit, to control a write port of the physical address cell with the page size cell. | 05-31-2012 |
20120144127 | DATA TRANSMISSION - A method of transmitting data from a first module to addressable storage devices in a second module. The method comprises: transmitting from the first module to a second module in a first transmission cycle an address identifying a storage device in the second module for a data item; at the second module, determining the status of a storage location in the device identified by the address for holding a data item and dispatching in a second transmission cycle a pre-emptive acknowledgement signal, the state of which depends on the status of that storage location; transmitting in the second transmission cycle the data item from the first module to the second module; transmitting the address in a later transmission cycle from the first module to the second module; and selectively transmitting one of the data item and a next data item depending on the state of the pre-emptive acknowledgement signal. | 06-07-2012 |
20120166738 | MANAGING SHARED DATA OBJECTS TO PROVIDE VISIBILITY TO SHARED MEMORY - A system for sharing data between computer processes. The system includes a processor configured to implement a method that includes executing a plurality of independent processes on an application server, the processes including a first process and a second process. A shared memory utilized by the plurality of independent processes is provided. A single copy of the data and metadata are stored in the shared memory. The metadata includes an address of the data. The first process initiates the storing of the data in the shared memory. An address of the metadata is transferred from the first process to the second process to notify the second process about the data. The second process determines the address of the shared memory by reading the metadata. The data in the shared memory is accessed by the second process. | 06-28-2012 |
20120173826 | MEMORY SYSTEM AND METHOD FOR CONTROLLING MEMORY SYSTEM - A memory system connected to another apparatus via a data crossbar has a first memory, a second memory that forms a dual configuration together with the first memory, a first memory controller that transmits or receives data to be written into the first memory or data read out from the first memory to or from the other apparatus, a second memory controller that transmits or receives data to be written into the second memory or data read out from the second memory to or from the other apparatus, and a system controller that instructs the first memory controller and the second memory controller to read out, from the first memory and the second memory, data requested to be read out by the other apparatus if the system controller detects that either the first data crossbar or the second data crossbar is not capable of transmitting or receiving data. | 07-05-2012 |
20120179880 | SHARED ACCESS MEMORY SCHEME - A memory device loops back control information from one interface to another interface to facilitate sharing of the memory device by multiple devices. In some aspects, a memory controller sends control and address information to one interface of a memory device when accessing the memory device. The memory device may then loop back this control and address information to another interface that is used by another memory controller to access the memory device. The other memory controller may then use this information to determine how to access the memory device. In some aspects a memory device loops back arbitration information from one interface to another interface thereby enabling controller devices that are coupled to the memory device to control (e.g., schedule) accesses of the memory device. | 07-12-2012 |
20120179881 | Performing An Allreduce Operation Using Shared Memory - Methods, apparatus, and products are disclosed for performing an allreduce operation using shared memory that include: receiving, by at least one of a plurality of processing cores on a compute node, an instruction to perform an allreduce operation; establishing, by the core that received the instruction, a job status object for specifying a plurality of shared memory allreduce work units, the plurality of shared memory allreduce work units together performing the allreduce operation on the compute node; determining, by an available core on the compute node, a next shared memory allreduce work unit in the job status object; and performing, by that available core on the compute node, that next shared memory allreduce work unit. | 07-12-2012 |
20120221800 | MEMORY SHARING AMONG COMPUTER PROGRAMS - A system and method for memory sharing among computer programs is disclosed. A method for memory sharing among computer programs includes identifying memory units of a plurality of memory units having identical contents, collapsing the identified memory units into a single merged memory page, and mapping the single merged memory page into an associated shared physical memory location. The method further includes when a request to write to a memory unit merged into the single merged memory page is received: copying, by a computer system, contents in the associated shared physical memory location to a different memory location, and redirecting, by the computer system, the request to the different memory location. | 08-30-2012 |
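The merge-and-redirect scheme described in 20120221800 can be sketched in a few lines: pages with identical contents collapse to a single shared copy, and a later write triggers a copy-on-write redirection to a private location. This is an illustrative sketch only; the class names and the use of content hashing to find identical pages are assumptions, not details from the application.

```python
# Illustrative sketch (not the patented implementation): identical memory
# units are merged into one shared copy; writes are redirected copy-on-write.
import hashlib

class MergingMemory:
    def __init__(self):
        self.pages = {}    # page_id -> private content bytes
        self.shared = {}   # content hash -> shared content bytes
        self.mapping = {}  # page_id -> ("shared", hash) or ("private", page_id)

    def store(self, page_id, content: bytes):
        self.pages[page_id] = content
        self.mapping[page_id] = ("private", page_id)

    def merge_identical(self):
        # Collapse pages with identical contents into a single shared copy.
        by_hash = {}
        for pid, content in self.pages.items():
            by_hash.setdefault(hashlib.sha256(content).hexdigest(), []).append(pid)
        for h, pids in by_hash.items():
            if len(pids) > 1:
                self.shared[h] = self.pages[pids[0]]
                for pid in pids:
                    self.mapping[pid] = ("shared", h)

    def read(self, page_id) -> bytes:
        kind, key = self.mapping[page_id]
        return self.shared[key] if kind == "shared" else self.pages[key]

    def write(self, page_id, content: bytes):
        kind, key = self.mapping[page_id]
        if kind == "shared":
            # Copy-on-write: redirect this page to a private location.
            self.mapping[page_id] = ("private", page_id)
        self.pages[page_id] = content
```

A write to one merged page leaves the other pages still reading the shared copy, mirroring the "redirecting the request to the different memory location" step of the abstract.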
20120226873 | MULTIPROCESSOR ARRANGEMENT HAVING SHARED MEMORY, AND A METHOD OF COMMUNICATION BETWEEN PROCESSORS IN A MULTIPROCESSOR ARRANGEMENT - A multiprocessor arrangement is disclosed, in which a plurality of processors are able to communicate with each other by means of a plurality of time-sliced memory blocks. At least one, and up to all, of the processors may be able to access more than one time-sliced memory. A mesh arrangement of such processors and memories is disclosed, which may be a partial or complete mesh. The mesh may be two-dimensional or higher-dimensional. | 09-06-2012 |
20120233412 | MEMORY MANAGEMENT SYSTEM AND METHOD THEREOF - A memory management system and a memory management method are disclosed. The memory management system includes a first memory, at least one secondary memory, and a memory management device. The first memory includes a normal access memory bank and at least one switching access memory bank. The secondary memory includes at least one secondary access memory bank corresponding to the switching access memory bank. The memory management device reads/writes the normal access memory bank or the secondary access memory bank. | 09-13-2012 |
20120331240 | DATA PROCESSING DEVICE AND DATA PROCESSING ARRANGEMENT - A data processing device is described with a memory and a first and a second data processing component. The first data processing component comprises a control memory comprising, for each memory region of a plurality of memory regions of the memory, an indication whether a data access to the memory region may be carried out by the first data processing component and a data access circuit configured to carry out a data access to a memory region of the plurality of memory regions if a data access to the memory region may be carried out by the first data processing component; and a setting circuit configured to set the indication for a memory region to indicate that a data access to the memory region may not be carried out by the first data processing component in response to the completion of a data access of the first data processing component to the memory region. | 12-27-2012 |
20130007378 | MECHANISMS FOR EFFICIENT INTRA-DIE/INTRA-CHIP COLLECTIVE MESSAGING - A mechanism for efficient intra-die collective processing across nodelets with separate shared memory coherency domains is provided. An integrated circuit die may include a hardware collective unit implemented on the integrated circuit die. A plurality of cores on the integrated circuit die is grouped into a plurality of shared memory coherence domains. Each of the plurality of shared memory coherence domains is connected to the collective unit for performing collective operations between the plurality of shared memory coherence domains. | 01-03-2013 |
20130036273 | Memory Signal Buffers and Modules Supporting Variable Access Granularity - Described are memory modules that include a configurable signal buffer that manages communication between memory devices and a memory controller. The buffer can be configured to support threading to reduce access granularity, the frequency of row-activation, or both. The buffer can translate controller commands to access information of a specified granularity into subcommands seeking to access information of reduced granularity. The reduced-granularity information can then be combined, as by concatenation, and conveyed to the memory controller as information of the specified granularity. | 02-07-2013 |
20130061004 | MEMORY/LOGIC CONJUGATE SYSTEM - In a memory/logic conjugate system, a plurality of cluster memory chips each including a plurality of cluster memories ( | 03-07-2013 |
20130067172 | METHODS AND STRUCTURE FOR IMPROVED BUFFER ALLOCATION IN A STORAGE CONTROLLER - Methods and structure for improved buffer management in a storage controller. A plurality of processes in the controller each transmits buffer management requests to buffer management control logic. A plurality of reserved portions and a remaining non-reserved portion are defined in a shared pool memory managed by the buffer management control logic. Each reserved portion is defined as a corresponding minimum amount of memory of the shared pool. Each reserved portion is associated with a private pool identifier. Each allocation request from a client process supplies a private pool identifier for the associated buffer to be allocated. The buffer is allocated from the reserved portion if there is sufficient available space in the reserved portion identified by the supplied private pool identifier. Otherwise, the buffer is allocated if sufficient memory is available in the non-reserved portion. Otherwise, the request is queued for later re-processing. | 03-14-2013 |
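The three-tier allocation policy in 20130067172 (reserved portion first, then non-reserved portion, then queue) maps naturally onto a small allocator. The sketch below is a hedged illustration under assumed names; sizes are tracked as abstract units rather than bytes, and the queueing/re-processing machinery is reduced to a pending list.

```python
# Illustrative sketch of the described allocation order: try the request's
# private reserved pool, fall back to the shared non-reserved portion,
# otherwise queue the request for later re-processing.
from collections import deque

class BufferPool:
    def __init__(self, reserved: dict, non_reserved: int):
        self.free_reserved = dict(reserved)  # pool id -> remaining reserved units
        self.free_shared = non_reserved      # remaining non-reserved units
        self.pending = deque()               # queued (pool_id, size) requests

    def allocate(self, pool_id: str, size: int) -> bool:
        if self.free_reserved.get(pool_id, 0) >= size:
            self.free_reserved[pool_id] -= size
            return True
        if self.free_shared >= size:
            self.free_shared -= size
            return True
        self.pending.append((pool_id, size))  # re-processed when space frees up
        return False
```

The reserved portions guarantee each client process a minimum, while the non-reserved portion absorbs bursts from any pool.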
20130073814 | Computer System - A computer system, comprising a plurality of nodes, the plurality of nodes are grouped into m node groups, each node group comprises n nodes, wherein m is a natural number greater than or equal to 1, n is a natural number greater than or equal to 2, the n nodes in each of the node groups are connected directly or indirectly into a dual interconnection structure, wherein first node controllers of the n nodes in the same node group are connected directly or indirectly to form a first interconnection structure, second node controllers of nodes in the same node group are connected directly or indirectly to form a second interconnection structure. Therefore, fewer interconnection chips are required, the access path between nodes is shortened, the access delay time is reduced, the cost is reduced, and the system performance is improved. | 03-21-2013 |
20130159636 | INFORMATION PROCESSING APPARATUS AND CONTROL METHOD OF INFORMATION PROCESSING APPARATUS - An information processing apparatus includes a directory. Information is registered with the directory in a first format having entries corresponding to data storage areas, respectively. The information indicates a CPU that stores data stored in a data storage area of one information processing part of plural information processing parts, or an information processing part having the CPU. The information processing part converts the information into a second format, in which entries of the first format that are registered such that their data is not to be used are removed, so that the number of entries is reduced. | 06-20-2013 |
20130246716 | MEMORY SYSTEM AND DATA WRITING METHOD - According to one embodiment, when a controller writes update data in a second memory to a first memory which is nonvolatile and a difference between a size of a page and a size of the update data is equal to or greater than a size of a cluster, the controller is configured to generate write data by adding, to the update data, data which has the size of the cluster, store an update content of management information corresponding to the update data and an update content storage position indicating a storage position of the update content of the management information in the first memory, and write the generated write data to a block in writing of the first memory. | 09-19-2013 |
20130246717 | INFORMATION PROCESSING SYSTEM - An information processing system includes: CPUs; storage devices; switches; dummy storage devices which are associated with respective storage devices and each of which sends, when receiving an identifying information request, its own identifying information back to a sender of the identifying information request; and dummy CPUs which are associated with respective CPUs and each of which tries to, when receiving an instruction for acquiring identifying information from a dummy storage device, acquire the identifying information of the dummy storage device by transmitting the identifying information request, and sends the identifying information as response information back to a sender device of the acquiring instruction. | 09-19-2013 |
20130246718 | CONTROL DEVICE AND CONTROL METHOD FOR CONTROL DEVICE - A control device includes a storage to store correspondence information indicating a correspondence between each of memories and each of information processing devices; and a processor to execute an operation including: detecting a first memory from among the memories, the first memory being a memory whose access frequency exceeds a predetermined access frequency or is relatively high, and a second memory being a memory whose access frequency is lower than or equal to a predetermined access frequency, changing the correspondence information so that an information processing device corresponding to the first memory changes from a first information processing device to a second information processing device corresponding to the second memory, and notifying a management device of the changed correspondence information, and outputting data read from the first memory to the management device via the second information processing device. | 09-19-2013 |
20130246719 | PARTITION-FREE MULTI-SOCKET MEMORY SYSTEM ARCHITECTURE - A technique to increase memory bandwidth for throughput applications. In one embodiment, memory bandwidth can be increased, particularly for throughput applications, without increasing interconnect trace or pin count by pipelining pages between one or more memory storage areas on half cycles of a memory access clock. | 09-19-2013 |
20130262787 | SCALABLE MEMORY ARCHITECTURE FOR TURBO ENCODING - Low-power, easily scalable architectures for high-speed data handling are critical to modern circuits and systems. Successful architectures must provide efficient data storage and efficient/flexible data retrieval with low power consumption. Data encoding, including that achieved with turbo codes, has data streams split into a sequence of even and odd data bits. These bits are written into multiple single-port memories so that the writing alternates between memories. Scheduling for the reading and writing is performed to avoid conflicts and give priority to the read operations. | 10-03-2013 |
20130275687 | MEMORY AND PROCESS SHARING VIA INPUT/OUTPUT WITH VIRTUALIZATION - Embodiments of the present invention provide an approach for memory and process sharing via input/output (I/O) with virtualization. Specifically, embodiments of the present invention provide a circuit design/system in which multiple chipsets are present that communicate with one another via a communications channel. Each chipset generally comprises a processor coupled to a memory unit. Moreover, each component has its own distinct/separate power supply. Pursuant to a communication and/or command exchange with a main controller, a processor of a particular chipset may disengage a memory unit coupled thereto, and then access a memory unit of another chipset (e.g., coupled to another processor in the system). Among other things, such an inventive configuration reduces memory leakage and enhances overall performance and/or efficiency of the system. | 10-17-2013 |
20130326158 | MEMORY CHANNEL SELECTION IN A MULTI-CHANNEL MEMORY SYSTEM - In general, this disclosure describes techniques for selecting a memory channel in a multi-channel memory system for storing data, so that usage of the memory channels is well-balanced. A request to write data to a logical memory address of a memory system may be received. The logical memory address may include a logical page number and a page offset, where the logical page number maps to a physical page number and the logical memory address maps to a physical memory address. A memory unit out of a plurality of memory units in the memory system may be determined by performing a logical operation on one or more bits of the page offset and one or more bits of the physical page number. The data may be written to a physical memory address in the determined memory unit in the memory system. | 12-05-2013 |
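The channel-selection step in 20130326158 (a logical operation over bits of the page offset and bits of the physical page number) can be illustrated concretely. The sketch below assumes XOR as the logical operation and a power-of-two channel count; both are assumptions for illustration, not details fixed by the abstract.

```python
# Illustrative sketch: choose a memory channel by XOR-ing the low bits of
# the physical page number with the low bits of the page offset.
def select_channel(physical_page: int, page_offset: int, num_channels: int) -> int:
    # num_channels is assumed to be a power of two, so the low bits form a mask.
    mask = num_channels - 1
    return ((physical_page & mask) ^ (page_offset & mask)) & mask
```

Mixing offset bits with page-number bits spreads consecutive accesses within a page across channels, which is the balancing effect the abstract describes.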
20130326159 | SHARED LIBRARY IN A DATA STORAGE SYSTEM - The library server according to certain aspects can manage the use of tape drives according to the data requirements of different storage operation cells. The library server according to certain aspects can also facilitate automatic management of tape media in a tape library by allocating the tapes and slots to different cells. For instance, the library server can manage the positioning and placement of the tapes into appropriate slots within the tape library. | 12-05-2013 |
20130339632 | PROCESSOR MANAGEMENT METHOD - A processor management method includes setting a master mechanism in a given processor among multiple processors, where the master mechanism manages the processors; setting a local master mechanism and a virtual master mechanism in each of processors other than the given processor among the processors, where the local master mechanism and the virtual master mechanism manage each of the processors; and notifying by the master mechanism, the processors of an offset value of an address to allow a shared memory managed by the master mechanism to be accessed as a continuous memory by the processors. | 12-19-2013 |
20140068201 | TRANSACTIONAL MEMORY PROXY - Processors in a compute node offload transactional memory accesses addressing shared memory to a transactional memory agent. The transactional memory agent typically resides near the processors in a particular compute node. The transactional memory agent acts as a proxy for those processors. A first benefit of the invention includes decoupling the processor from the direct effects of remote system failures. Other benefits of the invention include freeing the processor from having to be aware of transactional memory semantics, and allowing the processor to address a memory space larger than the processor's native hardware addressing capabilities. The invention also enables computer system transactional capabilities to scale well beyond the transactional capabilities of those found in computer systems today. | 03-06-2014 |
20140095810 | MEMORY SHARING ACROSS DISTRIBUTED NODES - A method and apparatus are disclosed for enabling nodes in a distributed system to share one or more memory portions. A home node makes a portion of its main memory available for sharing, and one or more sharer nodes mirrors that shared portion of the home node's main memory in its own main memory. To maintain memory coherency, a memory coherence protocol is implemented. Under this protocol, load and store instructions that target the mirrored memory portion of a sharer node are trapped, and store instructions that target the shared memory portion of a home node are trapped. With this protocol, valid data is obtained from the home node and updates are propagated to the home node. Thus, no “dirty” data is transferred between sharer nodes. As a result, the failure of one node will not cause the failure of another node or the failure of the entire system. | 04-03-2014 |
20140143510 | ACCESSING ADDITIONAL MEMORY SPACE WITH MULTIPLE PROCESSORS - An apparatus and method is provided for coupling additional memory to a plurality of processors. The method may include determining the memory requirements of the plurality of processors in a system, comparing the memory requirements of the plurality of processors to an available memory assigned to each of the plurality of processors, and selecting a processor from the plurality of processors that requires additional memory capacity. The apparatus may include a plurality of processors, where the plurality of processors is coupled to a logic element. In addition, the apparatus may include an additional memory coupled to the logic element, where the logic element is adapted to select a processor from the plurality of processors to couple with the additional memory. | 05-22-2014 |
20140164717 | Systems and Methods for Improved Communications in a Nonvolatile Memory System - Systems and methods are provided for improved communications in a nonvolatile memory (“NVM”) system. The system can toggle between multiple communications channels to provide point-to-point communications between a host device and NVM dies included in the system. The host device can toggle between multiple communications channels that extend to one or more memory controllers of the system, and the memory controllers can toggle between multiple communications channels that extend to the NVM dies. Power islands may be incorporated into the system to electrically isolate system components associated with inactive communications channels. | 06-12-2014 |
20140181421 | PROCESSING ENGINE FOR COMPLEX ATOMIC OPERATIONS - A system includes an atomic processing engine (APE) coupled to an interconnect. The interconnect is to couple to one or more processor cores. The APE receives a plurality of commands from the one or more processor cores through the interconnect. In response to a first command, the APE performs a first plurality of operations associated with the first command. The first plurality of operations references multiple memory locations, at least one of which is shared between two or more threads executed by the one or more processor cores. | 06-26-2014 |
20140181422 | PROTOCOL ENGINE FOR PROCESSING DATA IN A WIRELESS TRANSMIT/RECEIVE UNIT - A protocol engine (PE) for processing data within a protocol stack in a wireless transmit/receive unit (WTRU) is disclosed. The protocol stack executes decision and control operations. The data processing and re-formatting which was performed in a conventional protocol stack is removed from the protocol stack and performed by the PE. The protocol stack issues a control word for processing data and the PE processes the data based on the control word. Preferably, the WTRU includes a shared memory and a second memory. The shared memory is used as a data block place holder to transfer the data amongst processing entities. For transmit processing, the PE retrieves source data from the second memory and processes the data while moving the data to the shared memory based on the control word. For receive processing, the PE retrieves received data from the shared memory and processes it while moving the data to the second memory. | 06-26-2014 |
20140281280 | SELECTING BETWEEN NON-VOLATILE MEMORY UNITS HAVING DIFFERENT MINIMUM ADDRESSABLE DATA UNIT SIZES - An apparatus includes a controller capable of being coupled to a host interface and a memory device. The memory device includes two or more non-hierarchical, non-volatile memory units having different minimum addressable data unit sizes. The controller is configured to at least perform determining a workload indicator of a data object being stored in the memory device via the host interface. The controller selects one of the memory units in response to the workload indicator of the data object corresponding to the minimum addressable data unit size of the selected memory unit corresponding to the workload indicator. The data object is stored in the selected memory unit in response thereto. | 09-18-2014 |
20140281281 | HOST COMMAND BASED READ DISTURB METHODOLOGY - An apparatus comprising a memory and a controller. The memory may be configured to process a plurality of read/write operations. The memory comprises a plurality of memory modules each having a size less than a total size of the memory. The controller is configured to (i) determine if a read disturb has occurred, and (ii) if the read disturb has occurred, the controller (a) determines a size of the group of read/write operations, and (b) writes all of the group of read/write operations to one of the memory modules. | 09-18-2014 |
20140281282 | STORAGE DEVICE AND STORAGE SYSTEM - According to one embodiment, a storage device includes a first memory, an interface that includes first physical layers and connects a host and the first memory, a second memory that temporarily stores the data transferred between the host and the first memory, and a controller that controls operation of the interface. When the data is transferred from the first memory to the host, the controller reads the data corresponding to the data transfer request into the second memory, and the controller selects the physical layer to transfer the data from the second memory to the host based on a first period from when the data transfer is requested until the data is ready for transmission. | 09-18-2014 |
20140297969 | INFORMATION PROCESSING DEVICE, METHOD FOR CONTROLLING INFORMATION PROCESSING DEVICE, AND PROGRAM FOR CONTROLLING INFORMATION PROCESSING DEVICE - An information processing device includes a processor, and a plurality of memories arranged on the processor and coupled to the processor, wherein the plurality of memories are stacked on each other, and wherein a first memory that is located farthest from the processor among the plurality of memories is allocated for a program for managing the information processing device, and the processor executes the program. | 10-02-2014 |
20140310481 | MEMORY SYSTEM - A memory system includes a memory controller to control a first memory device and a second memory device. The first and second memory devices are different in terms of at least one of physical distance from the memory controller, a manner of connection to the memory controller, error correction capability, or memory supply voltage. The first and second memory devices also have different latencies. | 10-16-2014 |
20140317360 | MEMORY ACCESS CONTROL - Memory access circuitry for controlling access to a memory comprising multiple memory units arranged in parallel with each other. The memory access circuitry comprises: two access units each configured to select one of the multiple memory units in response to a received memory access request and to control and track subsequent accesses to the selected memory unit, the multiple memory units comprising at least three memory units; arbitration circuitry configured to receive the memory access requests from a system and to select and forward the memory access requests to one of the two access units, the arbitration circuitry being configured to forward a plurality of memory access requests for accessing one memory unit to a first of the two access units, and to direct a plurality of memory access requests for accessing a further memory unit to a second of the two access units and to subsequently direct a plurality of memory access requests for accessing a yet further memory unit to one of the first or second access units. The two access units comprise storing circuitry to store requests in a queue prior to transmitting the requests to the respective memory unit; and tracking circuitry to track requests sent to the respective memory units and to determine when to transmit subsequent requests from the queue. The control circuitry is configured to set a state of each of the two access units, the state being one of active, prepare and dormant, the access unit in the active state being operable to transmit both access and activate requests to the respective memory unit, the activate request preparing the access in the respective memory unit and the access request accessing the data, the access unit in the prepare state being operable to transmit the activate requests and not the access requests, the access unit in the dormant state being operable not to transmit any access or activate requests, the control circuitry being configured to switch states of the two access units periodically and to set not more than one of the access units to the active state at a same time. | 10-23-2014 |
20140380000 | MEMORY CONTROLLER AND ACCESSING SYSTEM UTILIZING THE SAME - A memory controller is coupled to a memory device including a first block and a second block and includes a first register module, a first execution unit and a second register module. The first register module includes a plurality of set registers to store a first configuration file and a second configuration file. The first execution unit computes data stored in the first block simultaneously according to the first and the second configuration files to generate a first computation result and a second computation result. The second register module includes a plurality of result registers to store the first and the second computation results. | 12-25-2014 |
20150052317 | SYSTEMS, DEVICES, MEMORY CONTROLLERS, AND METHODS FOR MEMORY INITIALIZATION - Systems, devices, memory controllers, and methods for initializing memory are described. Initializing memory can include configuring memory devices in parallel. The memory devices can receive a shared enable signal. A unique volume address can be assigned to each of the memory devices. | 02-19-2015 |
20150067274 | MEMORY SYSTEM - A memory system, including a plurality of stacked slices and a controller electrically coupled to the plurality of slices, includes: the plurality of slices configured to share a command in a preset number unit, wherein a slice performs a data input/output operation; and the controller configured to generate the command and a control signal for selecting slices in the preset number unit from the plurality of slices. | 03-05-2015 |
20150106573 | DATA PROCESSING SYSTEM - A data processing system includes a host device including a first working memory and a data storage device suitable for responding to an access request from the host device. The data storage device includes a controller suitable for controlling an operation of the data storage device, a second working memory suitable for storing data used for driving of the controller, and an access controller suitable for accessing a shared memory region of the first working memory under the control of the controller. | 04-16-2015 |
20150324319 | INTERCONNECT SYSTEMS AND METHODS USING HYBRID MEMORY CUBE LINKS - System on a Chip (SoC) devices include two packetized memory busses for conveying local memory packets and system interconnect packets. In an in-situ configuration of a data processing system two or more SoCs are coupled with one or more hybrid memory cubes (HMCs). The memory packets enable communication with local HMCs in a given SoC's memory domain. The system interconnect packets enable communication between SoCs and communication between memory domains. In a dedicated routing configuration each SoC in a system has its own memory domain to address local HMCs and a separate system interconnect domain to address HMC hubs, HMC memory devices, or other SoC devices connected in the system interconnect domain. | 11-12-2015 |
20150370732 | INFORMATION PROCESSING APPARATUS, INPUT AND OUTPUT CONTROL DEVICE, AND METHOD OF CONTROLLING INFORMATION PROCESSING APPARATUS - A control device is coupled to node devices and one or more input and output devices, each node device including an arithmetic processing unit and a memory. The control device is configured to store history information including an entry in which device identification information for identifying the input and output device that is accessed based on a request corresponds to node identification information for identifying a node device of the node devices which is a transmission source of the request; determine the node identification information corresponding to the device identification information that indicates an input and output device which outputs a memory access request to the memory based on search of the entry in the history information; generate a packet in which the determined node identification information is set to a memory access destination based on the memory access request; and output the generated packet. | 12-24-2015 |
20150378882 | BOOTING AN APPLICATION FROM MULTIPLE MEMORIES - Disclosed herein are system, apparatus, article of manufacture, method and/or computer program product embodiments for booting an application from multiple memories. An embodiment operates by executing in place from a first memory a first portion of the application, loading a second portion of the application from a second memory, and executing the second portion of the application. | 12-31-2015 |
20160034392 | SHARED MEMORY SYSTEM - A method for sending data from a local memory device in a first computing device to an external memory device in a second computing device is described herein. In one example, a method includes configuring the local memory device to store data for the external memory device and detecting a request for data from the external memory device. The method also includes translating a memory address that corresponds to the requested data from an external memory address to a local memory address. Additionally, the method includes retrieving the requested data based on the local memory address and sending the requested data to the second computing device. | 02-04-2016 |
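The translation step in 20160034392 (mapping an external memory address to a local one before retrieving the requested data) is the core of the scheme, and a linear base-plus-offset mapping is the simplest way to picture it. The sketch below assumes that layout; the function name and the bounds check are illustrative additions, not details from the application.

```python
# Illustrative sketch: translate an external (remote) address into the
# mirrored local region by rebasing its offset, with a bounds check on
# the shared region.
def translate(external_addr: int, external_base: int,
              local_base: int, region_size: int) -> int:
    offset = external_addr - external_base
    if not 0 <= offset < region_size:
        raise ValueError("address outside shared region")
    return local_base + offset
```

With the local address in hand, the first computing device can retrieve the requested data from its own memory and send it back to the second computing device, as the abstract describes.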
20160062802 | A SCHEDULING METHOD FOR VIRTUAL PROCESSORS BASED ON THE AFFINITY OF NUMA HIGH-PERFORMANCE NETWORK BUFFER RESOURCES - The present invention discloses a scheduling method for virtual processors based on the affinity of NUMA high-performance network buffer resources, including: in a NUMA architecture, when a network interface card of a virtual machine is started, obtaining the distribution of the buffer of the network interface card on each NUMA node; obtaining the affinity of each NUMA node for the buffer of the network interface card on the basis of the affinity relationships between the NUMA nodes; determining a target NUMA node in combination with the distribution of the buffer of the network interface card on each NUMA node and the affinity of each NUMA node for the buffer of the network interface card; and scheduling the virtual processor to the CPU on the target NUMA node. The present invention addresses the problem that, in the NUMA architecture, the affinity between the VCPU of the virtual machine and the buffer of the network interface card is not optimal, which limits the speed at which the VCPU processes network packets. | 03-03-2016 |
20160085449 | MANAGING MEMORY IN A MULTIPROCESSOR SYSTEM - In an example, a circuit to manage memory between a first and second microprocessors each of which is coupled to a control circuit, includes: first and second memory circuits; and a switch circuit coupled to the first and second memory circuits, and memory interfaces of the first and second microprocessors, the switch circuit having a mode signal as input. The switch is configured to selectively operate in one of a first mode or a second mode based on the mode signal such that, in the first mode, the switch circuit couples the first memory circuit to the memory interface of the first microprocessor and the second memory circuit to the memory interface of the second microprocessor and, in the second mode, the switch circuit selectively couples the first or second memory circuits to the memory interface of either the first or second microprocessor. | 03-24-2016 |
20170235504 | Application-Specific Chunk-Aligned Prefetch for Sequential Workloads | 08-17-2017 |