Shared memory area

Subclass of:

711 - Electrical computers and digital processing systems: memory

711100000 - STORAGE ACCESSING AND CONTROL

Patent class list (only non-empty classes are listed)

Deeper subclasses:

Class / Patent application number | Description | Number of patent applications / Date published
711149000 | Multiport memory | 72
711148000 | Plural shared memories | 51
711153000 | Shared memory partitioning | 47
711152000 | Memory access blocking | 41
711151000 | Prioritized access regulation | 30
711150000 | Simultaneous access regulation | 23
Entries
Document | Title | Date
20080263285Processor extensions for accelerating spectral band replication - Enhancements to hardware architectures (e.g., a RISC processor or a DSP processor) to accelerate spectral band replication (SBR) processing are described. In some embodiments, instruction extensions configure a reconfigurable processor to accelerate SBR and other audio processing. In addition to the instruction extensions, execution units (e.g., multiplication and accumulation units (MACs)) may operate in parallel to reduce the number of audio processing cycles. Performance may be further enhanced through the use of source and destination units which are configured to work with the execution units and quickly fetch and store source and destination operands.10-23-2008
20110191547COMPUTER SYSTEM AND LOAD EQUALIZATION CONTROL METHOD FOR THE SAME - A computer system having a plurality of controllers for data input/output control is provided, wherein even if a control authority of a processor is transferred to another processor and the computer system migrates control information necessary for a controller to execute data input/output processing, from a shared memory to a local memory for the relevant controller, the computer system prevents the occurrence of unbalanced allocation of a control function necessary for data input/output control between the plurality of controllers; and a load equalization method for such a computer system is also provided.08-04-2011
20100082908Access control and computer system - An access control method for a computer system in which a plurality of clusters share a storage unit, includes predefining an access instruction with exclusive right in addition to an access instruction that is issued with respect to the storage unit from the plurality of clusters, and monitoring, in the storage unit, based on the access instruction with exclusive right transferred from an arbitrary cluster, an access state of an other cluster and executing access instructions with exclusion if a region accessed by an access instruction from the other cluster overlaps a region accessed by the access instruction with exclusive right.04-01-2010
20100049922DISTRIBUTED SHARED MEMORY - Systems and methods for implementing a distributed shared memory (DSM) in a computer cluster in which an unreliable underlying message passing technology is used, such that the DSM efficiently maintains coherency and reliability. DSM agents residing on different nodes of the cluster process access permission requests of local and remote users on specified data segments via handling procedures, which provide for recovering of lost ownership of a data segment while ensuring exclusive ownership of a data segment among the DSM agents, detecting and resolving a no-owner messaging deadlock, pruning of obsolete messages, and recovery of the latest contents of a data segment whose ownership has been lost.02-25-2010
20100077155Managing shared memory through a kernel driver - Methods, apparatus, systems and computer program product for managing shared memory between a plurality of applications. A kernel driver can create a region of shared memory and then map this memory into each application that requests access to this specific memory. The kernel driver can separate the entire memory into multiple shared memory sections, regions and/or pools, each of which exists independently from each other, thereby maintaining security between applications. The kernel driver can create a claim ticket containing information about the storage location of shared data; this ticket may then be passed to, from and between a plurality of applications needing to access the shared data.03-25-2010
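A claim ticket of the kind this abstract describes carries only location metadata, not the shared data itself. The C sketch below shows one plausible shape for such a ticket; the field names are illustrative assumptions, not taken from the application.

```c
#include <stddef.h>

/* Hypothetical layout of a claim ticket: enough information for another
 * application to ask the kernel driver for a mapping of the right piece of
 * the right shared-memory section, without carrying the data itself. */
struct claim_ticket {
    unsigned section_id;   /* which independent shared-memory section/pool */
    size_t   offset;       /* where the stored data begins in that section */
    size_t   length;       /* how many bytes were stored */
    unsigned owner_pid;    /* producer's id, so the driver can check access */
};
```

An application receiving such a ticket would hand it back to the driver, which validates the fields and maps only that region into the caller's address space.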
20130080710HARDWARE METHOD FOR DETECTING TIMEOUT CONDITIONS IN A LARGE NUMBER OF DATA CONNECTIONS - Tracking several open data connections is difficult with a large number of connections. Checking for timeouts in software uses valuable processor resources. Employing a co-processor dedicated to checking timeouts uses valuable logic resources and consumes extra space. In one embodiment, a finite state machine implemented in hardware increases the speed connections can be checked for timeouts. The finite state machine stores a last accessed time stamp for each connection in a memory, and loops through the memory to compare each last accessed time stamp with a current time stamp of the system minus a global timeout value. In this manner, the finite state machine can efficiently find and react to timed out connections.03-28-2013
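As a rough software model of the scan loop described above (the real design is a hardware finite state machine), the C sketch below walks a table of last-accessed time stamps and flags every connection older than the current time minus a global timeout. Array sizes and names are assumptions for illustration.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

#define NUM_CONNECTIONS 1024

/* Last-accessed time stamp per connection, as kept by the state machine. */
static uint32_t last_accessed[NUM_CONNECTIONS];

/* Scan every connection and report the ones whose last access is older than
 * (current time - global timeout), mirroring the compare step described in
 * the abstract. Time-stamp wraparound is ignored for brevity. */
static void scan_for_timeouts(uint32_t now, uint32_t global_timeout)
{
    uint32_t cutoff = now - global_timeout;
    for (size_t i = 0; i < NUM_CONNECTIONS; i++) {
        if (last_accessed[i] != 0 && last_accessed[i] < cutoff) {
            printf("connection %zu timed out\n", i);
            last_accessed[i] = 0;          /* mark as closed/handled */
        }
    }
}

int main(void)
{
    last_accessed[3] = 100;                /* accessed at t=100 */
    last_accessed[7] = 900;                /* accessed at t=900 */
    scan_for_timeouts(1000, 500);          /* cutoff = 500: connection 3 times out */
    return 0;
}
```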
20130036272STORAGE ENGINE NODE FOR CLOUD-BASED STORAGE - A system includes a storage engine node that includes a processor and a memory coupled to the processor. The memory stores a protocol mapper executable by the processor to convert storage access requests from a local storage protocol to a cloud storage protocol.02-07-2013
20090144510VM INTER-PROCESS COMMUNICATIONS - A method for enabling inter-process communication between a first application and a second application, the first application running within a first context and the second application running within a second context of a virtualization system is described. The method includes receiving a request to attach a shared region of memory to a memory allocation, identifying a list of one or more physical memory pages defining the shared region that corresponds to the handle, and mapping guest memory pages corresponding to the allocation to the physical memory pages. The request is received by a framework from the second application and includes a handle that uniquely identifies the shared region of memory as well as an identification of at least one guest memory page corresponding to the memory allocation. The framework is a component of a virtualization software, which executes in a context distinct from the context of the first application.06-04-2009
20090172299System and Method for Implementing Hybrid Single-Compare-Single-Store Operations - A hybrid Single-Compare-Single-Store (SCSS) operation may exploit best-effort hardware transactional memory (HTM) for good performance in the case that it succeeds, and may transparently resort to software-mediated transactions if the hardware transactional mechanisms fail. The SCSS operation may compare a value in a control location to a specified expected value, and if they match, may store a new value in a separate data location. The control value may include a global lock, a transaction status indicator, and/or a portion of an ownership record, in different embodiments. If another transaction in progress owns the data location, the SCSS operation may abort the other transaction or may help it complete by copying the other transaction's write set into its own write set before acquiring ownership. A hybrid SCSS operation, which is usually nonblocking, may be applied to building software transactional memories (STMs) and/or hybrid transactional memories (HyTMs), in some embodiments.07-02-2009
20130042079METHOD FOR PROCESSING DATA OF A CONTROL UNIT IN A DATA COMMUNICATION DEVICE - A method for processing data of a control unit in a data communication device, which has a first memory area and a second memory area, and is connected to the control unit through an interface. Data from the control unit is transmitted to the data communication device through the interface. A value is stored identically in the first memory area and in the second memory area. The data communication device tests whether a first trigger is present, and if present, storage in the first memory area is discontinued, or the trigger class of the first trigger is tested and storage in the first memory area is discontinued only in the presence of a predefined trigger class. Subsequently, values of the data are read out from the first memory area, whereby values arriving chronologically after the first trigger are stored in the second memory area by the data communication device.02-14-2013
20100042788METHOD AND APPARATUS FOR CONTROLLING SHARED MEMORY AND METHOD OF ACCESSING SHARED MEMORY - Provided are a method and apparatus for controlling a shared memory, and a method of accessing the shared memory. The apparatus includes a processing unit configured to process an application program, a user program unit configured to execute a program written by a user based on the application program of the processing unit, a shared memory unit connected to each of the processing unit and the user program unit through a system bus and configured to store data interchanged between the processing unit and the user program unit, and a control unit configured to relay a control signal indicating whether the system bus, by which the data is interchanged between the processing unit and the user program unit, is occupied, and control connection of each of the processing unit and the user program unit with the system bus in response to the control signal. In the apparatus, the processing unit and the user program unit can easily interchange a large amount of messages through the shared memory unit having an abundant memory space.02-18-2010
20120166737Information Processing Apparatus, Data Duplication Method, Program, and Storage Medium - An enhanced security protection in data duplication using a shared storage area is provided. Specifically, an information processing apparatus, in which one or more applications operate, includes a copy-operation monitoring portion that acquires copy data that the copy source application issues an instruction to copy to a general-purpose shared memory, sets a lifetime interpreted from an operation pattern via an input device for the copy data, and then stores the copy data in a storage area; a display portion that displays, on a display, a paste candidate selected from one or more items of copy data stored in the storage area; a paste-operation monitoring portion that transfers the paste candidate read from the storage area to the paste destination application in response to a confirmation operation via the input device; and an erasing portion that erases, from the storage area, copy data that has become unpermitted to remain because the lifetime has expired.06-28-2012
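The lifetime handling can be pictured as a small store of copy items stamped with an expiry time, which an erasing pass later purges. The C sketch below is a simplified illustration of that idea, not the application's implementation; all names and sizes are invented.

```c
#include <stdbool.h>
#include <string.h>
#include <time.h>

#define MAX_ITEMS 16

/* One stored copy item with the lifetime assigned at copy time. */
struct copy_item {
    bool   in_use;
    char   data[256];
    time_t expires_at;                     /* copy time + lifetime */
};

static struct copy_item store[MAX_ITEMS];

/* Store copy data with a lifetime in seconds. Returns slot index or -1. */
int store_copy(const char *data, int lifetime_s)
{
    for (int i = 0; i < MAX_ITEMS; i++) {
        if (!store[i].in_use) {
            store[i].in_use = true;
            strncpy(store[i].data, data, sizeof(store[i].data) - 1);
            store[i].data[sizeof(store[i].data) - 1] = '\0';
            store[i].expires_at = time(NULL) + lifetime_s;
            return i;
        }
    }
    return -1;
}

/* Erase every item whose lifetime has expired, as the erasing portion does. */
void purge_expired(void)
{
    time_t now = time(NULL);
    for (int i = 0; i < MAX_ITEMS; i++) {
        if (store[i].in_use && store[i].expires_at <= now) {
            memset(store[i].data, 0, sizeof(store[i].data));
            store[i].in_use = false;
        }
    }
}
```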
20130046938QoS-Aware Scheduling - In an embodiment, a memory controller includes multiple ports. Each port may be dedicated to a different type of traffic. In an embodiment, quality of service (QoS) parameters may be defined for the traffic types, and different traffic types may have different QoS parameter definitions. The memory controller may be configured to schedule operations received on the different ports based on the QoS parameters. In an embodiment, the memory controller may support upgrade of the QoS parameters when subsequent operations are received that have higher QoS parameters, via sideband request, and/or via aging of operations. In an embodiment, the memory controller is configured to reduce emphasis on QoS parameters and increase emphasis on memory bandwidth optimization as operations flow through the memory controller pipeline.02-21-2013
20110004732DMA in Distributed Shared Memory System - An example embodiment of the present invention provides processes relating to direct memory access (DMA) for nodes in a distributed shared memory system with virtual storage. The processes in the embodiment relate to DMA read, write, and push operations. In the processes, an initiator node in the system sends a message to the home node where the data for the operation will reside or presently resides, so that the home node can directly receive data from or send data to the target server, which might be a virtual I/O server. The processes employ a distributed shared memory logic circuit that is a component of each node and a connection/communication protocol for sending and receiving packets over a scalable interconnect such as InfiniBand. In the example embodiment, the processes also employ a DMA control block which points to a scatter/gather list and which control block resides in shared memory.01-06-2011
20090307435Distributed Computing Utilizing Virtual Memory - A method for distributed computing utilizing virtual memory is disclosed. The method can include identifying a first node to process an application, identifying paging space accessible to the first node, identifying a second node to share paged data with the first node, and transacting the paged data between the first node and the identified paging space. Thus, application processing results from the first node can be stored in paging space and a second node can retrieve the first result from the paging space such that the paging space can be shared between nodes. Other embodiments are also disclosed.12-10-2009
20090307434METHOD FOR MEMORY INTERLEAVE SUPPORT WITH A CEILING MASK - A distributed shared memory multiprocessor system that supports both fine- and coarse-grained interleaving of the shared memory address space. A ceiling mask sets a boundary between the fine-grain interleaved and coarse-grain interleaved memory regions of the distributed shared memory. A method for satisfying a memory access request in a distributed shared memory subsystem of a multiprocessor system having both fine- and coarse-grain interleaved memory segments. Certain low or high order address bits, depending on whether the memory segment is fine- or coarse-grain interleaved, respectively, are used to determine if the memory address is local to a processor node. A method for setting the ceiling mask of a distributed shared memory multiprocessor system to optimize performance of a first application run on a single node and performance of a second application run on a plurality of nodes.12-10-2009
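A minimal sketch of the node-selection rule implied above, assuming a simple ceiling value in place of the ceiling mask: addresses below the boundary are fine-grain interleaved on low-order (cache-line) bits, addresses at or above it are coarse-grain interleaved on high-order (region) bits. Constants and names are illustrative only.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_NODES       4u
#define CACHE_LINE_BITS 6u     /* 64-byte lines for fine-grain interleave */
#define REGION_BITS     30u    /* 1 GiB regions for coarse-grain interleave */

/* Pick the home node for an address. The 'ceiling' plays the role of the
 * ceiling mask: below it, low-order bits select the node; above it,
 * high-order bits do. */
static unsigned home_node(uint64_t addr, uint64_t ceiling)
{
    if (addr < ceiling)
        return (unsigned)((addr >> CACHE_LINE_BITS) % NUM_NODES);
    else
        return (unsigned)((addr >> REGION_BITS) % NUM_NODES);
}

int main(void)
{
    uint64_t ceiling = 1ull << 32;                  /* first 4 GiB fine-grain */
    printf("%u\n", home_node(0x0000000000001040ull, ceiling));  /* fine-grain  */
    printf("%u\n", home_node(0x0000000100000000ull, ceiling));  /* coarse-grain */
    return 0;
}
```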
20120191920Reducing Remote Reads Of Memory In A Hybrid Computing Environment By Maintaining Remote Memory Values Locally - Reducing remote reads of memory in a hybrid computing environment by maintaining remote memory values locally, the hybrid computing environment including a host computer and a plurality of accelerators, the host computer and the accelerators each having local memory shared remotely with the other, including writing to the shared memory of the host computer packets of data representing changes in accelerator memory values, incrementing, in local memory and in remote shared memory on the host computer, a counter value representing the total number of packets written to the host computer, reading by the host computer from the shared memory in the host computer the written data packets, moving the read data to application memory, and incrementing, in both local memory and in remote shared memory on the accelerator, a counter value representing the total number of packets read by the host computer.07-26-2012
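The counter scheme can be sketched as follows: the accelerator appends packets into the host's shared region and mirrors its write counter there, while the host drains packets into application memory and mirrors its read counter back, so neither side has to poll across the link. This is a simplified C illustration with invented names; a real environment would add memory barriers, remote-write primitives, and full-ring handling.

```c
#include <stdint.h>
#include <string.h>

#define MAX_PACKETS  1024
#define PACKET_BYTES 64

/* Region in the host computer's shared memory that the accelerator writes
 * into. Both counters also exist as local copies on each side, so a side
 * never reads across the link just to learn how far the other side got. */
struct shared_region {
    uint64_t packets_written;                    /* mirrored by accelerator */
    uint64_t packets_read;                       /* mirrored by host        */
    unsigned char packet[MAX_PACKETS][PACKET_BYTES];
};

/* Accelerator side: publish one packet of changed memory values and bump the
 * write counter locally and in the host's shared memory. (No full-ring check
 * here; kept minimal.) */
void accel_publish(struct shared_region *host_shm, uint64_t *local_written,
                   const void *data, size_t len)
{
    uint64_t slot = (*local_written)++;
    memcpy(host_shm->packet[slot % MAX_PACKETS], data,
           len < PACKET_BYTES ? len : PACKET_BYTES);
    host_shm->packets_written = *local_written;  /* remote copy of counter */
}

/* Host side: drain every packet written since the last call into application
 * memory and bump the read counter locally and in the accelerator's memory. */
void host_drain(struct shared_region *host_shm, uint64_t *local_read,
                struct shared_region *accel_shm, unsigned char *app_mem)
{
    while (*local_read < host_shm->packets_written) {
        uint64_t slot = (*local_read)++;
        memcpy(app_mem + (slot % MAX_PACKETS) * PACKET_BYTES,
               host_shm->packet[slot % MAX_PACKETS], PACKET_BYTES);
        accel_shm->packets_read = *local_read;   /* remote copy of counter */
    }
}
```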
20090006771Digital data management using shared memory pool - Memory management techniques involve establishing a memory pool having an amount of sharable memory, and dynamically allocating the sharable memory to concurrently manage multiple sets of sequenced units of digital data. In an exemplary scenario, the sets of sequenced units of digital data are sets of time-ordered media samples forming clips of media content, and the techniques are applied when media samples from two or more clips are simultaneously presentable to a user as independently-controlled streams. Variable amounts of sharable memory are dynamically allocated for preparing upcoming media samples for presentation to the user. In one possible implementation, a ratio of average data rates of individual streams is calculated, and amounts of sharable memory are allocated to rendering each stream based on the ratio. Then, the sharable memory allocated to rendering individual streams is reserved as needed to prepare particular upcoming media samples for presentation to the user.01-01-2009
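The ratio-based allocation step lends itself to a short worked example: split the sharable pool across streams in proportion to their average data rates. The C sketch below assumes two streams and invented numbers purely for illustration.

```c
#include <stdio.h>
#include <stddef.h>

/* Split a shared pool across streams in proportion to their average data
 * rates, the allocation rule sketched in the abstract. */
static void allocate_by_rate(size_t pool_bytes,
                             const double *rate_bps, size_t n,
                             size_t *alloc_bytes)
{
    double total = 0.0;
    for (size_t i = 0; i < n; i++)
        total += rate_bps[i];
    for (size_t i = 0; i < n; i++)
        alloc_bytes[i] = (size_t)(pool_bytes * (rate_bps[i] / total));
}

int main(void)
{
    double rates[2] = { 2.0e6, 6.0e6 };            /* two concurrent clips */
    size_t alloc[2];
    allocate_by_rate(64u << 20, rates, 2, alloc);  /* 64 MiB shared pool */
    printf("stream 0: %zu bytes, stream 1: %zu bytes\n", alloc[0], alloc[1]);
    return 0;
}
```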
20130166849Physically Remote Shared Computer Memory - A computing system with physically remote shared computer memory, the computing system including: a remote memory management module, a plurality of computing devices, a plurality of remote memory modules that are external to the plurality of computing devices, and a remote memory controller, the remote memory management module configured to partition the physically remote shared computer memory amongst a plurality of computing devices; each computing device including a computer processor and a local memory controller, the local memory controller including: a processor interface, a local memory interface, and a local interconnect interface; each remote memory controller including: a remote memory interface and a remote interconnect interface, wherein the remote memory controller is operatively coupled to the data communications interconnect via the remote interconnect interface such that the remote memory controller is coupled for data communications with the local memory controller over the data communications interconnect.06-27-2013
20090063783METHOD AND APPARATUS TO TRIGGER SYNCHRONIZATION AND VALIDATION ACTIONS UPON MEMORY ACCESS - A system and method to trigger synchronization and validation actions at memory access, in one aspect, identifies a storage class associated with a region of shared memory being accessed by a thread, determines whether the thread holds the storage class and acquires the storage class if the thread does not hold the storage class, identifies a programmable action associated with the storage class and the thread, and triggers the programmable action. One or more storage classes are respectively associated with one or more regions of shared memory. An array of storage classes associated with a thread holds one or more storage classes acquired by the thread. A configurable action table associated with a thread indicates one or more programmable actions associated with a storage class.03-05-2009
20110302375Multi-Part Aggregated Variable in Structured External Storage - A mechanism is provided for multi-part aggregated variables in structured external storage. The shared external storage provides a serialized, aggregated structure update capability. The shared external storage identifies each local value for which a group value is needed by name. Each time a member writes out its value, the member specifies the name of the object, the member's current value, and the type of aggregate function to apply (e.g., minimum, maximum, etc.). The structured external storage in one atomic operation updates the member's value, recalculates the aggregate of all of the individual values, and returns the aggregate to the member. The advantage of this approach is that it requires only one write operation to the structured external storage. The update operation does not require locking, because the operation is atomic.12-08-2011
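A single-address-space analogue of the write-and-aggregate operation might look like the C sketch below, which uses a mutex to stand in for the serialized atomic update that the structured external storage performs (the point of the described approach is precisely that no caller-side locking is needed). Names and the choice of a maximum aggregate are assumptions.

```c
#include <pthread.h>
#include <stdio.h>

#define MAX_MEMBERS 8

/* Aggregated variable: each member owns one slot; every write also
 * recomputes and returns the group aggregate in one serialized step. */
struct agg_var {
    pthread_mutex_t lock;
    long values[MAX_MEMBERS];
};

/* Write this member's value and get back the current group maximum. */
long write_and_aggregate_max(struct agg_var *v, int member, long value)
{
    pthread_mutex_lock(&v->lock);
    v->values[member] = value;
    long max = v->values[0];
    for (int i = 1; i < MAX_MEMBERS; i++)
        if (v->values[i] > max)
            max = v->values[i];
    pthread_mutex_unlock(&v->lock);
    return max;
}

int main(void)
{
    struct agg_var v = { PTHREAD_MUTEX_INITIALIZER, {0} };
    printf("%ld\n", write_and_aggregate_max(&v, 0, 5));   /* 5 */
    printf("%ld\n", write_and_aggregate_max(&v, 1, 9));   /* 9 */
    printf("%ld\n", write_and_aggregate_max(&v, 0, 3));   /* 9 */
    return 0;
}
```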
20130219130METHOD, APPARATUS, AND COMPUTER PROGRAM PRODUCT FOR INTER-CORE COMMUNICATION IN MULTI-CORE PROCESSORS - Method, apparatus, and computer program product embodiments of the invention are disclosed for efficient communication between processor units in a multi-core processor integrated circuit architecture. In example embodiments of the invention, a method comprises: storing with a shared inter-core communication unit in a multi-core processor, first data produced by a producer processor core, in a first token memory located at a first memory address of a memory address space; and connecting with the shared inter-core communication unit, the first token memory to a consumer processor core of the multi-core processor, to load the first data from the first token memory into the consumer processor core, in response to a first-type command from the producer processor core.08-22-2013
20090031088METHOD AND APPARATUS FOR HANDLING EXCESS DATA DURING MEMORY ACCESS - A computer system includes a system memory and a processor having one or more processor cores and a memory controller. The memory controller may control data transfer to the system memory. The processor further includes a cache memory such as an L3 cache, for example, that includes a data storage array for storing blocks of data. In response to a request for data by a given processor core, the system memory may provide a first data block that corresponds to the requested data, and an additional data block that is associated with the first data block and that was not requested by the given processor core. In addition, the memory controller may provide the first data block to the given processor core and store the additional data block in the cache memory.01-29-2009
20120110272CROSS PROCESS MEMORY MANAGEMENT - A method for efficiently managing memory resources in a computer system having a graphics processing unit that runs several processes simultaneously on the same computer system includes using threads to communicate that additional memory is needed. If the request indicates that termination will occur then the other processes will reduce their memory usage to a minimum to avoid termination but if the request indicates that the process will not run optimally then the other processes will reduce their memory usage to 1/N where N is the count of the total number of running processes. The apparatus includes a computer system using a graphics processing unit and processes with threads that can communicate directly with other threads and with a shared memory which is part of the operating system memory.05-03-2012
20100088474SYSTEM AND METHOD FOR MAINTAINING MEMORY PAGE SHARING IN A VIRTUAL ENVIRONMENT - In a virtualized system using memory page sharing, a method is provided for maintaining sharing when Guest code attempts to write to the shared memory. In one embodiment, virtualization logic uses a pattern matcher to recognize and intercept page zeroing code in the Guest OS. When the page zeroing code is about to run against a page that is already zeroed, i.e., contains all zeros, and is being shared, the memory writes in the page zeroing code have no effect. The virtualization logic skips over the writes, providing an appearance that the Guest OS page zeroing code has run to completion but without performing any of the writes that would have caused a loss of page sharing. The pattern matcher can be part of a binary translator that inspects code before it executes.04-08-2010
20120239886DELAYED UPDATING OF SHARED DATA - To provide delayed updating of shared data, a concept of dualistic sequence information is introduced. In the concept, if during local modification of data, a modification to the data is published by another user, a local deviation is created, and when the modification is published, it is associated with an unambiguous sequence reference and the local deviation.09-20-2012
20090282198SYSTEMS AND METHODS FOR OPTIMIZING BUFFER SHARING BETWEEN CACHE-INCOHERENT CORES - According to at least some embodiments, systems and methods are provided for mapping, by a first processor, of a memory portion that is inaccessible to a second processor to at least a segment of a pre-reserved region of memory addresses used by the second processor to enable the second processor to access the contents of the memory portion. The mapped memory portion comprising two temporary pages and all pages of data in a buffer to be shared excepting a first block of data and a last block of data, and copying the contents of the first block of data and the last block of data into its respective temporary page, at least one of the first and last blocks of data are unaligned prior to being copied into its respective temporary page. In some embodiments, at least one of the first and last blocks of data, prior to being copied into its respective temporary page, comprises a portion of data to be shared on a same cache line as a portion of data not to be shared.11-12-2009
20110271060Method And System For Lockless Interprocessor Communication - A computer readable storage medium storing a set of instructions executable by a processor. The set of instructions is operable to receive, from a first processor, a message to be sent to a second processor; store the message in a portion of a shared memory, the shared memory being shared by the first processor and the second processor; store, in an instruction list stored in a further portion of the shared memory, an instruction corresponding to the message; and prompt the second processor to read the message list.11-03-2011
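One common way to realize a lockless shared-memory message channel between two processors is a single-producer/single-consumer ring with atomic head and tail indices, sketched below in C11. This is an illustrative stand-in for the message area and instruction list described above, not the application's layout.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <string.h>

#define RING_SLOTS 64          /* power of two */
#define MSG_BYTES  128

/* Message slots plus head/tail indices in shared memory. One producer core
 * enqueues, one consumer core dequeues; no locks are taken by either side. */
struct ipc_ring {
    _Atomic unsigned head;     /* next slot the consumer will read  */
    _Atomic unsigned tail;     /* next slot the producer will write */
    char msg[RING_SLOTS][MSG_BYTES];
};

bool ipc_send(struct ipc_ring *r, const char *m, size_t len)
{
    unsigned tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    unsigned head = atomic_load_explicit(&r->head, memory_order_acquire);
    if (tail - head == RING_SLOTS || len > MSG_BYTES)
        return false;                               /* ring full or msg too big */
    memcpy(r->msg[tail % RING_SLOTS], m, len);
    atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
    return true;                                    /* "prompts" the reader */
}

bool ipc_receive(struct ipc_ring *r, char *out)
{
    unsigned head = atomic_load_explicit(&r->head, memory_order_relaxed);
    unsigned tail = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (head == tail)
        return false;                               /* nothing pending */
    memcpy(out, r->msg[head % RING_SLOTS], MSG_BYTES);
    atomic_store_explicit(&r->head, head + 1, memory_order_release);
    return true;
}
```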
20110271059REDUCING REMOTE READS OF MEMORY IN A HYBRID COMPUTING ENVIRONMENT - A hybrid computing environment in which the host computer allocates, in the shadow memory area of the host computer, a memory region for a packet to be written to the shared memory of an accelerator; writes packet data to the accelerator's shared memory in a memory region corresponding to the allocated memory region; inserts, in a next available element of the accelerator's descriptor array, a descriptor identifying the written packet data; increments the copy of the head pointer of the accelerator's descriptor array maintained on the host computer; and updates a copy of the head pointer of the accelerator's descriptor array maintained on the accelerator with the incremented copy.11-03-2011
20090287886VIRTUAL COMPUTING MEMORY STACKING - Virtual stacking is utilized in a virtual machine environment by receiving a data element for storage to a shared memory location and writing to the shared memory location. Writing to the shared memory location may be implemented by reading the shared memory location contents, encoding the received data element with the shared memory location contents to derive an encoded representation and writing the encoded representation to the shared memory location so as to overwrite the previous shared memory location contents. The method may further comprise receiving a request for a desired data element encoded into the shared memory location, decoding the shared memory location contents until the desired data element is recovered and communicating the requested data element.11-19-2009
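The encode/overwrite/decode cycle can be illustrated with XOR as the encoding function: the shared location accumulates the XOR of stacked elements, and a desired element is recovered by XOR-ing the other known elements back out. A short C sketch follows; the actual encoding used by the application may differ.

```c
#include <stdint.h>
#include <stdio.h>

/* The shared location holds the XOR of every element "stacked" into it.
 * Storing XORs the new element with the existing contents; recovering an
 * element XORs the other known elements back out. */
static uint32_t shared_location;

void stack_store(uint32_t element)
{
    shared_location ^= element;        /* encode with existing contents */
}

uint32_t stack_recover(const uint32_t *others, int n_others)
{
    uint32_t v = shared_location;
    for (int i = 0; i < n_others; i++)
        v ^= others[i];                /* decode until the element remains */
    return v;
}

int main(void)
{
    stack_store(0xAAAA5555u);
    stack_store(0x12345678u);
    uint32_t others[1] = { 0x12345678u };
    printf("recovered: 0x%08X\n", stack_recover(others, 1));  /* 0xAAAA5555 */
    return 0;
}
```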
20120297148RESOURCE SHARING IN A TELECOMMUNICATIONS ENVIRONMENT - A transceiver is designed to share memory and processing power amongst a plurality of transmitter and/or receiver latency paths, in a communications transceiver that carries or supports multiple applications. For example, the transmitter and/or receiver latency paths of the transceiver can share an interleaver/deinterleaver memory. This allocation can be done based on the data rate, latency, BER, impulse noise protection requirements of the application, data or information being transported over each latency path, or in general any parameter associated with the communications system.11-22-2012
20080244196Method and apparatus for a unified storage system - A unified storage system for executing a variety of types of storage control software using a single standardized hardware platform includes multiple storage control modules connected to storage devices for storing data related to input/output (I/O) operations. A first type of storage control software is initially installed and executed on a first storage control module for processing a first type of I/O operations. A management module replaces the first type of storage control software by installing a second type of storage control software onto the first storage control module. When the second type of storage control software is installed and executed, the first storage control module processes a second type of I/O operation, different from the first type of I/O operation. Control of volumes originally accessed by the first storage control module may be transferred to a second storage control module having the first type of storage control software installed.10-02-2008
20090049250Memory device and method having on-board processing logic for facilitating interface with multiple processors, and computer system using same - A memory device includes an on-board processing system that facilitates the ability of the memory device to interface with a plurality of processors operating in a parallel processing manner. The processing system includes circuitry that performs processing functions on data stored in the memory device in an indivisible manner. More particularly, the system reads data from a bank of memory cells or cache memory, performs a logic function on the data to produce results data, and writes the results data back to the bank or the cache memory. The logic function may be a Boolean logic function or some other logic function.02-19-2009
20090265512Methods and Apparatus for Efficiently Sharing Memory and Processing in a Multi-Processor - A shared memory network for communicating between processors using store and load instructions is described. A new processor architecture which may be used with the shared memory network is also described that uses arithmetic/logic instructions that do not specify any source operand addresses or target operand addresses. The source operands and target operands for arithmetic/logic execution units are provided by independent load instruction operations and independent store instruction operations.10-22-2009
20080282043STORAGE MANAGEMENT METHOD AND STORAGE MANAGEMENT SYSTEM - There is provided a storage management system capable of utilizing division management with enhanced flexibility and of enhancing security of the entire system, by providing functions by program products in each division unit of a storage subsystem. The storage management system has a program-product management table stored in a shared memory in the storage subsystem and showing presence or absence of the program products, which provide management functions of respective resources to respective SLPRs. At the time of executing the management functions by the program products in the SLPRs of users in accordance with instructions from the users, the storage management system is referred to and execution of the management function having no program product is restricted.11-13-2008
20080288726Transactional Memory System with Fast Processing of Common Conflicts - A computing system processes memory transactions for parallel processing of multiple threads of execution by support of which an application need not be aware. The computing system transactional memory support provides a Transaction Table in memory and performs fast detection of potential conflicts between multiple transactions. Special instructions may mark the boundaries of a transaction and identify memory locations applicable to a transaction. A ‘private to transaction’ (PTRAN) tag, enables a quick detection of potential conflicts with other transactions that are concurrently executing on another thread of said computing system. The tag indicates whether (or not) a data entry in memory is part of a speculative memory state of an uncommitted transaction that is currently active in the system. A transaction program employs a plurality of Set Associative Transaction Tables, one for each microprocessor, and Load and Store Summary Tables in memory for fast processing of common conflict.11-20-2008
20080270709Shared closures on demand - A method and apparatus for copying data from a virtual machine to a shared closure on demand. This process improves system efficiency by avoiding the copying of data in the large number of cases where the same virtual machine is the next to request access and use of the data. Load balancing and failure recovery are supported by copying the data to the shared closure when the data is requested by another virtual machine or recovering the data from the failed virtual machine and storing it in the shared closure before a terminated virtual machine is discarded.10-30-2008
20110208921INVERTED DEFAULT SEMANTICS FOR IN-SPECULATIVE-REGION MEMORY ACCESSES - A method for accessing memory by a first processor of a plurality of processors in a multi-processor system includes, responsive to a memory access instruction within a speculative region of a program, accessing contents of a memory location using a transactional memory access to the memory access instruction unless the memory access instruction indicates a non-transactional memory access. The method may include accessing contents of the memory location using a non-transactional memory access by the first processor according to the memory access instruction responsive to the instruction not being in the speculative region of the program. The method may include updating contents of the memory location responsive to the speculative region of the program executing successfully and the memory access instruction not being annotated to be a non-transactional memory access.08-25-2011
20090138665MEMORY CONTROLLER - To provide a memory controller capable of flexibly dealing with the change in the form of use or operation state of a system, a memory controller (05-28-2009
20090043969Semiconductor memory devices that are resistant to power attacks and methods of operating semiconductor memory devices that are resistant to power attacks - A semiconductor memory device according to some embodiments includes a random converter that receives data and address information including a start address value and an end address value of the address from a central processing unit (CPU), generates and stores at least one random number for each address value from the start address value to the end address value, performs a logical operation on the random number and the data corresponding to the address, and responsively generates randomized data to be stored in memory. Accordingly, the semiconductor memory device randomizes a power consumption signature that can occur when data is stored, thereby writing and reading data in a manner that is resistant to a power attack.02-12-2009
20090055597Method and Device for Sharing Information Between Memory Parcels In Limited Resource Environments - The invention relates to the management of information such as data and/or procedures residing in the memory in systems with reduced processing and storing capacity, for example, those available in a smart card. A method and a device disclosed in the invention make it possible for various applications lodged in different memory parcels to safely share data and/or procedures by making optimum use of the processing capacity of the system to which the memory belongs. A strict sharing mechanism ensures that if an application has obtained a data item or a procedure from another application or the system itself in which it is lodged, it has done so because it is authorized to use it and therefore no verification has to be made. The sharing mechanism is based on the principle that data and procedures of one application can only be referenced by another application during its execution and through the sharing mechanisms defined in this invention.02-26-2009
20090144509MEMORY SHARING BETWEEN TWO PROCESSORS - A wireless device includes a memory having a data port configured to facilitate access to the memory and at least two processing units which are configured to share the memory. The device also includes an arbiter (separate from at least one of the processing units) configured to facilitate sharing of the memory. One or both of the processing units interacts with the arbiter as if the arbiter was the memory. The wireless device could also include one or more additional processing units, which additional processing units may share access to the memory (e.g. facilitated by the arbiter).06-04-2009
20090024802NON-VOLATILE MEMORY SHARING SYSTEM FOR MULTIPLE PROCESSORS AND RELATED METHOD THEREOF - A non-volatile memory sharing system is provided. The non-volatile memory sharing system includes a plurality of processors comprising at least a first processor and a second processor, a non-volatile memory, and a processor bridge coupled between the first processor and the second processor. The non-volatile memory is coupled to the first processor, and is used for storing a plurality of program codes or data comprising at least a first program code or data for the first processor and a second program code or data for the second processor. The first processor is for executing the first program code stored in the non-volatile memory, and the second processor is for obtaining the second program code or data from the non-volatile memory via the first processor and the processor bridge.01-22-2009
20090019236Data write/read auxiliary device and method for writing/reading data - A data write/read auxiliary device and method for writing/reading data are provided. A data storage unit and a program storage unit are installed in the data write/read auxiliary device, wherein the program storage unit is for storing automatic execution program and protection program. When the data write/read auxiliary device is connected to a data processing device, the automatic execution program is executed for automatically executing programs stored in the program storage unit. The protection program is executed for executing an access process on data storage unit to judge whether driving a write/read head or not based on outcome of the access process when a file sharing software is executed for data downloading and uploading. Through the data storage unit as a buffer area of data before writing/reading data on a hard disk, the data write/read auxiliary device and method for writing/reading data can protect the hard disk.01-15-2009
20110231616DATA PROCESSING METHOD AND SYSTEM - A configurable multi-core structure is provided for executing a program. The configurable multi-core structure includes a plurality of processor cores and a plurality of configurable local memory respectively associated with the plurality of processor cores. The configurable multi-core structure also includes a plurality of configurable interconnect structures for serially interconnecting the plurality of processor cores. Further, each processor core is configured to execute a segment of the program in a sequential order such that the serially-interconnected processor cores execute the entire program in a pipelined way. In addition, the segment of the program for one processor core is stored in the configurable local memory associated with the one processor core along with operation data to and from the one processor core.09-22-2011
20090204770DEVICE HAVING SHARED MEMORY AND METHOD FOR CONTROLLING SHARED MEMORY - A device having a shared memory and a shared memory controlling method are disclosed. A digital processing device can include a shared memory, having a storage area including at least one common section, coupled to each of the processors through separate buses and outputting access information to whether a processor is accessing a common section. With the present invention, each processor can efficiently use or/and control a shared memory by using access information.08-13-2009
20090248992Upgrade of Low Priority Prefetch Requests to High Priority Real Requests in Shared Memory Controller - A prefetch controller implements an upgrade when a real read access request hits the same memory bank and memory address as a previous prefetch request. In response per-memory bank logic promotes the priority of the prefetch request to that of a read request. If the prefetch request is still waiting to win arbitration, this upgrade in priority increases the likelihood of gaining access generally reducing the latency. If the prefetch request had already gained access through arbitration, the upgrade has no effect. This thus generally reduces the latency in completion of a high priority real request when a low priority speculative prefetch was made to the same address.10-01-2009
20090222630MEMORY SHARE BY A PLURALITY OF PROCESSORS - The present invention is directed to a method and a device for memory share by a plurality of processors. The portable terminal according to an embodiment of the present invention comprises a main memory; a sub-control unit coupled to the main memory through bus #1, the sub-control unit processing and storing raw data in accordance with a process order, the raw data being stored in the main memory, the main memory being accessed through bus #1; and a main control unit coupled to the main memory through bus #2 and coupled to the sub-control unit independently through bus #3, the main control unit transmitting said process order to the sub-control unit through bus #3. The present invention can prevent the weakening of processing power or the bottleneck problem during the process of information transmission between the memory and a plurality of processors.09-03-2009
20080270710APPARATUS, METHOD AND DATA PROCESSING ELEMENT FOR EFFICIENT PARALLEL PROCESSING OF MULTIMEDIA DATA - Provided are an apparatus, a method, and a data processing element (DPE) for efficient parallel processing of multimedia data. The DPE includes: a memory routing unit (MRU) comprising a shared memory page shared by the DPE and DPEs that are adjacent to the DPE, and a shared page switch selectively connecting the shared memory page to the DPE and the adjacent DPEs; and a data processing unit (DPU) comprising a virtual page for connecting the DPU to the shared memory page, and a dynamic remapper assigning the shared memory page to a DPE according to conditions that a series of tasks for processing multimedia data are performed in the DPE and the adjacent DPEs, and controlling the shared page switch according to the assigning. Accordingly, multimedia data can be efficiently processed in parallel by mapping a temporal and directional shared memory between DPEs.10-30-2008
20090248991Termination of Prefetch Requests in Shared Memory Controller - A real request from a CPU to the same memory bank as a prior prefetch request is transmitted to the per-memory bank logic along with a kill signal to terminate the prefetch request. This avoids waiting for a prefetch request to complete before sending the real request to the same memory bank. The kill signal gates off any acknowledgement of completion of the prefetch request. This invention reduces the latency for completion of a high priority real request when a low priority speculative request to a different address in the same memory bank has already been dispatched.10-01-2009
20090248990PARTITION-FREE MULTI-SOCKET MEMORY SYSTEM ARCHITECTURE - A technique to increase memory bandwidth for throughput applications. In one embodiment, memory bandwidth can be increased, particularly for throughput applications, without increasing interconnect trace or pin count by pipelining pages between one or more memory storage areas on half cycles of a memory access clock.10-01-2009
20100161911METHOD AND APPARATUS FOR MPI PROGRAM OPTIMIZATION - Machine readable media, methods, apparatus and system for MPI program optimization. In some embodiments, shared data may be retrieved from a message passing interface (MPI) program, wherein the shared data is sharable by a plurality of processes. Then, the shared data may be allocated to a shared memory, wherein the shared memory is accessible by the plurality of processes. A single copy of the shared data may be maintained in a global buffer in the shared memory, so that each of the plurality of processes can read or write the single copy of the shared data from or to the shared memory.06-24-2010
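For comparison, MPI-3 shared-memory windows offer a standard way to keep a single copy of shared data per node that all local ranks read and write directly; the sketch below shows that mechanism as an illustration, not the specific method of the application.

```c
#include <mpi.h>
#include <stdio.h>

/* Rank 0 of the node keeps the only copy of the shared table; the other
 * ranks query its base address and read it directly from shared memory. */
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    MPI_Comm node_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);

    int rank;
    MPI_Comm_rank(node_comm, &rank);

    MPI_Aint bytes = (rank == 0) ? 1024 * sizeof(double) : 0;
    double *base;
    MPI_Win win;
    MPI_Win_allocate_shared(bytes, sizeof(double), MPI_INFO_NULL,
                            node_comm, &base, &win);

    if (rank != 0) {                  /* locate rank 0's single copy */
        MPI_Aint qsize; int disp;
        MPI_Win_shared_query(win, 0, &qsize, &disp, &base);
    }

    MPI_Win_fence(0, win);
    if (rank == 0)
        base[0] = 42.0;               /* written once... */
    MPI_Win_fence(0, win);
    printf("rank %d sees %.1f\n", rank, base[0]);   /* ...read by all ranks */

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```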
20100161908Efficient Memory Allocation Across Multiple Accessing Systems - Various embodiments of the present invention provide systems and methods for reducing memory usage across multiple virtual machines. For example, various embodiments of the present invention provide methods for reducing resource duplication across multiple virtual machines. Such methods include allocating a shared memory resource between a first virtual machine and a second virtual machine. A data set common to both the first virtual machine and the second virtual machine is identified. A first set of configuration information directing access to the data set by the first virtual machine to a first physical memory space is provided, and a second set of configuration information directing access to the data set by the second virtual machine to a second physical memory space is provided. The first physical memory space at least partially overlaps the second physical memory space.06-24-2010
20100161912Memory space management and mapping for memory area network - A mechanism for simultaneous multiple host access to shared centralized memory space via a virtualization protocol utilizing a network transport. The invention combines local memory interfacing with the handling of multiple hosts implementing virtualized memory-mapped I/O systems, such that the memory becomes a global resource. The end result is a widely distributed memory-mapped computer cluster, sharing a 2^64 byte memory space.06-24-2010
20100161909Systems and Methods for Quota Management in a Memory Appliance - Various embodiments of the present invention provide systems and methods for providing memory access across multiple virtual machines. For example, various embodiments of the present invention provide thinly provisioned computing systems. Such thinly provisioned computing systems include a network switch, at least two or more processors each communicably coupled to the network switch, and a memory appliance communicably coupled to the at least two or more processors via the network switch. The memory appliance includes a bank of memory of a memory size, and the memory size is less than the aggregate memory quota. In some instances of the aforementioned embodiments, the memory appliance further includes a memory controller that is operable to receive requests to allocate and de-allocate portions of the bank of memory.06-24-2010
20100185822MULTI-READER MULTI-WRITER CIRCULAR BUFFER MEMORY - A system for managing a circular buffer memory includes a number of data writers, a number of data readers, a circular buffer memory; and logic configured to form a number of counters, form a number of temporary variables from the counters, and allow the data writers and the data readers to simultaneously access locations in the circular buffer memory determined by the temporary variables.07-22-2010
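A well-known way to let several writers and readers claim slots of a circular buffer concurrently is to pair shared position counters with a per-slot sequence number (D. Vyukov's bounded MPMC queue). The C11 sketch below uses that scheme as an illustration; it is not claimed to match the counter and temporary-variable layout described above.

```c
#include <stdatomic.h>
#include <stdbool.h>

#define SLOTS 256u                     /* power of two */

/* Bounded multi-producer/multi-consumer ring: writers and readers claim
 * slots through shared counters plus per-slot sequence numbers, so several
 * of each can operate on the buffer at the same time. */
struct cell { _Atomic unsigned seq; int value; };

struct mpmc_ring {
    struct cell cells[SLOTS];
    _Atomic unsigned enqueue_pos;
    _Atomic unsigned dequeue_pos;
};

void mpmc_init(struct mpmc_ring *q)
{
    for (unsigned i = 0; i < SLOTS; i++)
        atomic_store_explicit(&q->cells[i].seq, i, memory_order_relaxed);
    atomic_store(&q->enqueue_pos, 0);
    atomic_store(&q->dequeue_pos, 0);
}

bool mpmc_push(struct mpmc_ring *q, int value)
{
    unsigned pos = atomic_load_explicit(&q->enqueue_pos, memory_order_relaxed);
    for (;;) {
        struct cell *c = &q->cells[pos % SLOTS];
        unsigned seq = atomic_load_explicit(&c->seq, memory_order_acquire);
        int diff = (int)(seq - pos);
        if (diff == 0) {               /* slot free: try to claim it */
            if (atomic_compare_exchange_weak_explicit(&q->enqueue_pos, &pos,
                    pos + 1, memory_order_relaxed, memory_order_relaxed)) {
                c->value = value;
                atomic_store_explicit(&c->seq, pos + 1, memory_order_release);
                return true;
            }
        } else if (diff < 0) {
            return false;              /* buffer full */
        } else {
            pos = atomic_load_explicit(&q->enqueue_pos, memory_order_relaxed);
        }
    }
}

bool mpmc_pop(struct mpmc_ring *q, int *out)
{
    unsigned pos = atomic_load_explicit(&q->dequeue_pos, memory_order_relaxed);
    for (;;) {
        struct cell *c = &q->cells[pos % SLOTS];
        unsigned seq = atomic_load_explicit(&c->seq, memory_order_acquire);
        int diff = (int)(seq - (pos + 1));
        if (diff == 0) {               /* slot filled: try to claim it */
            if (atomic_compare_exchange_weak_explicit(&q->dequeue_pos, &pos,
                    pos + 1, memory_order_relaxed, memory_order_relaxed)) {
                *out = c->value;
                atomic_store_explicit(&c->seq, pos + SLOTS, memory_order_release);
                return true;
            }
        } else if (diff < 0) {
            return false;              /* buffer empty */
        } else {
            pos = atomic_load_explicit(&q->dequeue_pos, memory_order_relaxed);
        }
    }
}
```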
20100161910STORED VALUE ACCESSORS IN SHARED MEMORY REGIONS - Instruction sets in computing environments may execute within one of several domains, such as a natively executing domain, an interpretively executing domain, and a debugging executing domain. These domains may store values in a shared region of memory in different ways. It may be difficult to perform operations on such values, particularly if a domain that generated a particular value cannot be identified or no longer exist, which may obstruct shared accessing of values and evaluative tasks such as stack walks. Instead, accessors may be associated with a stored value that perform various operations (such as low-level assembly instructions like Load, Store, and Compare) according to the standards of the value-generating domain, and domains may be configured to operate on the value through the accessors. This configuration may promote consistent accessing of values without having to identify or consult the value-generating domain or reconfiguring the instruction sets.06-24-2010
20090077324METHODS AND SYSTEMS FOR EXCHANGING DATA - A method for exchanging data between a producer and a consumer is provided. The method includes writing the data with the producer without blocking the consumer and without waiting for access to the consumer. The method also includes reading the data with the consumer without blocking the producer and without waiting for access to the producer. The data is exchanged from the producer to the consumer upon reading the data.03-19-2009
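A classic construction with exactly this property is the wait-free triple buffer: the producer publishes a filled buffer with one atomic exchange, and the consumer takes the latest published buffer the same way, so neither side ever blocks or waits on the other. The C11 sketch below illustrates that pattern; it is not drawn from the application.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Wait-free triple buffer: producer and consumer each own one of the three
 * buffers at any moment and trade with the shared "middle" slot through a
 * single atomic exchange. */
struct sample { double value; long stamp; };

static struct sample buffers[3];

#define DIRTY 0x4u                      /* middle slot holds unread data */
static _Atomic unsigned middle = 1;     /* index of the middle buffer (+flag) */
static unsigned write_idx = 0;          /* producer-owned buffer */
static unsigned read_idx  = 2;          /* consumer-owned buffer */

void producer_write(double v, long stamp)
{
    buffers[write_idx].value = v;
    buffers[write_idx].stamp = stamp;
    /* hand the filled buffer to the middle slot, take whatever was there */
    unsigned old = atomic_exchange(&middle, write_idx | DIRTY);
    write_idx = old & 3u;
}

/* Returns true and points *out at fresh data if the producer published
 * something new since the last read; never waits on the producer. */
bool consumer_read(const struct sample **out)
{
    if (!(atomic_load(&middle) & DIRTY))
        return false;                   /* nothing new */
    unsigned old = atomic_exchange(&middle, read_idx);   /* flag cleared */
    read_idx = old & 3u;
    *out = &buffers[read_idx];
    return true;
}
```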
20100228923Memory system having multiple processors - A memory system includes multiple processors. The memory system includes first and second processors, a storage device and a controller. The storage device includes one or more banks which are respectively allocated to the first processor or the second processor. The controller controls the storage device to access a plurality of banks through an interleaving method when the plurality of banks are allocated to one processor. The memory system can improve performance and power efficiency.09-09-2010
20100235588SHARED INFORMATION DISTRIBUTING DEVICE, HOLDING DEVICE, CERTIFICATE AUTHORITY DEVICE, AND SYSTEM - A distributing device for generating private information correctly even if shared information is destroyed or tampered with. A shared information distributing device for use in a system for managing private information by a secret sharing method, including: segmenting unit that segments private information into a first through an n09-16-2010
20100250863PAGING PARTITION ARBITRATION OF PAGING DEVICES TO SHARED MEMORY PARTITIONS - Disclosed is a computer implemented method, computer program product, and apparatus to establish at least one paging partition in a data processing system. The virtualization control point (VCP) reserves up to the subset of physical memory for use in the shared memory pool. The VCP configures at least one logical partition as a shared memory partition. The VCP assigns a paging partition to the shared memory pool. The VCP determines whether a user requests a redundant assignment of the paging partition to the shared memory pool. The VCP assigns a redundant paging partition to the shared memory pool, responsive to a determination that the user requests a redundant assignment. The VCP assigns a paging device to the shared memory pool. The hypervisor may transmit at least one paging request to a virtual asynchronous services interface configured to support a paging device stream.09-30-2010
20100250864Method And Apparatus For Compressing And Decompressing Data - One embodiment of the invention provides a method and apparatus for decompressing a compressed data set using a processing device having a plurality of processing units and a shared memory. The compressed data set comprises a plurality of compressed data segments, in which each compressed data segment corresponds to a predetermined size of uncompressed data. The method includes loading the compressed data set into the shared memory so that each compressed data segment is stored into a respective memory region of the shared memory. The respective memory region has a size equal to the predetermined size of the corresponding uncompressed data segment. The method further includes decompressing the compressed data segments with the processing units; and storing each decompressed data segment back to its respective memory region within the shared memory.09-30-2010
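The layout rule is simple arithmetic: segment i decompresses into the region starting at i times the fixed uncompressed segment size, so processing units never write into each other's regions. A small C sketch follows, with an assumed segment size and a placeholder decompression callback; in the described system the loop would be distributed over the processing units.

```c
#include <stddef.h>

#define SEGMENT_BYTES (64u * 1024u)    /* uncompressed size of every segment */

/* One compressed segment of the input data set. */
struct segment { const unsigned char *comp; size_t comp_len; };

/* Placeholder for whatever codec the system uses. */
typedef void (*decompress_fn)(const unsigned char *in, size_t in_len,
                              unsigned char *out, size_t out_cap);

/* Decompress each segment into its own fixed-size region of the shared
 * output buffer at offset i * SEGMENT_BYTES, so regions never overlap.
 * Shown serially here for clarity. */
void decompress_all(const struct segment *segs, size_t nsegs,
                    unsigned char *shared_out, decompress_fn decompress)
{
    for (size_t i = 0; i < nsegs; i++) {
        unsigned char *region = shared_out + i * SEGMENT_BYTES;
        decompress(segs[i].comp, segs[i].comp_len, region, SEGMENT_BYTES);
    }
}
```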
20100235589MEMORY ACCESS CONTROL IN A MULTIPROCESSOR SYSTEM - Access to a memory area by a first processor that executes a first processor program and a second processor that executes a second processor program is granted to one of the first processor and the second processor at a time. Access to the memory area by the first processor and the second processor are cyclically uniquely allocated (e.g., t≡[(ad mod m)=o]) between the first and the second processor by the first and second processor programs.09-16-2010
20100211747PROCESSOR WITH RECONFIGURABLE ARCHITECTURE - Disclosed is configuration memory access technology in a processor with a reconfigurable architecture. The processor with the reconfigurable architecture includes an array of processing elements (PEs), a configuration memory and a token network. The configuration memory stores configuration data associated with controlling data flow of the respective PEs. The token network reads the configuration data from the configuration memory, estimates data flow of the PEs from the read configuration data, reads required configuration data from the configuration memory based on the estimated data flow, and supplies the required configuration data to corresponding PEs. By reducing configuration memory access frequency through a token network, power consumption may be reduced.08-19-2010
20100122040ACCESS CONTROLLER - The present invention aims to provide an access control apparatus that can improve responsiveness to an access request of a processor compared with a conventional technology.05-13-2010
20100191921REGION COHERENCE ARRAY FOR A MULTI-PROCESSOR SYSTEM HAVING SUBREGIONS AND SUBREGION PREFETCHING - A Region Coherence Array (RCA) having subregions and subregion prefetching for shared-memory multiprocessor systems having a single-level, or a multi-level interconnect hierarchy architecture.07-29-2010
20090327617Shared Object Control - Methods, systems, and computer program products for controlling information read/write processing. The method includes assigning a plurality of division areas to a shared storage area for storing a shared object; specifying a division area used for read/write processing in accordance with user identification information for identifying a user; and executing the read processing for reading information from a specified division area and the write processing for writing information to the specified division area. The shared object is shared among a plurality of processes.12-31-2009
20090210635FACILITATING INTRA-NODE DATA TRANSFER IN COLLECTIVE COMMUNICATIONS, AND METHODS THEREFOR - Intra-node data transfer in collective communications is facilitated. A memory object of one task of a collective communication is concurrently attached to the address spaces of a plurality of other tasks of the communication. Those tasks that attach the memory object can access the memory object as if it was their own. Data can be directly written into or read from an application data structure of the memory object by the attaching tasks without copying the data to/from shared memory.08-20-2009
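POSIX shared-memory objects give a conventional single-machine analogue of attaching one task's memory object into other tasks' address spaces. The sketch below uses shm_open and mmap to illustrate the idea (the object name is made up); it does not represent the specific mechanism claimed above.

```c
/* Compile with -lrt on some systems. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define OBJ_NAME "/collective_buf"     /* hypothetical object name */
#define OBJ_SIZE 4096

int main(void)
{
    /* The "owning" task creates the memory object... */
    int fd = shm_open(OBJ_NAME, O_CREAT | O_RDWR, 0600);
    if (fd < 0 || ftruncate(fd, OBJ_SIZE) < 0) { perror("shm"); return 1; }

    /* ...and any attaching task maps it into its own address space, then
     * reads or writes the owner's data structure directly, with no copy
     * through an intermediate shared buffer. */
    char *buf = mmap(NULL, OBJ_SIZE, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(buf, "written straight into the shared object");
    printf("%s\n", buf);

    munmap(buf, OBJ_SIZE);
    close(fd);
    shm_unlink(OBJ_NAME);
    return 0;
}
```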
20100306480Shared Memory - The present invention relates to a shared memory (12-02-2010
20100281226Address Management Device - Conventionally, when a switch virtualizing a storage (storage virtualization switch) is installed in a computer system including an SAN, a host computer, and a storage device, since a port ID of a virtual storage and a port ID of a storage device assigned to the virtual storage are different, the computer system has to be suspended at the time of installation of the storage virtualization switch. The storage virtualization switch installed in the computer system assigns a port ID to a port of a virtual storage generated by the storage virtualization switch so as to be equivalent to a port ID of an existing storage device and, in the case in which the port ID is designated as an access destination by an access request from one computer to the storage device, sends the access request to the virtual storage.11-04-2010
20100281225DATA PROCESSING APPARATUS OF BASIC INPUT/OUTPUT SYSTEM - A data processing apparatus of a basic input/output system (BIOS) is provided. The data processing apparatus includes a BIOS unit, a share memory and a control unit. The BIOS unit writes command data into the share memory, wherein the command data includes identification data stored in an identification field. The control unit reads and performs the command data according to the identification data in the identification field. After the command data is performed, the control unit writes returned data into the share memory for the BIOS unit to read the returned data, wherein the returned data includes the execution result of the command data performed by the control unit and also includes the identification data.11-04-2010
20130138895METHOD AND SYSTEM FOR MAINTAINING A POINTER'S TYPE - A processing device implements a sandbox that provides an isolated execution environment, a memory structure. The processing device generates a pointer to a data item, the pointer having a type. The processing device generates a key for the pointer based on the type of the pointer. The processing device designates a name for the pointer based on the key. The processing device then inserts the pointer having the designated name into the memory structure, causing the pointer to become a private pointer.05-30-2013
20130138896Reader-Writer Synchronization With High-Performance Readers And Low-Latency Writers - Data writers desiring to update data without unduly impacting concurrent readers perform a synchronization operation with respect to plural processors or execution threads. The synchronization operation is parallelized using a hierarchical tree having a root node, one or more levels of internal nodes and as many leaf nodes as there are processors or threads. The tree is traversed from the root node to a lowest level of the internal nodes and the following node processing is performed for each node: (1) check the node's children, (2) if the children are leaf nodes, perform the synchronization operation relative to each leaf node's associated processor or thread, and (3) if the children are internal nodes, fan out and repeat the node processing with each internal node representing a new root node. The foregoing node processing is continued until all processors or threads associated with the leaf nodes have performed the synchronization operation.05-30-2013
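The fan-out can be pictured as a recursive walk: at each node, run the synchronization operation for leaf children and recurse into internal children. The C sketch below shows the traversal serially; names are invented, and the parallel dispatch the abstract relies on is only noted in a comment.

```c
#include <stdio.h>

#define FANOUT 4

/* One node of the synchronization tree; leaves carry the CPU/thread id the
 * synchronization operation must be run against, internal nodes carry
 * children (leaves have all-NULL child pointers). */
struct sync_node {
    int cpu;                               /* valid only for leaves */
    struct sync_node *child[FANOUT];       /* NULL entries for leaves */
};

static void sync_one_cpu(int cpu)          /* stand-in for the real operation */
{
    printf("sync against cpu %d\n", cpu);
}

/* Walk from the root down to the lowest internal level; for leaf children
 * run the synchronization operation, for internal children repeat with that
 * child as the new root. Shown as a serial recursion; the point of the tree
 * is that each internal node's children can be handled in parallel. */
void sync_subtree(struct sync_node *root)
{
    for (int i = 0; i < FANOUT; i++) {
        struct sync_node *c = root->child[i];
        if (!c)
            continue;
        if (!c->child[0])                  /* leaf: do the work */
            sync_one_cpu(c->cpu);
        else                               /* internal: fan out */
            sync_subtree(c);
    }
}
```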
20110010507HOST MEMORY INTERFACE FOR A PARALLEL PROCESSOR - A memory interface for a parallel processor which has an array of processing elements and can receive a memory address and supply the memory address to a memory connected to the processing elements. The processing elements transfer data to and from the memory at the memory address. The memory interface can connect to a host configured to access data in a conventional SDRAM memory device so that the host can access data in the memory.01-13-2011
20110035556Reducing Remote Reads Of Memory In A Hybrid Computing Environment By Maintaining Remote Memory Values Locally - Reducing remote reads of memory in a hybrid computing environment by maintaining remote memory values locally, the hybrid computing environment including a host computer and a plurality of accelerators, the host computer and the accelerators each having local memory shared remotely with the other, including writing to the shared memory of the host computer packets of data representing changes in accelerator memory values, incrementing, in local memory and in remote shared memory on the host computer, a counter value representing the total number of packets written to the host computer, reading by the host computer from the shared memory in the host computer the written data packets, moving the read data to application memory, and incrementing, in both local memory and in remote shared memory on the accelerator, a counter value representing the total number of packets read by the host computer.02-10-2011
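The written/read counter handshake summarized above can be illustrated with a small, single-process C sketch; in the application the two counters are mirrored between accelerator and host shared memory, whereas here both sides are ordinary functions operating on one struct, and every name and size below is illustrative.

    #include <stdio.h>

    #define RING 8

    struct change_packet { int addr; int value; };

    struct shared_region {
        unsigned long packets_written;   /* incremented by the accelerator */
        unsigned long packets_read;      /* incremented by the host        */
        struct change_packet ring[RING];
    };

    /* accelerator side: publish a change, then bump the write counter */
    static void accel_publish(struct shared_region *s, int addr, int value)
    {
        s->ring[s->packets_written % RING] = (struct change_packet){ addr, value };
        s->packets_written++;            /* mirrored into host shared memory */
    }

    /* host side: drain every packet not yet consumed, then bump the read count */
    static void host_drain(struct shared_region *s, int app_memory[])
    {
        while (s->packets_read < s->packets_written) {
            struct change_packet p = s->ring[s->packets_read % RING];
            app_memory[p.addr] = p.value;   /* move the read data to application memory */
            s->packets_read++;              /* mirrored back to the accelerator         */
        }
    }

    int main(void)
    {
        struct shared_region s = { 0 };
        int app_memory[16] = { 0 };

        accel_publish(&s, 3, 42);
        accel_publish(&s, 5, 7);
        host_drain(&s, app_memory);
        printf("app_memory[3]=%d app_memory[5]=%d read=%lu\n",
               app_memory[3], app_memory[5], s.packets_read);
        return 0;
    }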
20110145514METHOD AND APPARATUS FOR INTER-PROCESSOR COMMUNICATION IN MOBILE TERMINAL - A method for inter-processor communication in a mobile terminal is disclosed. The method of inter-processor communication for a mobile terminal having a first processor, a second processor, and a shared memory includes determining, by the first processor, the size of data to be sent to the second processor, comparing the determined size of the data with the size of one of multiple buffer areas in the shared memory to be used for transmission, rearranging the shared memory according to the data size when the size of the data is greater than the size of the buffer area to be used, and sending the data to the second processor through the rearranged shared memory. It is possible to increase data transfer rates between processors when inter-processor communication is performed through a shared memory in a mobile terminal having multiple processors.06-16-2011
20110119452 Hybrid Transactional Memory System (HybridTM) and Method - A computer processing system having memory and processing facilities for processing data with a computer program is a Hybrid Transactional Memory multiprocessor system with modules 05-19-2011
20100228925PROCESSING SYSTEM WITH INTERSPERSED PROCESSORS USING SHARED MEMORY OF COMMUNICATION ELEMENTS - A processing system comprising processors and dynamically configurable communication elements coupled together in an interspersed arrangement. The processors each comprise at least one arithmetic logic unit, an instruction processing unit, and a plurality of processor ports. The dynamically configurable communication elements each comprise a plurality of communication ports, a first memory, and a routing engine. For each of the processors, the plurality of processor ports is configured for coupling to a first subset of the plurality of dynamically configurable communication elements. For each of the dynamically configurable communication elements, the plurality of communication ports comprises a first subset of communication ports configured for coupling to a subset of the plurality of processors and a second subset of communication ports configured for coupling to a second subset of the plurality of dynamically configurable communication elements.09-09-2010
20100228924RESOURCE SHARING IN A TELECOMMUNICATIONS ENVIRONMENT - A transceiver is designed to share memory and processing power amongst a plurality of transmitter and/or receiver latency paths, in a communications transceiver that carries or supports multiple applications. For example, the transmitter and/or receiver latency paths of the transceiver can share an interleaver/deinterleaver memory. This allocation can be done based on the data rate, latency, BER, impulse noise protection requirements of the application, data or information being transported over each latency path, or in general any parameter associated with the communications system.09-09-2010
20110082984Shared Script Files in Multi-Tab Browser - A host device executes a browser application that displays web content to a user in a plurality of tabs or windows. The browser application includes an interpreter that determines whether an external file referenced in the web content already exists in a shared memory resource available to a plurality of the tabs or windows. If the external file does not exist, the interpreter obtains the external file and generates the intermediate representation of the external file for storage in the shared memory resource. If the external file does exist, the interpreter links an intermediate representation of the code embedded in the web content that is stored in a dedicated memory resource to the corresponding intermediate representation of the external file stored in the shared memory resource.04-07-2011
20110087846Accessing a Multi-Channel Memory System Having Non-Uniform Page Sizes - A method includes predicting a memory access pattern of each master of a plurality of masters. The plurality of masters can access a multi-channel memory via a crossbar interconnect, where the multi-channel memory has a plurality of banks. The method includes identifying a page size associated with each bank of the plurality of banks. The method also includes assigning at least one bank of the plurality of banks to each master of the plurality of masters based on the memory access pattern of each master.04-14-2011
20100131720MANAGEMENT OF OWNERSHIP CONTROL AND DATA MOVEMENT IN SHARED-MEMORY SYSTEMS - A method to exchange data in a shared memory system includes the use of a buffer in communication with a producer processor and a consumer processor. The cache data is temporarily stored in the buffer. The method includes having the consumer and the producer indicate intent to acquire ownership of the buffer. In response to the indication of intent, the producer, the consumer, and the buffer are prepared for the access. If the consumer intends to acquire the buffer, the producer places the cache data into the buffer. If the producer intends to acquire the buffer, the consumer removes the cache data from the buffer. The access to the buffer, however, is delayed until the producer, consumer, and the buffer are prepared.05-27-2010
20110179230METHOD OF READ-SET AND WRITE-SET MANAGEMENT BY DISTINGUISHING BETWEEN SHARED AND NON-SHARED MEMORY REGIONS - A method of read-set and write-set management distinguishes between shared and non-shared memory regions. A shared memory region, used by a transactional memory application, which may be shared by one or more concurrent transactions is identified. A non-shared memory region, used by the transactional memory application, which is not shared by the one or more concurrent transactions is identified. A subset of a read-set and a write-set that access the shared memory region is checked for conflicts with the one or more concurrent transactions at a first granularity. A subset of the read-set and the write-set that access the non-shared memory region is checked for conflicts with the one or more concurrent transactions at a second granularity. The first granularity is finer than the second granularity.07-21-2011
20100058001Distributed shared memory multiprocessor and data processing method - A distributed shared memory multiprocessor that includes a first processing element, a first memory which is a local memory of the first processing element, a second processing element connected to the first processing element via a bus, a second memory which is a local memory of the second processing element, a virtual shared memory region, where physical addresses of the first memory and the second memory are associated for one logical address in a logical address space of a shared memory having the first memory and the second memory, and an arbiter which suspends an access of the first processing element, if there is a write access request from the first processing element to the virtual shared memory region, according to a state of a write access request from the second processing element to the virtual shared memory region.03-04-2010
20110153958NETWORK LOAD REDUCING METHOD AND NODE STRUCTURE FOR MULTIPROCESSOR SYSTEM WITH DISTRIBUTED MEMORY - Provided are a network load reducing method and a node structure for a multiprocessor system with a distributed memory. The network load reducing method uses a multiprocessor system including a node having a distributed memory and an auxiliary memory storing a sharer history table. The network load reducing method includes recording the history of a sharer node in the sharer history table of the auxiliary memory, requesting share data with reference to the sharer history table of the auxiliary memory, and deleting share data stored in the distributed memory and updating the sharer history table of the auxiliary memory.06-23-2011
20110078385System and Method for Performing Visible and Semi-Visible Read Operations In a Software Transactional Memory - The software transactional memory system described herein may implement a revocable mechanism for managing read ownership in a shared memory. In this system, write ownership may be revoked by readers or writers at any time other than when a writer transaction is in a commit state, wherein its write ownership is irrevocable. An ownership record associated with one or more locations in the shared memory may include an indication of whether the memory locations are owned for writing, and an identifier of the latest writer. A read ownership array may record data indicating which, if any, threads currently own the memory locations for reading. The system may provide an efficient read-validation operation, in which a full read-set validation is avoided unless a change in a global read-write conflict counter value indicates a potential conflict. The system may support a wide range of contention management policies, and may provide implicit privatization.03-31-2011
20130013868RING BUFFER - A computer implemented method for writing to a software bound ring buffer. A network adapter may determine that data is available to write to the software bound ring buffer. The network adapter determines that a read index is not equal to a write index, responsive to a determination that data is available to write to the software bound ring buffer. The network adapter writes the data to memory referenced by the hardware write index, wherein memory referenced by the write index is offset according to an offset, and the memory contents comprise a data portion and a valid bit. The network adapter writes an epoch value of the write index to the valid bit. The network adapter increments the write index, responsive to writing the data to memory referenced by the write index. Further disclosed is a method to access a hardware bound ring buffer.01-10-2013
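A minimal C sketch of an epoch/valid-bit ring buffer in the spirit of the abstract above follows; the slot count, the way the epoch is derived from the index, and the single-writer/single-reader simplification are assumptions made for illustration, not details from the application.

    #include <stdint.h>
    #include <stdio.h>

    #define SLOTS 8   /* power of two keeps the index and epoch arithmetic simple */

    struct entry {
        uint32_t data;
        uint8_t  valid;          /* holds the writer's epoch value for this slot */
    };

    struct ring {
        struct entry slot[SLOTS];
        uint32_t write_index;    /* advanced only by the writer (the adapter)  */
        uint32_t read_index;     /* advanced only by the reader (the software) */
    };

    /* the epoch flips each time an index wraps around the buffer; starting at 1
     * makes zero-initialized valid bits read as "not yet written" */
    static uint8_t epoch_of(uint32_t index) { return 1u ^ ((index / SLOTS) & 1u); }

    static int ring_write(struct ring *r, uint32_t data)
    {
        if (r->write_index - r->read_index == SLOTS)
            return 0;                                  /* buffer full              */
        struct entry *e = &r->slot[r->write_index % SLOTS];
        e->data  = data;
        e->valid = epoch_of(r->write_index);           /* epoch into the valid bit */
        r->write_index++;
        return 1;
    }

    static int ring_read(struct ring *r, uint32_t *out)
    {
        struct entry *e = &r->slot[r->read_index % SLOTS];
        if (e->valid != epoch_of(r->read_index))       /* not written this pass    */
            return 0;
        *out = e->data;
        r->read_index++;
        return 1;
    }

    int main(void)
    {
        static struct ring r;
        uint32_t v;

        ring_write(&r, 11);
        ring_write(&r, 22);
        while (ring_read(&r, &v))
            printf("read %u\n", (unsigned)v);
        return 0;
    }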
20100306479PROVIDING SHARED MEMORY IN A DISTRIBUTED COMPUTING SYSTEM - A distributed computing system includes a plurality of processors and shared memory service entities executable on the processors. Each of the shared memory service entities is associated with a local shared memory buffer. A producer is associated with a particular shared memory service entity, and the producer provides data that is stored in the local shared memory buffer associated with the particular shared memory service entity. The shared memory service entities propagate content of the local shared memory buffers into a global shared memory, wherein propagation of content of the local shared memory buffers to the global shared memory is performed using a procedure that relaxes guarantees of consistency between the global shared memory and the local shared memory buffers.12-02-2010
20100115208CONTROL I/O OFFLOAD IN A SPLIT-PATH STORAGE VIRTUALIZATION SYSTEM - Various embodiments of systems, methods, computer systems and computer software are disclosed for implementing a control I/O offload feature in a split-path storage virtualization system. One embodiment is a method for providing split-path storage services to a plurality of hosts via a storage area network.05-06-2010
20080256305MULTIPATH ACCESSIBLE SEMICONDUCTOR MEMORY DEVICE - A multipath accessible semiconductor memory device provides an interfacing function between multiple processors which indirectly controls a flash memory. The multipath accessible semiconductor memory device comprises a shared memory area, an internal register and a control unit. The shared memory area is accessed by first and second processors through different ports and is allocated to a portion of a memory cell array. The internal register is located outside the memory cell array and is accessed by the first and second processors. The control unit provides storage of address map data associated with the flash memory outside the shared memory area so that the first processor indirectly accesses the flash memory by using the shared memory area and the internal register even when only the second processor is coupled to the flash memory. The control unit also controls a connection path between the shared memory area and one of the first and second processors. The processors share the flash memory and a multiprocessor system is provided that has a compact size, thereby substantially reducing the cost of memory utilized within the multiprocessor system.10-16-2008
20110264867MULTIPROCESSOR COMPUTING SYSTEM WITH MULTI-MODE MEMORY CONSISTENCY PROTECTION - Disclosed are a method and apparatus for protecting memory consistency in a multiprocessor computing system, relating to program code conversion such as dynamic binary translation. The exemplary multiprocessor computing system provides memory and multiple processors, and a set of controller/translator units TX10-27-2011
20100325368SHARED MEMORY HAVING MULTIPLE ACCESS CONFIGURATIONS - An apparatus includes a first processor that accesses memory according to a first clock frequency, a second processor that accesses memory according to a second clock frequency, and a memory device that is configurable to selectively operate according to the first clock frequency or the second clock frequency. A memory controller enables dynamic configuration of organization of the memory device to allow a first portion of the memory device to be accessed by the first processor according to the first clock frequency and a second portion of the memory device to be accessed by the second processor according to the second clock frequency.12-23-2010
20100332769Updating Shared Variables Atomically - When a thread begins an atomic transaction, the thread reads one or more variables from one or more source addresses. The read portion of the transaction is constrained to a predetermined amount of time or number of cycles (N). The mechanism then performs a test and set operation to determine whether any other threads hold locks on the one or more source addresses. If the locks for the one or more source addresses are free, then the thread acquires locks on the one or more source addresses. The thread then performs work and updates the one or more variables. Thereafter, the mechanism delays for an amount of time or number of cycles greater than or equal to N before releasing the locks. If another thread attempts to acquire a lock on the one or more source addresses, then the test and set operation for that other thread will fail.12-30-2010
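The bounded-read and delayed-release protocol summarized above can be sketched with C11 atomics as follows; expressing N in loop iterations rather than cycles, using a single lock word for the source addresses, and all function names are assumptions made for illustration.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define N 1000   /* bound on the read phase, and minimum hold time before unlock */

    static _Atomic int lock_flag;     /* one lock; the patent allows one per source address */
    static int shared_var = 10;

    static void spin_delay(long iters) { for (volatile long i = 0; i < iters; i++) ; }

    static bool try_update(int delta)
    {
        /* 1. read phase, bounded to roughly N "cycles" */
        int snapshot = shared_var;

        /* 2. test-and-set: fail if another thread holds the lock */
        if (atomic_exchange(&lock_flag, 1) != 0)
            return false;

        /* 3. do the work and publish the update */
        shared_var = snapshot + delta;

        /* 4. hold the lock for at least N before releasing, so a thread whose
         *    bounded read overlapped this update fails its own test-and-set */
        spin_delay(N);
        atomic_store(&lock_flag, 0);
        return true;
    }

    int main(void)
    {
        printf("updated: %s, shared_var = %d\n",
               try_update(5) ? "yes" : "no", shared_var);
        return 0;
    }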
20100122041MEMORY CONTROL APPARATUS, PROGRAM, AND METHOD - A memory control apparatus which controls access to a shared memory for each transaction. The apparatus includes a management unit that stores versions of data stored in the shared memory, a log storage unit that stores an update entry including a version of data subjected to an update operation in response to execution of an update operation on the shared memory in processing each transaction, and a control unit that writes a result of processing corresponding to execution of a relevant update operation to the shared memory when a request to commit a transaction has been given, and a relevant update entry version matches a corresponding version stored in the management unit, or re-executes the update operation and writes a result of re-execution to the shared memory when the update entry version does not match the corresponding version in the management unit.05-13-2010
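The commit-time version check described above lends itself to a short C sketch; the per-word version array standing in for the management unit, the update-log layout, and the function names are illustrative assumptions rather than details from the application.

    #include <stdio.h>

    #define WORDS 16

    static int shared_mem[WORDS];
    static unsigned version[WORDS];        /* management unit: version per word */

    struct update_entry {
        int addr;
        unsigned seen_version;             /* version recorded when the update ran */
        int (*op)(int old_value);          /* the update operation, for re-execution */
        int result;                        /* result computed at first execution */
    };

    static void commit(struct update_entry *e)
    {
        if (e->seen_version != version[e->addr])
            e->result = e->op(shared_mem[e->addr]);   /* stale: re-execute the update */
        shared_mem[e->addr] = e->result;              /* write result to shared memory */
        version[e->addr]++;                           /* new version for this word */
    }

    static int add_three(int old_value) { return old_value + 3; }

    int main(void)
    {
        struct update_entry e = { .addr = 2, .seen_version = version[2],
                                  .op = add_three, .result = add_three(shared_mem[2]) };

        shared_mem[2] = 100;  version[2]++;   /* another transaction commits first */

        commit(&e);                            /* version mismatch -> re-execute */
        printf("shared_mem[2] = %d (expected 103)\n", shared_mem[2]);
        return 0;
    }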
20100122039Memory Systems and Accessing Methods - Memory systems and accessing methods are disclosed. In one embodiment, a method of accessing a memory device includes accessing a first end of the memory device regarding a first data type, and accessing a second end of the memory device regarding a second data type.05-13-2010
20110093662MEMORY HAVING INTERNAL PROCESSORS AND DATA COMMUNICATION METHODS IN MEMORY - Memory having internal processors, and methods of data communication within such a memory are provided. In one embodiment, an internal processor may concurrently access one or more banks on a memory array on a memory device via one or more buffers. The internal processor may be coupled to a buffer capable of accessing more than one bank, or coupled to more than one buffer that may each access a bank, such that data may be retrieved from and stored in different banks concurrently. Further, the memory device may be configured for communication between one or more internal processors through couplings between memory components, such as buffers coupled to each of the internal processors. Therefore, a multi-operation instruction may be performed by different internal processors, and data (such as intermediate results) from one internal processor may be transferred to another internal processor of the memory, enabling parallel execution of an instruction(s).04-21-2011
20090292884SYSTEM ENABLING TRANSACTIONAL MEMORY AND PREDICTION-BASED TRANSACTION EXECUTION METHOD - This invention provides a system enabling Transactional Memory with an overflow prediction mechanism, comprising: a prediction unit for predicting the mode for the next execution of a transaction based on the final status of the previous execution of the transaction; an execution unit for executing the transaction in the execution mode predicted by the prediction unit, wherein the execution mode comprises overflow mode and non-overflow mode. According to this invention, before a transaction is executed, it is predicted whether or not the transaction will overflow; therefore, the execution pass that would otherwise be needed to determine whether or not an overflow will occur is saved, and the system performance can be improved.11-26-2009
20110153957SHARING VIRTUAL MEMORY-BASED MULTI-VERSION DATA BETWEEN THE HETEROGENOUS PROCESSORS OF A COMPUTER PLATFORM - A computer system may comprise a computer platform and input-output devices. The computer platform may include a plurality of heterogeneous processors comprising a central processing unit (CPU) and a graphics processing unit (GPU), and a shared virtual memory supported by a physical private memory space of at least one heterogeneous processor or a physical shared memory shared by the heterogeneous processor. The CPU (producer) may create shared multi-version data and store such shared multi-version data in the physical private memory space or the physical shared memory. The GPU (consumer) may acquire or access the shared multi-version data.06-23-2011
20100023703HARDWARE TRANSACTIONAL MEMORY SUPPORT FOR PROTECTED AND UNPROTECTED SHARED-MEMORY ACCESSES IN A SPECULATIVE SECTION - A system and method is disclosed for implementing a hardware transactional memory system capable of executing a speculative section of code containing both protected and unprotected memory access operations. A processor in a multi-processor system is configured to execute a section of code that performs a transaction using shared memory, such that a first subset of memory operations in the section of code is performed atomically with respect to the concurrent execution of the one or more other processors and a second subset of memory operations in the section of code is not. In some embodiments, the section of code includes a plurality of declarator operations, each of which is executable to designate a respective location in the shared memory as protected.01-28-2010
20100023702Shared JAVA JAR files - Techniques are disclosed for sharing programmatic modules among isolated virtual machines. A master JVM process loads data from a programmatic module, storing certain elements of that data into its private memory region, and storing other elements of that data into a “read-only” area of a shareable memory region. The master JVM process copies loaded data from its private memory region into a “read/write” area of the shareable memory region. Instead of re-loading the data from the programmatic module, other JVM processes map to the read-only area and also copy the loaded data from the read/write area into their own private memory regions. The private memory areas of all of the JVM processes begin at the same virtual memory address, so references between read-only data and copied data are preserved correctly. As a result, multiple JVM processes start up faster, and memory is conserved by avoiding the redundant storage of shareable data.01-28-2010
20100017569PCB INCLUDING MULTIPLE CHIPS SHARING AN OFF-CHIP MEMORY, A METHOD OF ACCESSING OFF-CHIP MEMORY AND A MCM UTILIZING FEWER OFF-CHIP MEMORIES THAN CHIPS - A PCB having fewer off-chip memories than chips, a MCM, and a method of accessing an off-chip shared memory space. In one embodiment, the method includes: (1) generating a memory request at a first chip of the printed circuit board, (2) transforming the memory request to a shared memory request and (3) directing the shared memory request to an off-chip shared memory space indirectly coupled to the first chip via a second chip of the printed circuit board.01-21-2010
20110307669SHARED MEMORY ARCHITECTURE - A shared memory architecture is disclosed to support operations associated with executing shared functions from a shared memory space in such a manner that separate pieces of software can execute the shared functions.12-15-2011
20110307668METHOD AND SYSTEM OF UPDATING SHARED MEMORY - A method and system is disclosed for updating a shared memory or other memory location where multiple entities rely on code stored to the same memory to support one or more operation functions. The shared memory may be updated such that the code intended to replace the currently stored code may be relied upon prior to replacement of the code currently written to the shared memory.12-15-2011
20090172300Device and method for creating a distributed virtual hard disk on networked workstations - Method and device for providing a virtual drive on a workstation PC which is connected via a network to other workstation PCs, encompassing a driver which makes available the virtual drive and carries out the following steps: 07-02-2009
20120151153Programmable Controller - A controller is provided which comprises one or more processors, a control store, a first interface control unit for interfacing a local core and a second interface control unit for interfacing one or more remote cores via an interconnect, wherein the processor or processors are programmable mini-processors adapted to execute, add, remove or modify a function by executing micro-code that is maintained typically in the local memory, but possibly also in remote or even off-chip memory, and obtained via the control store, in response to receiving a command from the first or the second interface control unit.06-14-2012
20120005434SEMICONDUCTOR MEMORY APPARATUS - A semiconductor memory apparatus includes a data selection unit, a first data processing unit, and a second data processing unit. The data selection unit is configured to select one of the first and second transfer lines to be coupled to a data pad in response to address signals. The first data processing unit is connected to the first transfer line and a first memory bank of a plurality of memory banks, and performs a data input/output (I/O) operation between the first transfer line and the first memory bank. The second data processing unit is connected to the second transfer line and a second memory bank of the plurality of memory banks, which is different from the first memory bank, and performs a data input/output (I/O) operation between the second transfer line and the second memory bank.01-05-2012
20110167225MULTIPLE-MEMORY APPLICATION-SPECIFIC DIGITAL SIGNAL PROCESSOR - An integrated circuit device is provided comprising a circuit board and one or more digital signal processors implemented thereon. The digital signal processor comprises a data unit comprising a function core configured to perform a specific mathematical expression in order to perform at least a portion of a specific application and an instruction memory storing one or more instructions configured to send commands to the control unit and the data unit to perform the specific application, and a control unit configured to control the flow of data between a plurality of memory banks and the function core for performing the specific application, and the plurality of memory banks coupled to each of the one or more digital signal processors and comprising at least two or more local memory banks integrated onto the circuit board.07-07-2011
20110167226SHARED MEMORY ARCHITECTURE - Disclosed herein is an apparatus which may comprise a plurality of nodes. In one example embodiment, each of the plurality of nodes may include one or more central processing units (CPUs), a random access memory device, and a parallel link input/output port. The random access memory device may include a local memory address space and a global memory address space. The local memory address space may be accessible to the one or more CPUs of the node that comprises the random access memory device. The global memory address space may be accessible to CPUs of all the nodes. The parallel link input/output port may be configured to send data frames to, and receive data frames from, the global memory address space comprised by the random access memory device(s) of the other nodes.07-07-2011
20120023296Recording Dirty Information in Software Distributed Shared Memory Systems - A page table entry dirty bit system may be utilized to record dirty information for a software distributed shared memory system. In some embodiments, this may improve performance without substantially increasing overhead because the dirty bit recording system is already available in certain processors. By providing extra bits, coherence can be obtained with respect to all the other uses of the existing page table entry dirty bits.01-26-2012
20120159088Processing Quality-of-Service (QoS) Information of Memory Transactions - Systems and methods for processing quality-of-service (QoS) information of memory transactions are described. In an embodiment, a method comprises receiving identification information and quality-of-service information corresponding to a first or original memory transaction transmitted from a hardware subsystem to a memory, receiving a given memory transaction from a processor complex that does not support quality-of-service encoding, determining whether the given memory transaction matches the original memory transaction, and appending the stored quality-of-service information to the given memory transaction in response to the given memory transaction matching the original memory transaction. In some embodiments, a system may be implemented as a system-on-a-chip (SoC). Devices suitable for using these systems include, for example, desktop and laptop computers, tablets, network appliances, mobile phones, personal digital assistants, e-book readers, televisions, and game consoles.06-21-2012
20090113141MEMORY PROTECTION SYSTEM AND METHOD - A shared memory controller is provided for controlling access to a shared memory by a plurality of processors. At least one device includes a storage area for storing a respective address range for each of a plurality of memory regions. The at least one device further includes a permission table containing, for each of the plurality of memory regions, read and write permission data for each of the plurality of processors. A memory fault detector is coupled to the at least one device and has an input for receiving a memory access request including a memory address, a processor identification and a read/write indicator. The memory fault detector includes logic for determining whether a memory access according to the memory access request would conflict with the read and write permission data in the permission table.04-30-2009
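The region lookup plus permission-table check described above can be sketched in a few lines of C; the region boundaries, the permission encoding, the processor count, and the fault action below are arbitrary illustrative choices, not values from the application.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NREGIONS 2
    #define NPROCS   4

    struct region { uint32_t base, limit; };            /* address range per region */

    static const struct region regions[NREGIONS] = {
        { 0x0000, 0x0FFF }, { 0x1000, 0x1FFF }
    };

    /* permission table: bit 0 = read allowed, bit 1 = write allowed */
    static const uint8_t perm[NREGIONS][NPROCS] = {
        { 0x3, 0x1, 0x0, 0x3 },     /* region 0 */
        { 0x1, 0x3, 0x3, 0x0 },     /* region 1 */
    };

    /* the fault detector: true means the access may proceed */
    static bool check_access(uint32_t addr, int proc, bool is_write)
    {
        for (int r = 0; r < NREGIONS; r++) {
            if (addr >= regions[r].base && addr <= regions[r].limit)
                return (perm[r][proc] >> (is_write ? 1 : 0)) & 1u;
        }
        return false;               /* address falls outside every known region */
    }

    int main(void)
    {
        printf("proc 1 write 0x0100: %s\n", check_access(0x0100, 1, true)  ? "ok" : "fault");
        printf("proc 2 read  0x1200: %s\n", check_access(0x1200, 2, false) ? "ok" : "fault");
        return 0;
    }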
20120072676SELECTIVE MEMORY COMPRESSION FOR MULTI-THREADED APPLICATIONS - A method, system, and computer usable program product for selective memory compression for multi-threaded applications are provided in the illustrative embodiments. An identification of a memory region that is shared by a plurality of threads in an application is received at a first entity in a data processing system. A request for a second entity in the data processing system to keep the memory region uncompressed when compressing at least one of a plurality of memory regions that comprise the memory region is provided from the first entity to the second entity.03-22-2012
20120110271MECHANISM TO SPEED-UP MULTITHREADED EXECUTION BY REGISTER FILE WRITE PORT REALLOCATION - Various systems and processes may be used to speed up multi-threaded execution. In certain implementations, a system and process may include the ability to write results of a first group of execution units associated with a first register file into the first register file using a first write port of the first register file and write results of a second group of execution units associated with a second register file into the second register file using a first write port of the second register file. The system, apparatus, and process may also include the ability to connect, in a shared register file mode, results of the second group of execution units to a second write port of the first register file and connect, in a split register file mode, results of a part of the first group of execution units to the second write port of the first register file.05-03-2012
20110066814CONTROL SOFTWARE FOR DISTRIBUTED CONTROL, AND ELECTRONIC CONTROL DEVICE - Control software is provided which can improve the development efficiency of a control system that uses a plurality of processing units, by absorbing the differences that arise when data is exchanged through a shared storage area.03-17-2011
20110066813Method And System For Local Data Sharing - Embodiments for a local data share (LDS) unit are described herein. Embodiments include a co-operative set of threads to load data into shared memory so that the threads can have repeated memory access allowing higher memory bandwidth. In this way, data can be shared between related threads in a cooperative manner by providing a re-use of a locality of data from shared registers. Furthermore, embodiments of the invention allow a cooperative set of threads to fetch data in a partitioned manner so that it is only fetched once into a shared memory that can be repeatedly accessed via a separate low latency path.03-17-2011
20090043968Sharing Volume Data Via Shadow Copies - Aspects of the subject matter described herein relate to sharing volume data via shadow copies. In aspects, an active computer creates a shadow copy of a volume. The shadow copy is exposed to one or more passive computers that may read but not write to the volume. A passive computer may obtain data from the shadow copy by determining whether the data has been written to a differential area and, if so, reading it from the differential area. If the data has not been written to the differential area, the passive computer may obtain it by first reading it from the volume, then re-determining whether it has been written to the differential area, and if so, reading the data from the differential area. Otherwise, the data read from the volume corresponds to the data needed for the shadow copy.02-12-2009
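The passive computer's read path described above (check the differential area, read the volume, then re-check) can be illustrated with a small C sketch; modeling the differential area as an in-memory table and blocks as array indices are simplifications for illustration, and in the real system the re-check matters because the active computer may copy a block to the differential area between the two checks.

    #include <stdbool.h>
    #include <stdio.h>

    #define BLOCKS 8

    static int  volume[BLOCKS] = { 10, 11, 12, 13, 14, 15, 16, 17 };
    static int  diff_area[BLOCKS];
    static bool in_diff[BLOCKS];          /* has this block been copied-on-write? */

    static int read_shadow_block(int block)
    {
        if (in_diff[block])
            return diff_area[block];      /* already preserved: use it directly   */

        int from_volume = volume[block];  /* read the live volume first ...       */

        if (in_diff[block])               /* ... then re-check: the active node   */
            return diff_area[block];      /* may have overwritten it meanwhile    */

        return from_volume;               /* volume data still matches the snapshot */
    }

    int main(void)
    {
        /* the active computer overwrites block 3 after the shadow copy was taken */
        in_diff[3] = true;  diff_area[3] = 13;  volume[3] = 99;

        printf("block 3 as of the shadow copy: %d\n", read_shadow_block(3));
        printf("block 5 as of the shadow copy: %d\n", read_shadow_block(5));
        return 0;
    }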
20120166736STORAGE SYSTEM COMPRISING MULTIPLE STORAGE APPARATUSES WITH BOTH STORAGE VIRTUALIZATION FUNCTION AND CAPACITY VIRTUALIZATION FUNCTION - A first virtual storage and a second virtual storage share an external LU (Logical Unit) inside an external storage. The first virtual storage comprises a first LU, which comprises multiple first virtual areas and conforms to thin provisioning, and an external capacity pool, which is a storage area based on the external LU, and which is partitioned into multiple external pages, which are sub-storage areas. The second virtual storage comprises a second LU, which comprises multiple second virtual areas and conforms to thin provisioning. In a data migration from the first LU to the second LU, for a data migration from a first virtual area, to which an external page has been allocated, to a second virtual area, the first virtual storage notifies the second virtual storage of a migration-source address, which is an address of the first virtual area, and an external address, which is an address of the external page that has been allocated to this virtual area, and the second virtual storage stores a corresponding relationship between the notified migration-source address and external address.06-28-2012
20120317371Usage Aware NUMA Process Scheduling - Processes may be assigned to specific processors when memory objects consumed by the processes are located in memory banks closely associated with the processors. When assigning processes to threads operating in a multiple processor NUMA architecture system, an analysis of the memory objects accessed by a process may identify processor or group of processors that may minimize the memory access time of the process. The selection may take into account the connections between memory banks and processors to identify the shortest communication path between the memory objects and the process. The processes may be pre-identified as functional processes that make little or no changes to memory objects other than information passed to or from the processes.12-13-2012
20120131285LOCKING AND SIGNALING FOR IMPLEMENTING MESSAGING TRANSPORTS WITH SHARED MEMORY - Disclosed are systems and methods for transporting data using shared memory comprising allocating, by one of a plurality of sender applications, one or more pages, wherein the one or more pages are stored in a shared memory, wherein the shared memory is partitioned into one or more pages, and writing data, by the sender application, to the allocated one or more pages, wherein a page is either available for use or allocated to the sender applications, wherein the one or more pages become available after the sender application has completed writing the data. The systems and methods further disclose sending a signal, by the sender application, to a receiver application, wherein the signal notifies the receiver application that writing the data to a particular page is complete, reading, by the receiver application, the data from the one or more pages, and de-allocating, by the receiver application, the one or more pages.05-24-2012
20120317372Efficient Communication of Producer/Consumer Buffer Status - A mechanism is provided for efficient communication of producer/consumer buffer status. With the mechanism, devices in a data processing system notify each other of updates to head and tail pointers of a shared buffer region when the devices perform operations on the shared buffer region using signal notification channels of the devices. Thus, when a producer device that produces data to the shared buffer region writes data to the shared buffer region, an update to the head pointer is written to a signal notification channel of a consumer device. When a consumer device reads data from the shared buffer region, the consumer device writes a tail pointer update to a signal notification channel of the producer device. In addition, channels may operate in a blocking mode so that the corresponding device is kept in a low power state until an update is received over the channel.12-13-2012
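The head/tail notification scheme summarized above can be sketched in single-process C, with each device's signal notification channel reduced to a one-word mailbox; the buffer size and all names are illustrative assumptions, and a real implementation would add the blocking/low-power behavior the abstract mentions.

    #include <stdio.h>

    #define BUF 8

    struct device {
        unsigned mailbox;   /* signal notification channel: latest pointer update
                               received from the other device */
    };

    static int shared_buffer[BUF];
    static struct device producer, consumer;
    static unsigned head, tail;              /* each device's own local pointer copy */

    static void produce(int value)
    {
        unsigned tail_seen = producer.mailbox;     /* tail updates arrive here   */
        if (head - tail_seen == BUF)
            return;                                /* buffer full                */
        shared_buffer[head % BUF] = value;
        head++;
        consumer.mailbox = head;                   /* notify consumer: new head  */
    }

    static int consume(int *out)
    {
        unsigned head_seen = consumer.mailbox;     /* head updates arrive here   */
        if (tail == head_seen)
            return 0;                              /* nothing to read            */
        *out = shared_buffer[tail % BUF];
        tail++;
        producer.mailbox = tail;                   /* notify producer: new tail  */
        return 1;
    }

    int main(void)
    {
        int v;
        produce(7);
        produce(9);
        while (consume(&v))
            printf("consumed %d\n", v);
        return 0;
    }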
20100235587Staged Software Transactional Memory - A new form of software transactional memory based on maps for which data goes through three stages. Updates to shared memory are first redirected to a transaction-private map which associates each updated memory location with its transaction-private value. Maps are then added to a shared queue so that multiple versions of memory can be used concurrently by running transactions. Maps are later removed from the queue when the updates they refer to have been applied to the corresponding memory locations. This design offers a very simple semantic where starting a transaction takes a stable snapshot of all transactional objects in memory. It prevents transactions from aborting or seeing inconsistent data in case of conflict. Performance is interesting for long running transactions as no synchronization is needed between a transaction's start and commit, which can themselves be lock free.09-16-2010
20120215990METHOD AND APPARATUS FOR SELECTING A NODE WHERE A SHARED MEMORY IS LOCATED IN A MULTI-NODE COMPUTING SYSTEM - A method and an apparatus for selecting a node where a shared memory is located in a multi-node computing system are provided, improving the total access performance of the multi-node computing system. The method comprises: acquiring parameters for determining a sum of memory affinity weight values between each of the CPUs and a memory on a random one of nodes; calculating the sum of the memory affinity weight values between each of the CPUs and the memory on the random one of the nodes according to the parameters; and selecting the node with the calculated minimal sum of the memory affinity weight values as the node where the shared memory for each of the CPUs is located.08-23-2012
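The selection step described above amounts to choosing the node that minimizes the summed memory affinity weights, as in the C sketch below; the weight matrix is invented for illustration and stands in for whatever parameters the method actually acquires.

    #include <stdio.h>

    #define NODES 3
    #define CPUS  4

    /* weight[c][n]: affinity cost for CPU c to access memory on node n */
    static const int weight[CPUS][NODES] = {
        { 1, 2, 4 },
        { 1, 2, 4 },
        { 4, 2, 1 },
        { 4, 2, 1 },
    };

    static int pick_shared_memory_node(void)
    {
        int best_node = 0, best_sum = -1;
        for (int n = 0; n < NODES; n++) {
            int sum = 0;
            for (int c = 0; c < CPUS; c++)
                sum += weight[c][n];             /* sum of affinity weights for node n */
            if (best_sum < 0 || sum < best_sum) {
                best_sum = sum;
                best_node = n;                   /* keep the minimal sum so far */
            }
        }
        return best_node;
    }

    int main(void)
    {
        printf("place the shared memory on node %d\n", pick_shared_memory_node());
        return 0;
    }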
20100205382DYNAMIC QUEUE MANAGEMENT - A method may include receiving a data unit and identifying a state of a memory storing data units. The method may include selecting a threshold value having a first threshold unit or a second threshold unit based on the state of the memory. The method may include comparing the threshold value to a queue state using the first threshold unit if the memory is in a first state. The method may include comparing the threshold value to the queue state using the second threshold unit if the memory is in a second state.08-12-2010
20100205381System and Method for Managing Memory in a Multiprocessor Computing Environment - A method for managing a memory communicatively coupled to a plurality of processors may include analyzing a data structure associated with a processor to determine if one or more portions of memory associated with the processor are sufficient to store data associated with an operation of the processor. The method may also include storing data associated with the operation in the one or more portions of the memory associated with the processor if the portions of memory associated with the processor are sufficient. If the portions of memory associated with the processor are not sufficient, the method may include determining if at least one portion of the memory is unassociated with any of the plurality of processors, and storing data associated with the operation in the at least one unassociated portion of the memory.08-12-2010
20120137082GLOBAL AND LOCAL COUNTS FOR EFFICIENT MEMORY PAGE PINNING IN A MULTIPROCESSOR SYSTEM - Embodiments of the disclosure relate to the management of memory pages available for pin operations by groups of processors in a multiprocessor system to reduce cache contention and improve system performance. An exemplary embodiment comprises a system that may include interconnected processors, a global count of the number of pages available for pinning, and a plurality of local counts of pages available for pinning by groups of processors. Each local count may be in proximity to a processor group and include a subset of the pages allocated from the global count that are available for pinning by processors in the group. The local counts are adjusted accordingly in response to page pinning and unpinning by processors in the respective processor groups.05-31-2012
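The split between a global count and per-group local counts described above can be sketched in C as follows; the batch size, the group count, and the absence of locking are simplifications, since real code would adjust these counters atomically to avoid the cache contention the abstract is concerned with.

    #include <stdbool.h>
    #include <stdio.h>

    #define GROUPS 2
    #define BATCH  64     /* pages moved from the global count per refill */

    static long global_available = 1024;      /* pages still available for pinning    */
    static long local_available[GROUPS];      /* per-group reserve carved from global */

    static bool pin_page(int group)
    {
        if (local_available[group] == 0) {            /* local reserve exhausted:  */
            long grab = global_available < BATCH      /* refill it from the global */
                            ? global_available : BATCH;
            if (grab == 0)
                return false;                         /* system-wide limit reached */
            global_available       -= grab;
            local_available[group] += grab;
        }
        local_available[group]--;                     /* fast path: group-local only */
        return true;
    }

    static void unpin_page(int group)
    {
        local_available[group]++;                     /* return the page to the group */
    }

    int main(void)
    {
        int pinned = 0;
        for (int i = 0; i < 100; i++)
            pinned += pin_page(0);
        unpin_page(0);
        printf("pinned %d pages, local[0]=%ld, global=%ld\n",
               pinned, local_available[0], global_available);
        return 0;
    }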
20110185129SECONDARY JAVA HEAPS IN SHARED MEMORY - A computing system includes a first virtual machine associated with a memory region readable by the first virtual machine, and a first private memory region. A data object is created by the first virtual machine in the sharable memory region, readable and writeable by the first virtual machine and a second virtual machine. A mapping is established between the first virtual machine and a particular area of the shareable memory region. The computing system includes the second virtual machine associated with a second private memory region, and a reference to the particular area of the shareable memory region. The mapping enables both the first virtual machine and second virtual machine to read and write second data in the shareable memory region without creating a copy of the second data in the first and second private memory regions.07-28-2011
20100180086DATA STORAGE DEVICE DRIVER - A method, system, and computer usable program product for an improved data storage device driver are provided in the illustrative embodiments. For managing an elevator queue, several requests are stored in the elevator queue. A determination is made whether the elevator queue is sorted. A number of requests in the elevator queue is determined if the elevator queue is unsorted. The unsorted elevator queue is monitored. Reaching a threshold condition in the unsorted elevator queue is detected. Sorting of the unsorted elevator queue is initiated. The requests may be I/O requests for a data storage device. The elevator queue may be sorted according to an ascending or descending order of data block addresses in the requests. The monitoring may monitor a remaining number of unsorted requests in the elevator queue as requests are removed from the elevator queue. The threshold condition may be associated with a threshold value.07-15-2010
20100174872Media Memory System - A method and apparatus for matching parent processor address translations to media processors' address translations and providing concurrent memory access to a plurality of media processors through separate translation table information. In particular, a page directory for a given media application is copied to a media processor's page directory when the media application allocates memory that is to be shared by a media application running on the parent processor and media processors.07-08-2010
20120221799SYSTEMS AND METHODS FOR PERFORMING STORAGE OPERATIONS IN A COMPUTER NETWORK - Methods and systems are described for performing storage operations on electronic data in a network. In response to the initiation of a storage operation and according to a first set of selection logic, a media management component is selected to manage the storage operation. In response to the initiation of a storage operation and according to a second set of selection logic, a network storage device is selected to associate with the storage operation. The selected media management component and the selected network storage device perform the storage operation on the electronic data.08-30-2012
20080294851METHOD, APPARATUS, COMPUTER PROGRAM PRODUCT, AND SYSTEM FOR MANAGEMENT OF SHARED MEMORY - A system for providing management of shared memory for concurrent access is provided. The system includes a hardware element, a software element, and a memory that is accessible by the hardware and software elements. The memory includes control data that provides logical information describing the structure of the memory and the location of data within the memory. The software element may be executed to cause the control data to be written/updated to reflect alterations to the memory. By accessing the control data of the memory, the hardware element is able to identify a location in the memory to which to write data. In this way, the hardware element may write data to the memory without interacting with the software element while writing data. An indicator may also be provided to direct the hardware element to a location of the memory to which to write data.11-27-2008
20120179879MECHANISMS FOR EFFICIENT INTRA-DIE/INTRA-CHIP COLLECTIVE MESSAGING - A mechanism for efficient intra-die collective processing across nodelets with separate shared memory coherency domains is provided. An integrated circuit die may include a hardware collective unit implemented on the integrated circuit die. A plurality of cores on the integrated circuit die is grouped into a plurality of shared memory coherence domains. Each of the plurality of shared memory coherence domains is connected to the collective unit for performing collective operations between the plurality of shared memory coherence domains.07-12-2012
20120233411Protecting Large Objects Within an Advanced Synchronization Facility - A system and method are disclosed for allowing protection of larger areas than memory lines by monitoring accessed and dirty bits in page tables. More specifically, in some embodiments, a second associative structure with a different granularity is provided to filter out a large percentage of false positives. By providing the associative structure with sufficient size, the structure exactly specifies a region in which conflicting cache lines lie. If entries within this region are evicted from the structure, enabling the tracking for the entire index filters out a substantial number of false positives (depending on a granularity and a number of indices present). In some embodiments, this associative structure is similar to a translation look aside buffer (TLB) with 4 k, 2M entries.09-13-2012
20080301378TIMESTAMP BASED TRANSACTIONAL MEMORY - A hardware implemented transactional memory system includes a mechanism to allow multiple processors to access the same memory system. A set of timestamps are stored that each correspond to a region of memory. A time stamp is updated when any memory in its associated region is updated. For each memory transaction, the time at which the transaction begins is recorded. Write operations that are part of a transaction are performed by writing the data to temporary memory. When a transaction is to be recorded, the hardware automatically commits the transaction by determining whether the timestamps associated with data read for the transaction are all prior to the start time for the transaction. In this manner, the software need not check the data for all other processes or otherwise manage collision of data with respect to different processes. The software need only identify which reads and writes are part of a transaction.12-04-2008
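The timestamp check at commit time described above can be approximated by the following C sketch; dividing memory into fixed regions, the global clock, and the comparison used for validation are illustrative choices, and the hardware would of course perform the validation and write-back atomically rather than as plain C statements.

    #include <stdbool.h>
    #include <stdio.h>

    #define WORDS   32
    #define REGIONS 4          /* WORDS / REGIONS words per timestamped region */

    static int           memory[WORDS];
    static unsigned long region_stamp[REGIONS];   /* last update time per region */
    static unsigned long global_clock = 1;

    #define REGION_OF(addr) ((addr) / (WORDS / REGIONS))

    struct txn {
        unsigned long start_time;
        int read_addrs[8];  int nreads;
        int write_addrs[8]; int write_vals[8]; int nwrites;   /* temporary memory */
    };

    static void txn_begin(struct txn *t) { *t = (struct txn){ .start_time = global_clock }; }

    static int txn_read(struct txn *t, int addr)
    {
        t->read_addrs[t->nreads++] = addr;          /* remember what was read */
        return memory[addr];
    }

    static void txn_write(struct txn *t, int addr, int val)
    {
        t->write_addrs[t->nwrites] = addr;
        t->write_vals[t->nwrites++] = val;          /* buffered, not yet visible */
    }

    static bool txn_commit(struct txn *t)
    {
        for (int i = 0; i < t->nreads; i++)         /* did anything we read change  */
            if (region_stamp[REGION_OF(t->read_addrs[i])] > t->start_time)
                return false;                       /* after we started? -> abort   */
        global_clock++;                             /* new commit timestamp         */
        for (int i = 0; i < t->nwrites; i++) {      /* publish the buffered writes  */
            memory[t->write_addrs[i]] = t->write_vals[i];
            region_stamp[REGION_OF(t->write_addrs[i])] = global_clock;
        }
        return true;
    }

    int main(void)
    {
        struct txn t;
        txn_begin(&t);
        int v = txn_read(&t, 5);
        txn_write(&t, 6, v + 1);
        printf("commit %s\n", txn_commit(&t) ? "succeeded" : "aborted");
        return 0;
    }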
20110004733Node Identification for Distributed Shared Memory System - An example embodiment of the present invention provides processes relating to a connection/communication protocol and a memory-addressing scheme for a distributed shared memory system. In the example embodiment, a logical node identifier comprises bits in the physical memory addresses used by the distributed shared memory system. Processes in the embodiment include logical node identifiers in packets which conform to the protocol and which are stored in a connection control block in local memory. By matching the logical node identifiers in a packet against the logical node identifiers in the connection control block, the processes ensure reliable delivery of packet data. Further, in the example embodiment, the logical node identifiers are used to create a virtual server consisting of multiple nodes in the distributed shared memory system.01-06-2011
20080282041Method and Apparatus for Accessing Data of a Message Memory of a Communication Module - A method and an apparatus for accessing data of a message memory of a communication module by inputting or outputting data into or from the message memory, the message memory being connected to a buffer memory assemblage and the data being transferred to the message memory or from the message memory, the buffer memory assemblage having an input buffer memory in the first transfer direction and an output buffer memory in the second transfer direction; and the input buffer memory and the output buffer memory each being divided into a partial buffer memory and a shadow memory, the following steps being performed in each transfer direction: inputting data into the respective partial buffer memory, and transposing access to the partial buffer memory and shadow memory, so that subsequent data can be inputted into the shadow memory while the previously inputted data are already being outputted from the partial buffer memory in the stipulated transfer direction.11-13-2008
20080282042Multi-path accessible semiconductor memory device with prevention of pre-charge skip - A multiprocessor system includes first and second processors and a multi-path accessible semiconductor memory device including a shared memory area and a pseudo operation execution unit. The shared memory area is accessible by the first and second processors according to a page open policy. The pseudo operation execution unit responds to a virtual active command from one of the first and second processors to close a last-opened page. The virtual active command is generated with a row address not corresponding to any row of the shared memory area. For example, bit-lines of a last accessed row are pre-charged for closing the last-opened page.11-13-2008
20120272012DISTRIBUTED SHARED MEMORY - Systems and methods for implementing a distributed shared memory (DSM) in a computer cluster in which an unreliable underlying message passing technology is used, such that the DSM efficiently maintains coherency and reliability. DSM agents residing on different nodes of the cluster process access permission requests of local and remote users on specified data segments via handling procedures, which provide for recovering of lost ownership of a data segment while ensuring exclusive ownership of a data segment among the DSM agents detecting and resolving a no-owner messaging deadlock, pruning of obsolete messages, and recovery of the latest contents of a data segment whose ownership has been lost.10-25-2012
20110213936PROCESSOR, MULTIPROCESSOR SYSTEM, AND METHOD OF DETECTING ILLEGAL MEMORY ACCESS - A processor included in a multiprocessor system including a shared memory, the processor according to an embodiment of the present invention comprises: a storing unit that stores a break occurrence memory area that is address information of the shared memory; and a break generator that causes a memory access break when the break occurrence memory area is accessed and puts the processor into a debug state. The break occurrence memory area includes address information of the shared memory in another processor included in the multiprocessor system.09-01-2011
20120331238Asynchronous Grace-Period Primitives For User-Space Applications - A technique for implementing user-level read-copy update (RCU) with support for asynchronous grace periods. In an example embodiment, a user-level RCU subsystem is established that executes within threads of a user-level multithreaded application. The multithreaded application may comprise one or more reader threads that read RCU-protected data elements in a shared memory. The multithreaded application may further comprise one or more updater threads that perform updates to the RCU-protected data elements in the shared memory and register callbacks to be executed following a grace period in order to free stale data resulting from the updates. The RCU subsystem may implement two or more helper threads (helpers) that are created or selected as needed to track grace periods and execute the callbacks on behalf of the updaters instead of the updaters performing such work themselves.12-27-2012
20120331237Asynchronous Grace-Period Primitives For User-Space Applications - A technique for implementing user-level read-copy update (RCU) with support for asynchronous grace periods. In an example embodiment, a user-level RCU subsystem is established that executes within threads of a user-level multithreaded application. The multithreaded application may comprise one or more reader threads that read RCU-protected data elements in a shared memory. The multithreaded application may further comprise one or more updater threads that perform updates to the RCU-protected data elements in the shared memory and register callbacks to be executed following a grace period in order to free stale data resulting from the updates. The RCU subsystem may implement two or more helper threads (helpers) that are created or selected as needed to track grace periods and execute the callbacks on behalf of the updaters instead of the updaters performing such work themselves.12-27-2012
20110320741METHOD AND APPARATUS PROVIDING FOR DIRECT CONTROLLED ACCESS TO A DYNAMIC USER PROFILE - An apparatus may include a profile determiner configured to determine a user profile. A contextual characteristic determiner may be configured to determine contextual characteristics relating to the apparatus and/or the user of the apparatus such that the profile determiner may infer user preferences and thereby create a dynamic portion of the user profile. An index builder may be configured to build an index of profile categories included within the user profile. A subscription registrar may cause the user profile to be registered for sharing with a service provider. Thereby a profile manager may provide for direct controlled access to the user profile which may be limited by user selection of permission levels and/or profile categories which are shared. Thereby access to the user profile may occur directly with the apparatus without storing the user profile on a separate server.12-29-2011
20120331239SHARED MEMORY ARCHITECTURE - Disclosed herein is an apparatus which may comprise a plurality of nodes. In one example embodiment, each of the plurality of nodes may include one or more central processing units (CPUs), a random access memory device, and a parallel link input/output port. The random access memory device may include a local memory address space and a global memory address space. The local memory address space may be accessible to the one or more CPUs of the node that comprises the random access memory device. The global memory address space may be accessible to CPUs of all the nodes. The parallel link input/output port may be configured to send data frames to, and receive data frames from, the global memory address space comprised by the random access memory device(s) of the other nodes.12-27-2012
20130019068SYSTEMS AND METHODS FOR SHARING MEDIA IN A COMPUTER NETWORK - A computerized method for sharing removable storage media in a network, the method comprising associating, in an index entry, a first piece of removable storage media in a first storage device with at least a first storage policy copy and a second storage policy copy; copying, to the first piece of removable storage media, data associated with the first storage policy copy; and copying, to the first piece of removable storage media, data associated with the second storage policy copy.01-17-2013
20080244195METHODS AND APPARATUSES TO SUPPORT MEMORY TRANSACTIONS USING PARTIAL PHYSICAL ADDRESSES - Methods and apparatuses to support memory transactions using partial physical addresses are disclosed. Method embodiments generally comprise home agents monitoring multiple responses to multiple memory requests, wherein at least one of the responses has a partial address for a memory line, resolving conflicts for the memory requests, and suspending conflict resolution for the memory requests which match partial address responses until determining the full address. Apparatus embodiments generally comprise a home agent having a response monitor and a conflict resolver. The response monitor may observe a snoop response of a memory agent, wherein the snoop response only has a partial address and is for a memory line of a memory agent. The conflict resolver may suspend conflict resolution for memory transactions which match the partial address of the memory line until the conflict resolver receives a full address for the memory line.10-02-2008
20110246726PROCESSING DATA IN SHARED MEMORY - Various embodiments of systems and methods for processing data in shared memory are described herein. A number of work processes of an application server write data in corresponding areas of shared memory. At least one data unit for a first process is read from a first area of the shared memory by the first process. The first process also reads at least one unit of data for a second process from a second area of the shared memory. The first process writes information in a third area of the memory to indicate that the at least one unit of data for the first process and the at least one unit of data for the second process are read. The read data units are aggregated and saved in a storage by the first process.10-06-2011
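As a rough illustration of the per-process areas in 20110246726, the sketch below assumes a region split into two worker data areas plus a third flags area; worker process 1 reads both areas, marks them consumed, and returns the aggregate for saving. The layout, names, and use of a plain struct (rather than a real shared segment obtained with shm_open()/mmap()) are simplifying assumptions.

    #define UNITS 4

    struct shared_region {
        int area1[UNITS];        /* data written by work process 1            */
        int area2[UNITS];        /* data written by work process 2            */
        int read_flags[2];       /* third area: records which areas were read */
    };

    /* Process 1 reads its own units and process 2's units, marks both areas
     * as read in the third area, and returns the aggregate to be saved. */
    int aggregate_as_process1(struct shared_region *r)
    {
        int sum = 0;
        for (int i = 0; i < UNITS; i++)
            sum += r->area1[i] + r->area2[i];
        r->read_flags[0] = 1;
        r->read_flags[1] = 1;
        return sum;
    }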
20110246725System and Method for Committing Results of a Software Transaction Using a Hardware Transaction - The system and methods described herein may exploit hardware transactional memory to improve the performance of a software or hybrid transactional memory implementation, even when an entire user transaction cannot be executed within a hardware transaction. The user code of an atomic transaction may be executed within a software transaction, which may collect read and write sets and/or other information about the atomic transaction. A single hardware transaction may be used to commit the atomic transaction by validating the transaction's read set and applying the effects of the user code to memory, reducing the overhead associated with commitment of software transactions. Because the hardware transaction code is carefully controlled, it may be less likely to fail to commit. Various remedial actions may be taken before retrying hardware transactions following some failures. If a transaction exceeds the constraints of the hardware, it may be committed by the software transactional memory alone.10-06-2011
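The commit step described in 20110246725, validating the software transaction's read set and publishing its write set inside a single hardware transaction with a software fallback, can be sketched with Intel's RTM intrinsics (immintrin.h, compiled with -mrtm). The helpers read_set_valid(), apply_write_set(), and stm_commit_slowpath() are hypothetical stand-ins for an STM runtime and are stubbed so the sketch compiles.

    #include <immintrin.h>
    #include <stdbool.h>

    /* Hypothetical STM-runtime hooks, stubbed so the sketch stands alone. */
    static bool read_set_valid(void)      { return true; }  /* re-check read versions  */
    static void apply_write_set(void)     { }               /* publish buffered writes */
    static bool stm_commit_slowpath(void) { return true; }  /* software-only commit    */

    bool commit_with_htm(void)
    {
        for (int attempt = 0; attempt < 3; attempt++) {
            unsigned status = _xbegin();
            if (status == _XBEGIN_STARTED) {
                if (!read_set_valid())
                    _xabort(0x01);        /* read set invalid: conflicting update seen */
                apply_write_set();        /* all effects become visible atomically     */
                _xend();
                return true;
            }
            /* 'status' encodes the abort cause; remedial action could go here. */
        }
        return stm_commit_slowpath();     /* constraints exceeded: fall back to software */
    }

Keeping the hardware path this small is the point: the less work done between _xbegin() and _xend(), the less likely the hardware transaction is to exceed capacity or hit a conflict.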
20110246724System and Method for Providing Locale-Based Optimizations In a Transactional Memory - The system and methods described herein may reduce read/write fence latencies and cache pressure related to STM metadata accesses. These techniques may leverage locality information (as reflected by the value of a respective locale guard) associated with each of a plurality of data partitions (locales) in a shared memory to elide various operations in transactional read/write fences when transactions access data in locales owned by their threads. The locale state may be disabled, free, exclusive, or shared. For a given memory access operation of an atomic transaction targeting an object in the shared memory, the system may implement the memory access operation using a contention mediation mechanism selected based on the value of the locale guard associated with the locale in which the target object resides. For example, a traditional read/write fence may be employed in some memory access operations, while other access operations may employ an optimized read/write fence.10-06-2011
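A minimal reading of the locale-guard dispatch in 20110246724: if the locale holding the target object is exclusively owned by the accessing thread, a cheaper fence can be used; otherwise the traditional fence runs. The enum values, field names, and fence stubs below are assumptions for illustration, not the patent's metadata layout.

    /* Hypothetical barrier implementations; only the dispatch matters here. */
    static void full_read_fence(void *obj)      { (void)obj; }
    static void optimized_read_fence(void *obj) { (void)obj; }

    enum locale_state { LOCALE_DISABLED, LOCALE_FREE, LOCALE_EXCLUSIVE, LOCALE_SHARED };

    struct locale_guard {
        enum locale_state state;
        int owner_tid;                   /* meaningful only for LOCALE_EXCLUSIVE */
    };

    void tm_read(struct locale_guard *g, void *obj, int my_tid)
    {
        if (g->state == LOCALE_EXCLUSIVE && g->owner_tid == my_tid)
            optimized_read_fence(obj);   /* locale owned by this thread: elide work */
        else
            full_read_fence(obj);        /* contention possible: traditional fence  */
    }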
20080222365Managed Memory System - A managed memory system is provided. More specifically, in one embodiment, there is provided a system including a memory device and a switch coupled to the memory device. The switch has at least a first switch position and a second switch position. The system also includes a memory controller coupled to the first switch position and a processor interface coupled to the second switch position.09-11-2008
20130145106COMMAND PORTAL FOR SECURELY COMMUNICATING AND EXECUTING NON-STANDARD STORAGE SUBSYSTEM COMMANDS - A command portal enables a host system to send non-standard or “vendor-specific” storage subsystem commands to a storage subsystem using an operating system (OS) device driver that does not support or recognize such non-standard commands. The architecture thereby reduces or eliminates the need to develop custom device drivers that support the storage subsystem's non-standard commands. To execute non-standard commands using the command portal, the host system embeds the non-standard commands in blocks of write data, and writes these data blocks to the storage subsystem using standard write commands supported by standard OS device drivers. The storage subsystem extracts and executes the non-standard commands. The non-standard commands may alternatively be implied by the particular target addresses used. The host system may retrieve execution results of the non-standard commands using standard read commands. The host-side functionality of the command portal may be embodied in an API that is made available to application developers.06-06-2013
20130097389MEMORY ACCESS CONTROLLER, MULTI-CORE PROCESSOR SYSTEM, MEMORY ACCESS CONTROL METHOD, AND COMPUTER PRODUCT - A memory access controller includes a semiconductor circuit configured to classify multiple cores capable of accessing a shared memory into a first group of cores that have made an exclusive access request to the shared memory and a second group of cores that have not; detect, among the first group of cores, a core that has completed the exclusive access; and, upon detecting such a core, send to a core in the first group that is standing by for exclusive access a notification of release from the standby state.04-18-2013
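In software terms, the notify-on-release behaviour of 20130097389 resembles a condition-variable handshake: a core requesting exclusive access waits while another core holds it, and completion wakes one waiter. The sketch below uses pthreads in place of the semiconductor circuit; names are illustrative.

    #include <pthread.h>

    static pthread_mutex_t excl_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  excl_free = PTHREAD_COND_INITIALIZER;
    static int exclusive_busy;          /* 1 while some core holds exclusive access */

    void begin_exclusive_access(void)   /* core joins the first group and waits */
    {
        pthread_mutex_lock(&excl_lock);
        while (exclusive_busy)
            pthread_cond_wait(&excl_free, &excl_lock);   /* standby state */
        exclusive_busy = 1;
        pthread_mutex_unlock(&excl_lock);
    }

    void end_exclusive_access(void)     /* completion detected: release a waiter */
    {
        pthread_mutex_lock(&excl_lock);
        exclusive_busy = 0;
        pthread_cond_signal(&excl_free);                 /* notification of release */
        pthread_mutex_unlock(&excl_lock);
    }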
20130097388DEVICE AND DATA PROCESSING SYSTEM - A device is disclosed which includes a register storing a plurality of latency data and a control unit responding to the latency data. Each of the latency data indicates a period of time between issue of a data transfer request command responsive to an access request from one of access request sources and initiation of a data transfer operation responsive to the data transfer request command. The control unit controls the order of issue of data transfer request commands responsive to access requests from the access request sources so that between issue of a first data transfer request command responsive to a first access request and initiation of a first data transfer operation responsive to the first data transfer request command, at least issue of a second data transfer request command responsive to a second access request is performed.04-18-2013
20130124804DATA RESTORATION PROGRAM, DATA RESTORATION APPARATUS, AND DATA RESTORATION METHOD - A computer-readable recording medium stores a program that causes a computer capable of accessing a multicore processor equipped with volatile memories and a plurality of cores accessing the volatile memories, to execute a data restoration process. The data restoration process includes detecting a suspend instruction to any one of the cores in the multicore processor; and restoring, when the suspend instruction is detected, the data stored in the volatile memory accessed by the core receiving the suspend instruction, the data being restored into a shared memory accessed by the cores still in operation, based on parity data stored in the volatile memories accessed by the cores in operation other than the core receiving the suspend instruction.05-16-2013
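The restoration step in 20130124804 is essentially RAID-style XOR reconstruction: the suspended core's block equals the parity block XORed with every surviving core's corresponding block. The sketch below assumes one parity buffer and equal-sized per-core buffers, an illustrative simplification of the patent's layout.

    #include <stddef.h>
    #include <stdint.h>

    /* Rebuild the suspended core's data into 'restored' by XOR-ing the parity
     * block with each surviving core's corresponding block. */
    void restore_from_parity(uint8_t *restored, const uint8_t *parity,
                             const uint8_t *surviving[], size_t ncores, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            uint8_t b = parity[i];
            for (size_t c = 0; c < ncores; c++)
                b ^= surviving[c][i];
            restored[i] = b;         /* written into shared memory in the patent */
        }
    }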
20130132684AUTOMATIC OPTIMIZATION FOR PROGRAMMING OF MANY-CORE ARCHITECTURES - The present invention extends to methods, systems, and computer program products for automatically optimizing memory accesses by kernel functions executing on parallel accelerator processors. A function is accessed. The function is configured to operate over a multi-dimensional matrix of memory cells through invocation as a plurality of threads on a parallel accelerator processor. A layout of the memory cells of the multi-dimensional matrix and a mapping of memory cells to global memory at the parallel accelerator processor are identified. The function is analyzed to identify how each of the threads accesses the global memory to operate on corresponding memory cells when invoked from the kernel function. Based on the analysis, the function is altered to utilize a more efficient memory access scheme when performing accesses to the global memory. The more efficient memory access scheme increases coalesced memory access by the threads when invoked over the multi-dimensional matrix.05-23-2013
20130145105Data Storage Systems and Methods - Example data storage systems and methods are described. In one implementation, a method identifies data to be written to a shared storage system that includes multiple storage nodes. The method communicates a write operation vote request to each of the multiple storage nodes. The write operation vote request is associated with a data write operation to write the identified data to the shared storage system. A positive response is received from at least a portion of the multiple storage nodes. The data write operation is initiated in response to receiving positive responses from a quorum of the storage nodes.06-06-2013
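The vote-then-write flow of 20130145105 can be reduced to counting positive responses against a quorum before initiating the write. The sketch below assumes a simple majority quorum and stubs out the node messaging; both are assumptions, since the abstract does not fix the quorum size or the transport.

    #include <stdbool.h>
    #include <stddef.h>

    /* Stand-ins for the real node messaging; always voting yes keeps the
     * sketch self-contained. */
    static bool send_vote_request(size_t node) { (void)node; return true; }
    static void start_write(void)              { }

    /* Returns true if a simple-majority quorum voted yes and the write began. */
    bool quorum_write(size_t nnodes)
    {
        size_t quorum = nnodes / 2 + 1;
        size_t yes = 0;
        for (size_t n = 0; n < nnodes; n++)
            if (send_vote_request(n))      /* write operation vote request   */
                yes++;
        if (yes < quorum)
            return false;                  /* not enough positive responses  */
        start_write();                     /* data write operation initiated */
        return true;
    }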
20130185523DECOUPLED METHOD FOR TRACKING INFORMATION FLOW AND COMPUTER SYSTEM THEREOF - A computer system and a method for tracking information flow are provided. The computer system divides an information flow tracking task into two decoupled tasks executed by two procedures. The first procedure emulates execution of instructions and divides the instructions into code blocks according to an instruction executing sequence. The first procedure translates the instructions of the code blocks into information flow codes and transmits them to the second procedure. The first procedure further translates the instructions into dynamic emulation instructions and executes the dynamic emulation instructions to generate addressing results of the dynamic addressing instructions. The second procedure executes the information flow codes according to the addressing results to emulate the instructions of the code blocks. Moreover, the method seeks to reduce the amount of data transmitted between the two procedures while the first procedure executes the emulation task. Therefore, the efficiency of tracking information flow is enhanced.07-18-2013
20130151791TRANSACTIONAL MEMORY CONFLICT MANAGEMENT - A computing device initiates a transaction, corresponding to an application, which includes operations for accessing data stored in a shared memory and buffering alterations to the data as speculative alterations to the shared memory. The computing device detects a transaction abort scenario corresponding to the transaction and notifies the application regarding the transaction abort scenario. The computing device determines whether to abort the transaction based on instructions received from the application regarding the transaction abort scenario. When the transaction is to be aborted, the computing device restores the transaction to an operation prior to accessing the data stored in the shared memory and buffering alterations to the data as speculative alterations to the shared memory. When the transaction is not to be aborted, the computing device enables the transaction to continue.06-13-2013
20130151792PROCESSOR COMMUNICATIONS - A processor module including a processor configured to share data with at least one further processor module processor; and a memory mapped peripheral configured to communicate with at least one further processor memory mapped peripheral to control the sharing of the data, wherein the memory mapped peripheral includes a sender part including a data request generator configured to output a data request indicator to the further processor module dependent on a data request register write signal from the processor; and an acknowledgement waiting signal generator configured to output an acknowledgement waiting signal to the processor dependent on a data acknowledgement signal from the further processor module, wherein the data request generator data request indicator is further dependent on the data acknowledgement signal and the acknowledgement waiting signal generator acknowledgement waiting signal is further dependent on the acknowledgement waiting register write signal.06-13-2013
20130151793Multi-Context Configurable Memory Controller - The exemplary embodiments provide a multi-context configurable memory controller comprising: an input-output data port array comprising a plurality of input queues and a plurality of output queues; at least one configuration and control register to store, for each context of a plurality of contexts, a plurality of configuration bits; a configurable circuit element configurable for a plurality of data operations, each data operation corresponding to a context of a plurality of contexts, the plurality of data operations comprising memory address generation, memory write operations, and memory read operations, the configurable circuit element comprising a plurality of configurable address generators; and an element controller, the element controller comprising a port arbitration circuit to arbitrate among a plurality of contexts having a ready-to-run status, and the element controller to allow concurrent execution of multiple data operations for multiple contexts having the ready-to-run status.06-13-2013
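One way to picture the port-arbitration step of 20130151793: scan the contexts round-robin from the last winner and grant the port to the next context whose status is ready-to-run. The structure fields and the round-robin policy below are illustrative assumptions, not the controller's actual arbitration scheme.

    #define NCONTEXTS 8

    struct context {
        int ready_to_run;        /* status checked by the port arbitration circuit   */
        unsigned config_bits;    /* per-context configuration (unused in the sketch) */
    };

    /* Returns the index of the next ready context after last_winner, or -1. */
    int arbitrate(const struct context ctx[NCONTEXTS], int last_winner)
    {
        for (int i = 1; i <= NCONTEXTS; i++) {
            int c = (last_winner + i) % NCONTEXTS;
            if (ctx[c].ready_to_run)
                return c;        /* this context gets the data port this cycle */
        }
        return -1;               /* no context is ready-to-run */
    }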
20120260046PROGRAMMABLE LOGIC APPARATUS EMPLOYING SHARED MEMORY, VITAL PROCESSOR AND NON-VITAL COMMUNICATIONS PROCESSOR, AND SYSTEM INCLUDING THE SAME - A programmable logic apparatus includes a shared memory having a first port, a second port and a third port; a first vital processor interfaced to the first port of the shared memory; and a non-vital communications processor separated from the first vital processor in the programmable logic apparatus and interfaced to the second port of the shared memory. The third port of the shared memory is an external port structured to interface an external second vital processor.10-11-2012
20100318748DATA RECORDER - A data recorder includes a first memory element including read/write capability, a second memory element including non-volatile memory and a controller for realizing memory management functions. The controller responds to a predetermined triggering event by writing selected data from the first memory element to the second memory element. The selected data include data units that have been modified after a prior triggering event.12-16-2010
20130159634Systems and Methods for Handling Out of Order Reporting in a Storage Device - Various embodiments of the present invention provide systems and methods for handling out of order reporting in a storage device.06-20-2013
20130159635SYSTEM AND METHOD FOR MAINTAINING MEMORY PAGE SHARING IN A VIRTUAL ENVIRONMENT - In a virtualized system using memory page sharing, a method is provided for maintaining sharing when Guest code attempts to write to the shared memory. In one embodiment, virtualization logic uses a pattern matcher to recognize and intercept page zeroing code in the Guest OS. When the page zeroing code is about to run against a page that is already zeroed, i.e., contains all zeros, and is being shared, the memory writes in the page zeroing code have no effect. The virtualization logic skips over the writes, providing an appearance that the Guest OS page zeroing code has run to completion but without performing any of the writes that would have caused a loss of page sharing. The pattern matcher can be part of a binary translator that inspects code before it executes.06-20-2013
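The effect described in 20130159635 boils down to this: when the target guest page is already the shared all-zero page, the zeroing writes are skipped so copy-on-write sharing is not broken. In the sketch below, page_is_shared_zero() is a hypothetical stand-in for the virtualization logic's pattern matcher and page-sharing bookkeeping.

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    /* Hypothetical VMM-side query: is this guest page currently backed by the
     * shared all-zero page? Stubbed here so the sketch stands alone. */
    static bool page_is_shared_zero(void *page) { (void)page; return false; }

    void guest_zero_page(void *page, size_t page_size)
    {
        if (page_is_shared_zero(page))
            return;                      /* writes would change nothing: skip
                                            them and keep the page shared    */
        memset(page, 0, page_size);      /* otherwise really zero the page   */
    }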
20120030433Method, Mobile Terminal and Computer Program Product for Sharing Storage Device - The invention discloses a method of sharing a storage device and a mobile terminal. The mobile terminal comprises a first processor, a second processor and a readable and writable nonvolatile storage device. A processing capacity of the first processor is different from that of the second processor. A state in which the first processor is operating and using the storage device is a second state. A state in which the second processor is operating and using the storage device is a third state. The method comprises: the first processor receiving a switch instruction; and the first processor controlling the storage device to enter the second state or the third state according to the switch instruction. As compared with the prior art, by having the first processor control sharing of the storage device, the invention reduces the number of components in the mobile terminal and saves hardware cost; moreover, the physical connections between the components in the mobile terminal are simple and easily controlled.02-02-2012
20120030432SYSTEMS AND METHODS FOR SHARING MEDIA IN A COMPUTER NETWORK - A computerized method for sharing removable storage media in a network, the method comprising associating, in an index entry, a first piece of removable storage media in a first storage device with at least a first storage policy copy and a second storage policy copy; copying, to the first piece of removable storage media, data associated with the first storage policy copy; and copying, to the first piece of removable storage media, data associated with the second storage policy copy.02-02-2012
20120066457SYSTEM AND METHOD FOR ALLOCATING AND DEALLOCATING MEMORY WITHIN TRANSACTIONAL CODE - Methods and systems are provided for managing memory allocations and deallocations while in transactional code, including nested transactional code. The methods and systems manage transactional memory operations by using identifiers, such as sequence numbers, to handle memory management in transactions. The methods and systems also maintain lists of deferred actions to be performed at transaction abort and commit times. A number of memory management routines associated with one or more transactions examine the transaction sequence number of the current transaction, manipulate commit and/or undo logs, and set/use the transaction sequence number of an associated object, but are not so limited. The methods and systems provide for memory allocations and deallocations within transactional code while preserving transactional semantics. Other embodiments are described and claimed.03-15-2012
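The deferred-action lists of 20120066457 can be sketched as two per-transaction lists: frees requested inside the transaction are executed only at commit, while allocations made inside the transaction are reclaimed only on abort. The structures and function names below are illustrative; the sequence-number handling and nesting support that the abstract mentions are omitted.

    #include <stdlib.h>
    #include <stddef.h>

    struct deferred { void *ptr; struct deferred *next; };

    struct txn {
        unsigned long seq;                  /* transaction sequence number     */
        struct deferred *free_on_commit;    /* frees deferred to commit time   */
        struct deferred *free_on_abort;     /* speculative allocations to undo */
    };

    static void push(struct deferred **list, void *p)
    {
        struct deferred *d = malloc(sizeof *d);
        d->ptr = p; d->next = *list; *list = d;
    }

    void *txn_malloc(struct txn *t, size_t n)
    {
        void *p = malloc(n);
        push(&t->free_on_abort, p);         /* reclaimed automatically on abort */
        return p;
    }

    void txn_free(struct txn *t, void *p)
    {
        push(&t->free_on_commit, p);        /* real free() waits for commit */
    }

    static void run_frees(struct deferred *d)   /* free payloads and list nodes */
    { while (d) { struct deferred *n = d->next; free(d->ptr); free(d); d = n; } }

    static void drop_list(struct deferred *d)   /* discard nodes, keep payloads */
    { while (d) { struct deferred *n = d->next; free(d); d = n; } }

    void txn_commit(struct txn *t)
    {
        run_frees(t->free_on_commit);       /* deferred frees take effect now */
        drop_list(t->free_on_abort);        /* allocations survive the commit */
        t->free_on_commit = t->free_on_abort = NULL;
    }

    void txn_abort(struct txn *t)
    {
        run_frees(t->free_on_abort);        /* discard speculative allocations    */
        drop_list(t->free_on_commit);       /* cancelled frees: memory stays live */
        t->free_on_commit = t->free_on_abort = NULL;
    }

Executing both lists at a well-defined point is what preserves transactional semantics: no memory is visibly freed or leaked while the outcome of the transaction is still undecided.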
20130212337EVALUATION SUPPORT METHOD AND EVALUATION SUPPORT APPARATUS - An evaluation support method includes acquiring a first number of occurrences of accessing target data stored in a first storage apparatus per unit time, a second number of occurrences of accessing a second storage apparatus per unit time, and a predictive response time for accessing the second storage apparatus after the target data is transferred to the second storage apparatus; calculating, based on the first number of occurrences, the second number of occurrences, and the predictive response time, multiplicity that expresses the extent to which process time periods for accesses overlap when each access to the second storage apparatus after the target data is transferred is processed in parallel; and outputting the multiplicity.08-15-2013
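The abstract of 20130212337 does not give the multiplicity formula, so the sketch below is only one plausible reading: if multiplicity means the mean number of overlapping accesses in flight at the second storage apparatus, Little's law (L = lambda x W) gives it as the combined access rate times the predicted response time. Treat both the interpretation and the parameter names as assumptions.

    /* Assumed interpretation, not the patent's stated calculation:
     * multiplicity = (rate of moved accesses + rate of existing accesses)
     *                * predicted response time, per Little's law. */
    double estimate_multiplicity(double moved_accesses_per_sec,     /* first number of occurrences  */
                                 double existing_accesses_per_sec,  /* second number of occurrences */
                                 double predicted_response_sec)     /* predictive response time     */
    {
        double combined_rate = moved_accesses_per_sec + existing_accesses_per_sec;
        return combined_rate * predicted_response_sec;
    }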
20130212338MULTICORE PROCESSOR - A multicore processor includes a plurality of cores; a shared memory that is shared by the cores and that is divided into a plurality of storage areas whose writable data sizes are determined in advance; a receiving unit that receives a task given to the cores; and a writing unit that writes the received task in one of the storage areas that is set in advance according to a data size of the task.08-15-2013
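A simple reading of the writing unit in 20130212338: place the received task in the smallest pre-sized storage area that is free and large enough for it. The best-fit policy and the struct layout below are illustrative assumptions; the abstract only says the target area is set in advance according to the task's data size.

    #include <stddef.h>
    #include <string.h>

    struct area {
        size_t capacity;    /* writable data size fixed in advance */
        size_t used;        /* 0 while the area is free            */
        char  *base;
    };

    /* Write the task into the smallest free area that can hold it.
     * Returns the chosen area index, or -1 if no area fits. */
    int write_task(struct area areas[], size_t nareas,
                   const void *task, size_t task_size)
    {
        int best = -1;
        for (size_t i = 0; i < nareas; i++)
            if (areas[i].used == 0 && areas[i].capacity >= task_size &&
                (best < 0 || areas[i].capacity < areas[(size_t)best].capacity))
                best = (int)i;
        if (best < 0)
            return -1;
        memcpy(areas[best].base, task, task_size);
        areas[best].used = task_size;
        return best;
    }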
