COMPUTER-TO-COMPUTER DIRECT MEMORY ACCESSING

Subclass of:

709 - Electrical computers and digital processing systems: multicomputer data transferring

Patent class list (only non-empty classes are listed)

Deeper subclasses:

Entries
Document - Title - Date
20120246256Administering An Epoch Initiated For Remote Memory Access - Methods, systems, and products are disclosed for administering an epoch initiated for remote memory access that include: initiating, by an origin application messaging module on an origin compute node, one or more data transfers to a target compute node for the epoch; initiating, by the origin application messaging module after initiating the data transfers, a closing stage for the epoch, including rejecting any new data transfers after initiating the closing stage for the epoch; determining, by the origin application messaging module, whether the data transfers have completed; and closing, by the origin application messaging module, the epoch if the data transfers have completed.09-27-2012
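
As a rough illustration of the epoch life cycle described in the abstract above (initiate transfers, start a closing stage that rejects new transfers, close once outstanding transfers drain), a minimal sketch follows. The struct, function names, and counter are hypothetical and are not taken from the filing; a real messaging module would track per-transfer completion through its DMA hardware rather than a simple counter.

```c
/* Illustrative sketch only: a toy epoch tracker loosely modeled on the idea in
 * the abstract above (initiate transfers, reject new ones during a closing
 * stage, close when all complete). Names and structure are hypothetical. */
#include <stdbool.h>
#include <stdio.h>

struct epoch {
    int  outstanding;   /* data transfers started but not yet completed */
    bool closing;       /* once set, new transfers are rejected          */
};

static bool epoch_start_transfer(struct epoch *e)
{
    if (e->closing)
        return false;          /* closing stage: reject new transfers */
    e->outstanding++;
    return true;
}

static void epoch_transfer_done(struct epoch *e)
{
    if (e->outstanding > 0)
        e->outstanding--;
}

static void epoch_begin_close(struct epoch *e)
{
    e->closing = true;
}

static bool epoch_try_close(const struct epoch *e)
{
    return e->closing && e->outstanding == 0;  /* close only when drained */
}

int main(void)
{
    struct epoch e = { 0, false };
    epoch_start_transfer(&e);
    epoch_begin_close(&e);
    printf("new transfer accepted after close began? %d\n", epoch_start_transfer(&e));
    epoch_transfer_done(&e);
    printf("epoch closed: %d\n", epoch_try_close(&e));
    return 0;
}
```
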
20130086197MANAGING CACHE AT A COMPUTER - A method and system for managing caching at a computer. A computer receives a file from a storage device on a network in response to a request by a first user. The computer may then determine if other users of the computer are likely to request the file, based upon a type of the file and a type of the network. If other users are likely to request the file, the computer may then cache the file at the computer. In one embodiment, the computer may determine if other users of the computer are likely to request the file based upon access permissions to the file at a source of the file. In another embodiment, the computer may determine if other users of the computer are likely to request the file based upon if the file has been previously cached at the computer.04-04-2013
20130086196SYSTEM AND METHOD FOR SUPPORTING DIFFERENT MESSAGE QUEUES IN A TRANSACTIONAL MIDDLEWARE MACHINE ENVIRONMENT - A system and method can support different message queues in a transactional middleware machine environment. The transactional middleware machine environment includes an advertized table that comprises a first queue table and a second queue table, with the first queue table storing address information for a first message queue and the second queue table storing address information for a second message queue. The advertized table is further adaptive to be used by a first transactional client to locate a transactional service provided by a transactional server. The first transactional client operates to look up the first queue table for a key that indicates the address information of the transactional service that is stored in the second queue table.04-04-2013
20130080564MESSAGING IN A PARALLEL COMPUTER USING REMOTE DIRECT MEMORY ACCESS ('RDMA') - Messaging in a parallel computer using remote direct memory access (‘RDMA’), including: receiving a send work request; responsive to the send work request: translating a local virtual address on the first node from which data is to be transferred to a physical address on the first node from which data is to be transferred from; creating a local RDMA object that includes a counter set to the size of a messaging acknowledgment field; sending, from a messaging unit in the first node to a messaging unit in a second node, a message that includes a RDMA read operation request, the physical address of the local RDMA object, and the physical address on the first node from which data is to be transferred from; and receiving, by the first node responsive to the second node's execution of the RDMA read operation request, acknowledgment data in the local RDMA object.03-28-2013
20130080563EFFECTING HARDWARE ACCELERATION OF BROADCAST OPERATIONS IN A PARALLEL COMPUTER - Compute nodes of a parallel computer organized for collective operations via a network, each compute node having a receive buffer and establishing a topology for the network; selecting a schedule for a broadcast operation; depositing, by a root node of the topology, broadcast data in a target node's receive buffer, including performing a DMA operation with a well-known memory location for the target node's receive buffer; depositing, by the root node in a memory region designated for storing broadcast data length, a length of the broadcast data, including performing a DMA operation with a well-known memory location of the broadcast data length memory region; and triggering, by the root node, the target node to perform a next DMA operation, including depositing, in a memory region designated for receiving injection instructions for the target node, an instruction to inject the broadcast data into the receive buffer of a subsequent target node.03-28-2013
20130080562USING TRANSMISSION CONTROL PROTOCOL/INTERNET PROTOCOL (TCP/IP) TO SETUP HIGH SPEED OUT OF BAND DATA COMMUNICATION CONNECTIONS - A method establishes a transport layer connection between a first system and a second system. The establishment of the transport layer connection includes identifying a remote direct memory access (RDMA) connection between the first system and the second system. After establishing the transport layer connection, the first and second systems exchange data using the RDMA connection identified in establishing the transport layer connection.03-28-2013
20130036185METHOD AND APPARATUS FOR MANAGING TRANSPORT OPERATIONS TO A CLUSTER WITHIN A PROCESSOR - A method and corresponding apparatus of managing transport operations between a first memory cluster and one or more other memory clusters, include receiving, in the first cluster, information related to one or more transport operations with related data buffered in an interface device, the interface device coupling the first cluster to the one or more other clusters, selecting at least one transport operation, from the one or more transport operations, based at least in part on the received information, and executing the selected at least one transport operation.02-07-2013
20130080561USING TRANSMISSION CONTROL PROTOCOL/INTERNET PROTOCOL (TCP/IP) TO SETUP HIGH SPEED OUT OF BAND DATA COMMUNICATION CONNECTIONS - A transport layer connection is established between a first system and a second system. The establishment of the transport layer connection includes identifying a remote direct memory access (RDMA) connection between the first system and the second system. After establishing the transport layer connection, the first and second systems exchange data using the RDMA connection identified in establishing the transport layer connection.03-28-2013
20130080560System and Method for Sharing Digital Data on a Presenter Device to a Plurality of Participant Devices - There is provided a system and method for sharing a plurality of data contents from a presenter device to a plurality of participant devices. There is provided a system comprising a processor configured to execute a data sharing application, wherein the data sharing application is configured to receive a selection of the plurality of data contents, connect to the plurality of participant devices using a hotspot service executing on the presenter device, establish a sharing session with the plurality of participant devices, and present the plurality of data contents onto the plurality of participant devices. Accordingly, the presenter device maintains full control over the plurality of data contents being shared and reduces the time for sharing and the bandwidth consumed for presenting the plurality of data contents.03-28-2013
20130041969SYSTEM AND METHOD FOR PROVIDING A MESSAGING APPLICATION PROGRAM INTERFACE - A system and method for providing a message bus component or version thereof (referred to herein as an implementation), and a messaging application program interface, for use in an enterprise data center, middleware machine system, or similar environment that includes a plurality of processor nodes together with a high-performance communication fabric (or communication mechanism) such as InfiniBand. In accordance with an embodiment, the messaging application program interface enables features such as asynchronous messaging, low latency, and high data throughput, and supports the use of in-memory data grid, application server, and other middleware components.02-14-2013
20130046844ADMINISTERING CONNECTION IDENTIFIERS FOR COLLECTIVE OPERATIONS IN A PARALLEL COMPUTER - Administering connection identifiers for collective operations in a parallel computer, including prior to calling a collective operation, determining, by a first compute node of a communicator to receive an instruction to execute the collective operation, whether a value stored in a global connection identifier utilization buffer exceeds a predetermined threshold; if the value stored in the global ConnID utilization buffer does not exceed the predetermined threshold: calling the collective operation with a next available ConnID including retrieving, from an element of a ConnID buffer, the next available ConnID and locking the element of the ConnID buffer from access by other compute nodes; and if the value stored in the global ConnID utilization buffer exceeds the predetermined threshold: repeatedly determining whether the value stored in the global ConnID utilization buffer exceeds the predetermined threshold until the value stored in the global ConnID utilization buffer does not exceed the predetermined threshold.02-21-2013
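
The abstract above reduces to a check of a shared utilization value against a threshold before handing out a connection identifier from a pool. A toy sketch of that pattern follows; the names, the trivial in-process pool, and the threshold are hypothetical stand-ins for the global ConnID buffers and are not the patented mechanism.

```c
/* Toy sketch of the general pattern in the abstract above (check a shared
 * utilization counter against a threshold before taking a connection ID from
 * a pool, otherwise keep waiting); all names are hypothetical simplifications. */
#include <stdbool.h>
#include <stdio.h>

#define POOL_SIZE  4
#define THRESHOLD  3   /* hypothetical utilization limit */

struct connid_pool {
    int  ids[POOL_SIZE];
    bool in_use[POOL_SIZE];
    int  utilization;        /* stands in for the global ConnID utilization buffer */
};

/* Returns a ConnID, or -1 if utilization is at the threshold (caller retries). */
static int connid_acquire(struct connid_pool *p)
{
    if (p->utilization >= THRESHOLD)
        return -1;                     /* over threshold: caller keeps polling */
    for (int i = 0; i < POOL_SIZE; i++) {
        if (!p->in_use[i]) {
            p->in_use[i] = true;       /* "lock" the element in this toy model */
            p->utilization++;
            return p->ids[i];
        }
    }
    return -1;
}

int main(void)
{
    struct connid_pool p = { { 10, 11, 12, 13 }, { false }, 0 };
    for (int i = 0; i < 5; i++)
        printf("acquired ConnID: %d\n", connid_acquire(&p));
    return 0;
}
```
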
20090307328REMOTE MANAGEMENT INTERFACE FOR A MEDICAL DEVICE - A method and system for remote management of a hand held medical device of a type which does not include a physical keyboard or a large display screen including connectable hardware providing a communications channel between the device and a remote computer system to provide a fully featured interface, with a full sized screen and keyboard, for use when manipulating data from the medical device.12-10-2009
20100274868Direct Memory Access In A Hybrid Computing Environment - DMA in a computing environment that includes several computers and DMA engines, the computers adapted to one another for data communications by a data communications fabric, each computer executing an application, where DMA includes pinning, by a first application, a memory region, including providing, to all applications, information describing the memory region; effecting, by a second application in dependence upon the information describing the memory region, DMA transfers related to the memory region, including issuing DMA requests to a particular DMA engine for processing; and unpinning, by the first application, the memory region, including ensuring, prior to unpinning, that no additional DMA requests related to the memory region are issued, that all outstanding DMA requests related to the memory region are provided to a DMA engine, and that processing of all outstanding DMA requests related to the memory region and provided to a DMA engine has been completed.10-28-2010
20130060880Hybrid Content-Distribution System and Method - The present invention discloses a hybrid content-distribution system. It uses two types of memory to distribute contents: re-writable memory (RWM) and three-dimensional mask-programmed read-only memory (3D-MPROM). During a publication period, new contents are transferred from a remote server to the RWM. At the end of the publication period, a user receives a 3D-MPROM, which stores a collection of the transferred contents. To make room for the contents to be released during the next publication period, the contents common to the 3D-MPROM and the RWM are deleted from the RWM afterwards.03-07-2013
20120311062METHODS, CIRCUITS, DEVICES, SYSTEMS AND ASSOCIATED COMPUTER EXECUTABLE CODE FOR DISTRIBUTED CONTENT CACHING AND DELIVERY - Disclosed are methods, circuits, devices, systems and associated computer executable code for distributed content caching and delivery. An access or gateway network may include two or more gateway nodes integral or otherwise functionally associated with a caching unit. Each of the caching units may include: (a) a caching repository, (b) caching/delivery logic and (c) an inter-cache communication module. Caching logic of a given caching unit may include content characterization functionality for generating one or more characterization parameters associated with and/or derived from content entering a gateway node with which the given caching unit is integral or otherwise functionally associated. Content characterization parameters generated by a characterization module of a given caching unit may be compared with content characterization parameters of content already cached in: one or more cache repositories of the given caching unit, and one or more cache repositories of other caching units.12-06-2012
20090271492REAL-TIME COMMUNICATIONS OVER DATA FORWARDING FRAMEWORK - Methods and apparatus, including computer program products, for real-time communications over data forwarding framework. A framework includes a group of interconnected computer system nodes each adapted to receive data and continuously forward the data from computer memory to computer memory without storing on any physical storage device in response to a request from a client system to store data from a requesting system and retrieve data being continuously forwarded from computer memory to computer memory in response to a request to retrieve data from the requesting system, and at least two client systems linked to the group, each of the client systems executing a real-time communications client program.10-29-2009
20090271491SYSTEM AND METHOD TO CONTROL WIRELESS COMMUNICATIONS - A method of controlling wireless communications is provided. A first call is received at a first distributed mobile architecture (DMA) server from a first mobile communication device. The first DMA server communicates with the first mobile communication device via a first wireless communication protocol. A second call is received at the first DMA server from a second mobile communication device. The first DMA server communicates with the second mobile communication device via a second wireless communication protocol. Voice information associated with the first call is converted to first packet data and voice information associated with the second call to second packet data. The first packet data and the second packet data are routed via a private Internet Protocol (IP) network to at least one second DMA device, where the first call is accessible to a first destination device and the second call is accessible to a second destination device via the at least one second DMA device.10-29-2009
20090031001Repeating Direct Memory Access Data Transfer Operations for Compute Nodes in a Parallel Computer - Methods, apparatus, and products are disclosed for repeating DMA data transfer operations for nodes in a parallel computer that include: receiving, by a DMA engine on an origin node, a RGET data descriptor that specifies a DMA transfer operation data descriptor and a second RGET data descriptor, the second RGET data descriptor also specifying the DMA transfer operation data descriptor; creating, in dependence upon the RGET data descriptor, an RGET packet that contains the DMA transfer operation data descriptor and the second RGET data descriptor; processing the DMA transfer operation data descriptor included in the RGET packet, including performing a DMA data transfer operation between the origin node and a target node in dependence upon the DMA transfer operation data descriptor; and processing the second RGET data descriptor included in the RGET packet, thereby performing again the DMA transfer operation in dependence upon the DMA transfer operation data descriptor.01-29-2009
20130067018METHODS AND COMPUTER PROGRAM PRODUCTS FOR MONITORING THE CONTENTS OF NETWORK TRAFFIC IN A NETWORK DEVICE - Provided are methods and computer program products monitoring the contents of network traffic in a network device. Methods may include collecting, using a kernel space driver interface, network traffic data sent by and/or received at the network device, parsing the collected network traffic data to extract transaction data corresponding to at least one logical transaction defined by a network protocol and storing an indicator of a quantity of the collected network traffic data that was parsed, and generating an event incorporating the extracted transaction data.03-14-2013
20120233282Method and System for Transferring a Virtual Machine - A virtual machine management system is used to instantiate, wake, move, sleep, and destroy individual operating environments in a cloud or cluster. In various embodiments, there is a method and system for transferring an operating environment from a first host to a second host. The first host contains an active environment, with a disk and memory. The disk is snapshotted while the operating environment on the first host is still live, and the snapshot is transferred to the second host. After the initial snapshot is transferred, a differential update using rsync or a similar mechanism can be used to transfer just the changes from the snapshot from the first to the second host. In a further embodiment, the contents of the memory are also transferred. This memory can be transferred as a snapshot after pausing the active environment, or by synchronizing the memory spaces between the two hosts.09-13-2012
20130166669SYSTEM AND METHOD FOR A MOBILE DEVICE TO USE PHYSICAL STORAGE OF ANOTHER DEVICE FOR CACHING - Systems and methods for a mobile device to use physical storage of another device for caching are disclosed. In one embodiment, a mobile device is able to receive over a cellular or IP network a response or content to be cached and wirelessly access the physical storage of the other device via a wireless network to cache the response or content for the mobile device.06-27-2013
20110289177Effecting Hardware Acceleration Of Broadcast Operations In A Parallel Computer - Compute nodes of a parallel computer organized for collective operations via a network, each compute node having a receive buffer and establishing a topology for the network; selecting a schedule for a broadcast operation; depositing, by a root node of the topology, broadcast data in a target node's receive buffer, including performing a DMA operation with a well-known memory location for the target node's receive buffer; depositing, by the root node in a memory region designated for storing broadcast data length, a length of the broadcast data, including performing a DMA operation with a well-known memory location of the broadcast data length memory region; and triggering, by the root node, the target node to perform a next DMA operation, including depositing, in a memory region designated for receiving injection instructions for the target node, an instruction to inject the broadcast data into the receive buffer of a subsequent target node.11-24-2011
20110295967Accelerator System For Remote Data Storage - Data processing and an accelerator system therefor are described. An embodiment relates generally to a data processing system. In such an embodiment, a bus and an accelerator are coupled to one another. The accelerator has an application function block. The application function block is to process data to provide processed data to storage. A network interface is coupled to obtain the processed data from the storage for transmission.12-01-2011
20120191800METHODS AND SYSTEMS FOR PROVIDING DIRECT DMA - A method and system for efficient direct DMA for processing connection state information or other expediting data packets. One example is the use of a network interface controller to buffer TCP type data packets that may contain connection state information. The connection state information is extracted from a received packet. The connection state information is stored in a special DMA descriptor that is stored in a ring buffer area of a buffer memory that is accessible by a host processor when an interrupt signal is received. The packet is then discarded. The host processor accesses the ring buffer memory only to retrieve the stored connection state information from the DMA descriptor without having to access a packet buffer area in the memory.07-26-2012
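
The abstract above describes placing extracted connection state into special DMA descriptors in a ring so the host reads only the ring entry and never touches a packet buffer. A small, purely illustrative ring of state-carrying descriptors, with hypothetical field names, could be sketched as follows.

```c
/* Rough sketch (hypothetical names, not the patent's design): a small ring of
 * "special" descriptors that carry extracted connection state, so the host
 * consumes state directly from the ring while the packet itself is discarded. */
#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 8

struct state_desc {
    uint32_t conn_id;     /* which TCP connection the state belongs to   */
    uint32_t seq;         /* example piece of extracted connection state */
    uint8_t  valid;
};

struct ring {
    struct state_desc slots[RING_SIZE];
    unsigned head, tail;  /* producer = NIC side, consumer = host side */
};

/* NIC side: store extracted state in the ring, then the packet is dropped. */
static int ring_post(struct ring *r, uint32_t conn_id, uint32_t seq)
{
    unsigned next = (r->head + 1) % RING_SIZE;
    if (next == r->tail)
        return -1;                          /* ring full */
    r->slots[r->head] = (struct state_desc){ conn_id, seq, 1 };
    r->head = next;
    return 0;
}

/* Host side (e.g. in the interrupt path): consume state without a packet copy. */
static int ring_poll(struct ring *r, struct state_desc *out)
{
    if (r->tail == r->head)
        return -1;                          /* nothing pending */
    *out = r->slots[r->tail];
    r->tail = (r->tail + 1) % RING_SIZE;
    return 0;
}

int main(void)
{
    struct ring r = { 0 };
    struct state_desc d;
    ring_post(&r, 42, 1000);
    while (ring_poll(&r, &d) == 0)
        printf("conn %u seq %u\n", (unsigned)d.conn_id, (unsigned)d.seq);
    return 0;
}
```
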
20120023186OUTPUTTING CONTENT FROM MULTIPLE DEVICES - Technologies are generally described for outputting content from multiple devices. In some examples, a method includes receiving content from a first content output device at a processor. In some examples, the method further includes recording at least a portion of the content by the processor. In some examples, the method further includes determining an identifier of the content by the processor based on the portion. In some examples, the method further includes determining a source of the content by the processor based on the identifier. In some examples, the method further includes requesting that the content be sent from the source to a second content output device.01-26-2012
20110270944NETWORKING SYSTEM CALL DATA DIVISION FOR ZERO COPY OPERATIONS - A method for sending data over a network from a host computer. The host computer includes an operating system comprising at least a user space and a kernel space. The amount of data provided from the user space to the kernel space within one system call exceeds the size of an IP packet. A loop function in an application in the user space sends multiple packets to the kernel space within a single system call containing IO vectors which contain pointers to the data in the user space. A last data unit being processed may be designated using a flag included in the message header. In the kernel space a second loop function is used to reassemble the vector groups and pass them down the network stack. The data may then be passed to the network hardware using a direct memory access transfer directly from the user space to the network hardware.11-03-2011
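
The abstract above relies on passing several user-space buffers to the kernel in one system call via I/O vectors. The standard POSIX scatter/gather call writev(2) shows the general idea; the sketch below is a generic example of that API, not the patented zero-copy path, and uses stdout in place of a socket.

```c
/* Minimal sketch (not the patented mechanism): gathering several user-space
 * buffers into one system call with an I/O vector, the scatter/gather idea
 * the abstract builds on. Uses standard writev(2); stdout stands in for a socket. */
#include <stdio.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
    char hdr[]   = "header: 2 chunks\n";
    char part1[] = "chunk one\n";
    char part2[] = "chunk two\n";

    struct iovec iov[3] = {
        { hdr,   sizeof(hdr)   - 1 },  /* pointers into user space ...  */
        { part1, sizeof(part1) - 1 },  /* ... no intermediate copy into */
        { part2, sizeof(part2) - 1 },  /*     a staging buffer          */
    };

    ssize_t n = writev(STDOUT_FILENO, iov, 3);  /* one syscall, many buffers */
    if (n < 0)
        perror("writev");
    return 0;
}
```
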
20110270943ZERO COPY DATA TRANSMISSION IN A SOFTWARE BASED RDMA NETWORK STACK - A method for data transmission on a device without intermediate buffering is provided. An application request is received to transmit data from the device to a second device over a network. The data from application memory is formatted for transmitting to the second device. The data are transmitted from the device to the second device without intermediate buffering. A send state is retrieved. The send state is compared to expected send state. If the send state meets the expected send state, a completion of the data transmit request is generated.11-03-2011
20110173290ROTATING ENCRYPTION IN DATA FORWARDING STORAGE - A method includes receiving a request from a source system to store data, directing the data to a computer memory, the computer memory employing an encryption scheme, and continuously forwarding the data from one computer memory to another computer memory in the network of interconnected computer system nodes without storing on any physical storage device in the network, each computer memory employing the encryption scheme. The continuously forwarding includes determining an address of a node available to receive the data based on one or more factors, sending a message to the source system with the address of a specific node for the requester to forward the data, detecting a presence of the data in memory of the specific node, and forwarding the data to another computer memory of a node in the network of interconnected computer system nodes without storing on any physical storage device.07-14-2011
20120110107DIRECT MEMORY ACCESS (DMA) TRANSFER OF NETWORK INTERFACE STATISTICS - In general, in one aspect, the disclosure describes a method that includes maintaining statistics, at a network interface, metering operation of the network interface. The statistics are transferred by direct memory access from the network interface to a memory accessed by at least one processor.05-03-2012
20110173288NETWORK STORAGE SYSTEM AND RELATED METHOD FOR NETWORK STORAGE - A network storage system includes a first data buffer, a second data buffer, a pre-allocating module and a control module. The first data buffer is utilized for storing a storage data received from a network-base. The second data buffer is coupled to the first data buffer and includes a plurality of data buffering units. The pre-allocating module is coupled to the second data buffer and utilized for allocating the plurality of data buffering units to the second data buffer in advance. The control module controls the first data buffer to write the stored storage data into the plurality of data buffering units.07-14-2011
20090287792METHOD OF PROVIDING SERVICE RELATING TO CONTENT STORED IN PORTABLE STORAGE DEVICE AND APPARATUS THEREFOR - Provided are a method of providing a service relating to content stored in a portable storage device to an external device, and an apparatus therefor. The method includes outputting a user interface to manage information relating to contents stored in the portable storage device through a display unit associated with the external device, receiving a command to select content from among the contents through the output user interface, executing a service corresponding to the content selected based on the command, and providing a result of executing the service to the external device.11-19-2009
20080270564Virtual machine migration - Virtual machine migration is described. In embodiment(s), a virtual machine can be migrated from one host computer to another utilizing LUN (logical unit number) masking. A virtual drive of the virtual machine can be mapped to a LUN, and a LUN mask associates the LUN with a host computer. The LUN mask can be changed to unmask the LUN to a second computer to migrate the virtual machine from the host computer to the second computer.10-30-2008
20100005150IMAGE DISPLAY DEVICE, STORAGE DEVICE, IMAGE DISPLAY SYSTEM AND NETWORK SETUP METHOD - An image display system 01-07-2010
20090327444Dynamic Network Link Selection For Transmitting A Message Between Compute Nodes Of A Parallel Computer - Methods, apparatus, and products are disclosed for dynamic network link selection for transmitting a message between nodes of a parallel computer. The nodes are connected using a data communications network. Each node connects to adjacent nodes in the data communications network through a plurality of network links. Each link provides a different data communication path through the network between the nodes of the parallel computer. Such dynamic link selection includes: identifying, by an origin node, a current message for transmission to a target node; determining, by the origin node, whether transmissions of previous messages to the target node have completed; selecting, by the origin node from the plurality of links for the origin node, a link in dependence upon the determination and link characteristics for the plurality of links for the origin node; and transmitting, by the origin node, the current message to the target node using the selected link.12-31-2009
20100146069METHOD AND SYSTEM FOR COMMUNICATING BETWEEN MEMORY REGIONS - A method and system are provided for transferring data in a networked system between a local memory in a local system and a remote memory in a remote system. A RDMA request is received and a first buffer region is associated with a first transfer operation. The system determines whether a size of the first buffer region exceeds a maximum transfer size of the networked system. Portions of the second buffer region may be associated with the first transfer operation based on the determination of the size of the first buffer region. The system subsequently performs the first transfer operation.06-10-2010
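
The core step in the abstract above is splitting a transfer whose buffer exceeds the networked system's maximum transfer size into conforming pieces. A minimal, hypothetical chunking loop illustrating that splitting logic is shown below; the size limit and callback are invented stand-ins, not the filing's interfaces.

```c
/* Illustrative only: chunking one buffer-sized transfer into pieces no larger
 * than a maximum transfer size, the general splitting idea described above. */
#include <stddef.h>
#include <stdio.h>

#define MAX_TRANSFER 4096u   /* hypothetical fabric limit, in bytes */

static void issue_transfer(size_t offset, size_t len)
{
    printf("transfer: offset=%zu len=%zu\n", offset, len);  /* stand-in for an RDMA op */
}

static void transfer_region(size_t total_len)
{
    for (size_t off = 0; off < total_len; off += MAX_TRANSFER) {
        size_t chunk = total_len - off;
        if (chunk > MAX_TRANSFER)
            chunk = MAX_TRANSFER;        /* clamp to the maximum transfer size */
        issue_transfer(off, chunk);
    }
}

int main(void)
{
    transfer_region(10000);   /* splits into 4096 + 4096 + 1808 */
    return 0;
}
```
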
20100146068DEVICE, SYSTEM, AND METHOD OF ACCESSING STORAGE - Device, system, and method of accessing storage. For example, a server includes: a Solid-State Drive (SSD) to store data; a memory mapper to map at least a portion of a storage space of the SSD into a memory space of the server; and a network adapter to receive a Small Computer System Interface (SCSI) read command incoming from a client device, to map one or more parameters of the SCSI read command into an area of the memory space of the server from which data is requested to be read by the client device, said area corresponding to a storage area of the SSD, and to issue a Remote Direct Memory Access (RDMA) write command to copy data directly to the client device from said area of the memory space corresponding to the SSD.06-10-2010
20110173289NETWORK SUPPORT FOR SYSTEM INITIATED CHECKPOINTS - A system, method and computer program product for supporting system initiated checkpoints in parallel computing systems. The system and method generates selective control signals to perform checkpointing of system related data in presence of messaging activity associated with a user application running at the node. The checkpointing is initiated by the system such that checkpoint data of a plurality of network nodes may be obtained even in the presence of user applications running on highly parallel computers that include ongoing user messaging activity.07-14-2011
20110173287PREVENTING MESSAGING QUEUE DEADLOCKS IN A DMA ENVIRONMENT - Embodiments of the invention may be used to manage message queues in a parallel computing environment to prevent message queue deadlock. A direct memory access controller of a compute node may determine when a messaging queue is full. In response, the DMA may generate an interrupt. An interrupt handler may stop the DMA and swap all descriptors from the full messaging queue into a larger queue (or enlarge the original queue). The interrupt handler then restarts the DMA. Alternatively, the interrupt handler stops the DMA, allocates a memory block to hold queue data, and then moves descriptors from the full messaging queue into the allocated memory block. The interrupt handler then restarts the DMA. During a normal messaging advance cycle, a messaging manager attempts to inject the descriptors in the memory block into other messaging queues until the descriptors have all been processed.07-14-2011
20090248830REMOTE DIRECT MEMORY ACCESS FOR iSCSI - A storage networking device provides remote direct memory access to its buffer memory, configured to store storage networking data. The storage networking device may be particularly adapted to transmit and receive iSCSI data, such as iSCSI input/output operations. The storage networking device comprises a controller and a buffer memory. The controller manages the receipt of storage networking data and buffer locational data. The storage networking data advantageously includes at least one command for at least partially controlling a device attached to a storage network. Advantageously, the storage networking data may be transmitted using a protocol adapted for the transmission of storage networking data, such as, for example, the iSCSI protocol. The buffer memory advantageously is configured to at least temporarily store at least part of the storage networking data at a location within the buffer memory that is based at least in part on the locational data.10-01-2009
20090024714Method And Computer System For Providing Remote Direct Memory Access - A method for providing remote direct memory access (RDMA) between two computers, preferably between central processing units (CPUs) and a functional subsystem of a computer system as part of their network communication, e.g. using TCP/IP. Tasks of analyzing network protocol data and the actual RDMA operations can be offloaded to the functional subsystem with this method. Further, the functional subsystem cannot compromise the status of the first computer system as only access to certain allowed memory locations is granted by a memory protection unit during phases of actual data transfer between the functional subsystem and the CPUs.01-22-2009
20090198788FAST PATH MESSAGE TRANSFER AGENT - A method of providing a fast path message transfer agent is provided. The method includes receiving bytes of a message over a network connection and determining whether the number of bytes exceeds a predetermined threshold. If the number of bytes is less than a predetermined threshold, then the message is written only to memory. However, if the number of bytes exceeds the predetermined threshold, then some of the bytes (e.g. up to the predetermined threshold) are written to memory, wherein the remainder of the bytes are stored onto the non-volatile storage. If the message was received successfully by each destination, then the message is removed from the memory/non-volatile storage. If not, all failed destinations are identified and the message (with associated failed destinations) is stored on the non-volatile storage for later sending.08-06-2009
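
The decision rule in the abstract above is a simple size threshold: messages under the threshold stay in memory, larger ones keep the first part in memory and spill the remainder to non-volatile storage. The sketch below illustrates only that threshold decision, with made-up sizes and print statements standing in for the actual memory and storage writes.

```c
/* Sketch under assumptions: route an incoming message to memory when it fits
 * under a size threshold, otherwise spill the remainder to stable storage.
 * Purely illustrative; the threshold and names are made up. */
#include <stdio.h>
#include <string.h>

#define MEM_THRESHOLD 8   /* hypothetical threshold, in bytes */

static void store_message(const char *msg, size_t len)
{
    (void)msg;  /* only the length matters for the routing decision */
    if (len <= MEM_THRESHOLD) {
        printf("memory only: %zu bytes\n", len);
    } else {
        printf("memory: first %d bytes, non-volatile storage: remaining %zu bytes\n",
               MEM_THRESHOLD, len - MEM_THRESHOLD);
    }
}

int main(void)
{
    store_message("short", strlen("short"));
    store_message("a longer message body", strlen("a longer message body"));
    return 0;
}
```
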
20090063651System And Method For Saving Dump Data Of A Client In A Network - A system and method for saving memory dump data from an operating system of a client in a network. The method includes configuring the client to allocate client system memory according to system memory classifications, configuring the client to transfer dump data to at least one dump server, saving said dump data periodically during client system run-time based on the system memory classifications, and saving dump data in the event of a client system crash to at least complement the dump data sent periodically during client system run-time.03-05-2009
20080263171Peripheral device that DMAs the same data to different locations in a computer - A method is disclosed comprising: receiving, by a network interface, data and a corresponding header; storing, by the network interface, the data in a first memory buffer of a computer that is coupled to the network interface; and storing, by the network interface, the data in a second memory buffer of the computer. For example, the network interface can first store the data in a part of the computer memory that is accessible by a device driver for the network interface. If the application provides to the driver a pointer to a location in memory for storing the data, the driver can pass this pointer to the network interface, which can write the data directly to that location without copying by the CPU. If, however, the application does not provide a pointer, the data controlled by the driver can be copied by the CPU into the application's memory space.10-23-2008
20100153513WIRELESS DATA CHANNELING DEVICE AND METHOD - A wireless data channeling device associated with a host computer is provided, the host computer comprising data that is deemed to be visible with respect to the data channeling device. The data channeling device comprises a connector to allow the device to be fitted to a device that is remote from the host computer, a device transceiver to enable the device to receive/transmit data from/to a host transceiver of the host computer, and a controller for functionally connecting the connector to the host transceiver. Thus, when in a data channeling mode, the visible data on the host computer can be wirelessly transmitted to the remote device via the data channeling device and/or data on the remote device can be transmitted to the host computer via the data channeling device. In an example embodiment, the connector is similar to a connector of a traditional USB flash memory disk so that, when connected to the remote device, the data channeling device mimics a standard, off-the-shelf memory stick.06-17-2010
20100180004APPARATUS AND METHODS FOR NETWORK ANALYSIS - Embodiments of methods, systems and apparatus for analysis and capture of network data items are described herein. Some embodiments include a receiving module which may receive a network data item from a network and which may then duplicate the network data item into two network data items. A capture module may receive one of the network data items for storage in storage device. A statistics or analysis module may in parallel receive the other network data item and may then perform network analysis on that network data item. Other embodiments are described and claimed.07-15-2010
20130219005RETRIEVING CONTENT FROM LOCAL CACHE - A network device transmits, to a cache located proximate to the network device, instructions to store content in the cache. The cache stores the content based on the instructions. The network device further receives a request for the content from a mobile communication device; determines, based on the request, that the content is stored in the local cache; and retrieves the content from the local cache. The network device also creates packets based on the retrieved content, and transmits the packets to the mobile communication device.08-22-2013
20100185743Encoding Method and Apparatus for Frame Synchronization Signal - An encoding method for a frame synchronization signal includes: encoding a predetermined intermediate variable corresponding to a cell ID or cell group ID to obtain short codes corresponding to the cell ID or cell group ID; and generating SCH codewords according to the said short codes, instead of directly encoding the cell ID or cell group ID, thereby ensuring that a first short code in each generated S-SCH codeword is larger than a second short code, or a first short code in each generated S-SCH codeword is smaller than a second short code, and a short code distance thereof is relatively small, so as to enhance the reliability of the frame synchronization. An encoding apparatus for a frame synchronization signal is further provided.07-22-2010
20080215700FIREFIGHTING VEHICLE AND METHOD WITH NETWORK-ASSISTED SCENE MANAGEMENT - A method comprises acquiring information pertaining to a scene of a fire. The acquiring step is performed by a sensor connected to a first computer. The method further comprises transmitting the information from the first computer to a second computer by way of a wireless communication network. The second computer is mounted to a fire fighting vehicle and is connected to a display. The method further comprises displaying the information at the fire fighting vehicle using the display.09-04-2008
20100241725ISCSI RECEIVER IMPLEMENTATION - A method for communication is disclosed and may include, in a network interface device, parsing a portion of a TCP segment into one or more portions of Internet Small Computer Systems Interface (iSCSI) Protocol Data Units (PDUs). A header and/or a payload for one or more of the parsed iSCSI PDUs may be recovered. Concurrent with parsing of a remaining portion of the TCP segment to recover a remaining portion of PDUs, the recovered header may be evaluated and/or the recovered payload may be routed external to the network interface device for processing. The evaluating and the routing may occur independently of the parsing within the network interface device. Respective separate physical processors may be used for the parsing and the recovering. The respective separate processors for recovering may be used for the evaluating and the routing.09-23-2010
20090319634Mechanism for enabling memory transactions to be conducted across a lossy network - A network interface is disclosed for enabling remote programmed I/O to be carried out in a “lossy” network (one in which packets may be dropped). The network interface: (1) receives a plurality of memory transaction messages (MTM's); (2) determines that they are destined for a particular remote node; (3) determines a transaction type for each MTM; (4) composes, for each MTM, a network packet to encapsulate at least a portion of that MTM; (5) assigns a priority to each network packet based upon the transaction type of the MTM that it is encapsulating; (6) sends the network packets into a lossy network destined for the remote node; and (7) ensures that at least a subset of the network packets are received by the remote node in a proper sequence. By doing this, the network interface makes it possible to carry out remote programmed I/O, even across a lossy network.12-24-2009
20100306336Communication System - The invention relates to a data communication method which is based on a layer model.12-02-2010
20130138759NETWORK SUPPORT FOR SYSTEM INITIATED CHECKPOINTS - A system, method and computer program product for supporting system initiated checkpoints in parallel computing systems. The system and method generates selective control signals to perform checkpointing of system related data in presence of messaging activity associated with a user application running at the node. The checkpointing is initiated by the system such that checkpoint data of a plurality of network nodes may be obtained even in the presence of user applications running on highly parallel computers that include ongoing user messaging activity.05-30-2013
20110246598MAPPING RDMA SEMANTICS TO HIGH SPEED STORAGE - Embodiments described herein are directed to extending remote direct memory access (RDMA) semantics to enable implementation in a local storage system and to providing a management interface for initializing a local data store. A computer system extends RDMA semantics to provide local storage access using RDMA, where extending the RDMA semantics includes the following: mapping RDMA verbs of an RDMA verbs interface to a local data store and altering RDMA ordering semantics to allow out-of-order processing and/or out-of-order completions. The computer system also accesses various portions of the local data store using the extended RDMA semantics.10-06-2011
20110246599STORAGE CONTROL APPARATUS AND STORAGE CONTROL METHOD - An apparatus includes a first storage unit for storing data received from the upper-layer apparatus in the first storage unit, a second storage unit, a data transmitting unit for transmitting the data stored in the first storage unit to the second storage apparatus based on an order that the data is stored in the first storage unit, a transferring unit for transferring and storing transfer data stored in the first storage unit into the second storage unit when an amount of the data stored in the first storage unit is larger than a predetermined amount, the transfer data being at least part of the data stored in the first storage unit; and, a staging unit for transferring the transfer data stored in the second storage unit into the first storage unit if an amount of the data stored in the first storage unit is smaller than a predetermined amount.10-06-2011
20100161750IP STORAGE PROCESSOR AND ENGINE THEREFOR USING RDMA - An IP Storage processor and processing engines for use in the IP storage processor are disclosed. The IP Storage processor uses an architecture that may provide capabilities to transport and process Internet Protocol (IP) packets from Layer 2 through transport protocol layer and may also perform packet inspection through Layer 7. The engines may perform pass-through packet classification, policy processing and/or security processing enabling packet streaming through the architecture at nearly the full line rate. A scheduler schedules packets to packet processors for processing. An internal memory or local session database cache may store a transport protocol session information database and/or store a storage information session database, for a certain number of active sessions. The session information that is not in the internal memory is stored and retrieved to/from an additional memory. An application running on an initiator or target can in certain instantiations register a region of memory, which is made available to its peer(s) for access directly without substantial host intervention through RDMA data transfer.06-24-2010
20110035459Network Direct Memory Access - In one embodiment, a system comprises at least a first node and a second node coupled to a network. The second node comprises a local memory and a direct memory access (DMA) controller coupled to the local memory. The first node is configured to transmit at least a first packet to the second node to access data in the local memory and at least one other packet that is not coded to access the local memory. The second node is configured to capture the packet from a data link layer of a protocol stack, and the DMA controller is configured to perform one or more transfers with the local memory to access the data specified by the first packet responsive to the first packet received from the data link layer. The second node is configured to process the other packet to a top of the protocol stack.02-10-2011
20100332611FILE SHARING SYSTEM - To realize efficient processing regarding accesses to files. A remote controlling processing apparatus 12-30-2010
20110213854Device, system, and method of accessing storage - Device, system, and method of accessing storage. For example, a server includes: a Solid-State Drive (SSD) to store data; a memory mapper to map at least a portion of a storage space of the SSD into a memory space of the server; and a network adapter to receive a Small Computer System Interface (SCSI) read command incoming from a client device, to map one or more parameters of the SCSI read command into an area of the memory space of the server from which data is requested to be read by the client device, said area corresponding to a storage area of the SSD, and to issue a Remote Direct Memory Access (RDMA) write command to copy data directly to the client device from said area of the memory space corresponding to the SSD.09-01-2011
20110125865METHOD FOR OPERATING AN ELECTRONIC CONTROL UNIT DURING A CALIBRATION PHASE - A method for operating an electronic control unit during a calibration phase; the method contemplating the steps of: dividing an area of a FLASH storage memory connected to a microprocessor in two pages between them identical and redundant, each of which is aimed at storing all the calibration parameters used by a control software; and using the two pages alternatively so that a first page contains the values of the calibration parameters and is queried by the microprocessor, while a second page is cleared and made available to store the updated values of the calibration parameters.05-26-2011
20090313347System and Method to Integrate Measurement Information within an Electronic Laboratory Notebook Environment - Capability to record relevant aggregated data via a test and measurement instrument interface through a software agent. The agent resides within the test and measurement instrument and gathers the information when activated. The information can be measurement data; measurement setup parameters; test system topology; user notes, brief descriptions, audio recordings or pen input; pictures; or attached documents. The agent can communicate directly to an electronic laboratory notebook server or can store the information on a portable computer readable media (CRM). A user can upload the information from the portable CRM to the server. The user can access the information via a PC workstation.12-17-2009
20100023595SYSTEM AND METHOD OF MULTI-PATH DATA COMMUNICATIONS - In a particular embodiment, a multi-path bridge circuit includes a backplane input/output (I/O) interface to couple to a local backplane having at least one communication path to a processing node and includes at least one host interface adapted to couple to a corresponding at least one processor. The multi-path bridge circuit further includes logic adapted to identify two or more communication paths through the backplane interface to a destination memory, to divide a data block stored at a source memory into data block portions, and to transfer the data block portions in parallel from the source memory to the destination node via the identified two or more communication paths.01-28-2010
20100198936STREAMING MEMORY CONTROLLER - A memory controller (SMC) is provided for coupling a memory (MEM) to a network (N). The memory controller (SMC) comprises a first interface (PI), a streaming memory unit (SMU) and a second interface (MI). The first interface (PI) is used for connecting the memory controller (SMC) to the network (N) for receiving and transmitting data streams.08-05-2010
20090083392SIMPLE, EFFICIENT RDMA MECHANISM - A server interconnect system for sending data includes a first server node and a second server node. Each server node is operable to send and receive data. The interconnect system also includes a first and second interface unit. The first interface unit is in communication with the first server node and has one or more RDMA doorbell registers. Similarly, the second interface unit is in communication with the second server node and has one or more RDMA doorbell registers. The system also includes a communication switch that is operable to receive and route data from the first or second server nodes using a RDMA read and/or an RDMA write when either of the first or second RDMA doorbell registers indicates that data is ready to be sent or received.03-26-2009
20100036930APPARATUS AND METHODS FOR EFFICIENT INSERTION AND REMOVAL OF MPA MARKERS AND RDMA CRC DIGEST - The invention relates to insertion and removal of MPA markers and RDMA CRCs in RDMA data streams, after determining the locations for these fields. An embodiment of the invention comprises a host interface, a transmit interface connected to the host interface, and a processor interface connected to both transmit and host interfaces. The host interface operates under the direction of commands received from the processor interface when processing inbound RDMA data. The host interface calculates the location of marker locations and removes the markers. The transmit interface operates under the direction of commands received from the processor interface when processing outbound RDMA data. The transmit interface calculates the positions in the outbound data where markers are to be inserted. The transmit interface then places the markers accordingly.02-11-2010
20100011084ADVERTISEMENT FORWARDING STORAGE AND RETRIEVAL NETWORK - Methods and apparatus, including computer program products, for an advertisement forwarding storage and retrieval network. A method includes, in a network of interconnected computer system nodes, directing advertisements to a computer memory, directing data to a computer memory, continuously forwarding each of the unique data, independent of each other, from one computer memory to another computer memory in the network of interconnected computer system nodes without storing on any physical storage device in the network, continuously forwarding each of the unique advertisements, independent of each other, from one computer memory to another computer memory in the network of interconnected computer system nodes without storing on any physical storage device in the network, and retrieving one of the advertisements in response to an activity.01-14-2010
20110179131MEDIA DELIVERY IN DATA FORWARDING STORAGE NETWORK - Methods and apparatus, including computer program products, for media delivery in a data forwarding storage network. A method includes, in a network of interconnected computer system nodes, directing unique data items to a computer memory, and continuously forwarding each of the unique data items, independent of each other, from one computer memory to another computer memory in the network of interconnected computer system nodes without storing on any physical storage device in the network.07-21-2011
20090031002Self-Pacing Direct Memory Access Data Transfer Operations for Compute Nodes in a Parallel Computer - Methods, apparatus, and products are disclosed for self-pacing DMA data transfer operations for nodes in a parallel computer that include: transferring, by an origin DMA on an origin node, an RTS message to a target node, the RTS message specifying a message on the origin node for transfer to the target node; receiving, in an origin injection FIFO for the origin DMA from a target DMA on the target node in response to transferring the RTS message, a target RGET descriptor followed by a DMA transfer operation descriptor, the DMA descriptor for transmitting a message portion to the target node, the target RGET descriptor specifying an origin RGET descriptor on the origin node that specifies an additional DMA descriptor for transmitting an additional message portion to the target node; processing, by the origin DMA, the target RGET descriptor; and processing, by the origin DMA, the DMA transfer operation descriptor.01-29-2009
20110153771DIRECT MEMORY ACCESS WITH MINIMAL HOST INTERRUPTION - Data received over a shared network interface is directly placed by the shared network interface in a designated memory area of a host. In providing this direct memory access, the incoming data packets are split, such that the headers are separated from the data. The headers are placed in a designated area of a memory buffer of the host. Additionally, the data is stored in contiguous locations within the buffer. This receive and store is performed without interruption to the host. Then, at a defined time, the host is interrupted to indicate the receipt and direct storage of the data.06-23-2011
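
The abstract above centers on header/data splitting: headers land in a designated area while payloads are packed contiguously in another. A toy sketch of that placement follows, with hypothetical fixed sizes and plain arrays in place of host DMA buffers; it is illustrative only, not the filing's design.

```c
/* Very small sketch of header/data splitting, the general idea described
 * above: headers go to one region, payloads are packed contiguously in
 * another. Layout and sizes are hypothetical. */
#include <stdio.h>
#include <string.h>

#define HDR_LEN    4
#define HDR_AREA   64
#define DATA_AREA  256

static unsigned char hdr_buf[HDR_AREA];    /* designated header area       */
static unsigned char data_buf[DATA_AREA];  /* payloads stored back to back */
static size_t hdr_used, data_used;

static int place_packet(const unsigned char *pkt, size_t len)
{
    if (len < HDR_LEN || hdr_used + HDR_LEN > HDR_AREA ||
        data_used + (len - HDR_LEN) > DATA_AREA)
        return -1;
    memcpy(hdr_buf + hdr_used, pkt, HDR_LEN);                    /* split: header */
    memcpy(data_buf + data_used, pkt + HDR_LEN, len - HDR_LEN);  /* split: data   */
    hdr_used  += HDR_LEN;
    data_used += len - HDR_LEN;
    return 0;
}

int main(void)
{
    unsigned char pkt[] = "HDR0payload-bytes";
    place_packet(pkt, sizeof(pkt) - 1);
    printf("headers: %zu bytes, contiguous data: %zu bytes\n", hdr_used, data_used);
    return 0;
}
```
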
20130159448Optimized Data Communications In A Parallel Computer - A parallel computer includes nodes that include a network adapter that couples the node in a point-to-point network and supports communications in opposite directions of each dimension. Optimized communications include: receiving, by a network adapter of a receiving compute node, a packet—from a source direction—that specifies a destination node and deposit hints. Each hint is associated with a direction within which the packet is to be deposited. If a hint indicates the packet to be deposited in the opposite direction: the adapter delivers the packet to an application on the receiving node; forwards the packet to a next node in the opposite direction if the receiving node is not the destination; and forwards the packet to a node in a direction of a subsequent dimension if the hints indicate that the packet is to be deposited in the direction of the subsequent dimension.06-20-2013
20130159449Method and Apparatus for Low Latency Data Distribution - Various techniques are disclosed for distributing data, particularly real-time data such as financial market data, to data consumers at low latency. Exemplary embodiments include embodiments that employ adaptive data distribution techniques and embodiments that employ a multi-class distribution engine.06-20-2013
20120303735DOMAIN NAME SERVICE RESOLVER - A domain name service (DNS) resolver returns Internet protocol (IP) addresses. A connection with an Internet application or device receives domain name resolution requests that originate outside of the Internet. A direct DNS resolver identifies IP addresses without referring to the Internet or using other DNS resolvers. An address store includes a predetermined list of domain names and corresponding IP addresses specified from a point remote to the DNS resolver. The DNS resolver processes the domain name resolutions for the predetermined list of domain names differently than domain name resolutions for other domain names not on the predetermined list of domain names. At least part of the predetermined list is pushed to a destination upon receiving a resolution request for a domain name in the predetermined list of domain names, the request being of a type other than an authoritative resolution request to be performed by the direct DNS resolver.11-29-2012
20130159450OPTIMIZED DATA COMMUNICATIONS IN A PARALLEL COMPUTER - A parallel computer includes nodes that include a network adapter that couples the node in a point-to-point network and supports communications in opposite directions of each dimension. Optimized communications include: receiving, by a network adapter of a receiving compute node, a packet—from a source direction—that specifies a destination node and deposit hints. Each hint is associated with a direction within which the packet is to be deposited. If a hint indicates the packet to be deposited in the opposite direction: the adapter delivers the packet to an application on the receiving node; forwards the packet to a next node in the opposite direction if the receiving node is not the destination; and forwards the packet to a node in a direction of a subsequent dimension if the hints indicate that the packet is to be deposited in the direction of the subsequent dimension.06-20-2013
20110258282OPTIMIZED UTILIZATION OF DMA BUFFERS FOR INCOMING DATA PACKETS IN A NETWORK PROTOCOL - A method, system and computer program product for facilitating network data packet management. In one embodiment, a controller is configured to receive data packets. Incoming data packets are stored in DMA mapped packet buffers. A time stamp is associated with the packet buffers. When the associated time stamp exceeds a defined threshold, the controller is configured to copy the packet buffers stored in DMA memory to non-DMA memory. Once copied, the DMA memory previously used to store the packet buffers is available to receive new data packets. The controller is configured to continue copying aged packet buffers to non-DMA memory until an unallocated amount DMA memory is reached.10-20-2011
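
The mechanism in the abstract above is time-based aging of DMA-mapped packet buffers: once a buffer's timestamp exceeds a threshold, its contents are copied to non-DMA memory and the DMA buffer is freed for new packets. The sketch below shows only that aging loop; the names, threshold, and plain arrays are invented stand-ins, not the driver's actual structures.

```c
/* Toy sketch of the aging idea above: packet buffers carry a timestamp, and
 * once a buffer is older than a threshold its contents are copied out of the
 * DMA-mapped pool so the DMA slot can take new packets. All names are
 * hypothetical simplifications. */
#include <stdio.h>
#include <time.h>

#define SLOTS      4
#define MAX_AGE_S  2        /* hypothetical age threshold, in seconds */

struct pkt_buf {
    time_t stamp;           /* when the packet landed in DMA memory */
    int    in_use;
};

static struct pkt_buf dma_pool[SLOTS];   /* stands in for DMA-mapped buffers  */
static int            copied_out;        /* stands in for non-DMA copies made */

static void age_out(time_t now)
{
    for (int i = 0; i < SLOTS; i++) {
        if (dma_pool[i].in_use && now - dma_pool[i].stamp > MAX_AGE_S) {
            copied_out++;                /* copy payload to non-DMA memory ... */
            dma_pool[i].in_use = 0;      /* ... and free the DMA slot          */
        }
    }
}

int main(void)
{
    time_t now = time(NULL);
    dma_pool[0] = (struct pkt_buf){ now - 5, 1 };   /* old buffer   */
    dma_pool[1] = (struct pkt_buf){ now,     1 };   /* fresh buffer */
    age_out(now);
    printf("buffers copied to non-DMA memory: %d\n", copied_out);
    return 0;
}
```
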
20090177755SCRIPT SERVING APPARATUS AND METHOD - A computer system comprising a processor operably connected to a memory device. The memory device stores an application providing functionality and a plug-in augmenting that functionality. In selected embodiments, the plug-in includes a request module configured to generate a request for a script, a communication module configured to contact a server and submit the request thereto, an input module configured to receive the script from the server, and an execution module configured to load the script directly into application memory corresponding to the application.07-09-2009
20080228896Pipelined buffer interconnect with trigger core controller - A method and system to transfer a data stream from a data source to a data sink are described herein. The system comprises a trigger core, a plurality of dedicated buffers and a plurality of dedicated buses coupled to the plurality of buffers, trigger core, the data source and the data sink. In response to receiving a request for a data transfer from a data source to a data sink, the trigger core assigns a first buffer and a first bus to the data source for writing data, locks the first buffer and first bus, releases the first buffer and the first bus upon indication from the data source of completion of data transfer to the first buffer, assigns the first buffer and first bus to the data sink for reading data and assigns a second buffer and second bus to the data source for writing data thereby pipelining the data transfer from the data source to the data sink.09-18-2008
20080270563Message Communications of Particular Message Types Between Compute Nodes Using DMA Shadow Buffers - Message communications of particular message types between compute nodes using DMA shadow buffers includes: receiving a buffer identifier specifying an application buffer having a message of a particular type for transmission to a target compute node through a network; selecting one of a plurality of shadow buffers for a DMA engine on the compute node for storing the message, each shadow buffer corresponding to a slot of an injection FIFO buffer maintained by the DMA engine; storing the message in the selected shadow buffer; creating a data descriptor for the message stored in the selected shadow buffer; injecting the data descriptor into the slot of the injection FIFO buffer corresponding to the selected shadow buffer; selecting the data descriptor from the injection FIFO buffer; and transmitting the message specified by the selected data descriptor through the data communications network to the target compute node.10-30-2008
20110055346DIRECT MEMORY ACCESS BUFFER MANAGEMENT - Disclosed are systems and methods for reclaiming posted buffers during a direct memory access (DMA) operation executed by an input/output device (I/O device) in connection with data transfer across a network. During the data transfer, the I/O device may cancel a buffer provided by a device driver thereby relinquishing ownership of the buffer. A condition for the I/O device relinquishing ownership of a buffer may be provided by a distance vector that may be associated with the buffer. The distance vector may specify a maximum allowable distance between the buffer and a buffer that is currently fetched by the I/O device. Alternatively, a condition for the I/O device relinquishing ownership of a buffer may be provided by a timer. The timer may specify a maximum time that the I/O device may maintain ownership of a particular buffer. In other implementations, a mechanism is provided to force the I/O device to relinquish some or all of the buffers that it controls.03-03-2011
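The distance-vector condition in the abstract above can be pictured as a simple comparison between a buffer's position and the buffer currently being fetched. The C fragment below is a hedged sketch of that check; the posted_buf_t structure and field names are hypothetical.

#include <stdio.h>

typedef struct {
    int id;             /* position of this posted buffer                */
    int max_distance;   /* distance vector associated with this buffer   */
    int owned_by_dev;   /* 1 while the I/O device owns it                */
} posted_buf_t;

static void maybe_relinquish(posted_buf_t *b, int current_fetch_id)
{
    if (b->owned_by_dev && current_fetch_id - b->id > b->max_distance) {
        b->owned_by_dev = 0;     /* return ownership to the device driver */
        printf("buffer %d reclaimed by driver\n", b->id);
    }
}

int main(void)
{
    posted_buf_t b = { .id = 2, .max_distance = 4, .owned_by_dev = 1 };
    maybe_relinquish(&b, 9);     /* 9 - 2 > 4, so the buffer is released */
    return 0;
}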
20120278422LIVE OBJECT PATTERN FOR USE WITH A DISTRIBUTED CACHE - A live object pattern is described that enables a distributed cache to store live objects as data entries thereon. A live object is a data entry stored in the distributed cache which represents a particular function or responsibility. When a live object arrives at the cache on a particular cluster server, a set of interfaces are called back which inform the live object that it has arrived at that server and that it should begin performing its functions. A live object is thus different from “dead” data entries because a live object performs a set of functions, can be started/stopped and can interact with other live objects in the distributed cache. Because live objects are backed up across the cluster just like normal data entries, the functional components of the system are more highly available and are easily transferred to another server's cache in case of failures.11-01-2012
20120311063METHOD AND APPARATUS FOR USING A SINGLE MULTI-FUNCTION ADAPTER WITH DIFFERENT OPERATING SYSTEMS - A flexible arrangement allows a single arrangement of Ethernet channel adapter (ECA) hardware functions to appear as needed to conform to various operating system deployment models. A PCI interface presents a logical model of virtual devices appropriate to the relevant operating system. Mapping parameters and values are associated with the packet streams to allow the packet streams to be properly processed according to the presented logical model and needed operations. Mapping occurs at both the host side and at the network side to allow the multiple operations of the ECA to be performed while still allowing proper delivery at each interface.12-06-2012
20110099243APPARATUS AND METHOD FOR IN-LINE INSERTION AND REMOVAL OF MARKERS - An apparatus is provided, for performing a direct memory access (DMA) operation between a host memory in a first server and a network adapter. The apparatus includes a host frame parser and a protocol engine. The host frame parser is configured to receive data corresponding to the DMA operation from a host interface, and is configured to insert markers on-the-fly into the data at a prescribed interval and to provide marked data for transmission to a second server over a network fabric. The protocol engine is coupled to the host frame parser. The protocol engine is configured to direct the host frame parser to insert the markers, and is configured to specify a first marker value and an offset value, whereby the host frame parser is enabled to locate and insert a first marker into the data.04-28-2011
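The marker-insertion behavior described above is essentially a byte-stream transform: every prescribed number of payload bytes, a marker value is spliced into the outgoing data. The sketch below shows one plausible software rendering of that idea; the function name, marker width, and parameters are assumptions rather than the patented hardware path.

#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Writes payload into 'out', inserting 'marker' every 'interval' bytes
 * counted from 'offset'. Returns the number of bytes produced. */
static size_t insert_markers(const uint8_t *in, size_t len,
                             uint8_t *out, size_t out_cap,
                             uint32_t marker, size_t interval, size_t offset)
{
    size_t o = 0, since_marker = offset;
    for (size_t i = 0; i < len && o + 4 < out_cap; i++) {
        if (since_marker == interval) {
            memcpy(out + o, &marker, sizeof marker);   /* on-the-fly insert */
            o += sizeof marker;
            since_marker = 0;
        }
        out[o++] = in[i];
        since_marker++;
    }
    return o;
}

int main(void)
{
    uint8_t in[32] = {0}, out[64];
    size_t n = insert_markers(in, sizeof in, out, sizeof out, 0xDEADBEEF, 8, 0);
    printf("%zu marked bytes produced\n", n);
    return 0;
}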
20110138008Deterministic Communication Between Graphical Programs Executing on Different Computer Systems Using Variable Nodes - A system and method for enabling deterministic or time-triggered data exchange between a first graphical program and a second graphical program. A first variable is assigned to a first time slot in a network cycle. A first graphical program may be configured to write data to the first variable. A second graphical program may be configured to read data from the first variable. The first graphical program may be executed on a first computer system, where executing the first graphical program comprises writing data to the first variable. Writing data to the first variable may cause the data to be delivered over a network to a second computer system when the first time slot occurs. The second graphical program may be executed on the second computer system, where executing the second graphical program comprises reading from the first variable the data sent from the first computer system.06-09-2011
20100030866METHOD AND SYSTEM FOR REAL-TIME CLOUD COMPUTING - A system for providing real-time cloud computing. The system includes a plurality of computing nodes, each node including a CPU, a memory, and a hard disk. The system includes a central intelligence manager for real-time assigning of tasks to the plurality of computing nodes. The central intelligence manager is configured to provide CPU scaling in parallel. The central intelligence manager is configured to provide a concurrent index. The central intelligence manager is configured to provide a multi-level cache. The central intelligence manager is configured to provide direct disk reads to the hard disks. The central intelligence manager is configured to utilize UDP for peer-to-peer communication between the computing nodes.02-04-2010
20100017496METHOD AND SYSTEM FOR USING SHARED MEMORY WITH OPTIMIZED DATA FLOW TO IMPROVE INPUT/OUTPUT THROUGHPUT AND LATENCY - The data path in a network storage system is streamlined by sharing a memory among multiple functional modules (e.g., N-module and D-module) of a storage server that facilitates symmetric access to data from multiple clients. The shared memory stores data from clients or storage devices to facilitate communication of data between clients and storage devices and/or between functional modules, and reduces redundant copies necessary for data transport. It reduces latency and improves throughput efficiencies by minimizing data copies and using hardware assisted mechanisms such as DMA directly from host bus adapters over an interconnection, e.g. switched PCI-e “network”. This scheme is well suited for a “SAN array” architecture, but also can be applied to NAS protocols or in a unified protocol-agnostic storage system. The storage system can provide a range of configurations ranging from dual module to many modules with redundant switched fabrics for I/O, CPU, memory, and disk connectivity.01-21-2010
20120150986Method and System for Extending Memory Capacity of a Mobile Device Using Proximate Devices and Unicasting - An improved download capability is provided for mobile devices, without requiring an increase in the local memory of such devices, by giving a set of multimedia devices the capability to create a cooperative download grid in which multiple instrumented devices can be aggregated according to predefined profiles. This capability is useful in at least two different scenarios. The first is when a SIP enabled device must download a file larger than the available memory of the SIP device. The second is when a SIP enabled device must download a file but cannot stay connected long enough to accomplish the download. If the SIP device is in proximity to other compatible devices, such as Voice over Internet Protocol (VoIP) or Session Initiation Protocol (SIP) devices, these devices can be dynamically aggregated to provide a download grid with multiprotocol support that allows optimized downloading.06-14-2012
20120005300SELF CLOCKING INTERRUPT GENERATION IN A NETWORK INTERFACE CARD - A network interface card may issue interrupts to a host, where the determination of when to issue an interrupt is based on the incoming packet rate. In one implementation, an interrupt controller of the network interface card may issue interrupts that inform the host of the arrival of packets. The interrupt controller may issue the interrupts in response to arrival of a predetermined number of packets, where the interrupt controller re-calculates the predetermined number based on an arrival rate of the incoming packets.01-05-2012
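One way to read the self-clocking scheme above is that the interrupt threshold is recomputed so that, at the measured arrival rate, interrupts land at roughly a fixed period. The sketch below captures that arithmetic; the target period and clamping bounds are invented for illustration.

#include <stdio.h>

#define TARGET_PERIOD_US 100.0   /* desired time between interrupts (assumed) */
#define MIN_THRESH 1
#define MAX_THRESH 256

/* Recompute the per-interrupt packet count from the measured arrival rate. */
static unsigned recompute_threshold(double pkts_per_us)
{
    double t = pkts_per_us * TARGET_PERIOD_US;   /* packets expected per period */
    if (t < MIN_THRESH) t = MIN_THRESH;
    if (t > MAX_THRESH) t = MAX_THRESH;
    return (unsigned)t;
}

int main(void)
{
    printf("threshold at 0.5 pkt/us: %u\n", recompute_threshold(0.5));   /* 50              */
    printf("threshold at 4.0 pkt/us: %u\n", recompute_threshold(4.0));   /* capped at 256   */
    return 0;
}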
20120209940METHOD FOR SWITCHING TRAFFIC BETWEEN VIRTUAL MACHINES - Methods for switching traffic include a physical machine running source and destination virtual machines (VMs). The source VM issues a data unit addressed to the destination VM. The physical machine has a physical network interface in communication with the VMs. The physical network interface transmits a sub-packet, which includes a partial portion of the data unit, over a network while a majority portion of the data unit remains at the physical machine. A network switch on the network receives the sub-packet transmitted by the physical network interface. The network switch performs one or more OSI Layer 2 through Layer 7 switching functions on the sub-packet and returns that sub-packet to the physical network interface. The physical network interface identifies the data unit stored in the memory in response to the sub-packet returned from the network switch and forwards the identified data unit to the destination VM.08-16-2012
20120209939MEMORY SYSTEM CAPABLE OF ADDING TIME INFORMATION TO DATA OBTAINED VIA NETWORK - According to one embodiment, a memory system includes a non-volatile semiconductor memory device, a control unit, a memory, an extension register, and a timer. The control unit controls the non-volatile semiconductor memory device. The memory as a work area is connected to the control unit. The extension register is provided in the memory and time information is set therein. The timer updates the time information. When the control unit records a file obtained via a network in the non-volatile semiconductor memory device, the control unit adds the time information updated by the timer to the file.08-16-2012
20110167127MEASUREMENT IN DATA FORWARDING STORAGE - Methods and apparatus, including computer program products, for measurement in data forwarding storage. A method includes, in a network of interconnected computer system nodes, receiving a request from a source system to store data, directing the data to a computer memory, storing information about the data in a store associated with a central server, and continuously forwarding the data and the store from one computer memory to another computer memory in the network of interconnected computer system nodes without storing on any physical storage device in the network.07-07-2011
20120209941COMMUNICATION APPARATUS, AND APPARATUS AND METHOD FOR CONTROLLING COLLECTION OF STATISTICAL DATA - In a communication apparatus, a collector collects a plurality of statistical data values describing communication activities, based on given user data frames, and produces statistics transmission data including the collected statistical data values and management data tags added thereto. A controller transfers, by using a direct memory access technique, the statistics transmission data produced by the collector. A memory stores the statistics transmission data transferred by the controller.08-16-2012
20120072523NETWORK DEVICES WITH MULTIPLE DIRECT MEMORY ACCESS CHANNELS AND METHODS THEREOF - A method, computer readable medium, and a system for communicating with networked clients and servers through a network device is disclosed. A first network data packet is received at a first port of a network device. The first network data packet is destined for a first executing application of a plurality of executing applications operating in the network device. The plurality of executing applications are associated with corresponding application drivers utilizing independent and unique direct memory access (DMA) channels. A first DMA channel is identified, wherein the first DMA channel is mapped to the first port and associated with a first application driver corresponding to the first executing application. The first network data packet is transmitted to the first executing application over the first identified DMA channel.03-22-2012
20110106906METHOD AND SYSTEM FOR OFFLINE DATA ACCESS ON COMPUTER SYSTEMS - While a computer system is in operational state, a network interface controller (NIC) in the computer system may be operable to copy select data to a secondary storage device. The secondary storage device is accessible by the NIC while the computer system is in an offline state or not operational. The NIC may be operable to provide remote accessibility to the copy of the select data stored in the secondary storage device over a network while the computer system is in the offline state and the NIC is supplied with electrical power and active. While the computer system is in the operational state and whenever a change is made to the select data, the NIC is operable to replace the copy of the select data stored in the secondary storage device with an updated copy of the select data based on the change.05-05-2011
20110106905DIRECT SENDING AND ASYNCHRONOUS TRANSMISSION FOR RDMA SOFTWARE IMPLEMENTATIONS - Exemplary embodiments include RDMA methods and systems for sending application data to a computer memory destination in a direct but non-blocking fashion. The method can include posting a new work request for an RDMA connection or association, determining if there is a prior work request for the same connection or association enqueued for processing, in response to a determination that no prior work request is enqueued for processing, processing the new work request directly by sending RDMA frames containing application data referred to by the work request to the computer memory destination, performing direct sending while there is sufficient send space to process the new work request, and delegating the new work request to asynchronous transmission if a prior work request is already enqueued for processing or lack of send space would block a subsequent transmission operation.05-05-2011
20100094948WORKLOAD MIGRATION USING ON DEMAND REMOTE PAGING - In one embodiment a method for migrating a workload from one processing resource to a second processing resource of a computing platform is disclosed. The method can include receiving a command to migrate a workload that is processing; in response to the migration command, the process can be interrupted and some memory processes can be frozen. An index table can be created that identifies the memory locations that determine where the process was when it was interrupted. Table data, pinned page data, and non-private process data can be sent to the second processing resource. Contained in this data can be restart type data. The second resource or target resource can utilize this data to restart the process without requiring bulk data transfers, providing an efficient migration process. Other embodiments are also disclosed.04-15-2010
20120221669COMMUNICATION METHOD FOR PARALLEL COMPUTING, INFORMATION PROCESSING APPARATUS AND COMPUTER READABLE RECORDING MEDIUM - A communication method includes reporting information that indicates disposition of communication data in a communication buffer from a first node to second nodes by a multi-destination delivery using a barrier synchronization or a reduction to all nodes. The communication data is transferred between the first node and the second nodes by at least one of collective communication methods as a node-to-node communication method used in parallel computing. The communication method transfers the communication data by the second nodes using the information that indicates the disposition of the communication data in the communication buffer.08-30-2012
20120221668CLOUD STORAGE ACCESS DEVICE AND METHOD FOR USING THE SAME - A cloud storage access device includes a data fetching unit, a user management unit, and a data link unit. The data fetching unit collects private data of each user of the cloud storage access device. The user management unit creates a home directory corresponding to each user in the cloud storage access device. The data link unit connects each of the home directories to both the cloud and a local storage of a network terminal, such that the cloud storage access device communicates with both the cloud and the local storage. Each user of the cloud storage access device stores data to the cloud or the local storage and accesses data stored in the cloud or the local storage through the home directory corresponding to the user.08-30-2012
20120131125METHODS AND SYSTEMS OF DYNAMICALLY MANAGING CONTENT FOR USE BY A MEDIA PLAYBACK DEVICE - Some embodiments provide systems and/or methods of managing content in providing a playback experience associated with a portable storage medium by detecting access to a first portable storage medium with multimedia content recorded on the first portable storage medium; evaluating content on the first portable storage medium; evaluating local memory of the multimedia playback device; determining, in response to the evaluation of the content on the first portable storage medium and the evaluation of the local memory, whether memory on the local memory needs to be freed up in implementing playback of multimedia content in association with the first portable storage medium; and moving one or more contents stored on the local memory of the multimedia playback device to a virtual storage accessible by the multimedia playback device over a distributed network in response to determining that memory on the local memory needs to be freed up.05-24-2012
20120131124RDMA READ DESTINATION BUFFERS MAPPED ONTO A SINGLE REPRESENTATION - A computer-implemented method, system, and article of manufacture for data communication between a requester and a responder in a remote direct memory access (RDMA) network, where each of the requester and the responder is an RDMA-enabled host of the network. The method includes: sending a request for the responder to provide data, where the request includes a mapped steering tag that is obtained by mapping a set of memory buffers of the requester onto a single representation that allows for identifying each of the memory buffers of the set; and receiving the requested data together with the mapped steering tag and assigning the data being received to the memory buffers of the set consistently with the mapping.05-24-2012
20120136958METHOD FOR ANALYZING PROTOCOL DATA UNIT OF INTERNET SMALL COMPUTER SYSTEMS INTERFACE - A method for analyzing a Protocol Data Unit (PDU) of an internet Small Computer Systems Interface (iSCSI) is used for processing a data write request of the iSCSI. The method includes sending the data write request to a target; the target generating a Ready to Transfer (R2T) PDU according to the data write request, and transferring the R2T PDU to an initiator; the initiator generating multiple groups of Data Out PDUs, and writing a scatter/gather block in a target transfer tag of each Data Out PDU; the target finding the corresponding scatter/gather block according to the target transfer tag, and obtaining a host buffer from the scatter/gather block; the target executing a Direct Memory Access command, so as to directly write a payload content received by the target in the host buffer; and after the target completes the write request, the target sending out an RSP PDU to the initiator.05-31-2012
20110185032COMMUNICATION APPARATUS, INFORMATION PROCESSING APPARATUS, AND METHOD FOR CONTROLLING COMMUNICATION APPARATUS - A communication apparatus including: a receiving portion that receives alignment specifying information, the alignment specifying information indicating which of main memories included in a first information processing apparatus and a second information processing apparatus to align the requested data; a division location calculating portion that calculates a divisional location of the requested data so that the divisional location of the requested data becomes an alignment boundary on the main memory included in any one of the first and the second information processing apparatuses specified by the received alignment specifying information, the alignment boundary being integral multiples of a given data width; and a transmitting portion that divides the requested data stored into the main memory in the second information processing apparatus based on the calculated divisional location, and transmits the divided data to the first information processing apparatus.07-28-2011
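The divisional-location calculation described above amounts to rounding a preferred split point down to the nearest alignment boundary in the specified main memory. A minimal sketch, assuming hypothetical parameter names, follows.

#include <stdio.h>
#include <stdint.h>

/* base:  address where the data starts in the chosen main memory
 * want:  preferred split offset within the data
 * width: alignment data width (e.g., a cache-line size)
 * Returns the offset at which to divide so that base+offset is width-aligned. */
static uint64_t division_location(uint64_t base, uint64_t want, uint64_t width)
{
    uint64_t addr    = base + want;
    uint64_t aligned = (addr / width) * width;   /* round down to a boundary */
    return aligned > base ? aligned - base : 0;
}

int main(void)
{
    /* split roughly at 1000 bytes, aligned to a 64-byte boundary from base 0x1008 */
    printf("split offset: %llu\n",
           (unsigned long long)division_location(0x1008, 1000, 64));
    return 0;
}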
20110185031Device and method for controlling dissemination of contents between peers having wireless communication capacities, depending on vote vectors - A method is intended for controlling dissemination of content in a peer-to-peer mode between peers having wireless communication capacities and comprising a cache memory for storing contents. Each peer maintains a group of variable values, each associated with a content it can store into its cache memory and representative of the utility that storing this content represents for it and for other peers. Each time such a peer accesses a wireless network or another peer offering access to these contents, it downloads the N contents having the N highest variable values in its group, N being a number depending on the storage capacity the peer is ready to use in its cache memory to store contents to be downloaded.07-28-2011
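The download rule above reduces to sorting the peer's vote vector and taking the top N entries. The sketch below, with invented structures and an arbitrary N, shows that selection.

#include <stdio.h>
#include <stdlib.h>

typedef struct { int content_id; double utility; } vote_t;

/* Comparator for descending utility order. */
static int by_utility_desc(const void *a, const void *b)
{
    double d = ((const vote_t *)b)->utility - ((const vote_t *)a)->utility;
    return (d > 0) - (d < 0);
}

static void pick_downloads(vote_t *votes, size_t count, size_t n)
{
    qsort(votes, count, sizeof *votes, by_utility_desc);
    for (size_t i = 0; i < n && i < count; i++)
        printf("download content %d (utility %.2f)\n",
               votes[i].content_id, votes[i].utility);
}

int main(void)
{
    vote_t v[] = { {1, 0.2}, {2, 0.9}, {3, 0.5}, {4, 0.7} };
    pick_downloads(v, 4, 2);   /* downloads contents 2 and 4 */
    return 0;
}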
20100049821DEVICE, SYSTEM, AND METHOD OF DISTRIBUTING MESSAGES - Device, system, and method of distributing messages. For example, a data publisher capable of communication with a plurality of subscribers via a network fabric, the data publisher comprising: a memory allocator to allocate a memory area of a local memory unit of the data publisher to be accessible for Remote Direct Memory Access (RDMA) read operations by one or more of the subscribers; and a publisher application to create a message log in said memory area, to send a message to one or more of the subscribers using a multicast transport protocol, and to store in said memory area a copy of said message. A subscriber device handles recovery of lost messages by directly reading the lost messages from the message log of the data publisher using RDMA read operation(s).02-25-2010
20120185554DATA TRANSFER DEVICE AND DATA TRANSFER METHOD - An object of the present invention is to efficiently perform a data transfer by using a plurality of data transfer devices. A storage apparatus 07-19-2012
20120084380METHOD AND SYSTEM FOR COMMUNICATING BETWEEN MEMORY REGIONS - A method and system are provided for transferring data in a networked system between a local memory in a local system and a remote memory in a remote system. A RDMA request is received and a first buffer region is associated with a first transfer operation. The system determines whether a size of the first buffer region exceeds a maximum transfer size of the networked system. Portions of the second buffer region may be associated with the first transfer operation based on the determination of the size of the first buffer region. The system subsequently performs the first transfer operation.04-05-2012
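The size check described above (and repeated in application 20120284355 later in this list) can be sketched as a simple chunking loop: if the buffer region exceeds the fabric's maximum transfer size, it is issued in maximum-sized pieces. MAX_XFER and the helper names below are assumptions.

#include <stdio.h>
#include <stddef.h>

#define MAX_XFER 4096   /* assumed maximum transfer size of the networked system */

static void issue_transfer(size_t offset, size_t len)
{
    printf("transfer %zu bytes at offset %zu\n", len, offset);
}

/* Associate the region with the transfer operation, splitting it if needed. */
static void start_rdma(size_t region_len)
{
    if (region_len <= MAX_XFER) {
        issue_transfer(0, region_len);
        return;
    }
    for (size_t off = 0; off < region_len; off += MAX_XFER) {
        size_t chunk = region_len - off;
        if (chunk > MAX_XFER) chunk = MAX_XFER;
        issue_transfer(off, chunk);
    }
}

int main(void) { start_rdma(10000); return 0; }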
20120259941SERVER AND METHOD FOR THE SERVER TO ACCESS A VOLUME - Embodiments of the present technical solution relate to the technical field of storage, and disclose a server and a method for the server to access a volume. The method comprises: determining, from a first list, a block that needs to be accessed according to an access offset of a volume that needs to be accessed; determining, from a second list, a storage controller corresponding to the block that needs to be accessed according to the determined block; and sending a data reading request or a data writing request to the storage controller corresponding to the block that needs to be accessed for processing. Embodiments of the present invention can reduce the time delay when the data reading request or the data writing request of the server reaches the block that needs to be accessed.10-11-2012
20120259940RDMA (REMOTE DIRECT MEMORY ACCESS) DATA TRANSFER IN A VIRTUAL ENVIRONMENT - In an embodiment, a method is provided. The method includes determining that a message has been placed in a send buffer; and transferring the message to an application on a second virtual machine, bypassing use of an operating system to process the message, by directly placing the message in an application memory space from which the application can retrieve the message.10-11-2012
20120233283DETERMINING SERVER WRITE ACTIVITY LEVELS TO USE TO ADJUST WRITE CACHE SIZE - Provided are a computer program product, system, and method for determining server write activity levels to use to adjust write cache size. Information on server write activity to the cache is gathered. The gathered information on write activity is processed to determine a server write activity level comprising one of multiple write activity levels indicating a level of write activity. The determined server write activity level is transmitted to a storage server having a write cache, wherein the storage server uses the determined server write activity level to determine whether to adjust a size of the storage server write cache.09-13-2012
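The server write activity level in the abstract above is a discretization of gathered write statistics into a small number of levels. A toy classification, with assumed thresholds, might look like this.

#include <stdio.h>

typedef enum { WRITE_LOW, WRITE_MEDIUM, WRITE_HIGH } write_level_t;

/* Bucket the gathered write rate into one of several discrete activity levels. */
static write_level_t classify(unsigned writes_per_sec)
{
    if (writes_per_sec < 100)  return WRITE_LOW;
    if (writes_per_sec < 1000) return WRITE_MEDIUM;
    return WRITE_HIGH;
}

int main(void)
{
    printf("level for 450 writes/s: %d\n", classify(450));   /* WRITE_MEDIUM */
    return 0;
}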
20080301254METHOD AND SYSTEM FOR SPLICING REMOTE DIRECT MEMORY ACCESS (RDMA) TRANSACTIONS IN AN RDMA-AWARE SYSTEM - Aspects of a system for splicing RDMA transactions in an RDMA system may include a main processor within a main server that may receive read requests from a client device. The main processor may translate a data reference contained in each read request to generate a physical buffer list (PBL). The processor 12-04-2008
20120265838PROGRAMMABLE LOGIC CONTROLLER - A PLC is enabled to determine whether device data has been completely transferred to an FTP server, with improved flexibility in setting a completed-transfer notice code. To this end, the PLC includes a logging section for logging device data and outputting a log file describing the results of logging of the device data to a memory card; and a file transfer section for transferring the log file delivered to the memory card to the FTP server. After having completely transferred all the data that constitutes the log file (Yes in step S10-18-2012
20120265840Providing a Memory Region or Memory Window Access Notification on a System Area Network - A system and method for providing a memory region/memory window (MR/MW) access notification on a system area network are provided. Whenever a previously allocated MR/MW is accessed, such as via a remote direct memory access (RDMA) read/write operation, a notification of the access is generated and written to a queue data structure associated with the MR/MW. In one illustrative embodiment, this queue data structure may be an MR/MW event queue (EQ) data structure that is created and used for all consumer processes and all MR/MWs. In other illustrative embodiments, the EQ is associated with a protection domain. In yet another illustrative embodiment, an event record may be posted to an asynchronous event handler in response to the accessing of the MR/MW. In another illustrative embodiment, a previously posted queue element may be used to generate a completion queue element in response to the accessing of the MR/MW.10-18-2012
20120265839RESPONSE DEVICE, INTEGRATED CIRCUIT OF SAME, RESPONSE METHOD, AND RESPONSE SYSTEM - The present invention performs efficient data transfer between devices. In particular, the present invention can reduce processing loads and power consumption of a response device 10-18-2012
20120265837REMOTE DIRECT MEMORY ACCESS OVER DATAGRAMS - A communication stack for providing remote direct memory access (RDMA) over a datagram network is disclosed. The communication stack has a user level interface configured to accept datagram related input and communicate with an RDMA enabled network interface card (NIC) via an NIC driver. The communication stack also has an RDMA protocol layer configured to supply one or more data transfer primitives for the datagram related input of the user level. The communication stack further has a direct data placement (DDP) layer configured to transfer the datagram related input from a user storage to a transport layer based on the one or more data transfer primitives by way of a lower layer protocol (LLP) over the datagram network.10-18-2012
20120324034PROVIDING ACCESS TO SHARED STATE DATA - Methods, systems, and computer-readable media for manipulating in-memory data entities are provided. Embodiments of the present invention use a Representational State Transfer (“REST”) web service to manipulate the in-memory data entities. In one embodiment, a “snap shot” is taken of the in-memory data entities at a point in time to create representations of the entities. A hierarchy of the representations is built. The hierarchy is used to make the entities addressable via a URI. Embodiments of the invention may then map the entity representations in the hierarchy to the entities. An embodiment of the invention uses handlers to process a REST style request addressed to an entity representation. The handler reads the command and determines whether the command is authorized for performance on the entity and performs that command, if appropriate.12-20-2012
20110264758USER-LEVEL STACK - A method for transmitting data by means of a data processing system, the system being capable of supporting an operating system and at least one application and having access to a memory and a network interface device capable of supporting a communication link over a network with another network interface device, the method comprising the steps of: forming by means of the application data to be transmitted; requesting by means of the application a non-operating-system functionality of the data processing system to send the data to be transmitted; responsive to that request: writing the data to be transmitted to an area of the memory; and initiating by means of direct communication between the non-operating-system functionality and the network interface device a transmission operation of at least some of the data over the network; and subsequently accessing the memory by means of the operating system and performing at least part of a transmission operation of at least some of the data over the network by means of the network interface device.10-27-2011
20120089694TCP/IP PROCESSOR AND ENGINE USING RDMA - A TCP/IP processor and data processing engines for use in the TCP/IP processor is disclosed. The TCP/IP processor can transport data payloads of Internet Protocol (IP) data packets using an architecture that provides capabilities to transport and process Internet Protocol (IP) packets from Layer 2 through transport protocol layer and may also provide packet inspection through Layer 7. The engines may perform pass-through packet classification, policy processing and/or security processing enabling packet streaming through the architecture at nearly the full line rate. An application running on an initiator or target can in certain instantiations register a region of memory, which is made available to its peer(s) for access directly without substantial host intervention through RDMA data transfer.04-12-2012
20120331083RECEIVE QUEUE MODELS TO REDUCE I/O CACHE FOOTPRINT - A method according to one embodiment includes the operations of configuring a primary receive queue to designate a first plurality of buffers; configuring a secondary receive queue to designate a second plurality of buffers, wherein said primary receive queue is sized to accommodate a first network traffic data rate and said secondary receive queue is sized to provide additional accommodation for burst network traffic data rates; selecting a buffer from said primary receive queue, if said primary receive queue has buffers available, otherwise selecting a buffer from said secondary receive queue; transferring data from a network controller to said selected buffer; indicating that said transferring to said selected buffer is complete; reading said data from said selected buffer; and returning said selected buffer, after said reading is complete, to said primary receive queue if said primary receive queue has space available for the selected buffer, otherwise returning said selected buffer to said secondary receive queue.12-27-2012
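The two-queue model above boils down to a buffer-selection rule (primary first, secondary for bursts) and a matching return rule. The following sketch is one hedged reading of that rule; queue sizes and structures are invented.

#include <stdio.h>

typedef struct { int free; int capacity; } rx_queue_t;

static rx_queue_t primary   = { .free = 0, .capacity = 64  };   /* sized for the steady data rate */
static rx_queue_t secondary = { .free = 3, .capacity = 512 };   /* extra headroom for bursts      */

/* Take from the primary queue when possible, otherwise from the secondary. */
static const char *select_buffer(void)
{
    if (primary.free > 0)   { primary.free--;   return "primary"; }
    if (secondary.free > 0) { secondary.free--; return "secondary"; }
    return "none";                                /* no buffer available */
}

/* Return to the primary queue if it has space, otherwise to the secondary. */
static void return_buffer(void)
{
    if (primary.free < primary.capacity) primary.free++;
    else                                 secondary.free++;
}

int main(void)
{
    printf("took buffer from %s queue\n", select_buffer());
    return_buffer();
    return 0;
}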
20120290675SYSTEM AND METHOD FOR A MOBILE DEVICE TO USE PHYSICAL STORAGE OF ANOTHER DEVICE FOR CACHING - Systems and methods for a mobile device to use physical storage of another device for caching are disclosed. In one embodiment, a mobile device is able to receive over a cellular or IP network a response or content to be cached and wirelessly access the physical storage of the other device via a wireless network to cache the response or content for the mobile device.11-15-2012
20090125604THIRD PARTY, BROADCAST, MULTICAST AND CONDITIONAL RDMA OPERATIONS - In a multinode data processing system in which nodes exchange information over a network or through a switch, the mechanism which enables out-of-order data transfer via Remote Direct Memory Access (RDMA) also provides a corresponding ability to carry out broadcast operations, multicast operations, third party operations and conditional RDMA operations. In a broadcast operation a source node transfers data packets in RDMA fashion to a plurality of destination nodes. Multicast operation works similarly except that distribution is selective. In third party operations a single central node in a cluster or network manages the transfer of data in RDMA fashion between other nodes or creates a mechanism for allowing a directed distribution of data between nodes. In conditional operation mode the transfer of data is conditioned upon one or more events occurring in either the source node or in the destination node.05-14-2009
20110161456Apparatus and Method for Supporting Memory Management in an Offload of Network Protocol Processing - A number of improvements in network adapters that offload protocol processing from the host processor are provided. Specifically, mechanisms for handling memory management and optimization within a system utilizing an offload network adapter are provided. The memory management mechanism permits both buffered sending and receiving of data as well as zero-copy sending and receiving of data. In addition, the memory management mechanism permits grouping of DMA buffers that can be shared among specified connections based on any number of attributes. The memory management mechanism further permits partial send and receive buffer operation, delaying of DMA requests so that they may be communicated to the host system in bulk, and expedited transfer of data to the host system.06-30-2011
20110246597REMOTE DIRECT STORAGE ACCESS - Embodiments of the present disclosure include systems, apparatuses, and methods that relate to remote, direct access of solid-state storage. In some embodiments, a network interface component (NIC) of a server may access a solid-state storage module of the server by a network storage access link that bypasses a central processing unit (CPU) and main memory of the server. Other embodiments may be described and claimed.10-06-2011
20120254339METHODS AND APPARATUS TO TRANSMIT DEVICE DESCRIPTION FILES TO A HOST - Example methods and apparatus to transmit device description files to a host are disclosed. A disclosed example method includes communicatively coupling a field device to the host to provision the field device within a process control system, receiving an indication that the host does not include a version of a device description file that corresponds to a version of the field device, accessing the device description file from a memory of the field device, and transmitting the device description file from the field device to the host.10-04-2012
20110270942COMBINING MULTIPLE HARDWARE NETWORKS TO ACHIEVE LOW-LATENCY HIGH-BANDWIDTH POINT-TO-POINT COMMUNICATION - Systems, methods and articles of manufacture are disclosed for performing a collective operation on a parallel computing system that includes multiple compute nodes and multiple networks connecting the compute nodes. Each of the networks may have different characteristics. A source node may broadcast a DMA descriptor over a first network to a target node, to initialize the collective operation. The target node may perform the collective operation over a second network and using the broadcast DMA descriptor.11-03-2011
20130138758Efficient data transfer between servers and remote peripherals - Methods and apparatus are provided for transferring data between servers and a remote entity having multiple peripherals. Multiple servers are connected to a remote entity over a Remote Direct Memory Access capable network. The remote entity includes peripherals such as network interface cards (NICs) and host bus adapters (HBAs). Server descriptor rings and descriptors are provided to allow efficient and effective communication between the servers and the remote entity.05-30-2013
20130091236REMOTE DIRECT MEMORY ACCESS (‘RDMA’) IN A PARALLEL COMPUTER - Remote direct memory access (‘RDMA’) in a parallel computer, the parallel computer including a plurality of nodes, each node including a messaging unit, including: receiving an RDMA read operation request that includes a virtual address representing a memory region at which to receive data to be transferred from a second node to the first node; responsive to the RDMA read operation request: translating the virtual address to a physical address; creating a local RDMA object that includes a counter set to the size of the memory region; sending a message that includes a DMA write operation request, the physical address of the memory region on the first node, the physical address of the local RDMA object on the first node, and a remote virtual address on the second node; and receiving the data to be transferred from the second node.04-11-2013
20130091235DYNAMIC CONTENT INSTALLER FOR MOBILE DEVICES - Apparatus and methods for obtaining a content item in a mobile environment include receiving a content item of a first type and content management information that corresponds to the content item. The content management information specifies a destination storage location for content items of the first type, and the destination storage location is different from a default storage location for the content items of the first type. Further, these aspects include storing the content item on the communication device at the destination storage location based on the content management information, and executing an application on a computing platform of the communication device. The application interacts with the content item at the destination storage location based on the content management information. Additional apparatus and methods relating to distributing content are also disclosed.04-11-2013
20130103777NETWORK INTERFACE CONTROLLER WITH CIRCULAR RECEIVE BUFFER - A method for communication includes allocating in a memory of a host device a contiguous, cyclical set of buffers for use by a transport service instance on a network interface controller (NIC). First and second indices point respectively to a first buffer in the set to which the NIC is to write and a second buffer in the set from which a client process running on the host device is to read. Upon receiving at the NIC a message directed to the transport service instance and containing data to be pushed to the memory, the data are written to the first buffer that is pointed to by the first index, and the first index is advanced cyclically through the set. The second index is advanced cyclically through the set when the data in the second buffer have been read by the client process.04-25-2013
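The circular receive buffer described above pairs a contiguous set of buffers with two cyclically advancing indices, one for the NIC's writes and one for the client's reads. The sketch below models only that index movement; buffer count, sizes, and function names are assumptions.

#include <stdio.h>

#define N_BUFS  4
#define BUF_SZ  256

static char bufs[N_BUFS][BUF_SZ];
static unsigned nic_write_idx   = 0;   /* first index: next buffer the NIC fills     */
static unsigned client_read_idx = 0;   /* second index: next buffer the client reads */

/* Called when a message directed to the transport service instance arrives. */
static void nic_push(const char *msg)
{
    snprintf(bufs[nic_write_idx], BUF_SZ, "%s", msg);
    nic_write_idx = (nic_write_idx + 1) % N_BUFS;       /* advance cyclically */
}

/* Client process consumes the next buffer and advances its own index. */
static const char *client_poll(void)
{
    const char *m = bufs[client_read_idx];
    client_read_idx = (client_read_idx + 1) % N_BUFS;
    return m;
}

int main(void)
{
    nic_push("hello");
    printf("client read: %s\n", client_poll());
    return 0;
}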
20130124665ADMINISTERING AN EPOCH INITIATED FOR REMOTE MEMORY ACCESS - Methods, systems, and products are disclosed for administering an epoch initiated for remote memory access that include: initiating, by an origin application messaging module on an origin compute node, one or more data transfers to a target compute node for the epoch; initiating, by the origin application messaging module after initiating the data transfers, a closing stage for the epoch, including rejecting any new data transfers after initiating the closing stage for the epoch; determining, by the origin application messaging module, whether the data transfers have completed; and closing, by the origin application messaging module, the epoch if the data transfers have completed.05-16-2013
20130124666MANAGING INTERNODE DATA COMMUNICATIONS FOR AN UNINITIALIZED PROCESS IN A PARALLEL COMPUTER - A parallel computer includes nodes, each having main memory and a messaging unit (MU). Each MU includes computer memory, which in turn includes MU message buffers. Each MU message buffer is associated with an uninitialized process on the compute node. In the parallel computer, managing internode data communications for an uninitialized process includes: receiving, by an MU of a compute node, one or more data communications messages in an MU message buffer associated with an uninitialized process on the compute node; determining, by an application agent, that the MU message buffer associated with the uninitialized process is full prior to initialization of the uninitialized process; establishing, by the application agent, a temporary message buffer for the uninitialized process in main computer memory; and moving, by the application agent, data communications messages from the MU message buffer associated with the uninitialized process to the temporary message buffer in main computer memory.05-16-2013
20130179527Application Engine Module, Modem Module, Wireless Device and Method - A wireless device has a modem module and an application engine module. A communication and memory sharing interface connects the modem module to the application engine module. The application engine module has an application layer component for providing application layer processing for the wireless device and a modem component for providing, in combination with the modem module, modem processing for the wireless device. The wireless device has a memory and a memory interface for connecting the application engine module directly to the memory.07-11-2013
20130144966COORDINATING WRITE SEQUENCES IN A DATA STORAGE SYSTEM - According to one aspect of the present disclosure, a method and technique for coordinating write sequences in a data storage system is disclosed. The method includes: responsive to a primary device receiving a request to write to primary storage, receiving from the primary device a request for a sequence number; generating a current sequence number for the write; generating a first identifier indicating an identity of secondary devices writing to secondary storage based on the current sequence number; generating a second identifier indicating an identity of secondary devices writing to secondary storage based on the current sequence number and a previous sequence number; transmitting the current sequence number and the second identifier to the primary device; and transmitting the current sequence number and the first identifier to the secondary devices writing to secondary storage based on the previous sequence number.06-06-2013
20130151644COPYING DATA ONTO AN EXPANDABLE MEMORY - This document describes a method for synchronizing files on an expandable memory card coupled to a first computing device with an application running on a second computing device, where downloading of files is performed wirelessly without user involvement.06-13-2013
20120284355METHOD AND SYSTEM FOR COMMUNICATING BETWEEN MEMORY REGIONS - A method and system are provided for transferring data in a networked system between a local memory in a local system and a remote memory in a remote system. A RDMA request is received and a first buffer region is associated with a first transfer operation. The system determines whether a size of the first buffer region exceeds a maximum transfer size of the networked system. Portions of the second buffer region may be associated with the first transfer operation based on the determination of the size of the first buffer region. The system subsequently performs the first transfer operation.11-08-2012
20130185375CONFIGURING COMPUTE NODES IN A PARALLEL COMPUTER USING REMOTE DIRECT MEMORY ACCESS (‘RDMA’) - Configuring compute nodes in a parallel computer using remote direct memory access (‘RDMA’), the parallel computer comprising a plurality of compute nodes coupled for data communications via one or more data communications networks, including: initiating, by a source compute node of the parallel computer, an RDMA broadcast operation to broadcast binary configuration information to one or more target compute nodes in the parallel computer; preparing, by each target compute node, the target compute node for receipt of the binary configuration information from the source compute node; transmitting, by each target compute node, a ready message to the source compute node, the ready message indicating that the target compute node is ready to receive the binary configuration information from the source compute node; and performing, by the source compute node, an RDMA broadcast operation to write the binary configuration information into memory of each target compute node.07-18-2013
20120016949DISTRIBUTED PROCESSING SYSTEM, INTERFACE, STORAGE DEVICE, DISTRIBUTED PROCESSING METHOD, DISTRIBUTED PROCESSING PROGRAM - A distributed processing system which distributes a load of a request from a client without being restricted by a processing status and processing performance of transfer processing means is provided:01-19-2012
20130198310CONTROL SYSTEM AND LOG DELIVERY METHOD - Provided is a control system with a plurality of control devices, each of which includes an arithmetic processing device and a storage unit for storing logs of the arithmetic processing device. The control system includes a first generation unit, a second generation unit, and a delivery unit. The first generation unit generates a first log file such that the plurality of logs of the arithmetic processing devices stored in the storage unit of each control device are stored within an upper limit of log capacity determined based on a total number of the arithmetic processing devices in the control system and in order based on priorities. The second generation unit generates a second log file including a plurality of the first log files of the arithmetic processing devices. The delivery unit delivers the second log file to an external device.08-01-2013
20130198311Techniques for Use of Vendor Defined Messages to Execute a Command to Access a Storage Device - Examples are disclosed for use of vendor defined messages to execute a command to access a storage device maintained at a server. In some examples, a network input/output device coupled to the server may receive the command from a client remote to the server for the client to access the storage device. For these examples, elements or components of the network input/output device may be capable of forwarding the command either directly to a Non-Volatile Memory Express (NVMe) controller that controls the storage device or to a manageability module coupled between the network input/output device and the NVMe controller. Vendor specific information may be forwarded with the command and used by either the NVMe controller or the manageability module to facilitate execution of the command. Other examples are described and claimed.08-01-2013
20130198312Techniques for Remote Client Access to a Storage Medium Coupled with a Server - Examples are disclosed for client access to a storage medium coupled with a server. A network input/output device for the server may receive a remote direct memory access (RDMA) command including a steering tag (S-Tag) from a client remote to the server. For these examples, the network input/output device may forward the RDMA command to a Non-Volatile Memory Express (NVMe) controller and access provided to a storage medium based on an allocation scheme that assigned the S-Tag to the storage medium. In some other examples, an NVMe controller may generate a memory mapping of one or more storage devices controlled by the NVMe controller to addresses for a base address register (BAR) on a Peripheral Component Interconnect Express (PCIe) bus. PCIe memory access commands received by the NVMe controller may be translated based on the memory mapping to provide access to the storage device. Other examples are described and claimed.08-01-2013
20120059899Communications-Network Data Processing Methods, Communications-Network Data Processing Systems, Computer-Readable Storage Media, Communications-Network Data Presentation Methods, and Communications-Network Data Presentation Systems - Communications-network data processing methods include receiving a request to perform an action involving data associated with a configuration of a communications network or a behavior of the communications network and in response to the receiving of the request, performing the action. Communications-network data presentation methods include receiving information indicating a source of data characterizing a communications network and a desired presentation format of the data, accessing the source to obtain the data characterizing the communications network, and presenting the data according to the desired presentation format.03-08-2012
20120066334INFORMATION PROCESSING SYSTEM, STORAGE MEDIUM STORING AN INFORMATION PROCESSING PROGRAM AND INFORMATION PROCESSING METHOD - A game system includes game apparatuses and a server. One game apparatus has a first memory storing a software program made up of a program and save data, the other game apparatus has a second memory capable of additionally storing a software program, and the server has a third memory storing a program. The one game apparatus transmits the save data in the first memory to the other game apparatus by utilizing a local communication, and the server transmits the program in the third memory to the other game apparatus by utilizing Wi-Fi communications. The other game apparatus receives the save data and then the program, and additionally stores them in the second memory as a software program.03-15-2012
20120066333ABSTRACTING SPECIAL FILE INTERFACES TO CONCURRENTLY SUPPORT MULTIPLE OPERATING SYSTEM LEVELS - Some embodiments of the inventive subject matter are directed to detecting a request to access a symbol via a special file that accesses kernel memory directly. The request can come from an application from a first instance of an operating system (OS) running a first version of the OS. A second instance of the OS, which manages the first OS, receives the request. The second instance of the OS includes a kernel shared between the first and second instances of the OS. The second instance of the OS runs a second version of the OS. Some embodiments are further directed to detecting data associated with the symbol, where the data is in a first data format that is compatible with the second version of the OS but is incompatible with the first version of the OS. Some embodiments are further directed to reformatting the data from the first data format to a second data format compatible with the second version of the OS.03-15-2012
20120096105DEVICE, SYSTEM, AND METHOD OF DISTRIBUTING MESSAGES - Device, system, and method of distributing messages. For example, a data publisher capable of communication with a plurality of subscribers via a network fabric, the data publisher comprising: a memory allocator to allocate a memory area of a local memory unit of the data publisher to be accessible for Remote Direct Memory Access (RDMA) read operations by one or more of the subscribers; and a publisher application to create a message log in said memory area, to send a message to one or more of the subscribers using a multicast transport protocol, and to store in said memory area a copy of said message. A subscriber device handles recovery of lost messages by directly reading the lost messages from the message log of the data publisher using RDMA read operation(s).04-19-2012
20120096104ELECTRONIC DEVICE WITH CUSTOMIZABLE EMBEDDED SOFTWARE AND METHODS THEREFOR - An electronic device comprising: a central processing unit; memory in data communication with the central processing unit; a network connector in data communication with the central processing unit; a firmware image stored in a compressed format within the memory, wherein the firmware image includes a plurality of software components; and an update agent stored within the memory and configured to provide a list of software components for communication out the network connector, wherein the electronic device is configured to communicate the list of software components out the network connector and receive a modified firmware image in a compressed format that includes at least one additional software component.04-19-2012
20130212206METHOD OF DISCOVERING IP ADDRESSES OF SERVERS - A method of discovering IP addresses of servers includes: (i1) beginning discovery processes of management modules and initialization processes of servers; (i2) the management modules sending network packages to one of the servers; (i3) the server responding with its IP address to the management modules, if the server receives the network packages of the management modules; (i4) ending the initialization processes of the server; (i5) each such management module storing the IP address in a database, if any of the management modules receive the IP addresses of the server; (i6) the management modules sending network packages to a next one of the servers with a next MAC address, if a next one of the servers with a next MAC address exists; (i7) repeating (i2) through (i5) if and as necessary in respect of the next one of the servers; and (i8) ending the discovery processes of the management modules.08-15-2013
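The numbered steps above describe a straightforward probe-and-record loop over server MAC addresses. The sketch below mirrors that loop with stand-in helpers; the MAC list, probe function, and addresses are purely illustrative.

#include <stdio.h>

#define N_SERVERS 3

static const char *macs[N_SERVERS] = { "aa:01", "aa:02", "aa:03" };

/* Stand-in for sending a network package and waiting for the server's reply;
 * returns the reported IP address, or NULL if the server did not respond. */
static const char *probe(const char *mac)
{
    return mac[4] == '2' ? NULL : "192.0.2.10";   /* pretend one server is silent */
}

int main(void)
{
    for (int i = 0; i < N_SERVERS; i++) {          /* (i2) and (i6): probe next MAC */
        const char *ip = probe(macs[i]);           /* (i3): server responds with IP */
        if (ip)
            printf("store %s -> %s in database\n", macs[i], ip);   /* (i5) */
    }
    return 0;                                      /* (i8): discovery process ends  */
}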

Patent applications in class COMPUTER-TO-COMPUTER DIRECT MEMORY ACCESSING