Patent application number | Description | Published |
--- | --- | --- |
20080222303 | LATENCY HIDING MESSAGE PASSING PROTOCOL - A method, system, and article of manufacture that provide latency hiding, high bandwidth message passing protocols used for data communication between nodes of a parallel computer system are disclosed. A source node transmits a request to send message to a receiving node. Prior to receiving a clear to send message, the source node continues to send deterministically routed (or fully described) data packets to the receiving node, thereby hiding the latency inherent in the request to send/clear to send message exchange. Once the source node receives the clear to send message, any remaining portion of the message may be sent using partially described packets which may be routed dynamically, thereby maximizing bandwidth. | 09-11-2008 |
20080259816 | Validating a Cabling Topology in a Distributed Computing System - Validating a cabling topology in a distributed computing system composed of cabled nodes connected using data communications cables, each cabled node characterized by cabling dimensions, each cable corresponding to one of the cabling dimensions, includes: receiving a selection from a user of at least one cabled node for topology validation; identifying, for each cabling dimension for each selected cabled node, a shortest cabling path; determining, for each cabling dimension, whether the numbers of cabled nodes in the shortest cabling paths for the selected cabled nodes match; and if, for each cabling dimension, those numbers match: selecting, for each cabling dimension, the number of cabled nodes in the shortest cabling path as a representative value for the cabling dimension, calculating a product of the representative values, and determining whether the product equals the number of selected cabled nodes (see the illustrative sketch after this table). | 10-23-2008 |
20080259916 | OPPORTUNISTIC QUEUEING INJECTION STRATEGY FOR NETWORK LOAD BALANCING - Embodiments of the invention include a method, system, and article of manufacture that provide an opportunistic queuing injection strategy used for data communication between nodes of a parallel computer system. A message may be encapsulated into a set of data packets. When the packets are sent, an opportunistic injection queue may be configured to transmit them to multiple hardware injection ports. This approach allows for complete network link saturation. In a parallel system with network links in multiple dimensions, sending message packets using more than one dimension may substantially increase network throughput. | 10-23-2008 |
20080263320 | Executing a Scatter Operation on a Parallel Computer - Executing a scatter operation on a parallel computer includes: configuring a send buffer on a logical root, the send buffer having positions, each position corresponding to a ranked node in an operational group of compute nodes and storing the contents to be scattered to that ranked node; and repeatedly for each position in the send buffer: broadcasting, by the logical root to each of the other compute nodes on a global combining network, the contents of the current position of the send buffer using a bitwise OR operation, determining, by each compute node, whether the current position in the send buffer corresponds with the rank of that compute node, if the current position corresponds with the rank, receiving the contents and storing the contents in a reception buffer of that compute node, and if the current position does not correspond with the rank, discarding the contents (see the illustrative sketch after this table). | 10-23-2008 |
20080263329 | Parallel-Prefix Broadcast for a Parallel-Prefix Operation on a Parallel Computer - A parallel-prefix broadcast for a parallel-prefix operation on a parallel computer includes: configuring, on each node, a parallel-prefix contribution buffer for storing the node's parallel-prefix contribution; configuring, on each node, a parallel-prefix results buffer for storing results of the parallel-prefix operation, the results buffer having a position for each node that corresponds to that node's rank; and repeatedly for each position in the results buffer: processing in parallel by each node, including: determining, by the node, whether the current position in the results buffer is to include the node's contribution, if the current position is not to include the contribution, contributing the identity element, and if the current position is to include the contribution, contributing the contribution, performing, by each node, the operation using the contributed identity elements and the contributed contributions, yielding a result from the operation, and storing, by each node, the result in the position in the results buffer. | 10-23-2008 |
20080267066 | Remote Direct Memory Access - Methods, parallel computers, and computer program products are disclosed for remote direct memory access. Embodiments include transmitting, from an origin DMA engine on an origin compute node to a plurality of target DMA engines on target compute nodes, a request to send message, the request to send message specifying data to be transferred from the origin DMA engine to data storage on each target compute node; receiving, by each target DMA engine on each target compute node, the request to send message; preparing, by each target DMA engine, to store data according to the data storage reference and the data length, including assigning a base storage address for the data storage reference; sending, by one or more of the target DMA engines, an acknowledgment message acknowledging that all the target DMA engines are prepared to receive a data transmission from the origin DMA engine; receiving, by the origin DMA engine, the acknowledgement message from the one or more of the target DMA engines; and transferring, by the origin DMA engine, data to data storage on each of the target compute nodes according to the data storage reference using a single direct put operation. | 10-30-2008 |
20080281997 | Low Latency, High Bandwidth Data Communications Between Compute Nodes in a Parallel Computer - Methods, parallel computers, and computer program products are disclosed for low latency, high bandwidth data communications between compute nodes in a parallel computer. Embodiments include receiving, by an origin direct memory access (‘DMA’) engine of an origin compute node, data for transfer to a target compute node; sending, by the origin DMA engine of the origin compute node to a target DMA engine on the target compute node, a request to send (‘RTS’) message; transferring, by the origin DMA engine, a predetermined portion of the data to the target compute node using a memory FIFO operation; determining, by the origin DMA engine, whether an acknowledgement of the RTS message has been received from the target DMA engine; if an acknowledgement of the RTS message has not been received, transferring, by the origin DMA engine, another predetermined portion of the data to the target compute node using a memory FIFO operation; and if the acknowledgement of the RTS message has been received by the origin DMA engine, transferring, by the origin DMA engine, any remaining portion of the data to the target compute node using a direct put operation. | 11-13-2008 |
20080281998 | Direct Memory Access Transfer Completion Notification - DMA transfer completion notification includes: inserting, by an origin DMA engine on an origin node in an injection first-in-first-out (‘FIFO’) buffer, a data descriptor for an application message to be transferred to a target node on behalf of an application on the origin node; inserting, by the origin DMA engine, a completion notification descriptor in the injection FIFO buffer after the data descriptor for the message, the completion notification descriptor specifying a packet header for a completion notification packet; transferring, by the origin DMA engine to the target node, the message in dependence upon the data descriptor; sending, by the origin DMA engine, the completion notification packet to a local reception FIFO buffer using a local memory FIFO transfer operation; and notifying, by the origin DMA engine, the application that transfer of the message is complete in response to receiving the completion notification packet in the local reception FIFO buffer. | 11-13-2008 |
20080301327 | Direct Memory Access Transfer Completion Notification - Methods, apparatus, and products are disclosed for DMA transfer completion notification that include: inserting, by an origin DMA engine on an origin compute node in an injection FIFO buffer, a data descriptor for an application message to be transferred to a target compute node on behalf of an application on the origin compute node; inserting, by the origin DMA engine, a completion notification descriptor in the injection FIFO buffer after the data descriptor for the message, the completion notification descriptor specifying an address of a completion notification field in application storage for the application; transferring, by the origin DMA engine to the target compute node, the message in dependence upon the data descriptor; and notifying, by the origin DMA engine, the application that the transfer of the message is complete, including performing a local direct put operation to store predesignated notification data at the address of the completion notification field. | 12-04-2008 |
20080301683 | Performing an Allreduce Operation Using Shared Memory - Methods, apparatus, and products are disclosed for performing an allreduce operation using shared memory that include: receiving, by at least one of a plurality of processing cores on a compute node, an instruction to perform an allreduce operation; establishing, by the core that received the instruction, a job status object for specifying a plurality of shared memory allreduce work units, the plurality of shared memory allreduce work units together performing the allreduce operation on the compute node; determining, by an available core on the compute node, a next shared memory allreduce work unit in the job status object; and performing, by that available core on the compute node, that next shared memory allreduce work unit. | 12-04-2008 |
20080301704 | Controlling Data Transfers from an Origin Compute Node to a Target Compute Node - Methods, apparatus, and products are disclosed for controlling data transfers from an origin compute node to a target compute node that include: receiving, by an application messaging module on the target compute node, an indication of a data transfer from an origin compute node to the target compute node; and administering, by the application messaging module on the target compute node, the data transfer using one or more messaging primitives of a system messaging module in dependence upon the indication. | 12-04-2008 |
20080307194 | Parallel, Low-Latency Method for High-Performance Deterministic Element Extraction From Distributed Arrays - The present invention provides a system and method for extracting elements from distributed arrays on a parallel processing system. The system includes a module that populates a local array with elements from input, a module that submits a largest element value in the local array and a processor ID for a local processor, and a module that determines a globally largest element value from the largest element values submitted by each one of the plurality of processors. The system further includes a module that broadcasts a winning globally largest element value and winning processor ID to the plurality of processors, and a module that increments an element pointer to the next value in the local array if the winning processor ID equals the processor ID for the local processor. | 12-11-2008 |
20080307195 | Parallel, Low-Latency Method for High-Performance Speculative Element Extraction From Distributed Arrays - The present invention provides a system and method for extracting elements from distributed arrays on a parallel processing system. The system includes a module that populates a result array with globally largest elements from the input, a module that generates a partition element, a module that counts the number of local elements greater than the partition and a module that determines the globally largest elements. The method for extracting elements from distributed arrays on a parallel processing system includes populating a result array with globally largest elements from the input, generating a partition element, counting the number of local elements greater than the partition and determining the globally largest elements. | 12-11-2008 |
20080313341 | Data Communications - Data communications, including issuing, by an application program to a high level data communications library, a request for initialization of a data communications service; issuing to a low level data communications library a request for registration of data communications functions; registering the data communications functions, including instantiating a factory object for each of the one or more data communications functions; issuing by the application program an instruction to execute a designated data communications function; issuing, to the low level data communications library, an instruction to execute the designated data communications function, including passing to the low level data communications library a call parameter that identifies a factory object; creating with the identified factory object the data communications object that implements the data communications function according to the protocol; and executing by the low level data communications library the designated data communications function. | 12-18-2008 |
20080313376 | Heuristic Status Polling - Methods, compute nodes, and computer program products are provided for heuristic status polling of a component in a computing system. Embodiments include receiving, by a polling module from a requesting application, a status request requesting status of a component; determining, by the polling module, whether an activity history for the component satisfies heuristic polling criteria; polling, by the polling module, the component for status if the activity history for the component satisfies the heuristic polling criteria; and not polling, by the polling module, the component for status if the activity history for the component does not satisfy the heuristic polling criteria. | 12-18-2008 |
20090006663 | Direct Memory Access ('DMA') Engine Assisted Local Reduction - Methods, compute nodes, and computer program products are provided for DMA engine assisted local reduction. Embodiments include receiving, by a DMA engine, one or more data descriptors, each descriptor identifying a buffer containing an array for reduction; selecting, in dependence upon the arrays in the buffers and local hardware functional units available to the DMA engine, at least one local hardware functional unit; and reducing one or more arrays in the buffers identified by the data descriptors with the selected local hardware functional unit. | 01-01-2009 |
20090031001 | Repeating Direct Memory Access Data Transfer Operations for Compute Nodes in a Parallel Computer - Methods, apparatus, and products are disclosed for repeating DMA data transfer operations for nodes in a parallel computer that include: receiving, by a DMA engine on an origin node, a RGET data descriptor that specifies a DMA transfer operation data descriptor and a second RGET data descriptor, the second RGET data descriptor also specifying the DMA transfer operation data descriptor; creating, in dependence upon the RGET data descriptor, an RGET packet that contains the DMA transfer operation data descriptor and the second RGET data descriptor; processing the DMA transfer operation data descriptor included in the RGET packet, including performing a DMA data transfer operation between the origin node and a target node in dependence upon the DMA transfer operation data descriptor; and processing the second RGET data descriptor included in the RGET packet, thereby performing again the DMA transfer operation in dependence upon the DMA transfer operation data descriptor. | 01-29-2009 |
20090031055 | Chaining Direct Memory Access Data Transfer Operations for Compute Nodes in a Parallel Computer - Methods, systems, and products are disclosed for chaining DMA data transfer operations for compute nodes in a parallel computer that include: receiving, by an origin DMA engine on an origin node in an origin injection FIFO buffer for the origin DMA engine, a RGET data descriptor specifying a DMA transfer operation data descriptor on the origin node and a second RGET data descriptor on the origin node, the second RGET data descriptor specifying a target RGET data descriptor on the target node, the target RGET data descriptor specifying an additional DMA transfer operation data descriptor on the origin node; creating, by the origin DMA engine, an RGET packet in dependence upon the RGET data descriptor, the RGET packet containing the DMA transfer operation data descriptor and the second RGET data descriptor; and transferring, by the origin DMA engine to a target DMA engine on the target node, the RGET packet. | 01-29-2009 |
20090031325 | Direct Memory Access Transfer Completion Notification - Methods, systems, and products are disclosed for DMA transfer completion notification that include: inserting, by an origin DMA on an origin node in an origin injection FIFO, a data descriptor for an application message; inserting, by the origin DMA, a reflection descriptor in the origin injection FIFO, the reflection descriptor specifying a remote get operation for injecting a completion notification descriptor in a reflection injection FIFO on a reflection node; transferring, by the origin DMA to a target node, the message in dependence upon the data descriptor; in response to completing the message transfer, transferring, by the origin DMA to the reflection node, the completion notification descriptor in dependence upon the reflection descriptor; receiving, by the origin DMA from the reflection node, a completion packet; and notifying, by the origin DMA in response to receiving the completion packet, the origin node's processing core that the message transfer is complete. | 01-29-2009 |
20090037511 | Effecting a Broadcast with an Allreduce Operation on a Parallel Computer - Methods, parallel computers, and computer program products are disclosed for effecting a broadcast with an allreduce operation on a parallel computer, the parallel computer comprising a plurality of compute nodes, the compute nodes organized into at least one operational group of compute nodes for collective parallel operations of the parallel computer, each compute node in the operational group assigned a unique rank, the compute nodes of the operational group coupled for data communications through a global combining network; and one compute node assigned to be a logical root. Embodiments include configuring, by the logical root node, a send buffer having a contribution to be broadcast to each ranked node in the operational group; configuring, by all ranked nodes other than the logical root, a receive buffer for receiving the contribution from the logical root; and repeatedly for each element of the contribution of the logical root in the send buffer: contributing, by the logical root, the element of the contribution in the send buffer; injecting, by all ranked nodes other than the logical root, one or more zeros corresponding to a size of the element; performing, by all the compute nodes of the operational group, an allreduce operation with a bitwise OR using the element and the injected zeros, yielding a result for the allreduce operation; and storing in each receive buffer, by all ranked nodes other than the logical root, the result of the allreduce. | 02-05-2009 |
20090037598 | Providing Nearest Neighbor Point-to-Point Communications Among Compute Nodes of an Operational Group in a Global Combining Network of a Parallel Computer - Methods, apparatus, and products are disclosed for providing nearest neighbor point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: identifying each link in the global combining network for each compute node of the operational group; designating one of a plurality of point-to-point class routing identifiers for each link such that no compute node in the operational group is connected to two adjacent compute nodes in the operational group with links designated for the same class routing identifiers; and configuring each compute node of the operational group for point-to-point communications with each adjacent compute node in the global combining network through the link between that compute node and that adjacent compute node using that link's designated class routing identifier. | 02-05-2009 |
20090037773 | Link Failure Detection in a Parallel Computer - Methods, apparatus, and products are disclosed for link failure detection in a parallel computer including compute nodes connected in a rectangular mesh network, each pair of adjacent compute nodes in the rectangular mesh network connected together using a pair of links, that includes: assigning each compute node to either a first group or a second group such that adjacent compute nodes in the rectangular mesh network are assigned to different groups; sending, by each of the compute nodes assigned to the first group, a first test message to each adjacent compute node assigned to the second group; determining, by each of the compute nodes assigned to the second group, whether the first test message was received from each adjacent compute node assigned to the first group; and notifying a user, by each of the compute nodes assigned to the second group, whether the first test message was received (see the illustrative sketch after this table). | 02-05-2009 |
20090040946 | Executing an Allgather Operation on a Parallel Computer - Methods, apparatus, and products are disclosed for executing an allgather operation on a parallel computer that includes a plurality of compute nodes organized into at least one operational group of compute nodes for collective parallel operations, each compute node in the operational group assigned a unique rank, that includes: determining a contention-free logical ring topology for the compute nodes in the operational group; configuring, for each compute node in the operational group according to the contention-free logical ring topology, a routing table to specify a forwarding path to the next compute node in the logical ring topology; and repeatedly, for each compute node in the operational group until each compute node has received contributions for all of the other compute nodes in the operational group, forwarding a contribution for the allgather operation to the next compute node in the logical ring topology along the forwarding path. | 02-12-2009 |
20090043912 | Providing Full Point-To-Point Communications Among Compute Nodes of an Operational Group in a Global Combining Network of a Parallel Computer - Methods, apparatus, and products are disclosed for providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: receiving a network packet in a compute node, the network packet specifying a destination compute node; selecting, in dependence upon the destination compute node, at least one of the links for the compute node along which to forward the network packet toward the destination compute node; and forwarding the network packet along the selected link to the adjacent compute node connected to the compute node through the selected link. | 02-12-2009 |
20090043988 | Configuring Compute Nodes of a Parallel Computer in an Operational Group into a Plurality of Independent Non-Overlapping Collective Networks - Methods, apparatus, and products are disclosed for configuring compute nodes of a parallel computer in an operational group into a plurality of independent non-overlapping collective networks, the compute nodes in the operational group connected together for data communications through a global combining network, that include: partitioning the compute nodes in the operational group into a plurality of non-overlapping subgroups; designating one compute node from each of the non-overlapping subgroups as a master node; and assigning, to the compute nodes in each of the non-overlapping subgroups, class routing instructions that organize the compute nodes in that non-overlapping subgroup as a collective network such that the master node is a physical root. | 02-12-2009 |
20090052462 | Line-Plane Broadcasting in a Data Communications Network of a Parallel Computer - Methods, apparatus, and products are disclosed for line-plane broadcasting in a data communications network of a parallel computer, the parallel computer comprising a plurality of compute nodes connected together through the network, the network optimized for point to point data communications and characterized by at least a first dimension, a second dimension, and a third dimension, that include: initiating, by a broadcasting compute node, a broadcast operation, including sending a message to all of the compute nodes along an axis of the first dimension for the network; sending, by each compute node along the axis of the first dimension, the message to all of the compute nodes along an axis of the second dimension for the network; and sending, by each compute node along the axis of the second dimension, the message to all of the compute nodes along an axis of the third dimension for the network. | 02-26-2009 |
20090055474 | Line-Plane Broadcasting in a Data Communications Network of a Parallel Computer - Methods, apparatus, and products are disclosed for line-plane broadcasting in a data communications network of a parallel computer, the parallel computer comprising a plurality of compute nodes connected together through the network, the network optimized for point to point data communications and characterized by at least a first dimension, a second dimension, and a third dimension, that include: initiating, by a broadcasting compute node, a broadcast operation, including sending a message to all of the compute nodes along an axis of the first dimension for the network; sending, by each compute node along the axis of the first dimension, the message to all of the compute nodes along an axis of the second dimension for the network; and sending, by each compute node along the axis of the second dimension, the message to all of the compute nodes along an axis of the third dimension for the network. | 02-26-2009 |
20090113308 | Administering Communications Schedules for Data Communications Among Compute Nodes in a Data Communications Network of a Parallel Computer - Methods, apparatus, and products are disclosed for creating and administering communications schedules for data communications among compute nodes in a data communications network of a parallel computer that include: receiving a communications schedule specifying data communications steps in a message passing operation performed by the compute nodes in the data communications network of the parallel computer; parsing the communications schedule to identify the data communications steps; and generating a graphical representation of the communications schedule, including graphing the data communications steps for the message passing operation. | 04-30-2009 |
20090138892 | Dispatching Packets on a Global Combining Network of a Parallel Computer - Methods, apparatus, and products are disclosed for dispatching packets on a global combining network of a parallel computer comprising a plurality of nodes connected for data communications using the network capable of performing collective operations and point to point operations that include: receiving, by an origin system messaging module on an origin node from an origin application messaging module on the origin node, a storage identifier and an operation identifier, the storage identifier specifying storage containing an application message for transmission to a target node, and the operation identifier specifying a message passing operation; packetizing, by the origin system messaging module, the application message into network packets for transmission to the target node, each network packet specifying the operation identifier and an operation type for the message passing operation specified by the operation identifier; and transmitting, by the origin system messaging module, the network packets to the target node. | 05-28-2009 |
20090154486 | Tracking Network Contention - Methods, apparatus, and products for tracking network contention on links among compute nodes of an operational group in a point-to-point data communications network of a parallel computer are disclosed. In embodiments of the present invention, each compute node is connected to an adjacent compute node in the point-to-point data communications network through a link. Tracking network contention according to embodiments of the present invention includes maintaining, by a network contention module on each compute node in the operational group, a local contention counter for each compute node, each local contention counter representing network contention on links among the compute nodes originating from the compute node; and maintaining a global contention counter, the global contention counter representing network contention currently on all links among the compute nodes in the operational group. | 06-18-2009 |
20090177828 | Executing Application Function Calls in Response to an Interrupt - Executing application function calls in response to an interrupt including creating a thread; receiving an interrupt having an interrupt type; determining whether a value of a semaphore represents that interrupts are disabled; if the value of the semaphore represents that interrupts are not disabled: calling, by the thread, one or more preconfigured functions in dependence upon the interrupt type of the interrupt; yielding the thread; and if the value of the semaphore represents that interrupts are disabled: setting the value of the semaphore to represent to a kernel that interrupts are hard-disabled; and hard-disabling interrupts at the kernel. | 07-09-2009 |
20090245134 | Broadcasting A Message In A Parallel Computer - Methods, systems, and products are disclosed for broadcasting a message in a parallel computer that includes: transmitting, by the logical root to all of the nodes directly connected to the logical root, a message; and for each node except the logical root: receiving the message; if that node is the physical root, then transmitting the message to all of the child nodes except the child node from which the message was received; if that node received the message from a parent node and if that node is not a leaf node, then transmitting the message to all of the child nodes; and if that node received the message from a child node and if that node is not the physical root, then transmitting the message to all of the child nodes except the child node from which the message was received and transmitting the message to the parent node. | 10-01-2009 |
20090248894 | Determining A Path For Network Traffic Between Nodes In A Parallel Computer - Determining a path for network traffic between a source compute node and a destination compute node in a parallel computer including identifying a group of compute nodes, the group of compute nodes having topological network locations included in a predefined topological shape; selecting, from the predefined topological shape, in dependence upon a global contention counter stored on the source compute node, a path on which to send a data communications message from the source compute node to the destination compute node; and sending, by the messaging module of the source compute node, the data communications message along the selected path for network traffic between the source and destination compute nodes. | 10-01-2009 |
20090248895 | Determining A Path For Network Traffic Between Nodes In A Parallel Computer - Determining a path for network traffic between a source compute node and a destination compute node in a parallel computer including: beginning with an identified group of compute nodes that includes the source compute node and iteratively until an identified group of compute nodes includes the destination compute node: identifying a group of compute nodes, the group of compute nodes having topological network locations included in a predefined topological shape; selecting a path for network traffic between compute nodes having topological network locations included in the predefined topological shape, and when an identified group of compute nodes includes the destination compute node: selecting a final path for network traffic; and sending a data communications message along the path for network traffic between the source compute node and the destination compute node, the path including, in order of selection, the selected paths and the selected final path. | 10-01-2009 |
20090300384 | Reducing Power Consumption While Performing Collective Operations On A Plurality Of Compute Nodes - Methods, apparatus, and products are disclosed for reducing power consumption while performing collective operations on a plurality of compute nodes that include: receiving, by each compute node, instructions to perform a type of collective operation; selecting, by each compute node from a plurality of collective operations for the collective operation type, a particular collective operation in dependence upon power consumption characteristics for each of the plurality of collective operations; and executing, by each compute node, the selected collective operation. | 12-03-2009 |
20090300385 | Reducing Power Consumption While Synchronizing A Plurality Of Compute Nodes During Execution Of A Parallel Application - Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation. | 12-03-2009 |
20090300386 | Reducing power consumption during execution of an application on a plurality of compute nodes - Methods, apparatus, and products are disclosed for reducing power consumption during execution of an application on a plurality of compute nodes that include: powering up, during compute node initialization, only a portion of computer memory of the compute node, including configuring an operating system for the compute node in the powered up portion of computer memory; receiving, by the operating system, an instruction to load an application for execution; allocating, by the operating system, additional portions of computer memory to the application for use during execution; powering up the additional portions of computer memory allocated for use by the application during execution; and loading, by the operating system, the application into the powered up additional portions of computer memory. | 12-03-2009 |
20090300394 | Reducing Power Consumption During Execution Of An Application On A Plurality Of Compute Nodes - Methods, apparatus, and products are disclosed for reducing power consumption during execution of an application on a plurality of compute nodes that include: executing, by each compute node, an application, the application including power consumption directives corresponding to one or more portions of the application; identifying, by each compute node, the power consumption directives included within the application during execution of the portions of the application corresponding to those identified power consumption directives; and reducing power, by each compute node, to one or more components of that compute node according to the identified power consumption directives during execution of the portions of the application corresponding to those identified power consumption directives. | 12-03-2009 |
20090300399 | Profiling power consumption of a plurality of compute nodes while processing an application - Methods, apparatus, and products are disclosed for profiling power consumption of a plurality of compute nodes while processing an application that include: executing the application on the plurality of compute nodes; monitoring performance characteristics for components of the plurality of compute nodes during execution of the application; and recording, in a power profile for the application, power consumption during execution of the application in dependence upon the performance characteristics for components of the plurality of compute nodes. | 12-03-2009 |
20090307036 | Budget-Based Power Consumption For Application Execution On A Plurality Of Compute Nodes - Methods, apparatus, and products are disclosed for budget-based power consumption for application execution on a plurality of compute nodes that include: assigning an execution priority to each of one or more applications; executing, on the plurality of compute nodes, the applications according to the execution priorities assigned to the applications at an initial power level provided to the compute nodes until a predetermined power consumption threshold is reached; and applying, upon reaching the predetermined power consumption threshold, one or more power conservation actions to reduce power consumption of the plurality of compute nodes during execution of the applications. | 12-10-2009 |
20090307703 | Scheduling Applications For Execution On A Plurality Of Compute Nodes Of A Parallel Computer To Manage Temperature Of The Nodes During Execution - Methods, apparatus, and products are disclosed for scheduling applications for execution on a plurality of compute nodes of a parallel computer to manage temperature of the plurality of compute nodes during execution that include: identifying one or more applications for execution on the plurality of compute nodes; creating a plurality of physically discontiguous node partitions in dependence upon temperature characteristics for the compute nodes and a physical topology for the compute nodes, each discontiguous node partition specifying a collection of physically adjacent compute nodes; and assigning, for each application, that application to one or more of the discontiguous node partitions for execution on the compute nodes specified by the assigned discontiguous node partitions. | 12-10-2009 |
20090307708 | Thread Selection During Context Switching On A Plurality Of Compute Nodes - Methods, apparatus, and products are disclosed for thread selection during context switching on a plurality of compute nodes that includes: executing, by a compute node, an application using a plurality of threads of execution, including executing one or more of the threads of execution; selecting, by the compute node from a plurality of available threads of execution for the application, a next thread of execution in dependence upon power characteristics for each of the available threads; determining, by the compute node, whether criteria for a thread context switch are satisfied; and performing, by the compute node, the thread context switch if the criteria for a thread context switch are satisfied, including executing the next thread of execution. | 12-10-2009 |
20090327444 | Dynamic Network Link Selection For Transmitting A Message Between Compute Nodes Of A Parallel Computer - Methods, apparatus, and products are disclosed for dynamic network link selection for transmitting a message between nodes of a parallel computer. The nodes are connected using a data communications network. Each node connects to adjacent nodes in the data communications network through a plurality of network links. Each link provides a different data communication path through the network between the nodes of the parallel computer. Such dynamic link selection includes: identifying, by an origin node, a current message for transmission to a target node; determining, by the origin node, whether transmissions of previous messages to the target node have completed; selecting, by the origin node from the plurality of links for the origin node, a link in dependence upon the determination and link characteristics for the plurality of links for the origin node; and transmitting, by the origin node, the current message to the target node using the selected link. | 12-31-2009 |
20090327464 | Load Balanced Data Processing Performed On An Application Message Transmitted Between Compute Nodes - Methods, apparatus, and products are disclosed for load balanced data processing performed on an application message transmitted between compute nodes of a parallel computer that include: identifying, by an origin compute node, an application message for transmission to a target compute node, the message to be processed by a data processing operation; determining, by the origin compute node, origin sub-operations used to carry out a portion of the data processing operation on the origin compute node; determining, by the origin compute node, target sub-operations used to carry out a remaining portion of the data processing operation on the target compute node; processing, by the origin compute node, the message using the origin sub-operations; and transmitting, by the origin compute node, the processed message to the target compute node for processing using the target sub-operations. | 12-31-2009 |
20100005189 | Pacing Network Traffic Among A Plurality Of Compute Nodes Connected Using A Data Communications Network - Methods, apparatus, and products are disclosed for pacing network traffic among a plurality of compute nodes connected using a data communications network. The network has a plurality of network regions, and the plurality of compute nodes are distributed among these network regions. Pacing network traffic among a plurality of compute nodes connected using a data communications network includes: identifying, by a compute node for each region of the network, a roundtrip time delay for communicating with at least one of the compute nodes in that region; determining, by the compute node for each region, a pacing algorithm for that region in dependence upon the roundtrip time delay for that region; and transmitting, by the compute node, network packets to at least one of the compute nodes in at least one of the network regions in dependence upon the pacing algorithm for that region. | 01-07-2010 |
20100005326 | Profiling An Application For Power Consumption During Execution On A Compute Node - Methods, apparatus, and products are disclosed for profiling an application for power consumption during execution on a compute node that include: receiving an application for execution on a compute node; identifying a hardware power consumption profile for the compute node, the hardware power consumption profile specifying power consumption for compute node hardware during performance of various processing operations; determining a power consumption profile for the application in dependence upon the application and the hardware power consumption profile for the compute node; and reporting the power consumption profile for the application. | 01-07-2010 |
20100014523 | Providing Point To Point Communications Among Compute Nodes In A Global Combining Network Of A Parallel Computer - Methods, apparatus, and products are disclosed for providing point to point data communications among compute nodes in a global combining network of a parallel computer that include: determining a class route identifier available for all of the nodes along a communications path from an origin node to a target node; configuring network hardware of each node along the communications path with routing instructions in dependence upon the available class route identifier and the network's topology; transmitting, by the origin node along the communications path, a network packet to the target node, including encoding the available class route identifier in the network packet; and routing, by the network hardware of each node along the communications path, the network packet to the target node in dependence upon the routing instructions for each node and the available class route identifier. | 01-21-2010 |
20100017420 | Performing An All-To-All Data Exchange On A Plurality Of Data Buffers By Performing Swap Operations - Methods, apparatus, and products are disclosed for performing an all-to-all exchange on n data buffers using XOR swap operations. Each data buffer has n data elements. Performing an all-to-all exchange on n data buffers using XOR swap operations includes, for each rank value of i and j where i is greater than j and where i is less than or equal to n: selecting data element i in data buffer j; selecting data element j in data buffer i; and exchanging contents of data element i in data buffer j with contents of data element j in data buffer i using an XOR swap operation (see the illustrative sketch after this table). | 01-21-2010 |
20100023631 | Processing Data Access Requests Among A Plurality Of Compute Nodes - Methods, apparatus, and products are disclosed for processing data access requests among a plurality of compute nodes. One compute node operates as a processing node, and one compute node operates as a requesting node. The processing node receives, from the requesting node, a data access request to access data currently being processed by the processing node. The processing node also receives, from the requesting node, a processing directive. The processing directive specifies data processing operations to be performed on the data specified by the data access request. The processing node performs, on behalf of the requesting node, the data processing operations specified by the processing directive on the data specified by the data access request. The processing node transmits, to the requesting node, results of the data processing operations performed on the data by the processing node on behalf of the requesting node. | 01-28-2010 |
20100023723 | Paging Memory Contents Between A Plurality Of Compute Nodes In A Parallel Computer - Methods, apparatus, and products are disclosed for paging memory contents between a plurality of compute nodes in a parallel computer that includes: identifying, by a master node, a memory allocation request for an application executing on the master node, the memory allocation request requesting additional computer memory for use by the application during execution; requesting, by the master node from a slave node, an available memory notification specifying to the master node the computer memory available for allocation on the slave node; allocating, by the master node, at least a portion of the computer memory available for allocation on the slave node in dependence upon the memory allocation request and the available memory notification; and transferring, by the master node, contents of a portion of the computer memory on the master node to the allocated portion of the computer memory on the slave node. | 01-28-2010 |
20100037035 | Generating An Executable Version Of An Application Using A Distributed Compiler Operating On A Plurality Of Compute Nodes - Methods, apparatus, and products are disclosed for generating an executable version of an application using a distributed compiler operating on a plurality of compute nodes that include: receiving, by each compute node, a portion of source code for an application; compiling, in parallel by each compute node, the portion of the source code received by that compute node into a portion of object code for the application; performing, in parallel by each compute node, inter-procedural analysis on the portion of the object code of the application for that compute node, including sharing results of the inter-procedural analysis among the compute nodes; optimizing, in parallel by each compute node, the portion of the object code of the application for that compute node using the shared results of the inter-procedural analysis; and generating the executable version of the application in dependence upon the optimized portions of the object code of the application. | 02-11-2010 |
20100095303 | Balancing A Data Processing Load Among A Plurality Of Compute Nodes In A Parallel Computer - Methods, apparatus, and products are disclosed for balancing a data processing load among a plurality of compute nodes in a parallel computer that include: partitioning application data for processing on the plurality of compute nodes into data chunks; receiving, by each compute node, at least one of the data chunks for processing; estimating, by each compute node, processing time involved in processing the data chunks received by that compute node for processing; and redistributing, by at least one of the compute nodes to at least one of the other compute nodes, a portion of the data chunks received by that compute node in dependence upon the processing time estimated by that compute node. | 04-15-2010 |
20100191822 | Broadcasting Data In A Hybrid Computing Environment - Methods, apparatus, and products for broadcasting data in a hybrid computing environment that includes a host computer, a number of accelerators, the host computer and the accelerators adapted to one another for data communications by a system level message passing module, the host computer having local memory shared remotely with the accelerators, the accelerators having local memory for the accelerators shared remotely with the host computer, where broadcasting data according to embodiments of the present invention includes: writing, by the host computer remotely to the shared local memory for the accelerators, the data to be broadcast; reading, by each of the accelerators from the shared local memory for the accelerators, the data; and notifying the host computer, by the accelerators, that the accelerators have read the data. | 07-29-2010 |
20100191823 | Data Processing In A Hybrid Computing Environment - Data processing in a hybrid computing environment that includes a host computer, a plurality of accelerators, the host computer and the accelerators adapted to one another for data communications by a system level message passing module, the host computer having local memory shared remotely with the accelerators, the accelerators having local memory for the plurality of accelerators shared remotely with the host computer, where data processing according to embodiments of the present invention includes performing, by the plurality of accelerators, a local reduction operation with the local shared memory for the accelerators; writing remotely, by one of the plurality of accelerators to the shared memory local to the host computer, a result of the local reduction operation; and reading, by the host computer from shared memory local to the host computer, the result of the local reduction operation. | 07-29-2010 |
20100191909 | Administering Registered Virtual Addresses In A Hybrid Computing Environment Including Maintaining A Cache Of Ranges Of Currently Registered Virtual Addresses - Administering registered virtual addresses in a hybrid computing environment that includes a host computer, an accelerator, the accelerator architecture optimized, with respect to the host computer architecture, for speed of execution of a particular class of computing functions, the host computer and the accelerator adapted to one another for data communications by a system level message passing module, where administering registered virtual addresses includes maintaining a cache of ranges of currently registered virtual addresses, the cache including entries associating a range of currently registered virtual addresses, a handle representing physical addresses mapped to the range of currently registered virtual addresses, and a counter; determining whether to register ranges of virtual addresses in dependence upon the cache of ranges of currently registered virtual addresses; and determining whether to deregister ranges of virtual addresses in dependence upon the cache of ranges of currently registered virtual addresses. | 07-29-2010 |
20100191917 | Administering Registered Virtual Addresses In A Hybrid Computing Environment Including Maintaining A Watch List Of Currently Registered Virtual Addresses By An Operating System - Administering registered virtual addresses in a hybrid computing environment that includes a host computer and an accelerator, the accelerator architecture optimized, with respect to the host computer architecture, for speed of execution of a particular class of computing functions, the host computer and the accelerator adapted to one another for data communications by a system level message passing module, where administering registered virtual addresses includes maintaining, by an operating system, a watch list of ranges of currently registered virtual addresses; upon a change in physical to virtual address mappings of a particular range of virtual addresses falling within the ranges included in the watch list, notifying the system level message passing module by the operating system of the change; and updating, by the system level message passing module, a cache of ranges of currently registered virtual addresses to reflect the change in physical to virtual address mappings. | 07-29-2010 |
20100191923 | Data Processing In A Computing Environment - Methods, apparatus, and products for data processing in a computing environment including allocating, by an operating system for an application, virtual address spaces, with each virtual address space mapped to the same physical address space and each virtual address space associated with an operation; receiving, from the application, an instruction to store a value in a specific virtual address, the specific virtual address contained within one of the allocated virtual address spaces; identifying a physical address associated with the specific virtual address; performing, with the value and the contents of the identified physical address, the operation associated with the virtual address space containing the specific virtual address; and storing a result of the operation in the identified physical address. | 07-29-2010 |
20100198997 | Direct Memory Access In A Hybrid Computing Environment - Direct memory access (‘DMA’) in a hybrid computing environment that includes a host computer, an accelerator, the host computer and the accelerator adapted to one another for data communications by a system level message passing module, where DMA includes identifying, by the system level message passing module, a buffer of data to be transferred from the host computer to the accelerator according to a DMA protocol; segmenting, by the system level message passing module, the buffer of data into a predefined number of memory segments; pinning, by the system level message passing module, the memory segments against paging; and asynchronously with respect to pinning the memory segments, effecting, by the system level message passing module, DMA transfers of the pinned memory segments from the host computer to the accelerator. | 08-05-2010 |
20100268852 | Replenishing Data Descriptors in a DMA Injection FIFO Buffer - Methods, apparatus, and products are disclosed for replenishing data descriptors in a Direct Memory Access (‘DMA’) injection first-in-first-out (‘FIFO’) buffer that include: determining, by a messaging module on an origin compute node, whether a number of data descriptors in a DMA injection FIFO buffer exceeds a predetermined threshold, each data descriptor specifying an application message for transmission to a target compute node; queuing, by the messaging module, a plurality of new data descriptors in a pending descriptor queue if the number of the data descriptors in the DMA injection FIFO buffer exceeds the predetermined threshold; establishing, by the messaging module, interrupt criteria that specify when to replenish the injection FIFO buffer with the plurality of new data descriptors in the pending descriptor queue; and injecting, by the messaging module, the plurality of new data descriptors into the injection FIFO buffer in dependence upon the interrupt criteria. | 10-21-2010 |
20100274997 | Executing a Gather Operation on a Parallel Computer - Methods, apparatus, and computer program products are disclosed for executing a gather operation on a parallel computer according to embodiments of the present invention. Embodiments include configuring, by the logical root, a result buffer on the logical root, the result buffer having positions, each position corresponding to a ranked node in the operational group and for storing contribution data gathered from that ranked node. Embodiments also include repeatedly for each position in the result buffer: determining, by each compute node of an operational group, whether the current position in the result buffer corresponds with the rank of the compute node, if the current position in the result buffer corresponds with the rank of the compute node, contributing, by that compute node, the compute node's contribution data, if the current position in the result buffer does not correspond with the rank of the compute node, contributing, by that compute node, a value of zero for the contribution data, and storing, by the logical root in the current position in the result buffer, results of a bitwise OR operation of all the contribution data by all compute nodes of the operational group for the current position, the results received through the global combining network. | 10-28-2010 |
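The gather mechanism above rests on the observation that a bitwise OR of one real contribution with all-zero contributions reproduces the real contribution. Below is a minimal single-process sketch in C that simulates the combining network as a loop over nodes; the node count and the contribution values are assumptions made purely for illustration, not the patented implementation.

```c
#include <stdio.h>

#define NODES 4

int main(void)
{
    unsigned int contribution[NODES] = {0x11, 0x22, 0x33, 0x44}; /* per-rank data (illustrative) */
    unsigned int result[NODES];

    /* One pass of the combining network per result-buffer position: every node
       contributes either its data (if the position matches its rank) or zero,
       and the logical root keeps the bitwise OR of all contributions. */
    for (int pos = 0; pos < NODES; pos++) {
        unsigned int reduced = 0;
        for (int rank = 0; rank < NODES; rank++)
            reduced |= (pos == rank) ? contribution[rank] : 0u;
        result[pos] = reduced;
    }

    for (int pos = 0; pos < NODES; pos++)
        printf("result[%d] = 0x%x\n", pos, result[pos]);
    return 0;
}
```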
20110035556 | Reducing Remote Reads Of Memory In A Hybrid Computing Environment By Maintaining Remote Memory Values Locally - Reducing remote reads of memory in a hybrid computing environment by maintaining remote memory values locally, the hybrid computing environment including a host computer and a plurality of accelerators, the host computer and the accelerators each having local memory shared remotely with the other, including writing to the shared memory of the host computer packets of data representing changes in accelerator memory values, incrementing, in local memory and in remote shared memory on the host computer, a counter value representing the total number of packets written to the host computer, reading by the host computer from the shared memory in the host computer the written data packets, moving the read data to application memory, and incrementing, in both local memory and in remote shared memory on the accelerator, a counter value representing the total number of packets read by the host computer. | 02-10-2011 |
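The written/read counter scheme above can be modeled in a few lines: the producer bumps a "written" count after each packet, and the consumer drains every packet it has not yet seen before bumping its "read" count. The sketch below is a single-process C model; the ring size, the struct layout, and collapsing the local and remote counter copies into one variable are assumptions for illustration only.

```c
#include <stdio.h>

#define RING 8

/* Minimal model of the packet counters: in the real system each counter is
   mirrored in local memory and in the remote shared memory. */
struct shared { int packets[RING]; long written; long read; };

static void producer_write(struct shared *s, int value)
{
    s->packets[s->written % RING] = value;
    s->written++;                       /* total packets written so far */
}

static void consumer_drain(struct shared *s)
{
    while (s->read < s->written) {      /* anything written but not yet read? */
        printf("host read packet %d\n", s->packets[s->read % RING]);
        s->read++;                      /* total packets read so far */
    }
}

int main(void)
{
    struct shared s = {0};
    producer_write(&s, 10);
    producer_write(&s, 20);
    consumer_drain(&s);
    return 0;
}
```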
20110191785 | Terminating An Accelerator Application Program In A Hybrid Computing Environment - Terminating an accelerator application program in a hybrid computing environment that includes a host computer having a host computer architecture and an accelerator having an accelerator architecture, where the host computer and the accelerator are adapted to one another for data communications by a system level message passing module (‘SLMPM’), and terminating an accelerator application program in a hybrid computing environment includes receiving, by the SLMPM from a host application executing on the host computer, a request to terminate an accelerator application program executing on the accelerator; terminating, by the SLMPM, execution of the accelerator application program; returning, by the SLMPM to the host application, a signal indicating that execution of the accelerator application program was terminated; and performing, by the SLMPM, a cleanup of the execution environment associated with the terminated accelerator application program. | 08-04-2011 |
20110197204 | Processing Data Communications Messages With Input/Output Control Blocks - Processing data communications messages with an Input/Output Control Block (‘IOCB’) ring that includes a number of IOCBs characterized by a priority and arranged in sequential priority for serial operation, where processing the messages includes depositing message data in one or more IOCBs according to depositing criteria; processing, by a message processing module associated with an IOCB having a priority less than the present value of a state counter, the message data in the IOCB while a message processing module associated with an IOCB having a next priority waits; increasing, upon completion of processing the message data of the IOCB having a priority less than the present value of the state counter, the present value of the state counter to a value greater than the next priority; and processing, by the message processing module associated with the IOCB having the next priority, the message data in the IOCB. | 08-11-2011 |
20110225226 | Assigning A Unique Identifier To A Communicator - Creating, by a parent master process of a parent communicator, a child communicator, including configuring the child communicator with a child master process, wherein a communicator includes a collection of one or more processes executing on compute nodes of a distributed computing system; determining, by the parent master process, whether a unique identifier is available to assign to the child communicator; if a unique identifier is available to assign to the child communicator, assigning, by the parent master process, the available unique identifier to the child communicator; and if a unique identifier is not available to assign to the child communicator: retrieving, by the parent master process, an available unique identifier from a master process of another communicator in a tree of communicators and assigning the retrieved unique identifier to the child communicator. | 09-15-2011 |
20110225255 | Discovering A Resource In A Distributed Computing System - Sending, by a node requesting information regarding a resource to one or more nodes in a distributed computing system, an active message to perform a collective operation; contributing, by each node not having the resource, a value of zero to the collective operation; contributing, by a node having the resource, the node's rank; storing the result of the collective operation in a buffer of the requesting node; and identifying, in dependence upon the result of the collective operation, the rank of the node having the resource. | 09-15-2011 |
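Because every node without the resource contributes zero, a single reduction over the contributions yields the rank of the owning node. A minimal single-process sketch in C follows; the node count, the owner's rank, and the use of a max reduction (the abstract does not name the operator) are assumptions for illustration.

```c
#include <stdio.h>

#define NODES 8
#define RESOURCE_HOLDER 5   /* hypothetical: rank that owns the resource */

int main(void)
{
    /* Each node contributes zero unless it holds the resource, in which case it
       contributes its own rank; a max reduction then yields that rank.
       (Caveat: if rank 0 held the resource the result would be indistinguishable
       from "no owner" under this toy encoding.) */
    int result = 0;
    for (int rank = 0; rank < NODES; rank++) {
        int contribution = (rank == RESOURCE_HOLDER) ? rank : 0;
        if (contribution > result)
            result = contribution;      /* max reduction */
    }
    printf("resource is on rank %d\n", result);
    return 0;
}
```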
20110225297 | Controlling Access To A Resource In A Distributed Computing System With A Distributed Access Request Queue - Controlling access to a resource in a distributed computing system that includes nodes having a status field, a next field, a source data buffer, and that are characterized by a unique node identifier, where controlling access includes receiving a request for access to the resource implemented as an active message that includes the requesting node's unique node identifier, the value stored in the requesting node's source data buffer, and an instruction to perform a reduction operation with the value stored in the requesting node's source data buffer and the value stored in the receiving node's source data buffer; returning the requesting node's unique node identifier as a result of the reduction operation; and updating the status and next fields to identify the requesting node as a next node to have sole access to the resource. | 09-15-2011 |
20110238949 | Distributed Administration Of A Lock For An Operational Group Of Compute Nodes In A Hierarchical Tree Structured Network - Distributed administration of a lock for an operational group of compute nodes in a hierarchical tree structured network including assigning the root node of the operational group to send acknowledgments for lock requests, the root lock administration module comprising a module of automated computing machinery; receiving a lock request assigned to a particular node from a child node; determining whether another request from another child is directly ahead in an acknowledgement queue; if a request from another child is directly ahead in the acknowledgement queue, putting the lock request for the particular node in the acknowledgement queue until the lock request directly ahead in the acknowledgement queue is satisfied and when the lock request ahead in the queue is satisfied, sending the particular node for whom the lock request is assigned a message acknowledging the particular node has the lock; and if a request from another child is not directly ahead in a queue, sending to the particular node for whom the lock request is assigned a message acknowledging that the particular node has the lock. | 09-29-2011 |
20110238950 | Performing A Scatterv Operation On A Hierarchical Tree Network Optimized For Collective Operations - Performing a scatterv operation on a hierarchical tree network optimized for collective operations including receiving, by the scatterv module installed on the node, from a nearest neighbor parent above the node a chunk of data having at least a portion of data for the node; maintaining, by the scatterv module installed on the node, the portion of the data for the node; determining, by the scatterv module installed on the node, whether any portions of the data are for a particular nearest neighbor child below the node or one or more other nodes below the particular nearest neighbor child; and sending, by the scatterv module installed on the node, those portions of data to the nearest neighbor child if any portions of the data are for a particular nearest neighbor child below the node or one or more other nodes below the particular nearest neighbor child. | 09-29-2011 |
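One way to picture the per-node logic above is a recursive walk in which each node keeps the first element of its chunk and hands each child the slice covering that child's subtree. The sketch below is a minimal C model; the complete-binary-tree shape, the preorder chunk layout, and one element per node are all assumptions made for illustration, not the patented module.

```c
#include <stdio.h>

#define NODES 7   /* complete binary tree, node 0 is the root (illustrative) */

static int subtree_size(int node)
{
    if (node >= NODES) return 0;
    return 1 + subtree_size(2 * node + 1) + subtree_size(2 * node + 2);
}

/* chunk[] holds the portion for 'node' followed by the portions for its left
   and right subtrees; the node keeps its own portion and forwards the rest. */
static void scatterv(int node, const int *chunk)
{
    printf("node %d keeps element %d\n", node, chunk[0]);

    const int *rest = chunk + 1;
    int left = 2 * node + 1, right = 2 * node + 2;
    if (left < NODES) {
        scatterv(left, rest);           /* forward the left child's sub-chunk  */
        rest += subtree_size(left);
    }
    if (right < NODES)
        scatterv(right, rest);          /* forward the right child's sub-chunk */
}

static int fill_preorder(int node, int *buf, int pos)
{
    if (node >= NODES) return pos;
    buf[pos++] = 100 + node;            /* element destined for 'node' */
    pos = fill_preorder(2 * node + 1, buf, pos);
    return fill_preorder(2 * node + 2, buf, pos);
}

int main(void)
{
    int buf[NODES];
    fill_preorder(0, buf, 0);           /* root's chunk covers the whole tree */
    scatterv(0, buf);
    return 0;
}
```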
20110239003 | Direct Injection of Data To Be Transferred In A Hybrid Computing Environment - Direct injection of data to be transferred in a hybrid computing environment that includes a host computer and a plurality of accelerators, the host computer and the accelerators adapted to one another for data communications by a system level message passing module. Each accelerator includes a Power Processing Element (‘PPE’) and a plurality of Synergistic Processing Elements (‘SPEs’). Direct injection includes reserving, by each SPE, a slot in a shared memory region accessible by the host computer; loading, by each SPE into local memory of the SPE, a portion of data to be transferred to the host computer; executing, by each SPE in parallel, a data processing operation on the portion of the data loaded in local memory of each SPE; and writing, by each SPE, the processed data to the SPE's reserved slot in the shared memory region accessible by the host computer. | 09-29-2011 |
20110258281 | QUERY PERFORMANCE DATA ON PARALLEL COMPUTER SYSTEM HAVING COMPUTE NODES - Embodiments of the invention provide a method for querying performance counter data on a massively parallel computing system, while minimizing the costs associated with interrupting computer processors and limited memory resources. DMA descriptors may be inserted into an injection FIFO of a remote compute node in the massively parallel computing system. Upon executing the DMA operations described by the DMA descriptors, performance counter data may be transferred from the remote compute node to a destination node. | 10-20-2011 |
20110267197 | Monitoring Operating Parameters In A Distributed Computing System With Active Messages - In a distributed computing system including a plurality of nodes organized for collective operations: initiating, by a root node through an active message to all other nodes, a collective operation, the active message including an instruction to each node to store operating parameter data in each node's send buffer; and, responsive to the active message: storing, by each node, the node's operating parameter data in the node's send buffer and returning, by the node, the operating parameter data as a result of the collective operation. | 11-03-2011 |
20110270942 | COMBINING MULTIPLE HARDWARE NETWORKS TO ACHIEVE LOW-LATENCY HIGH-BANDWIDTH POINT-TO-POINT COMMUNICATION - Systems, methods and articles of manufacture are disclosed for performing a collective operation on a parallel computing system that includes multiple compute nodes and multiple networks connecting the compute nodes. Each of the networks may have different characteristics. A source node may broadcast a DMA descriptor over a first network to a target node, to initialize the collective operation. The target node may perform the collective operation over a second network and using the broadcast DMA descriptor. | 11-03-2011 |
20110270986 | Optimizing Collective Operations - Optimizing collective operations including receiving an instruction to perform a collective operation type; selecting an optimized collective operation for the collective operation type; performing the selected optimized collective operation; determining whether a resource needed by the one or more nodes to perform the collective operation is not available; if a resource needed by the one or more nodes to perform the collective operation is not available: notifying the other nodes that the resource is not available; selecting a next optimized collective operation; and performing the next optimized collective operation. | 11-03-2011 |
20110271006 | PIPELINING PROTOCOLS IN MISALIGNED BUFFER CASES - Systems, methods and articles of manufacture are disclosed for effecting a desired collective operation on a parallel computing system that includes multiple compute nodes. The compute nodes may pipeline multiple collective operations to effect the desired collective operation. To select protocols suitable for the multiple collective operations, the compute nodes may also perform additional collective operations. The compute nodes may pipeline the multiple collective operations and/or the additional collective operations to effect the desired collective operation more efficiently. | 11-03-2011 |
20110271059 | REDUCING REMOTE READS OF MEMORY IN A HYBRID COMPUTING ENVIRONMENT - A hybrid computing environment in which the host computer allocates, in the shadow memory area of the host computer, a memory region for a packet to be written to the shared memory of an accelerator; writes packet data to the accelerator's shared memory in a memory region corresponding to the allocated memory region; inserts, in a next available element of the accelerator's descriptor array, a descriptor identifying the written packet data; increments the copy of the head pointer of the accelerator's descriptor array maintained on the host computer; and updates a copy of the head pointer of the accelerator's descriptor array maintained on the accelerator with the incremented copy. | 11-03-2011 |
20110271263 | Compiling Software For A Hierarchical Distributed Processing System - Compiling software for a hierarchical distributed processing system including providing to one or more compiling nodes software to be compiled, wherein at least a portion of the software to be compiled is to be executed by one or more other nodes; compiling, by the compiling node, the software; maintaining, by the compiling node, any compiled software to be executed on the compiling node; selecting, by the compiling node, one or more nodes in a next tier of the hierarchy of the distributed processing system in dependence upon whether any compiled software is for the selected node or the selected node's descendants; sending to the selected node only the compiled software to be executed by the selected node or selected node's descendant. | 11-03-2011 |
20110288848 | PASSING NON-ARCHITECTED REGISTERS VIA A CALLBACK/ADVANCE MECHANISM IN A SIMULATOR ENVIRONMENT - Embodiments of the invention provide a method of calculating performance counter data for a computer simulator, while minimizing the performance costs associated with cycle-accurate simulation. A callback may be associated with the instructions of a user program and, when the instructions are executed, the associated callbacks may be executed as well. Upon execution, the callbacks may calculate performance counter data related to the associated instruction. | 11-24-2011 |
20110289177 | Effecting Hardware Acceleration Of Broadcast Operations In A Parallel Computer - Compute nodes of a parallel computer organized for collective operations via a network, each compute node having a receive buffer and establishing a topology for the network; selecting a schedule for a broadcast operation; depositing, by a root node of the topology, broadcast data in a target node's receive buffer, including performing a DMA operation with a well-known memory location for the target node's receive buffer; depositing, by the root node in a memory region designated for storing broadcast data length, a length of the broadcast data, including performing a DMA operation with a well-known memory location of the broadcast data length memory region; and triggering, by the root node, the target node to perform a next DMA operation, including depositing, in a memory region designated for receiving injection instructions for the target node, an instruction to inject the broadcast data into the receive buffer of a subsequent target node. | 11-24-2011 |
20110296137 | Performing A Deterministic Reduction Operation In A Parallel Computer - A parallel computer that includes compute nodes having computer processors and a CAU (Collectives Acceleration Unit) that couples processors to one another for data communications. In embodiments of the present invention, a deterministic reduction operation includes: organizing processors of the parallel computer and a CAU into a branched tree topology, where the CAU is a root of the branched tree topology and the processors are children of the root CAU; establishing a receive buffer that includes receive elements associated with processors and configured to store the associated processor's contribution data; receiving, in any order from the processors, each processor's contribution data; tracking receipt of each processor's contribution data; and reducing the contribution data in a predefined order, only after receipt of contribution data from all processors in the branched tree topology. | 12-01-2011 |
20110296139 | Performing A Deterministic Reduction Operation In A Parallel Computer - Performing a deterministic reduction operation in a parallel computer that includes compute nodes, each of which includes computer processors and a CAU (Collectives Acceleration Unit) that couples computer processors to one another for data communications, including organizing processors and a CAU into a branched tree topology in which the CAU is a root and the processors are children; receiving, from each of the processors in any order, dummy contribution data, where each processor is restricted from sending any other data to the root CAU prior to receiving an acknowledgement of receipt from the root CAU; sending, by the root CAU to the processors in the branched tree topology, in a predefined order, acknowledgements of receipt of the dummy contribution data; receiving, by the root CAU from the processors in the predefined order, the processors' contribution data to the reduction operation; and reducing, by the root CAU, the processors' contribution data. | 12-01-2011 |
20120036384 | Reducing Power Consumption While Synchronizing A Plurality Of Compute Nodes During Execution Of A Parallel Application - Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation. | 02-09-2012 |
20120066284 | Send-Side Matching Of Data Communications Messages - Send-side matching of data communications messages in a distributed computing system comprising a plurality of compute nodes organized for collective operations, including: issuing by a receiving node to source nodes a receive message that specifies receipt of a single message to be sent from any source node, the receive message including message matching information, a specification of a hardware-level mutual exclusion device, and an identification of a receive buffer; matching by two or more of the source nodes the receive message with pending send messages in the two or more source nodes; operating by one of the source nodes having a matching send message the mutual exclusion device, excluding messages from other source nodes with matching send messages and identifying to the receiving node the source node operating the mutual exclusion device; and sending to the receiving node from the source node operating the mutual exclusion device a matched pending message. | 03-15-2012 |
20120066310 | COMBINING MULTIPLE HARDWARE NETWORKS TO ACHIEVE LOW-LATENCY HIGH-BANDWIDTH POINT-TO-POINT COMMUNICATION OF COMPLEX TYPES - Systems, methods and articles of manufacture are disclosed for performing a vector collective operation on a parallel computing system that includes multiple compute nodes and a network connecting the compute nodes that includes an ALU. A collective operation may be performed to determine displacements for the vector collective operation. Descriptors for the vector collective operation may be generated based on the displacements. The vector collective operation may then be performed using the descriptors. | 03-15-2012 |
20120079035 | Administering Truncated Receive Functions In A Parallel Messaging Interface - Administering truncated receive functions in a parallel messaging interface (‘PMI’) of a parallel computer comprising a plurality of compute nodes coupled for data communications through the PMI and through a data communications network, including: sending, through the PMI on a source compute node, a quantity of data from the source compute node to a destination compute node; specifying, by an application on the destination compute node, a portion of the quantity of data to be received by the application on the destination compute node and a portion of the quantity of data to be discarded; receiving, by the PMI on the destination compute node, all of the quantity of data; providing, by the PMI on the destination compute node to the application on the destination compute node, only the portion of the quantity of data to be received by the application; and discarding, by the PMI on the destination compute node, the portion of the quantity of data to be discarded. | 03-29-2012 |
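The essential behavior described above, receive the full quantity of data but deliver only the portion the application asked for, fits in a few lines. The sketch below is a minimal C illustration; the function name truncated_receive and the fixed buffers are assumptions, not the PMI interface itself.

```c
#include <stdio.h>
#include <string.h>

/* The messaging layer receives the full quantity of data but hands the
   application only the portion it asked for; the remainder is discarded. */
static size_t truncated_receive(const char *wire_data, size_t wire_len,
                                char *app_buf, size_t app_len)
{
    size_t keep = wire_len < app_len ? wire_len : app_len;
    memcpy(app_buf, wire_data, keep);   /* deliver the requested portion */
    /* bytes keep..wire_len-1 are discarded by the messaging layer */
    return keep;
}

int main(void)
{
    const char wire[] = "0123456789ABCDEF";   /* full quantity of data on the wire */
    char app[8];                              /* application asked for 8 bytes     */
    size_t got = truncated_receive(wire, sizeof wire - 1, app, sizeof app);
    printf("application received %zu of %zu bytes\n", got, sizeof wire - 1);
    return 0;
}
```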
20120079133 | Routing Data Communications Packets In A Parallel Computer - Routing data communications packets in a parallel computer that includes compute nodes organized for collective operations, each compute node including an operating system kernel and a system-level messaging module that is a module of automated computing machinery that exposes a messaging interface to applications, each compute node including a routing table that specifies, for each of a multiplicity of route identifiers, a data communications path through the compute node, including: receiving in a compute node a data communications packet that includes a route identifier value; retrieving from the routing table a specification of a data communications path through the compute node; and routing, by the compute node, the data communications packet according to the data communications path identified by the compute node's routing table entry for the data communications packet's route identifier value. | 03-29-2012 |
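The per-node routing step above reduces to a table lookup keyed by the packet's route identifier. A minimal sketch in C follows; representing a data communications path as a single output link number is an assumption made to keep the example short.

```c
#include <stdio.h>

#define ROUTES 4

/* Each route identifier maps to a data communications path through the node
   (modeled here as just an output link number -- illustrative only). */
struct routing_table { int out_link[ROUTES]; };

struct packet { int route_id; int payload; };

static void route_packet(const struct routing_table *rt, const struct packet *pkt)
{
    int link = rt->out_link[pkt->route_id];   /* retrieve the path for this route id */
    printf("packet (payload %d) forwarded on link %d\n", pkt->payload, link);
}

int main(void)
{
    struct routing_table rt = {{0, 2, 1, 3}};
    struct packet pkt = {.route_id = 2, .payload = 42};
    route_packet(&rt, &pkt);
    return 0;
}
```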
20120079165 | Paging Memory From Random Access Memory To Backing Storage In A Parallel Computer - Paging memory from random access memory (‘RAM’) to backing storage in a parallel computer that includes a plurality of compute nodes, including: executing a data processing application on a virtual machine operating system in a virtual machine on a first compute node; providing, by a second compute node, backing storage for the contents of RAM on the first compute node; and swapping, by the virtual machine operating system in the virtual machine on the first compute node, a page of memory from RAM on the first compute node to the backing storage on the second compute node. | 03-29-2012 |
20120117361 | Processing Data Communications Events In A Parallel Active Messaging Interface Of A Parallel Computer - Processing data communications events in a parallel active messaging interface (‘PAMI’) of a parallel computer that includes compute nodes that execute a parallel application, with the PAMI including data communications endpoints, and the endpoints are coupled for data communications through the PAMI and through other data communications resources, including determining by an advance function that there are no actionable data communications events pending for its context, placing by the advance function its thread of execution into a wait state, waiting for a subsequent data communications event for the context; responsive to occurrence of a subsequent data communications event for the context, awakening by the thread from the wait state; and processing by the advance function the subsequent data communications event now pending for the context. | 05-10-2012 |
20120137294 | Data Communications In A Parallel Active Messaging Interface Of A Parallel Computer - Data communications in a parallel active messaging interface (‘PAMI’) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, endpoints coupled for data communications through the PAMI and through data communications resources, including receiving in an origin endpoint of the PAMI a SEND instruction, the SEND instruction specifying a transmission of transfer data from the origin endpoint to a first target endpoint; transmitting from the origin endpoint to the first target endpoint a Request-To-Send (‘RTS’) message advising the first target endpoint of the location and size of the transfer data; assigning by the first target endpoint to each of a plurality of target endpoints separate portions of the transfer data; and receiving by the plurality of target endpoints the transfer data. | 05-31-2012 |
20120151485 | Data Communications In A Parallel Active Messaging Interface Of A Parallel Computer - Data communications in a parallel active messaging interface (‘PAMI’) of a parallel computer, the parallel computer including a plurality of compute nodes that execute a parallel application, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes and the endpoints coupled for data communications through the PAMI and through data communications resources, including receiving in an origin endpoint of the PAMI a data communications instruction, the instruction characterized by an instruction type, the instruction specifying a transmission of transfer data from the origin endpoint to a target endpoint and transmitting, in accordance with the instruction type, the transfer data from the origin endpoint to the target endpoint. | 06-14-2012 |
20120174105 | Locality Mapping In A Distributed Processing System - Topology mapping in a distributed processing system that includes a plurality of compute nodes, including: initiating a message passing operation; including in a message generated by the message passing operation, topological information for the sending task; mapping the topological information for the sending task; determining whether the sending task and the receiving task reside on the same topological unit; if the sending task and the receiving task reside on the same topological unit, using an optimal local network pattern for subsequent message passing operations between the sending task and the receiving task; otherwise, using a data communications network between the topological unit of the sending task and the topological unit of the receiving task for subsequent message passing operations between the sending task and the receiving task. | 07-05-2012 |
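The decision described above is essentially a predicate: tasks on the same topological unit use the optimal local pattern, otherwise they use the data communications network. A minimal sketch in C; mapping tasks to units by dividing the task number by a fixed tasks-per-node value is an illustrative assumption, not the patented mapping.

```c
#include <stdio.h>

/* Hypothetical topology lookup: four tasks per node share a topological unit. */
static int topological_unit(int task) { return task / 4; }

static const char *choose_path(int sender, int receiver)
{
    return topological_unit(sender) == topological_unit(receiver)
         ? "optimal local (shared-memory) pattern"
         : "data communications network";
}

int main(void)
{
    printf("tasks 1 -> 2 use the %s\n", choose_path(1, 2));
    printf("tasks 1 -> 6 use the %s\n", choose_path(1, 6));
    return 0;
}
```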
20120179881 | Performing An Allreduce Operation Using Shared Memory - Methods, apparatus, and products are disclosed for performing an allreduce operation using shared memory that include: receiving, by at least one of a plurality of processing cores on a compute node, an instruction to perform an allreduce operation; establishing, by the core that received the instruction, a job status object for specifying a plurality of shared memory allreduce work units, the plurality of shared memory allreduce work units together performing the allreduce operation on the compute node; determining, by an available core on the compute node, a next shared memory allreduce work unit in the job status object; and performing, by that available core on the compute node, that next shared memory allreduce work unit. | 07-12-2012 |
20120185230 | Distributed Hardware Device Simulation - Distributed hardware device simulation, including: identifying a plurality of hardware components of the hardware device; providing software components simulating the functionality of each hardware component, wherein the software components are installed on compute nodes of a distributed processing system; receiving, in at least one of the software components, one or more messages representing an input to the hardware component; simulating the operation of the hardware component with the software component, thereby generating an output of the software component representing the output of the hardware component; and sending, from the software component to at least one other software component, one or more messages representing the output of the hardware component. | 07-19-2012 |
20120185679 | Endpoint-Based Parallel Data Processing With Non-Blocking Collective Instructions In A Parallel Active Messaging Interface Of A Parallel Computer - Endpoint-based parallel data processing with non-blocking collective instructions in a parallel active messaging interface (‘PAMI’) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing by the parallel application a data communications geometry, the geometry specifying a set of endpoints that are used in collective operations of the PAMI, including associating with the geometry a list of collective algorithms valid for use with the endpoints of the geometry; registering in each endpoint in the geometry a dispatch callback function for a collective operation; and executing without blocking, through a single one of the endpoints in the geometry, an instruction for the collective operation. | 07-19-2012 |
20120185867 | Optimizing The Deployment Of A Workload On A Distributed Processing System - Optimizing the deployment of a workload on a distributed processing system, the distributed processing system having a plurality of nodes, each node having a plurality of attributes, including: profiling during operations on the distributed processing system attributes of the nodes of the distributed processing system; selecting a workload for deployment on a subset of the nodes of the distributed processing system; determining specific resource requirements for the workload to be deployed; determining a required geometry of the nodes to run the workload; selecting a set of nodes having attributes that meet the specific resource requirements and arranged to meet the required geometry; deploying the workload on the selected nodes. | 07-19-2012 |
20120185873 | Data Communications In A Parallel Active Messaging Interface Of A Parallel Computer - Data communications in a parallel active messaging interface (‘PAMI’) of a parallel computer composed of compute nodes that execute a parallel application, each compute node including application processors that execute the parallel application and at least one management processor dedicated to gathering information regarding data communications. The PAMI is composed of data communications endpoints, each endpoint composed of a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes and the endpoints coupled for data communications through the PAMI and through data communications resources. Embodiments function by gathering call site statistics describing data communications resulting from execution of data communications instructions and identifying in dependence upon the call site statistics a data communications algorithm for use in executing a data communications instruction at a call site in the parallel application. | 07-19-2012 |
20120189012 | Providing Point To Point Communications Among Compute Nodes In A Global Combining Network Of A Parallel Computer - Methods, apparatus, and products are disclosed for providing point to point data communications among compute nodes in a global combining network of a parallel computer that include: determining a class route identifier available for all of the nodes along a communications path from an origin node to a target node; configuring network hardware of each node along the communications path with routing instructions in dependence upon the available class route identifier and the network's topology; transmitting, by the origin node along the communications path, a network packet to the target node, including encoding the available class route identifier in the network packet; and routing, by the network hardware of each node along the communications path, the network packet to the target node in dependence upon the routing instructions for each node and the available class route identifier. | 07-26-2012 |
20120191920 | Reducing Remote Reads Of Memory In A Hybrid Computing Environment By Maintaining Remote Memory Values Locally - Reducing remote reads of memory in a hybrid computing environment by maintaining remote memory values locally, the hybrid computing environment including a host computer and a plurality of accelerators, the host computer and the accelerators each having local memory shared remotely with the other, including writing to the shared memory of the host computer packets of data representing changes in accelerator memory values, incrementing, in local memory and in remote shared memory on the host computer, a counter value representing the total number of packets written to the host computer, reading by the host computer from the shared memory in the host computer the written data packets, moving the read data to application memory, and incrementing, in both local memory and in remote shared memory on the accelerator, a counter value representing the total number of packets read by the host computer. | 07-26-2012 |
20120204041 | Profiling An Application For Power Consumption During Execution On A Compute Node - Methods, apparatus, and products are disclosed for profiling an application for power consumption during execution on a compute node that include: receiving an application for execution on a compute node; identifying a hardware power consumption profile for the compute node, the hardware power consumption profile specifying power consumption for compute node hardware during performance of various processing operations; determining a power consumption profile for the application in dependence upon the application and the hardware power consumption profile for the compute node; and reporting the power consumption profile for the application. | 08-09-2012 |
20120216021 | Performing An All-To-All Data Exchange On A Plurality Of Data Buffers By Performing Swap Operations - Methods, apparatus, and products are disclosed for performing an all-to-all exchange on n number of data buffers using XOR swap operations. Each data buffer has n number of data elements. Performing an all-to-all exchange on n number of data buffers using XOR swap operations includes for each rank value of i and j where i is greater than j and where i is less than or equal to n: selecting data element i in data buffer j; selecting data element j in data buffer i; and exchanging contents of data element i in data buffer j with contents of data element j in data buffer i using an XOR swap operation. | 08-23-2012 |
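The exchange above amounts to swapping element i of buffer j with element j of buffer i for every pair with i greater than j, using the classic three-XOR swap so no temporary storage is needed. Below is a minimal single-process sketch in C; the buffer count and contents are assumptions for illustration (the example uses 0-based indices rather than the 1-based ranks of the abstract).

```c
#include <stdio.h>

#define N 4   /* number of data buffers, each with N elements (illustrative) */

/* Exchange element i of buffer j with element j of buffer i via XOR swaps,
   for every pair i > j. */
static void all_to_all_xor(unsigned int buf[N][N])
{
    for (int i = 1; i < N; i++) {
        for (int j = 0; j < i; j++) {
            buf[j][i] ^= buf[i][j];
            buf[i][j] ^= buf[j][i];
            buf[j][i] ^= buf[i][j];
        }
    }
}

int main(void)
{
    unsigned int buf[N][N];
    for (int r = 0; r < N; r++)
        for (int e = 0; e < N; e++)
            buf[r][e] = r * 10 + e;   /* element e held by rank r */

    all_to_all_xor(buf);              /* afterwards buf is transposed in place */

    for (int r = 0; r < N; r++) {
        for (int e = 0; e < N; e++)
            printf("%3u ", buf[r][e]);
        printf("\n");
    }
    return 0;
}
```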
20120254344 | Endpoint-Based Parallel Data Processing In A Parallel Active Messaging Interface Of A Parallel Computer - Endpoint-based parallel data processing in a parallel active messaging interface (‘PAMI’) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks. | 10-04-2012 |
20120265835 | QUERY PERFORMANCE DATA ON PARALLEL COMPUTER SYSTEM HAVING COMPUTE NODES - Embodiments of the invention provide a method for querying performance counter data on a massively parallel computing system, while minimizing the costs associated with interrupting computer processors and limited memory resources. DMA descriptors may be inserted into an injection FIFO of a remote compute node in the massively parallel computing system. Upon executing the DMA operations described by the DMA descriptors, performance counter data may be transferred from the remote compute node to a destination node. | 10-18-2012 |
20120290863 | Budget-Based Power Consumption For Application Execution On A Plurality Of Compute Nodes - Methods, apparatus, and products are disclosed for budget-based power consumption for application execution on a plurality of compute nodes that include: assigning an execution priority to each of one or more applications; executing, on the plurality of compute nodes, the applications according to the execution priorities assigned to the applications at an initial power level provided to the compute nodes until a predetermined power consumption threshold is reached; and applying, upon reaching the predetermined power consumption threshold, one or more power conservation actions to reduce power consumption of the plurality of compute nodes during execution of the applications. | 11-15-2012 |
20120304193 | Scheduling Applications For Execution On A Plurality Of Compute Nodes Of A Parallel Computer To Manage Temperature Of The Nodes During Execution - Methods, apparatus, and products are disclosed for scheduling applications for execution on a plurality of compute nodes of a parallel computer to manage temperature of the plurality of compute nodes during execution that include: identifying one or more applications for execution on the plurality of compute nodes; creating a plurality of physically discontiguous node partitions in dependence upon temperature characteristics for the compute nodes and a physical topology for the compute nodes, each discontiguous node partition specifying a collection of physically adjacent compute nodes; and assigning, for each application, that application to one or more of the discontiguous node partitions for execution on the compute nodes specified by the assigned discontiguous node partitions. | 11-29-2012 |
20120331270 | Compressing Result Data For A Compute Node In A Parallel Computer - Compressing result data for a compute node in a parallel computer, the parallel computer including a collection of compute nodes organized as a tree, including: initiating a collective gather operation by a logical root of the collection of compute nodes, including adding result data of the logical root to a gather buffer; for each compute node in the collection of compute nodes, determining whether result data of the compute node is already written in the gather buffer; and if the result data of the compute node is already written in the gather buffer, incrementing a counter assigned to that result data already written in the gather buffer; and if the result data of the compute node is not already written in the gather buffer, writing the result data of the compute node as new result data in the gather buffer, incrementing a counter assigned to that new result data, and writing in the gather buffer a node ID. | 12-27-2012 |
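The compression above amounts to deduplicating the gather buffer: a repeated result only bumps a counter, while a new result is appended together with the contributing node's ID. Below is a minimal single-process sketch in C; the entry layout and sample values are assumptions for illustration.

```c
#include <stdio.h>

#define NODES 6

struct entry { int value; int count; int first_node; };

int main(void)
{
    int results[NODES] = {7, 3, 7, 7, 3, 9};   /* per-node result data (illustrative) */
    struct entry gather[NODES];
    int nentries = 0;

    /* Walk the nodes; if a node's result is already in the gather buffer just
       bump its counter, otherwise append it as a new entry with the node ID. */
    for (int node = 0; node < NODES; node++) {
        int found = -1;
        for (int e = 0; e < nentries; e++)
            if (gather[e].value == results[node]) { found = e; break; }
        if (found >= 0)
            gather[found].count++;
        else
            gather[nentries++] = (struct entry){results[node], 1, node};
    }

    for (int e = 0; e < nentries; e++)
        printf("value %d seen %d times (first from node %d)\n",
               gather[e].value, gather[e].count, gather[e].first_node);
    return 0;
}
```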
20130018935 | Performing Collective Operations In A Distributed Processing System - Methods, apparatuses, and computer program products for performing collective operations on a hybrid distributed processing system are provided. The hybrid distributed processing system includes a plurality of compute nodes, each compute node having a plurality of tasks, each task assigned a unique rank, each compute node coupled for data communications by at least one data communications network implementing at least two different networking topologies. A first networking topology includes a tiered tree topology having a root task, and at least two child tasks, where the two child tasks are peers of one another in the same tier. Embodiments include determining by at least one task that a parent of the task has failed to send the task data through the tree topology; and determining whether to request the data from a grandparent of the task or a peer of the task in the same tier in the tree topology; and if the task requests the data from the grandparent, requesting the data and receiving the data from the grandparent of the task through the second networking topology; and if the task requests the data from a peer of the task in the same tier in the tree, requesting the data and receiving the data from a peer of the task through the second networking topology. | 01-17-2013 |
20130018947 | Performing Collective Operations In A Distributed Processing System - Methods, apparatuses, and computer program products for performing collective operations on a hybrid distributed processing system are provided. The hybrid distributed processing system includes a plurality of compute nodes where each compute node has a plurality of tasks, each task is assigned a unique rank, and each compute node is coupled for data communications by at least one data communications network implementing at least two different networking topologies. At least one of the two networking topologies is a tiered tree topology having a root task and at least two child tasks and the at least two child tasks are peers of one another in the same tier. Embodiments include for each task, sending at least a portion of data corresponding to the task to all child tasks of the task through the tree topology; and sending at least a portion of the data corresponding to the task to all peers of the task at the same tier in the tree topology through the second topology. | 01-17-2013 |
20130024866 | Topology Mapping In A Distributed Processing System - Topology mapping in a distributed processing system, the distributed processing system including a plurality of compute nodes, each compute node having a plurality of tasks, each task assigned a unique rank, including: assigning each task to a geometry defining the resources available to the task; selecting, from a list of possible data communications algorithms, one or more algorithms configured for the assigned geometry; and identifying, by each task to all other tasks, the selected data communications algorithms of each task in a single collective operation. | 01-24-2013 |
20130042088 | Collective Operation Protocol Selection In A Parallel Computer - Collective operation protocol selection in a parallel computer that includes compute nodes may be carried out by calling a collective operation with operating parameters; selecting a protocol for executing the operation and executing the operation with the selected protocol. Selecting a protocol includes: iteratively, until a prospective protocol meets predetermined performance criteria: providing, to a protocol performance function for the prospective protocol, the operating parameters; determining whether the prospective protocol meets predefined performance criteria by evaluating a predefined performance fit equation, calculating a measure of performance of the protocol for the operating parameters; determining that the prospective protocol meets predetermined performance criteria and selecting the protocol for executing the operation only if the calculated measure of performance is greater than a predefined minimum performance threshold. | 02-14-2013 |
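The selection loop above can be sketched as: evaluate a performance-fit function for each candidate protocol against the operating parameters, and take the first protocol whose calculated measure exceeds the minimum threshold. In the C sketch below the cost model, protocol names, and threshold are invented for illustration; the patented performance-fit equation is not reproduced.

```c
#include <stdio.h>
#include <stddef.h>

struct protocol {
    const char *name;
    double fixed_cost;      /* per-message overhead (illustrative) */
    double per_byte_cost;   /* cost per byte        (illustrative) */
};

/* Hypothetical performance-fit function: higher is better. */
static double perf_fit(const struct protocol *p, size_t msg_bytes)
{
    return 1.0 / (p->fixed_cost + p->per_byte_cost * (double)msg_bytes);
}

int main(void)
{
    struct protocol candidates[] = {
        {"eager",      1.0, 0.010},
        {"rendezvous", 8.0, 0.001},
    };
    const double min_perf = 0.005;   /* predefined minimum performance threshold */
    size_t msg_bytes = 4096;         /* operating parameter for this call        */

    const struct protocol *selected = NULL;
    for (size_t i = 0; i < sizeof candidates / sizeof candidates[0]; i++) {
        double measure = perf_fit(&candidates[i], msg_bytes);
        if (measure > min_perf) {    /* first prospective protocol meeting criteria */
            selected = &candidates[i];
            break;
        }
    }

    printf("selected protocol: %s\n", selected ? selected->name : "none");
    return 0;
}
```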
20130042245 | Performing A Global Barrier Operation In A Parallel Computer - Performing a global barrier operation in a parallel computer that includes compute nodes coupled for data communications, where each compute node executes tasks, with one task on each compute node designated as a master task, including: for each task on each compute node until all master tasks have joined a global barrier: determining whether the task is a master task; if the task is not a master task, joining a single local barrier; if the task is a master task, joining the global barrier and the single local barrier only after all other tasks on the compute node have joined the single local barrier. | 02-14-2013 |
20130042254 | Performing A Local Barrier Operation - Performing a local barrier operation with parallel tasks executing on a compute node including, for each task: retrieving a present value of a counter; calculating, in dependence upon the present value of the counter and a total number of tasks performing the local barrier operation, a base value of the counter, the base value representing the counter's value prior to any task joining the local barrier; calculating, in dependence upon the base value and the total number of tasks performing the local barrier operation, a target value of the counter, the target value representing the counter's value when all tasks have joined the local barrier; joining the local barrier, including atomically incrementing the value of the counter; and repetitively, until the present value of the counter is no less than the target value of the counter: retrieving the present value of the counter and determining whether the present value equals the target value. | 02-14-2013 |
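A counter-based barrier of this kind can be sketched with one shared atomic counter: each task derives the base and target values from the counter's present value and the task count, increments the counter to join, and spins until the target is reached. The pthreads/C11-atomics sketch below is a minimal model; the task count and the particular base-value formula (rounding the present value down to a multiple of the task count) are assumptions for illustration.

```c
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

#define NTASKS 4

static atomic_long counter = 0;   /* shared, monotonically increasing */

static void local_barrier(void)
{
    long present = atomic_load(&counter);
    long base    = present - (present % NTASKS); /* counter value before any task joined */
    long target  = base + NTASKS;                /* counter value once every task joins  */

    atomic_fetch_add(&counter, 1);               /* join the barrier */

    while (atomic_load(&counter) < target)       /* spin until all tasks have joined */
        ;
}

static void *worker(void *arg)
{
    long id = (long)arg;
    printf("task %ld before barrier\n", id);
    local_barrier();
    printf("task %ld after barrier\n", id);
    return NULL;
}

int main(void)
{
    pthread_t t[NTASKS];
    for (long i = 0; i < NTASKS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < NTASKS; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```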
20130060557 | DISTRIBUTED HARDWARE DEVICE SIMULATION - Distributed hardware device simulation, including: identifying a plurality of hardware components of the hardware device; providing software components simulating the functionality of each hardware component, wherein the software components are installed on compute nodes of a distributed processing system; receiving, in at least one of the software components, one or more messages representing an input to the hardware component; simulating the operation of the hardware component with the software component, thereby generating an output of the software component representing the output of the hardware component; and sending, from the software component to at least one other software component, one or more messages representing the output of the hardware component. | 03-07-2013 |
20130060833 | TOPOLOGY MAPPING IN A DISTRIBUTED PROCESSING SYSTEM - Topology mapping in a distributed processing system, the distributed processing system including a plurality of compute nodes, each compute node having a plurality of tasks, each task assigned a unique rank, including: assigning each task to a geometry defining the resources available to the task; selecting, from a list of possible data communications algorithms, one or more algorithms configured for the assigned geometry; and identifying, by each task to all other tasks, the selected data communications algorithms of each task in a single collective operation. | 03-07-2013 |
20130060844 | DIRECT INJECTION OF DATA TO BE TRANSFERRED IN A HYBRID COMPUTING ENVIRONMENT - Direct injection of data to be transferred in a hybrid computing environment that includes a host computer and a plurality of accelerators, the host computer and the accelerators adapted to one another for data communications by a system level message passing module. Each accelerator includes a Power Processing Element (‘PPE’) and a plurality of Synergistic Processing Elements (‘SPEs’). Direct injection includes reserving, by each SPE, a slot in a shared memory region accessible by the host computer; loading, by each SPE into local memory of the SPE, a portion of data to be transferred to the host computer; executing, by each SPE in parallel, a data processing operation on the portion of the data loaded in local memory of each SPE; and writing, by each SPE, the processed data to the SPE's reserved slot in the shared memory region accessible by the host computer. | 03-07-2013 |
20130060944 | CONTROLLING ACCESS TO A RESOURCE IN A DISTRIBUTED COMPUTING SYSTEM WITH A DISTRIBUTED ACCESS REQUEST QUEUE - Controlling access to a resource in a distributed computing system that includes nodes having a status field, a next field, a source data buffer, and that are characterized by a unique node identifier, where controlling access includes receiving a request for access to the resource implemented as an active message that includes the requesting node's unique node identifier, the value stored in the requesting node's source data buffer, and an instruction to perform a reduction operation with the value stored in the requesting node's source data buffer and the value stored in the receiving node's source data buffer; returning the requesting node's unique node identifier as a result of the reduction operation; and updating the status and next fields to identify the requesting node as a next node to have sole access to the resource. | 03-07-2013 |
20130061238 | OPTIMIZING THE DEPLOYMENT OF A WORKLOAD ON A DISTRIBUTED PROCESSING SYSTEM - Optimizing the deployment of a workload on a distributed processing system, the distributed processing system having a plurality of nodes, each node having a plurality of attributes, including: profiling during operations on the distributed processing system attributes of the nodes of the distributed processing system; selecting a workload for deployment on a subset of the nodes of the distributed processing system; determining specific resource requirements for the workload to be deployed; determining a required geometry of the nodes to run the workload; selecting a set of nodes having attributes that meet the specific resource requirements and arranged to meet the required geometry; deploying the workload on the selected nodes. | 03-07-2013 |
20130061246 | PROCESSING DATA COMMUNICATIONS MESSAGES WITH INPUT/OUTPUT CONTROL BLOCKS - Processing data communications messages with an Input/Output Control Block (‘IOCB’) ring that includes a number of IOCBs characterized by a priority and arranged in sequential priority for serial operation, where processing the messages includes depositing message data in one or more IOCBs according to depositing criteria; processing, by a message processing module associated with an IOCB having a priority less than the present value of a state counter, the message data in the IOCB while a message processing module associated with an IOCB having a next priority waits; increasing, upon completion of processing the message data of the IOCB having a priority less than the present value of the state counter, the present value of the state counter to a value greater than the next priority; and processing, by the message processing module associated with the IOCB having the next priority, the message data in the IOCB. | 03-07-2013 |
20130066938 | PERFORMING COLLECTIVE OPERATIONS IN A DISTRIBUTED PROCESSING SYSTEM - Methods, apparatuses, and computer program products for performing collective operations on a hybrid distributed processing system that includes a plurality of compute nodes and a plurality of tasks, each task assigned a unique rank, and each compute node coupled for data communications by at least two different networking topologies. At least one of the two networking topologies is a tiered tree topology having a root task and at least two child tasks and the at least two child tasks are peers of one another in the same tier. Embodiments include for each task, sending at least a portion of data corresponding to the task to all child tasks of the task through the tree topology; and sending at least a portion of the data corresponding to the task to all peers of the task at the same tier in the tree topology through the second topology. | 03-14-2013 |
20130067111 | ROUTING DATA COMMUNICATIONS PACKETS IN A PARALLEL COMPUTER - Routing data communications packets in a parallel computer that includes compute nodes organized for collective operations, each compute node including an operating system kernel and a system-level messaging module that is a module of automated computing machinery that exposes a messaging interface to applications, each compute node including a routing table that specifies, for each of a multiplicity of route identifiers, a data communications path through the compute node, including: receiving in a compute node a data communications packet that includes a route identifier value; retrieving from the routing table a specification of a data communications path through the compute node; and routing, by the compute node, the data communications packet according to the data communications path identified by the compute node's routing table entry for the data communications packet's route identifier value. | 03-14-2013 |
20130067198 | COMPRESSING RESULT DATA FOR A COMPUTE NODE IN A PARALLEL COMPUTER - A parallel computer is provided that includes a collection of compute nodes organized as a tree, including: initiating a collective gather operation by a logical root of the collection of compute nodes, including adding result data of the logical root to a gather buffer; for each compute node in the collection of compute nodes, determining whether result data of the compute node is already written in the gather buffer; and if the result data of the compute node is already written in the gather buffer, incrementing a counter assigned to that result data already written in the gather buffer; and if the result data of the compute node is not already written in the gather buffer, writing the result data of the compute node as new result data in the gather buffer, incrementing a counter assigned to that new result data, and writing in the gather buffer a node ID. | 03-14-2013 |
20130067206 | Endpoint-Based Parallel Data Processing In A Parallel Active Messaging Interface Of A Parallel Computer - Endpoint-based parallel data processing in a parallel active messaging interface (‘PAMI’) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks. | 03-14-2013 |
20130067479 | Establishing A Group Of Endpoints In A Parallel Computer - A parallel computer executes a number of tasks, each task includes a number of endpoints and the endpoints are configured to support collective operations. In such a parallel computer, establishing a group of endpoints includes receiving a user specification of a set of endpoints included in a global collection of endpoints, where the user specification defines the set in accordance with a predefined virtual representation of the endpoints, the predefined virtual representation is a data structure setting forth an organization of tasks and endpoints included in the global collection of endpoints and the user specification defines the set of endpoints without a user specification of a particular endpoint; and defining a group of endpoints in dependence upon the predefined virtual representation of the endpoints and the user specification. | 03-14-2013 |
20130067483 | LOCALITY MAPPING IN A DISTRIBUTED PROCESSING SYSTEM - Topology mapping in a distributed processing system that includes a plurality of compute nodes, including: initiating a message passing operation; including in a message generated by the message passing operation, topological information for the sending task; mapping the topological information for the sending task; determining whether the sending task and the receiving task reside on the same topological unit; if the sending task and the receiving task reside on the same topological unit, using an optimal local network pattern for subsequent message passing operations between the sending task and the receiving task; otherwise, using a data communications network between the topological unit of the sending task and the topological unit of the receiving task for subsequent message passing operations between the sending task and the receiving task. | 03-14-2013 |
20130073603 | SEND-SIDE MATCHING OF DATA COMMUNICATIONS MESSAGES - Send-side matching of data communications messages in a distributed computing system comprising a plurality of compute nodes, including: issuing by a receiving node to source nodes a receive message that specifies receipt of a single message to be sent from any source node, the receive message including message matching information, a specification of a hardware-level mutual exclusion device, and an identification of a receive buffer; matching by two or more of the source nodes the receive message with pending send messages in the two or more source nodes; operating by one of the source nodes having a matching send message the mutual exclusion device, excluding messages from other source nodes with matching send messages and identifying to the receiving node the source node operating the mutual exclusion device; and sending to the receiving node from the source node operating the mutual exclusion device a matched pending message. | 03-21-2013 |
20130073733 | BALANCING A DATA PROCESSING LOAD AMONG A PLURALITY OF COMPUTE NODES IN A PARALLEL COMPUTER - Methods, apparatus, and products are disclosed for balancing a data processing load among a plurality of compute nodes in a parallel computer that include: partitioning application data for processing on the plurality of compute nodes into data chunks; receiving, by each compute node, at least one of the data chunks for processing; estimating, by each compute node, processing time involved in processing the data chunks received by that compute node for processing; and redistributing, by at least one of the compute nodes to at least one of the other compute nodes, a portion of the data chunks received by that compute node in dependence upon the processing time estimated by that compute node. | 03-21-2013 |
20130073832 | PERFORMING A DETERMINISTIC REDUCTION OPERATION IN A PARALLEL COMPUTER - A parallel computer that includes compute nodes having computer processors and a CAU (Collectives Acceleration Unit) that couples processors to one another for data communications. In embodiments of the present invention, a deterministic reduction operation includes: organizing processors of the parallel computer and a CAU into a branched tree topology, where the CAU is a root of the branched tree topology and the processors are children of the root CAU; establishing a receive buffer that includes receive elements associated with processors and configured to store the associated processor's contribution data; receiving, in any order from the processors, each processor's contribution data; tracking receipt of each processor's contribution data; and reducing the contribution data, in a predefined order, only after receipt of contribution data from all processors in the branched tree topology. | 03-21-2013 |
20130074086 | PIPELINING PROTOCOLS IN MISALIGNED BUFFER CASES - Systems, methods and articles of manufacture are disclosed for effecting a desired collective operation on a parallel computing system that includes multiple compute nodes. The compute nodes may pipeline multiple collective operations to effect the desired collective operation. To select protocols suitable for the multiple collective operations, the compute nodes may also perform additional collective operations. The compute nodes may pipeline the multiple collective operations and/or the additional collective operations to effect the desired collective operation more efficiently. | 03-21-2013 |
20130074097 | ENDPOINT-BASED PARALLEL DATA PROCESSING WITH NON-BLOCKING COLLECTIVE INSTRUCTIONS IN A PARALLEL ACTIVE MESSAGING INTERFACE OF A PARALLEL COMPUTER - Endpoint-based parallel data processing with non-blocking collective instructions in a parallel active messaging interface (‘PAMI’) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing by the parallel application a data communications geometry, the geometry specifying a set of endpoints that are used in collective operations of the PAMI, including associating with the geometry a list of collective algorithms valid for use with the endpoints of the geometry; registering in each endpoint in the geometry a dispatch callback function for a collective operation; and executing without blocking, through a single one of the endpoints in the geometry, an instruction for the collective operation. | 03-21-2013 |
20130074098 | PROCESSING DATA COMMUNICATIONS EVENTS IN A PARALLEL ACTIVE MESSAGING INTERFACE OF A PARALLEL COMPUTER - Processing data communications events in a parallel active messaging interface (‘PAMI’) of a parallel computer that includes compute nodes that execute a parallel application, with the PAMI including data communications endpoints, and the endpoints are coupled for data communications through the PAMI and through other data communications resources, including determining by an advance function that there are no actionable data communications events pending for its context, placing by the advance function its thread of execution into a wait state, waiting for a subsequent data communications event for the context; responsive to occurrence of a subsequent data communications event for the context, awakening by the thread from the wait state; and processing by the advance function the subsequent data communications event now pending for the context. | 03-21-2013 |
20130080563 | EFFECTING HARDWARE ACCELERATION OF BROADCAST OPERATIONS IN A PARALLEL COMPUTER - Compute nodes of a parallel computer organized for collective operations via a network, each compute node having a receive buffer and establishing a topology for the network; selecting a schedule for a broadcast operation; depositing, by a root node of the topology, broadcast data in a target node's receive buffer, including performing a DMA operation with a well-known memory location for the target node's receive buffer; depositing, by the root node in a memory region designated for storing broadcast data length, a length of the broadcast data, including performing a DMA operation with a well-known memory location of the broadcast data length memory region; and triggering, by the root node, the target node to perform a next DMA operation, including depositing, in a memory region designated for receiving injection instructions for the target node, an instruction to inject the broadcast data into the receive buffer of a subsequent target node. | 03-28-2013 |
20130081037 | PERFORMING COLLECTIVE OPERATIONS IN A DISTRIBUTED PROCESSING SYSTEM - Methods, apparatuses, and computer program products for performing collective operations on a hybrid distributed processing system including: determining by at least one task that a parent of the task has failed to send the task data through the tree topology; and determining whether to request the data from a grandparent of the task or a peer of the task in the same tier in the tree topology; and if the task requests the data from the grandparent, requesting the data and receiving the data from the grandparent of the task through the second networking topology; and if the task requests the data from a peer of the task in the same tier in the tree, requesting the data and receiving the data from a peer of the task through the second networking topology. | 03-28-2013 |
20130081059 | DATA COMMUNICATIONS IN A PARALLEL ACTIVE MESSAGING INTERFACE OF A PARALLEL COMPUTER - Data communications in a parallel active messaging interface (‘PAMI’) of a parallel computer, the parallel computer including a plurality of compute nodes that execute a parallel application, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes and the endpoints coupled for data communications through the PAMI and through data communications resources, including receiving in an origin endpoint of the PAMI a data communications instruction, the instruction characterized by an instruction type, the instruction specifying a transmission of transfer data from the origin endpoint to a target endpoint and transmitting, in accordance with the instruction type, the transfer data from the origin endpoint to the target endpoint. | 03-28-2013 |
20130086358 | COLLECTIVE OPERATION PROTOCOL SELECTION IN A PARALLEL COMPUTER - Collective operation protocol selection in a parallel computer that includes compute nodes may be carried out by calling a collective operation with operating parameters; selecting a protocol for executing the operation and executing the operation with the selected protocol. Selecting a protocol includes: iteratively, until a prospective protocol meets predetermined performance criteria: providing, to a protocol performance function for the prospective protocol, the operating parameters; determining whether the prospective protocol meets predefined performance criteria by evaluating a predefined performance fit equation, calculating a measure of performance of the protocol for the operating parameters; determining that the prospective protocol meets predetermined performance criteria and selecting the protocol for executing the operation only if the calculated measure of performance is greater than a predefined minimum performance threshold. | 04-04-2013 |
20130086551 | Providing A User With A Graphics Based IDE For Developing Software For Distributed Computing Systems - Graphics based IDE for distributed computing systems software development including providing a graphical representation of a topology of a distributed computing system for which the user is to develop a software application; receiving an identification of a system component upon which a portion of the application is to execute; providing a text editor for receiving from the user computer program instructions forming the portion of the application; inserting, without user intervention as part of the portion of the application, predetermined computer program instructions configured to support the identified system component; receiving, through the text editor, the portion of the application including the predetermined computer program instructions configured to support the identified system component; and storing, the computer program instructions forming the portion of the application, at a user specified location within the application. | 04-04-2013 |
20130091510 | DATA COMMUNICATIONS IN A PARALLEL ACTIVE MESSAGING INTERFACE OF A PARALLEL COMPUTER - Data communications in a parallel active messaging interface (‘PAMI’) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, endpoints coupled for data communications through the PAMI and through data communications resources, including receiving in an origin endpoint of the PAMI a SEND instruction, the SEND instruction specifying a transmission of transfer data from the origin endpoint to a first target endpoint; transmitting from the origin endpoint to the first target endpoint a Request-To-Send (‘RTS’) message advising the first target endpoint of the location and size of the transfer data; assigning by the first target endpoint to each of a plurality of target endpoints separate portions of the transfer data; and receiving by the plurality of target endpoints the transfer data. | 04-11-2013 |
20130111482 | ESTABLISHING A GROUP OF ENDPOINTS IN A PARALLEL COMPUTER | 05-02-2013 |
20130111496 | PERFORMING A LOCAL BARRIER OPERATION | 05-02-2013 |
20130117403 | Managing Internode Data Communications For An Uninitialized Process In A Parallel Computer - A parallel computer includes nodes, each having main memory and a messaging unit (MU). Each MU includes computer memory, which in turn includes MU message buffers. Each MU message buffer is associated with an uninitialized process on the compute node. In the parallel computer, managing internode data communications for an uninitialized process includes: receiving, by an MU of a compute node, one or more data communications messages in an MU message buffer associated with an uninitialized process on the compute node; determining, by an application agent, that the MU message buffer associated with the uninitialized process is full prior to initialization of the uninitialized process; establishing, by the application agent, a temporary message buffer for the uninitialized process in main computer memory; and moving, by the application agent, data communications messages from the MU message buffer associated with the uninitialized process to the temporary message buffer in main computer memory. | 05-09-2013 |
20130117761 | Intranode Data Communications In A Parallel Computer - Intranode data communications in a parallel computer that includes compute nodes configured to execute processes, where the data communications include: allocating, upon initialization of a first process of a compute node, a region of shared memory; establishing, by the first process, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; sending, to a second process on the same compute node, a data communications message without determining whether the second process has been initialized, including storing the data communications message in the message buffer of the second process; and upon initialization of the second process: retrieving, by the second process, a pointer to the second process's message buffer; and retrieving, by the second process from the second process's message buffer in dependence upon the pointer, the data communications message sent by the first process. | 05-09-2013 |
20130117764 | Internode Data Communications In A Parallel Computer - Internode data communications in a parallel computer that includes compute nodes that each include main memory and a messaging unit, the messaging unit including computer memory and coupling compute nodes for data communications, in which, for each compute node at compute node boot time: a messaging unit allocates, in the messaging unit's computer memory, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; receives, prior to initialization of a particular process on the compute node, a data communications message intended for the particular process; and stores the data communications message in the message buffer associated with the particular process. Upon initialization of the particular process, the process establishes a messaging buffer in main memory of the compute node and copies the data communications message from the message buffer of the messaging unit into the message buffer of main memory. | 05-09-2013 |
20130124666 | MANAGING INTERNODE DATA COMMUNICATIONS FOR AN UNINITIALIZED PROCESS IN A PARALLEL COMPUTER - A parallel computer includes nodes, each having main memory and a messaging unit (MU). Each MU includes computer memory, which in turn includes MU message buffers. Each MU message buffer is associated with an uninitialized process on the compute node. In the parallel computer, managing internode data communications for an uninitialized process includes: receiving, by an MU of a compute node, one or more data communications messages in an MU message buffer associated with an uninitialized process on the compute node; determining, by an application agent, that the MU message buffer associated with the uninitialized process is full prior to initialization of the uninitialized process; establishing, by the application agent, a temporary message buffer for the uninitialized process in main computer memory; and moving, by the application agent, data communications messages from the MU message buffer associated with the uninitialized process to the temporary message buffer in main computer memory. | 05-16-2013 |
20130125135 | INTRANODE DATA COMMUNICATIONS IN A PARALLEL COMPUTER - Intranode data communications in a parallel computer that includes compute nodes configured to execute processes, where the data communications include: allocating, upon initialization of a first process of a compute node, a region of shared memory; establishing, by the first process, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; sending, to a second process on the same compute node, a data communications message without determining whether the second process has been initialized, including storing the data communications message in the message buffer of the second process; and upon initialization of the second process: retrieving, by the second process, a pointer to the second process's message buffer; and retrieving, by the second process from the second process's message buffer in dependence upon the pointer, the data communications message sent by the first process. | 05-16-2013 |
20130125140 | INTERNODE DATA COMMUNICATIONS IN A PARALLEL COMPUTER - Internode data communications in a parallel computer that includes compute nodes that each include main memory and a messaging unit, the messaging unit including computer memory and coupling compute nodes for data communications, in which, for each compute node at compute node boot time: a messaging unit allocates, in the messaging unit's computer memory, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; receives, prior to initialization of a particular process on the compute node, a data communications message intended for the particular process; and stores the data communications message in the message buffer associated with the particular process. Upon initialization of the particular process, the process establishes a messaging buffer in main memory of the compute node and copies the data communications message from the message buffer of the messaging unit into the message buffer of main memory. | 05-16-2013 |
20130173675 | PERFORMING A GLOBAL BARRIER OPERATION IN A PARALLEL COMPUTER - Performing a global barrier operation in a parallel computer that includes compute nodes coupled for data communications, where each compute node executes tasks, with one task on each compute node designated as a master task, including: for each task on each compute node until all master tasks have joined a global barrier: determining whether the task is a master task; if the task is not a master task, joining a single local barrier; if the task is a master task, joining the global barrier and the single local barrier only after all other tasks on the compute node have joined the single local barrier. | 07-04-2013 |
20130176904 | Providing Full Point-To-Point Communications Among Compute Nodes Of An Operational Group In A Global Combining Network Of A Parallel Computer - Methods, apparatus, and products are disclosed for providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: receiving a network packet in a compute node, the network packet specifying a destination compute node; selecting, in dependence upon the destination compute node, at least one of the links for the compute node along which to forward the network packet toward the destination compute node; and forwarding the network packet along the selected link to the adjacent compute node connected to the compute node through the selected link. | 07-11-2013 |
20130179897 | Thread Selection During Context Switching On A Plurality Of Compute Nodes - Methods, apparatus, and products are disclosed for thread selection during context switching on a plurality of compute nodes that includes: executing, by a compute node, an application using a plurality of threads of execution, including executing one or more of the threads of execution; selecting, by the compute node from a plurality of available threads of execution for the application, a next thread of execution in dependence upon power characteristics for each of the available threads; determining, by the compute node, whether criteria for a thread context switch are satisfied; and performing, by the compute node, the thread context switch if the criteria for a thread context switch are satisfied, including executing the next thread of execution. | 07-11-2013 |
20130191851 | Monitoring Operating Parameters In A Distributed Computing System With Active Messages - In a distributed computing system including nodes organized for collective operations: initiating, by a root node through an active message to all other nodes, a collective operation, the active message including an instruction to each node to store operating parameter data in each node's send buffer; and, responsive to the active message: storing, by each node, the node's operating parameter data in the node's send buffer; and returning, by the node, the operating parameter data as a result of the collective operation. | 07-25-2013 |
20130212145 | Initiating A Collective Operation In A Parallel Computer - Initiating a collective operation in a parallel computer that includes compute nodes coupled for data communications and organized in an operational group for collective operations with one compute node assigned as a root node, including: identifying, by a non-root compute node, a collective operation to execute in the operational group of compute nodes; initiating, by the non-root compute node, execution of the collective operation amongst the compute nodes of the operational group including: sending, by the non-root compute node to one or more of the other compute nodes in the operational group, an active message, the active message including information configured to initiate execution of the collective operation amongst the compute nodes of the operational group; and executing, by the compute nodes of the operational group, the collective operation. | 08-15-2013 |
20130212555 | Developing A Collective Operation For Execution In A Parallel Computer - Developing a collective operation for execution in a parallel computer that includes compute nodes coupled for data communications, including: receiving, by a collective development tool, a specification of a target collective operation to develop; receiving, by the collective development tool, a specification of computer hardware characteristics of the parallel computer within which the target collective operation will be executed; selecting, by the collective development tool automatically without user interaction, iteratively for each stage of the target collective operation, a collective primitive in dependence upon the specification of computer hardware characteristics and a predefined set of rules specifying selection criteria of collective primitives based on computer hardware characteristics; and generating, by the collective development tool, the target collective operation in dependence upon the selected collective primitives. | 08-15-2013 |
20130212558 | Developing Collective Operations For A Parallel Computer - Developing collective operations for a parallel computer that includes compute nodes includes: presenting, by a collective development tool, a graphical user interface (‘GUI’) to a collective developer; receiving, by the collective development tool from the collective developer through the GUI, a selection of one or more collective primitives; receiving, by the collective development tool from the collective developer through the GUI, a specification of a serial order of the collective primitives and a specification of input and output buffers for each collective primitive; and generating, by the collective development tool in dependence upon the selection of collective primitives, the serial order of the collective primitives, and the input and output buffers for each collective primitive, executable code that carries out the collective operation specified by the collective primitives. | 08-15-2013 |
20130212561 | DEVELOPING COLLECTIVE OPERATIONS FOR A PARALLEL COMPUTER - Developing collective operations for a parallel computer that includes compute nodes includes: presenting, by a collective development tool, a graphical user interface (‘GUI’) to a collective developer; receiving, by the collective development tool from the collective developer through the GUI, a selection of one or more collective primitives; receiving, by the collective development tool from the collective developer through the GUI, a specification of a serial order of the collective primitives and a specification of input and output buffers for each collective primitive; and generating, by the collective development tool in dependence upon the selection of collective primitives, the serial order of the collective primitives, and the input and output buffers for each collective primitive, executable code that carries out the collective operation specified by the collective primitives. | 08-15-2013 |
20130212572 | Implementing Updates To Source Code Executing On A Plurality Of Compute Nodes - Methods, apparatuses, and computer program products for implementing updates to source code executing on a plurality of compute nodes are provided. Embodiments include receiving, by a compute node, a broadcast update-notification message indicating there is an update to the source code executing on the plurality of compute nodes; in response to receiving the update-notification message, implementing a distributed barrier; based on the distributed barrier, halting execution of the source code at a particular location within the source code; based on the distributed barrier, updating in-place the source code including retaining workpiece data in memory of the compute node, the workpiece data corresponding to the execution of the source code; and based on completion of the updating of the source code, resuming with the retained workpiece data execution of the source code at the particular location within the source code where execution was halted. | 08-15-2013 |
20130212573 | IMPLEMENTING UPDATES TO SOURCE CODE EXECUTING ON A PLURALITY OF COMPUTE NODES - Methods, apparatuses, and computer program products for implementing updates to source code executing on a plurality of compute nodes are provided. Embodiments include receiving, by a compute node, a broadcast update-notification message indicating there is an update to the source code executing on the plurality of compute nodes; in response to receiving the update-notification message, implementing a distributed barrier; based on the distributed barrier, halting execution of the source code at a particular location within the source code; based on the distributed barrier, updating in-place the source code including retaining workpiece data in memory of the compute node, the workpiece data corresponding to the execution of the source code; and based on completion of the updating of the source code, resuming with the retained workpiece data execution of the source code at the particular location within the source code where execution was halted. | 08-15-2013 |
20130219410 | Processing Unexpected Messages At A Compute Node Of A Parallel Computer - Methods, apparatuses, and computer program products for processing unexpected messages at a compute node of a parallel computer are provided. Embodiments include receiving, by the compute node, a portion of a message from another compute node of the parallel computer, the message comprising a plurality of separate portions; in response to receiving the portion of the message, determining, by the compute node, whether one of the applications executing on the compute node, has indicated that the message is expected; if one of the applications executing on the compute node has not indicated that the message is expected, storing, by the compute node, the portion of the message in an unexpected message buffer within the compute node; and if one of the applications executing on the compute node has indicated that the message is expected, storing the portion of the message at a storage destination indicated by the message. | 08-22-2013 |
20130238860 | Administering Registered Virtual Addresses In A Hybrid Computing Environment Including Maintaining A Watch List Of Currently Registered Virtual Addresses By An Operating System - Administering registered virtual addresses in a hybrid computing environment that includes a host computer and an accelerator, the accelerator architecture optimized, with respect to the host computer architecture, for speed of execution of a particular class of computing functions, the host computer and the accelerator adapted to one another for data communications by a system level message passing module, where administering registered virtual addresses includes maintaining, by an operating system, a watch list of ranges of currently registered virtual addresses; upon a change in physical to virtual address mappings of a particular range of virtual addresses falling within the ranges included in the watch list, notifying the system level message passing module by the operating system of the change; and updating, by the system level message passing module, a cache of ranges of currently registered virtual addresses to reflect the change in physical to virtual address mappings. | 09-12-2013 |
20130246533 | Broadcasting A Message In A Parallel Computer - Methods, systems, and products are disclosed for broadcasting a message in a parallel computer that includes: transmitting, by the logical root to all of the nodes directly connected to the logical root, a message; and for each node except the logical root: receiving the message; if that node is the physical root, then transmitting the message to all of the child nodes except the child node from which the message was received; if that node received the message from a parent node and if that node is not a leaf node, then transmitting the message to all of the child nodes; and if that node received the message from a child node and if that node is not the physical root, then transmitting the message to all of the child nodes except the child node from which the message was received and transmitting the message to the parent node. | 09-19-2013 |
20130290673 | PERFORMING A DETERMINISTIC REDUCTION OPERATION IN A PARALLEL COMPUTER - Performing a deterministic reduction operation in a parallel computer that includes compute nodes, each of which includes computer processors and a CAU (Collectives Acceleration Unit) that couples computer processors to one another for data communications, including organizing processors and a CAU into a branched tree topology in which the CAU is a root and the processors are children; receiving, from each of the processors in any order, dummy contribution data, where each processor is restricted from sending any other data to the root CAU prior to receiving an acknowledgement of receipt from the root CAU; sending, by the root CAU to the processors in the branched tree topology, in a predefined order, acknowledgements of receipt of the dummy contribution data; receiving, by the root CAU from the processors in the predefined order, the processors' contribution data to the reduction operation; and reducing, by the root CAU, the processors' contribution data. | 10-31-2013 |
20130304995 | Scheduling Synchronization In Association With Collective Operations In A Parallel Computer - Methods, apparatuses, and computer program products for scheduling synchronization in association with collective operations in a parallel computer that includes a shared memory and a plurality of compute nodes that execute a parallel application utilizing the shared memory are provided. Embodiments include acquiring an available channel of the shared memory; posting to the acquired channel of the shared memory one or more collective operations and a synchronization point; determining that processing within the acquired channel has reached the synchronization point; and posting to the acquired channel, in response to determining that processing within the acquired channel has reached the synchronization point, a background synchronization operation corresponding to the one or more collective operations. | 11-14-2013 |
20140047451 | Optimizing Collective Communications Within A Parallel Computer - Methods, apparatuses, and computer program products for optimizing collective communications within a parallel computer comprising a plurality of hardware threads for executing software threads of a parallel application are provided. Embodiments include a processor of a parallel computer determining for each software thread, an affinity of the software thread to a particular hardware thread. Each affinity indicates an assignment of a software thread to a particular hardware thread. The processor also generates one or more affinity domains based on the affinities of the software threads. Embodiments also include a processor generating, for each affinity domain, a topology of the affinity domain based on the affinities of the software threads to the hardware threads. According to embodiments of the present application, a processor also performs, based on the generated topologies of the affinity domains, a collective operation on one or more software threads. | 02-13-2014 |
20140164592 | DETERMINING A SYSTEM CONFIGURATION FOR PERFORMING A COLLECTIVE OPERATION ON A PARALLEL COMPUTER - Determining a system configuration for performing a collective operation on a parallel computer that includes a plurality of compute nodes, the compute nodes coupled for data communications over a data communications network, including: selecting a system configuration on the parallel computer for executing the collective operation; executing the collective operation on the selected system configuration on the parallel computer; determining performance metrics associated with executing the collective operation on the selected system configuration on the parallel computer; selecting, using a simulated annealing algorithm, a plurality of test system configurations on the parallel computer for executing the collective operation, wherein the simulated annealing algorithm specifies a similarity threshold between a plurality of system configurations; executing, the collective operation on each of the test system configurations; and determining performance metrics associated with executing the collective operation on each of the test system configurations. | 06-12-2014 |
20140164600 | DETERMINING A SYSTEM CONFIGURATION FOR PERFORMING A COLLECTIVE OPERATION ON A PARALLEL COMPUTER - Determining a system configuration for performing a collective operation on a parallel computer that includes a plurality of compute nodes, the compute nodes coupled for data communications over a data communications network, including: selecting a system configuration on the parallel computer for executing the collective operation; executing the collective operation on the selected system configuration on the parallel computer; determining performance metrics associated with executing the collective operation on the selected system configuration on the parallel computer; selecting, using a simulated annealing algorithm, a plurality of test system configurations on the parallel computer for executing the collective operation, wherein the simulated annealing algorithm specifies a similarity threshold between a plurality of system configurations; executing, the collective operation on each of the test system configurations; and determining performance metrics associated with executing the collective operation on each of the test system configurations. | 06-12-2014 |
20140165075 | EXECUTING A COLLECTIVE OPERATION ALGORITHM IN A PARALLEL COMPUTER - Executing a collective operation algorithm in a parallel computer includes a compute node of an operational group determining a required number of participants for execution of a collective operation algorithm and determining a number of contributing nodes having data to participate in the algorithm. Embodiments also include the compute node calculating a number of ghost nodes to participate in the algorithm. According to embodiments of the present invention, the number of ghost nodes is the required number of participants minus the number of contributing nodes having data to participate. Embodiments also include the compute node selecting from a plurality of ghost nodes, the calculated number of ghost nodes for participation in the execution of the algorithm and executing the algorithm with both the selected ghost nodes and the contributing nodes. | 06-12-2014 |
20140165076 | EXECUTING A COLLECTIVE OPERATION ALGORITHM IN A PARALLEL COMPUTER - Executing a collective operation algorithm in a parallel computer includes a compute node of an operational group determining a required number of participants for execution of a collective operation algorithm and determining a number of contributing nodes having data to participate in the algorithm. Embodiments also include the compute node calculating a number of ghost nodes to participate in the algorithm. According to embodiments of the present invention, the number of ghost nodes is the required number of participants minus the number of contributing nodes having data to participate. Embodiments also include the compute node selecting from a plurality of ghost nodes, the calculated number of ghost nodes for participation in the execution of the algorithm and executing the algorithm with both the selected ghost nodes and the contributing nodes. | 06-12-2014 |
20140173201 | ACQUIRING REMOTE SHARED VARIABLE DIRECTORY INFORMATION IN A PARALLEL COMPUTER - Methods, parallel computers, and computer program products for acquiring remote shared variable directory (SVD) information in a parallel computer are provided. Embodiments include a runtime optimizer determining that a first thread of a first task requires shared resource data stored in a memory partition corresponding to a second thread of a second task. Embodiments also include the runtime optimizer requesting from the second thread, in response to determining that the first thread of the first task requires the shared resource data, SVD information associated with the shared resource data. Embodiments also include the runtime optimizer receiving from the second thread, the SVD information associated with the shared resource data. | 06-19-2014 |
20140173204 | ANALYZING UPDATE CONDITIONS FOR SHARED VARIABLE DIRECTORY INFORMATION IN A PARALLEL COMPUTER - Methods, parallel computers, and computer program products for analyzing update conditions for shared variable directory (SVD) information in a parallel computer are provided. Embodiments include a runtime optimizer receiving a compare-and-swap operation header. The compare-and-swap operation header includes an SVD key, a first SVD address, and an updated first SVD address. The first SVD address is associated with the SVD key in a first SVD associated with a first task. Embodiments also include the runtime optimizer retrieving from a remote address cache associated with the second task, a second SVD address indicating a location within a memory partition associated with the first SVD in response to receiving the compare-and-swap operation header. Embodiments also include the runtime optimizer determining whether the second SVD address matches the first SVD address and transmitting a result indicating whether the second SVD address matches the first SVD address. | 06-19-2014 |
20140173205 | ANALYZING UPDATE CONDITIONS FOR SHARED VARIABLE DIRECTORY INFORMATION IN A PARALLEL COMPUTER - Methods, parallel computers, and computer program products for analyzing update conditions for shared variable directory (SVD) information in a parallel computer are provided. Embodiments include a runtime optimizer receiving a compare-and-swap operation header. The compare-and-swap operation header includes an SVD key, a first SVD address, and an updated first SVD address. The first SVD address is associated with the SVD key in a first SVD associated with a first task. Embodiments also include the runtime optimizer retrieving from a remote address cache associated with the second task, a second SVD address indicating a location within a memory partition associated with the first SVD in response to receiving the compare-and-swap operation header. Embodiments also include the runtime optimizer determining whether the second SVD address matches the first SVD address and transmitting a result indicating whether the second SVD address matches the first SVD address. | 06-19-2014 |
20140173212 | ACQUIRING REMOTE SHARED VARIABLE DIRECTORY INFORMATION IN A PARALLEL COMPUTER - Methods, parallel computers, and computer program products for acquiring remote shared variable directory (SVD) information in a parallel computer are provided. Embodiments include a runtime optimizer determining that a first thread of a first task requires shared resource data stored in a memory partition corresponding to a second thread of a second task. Embodiments also include the runtime optimizer requesting from the second thread, in response to determining that the first thread of the first task requires the shared resource data, SVD information associated with the shared resource data. Embodiments also include the runtime optimizer receiving from the second thread, the SVD information associated with the shared resource data. | 06-19-2014 |
20140173257 | REQUESTING SHARED VARIABLE DIRECTORY (SVD) INFORMATION FROM A PLURALITY OF THREADS IN A PARALLEL COMPUTER - Methods, parallel computers, and computer program products for requesting shared variable directory (SVD) information from a plurality of threads in a parallel computer are provided. Embodiments include a runtime optimizer detecting that a first thread requires a plurality of updated SVD information associated with shared resource data stored in a plurality of memory partitions. Embodiments also include a runtime optimizer broadcasting, in response to detecting that the first thread requires the updated SVD information, a gather operation message header to the plurality of threads. The gather operation message header indicates an SVD key corresponding to the required updated SVD information and a local address associated with the first thread to receive a plurality of updated SVD information associated with the SVD key. Embodiments also include the runtime optimizer receiving at the local address, the plurality of updated SVD information from the plurality of threads. | 06-19-2014 |
20140173604 | CONDITIONALLY UPDATING SHARED VARIABLE DIRECTORY (SVD) INFORMATION IN A PARALLEL COMPUTER - Methods, parallel computers, and computer program products for conditionally updating shared variable directory (SVD) information in a parallel computer are provided. Embodiments include a runtime optimizer receiving a broadcast reduction operation header. The broadcast reduction operation header includes an SVD key and a first SVD address. The first SVD address is associated with the SVD key in a first SVD associated with a first task. Embodiments also include the runtime optimizer retrieving from a remote address cache associated with the second task, a second SVD address indicating a location within a memory partition associated with the first SVD, in response to receiving the broadcast reduction operation header. Embodiments also include the runtime optimizer determining that the first SVD address does not match the second SVD address and updating the remote address cache with the first SVD address. | 06-19-2014 |
20140173615 | CONDITIONALLY UPDATING SHARED VARIABLE DIRECTORY (SVD) INFORMATION IN A PARALLEL COMPUTER - Methods, parallel computers, and computer program products for conditionally updating shared variable directory (SVD) information in a parallel computer are provided. Embodiments include a runtime optimizer receiving a broadcast reduction operation header. The broadcast reduction operation header includes an SVD key and a first SVD address. The first SVD address is associated with the SVD key in a first SVD associated with a first task. Embodiments also include the runtime optimizer retrieving from a remote address cache associated with the second task, a second SVD address indicating a location within a memory partition associated with the first SVD, in response to receiving the broadcast reduction operation header. Embodiments also include the runtime optimizer determining that the first SVD address does not match the second SVD address and updating the remote address cache with the first SVD address. | 06-19-2014 |
20140173626 | BROADCASTING SHARED VARIABLE DIRECTORY (SVD) INFORMATION IN A PARALLEL COMPUTER - Methods, parallel computers, and computer program products for broadcasting shared variable directory (SVD) information in a parallel computer are provided. Embodiments include a runtime optimizer detecting, by a runtime optimizer of the parallel computer, a change in SVD information within an SVD associated with a first thread. Embodiments also include a runtime optimizer identifying a plurality of threads requiring notification of the change in the SVD information. Embodiments also include the runtime optimizer in response to detecting the change in the SVD information, broadcasting to each thread of the identified plurality of threads, a broadcast message header and update data indicating the change in the SVD information. | 06-19-2014 |
20140173627 | REQUESTING SHARED VARIABLE DIRECTORY (SVD) INFORMATION FROM A PLURALITY OF THREADS IN A PARALLEL COMPUTER - Methods, parallel computers, and computer program products for requesting shared variable directory (SVD) information from a plurality of threads in a parallel computer are provided. Embodiments include a runtime optimizer detecting that a first thread requires a plurality of updated SVD information associated with shared resource data stored in a plurality of memory partitions. Embodiments also include a runtime optimizer broadcasting, in response to detecting that the first thread requires the updated SVD information, a gather operation message header to the plurality of threads. The gather operation message header indicates an SVD key corresponding to the required updated SVD information and a local address associated with the first thread to receive a plurality of updated SVD information associated with the SVD key. Embodiments also include the runtime optimizer receiving at the local address, the plurality of updated SVD information from the plurality of threads. | 06-19-2014 |
20140173629 | BROADCASTING SHARED VARIABLE DIRECTORY (SVD) INFORMATION IN A PARALLEL COMPUTER - Methods, parallel computers, and computer program products for broadcasting shared variable directory (SVD) information in a parallel computer are provided. Embodiments include a runtime optimizer detecting, by a runtime optimizer of the parallel computer, a change in SVD information within an SVD associated with a first thread. Embodiments also include a runtime optimizer identifying a plurality of threads requiring notification of the change in the SVD information. Embodiments also include the runtime optimizer in response to detecting the change in the SVD information, broadcasting to each thread of the identified plurality of threads, a broadcast message header and update data indicating the change in the SVD information. | 06-19-2014 |
20140192652 | TOKEN-BASED FLOW CONTROL OF MESSAGES IN A PARALLEL COMPUTER - Token-based flow control of messages in a parallel computer, the parallel computer including a plurality of compute nodes, each compute node including one or more computer processors, including: allocating, by a token administration module to a plurality of the computer processors in the parallel computer, a number of data communications tokens; identifying all communicators executing on each computer processor, where each communicator is participating in a distinct parallel operation executing on the parallel computer; allocating, to the communicators, the data communications tokens; determining, by a communicator attempting to send data to the destination, whether the communicator has enough available data communications tokens to send the data to the destination; and responsive to determining that the communicator has enough available data communications tokens to send the data, sending, by the communicator, the data to the destination. | 07-10-2014 |
20140195688 | TOKEN-BASED FLOW CONTROL OF MESSAGES IN A PARALLEL COMPUTER - Token-based flow control of messages in a parallel computer, the parallel computer including a plurality of compute nodes, each compute node including one or more computer processors, including: allocating, by a token administration module to a plurality of the computer processors in the parallel computer, a number of data communications tokens; identifying all communicators executing on each computer processor, where each communicator is participating in a distinct parallel operation executing on the parallel computer; allocating, to the communicators, the data communications tokens; determining, by a communicator attempting to send data to the destination, whether the communicator has enough available data communications tokens to send the data to the destination; and responsive to determining that the communicator has enough available data communications tokens to send the data, sending, by the communicator, the data to the destination. | 07-10-2014 |
20140244974 | Background Collective Operation Management In A Parallel Computer - Background collective operation management in a parallel computer, the parallel computer including one or more compute nodes operatively coupled for data communications over one or more data communications networks, including: determining, by a management availability module, whether a compute node in the parallel computer is available to perform a background collective operation management task; responsive to determining that the compute node is available to perform the background collective operation management task, determining, by the management availability module, whether the compute node has access to sufficient resources to perform the background collective operation management task; and responsive to determining that the compute node has access to sufficient resources to perform the background collective operation management task, initiating, by the management availability module, execution of the background collective operation management task. | 08-28-2014 |
20140245316 | Background Collective Operation Management In A Parallel Computer - Background collective operation management in a parallel computer, the parallel computer including one or more compute nodes operatively coupled for data communications over one or more data communications networks, including: determining, by a management availability module, whether a compute node in the parallel computer is available to perform a background collective operation management task; responsive to determining that the compute node is available to perform the background collective operation management task, determining, by the management availability module, whether the compute node has access to sufficient resources to perform the background collective operation management task; and responsive to determining that the compute node has access to sufficient resources to perform the background collective operation management task, initiating, by the management availability module, execution of the background collective operation management task. | 08-28-2014 |
20140258417 | Collective Operation Management In A Parallel Computer - Methods, apparatuses, and computer program products for collective operation management in a parallel computer are provided. Embodiments include a parallel computer having a first compute node operatively coupled for data communications over a tree data communications network with a plurality of child compute nodes. Embodiments also include each child compute node performing a first collective operation. The first compute node, for each child compute node, receives, from the child compute node, a result of the first collective operation performed by the child compute node. For each result received from a child compute node, the first compute node stores a timestamp indicating a time that the child compute node completed the first collective operation. The first compute node also manages, based on the stored timestamps, execution of a second collective operation over the tree data communications network. | 09-11-2014 |
20140258538 | Collective Operation Management In A Parallel Computer - Methods, apparatuses, and computer program products for collective operation management in a parallel computer are provided. Embodiments include a parallel computer having a first compute node operatively coupled for data communications over a tree data communications network with a plurality of child compute nodes. Embodiments also include each child compute node performing a first collective operation. The first compute node, for each child compute node, receives, from the child compute node, a result of the first collective operation performed by the child compute node. For each result received from a child compute node, the first compute node stores a timestamp indicating a time that the child compute node completed the first collective operation. The first compute node also manages, based on the stored timestamps, execution of a second collective operation over the tree data communications network. | 09-11-2014 |
20140258746 | Collective Operation Management In A Parallel Computer - Methods, apparatuses, and computer program products for collective operation management in a parallel computer are provided. Embodiments include a parallel computer having a first compute node operatively coupled for data communications over a tree data communications network with a plurality of child compute nodes. Embodiments also include each child compute node performing a first collective operation. The first compute node, for each child compute node, receives, from the child compute node, a result of the first collective operation performed by the child compute node. In response to receiving at least one result, the first compute node reduces a power consumption level of the child compute node. | 09-11-2014 |
20140258748 | Collective Operation Management In A Parallel Computer - Methods, apparatuses, and computer program products for collective operation management in a parallel computer are provided. Embodiments include a parallel computer having a first compute node operatively coupled for data communications over a tree data communications network with a plurality of child compute nodes. Embodiments also include each child compute node performing a first collective operation. The first compute node, for each child compute node, receives, from the child compute node, a result of the first collective operation performed by the child compute node. In response to receiving at least one result, the first compute node reduces a power consumption level of the child compute node. | 09-11-2014 |
20140280601 | Collective Operation Management In A Parallel Computer - Methods, apparatuses, and computer program products for collective operation management in a parallel computer are provided. Embodiments include a parallel computer having a plurality of compute nodes coupled for data communications over a data communications network. Embodiments include a first compute node entering a collective operation. Each compute node of the plurality of compute nodes is associated with the collective operation. In response to entering the collective operation, the first compute node decreases power consumption of the first compute node. | 09-18-2014 |
20140280820 | Collective Operation Management In A Parallel Computer - Methods, apparatuses, and computer program products for collective operation management in a parallel computer are provided. Embodiments include a parallel computer having a plurality of compute nodes coupled for data communications over a data communications network. Embodiments include a first compute node entering a collective operation. Each compute node of the plurality of compute nodes is associated with the collective operation. In response to entering the collective operation, the first compute node decreases power consumption of the first compute node. | 09-18-2014 |
20140281723 | Algorithm Selection For Collective Operations In A Parallel Computer - Algorithm selection for collective operations in a parallel computer that includes a plurality of compute nodes may include: profiling a plurality of algorithms for each of a set of collective operations, including for each collective operation: executing the operation a plurality of times with each execution varying one or more of: geometry, message size, data type, and algorithm to effect the collective operation, thereby generating performance metrics for each execution; storing the performance metrics in a performance profile; at load time of a parallel application including a plurality of parallel processes configured in a particular geometry, filtering the performance profile in dependence upon the particular geometry; during run-time of the parallel application, selecting, for at least one collective operation, an algorithm to effect the operation in dependence upon characteristics of the parallel application and the performance profile; and executing the operation using the selected algorithm. | 09-18-2014 |
20150055889 | PARALLEL APPLICATION CHECKPOINT IMAGE COMPRESSION - Parallel application checkpoint image compression may be carried out in a parallel computer. The parallel computer may include a plurality of compute nodes, where each node is configured to execute one or more parallel tasks of the parallel application. The parallel tasks may be organized into an operational group for collective communications. In such a parallel computer, checkpoint image compression may include: generating, by each task of the parallel application, an image for checkpointing the parallel application; selecting, by an image management task, one of the images as a base template image; constructing, by the image management task, a binary radix tree, including storing differences between each task's image and the base template image in the binary radix tree; and storing, by the image management task as a checkpoint for the parallel application, the binary radix tree and the base template image, without storing every task's image. | 02-26-2015 |
20150057829 | Managing Cooling Operations In A Parallel Computer Comprising A Plurality Of Compute Nodes - Managing cooling operations in a parallel computer comprising a plurality of compute nodes, including: receiving, by a target compute node from an origin compute node, a message; identifying, by the target compute node, one or more characteristics of the message; and controlling, by the target compute node, cooling operations in dependence upon the one or more characteristics of the message. | 02-26-2015 |
20150058657 | ADAPTIVE CLOCK THROTTLING FOR EVENT PROCESSING - Methods, apparatuses, and computer program products for adaptive clock throttling for event processing are provided. Embodiments include an event processing system receiving a plurality of events from one or more components of a distributed processing system. Embodiments also include the event processing system determining that an arrival attribute of the plurality of events exceeds an arrival threshold. Embodiments also include the event processing system adjusting, in response to determining that the arrival attribute of the plurality of events exceeds the arrival threshold, a clock speed of at least one of the event processing system and a component of the distributed processing system. | 02-26-2015 |
20150058926 | Shared Page Access Control Among Cloud Objects In A Distributed Cloud Environment - A management system in a distributed cloud environment that includes a plurality of cloud objects may administer shared page access control among the cloud objects. Such shared access control includes: receiving, by the management system from a requesting cloud object, a request to access a shared page; discovering, by the management system, one or more page attributes of the shared page, where the one or more page attributes of the shared page include attributes specified by one or more cloud objects of the distributed cloud environment; identifying, by the management system in dependence upon the page attributes, one or more access control measures to perform; performing, by the management system in dependence upon the page attributes, the access control measures; and determining, by the management system, whether to grant the requesting cloud object access to the shared page. | 02-26-2015 |
20150063100 | Data Communications In A Distributed Computing Environment - Data communications may be carried out in a distributed computing environment that includes computers coupled for data communications through communications adapters and an active messaging interface (‘AMI’). Such data communications may be carried out by: issuing, by a sender to a receiver, an eager SEND data communications instruction to transfer SEND data, the instruction including information describing the data location at the sender and the data size; transmitting, by the sender to the receiver, the SEND data as eager data packets; discarding, by the receiver in dependence upon data flow conditions, eager data packets as they are received from the sender; and transferring the SEND data, in dependence upon the data flow conditions, by the receiver from the sender's data location to a receive buffer by remote direct memory access (“RDMA”). | 03-05-2015 |
20150067067 | Data Communications In A Distributed Computing Environment - Data communications may be carried out in a distributed computing environment that includes a plurality of computers coupled for data communications through communications adapters and an active messaging interface (‘AMI’). In such an environment, data communications may include: issuing, by a sender to a receiver, an eager SEND data communications instruction to transfer SEND data, the instruction including information describing a location and size of a send buffer in which the SEND data is stored; transmitting, by the sender to the receiver, the SEND data as eager data packets; issuing, by the receiver to the sender in dependence upon data flow conditions, a STOP instruction, the STOP instruction including an order to stop transmitting the eager data packets; and transferring the SEND data by the receiver from the sender's data location to a receive buffer by remote direct memory access (“RDMA”). | 03-05-2015 |
20150067068 | Data Communications In A Distributed Computing Environment - Data communications may be carried out in a distributed computing environment that includes a plurality of computers coupled for data communications through communications adapters and an active messaging interface (‘AMI’). In such a distributed computing environment, data communications may include: receiving in the AMI from an application an eager SEND instruction that describes the location and size of send data in an application SEND buffer; copying, by the AMI, the send data from the application SEND buffer to a temporary AMI buffer; advising the application of completion of the SEND instruction before sending the SEND data to the receiver; and after advising the application of completion of the SEND instruction, sending the SEND data by the sender to the receiver. | 03-05-2015 |
20150081862 | ADMINISTERING GROUP IDENTIFIERS OF PROCESSES IN A PARALLEL COMPUTER - Administering group identifiers of processes in a parallel computer includes each process in a set of processes receiving, from a compute node of a plurality of compute nodes, a request to establish the set of processes as an operational group, including receiving a list of process identifiers for each process of the set of processes. Embodiments also include each process generating, without communication amongst the processes, a unique group identifier in dependence upon the list of process identifiers. | 03-19-2015 |
20150081985 | ADMINISTERING INTER-CORE COMMUNICATION VIA SHARED MEMORY - Administering inter-core communication via shared memory may be carried out in a system in which each core is associated with a mailbox in a shared memory region. Such administration may include: constructing a mailbox latency table describing the latency of writing data from each core to each mailbox; constructing a locking latency table describing the latency of each core in acquiring a lock for each of the mailboxes; identifying, from the tables, groups of cores having mailbox and locking latency within a predefined range of acceptable latency values; and, for each identified group of cores, establishing, for every pair of cores in the group of cores, a private channel, including pinning, for each private channel established for a pair of cores, one local memory segment per core. | 03-19-2015 |
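The two timestamp-based "Collective Operation Management" abstracts above (20140258417, 20140258538) describe a parent node stamping each child's completion of a first collective and managing a second collective from those stamps. The sketch below is one plausible policy only, not the patented method: arrival times are recorded and the slowest children are serviced first in the follow-on operation; the ParentNode class and the slowest-first ordering are assumptions for illustration.

```python
# Hypothetical timestamp-driven ordering of a second collective operation.
import time

class ParentNode:
    def __init__(self):
        self.completion_times = {}   # child rank -> arrival timestamp

    def receive_result(self, child_rank, result):
        # Store a timestamp for the child's completion of the first collective.
        self.completion_times[child_rank] = time.monotonic()

    def order_for_second_collective(self, children):
        # Service children in descending arrival time (latest finisher first).
        return sorted(children,
                      key=lambda c: self.completion_times.get(c, 0.0),
                      reverse=True)

if __name__ == "__main__":
    parent = ParentNode()
    for child in (2, 0, 1):                  # results arrive in this order
        parent.receive_result(child, result=None)
    print(parent.order_for_second_collective([0, 1, 2]))  # latest arrival first
```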
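The profiling-and-selection flow of 20140281723 maps naturally onto a small data structure. The sketch below is an illustration under assumed names only: a PROFILE table keyed by (operation, geometry, message size, algorithm) stands in for the stored performance profile, geometry filtering happens at load time, and the fastest profiled algorithm is chosen at run time.

```python
# Illustrative profile-driven algorithm selection (hypothetical data layout).

# Offline profiling results: (operation, geometry, msg_size, algorithm) -> seconds
PROFILE = {
    ("allreduce", (4, 4, 2), 1024, "binomial-tree"): 1.8e-5,
    ("allreduce", (4, 4, 2), 1024, "recursive-doubling"): 1.2e-5,
    ("allreduce", (8, 8, 8), 1024, "binomial-tree"): 4.1e-5,
    ("broadcast", (4, 4, 2), 4096, "pipeline"): 2.3e-5,
}

def filter_by_geometry(profile, geometry):
    """Load-time step: keep only entries measured for this job's geometry."""
    return {k: v for k, v in profile.items() if k[1] == geometry}

def select_algorithm(filtered, operation, msg_size):
    """Run-time step: pick the fastest profiled algorithm for this call."""
    candidates = [(v, k[3]) for k, v in filtered.items()
                  if k[0] == operation and k[2] == msg_size]
    if not candidates:
        return "default"          # fall back when nothing was profiled
    return min(candidates)[1]

if __name__ == "__main__":
    at_load_time = filter_by_geometry(PROFILE, (4, 4, 2))
    print(select_algorithm(at_load_time, "allreduce", 1024))  # recursive-doubling
```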
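For the checkpoint image compression of 20150055889, the essential idea is storing one base template image plus per-task differences rather than every task's image. In the minimal sketch below, a plain dictionary of (offset, byte) deltas stands in for the patent's binary radix tree, and the images are toy byte strings of equal length; this is an illustration, not the described implementation.

```python
# Template-plus-differences checkpointing (dict stands in for the radix tree).

def diff(base: bytes, image: bytes):
    """Record (offset, byte) pairs where an image departs from the base."""
    return [(i, b) for i, (a, b) in enumerate(zip(base, image)) if a != b]

def checkpoint(images):
    base = images[0]                       # one image chosen as the template
    deltas = {rank: diff(base, img) for rank, img in enumerate(images)}
    return base, deltas                    # store this instead of every image

def restore(base: bytes, delta):
    img = bytearray(base)
    for offset, value in delta:
        img[offset] = value
    return bytes(img)

if __name__ == "__main__":
    imgs = [b"taskimage-0000", b"taskimage-0007", b"taskimage-0042"]
    base, deltas = checkpoint(imgs)
    assert all(restore(base, deltas[r]) == imgs[r] for r in range(len(imgs)))
```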
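The three "Data Communications In A Distributed Computing Environment" abstracts (20150063100, 20150067067, 20150067068) all revolve around an eager SEND that the receiver may abandon in favor of an RDMA pull when data flow conditions demand it. The receiver-side decision is sketched below under stated assumptions: rdma_read is a hypothetical stand-in for the adapter's RDMA-read operation, and the descriptor dictionary stands in for the data location and size carried by the eager SEND instruction.

```python
# Receiver-side choice between assembling eager packets and an RDMA pull.

def rdma_read(sender_addr, size):
    """Hypothetical placeholder for an adapter-level RDMA read of `size` bytes."""
    return b"\x00" * size

def receive_eager_send(descriptor, eager_packets, flow_pressure_high):
    """descriptor carries the sender-side data location and size."""
    if flow_pressure_high:
        # Discard eager packets as they arrive; fetch the data once by RDMA.
        for _ in eager_packets:
            pass
        return rdma_read(descriptor["addr"], descriptor["size"])
    # Otherwise assemble the message from the eager packets themselves.
    return b"".join(eager_packets)

if __name__ == "__main__":
    desc = {"addr": 0x7F000000, "size": 8}
    pkts = [b"abcd", b"efgh"]
    print(receive_eager_send(desc, pkts, flow_pressure_high=False))  # b'abcdefgh'
```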
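For 20150081862, every process must arrive at the same group identifier from the same list of process identifiers without any message exchange. Hashing a canonical (sorted) form of the list, as sketched below, is one deterministic function with that property; the abstract does not say which function the embodiments actually use.

```python
# Deriving a group identifier with no inter-process communication.
# The sorted-list SHA-256 hash is an assumed choice, not taken from the abstract.
import hashlib

def group_identifier(process_ids):
    canonical = ",".join(str(pid) for pid in sorted(process_ids))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Every process holds the same list, so every process computes the same id.
assert group_identifier([12, 7, 3]) == group_identifier([3, 12, 7])
```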
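The latency tables of 20150081985 suggest a simple grouping step: cores whose mailbox and locking latencies to one another all fall inside an acceptable range are grouped, and each pair in a group gets a private channel. The sketch below uses made-up latency numbers and a greedy grouping; the pinning of local memory segments is only represented by the returned channel pairs, and none of the names here come from the abstract.

```python
# Grouping cores by mailbox/locking latency and listing pairwise channels.

MAILBOX_LATENCY = {(0, 1): 40, (1, 0): 42, (0, 2): 95, (2, 0): 90,
                   (1, 2): 93, (2, 1): 96, (2, 3): 38, (3, 2): 41,
                   (0, 3): 97, (3, 0): 99, (1, 3): 94, (3, 1): 92}
LOCK_LATENCY = dict(MAILBOX_LATENCY)   # same shape; a separate table in practice

def within_range(a, b, limit):
    return (MAILBOX_LATENCY[(a, b)] <= limit and MAILBOX_LATENCY[(b, a)] <= limit
            and LOCK_LATENCY[(a, b)] <= limit and LOCK_LATENCY[(b, a)] <= limit)

def group_cores(cores, limit):
    """Greedily place each core into the first group whose members are all 'close'."""
    groups = []
    for c in cores:
        for g in groups:
            if all(within_range(c, member, limit) for member in g):
                g.append(c)
                break
        else:
            groups.append([c])
    return groups

def private_channels(group):
    """One private channel per pair of cores in a group."""
    return [(a, b) for i, a in enumerate(group) for b in group[i + 1:]]

if __name__ == "__main__":
    groups = group_cores([0, 1, 2, 3], limit=50)
    print(groups)                                 # [[0, 1], [2, 3]]
    print([private_channels(g) for g in groups])  # [[(0, 1)], [(2, 3)]]
```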