Patent application number | Description | Published |
20080229157 | Performing Externally Assisted Calls in a Heterogeneous Processing Complex - A mechanism is provided for accessing, by an application running on a first processor, operating system services from an operating system running on a second processor by performing an assisted call. A data plane processor first constructs a parameter area based on the input and output parameters for the function that requires control processor assistance. The current values for the input parameters are copied into the parameter area. An assisted call message is generated based on a combination of a pointer to the parameter area and a specific library function opcode for the library function that is being called. The assisted call message is placed into the processor's stack immediately following a stop-and-signal instruction. The control plane processor is signaled to perform the library function corresponding to the opcode on behalf of the data plane processor by executing the stop-and-signal instruction. (A C sketch of this calling flow appears after this table.) | 09-18-2008 |
20080229295 | Ensuring Maximum Code Motion of Accesses to DMA Buffers - A “kill” intrinsic is provided that programs may use to designate specific data objects as having been “killed” by a preceding action. The concept of a data object being “killed” is that the compiler is informed that no operations (e.g., loads and stores) on that data object, or its aliases, can be moved across the point in the program flow where the data object is designated as having been “killed.” The “kill” intrinsic limits the reordering capability of an optimization scheduler of a compiler with regard to operations performed on “killed” data objects. The “kill” intrinsic may be used with DMA operations. Data objects being DMA'ed from a local store of a processor may be “killed” through use of the “kill” intrinsic prior to submitting the DMA request. Data objects being DMA'ed to the local store of the processor may be “killed” after verifying the transfer completes. (A barrier-based sketch of this idea appears after this table.) | 09-18-2008 |
20090037620 | Apparatus and Method for Efficient Communication of Producer/Consumer Buffer Status - An apparatus and method for efficient communication of producer/consumer buffer status are provided. With the apparatus and method, devices in a data processing system notify each other of updates to head and tail pointers of a shared buffer region when the devices perform operations on the shared buffer region, using signal notification channels of the devices. Thus, when a producer device writes data to the shared buffer region, an update to the head pointer is written to a signal notification channel of the consumer device. When a consumer device reads data from the shared buffer region, the consumer device writes a tail pointer update to a signal notification channel of the producer device. In addition, channels may operate in a blocking mode so that the corresponding device is kept in a low power state until an update is received over the channel. (A ring-buffer sketch of this protocol appears after this table.) | 02-05-2009 |
20090037653 | METHODS AND ARRANGEMENTS FOR MULTI-BUFFERING DATA - Embodiments may comprise logic such as hardware and/or code within a heterogeneous multi-core processor or the like to coordinate reading from and writing to buffers substantially simultaneously. Many embodiments include multi-buffering logic for implementing a procedure for a processing unit of a specialized processing element. The multi-buffering logic may instruct a direct memory access controller of the specialized processing element to read data from some memory location and store the data in a first buffer. The specialized processing element can then process data in a second buffer and, thereafter, the multi-buffering logic can block read access to the first buffer until the direct memory access controller indicates that the read from the memory location is complete. In such embodiments, the multi-buffering logic may then instruct the direct memory access controller to write the processed data to other memory. (A double-buffering sketch appears after this table.) | 02-05-2009 |
20090292758 | Optimized Corner Turns for Local Storage and Bandwidth Reduction - A block matrix multiplication mechanism is provided for reversing the visitation order of blocks at corner turns when performing a block matrix multiplication operation in a data processing system. By reversing the visitation order, the mechanism eliminates a block load at the corner turns. In accordance with the illustrative embodiment, such a corner turn is referred to as a “bounce” corner turn and results in a serpentine patterned processing order of the matrix blocks. The mechanism allows the data processing system to perform a block matrix multiplication operation with a maximum of three block transfers per time step. Therefore, the mechanism reduces the maximum required memory bandwidth and increases performance. In addition, the mechanism also reduces the number of multi-buffered local store buffers. (A serpentine-order sketch covering this and the two related applications below appears after this table.) | 11-26-2009 |
20090300091 | Reducing Bandwidth Requirements for Matrix Multiplication - A block matrix multiplication mechanism is provided for reversing the visitation order of blocks at corner turns when performing a block matrix multiplication operation in a data processing system. The mechanism increases block size and divides each block into sub-blocks. By reversing the visitation order, the mechanism eliminates a sub-block load at the corner turns. The mechanism performs sub-block matrix multiplication for each sub-block in a given block, and then repeats the operation for the next block until all blocks are computed. The mechanism may determine block size and sub-block size to optimize load balancing and memory bandwidth. Therefore, the mechanism reduces the maximum required memory bandwidth and increases performance. In addition, the mechanism also reduces the number of multi-buffered local store buffers. | 12-03-2009 |
20120203816 | Optimized Corner Turns for Local Storage and Bandwidth Reduction - A block matrix multiplication mechanism is provided for reversing the visitation order of blocks at corner turns when performing a block matrix multiplication operation in a data processing system. By reversing the visitation order, the mechanism eliminates a block load at the corner turns. In accordance with the illustrative embodiment, such a corner turn is referred to as a “bounce” corner turn and results in a serpentine patterned processing order of the matrix blocks. The mechanism allows the data processing system to perform a block matrix multiplication operation with a maximum of three block transfers per time step. Therefore, the mechanism reduces the maximum required memory bandwidth and increases performance. In addition, the mechanism also reduces the number of multi-buffered local store buffers. | 08-09-2012 |
20120317372 | Efficient Communication of Producer/Consumer Buffer Status - A mechanism is provided for efficient communication of producer/consumer buffer status. With the mechanism, devices in a data processing system notify each other of updates to head and tail pointers of a shared buffer region when the devices perform operations on the shared buffer region, using signal notification channels of the devices. Thus, when a producer device writes data to the shared buffer region, an update to the head pointer is written to a signal notification channel of the consumer device. When a consumer device reads data from the shared buffer region, the consumer device writes a tail pointer update to a signal notification channel of the producer device. In addition, channels may operate in a blocking mode so that the corresponding device is kept in a low power state until an update is received over the channel. | 12-13-2012 |
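The assisted-call flow in 20080229157 can be pictured in plain C. The sketch below only illustrates the described steps (build a parameter area, copy in the inputs, combine a pointer to that area with a library-function opcode, then stop and signal); the opcode value, the struct layout, and the `stop_and_signal()` stub are invented for the example, and the stub does not actually dispatch anything to a control processor.

```c
/* Illustrative only: an "assisted call" from a data-plane processor.
 * The accelerator packs arguments into a parameter area, combines a pointer
 * to that area with a library-function opcode, and then stops so the
 * control-plane processor can service the request. All names, the opcode,
 * and stop_and_signal() are invented; the stub services nothing. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define OPCODE_FOPEN 0x2A              /* invented opcode for an fopen() service */

/* Parameter area: input values first, then space for the output. */
struct fopen_params {
    char     path[64];                 /* input: file name */
    char     mode[8];                  /* input: open mode */
    uint64_t result_handle;            /* output: written by the control processor */
};

/* Stand-in for a stop-and-signal instruction: on real hardware this halts the
 * data-plane processor and raises an event on the control-plane processor. */
static void stop_and_signal(uint32_t message)
{
    printf("stop-and-signal raised, message = 0x%08x\n", message);
}

static uint64_t assisted_fopen(const char *path, const char *mode)
{
    static struct fopen_params params; /* parameter area in local storage */

    /* 1. Copy the current input values into the parameter area. */
    strncpy(params.path, path, sizeof params.path - 1);
    strncpy(params.mode, mode, sizeof params.mode - 1);

    /* 2. Build the assisted-call message: pointer to the parameter area
     *    combined with the opcode of the requested library function.    */
    uint32_t message = ((uint32_t)(uintptr_t)&params & 0x00FFFFFFu)
                     | ((uint32_t)OPCODE_FOPEN << 24);

    /* 3. Stop and signal the control processor, which would perform the call
     *    and write the result back into the parameter area.                 */
    stop_and_signal(message);

    return params.result_handle;       /* stays 0 here: nothing serviced the call */
}

int main(void)
{
    uint64_t h = assisted_fopen("data.bin", "rb");
    printf("handle returned by control processor: %llu\n", (unsigned long long)h);
    return 0;
}
```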
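For 20080229295, the effect of “killing” a buffer around a DMA request can be approximated in ordinary C with a compiler barrier. The sketch below is not the patented intrinsic: `KILL()` here is a GCC/Clang-style full memory barrier (broader than a per-object kill), and `dma_put()`/`dma_wait()` are empty stand-ins for a real DMA interface.

```c
/* Illustrative only: limiting code motion around DMA with a compiler barrier.
 * KILL() is a GCC/Clang-style barrier used as a stand-in for a per-object
 * "kill" intrinsic; dma_put()/dma_wait() are empty stubs, not a real DMA API. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define KILL(obj) __asm__ __volatile__("" : : "r" (&(obj)) : "memory")

static uint8_t local_buffer[256];

static void dma_put(const void *src, size_t len) { (void)src; (void)len; } /* queue a transfer (stub) */
static void dma_wait(void)                       { }                       /* wait for completion (stub) */

int main(void)
{
    memset(local_buffer, 0xAB, sizeof local_buffer);

    /* "Kill" the buffer before submitting the outbound DMA so the compiler
     * moves no loads or stores on local_buffer across this point.         */
    KILL(local_buffer);
    dma_put(local_buffer, sizeof local_buffer);
    dma_wait();

    /* Reuse the buffer only after the transfer is known to be complete. */
    local_buffer[0] = 0x00;
    printf("first byte now %d\n", local_buffer[0]);
    return 0;
}
```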
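The head/tail notification scheme of 20090037620 (and its later publication 20120317372) reduces to: whichever side touches the shared buffer writes its updated pointer into the other side's signal notification channel. The single-threaded sketch below models the channels as plain variables; the names, ring size, and simplified flow are assumptions for illustration, and the blocking/low-power mode is omitted.

```c
/* Illustrative only: head/tail pointer notification between a producer and a
 * consumer sharing a ring buffer. The signal notification channels are plain
 * variables here and everything runs in one thread; a real device pair could
 * additionally block in a low-power state until its channel is written. */
#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 8

static int      ring[RING_SIZE];   /* shared buffer region                    */
static uint32_t consumer_channel;  /* receives head updates from the producer */
static uint32_t producer_channel;  /* receives tail updates from the consumer */
static uint32_t head, tail;        /* pointers held locally by each device    */

static void produce(int value)
{
    ring[head % RING_SIZE] = value;      /* write data into the shared region   */
    head++;
    consumer_channel = head;             /* notify the consumer of the new head */
}

static int consume(void)
{
    int value = ring[tail % RING_SIZE];  /* read data from the shared region    */
    tail++;
    producer_channel = tail;             /* notify the producer of the new tail */
    return value;
}

int main(void)
{
    for (int i = 0; i < 5; i++)
        produce(i * 10);
    while (tail < consumer_channel)      /* consumer drains up to the signalled head */
        printf("consumed %d\n", consume());
    printf("producer now sees tail = %u\n", producer_channel);
    return 0;
}
```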
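A conventional double-buffering loop makes the procedure in 20090037653 concrete: fetch the next chunk into one buffer while processing the other, and block on the in-flight transfer only when that buffer is needed. The `dma_get()`/`dma_wait()` functions below are synchronous stubs standing in for an asynchronous DMA controller, so the compute/transfer overlap exists only in the comments.

```c
/* Illustrative only: a double-buffered processing loop with stubbed DMA. */
#include <stdio.h>
#include <string.h>

#define CHUNK 4
#define TOTAL 16

static float source[TOTAL];                /* data sitting in "main memory" */
static float buffer[2][CHUNK];             /* two local buffers             */

static void dma_get(float *dst, const float *src, int n)   /* start a transfer */
{
    memcpy(dst, src, (size_t)n * sizeof *dst);              /* (completes immediately here) */
}
static void dma_wait(void) { }             /* wait on the transfer that filled buffer[cur] */

static void process(const float *buf, int n)
{
    float sum = 0.0f;
    for (int i = 0; i < n; i++)
        sum += buf[i];
    printf("chunk sum = %g\n", sum);
}

int main(void)
{
    for (int i = 0; i < TOTAL; i++)
        source[i] = (float)i;

    int cur = 0;
    dma_get(buffer[cur], &source[0], CHUNK);            /* prefetch the first chunk */
    for (int off = CHUNK; off <= TOTAL; off += CHUNK) {
        int next = cur ^ 1;
        if (off < TOTAL)
            dma_get(buffer[next], &source[off], CHUNK); /* fetch ahead into the other buffer */
        dma_wait();                                     /* block until buffer[cur] is ready  */
        process(buffer[cur], CHUNK);                    /* work on the ready buffer          */
        cur = next;                                     /* swap roles and continue           */
    }
    return 0;
}
```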
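The serpentine (“bounce”) visitation order described in 20090292758 and 20120203816, and refined with sub-blocks in 20090300091, can be shown with a toy block matrix multiply. The sketch below only demonstrates the traversal pattern: the inner block index alternates direction on successive passes so the block resident at each corner turn is the first one needed by the following pass. The block sizes are arbitrary and no DMA or local-store management is modelled.

```c
/* Illustrative only: serpentine ("bounce") visitation order in a toy block
 * matrix multiply. Sizes are arbitrary; no data movement is modelled. */
#include <stdio.h>

#define NB 3                 /* matrix is NB x NB blocks       */
#define BS 2                 /* each block is BS x BS elements */
#define N  (NB * BS)

static double A[N][N], B[N][N], C[N][N];

/* One block multiply-accumulate: C[bi][bj] += A[bi][bk] * B[bk][bj]. */
static void block_mac(int bi, int bj, int bk)
{
    for (int i = 0; i < BS; i++)
        for (int j = 0; j < BS; j++)
            for (int k = 0; k < BS; k++)
                C[bi*BS + i][bj*BS + j] +=
                    A[bi*BS + i][bk*BS + k] * B[bk*BS + k][bj*BS + j];
}

int main(void)
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) { A[i][j] = i + 1; B[i][j] = j + 1; }

    for (int bi = 0; bi < NB; bi++) {
        for (int bj = 0; bj < NB; bj++) {
            /* Serpentine inner index: even passes walk bk upward, odd passes
             * walk it downward, so consecutive passes meet at the same bk and
             * the block held at the "corner" does not have to be reloaded.   */
            int down = (bi * NB + bj) % 2;
            for (int t = 0; t < NB; t++) {
                int bk = down ? NB - 1 - t : t;
                block_mac(bi, bj, bk);
            }
        }
    }
    printf("C[0][0] = %g (expected %g)\n", C[0][0], (double)N);
    return 0;
}
```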
Patent application number | Description | Published |
20080209127 | System and method for efficient implementation of software-managed cache - A system and method for an efficient implementation of a software-managed cache is presented. When an application thread executes on a simple processor, the application thread uses a conditional data select instruction for eliminating a conditional branch instruction when accessing a software-managed cache. The application thread issues a conditional data select instruction to size a DMA transfer after a cache directory lookup, wherein the size of the requested data depends upon the outcome of the lookup. When the cache directory lookup results in a cache hit, the application thread requests a transfer of zero bits of data, which results in a DMA controller (DMAC) performing a no-op instruction. When the cache directory lookup results in a cache miss, the application thread requests a data block transfer equal in size to the corresponding cache line. (A branch-free lookup sketch appears after this table.) | 08-28-2008 |
20080250414 | Dynamically Partitioning Processing Across A Plurality of Heterogeneous Processors - A program is compiled into at least two object files: one object file for each of the supported processor environments. During compilation, code characteristics, such as data locality, computational intensity, and data parallelism, are analyzed and recorded in the object file. During run time, the code characteristics are combined with runtime considerations, such as the current load on the processors and the size of the data being processed, to arrive at an overall value. The overall value is then used to determine which of the processors will be assigned the task. The values are assigned based on the characteristics of the various processors. For example, if one processor is better at handling intensive computations against large streams of data, programs that are highly computationally intensive and process large quantities of data are weighted in favor of that processor. The corresponding object file is then loaded and executed on the assigned processor. (A scoring sketch appears after this table.) | 10-09-2008 |
20090138689 | Partitioning Processor Resources Based on Memory Usage - Processor resources are partitioned based on memory usage. A compiler determines the extent to which a process is memory-bound and accordingly divides the process into a number of threads. When a first thread encounters a prolonged instruction, the compiler inserts a conditional branch to a second thread. When the second thread encounters a prolonged instruction, a conditional branch to a third thread is executed. This continues until the last thread conditionally branches back to the first thread. An indirect segmented register file is used so that the “return to” and “branch to” logical registers within each thread are the same (e.g., R | 05-28-2009 |
20090327728 | Methods for Supplying Cryptographic Algorithm Constants to a Storage-Constrained Target - The present invention provides for authenticating a message. A security function is performed upon the message. The message is sent to a target. The output of the security function is sent to the target. At least one publicly known constant is sent to the target. The received message is authenticated as a function of at least a shared key, the received publicly known constants, the security function, the received message, and the output of the security function. If the output of the security function received by the target is the same as the output generated as a function of at least the received message, the received publicly known constants, the security function, and the shared key, neither the message nor the constants have been altered. | 12-31-2009 |
20120096278 | Authenticating Messages Using Cryptographic Algorithm Constants Supplied to a Storage-Constrained Target - The present invention provides for authenticating a message. A security function is performed upon the message. The message is sent to a target. The output of the security function is sent to the target. At least one publicly known constant is sent to the target. The received message is authenticated as a function of at least a shared key, the received publicly known constants, the security function, the received message, and the output of the security function. If the output of the security function received by the target is the same as the output generated as a function of at least the received message, the received publicly known constants, the security function, and the shared key, neither the message nor the constants have been altered. | 04-19-2012 |
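For 20080209127, the key point is that the hit/miss decision becomes a data-dependent transfer size rather than a branch: 0 bytes on a hit (a no-op for the DMA controller), a full line on a miss. The directory layout, direct-mapped indexing, and `dma_get()` stub below are assumptions made only to keep the sketch self-contained.

```c
/* Illustrative only: a branch-free software-cache lookup. The transfer size is
 * selected conditionally (0 on a hit, a full line on a miss), so a hit turns
 * into a no-op DMA request. Layout and stubs are invented for the example. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define LINE_SIZE 128
#define NUM_LINES 64

static uint64_t directory[NUM_LINES];                       /* cache directory: tag per line */
static uint8_t  lines[NUM_LINES][LINE_SIZE];                /* cached lines in local storage */
static uint8_t  backing_store[NUM_LINES * LINE_SIZE * 4];   /* stand-in for main memory      */

/* Stub DMA transfer; a zero-byte request behaves as a no-op. */
static void dma_get(void *dst, const void *src, uint32_t size)
{
    if (size)
        memcpy(dst, src, size);
}

static uint8_t *cache_lookup(uint64_t addr)
{
    uint64_t tag  = addr / LINE_SIZE;
    uint32_t slot = (uint32_t)(tag % NUM_LINES);

    /* Conditional data select in place of a branch: the requested size is
     * 0 on a hit and LINE_SIZE on a miss (a compiler may lower ?: to a
     * select instruction rather than a conditional branch).              */
    uint32_t size = (directory[slot] != tag) ? LINE_SIZE : 0;

    dma_get(lines[slot], &backing_store[tag * LINE_SIZE], size);
    directory[slot] = tag;                                  /* refresh directory entry */
    return &lines[slot][addr % LINE_SIZE];
}

int main(void)
{
    backing_store[5000] = 42;
    printf("first access:  %d\n", *cache_lookup(5000));     /* miss: a full line is fetched */
    printf("second access: %d\n", *cache_lookup(5000));     /* hit: zero-byte no-op request */
    return 0;
}
```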
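One way to read 20080250414 is as a scoring problem: static characteristics recorded at compile time are combined with runtime factors into one value per processor type, and the task goes to the highest scorer. The weights, field names, and the linear combination in the sketch below are invented purely to illustrate that combination; the actual weighting scheme is not specified here.

```c
/* Illustrative only: combining compile-time code traits with runtime state
 * to pick a processor type. All weights and names are made up. */
#include <stdio.h>

struct code_traits {            /* as a compiler might record in an object file */
    double data_locality;       /* 0..1 */
    double compute_intensity;   /* 0..1 */
    double data_parallelism;    /* 0..1 */
};

struct runtime_state {          /* sampled at dispatch time */
    double load;                /* current utilisation, 0..1 */
    double data_mib;            /* size of the data to process */
};

/* Weight vector for one processor type; a streaming accelerator might weight
 * compute intensity and large data sets higher than a general-purpose core. */
struct weights { double locality, intensity, parallelism, size, idle; };

static double score(struct code_traits t, struct runtime_state r, struct weights w)
{
    return w.locality    * t.data_locality
         + w.intensity   * t.compute_intensity
         + w.parallelism * t.data_parallelism
         + w.size        * (r.data_mib / 1024.0)
         + w.idle        * (1.0 - r.load);
}

int main(void)
{
    struct code_traits t = { .data_locality = 0.9, .compute_intensity = 0.8,
                             .data_parallelism = 0.7 };
    struct runtime_state gp  = { .load = 0.2, .data_mib = 512 };
    struct runtime_state acc = { .load = 0.5, .data_mib = 512 };

    struct weights w_gp  = { 0.5, 0.3, 0.2, 0.1, 1.0 };  /* general-purpose core  */
    struct weights w_acc = { 0.4, 1.0, 1.0, 0.8, 1.0 };  /* streaming accelerator */

    double s_gp  = score(t, gp,  w_gp);
    double s_acc = score(t, acc, w_acc);
    printf("dispatch to %s (gp=%.2f, acc=%.2f)\n",
           s_acc > s_gp ? "accelerator" : "general-purpose core", s_gp, s_acc);
    return 0;
}
```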
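The verification flow in 20090327728 (and 20120096278) is: the target recomputes the security function over the received message, the received public constants, and the shared key, and accepts only on a match. The sketch below shows just that data flow; the “security function” is a trivial keyed mixer for illustration and is in no way a real MAC, and the constants and key values are made up.

```c
/* Illustrative only: verification data flow for a message sent with its
 * security-function output and publicly known constants. toy_mac() is NOT a
 * cryptographic MAC; it only demonstrates which inputs feed the check. */
#include <stdint.h>
#include <stdio.h>

static uint32_t toy_mac(const uint8_t *msg, size_t len,
                        const uint32_t *constants, size_t n_const,
                        uint32_t shared_key)
{
    uint32_t acc = shared_key;
    for (size_t i = 0; i < n_const; i++)       /* fold in the public constants */
        acc = (acc ^ constants[i]) * 2654435761u;
    for (size_t i = 0; i < len; i++)           /* fold in the message bytes    */
        acc = (acc ^ msg[i]) * 2654435761u;
    return acc;
}

int main(void)
{
    const uint32_t shared_key  = 0xC0FFEE42;                  /* known to both sides */
    const uint32_t constants[] = { 0x5A827999, 0x6ED9EBA1 };  /* sent publicly       */
    const uint8_t  message[]   = "update firmware to v2";

    /* Sender: compute the security function and transmit the message, the
     * output, and the constants (the key itself is never transmitted).    */
    uint32_t sent_output = toy_mac(message, sizeof message, constants, 2, shared_key);

    /* Target: recompute from what was received plus the shared key. */
    uint32_t recomputed = toy_mac(message, sizeof message, constants, 2, shared_key);

    if (recomputed == sent_output)
        printf("accepted: neither message nor constants were altered\n");
    else
        printf("rejected: message or constants were modified in transit\n");
    return 0;
}
```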