Patent application number | Description | Published |
--- | --- | --- |
20080229413 | Authorizing Information Flows - A mechanism is provided for authorizing information flows between devices of a data processing system. In one illustrative embodiment, an information flow request is received from a first device to authorize an information flow from the first device to a second device. The information flow request includes an identifier of the second device. Based on the identifiers of the first device and the second device, security information identifying the authorization levels of the first device and second device is retrieved. A sensitivity of an information object that is to be transferred in the information flow is determined, and the information flow is authorized or denied based only on the sensitivity of the information object and the authorization levels of the first and second devices, regardless of the particular action being performed on the information object as part of the information flow. | 09-18-2008 |
20090119507 | Reference Monitor for Enforcing Information Flow Policies - A reference monitor that authorizes information flows between elements of a data processing system is provided. The elements of the data processing system are associated with security data structures in a reference monitor. An information flow request is received from a first element to authorize an information flow from the first element to a second element. A first security data structure associated with the first element and a second security data structure associated with the second element are retrieved. At least one set theory operation is then performed on the first security data structure and the second security data structure to determine if the information flow from the first element to the second element is to be authorized. The security data structures may be labelsets having one or more labels identifying security policies to be applied to information flows involving the associated element. | 05-07-2009 |
20110119642 | Simultaneous Photolithographic Mask and Target Optimization - A mechanism is provided for simultaneous photolithographic mask and target optimization (SMATO). A lithographic simulator generates an image of a mask shape on a wafer thereby forming one or more lithographic contours. A mask and target movement module analytically evaluates a direction for mask and target movement thereby forming a plurality of pairs of mask and target movements. The mask and target movement module identifies a best pair of mask and target movements from the plurality of mask and target movements that minimizes a weighted cost function. A shape adjustment module adjusts at least one of a target shape or the mask shape based on the best pair of mask and target movements. | 05-19-2011 |
20110302413 | Authorizing Information Flows Based on a Sensitivity of an Information Object - A system, apparatus, computer program product and method for authorizing information flows between devices of a data processing system are provided. In one illustrative embodiment, an information flow request is received from a first device to authorize an information flow from the first device to a second device. The information flow request includes an identifier of the second device. Based on the identifiers of the first device and the second device, security information identifying the authorization levels of the first device and second device is retrieved. A sensitivity of an information object that is to be transferred in the information flow is determined, and the information flow is authorized or denied based only on the sensitivity of the information object and the authorization levels of the first and second devices, regardless of the particular action being performed on the information object as part of the information flow. | 12-08-2011 |
20120227051 | Composite Contention Aware Task Scheduling - A mechanism is provided for composite contention aware task scheduling. The mechanism performs task scheduling with shared resources in computer systems. A task is a group of instructions. A compute task is a group of compute instructions. A memory task, also referred to as a communication task, may be a group of load/store operations, for example. The mechanism performs composite contention-aware scheduling that considers the interaction among compute tasks, communication tasks, and application threads that include compute and communication tasks. The mechanism performs a composite of memory task throttling and application thread throttling. | 09-06-2012 |
20120254603 | Run-Ahead Approximated Computations - Mechanisms are provided for performing approximate run-ahead computations. A first group of compute engines is selected to execute full computations on a full set of input data. A second group of compute engines is selected to execute computations on a sampled subset of the input data. A third group of compute engines is selected to compute a difference in computation results between first computation results generated by the first group of compute engines and second computation results generated by the second group of compute engines. The second group of compute engines is reconfigured based on the difference generated by the third group of compute engines. | 10-04-2012 |
20120254604 | Run-Ahead Approximated Computations - Mechanisms are provided for performing approximate run-ahead computations. A first group of compute engines is selected to execute full computations on a full set of input data. A second group of compute engines is selected to execute computations on a sampled subset of the input data. A third group of compute engines is selected to compute a difference in computation results between first computation results generated by the first group of compute engines and second computation results generated by the second group of compute engines. The second group of compute engines is reconfigured based on the difference generated by the third group of compute engines. | 10-04-2012 |
20120317582 | Composite Contention Aware Task Scheduling - A mechanism is provided for composite contention aware task scheduling. The mechanism performs task scheduling with shared resources in computer systems. A task is a group of instructions. A compute task is a group of compute instructions. A memory task, also referred to as a communication task, may be a group of load/store operations, for example. The mechanism performs composite contention-aware scheduling that considers the interaction among compute tasks, communication tasks, and application threads that include compute and communication tasks. The mechanism performs a composite of memory task throttling and application thread throttling. | 12-13-2012 |
20130147644 | High Bandwidth Decompression of Variable Length Encoded Data Streams - Mechanisms are provided for decoding a variable length encoded data stream. A decoder of a data processing system receives an input line of data. The input line of data is a portion of the variable length encoded data stream. The decoder determines an amount of bit spill over of the input line of data onto a next input line of data. The decoder aligns the input line of data to begin at a symbol boundary based on the determined amount of bit spill over. The decoder tokenizes the aligned input line of data to generate a set of tokens. Each token corresponds to an encoded symbol in the aligned input line of data. The decoder generates an output word of data based on the set of tokens. The output word of data corresponds to a word of data in the original set of data. | 06-13-2013 |
20130148745 | High Bandwidth Decompression of Variable Length Encoded Data Streams - Mechanisms are provided for decoding a variable length encoded data stream. A decoder of a data processing system receives an input line of data. The input line of data is a portion of the variable length encoded data stream. The decoder determines an amount of bit spill over of the input line of data onto a next input line of data. The decoder aligns the input line of data to begin at a symbol boundary based on the determined amount of bit spill over. The decoder tokenizes the aligned input line of data to generate a set of tokens. Each token corresponds to an encoded symbol in the aligned input line of data. The decoder generates an output word of data based on the set of tokens. The output word of data corresponds to a word of data in the original set of data. | 06-13-2013 |
20140049410 | SELECTIVE RECOMPRESSION OF A STRING COMPRESSED BY A PLURALITY OF DIVERSE LOSSLESS COMPRESSION TECHNIQUES - In response to receiving an input string to be compressed, a plurality of diverse lossless compression techniques are applied to the input string to obtain a plurality of compressed strings. The plurality of diverse lossless compression techniques include a template-based compression technique and a non-template-based compression technique. A most compressed string among the plurality of compressed strings is selected. A determination is made regarding whether or not the most compressed string was obtained by application of the template-based compression technique. In response to determining that the most compressed string was obtained by application of the template-based compression technique, the most compressed string is compressed utilizing the non-template-based compression technique to obtain an output string, which is then output. In response to determining that the most compressed string was not obtained by application of the template-based compression technique, the most compressed string is output as the output string. | 02-20-2014 |
20140049412 | DATA COMPRESSION UTILIZING LONGEST COMMON SUBSEQUENCE TEMPLATE - In response to receipt of an input string, an attempt is made to identify, in a template store, a closely matching template for use as a compression template. In response to identification of a closely matching template that can be used as a compression template, the input string is compressed into a compressed string by reference to a longest common subsequence compression template. Compressing the input string includes encoding, in a compressed string, an identifier of the compression template, encoding substrings of the input string not having commonality with the compression template of at least a predetermined length as literals, and encoding substrings of the input string having commonality with the compression template of at least the predetermined length as a jump distance without reference to a base location in the compression template. The compressed string is then output. | 02-20-2014 |
20140049413 | DATA COMPRESSION UTILIZING LONGEST COMMON SUBSEQUENCE TEMPLATE - In response to receipt of an input string, an attempt is made to identify, in a template store, a closely matching template for use as a compression template. In response to identification of a closely matching template that can be used as a compression template, the input string is compressed into a compressed string by reference to a longest common subsequence compression template. Compressing the input string includes encoding, in a compressed string, an identifier of the compression template, encoding substrings of the input string not having commonality with the compression template of at least a predetermined length as literals, and encoding substrings of the input string having commonality with the compression template of at least the predetermined length as a jump distance without reference to a base location in the compression template. The compressed string is then output. | 02-20-2014 |
20150066878 | Efficient Context Save/Restore During Hardware Decompression of DEFLATE Encoded Data - An approach is provided in which a hardware accelerator receives a request to decompress a data stream that includes multiple deflate blocks and multiple deflate elements compressed according to block-specific compression configuration information. The hardware accelerator identifies a commit point that is based upon an interruption of a first decompression session of the data stream and corresponds to one of the deflate blocks. As such, the hardware accelerator configures a decompression engine based upon the corresponding deflate block's configuration information and, in turn, recommences decompression of the data stream at an input bit location corresponding to the commit point. | 03-05-2015 |
20150089128 | Reordering the Output of Recirculated Transactions Within a Pipeline - A mechanism is provided for recirculating transactions within a pipeline while reordering outputs. A set of transactions associated with a block of data is received and each transaction in the set of transactions is processed via the pipeline. For each transaction processed via the pipeline, responsive to the transaction exiting the pipeline, a determination is made as to whether the transaction needs further processing. Responsive to the transaction needing further processing, the transaction is recirculated via the pipeline, forming a recirculated transaction. | 03-26-2015 |
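The reference-monitor abstract (application 20090119507) describes authorizing a flow by applying a set theory operation to the labelsets of the source and destination elements. A minimal sketch of that idea, assuming a subset test as the set operation and illustrative labelset contents (the patent does not specify either):

```python
# Hypothetical labelset reference monitor: each element carries a set of
# security labels, and a flow from src to dst is authorized only when the
# destination's labelset covers every label on the source. The subset rule
# and the example labelsets below are assumptions for illustration.

def authorize_flow(src_labels: set, dst_labels: set) -> bool:
    """Apply a set theory operation (subset test) to decide the flow."""
    return src_labels <= dst_labels  # dst must honor every policy on src

labelsets = {
    "sensor": {"confidential"},
    "logger": {"confidential", "audit"},
    "public_feed": set(),
}

# A confidential source may flow to an element that also carries that label,
# but not to one with an empty labelset.
assert authorize_flow(labelsets["sensor"], labelsets["logger"]) is True
assert authorize_flow(labelsets["sensor"], labelsets["public_feed"]) is False
```

Other set operations (e.g. a non-empty intersection) would yield a different policy; the subset check here models the common "no write down" intuition.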
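The selective-recompression abstract (application 20140049410) describes a selection step followed by a conditional second compression pass. A sketch of that control flow, using `zlib` as a stand-in for the non-template-based technique and a toy substitution scheme as a stand-in for the template-based one (both stand-ins are assumptions, not the patent's techniques):

```python
# Sketch of selective recompression: compress with diverse techniques,
# keep the shortest result, and if the winner was template-based,
# recompress it with the non-template technique before output.
import zlib

TEMPLATE = b"the quick brown fox jumps over the lazy dog"

def template_compress(data: bytes) -> bytes:
    # Toy template-based scheme: replace the known template with a 1-byte token.
    return data.replace(TEMPLATE, b"\x01")

def select_and_recompress(data: bytes) -> bytes:
    candidates = {
        "template": template_compress(data),
        "non_template": zlib.compress(data),
    }
    winner = min(candidates, key=lambda k: len(candidates[k]))
    if winner == "template":
        # A template-based result may still contain redundancy, so pass it
        # through the non-template compressor, as the abstract describes.
        return zlib.compress(candidates["template"])
    return candidates["non_template"]

sample = TEMPLATE * 4
output = select_and_recompress(sample)
assert len(output) < len(sample)
```

On template-heavy input the template technique wins and the result is recompressed; on input with no template matches, the `zlib` result is emitted directly.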