Patent application number | Description | Published |
20090299958 | REORDERING OF DATA ELEMENTS IN A DATA PARALLEL SYSTEM - A query that identifies an input data source is received. The input data source is partitioned into a plurality of partitions. Each of the partitions includes a set of data elements with an associated set of indices for indicating an ordering of the data elements. A query type for a query operator in the received query is identified. It is determined whether a reordering of data elements will be performed based on the identified query type. The data elements in at least one of the partitions are reordered when it is determined based on the identified query type that reordering will be performed. | 12-03-2009 |
20090299959 | QUERY RESULT GENERATION BASED ON QUERY CATEGORY AND DATA SOURCE CATEGORY - A method includes receiving a query that identifies an input data source. A query category for a query operator in the received query is identified. A data source category for the input data source is also identified. A results object is generated based on the identified query category and the identified data source category. The results object supports at least one of random access and sequential access to results produced by the query operator. | 12-03-2009 |
20090300766 | BLOCKING AND BOUNDING WRAPPER FOR THREAD-SAFE DATA COLLECTIONS - A membership interface provides procedure headings to add and remove elements of a data collection, without specifying the organizational structure of the data collection. A membership implementation associated with the membership interface provides thread-safe operations to implement the interface procedures. A blocking-bounding wrapper on the membership implementation provides blocking and bounding support separately from the thread-safety mechanism. | 12-03-2009 |
20090319992 | CONFIGURABLE PARTITIONING FOR PARALLEL DATA - A data partitioning interface provides procedure headings to create data partitions for processing data elements in parallel, and for obtaining data elements to process, without specifying the organizational structure of a data partitioning. A data partitioning implementation associated with the data partitioning interface provides operations to implement the interface procedures, and may also provide dynamic partitioning to facilitate load balancing. | 12-24-2009 |
20090320005 | CONTROLLING PARALLELIZATION OF RECURSION USING PLUGGABLE POLICIES - A parallelism policy object provides a control parallelism interface whose implementation evaluates parallelism conditions that are left unspecified in the interface. User-defined and other parallelism policy procedures can make recommendations to a worker program for transitioning between sequential program execution and parallel execution. Parallelizing assistance values obtained at runtime can be used in the parallelism conditions on which the recommendations are based. A consistent parallelization policy can be employed across a range of parallel constructs, and inside recursive procedures. | 12-24-2009 |
20100077384 | PARALLEL PROCESSING OF AN EXPRESSION - A method includes compiling an expression into executable code that is configured to create a data structure that represents the expression. The expression includes a plurality of sub-expressions. The code is executed to create the data structure. The data structure is evaluated using a plurality of concurrent threads, thereby processing the expression in a parallel manner. | 03-25-2010 |
20100162211 | PROVIDING ACCESS TO A DATASET IN A TYPE-SAFE MANNER - A method of providing access to a dataset in a type-safe manner includes storing a dataset including a plurality of data elements and a corresponding plurality of order keys for indicating an ordering of the data elements. Each order key is associated with one of the data elements. An interface to the dataset is generated that is parameterized by an element type parameter and a key type parameter. The interface is configured to provide access to the data elements and the order keys in the dataset in a type-safe manner. | 06-24-2010 |
20100250564 | TRANSLATING A COMPREHENSION INTO CODE FOR EXECUTION ON A SINGLE INSTRUCTION, MULTIPLE DATA (SIMD) EXECUTION - A method of translating a comprehension into executable code for execution on a SIMD (Single Instruction, Multiple Data) execution unit includes receiving a user-specified comprehension. The comprehension is compiled into a first set of executable code. An intermediate representation is generated based on the first set of executable code. The intermediate representation is translated into a second set of executable code that is configured to be executed by a SIMD execution unit. | 09-30-2010 |
20100250613 | QUERY PROCESSING USING ARRAYS - A method of processing a query includes receiving a language-integrated query including at least one operator, and operating on an input array with the at least one operator. An output array is generated by the at least one operator based on the operation on the input array. | 09-30-2010 |
20110125805 | GROUPING MECHANISM FOR MULTIPLE PROCESSOR CORE EXECUTION - A concurrent grouping operation for execution on a multiple core processor is provided. The grouping operation is provided with a sequence or set of elements. In one phase, each worker receives a partition of a sequence of elements to be grouped. The elements of each partition are arranged into a data structure that includes one or more keys, where each key corresponds to a value list of one or more of the received elements associated with that key. In another phase, the data structures created by each worker are merged so that the keys and corresponding elements for the entire sequence of elements exist in one data structure. Recursive merging can be completed in constant time, which is not proportional to the length of the sequence. | 05-26-2011 |
20110154359 | HASH PARTITIONING STREAMED DATA - The present invention extends to methods, systems, and computer program products for partitioning streaming data. Embodiments of the invention can be used to hash partition a stream of data, avoiding unnecessary memory usage (e.g., associated with buffering). Hash partitioning can be used to split an input sequence (e.g., a data stream) into multiple partitions that can be processed independently. Other embodiments of the invention can be used to hash repartition a plurality of streams of data. Hash repartitioning converts a set of partitions into another set of partitions with the hash-partitioned property. Partitioning and repartitioning can be done in a streaming manner at runtime by exchanging values between worker threads responsible for different partitions. | 06-23-2011 |
20110185358 | PARALLEL QUERY ENGINE WITH DYNAMIC NUMBER OF WORKERS - Partitioning query execution work of a sequence including a plurality of elements. A method includes a worker core requesting work from a work queue. In response, the worker core receives a task from the work queue. The task is a replicable sequence-processing task with two distinct steps: scheduling a copy of the task on the work queue and processing a sequence. The worker core processes the task by creating a replica of the task, placing the replica on the work queue, and beginning to process the sequence. These acts are repeated for one or more additional worker cores, each of which receives from the work queue a task replica placed there by an earlier performance of these acts on a different worker core. | 07-28-2011 |
20110208872 | DYNAMIC PARTITIONING OF DATA FOR DATA-PARALLEL APPLICATIONS - Dynamic data partitioning is disclosed for use with a multiple node processing system that consumes items from a data stream of any length, even when the length is undeclared. Dynamic data partitioning takes items from the data stream when a thread is idle and assigns the taken items to an idle thread, and it varies the size of the data chunks taken from the stream and assigned to a thread to efficiently distribute workloads among the nodes. In one example, data chunks taken from the beginning of the data stream are relatively smaller than data chunks taken toward the middle or end of the data stream. Dynamic data partitioning employs a growth function in which chunks are sized relative to single aligned cache lines, and it efficiently increases the chunk size by occasionally doubling the amount of data assigned to concurrent threads. | 08-25-2011 |
20110307905 | INDICATING PARALLEL OPERATIONS WITH USER-VISIBLE EVENTS - The present invention extends to methods, systems, and computer program products for indicating parallel operations with user-visible events. Event markers can be used to indicate an abstracted outer layer of execution as well as to expose internal specifics of parallel processing systems, including systems that provide data parallelism. Event markers can show a variety of execution characteristics, including higher-level markers to indicate the beginning and end of an execution program (e.g., a query). Inside the execution program (query), individual fork/join operations can be indicated with sub-levels of markers to expose their operations. Additional decisions made by an execution engine, such as when elements initially yield, when queries overlap or nest, when the query is cancelled, when the query bails to sequential operation, or when premature merging or re-partitioning is needed, can also be exposed. | 12-15-2011 |
20120066250 | CUSTOM OPERATORS FOR A PARALLEL QUERY ENGINE - Embodiments are directed to implementing custom operators in a query for a parallel query engine and to generating a partitioned representation of a sequence of query operators in a parallel query engine. A computer system receives a portion of partitioned input data at a parallel query engine, where the parallel query engine is configured to process data queries in parallel, and where the queries include a sequence of built-in operators. The computer system incorporates a custom operator into the sequence of built-in operators for a query and accesses the sequence of operators to determine how the partitioned input data is to be processed. The custom operator is accessed in the same manner as the built-in operators. The computer system also processes the sequence of operators including both the built-in operators and at least one custom operator according to the determination indicating how the data is to be processed. | 03-15-2012 |
20120066667 | SIMULATION ENVIRONMENT FOR DISTRIBUTED PROGRAMS - A scheduler receives a job graph which includes a graph of computational vertices that are designed to be executed on multiple distributed computer systems. The scheduler queries a graph manager to determine which computational vertices of the job graph are ready for execution in a local execution environment. The scheduler queries a cluster manager to determine the organizational topology of the distributed computer systems to simulate the determined topology in the local execution environment. The scheduler queries a data manager to determine data storage locations for each of the computational vertices indicated as being ready for execution in the local execution environment. The scheduler also indicates to a vertex spawner that an instance of each computational vertex is to be spawned in the local execution environment based on the organizational topology and indicated data storage locations, and indicates to the local execution environment that the spawned vertices are to be executed. | 03-15-2012 |
20130067160 | PRODUCER-CONSUMER DATA TRANSFER USING PIECEWISE CIRCULAR QUEUE - A method includes producing values with a producer thread, and providing a queue data structure including a first array of storage locations for storing the values. The first array has a first tail pointer and a first linking pointer. If a number of values stored in the first array is less than a capacity of the first array, an enqueue operation writes a new value at a storage location pointed to by the first tail pointer and advances the first tail pointer. If the number of values stored in the first array is equal to the capacity of the first array, a second array of storage locations is allocated in the queue. The second array has a second tail pointer. The first array is linked to the second array with the first linking pointer. An enqueue operation writes the new value at a storage location pointed to by the second tail pointer and advances the second tail pointer. | 03-14-2013 |
20140201717 | SIMULATION ENVIRONMENT FOR DISTRIBUTED PROGRAMS - A dataflow of a distributed application is visualized in a locally simulated execution environment. A scheduler receives a job graph which includes a graph of computational vertices that are designed to be executed on multiple distributed computer systems. The scheduler queries a graph manager to determine which computational vertices of the job graph are ready for execution in a local execution environment. The scheduler queries a cluster manager to determine the organizational topology of the distributed computer systems to simulate the determined topology in the local execution environment. The scheduler queries a data manager to determine data storage locations for each of the computational vertices indicated as being ready for execution in the local execution environment. The scheduler also indicates an instance of each computational vertex to be spawned and executed in the local execution environment based on the organizational topology and indicated data storage locations. | 07-17-2014 |
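
Several of the mechanisms listed above are easiest to see in code. The sketches that follow are hypothetical Java illustrations of the general ideas only; all class, method, and parameter names are invented for exposition and are not the patented implementations. For the blocking-and-bounding wrapper of application 20090300766, a minimal sketch layers blocking and bounding (here via semaphores, one possible choice) on top of a collection that is already thread-safe, keeping the two concerns separate:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Semaphore;

// Hypothetical sketch: blocking and bounding layered over an already
// thread-safe collection; not the patented implementation.
final class BoundedBlockingWrapper<T> {
    private final Queue<T> inner = new ConcurrentLinkedQueue<>(); // thread-safe membership implementation
    private final Semaphore freeSlots;                            // enforces the bound
    private final Semaphore usedSlots = new Semaphore(0);         // blocks consumers while empty

    BoundedBlockingWrapper(int capacity) {
        this.freeSlots = new Semaphore(capacity);
    }

    void add(T item) throws InterruptedException {
        freeSlots.acquire();   // block while the collection is full
        inner.add(item);       // delegate membership to the thread-safe collection
        usedSlots.release();
    }

    T take() throws InterruptedException {
        usedSlots.acquire();   // block while the collection is empty
        T item = inner.poll();
        freeSlots.release();
        return item;
    }
}
```

Because the bound and the blocking live entirely in the wrapper, the same wrapper could sit on any thread-safe collection that exposes add and remove operations.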
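
The configurable partitioning interface of application 20090319992 separates "how to split the data" from "how to consume it". A minimal sketch, assuming an in-memory list and a static range split (a dynamic, load-balancing implementation could instead hand out chunks from a shared cursor):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Hypothetical sketch of a partitioning interface; names are illustrative.
interface Partitioner<T> {
    List<Iterator<T>> getPartitions(int partitionCount);
}

// Static range partitioner: contiguous, roughly equal slices of a list.
final class RangePartitioner<T> implements Partitioner<T> {
    private final List<T> source;

    RangePartitioner(List<T> source) { this.source = source; }

    @Override
    public List<Iterator<T>> getPartitions(int partitionCount) {
        List<Iterator<T>> partitions = new ArrayList<>();
        int size = source.size();
        for (int p = 0; p < partitionCount; p++) {
            int from = (int) ((long) size * p / partitionCount);
            int to   = (int) ((long) size * (p + 1) / partitionCount);
            partitions.add(source.subList(from, to).iterator());
        }
        return partitions;
    }
}
```

A caller would typically hand each returned iterator to its own thread; nothing in the interface fixes how the implementation splits the data, which is the point of the abstraction.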
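
For the pluggable parallelism policies of application 20090320005, the essential idea is that a recursive worker consults a policy object rather than hard-coding when to fork. A minimal sketch with an invented depth-and-work heuristic:

```java
// Hypothetical sketch: a recursive worker asks a policy object whether to fork,
// instead of hard-coding the decision; the depth/work heuristic is illustrative.
interface ParallelismPolicy {
    boolean shouldGoParallel(int recursionDepth, long workEstimate);
}

final class DepthLimitedPolicy implements ParallelismPolicy {
    private final int maxParallelDepth;

    DepthLimitedPolicy(int maxParallelDepth) { this.maxParallelDepth = maxParallelDepth; }

    @Override
    public boolean shouldGoParallel(int recursionDepth, long workEstimate) {
        // Recommend parallel execution only near the top of the recursion and
        // only when the remaining work looks large enough to pay for a fork.
        return recursionDepth < maxParallelDepth && workEstimate > 1_000;
    }
}
```

The same policy object can then be consulted at every level of a recursive routine and across different parallel constructs, which is the consistency the abstract emphasizes.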
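
The type-safe dataset access of application 20100162211 hinges on parameterizing the interface by both the element type and the order-key type. A minimal sketch of such an interface (names are illustrative):

```java
import java.util.Iterator;

// Hypothetical sketch: access to elements and their order keys without casts,
// because both types are parameters of the interface.
interface OrderedDataset<TElement, TKey> {
    Iterator<KeyedElement<TElement, TKey>> elements();
}

final class KeyedElement<TElement, TKey> {
    final TKey orderKey;        // indicates this element's position in the ordering
    final TElement element;

    KeyedElement(TKey orderKey, TElement element) {
        this.orderKey = orderKey;
        this.element = element;
    }
}
```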
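
The two-phase grouping of application 20110125805 can be sketched as: each worker builds a local key-to-values map over its partition, then the local maps are merged. A minimal illustration using a fixed thread pool (the recursive, constant-time merge mentioned in the abstract is simplified here to a sequential merge):

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.function.Function;

// Hypothetical sketch of two-phase parallel grouping; not the patented mechanism.
final class ParallelGrouper {
    static <T, K> Map<K, List<T>> groupBy(List<T> items, Function<T, K> keyOf, int workers)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        List<Future<Map<K, List<T>>>> futures = new ArrayList<>();

        // Phase 1: each worker groups one contiguous partition into a local map.
        int size = items.size();
        for (int w = 0; w < workers; w++) {
            int from = (int) ((long) size * w / workers);
            int to   = (int) ((long) size * (w + 1) / workers);
            List<T> slice = items.subList(from, to);
            futures.add(pool.submit(() -> {
                Map<K, List<T>> local = new HashMap<>();
                for (T item : slice) {
                    local.computeIfAbsent(keyOf.apply(item), k -> new ArrayList<>()).add(item);
                }
                return local;
            }));
        }

        // Phase 2: merge the per-worker maps into a single result.
        Map<K, List<T>> merged = new HashMap<>();
        for (Future<Map<K, List<T>>> f : futures) {
            f.get().forEach((k, v) ->
                merged.computeIfAbsent(k, kk -> new ArrayList<>()).addAll(v));
        }
        pool.shutdown();
        return merged;
    }
}
```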
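
Hash partitioning as described in application 20110154359 routes each element to a partition as it arrives, based on a hash of its key, so the stream never has to be buffered in full. A minimal sketch using one queue per partition (the value exchange between worker threads described in the abstract is omitted):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Function;
import java.util.stream.Stream;

// Hypothetical sketch: route each streamed element to a partition by key hash.
final class HashPartitioner<T, K> {
    private final List<BlockingQueue<T>> partitions = new ArrayList<>();
    private final Function<T, K> keyOf;

    HashPartitioner(int partitionCount, Function<T, K> keyOf) {
        this.keyOf = keyOf;
        for (int i = 0; i < partitionCount; i++) {
            partitions.add(new LinkedBlockingQueue<>());
        }
    }

    // Consume the input stream, pushing each element to its hash partition
    // as it arrives, so the whole stream is never held in memory at once.
    void partition(Stream<T> input) throws InterruptedException {
        for (T element : (Iterable<T>) input::iterator) {
            int index = Math.floorMod(keyOf.apply(element).hashCode(), partitions.size());
            partitions.get(index).put(element);
        }
    }

    BlockingQueue<T> getPartition(int index) { return partitions.get(index); }
}
```

Consumers on separate threads can drain each queue independently, which is what makes the partitions independently processable.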
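
The dynamic partitioning of application 20110208872 gives idle workers small chunks at first and grows the chunk size as processing proceeds. A minimal sketch over an in-memory list, with an invented growth schedule (double every few grabs, capped) standing in for the cache-line-aligned growth function the abstract describes:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: idle workers grab chunks from a shared cursor and
// each worker's chunk size grows over time; the schedule is illustrative.
final class DynamicChunker<T> {
    private final List<T> source;
    private final AtomicInteger cursor = new AtomicInteger(0);

    DynamicChunker(List<T> source) { this.source = source; }

    // Each worker keeps its own cursor object so its chunk size grows independently.
    final class WorkerCursor {
        private int chunkSize = 1;
        private int grabsAtThisSize = 0;

        // Returns the next chunk for this worker, or an empty list when the input is exhausted.
        List<T> nextChunk() {
            int from = cursor.getAndAdd(chunkSize);
            if (from >= source.size()) {
                return List.of();
            }
            int to = Math.min(from + chunkSize, source.size());
            if (++grabsAtThisSize >= 4 && chunkSize < 4096) {
                chunkSize *= 2;        // occasionally double the chunk size
                grabsAtThisSize = 0;
            }
            return source.subList(from, to);
        }
    }

    WorkerCursor newWorkerCursor() { return new WorkerCursor(); }
}
```

A worker loop would call nextChunk() until it receives an empty list: small early chunks spread short inputs evenly across threads, while later, larger chunks amortize the shared-cursor synchronization.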
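
The piecewise circular queue of application 20130067160 stores values in fixed-size array segments and links in a fresh segment when the current one fills, so the queue grows without copying existing elements. A minimal single-threaded sketch (the producer-consumer synchronization and the circular reuse of segments described in the patent are omitted):

```java
// Hypothetical sketch of a piecewise queue built from linked array segments;
// the synchronization and segment reuse described in the patent are omitted.
final class PiecewiseQueue<T> {
    private static final int SEGMENT_CAPACITY = 32;

    private static final class Segment<T> {
        final Object[] slots = new Object[SEGMENT_CAPACITY];
        int tail = 0;        // next free slot in this segment
        Segment<T> next;     // segment allocated after this one filled up
    }

    private Segment<T> tailSegment = new Segment<>();
    private Segment<T> headSegment = tailSegment;
    private int head = 0;    // next slot to read in the head segment

    void enqueue(T value) {
        if (tailSegment.tail == SEGMENT_CAPACITY) {
            Segment<T> fresh = new Segment<>();  // current segment is full:
            tailSegment.next = fresh;            // allocate and link a new one,
            tailSegment = fresh;                 // copying nothing
        }
        tailSegment.slots[tailSegment.tail++] = value;
    }

    @SuppressWarnings("unchecked")
    T dequeue() {
        if (headSegment == tailSegment && head == headSegment.tail) {
            return null;                         // queue is empty
        }
        if (head == SEGMENT_CAPACITY) {          // head segment exhausted:
            headSegment = headSegment.next;      // follow the link
            head = 0;
        }
        return (T) headSegment.slots[head++];
    }
}
```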