TERADATA US INC. Patent applications
Patent application number | Title | Published |
20160070763 | PARALLEL FREQUENT SEQUENTIAL PATTERN DETECTING - Techniques for parallel frequent sequential pattern detection are provided. A sequence database is split into separate datasets and each node is given a specific dataset to resolve the frequent items occurring in that dataset based on counts. Each node then groups its frequent items into varying "n"-length sequences representing sequential patterns present in the original sequence database. The nodes process in parallel with one another and collectively produce a complete set of the sequential patterns defined in the original sequence database. | 03-10-2016 |
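The abstract above describes a split-count-merge decomposition. A minimal sketch of the per-node counting and the collective merge step, in Python (function names such as `count_frequent` are illustrative, not from the patent):

```python
from collections import Counter

def count_frequent(dataset, min_support):
    """Each node counts the items in its own split of the sequence database."""
    counts = Counter(item for seq in dataset for item in set(seq))
    return {item: n for item, n in counts.items() if n >= min_support}

def merge_node_results(per_node_counts):
    """Collectively combine per-node frequent-item counts into one view."""
    total = Counter()
    for counts in per_node_counts:
        total.update(counts)
    return total

# Two "nodes", each holding its own split of the sequence database.
node_a = [["a", "b", "c"], ["a", "c"]]
node_b = [["b", "c"], ["a", "b", "c"]]
merged = merge_node_results([count_frequent(node_a, 1), count_frequent(node_b, 1)])
```

Grouping the surviving frequent items into longer "n"-length sequences would follow the same pattern, each node working only on its own split.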
20150331724 | WORKLOAD BALANCING TO HANDLE SKEWS FOR BIG DATA ANALYTICS - Data partitions are assigned to reducer tasks using a cost-based and workload balancing approach. At least one of the initial data partitions remains unassigned in an unassigned partitions pool. Each reducer while working on its assigned partitions makes dynamic run-time decisions as to whether to: reassign a partition to another reducer, accept a partition from another reducer, select a partition from the unassigned partitions pool, and/or reassign a partition back to the unassigned partitions pool. | 11-19-2015 |
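The cost-based initial assignment with a held-back unassigned pool can be sketched as a greedy balancer; this is an assumption about the assignment heuristic, not the patent's actual algorithm:

```python
def assign_partitions(partition_costs, num_reducers, keep_unassigned=1):
    """Greedy cost-based assignment: give each partition to the currently
    lightest reducer, holding back the smallest partitions in an unassigned
    pool that reducers can draw from at run time."""
    ordered = sorted(partition_costs.items(), key=lambda kv: kv[1], reverse=True)
    pool = dict(ordered[len(ordered) - keep_unassigned:])
    loads = [0.0] * num_reducers
    assignments = [[] for _ in range(num_reducers)]
    for part, cost in ordered[: len(ordered) - keep_unassigned]:
        r = loads.index(min(loads))   # lightest-loaded reducer so far
        assignments[r].append(part)
        loads[r] += cost
    return assignments, pool

assignments, pool = assign_partitions(
    {"p1": 9.0, "p2": 5.0, "p3": 4.0, "p4": 1.0}, num_reducers=2)
```

The run-time reassignment decisions the abstract describes would then move entries between `assignments` and `pool` as observed loads diverge from the estimates.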
20150310069 | METHODS AND SYSTEM TO PROCESS STREAMING DATA - Streaming data is populated to an in-memory data table and a continuous query is executed against the in-memory data table using a database interface to perform analytical operations on the populated table. Results from the analytical operations are streamed to consuming applications. | 10-29-2015 |
20140280274 | PROBABILISTIC RECORD LINKING - Probabilistic record linking methods and a system are provided. Selections are acquired that identify two data sources, column identifiers from each data source, pairs of column identifiers across the two data sources, and a confidence value for matching each record associated with each pair. The selections are used to compare data housed in the two data sources. Based on the comparison, matched records and non-matched records are identified from the two data sources. | 09-18-2014 |
20140280218 | TECHNIQUES FOR DATA INTEGRATION - Techniques for data integration are provided. Source attributes for source data are interactively mapped to target attributes for target data. Rules define how records from the source data are merged and selected, and how duplicate records are detected. The mappings and rules are recorded as a profile for the source data and processed against the source data to transform the source attributes to the target attributes. | 09-18-2014 |
20140280036 | TECHNIQUES FOR IMPROVING THE PERFORMANCE OF COMPLEX QUERIES - Techniques for improving complex database queries are provided. A determination is made whether to adopt a static or dynamic query execution plan based on metrics. When the dynamic query execution plan is used, a request fragment of the request is planned and the corresponding plan fragment is executed. The processed fragment provides feedback related to its processing to the remaining request and the process is repeated on the remaining request until the request is completed. | 09-18-2014 |
20140279972 | CLEANSING AND STANDARDIZING DATA - Data cleansing and standardization techniques are provided. A user interactively defines rules for cleansing and standardizing data of a source dataset. The rules are applied to the data and varying degrees of results and metrics associated with applying the rules are presented to the user for inspection and analysis. | 09-18-2014 |
20140279831 | DATA MODELING TECHNIQUES - Techniques for data modeling are provided. Enterprise data is organized into reference data for entities that an enterprise wants to track and monitor. Relationship data is created that establishes relationships among the various entities within the enterprise data. The reference data and the relationship data are published within an enterprise data warehouse for accessing the enterprise data. | 09-18-2014 |
20140222871 | TECHNIQUES FOR DATA ASSIGNMENT FROM AN EXTERNAL DISTRIBUTED FILE SYSTEM TO A DATABASE MANAGEMENT SYSTEM - Techniques for data assignment from an external distributed file system (DFS) to a database management system (DBMS) are provided. Data blocks from the DFS are represented as first nodes and access module processors of the DBMS are represented as second nodes. A graph is produced with the first and second nodes. Assignments are made for the first nodes to the second nodes based on evaluation of the graph to integrate the DFS with the DBMS. | 08-07-2014 |
20140222787 | TECHNIQUES FOR ACCESSING A PARALLEL DATABASE SYSTEM VIA EXTERNAL PROGRAMS USING VERTICAL AND/OR HORIZONTAL PARTITIONING - Techniques for accessing a parallel database system via an external program using vertical and/or horizontal partitioning are provided. An external program to a database management system (DBMS) configures external mappers to process a specific portion of query results on specific access module processors of the DBMS that are to house query results. The query is submitted by the external program to the DBMS and the DBMS is directed to organize the query results in a vertical or horizontal manner. Each external mapper accesses its portion of the query results for processing in parallel on its designated AMP or set of AMPs to process the query results. | 08-07-2014 |
20140188924 | TECHNIQUES FOR ORDERING PREDICATES IN COLUMN PARTITIONED DATABASES FOR QUERY OPTIMIZATION - Techniques for ordering predicates in column partitioned databases for query optimization. Predicates on a single CP table within a query are organized into predicate-CP nodes with various sets of column partitions. The predicates within each predicate-CP node, and the predicate-CP nodes as a whole, are ordered in ascending order of cost, which is determined by CPU/IO cost and predicate selectivity. The reorganized query is then executed. | 07-03-2014 |
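A minimal sketch of the ordering step; the cost model here (product of per-row CPU/IO cost and selectivity) is an assumption standing in for whatever estimator the optimizer actually uses:

```python
def order_predicates(predicates):
    """Order predicates in ascending order of estimated cost, combining a
    CPU/IO cost estimate with predicate selectivity (cheapest, most
    selective predicates evaluate first)."""
    return sorted(predicates, key=lambda p: p["cpu_io_cost"] * p["selectivity"])

ordered = order_predicates([
    {"name": "p_expensive",       "cpu_io_cost": 10.0, "selectivity": 0.9},
    {"name": "p_cheap_selective", "cpu_io_cost": 2.0,  "selectivity": 0.05},
    {"name": "p_mid",             "cpu_io_cost": 5.0,  "selectivity": 0.5},
])
```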
20140188820 | TECHNIQUES FOR FINDING A COLUMN WITH COLUMN PARTITIONING - Techniques for finding a column with column partitioning are provided. Metadata for a container row is expanded to include information for searching ranges of partitioned column values. The metadata identifies offsets to specific ranges and specific columns within a specific range. The offsets also identify where compressed data for a desired column resides. This permits partitioned columns holding compressed data to be located without decompression, and decompressed on demand as needed. | 07-03-2014 |
20140181077 | TECHNIQUES FOR THREE-STEP JOIN PROCESSING ON COLUMN PARTITIONED TABLES - Techniques for processing joins on column partitioned tables are provided. A query includes a first-Column Partition (CP) table joined with a second-CP table. The query is decomposed into a three-step process and rewritten and processed. | 06-26-2014 |
20140181076 | TECHNIQUES FOR JOIN PROCESSING ON COLUMN PARTITIONED TABLES - Techniques for processing joins on column partitioned tables are provided. A join operation having a column partitioned table within a query is decomposed into a two-step process. The first step performs the join condition on the column partitioned table, with optional filtering conditions, and a non-column partitioned table, and spools the resulting columns to a spooled table. The spooled table is then rowid joined back to the column partitioned table to acquire remaining columns not present in the spooled table. Both the first and second steps can be separately resolved for costs when determining a query execution plan. | 06-26-2014 |
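The two-step shape can be illustrated with plain Python dictionaries standing in for column partitions (the table layout and names here are illustrative, not the patented representation):

```python
def two_step_cp_join(cp_table, other_keys, join_col, needed_cols):
    """Step 1: evaluate the join condition touching only the column partition
    it needs, spooling matching rowids. Step 2: rowid-join back to the CP
    table to fetch the remaining columns for only the spooled rows."""
    # cp_table: {column_name: [values]}, where rowid == list index.
    spool = [(rid, key) for rid, key in enumerate(cp_table[join_col])
             if key in other_keys]
    # Step 2: per-rowid lookups into the remaining column partitions.
    return [{"rowid": rid, join_col: key,
             **{c: cp_table[c][rid] for c in needed_cols}}
            for rid, key in spool]

cp = {"k": [1, 2, 3], "payload": ["a", "b", "c"]}
joined = two_step_cp_join(cp, other_keys={2, 3},
                          join_col="k", needed_cols=["payload"])
```

The point of the decomposition is visible here: the `payload` partition is touched only for rows that survived step 1.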
20140181075 | TECHNIQUES FOR QUERY STATISTICS INHERITANCE - Techniques for query statistics inheritance are provided. Statistics for a database are used to determine selectivity estimates for sparse joins and tables being joined together within a given query. These statistics are inherited up to the given query along with the selectivity estimates and provided to a database optimizer to use when developing query plans and selecting an optimal query plan for the given query. | 06-26-2014 |
20140149349 | PROVIDING METADATA TO DATABASE SYSTEMS AND ENVIRONMENTS WITH MULTIPLE PROCESSING UNITS OR MODULES - Metadata can be provided to multiple processing units of a database system by using local storages respectively provided for the processing units, such that a local storage is accessible only to its respective processing unit. As a result, processing units can access metadata when needed (e.g., when needed to process a database request at runtime) without having to access a source external to the database system. In addition, metadata (e.g., an XML object, XML schema, XSLT stylesheets, XQuery modules) can be provided using a database request or command, for example, by using a register statement. | 05-29-2014 |
20140067755 | TIME-BOUND BATCH STATUS ROLLUP FOR LOGGED EVENTS - Techniques for time-bound batch status rollup for logged events are provided. A status for each action defined in a database log is resolved during a configured interval of time. The statuses for the actions are aggregated at the end of the interval of time and then joined back into the log. | 03-06-2014 |
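A compact sketch of the interval rollup and the join back into the log; the "last status within the interval wins" aggregation rule is an assumption for illustration:

```python
from collections import defaultdict

def rollup_statuses(log, interval_seconds):
    """Resolve a status per action within each configured time interval,
    then join the aggregated statuses back onto the log entries."""
    rollup = defaultdict(dict)
    for entry in log:
        bucket = entry["ts"] // interval_seconds
        rollup[bucket][entry["action"]] = entry["status"]  # last status wins
    return [dict(entry,
                 rolled_up=rollup[entry["ts"] // interval_seconds][entry["action"]])
            for entry in log]

log = [
    {"ts": 5,  "action": "load",   "status": "started"},
    {"ts": 40, "action": "load",   "status": "done"},
    {"ts": 70, "action": "export", "status": "started"},
]
joined = rollup_statuses(log, interval_seconds=60)
```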
20140032614 | DATABASE PARTITION MANAGEMENT - Apparatus, systems, and methods may operate to receive a request to move at least a portion of a database table stored on a tangible medium from a current partition to a history partition, wherein the database table is partitioned into physical partitions according to a selected mapping update frequency. In response to receiving the request, activities may include modifying a logical partitioning of the database table by updating a mapping of the physical partitions to logical partitions. Other apparatus, systems, and methods are disclosed. | 01-30-2014 |
20130254238 | TECHNIQUES FOR PROCESSING RELATIONAL DATA WITH A USER-DEFINED FUNCTION (UDF) - Techniques for processing relational data with a user-defined function (UDF) are provided. Relational input data being requested by the UDF, from within a relational database system, is intercepted and normalized. The UDF is called with the normalized input data and as the UDF produces output data in response to the normalized input data that output data is captured and normalized. In an embodiment, the normalized output data is used to dynamically update a data model within the relational database for the input data. | 09-26-2013 |
20130173588 | TECHNIQUES FOR UPDATING JOIN INDEXES - Techniques for updating join indexes are provided. A determination is made to update date criteria in a join index query statement. The join index is parsed for current date and current time criteria. The join index is revised based on the location of the current date and current time criteria as they appear in the original join index. The revisions include new criteria that minimize the effort in maintaining and using the join index. | 07-04-2013 |
20120259892 | SECURELY EXTENDING ANALYTICS WITHIN A DATA WAREHOUSE ENVIRONMENT - A vendor is authenticated for use of a retailer's data warehouse and limited access rights are assigned to the vendor for access. The vendor accesses a graphical user interface (GUI) to select an available analysis module for execution against the data warehouse. Schemas are presented in the GUI based on the access rights, and specific schema selections are made by the vendor. The analysis module is then configured and executed against the data warehouse and filtered results are presented to the vendor; the results filtered based on the access rights assigned to the vendor. | 10-11-2012 |
20120254800 | INDEPENDENT ATTRIBUTE FILTERING - A graphical user interface (GUI) tool is presented to a user for interacting with an underlying database. The GUI tool includes a field selection and attribute selections for the field. The user selects a field and an attribute for that field and is presented with a first list of values retrieved from the database for the selected attribute. Next, the user selects a filter for the attribute within the GUI tool and a second reduced list of values is presented to the user within the GUI tool representing the filtered first list of values acquired by applying the filter. | 10-04-2012 |
20120173831 | DATA AWARE STORAGE SYSTEM OPERATIONS - Apparatus, systems, and methods may operate to classify storage locations in a storage medium according to at least three response time grades, to classify data to be stored in the storage locations according to at least three access frequency grades, and to migrate the data between the storage locations according to a predicted access frequency assigned to preemptive allocations of some of the storage locations, based on the response time grade and the access frequency grade associated with the data prior to migration. Other apparatus, systems, and methods are disclosed. | 07-05-2012 |
20120173496 | NUMERIC, DECIMAL AND DATE FIELD COMPRESSION - A method, apparatus, and article of manufacture for accessing data in a computer system. Compression and decompression functions are associated with a column of the table, in order to perform compression of decimal, numeric or date data stored in the column when the data is inserted or updated in the table, and in order to perform decompression of the data stored in the column when the data is retrieved from the table. The compression function compresses and stores the data in a fixed-length compressed field in the column without a length value, and the fixed-length compressed field has a size that is determined by a range of values for the data stored in the fixed-length compressed field. The decompression function retrieves and decompresses the data from the fixed-length compressed field. | 07-05-2012 |
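The key idea, a fixed-length field whose width is determined by the column's value range rather than by a stored length, can be sketched as offset encoding (the helper names are illustrative):

```python
def bytes_needed(lo, hi):
    """Width of the fixed-length compressed field, determined solely by the
    range of values the column can hold; no per-value length is stored."""
    span = hi - lo
    n = 1
    while span >= 1 << (8 * n):
        n += 1
    return n

def compress(value, lo, hi):
    """Store the value as an offset from the range minimum, in fixed width."""
    return (value - lo).to_bytes(bytes_needed(lo, hi), "big")

def decompress(field, lo):
    return int.from_bytes(field, "big") + lo

raw = compress(100_000, lo=0, hi=16_777_215)   # this range fits in 3 bytes
```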
20120173477 | PREDICTIVE RESOURCE MANAGEMENT - Apparatus, systems, and methods may operate to monitor database system resource consumption over various time periods, in conjunction with scheduled data loading, data export, and query operations. Additional activities may include generating a database system resource consumption map based on the monitoring, and adjusting database system workload throttling to accommodate predicted database system resource consumption based on the resource consumption map and current system loading, prior to the current database resource consumption reaching a predefined critical consumption level. The current system loading may be induced by data loading, data export, or query activity. Other apparatus, systems, and methods are disclosed. | 07-05-2012 |
20120166423 | COLLECTION OF STATISTICS FOR SPATIAL COLUMNS OR R-TREE INDEXES - Techniques for collecting statistics of spatial column data or R-Tree indexes are provided. A distributed database system includes a plurality of processing nodes controlling portions of spatial data. The nodes are instructed to create minimum bounding rectangles (MBRs) for their spatial data or R-Trees. The individual MBRs are merged and reformatted into a grid of equally sized cells. Each processing node is provided a copy of the grid to update based on statistics of each processing node's spatial data for a target table. The updated grids are then merged into a single grid and used by an optimizer to evaluate queries before the queries are executed. | 06-28-2012 |
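A minimal sketch of the MBR-to-grid step, assuming a simple overlap count as the per-cell statistic (the patent does not specify the statistic; this is illustrative):

```python
def mbr(points):
    """Minimum bounding rectangle (x0, y0, x1, y1) of a node's spatial data."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (min(xs), min(ys), max(xs), max(ys))

def to_grid(rects, cells=4):
    """Merge per-node MBRs and reformat them into a grid of equally sized
    cells, counting how many MBRs overlap each cell."""
    gx0 = min(r[0] for r in rects); gy0 = min(r[1] for r in rects)
    gx1 = max(r[2] for r in rects); gy1 = max(r[3] for r in rects)
    w, h = (gx1 - gx0) / cells, (gy1 - gy0) / cells
    grid = [[0] * cells for _ in range(cells)]
    for x0, y0, x1, y1 in rects:
        for i in range(cells):          # i indexes y, j indexes x
            for j in range(cells):
                cx0, cy0 = gx0 + j * w, gy0 + i * h
                if x0 < cx0 + w and x1 > cx0 and y0 < cy0 + h and y1 > cy0:
                    grid[i][j] += 1
    return grid

grid = to_grid([mbr([(0, 0), (2, 2)]), mbr([(2, 2), (4, 4)])], cells=2)
```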
20120166402 | TECHNIQUES FOR EXTENDING HORIZONTAL PARTITIONING TO COLUMN PARTITIONING - Techniques for extending horizontal partitioning to column partitioning are provided. A database table is partitioned into custom groups of rows and custom groups of columns. Each partitioned column is managed as a series of containers representing all values appearing under the partitioned column. A logical row represents a row of the table logically indicating each column value of a row. Compression, deletion, and insertion within the containers are managed via a control header maintained with each container. | 06-28-2012 |
20120166400 | TECHNIQUES FOR PROCESSING OPERATIONS ON COLUMN PARTITIONS IN A DATABASE - Techniques for processing operations on column partitions of a table in a database are provided. A table includes a control column partition. Each delete container of the control column partition represents multiple rows in the table (or a row partition, if any), and each row is represented by a bit flag within a bit string. Rows of the table set for deletion have their corresponding bits within a particular delete container set to indicate those rows are deleted. | 06-28-2012 |
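The delete-container idea maps directly onto a bit string; a minimal sketch (the class name is illustrative):

```python
class DeleteContainer:
    """A bit string in which each row of the table (or row partition) is
    represented by one bit flag; a set bit marks the row as deleted."""

    def __init__(self, num_rows):
        self.bits = bytearray((num_rows + 7) // 8)

    def mark_deleted(self, row):
        self.bits[row // 8] |= 1 << (row % 8)

    def is_deleted(self, row):
        return bool(self.bits[row // 8] & (1 << (row % 8)))

dc = DeleteContainer(16)
dc.mark_deleted(3)
dc.mark_deleted(10)
```

A scan can then skip deleted rows with a single bit test, without rewriting any column partition.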
20120158736 | VIRTUAL R-TREE MAPPED TO AN EXTENDIBLE-HASH BASED FILE SYSTEM - Techniques for mapping a virtual R-Tree to an extensible-hash based file system for databases are provided. Spatial data is identified within an existing file system, which stores data for a database. Rows of the spatial data are organized into collections; each collection represents a virtual block. The virtual blocks are used to form an R-Tree spatial index that overlays an existing index for the database on the existing file system. Each row within its particular virtual block includes a pointer to its native storage location within the existing file system. | 06-21-2012 |
20120158722 | DATABASE PARTITION MANAGEMENT - Apparatus, systems, and methods may operate to receive a request to move at least a portion of a database table stored on a tangible medium from a current partition to a history partition, wherein the database table is partitioned into physical partitions according to a selected mapping update frequency. In response to receiving the request, activities may include modifying a logical partitioning of the database table by updating a mapping of the physical partitions to logical partitions. Other apparatus, systems, and methods are disclosed. | 06-21-2012 |
20120144234 | AUTOMATIC ERROR RECOVERY MECHANISM FOR A DATABASE SYSTEM - A computer-implemented method, apparatus and article of manufacture for performing an automatic error recovery in a database system. Automatic error recovery is performed for a query execution plan, following errors, problems or failures that occur during execution, by automatically or manually deactivating and/or activating components, features or code paths, and then re-submitting the query execution plan for execution in the computer system. | 06-07-2012 |
20120136874 | TECHNIQUES FOR ORGANIZING SINGLE OR MULTI-COLUMN TEMPORAL DATA IN R-TREE SPATIAL INDEXES - Techniques for organizing single or multi-column temporal data into R-tree spatial indexes are provided. Temporal data for single or multiple column data, within a database system, is converted into one or more line segments. The resulting line segments are transformed into a minimum bounding rectangle (MBR). Finally, the MBR is inserted into an R-tree spatial index. | 05-31-2012 |
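The segment-to-MBR conversion can be sketched in a few lines; treating the time interval as the x axis and the second column as the y axis is an assumption for illustration:

```python
def interval_to_mbr(begin, end, value_lo, value_hi=None):
    """Treat a (begin, end) temporal interval, optionally paired with a
    second column's value range, as a line segment and wrap it in a
    minimum bounding rectangle (x = time axis, y = value axis)."""
    if value_hi is None:
        value_hi = value_lo   # single-column case: degenerate segment
    return (begin, value_lo, end, value_hi)

box = interval_to_mbr(20100101, 20101231, 5)
```

The resulting rectangle is then what gets inserted into the R-tree spatial index, letting temporal range predicates reuse ordinary spatial-index search.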
20120130963 | USER DEFINED FUNCTION DATABASE PROCESSING - Apparatus, systems, and methods may operate to retrieve multiple rows of a database in response to receiving a request to execute an aggregate user defined function (UDF) over the multiple rows, to sort each of the multiple rows into common groups, grouping together individual ones of the multiple rows that share one of the common groups, and to send UDF execution requests to apply the aggregate UDF to aggregate buffers of the common groups to produce an aggregate result, so that one of the UDF execution requests and one context switch are used to process each of the aggregate buffers used within one of the groups to provide at least one intermediate result that can be processed to form the aggregate result. Other apparatus, systems, and methods are disclosed. | 05-24-2012 |
20120117027 | METHODS AND SYSTEMS FOR HARDWARE ACCELERATION OF DATABASE OPERATIONS AND QUERIES FOR A VERSIONED DATABASE BASED ON MULTIPLE HARDWARE ACCELERATORS - Embodiments of the present invention provide a hardware accelerator that assists a host database system in processing its queries. The hardware accelerator comprises special purpose processing elements that are capable of receiving database query/operation tasks in the form of machine code database instructions, execute them in hardware without software, and return the query/operation result back to the host system. | 05-10-2012 |
20120084325 | MASTER DATA MANAGEMENT HIERARCHY MERGING - A method, system, apparatus, and article of manufacture is configured to merge hierarchies in a computer system. A relational database management system (RDBMS) stores information in the computer system. As part of a process and framework, a series of business rules and process workflows that manage data (that is hierarchical in nature) that resides in one or more RDBMS tables are maintained. A first and second hierarchy table are obtained/defined. A placeholder column that will contain mapping information may be defined with the database schema. User input is accepted that identifies data in the second table that maps to data in the first table. Based on the user input, the data in the second table is mapped to the data in the first table. The mapping is utilized to create a merged hierarchy in RDBMS. | 04-05-2012 |
20120084319 | MASTER DATA MANAGEMENT DATABASE ASSET AS A WEB SERVICE - A method, system, apparatus, and article of manufacture is configured to expose a database asset as a web service. A relational database management system (RDBMS) that stores information is executed in a computer system. As part of a process and framework, a series of business rules and process workflows are maintained that manage data that resides in RDBMS tables. A rule is created that contains an application programming interface definition with predefined input and output for exposing the database asset as the web service. The rule is exposed as the web service. The web service is used to invoke a database operation based on the database asset, and to output a result. | 04-05-2012 |
20120084257 | MASTER DATA MANAGEMENT VERSIONING - A method, system, apparatus, and article of manufacture provide the ability to maintain multiple versions of structured views of data in a computer system. A relational database management system (RDBMS) is executed that stores master data in the computer system in master RDBMS tables. The master data is hierarchical in nature and hierarchy metadata for the master data is stored in the RDBMS tables. As part of a process and framework, a series of business rules and process workflows are maintained to manage the master data. Version tables are created in the RDBMS that correspond to each of the master RDBMS tables. Each of the version tables includes an attribute denoting version information. Versions of the master data are defined by replicating the master data and hierarchy metadata into the corresponding version tables. The version tables are used to graphically visualize, manage, and manipulate the versions of the master data. | 04-05-2012 |
20120078941 | QUERY ENHANCEMENT APPARATUS, METHODS, AND SYSTEMS - Apparatus, systems, and methods may operate to receive user-specified input data from a user input device as a segment query that includes a plurality of criteria, and to store individual counts and at least one additional count in a storage medium. The individual counts are derived from processing the segment query as a corresponding plurality of queries associated with each of the criteria, and the at least one additional count comprises an intersection of at least two of the criteria, regardless of whether the user-specified input data includes an intersection operation. Other apparatus, systems, and methods are disclosed. | 03-29-2012 |
20120078860 | ALGORITHMIC COMPRESSION VIA USER-DEFINED FUNCTIONS - A method, apparatus, and article of manufacture for accessing data in a computer system. One or more user-defined functions (UDFs) implementing a desired compression or decompression algorithm are created, wherein the UDFs are associated with one or more columns of a table when the table is created or altered, in order to perform compression or decompression of data stored in the associated columns, such that the data is compressed by the UDF implementing the desired compression algorithm when the data is inserted or updated in the table, and the data is decompressed by the UDF implementing the desired decompression algorithm when the data is retrieved from the table. | 03-29-2012 |
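A sketch of such a UDF pair, with `zlib` standing in for "a desired compression algorithm" (the function names are hypothetical; in the patented scheme these would be registered against a table column, not called directly):

```python
import zlib

def compress_udf(value: str) -> bytes:
    """Runs when data is inserted or updated in the associated column."""
    return zlib.compress(value.encode("utf-8"))

def decompress_udf(stored: bytes) -> str:
    """Runs when data is retrieved from the associated column."""
    return zlib.decompress(stored).decode("utf-8")

original = "repetitive " * 50
round_tripped = decompress_udf(compress_udf(original))
```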
20120059817 | METHOD FOR INCREASING THE EFFICIENCY OF SYNCHRONIZED SCANS THROUGH INTELLIGENT QUERY DISPATCHING - A computer-implemented method, apparatus and article of manufacture for optimizing execution of database queries in a computer system. In one embodiment, the steps and functions include: generating first and second query execution plans for first and second requests, wherein the first and second query execution plans are each comprised of one or more steps that scan a specified table in a database stored on the computer system in order to retrieve data from the table; and executing the first and second query execution plans, wherein intelligent query dispatching is performed on the steps of the first and second query execution plans to ensure that the steps share the data retrieved from the table and cached in memory. | 03-08-2012 |
20120047126 | METHODS AND SYSTEMS FOR HARDWARE ACCELERATION OF STREAMED DATABASE OPERATIONS AND QUERIES BASED ON MULTIPLE HARDWARE ACCELERATORS - Embodiments of the present invention provide a hardware accelerator that assists a host database system in processing its queries. The hardware accelerator comprises special purpose processing elements that are capable of receiving database query/operation tasks in the form of machine code database instructions, execute them in hardware without software, and return the query/operation result back to the host system. | 02-23-2012 |
20110320418 | DATABASE COMPRESSION ANALYZER - Apparatus, systems, and methods may operate to receive requests to execute a plurality of compression and/or decompression mechanisms on one or more database objects; to execute each of the compression and/or decompression mechanisms, on a sampled basis, on the database objects; to determine comparative performance characteristics associated with each of the compression and/or decompression mechanisms; and to record at least some of the performance characteristics and/or derivative characteristics derived from the performance characteristics in a performance summary table. The table may be published to a storage medium or a display screen. Other apparatus, systems, and methods are disclosed. | 12-29-2011 |
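The sampled comparison can be sketched as follows; the codec set and the ratio/time metrics are assumptions standing in for whatever mechanisms and performance characteristics the analyzer actually records:

```python
import bz2
import time
import zlib

def analyze(sample_rows, codecs):
    """Run each compression mechanism on a sampled set of rows and record
    comparative characteristics in a performance summary table."""
    raw = "".join(sample_rows).encode("utf-8")
    summary = []
    for name, compress in codecs.items():
        start = time.perf_counter()
        out = compress(raw)
        summary.append({"codec": name,
                        "ratio": len(out) / len(raw),
                        "seconds": time.perf_counter() - start})
    return summary

summary = analyze(["row-%d " % (i % 10) for i in range(1000)],
                  {"zlib": zlib.compress, "bz2": bz2.compress})
```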
20110320417 | DATABASE COMPRESSION - Apparatus, systems, and methods may operate to receive a set of ordered user-selected compression rules as a compression rule set comprising at least one compression threshold condition, to create or transform a database object with rows to be selectively compressed according to the compression rules in the compression rule set (providing a transformed object), and to publish at least a portion of the transformed object to one of a storage medium or a display screen. Other apparatus, systems, and methods are disclosed. | 12-29-2011 |
20110270896 | GLOBAL & PERSISTENT MEMORY FOR USER-DEFINED FUNCTIONS IN A PARALLEL DATABASE - User Defined Functions (UDFs) for a parallel database system are enhanced by making memory persist even when the UDFs terminate. The memory can be shared between different instances of the UDF and the memory can be custom mapped, encrypted, and use custom security. | 11-03-2011 |
20110246432 | ACCESSING DATA IN COLUMN STORE DATABASE BASED ON HARDWARE COMPATIBLE DATA STRUCTURES - Embodiments of the present invention provide one or more hardware-friendly data structures that enable efficient hardware acceleration of database operations. In particular, the present invention employs a column-store format for the database. In the database, column-groups are stored with implicit row ids (RIDs) and a RID-to-primary key column having both column-store and row-store benefits via column hopping and a heap structure for adding new data. Fixed-width column compression allows for easy hardware database processing directly on the compressed data. A global database virtual address space is utilized that allows for arithmetic derivation of any physical address of the data regardless of its location. A word compression dictionary with token compare and sort index is also provided to allow for efficient hardware-based searching of text. A tuple reconstruction process is provided as well that allows hardware to reconstruct a row by stitching together data from multiple column groups. | 10-06-2011 |
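The arithmetic address derivation that fixed-width columns make possible reduces to one multiply-add; a minimal sketch (the base address and row width are made-up example values):

```python
def rid_to_address(base_address, row_id, fixed_row_width):
    """With fixed-width (compressed) columns, the physical address of any
    row is a pure arithmetic function of its row identifier: no per-row
    lookup structure is needed."""
    return base_address + row_id * fixed_row_width

addr = rid_to_address(base_address=0x10_0000, row_id=42, fixed_row_width=16)
```

This is what lets hardware locate data "regardless of its location": the address computation is the same whether the target is in accelerator memory or host memory, given the right base.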
20110218987 | HARDWARE ACCELERATED RECONFIGURABLE PROCESSOR FOR ACCELERATING DATABASE OPERATIONS AND QUERIES - Embodiments of the present invention provide a hardware accelerator that assists a host database system in processing its queries. The hardware accelerator comprises special purpose processing elements that are capable of receiving database query/operation tasks in the form of machine code database instructions, execute them in hardware without software, and return the query/operation result back to the host system. For example, table and column descriptors are embedded in the machine code database instructions. For ease of installation, the hardware accelerators employ a standard interconnect, such as a PCIe or HT interconnect. The processing elements implement a novel dataflow design and Inter Macro-Op Communication (IMC) data structures to execute the machine code database instructions. The hardware accelerator may also comprise a relatively large memory to enhance the hardware execution of the query/operation tasks requested. The hardware accelerator utilizes hardware-friendly memory addressing, which allows for arithmetic derivation of a physical address from a global database virtual address simply based on a row identifier. The hardware accelerator minimizes memory reads/writes by keeping most intermediate results flowing through IMCs in pipelined and parallel fashion. Furthermore, the hardware accelerator may employ task pipelining and pre-fetch pipelining to enhance its performance. | 09-08-2011 |
20110167083 | HARDWARE ACCELERATED RECONFIGURABLE PROCESSOR FOR ACCELERATING DATABASE OPERATIONS AND QUERIES - Embodiments of the present invention provide a hardware accelerator that assists a host database system in processing its queries. The hardware accelerator comprises special purpose processing elements that are capable of receiving database query/operation tasks in the form of machine code database instructions, execute them in hardware without software, and return the query/operation result back to the host system. For example, table and column descriptors are embedded in the machine code database instructions. For ease of installation, the hardware accelerators employ a standard interconnect, such as a PCIe or HT interconnect. The processing elements implement a novel dataflow design and Inter Macro-Op Communication (IMC) data structures to execute the machine code database instructions. The hardware accelerator may also comprise a relatively large memory to enhance the hardware execution of the query/operation tasks requested. The hardware accelerator utilizes hardware-friendly memory addressing, which allows for arithmetic derivation of a physical address from a global database virtual address simply based on a row identifier. The hardware accelerator minimizes memory reads/writes by keeping most intermediate results flowing through IMCs in pipelined and parallel fashion. Furthermore, the hardware accelerator may employ task pipelining and pre-fetch pipelining to enhance its performance. | 07-07-2011 |
20110167055 | HARDWARE ACCELERATED RECONFIGURABLE PROCESSOR FOR ACCELERATING DATABASE OPERATIONS AND QUERIES - Embodiments of the present invention provide a hardware accelerator that assists a host database system in processing its queries. The hardware accelerator comprises special purpose processing elements that are capable of receiving database query/operation tasks in the form of machine code database instructions, execute them in hardware without software, and return the query/operation result back to the host system. For example, table and column descriptors are embedded in the machine code database instructions. For ease of installation, the hardware accelerators employ a standard interconnect, such as a PCIe or HT interconnect. The processing elements implement a novel dataflow design and Inter Macro-Op Communication (IMC) data structures to execute the machine code database instructions. The hardware accelerator may also comprise a relatively large memory to enhance the hardware execution of the query/operation tasks requested. The hardware accelerator utilizes hardware-friendly memory addressing, which allows for arithmetic derivation of a physical address from a global database virtual address simply based on a row identifier. The hardware accelerator minimizes memory reads/writes by keeping most intermediate results flowing through IMCs in pipelined and parallel fashion. Furthermore, the hardware accelerator may employ task pipelining and pre-fetch pipelining to enhance its performance. | 07-07-2011 |
20110161401 | DYNAMIC RESOURCE MANAGEMENT - Techniques for dynamic resource management are presented. A World-Wide Web (WWW) page is acquired on first access to a WWW site and rendered with a script tag. When a browser loads the WWW page, the script tag is processed to remotely execute a script on the WWW site. The script produces a single file having code for the resources that are referenced in the WWW page. The single file is provided back to the browser where it is cached so that when any of the resources are accessed via the WWW page, the needed code for those resources is available for execution within the cache of the browser. | 06-30-2011 |
20110161135 | METHOD AND SYSTEMS FOR COLLATERAL PROCESSING - Methods and systems for processing collaterals are described. A method may include receiving qualifying criteria from a client. The qualifying criteria may define assignments of one or more collaterals. An assignment tool may be generated based on the qualifying criteria. The assignment tool may include a number of stored attributes and one or more interaction attributes to be determined based on a customer interaction. The assignment tool may be used to assign a collateral to a customer. Additional methods and systems are disclosed. | 06-30-2011 |
20110154254 | SYSTEM AND METHOD FOR SETTING GOALS AND MODIFYING SEGMENT CRITERIA COUNTS - Methods and systems for setting goals and modifying segment criteria counts are described. A method may include displaying to a user a graphical user interface (GUI) to enable the user to combine multiple search criteria having variable parameters, used in searching of a database, to produce a predefined count of search results. User selections of the multiple search criteria, values for the variable parameters, and Boolean operations to combine the search criteria may be received from the user. As the received user selections change, a count of search results retrieved from the database, based on the user selections, may be dynamically displayed. Additional methods and systems are disclosed. | 06-23-2011 |
20110153270 | OUTLIER PROCESSING - Apparatus, systems, and methods may operate to acquire an original data set comprising a series of data points having an independent portion and a dependent portion, the dependent portion representing a measure of device performance that depends on at least one device characteristic represented by the independent portion. Additional activity may include identifying outlier data points in the series by determining, in comparison with all other members of the series, whether the outlier data points conform to a known trend of the series; transforming the original data set into a transformed data set by removing the outlier data points from the series; and publishing the transformed data set. Other apparatus, systems, and methods are disclosed. | 06-23-2011 |
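The outlier-processing abstract above describes identifying points that fail to conform to the series' trend and publishing the data set with those points removed. A minimal sketch of one way to do this, assuming a least-squares linear trend and a residual threshold in standard deviations (the specific trend model and threshold are illustrative assumptions, not from the filing):

```python
def remove_outliers(points, threshold=2.0):
    """Fit a least-squares line to (x, y) points, then drop any point
    whose residual from the trend exceeds `threshold` standard
    deviations of all residuals (an illustrative conformance test)."""
    n = len(points)
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    mx = sum(xs) / n
    my = sum(ys) / n
    # Least-squares slope and intercept of the trend line.
    denom = sum((x - mx) ** 2 for x in xs) or 1.0
    slope = sum((x - mx) * (y - my) for x, y in points) / denom
    intercept = my - slope * mx
    residuals = [y - (slope * x + intercept) for x, y in points]
    sd = (sum(r * r for r in residuals) / n) ** 0.5 or 1.0
    return [p for p, r in zip(points, residuals) if abs(r) <= threshold * sd]

# The point (3, 20) breaks the otherwise near-linear trend and is dropped:
cleaned = remove_outliers([(0, 0), (1, 2), (2, 4), (3, 20), (4, 8), (5, 10)])
```

Here the independent portion is x (e.g. a device characteristic) and the dependent portion is y (the performance measure); the returned list is the transformed data set to be published.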
20110145699 | ANNOTATION DRIVEN REPRESENTATIONAL STATE TRANSFER (REST) WEB SERVICES - Techniques for annotation driven Representational State Transfer (REST) web services are presented. A platform-independent World-Wide Web application is annotated to expose the methods of the application when accessed via a WWW site. When rendered via the WWW site, the methods are described in a REST-compliant (RESTful) format. | 06-16-2011 |
20110145297 | SYSTEM AND METHOD FOR ENHANCED USER INTERACTIONS WITH A GRID - Methods and systems that implement enhanced user interactions with a grid are described. A method may include generating a grid of cells arranged in a number of rows and columns. Each row may correspond to a data record of a database. The grid may be displayed to a user while identifying one or more cells as editable cells. Input data may be received from the user for each of the editable cells. The input data may be validated using predefined criteria to identify incorrect input data and errors associated with the incorrect input data may be displayed to the user. Additional methods and systems are disclosed. | 06-16-2011 |
20110145200 | PRECEDENCE BASED STORAGE - Techniques for precedence based storage are presented. Storage for a database is organized into storage pools; collections of pools form storage classes. The storage pools within a particular class are organized in a precedence-based order so that when storage for the database is needed, the storage pools are used in the defined order of precedence. Additionally, each storage pool or storage class can be circumscribed by security limitations, quality of service limitations, and/or backup procedures. | 06-16-2011 |
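The precedence-based storage abstract above organizes pools within a class in a defined order and consumes them in that order. A minimal sketch of the allocation step, assuming pools carry a numeric precedence and a free-space counter (the data shapes and names are illustrative assumptions):

```python
def allocate(storage_classes, cls_name, size):
    """Allocate `size` units from the named storage class, trying its
    pools in their defined order of precedence and falling through to
    the next pool when one lacks sufficient free space."""
    for pool in sorted(storage_classes[cls_name], key=lambda p: p["precedence"]):
        if pool["free"] >= size:
            pool["free"] -= size
            return pool["name"]
    return None  # the whole class is exhausted
```

Per-pool security, quality-of-service, or backup constraints would be additional checks inside the loop before a pool is chosen.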
20110137961 | TECHNIQUES FOR CROSS REFERENCING DATA - Techniques for cross referencing data are presented. A first database object and a second database object are linked together. The linkage is automatically cross referenced to a third database object. Access to any of the database objects can be achieved via any of the remaining database objects and vice versa. Additionally, the link and cross reference can be visualized and visually manipulated and modified. | 06-09-2011 |
20110137957 | TECHNIQUES FOR MANAGING DATA RELATIONSHIPS - Techniques for managing data relationships are presented. A database element from a first database table is linked with a database element of a second database table via a Graphical User Interface as directed by a user. The link establishes a data relationship having attributes and properties. The relationship along with the attributes and properties are graphically presented to the user for inspection and analysis. | 06-09-2011 |
20110099155 | FAST BATCH LOADING AND INCREMENTAL LOADING OF DATA INTO A DATABASE - Embodiments of the present invention provide for batch and incremental loading of data into a database. In the present invention, the loader infrastructure utilizes machine code database instructions and hardware acceleration to parallelize the load operations with the I/O operations. A large, hardware accelerator memory is used as staging cache for the load process. The load process also comprises an index profiling phase that enables balanced partitioning of the created indexes to allow for pipelined load. The online incremental loading process may also be performed while serving queries. | 04-28-2011 |
20110093477 | METHOD FOR ESTIMATION OF ORDER-BASED STATISTICS ON SLOWLY CHANGING DISTRIBUTIONS - A computer-implemented method for estimation of order-based statistics on slowly changing distributions of data stored on a computer. An initial set of data is converted to an initial histogram based representation of the data set's distribution. New or removed data is converted into a new histogram separate from the initial histogram. The new histogram is combined with the initial histogram to build a combined histogram. Percentiles and order-based statistics are estimated from the combined histogram to provide analysis of a combination of the initial set of data combined with the new or removed data. | 04-21-2011 |
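The abstract above estimates order-based statistics by converting data to histograms, combining a delta histogram with the initial one, and reading percentiles off the combined histogram. A minimal sketch with fixed-width buckets (bucket width and function names are illustrative assumptions, not from the filing):

```python
from collections import Counter

def build_histogram(data, bucket_width=10):
    """Summarize a data set as counts per fixed-width bucket."""
    return Counter(int(x // bucket_width) for x in data)

def merge_histograms(initial, delta):
    """Combine the initial histogram with a histogram of new data;
    Counter subtraction could analogously account for removed data."""
    return initial + delta

def estimate_percentile(hist, p, bucket_width=10):
    """Walk buckets in order until the cumulative count reaches p percent
    of the total; return that bucket's upper edge as the estimate."""
    total = sum(hist.values())
    target = total * p / 100.0
    cum = 0
    for bucket in sorted(hist):
        cum += hist[bucket]
        if cum >= target:
            return (bucket + 1) * bucket_width
    return None
```

Because only the small delta histogram is built from new data, the expensive full-scan summarization is avoided when the underlying distribution changes slowly.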
20110078607 | WORKFLOW INTEGRATION WITH ADOBE™ FLEX™ USER INTERFACE - A method, system, apparatus, and article of manufacture provides the ability to visualize master data management (MDM) data as part of a MDM workflow user interface (UI) in a computer system. MDM data resides in one or more tables of a relational database management system. An MDM system maintains, as part of a process and framework, a first process workflow to manage relationship data. The relationship data is data required to manage an association of one piece of MDM data to another piece of MDM data. A first process workflow provides a UI node that contains a link to a file that describes UI components to display when the first process workflow is executed. A first of the UI components identifies an Adobe™ Flex™ based UI component. The Adobe™ Flex™ based UI component enables the representation and viewing of the MDM data in a hierarchy. | 03-31-2011 |
20110078201 | RAGGED AND UNBALANCED HIERARCHY MANAGEMENT AND VISUALIZATION - A method, apparatus, and article of manufacture provide the ability to define a view of data in a computer system. A relational database management system (RDBMS) executes and stores the information in the computer system. As part of a process and framework, a series of business rules and process workflows are maintained to manage hierarchical data that resides in RDBMS tables. User input is accepted that defines a hierarchy that is projected onto the data. The hierarchy may be parent-child relationships with no level consistency. Alternatively, the hierarchy may have branches and levels, with each of the levels having a consistent meaning but inconsistent depths due to one level of a branch being unpopulated. The hierarchy is stored as metadata in the RDBMS and utilized to graphically visualize, manage, and manipulate the data. | 03-31-2011 |
20110040773 | DATA ROW PACKING APPARATUS, SYSTEMS, AND METHODS - Apparatus, systems, and methods may operate to receive a designation of multiple rows to supply data to a single user defined function, which is made available in a structured query language (SQL) SELECT statement. Further activities may include retrieving the data from at least one storage medium, packing each of the multiple rows having a common key into a single row, and transforming the data from a first state into a second state by applying the single function to the data using a single access module processor. Other apparatus, systems, and methods are disclosed. | 02-17-2011 |
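The row-packing step described above gathers all rows sharing a key into one row so the user defined function runs once per key rather than once per row. A minimal sketch of that grouping and the single-invocation transform (data shapes and names are illustrative assumptions, not from the filing):

```python
from collections import defaultdict

def pack_rows(rows):
    """Pack multiple (key, value) rows sharing a common key into one
    packed row per key, so a single UDF invocation can see all of a
    key's data at once."""
    packed = defaultdict(list)
    for key, value in rows:
        packed[key].append(value)
    return dict(packed)

def apply_udf(packed, udf):
    """Invoke the user defined function once per packed row."""
    return {key: udf(values) for key, values in packed.items()}
```

With `sum` standing in for the UDF, `apply_udf(pack_rows([("a", 1), ("a", 2), ("b", 3)]), sum)` transforms the per-key value lists into one result per key.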
20110029959 | TECHNIQUES FOR DISCOVERING DATABASE CONNECTIVITY LEAKS - Techniques for discovering database connectivity leaks are presented. Each connection made by an application to a database is monitored. When the application is shut down, if information regarding a particular connection remains in memory, then that connection is reported as a potential database connectivity leak. | 02-03-2011 |
20110029539 | METADATA AS COMMENTS FOR SEARCH PROBLEM DETERMINATION AND ANALYSIS - Techniques for using metadata as comments to assist with search problem determination and analysis are provided. Before an action is taken on a search, contextual information is gathered as metadata about the action and actor requesting the action. The metadata is embedded in the search as comments and the comments are subsequently logged when the action is performed on the search. The comments combine with other comments previously recorded to permit subsequent analysis on searches. | 02-03-2011 |
20100314441 | TECHNIQUES FOR MANAGING FRAUD INFORMATION - Techniques are presented for managing fraud information. Metadata defines user profiles, security levels, fraud cases, and presentation information. One or more queries or reports are processed against disparate data store tables and the results are aggregated into a repository. The repository is also defined by the metadata. Furthermore, operations associated with sharing, viewing, and accessing the results from the repository are defined and controlled by the metadata. In an embodiment, portions of the metadata may be viewed and navigated in a hierarchical and graphical formatted presentation. | 12-16-2010 |
20100145929 | ACCURATE AND TIMELY ENFORCEMENT OF SYSTEM RESOURCE ALLOCATION RULES - A computer-implemented method, apparatus and article of manufacture for optimizing a database query. A query execution plan for the database query is generated using estimated cost information; one or more steps of the query execution plan are executed to retrieve data from a database stored on the computer system. Actual cost information is generated for each of the executed steps, and the estimated cost information is re-calculated using the actual cost information. One or more resource allocation rules defined on one or more steps of the query execution plan are executed, based on the estimated cost information, wherein the resource allocation rules include one or more defined actions. The estimated cost information may be re-calculated using the actual cost information when confidence in the estimated cost information is low, but the estimated cost information may not be re-calculated when confidence in the estimated cost information is high. In addition, the estimated cost information may be re-calculated using the actual cost information only when the step has one or more resource allocation rules defined thereon. | 06-10-2010 |
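The enforcement logic above can be sketched as a walk over executed plan steps: refine the running cost from actuals only when confidence in the estimate is low and a rule is defined on the step, then fire the rule's action when the refined total crosses its threshold. A minimal illustration (step layout, field names, and the abort action are all assumptions, not from the filing):

```python
def enforce_rules(steps, cost_limit, abort_action):
    """Walk executed plan steps; after each, use the actual cost in place
    of the estimate only when estimate confidence is low AND a resource
    allocation rule is defined on the step, then fire the rule's defined
    action if the refined running total exceeds the limit."""
    total = 0.0
    for step in steps:
        refine = step["has_rule"] and step["confidence"] == "low"
        total += step["actual"] if refine else step["estimated"]
        if step["has_rule"] and total > cost_limit:
            return abort_action(step["name"], total)
    return ("completed", total)
```

This captures the key asymmetry in the abstract: a high-confidence estimate is trusted as-is, so a rule fires (or not) on the original estimate even when actuals differ.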
20100114898 | PUBLICATION SERVICES - An apparatus, method, and article of manufacture provide the ability to publish information to an external source as part of an integrated workflow in a computer system. The computer system executes a relational database management system (RDBMS). A publication services processing engine utilizes the RDBMS to publish the information based on a publication node. A publication object defines a collection of information that is published to the external source. A publication action defines a specification of a manner in which the information in the publication object is to be published to the external source. The publication node defines a workflow data process that specifies the publication object and the publication action. | 05-06-2010 |
20100088334 | HIERARCHY MANAGER FOR MASTER DATA MANAGEMENT - A method, apparatus, and article of manufacture is configured to define a structured view of data in a computer system. A relational database management system (RDBMS) stores information in the computer system. As part of a process and framework, a series of business rules and process workflows that manage data (that is hierarchical in nature) that resides in one or more RDBMS tables are maintained. User input is accepted that defines a hierarchical structure that is projected onto the data. The hierarchical structure is stored as metadata in the RDBMS. The hierarchical structure is utilized to graphically visualize, manage, and manipulate the data. | 04-08-2010 |
20100088286 | DEPLOYMENT MANAGER FOR MASTER DATA MANAGEMENT - A method, apparatus, and article of manufacture provide the ability to deploy a data management application to a target computer system. Metadata for a master data management (MDM) application is stored in a deployment database. The metadata is representative of business rules and process workflows that manage business data from multiple sources and a model definition for a model for a central business database. Configuration settings for the MDM application are stored in the deployment database. The metadata and configuration settings are retrieved from the deployment database. Installation instructions of the MDM application are confirmed based on input into a graphical user interface. The master data management application is installed on the target computer system based on the installation instructions, metadata, and configuration settings. | 04-08-2010 |
20100036886 | DEFERRED MAINTENANCE OF SPARSE JOIN INDEXES - A system and method include defining a snapshot join index using a sparse condition in a join index definition. A new sparse condition of the snapshot join index is compared with an old sparse condition. Rows in a base table are identified as a function of the comparing, and the join index table is updated using the identified rows. | 02-11-2010 |
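The deferred-maintenance step above compares the snapshot join index's new sparse condition with its old one and touches only the rows whose membership changes. A minimal sketch with conditions as predicates (the representation is an illustrative assumption, not from the filing):

```python
def deferred_maintenance(base_rows, old_cond, new_cond):
    """Compare the snapshot join index's old and new sparse conditions:
    rows newly satisfying the condition are inserted into the join index
    table, rows no longer satisfying it are deleted; all other rows are
    untouched, which is what makes the maintenance cheap."""
    to_insert = [r for r in base_rows if new_cond(r) and not old_cond(r)]
    to_delete = [r for r in base_rows if old_cond(r) and not new_cond(r)]
    return to_insert, to_delete
```

For example, widening a condition from `x > 5` to `x > 3` over rows 1-10 inserts only rows 4 and 5 and deletes nothing.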
20100036800 | AGGREGATE JOIN INDEX UTILIZATION IN QUERY PROCESSING - A system and method include obtaining a query and identifying an aggregate join index (AJI) at a high level of aggregation. The dimension table may be rolled-up with the grouping key being the union of the grouping key in the AJI and the grouping key of the query. The identified AJI is joined with the rolled-up dimension table to obtain columns in the query that are not in the identified AJI. The joined AJI and rolled-up dimension table are then rolled up to answer the query. | 02-11-2010 |
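The AJI utilization above joins a finer-grained aggregate join index to a rolled-up dimension table to pick up the query's grouping column, then rolls the result up to the query's grain. A minimal sketch where the AJI is grouped on a dimension key and the query groups on a coarser dimension attribute (table shapes and names are illustrative assumptions, not from the filing):

```python
from collections import defaultdict

def answer_with_aji(aji, dim, query_key):
    """Answer a coarser aggregate query from an AJI at a finer grain:
    map each AJI grouping key to the query's grouping column via the
    rolled-up dimension table, then roll totals up to the query grain."""
    # Roll up the dimension table to a lookup: AJI key -> query grouping value.
    dim_map = {row["id"]: row[query_key] for row in dim}
    result = defaultdict(int)
    for row in aji:
        result[dim_map[row["dim_id"]]] += row["total"]
    return dict(result)
```

So an AJI of per-store totals can answer a per-region query without touching the fact table: the join supplies `region`, and the final roll-up sums store totals within each region.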
20100036799 | QUERY PROCESSING USING HORIZONTAL PARTIAL COVERING JOIN INDEX - A computer implemented system and method includes obtaining a query referring to rows in a relational database. A sparse index of the database that has a set of rows that is a subset of the rows referred to in the query is obtained. Rows referred to in the query that are not in the sparse index are then obtained and a union of such rows and the rows of the sparse index is performed to obtain a complete row set for processing the query. | 02-11-2010 |
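The partial-covering strategy above serves the covered portion of the query from the sparse index and fetches only the uncovered remainder from the base table, unioning the two. A minimal sketch with the sparse condition and query predicate as functions (an illustrative representation, not from the filing):

```python
def answer_with_partial_index(base_rows, sparse_rows, sparse_cond, query_cond):
    """Serve rows the sparse join index covers directly from the index,
    read only the uncovered remainder (rows matching the query but not
    the sparse condition) from the base table, and union the two sets
    to form the complete row set for the query."""
    covered = [r for r in sparse_rows if query_cond(r)]
    uncovered = [r for r in base_rows if query_cond(r) and not sparse_cond(r)]
    return covered + uncovered
```

For a query on `x > 3` against a sparse index built on `x > 5`, the index supplies rows 6-10 and only rows 4-5 come from the base table.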
20100030731 | COST-BASED QUERY REWRITE USING MATERIALIZED VIEWS - A system and method of rewriting a database query where the query contains an aggregate includes the following. If one or more aggregate materialized views are considered, the query is rewritten using an aggregate materialized view. If one or more non-aggregate multi-table materialized views are considered, the query is rewritten using a multi-table materialized view. A join cost is determined for each such non-aggregate multi-table materialized view. If one or more non-aggregate single table materialized views are considered, the query is rewritten using the single table materialized view. A join cost is determined for each such non-aggregate single table materialized view. Finally, a current total cost is determined for use of various materialized views as a function of join cost, aggregation cost and spool cost to select a rewritten query. | 02-04-2010 |
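The final selection step above reduces to picking the candidate rewrite whose total of join, aggregation, and spool costs is lowest. A minimal sketch of that comparison (the candidate structure and cost fields are illustrative assumptions, not from the filing):

```python
def choose_rewrite(candidates):
    """Pick the candidate rewrite with the lowest total cost, where the
    total is the sum of its join, aggregation, and spool components --
    the selection criterion named in the abstract."""
    return min(candidates, key=lambda c: c["join"] + c["agg"] + c["spool"])

# Candidates might include rewrites against an aggregate MV, a
# multi-table MV, and the base tables themselves:
candidates = [
    {"name": "aggregate_mv",  "join": 0, "agg": 2, "spool": 1},
    {"name": "multi_table_mv", "join": 5, "agg": 3, "spool": 1},
    {"name": "base_tables",   "join": 9, "agg": 4, "spool": 2},
]
```

Here the aggregate materialized view wins because its pre-computed aggregation eliminates join cost and most aggregation cost.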