Patent application number | Description | Published |
--- | --- | --- |
20080235220 | METHODOLOGIES AND ANALYTICS TOOLS FOR IDENTIFYING WHITE SPACE OPPORTUNITIES IN A GIVEN INDUSTRY - A method for analyzing predefined subject matter in a patent database being for use with a set of target patents, each target patent related to the predefined subject matter, the method comprising: creating a feature space based on frequently occurring terms found in the set of target patents; creating a partition taxonomy based on a clustered configuration of the feature space; editing the partition taxonomy using domain expertise to produce an edited partition taxonomy; creating a classification taxonomy based on structured features present in the edited partition taxonomy; creating a contingency table by comparing the edited partition taxonomy and the classification taxonomy to provide entries in the contingency table; and identifying all significant relationships in the contingency table to help determine the presence of any white space. | 09-25-2008 |
20080243889 | INFORMATION MINING USING DOMAIN SPECIFIC CONCEPTUAL STRUCTURES - A method and analytics tools for information mining incorporating domain specific knowledge and conceptual structures are disclosed, the method including: providing a first set of documents related to a first topic of interest; using a first taxonomy to categorize the first set of documents into a set of categories; providing a second set of documents related to a second topic of interest; categorizing the second set of documents according to the set of categories of the first set of documents; using an element of domain knowledge to re-categorize the first set of documents; and examining a category to identify a document of interest. | 10-02-2008 |
20080301105 | METHODOLOGIES AND ANALYTICS TOOLS FOR LOCATING EXPERTS WITH SPECIFIC SETS OF EXPERTISE - A method and analytics tools for locating experts with specific sets of expertise are disclosed, the method including providing a collection of documents P | 12-04-2008 |
20080306987 | BUSINESS INFORMATION WAREHOUSE TOOLKIT AND LANGUAGE FOR WAREHOUSING SIMPLIFICATION AND AUTOMATION - A method for use with an information (or data) warehouse comprises managing the information warehouse with instructions in a declarative language. The instructions specify information warehouse-level tasks to be done without specifying certain details of how the tasks are to be implemented, for example, using databases and text indexers. The details are hidden from the user and include, for example, in an information warehouse having a FACT table that joins two or more dimension tables, details of database level operations when structured data are being handled, including database command line utilities, database drivers, and structured query language (SQL) statements; and details of text-indexing engines when unstructured data are being handled. The information warehouse is managed in a dynamic way in which different tasks—such as data loading tasks and information warehouse construction tasks—may be interleaved (i.e., there is no particular order in which the different tasks must be completed). | 12-11-2008 |
20080307011 | FAILURE RECOVERY AND ERROR CORRECTION TECHNIQUES FOR DATA LOADING IN INFORMATION WAREHOUSES - A method of data loading for large information warehouses includes performing checkpointing concurrently with data loading into an information warehouse, the checkpointing ensuring consistency among multiple tables; and recovering from a failure in the data loading using the checkpointing. A method is also disclosed for performing versioning concurrently with data loading into an information warehouse. The versioning method enables processing undo and redo operations of the data loading between a later version and a previous version. Data load failure recovery is performed without starting a data load from the beginning but rather from a latest checkpoint for data loading at an information warehouse level using a checkpoint process characterized by a state transition diagram having a multiplicity of states; and tracking state transitions among the states using a system state table. | 12-11-2008 |
20080307255 | FAILURE RECOVERY AND ERROR CORRECTION TECHNIQUES FOR DATA LOADING IN INFORMATION WAREHOUSES - A method of data loading for large information warehouses includes performing checkpointing concurrently with data loading into an information warehouse, the checkpointing ensuring consistency among multiple tables; and recovering from a failure in the data loading using the checkpointing. A method is also disclosed for performing versioning concurrently with data loading into an information warehouse. The versioning method enables processing undo and redo operations of the data loading between a later version and a previous version. Data load failure recovery is performed without starting a data load from the beginning but rather from a latest checkpoint for data loading at an information warehouse level using a checkpoint process characterized by a state transition diagram having a multiplicity of states; and tracking state transitions among the states using a system state table. | 12-11-2008 |
20080307386 | BUSINESS INFORMATION WAREHOUSE TOOLKIT AND LANGUAGE FOR WAREHOUSING SIMPLIFICATION AND AUTOMATION - A method for use with an information (or data) warehouse comprises managing the information warehouse with instructions in a declarative language. The instructions specify information warehouse-level tasks to be done without specifying certain details of how the tasks are to be implemented, for example, using databases and text indexers. The details are hidden from the user and include, for example, in an information warehouse having a FACT table that joins two or more dimension tables, details of database level operations when structured data are being handled, including database command line utilities, database drivers, and structured query language (SQL) statements; and details of text-indexing engines when unstructured data are being handled. The information warehouse is managed in a dynamic way in which different tasks—such as data loading tasks and information warehouse construction tasks—may be interleaved (i.e., there is no particular order in which the different tasks must be completed). | 12-11-2008 |
20090119275 | METHOD OF MONITORING ELECTRONIC MEDIA - Consumer-generated media (CGM) and/or other media are monitored to allow an organization to become aware of, and respond to, issues that may affect how it is perceived by the public. An extract, transform, load (ETL) engine is used to process CGM and other media content, and an analytical engine utilizes a multi-step progressive filtering approach to identify those documents that are most relevant. The filtering approach includes executing broad queries to extract relevant content from different CGM and other sources, extracting text snippets from the relevant content and performing de-duplication, defining organizational identity (e.g., brand name, trade name, or company name) and hot-topic models using a rule-based and statistical-based approach, and using the models together in an orthogonal filtering approach to effectively generate alerts and reports. The methodology is found to be substantially more effective compared to a conventional keyword based approach. | 05-07-2009 |
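The "orthogonal filtering" idea in the entry above (alert only when an organizational-identity model and a hot-topic model both fire, after de-duplicating snippets) can be sketched as a toy. The brand names, topic rules, and snippets below are invented for illustration and are not from the patent.

```python
import re

# Two independent ("orthogonal") rule models: identity and hot topic.
IDENTITY_RULES = [re.compile(p, re.I) for p in [r"\bAcmeCorp\b", r"\bAcme\s+Widgets\b"]]
TOPIC_RULES = [re.compile(p, re.I) for p in [r"\brecall\b", r"\bdefect(ive)?\b", r"\blawsuit\b"]]

def dedupe(snippets):
    """Drop exact-duplicate snippets, keeping first-occurrence order."""
    seen, out = set(), []
    for s in snippets:
        key = s.strip().lower()
        if key not in seen:
            seen.add(key)
            out.append(s)
    return out

def alerts(snippets):
    """Return snippets that match BOTH the identity and topic models."""
    return [s for s in dedupe(snippets)
            if any(r.search(s) for r in IDENTITY_RULES)
            and any(r.search(s) for r in TOPIC_RULES)]

docs = [
    "AcmeCorp announced a recall of its flagship widget.",
    "AcmeCorp announced a recall of its flagship widget.",  # duplicate
    "AcmeCorp opens a new office downtown.",                # identity only
    "Industry-wide recall rumors continue.",                # topic only
]
print(alerts(docs))  # only the first snippet passes both filters
```

Requiring both models to match is what makes this sharper than a single keyword list: a brand mention alone, or a hot topic alone, generates no alert.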
20090187582 | EFFICIENT UPDATE METHODS FOR LARGE VOLUME DATA UPDATES IN DATA WAREHOUSES - A system and method for ensuring large and frequent updates to a data warehouse. The process leverages a set of temporary staging tables to track the updates. A set of intermediate steps are performed to accomplish bulk deletions of the outdated changed records, and perform modifications to the map tables for models such as snowflake. Finally, bulk load operations load the updates and insert them into the final dimension tables. The process ensures performance comparable to insertion-only schemes with at most only slight performance degradation. Furthermore, a modified process is applied on the newfact data warehouse dimension model. The process can be readily adapted to handle star schema and other hierarchical data warehouse models. | 07-23-2009 |
20090187602 | Efficient Update Methods For Large Volume Data Updates In Data Warehouses - A system and method for ensuring large and frequent updates to a data warehouse. The process leverages a set of temporary staging tables to track the updates. A set of intermediate steps are performed to accomplish bulk deletions of the outdated changed records, and perform modifications to the map tables for models such as snowflake. Finally, bulk load operations load the updates and insert them into the final dimension tables. The process ensures performance comparable to insertion-only schemes with at most only slight performance degradation. Furthermore, a modified process is applied on the newfact data warehouse dimension model. The process can be readily adapted to handle star schema and other hierarchical data warehouse models. | 07-23-2009 |
20090198570 | METHODOLOGIES AND ANALYTICS TOOLS FOR IDENTIFYING POTENTIAL LICENSEE MARKETS - A method is disclosed for use with at least one initial document describing a technical concept suitable for licensing, the method comprising: retrieving a set of intellectual property documents from a data warehouse; partitioning the set of intellectual property documents into a plurality of document categories; classifying the set of intellectual property documents by an industry parameter; constructing a contingency table that includes a listing of industry classifications for each of the document categories, and identifying documents within a particular one of the document categories that have different industry classifications so as to identify at least one potential new licensee industry of the technical concept described in the initial document. | 08-06-2009 |
20090292660 | USING RULE INDUCTION TO IDENTIFY EMERGING TRENDS IN UNSTRUCTURED TEXT STREAMS - A method for identifying emerging concepts in unstructured text streams comprises: selecting a subset V of documents from a set U of documents; generating at least one Boolean combination of terms that partitions the set U into a plurality of categories that represent a generalized, statistically based model of the selected subset V wherein the categories are disjoint inasmuch as each document of U is included in only one category of the partition; and generating a descriptive label for each of the disjoint categories from the Boolean combination of terms for that category. | 11-26-2009 |
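The disjoint-partition property described above (each document of U falls into exactly one category, each labeled by its Boolean combination of terms) can be illustrated by assigning a document to the first rule it satisfies. The rules and documents below are invented for illustration, not the patent's induction procedure.

```python
# Ordered Boolean rules over terms; first match wins, so categories are disjoint.
# Each category's descriptive label is derived directly from its rule.
rules = [
    ("battery AND charger", lambda t: "battery" in t and "charger" in t),
    ("battery AND NOT charger", lambda t: "battery" in t and "charger" not in t),
]

def categorize(doc):
    """Assign doc to the first matching rule; unmatched docs fall into 'OTHER'."""
    terms = set(doc.lower().split())
    for label, pred in rules:
        if pred(terms):
            return label
    return "OTHER"

U = [
    "new battery charger released",
    "battery life complaints rise",
    "stock market update",
]
print([categorize(d) for d in U])
# ['battery AND charger', 'battery AND NOT charger', 'OTHER']
```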
20090292704 | ADAPTIVE AGGREGATION: IMPROVING THE PERFORMANCE OF GROUPING AND DUPLICATE ELIMINATION BY AVOIDING UNNECESSARY DISK ACCESS - A method for use with an aggregation operation (e.g., on a relational database table) includes a sorting pass and a merging pass. The sorting pass includes: (a) reading blocks of the table from a storage medium into a memory using an aggregation method until the memory is substantially full or until all the data have been read into the memory; (b) determining a number k of blocks to write back to the storage medium from the memory; (c) selecting k blocks from memory, sorting the k blocks, and then writing the k blocks back to the storage medium as a new sublist; and (d) repeating steps (a), (b), and (c) for any unprocessed tuples in the database table. The merging pass includes: merging all the sublists to form an aggregation result using a merge-sort algorithm. | 11-26-2009 |
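The two-pass structure above (a sorting pass that emits sorted sublists under a memory cap, then a merging pass that merge-sorts the sublists into the aggregation result) can be simulated in memory. The memory cap, data, and SUM aggregate below are illustrative assumptions, not the patented block-selection policy.

```python
import heapq
from itertools import groupby
from operator import itemgetter

def sort_pass(tuples, mem_cap):
    """Sorting pass: cut the input into runs of at most mem_cap tuples,
    each sorted by grouping key, standing in for sublists written to disk."""
    return [sorted(tuples[i:i + mem_cap], key=itemgetter(0))
            for i in range(0, len(tuples), mem_cap)]

def merge_pass(sublists):
    """Merging pass: merge-sort all sublists and aggregate (SUM) per key."""
    merged = heapq.merge(*sublists, key=itemgetter(0))
    return {k: sum(v for _, v in grp) for k, grp in groupby(merged, key=itemgetter(0))}

rows = [("a", 1), ("b", 2), ("a", 3), ("c", 4), ("b", 5)]
print(merge_pass(sort_pass(rows, mem_cap=2)))  # {'a': 4, 'b': 7, 'c': 4}
```

Because each sublist is already sorted, the merge pass can stream groups out one key at a time without ever holding the whole table in memory.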
20090300038 | Methods and Apparatus for Reuse Optimization of a Data Storage Process Using an Ordered Structure - Techniques for reducing a number of computations in a data storage process are provided. One or more computational elements are identified in the data storage process. An ordered structure of one or more nodes is generated using the one or more computational elements. Each of the one or more nodes represents one or more computational elements. Further, a weight is assigned to each of the one or more nodes. An ordered structure of one or more reusable nodes is generated by deleting one or more nodes in accordance with the assigned weights. The ordered structure of one or more reusable nodes is utilized to reduce the number of computations in the data storage process. The data storage process converts data from a first format into a second format, and stores the data in the second format on a computer readable medium for data analysis purposes. | 12-03-2009 |
20100145940 | SYSTEMS AND METHODS FOR ANALYZING ELECTRONIC TEXT - Systems and methods for systematically analyzing an electronic text are described. In one embodiment, the method includes receiving the electronic text from a plurality of sources. The method also includes determining at least one term of interest to be identified in the electronic text. The method further includes identifying a plurality of locations within the electronic text including the at least one term of interest. The method also includes, for each location within the plurality of locations, creating a snippet from a text segment around the at least one term of interest at that location within the electronic text. The method further includes creating multiple taxonomies for the at least one term of interest from the snippets, wherein the taxonomies include at least one category. The method also includes determining co-occurrences between the multiple taxonomies to determine associations between categories of different taxonomies of the multiple taxonomies. | 06-10-2010 |
20100161576 | DATA FILTERING AND OPTIMIZATION FOR ETL (EXTRACT, TRANSFORM, LOAD) PROCESSES - A method and system are disclosed for use with an ETL (Extract, Transform, Load) process, comprising optimizing a filter expression to select a subset of data and evaluating the filter expression on the data after the extracting, before the loading, but not during the transforming of the ETL process. The method and system optimizes the filtering using a pipelined evaluation for single predicate filtering and an adaptive optimization for multiple predicate filtering. The adaptive optimization includes an initial phase and a dynamic phase. | 06-24-2010 |
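The filter placement described above (evaluate the predicate after extracting and before loading, but not inside the transform itself, with pipelined evaluation) can be sketched with generators. The record shape, stages, and predicate below are invented for illustration.

```python
def extract(source):
    """Extract stage: yield raw records one at a time (pipelined)."""
    yield from source

def transform(records):
    """Transform stage: normalize field names and values."""
    for r in records:
        yield {"name": r["name"].strip(), "qty": int(r["qty"])}

def load(records, sink):
    """Load stage: append surviving records to the target."""
    sink.extend(records)

def run_etl(source, sink, predicate):
    # The filter sits between the pipeline's stages, so rejected records
    # never reach the (expensive) load stage.
    load(filter(predicate, transform(extract(source))), sink)

raw = [{"name": " widget ", "qty": "3"}, {"name": "gadget", "qty": "0"}]
out = []
run_etl(raw, out, predicate=lambda r: r["qty"] > 0)
print(out)  # [{'name': 'widget', 'qty': 3}]
```

Since every stage is a generator, records flow through one at a time, which is the pipelined single-predicate case; the patent's adaptive multi-predicate ordering is not modeled here.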
20100280991 | METHOD AND SYSTEM FOR VERSIONING DATA WAREHOUSES - A method, system, and computer program product are disclosed. Exemplary embodiments of the method, system, and computer program product may include hardware, process steps, and computer program instructions for supporting versioning in a data warehouse. The data warehouse may include a data warehouse engine for creating a data warehouse including a fact table and temporary tables. Updated or new data records may be transferred into the data warehouse and bulk loaded into the temporary tables. The updated or new data records may be evaluated for attributes matching existing data records. A version number may be assigned to data records and data records may be marked as being the most current version. Updated and new data records may be bulk loaded from the temporary tables into the fact table when a version number or a version status is calculated. | 11-04-2010 |
20110113005 | SUPPORTING SET-LEVEL SLICE AND DICE IN DATA WAREHOUSES - A method and system for coping with slice and dice operations in data warehouses is disclosed. An external approach may be utilized, creating queries using structured query language on a computer. An algorithm may be used to rewrite the queries. The resulting predicates may be joined to dimension tables corresponding to fact tables. An internal approach may be utilized, using aggregation functions with early aggregation for creating the queries. The results of the slice and dice operations may be outputted to a user on a computer monitor. | 05-12-2011 |
20110213756 | CONCURRENCY CONTROL FOR EXTRACTION, TRANSFORM, LOAD PROCESSES - System and methods manage concurrent ETL processes accessing a database. Exemplary embodiments include a method for concurrency management for ETL processes in a database having database tables and communicatively coupled to a computer, the method including establishing a session lock for the database, determining that a current ETL process is accessing the database at a current time, associating a current expiration time with the session lock, the expiration time being stored in a lock table in the database, sending the session lock to the current ETL process and performing ETL-level locking for the current ETL process. | 09-01-2011 |
20110219038 | SIMPLIFIED ENTITY RELATIONSHIP MODEL TO ACCESS STRUCTURE DATA - A method, system and program product for modeling data as an undirected graph is disclosed. A set of entities and a set of attributes are defined. A set of relationships is defined to represent semantic associations with each association connecting at least two entities. Attributes are associated with entities rather than with relationships. A hierarchical query language with a set of atomic operations on modeled data is employed. The modeled data is displayed on a display unit. | 09-08-2011 |
20110276553 | CLASSIFYING DOCUMENTS ACCORDING TO READERSHIP - One embodiment is a computer-implemented method for classifying documents in a collection of documents according to their intended readerships. The method comprises using a computer to select a document in the collection of documents; and using a computer to determine a characteristic of the selected document, the characteristic being: misleading when the document includes one or more features that are determined to be for a purpose other than reading the document; commercial when the document includes features that are presented for a commercial purpose; or personal when the document includes features of a personal opinion. The method further includes using a computer to classify the selected document as misleading, commercial, or personal according to its determined characteristic; and using a computer to repeat the steps of select document, determine a characteristic of the selected document, and classify the selected document for additional documents in the collection. At least some documents are classified as misleading, at least some documents are classified as commercial, and at least some documents are classified as personal. Other methods and computer program products are also disclosed according to even more embodiments. | 11-10-2011 |
20120226695 | CLASSIFYING DOCUMENTS ACCORDING TO READERSHIP - A system for classifying documents in a collection of documents according to their intended readerships includes: a computer configured to select a document in the collection of documents; and a computer to determine a characteristic of the selected document, the characteristic being: misleading when the document includes one or more features that are determined to be for a purpose other than reading the document; commercial when the document includes features that are presented for a commercial purpose; or personal when the document includes features of a personal opinion. A computer classifies the selected document as misleading, commercial, or personal according to its determined characteristic; and a computer repeats the steps of selecting a document, determining a characteristic of the selected document, and classifying the selected document for additional documents in the collection. At least some documents are classified as misleading, some as commercial, and at least some as personal. | 09-06-2012 |

20130006969 | SUPPORTING SET-LEVEL SLICE AND DICE IN DATA WAREHOUSES - A method and system for coping with slice and dice operations in data warehouses is disclosed. An external approach may be utilized, creating queries using structured query language on a computer. An algorithm may be used to rewrite the queries. The resulting predicates may be joined to dimension tables corresponding to fact tables. An internal approach may be utilized, using aggregation functions with early aggregation for creating the queries. The results of the slice and dice operations may be outputted to a user on a computer monitor. | 01-03-2013 |
20130054226 | RECOGNIZING CHEMICAL NAMES IN A CHINESE DOCUMENT - A method and system for recognizing chemical names in a Chinese document. The method includes: receiving a Chinese document including chemical names; recognizing chemical name segments in the document; recognizing non-chemical name segments in the document; and combining the chemical name segments to get chemical names based on the recognized chemical name segments and non-chemical name segments. Specific embodiments of the present invention can effectively recognize chemical names from a chemical document. | 02-28-2013 |
20130104132 | COMPOSING ANALYTIC SOLUTIONS - An approach for composing an analytic solution is provided. After associating descriptive schemas with web services and web-based applets, a set of input data sources is enumerated for selection. A desired output type is received. Based on the descriptive schemas that specify required inputs and outputs of the web services and web-based applets, combinations of web services and web-based applets are generated. The generated combinations achieve a result of the desired output type from one of the enumerated input data sources. Each combination is derived from available web services and web-based applets. The combinations include one or more workflows that provide an analytic solution. A workflow whose result satisfies the business objective may be saved. Steps in a workflow may be iteratively refined to generate a workflow whose result satisfies the business objective. | 04-25-2013 |
20130104134 | COMPOSING ANALYTIC SOLUTIONS - An approach for composing an analytic solution is provided. After associating descriptive schemas with web services and web-based applets, a set of input data sources is enumerated for selection. A desired output type is received. Based on the descriptive schemas that specify required inputs and outputs of the web services and web-based applets, combinations of web services and web-based applets are generated. The generated combinations achieve a result of the desired output type from one of the enumerated input data sources. Each combination is derived from available web services and web-based applets. The combinations include one or more workflows that provide an analytic solution. A workflow whose result satisfies the business objective may be saved. Steps in a workflow may be iteratively refined to generate a workflow whose result satisfies the business objective. | 04-25-2013 |
20130290861 | PERMITTING PARTICIPANT CONFIGURABLE VIEW SELECTION WITHIN A SCREEN SHARING SESSION - A screen sharing session between a participating computer and a presenting computer can be established. A copy of a graphical user interface screen from the presenting computer can be presented within a display on the participating computer. A selection of the sub-portion of the copy of the graphical user interface screen from the participating computer can be received. Boundaries of the sub-portion can be determined and can be transmitted from the participating computer to the presenting computer. Responsive to receiving the boundaries, the presenting computer can define the sub-portion of the graphical user interface screen of the presenting computer. The defined sub-portion of the graphical user interface screen can be conveyed over a network from the presenting computer to the participating computer without conveying data for other portions of the graphical user interface screen. | 10-31-2013 |
20140032603 | SIMPLIFIED ENTITY RELATIONSHIP MODEL TO ACCESS STRUCTURE DATA - A system and program product for modeling data as an undirected graph is disclosed. A set of entities and a set of attributes are defined. A set of relationships is defined to represent semantic associations with each association connecting at least two entities. Attributes are associated with entities rather than with relationships. A hierarchical query language with a set of atomic operations on modeled data is employed. The modeled data is displayed on a display unit. | 01-30-2014 |
20140163958 | APPROXIMATE NAMED-ENTITY EXTRACTION - According to one embodiment, approximate named-entity extraction from a dictionary that includes entries is provided, where each of the entries includes one or more words. Words are read from the entries of the dictionary, and network resources are searched to determine a frequency of occurrence of the words on the network resources. In view of the frequency of occurrence of the words located on the network resources, domain relevancy of the words in the entries of the dictionary is determined. A domain repository is created using top-ranked words as determined by the domain relevancy of the words. In view of the domain repository, signatures for both the entries of the dictionary and strings of an input document are computed. The strings of the input document are filtered by comparing the signatures of the strings against the signatures of the entries to identify approximate-match entity names. | 06-12-2014 |
20140163964 | APPROXIMATE NAMED-ENTITY EXTRACTION - According to one embodiment, a method is provided for approximate named-entity extraction from a dictionary that includes entries, where each of the entries includes one or more words. Words are read from the entries of the dictionary, and network resources are searched to determine a frequency of occurrence of the words on the network resources. In view of the frequency of occurrence of the words located on the network resources, domain relevancy of the words in the entries of the dictionary is determined. A domain repository is created using top-ranked words as determined by the domain relevancy of the words. In view of the domain repository, signatures for both the entries of the dictionary and strings of an input document are computed. The strings of the input document are filtered by comparing the signatures of the strings against the signatures of the entries to identify approximate-match entity names. | 06-12-2014 |
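The signature-filtering step in the two entries above can be sketched very simply: take a "signature" to be the set of domain-repository words a string contains, and keep only input strings whose signature overlaps some dictionary entry's signature. The repository, dictionary, and strings below are invented for illustration; real signatures in the patents are computed from ranked domain words, which this toy reduces to a set intersection.

```python
# Pretend these are the top-ranked domain-relevant words (the "domain repository").
DOMAIN_REPOSITORY = {"sodium", "chloride", "acid", "oxide"}

def signature(text):
    """Signature of a string: the domain-repository words it contains."""
    return {w for w in text.lower().split() if w in DOMAIN_REPOSITORY}

dictionary = ["sodium chloride", "sulfuric acid"]
entry_sigs = [signature(e) for e in dictionary]

def candidate_matches(strings):
    """Filtering step: keep strings whose signature overlaps some entry's
    signature; only these survive as approximate-match candidates."""
    return [s for s in strings
            if any(signature(s) & sig for sig in entry_sigs)]

strings = ["dilute sulfuric acid spill", "quarterly sales report"]
print(candidate_matches(strings))  # ['dilute sulfuric acid spill']
```

The point of the filter is cheap pruning: strings with no domain-word overlap are discarded before any expensive approximate string comparison runs.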
20140195471 | TECHNOLOGY PREDICTION - Embodiments of the invention relate to technology prediction. A technical dictionary of technical terms is constructed based on a collection of documents. The technical terms are partitioned into equivalence classes. A table is generated that correlates technical terms across equivalence classes based on temporal co-occurrence of the technical terms across the equivalence classes. For a given technical term the table is accessed to determine a first set of technical terms that correlate to the given technical term. The table is accessed again to determine a second set of technical terms that correlate to the first set of technical terms. It is predicted that the second set of technical terms will correlate to the given technical term in the future. | 07-10-2014 |
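The correlation-table lookup described in the last entry can be sketched as follows: count document-level co-occurrences of terms, take the terms correlated with a query term (the first set), then the terms correlated with those (the second set), and treat the second set as the prediction. The terms and documents are invented for illustration; equivalence classes and temporal windows from the patent are omitted.

```python
from collections import Counter
from itertools import combinations

# Each document is a set of technical terms it mentions.
docs = [
    {"gpu", "deep-learning"},
    {"gpu", "deep-learning", "nlp"},
    {"deep-learning", "nlp"},
    {"nlp", "speech"},
]

# Co-occurrence table: how often each unordered pair shares a document.
cooc = Counter()
for d in docs:
    for a, b in combinations(sorted(d), 2):
        cooc[(a, b)] += 1

def correlates(term):
    """Terms co-occurring with `term`, most frequent first."""
    scores = Counter()
    for (a, b), n in cooc.items():
        if a == term:
            scores[b] += n
        elif b == term:
            scores[a] += n
    return [t for t, _ in scores.most_common()]

first = set(correlates("gpu"))                                  # first set
second = {t for f in first for t in correlates(f)} - first - {"gpu"}  # second set
print(sorted(second))  # terms predicted to correlate with "gpu" in the future
```

Here "speech" never co-occurs with "gpu" directly but reaches it through "nlp", which is exactly the two-hop prediction the abstract describes.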