Patent application number | Description | Published |
20080208893 | SYSTEM AND METHOD FOR ADAPTIVE CONTENT PROCESSING AND CLASSIFICATION IN A HIGH-AVAILABILITY ENVIRONMENT - The embodiments of the invention provide systems, methods, etc. for adaptive content processing and classification in a high-availability environment. More specifically, a system is provided having a plurality of processing engines and at least one server that classifies data objects on the computer system. The classification includes analyzing the data objects for the presence of a type of content. This can include assigning a score corresponding to the amount of the type of content in each of the data objects. Moreover, the server can remove a data object from the computer system based on the results of the analyzing. The results of the analyzing are stored and the computer system is updated with feedback information. This can include allowing a user to review the results of the analyzing and aggregating reviews of the user into the feedback information. | 08-28-2008 |
20100070495 | FAST-APPROXIMATE TFIDF - Our approach seeks to reduce the complexity of the TF-IDF calculation through approximation and pre-computation. It is designed to work efficiently with modern relational database constructs for content management. The approach is designed to enable the kinds of highly interactive data-driven visualizations that are the hallmark of third-generation business intelligence. | 03-18-2010 |
20100223226 | SYSTEM FOR MONITORING GLOBAL ONLINE OPINIONS VIA SEMANTIC EXTRACTION - A system for transforming domain-specific unstructured data into structured data including an intake platform controlled by feedback from a control platform. The intake platform includes an intake acquisition module for acquiring data and building baseline data related to a domain and problem of interest, an intake pre-processing module, an intake language module, an intake application descriptors module, and an intake adjudication module. The control platform includes a control data acquisition module, a control data consistency collator, a control auditor, a control event definition and policy repository, an error resolver, and an output that outputs results of the workflow into structured data enabled to be used in data analytics. | 09-02-2010 |
20100223292 | HOLISTIC DISAMBIGUATION FOR ENTITY NAME SPOTTING - A method resolves ambiguous spotted entity names in a data corpus by determining an activation level value for each of a plurality of nodes corresponding to a single ambiguous entity name. The activation levels for each of the nodes may be modified by inputting outside domain knowledge corresponding to the nodes to increase the activation value of the nodes, spotting entity names corresponding to the nodes to increase the activation value of the nodes, searching the data corpus to spot newly posted entity names to increase the activation value of the nodes, and searching the data corpus to reduce or deactivate the activation value of the nodes by eliminating false positives. The ambiguous entity name is assigned to the node determined to have the highest activation level and is then outputted to a user. | 09-02-2010 |
20110113016 | Method and Apparatus for Data Compression - A method, system, and article for compressing an input stream of uncompressed data. The input stream is divided into one or more data segments. A hash is applied to a first data segment, and an offset and length are associated with this first segment. This hash, together with the offset and length data for the first segment, is stored in a hash table. Thereafter, a subsequent segment within the input stream is evaluated and compared with all other hash entries in the hash table, and a reference is written to a prior hash for an identified duplicate segment. The reference includes a new offset location for the subsequent segment. Similarly, a new hash is applied to an identified non-duplicate segment, with the new hash and its corresponding offset stored in the hash table. A compressed output stream of data is created from the hash table retained on storage media. | 05-12-2011 |
20110113049 | Anonymization of Unstructured Data - A method for anonymization of unstructured data comprises determining structured references in the unstructured data; populating a table with the structured references; anonymizing the structured references in the table using ontological analysis; and rewriting the structured references in the unstructured data with the anonymized structured references from the table to produce anonymized data. A system for anonymizing unstructured data comprises an entity spotting module configured to determine structured references in the unstructured data and populate a table with the determined structured references; an anonymization module configured to anonymize the structured references in the table using ontological analysis; and a replacement module configured to rewrite the structured references in the unstructured data with the anonymized structured references from the table to produce anonymized data. | 05-12-2011 |
20110185149 | DATA DEDUPLICATION FOR STREAMING SEQUENTIAL DATA STORAGE APPLICATIONS - Data deduplication compression in a streaming storage application, is provided. The disclosed deduplication process provides a deduplication archive that enables storage of the archive to, and extraction from, a streaming storage medium. One implementation involves compressing fully sequential data stored in a data repository to a sequential streaming storage, by: splitting fully sequential data into data blocks; hashing content of each data block and comparing each hash to an in-memory lookup table for a match, the in-memory lookup table storing all hashes that have been encountered during the compression of the fully sequential data; for each data block without a hash match, adding the data block as a new data block for compression of fully sequential data; and encoding duplicate data blocks using the in-memory lookup table into data segments. | 07-28-2011 |
20120197848 | VALIDATION OF INGESTED DATA - Methods and systems for validating ingested data are disclosed. In accordance with the methods and systems, data elements can be received for storage in slots of an individual descriptor in a storage medium. In addition, at least one validation test can be selected based on a weighting of the data elements that indicates a respective degree of importance of the data elements. The selected validation test or tests can be applied to the data elements stored in the slots to generate respective validation results. Further, a validation score indicating a sufficiency of the stored data elements can be generated based on the validation results. | 08-02-2012 |
20120197902 | DATA INGEST OPTIMIZATION - Methods and systems for optimizing the retrieval of data from multiple sources are described. A slot map including slots for the storage of data elements can be obtained. The data elements associated with the slots can be prioritized by weighting values with costs of retrieving the data elements from respective data sources. Each value can be associated with a different data element and can indicate a respective degree of importance of the associated data element. Further, the systems and methods can direct the retrieval of data elements from the respective data sources in an order in accordance with the priority of the data elements to optimize the quality of data obtainable within a critical time constraint. In addition, the retrieved data elements can be stored in corresponding slots on a storage medium. | 08-02-2012 |
20120330901 | VALIDATION OF INGESTED DATA - Methods and systems for validating ingested data are disclosed. In accordance with the methods and systems, data elements can be received for storage in slots of an individual descriptor in a storage medium. In addition, at least one validation test can be selected based on a weighting of the data elements that indicates a respective degree of importance of the data elements. The selected validation test or tests can be applied to the data elements stored in the slots to generate respective validation results. Further, a validation score indicating a sufficiency of the stored data elements can be generated based on the validation results. | 12-27-2012 |
20120330972 | DATA INGEST OPTIMIZATION - Methods and systems for optimizing the retrieval of data from multiple sources are described. A slot map including slots for the storage of data elements can be obtained. The data elements associated with the slots can be prioritized by weighting values with costs of retrieving the data elements from respective data sources. Each value can be associated with a different data element and can indicate a respective degree of importance of the associated data element. Further, the systems and methods can direct the retrieval of data elements from the respective data sources in an order in accordance with the priority of the data elements to optimize the quality of data obtainable within a critical time constraint. In addition, the retrieved data elements can be stored in corresponding slots on a storage medium. | 12-27-2012 |
20140359625 | DETECTION AND CORRECTION OF RACE CONDITIONS IN WORKFLOWS - A race condition in a workflow representation is detected and corrected. First and second contracts are retrieved for respective first and second analytics of the workflow representation, wherein the contracts specify input types and output types of their analytics. Both contracts include information required to execute their respective analytics by a workflow executor. It is determined that the output type of the first analytic matches the input type of the second analytic based on a comparison of the first contract and the second contract, and that the workflow representation does not include a directed edge connecting the first analytic to the second analytic. The inclusion of a directed edge in the workflow representation connecting the first analytic to the second analytic will correct the race condition in the workflow representation. | 12-04-2014 |
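Application 20100070495 describes reducing TF-IDF cost through pre-computation. A minimal sketch of that general idea, not the patented method itself: the IDF table is computed once over the corpus, so scoring any single document later requires only its own term counts. The function names and the in-memory dictionary representation are illustrative assumptions; the patent targets relational database constructs.

```python
import math
from collections import Counter

def build_idf(corpus):
    # Precompute IDF once so per-document scoring is cheap afterwards.
    n_docs = len(corpus)
    df = Counter()
    for doc in corpus:
        df.update(set(doc.split()))          # document frequency per term
    return {term: math.log(n_docs / count) for term, count in df.items()}

def tfidf(doc, idf):
    # Score a single document against the precomputed IDF table.
    tf = Counter(doc.split())
    total = sum(tf.values())
    return {term: (count / total) * idf.get(term, 0.0)
            for term, count in tf.items()}

corpus = ["the cat sat", "the dog ran", "the cat ran"]
idf = build_idf(corpus)
scores = tfidf("the cat sat", idf)
```

Terms appearing in every document ("the") score zero, while corpus-rare terms ("sat") dominate, which is the standard TF-IDF behavior the pre-computation preserves.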
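Application 20100223292 resolves ambiguous entity names by accumulating activation on candidate nodes, raising activation for supporting evidence and lowering it for eliminated false positives, then assigning the name to the highest-activation node. A toy sketch under simplifying assumptions (flat additive activation, evidence supplied as signed deltas); the patent's domain-knowledge and corpus-search steps are collapsed into the evidence list here:

```python
def disambiguate(candidates, evidence):
    # Accumulate activation per candidate node; positive deltas come from
    # spots and domain knowledge, negative deltas from false-positive checks.
    activation = {node: 0.0 for node in candidates}
    for node, delta in evidence:
        activation[node] += delta
    # Assign the ambiguous name to the node with the highest activation.
    return max(activation, key=activation.get)

winner = disambiguate(
    ["Paris, France", "Paris, Texas"],
    [("Paris, France", 2.0), ("Paris, Texas", 0.5), ("Paris, Texas", -1.0)],
)
```

Here two supporting spots for one reading outweigh a weakly supported reading that also fails a false-positive check.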
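Applications 20110113016 and 20110185149 both hash data segments and emit a back-reference when a segment's hash matches a prior entry, storing only new segments literally. A minimal sketch of that hash-table deduplication pattern, assuming fixed-size blocks and an in-memory table; block size, the tuple output format, and SHA-256 as the hash are illustrative choices, not details taken from the claims:

```python
import hashlib

def deduplicate(stream, block_size=4):
    # Map each block's hash to the offset of its first occurrence;
    # duplicates become (ref, offset, length), new blocks stay literal.
    seen = {}
    output = []
    for offset in range(0, len(stream), block_size):
        block = stream[offset:offset + block_size]
        h = hashlib.sha256(block).hexdigest()
        if h in seen:
            output.append(("ref", seen[h], len(block)))
        else:
            seen[h] = offset
            output.append(("lit", block))
    return output

compressed = deduplicate(b"ABCDABCDXYZ!ABCD")
```

The repeated "ABCD" blocks collapse to references back to offset 0, which is the space saving both abstracts describe; a real implementation would also handle the streaming-extraction and archive-layout concerns the patents address.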