Patent application number | Description | Published |
20080201604 | Kernel Error Recovery Disablement and Shared Recovery Routine Footprint Areas - A method, computer program product, and data processing system for providing optional failure recovery features in operating system kernel code are disclosed. In accordance with a preferred embodiment, a segment of mainline code may designate a recovery routine for that segment by calling a kernel service provided for that purpose. The kernel service allocates a “footprint” region on the recovery stack for storing state information arising from the execution of the recovery-enabled code. In the event of an exception, a recovery manager routine uses information from the recovery stack to recover from the exception. Recovery may be disabled altogether for performance purposes by way of boot-time patching to disable the use of the recovery stack and to allow state information to be written to a static “scratchpad” area, which, unlike the recovery stack, is allowed to be overwritten, its contents being ignored. | 08-21-2008 |
20080201606 | Recovery Routine Masking and Barriers to Support Phased Recovery Development - A method, computer program product, and data processing system for providing optional exception recovery features in operating system kernel code are disclosed. In a preferred embodiment, a segment of mainline code may designate a recovery routine for that segment by calling a kernel service provided for that purpose. The kernel service pushes the address of the designated recovery routine, context, and re-entry point information corresponding to the segment to a recovery stack. An additional “footprint” region is also allocated on the recovery stack and used to store other state information needed for recovery. A mask value or barrier count value is also stored on the recovery stack to allow recovery to be disabled for non-recoverable routines. | 08-21-2008 |
20120084778 | MANAGING EXECUTION OF MIXED WORKLOADS IN A SIMULTANEOUS MULTI-THREADED (SMT) ENABLED SYSTEM - A kernel of a SMT enabled processor system facilitates construction of an exclusive set of processors to simulate an ST mode for handling the tasks of the ST workload, wherein the ST workload runs more efficiently on single threaded processors. The kernel schedules the ST workload on the exclusive set of processors by selecting one hardware thread per processor within said exclusive set of processors to handle a separate one of the tasks of the ST workload, while requiring the remaining hardware threads per processor within the exclusive set to idle. As a result, the ST workload is executed on the SMT enabled processor system as if the exclusive set of processors run in ST mode, but without actually deactivating the remaining idle hardware threads per processor within the exclusive set of processors. | 04-05-2012 |
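The recovery-stack mechanism described in the two kernel-recovery abstracts above can be sketched roughly as follows. This is an illustrative model only: all names (`RecoveryStack`, `set_recovery`, `handle_exception`, and so on) are hypothetical and not taken from the filings, and real kernel code would keep these structures per-thread in preallocated, fixed-size memory.

```python
# Illustrative sketch of the recovery-stack idea: mainline code designates a
# recovery routine via a kernel service, which pushes a frame containing the
# routine, a re-entry point, a "footprint" area for state, and an optional
# barrier flag that masks recovery for non-recoverable code.

class RecoveryFrame:
    def __init__(self, routine, reentry_point, footprint_size, barrier=False):
        self.routine = routine                    # recovery routine for this segment
        self.reentry_point = reentry_point        # where mainline execution resumes
        self.footprint = [None] * footprint_size  # state saved by mainline code
        self.barrier = barrier                    # True marks a non-recoverable boundary

class RecoveryStack:
    def __init__(self):
        self.frames = []

    def set_recovery(self, routine, reentry_point, footprint_size=4, barrier=False):
        """Kernel service: designate a recovery routine for the calling segment."""
        frame = RecoveryFrame(routine, reentry_point, footprint_size, barrier)
        self.frames.append(frame)
        return frame

    def clear_recovery(self):
        """Called when the recovery-enabled segment completes normally."""
        self.frames.pop()

    def handle_exception(self, exc):
        """Recovery manager: unwind to the nearest enabled recovery routine."""
        while self.frames:
            frame = self.frames.pop()
            if frame.barrier:              # recovery masked below this point
                break
            return frame.routine(exc, frame.footprint), frame.reentry_point
        raise exc                          # no recovery possible

# Hypothetical usage: a segment registers recovery, records state in its
# footprint, and the manager uses that state when an exception occurs.
stack = RecoveryStack()
frame = stack.set_recovery(lambda exc, fp: "recovered:" + fp[0],
                           reentry_point="retry_io")
frame.footprint[0] = "io-in-progress"
result, reentry = stack.handle_exception(RuntimeError("page fault"))
```

The barrier flag corresponds to the mask/barrier-count idea in the second abstract: it lets the recovery manager stop unwinding at a boundary below which recovery must not run.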
20080288807 | SYSTEM, METHOD, AND COMPUTER PROGRAM FOR PRESENTING AND UTILIZING FOOTPRINT DATA AS A DIAGNOSTIC TOOL - A data processing system for storing and identifying footprint data, enabling automated collection, identification, and formatted recovery of footprint data produced by a mainline routine. A footprint area is allocated on a failure recovery routine stack for use by the mainline routine for storing footprint data. The mainline routine stores footprint data within the footprint area. The data processing system can then receive a request from a diagnostic tool, where the request includes at least one search parameter, and output to the diagnostic tool any footprint data corresponding to the search parameters in the request. | 11-20-2008 |
20080295081 | FRAMEWORK FOR CONDITIONALLY EXECUTING CODE IN AN APPLICATION USING CONDITIONS IN THE FRAMEWORK AND IN THE APPLICATION - A computer implemented method, apparatus, and computer usable program code for returning a return code to an error hook in an application using a framework. An identifier and a pass-through are received from the error hook. The error hook is software code in the application. The pass-through is a set of parameters. If the identifier has an active status, a set of framework conditions is retrieved using the identifier. If the set of framework conditions is met, an inject callback is retrieved using the error identifier. The inject callback is called with the error identifier and the pass-through. An inject callback return code is received. If the inject callback return code is an execute return code, the execute return code is returned to the error hook. | 11-27-2008 |
20120137082 | GLOBAL AND LOCAL COUNTS FOR EFFICIENT MEMORY PAGE PINNING IN A MULTIPROCESSOR SYSTEM - Embodiments of the disclosure relate to the management of memory pages available for pin operations by groups of processors in a multiprocessor system to reduce cache contention and improve system performance. An exemplary embodiment comprises a system that may include interconnected processors, a global count of the number of pages available for pinning, and a plurality of local counts of pages available for pinning by groups of processors. Each local count may be in proximity to a processor group and include a subset of the pages allocated from the global count that are available for pinning by processors in the group. The local counts are adjusted accordingly in response to page pinning and unpinning by processors in the respective processor groups. | 05-31-2012 |
20120216078 | FRAMEWORK FOR CONDITIONALLY EXECUTING CODE IN AN APPLICATION USING CONDITIONS IN THE FRAMEWORK AND IN THE APPLICATION - A computer implemented method, apparatus, and computer usable program code for returning a return code to an error hook in an application using a framework. An identifier and a pass-through are received from the error hook. The error hook is software code in the application. The pass-through is a set of parameters. If the identifier has an active status, a set of framework conditions is retrieved using the identifier. If the set of framework conditions is met, an inject callback is retrieved using the error identifier. The inject callback is called with the error identifier and the pass-through. An inject callback return code is received. If the inject callback return code is an execute return code, the execute return code is returned to the error hook. | 08-23-2012 |
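The global/local pin-count scheme in application 20120137082 above lends itself to a short sketch: a global count of pinnable pages is partitioned into per-processor-group local counts, so that pin and unpin operations mostly touch group-local state instead of contending on one shared counter. Names, the refill batch size, and the lock layout below are illustrative assumptions, not details from the filing.

```python
import threading

class PinCounter:
    """Sketch of global/local counts for page pinning. Each processor group
    holds a local count of pages available for pinning, refilled in batches
    from the global count to reduce cache-line contention on the global state."""

    def __init__(self, total_pages, num_groups, batch=64):
        self.global_count = total_pages       # pages not yet assigned to any group
        self.global_lock = threading.Lock()
        self.batch = batch
        self.local = [{"avail": 0, "lock": threading.Lock()}
                      for _ in range(num_groups)]

    def pin(self, group):
        """Pin one page on behalf of a processor in `group`."""
        g = self.local[group]
        with g["lock"]:
            if g["avail"] == 0:               # local count exhausted: refill
                with self.global_lock:        # only now touch global state
                    take = min(self.batch, self.global_count)
                    self.global_count -= take
                g["avail"] += take
            if g["avail"] == 0:
                return False                  # no pages available anywhere
            g["avail"] -= 1
            return True

    def unpin(self, group):
        """Return a pinned page to the local count of `group`."""
        g = self.local[group]
        with g["lock"]:
            g["avail"] += 1
```

The design point is that the common path (local count nonzero) takes only the group-local lock; the global lock is touched once per batch of pins rather than once per pin.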
20130151363 | RECOGNIZING MISSING OFFERINGS IN A MARKETPLACE - A data fulfillment system is described herein that identifies data needs of data marketplace consumers and actively seeks out and attempts to fulfill those needs by adding new data and data providers to the marketplace. After a user enters a search, the system captures the search term(s). If no matching data is found, the data fulfillment system presents to the consumer a screen to suggest a new data offering and to provide a description of the data for which the consumer was looking. The system then mines these consumer wants and programmatically seeks partnerships with parties who have this data or operate in this space. Thus, the data fulfillment system provides implicit and explicit ways for consumers to provide information describing data offerings that they want and for potential providers to learn about opportunities to fill current data gaps. | 06-13-2013 |
20130282748 | Self-Service Composed Web APIs - Individual datasets are accessed using an application programming interface (API). Multiple APIs may be combined into a composite API that allows a user to access multiple datasets using a single query. The composite API may be designed to provide a simpler way to consume information from multiple datasets in response to a particular scenario or problem. The composite API may comprise multiple levels of intermediate APIs that call on each other to access desired datasets. A user may select the datasets that the composite API accesses and/or the composite API may require certain specific datasets. The composite API may be offered for sale or use by other users via a website, such as a data market. | 10-24-2013 |
20130339382 | EXTENSIBLE DATA QUERY SCENARIO DEFINITION AND CONSUMPTION - Content providers define a set of scenarios that are addressed by their datasets. The scenarios include user-friendly, human-readable attributes such as a title, description, and visualization. The scenarios may also include a technical description that can be used to generate sample queries that can then be executed against the dataset. The technical description may be machine translated to arbitrary data querying protocols while maintaining the semantic meaning of the query. A user interface may be provided to allow users to intuitively generate the scenarios. In one embodiment, an extensible framework provides for the creation of protocol-specific translation plug-ins that are used to generate implementations of the scenario suitable for selected protocols. Known market-relevant translator plug-ins may also be implemented. | 12-19-2013 |
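The composite-API idea in application 20130282748 above can be sketched as per-dataset query functions wrapped so that one call fans out to all of them and merges the results. The dataset names and the query interface below are hypothetical, chosen only to make the composition concrete.

```python
# Sketch of a "composite API": several per-dataset APIs are combined so
# the caller can access multiple datasets with a single query. Intermediate
# composites can themselves be composed, mirroring the multi-level
# intermediate APIs the abstract describes.

def make_dataset_api(name, data):
    """Each dataset exposes its own simple key-based query API."""
    def query(key):
        return {name: data.get(key)}
    return query

def compose(*apis):
    """A composite API merges the results of its underlying APIs."""
    def query(key):
        result = {}
        for api in apis:
            result.update(api(key))
        return result
    return query

# Hypothetical datasets a user might select for a city-report scenario.
weather = make_dataset_api("weather", {"seattle": "rain"})
traffic = make_dataset_api("traffic", {"seattle": "heavy"})
city_report = compose(weather, traffic)   # one query, both datasets
```

Because `compose` returns a function with the same shape as a dataset API, a composite can be fed back into `compose`, which is how multiple levels of intermediate APIs can call on each other.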
20110225074 | SYSTEM AND METHOD FOR PROVIDING INFORMATION AS A SERVICE VIA WEB SERVICES - Aspects are disclosed for providing information as a service via web services. Access to at least one application programming interface (API) database is facilitated and requests for a requested API are parsed. Here, such API requests facilitate a processing of data provided by at least one content provider. In an aspect, each request includes a key associated with a developer of the requested API and a unique identifier associated with a user of the requested API. A usage of the requested API is then tracked based on the key and/or unique identifier. | 09-15-2011 |
20110225143 | QUERY MODEL OVER INFORMATION AS A NETWORKED SERVICE - Data is published by publishers to an information service configured to receive data sets and allow consumers to consume the data sets via queries. Structural information of the data sets (e.g., column information) is presented to the publishers to select which information of the data sets can be a search parameter and which information can be returned in query results. Query interfaces are automatically created based on the selections by the publisher, and the back end databases are optimized for such query interfaces, e.g., creation of indexes based on the search parameters or query results selected by the publisher. A query aggregator can automatically combine a given query interface with other query interfaces to form more complicated (but still permitted) queries based on the intersection of permissions for the given query interface and the other query interfaces. | 09-15-2011 |
20110225658 | END USER LICENSE AGREEMENT ON DEMAND - Systems and methods for providing end user license agreements on demand for information as a service are provided. In some embodiments, a computer-implemented system can include: at least one processor; and at least one publication module configured to publish content to a consumer. The computer-implemented system can also include at least one condition generation module configured to generate a representation of one or more conditions associated with use by the consumer of published content from the at least one publication module. The conditions can be canonicalized conditions representing standard terms to be included in the representation. In some embodiments, the representation is a license agreement for the consumer. The computer-implemented system can also include a computer-readable storage medium storing computer-executable instructions that, when executed, cause the at least one processor to perform one or more functions of the at least one publication module or the at least one condition generation module. | 09-15-2011 |
20120096093 | AVAILABILITY MANAGEMENT FOR REFERENCE DATA SERVICES - Various aspects for scaling an availability of information are disclosed. In one aspect, a response performance associated with responding to data consumption requests is monitored. A characterization of the response performance is ascertained, and a scaling of resources is facilitated based on the characterization. In another aspect, a data consumption status indicative of data consumed is ascertained. Here, a scalability interface is provided, which displays aspects of the status, and receives an input from a content provider. An allocation of resources is then modified in response to the input. In yet another aspect, a response performance associated with responding to data consumption requests is monitored. An application programming interface (API) call is generated based on a characterization of the response performance, and transmitted to a content provider. An API response is then received from the content provider indicating whether a scaling of resources for hosting the data was performed. | 04-19-2012 |
20130091138 | Contextualization, mapping, and other categorization for data semantics - Semantic categorization of data includes submitting obtained data values to a data enhancement service which has a semantic criterion for incoming data. A response from the service indicates whether the submitted data values meet the criterion, and is used to assign a likelihood that the values belong to a semantic category matching the criterion. Other semantic categorization operations do not necessarily use a data enhancement service. Some are based on which device was used to collect the data values, on a subject heading in which data was published, and/or on syntactic patterns. A semantic taxonomy shows semantic categorizations for one or more datasets and connections between datasets, possibly filtered per user request. Different versions of the taxonomy are stored for respective different users. Similarity between the data values can be assessed using semantic categorization. Taxonomies can be federated to allow exploration and understanding across multiple repositories. | 04-11-2013 |
20130124372 | INTEGRATED MULTI-LICENSOR APPLICATION AND DATA PURVEYANCE - A single integrated offering includes a dataset license and a license to an application tailored for using the dataset. The dataset licensor and the application licensor are distinct entities. However, the integrated offering is electronically purveyed under a single offering price, in a public online marketplace and/or on licensor websites. In some cases, purveyance includes obtaining a purchaser's consents to the licenses, disclosing one or both of the licensors' identities, provisioning a purchaser with the dataset and the application, making payments to licensors, tax authorities, and/or other parties in response to a purchaser's payment, and reporting dataset/application usage to the licensors. Purveyor code permits cancelation of a purchase of the integrated offering only as a unified whole. | 05-16-2013 |
20140149589 | Enforcing Conditions of Use Associated with Disparate Data Sets - Techniques are described herein that are capable of enforcing conditions of use associated with disparate data sets. For example, content may be published. Conditions of use that are associated with the published content may be specified. The published content may include disparate data sets. Each data set may be associated with its own condition(s) of use. The condition(s) of use associated with each data set may be enforced. | 05-29-2014 |
20150213128 | QUERY MODEL OVER INFORMATION AS A NETWORKED SERVICE - Techniques for hosting data or connecting to hosted data are disclosed herein. In one embodiment, a first computing device in a first region of control can receive a data set from a second computing device in a second region of control via a communication network. The first computing device can then analyze the received data set to determine structural information, such as one or more structural features associated with the received data set. The determined structural information can then be transmitted to the second computing device. In response to the transmission, the first computing device can receive input from the second computing device regarding a query capability to enforce over the received data set. | 07-30-2015 |
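The publisher-controlled query model in the two "QUERY MODEL OVER INFORMATION AS A NETWORKED SERVICE" abstracts above can be sketched as a generated query interface that enforces the publisher's two selections: which columns may serve as search parameters, and which columns may appear in results. The column names, dataset, and function names below are illustrative assumptions.

```python
# Sketch of a publisher-restricted query interface: the publisher selects
# searchable columns and result columns, and the generated query function
# rejects filters on non-permitted columns and projects results down to
# the permitted ones. (A real back end would also build indexes on the
# searchable columns, as the abstract notes.)

def make_query_interface(rows, search_cols, result_cols):
    """Generate a query function restricted to the publisher's selections."""
    def query(**filters):
        for col in filters:
            if col not in search_cols:
                raise ValueError(f"column {col!r} is not a permitted search parameter")
        matches = [r for r in rows
                   if all(r.get(c) == v for c, v in filters.items())]
        # project each match down to the columns the publisher allowed
        return [{c: r[c] for c in result_cols} for r in matches]
    return query

# Hypothetical dataset: the publisher exposes "city" as a search parameter
# and {"city", "pop"} as result columns, keeping "internal_id" private.
rows = [{"city": "Seattle", "pop": 750000, "internal_id": 17}]
query = make_query_interface(rows, search_cols={"city"},
                             result_cols={"city", "pop"})
```

A query aggregator in the sense of the abstract would then combine two such generated functions, permitting only filters and result columns in the intersection of their permissions.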
20090026852 | Indexing apparatus and method for installation of stator bars - A stator bar installation fixture and method for installing stator bars into specific stator core slots within a stator core of rotating electrical equipment. The stator bar installation fixture includes rotating mechanisms, rotatingly fixed at each end of the rotating electrical equipment, for supporting and controlling an angular positioning of a stator bar insertion mechanism relative to the stator core. The stator bar insertion mechanism supports a stator bar within the stator core, angularly locates a stator bar in alignment with the specific stator core slot, and inserts the stator bar into the specific stator core slot. | 01-29-2009 |
20090045692 | Capped stator core wedge and related method - A slot wedge for a generator stator includes a wedge body having top and bottom surfaces and a pair of oppositely inclined side surfaces, wherein at least the oppositely inclined surfaces are covered with a woven aramid fabric. A related method includes the steps of: (a) providing a wedge shaped body having top and bottom surfaces connected by oppositely inclined side surfaces; and (b) covering at least the oppositely inclined side surfaces with a woven aramid fabric. | 02-19-2009 |
20090172934 | METHODS AND SYSTEMS FOR IN-SITU MACHINE MAINTENANCE - Methods and systems for maintaining a machine using an in-situ vehicle (IV) are provided. The IV may be used with a machine that includes a work-piece and an interference body positioned proximate a surface of the work-piece such that a relatively small gap extends between the work-piece and the interference body. The method includes transporting the IV from external to the machine into the gap, positioning the IV in the gap such that at least a portion of the IV circumscribes a work area of the work-piece, locking the IV between the work-piece and the interference body, and manipulating a tool coupled to the IV from external to the machine, the tool configured to transfer a component between the work-piece and a storage cassette on the IV. | 07-09-2009 |