Patent application number | Description | Published |
20150161237 | SYSTEM AND METHOD FOR CREATING STRUCTURED EVENT OBJECTS - The present invention envisages a system and method for converting a voluminous stream of unstructured short text messages into event objects of a specific event type that may be of potential interest to users at faraway locations. The method of structuring involves detecting a long tail of events despite their sparsity. This is followed by extracting and correlating the detected short text messages that describe the same event type to create structured event objects. | 06-11-2015 |
20150205803 | ENTITY RESOLUTION FROM DOCUMENTS - The present subject matter relates to entity resolution, and in particular, relates to providing an entity resolution from documents. The method comprises obtaining the plurality of documents from at least one data source. The plurality of documents is blocked into at least one bucket based on textual similarity and inter-document references among the plurality of documents. Further, within each bucket, a merged document for each entity may be created based on an iterative match-merge technique. The iterative match-merge technique identifies, from the plurality of documents, at least one matching pair of documents and merges the at least one matching pair of documents to create the merged document for each entity. The merged documents may be merged to generate a resolved entity-document for each entity based on a graph clustering technique. | 07-23-2015 |
20150254329 | ENTITY RESOLUTION FROM DOCUMENTS - The present subject matter relates to entity resolution, and in particular, relates to providing an entity resolution from documents. The method comprises obtaining a plurality of documents corresponding to a plurality of entities, from at least one data source. Upon receiving the plurality of documents, the plurality of documents is blocked into at least one bucket based on textual similarity. Further, a graph including a plurality of record vertices and at least one bucket vertex is created. The plurality of record vertices and the at least one bucket vertex are indicative of the plurality of documents and the at least one bucket, respectively. Subsequently, a notification is provided to a user for selecting one of a Bucket-Centric Parallelization (BCP) technique and a Record-Centric Parallelization (RCP) technique for resolving entities from the plurality of documents. Based on the selection, a resolved entity-document for each entity is created. | 09-10-2015 |
20150378963 | DETECTING AN EVENT FROM TIME-SERIES DATA SEQUENCES - The present subject matter discloses a system and a method for detecting an event from time-series data sequences. The system receives time-series data sequences generated by sensors, wherein the time-series data sequences comprise sample points. The system pairs the sample points with one another to determine pairs of sample points. The system computes Euclidean distances and angles between the sample points to determine a distance matrix and an angle matrix corresponding to the sample points. Further, the system determines the global distribution of the pairs of sample points, wherein the global distribution represents a 2D shape histogram for the time-series data sequence. Further, the system concatenates the 2D shape histograms of the time-series data sequences to generate a concatenated shape histogram. Finally, the system matches the concatenated shape histogram against pre-stored shape histograms to determine the event. | 12-31-2015 |
20160004987 | SYSTEM AND METHOD FOR PRESCRIPTIVE ANALYTICS - The present subject matter discloses system and method for executing prescriptive analytics. Simulation is performed from an input data (x | 01-07-2016 |
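Among the entries above, the shape-histogram method of 20150378963 is concrete enough to sketch in code. The following Python sketch is illustrative only, not the patented implementation: the bin counts, the distance cap `d_max`, and the L1 matching metric are assumptions, and each sample point is treated as an (index, value) pair.

```python
import itertools
import math

def shape_histogram(seq, d_bins=8, a_bins=8, d_max=10.0):
    """Build a 2D (distance, angle) histogram over all pairs of
    sample points (t, value) in one time-series sequence."""
    hist = [[0] * a_bins for _ in range(d_bins)]
    points = list(enumerate(seq))
    for (t1, v1), (t2, v2) in itertools.combinations(points, 2):
        dist = math.hypot(t2 - t1, v2 - v1)          # Euclidean distance
        angle = math.atan2(v2 - v1, t2 - t1)         # angle in [-pi, pi]
        di = min(int(dist / d_max * d_bins), d_bins - 1)
        ai = min(int((angle + math.pi) / (2 * math.pi) * a_bins), a_bins - 1)
        hist[di][ai] += 1
    # flatten and normalise so sequences of different lengths compare
    total = sum(map(sum, hist)) or 1
    return [c / total for row in hist for c in row]

def detect_event(sequences, reference_histograms):
    """Concatenate per-sequence histograms and return the label of the
    closest pre-stored histogram (L1 distance, an assumed metric)."""
    concat = [x for seq in sequences for x in shape_histogram(seq)]
    return min(reference_histograms,
               key=lambda label: sum(abs(a - b) for a, b in
                                     zip(concat, reference_histograms[label])))
```

Matching against a library of labelled histograms then reduces to a nearest-neighbour lookup over the concatenated vectors.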
Patent application number | Description | Published |
20150259989 | METHODS AND APPARATUS FOR MITIGATING DOWNHOLE TORSIONAL VIBRATION - A well tool apparatus for damping torsional vibration of a drill string comprises stabilizing members projecting radially outwards from a housing that is, in operation, rotationally integrated in the drill string, to stabilize the drill string by engagement with a borehole wall. The stabilizing members are displaceably mounted on the housing to permit limited angular movement thereof relative to the housing about its rotational axis. The well tool apparatus includes a hydraulic damping mechanism to damp angular displacement of the stabilizing members relative to the housing, thereby damping torsional vibration of the housing and the connected drill string, in use. | 09-17-2015 |
20150275581 | Torque Transfer Mechanism for Downhole Drilling Tools - A well drilling tool can include a torque transfer mechanism with an inner mandrel, an outer housing, and at least one pawl which displaces radially and thereby selectively permits and prevents relative rotation between the inner mandrel and the outer housing. A drill string can include a drill bit, a drilling motor, and a torque transfer mechanism which permits rotation of the drill bit in only one direction relative to the drilling motor, the torque transfer mechanism including at least one pawl which displaces linearly and thereby prevents rotation of the drill bit in the opposite direction relative to the drilling motor. | 10-01-2015 |
20150368973 | ROLL REDUCTION SYSTEM FOR ROTARY STEERABLE SYSTEM - Roll reduction system for rotary steerable system. A well drilling system includes a tubular housing that attaches inline in a drill string and a bit drive shaft supported to rotate in the housing by a roll reduction system. The roll reduction system includes a first gear carried by the housing to rotate relative to the housing and coupled to rotate with the bit drive shaft, and a second gear carried by the housing to rotate relative to the housing and coupled to the first gear to rotate in an opposite direction to the first gear. | 12-24-2015 |
Patent application number | Description | Published |
20120269191 | SYSTEM AND METHOD FOR IMPLEMENTING A MULTISTAGE NETWORK USING A TWO-DIMENSIONAL ARRAY OF TILES - A network, including: a first tile having a processor, a first top brick connected to the processor, a first bottom brick, and a first intermediate brick; a second tile having a second intermediate brick and a second bottom brick; multiple connections connecting the first top brick with the second intermediate brick and the first intermediate brick with the second bottom brick using a passthrough on an intermediate tile between the first and second tiles, where the first, the intermediate, and the second tiles are positioned in a row; and a third tile having a plurality of caches connected to a third bottom brick, where the second and third tiles are positioned in a column, and the first bottom brick, the second bottom brick, and the third bottom brick belong to a bottom layer of the network, and where the first and second intermediate bricks belong to an intermediate layer of the network. | 10-25-2012 |
20120275341 | SYSTEM AND METHOD FOR IMPLEMENTING A MULTISTAGE NETWORK USING A TWO-DIMENSIONAL ARRAY OF TILES - A network, including: a first tile having a processor, a first top brick connected to the processor, a first bottom brick, and a first intermediate brick; a second tile having a second intermediate brick and a second bottom brick; multiple connections connecting the first top brick with the second intermediate brick and the first intermediate brick with the second bottom brick using a passthrough on an intermediate tile between the first and second tiles, where the first, the intermediate, and the second tiles are positioned in a row; and a third tile having a plurality of caches connected to a third bottom brick, where the second and third tiles are positioned in a column, and the first bottom brick, the second bottom brick, and the third bottom brick belong to a bottom layer of the network, and where the first and second intermediate bricks belong to an intermediate layer of the network. | 11-01-2012 |
Patent application number | Description | Published |
20100293401 | Power Managed Lock Optimization - In an embodiment, a timer unit may be provided that may be programmed to a selected time interval, or wakeup interval. A processor may execute a wait for event instruction, and enter a low power state for the thread that includes the instruction. The timer unit may signal a timer event at the expiration of the wakeup interval, and the processor may exit the low power state in response to the timer event. The thread may continue executing with the instruction following the wait for event instruction. In an embodiment, the processor/timer unit may be used to implement a power-managed lock acquisition mechanism, in which the processor is awakened a number of times to check the lock and execute the wait for event instruction if the lock is not free, after which the thread may block until the lock is free. | 11-18-2010 |
20120167107 | Power Managed Lock Optimization - In an embodiment, a timer unit may be provided that may be programmed to a selected time interval, or wakeup interval. A processor may execute a wait for event instruction, and enter a low power state for the thread that includes the instruction. The timer unit may signal a timer event at the expiration of the wakeup interval, and the processor may exit the low power state in response to the timer event. The thread may continue executing with the instruction following the wait for event instruction. In an embodiment, the processor/timer unit may be used to implement a power-managed lock acquisition mechanism, in which the processor is awakened a number of times to check the lock and execute the wait for event instruction if the lock is not free, after which the thread may block until the lock is free. | 06-28-2012 |
20130042074 | Prefetch Unit - In one embodiment, a processor comprises a prefetch unit coupled to a data cache. The prefetch unit is configured to concurrently maintain a plurality of separate, active prefetch streams. Each prefetch stream is either software initiated via execution by the processor of a dedicated prefetch instruction or hardware initiated via detection of a data cache miss by one or more load/store memory operations. The prefetch unit is further configured to generate prefetch requests responsive to the plurality of prefetch streams to prefetch data into the data cache. | 02-14-2013 |
20130067257 | Power Managed Lock Optimization - In an embodiment, a timer unit may be provided that may be programmed to a selected time interval, or wakeup interval. A processor may execute a wait for event instruction, and enter a low power state for the thread that includes the instruction. The timer unit may signal a timer event at the expiration of the wakeup interval, and the processor may exit the low power state in response to the timer event. The thread may continue executing with the instruction following the wait for event instruction. In an embodiment, the processor/timer unit may be used to implement a power-managed lock acquisition mechanism, in which the processor is awakened a number of times to check the lock and execute the wait for event instruction if the lock is not free, after which the thread may block until the lock is free. | 03-14-2013 |
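The "Power Managed Lock Optimization" entries above all describe the same polling-then-blocking policy, which can be modelled in ordinary threading code. This is a hedged sketch: `threading.Event().wait()` stands in for the hardware wait-for-event instruction with a programmed wakeup timer, and the retry count `max_wakeups` is an illustrative parameter, not a value from the applications.

```python
import threading

def acquire_power_managed(lock, wakeup_interval=0.001, max_wakeups=8):
    """Try to take `lock` by periodically waking to poll it; after
    `max_wakeups` failed polls, fall back to a blocking acquire.
    The timed wait models the low-power state between timer wakeups."""
    for _ in range(max_wakeups):
        if lock.acquire(blocking=False):   # lock was free: done
            return
        # enter the "low power" state until the wakeup timer fires
        threading.Event().wait(wakeup_interval)
    lock.acquire()  # still contended: block until the lock is free
```

The point of the policy is that short contention is absorbed by cheap timed sleeps, while long contention falls through to the scheduler's ordinary blocking path.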
Patent application number | Description | Published |
20080222317 | Data Flow Control Within and Between DMA Channels - In one embodiment, a direct memory access (DMA) controller comprises a transmit circuit and a data flow control circuit coupled to the transmit circuit. The transmit circuit is configured to perform DMA transfers, each DMA transfer described by a DMA descriptor stored in a data structure in memory. There is a data structure for each DMA channel that is in use. The data flow control circuit is configured to control the transmit circuit's processing of DMA descriptors for each DMA channel responsive to data flow control data in the DMA descriptors in the corresponding data structure. | 09-11-2008 |
20090119488 | Prefetch Unit - In one embodiment, a processor comprises a prefetch unit coupled to a data cache. The prefetch unit is configured to concurrently maintain a plurality of separate, active prefetch streams. Each prefetch stream is either software initiated via execution by the processor of a dedicated prefetch instruction or hardware initiated via detection of a data cache miss by one or more load/store memory operations. The prefetch unit is further configured to generate prefetch requests responsive to the plurality of prefetch streams to prefetch data into the data cache. | 05-07-2009 |
20100268894 | Prefetch Unit - In one embodiment, a processor comprises a prefetch unit coupled to a data cache. The prefetch unit is configured to concurrently maintain a plurality of separate, active prefetch streams. Each prefetch stream is either software initiated via execution by the processor of a dedicated prefetch instruction or hardware initiated via detection of a data cache miss by one or more load/store memory operations. The prefetch unit is further configured to generate prefetch requests responsive to the plurality of prefetch streams to prefetch data into the data cache. | 10-21-2010 |
20110264864 | Prefetch Unit - In one embodiment, a processor comprises a prefetch unit coupled to a data cache. The prefetch unit is configured to concurrently maintain a plurality of separate, active prefetch streams. Each prefetch stream is either software initiated via execution by the processor of a dedicated prefetch instruction or hardware initiated via detection of a data cache miss by one or more load/store memory operations. The prefetch unit is further configured to generate prefetch requests responsive to the plurality of prefetch streams to prefetch data into the data cache. | 10-27-2011 |
20120036289 | Data Flow Control Within and Between DMA Channels - In one embodiment, a direct memory access (DMA) controller comprises a transmit circuit and a data flow control circuit coupled to the transmit circuit. The transmit circuit is configured to perform DMA transfers, each DMA transfer described by a DMA descriptor stored in a data structure in memory. There is a data structure for each DMA channel that is in use. The data flow control circuit is configured to control the transmit circuit's processing of DMA descriptors for each DMA channel responsive to data flow control data in the DMA descriptors in the corresponding data structure. | 02-09-2012 |
20120297096 | Data Flow Control Within and Between DMA Channels - In one embodiment, a direct memory access (DMA) controller comprises a transmit circuit and a data flow control circuit coupled to the transmit circuit. The transmit circuit is configured to perform DMA transfers, each DMA transfer described by a DMA descriptor stored in a data structure in memory. There is a data structure for each DMA channel that is in use. The data flow control circuit is configured to control the transmit circuit's processing of DMA descriptors for each DMA channel responsive to data flow control data in the DMA descriptors in the corresponding data structure. | 11-22-2012 |
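The "Prefetch Unit" abstracts above describe hardware-initiated streams discovered from data cache misses. A toy software model of one common way such streams are found, constant-stride detection (an assumption here, not a detail taken from the abstracts), might look like:

```python
class PrefetchUnit:
    """Toy model of a stream prefetcher: two misses a constant stride
    apart start a stream, and each later miss that lands on a stream
    advances it and prefetches one address ahead. Field names and the
    single-stride policy are illustrative, not the patented design."""

    def __init__(self, max_streams=4):
        self.max_streams = max_streams
        self.last_miss = None        # most recent unmatched miss address
        self.streams = []            # active streams: {"next", "stride"}

    def miss(self, addr):
        """Record a demand miss; return the prefetch addresses issued."""
        # does the miss extend an existing stream?
        for s in self.streams:
            if addr == s["next"]:
                s["next"] += s["stride"]
                return [s["next"]]
        # otherwise, try to pair with the previous lone miss
        if self.last_miss is not None:
            stride = addr - self.last_miss
            if stride != 0 and len(self.streams) < self.max_streams:
                self.streams.append({"next": addr + stride, "stride": stride})
                self.last_miss = None
                return [addr + stride]
        self.last_miss = addr
        return []
```

Because each stream keeps its own `next`/`stride` state, several independent access patterns can be tracked concurrently, which is the "plurality of separate, active prefetch streams" the abstracts refer to.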
Patent application number | Description | Published |
20100095300 | Online Computation of Cache Occupancy and Performance - Methods, computer programs, and systems for managing thread performance in a computing environment based on cache occupancy are provided. In one embodiment, a computer implemented method assigns a thread performance counter to threads being created to measure the number of cache misses for the threads. The thread performance counter is deduced in one embodiment based on performance counters associated with each core in a processor. The method further calculates a self-thread value as the change in the thread performance counter of a given thread during a predetermined period, and an other-thread value as the sum of all the changes in the thread performance counters for all threads except for the given thread. Further, the method estimates a cache occupancy for the given thread based on a previous occupancy for the given thread, and the calculated self-thread and other-thread values. The estimated cache occupancy is used to assign computing environment resources to the given thread. In another embodiment, cache miss-rate curves are constructed for a thread to help analyze performance tradeoffs when changing cache allocations of the threads in the system. | 04-15-2010 |
20110055479 | Thread Compensation For Microarchitectural Contention - A thread (or other resource consumer) is compensated for contention for system resources in a computer system having at least one processor core, a last level cache (LLC), and a main memory. In one embodiment, at each descheduling event of the thread following an execution interval, an effective CPU time is determined. The execution interval is a period of time during which the thread is being executed on the central processing unit (CPU) between scheduling events. The effective CPU time is a portion of the execution interval that excludes delays caused by contention for microarchitectural resources, such as time spent repopulating lines from the LLC that were evicted by other threads. The thread may be compensated for microarchitectural contention by increasing its scheduling priority based on the effective CPU time. | 03-03-2011 |
20110231857 | CACHE PERFORMANCE PREDICTION AND SCHEDULING ON COMMODITY PROCESSORS WITH SHARED CACHES - A method is described for scheduling in an intelligent manner a plurality of threads on a processor having a plurality of cores and a shared last level cache (LLC). In the method, a first and second scenario having a corresponding first and second combination of threads are identified. The cache occupancies of each of the threads for each of the scenarios are predicted. The predicted cache occupancies being a representation of an amount of the LLC that each of the threads would occupy when running with the other threads on the processor according to the particular scenario. One of the scenarios is identified that results in the least objectionable impacts on all threads, the least objectionable impacts taking into account the impact resulting from the predicted cache occupancies. Finally, a scheduling decision is made according to the one of the scenarios that results in the least objectionable impacts. | 09-22-2011 |
20120117299 | EFFICIENT ONLINE CONSTRUCTION OF MISS RATE CURVES - Miss rate curves are constructed in a resource-efficient manner so that they can be constructed and memory management decisions can be made while the workloads are running. The resource-efficient technique includes the steps of selecting a subset of memory pages for the workload, maintaining a least recently used (LRU) data structure for the selected memory pages, detecting accesses to the selected memory pages and updating the LRU data structure in response to the detected accesses, and generating data for constructing a miss-rate curve for the workload using the LRU data structure. After a memory page is accessed, the memory page may be left untraced for a period of time, after which the memory page is retraced. | 05-10-2012 |
20130232500 | CACHE PERFORMANCE PREDICTION AND SCHEDULING ON COMMODITY PROCESSORS WITH SHARED CACHES - A method is described for scheduling in an intelligent manner a plurality of threads on a processor having a plurality of cores and a shared last level cache (LLC). In the method, a first and second scenario having a corresponding first and second combination of threads are identified. The cache occupancies of each of the threads for each of the scenarios are predicted. The predicted cache occupancies being a representation of an amount of the LLC that each of the threads would occupy when running with the other threads on the processor according to the particular scenario. One of the scenarios is identified that results in the least objectionable impacts on all threads, the least objectionable impacts taking into account the impact resulting from the predicted cache occupancies. Finally, a scheduling decision is made according to the one of the scenarios that results in the least objectionable impacts. | 09-05-2013 |
20140189248 | EFFICIENT ONLINE CONSTRUCTION OF MISS RATE CURVES - Miss rate curves are constructed in a resource-efficient manner so that they can be constructed and memory management decisions can be made while the workloads are running. The resource-efficient technique includes the steps of selecting a subset of memory pages for the workload, maintaining a least recently used (LRU) data structure for the selected memory pages, detecting accesses to the selected memory pages and updating the LRU data structure in response to the detected accesses, and generating data for constructing a miss-rate curve for the workload using the LRU data structure. After a memory page is accessed, the memory page may be left untraced for a period of time, after which the memory page is retraced. | 07-03-2014 |
20150052287 | NUMA Scheduling Using Inter-vCPU Memory Access Estimation - In a system having a non-uniform memory access architecture with a plurality of nodes, memory access by entities such as virtual CPUs is estimated by invalidating a selected subset of memory units and then detecting and compiling access statistics, for example by counting the page faults that arise when any virtual CPU accesses an invalidated memory unit. The entities, or pairs of entities, may then be migrated or otherwise co-located on the node for which they have the greatest memory locality. | 02-19-2015 |
20150058400 | NUMA-BASED CLIENT PLACEMENT - A management server and method for performing resource management operations in a distributed computer system takes into account information regarding multi-processor memory architectures of host computers of the distributed computer system, including information regarding Non-Uniform Memory Access (NUMA) architectures of at least some of the host computers, to make a placement recommendation to place a client in one of the host computers. | 02-26-2015 |
20150186185 | CACHE PERFORMANCE PREDICTION AND SCHEDULING ON COMMODITY PROCESSORS WITH SHARED CACHES - A method includes assigning a thread performance counter to threads being created in the computing environment, the thread performance counter measuring a number of cache misses for a corresponding thread. The method also includes calculating a self-thread value S as a change in the thread performance counter of a given thread during a predetermined period, calculating an other-thread value O as a sum of changes in all the thread performance counters during the predetermined period minus S, and calculating an estimation adjustment value associated with a first probability that a second set of cache misses for the corresponding thread replace a cache area currently occupied by the corresponding thread. The method also includes estimating a cache occupancy for the thread based on a previous occupancy for the thread, S, O, and the estimation adjustment value, and assigning computing environment resources to the thread based on the estimated cache occupancy. | 07-02-2015 |
20150363117 | DATA REUSE TRACKING AND MEMORY ALLOCATION MANAGEMENT - Exemplary methods, apparatuses, and systems determine a miss rate at various amounts of memory allocation for each of a plurality of workloads running within a computer. A value representing the estimated change in miss rate for each workload upon an increase in its current memory allocation is determined. The workload whose value represents the greatest improvement in hit rate is selected, and additional memory is allocated to the selected workload. | 12-17-2015 |
20150363236 | DATA REUSE TRACKING AND MEMORY ALLOCATION MANAGEMENT - Exemplary methods, apparatuses, and systems receive a first request for a storage address at a first access time. Entries are added to first and second data structures. Each entry includes the storage address and the first access time. The first data structure is sorted in order of storage addresses. The second data structure is sorted in order of access times. A second request for the storage address is received at a second access time. The first access time is determined by looking up the entry in the first data structure using the storage address received in the second request. The entry in the second data structure is then looked up using the determined first access time. The number of entries in the second data structure subsequent to that entry is determined. A hit count for the reuse distance corresponding to the determined number of entries is incremented. | 12-17-2015 |
20160085571 | Adaptive CPU NUMA Scheduling - Examples perform selection of non-uniform memory access (NUMA) nodes for mapping of virtual central processing unit (vCPU) operations to physical processors. A CPU scheduler evaluates the latency between various candidate processors and the memory associated with the vCPU, and the size of the working set of the associated memory, and the vCPU scheduler selects an optimal processor for execution of a vCPU based on the expected memory access latency and the characteristics of the vCPU and the processors. Some examples contemplate monitoring system characteristics and rescheduling the vCPUs when other placements may provide improved performance and/or efficiency. | 03-24-2016 |
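The occupancy-estimation entries above (20100095300 and 20150186185) compute a thread's cache occupancy from its previous estimate, its own misses S, and other threads' misses O. A commonly used linear update of this kind, written out as a sketch (the exact formula and the estimation adjustment value described in the applications may differ), is:

```python
def update_occupancy(E, S, O, C):
    """One update of a thread's estimated cache occupancy.
    E: previous occupancy estimate (lines), S: the thread's own misses
    in the interval, O: all other threads' misses, C: cache size (lines).
    Each of the thread's misses is assumed to land in a line it does not
    already own with probability (1 - E/C); each foreign miss is assumed
    to evict one of its lines with probability E/C."""
    E_new = E + (1 - E / C) * S - (E / C) * O
    return max(0.0, min(C, E_new))  # clamp the estimate to [0, C]
```

Iterating the update drives the estimate toward the fixed point C * S / (S + O), so two threads missing at equal rates converge to half the cache each, which matches the intuition behind using occupancy estimates for scheduling.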