Patent application number | Description | Published |
20110247018 | API For Launching Work On a Processor - One embodiment of the present invention sets forth a technique for launching work on a processor. The method includes the steps of initializing a first state object within a memory region accessible to a program executing on the processor, populating the first state object with data associated with a first workload that is generated by the program, and triggering the processing of the first workload on the processor according to the data within the first state object. | 10-06-2011 |
20130120412 | METHOD FOR HANDLING STATE TRANSITIONS IN A NETWORK OF VIRTUAL PROCESSING NODES - One embodiment of the present invention sets forth a technique for executing an operation once work associated with a version of a state object has been completed. The method includes receiving the version of the state object at a first stage in a processing pipeline, where the version of the state object is associated with a reference count object, determining that the version of the state object is relevant to the first stage, incrementing a counter included in the reference count object, transmitting the version of the state object to a second stage in the processing pipeline, processing work associated with the version of the state object, decrementing the counter, determining that the counter is equal to zero, and in response, executing an operation specified by the reference count object. | 05-16-2013 |
20130120413 | METHOD FOR HANDLING STATE TRANSITIONS IN A NETWORK OF VIRTUAL PROCESSING NODES - One embodiment of the present invention sets forth a technique for receiving versions of state objects at one or more stages in a processing pipeline. The method includes receiving a first version of a state object at a first stage in the processing pipeline, determining that the first version of the state object is relevant to the first stage, incrementing a first reference counter associated with the first version of the state object, assigning the first version of the state object to work requests that arrive at the first stage subsequent to the receipt of the first version of the state object, and transmitting the first version of the state object to a second stage in the processing pipeline. | 05-16-2013 |
20130152093 | Multi-Channel Time Slice Groups - A time slice group (TSG) is a grouping of different streams of work (referred to herein as “channels”) that share the same context information. The set of channels belonging to a TSG are processed in a pre-determined order. However, when a channel stalls while processing, the next channel with independent work can be switched to in order to fully load the parallel processing unit. Importantly, because each channel in the TSG shares the same context information, a context switch operation is not needed when the processing of a particular channel in the TSG stops and the processing of a next channel in the TSG begins. Therefore, multiple independent streams of work are allowed to run concurrently within a single context, increasing utilization of parallel processing units. | 06-13-2013 |
20130187935 | LOW LATENCY CONCURRENT COMPUTATION - One embodiment of the present invention sets forth a technique for performing low latency computation on a parallel processing subsystem. A low latency functional node is exposed to an operating system. The low latency functional node and a generic functional node are configured to target the same underlying processor resource within the parallel processing subsystem. The operating system stores low latency tasks generated by a user application within a low latency command buffer associated with the low latency functional node. The parallel processing subsystem advantageously executes tasks from the low latency command buffer prior to completing execution of tasks in the generic command buffer, thereby reducing completion latency for the low latency tasks. | 07-25-2013 |
20130298133 | TECHNIQUE FOR COMPUTATIONAL NESTED PARALLELISM - One embodiment of the present invention sets forth a technique for performing nested kernel execution within a parallel processing subsystem. The technique involves enabling a parent thread to launch a nested child grid on the parallel processing subsystem, and enabling the parent thread to perform a thread synchronization barrier on the child grid for proper execution semantics between the parent thread and the child grid. This technique advantageously enables the parallel processing subsystem to perform a richer set of programming constructs, such as conditionally executed and nested operations and externally defined library functions without the additional complexity of CPU involvement. | 11-07-2013 |
20140122838 | WORK-QUEUE-BASED GRAPHICS PROCESSING UNIT WORK CREATION - One embodiment of the present invention enables threads executing on a processor to locally generate and execute work within that processor by way of work queues and command blocks. A device driver, as an initialization procedure for establishing memory objects that enable the threads to locally generate and execute work, generates a work queue, and sets a GP_GET pointer of the work queue to the first entry in the work queue. The device driver also, during the initialization procedure, sets a GP_PUT pointer of the work queue to the last free entry included in the work queue, thereby establishing a range of entries in the work queue into which new work generated by the threads can be loaded and subsequently executed by the processor. The threads then populate command blocks with generated work and point entries in the work queue to the command blocks to effect processor execution of the work stored in the command blocks. | 05-01-2014 |
20140123144 | WORK-QUEUE-BASED GRAPHICS PROCESSING UNIT WORK CREATION - One embodiment of the present invention enables threads executing on a processor to locally generate and execute work within that processor by way of work queues and command blocks. A device driver, as an initialization procedure for establishing memory objects that enable the threads to locally generate and execute work, generates a work queue, and sets a GP_GET pointer of the work queue to the first entry in the work queue. The device driver also, during the initialization procedure, sets a GP_PUT pointer of the work queue to the last free entry included in the work queue, thereby establishing a range of entries in the work queue into which new work generated by the threads can be loaded and subsequently executed by the processor. The threads then populate command blocks with generated work and point entries in the work queue to the command blocks to effect processor execution of the work stored in the command blocks. | 05-01-2014 |
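The two work-queue abstracts above (20140122838 and 20140123144) describe a queue of entries bounded by a GP_GET pointer (the next entry the processor will consume) and a GP_PUT pointer (bounding the free range that threads may fill with pointers to command blocks). The following is a minimal sketch of that mechanism, not the patented implementation: the class and method names are illustrative, the queue is modeled as a Python ring buffer rather than GPU memory, and "commands" are stand-in callables.

```python
from dataclasses import dataclass, field

@dataclass
class WorkQueue:
    """Toy model of a GP_GET/GP_PUT work queue.

    gp_get indexes the next entry the processor will execute; gp_put
    indexes the next free entry, bounding the range that threads may
    point at locally generated command blocks.
    """
    size: int
    entries: list = field(default_factory=list)
    gp_get: int = 0  # next entry to execute
    gp_put: int = 0  # next free entry

    def __post_init__(self):
        # Driver-style initialization: all entries start free.
        self.entries = [None] * self.size

    def push_command_block(self, commands):
        """A thread points a free queue entry at a command block."""
        nxt = (self.gp_put + 1) % self.size
        if nxt == self.gp_get:
            raise RuntimeError("work queue full")
        self.entries[self.gp_put] = commands
        self.gp_put = nxt

    def execute_pending(self):
        """The processor drains entries between GP_GET and GP_PUT."""
        results = []
        while self.gp_get != self.gp_put:
            block = self.entries[self.gp_get]
            results.extend(cmd() for cmd in block)  # "execute" each command
            self.entries[self.gp_get] = None        # entry becomes free again
            self.gp_get = (self.gp_get + 1) % self.size
        return results
```

In this simplified model, pushing two command blocks and then draining the queue executes their commands in submission order, mirroring the abstracts' flow of threads populating command blocks ahead of processor execution.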
Patent application number | Description | Published |
20090281989 | Micro-Bucket Testing For Page Optimization - Methods for optimizing webpage content by micro-bucket testing user customization to the webpage include presenting a plurality of modules at a webpage based on a request from a user. The modules define an intent of the webpage. A change defining customization to one or more modules within the webpage is detected. A test case representing the change is automatically generated. The generated test case is a modified webpage having the customization. The webpage is presented to a first segment of users as a control page and the modified webpage is presented to a second segment of users in response to a request for the webpage. User interaction by the first and second segments of users is monitored at the webpage and the modified webpage to determine website metrics of the corresponding webpages. The website metrics are used in defining a new control page of the webpage from the modified webpage or retaining the webpage as the control page. | 11-12-2009 |
20090282013 | ALGORITHMICALLY GENERATED TOPIC PAGES - A method and system for generating a topic page for a search query on a search webpage includes receiving a query at the search webpage on a client. The query is transmitted from the search webpage on the client to a search engine on a server. A topic page generator available to the search engine analyzes the query to identify a plurality of dimensions. One or more content modules that match one or more of the dimensions are selected from a plurality of sources based on a weight associated with each of the content modules. The weight defines the ranking of a content module. The content modules for the plurality of dimensions are glued together and presented on the topic page in the order of the corresponding weight of the content modules. The order of presentation identifies the relevancy of the content modules to the query. The presented topic page provides the most relevant content modules for the query, and for a user located in a specific geo location. | 11-12-2009 |
20100082640 | GUIDING USER MODERATION BY CONFIDENCE LEVELS - Methods for guiding user moderation at a topic page by confidence levels include presenting a topic page in response to a query. The topic page includes a plurality of modules with content that match the query. The topic page is associated with a confidence level and with one or more page attributes that define the characteristics of the topic page and the modules included therein. One or more modifications to the topic page are received as part of customization of the topic page. The modifications include a plurality of page attributes that define the modification and a plurality of user attributes of a user performing the modification. The modifications are evaluated based on the page attributes and user attributes including confidence levels associated with the topic page and the user. The modifications are implemented based on the evaluation. The implemented modifications enhance the quality and confidence level of the topic page. | 04-01-2010 |
20100228712 | Algorithmically Generated Topic Pages with Interactive Advertisements - A method and system for generating a topic page for a search query on a search webpage includes receiving a query at the search webpage on a client. The query is transmitted from the search webpage on the client to a search engine on a server. A topic page generator available to the search engine analyzes the query to identify a plurality of dimensions. One or more content modules, including at least one interactive advertising module, that match one or more of the dimensions are selected from a plurality of sources based on a weight associated with each of the content modules. The weight defines the ranking of a content module. The content modules for the plurality of dimensions are glued together and presented on the topic page in the order of the corresponding weight of the content modules. The order of presentation identifies the relevancy of the content modules to the query. The presented topic page provides the most relevant content modules for the query, and for a user located in a specific geo location. | 09-09-2010 |
20110125739 | ALGORITHMICALLY CHOOSING WHEN TO USE BRANDED CONTENT VERSUS AGGREGATED CONTENT - A method and apparatus for optimizing content on a topic page includes receiving a query for a topic at a topic page on a client and transmitting the query from the topic page on the client to a web application on a server. The web application includes an algorithm to analyze the query to identify a plurality of content modules that match the query. The content modules are identified from any one of a branded source or an un-branded source. One or more module performance indicators are computed for each of the identified content modules. An aggregate module performance indicator for each of the plurality of content modules is generated from the one or more computed module performance indicators. One or more content modules from the identified plurality of content modules are automatically selected for rendering on the topic page based on the aggregate module performance indicator associated with each of the identified content modules. The resulting topic page includes content modules from just the branded source, just the un-branded source, or an aggregate of content modules from both the branded and un-branded sources, providing optimal content that is most relevant to the query. | 05-26-2011 |
20120084347 | PRESENTING MODULES IN A BROWSER - Module management software receives a request from the browser for a presentation composed of at least one module. The module management software transmits a request for module data associated with the module to a first server that caches the module data after retrieving the module data from a website. The module management software then receives the requested module data from the first server and transmits a request for each of the resource files described in the module data to a second server that caches each of the resource files after retrieving the resource file from an external (or internal) website. Each request for a resource file can be handled by a corresponding thread. The module management software delays transmission of the module data to the browser, if any requested resource file is not received within a time limit derived at least in part from a service level agreement. | 04-05-2012 |
20150026255 | DETERMINATION OF GENERAL AND TOPICAL NEWS AND GEOGRAPHICAL SCOPE OF NEWS CONTENT - Methods for categorizing news are presented. One method groups articles into clusters that share a common topic. A first category is identified for each article that indicates if the article is news or not. Further, the method includes an operation for determining use data for each article that has information about people that have accessed or referenced the article. Additionally, the method includes an operation for combining the use data and the first category for all the articles in each cluster to determine the geographical scope of interest for the cluster. The use data and the first category are combined for all the articles in each cluster to determine a second category for each article that indicates if the article is general news, topical news, or not news. The articles are presented to the user based on the geographical scope of interest, the second category, and the attributes of the user. | 01-22-2015 |
20150193540 | CONTENT RANKING BASED ON USER FEATURES IN CONTENT - Methods, systems, and computer programs are presented for providing a personalized news stream to a user. One method includes an operation for identifying user features associated with a user. The user features include personal features and social features. The personal features are based on activities of the user and the profile of the user. The social features are based on information about social connections of the user. The method further includes operations for extracting content features from a corpus of content items, for identifying intersections between user features and content features, and for assigning weights to the content features from the corpus based on the identified intersections. A score for each content item is determined based on the content features and the respective weights of the content items. The content items are then ranked based on the scores. One or more of the ranked content items are displayed. | 07-09-2015 |
20150220615 | CATEGORIZING HASH TAGS - A content item categorizer system retrieves content items from Internet sources. If a retrieved content item includes sufficient information for traditional categorization methods, then the system assigns one or more categories to the content item using such traditional methods. The system creates a metadata model, based on information about traditionally-categorized content items, that maps at least hashtags from the content items to one or more content categories. When the system retrieves a sparse-info item that does not include sufficient information for traditional categorization, the system applies the metadata model to categorize the content item using at least hashtags in the sparse-info item. The metadata model may also include information indicating mappings between categories and coincidence of hashtags and additional content item attributes. Also, the metadata model may provide information for categorizing sparse-info items based on multiple hashtags in the sparse-info item metadata. | 08-06-2015 |