Patent application number | Description | Published |
20080225180 | Display Information Feedback - In general, in an aspect, the invention provides a multimedia entertainment system including a communication link, a video source coupled to the communication link and configured to produce a video signal and provide the video signal to the communication link, a video display coupled to the communication link and configured to receive the video signal from the video source via the communication link, and to provide dynamic display characteristic information indicative of a display capability of the video display to the video source via the communication link, wherein the video source is configured to receive the dynamic display characteristic information and to produce the video signal as a function of the dynamic display characteristic information, and wherein the video display is configured to display a video image in accordance with the video signal provided by the video source. | 09-18-2008 |
20120185671 | COMPUTATIONAL RESOURCE PIPELINING IN GENERAL PURPOSE GRAPHICS PROCESSING UNIT - This disclosure describes techniques for extending the architecture of a general purpose graphics processing unit (GPGPU) with parallel processing units to allow efficient processing of pipeline-based applications. The techniques include configuring local memory buffers connected to parallel processing units operating as stages of a processing pipeline to hold data for transfer between the parallel processing units. The local memory buffers allow on-chip, low-power, direct data transfer between the parallel processing units. The local memory buffers may include hardware-based data flow control mechanisms to enable transfer of data between the parallel processing units. In this way, data may be passed directly from one parallel processing unit to the next parallel processing unit in the processing pipeline via the local memory buffers, in effect transforming the parallel processing units into a series of pipeline stages. | 07-19-2012 |
20130021360 | SYNCHRONIZATION OF SHADER OPERATION - The example techniques described in this disclosure may be directed to synchronization between producer shaders and consumer shaders. For example, a graphics processing unit (GPU) may execute a producer shader to produce graphics data. After the completion of the production of graphics data, the producer shader may store a value indicative of the amount of produced graphics data. The GPU may execute one or more consumer shaders, after the storage of the value indicative of the amount of produced graphics data, to consume the produced graphics data. | 01-24-2013 |
20130215128 | MULTI-THREAD GRAPHICS PROCESSING SYSTEM - A graphics processing system comprises at least one memory device storing a plurality of pixel command threads and a plurality of vertex command threads. An arbiter coupled to the at least one memory device is provided that selects a pixel command thread from the plurality of pixel command threads and a vertex command thread from the plurality of vertex command threads. The arbiter further selects a command thread from the previously selected pixel command thread and the vertex command thread, which command thread is provided to a command processing engine capable of processing pixel command threads and vertex command threads. | 08-22-2013 |
20130229414 | TECHNIQUES FOR REDUCING MEMORY ACCESS BANDWIDTH IN A GRAPHICS PROCESSING SYSTEM BASED ON DESTINATION ALPHA VALUES - This disclosure describes techniques for reducing memory access bandwidth in a graphics processing system based on destination alpha values. The techniques may include retrieving a destination alpha value from a bin buffer, the destination alpha value being generated in response to processing a first pixel associated with a first primitive. The techniques may further include determining, based on the destination alpha value, whether to perform an action that causes one or more texture values for a second pixel to not be retrieved from a texture buffer. In some examples, the action may include discarding the second pixel from a pixel processing pipeline prior to the second pixel arriving at a texture mapping stage of the pixel processing pipeline. The second pixel may be associated with a second primitive different than the first primitive. | 09-05-2013 |
20130241938 | VISIBILITY-BASED STATE UPDATES IN GRAPHICAL PROCESSING UNITS - In general, techniques are described for visibility-based state updates in graphical processing units (GPUs). A device that renders image data comprising a memory configured to store state data and a GPU may implement the techniques. The GPU may be configured to perform a multi-pass rendering process to render an image from the image data. The GPU determines visibility information for a plurality of objects defined by the image data during a first pass of the multi-pass rendering process. The visibility information indicates whether each of the plurality of objects will be visible in the image rendered from the image data during a second pass of the multi-pass rendering process. The GPU then retrieves the state data from the memory for use by the second pass of the multi-pass rendering process in rendering the plurality of objects of the image data based on the visibility information. | 09-19-2013 |
20140022266 | DEFERRED PREEMPTION TECHNIQUES FOR SCHEDULING GRAPHICS PROCESSING UNIT COMMAND STREAMS - This disclosure is directed to deferred preemption techniques for scheduling graphics processing unit (GPU) command streams for execution on a GPU. A host CPU is described that is configured to control a GPU to perform deferred-preemption scheduling. For example, a host CPU may select one or more locations in a GPU command stream as being one or more locations at which preemption is allowed to occur in response to receiving a preemption notification, and may place one or more tokens in the GPU command stream based on the selected one or more locations. The tokens may indicate to the GPU that preemption is allowed to occur at the selected one or more locations. This disclosure further describes a GPU configured to preempt execution of a GPU command stream based on one or more tokens placed in a GPU command stream. | 01-23-2014 |
20140047223 | SELECTIVELY ACTIVATING A RESUME CHECK OPERATION IN A MULTI-THREADED PROCESSING SYSTEM - This disclosure describes techniques for selectively activating a resume check operation in a single instruction, multiple data (SIMD) processing system. A processor is described that is configured to selectively enable or disable a resume check operation for a particular instruction based on information included in the instruction that indicates whether a resume check operation is to be performed for the instruction. A compiler is also described that is configured to generate compiled code which, when executed, causes a resume check operation to be selectively enabled or disabled for particular instructions. The compiled code may include one or more instructions that each specify whether a resume check operation is to be performed for the respective instruction. The techniques of this disclosure may be used to reduce the power consumption of and/or improve the performance of a SIMD system that utilizes a resume check operation to manage the reactivation of deactivated threads. | 02-13-2014 |
20140292784 | MULTI-THREAD GRAPHICS PROCESSING SYSTEM - A graphics processing system comprises at least one memory device storing a plurality of pixel command threads and a plurality of vertex command threads. An arbiter coupled to the at least one memory device is provided that selects a pixel command thread from the plurality of pixel command threads and a vertex command thread from the plurality of vertex command threads. The arbiter further selects a command thread from the previously selected pixel command thread and the vertex command thread, which command thread is provided to a command processing engine capable of processing pixel command threads and vertex command threads. | 10-02-2014 |
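Several abstracts above describe GPU scheduling mechanisms; the deferred-preemption idea in 20140022266 lends itself to a compact sketch. The following is a hypothetical Python model — `PREEMPT_TOKEN` and `execute_stream` are invented names for illustration, not anything from the filing: the host places tokens in the command stream, and the GPU defers a pending preemption request until the next token is reached.

```python
PREEMPT_TOKEN = "PREEMPT_OK"  # hypothetical marker the host CPU embeds in the stream

def execute_stream(commands, preempt_requested_at):
    """Execute commands in order. A preemption request (modeled as
    arriving just before index preempt_requested_at executes) is
    noted but deferred: execution continues until the next token,
    then stops, reporting where to resume later."""
    executed = []
    pending = False
    for i, cmd in enumerate(commands):
        if i == preempt_requested_at:
            pending = True              # request noted, but deferred
        if cmd == PREEMPT_TOKEN:
            if pending:
                return executed, i + 1  # preempt only at a token
            continue                    # token is a no-op otherwise
        executed.append(cmd)
    return executed, None               # stream completed, nothing to resume
```

Choosing token placement is the host's policy decision: tokens at draw-call or render-pass boundaries trade preemption latency against the cost of saving mid-pipeline state.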

20100156915 | Multi-Thread Graphics Processing System - A graphics processing system comprises at least one memory device storing a plurality of pixel command threads and a plurality of vertex command threads. An arbiter coupled to the at least one memory device is provided that selects a command thread from either the plurality of pixel or vertex command threads based on relative priorities of the plurality of pixel command threads and the plurality of vertex command threads. The selected command thread is provided to a command processing engine capable of processing pixel command threads and vertex command threads. | 06-24-2010 |
20110057940 | Processing Unit to Implement Video Instructions and Applications Thereof - Disclosed herein is a processing unit configured to process video data, and applications thereof. In an embodiment, the processing unit includes a buffer and an execution unit. The buffer is configured to store a data word, wherein the data word comprises a plurality of bytes of video data. The execution unit is configured to execute a single instruction to (i) shift bytes of video data contained in the data word to align a desired byte of video data and (ii) process the desired byte of the video data to provide processed video data. | 03-10-2011 |
20120019543 | MULTI-THREAD GRAPHICS PROCESSING SYSTEM - A graphics processing system comprises at least one memory device storing a plurality of pixel command threads and a plurality of vertex command threads. An arbiter coupled to the at least one memory device is provided that selects a command thread from either the plurality of pixel or vertex command threads based on relative priorities of the plurality of pixel command threads and the plurality of vertex command threads. The selected command thread is provided to a command processing engine capable of processing pixel command threads and vertex command threads. | 01-26-2012 |
20130061027 | TECHNIQUES FOR HANDLING DIVERGENT THREADS IN A MULTI-THREADED PROCESSING SYSTEM - This disclosure describes techniques for handling divergent thread conditions in a multi-threaded processing system. In some examples, a control flow unit may obtain a control flow instruction identified by a program counter value stored in a program counter register. The control flow instruction may include a target value indicative of a target program counter value for the control flow instruction. The control flow unit may select one of the target program counter value and a minimum resume counter value as a value to load into the program counter register. The minimum resume counter value may be indicative of a smallest resume counter value from a set of one or more resume counter values associated with one or more inactive threads. Each of the one or more resume counter values may be indicative of a program counter value at which a respective inactive thread should be activated. | 03-07-2013 |
20130265307 | PATCHED SHADING IN GRAPHICS PROCESSING - Aspects of this disclosure relate to a process for rendering graphics that includes designating a hardware shading unit of a graphics processing unit (GPU) to perform first shading operations associated with a first shader stage of a rendering pipeline. The process also includes switching operational modes of the hardware shading unit upon completion of the first shading operations. The process also includes performing, with the hardware shading unit of the GPU designated to perform the first shading operations, second shading operations associated with a second, different shader stage of the rendering pipeline. | 10-10-2013 |
20130265308 | PATCHED SHADING IN GRAPHICS PROCESSING - Aspects of this disclosure relate to a process for rendering graphics that includes performing, with a hardware unit of a graphics processing unit (GPU) designated for vertex shading, a vertex shading operation to shade input vertices so as to output vertex shaded vertices, wherein the hardware unit adheres to an interface that receives a single vertex as an input and generates a single vertex as an output. The process also includes performing, with the hardware unit of the GPU designated for vertex shading, a hull shading operation to generate one or more control points based on one or more of the vertex shaded vertices, wherein the one or more hull shading operations operate on at least one of the one or more vertex shaded vertices to output the one or more control points. | 10-10-2013 |
20140176586 | MULTI-MODE MEMORY ACCESS TECHNIQUES FOR PERFORMING GRAPHICS PROCESSING UNIT-BASED MEMORY TRANSFER OPERATIONS - This disclosure describes techniques for performing memory transfer operations with a graphics processing unit (GPU) based on a selectable memory transfer mode, and techniques for selecting a memory transfer mode for performing all or part of a memory transfer operation with a GPU. In some examples, the techniques of this disclosure may include selecting a memory transfer mode for performing at least part of a memory transfer operation, and performing, with a GPU, the memory transfer operation based on the selected memory transfer mode. The memory transfer mode may be selected from a set of at least two different memory transfer modes that includes an interleave memory transfer mode and a sequential memory transfer mode. The techniques of this disclosure may be used to improve the performance of GPU-assisted memory transfer operations. | 06-26-2014 |
20140198119 | RENDERING GRAPHICS DATA USING VISIBILITY INFORMATION - In some examples, aspects of this disclosure relate to a method for rendering an image. For example, the method includes generating visibility information indicating visible primitives of the image. The method also includes rendering the image using a binning configuration, wherein the binning configuration is based on the visibility information. | 07-17-2014 |
20140237609 | HARDWARE ENFORCED CONTENT PROTECTION FOR GRAPHICS PROCESSING UNITS - This disclosure proposes techniques for graphics processing. In one example, a graphics processing unit (GPU) is configured to access a first memory unit according to one of an unsecure mode and a secure mode. The GPU comprises a memory access controller configured to allow the GPU to read data from only an unsecure portion of the first memory unit when the GPU is in the unsecure mode, and configured to allow the GPU to write data only to a secure portion of the first memory unit when the GPU is in the secure mode. | 08-21-2014 |
20140300613 | GRAPHICS PROCESSING ARCHITECTURE EMPLOYING A UNIFIED SHADER - A graphics processing architecture in one example performs vertex manipulation operations and pixel manipulation operations by transmitting vertex data to a general purpose register block, and performing vertex operations on the vertex data by a processor unless the general purpose register block does not have enough available space therein to store incoming vertex data; and continues pixel calculation operations that are to be or are currently being performed by the processor based on instructions maintained in an instruction store until enough registers within the general purpose register block become available. | 10-09-2014 |
20150154731 | GRAPHICS PROCESSING ARCHITECTURE EMPLOYING A UNIFIED SHADER - A graphics processing architecture in one example performs vertex manipulation operations and pixel manipulation operations by transmitting vertex data to a general purpose register block, and performing vertex operations on the vertex data by a processor unless the general purpose register block does not have enough available space therein to store incoming vertex data; and continues pixel calculation operations that are to be or are currently being performed by the processor based on instructions maintained in an instruction store until enough registers within the general purpose register block become available. | 06-04-2015 |
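The unified-shader abstracts above (20140300613, 20150154731) gate vertex work on register availability: vertex data is admitted only while the general purpose register block has room, and pixel work keeps running otherwise so that registers eventually free up. A minimal sketch of that one decision, assuming a single scalar register count stands in for the whole register block (all names invented):

```python
def schedule_step(gpr_free, vertex_cost, vertex_queue, pixel_queue):
    """One arbitration step for a unified shader. Vertex work is
    admitted only while the general purpose register block has
    room for it; otherwise pixel work already resident in the
    instruction store continues, which eventually frees registers."""
    if vertex_queue and gpr_free >= vertex_cost:
        return "vertex", vertex_queue.pop(0)  # room available: take vertex work
    if pixel_queue:
        return "pixel", pixel_queue.pop(0)    # drain pixel work to free registers
    return "idle", None
```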
20130265309 | PATCHED SHADING IN GRAPHICS PROCESSING - Aspects of this disclosure generally relate to a process for rendering graphics that includes performing, with a hardware shading unit of a graphics processing unit (GPU) designated for vertex shading, vertex shading operations to shade input vertices so as to output vertex shaded vertices, wherein the hardware unit is configured to receive a single vertex as an input and generate a single vertex as an output. The process also includes performing, with the hardware shading unit of the GPU, a geometry shading operation to generate one or more new vertices based on one or more of the vertex shaded vertices, wherein the geometry shading operation operates on at least one of the one or more vertex shaded vertices to output the one or more new vertices. | 10-10-2013 |
20140040552 | MULTI-CORE COMPUTE CACHE COHERENCY WITH A RELEASE CONSISTENCY MEMORY ORDERING MODEL - A method includes storing, with a first programmable processor, shared variable data to cache lines of a first cache of the first processor. The method further includes executing, with the first programmable processor, a store-with-release operation, executing, with a second programmable processor, a load-with-acquire operation, and loading, with the second programmable processor, the value of the shared variable data from a cache of the second programmable processor. | 02-06-2014 |
20140204080 | INDEXED STREAMOUT BUFFERS FOR GRAPHICS PROCESSING - A graphics processing unit (GPU) includes an indexed streamout buffer. The indexed streamout buffer is configured to: receive vertex data of a primitive, and determine whether any entries in a reuse table of the indexed streamout buffer reference the vertex data. Responsive to determining that an entry in the reuse table references the vertex data, the buffer is further configured to: generate an index that references the vertex data, store the index in the buffer, and store a reference to the index in the reuse table. Responsive to determining that no entry references the vertex data, the indexed streamout buffer is configured to: store the vertex data in the buffer, generate an index that references the vertex data, store the index in the buffer, and store a reference to the index in the reuse table. | 07-24-2014 |
20140267259 | TILE-BASED RENDERING - This disclosure describes techniques for using bounding regions to perform tile-based rendering with a graphics processing unit (GPU) that supports an on-chip, tessellation-enabled graphics rendering pipeline. Instead of generating binning data based on rasterized versions of the actual primitives to be rendered, the techniques of this disclosure may generate binning data based on a bounding region that encompasses one or more of the primitives to be rendered. Moreover, the binning data may be generated based on data that is generated by at least one tessellation processing stage of an on-chip, tessellation-enabled graphics rendering pipeline that is implemented by the GPU. The techniques of this disclosure may, in some examples, be used to improve the performance of an on-chip, tessellation-enabled GPU when performing tile-based rendering without sacrificing the quality of the resulting rendered image. | 09-18-2014 |
20150070369 | FAULT-TOLERANT PREEMPTION MECHANISM AT ARBITRARY CONTROL POINTS FOR GRAPHICS PROCESSING - This disclosure presents techniques and structures for preemption at arbitrary control points in graphics processing. A method of graphics processing may comprise executing commands in a command buffer, the commands operating on data in a read-modify-write memory resource, double buffering the data in the read-modify-write memory resource, such that a first buffer stores original data of the read-modify-write memory resource and a second buffer stores any modified data produced by executing the commands in the command buffer, receiving a request to preempt execution of the commands in the command buffer before completing all commands in the command buffer, and restarting execution of the commands at the start of the command buffer using the original data in the first buffer. | 03-12-2015 |
20150089146 | CONDITIONAL PAGE FAULT CONTROL FOR PAGE RESIDENCY - The present disclosure provides for systems and methods to process a non-resident page that may include attempting to access the non-resident page, an address for the non-resident page pointing to a memory page containing default values, determining that the non-resident page should not cause a page fault based on an indicator indicating that a particular non-resident page should not generate a page fault, returning an indication that a memory read did not translate and returning the default value when the access of the non-resident page is a read and the non-resident page should not cause a page fault. Another example may discontinue a write when the access of the non-resident page is a write and the non-resident page should not cause a page fault. | 03-26-2015 |
20150109293 | SELECTIVELY MERGING PARTIALLY-COVERED TILES TO PERFORM HIERARCHICAL Z-CULLING - This disclosure describes techniques for performing hierarchical z-culling in a graphics processing system. In some examples, the techniques for performing hierarchical z-culling may involve selectively merging partially-covered source tiles for a tile location into a fully-covered merged source tile based on whether conservative farthest z-values for the partially-covered source tiles are nearer than a culling z-value for the tile location, and using a conservative farthest z-value associated with the fully-covered merged source tile to update the culling z-value for the tile location. In further examples, the techniques for performing hierarchical z-culling may use a cache unit that is not associated with an underlying memory to store conservative farthest z-values and coverage masks for merged source tiles. The capacity of the cache unit may be smaller than the size of cache needed to store merged source tile data for all of the tile locations in a render target. | 04-23-2015 |
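The tile-merge step in 20150109293 above can be sketched compactly: partially-covered source tiles whose conservative farthest z is nearer than the current culling z are OR-ed together, and only a fully-covered merged tile may tighten the culling value. This is a hypothetical model assuming a 4-bit coverage mask for a 2×2 tile and a depth convention where larger z is farther (all names invented):

```python
FULL_MASK = 0b1111  # hypothetical 2x2 tile: four coverage bits

def merge_tiles(tiles, cull_z):
    """Merge partially-covered source tiles for one tile location.
    Each tile is (coverage_mask, farthest_z). Tiles whose conservative
    farthest z is not nearer than the current culling z are skipped,
    since they cannot tighten the culling value."""
    mask, far_z = 0, 0.0
    for coverage, z in tiles:
        if z >= cull_z:            # not nearer: cannot help culling
            continue
        mask |= coverage           # accumulate coverage
        far_z = max(far_z, z)      # conservative: keep the farthest z seen
    if mask == FULL_MASK:
        return far_z               # fully covered: new, nearer culling z
    return cull_z                  # still partial: keep the old culling z
```

Keeping the farthest z of the merged contributors is what makes the update conservative: any fragment behind that value is guaranteed hidden regardless of which contributor covers its sample.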
20130060474 | ESTIMATION OF PETROPHYSICAL AND FLUID PROPERTIES USING INTEGRAL TRANSFORMS IN NUCLEAR MAGNETIC RESONANCE - Apparatus and method for characterizing a subterranean formation, including observing a formation using nuclear magnetic resonance measurements, calculating an answer product by computing an integral transform on the indications in the measurement domain, and using the answer products to estimate a property of the formation. Apparatus and method for characterizing a subterranean formation, including collecting NMR data of a formation and calculating an answer product comprising the data, wherein the calculating comprises a formula | 03-07-2013 |
20130179083 | ESTIMATIONS OF NUCLEAR MAGNETIC RESONANCE MEASUREMENT DISTRIBUTIONS - A nuclear magnetic resonance (NMR) related distribution is estimated that is consistent with NMR measurements and uses linear functionals directly estimated from the measurement indications by integral transforms as constraints in a cost function. The cost function includes indications of the measurement data, Laplace transform elements and the constraints, and a distribution estimation is made by minimizing the cost function. The distribution estimation may be used to find parameters of the sample. Where the sample is a rock or a formation, the parameters may include parameters such as rock permeability and/or hydrocarbon viscosity, bound and free fluid volumes, among others. The parameters may be used in models, equations, or otherwise to act on the sample, such as in recovering hydrocarbons from the formation. | 07-11-2013 |
20150177351 | METHODS OF INVESTIGATING FORMATION SAMPLES USING NMR DATA - Methods are provided for investigating a sample containing hydrocarbons by subjecting the sample to a nuclear magnetic resonance (NMR) sequence using NMR equipment, using the NMR equipment to detect signals from the sample in response to the NMR sequence, analyzing the signals to extract a distribution of relaxation times (or diffusions), and computing a value for a parameter of the sample as a function of at least one of the relaxation times (or diffusions), wherein the computing utilizes a correction factor that modifies the value for the parameter as a function of relaxation time for at least short relaxation times (or as a function of diffusion for at least large diffusion coefficients). | 06-25-2015 |
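The NMR abstracts above (20130060474, 20130179083) compute "answer products" as integral transforms applied directly to measurement-domain data, avoiding a full inversion for the relaxation-time distribution. As a heavily simplified illustration of that idea — not the patented transforms — the simplest such linear functional is the amplitude-weighted mean T2: for a decay M(t) = Σⱼ fⱼ·exp(−t/T2ⱼ), the initial amplitude M(0) gives Σⱼ fⱼ and the time integral gives Σⱼ fⱼ·T2ⱼ, so their ratio yields the mean T2 without ever solving for the fⱼ. Function names here are invented:

```python
def trapezoid(ts, ys):
    """Trapezoidal integral of sampled data."""
    return sum((ys[i] + ys[i + 1]) * (ts[i + 1] - ts[i]) / 2.0
               for i in range(len(ts) - 1))

def mean_t2(ts, decay):
    """For M(t) = sum_j f_j * exp(-t/T2_j):
      M(0)            = sum_j f_j          (total amplitude)
      integral M dt   = sum_j f_j * T2_j   (T2-weighted amplitude)
    so their ratio is the amplitude-weighted mean T2 -- a linear
    functional of the distribution, computed directly from the
    measurement-domain data without inverting for the distribution.
    Assumes ts[0] == 0 and that the decay is sampled out far enough
    for the truncated tail to be negligible."""
    amplitude = decay[0]
    area = trapezoid(ts, decay)
    return area / amplitude
```

The patented transforms generalize this: carefully chosen kernels yield other petrophysical quantities (bound/free fluid volumes, moments of the distribution) as similar direct functionals of the measured decay.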
20080249534 | METHOD AND DEVICE FOR DISTENDING A GYNECOLOGICAL CAVITY - Method and device for distending a gynecological cavity. According to one embodiment, a mechanical, non-fluid device is used to distend the gynecological cavity. Such devices include, for example, self-expanding members, such as resilient baskets, coils, whisks, prongs, and loops, or mechanically expanded members, such as inflatable balloons, mechanically-expanded cages and loops, and scissor jacks. The device may serve a purpose in addition to distension, such as illumination, imaging, irrigation, drug delivery, resection and cauterization. | 10-09-2008 |
20110054488 | SYSTEMS AND METHODS FOR PREVENTING INTRAVASATION DURING INTRAUTERINE PROCEDURES - Systems, methods, apparatus and devices for performing improved gynecologic and urologic procedures are disclosed. Patient benefit is achieved through improved outcomes, reduced pain, especially peri-procedural pain, and reduced recovery times. The various embodiments enable procedures to be performed outside the hospital setting, such as in a doctor's office or clinic. Distension is achieved mechanically, rather than with liquid distension media, thereby eliminating the risk of intravasation. | 03-03-2011 |