Patent application number | Description | Published |
20080297528 | METHOD AND SYSTEM FOR PROCESSING TEXTURE SAMPLES WITH PROGRAMMABLE OFFSET POSITIONS - A method and system for performing a texture operation with user-specified offset positions are disclosed. Specifically, one embodiment of the present invention sets forth a method, which includes the steps of deriving a first destined texel position based on an original sample position associated with a pixel projected in a texture map and a first offset position specified by a user and fetching texel attributes at the first destined texel position for the texture operation. | 12-04-2008 |
20100118043 | RECONFIGURABLE HIGH-PERFORMANCE TEXTURE PIPELINE WITH ADVANCED FILTERING - Circuits, methods, and apparatus that provide texture caches and related circuits that store and retrieve texels in a fast and efficient manner. One such texture circuit provides an increased number of bilerps for each pixel in a group of pixels, particularly when trilinear or aniso filtering is needed. For trilinear filtering, texels in a first and second level of detail are retrieved for a number of pixels during a clock cycle. When aniso filtering is performed, multiple bilerps can be retrieved for each of a number of pixels during one clock cycle. | 05-13-2010 |
20110078381 | Cache Operations and Policies For A Multi-Threaded Client - A method for managing a parallel cache hierarchy in a processing unit. The method including receiving an instruction that includes a cache operations modifier that identifies a level of the parallel cache hierarchy in which to cache data associated with the instruction; and implementing a cache replacement policy based on the cache operations modifier. | 03-31-2011 |
20110082961 | Sharing Data Crossbar for Reads and Writes in a Data Cache - The invention sets forth an L1 cache architecture that includes a crossbar unit configured to transmit data associated with both read data requests and write data requests. Data associated with read data requests is retrieved from a cache memory and transmitted to the client subsystems. Similarly, data associated with write data requests is transmitted from the client subsystems to the cache memory. To allow for the transmission of both read and write data on the crossbar unit, an arbiter is configured to schedule the crossbar unit transmissions as well as arbitrate between data requests received from the client subsystems. | 04-07-2011 |
20110292065 | RECONFIGURABLE DUAL TEXTURE PIPELINE WITH SHARED TEXTURE CACHE - Circuits, methods, and apparatus that provide texture caches and related circuits that store and retrieve texels in an efficient manner. One such texture circuit can provide a configurable number of texel quads for a configurable number of pixels. For bilinear filtering, texels for a comparatively greater number of pixels can be retrieved. For trilinear filtering, texels in a first LOD are retrieved for a number of pixels during a first clock cycle; during a second clock cycle, texels in a second LOD are retrieved. When aniso filtering is needed, a greater number of texels can be retrieved for a comparatively lower number of pixels. | 12-01-2011 |
20130169651 | TEXTURE PIPELINE CONTEXT SWITCH - Circuits, methods, and apparatus that perform a context switch quickly while not wasting a significant amount of in-progress work. A texture pipeline includes a cutoff point or stage. After receipt of a context switch instruction, texture requests and state updates above the cutoff point are stored in a memory, while those below the cutoff point are processed before the context switch is completed. After this processing is complete, global states in the texture pipeline are stored in the memory. A previous context may then be restored by reading its texture requests and global states from the memory and loading them into the texture pipeline. The location of the cutoff point can be a point in the pipeline where a texture request can no longer result in a page fault in the memory. | 07-04-2013 |
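The abstracts above all build on bilinear texel filtering, and 20080297528 specifically adds user-programmable offset positions to the sample location. The sketch below is a minimal software illustration of that idea, not the patented hardware; the function names, clamp-to-edge addressing, and half-texel centering convention are assumptions for the example.

```python
import math

def fetch_texel(texture, x, y):
    # Clamp-to-edge addressing; `texture` is a 2D list of scalar texels.
    h, w = len(texture), len(texture[0])
    return texture[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]

def bilinear_sample(texture, u, v, offset=(0.0, 0.0)):
    # Sample at (u, v) shifted by a caller-supplied offset, then blend
    # the surrounding 2x2 texel quad by the fractional position.
    x = u + offset[0] - 0.5
    y = v + offset[1] - 0.5
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0
    t00 = fetch_texel(texture, x0,     y0)
    t10 = fetch_texel(texture, x0 + 1, y0)
    t01 = fetch_texel(texture, x0,     y0 + 1)
    t11 = fetch_texel(texture, x0 + 1, y0 + 1)
    top = t00 * (1 - fx) + t10 * fx
    bottom = t01 * (1 - fx) + t11 * fx
    return top * (1 - fy) + bottom * fy
```

A nonzero `offset` shifts which texel quad is fetched and blended, which is the effect the programmable-offset texture operation exposes to the shader author.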
Patent application number | Description | Published |
20110078367 | CONFIGURABLE CACHE FOR MULTIPLE CLIENTS - One embodiment of the present invention sets forth a technique for providing an L1 cache that is a central storage resource. The L1 cache services multiple clients with diverse latency and bandwidth requirements. The L1 cache may be reconfigured to create multiple storage spaces, enabling the L1 cache to replace dedicated buffers, caches, and FIFOs in previous architectures. A “direct mapped” storage region that is configured within the L1 cache may replace dedicated buffers, FIFOs, and interface paths, allowing clients of the L1 cache to exchange attribute and primitive data. The direct mapped storage region may be used as a global register file. A “local and global cache” storage region configured within the L1 cache may be used to support load/store memory requests to multiple spaces. These spaces include global, local, and call-return stack (CRS) memory. | 03-31-2011 |
20150084975 | LOAD/STORE OPERATIONS IN TEXTURE HARDWARE - Approaches are disclosed for performing memory access operations in a texture processing pipeline having a first portion configured to process texture memory access operations and a second portion configured to process non-texture memory access operations. A texture unit receives a memory access request. The texture unit determines whether the memory access request includes a texture memory access operation. If the memory access request includes a texture memory access operation, then the texture unit processes the memory access request via at least the first portion of the texture processing pipeline; otherwise, the texture unit processes the memory access request via at least the second portion of the texture processing pipeline. One advantage of the disclosed approach is that the same processing and cache memory may be used for both texture operations and load/store operations to various other address spaces, leading to reduced surface area and power consumption. | 03-26-2015 |
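The dispatch decision described in 20150084975 (route a request down the texture portion or the load/store portion of a shared pipeline) reduces to a simple predicate in software. The request fields and handler names below are illustrative assumptions, not terms from the application text.

```python
def process_texture(req):
    # Stand-in for the texture-processing portion of the pipeline.
    return ("texture-portion", req["addr"])

def process_load_store(req):
    # Stand-in for the non-texture (load/store) portion.
    return ("load-store-portion", req["addr"])

def route_request(req):
    # Dispatch on whether the request is a texture memory access,
    # mirroring the two-portion split described in the abstract.
    if req["is_texture"]:
        return process_texture(req)
    return process_load_store(req)
```

The payoff claimed in the abstract is that both paths sit behind one cache and one set of datapaths, so the routing predicate, not duplicated hardware, is what separates the two classes of request.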
Patent application number | Description | Published |
20140267315 | MULTI-SAMPLE SURFACE PROCESSING USING ONE SAMPLE - A system, method, and computer program product are provided for multi-sample processing. The multi-sample pixel data is received and an encoding state associated with the multi-sample pixel data is determined. Data for one sample of a multi-sample pixel and the encoding state are provided to a processing unit. The one sample of the multi-sample pixel is processed by the processing unit to generate processed data for the one sample that represents processed multi-sample pixel data for all samples of the multi-sample pixel or two or more samples of the multi-sample pixel. | 09-18-2014 |
20140267356 | MULTI-SAMPLE SURFACE PROCESSING USING SAMPLE SUBSETS - A system, method, and computer program product are provided for multi-sample processing. The multi-sample pixel data is received and is analyzed to identify subsets of samples of a multi-sample pixel that have equal data, such that data for one sample in a subset represents multi-sample pixel data for all samples in the subset. An encoding state is generated that indicates which samples of the multi-sample pixel are included in each one of the subsets. | 09-18-2014 |
20140267376 | SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR ACCESSING MULTI-SAMPLE SURFACES - A system, method, and computer program product are provided for accessing multi-sample surfaces. A multi-sample store instruction that specifies data for a single sample of a multi-sample pixel and a sample mask is received and the data for the single sample is stored to each sample of the multi-sample pixel that is enabled according to the sample mask. A multi-sample load instruction that specifies a multi-sample pixel is received, and, in response to executing the multi-sample load instruction, data for one sample of the multi-sample pixel is received. A determination is made that the data for the one sample of the multi-sample pixel represents multi-sample pixel data for at least one additional sample of the multi-sample pixel. | 09-18-2014 |
20150054836 | SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR REDISTRIBUTING A MULTI-SAMPLE PROCESSING WORKLOAD BETWEEN THREADS - A system, method, and computer program product are provided for redistributing multi-sample processing workloads between threads. A workload for a plurality of multi-sample pixels is received and each thread in a parallel thread group is associated with a corresponding multi-sample pixel of the plurality of pixels. The workload is redistributed between the threads in the parallel thread group based on a characteristic of the workload and the workload is processed by the parallel thread group. In one embodiment, the characteristic is rasterized coverage information for the plurality of multi-sample pixels. | 02-26-2015 |
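The encoding state running through the four abstracts above groups samples of a multi-sample pixel that carry equal data, so one stored value plus a sample mask can stand in for a whole subset (20140267356, 20140267376). Here is a minimal software analogue under the assumption that bit *i* of the mask corresponds to sample *i*; the function names are invented for the example.

```python
def encode_samples(samples):
    # Group equal sample values of one multi-sample pixel into subsets,
    # returning (value, bitmask) pairs: one stored value per subset.
    subsets = []
    for i, s in enumerate(samples):
        for j, (value, mask) in enumerate(subsets):
            if s == value:
                subsets[j] = (value, mask | (1 << i))
                break
        else:
            subsets.append((s, 1 << i))
    return subsets

def decode_samples(subsets, num_samples):
    # Expand (value, mask) pairs back into a full per-sample list.
    out = [None] * num_samples
    for value, mask in subsets:
        for i in range(num_samples):
            if mask & (1 << i):
                out[i] = value
    return out
```

A fully covered pixel collapses to a single (value, mask) pair, which is the case where processing one sample per pixel suffices for all samples.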
Patent application number | Description | Published |
20140257156 | SYSTEMS, METHODS, AND DEVICES FOR AUTOMATIC CLOSURE OF MEDICAL DEVICES - According to an embodiment, a brace may include a motorized tensioning device, a tensioning member operationally coupled with the motorized tensioning device to tighten the brace about the limb, and a control unit communicatively coupled with the motorized tensioning device to control adjustment of a tension of the tensioning member. A method for providing therapy with the brace fitted about a limb may include communicating a first instruction from the control unit to the motorized tensioning device to adjust the tension of the tensioning member according to a therapeutic regimen that is designed to aid in recovery of the limb via repetitive movement of the limb. | 09-11-2014 |
20150059206 | GUIDES AND COMPONENTS FOR CLOSURE SYSTEMS AND METHODS THEREFOR - According to an embodiment, a component for attachment to an article includes an upper component that is made of a thermoplastic material having a first melting temperature and a flange member that is molded onto the upper component and made of a thermoplastic elastomer material having a second melting temperature that is lower than the first melting temperature of the upper component. The flange member extends laterally from a bottom end of the upper component so that a bottom surface of the flange member is flush with or positioned axially below a bottom surface of the upper component. The melting temperature of the thermoplastic elastomer material enables the flange member to be directly coupled to the article via heat welding and the like without substantially affecting the upper component. | 03-05-2015 |
20150151070 | CLOSURE METHODS AND DEVICES FOR HEAD RESTRAINTS AND MASKS - A system for securing a mask about a user's face includes a mask that is configured to fit about the user's face, a padded member that is positionable on the back of the user's head, and at least one strap that extends from the padded member toward the mask. The system also includes a closure system that is coupled with the mask and with the strap. The closure system includes a tension member, a guide that routes the tension member about the mask and/or the strap, and a tensioning device that is operable to tension the tension member and thereby pull the strap and padded member toward the mask to secure and/or tighten the mask about the user's face. | 06-04-2015 |
20160058127 | DEVICES AND METHODS FOR ENHANCING THE FIT OF BOOTS AND OTHER FOOTWEAR - A closure system for a boot or other footwear includes a tension member that is disposed within the boot and routed or guided about a path within the boot via one or more guides. The closure system also includes an adjustment member that is disposed within the boot and operably coupled with the tension member. The closure system further includes a reel based closure device having a knob that is operable to tension the tension member and to release tension from the tension member. Tensioning of the tension member adjusts a fit of the adjustment member about a foot within the boot to secure the foot within the boot and loosening of the tension member adjusts the fit of the adjustment member about the foot to allow the foot to be more easily removed from the boot. | 03-03-2016 |
20160058130 | MULTI-PURPOSE CLOSURE SYSTEM - A reel-based mechanism for tightening footwear includes a tension member and a plurality of guide members that are positioned about an opening of the footwear. The plurality of guide members guide or direct the tension member about a path along the footwear. The reel-based mechanism further includes a tightening mechanism that is operationally coupled with the tension member to effect tensioning of the tension member and tightening of the footwear upon operation of the tightening mechanism. The tightening mechanism performs one or more secondary functions that are not related to tightening of the footwear. | 03-03-2016 |
Patent application number | Description | Published |
20090010187 | System and Method for an Adaptive Access Point Mode - A system for an adaptive access point mode includes (a) a switch disposed within a network, the network comprising at least one virtual local area network; (b) an anchor access point disposed in the at least one virtual local area network, the anchor access point connected to the switch via a data path, the anchor access point configured to receive a broadcast data packet from the switch via the data path; and (c) at least one access point connected to the anchor access point via a local data path to receive the broadcast data packet from the anchor access point via the local data path. The anchor access point and the access points further forward the broadcast data packet to other devices connected thereto. | 01-08-2009 |
20140068622 | PACKET PROCESSING ON A MULTI-CORE PROCESSOR - A method for packet processing on a multi-core processor. According to one embodiment of the invention, a first set of one or more processing cores are configured to include the capability to process packets belonging to a first set of one or more packet types, and a second set of one or more processing cores are configured to include the capability to process packets belonging to a second set of one or more packet types, where the second set of packet types is a subset of the first set of packet types. Packets belonging to the first set of packet types are processed at a processing core of either the first or second set of processing cores. Packets belonging to the second set of packet types are processed at a processing core of the first set of processing cores. | 03-06-2014 |
20140359764 | REASSEMBLY-FREE DEEP PACKET INSPECTION ON MULTI-CORE HARDWARE - Some embodiments of reassembly-free deep packet inspection (DPI) on multi-core hardware have been presented. In one embodiment, a set of packets of one or more files is received at a networked device from one or more connections. Each packet is scanned using one of a set of processing cores in the networked device without buffering the one or more files in the networked device. Furthermore, the set of processing cores may scan the packets substantially concurrently. | 12-04-2014 |
20140369234 | Method And Apparatus For Scanning And Device Detection In A Communication System - In a communication system wherein a plurality of electronic devices connect and disconnect from communication over a medium and wherein the communication system has a protocol such that it is followed by the plurality of electronic devices when using the communication system, a probing device attempts to detect presence of a listening device and parameters associated with a connection to be set up between the probing device and the listening device by sending a probe request packet directed to the listening device and sending, from the listening device, a probe response packet in response to the probe request packet, wherein the listening device bypasses at least one step of the protocol when sending the probe response packet. The bypassed step might be medium arbitration, the communication system might be a wireless network fully or partially based on an 802.11x specification, or a wireless network that uses 802.11x frame formatting and/or modifications/extensions thereof. | 12-18-2014 |
20150348158 | REPEAT-ORDERING SYSTEMS AND METHODS - A repeat-ordering service may facilitate convenient “one-touch” ordering of a particular good and/or service. In one embodiment, a special-purpose signaling device may be configured to be associated with a fulfillment profile describing one or more particular goods and/or services, as well as payment and fulfillment information. Such a signaling device may communicate wirelessly with a repeat-ordering service and may be physically configured to be placed in a locale that is near to where a consumer would typically become aware of a need for the particular goods and/or services. | 12-03-2015 |
20160026516 | PACKET PROCESSING ON A MULTI-CORE PROCESSOR - A method for packet processing on a multi-core processor. According to one embodiment of the invention, a first set of one or more processing cores are configured to include the capability to process packets belonging to a first set of one or more packet types, and a second set of one or more processing cores are configured to include the capability to process packets belonging to a second set of one or more packet types, where the second set of packet types is a subset of the first set of packet types. Packets belonging to the first set of packet types are processed at a processing core of either the first or second set of processing cores. Packets belonging to the second set of packet types are processed at a processing core of the first set of processing cores. | 01-28-2016 |
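In 20140068622 and 20160026516, each core set is configured with the packet types it can process, and a packet may land on any core whose capability set covers its type. A minimal sketch of that eligibility check follows; the mapping structure and type names are assumptions for illustration.

```python
def eligible_cores(packet_type, core_capabilities):
    # Return the cores able to process `packet_type`.
    # `core_capabilities` maps core id -> set of packet types it handles.
    return sorted(core for core, types in core_capabilities.items()
                  if packet_type in types)
```

For example, if cores 0 and 1 are configured with the full (first) set of packet types and cores 2 and 3 with only a subset, a packet of a subset type is eligible for any core, while a packet of a type outside the subset is eligible only for the fully configured cores.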