Patent application number | Description | Published |
20140129690 | CUSTOM RESOURCES IN A RESOURCE STACK - A resource stack managed by a resource stack provider is created based on a resource stack template that integrates a custom resource from a second provider into the resource stack using a notification system with the second provider. For example, a customer may create a template that defines a resource stack that comprises resources available from the resource stack provider and one or more custom resources provided by a second provider. When a resource stack is created, resources available from the resource stack provider may be provisioned. Custom resources may be initialized by notifying the provider of the custom resource of the requested integration of the custom resource with the resource stack and requested configuration details. The custom resource provider may respond with an indication of successful integration when the custom resource has been successfully initialized. After initializing the resources, the resource stack may be enabled for use. | 05-08-2014 |
20140365668 | USING A TEMPLATE TO UPDATE A STACK OF RESOURCES - Techniques are described that enable a user to upgrade a stack of resources by providing a template that reflects the desired end state of the stack. Upon receiving a new template, the system automatically detects the changes that should be performed and determines the order in which they should be performed. The system can also detect whether the desired changes to the stack result from direct modifications, from changed resource parameters, or indirectly from changes to other dependencies or attributes. Additionally, the system determines whether the changes require creating new resources or whether the changes can be made to the resources live. In the case of replacement of resources, the system will first create the new resource, move that new resource into the stack and remove the old resource(s). In case of failures, the system ensures that the stack rolls back to the initial state. | 12-11-2014 |
20150150081 | TEMPLATE REPRESENTATION OF SECURITY RESOURCES - Systems and methods are described for enabling users to model security resources and user access keys as resources in a template language. The template can be used to create and update a stack of resources that will provide a network-accessible service. The security resources and access keys can be referred to in the template during both the stack creation and stack update processes. The security resources can include users, groups and policies. Additionally, users can refer to access keys in the template as dynamic parameters without any need to refer to the access keys in plaintext. The system securely stores access keys within the system and allows templates to refer to them once defined. These key references can then be passed within a template to the resources that need them, and passed on securely to resources such as server instances through the use of the user-data field. | 05-28-2015 |
20150288618 | CUSTOM RESOURCES IN A RESOURCE STACK - A resource stack managed by a resource stack provider is created based on a resource stack template that integrates a custom resource from a second provider into the resource stack using a notification system with the second provider. For example, a customer may create a template that defines a resource stack that comprises resources available from the resource stack provider and one or more custom resources provided by a second provider. When a resource stack is created, resources available from the resource stack provider may be provisioned. Custom resources may be initialized by notifying the provider of the custom resource of the requested integration of the custom resource with the resource stack and requested configuration details. The custom resource provider may respond with an indication of successful integration when the custom resource has been successfully initialized. After initializing the resources, the resource stack may be enabled for use. | 10-08-2015 |
20160132313 | CANCEL AND ROLLBACK UPDATE STACK REQUESTS - Techniques for cancel and rollback of update stack requests are disclosed herein. After receiving a request to cancel and roll back an update request for a computer system, one or more computer resources within the computer system invoke one or more computer system capabilities at least to cancel computer system operations to update the computer system. When the computer system operations to update the computer system are cancelled, one or more computer resources within the computer system invoke one or more computer system capabilities at least to roll back the computer system to a previous good state. | 05-12-2016 |
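The template-driven update described in the second abstract above hinges on diffing an old template against a new one and deciding, per resource, whether the change can be made live or requires replacement. The sketch below is purely illustrative and not taken from any of these patents; the resource shapes and the `REPLACE_ON_CHANGE` property set are invented for the example.

```python
# Hypothetical sketch of a stack-template diff: classify each resource as
# created, deleted, updated in place, or replaced (create new, swap in,
# delete old), as the update-stack abstract describes.

OLD = {"web": {"type": "Instance", "size": "small"},
       "db":  {"type": "Database", "engine": "mysql"}}
NEW = {"web": {"type": "Instance", "size": "large"},
       "cache": {"type": "Cache", "engine": "redis"}}

# Properties (invented here) that cannot change in place and force replacement.
REPLACE_ON_CHANGE = {"type", "engine"}

def diff_stack(old, new):
    plan = []
    for name in old.keys() - new.keys():
        plan.append(("delete", name))
    for name in new.keys() - old.keys():
        plan.append(("create", name))
    for name in old.keys() & new.keys():
        changed = {k for k in new[name] if new[name].get(k) != old[name].get(k)}
        if not changed:
            continue
        if changed & REPLACE_ON_CHANGE:
            plan.append(("replace", name))
        else:
            plan.append(("update_in_place", name))
    return sorted(plan)

print(diff_stack(OLD, NEW))
```

A real implementation would also order the plan by resource dependencies and keep the old resources around until the new ones succeed, so a failure can roll the stack back to its initial state.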
Patent application number | Description | Published |
20100070949 | PROCESS AND SYSTEM FOR ASSESSING MODULARITY OF AN OBJECT-ORIENTED PROGRAM - The present invention describes a process, system and computer program product for assessing the modularity of an object-oriented program. The process includes calculation of metrics associated with various properties of the object-oriented program. Analysis is performed on the basis of the calculated metrics. | 03-18-2010 |
20100073686 | CLUSTERING PROTOCOL FOR DIRECTIONAL SENSOR NETWORKS - A method and apparatus for tracking target objects includes a network of unidirectional sensors which correspond to nodes on the network. The sensors identify the presence of a target object, determine a first criterion for the relationship of the target object to the sensors and send a message to neighboring sensors. The message includes a unique identification of the target object and the first criterion. The sensors are ranked to determine which of the sensors should head a cluster of sensors for tracking the target object. Clusters are propagated and fragmented as the target object moves through a field of sensors. | 03-25-2010 |
20110310255 | CALIBRATION OF LARGE CAMERA NETWORKS - The present disclosure relates to a sensor network including a plurality of nodes, each node having a directional sensor, a communication module, and a processor configured to receive local measurements of a calibration object from the directional sensor, receive additional measurements of the calibration object from neighboring nodes via the communication module, estimate an initial set of calibration parameters in response to the local and additional measurements, receive additional sets of calibration parameters from neighboring nodes via the communication module, and recursively estimate an updated set of calibration parameters in response to the additional sets of calibration parameters. Additional systems and methods for calibrating a large network of camera nodes are disclosed. | 12-22-2011 |
20110317017 | PREDICTIVE DUTY CYCLE ADAPTATION SCHEME FOR EVENT-DRIVEN WIRELESS SENSOR NETWORKS - Embodiments of a method for controlling access to a shared communications medium by a plurality of nodes are disclosed. The method may comprise predicting, for each node of the plurality of nodes, whether an event will occur within a sensing field of that node at a future time and adapting a communications schedule of each node in response to the prediction regarding that node. Wireless sensor networks and computer readable media implementing embodiments of a method for controlling access to a shared communications medium by a plurality of nodes are also disclosed. | 12-29-2011 |
20130061211 | SYSTEMS, METHODS, AND COMPUTER-READABLE MEDIA FOR MEASURING QUALITY OF APPLICATION PROGRAMMING INTERFACES - Systems, methods, and computer-readable media for determining the quality of an API by one or more computing devices includes: receiving documentation of the API; determining, based on the documentation, values that include at least one of names of the methods, parameters of the methods, and functions of the methods; determining one or more measurement values including at least one of a complexity of the API, a consistency of the parameters of the API, a level of confusion of the API, logical method groupings of the API, a thread safety of the API, an exception notification of the API, and a documentation quality of the API; and specifying at least one quality metric for the API based on the measurement values. | 03-07-2013 |
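The clustering abstract above describes nodes exchanging `(target id, criterion)` messages and ranking themselves to pick a cluster head. A minimal sketch of that ranking step, assuming a lower criterion value (e.g. distance to the target) ranks higher, with a node-id tiebreak; none of the names here come from the patent:

```python
# Hypothetical cluster-head election: nodes that detected the same target
# report (node_id, criterion); the best-ranked node leads the cluster.

def elect_cluster_head(reports):
    """reports: list of (node_id, criterion) tuples for one target.
    A smaller criterion ranks higher; node_id breaks ties deterministically."""
    return min(reports, key=lambda r: (r[1], r[0]))[0]

reports = [("n3", 4.2), ("n1", 1.7), ("n7", 2.9)]
print(elect_cluster_head(reports))  # n1 has the smallest criterion
```

In the protocol as described, this election would rerun as the target moves and the reporting set changes, which is what propagates and fragments clusters across the sensor field.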
Patent application number | Description | Published |
20130013499 | ELECTRONIC WALLET CHECKOUT PLATFORM APPARATUSES, METHODS AND SYSTEMS - The ELECTRONIC WALLET CHECKOUT PLATFORM APPARATUSES, METHODS AND SYSTEMS (“EWCP”) transform customer purchase requests triggering electronic wallet applications via EWCP components into electronic purchase confirmation and receipts. In one implementation, the EWCP receives a merchant payment request, and determines a payment protocol handler associated with the merchant payment request. The EWCP instantiates a wallet application via the payment protocol handler. The EWCP obtains a payment method selection via the wallet application, wherein the selected payment method is one of a credit card, a debit card, or a gift card selected from an electronic wallet, and sends a transaction execution request for a transaction associated with the merchant payment request. Also, the EWCP receives a purchase response to the transaction execution request, and outputs purchase response information derived from the received purchase response. | 01-10-2013 |
20130086258 | MONITORING AND LIMITING REQUESTS TO ACCESS SYSTEM RESOURCES - Systems, apparatuses and methods for preventing requests to access a system's resources from having a negative impact on higher priority data processing operations being performed by the system. The invention is directed to preventing the number of calls made by a merchant's applications through an application programming interface (API) for access to the lower priority services of a merchant service provider from having a negative impact on the ability of the service provider to perform the processing necessary to support higher priority services. The invention provides a user interface that may be used by a merchant or the service provider to configure the operation of a “throttle” that is designed to generate an alert when the number of calls by a merchant application for access to a specific service provider function or application exceeds a value or limit, where exceeding the value or limit may indicate a malfunction of the merchant's application or an attack by a malicious agent. | 04-04-2013 |
20130205370 | MOBILE HUMAN CHALLENGE-RESPONSE TEST - Methods and systems for verifying whether a user requesting an online account is likely a human or an automated program are described. A request for an online account may be received from a mobile device. A human challenge-response test adapted for displaying on a mobile device is displayed on the mobile device. Upon viewing the human challenge-response test, the user enters the user's solution to the human challenge-response test on the mobile device. A response hash value is created based on the user's solution. The response hash value is sent to an account request server for verification. | 08-08-2013 |
20140025538 | Dual Encoding of Machine Readable Code for Automatic Scan-Initiated Purchase or Uniform Resource Locator Checkout - Embodiments of the invention are directed to systems and methods for allowing a single representation of a trigger for payment across different environments using machine readable codes. A machine readable code may be encoded with a URL and information about a product to which the machine readable code is attached. A first electronic device may be able to scan and decode the machine readable code into first recognizable product information using a compliant application. The compliant application may populate a first form on the first electronic device for a first transaction with the recognizable product information without having to access a server. A non-compliant application on a second electronic device will launch a browser using the URL and provide the unrecognizable product information to the server for decoding. The server will decode the unrecognizable product information into a second recognizable product information that may be used to populate a second form for a second transaction. | 01-23-2014 |
20150019944 | HYBRID APPLICATIONS UTILIZING DISTRIBUTED MODELS AND VIEWS APPARATUSES, METHODS AND SYSTEMS - The HYBRID APPLICATIONS UTILIZING DISTRIBUTED MODELS AND VIEWS APPARATUSES, METHODS AND SYSTEMS (“HAP”) transform hybrid application user inputs using HAP components into web-view secured data populated application views. In some implementations, the disclosure provides a processor-implemented method of providing distributed model views utilizing a hybrid application environment. | 01-15-2015 |
20150195289 | MOBILE HUMAN CHALLENGE-RESPONSE TEST - Methods and systems for verifying whether a submission of a request is likely from a human user or an automated program are described. A request may be received from a user device. A human challenge-response test adapted for displaying on the user device is displayed on the user device. Upon viewing the human challenge-response test, the user enters the user's solution to the human challenge-response test on the user device. A response hash value is created based on the user's solution. The response hash value is sent to a computing device for verification. | 07-09-2015 |
20150199682 | SYSTEMS AND METHODS FOR MERCHANT MOBILE ACCEPTANCE - Systems and methods are provided for merchant mobile acceptance of user device data. For example, a method comprises receiving encrypted user device data and reader metadata from a merchant mobile device, determining a device reader API and device reader encryption scheme using the device reader metadata, parsing the encrypted user device data using the device reader API to determine encrypted personal information, and decrypting the encrypted personal information using the reader encryption scheme. | 07-16-2015 |
20150302397 | ENCRYPTED PAYMENT TRANSACTIONS - Systems, apparatuses, and methods are provided for conducting encrypted payment transactions. In some embodiments, a payment device may send account credentials for a digital wallet to a wallet provider computer, and receive encrypted payment data from the wallet provider computer in response. The payment device may then send a request to initiate a transaction to a transaction processor computer (e.g., a merchant computer or a merchant processor computer), the request to initiate the transaction including the encrypted payment data. The transaction processor computer can then decrypt the encrypted payment data and conduct the transaction. | 10-22-2015 |
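The monitoring-and-limiting abstract in this group describes a configurable "throttle" that alerts when a merchant application's calls to a service-provider function exceed a limit. A minimal sliding-window sketch of that idea, assumed rather than taken from the patent (the class name, limit, and window are invented for illustration):

```python
# Illustrative per-function call throttle: count calls within a sliding time
# window and flag when a configured limit is exceeded, which may indicate a
# malfunctioning merchant application or a malicious agent.

import time
from collections import defaultdict, deque

class CallThrottle:
    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.calls = defaultdict(deque)  # function name -> call timestamps

    def record_call(self, function, now=None):
        """Record one API call; return True if the limit is now exceeded."""
        now = time.monotonic() if now is None else now
        q = self.calls[function]
        q.append(now)
        while q and now - q[0] > self.window:  # drop calls outside the window
            q.popleft()
        return len(q) > self.limit

throttle = CallThrottle(limit=3, window_seconds=60)
alerts = [throttle.record_call("order_status", now=t) for t in (0, 1, 2, 3)]
print(alerts)  # the fourth call inside the window trips the alert
```

Keying the counters by function name matches the abstract's per-function limits; a production throttle would also expose the limit and window through the configuration interface the abstract describes.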
Patent application number | Description | Published |
20130174343 | CAPACITIVE WIRE SENSING FOR FURNITURE - A system and method for incorporating presence-sensing technology into furniture is provided. More particularly, the invention relates to detecting presence using a metal, adjustable bed frame. The bed frame is pulsed with a voltage to provide a charge, against which capacitance is measured. A controller determines the corresponding response based on presence detection by the frame. Conductive bushings may also be used to measure capacitance using the bed frame. In further embodiments, capacitance is measured by a foil tape surrounding a perimeter of the adjustable bed. The foil tape has a voltage based on proximity of an object to the tape, and may be embedded with a capacitive wire. A processor receives information regarding changes in capacitance and determines when a change in voltage satisfies a threshold. Based on a determination of presence, or lack of presence, a variety of corresponding features of the adjustable bed may be activated. | 07-11-2013 |
20130247302 | OCCUPANCY DETECTION FOR FURNITURE - A system and method for incorporating occupancy-detecting technology into furniture is provided. More particularly, the invention relates to detecting occupancy using a detection pad coupled to a portion of a bed. The detection pad may include an aluminized polymer material, a metalized and/or conductive fabric, an aluminum sheet, a metal screen, an aluminum tape, a wire grid, or other metalized material or fabric. A controller determines the corresponding response based on single-occupancy or dual-occupancy detection by one or more detection pads. A processor receives information regarding changes in capacitance and determines when a change in voltage satisfies a threshold. Based on a determination of occupancy, or lack thereof, a variety of corresponding features of the adjustable bed may be activated. | 09-26-2013 |
20140302795 | USER IDENTIFICATION METHOD FOR AUTOMATED FURNITURE - A method of user identification in association with an automated furniture item is provided. In embodiments, a user identification method for an automated furniture item utilizes occupancy detection and proximity detection, such as via a BLE PXP. In some embodiments, a system associated with an automated furniture item is provided, which identifies a particular user's smart device (i.e., a device configured to connect to one or more other devices and/or networks, such as a tablet computing device or smartphone) within range of the automated furniture item controller, and generates a corresponding response based on occupancy detection of that particular user. In another embodiment, one or more environment features may be controlled and/or activated, in association with the automated furniture item, based on the coordinated response of both the proximity indication of user identity and the presence detection of a particular user with respect to the automated furniture item. | 10-09-2014 |
20150137833 | OCCUPANCY DETECTION FOR AUTOMATED RECLINER FURNITURE - A system and method for incorporating occupancy-detecting technology into furniture is provided. More particularly, the invention relates to detecting occupancy in a recliner using a sinuous wire detection array incorporated into a seat. Further embodiments of the invention are directed to a system and method for incorporating capacitance detection technology with one or more conductive features of a recliner mechanism. In some aspects, a sensor is provided based on coupling one or more conductive features to a control component of the capacitance detector. A controller may determine the corresponding response based on occupancy detection and/or presence detection. A processor may receive information regarding changes in capacitance and determine when a change in voltage satisfies a threshold. Based on a determination of occupancy and/or presence, a variety of corresponding features of the adjustable recliner may be activated. | 05-21-2015 |
20150137835 | CAPACITIVE SENSING FOR AUTOMATED RECLINER FURNITURE - A system and method for incorporating occupancy-detecting technology into furniture is provided. More particularly, the invention relates to detecting occupancy in a recliner using a sinuous wire detection array incorporated into a seat. Further embodiments of the invention are directed to a system and method for incorporating capacitance detection technology with one or more conductive features of a recliner mechanism. In some aspects, a sensor is provided based on coupling one or more conductive features to a control component of the capacitance detector. A controller may determine the corresponding response based on occupancy detection and/or presence detection. A processor may receive information regarding changes in capacitance and determine when a change in voltage satisfies a threshold. Based on a determination of occupancy and/or presence, a variety of corresponding features of the adjustable recliner may be activated. | 05-21-2015 |
20150327687 | CHARACTERIZATION AND CALIBRATION FOR AUTOMATED FURNITURE - A system and method for incorporating occupancy-detecting technology into furniture is provided. More particularly, the invention relates to detecting occupancy using a detection pad coupled to a portion of a bed. The detection pad may include an aluminized polymer material, a metalized and/or conductive fabric, an aluminum sheet, a metal screen, an aluminum tape, a wire grid, or other metalized material or fabric. A controller determines the corresponding response based on single-occupancy or dual-occupancy detection by one or more detection pads. A processor receives information regarding changes in capacitance and determines when a change in voltage satisfies a threshold. Based on a determination of occupancy, or lack thereof, a variety of corresponding features of the adjustable bed may be activated. | 11-19-2015 |
20160084487 | STANDALONE CAPACITANCE SENSOR FOR FURNITURE - A system and method for incorporating occupancy-detecting technology into furniture is provided. More particularly, the invention relates to detecting occupancy by a standalone capacitance detection device. The standalone capacitance detection device is configured for integration with any number of furniture items. Further, the detected capacitance may be used to determine commands for controlling a variety of devices associated with the standalone capacitance detection device. Additionally, methods for determining occupancy of a furniture item and a system for monitoring occupancy are described herein. | 03-24-2016 |
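The recurring step across this group is a processor receiving capacitance-derived voltage readings and declaring occupancy when the change from an unoccupied baseline satisfies a threshold. A minimal sketch of that check, with baseline and threshold values invented purely for illustration (real values would come from the characterization and calibration these abstracts describe):

```python
# Hypothetical threshold check for capacitive occupancy detection: compare
# each voltage sample against an unoccupied baseline and flag samples whose
# deviation meets the threshold.

BASELINE_V = 1.20   # unoccupied reference voltage (invented for the example)
THRESHOLD_V = 0.15  # minimum deviation indicating presence (invented)

def detect_occupancy(samples):
    """Return a per-sample occupancy decision from voltage readings."""
    return [abs(v - BASELINE_V) >= THRESHOLD_V for v in samples]

readings = [1.21, 1.22, 1.40, 1.43, 1.19]
print(detect_occupancy(readings))
```

A controller would then map the occupancy decisions to responses, e.g. activating features of the adjustable bed or recliner on a transition from unoccupied to occupied.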
Patent application number | Description | Published |
20130135322 | SWITCHING BETWEEN DIRECT RENDERING AND BINNING IN GRAPHICS PROCESSING USING AN OVERDRAW TRACKER - This disclosure presents techniques and structures for determining a rendering mode (e.g., a binning rendering mode and a direct rendering mode) as well as techniques and structures for switching between such rendering modes. Rendering mode may be determined by analyzing rendering characteristics. Rendering mode may also be determined by tracking overdraw in a bin. The rendering mode may be switched from a binning rendering mode to a direct rendering mode by patching commands that use graphics memory addresses to use system memory addresses. Patching may be handled by a CPU or by a second write command buffer executable by a GPU. | 05-30-2013 |
20130135329 | SWITCHING BETWEEN DIRECT RENDERING AND BINNING IN GRAPHICS PROCESSING - This disclosure presents techniques and structures for determining a rendering mode (e.g., a binning rendering mode and a direct rendering mode) as well as techniques and structures for switching between such rendering modes. Rendering mode may be determined by analyzing rendering characteristics. Rendering mode may also be determined by tracking overdraw in a bin. The rendering mode may be switched from a binning rendering mode to a direct rendering mode by patching commands that use graphics memory addresses to use system memory addresses. Patching may be handled by a CPU or by a second write command buffer executable by a GPU. | 05-30-2013 |
20130135341 | HARDWARE SWITCHING BETWEEN DIRECT RENDERING AND BINNING IN GRAPHICS PROCESSING - This disclosure presents techniques and structures for determining a rendering mode (e.g., a binning rendering mode and a direct rendering mode) as well as techniques and structures for switching between such rendering modes. Rendering mode may be determined by analyzing rendering characteristics. Rendering mode may also be determined by tracking overdraw in a bin. The rendering mode may be switched from a binning rendering mode to a direct rendering mode by patching commands that use graphics memory addresses to use system memory addresses. Patching may be handled by a CPU or by a second write command buffer executable by a GPU. | 05-30-2013 |
20130169642 | PACKING MULTIPLE SHADER PROGRAMS ONTO A GRAPHICS PROCESSOR - This disclosure describes techniques for packing multiple shader programs of a common shader program type onto a graphics processing unit (GPU). The techniques may include, for example, causing a plurality of shader programs of a common shader program type to be loaded into an on-chip shader program instruction memory of a graphics processor such that each shader program in the plurality of shader programs resides in the on-chip shader program instruction memory at a common point in time. In addition, various techniques for evicting shader programs from an on-chip shader program instruction memory are described. | 07-04-2013 |
20140146064 | GRAPHICS MEMORY LOAD MASK FOR GRAPHICS PROCESSING - Systems and methods are described including creating a mask that indicates which pixel groups do not need to be loaded from Graphics Memory (GMEM). The systems and methods may further include rendering a tile on a screen. This may include loading the GMEM based on the indication from the mask and skipping a load from the GMEM based on the indication from the mask. | 05-29-2014 |
20140184623 | REORDERING OF COMMAND STREAMS FOR GRAPHICAL PROCESSING UNITS (GPUs) - In general, techniques are described for analyzing a command stream that configures a graphics processing unit (GPU) to render one or more render targets. A device comprising a processor may perform the techniques. The processor may be configured to analyze the command stream to determine a representation of the one or more render targets defined by the command stream. The processor may also be configured to, based on the representation of the render targets, identify one or more rendering inefficiencies that will occur upon execution of the command stream by the GPU. The processor may also be configured to re-order one or more commands in the command stream so as to reduce the identified rendering inefficiencies that will occur upon execution of the command stream by the GPU. | 07-03-2014 |
20140198119 | RENDERING GRAPHICS DATA USING VISIBILITY INFORMATION - In some examples, aspects of this disclosure relate to a method for rendering an image. For example, the method includes generating visibility information indicating visible primitives of the image. The method also includes rendering the image using a binning configuration, wherein the binning configuration is based on the visibility information. | 07-17-2014 |
20140267074 | SYSTEM AND METHOD FOR VIRTUAL USER INTERFACE CONTROLS IN MULTI-DISPLAY CONFIGURATIONS - Methods, devices, and computer program products for virtual user interface controls in multi-display configurations are described herein. In one aspect, an electronic device includes a processor configured to generate a first image of the screen, the first image of the screen not containing a touch-sensitive user interface, generate a second image, the second image comprising a touch-sensitive user-interface configured to be overlayed onto the first image of the screen, transmit one or more of the first image of the screen and the second image to the first display device, and output the first image of the screen to a second display device. | 09-18-2014 |
20140267259 | TILE-BASED RENDERING - This disclosure describes techniques for using bounding regions to perform tile-based rendering with a graphics processing unit (GPU) that supports an on-chip, tessellation-enabled graphics rendering pipeline. Instead of generating binning data based on rasterized versions of the actual primitives to be rendered, the techniques of this disclosure may generate binning data based on a bounding region that encompasses one or more of the primitives to be rendered. Moreover, the binning data may be generated based on data that is generated by at least one tessellation processing stage of an on-chip, tessellation-enabled graphics rendering pipeline that is implemented by the GPU. The techniques of this disclosure may, in some examples, be used to improve the performance of an on-chip, tessellation-enabled GPU when performing tile-based rendering without sacrificing the quality of the resulting rendered image. | 09-18-2014 |
20140306971 | INTRA-FRAME TIMESTAMPS FOR TILE-BASED RENDERING - This disclosure describes techniques for supporting intra-frame timestamps in a graphics system that performs tile-based rendering. The techniques for supporting intra-frame timestamps may involve generating a timestamp value that is indicative of a point in time based on a plurality of per-bin timestamp values that are generated by a graphics processing unit (GPU) while performing tile-based rendering for a graphics frame. The timestamp value may be a function of at least two of the plurality of per-bin timestamp values. The timestamp value may be generated by a central processing unit (CPU), the GPU, another processor, or any combination thereof. By using per-bin timestamp values to generate timestamp values for intra-frame timestamp requests, intra-frame timestamps may be supported by a graphics system that performs tile-based rendering. | 10-16-2014 |
20140320512 | QUERY PROCESSING FOR TILE-BASED RENDERERS - Systems, methods, and apparatus for performing queries in a graphics processing system are disclosed. These systems, methods, and apparatus may be configured to read a running counter at the start of the query to determine a start value, wherein the running counter counts discrete graphical entities, read the running counter at the end of the query to determine an end value, and subtract the start value from the end value to determine a result. | 10-30-2014 |
20140354660 | COMMAND INSTRUCTION MANAGEMENT - Techniques are described for writing commands to memory units of a chain of memory units of a command buffer. The techniques may write the commands, and if during the writing, it is determined that there is not sufficient space in the chain of memory units, the techniques may flush previously confirmed commands. If after the writing, the techniques determine that there is not sufficient space in an allocation list for the handles associated with the commands, the techniques may flush previously confirmed commands. | 12-04-2014 |
20140354661 | CONDITIONAL EXECUTION OF RENDERING COMMANDS BASED ON PER BIN VISIBILITY INFORMATION WITH ADDED INLINE OPERATIONS - A GPU may determine, based on a visibility stream, whether to execute instructions stored in an indirect buffer. The instructions include instructions for rendering primitives associated with a bin of a plurality of bins and include one or more secondary operations. The visibility stream indicates whether one or more of the primitives associated with the bin will be visible in a finally rendered scene. The GPU may, responsive to determining not to execute the instructions stored in the indirect buffer, execute one or more secondary operations stored in a shadow indirect buffer. The GPU may, responsive to determining to execute the instructions stored in the indirect buffer, execute the instructions for rendering the primitives associated with the bin of the plurality of bins and execute the one or more secondary operations stored in the indirect buffer. | 12-04-2014 |
20150070369 | FAULT-TOLERANT PREEMPTION MECHANISM AT ARBITRARY CONTROL POINTS FOR GRAPHICS PROCESSING - This disclosure presents techniques and structures for preemption at arbitrary control points in graphics processing. A method of graphics processing may comprise executing commands in a command buffer, the commands operating on data in a read-modify-write memory resource, double buffering the data in the read-modify-write memory resource, such that a first buffer stores original data of the read-modify-write memory resource and a second buffer stores any modified data produced by executing the commands in the command buffer, receiving a request to preempt execution of the commands in the command buffer before completing all commands in the command buffer, and restarting execution of the commands at the start of the command buffer using the original data in the first buffer. | 03-12-2015 |
20150187117 | OPTIMIZED MULTI-PASS RENDERING ON TILED BASE ARCHITECTURES - The present disclosure provides systems and methods for multi-pass rendering on tile-based architectures, including executing, with a graphics processing unit (GPU), a query pass, executing, with the GPU, a condition true pass based on the query pass without executing a flush operation, executing, with the GPU, a condition false pass based on the query pass without executing a flush operation, and responsive to executing the condition true pass and the condition false pass, executing, with the GPU, a flush operation. | 07-02-2015 |
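The pass ordering above can be sketched as a command-stream recording (a hypothetical structure, not the claimed implementation): the query pass and both predicated passes are queued back to back with no intervening flushes, and a single flush follows only after both predicated passes.

```python
def record_passes(cmd_stream, query_pass, true_pass, false_pass):
    """Record a multi-pass sequence with one trailing flush."""
    cmd_stream.append(("query", query_pass))
    cmd_stream.append(("cond_true", true_pass))    # based on query, no flush here
    cmd_stream.append(("cond_false", false_pass))  # based on query, no flush here
    cmd_stream.append(("flush", None))             # single flush after both passes
    return cmd_stream
```

Deferring the flush until after both predicated passes is what makes the sequence cheap on a tiler, where each flush forces a full resolve of tile memory.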
20150302546 | FLEX RENDERING BASED ON A RENDER TARGET IN GRAPHICS PROCESSING - A device comprising a graphics processing unit (GPU) includes a memory and at least one processor. The at least one processor may be configured to: receive a GPU command packet that indicates the GPU may select between a direct rendering mode or a binning rendering mode for a portion of a frame to be rendered by the GPU, determine whether to use the direct rendering mode or the binning rendering mode for the portion of the frame to be rendered by the GPU based on at least one of: information in the received command packet or a state of the GPU, and render the portion of the frame using the determined direct rendering mode or the binning rendering mode. | 10-22-2015 |
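A sketch of the per-portion mode decision described above. The fallback heuristic here (whether the portion fits in on-chip bin memory) is an assumption for illustration; the abstract only says the decision uses the command packet or GPU state.

```python
def choose_rendering_mode(packet, gpu_state, binmem_bytes=512 * 1024):
    """Pick direct or binning rendering for a portion of a frame."""
    if packet.get("mode") in ("direct", "binning"):
        return packet["mode"]              # command packet dictates the mode
    # Otherwise fall back to GPU state: portions that fit in bin memory
    # render directly, larger ones go through the binning pass.
    return "direct" if gpu_state["portion_bytes"] <= binmem_bytes else "binning"
```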
20160055608 | RENDER TARGET COMMAND REORDERING IN GRAPHICS PROCESSING - In an example, a method for rendering graphics data includes receiving a plurality of commands associated with a plurality of render targets, where the plurality of commands are received in an initial order. The method also includes determining an execution order for the plurality of commands including reordering one or more of the plurality of commands in a different order than the initial order based on data dependencies between commands. The method also includes executing the plurality of commands in the determined execution order. | 02-25-2016 |
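An illustrative reordering pass for the abstract above: commands arrive in an initial submission order, and the execution order is recomputed so that commands group by render target wherever the data-dependency edges allow it. Grouping by render target as the tie-break key is an assumption for this sketch.

```python
from graphlib import TopologicalSorter

def reorder(commands, deps):
    """commands: list of (name, render_target) in initial order.
    deps: {name: set of names whose results it needs}.
    Returns an execution order respecting dependencies, grouped by target."""
    ts = TopologicalSorter({name: deps.get(name, set()) for name, _ in commands})
    target_of = dict(commands)
    order = []
    ts.prepare()
    while ts.is_active():
        # Among dependency-free commands, batch those touching the same
        # render target together to avoid needless target switches.
        for name in sorted(ts.get_ready(), key=lambda n: target_of[n]):
            order.append(name)
            ts.done(name)
    return order
```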
Patent application number | Description | Published |
20080250233 | Providing thread fairness in a hyper-threaded microprocessor - A method and apparatus for providing fairness in a multi-processing element environment is herein described. Mask elements are utilized to associate portions of a reservation station with each processing element, while still allowing common access to another portion of reservation station entries. Additionally, bias logic biases selection of processing elements in a pipeline away from a processing element associated with a blocking stall to provide fair utilization of the pipeline. | 10-09-2008 |
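The mask-element partitioning can be pictured with bitmasks (the layout below is an assumption for illustration): each thread owns a dedicated slice of reservation-station entries via its own mask, while a shared mask leaves the remaining entries open to either thread.

```python
def free_entries(thread_mask, shared_mask, occupied, num_entries=8):
    """Entries a thread may allocate: its own slice plus the shared slice,
    minus entries already occupied."""
    usable = (thread_mask | shared_mask) & ~occupied
    return [i for i in range(num_entries) if (usable >> i) & 1]
```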
20100169582 | Obtaining data for redundant multithreading (RMT) execution - In one embodiment, the present invention includes a method for providing a cache block in an exclusive state to a first cache and providing the same cache block in the exclusive state to a second cache when cores accessing the two caches are executing redundant threads. Other embodiments are described and claimed. | 07-01-2010 |
20100169628 | Controlling non-redundant execution in a redundant multithreading (RMT) processor - In one embodiment, the present invention includes a method for controlling redundant execution such that if an exceptional event occurs, the redundant execution is stopped, non-redundant execution is performed in one of the threads until the exceptional event has been resolved, after which a state of the threads is synchronized, and redundant execution is continued. Other embodiments are described and claimed. | 07-01-2010 |
20110055524 | PROVIDING THREAD FAIRNESS IN A HYPER-THREADED MICROPROCESSOR - A method and apparatus for providing fairness in a multi-processing element environment is herein described. Mask elements are utilized to associate portions of a reservation station with each processing element, while still allowing common access to another portion of reservation station entries. Additionally, bias logic biases selection of processing elements in a pipeline away from a processing element associated with a blocking stall to provide fair utilization of the pipeline. | 03-03-2011 |
20110055525 | PROVIDING THREAD FAIRNESS IN A HYPER-THREADED MICROPROCESSOR - A method and apparatus for providing fairness in a multi-processing element environment is herein described. Mask elements are utilized to associate portions of a reservation station with each processing element, while still allowing common access to another portion of reservation station entries. Additionally, bias logic biases selection of processing elements in a pipeline away from a processing element associated with a blocking stall to provide fair utilization of the pipeline. | 03-03-2011 |
20110307894 | Redundant Multithreading Processor - A redundant multithreading processor is presented. In one embodiment, the processor performs execution of a thread and its duplicate thread in parallel and determines, when in a redundant multithreading mode, whether or not to synchronize an operation of the thread and an operation of the duplicate thread. | 12-15-2011 |
20130013898 | Managing Multiple Threads In A Single Pipeline - In one embodiment, the present invention includes a method for determining if an instruction of a first thread dispatched from a first queue associated with the first thread is stalled in a pipestage of a pipeline, and if so, dispatching an instruction of a second thread from a second queue associated with the second thread to the pipeline if the second thread is not stalled. Other embodiments are described and claimed. | 01-10-2013 |
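A minimal model of the dispatch policy above (structure hypothetical): each thread feeds the pipeline from its own queue; when the preferred thread is stalled in a pipestage, the other thread dispatches instead, unless it too is stalled.

```python
def dispatch(queues, stalled, preferred=0):
    """queues: two lists of pending instructions; stalled: two booleans.
    Returns (thread_id, instruction) or (None, None) if nothing can issue."""
    other = 1 - preferred
    if not stalled[preferred] and queues[preferred]:
        return preferred, queues[preferred].pop(0)
    if not stalled[other] and queues[other]:
        return other, queues[other].pop(0)   # steal the slot from the stalled thread
    return None, None
```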
20140052963 | TECHNIQUE TO PERFORM THREE-SOURCE OPERATIONS - A technique to perform three-source instructions. At least one embodiment of the invention relates to converting a three-source instruction into at least two instructions identifying no more than two source values. | 02-20-2014 |
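The conversion the abstract describes can be sketched with a three-source multiply-add as the running example (instruction and register names are illustrative): the three-source op is rewritten into two ops with at most two sources each, threading the intermediate result through a temporary register.

```python
def lower_three_source(op, dst, src1, src2, src3, tmp="t0"):
    """Rewrite a three-source instruction into two two-source instructions."""
    if op == "madd":                       # dst = src1 * src2 + src3
        return [("mul", tmp, src1, src2),  # first op: two sources
                ("add", dst, tmp, src3)]   # second op: temp + third source
    raise NotImplementedError(op)
```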
20140095838 | Physical Reference List for Tracking Physical Register Sharing - A processor includes a processing unit including a storage module having stored thereon a physical reference list for storing identifications of physical registers that have been referenced by multiple logical registers, and a reclamation module for reclaiming physical registers to a free list based on a count of each of the physical registers on the physical reference list. | 04-03-2014 |
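A hedged sketch of the reclamation idea above: physical registers shared by multiple logical registers sit on a physical reference list with a sharer count, and a register returns to the free list only when its count reaches zero. The structure below is an assumption for illustration, not the patented layout.

```python
class PhysicalReferenceList:
    """Track physical registers referenced by multiple logical registers."""

    def __init__(self):
        self.counts = {}                   # physical register -> sharer count
        self.free_list = []

    def share(self, preg):
        # Another logical register now maps to preg (implicit count is 1).
        self.counts[preg] = self.counts.get(preg, 1) + 1

    def release(self, preg):
        # A logical register dropped its mapping; reclaim at count zero.
        n = self.counts.get(preg, 1) - 1
        if n <= 0:
            self.counts.pop(preg, None)
            self.free_list.append(preg)    # no logical register references it
        else:
            self.counts[preg] = n
```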
20140195790 | PROCESSOR WITH SECOND JUMP EXECUTION UNIT FOR BRANCH MISPREDICTION - A secondary jump execution unit (JEU) is incorporated in a micro-processor to operate concurrently with a primary JEU, enabling the execution of simultaneous branch operations with possible detection of multiple branch mispredicts. When branch operations are executed on both JEUs in a same instruction cycle, mispredict processing for the secondary JEU is skidded into the primary JEU's dispatch pipeline such that the branch processing for the secondary JEU occurs after processing of the branch for the primary JEU and while the primary JEU is not processing a branch. Moreover, in cases when a nuke command is also received from a reorder buffer of the processor, the branch processing for the secondary JEU is further delayed to accommodate processing of the nuke on the primary JEU. Further embodiments support the promotion of the secondary JEU to have access to the mispredict mechanisms of the primary JEU in certain circumstances. | 07-10-2014 |
20160062768 | INSTRUCTION AND LOGIC FOR PREFETCHER THROTTLING BASED ON DATA SOURCE - A processor includes a core, a prefetcher, and a prefetcher control module. The prefetcher includes logic to make speculative prefetch requests through a memory subsystem for an element for execution by the core, and logic to store prefetched elements in a cache. The prefetcher control module includes logic to determine counts of memory accesses to two types of memory and, based upon the counts and the type of memory, reduce the speculative prefetch requests of the prefetcher. | 03-03-2016 |
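An illustrative throttle rule for the abstract above: count accesses served by the two memory types (near versus far memory is an assumed example) and scale back speculative prefetch depth when the slower type dominates. The ratio test and degree values are assumptions, not the claimed policy.

```python
def prefetch_degree(near_count, far_count, max_degree=8, threshold=0.5):
    """Reduce speculative prefetch requests based on data-source counts."""
    total = near_count + far_count
    if total == 0:
        return max_degree                  # no history yet, prefetch freely
    if far_count / total > threshold:
        return max_degree // 4             # throttle: far memory dominates
    return max_degree
```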
20160140039 | PROVIDING MULTIPLE MEMORY MODES FOR A PROCESSOR INCLUDING INTERNAL MEMORY - In one embodiment, a processor comprises: at least one core formed on a die to execute instructions; a first memory controller to interface with an in-package memory; a second memory controller to interface with a platform memory to couple to the processor; and the in-package memory located within a package of the processor, where the in-package memory is to be identified as a more distant memory with respect to the at least one core than the platform memory. Other embodiments are described and claimed. | 05-19-2016 |