Patent application number | Description | Published |
20090119630 | Arrangements for Developing Integrated Circuit Designs - In some embodiments, a method is disclosed for converging on an acceptable design for an integrated circuit. The method can include selecting a path, determining if the path has a timing deficiency, segmenting the path into path segments, and allocating the timing deficiency across the segments according to attributes of the path segments. Segments can have attributes such as a design freeze when the design is mature or “optimum.” Allocating can include allocating the timing deficiency across path segments according to attributes such as the proportion of a segment's length to the overall path length, or allocating the timing deficiency to path segments based on attributes provided as user input. | 05-07-2009
20150355948 | DYNAMICALLY CONFIGURABLE HARDWARE QUEUES FOR DISPATCHING JOBS TO A PLURALITY OF HARDWARE ACCELERATION ENGINES - A computer system having a plurality of processing resources, including a subsystem for scheduling and dispatching processing jobs to a plurality of hardware accelerators, the subsystem further comprising a job requestor for requesting jobs having bounded and varying latencies to be executed on the hardware accelerators; a queue controller to manage processing job requests directed to a plurality of hardware accelerators; and multiple hardware queues for dispatching jobs to the plurality of hardware acceleration engines, each queue having a dedicated head-of-queue entry, dynamically sharing a pool of queue entries, having configurable queue depth limits, and means for removing one or more jobs across all queues. | 12-10-2015
20150355949 | DYNAMICALLY CONFIGURABLE HARDWARE QUEUES FOR DISPATCHING JOBS TO A PLURALITY OF HARDWARE ACCELERATION ENGINES - A computer system having a plurality of processing resources, including a subsystem for scheduling and dispatching processing jobs to a plurality of hardware accelerators, the subsystem further comprising a job requestor for requesting jobs having bounded and varying latencies to be executed on the hardware accelerators; a queue controller to manage processing job requests directed to a plurality of hardware accelerators; and multiple hardware queues for dispatching jobs to the plurality of hardware acceleration engines, each queue having a dedicated head-of-queue entry, dynamically sharing a pool of queue entries, having configurable queue depth limits, and means for removing one or more jobs across all queues. | 12-10-2015
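The queue abstracts above describe a specific combination of features: each hardware queue owns a dedicated head-of-queue entry, all queues dynamically borrow additional entries from one shared pool, each queue has a configurable depth limit, and jobs can be removed across all queues. A minimal software sketch of that scheme, assuming illustrative names (`QueuePool`, `dispatch`, `remove_matching`) not taken from the patent text:

```python
class QueuePool:
    """Hypothetical model of the queueing scheme in the abstracts above."""

    def __init__(self, num_queues, shared_entries, depth_limits):
        self.heads = [None] * num_queues              # dedicated head-of-queue entries
        self.tails = [[] for _ in range(num_queues)]  # entries borrowed from the shared pool
        self.shared_free = shared_entries             # remaining shared pool capacity
        self.depth_limits = depth_limits              # per-queue configurable depth limits

    def enqueue(self, q, job):
        depth = (self.heads[q] is not None) + len(self.tails[q])
        if depth >= self.depth_limits[q]:
            return False                              # queue at its configured limit
        if self.heads[q] is None:
            self.heads[q] = job                       # dedicated slot is always available
            return True
        if self.shared_free == 0:
            return False                              # shared pool exhausted
        self.shared_free -= 1
        self.tails[q].append(job)
        return True

    def dispatch(self, q):
        """Pop the head job of queue q; promote the next entry back into the head slot."""
        job = self.heads[q]
        if job is None:
            return None
        if self.tails[q]:
            self.heads[q] = self.tails[q].pop(0)
            self.shared_free += 1                     # pooled entry returned on promotion
        else:
            self.heads[q] = None
        return job

    def remove_matching(self, pred):
        """Remove jobs satisfying pred across all queues (the 'means for removing')."""
        removed = 0
        for q in range(len(self.heads)):
            kept = [j for j in self.tails[q] if not pred(j)]
            removed += len(self.tails[q]) - len(kept)
            self.shared_free += len(self.tails[q]) - len(kept)
            self.tails[q] = kept
            if self.heads[q] is not None and pred(self.heads[q]):
                removed += 1
                self.dispatch(q)                      # discard the killed head, promote next
        return removed
```

The design point the claims emphasize is that the dedicated head slot guarantees forward progress per queue even when the shared pool is exhausted, while the pool lets busy queues grow beyond their dedicated entry up to their configured limit.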
20080244130 | FLOW LOOKAHEAD IN AN ORDERED SEMAPHORE MANAGEMENT SUBSYSTEM - In an ordered semaphore management system, a pending state allows threads not competing for a locked semaphore to bypass one or more threads waiting for the same locked semaphore. The number of pending levels determines the number of consecutive threads vying for the same locked semaphore which can be bypassed. When more than one level is provided, the pending levels are prioritized in the queued order. | 10-02-2008
20100262720 | TECHNIQUES FOR WRITE-AFTER-WRITE ORDERING IN A COHERENCY MANAGED PROCESSOR SYSTEM THAT EMPLOYS A COMMAND PIPELINE - A technique for maintaining input/output (I/O) command ordering on a bus includes assigning a channel identifier to I/O commands of an I/O stream. In this case, the channel identifier indicates the I/O commands belong to the I/O stream. A command location indicator is assigned to each of the I/O commands. The command location indicator provides an indication of which one of the I/O commands is a start command in the I/O stream and which of the I/O commands are continue commands in the I/O stream. The I/O commands are issued in a desired completion order. When a first one of the I/O commands does not complete successfully, the I/O commands in the I/O stream are reissued on the bus starting at the first one of the I/O commands that did not complete successfully. | 10-14-2010 |
20130152099 | DYNAMICALLY CONFIGURABLE HARDWARE QUEUES FOR DISPATCHING JOBS TO A PLURALITY OF HARDWARE ACCELERATION ENGINES - A computer system having a plurality of processing resources, including a sub-system for scheduling and dispatching processing jobs to a plurality of hardware accelerators, the subsystem further comprising a job requestor, for requesting jobs having bounded and varying latencies to be executed on the hardware accelerators; a queue controller to manage processing job requests directed to a plurality of hardware accelerators; and multiple hardware queues for dispatching jobs to the plurality of hardware acceleration engines, each queue having a dedicated head of queue entry, dynamically sharing a pool of queue entries, having configurable queue depth limits, and means for removing one or more jobs across all queues. | 06-13-2013 |
20130304990 | Dynamic Control of Cache Injection Based on Write Data Type - Selective cache injection of write data generated or used by a coprocessor hardware accelerator in a multi-core processor system having a hierarchical bus architecture to facilitate transfer of address and data between multiple agents coupled to the bus. A bridge device maintains configuration settings for cache injection of write data and includes a set of n shared write data buffers used for write requests to memory. Each coprocessor hardware accelerator has m local write data cacheline buffers holding different types of write data. For write data produced by a coprocessor hardware accelerator, cache injection is accomplished based on configuration settings in a DMA channel dedicated to the coprocessor and a bridge controller. The access history of cache-injected data for a particular processing thread or data flow is also tracked to determine whether to downgrade or maintain a request for cache injection. | 11-14-2013
20140337855 | Termination of Requests in a Distributed Coprocessor System - A system and method of terminating processing requests dispatched to a coprocessor hardware accelerator in a multi-processor computer system, based on matching various fields in the request made to the coprocessor to identify the process to be terminated. A kill command is initiated by a write operation to a coprocessor block kill register and includes a match enable and a value for each field in the coprocessor request to be terminated. Enabled fields may have one or more values associated with a single request or with multiple requests for the same coprocessor. At least one match enable must be set to initiate a kill request. A process kill active signal prevents other coprocessor jobs from moving between operational stages in the coprocessor hardware accelerator. Processing jobs that are idle or do not match the fields with match enables set signal done with no match and continue processing. Processing jobs that do match the fields with match enables set are terminated and signal done with match. When all processing jobs have signaled done, a done bit is set in the coprocessor block kill register to indicate completion of the kill to the initiating software. The register also holds the match status of each processing job. | 11-13-2014
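The last abstract describes a field-matching kill: the kill register carries a match enable and a value per field, a job is terminated only if it matches every enabled field, at least one enable must be set, and each job reports a per-job match status plus an overall done indication. A minimal sketch of that matching logic, assuming hypothetical names (`kill_request`, dict-based fields) not taken from the patent text:

```python
def kill_request(jobs, enables, values):
    """Hypothetical model of the kill matching described above.

    jobs:    list of dicts mapping field name -> field value (one dict per job)
    enables: dict mapping field name -> bool (the per-field match enables)
    values:  dict mapping field name -> value to match against

    Returns (surviving_jobs, per_job_match_status, done).
    """
    # At least one match enable must be set to initiate a kill request.
    if not any(enables.values()):
        raise ValueError("at least one match enable must be set")

    status = []      # per-job match status, as held in the kill register
    survivors = []   # jobs that signaled done with no match and keep processing
    for job in jobs:
        # A job matches only if every *enabled* field equals its kill value.
        match = all(job[f] == values[f] for f, en in enables.items() if en)
        status.append(match)
        if not match:
            survivors.append(job)

    done = True      # every job has signaled done (with or without a match)
    return survivors, status, done
```

Note that this models only the matching decision; the hardware behavior in the abstract (the process kill active signal freezing stage transitions until all jobs signal done) is a sequencing concern this sketch does not capture.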