Patent application number | Description | Published |
--- | --- | --- |
20100185820 | PROCESSOR POWER MANAGEMENT AND METHOD - A data processing device is disclosed that includes multiple processing cores, where each core is associated with a corresponding cache. When a processing core is placed into a first sleep mode, the data processing device initiates a first phase. If any cache probes are received at the processing core during the first phase, the cache probes are serviced. At the end of the first phase, the cache corresponding to the processing core is flushed, and subsequent cache probes are not serviced at the cache. Because it no longer services cache probes, the processing core can enter another sleep mode, allowing the data processing device to conserve additional power. | 07-22-2010 |
20120017188 | METHOD OF DETERMINING EVENT BASED ENERGY WEIGHTS FOR DIGITAL POWER ESTIMATION - A method for determining event based energy weights for digital power estimation includes obtaining a reference energy value corresponding to a power consumed by at least a portion of an integrated circuit (IC) device during operation. The method includes selecting a subset of signals, from among all signals within the IC, that correlates to energy use within the IC. The method includes determining an activity factor for each signal in the subset by monitoring each signal while simulating execution of a particular set of instructions. The method includes determining a weight factor, or at least an approximation of a weight factor, for each signal in the subset by solving, within a predetermined accuracy, a multivariable equation in which the reference energy value equals a weighted sum of the activities of the signals in the selected subset multiplied by their respective weight factors. | 01-19-2012 |
20120144122 | METHOD AND APPARATUS FOR ACCELERATED SHARED DATA MIGRATION - A method and apparatus for accelerated shared data migration between cores is disclosed. | 06-07-2012 |
20120144124 | METHOD AND APPARATUS FOR MEMORY ACCESS UNITS INTERACTION AND OPTIMIZED MEMORY SCHEDULING - A method and an apparatus for modulating the prefetch training of a memory-side prefetch unit (MS-PFU) are described. An MS-PFU trains on memory access requests it receives from processors and their processor-side prefetch units (PS-PFUs). In the method and apparatus, an MS-PFU modulates its training based on one or more of a PS-PFU memory access request, a PS-PFU memory access request type, memory utilization, or the accuracy of MS-PFU prefetch requests. | 06-07-2012 |
20120155273 | SPLIT TRAFFIC ROUTING IN A PROCESSOR - A multi-chip module configuration includes two processors, each having two nodes, each node including multiple cores or compute units. Each node is connected to the other nodes by links of either high or low bandwidth. Routing of traffic between the nodes is controlled at each node according to a routing table and/or a control register that optimizes bandwidth usage and controls traffic congestion. | 06-21-2012 |
20120159080 | NEIGHBOR CACHE DIRECTORY - A method and apparatus for utilizing a higher-level cache as a neighbor cache directory in a multi-processor system are provided. In the method and apparatus, when the data field of a portion or all of the cache is unused, the remaining portion of the cache is repurposed for use as a neighbor cache directory. The neighbor cache directory provides a pointer to another cache in the multi-processor system that stores the memory data. The neighbor cache directory can be searched in the same manner as a data cache. | 06-21-2012 |
20130077701 | METHOD AND INTEGRATED CIRCUIT FOR ADJUSTING THE WIDTH OF AN INPUT/OUTPUT LINK - A method and apparatus are described for adjusting a bit width of an input/output (I/O) link established between a transmitter and a receiver. The I/O link has a plurality of bit lanes. The transmitter may send to the receiver a command identifying at least one selected bit lane of the I/O link that will be powered off or powered on in response to detecting that a bit width adjustment threshold of the I/O link has been reached. | 03-28-2013 |
20130124805 | APPARATUS AND METHOD FOR SERVICING LATENCY-SENSITIVE MEMORY REQUESTS - A shared memory controller and method of operation are provided. The shared memory controller is configured for use with a plurality of processors such as a central processing unit or a graphics processing unit. The shared memory controller includes a command queue configured to hold a plurality of memory commands from the plurality of processors, each memory command having associated priority information. The shared memory controller includes boost logic configured to identify a latency sensitive memory command and update the priority information associated with the memory command to identify the memory command as latency sensitive. The boost logic may be configured to identify a latency sensitive processor command. The boost logic may be configured to track time duration between successive latency sensitive memory commands. | 05-16-2013 |
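The weighted-sum relation described in application 20120017188 (reference energy equals the sum of signal activities multiplied by their weight factors) can be illustrated with a small linear solve. This is only a sketch under assumed data, not the patent's method: the 2x2 system, the activity values, and the Cramer's-rule solver are all illustrative.

```python
# Hypothetical sketch of the weighted-sum relation in 20120017188:
# reference energy = sum over selected signals of (activity * weight).
# With as many workloads as monitored signals, the weight factors
# follow from solving the linear system -- a stand-in for the patent's
# "within a predetermined accuracy" multivariable solve.

def solve_2x2(a, b):
    """Solve the 2x2 linear system a @ w = b by Cramer's rule."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    w0 = (b[0] * a[1][1] - b[1] * a[0][1]) / det
    w1 = (a[0][0] * b[1] - a[1][0] * b[0]) / det
    return [w0, w1]

# Two workloads, two monitored signals: activity factors per workload.
activity = [[0.4, 0.1],
            [0.2, 0.3]]
true_weights = [2.0, 5.0]            # unknown in practice
ref_energy = [0.4 * 2.0 + 0.1 * 5.0,  # measured reference energies
              0.2 * 2.0 + 0.3 * 5.0]

weights = solve_2x2(activity, ref_energy)
print([round(w, 6) for w in weights])  # [2.0, 5.0]
```

With more workloads than signals the system is overdetermined, and a least-squares fit would take the place of the exact solve.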
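The boost logic described in application 20130124805 can be sketched as a command queue ordered by priority, where commands identified as latency sensitive have their priority information updated before scheduling. All names and the `BOOST` increment below are illustrative assumptions, not taken from the patent.

```python
import heapq

# Hypothetical sketch of the boost logic in 20130124805: a shared
# command queue holds prioritized memory commands, and latency-sensitive
# commands get their priority raised so they are serviced first.

BOOST = 10  # assumed priority increment for latency-sensitive commands

class CommandQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker that preserves arrival order

    def enqueue(self, command, priority, latency_sensitive=False):
        if latency_sensitive:
            priority += BOOST  # boost logic updates the priority info
        # heapq is a min-heap, so negate priority for highest-first service
        heapq.heappush(self._heap, (-priority, self._seq, command))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

q = CommandQueue()
q.enqueue("gpu-read", priority=1)
q.enqueue("cpu-read", priority=1, latency_sensitive=True)
q.enqueue("write-back", priority=2)
print(q.dequeue())  # cpu-read (priority boosted to 11)
```

A fuller model would also track the time between successive latency-sensitive commands, as the abstract notes the boost logic may do.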