Patent application number | Description | Published |
--- | --- | --- |
20080235457 | Dynamic quality of service (QoS) for a shared cache - In one embodiment, the present invention includes a method for associating a first priority indicator with data stored in a first entry of a shared cache memory by a core to indicate a priority level of a first thread, and associating a second priority indicator with data stored in a second entry of the shared cache memory by a graphics engine to indicate a priority level of a second thread. Other embodiments are described and claimed. | 09-25-2008 |
20080235487 | Applying quality of service (QoS) to a translation lookaside buffer (TLB) - In one embodiment, the present invention includes a translation lookaside buffer (TLB) having storage locations each including a priority indicator field to store a priority level associated with an agent that requested storage of the data in the TLB, and an identifier field to store an identifier of the agent, where the TLB is apportioned according to a plurality of priority levels. Other embodiments are described and claimed. | 09-25-2008 |
20090006755 | Providing application-level information for use in cache management - In one embodiment, the present invention includes a method for associating a first identifier with data stored by a first agent in a cache line of a cache to indicate the identity of the first agent, and storing the first identifier with the data in the cache line and updating at least one of a plurality of counters associated with the first agent in a metadata storage in the cache, where the counter includes information regarding inter-agent interaction with respect to the cache line. Other embodiments are described and claimed. | 01-01-2009 |
20090165004 | Resource-aware application scheduling - In one embodiment, a method provides capturing resource monitoring information for a plurality of applications; accessing the resource monitoring information; and scheduling at least one of the plurality of applications on a selected processing core of a plurality of processing cores based, at least in part, on the resource monitoring information. | 06-25-2009 |
20090172315 | PRIORITY AWARE SELECTIVE CACHE ALLOCATION - A method and apparatus are described herein for providing priority-aware, consumption-guided dynamic probabilistic allocation for a cache memory. Utilization of a sample of the cache memory is measured for each priority level of a computer system. Allocation probabilities for each priority level are updated based on the measured consumption/utilization, i.e. allocation is reduced for priority levels consuming too much of the cache and increased for priority levels consuming too little. An incoming allocation request is assigned a priority level, and the allocation probability associated with that level is compared with a randomly generated number. If the number is less than the allocation probability, a fill to the cache is performed normally; otherwise a spatially or temporally limited fill is performed. (A code sketch of this scheme follows the table.) | 07-02-2009 |
20100332788 | AUTOMATICALLY USING SUPERPAGES FOR STACK MEMORY ALLOCATION - In one embodiment, the present invention includes a page fault handler to create page table entries and TLB entries in response to a page fault, the page fault handler to determine if a page fault resulted from a stack access, to create a superpage table entry if the page fault did result from a stack access, and to create a TLB entry for the superpage. Other embodiments are described and claimed. | 12-30-2010 |
20110087843 | Monitoring cache usage in a distributed shared cache - An apparatus, method, and system are disclosed. In one embodiment the apparatus includes a cache memory, which includes a number of sets, each of which has several cache lines. The apparatus also includes at least one process resource table that maintains cache line occupancy counts: the occupancy count for a process describes the number of cache lines in the cache storing information utilized by that process running on a computer system. Additionally, the process resource table stores occupancy counts covering fewer cache lines than the total number of cache lines in the cache memory. (A code sketch follows the table.) | 04-14-2011 |
20110090920 | PACKET COALESCING - In general, in one aspect, the disclosure describes a method that includes receiving multiple ingress Internet Protocol packets, each having an Internet Protocol header and a Transmission Control Protocol segment with a Transmission Control Protocol header and a Transmission Control Protocol payload, where the multiple packets belong to the same Transmission Control Protocol/Internet Protocol flow. The method also includes preparing an Internet Protocol packet having a single Internet Protocol header and a single Transmission Control Protocol segment whose single payload is formed by combining the Transmission Control Protocol segment payloads of the multiple Internet Protocol packets. The method further includes generating a signal that causes receive processing of the coalesced Internet Protocol packet. (A code sketch follows the table.) | 04-21-2011 |
20110113200 | METHODS AND APPARATUSES FOR CONTROLLING CACHE OCCUPANCY RATES - Embodiments of an apparatus for controlling cache occupancy rates are presented. In one embodiment, an apparatus comprises a controller and monitor logic. The monitor logic determines a monitored occupancy rate associated with a first program class, and the controller regulates an allocation probability corresponding to that program class based at least on the difference between a requested occupancy rate and the monitored occupancy rate. (The allocation sketch after the table also illustrates this feedback loop.) | 05-12-2011 |
20110153926 | Controlling Access To A Cache Memory Using Privilege Level Information - In one embodiment, a cache memory includes entries each to store a ring level identifier, which may indicate a privilege level of information stored in the entry. This identifier may be used in performing read accesses to the cache memory. As an example, a logic coupled to the cache memory may filter an access to one or more ways of a selected set of the cache memory based at least in part on a current privilege level of a processor and the ring level identifier of the one or more ways. Other embodiments are described and claimed. | 06-23-2011 |
20110161595 | CACHE MEMORY POWER REDUCTION TECHNIQUES - Methods and apparatus to reduce power consumption in memories (such as cache memories) are described. In one embodiment, a virtual tag is used to determine whether to access a cache way; the virtual tag access and comparison may be performed earlier in the read pipeline than the actual tag access and comparison. In another embodiment, a speculative way hit based on a pre-ECC partial tag match may be used to wake up only a subset of the data arrays. Other embodiments are also described. (A code sketch follows the table.) | 06-30-2011 |
20120079235 | APPLICATION SCHEDULING IN HETEROGENEOUS MULTIPROCESSOR COMPUTING PLATFORMS - Methods and apparatus to schedule applications in heterogeneous multiprocessor computing platforms are described. In one embodiment, information regarding performance (e.g., execution performance and/or power consumption performance) of a plurality of processor cores of a processor is stored (and tracked) in counters and/or tables. Logic in the processor determines which processor core should execute an application based on the stored information. Other embodiments are also claimed and disclosed. | 03-29-2012 |
20120191896 | CIRCUITRY TO SELECT, AT LEAST IN PART, AT LEAST ONE MEMORY - An embodiment may include circuitry to select, at least in part, from a plurality of memories, at least one memory to store data. The memories may be associated with respective processor cores. The circuitry may select, at least in part, the at least one memory based at least in part upon whether the data is included in at least one page that spans multiple memory lines that is to be processed by at least one of the processor cores. If the data is included in the at least one page, the circuitry may select, at least in part, the at least one memory, such that the at least one memory is proximate to the at least one of the processor cores. Many alternatives, variations, and modifications are possible. | 07-26-2012 |
20120233393 | Scheduling Workloads Based On Cache Asymmetry - In one embodiment, a processor includes a first cache and a second cache, a first core associated with the first cache and a second core associated with the second cache. The caches are of asymmetric sizes, and a scheduler can intelligently schedule threads to the cores based at least in part on awareness of this asymmetry and resulting cache performance information obtained during a training phase of at least one of the threads. | 09-13-2012 |
20130185370 | EFFICIENT PEER-TO-PEER COMMUNICATION SUPPORT IN SOC FABRICS - Methods and apparatus for efficient peer-to-peer communication support in interconnect fabrics. Network interfaces associated with agents are implemented to facilitate peer-to-peer transactions between agents in a manner that ensures data accesses correspond to the most recent update for each agent. This is implemented, in part, via use of non-posted “dummy writes” that are sent from an agent when the destination between write transactions originating from the agent changes. The dummy writes ensure that data corresponding to previous writes reach their destination prior to subsequent write and read transactions, thus ordering the peer-to-peer transactions without requiring the use of a centralized transaction ordering entity. | 07-18-2013 |
20140095806 | CONFIGURABLE SNOOP FILTER ARCHITECTURE - Configurable snoop filters. A memory system is coupled with one or more processing cores, and a coherent system fabric couples the memory system with those cores. The coherent system fabric comprises at least a configurable snoop filter that is configured based on workload and has a configurable snoop filter directory and a bloom filter. The configurable snoop filter and the bloom filter include runtime configuration parameters that are used to selectively limit snoop traffic. (A code sketch of bloom-filter snoop filtering follows the table.) | 04-03-2014 |
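
The probabilistic allocation scheme of 20090172315, together with the occupancy-rate feedback loop of 20110113200, can be illustrated with a short sketch. This is a minimal model under assumed names (`ProbabilisticAllocator`, `target_share`) and an assumed 0.1 update gain; neither patent specifies these details.

```python
import random

# Minimal sketch of priority-aware probabilistic cache allocation
# (20090172315) with an occupancy-driven feedback update in the spirit
# of 20110113200. All names and the 0.1 gain are illustrative
# assumptions, not taken from either patent.

class ProbabilisticAllocator:
    def __init__(self, target_share):
        # target_share maps each priority level to the fraction of the
        # cache it should occupy; allocation starts fully open.
        self.target_share = target_share
        self.alloc_prob = {level: 1.0 for level in target_share}

    def update(self, measured_share):
        # Lower the allocation probability of levels consuming too much
        # of the sampled cache; raise it for levels consuming too little.
        for level, measured in measured_share.items():
            error = self.target_share[level] - measured
            p = self.alloc_prob[level] + 0.1 * error
            self.alloc_prob[level] = max(0.0, min(1.0, p))

    def fill_normally(self, level):
        # An allocation request draws a random number: below the level's
        # probability the fill proceeds normally; otherwise the fill is
        # spatially or temporally limited.
        return random.random() < self.alloc_prob[level]


allocator = ProbabilisticAllocator({"high": 0.7, "low": 0.3})
allocator.update({"high": 0.5, "low": 0.5})   # "low" is over-consuming
if allocator.fill_normally("low"):
    pass  # normal fill into the cache
else:
    pass  # limited fill, e.g. restricted to a single way
```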
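
The per-process occupancy counting of 20110087843 amounts to a small counter table keyed by process, sized to track fewer entries than the cache has lines. A minimal sketch with assumed names:

```python
# Minimal sketch of a process resource table (20110087843): per-process
# counts of how many cache lines each process currently occupies. The
# names and the fixed-capacity policy are illustrative assumptions.

class ProcessResourceTable:
    def __init__(self, capacity):
        self.capacity = capacity   # fewer tracked entries than cache lines
        self.occupancy = {}        # process id -> cache lines held

    def on_fill(self, pid):
        # A cache line was allocated on behalf of this process.
        if pid in self.occupancy or len(self.occupancy) < self.capacity:
            self.occupancy[pid] = self.occupancy.get(pid, 0) + 1

    def on_evict(self, pid):
        # A cache line belonging to this process was evicted.
        if pid in self.occupancy and self.occupancy[pid] > 0:
            self.occupancy[pid] -= 1
```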
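
Receive-side coalescing as in 20110090920 merges same-flow packets into one packet with a single set of headers and a concatenated payload. The dict-based packet representation below is a stand-in; a real implementation would also verify TCP sequence numbers and stop coalescing on flags such as FIN or PSH.

```python
# Illustrative sketch of TCP/IP receive coalescing (20110090920).
# Packets are modeled as dicts with header fields and a "payload".

def flow_key(pkt):
    # A TCP/IP flow is identified by the usual 4-tuple.
    return (pkt["src_ip"], pkt["dst_ip"], pkt["src_port"], pkt["dst_port"])

def coalesce(packets):
    merged, order = {}, []
    for pkt in packets:
        key = flow_key(pkt)
        if key not in merged:
            # The flow's first packet supplies the single IP/TCP headers.
            merged[key] = dict(pkt)
            order.append(key)
        else:
            # Later same-flow packets contribute only their payload.
            merged[key]["payload"] += pkt["payload"]
    # One coalesced packet per flow, handed off to receive processing.
    return [merged[k] for k in order]
```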
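
The speculative way hit of 20110161595 can be modeled as a cheap partial-tag compare that decides which data arrays to wake before the full post-ECC tag compare confirms the hit. The bit width and names below are assumptions.

```python
# Toy model of pre-ECC partial-tag way filtering (20110161595): only
# ways whose stored partial tag matches the lookup's low tag bits have
# their data arrays woken. PARTIAL_BITS is an illustrative assumption.

PARTIAL_BITS = 8
MASK = (1 << PARTIAL_BITS) - 1

def ways_to_wake(stored_partial_tags, lookup_tag):
    want = lookup_tag & MASK
    # The later full tag compare confirms or squashes the speculative hit.
    return [way for way, pt in enumerate(stored_partial_tags) if pt == want]

# A 4-way set: only ways 0 and 2 match, so only they are powered up.
print(ways_to_wake([0x2A, 0x13, 0x2A, 0x07], lookup_tag=0x12A))  # [0, 2]
```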
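
A bloom filter lets a snoop filter answer "might this line be cached remotely?" with no false negatives, so snoops for definite misses can be suppressed, as in 20140095806. A minimal sketch; the sizing and hash construction are assumptions, with the patent's runtime configuration parameters modeled here by `size` and `num_hashes`.

```python
import hashlib

# Minimal bloom-filter snoop suppressor in the spirit of 20140095806.

class SnoopBloomFilter:
    def __init__(self, size=4096, num_hashes=2):
        self.size = size              # runtime-configurable filter size
        self.num_hashes = num_hashes  # runtime-configurable hash count
        self.bits = [False] * size

    def _positions(self, addr):
        for i in range(self.num_hashes):
            h = hashlib.sha256(f"{i}:{addr}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.size

    def track(self, addr):
        # A remote cache may now hold this line.
        for pos in self._positions(addr):
            self.bits[pos] = True

    def must_snoop(self, addr):
        # False: the line is definitely not cached remotely, so the snoop
        # is filtered. True may be a false positive, which is safe.
        return all(self.bits[pos] for pos in self._positions(addr))


f = SnoopBloomFilter()
f.track(0x1000)
assert f.must_snoop(0x1000)   # tracked line: snoop must be sent
```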