Patent application number | Description | Published |
20100005246 | SATISFYING MEMORY ORDERING REQUIREMENTS BETWEEN PARTIAL READS AND NON-SNOOP ACCESSES - A method and apparatus for preserving memory ordering in a cache-coherent, link-based interconnect in light of partial and non-coherent memory accesses is herein described. In one embodiment, a partial memory access, such as a partial read, is implemented utilizing a Read Invalidate and/or Snoop Invalidate message. When a peer node receives a Snoop Invalidate message referencing data from a requesting node, the peer node is to invalidate a cache line associated with the data and is not to forward the data directly to the requesting node. In one embodiment, when the peer node holds the referenced cache line in a Modified coherency state, in response to receiving the Snoop Invalidate message, the peer node is to write back the data to a home node associated with the data. | 01-07-2010 |
20120317369 | SATISFYING MEMORY ORDERING REQUIREMENTS BETWEEN PARTIAL READS AND NON-SNOOP ACCESSES - A method and apparatus for preserving memory ordering in a cache-coherent, link-based interconnect in light of partial and non-coherent memory accesses is herein described. In one embodiment, a partial memory access, such as a partial read, is implemented utilizing a Read Invalidate and/or Snoop Invalidate message. When a peer node receives a Snoop Invalidate message referencing data from a requesting node, the peer node is to invalidate a cache line associated with the data and is not to forward the data directly to the requesting node. In one embodiment, when the peer node holds the referenced cache line in a Modified coherency state, in response to receiving the Snoop Invalidate message, the peer node is to write back the data to a home node associated with the data. | 12-13-2012 |
20140115275 | SATISFYING MEMORY ORDERING REQUIREMENTS BETWEEN PARTIAL READS AND NON-SNOOP ACCESSES - A method and apparatus for preserving memory ordering in a cache-coherent, link-based interconnect in light of partial and non-coherent memory accesses is herein described. In one embodiment, a partial memory access, such as a partial read, is implemented utilizing a Read Invalidate and/or Snoop Invalidate message. When a peer node receives a Snoop Invalidate message referencing data from a requesting node, the peer node is to invalidate a cache line associated with the data and is not to forward the data directly to the requesting node. In one embodiment, when the peer node holds the referenced cache line in a Modified coherency state, in response to receiving the Snoop Invalidate message, the peer node is to write back the data to a home node associated with the data. | 04-24-2014 |
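The three applications above are continuations of a single disclosure, and the abstract describes a small piece of protocol behavior: on a Snoop Invalidate, a peer that holds the line in the Modified state writes the data back to the home node and invalidates its copy, rather than forwarding the data directly to the requester. The C sketch below illustrates that peer-side handling only in outline; the state names, the message printed, and the writeback_to_home() and handle_snoop_invalidate() helpers are illustrative assumptions, not terms taken from the applications.

```c
#include <stdint.h>
#include <stdio.h>

typedef enum { STATE_INVALID, STATE_SHARED, STATE_EXCLUSIVE, STATE_MODIFIED } coh_state_t;

typedef struct {
    uint64_t    addr;
    coh_state_t state;
    uint8_t     data[64];   /* one cache line worth of data */
} cache_line_t;

/* Hypothetical helper: send the modified data back to the line's home node. */
void writeback_to_home(const cache_line_t *line)
{
    printf("writeback to home node for addr 0x%llx\n",
           (unsigned long long)line->addr);
}

/* On a Snoop Invalidate, the peer invalidates its copy and, if the line was
 * held Modified, writes the data back to the home node instead of forwarding
 * it directly to the requesting node. */
void handle_snoop_invalidate(cache_line_t *line)
{
    if (line->state == STATE_MODIFIED)
        writeback_to_home(line);    /* home node supplies the data later */
    line->state = STATE_INVALID;    /* no direct forward to the requester */
}

int main(void)
{
    cache_line_t line = { .addr = 0x1000, .state = STATE_MODIFIED };
    handle_snoop_invalidate(&line); /* triggers the writeback, then invalidates */
    return 0;
}
```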
Patent application number | Description | Published |
20090089566 | SUPPORTING ADVANCED RAS FEATURES IN A SECURED COMPUTING SYSTEM - Systems and methods are provided for enabling Reliability, Availability & Serviceability (RAS) features after launching a secure environment under the control of LaGrande Technology (LT), or a comparable security technology, without compromising security. In one embodiment, the method comprises adding at least one specific capability to a processor to enable at least one of CPU hot-plug, CPU migration, CPU hot removal and capacity on demand. | 04-02-2009 |
20110153924 | CORE SNOOP HANDLING DURING PERFORMANCE STATE AND POWER STATE TRANSITIONS IN A DISTRIBUTED CACHING AGENT - A method and apparatus may provide for detecting a performance state transition in a processor core and bouncing a core snoop message on a shared interconnect ring in response to detecting the performance state transition. The core snoop message may be associated with the processor core, wherein a plurality of processor cores may be coupled to the shared interconnect ring via a distributed last level cache controller. | 06-23-2011 |
20110153948 | SYSTEMS, METHODS, AND APPARATUS FOR MONITORING SYNCHRONIZATION IN A DISTRIBUTED CACHE - Systems, apparatus, and methods for monitoring synchronization in a distributed cache are described. In an exemplary embodiment, first and second processing cores process a first and a second thread, respectively. First and second distributed cache slices store data for either or both of the first and second processing cores. First and second core interfaces, co-located with the first and second processing cores respectively, each maintain a finite state machine (FSM) to be executed in response to receiving a request from a thread of the co-located processing core to monitor a cache line in the distributed cache. | 06-23-2011 |
20110161585 | PROCESSING NON-OWNERSHIP LOAD REQUESTS HITTING MODIFIED LINE IN CACHE OF A DIFFERENT PROCESSOR - Methods and apparatus to efficiently process non-ownership load requests that hit a modified line (M-line) in the cache of a different processor are described. In one embodiment, a first agent changes the state of a first data item and forwards it to a second, requesting agent, which stores the first data item in an alternative modified state. Other embodiments are also described. | 06-30-2011 |
20110161601 | INTER-QUEUE ANTI-STARVATION MECHANISM WITH DYNAMIC DEADLOCK AVOIDANCE IN A RETRY BASED PIPELINE - Methods and apparatus relating to an inter-queue anti-starvation mechanism with dynamic deadlock avoidance in a retry based pipeline are described. In one embodiment, logic may arbitrate between two queues based on various rules. The queues may store data including local or remote requests, data responses, non-data responses, external interrupts, etc. Other embodiments are also disclosed. | 06-30-2011 |
20110161705 | MULTIPLE-QUEUE MULTIPLE-RESOURCE ENTRY SLEEP AND WAKEUP FOR POWER SAVINGS AND BANDWIDTH CONSERVATION IN A RETRY BASED PIPELINE - Methods and apparatus relating to multiple-queue multiple-resource entry sleep and wakeup for power savings and bandwidth conservation in a retry based pipeline are described. In one embodiment, a bit indicates whether a corresponding queue entry is asleep or awake with respect to arbitration for resources in a retry based pipeline. Furthermore, multiple entries from different queues may be grouped together and multiple resources may be grouped together. Other embodiments are also disclosed. | 06-30-2011 |
20110161769 | RETRY BASED PROTOCOL WITH SOURCE/RECEIVER FIFO RECOVERY AND ANTI-STARVATION MECHANISM TO SUPPORT DYNAMIC PIPELINE LENGTHENING FOR ECC ERROR CORRECTION - Methods and apparatus relating to retry based protocol with source/receiver FIFO (First-In, First-Out) buffer recovery and anti-starvation mechanism to support dynamic pipeline lengthening for ECC error correction are described. In an embodiment, upon detection of an error, a portion of transmitted data is stored in one or more storage devices before retransmission. Other embodiments are also described and claimed. | 06-30-2011 |
20110191542 | SYSTEM-WIDE QUIESCENCE AND PER-THREAD TRANSACTION FENCE IN A DISTRIBUTED CACHING AGENT - Methods and apparatus relating to system-wide quiescence and per-thread transaction fence in a distributed caching agent are described. Some embodiments utilize messages, counters, and/or state machines that support system-wide quiescence and per-thread transaction fence flows. Other embodiments are also disclosed. | 08-04-2011 |
20120079032 | APPARATUS, SYSTEM, AND METHODS FOR FACILITATING ONE-WAY ORDERING OF MESSAGES - Methods, apparatus and systems for facilitating one-way ordering of otherwise independent message classes. A one-way message ordering mechanism facilitates one-way ordering of messages of different message classes sent between interconnects employing independent pathways for the message classes. In one aspect, messages of a second message class may not pass messages of a first message class. Moreover, when messages of the first and second classes are received in sequence, the ordering mechanism ensures that messages of the first class are forwarded to, and received at, a next hop prior to forwarding messages of the second class. | 03-29-2012 |
20130346666 | TUNNELING PLATFORM MANAGEMENT MESSAGES THROUGH INTER-PROCESSOR INTERCONNECTS - Methods and apparatus for tunneling platform management messages through inter-processor interconnects. Platform management messages are received from a management entity, such as a management engine (ME), at a management component of a first processor but are targeted at a managed device operatively coupled to a second processor. The management message content is encapsulated in a tunnel message that is tunneled from the first processor to a second management component in the second processor via a socket-to-socket interconnect link between the processors. Once received at the second management component, the encapsulated management message content is extracted and the original management message is recreated. The recreated management message is then used to manage the targeted device much as if the ME were directly connected to the second processor. The disclosed techniques enable management of platform devices operatively coupled to processors in a multi-processor platform via a single management entity. | 12-26-2013 |
20140115197 | INTER-QUEUE ANTI-STARVATION MECHANISM WITH DYNAMIC DEADLOCK AVOIDANCE IN A RETRY BASED PIPELINE - Methods and apparatus relating to an inter-queue anti-starvation mechanism with dynamic deadlock avoidance in a retry based pipeline are described. In one embodiment, logic may arbitrate between two queues based on various rules. The queues may store data including local or remote requests, data responses, non-data responses, external interrupts, etc. Other embodiments are also disclosed. | 04-24-2014 |
20140181394 | DIRECTORY CACHE SUPPORTING NON-ATOMIC INPUT/OUTPUT OPERATIONS - Responsive to receiving a write request for a cache line from an input/output device, a caching agent of a first processor determines that the cache line is managed by a home agent of a second processor. The caching agent sends an ownership request for the cache line to the second processor. The home agent of the second processor receives the ownership request, generates an entry in a directory cache for the cache line, the entry identifying the remote caching agent as having ownership of the cache line, and grants ownership of the cache line to the remote caching agent. Responsive to receiving the grant of ownership for the cache line from the home agent, an input/output controller of the first processor adds an entry for the cache line to an input/output write cache, the entry comprising a first indicator that the cache line is managed by the home agent of the second processor. | 06-26-2014 |
20140189239 | PROCESSORS HAVING VIRTUALLY CLUSTERED CORES AND CACHE SLICES - A processor of an aspect includes a plurality of logical processors each having one or more corresponding lower level caches. A shared higher level cache is shared by the plurality of logical processors. The shared higher level cache includes a distributed cache slice for each of the logical processors. The processor includes logic to direct an access that misses in one or more lower level caches of a corresponding logical processor to a subset of the distributed cache slices in a virtual cluster that corresponds to the logical processor. Other processors, methods, and systems are also disclosed. | 07-03-2014 |
20140208141 | DYNAMICALLY CONTROLLING INTERCONNECT FREQUENCY IN A PROCESSOR - In one embodiment, the present invention includes a method for determining whether a number of stalled cores of a multicore processor is greater than a stall threshold. If so, a recommendation may be made that an operating frequency of system agent circuitry of the processor be increased. Then based on multiple recommendations, a candidate operating frequency of the system agent circuitry can be set. Other embodiments are described and claimed. | 07-24-2014 |
20140214955 | APPARATUS, SYSTEM, AND METHODS FOR FACILITATING ONE-WAY ORDERING OF MESSAGES - Methods, apparatus and systems for facilitating one-way ordering of otherwise independent message classes. A one-way message ordering mechanism facilitates one-way ordering of messages of different message classes sent between interconnects employing independent pathways for the message classes. In one aspect, messages of a second message class may not pass messages of a first message class. Moreover, when messages of the first and second classes are received in sequence, the ordering mechanism ensures that messages of the first class are forwarded to, and received at, a next hop prior to forwarding messages of the second class. | 07-31-2014 |
20140297967 | INTER-QUEUE ANTI-STARVATION MECHANISM WITH DYNAMIC DEADLOCK AVOIDANCE IN A RETRY BASED PIPELINE - Methods and apparatus relating to an inter-queue anti-starvation mechanism with dynamic deadlock avoidance in a retry based pipeline are described. In one embodiment, logic may arbitrate between two queues based on various rules. The queues may store data including local or remote requests, data responses, non-data responses, external interrupts, etc. Other embodiments are also disclosed. | 10-02-2014 |
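Three of the entries above (20110161601, 20140115197 and 20140297967) describe arbitrating between two request queues in a retry-based pipeline so that neither queue is starved. The C sketch below shows one simple way such an arbiter could be structured; the starvation threshold, the queue fields, and the pick_queue() function are assumptions made for illustration, not the mechanism claimed in the applications. A real arbiter would also account for resource availability and retry outcomes, which the abstracts mention but the sketch omits.

```c
#define STARVATION_THRESHOLD 8   /* assumed value, for illustration only */

typedef struct {
    int depth;    /* number of pending entries in the queue */
    int losses;   /* consecutive arbitration cycles this queue has lost */
} queue_t;

/* Returns 0 or 1: which of the two queues wins this arbitration cycle.
 * Assumes at least one queue has pending work.  A queue that has lost too
 * many consecutive cycles is forced to win, which bounds how long either
 * queue can be starved. */
int pick_queue(queue_t *q0, queue_t *q1)
{
    int winner;

    if (q0->depth == 0)
        winner = 1;                                   /* only q1 has work */
    else if (q1->depth == 0)
        winner = 0;                                   /* only q0 has work */
    else if (q0->losses >= STARVATION_THRESHOLD)
        winner = 0;                                   /* q0 is being starved */
    else if (q1->losses >= STARVATION_THRESHOLD)
        winner = 1;                                   /* q1 is being starved */
    else
        winner = (q0->depth >= q1->depth) ? 0 : 1;    /* default: deeper queue */

    /* Update the starvation counters for the next cycle. */
    if (winner == 0) {
        q0->losses = 0;
        if (q1->depth > 0)
            q1->losses++;
    } else {
        q1->losses = 0;
        if (q0->depth > 0)
            q0->losses++;
    }
    return winner;
}
```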