Patent application number | Description | Published |
20090132749 | Cache memory system - Systems and methods are disclosed for pre-fetching data into a cache memory system. These systems and methods comprise retrieving a portion of data from a system memory and storing a copy of the retrieved portion of data in a cache memory. These systems and methods further comprise monitoring data that has been placed into pre-fetch memory. | 05-21-2009 |
20090132750 | Cache memory system - The present disclosure provides systems and methods for a cache memory and a cache load circuit. The cache load circuit is capable of retrieving a portion of data from the system memory and of storing a copy of the retrieved portion of data in the cache memory. In addition, the systems and methods comprise a monitoring circuit for monitoring accesses to data in the system memory. | 05-21-2009 |
20090132768 | Cache memory system - Systems and methods are disclosed that comprise a cache memory for storing a copy of a portion of data stored in a system memory and a cache load circuit capable of retrieving the portion of data from the system memory. The systems and methods further comprise a status memory for identifying whether or not a region of the cache memory contains data that has been accessed from the cache memory by an external device. | 05-21-2009 |
20090307433 | Cache memory system - Systems and methods for pre-fetching data are disclosed that use a cache memory for storing a copy of data stored in a system memory and a mechanism to initiate a pre-fetch of data from the system memory into the cache memory. The system further comprises an event monitor, connected to a path on which signals representing events are transmitted between one or more event-generating modules and a processor. In some embodiments, the event monitor initiates a pre-fetch of a portion of data in response to detecting an event indicating the availability of that portion of data in the system memory. | 12-10-2009 |
20110133825 | INTEGRATED CIRCUIT PACKAGE WITH MULTIPLE DIES AND SAMPLED CONTROL SIGNALS - A package includes a first die and a second die, at least one of said first and second dies being a memory. The dies are connected to each other through an interface. The interface is configured to transport both control signals and memory transactions. A sampling circuit samples the control signals before transport on the interface. The sampling circuit is controlled in dependence on at least one quality of service parameter associated with a respective control signal. | 06-09-2011 |
20110133826 | INTEGRATED CIRCUIT PACKAGE WITH MULTIPLE DIES AND QUEUE ALLOCATION - A package includes a first die and a second die. The dies are connected to each other through an interface. At least one of the first and second dies includes a plurality of signal sources, wherein each source has at least one quality of service parameter associated therewith, and a plurality of queues having different priorities. A signal from a respective one of the signal sources is allocated to one of the plurality of queues in dependence on the at least one quality of service parameter associated with the respective signal source. The interface is configured such that signals from said queues are transported from one of said first and second dies to the other of said first and second dies. | 06-09-2011 |
20110134705 | INTEGRATED CIRCUIT PACKAGE WITH MULTIPLE DIES AND A MULTIPLEXED COMMUNICATIONS INTERFACE - A package includes a first die and a second die, at least one of said first and second dies being a memory. The dies are connected to each other through an interface. The interface is configured to transport both control signals and memory transactions. A multiplexer is provided to multiplex the control signals and memory transactions onto the interface such that a plurality of connections of said interface are shared by the control signals and the memory transactions. | 06-09-2011 |
20110135046 | INTEGRATED CIRCUIT PACKAGE WITH MULTIPLE DIES AND A SYNCHRONIZER - A package includes a first die and a second die. The dies are connected to each other through an interface. The interface is configured to transport both control signals and memory transactions. A synchronizer is provided on at least one of said first and second dies. The synchronizer is configured to cause any untransmitted control signal values to be transmitted across the interface. | 06-09-2011 |
20110138093 | INTEGRATED CIRCUIT PACKAGE WITH MULTIPLE DIES AND INTERRUPT PROCESSING - A package includes a first die and a second die. The dies are connected to each other through an interface. The package includes interrupt processing for detecting interrupt information and providing a packet in response to the interrupt information detection. The packet includes an address to which data in the packet is to be written. The interface is configured to transport the packet between the dies. A data store is provided to which the data is writable. An interrupt event is determined from data received in several packets. | 06-09-2011 |
20110261603 | INTEGRATED CIRCUIT PACKAGE WITH MULTIPLE DIES AND BUNDLING OF CONTROL SIGNALS - A package includes a first die and a second die, at least one of said first and second dies being a memory. The dies are connected to each other through an interface. The interface is configured to transport a plurality of control signals. The number of control signals is greater than a width of the interface. At least one of the first and second dies performs a configurable grouping so as to provide a plurality of groups of control signals. The signals within a group are transmitted across the interface together. | 10-27-2011 |
20120210093 | METHOD AND APPARATUS FOR INTERFACING MULTIPLE DIES WITH MAPPING TO MODIFY SOURCE IDENTITY - A package includes a die and at least one further die. The die has an interface configured to receive a transaction request from the further die via an interconnect and to transmit a response to the transaction request to said further die via the interconnect. The die also has mapping circuitry which is configured to receive the transaction request including at least first source identity information, wherein the first source identity information is associated with a source of the transaction request on the further die. The mapping circuitry is configured to modify the transaction request to replace the first source identity information with local source identity information, wherein that local source identity information is associated with the mapping circuitry. The mapping circuitry is configured to modify the received transaction request to provide said first source identity information in a further field. | 08-16-2012 |
20120210288 | METHOD AND APPARATUS FOR INTERFACING MULTIPLE DIES WITH MAPPING FOR SOURCE IDENTIFIER ALLOCATION - A package includes a die and at least one further die. The die has an interface configured to receive a transaction request from the further die via an interconnect and to transmit a response to the transaction request to said further die via the interconnect. The die also has mapping circuitry which is configured to allocate local source identity information to the received transaction as its source identity information, the local source identity information comprising one of a set of reusable local source identifiers. This preserves the order of transactions tagged with the same original source identity and target, and allows transactions tagged with different source identifiers to be processed out of order. | 08-16-2012 |
20130031312 | CACHE MEMORY CONTROLLER - A cache memory controller including: a pre-fetch requester configured to issue pre-fetch requests, each pre-fetch request having one of a plurality of different qualities of service. | 01-31-2013 |
20130031313 | CACHE ARRANGEMENT - A first cache arrangement including an input configured to receive a memory request from a second cache arrangement; a first cache memory for storing data; an output configured to provide a response to the memory request for the second cache arrangement; and a first cache controller; the first cache controller configured such that, for the response to the memory request output by the output, the first cache memory includes no allocation for data associated with the memory request. | 01-31-2013 |
20130031330 | ARRANGEMENT AND METHOD - A first arrangement including a first interface configured to receive a memory transaction having an address from a second arrangement; a second interface; and an address translator configured to determine, based on said address, whether said transaction is for said first arrangement and, if so, to translate said address, or, if said transaction is for a third arrangement, to forward said transaction without modification to said address to said second interface, said second interface being configured to transmit said transaction, without modification to said address, to said third arrangement. | 01-31-2013 |
20130031347 | ARRANGEMENT AND METHOD - A first arrangement including an interface configured to receive transactions with an address from a second arrangement having a first memory space; a translator configured to translate an address of a first type of received transaction to a second memory space of the first arrangement, the second memory space being different to the first memory space; and boot logic configured to map a boot transaction of the received transactions to a boot region in the second memory space. | 01-31-2013 |
20130064143 | CIRCUIT - A circuit including an initiator of a transaction, an interconnect, and a controller. The controller is configured, in response to a condition in at least one first part of the circuit, to send a notification via the interconnect to at least one block in a second part of the circuit. The notification includes information about the condition in the first part of the circuit, the condition preventing a response to the transaction from being received by the initiator. | 03-14-2013 |
20130103912 | ARRANGEMENT - An arrangement includes a first part and a second part. The first part includes a memory controller for accessing a memory, at least one first cache memory and a first directory. The second part includes at least one second cache memory configured to request access to said memory. The first directory is configured to use a first coherency protocol for the at least one first cache memory and a second, different coherency protocol for the at least one second cache memory. | 04-25-2013 |
20140098617 | PACKAGE - A package includes a first die and a second die. An interface connects the first die and the second die. At least one of the first and second dies includes a memory. The interface is configured to transport both control signals and memory transactions. A multiplexing circuit multiplexes the control signals and the memory transactions onto the interface such that connections of the interface are shared by the control signals and the memory transactions. | 04-10-2014 |
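Several of the abstracts above (notably 20090307433) describe the same pre-fetch scheme: an event monitor sits on the signal path between event-generating modules and a processor, and when it sees an event announcing that data is available in system memory, it pulls a copy of that data into the cache before the processor asks for it. A minimal Python sketch of that idea follows; all class, method, and field names (`CacheMemory`, `EventMonitor`, `on_event`, the `"data_available"` event shape) are illustrative assumptions, not taken from the filings.

```python
# Hedged sketch of event-driven pre-fetching as described in application
# 20090307433. Names and event format are hypothetical.

class CacheMemory:
    """Holds copies of portions of system memory, keyed by address."""

    def __init__(self):
        self._lines = {}  # address -> copied data

    def load(self, address, data):
        self._lines[address] = data

    def contains(self, address):
        return address in self._lines


class EventMonitor:
    """Watches the event path and initiates pre-fetches into the cache."""

    def __init__(self, system_memory, cache):
        self._system_memory = system_memory  # address -> data
        self._cache = cache

    def on_event(self, event):
        # A "data_available" event names the region an event-generating
        # module has just written to system memory; copy it into cache.
        if event.get("type") == "data_available":
            address = event["address"]
            self._cache.load(address, self._system_memory[address])


system_memory = {0x1000: b"payload"}
cache = CacheMemory()
monitor = EventMonitor(system_memory, cache)

# The monitor sees the availability event and pre-fetches the data,
# so it is already cached when the processor later requests it.
monitor.on_event({"type": "data_available", "address": 0x1000})
```

The design point, as the abstracts present it, is that the pre-fetch is triggered by observing traffic that already flows between modules and the processor, rather than by a predictor inside the cache controller.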
Patent application number | Description | Published |
20090112945 | DATA PROCESSING APPARATUS AND METHOD OF PROCESSING DATA - Data processing apparatus comprising: a chunk store containing specimen data chunks; a manifest store containing a plurality of manifests, each of which represents at least a part of a data set and each of which comprises at least one reference to at least one of said specimen data chunks; a sparse chunk index containing information on only some specimen data chunks; and a processor operable to: process input data into input data chunks; identify manifests having at least one reference to one of said specimen data chunks that corresponds to one of said input data chunks and on which there is information contained in the sparse chunk index; and prioritize the identified manifests for subsequent operation. | 04-30-2009 |
20090112946 | DATA PROCESSING APPARATUS AND METHOD OF PROCESSING DATA - Data processing apparatus comprising: a chunk store partitioned into a plurality of chunk sections, at least one section storing specimen data chunks, the processing apparatus being operable to: process input data into one or more input data chunks; identify a chunk section already containing a specimen data chunk corresponding to at least one input data chunk; and store the at least one input data chunk in another chunk section as a specimen data chunk if the identified chunk section has a predetermined characteristic. | 04-30-2009 |
20130007359 | ACCESS COMMANDS INCLUDING EXPECTED MEDIA POSITIONS - Techniques to send and receive access commands are provided. The access commands may include an expected media position. The expected media position may be compared to an actual media position. | 01-03-2013 |
20150088839 | REPLACING A CHUNK OF DATA WITH A REFERENCE TO A LOCATION - Examples disclose a computing device comprising a deduplication module to analyze a signature associated with a chunk of data to identify a corresponding signature in an index of signatures on a hard drive. The corresponding signature indicates the chunk of data corresponds to a stored chunk of data within a removable media. Further, the deduplication module determines whether the chunk of data is redundant based on the identification of the corresponding signature and replaces the chunk of data with a reference to a location of the stored chunk of data. Additionally, the examples also disclose the removable media to store the reference to the chunk of data. | 03-26-2015 |
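The deduplication abstracts above share one core loop, stated most directly in 20150088839: compute a signature for each incoming chunk, look it up in an index of known signatures, and on a hit replace the chunk with a reference to the stored copy instead of storing it again. The sketch below illustrates that loop under stated assumptions; the function names, the list-backed chunk store, and the choice of SHA-256 as the signature are mine for illustration, not details from the filings.

```python
# Hedged sketch of signature-based chunk deduplication in the spirit of
# application 20150088839. Hash choice and data structures are hypothetical.

import hashlib


def signature(chunk: bytes) -> str:
    """Signature identifying a chunk's content (SHA-256 here, by assumption)."""
    return hashlib.sha256(chunk).hexdigest()


def deduplicate(chunks, index, store):
    """Return a manifest entry per chunk: a reference for redundant chunks,
    or the location of a newly stored chunk otherwise."""
    manifest = []
    for chunk in chunks:
        sig = signature(chunk)
        if sig in index:
            # Redundant chunk: replace it with a reference to its location.
            manifest.append(("ref", index[sig]))
        else:
            location = len(store)
            store.append(chunk)
            index[sig] = location
            manifest.append(("stored", location))
    return manifest


index, store = {}, []
manifest = deduplicate([b"alpha", b"beta", b"alpha"], index, store)
# manifest -> [('stored', 0), ('stored', 1), ('ref', 0)]; store holds 2 chunks.
```

The sparse-index variant in 20090112945 refines the lookup side of this loop: instead of indexing every signature, only some signatures are indexed, and manifests that share an indexed signature with the input are prioritized for comparison.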
Patent application number | Description | Published |
20140341707 | SHROUD ARRANGEMENT FOR A GAS TURBINE ENGINE - A seal segment of a shroud arrangement for bounding a hot gas flow path within a gas turbine engine, including: a plate having an inboard hot gas flow path facing side and an outboard side; a bulkhead extending from the outboard side of the plate which defines a fore portion and an aft portion; a first cooling circuit within the plate for cooling a first portion of the plate; a second cooling circuit within the plate for cooling a second portion of the plate; wherein the first cooling circuit is in fluid communication with the fore portion and the second cooling circuit is in fluid communication with the aft portion and the first and second cooling circuits are fluidically isolated from one another. Also described is a method of cooling a seal segment in a gas turbine engine. | 11-20-2014 |
20140341717 | SHROUD ARRANGEMENT FOR A GAS TURBINE ENGINE - A seal segment of a shroud arrangement for bounding a hot gas flow path within a gas turbine engine is described. The seal segment is upstream of a second component of the gas turbine engine relative to the hot gas flow path. The seal segment comprises: a plate having: a downstream trailing edge; an inboard side which faces the hot gas flow path when in use; an outboard side; and a first part of a two part seal attached on the outboard side, wherein a second part of the two part seal is attached to the second component such that in an assembled gas turbine engine the two part seal provides an isolation chamber which is in fluid communication with the hot gas flow path via the trailing edge of the plate. A gas turbine having the seal segment is also described. | 11-20-2014 |
20140341721 | SHROUD ARRANGEMENT FOR A GAS TURBINE ENGINE - Described is a shroud arrangement for a gas turbine engine, comprising: a seal segment for bounding a hot gas flow path within the gas turbine engine, the seal segment being attached to a casing of the engine via at least one fore attachment and at least one aft attachment, the fore and aft attachments restricting radial movement of the seal segment relative to the engine casing; and an axial restrictor which prevents axial movement of the seal segment relative to the engine casing, the axis being the principal rotational axis of the engine, wherein the fore and aft attachments are slidably engaged with a carrier segment or engine casing from a common direction. Also described is a gas turbine engine having the shroud arrangement. | 11-20-2014 |