Patent application number | Description | Published |
20120159057 | MEMORY POWER TOKENS - Techniques are described for controlling availability of memory. As memory write operations are processed, the contents of memory targeted by the write operations are read and compared to the data to be written. The availability of the memory for subsequent write operations is controlled based on the outcomes of the comparing. How many concurrent write operations are being executed may vary according to the comparing. In one implementation, a pool of tokens is maintained based on the comparing. The tokens represent units of power. When write operations require more power, for example when they will alter the values of more cells in PCM memory, they draw (and eventually return) more tokens. The token pool can act as a memory-availability mechanism in that tokens must be obtained for a write operation to be executed. When and how many tokens are reserved or recycled can vary according to implementation. | 06-21-2012 |
20120311269 | NON-UNIFORM MEMORY-AWARE CACHE MANAGEMENT - An apparatus is disclosed for caching memory data in a computer system with multiple system memories. The apparatus comprises a data cache for caching memory data. The apparatus is configured to determine a retention priority for a cache block stored in the data cache. The retention priority is based on a performance characteristic of a system memory from which the cache block is cached. | 12-06-2012 |
20120317364 | CACHE PREFETCHING FROM NON-UNIFORM MEMORIES - An apparatus is disclosed for performing cache prefetching from non-uniform memories. The apparatus includes a processor configured to access multiple system memories with different respective performance characteristics. Each memory stores a respective subset of system memory data. The apparatus includes caching logic configured to determine a portion of the system memory to prefetch into the data cache. The caching logic determines the portion to prefetch based on one or more of the respective performance characteristics of the system memory that stores the portion of data. | 12-13-2012 |
20120317376 | ROW BUFFER REGISTER FILE - A memory controller of a device stores data from each of a plurality of row buffers of a multiple-bank memory device in a corresponding entry of a row buffer register file (RBRF) provided in a logic/interface layer of the memory device. The memory controller serves a first memory request from an entry in the RBRF responsive to determining that the entry stores data from a first row buffer associated with the first memory request. | 12-13-2012 |
20130024597 | TRACKING MEMORY ACCESS FREQUENCIES AND UTILIZATION - A method is provided including recording, in a counter of a set of counters, a number of cache accesses for a page corresponding to a translation lookaside buffer (TLB) page table entry, where the counters are physically grouped together and physically separate from the TLB. The method also includes recording the number of cache accesses from the corresponding counter to a field of the page table responsive to an event. An apparatus is provided that includes a memory unit and a set of counters coupled to the memory unit, where the set of counters comprises one or more counters that are physically grouped together and are adapted to store a value indicative of a number of memory page accesses. The apparatus includes a cache coupled to the set of counters. Also provided is a computer readable storage device encoded with data for adapting a manufacturing facility to create the apparatus. | 01-24-2013 |
20130054849 | UNIFORM MULTI-CHIP IDENTIFICATION AND ROUTING SYSTEM - Various methods, computer-readable mediums, articles of manufacture and systems are disclosed. In one aspect, a method is provided that includes generating a packet with a first semiconductor chip. The packet is destined to transit a first substrate and be received by a node of a second semiconductor chip. The packet includes a packet header and packet body. The packet header includes an identification of a first exit point from the first substrate and an identification of the node. The packet is sent to the first substrate and eventually to the node of the second semiconductor chip. | 02-28-2013 |
20130138892 | DRAM CACHE WITH TAGS AND DATA JOINTLY STORED IN PHYSICAL ROWS - A system and method for efficient cache data access in a large row-based memory of a computing system. A computing system includes a processing unit and an integrated three-dimensional (3D) dynamic random access memory (DRAM). The processing unit uses the 3D DRAM as a cache. Each row of the multiple rows in the memory array banks of the 3D DRAM stores at least multiple cache tags and multiple corresponding cache lines indicated by the multiple cache tags. In response to receiving a memory request from the processing unit, the 3D DRAM performs a memory access according to the received memory request on a given cache line indicated by a cache tag within the received memory request. Rather than utilizing multiple DRAM transactions, a single, complex DRAM transaction may be used to reduce latency and power consumption. | 05-30-2013 |
20130138894 | HARDWARE FILTER FOR TRACKING BLOCK PRESENCE IN LARGE CACHES - A system and method for efficiently determining whether a requested memory location is in a large row-based memory of a computing system. A computing system includes a processing unit that generates memory requests on a first chip and a last-level cache (LLC) on a second chip connected to the first chip. The processing unit includes an access filter that determines whether to access the cache. The cache is fabricated on top of the processing unit. The processing unit determines whether to access the access filter for a given memory request. The processing unit accesses the access filter to determine whether given data associated with a given memory request is stored within the cache. In response to determining that the access filter indicates the given data is not stored within the cache, the processing unit generates a memory request to send to off-package memory. | 05-30-2013 |
20130159623 | PROCESSOR WITH GARBAGE-COLLECTION BASED CLASSIFICATION OF MEMORY - Improved memory management in a processor is provided using garbage collection utilities. The processor includes higher performance memory units, lower performance memory units, and a memory management unit. The memory management unit includes a garbage collection utility programmed to identify high use memory blocks and low use memory blocks within the higher and lower performance memory units. The memory management unit is also configured to move the high use memory blocks to higher performance memory and move the low use memory blocks to lower performance memory. The method comprises determining performance characteristics of available memory to identify higher performance memory and lower performance memory. Next, memory block use metrics are analyzed to identify high use memory blocks and low use memory blocks. Finally, high use memory blocks are moved to the higher performance memory while the low use memory blocks are moved to the lower performance memory. | 06-20-2013 |
20130159812 | MEMORY ARCHITECTURE FOR READ-MODIFY-WRITE OPERATIONS - According to one embodiment, a memory architecture implemented method is provided, where the memory architecture includes a logic chip and one or more memory chips on a single die, and where the method comprises: reading values of data from the one or more memory chips to the logic chip, where the one or more memory chips and the logic chip are on a single die; modifying, via the logic chip on the single die, the values of data; and writing, from the logic chip to the one or more memory chips, the modified values of data. | 06-20-2013 |
20130238856 | System and Method for Cache Organization in Row-Based Memories - The present disclosure relates to a method and system for mapping cache lines to a row-based cache. In particular, a method includes, in response to a plurality of memory access requests each including an address associated with a cache line of a main memory, mapping sequentially addressed cache lines of the main memory to a row of the row-based cache. A disclosed system includes row index computation logic operative to map sequentially addressed cache lines of a main memory to a row of a row-based cache in response to a plurality of memory access requests each including an address associated with a cache line of the main memory. | 09-12-2013 |
20130297906 | METHOD AND APPARATUS FOR BATCHING MEMORY REQUESTS - A memory controller includes a batch unit, a batch scheduler, and a memory command scheduler. The batch unit includes a plurality of source queues for receiving memory requests from a plurality of sources. Each source is associated with a selected one of the source queues. The batch unit is operable to generate batches of memory requests in the source queues. The batch scheduler is operable to select a batch from one of the source queues. The memory command scheduler is operable to receive the selected batch from the batch scheduler and issue the memory requests in the selected batch to a memory interfacing with the memory controller. | 11-07-2013 |
20130326185 | MEMORY POWER TOKENS - Techniques are described for controlling availability of memory. As memory write operations are processed, the contents of memory targeted by the write operations are read and compared to the data to be written. The availability of the memory for subsequent write operations is controlled based on the outcomes of the comparing. How many concurrent write operations are being executed may vary according to the comparing. In one implementation, a pool of tokens is maintained based on the comparing. The tokens represent units of power. When write operations require more power, for example when they will alter the values of more cells in PCM memory, they draw (and eventually return) more tokens. The token pool can act as a memory-availability mechanism in that tokens must be obtained for a write operation to be executed. When and how many tokens are reserved or recycled can vary according to implementation. | 12-05-2013 |
20130346695 | INTEGRATED CIRCUIT WITH HIGH RELIABILITY CACHE CONTROLLER AND METHOD THEREFOR - An integrated circuit includes a register including a field for defining a high reliability mode of the integrated circuit and a cache and memory controller coupled to the register and responsive to the high reliability mode to access a memory to store, in a row of the memory, a first multiple number of cache lines, a first multiple number of tags corresponding to the first multiple number of cache lines, and reliability data corresponding to at least the first multiple number of cache lines. | 12-26-2013 |
20140040532 | STACKED MEMORY DEVICE WITH HELPER PROCESSOR - A processing system comprises one or more processor devices and other system components coupled to a stacked memory device having a set of stacked memory layers and a set of one or more logic layers. The set of logic layers implements a helper processor that executes instructions to perform tasks in response to a task request from the processor devices or otherwise on behalf of the other processor devices. The set of logic layers also includes a memory interface coupled to memory cell circuitry implemented in the set of stacked memory layers and coupleable to the processor devices. The memory interface operates to perform memory accesses for the processor devices and for the helper processor. By virtue of the helper processor's tight integration with the stacked memory layers, the helper processor may perform certain memory-intensive operations more efficiently than could be performed by the external processor devices. | 02-06-2014 |
20140040698 | STACKED MEMORY DEVICE WITH METADATA MANAGEMENT - A processing system comprises one or more processor devices and other system components coupled to a stacked memory device having a set of stacked memory layers and a set of one or more logic layers. The set of logic layers implements a metadata manager that offloads metadata management from the other system components. The set of logic layers also includes a memory interface coupled to memory cell circuitry implemented in the set of stacked memory layers and coupleable to the devices external to the stacked memory device. The memory interface operates to perform memory accesses for the external devices and for the metadata manager. By virtue of the metadata manager's tight integration with the stacked memory layers, the metadata manager may perform certain memory-intensive metadata management operations more efficiently than could be performed by the external devices. | 02-06-2014 |
20140082322 | PROGRAMMABLE PHYSICAL ADDRESS MAPPING FOR MEMORY - A memory implements a programmable physical address mapping that can change to reflect changing memory access patterns, observed or anticipated, to the memory. The memory employs address decode logic that can implement any of a variety of physical address mappings between physical addresses and corresponding memory locations. The physical address mappings may locate the data within one or more banks and rows of the memory so as to facilitate more efficient memory accesses for a given access pattern. The programmable physical address mapping employed by the hardware of the memory can include, but is not limited to, hardwired logic gates, programmable look-up tables or other mapping tables, reconfigurable logic, or combinations thereof. The physical address mapping may be programmed for the entire memory or on a per-memory region basis. | 03-20-2014 |
20140089609 | INTERPOSER HAVING EMBEDDED MEMORY CONTROLLER CIRCUITRY - A system is provided that includes an interposer having memory controller circuitry embedded therein. The interposer includes conductive vias that are embedded within and that extend through the interposer. The memory controller circuitry can be coupled to some of the conductive vias. In some implementations, other ones of the conductive vias are configured to be coupled to a processor and a memory module that can be mounted along a surface of the interposer. Conductive links are disposed on a surface of the interposer to couple the processor and the memory module to the memory controller circuitry. | 03-27-2014 |
20140108885 | HIGH RELIABILITY MEMORY CONTROLLER - An integrated circuit includes a memory having an address space and a memory controller coupled to the memory for accessing the address space in response to received memory accesses. The memory controller further accesses a plurality of data elements in a first portion of the address space, and reliability data corresponding to the plurality of data elements in a second portion of the address space. | 04-17-2014 |
20140136870 | TRACKING MEMORY BANK UTILITY AND COST FOR INTELLIGENT SHUTDOWN DECISIONS - A device receives an indication that a memory bank is to be powered down, and determines, based on receiving the indication, shutdown scores corresponding to powered up memory banks. Each shutdown score is based on a shutdown metric associated with powering down a powered up memory bank. The device may power down a selected memory bank based on the shutdown scores. | 05-15-2014 |
20140136873 | TRACKING MEMORY BANK UTILITY AND COST FOR INTELLIGENT POWER UP DECISIONS - A device receives an indication that a memory bank is to be powered up, and determines, based on receiving the indication, power scores corresponding to powered down memory banks. Each power score corresponds to a power metric associated with powering up a powered down memory bank. The device powers up a selected memory bank based on the power scores. | 05-15-2014 |
20140143493 | Bypassing a Cache when Handling Memory Requests - The described embodiments include a computing device that handles memory requests. In some embodiments, a bypass mechanism determines, based on expected memory request resolution times, whether a memory request should be sent to a cache in the computing device or bypassed to the next lower level of the memory hierarchy, and then sends or bypasses the memory request accordingly. | 05-22-2014 |
20140143502 | Predicting Outcomes for Memory Requests in a Cache Memory - The described embodiments include a cache controller with a prediction mechanism in a cache. In the described embodiments, the prediction mechanism is configured to perform a lookup in each table in a hierarchy of lookup tables in parallel to determine if a memory request is predicted to be a hit in the cache, each table in the hierarchy comprising predictions of whether memory requests to corresponding regions of a main memory will hit the cache, the corresponding regions of the main memory being smaller for tables lower in the hierarchy. | 05-22-2014 |
20140143505 | Dynamically Configuring Regions of a Main Memory in a Write-Back Mode or a Write-Through Mode - The described embodiments include a main memory and a cache memory (or "cache") with a cache controller that includes a mode-setting mechanism. In some embodiments, the mode-setting mechanism is configured to dynamically determine an access pattern for the main memory. Based on the determined access pattern, the mode-setting mechanism configures at least one region of the main memory in a write-back mode and configures other regions of the main memory in a write-through mode. In these embodiments, when performing a write operation in the cache memory, the cache controller determines whether the region in the main memory where the cache block is from is configured in the write-back mode or the write-through mode and then performs a corresponding write operation in the cache memory. | 05-22-2014 |
20140156941 | Tracking Non-Native Content in Caches - The described embodiments include a cache with a plurality of banks that includes a cache controller. In these embodiments, the cache controller determines a value representing non-native cache blocks stored in at least one bank in the cache, wherein a cache block is non-native to a bank when a home for the cache block is in a predetermined location relative to the bank. Then, based on the value representing non-native cache blocks stored in the at least one bank, the cache controller determines at least one bank in the cache to be transitioned from a first power mode to a second power mode. Next, the cache controller transitions the determined at least one bank in the cache from the first power mode to the second power mode. | 06-05-2014 |
20140164711 | Configuring a Cache Management Mechanism Based on Future Accesses in a Cache - The described embodiments include a cache controller that configures a cache management mechanism. In the described embodiments, the cache controller is configured to monitor at least one structure associated with a cache to determine at least one cache block that may be accessed during a future access in the cache. Based on the determination of the at least one cache block that may be accessed during a future access in the cache, the cache controller configures the cache management mechanism. | 06-12-2014 |
20140164713 | Bypassing Memory Requests to a Main Memory - Some embodiments include a computing device with a control circuit that handles memory requests. The control circuit checks one or more conditions to determine when a memory request should be bypassed to a main memory instead of sending the memory request to a cache memory. When the memory request should be bypassed to a main memory, the control circuit sends the memory request to the main memory. Otherwise, the control circuit sends the memory request to the cache memory. | 06-12-2014 |
20140173211 | Partitioning Caches for Sub-Entities in Computing Devices - Some embodiments include a partitioning mechanism that partitions a cache memory into sub-partitions for sub-entities. In the described embodiments, the cache memory is initially partitioned into two or more partitions for one or more corresponding entities. During a partitioning operation, the partitioning mechanism is configured to partition one or more of the partitions in the cache memory into two or more sub-partitions for one or more sub-entities of a corresponding entity. A cache controller then uses a corresponding sub-partition for memory accesses by the one or more sub-entities. | 06-19-2014 |
20140173378 | PARITY DATA MANAGEMENT FOR A MEMORY ARCHITECTURE - A processor system as presented herein includes a processor core, cache memory coupled to the processor core, a memory controller coupled to the cache memory, and a system memory component coupled to the memory controller. The system memory component includes a plurality of independent memory channels configured to store data blocks, wherein the memory controller controls the storing of parity bits in at least one of the plurality of independent memory channels. In some implementations, the system memory is realized as a die-stacked memory component. | 06-19-2014 |
20140173379 | DIRTY CACHELINE DUPLICATION - A method of managing memory includes installing a first cacheline at a first location in a cache memory and receiving a write request. In response to the write request, the first cacheline is modified in accordance with the write request and marked as dirty. Also in response to the write request, a second cacheline is installed that duplicates the first cacheline, as modified in accordance with the write request, at a second location in the cache memory. | 06-19-2014 |
20140176187 | DIE-STACKED MEMORY DEVICE WITH RECONFIGURABLE LOGIC - A die-stacked memory device incorporates a reconfigurable logic device to provide implementation flexibility in performing various data manipulation operations and other memory operations that use data stored in the die-stacked memory device or that result in data that is to be stored in the die-stacked memory device. One or more configuration files representing corresponding logic configurations for the reconfigurable logic device can be stored in a configuration store at the die-stacked memory device, and a configuration controller can program a reconfigurable logic fabric of the reconfigurable logic device using a selected one of the configuration files. Due to the integration of the logic dies and the memory dies, the reconfigurable logic device can perform various data manipulation operations with higher bandwidth and lower latency and power consumption compared to devices external to the die-stacked memory device. | 06-26-2014 |
20140177626 | DIE-STACKED DEVICE WITH PARTITIONED MULTI-HOP NETWORK - An electronic assembly includes horizontally-stacked die disposed at an interposer, and may also include vertically-stacked die. The stacked die are interconnected via a multi-hop communication network that is partitioned into a link partition and a router partition. The link partition is at least partially implemented in the metal layers of the interposer for horizontally-stacked die. The link partition may also be implemented in part by the intra-die interconnects in a single die and by the inter-die interconnects connecting vertically-stacked sets of die. The router partition is implemented at some or all of the die disposed at the interposer and comprises the logic that supports the functions that route packets among the components of the processing system via the interconnects of the link partition. The router partition may implement fixed routing, or alternatively may be configurable using programmable routing tables or configurable logic blocks. | 06-26-2014 |
20140181387 | HYBRID CACHE - Data caching methods and systems are provided. A method is provided for a hybrid cache system that dynamically changes the mode of one or more cache rows of a cache from an un-split mode, having a first tag field and a first data field, to a split mode, having a second tag field, a second data field smaller than the first data field, and a mapped page field, to improve the cache access efficiency of a workflow being executed in a processor. A hybrid cache system is provided in which the cache is configured to operate one or more cache rows in an un-split mode or in a split mode. The system is configured to dynamically change modes of the cache rows from the un-split mode to the split mode to improve the cache access efficiency of a workflow being executed by the processor. | 06-26-2014 |
20140181389 | INSTALLATION CACHE - Data caching methods and systems are provided. The data cache method loads data into an installation cache and a cache (simultaneously or serially) and returns data from the installation cache when the data has not completely loaded into the cache. The data cache system includes a processor, a memory coupled to the processor, a cache coupled to the processor and the memory and an installation cache coupled to the processor and the memory. The system is configured to load data from the memory into the installation cache and the cache (simultaneously or serially) and return data from the installation cache to the processor when the data has not completely loaded into the cache. | 06-26-2014 |
20140181412 | MECHANISMS TO BOUND THE PRESENCE OF CACHE BLOCKS WITH SPECIFIC PROPERTIES IN CACHES - A system and method for efficiently limiting storage space for data with particular properties in a cache memory. A computing system includes a cache and one or more sources for memory requests. In response to receiving a request to allocate data of a first type, a cache controller allocates the data in the cache responsive to determining a limit of an amount of data of the first type permitted in the cache is not reached. The controller maintains an amount and location information of the data of the first type stored in the cache. Additionally, the cache may be partitioned with each partition designated for storing data of a given type. Allocation of data of the first type is dependent at least upon the availability of a first partition and a limit of an amount of data of the first type in a second partition. | 06-26-2014 |
20140181414 | MECHANISMS TO BOUND THE PRESENCE OF CACHE BLOCKS WITH SPECIFIC PROPERTIES IN CACHES - A system and method for efficiently limiting storage space for data with particular properties in a cache memory. A computing system includes a cache array and a corresponding cache controller. The cache array includes multiple banks, wherein a first bank is powered down. In response to a write request to a second bank for data indicated to be stored in the powered down first bank, the cache controller determines a respective bypass condition for the data. If the bypass condition exceeds a threshold, then the cache controller invalidates any copy of the data stored in the second bank. If the bypass condition does not exceed the threshold, then the cache controller stores the data with a clean state in the second bank. The cache controller writes the data to lower-level memory in both cases. | 06-26-2014 |
20140181417 | CACHE COHERENCY USING DIE-STACKED MEMORY DEVICE WITH LOGIC DIE - A die-stacked memory device implements an integrated coherency manager to offload cache coherency protocol operations for the devices of a processing system. The die-stacked memory device includes a set of one or more stacked memory dies and a set of one or more logic dies. The one or more logic dies implement hardware logic providing a memory interface and the coherency manager. The memory interface operates to perform memory accesses in response to memory access requests from the coherency manager and the one or more external devices. The coherency manager comprises logic to perform coherency operations for shared data stored at the stacked memory dies. Due to the integration of the logic dies and the memory dies, the coherency manager can access shared data stored in the memory dies and perform related coherency operations with higher bandwidth and lower latency and power consumption compared to the external devices. | 06-26-2014 |
20140181421 | PROCESSING ENGINE FOR COMPLEX ATOMIC OPERATIONS - A system includes an atomic processing engine (APE) coupled to an interconnect. The interconnect is to couple to one or more processor cores. The APE receives a plurality of commands from the one or more processor cores through the interconnect. In response to a first command, the APE performs a first plurality of operations associated with the first command. The first plurality of operations references multiple memory locations, at least one of which is shared between two or more threads executed by the one or more processor cores. | 06-26-2014 |
20140181427 | Compound Memory Operations in a Logic Layer of a Stacked Memory - Some die-stacked memories will contain a logic layer in addition to one or more layers of DRAM (or other memory technology). This logic layer may be a discrete logic die or logic on a silicon interposer associated with a stack of memory dies. Additional circuitry/functionality is placed on the logic layer to implement functionality to perform various data movement and address calculation operations. This functionality would allow compound memory operations—a single request communicated to the memory that characterizes the accesses and movement of many data items. This eliminates the performance and power overheads associated with communicating address and control information on a fine-grain, per-data-item basis from a host processor (or other device) to the memory. This approach also provides better visibility of macro-level memory access patterns to the memory system and may enable additional optimizations in scheduling memory accesses. | 06-26-2014 |
20140181428 | QUALITY OF SERVICE SUPPORT USING STACKED MEMORY DEVICE WITH LOGIC DIE - A die-stacked memory device implements an integrated QoS manager to provide centralized QoS functionality in furtherance of one or more specified QoS objectives for the sharing of the memory resources by other components of the processing system. The die-stacked memory device includes a set of one or more stacked memory dies and one or more logic dies. The logic dies implement hardware logic for a memory controller and the QoS manager. The memory controller is coupleable to one or more devices external to the set of one or more stacked memory dies and operates to service memory access requests from the one or more external devices. The QoS manager comprises logic to perform operations in furtherance of one or more QoS objectives, which may be specified by a user, by an operating system, hypervisor, job management software, or other application being executed, or specified via hardcoded logic or firmware. | 06-26-2014 |
20140181453 | Processor with Host and Slave Operating Modes Stacked with Memory - A system, method, and computer program product are provided for a memory device system. One or more memory dies and at least one logic die are disposed in a package and communicatively coupled. The logic die comprises a processing device configurable to manage virtual memory and operate in an operating mode. The operating mode is selected from a set of operating modes comprising a slave operating mode and a host operating mode. | 06-26-2014 |
20140181457 | Write Endurance Management Techniques in the Logic Layer of a Stacked Memory - A system, method, and memory device embodying some aspects of the present invention for remapping external memory addresses and internal memory locations in stacked memory are provided. The stacked memory includes one or more memory layers configured to store data. The stacked memory also includes a logic layer connected to the one or more memory layers. The logic layer has an Input/Output (I/O) port configured to receive read and write commands from external devices, a memory map configured to maintain an association between external memory addresses and internal memory locations, and a controller coupled to the I/O port, the memory map, and the memory layers, configured to store data received from external devices to internal memory locations. | 06-26-2014 |
20140181458 | DIE-STACKED MEMORY DEVICE PROVIDING DATA TRANSLATION - A die-stacked memory device incorporates a data translation controller at one or more logic dies of the device to provide data translation services for data to be stored at, or retrieved from, the die-stacked memory device. The data translation operations implemented by the data translation controller can include compression/decompression operations, encryption/decryption operations, format translations, wear-leveling translations, data ordering operations, and the like. Due to the tight integration of the logic dies and the memory dies, the data translation controller can perform data translation operations with higher bandwidth and lower latency and power consumption compared to operations performed by devices external to the die-stacked memory device. | 06-26-2014 |
20140181483 | Computation Memory Operations in a Logic Layer of a Stacked Memory - Some die-stacked memories will contain a logic layer in addition to one or more layers of DRAM (or other memory technology). This logic layer may be a discrete logic die or logic on a silicon interposer associated with a stack of memory dies. Additional circuitry/functionality is placed on the logic layer to implement functionality to perform various computation operations. This functionality would be desired where performing the operations locally near the memory devices would allow increased performance and/or power efficiency by avoiding transmission of data across the interface to the host processor. | 06-26-2014 |
20140223445 | Selecting a Resource from a Set of Resources for Performing an Operation - The described embodiments comprise a selection mechanism that selects a resource from a set of resources in a computing device for performing an operation. In some embodiments, the selection mechanism is configured to perform a lookup in a table selected from a set of tables to identify a resource from the set of resources. When the identified resource is not available for performing the operation and until a resource is selected for performing the operation, the selection mechanism is configured to identify a next resource in the table and select the next resource for performing the operation when the next resource is available for performing the operation. | 08-07-2014 |
20140372711 | SCHEDULING MEMORY ACCESSES USING AN EFFICIENT ROW BURST VALUE - A memory accessing agent includes a memory access generating circuit and a memory controller. The memory access generating circuit is adapted to generate multiple memory accesses in a first ordered arrangement. The memory controller is coupled to the memory access generating circuit and has an output port, for providing the multiple memory accesses to the output port in a second ordered arrangement based on the memory accesses and characteristics of an external memory. The memory controller determines the second ordered arrangement by calculating an efficient row burst value and interrupting multiple row-hit requests to schedule a row-miss request based on the efficient row burst value. | 12-18-2014 |
20140376320 | SPARE MEMORY EXTERNAL TO PROTECTED MEMORY - A memory subsystem employs spare memory cells external to one or more memory devices. In some embodiments, a processing system uses the spare memory cells to replace individual selected cells at the protected memory, whereby the selected cells are replaced on a cell-by-cell basis, rather than exclusively on a row-by-row, column-by-column, or block-by-block basis. This allows faulty memory cells to be replaced efficiently, thereby improving memory reliability and manufacturing yields, without requiring large blocks of spare memory cells. | 12-25-2014 |
20140380003 | Method and System for Asymmetrical Processing With Managed Data Affinity - Methods, systems and computer readable storage mediums for more efficient and flexible scheduling of tasks on an asymmetric processing system having at least one host processor and one or more slave processors, are disclosed. An example embodiment includes determining a data access requirement of a task, comparing the data access requirement to respective local memories of the one or more slave processors, selecting a slave processor from the one or more slave processors based upon the comparing, and running the task on the selected slave processor. | 12-25-2014 |
20150016172 | QUERY OPERATIONS FOR STACKED-DIE MEMORY DEVICE - An integrated circuit (IC) package includes a stacked-die memory device. The stacked-die memory device includes a set of one or more stacked memory dies implementing memory cell circuitry. The stacked-die memory device further includes a set of one or more logic dies electrically coupled to the memory cell circuitry. The set of one or more logic dies includes a query controller and a memory controller. The memory controller is coupleable to at least one device external to the stacked-die memory device. The query controller is to perform a query operation on data stored in the memory cell circuitry responsive to a query command received from the external device. | 01-15-2015 |
20150019813 | MEMORY HIERARCHY USING ROW-BASED COMPRESSION - A system includes a first memory and a device coupleable to the first memory. The device includes a second memory to cache data from the first memory. The second memory includes a plurality of rows, each row including a corresponding set of compressed data blocks of non-uniform sizes and a corresponding set of tag blocks. Each tag block represents a corresponding compressed data block of the row. The device further includes decompression logic to decompress data blocks accessed from the second memory. The device further includes compression logic to compress data blocks to be stored in the second memory. | 01-15-2015 |
20150019834 | MEMORY HIERARCHY USING PAGE-BASED COMPRESSION - A system includes a device coupleable to a first memory. The device includes a second memory to cache data from the first memory. The second memory is to store a set of compressed pages of the first memory and a set of page descriptors. Each compressed page includes a set of compressed data blocks. Each page descriptor represents a corresponding page and includes a set of location identifiers that identify the locations of the compressed data blocks of the corresponding page in the second memory. The device further includes compression logic to compress data blocks of a page to be stored to the second memory and decompression logic to decompress compressed data blocks of a page accessed from the second memory. | 01-15-2015 |
20150026511 | PARTITIONABLE DATA BUS - A method and a system are provided for partitioning a system data bus. The method can include partitioning off a portion of a system data bus that includes one or more faulty bits to form a partitioned data bus. Further, the method includes transferring data over the partitioned data bus to compensate for data loss due to the one or more faulty bits in the system data bus. | 01-22-2015 |
20150039833 | Management of caches - A system and method for efficiently powering down banks in a cache memory for reducing power consumption. A computing system includes a cache array and a corresponding cache controller. The cache array includes multiple banks, each comprising multiple cache sets. In response to a request to power down a first bank of the multiple banks in the cache array, the cache controller selects a cache line of a given type in the first bank and determines whether a respective locality of reference for the selected cache line exceeds a threshold. If the threshold is exceeded, then the selected cache line is migrated to a second bank in the cache array. If the threshold is not exceeded, then the selected cache line is written back to lower-level memory. | 02-05-2015 |
20150061150 | STACKED SEMICONDUCTOR CHIP DEVICE WITH PHASE CHANGE MATERIAL - Various stacked semiconductor chip arrangements and methods of manufacturing the same are disclosed. In one aspect, an apparatus is provided that includes a first semiconductor chip, a second semiconductor chip mounted on the first semiconductor chip, and a first portion of a phase change material positioned in a first pocket associated with the first semiconductor chip or the second semiconductor chip to store heat generated by one or both of the first and second semiconductor chips. | 03-05-2015 |
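The sketches below illustrate, in plain Python, a few of the mechanisms summarized in the entries above. They are simplified software models written for this listing, not the patented hardware implementations, and all class, function, and parameter names are invented for illustration.

A minimal sketch of the token-pool mechanism described in applications 20120159057 and 20130326185: each write first reads and compares the targeted memory contents, and the number of cells that would actually change determines how many power tokens the write must draw before it may proceed.

```python
# Hypothetical software model of memory power tokens. All names and the
# demo numbers are invented; the real mechanism lives in the memory controller.

class PowerTokenPool:
    def __init__(self, total_tokens: int):
        self.available = total_tokens

    def try_acquire(self, n: int) -> bool:
        """Reserve n power tokens if available; otherwise refuse the write for now."""
        if n <= self.available:
            self.available -= n
            return True
        return False

    def release(self, n: int) -> None:
        """Return tokens once a write operation completes."""
        self.available += n


def cells_to_flip(current: bytes, new: bytes) -> int:
    """Read-before-write comparison: count bits that would actually change."""
    return sum(bin(a ^ b).count("1") for a, b in zip(current, new))


def begin_write(memory: bytearray, addr: int, data: bytes, pool: PowerTokenPool):
    current = bytes(memory[addr:addr + len(data)])
    needed = cells_to_flip(current, data)     # more changed cells -> more power -> more tokens
    if not pool.try_acquire(needed):
        return None                           # not enough power budget; caller retries later
    memory[addr:addr + len(data)] = data
    return needed                             # token count to return via complete_write()


def complete_write(pool: PowerTokenPool, tokens: int) -> None:
    pool.release(tokens)                      # tokens recycled for subsequent writes


if __name__ == "__main__":
    mem = bytearray(16)
    pool = PowerTokenPool(total_tokens=10)
    t1 = begin_write(mem, 0, b"\xff", pool)   # flips 8 cells -> draws 8 tokens
    t2 = begin_write(mem, 1, b"\x0f", pool)   # would need 4 tokens, only 2 left -> refused
    print(t1, t2)                             # 8 None
    complete_write(pool, t1)                  # first write finishes, tokens recycled
    print(begin_write(mem, 1, b"\x0f", pool)) # 4: the deferred write can now proceed
```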
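A hedged model of the counter scheme in application 20130024597, assuming eviction of a TLB entry as the flush event: per-page counters sit in a block that is grouped together and separate from the TLB, and a counter's value is written into a page-table field when the event occurs.

```python
# Hypothetical model of per-page access counters flushed to the page table.
# The flush event (TLB eviction) and field names are assumptions for illustration.

class PageTableEntry:
    def __init__(self, frame: int):
        self.frame = frame
        self.access_count = 0    # field that accumulates counts flushed from the counter block


class AccessCounterBlock:
    """Counters physically grouped together, indexed like the TLB but kept apart from it."""
    def __init__(self):
        self.counters = {}       # virtual page number -> accesses since last flush

    def record_cache_access(self, vpn: int) -> None:
        self.counters[vpn] = self.counters.get(vpn, 0) + 1

    def flush(self, vpn: int, pte: PageTableEntry) -> None:
        """On an event (here, TLB eviction), move the count into the page table."""
        pte.access_count += self.counters.pop(vpn, 0)


if __name__ == "__main__":
    pte = PageTableEntry(frame=42)
    counters = AccessCounterBlock()
    for _ in range(5):
        counters.record_cache_access(vpn=7)
    counters.flush(vpn=7, pte=pte)           # event: TLB entry for page 7 evicted
    print(pte.access_count)                  # 5
```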
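A sketch of the row-index computation suggested by application 20130238856, with illustrative row and line sizes: sequentially addressed cache lines of main memory map to the same row of a row-based cache, so a streaming access pattern stays within one open row.

```python
# Illustrative mapping of sequential cache lines into one row of a row-based cache.
# Sizes are example values, not taken from the patent.

CACHE_LINE_BYTES = 64
LINES_PER_ROW = 32             # e.g. a 2 KiB row holds 32 cache lines
NUM_ROWS = 4096

def row_index(address: int) -> int:
    line = address // CACHE_LINE_BYTES          # which main-memory cache line
    return (line // LINES_PER_ROW) % NUM_ROWS   # sequential lines share one row

def column_index(address: int) -> int:
    line = address // CACHE_LINE_BYTES
    return line % LINES_PER_ROW                 # position of the line within its row

if __name__ == "__main__":
    base = 0x10000
    for i in range(4):
        addr = base + i * CACHE_LINE_BYTES
        print(hex(addr), "-> row", row_index(addr), "col", column_index(addr))
    # The four sequential lines print the same row index with columns 0..3.
```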
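An illustrative reading of the shutdown-score idea in application 20140136870, with invented metric weights: each powered-up bank is scored by how costly it would be to shut down, and the cheapest bank is selected. The power-up case in application 20140136873 is the symmetric decision over powered-down banks.

```python
# Hypothetical shutdown-score calculation; the metric and weights are invented
# for illustration, not taken from the patent application.

from dataclasses import dataclass

@dataclass
class BankState:
    bank_id: int
    recent_accesses: int     # utility: how heavily the bank is being used
    dirty_lines: int         # cost: data that must be written back before shutdown

def shutdown_score(bank: BankState) -> float:
    # Lower score = cheaper to shut down.
    return 1.0 * bank.recent_accesses + 0.5 * bank.dirty_lines

def pick_bank_to_power_down(powered_up: list) -> BankState:
    return min(powered_up, key=shutdown_score)

if __name__ == "__main__":
    banks = [BankState(0, recent_accesses=120, dirty_lines=40),
             BankState(1, recent_accesses=5,   dirty_lines=200),
             BankState(2, recent_accesses=10,  dirty_lines=8)]
    print(pick_bank_to_power_down(banks).bank_id)   # bank 2: little use, little to flush
```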
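A hedged model of the hierarchical hit predictor in application 20140143502; the region sizes and the rule that finer-grained entries override coarser ones are assumptions made for this sketch.

```python
# Hypothetical hierarchy of hit-prediction tables: coarser tables cover larger
# regions of main memory, and the finest-grained prediction available wins.

REGION_SIZES = [1 << 24, 1 << 18, 1 << 12]   # 16 MiB, 256 KiB, 4 KiB regions, coarse -> fine

class HitPredictor:
    def __init__(self):
        # one table per level: region base address -> predicted-to-hit flag
        self.tables = [dict() for _ in REGION_SIZES]

    def train(self, level: int, address: int, will_hit: bool) -> None:
        region = address & ~(REGION_SIZES[level] - 1)
        self.tables[level][region] = will_hit

    def predict(self, address: int) -> bool:
        prediction = False                       # default: predict miss, go to main memory
        for level, size in enumerate(REGION_SIZES):
            region = address & ~(size - 1)
            if region in self.tables[level]:
                prediction = self.tables[level][region]  # finer levels override coarser ones
        return prediction

if __name__ == "__main__":
    p = HitPredictor()
    p.train(level=0, address=0x01234567, will_hit=True)    # whole 16 MiB region looks cached
    p.train(level=2, address=0x01234000, will_hit=False)   # but this 4 KiB page tends to miss
    print(p.predict(0x01234010))   # False: finest-grained entry wins
    print(p.predict(0x01239000))   # True: only the coarse entry applies
```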
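A minimal sketch of the bounding mechanism in application 20140181412, using dirty blocks as the example data type and an invented limit: allocation of a block of the limited type is refused once the tracked count for that type reaches the limit.

```python
# Hypothetical controller-side check that bounds how many blocks of a given type
# may reside in the cache. The type ("dirty"), limit, and eviction handling are
# simplifications for illustration.

class BoundedCache:
    def __init__(self, capacity: int, dirty_limit: int):
        self.capacity = capacity
        self.dirty_limit = dirty_limit
        self.blocks = {}          # address -> "dirty" or "clean"
        self.dirty_count = 0      # amount of the bounded type currently resident

    def allocate(self, address: int, block_type: str) -> bool:
        if len(self.blocks) >= self.capacity:
            return False                                  # simple model: no eviction policy here
        if block_type == "dirty" and self.dirty_count >= self.dirty_limit:
            return False                                  # limit for this data type reached
        self.blocks[address] = block_type
        if block_type == "dirty":
            self.dirty_count += 1
        return True

if __name__ == "__main__":
    cache = BoundedCache(capacity=8, dirty_limit=2)
    print(cache.allocate(0x100, "dirty"))   # True
    print(cache.allocate(0x140, "dirty"))   # True
    print(cache.allocate(0x180, "dirty"))   # False: dirty-block bound reached
    print(cache.allocate(0x180, "clean"))   # True: clean blocks are not bounded
```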