Patent application number | Description | Published |
--- | --- | --- |
20080263279 | DESIGN STRUCTURE FOR EXTENDING LOCAL CACHES IN A MULTIPROCESSOR SYSTEM - A design structure embodied in a machine readable storage medium for designing, manufacturing, and/or testing a design for caching data in a multiprocessor system is provided. The design structure includes a multiprocessor system, which includes a first processor including a first cache associated therewith, a second processor including a second cache associated therewith, and a main memory to store data required by the first processor and the second processor, the main memory being controlled by a memory controller that is in communication with each of the first processor and the second processor through a bus, wherein the second cache associated with the second processor is operable to cache data from the main memory corresponding to a memory access request of the first processor. | 10-23-2008 |
20100042786 | SNOOP-BASED PREFETCHING - A processing system is disclosed. The processing system includes a memory and a first core configured to process applications. The first core includes a first cache. The processing system includes a mechanism configured to capture a sequence of addresses of the application that miss the first cache in the first core and to place the sequence of addresses in a storage array; and a second core configured to process at least one software algorithm. The at least one software algorithm utilizes the sequence of addresses from the storage array to generate a sequence of prefetch addresses. The second core issues prefetch requests for the sequence of the prefetch addresses to the memory to obtain prefetched data and the prefetched data is provided to the first core if requested. | 02-18-2010 |
20100199045 | STORE-TO-LOAD FORWARDING MECHANISM FOR PROCESSOR RUNAHEAD MODE OPERATION - A system and method to optimize runahead operation for a processor without use of a separate explicit runahead cache structure. Rather than simply dropping store instructions in a processor runahead mode, store instructions write their results in an existing processor store queue, although store instructions are not allowed to update processor caches and system memory. Use of the store queue during runahead mode to hold store instruction results allows more recent runahead load instructions to search retired store queue entries in the store queue for matching addresses to utilize data from the retired, but still searchable, store instructions. Retired store instructions could be either runahead store instructions retired, or retired store instructions that executed before entering runahead mode. | 08-05-2010 |
20100274973 | DATA REORGANIZATION IN NON-UNIFORM CACHE ACCESS CACHES - Embodiments that dynamically reorganize data of cache lines in non-uniform cache access (NUCA) caches are contemplated. Various embodiments comprise a computing device, having one or more processors coupled with one or more NUCA cache elements. The NUCA cache elements may comprise one or more banks of cache memory, wherein ways of the cache are horizontally distributed across multiple banks. To improve access latency of the data by the processors, the computing devices may dynamically propagate cache lines into banks closer to the processors using the cache lines. To accomplish such dynamic reorganization, embodiments may maintain “direction” bits for cache lines. The direction bits may indicate to which processor the data should be moved. Further, embodiments may use the direction bits to make cache line movement decisions. | 10-28-2010 |
20110191603 | Power Management for Systems On a Chip - A system for controlling a multitasking microprocessor system includes an interconnect, a plurality of processing units connected to the interconnect forming a single-source, single-sink flow network, wherein the plurality of processing units pass data between one another from the single source to the single sink, and a monitor connected to the interconnect for monitoring a portion of a resource consumed by each of the plurality of processing units and for controlling the plurality of processing units according to a predetermined budget for the resource to control a data overflow condition, wherein the monitor controls performance and power modes of the plurality of processing units. | 08-04-2011 |
20120089979 | Performance Monitor Design for Counting Events Generated by Thread Groups - A number of hypervisor register fields are set to specify which processor cores are allowed to generate a number of performance events for a particular thread group. A plurality of threads for an application running in the computing environment are assigned to a plurality of thread groups by a plurality of thread group fields in a plurality of control registers. A number of additional hypervisor register fields specify which counter sets are allowed to count thread group events originating from one of a shared resource and a shared cache. | 04-12-2012 |
20120089984 | Performance Monitor Design for Instruction Profiling Using Shared Counters - Counter registers are shared among multiple threads executing on multiple processor cores. An event within the processor core is selected, and a multiplexer in front of each of a number of counters is configured to route the event to a counter. The counters are configured to have a number of interrupt thread identification fields and a number of processor identification fields to identify a thread that will receive a number of interrupts. A number of counters are assigned for the event to each of a plurality of threads running for a plurality of applications on a plurality of processor cores, wherein each of the counters includes a thread identifier in the interrupt thread identification field and a processor identifier in the processor identification field. | 04-12-2012 |
20120089985 | Sharing Sampled Instruction Address Registers for Efficient Instruction Sampling in Massively Multithreaded Processors - Sampled instruction address registers are shared among multiple threads executing on a plurality of processor cores. Each of a plurality of sampled instruction address registers are assigned to a particular thread running for an application on the plurality of processor cores. Each of the sampled instruction address registers are configured by storing in each of the sampled instruction address registers a thread identification of the particular thread in a thread identification field and a processor identification of a particular processor on which the particular thread is running in a processor identification field. | 04-12-2012 |
20120246406 | EFFECTIVE PREFETCHING WITH MULTIPLE PROCESSORS AND THREADS - A processing system includes a memory and a first core configured to process applications. The first core includes a first cache. The processing system includes a mechanism configured to capture a sequence of addresses of the application that miss the first cache in the first core and to place the sequence of addresses in a storage array; and a second core configured to process at least one software algorithm. The at least one software algorithm utilizes the sequence of addresses from the storage array to generate a sequence of prefetch addresses. The second core issues prefetch requests for the sequence of the prefetch addresses to the memory to obtain prefetched data and the prefetched data is provided to the first core if requested. | 09-27-2012 |
20120284542 | POWER MANAGEMENT FOR SYSTEMS ON A CHIP - A method for controlling a multitasking microprocessor system includes monitoring the multitasking microprocessor system connected to an interconnect, the monitoring comprising monitoring performance of a plurality of processing units forming a producer-consumer system on the interconnect, and issuing commands that provide operation and power distributions to the plurality of processing units, such that performance and power modes are assigned to the plurality of processing units based on the monitoring. | 11-08-2012 |
20150089263 | SYSTEM-WIDE POWER CONSERVATION USING MEMORY CACHE - A method, system, and computer program product for system-wide power conservation using memory cache are provided. A memory access request is received at a location in a memory architecture where processing the memory access request has to use a last level of cache before reaching a memory device holding the requested data. Using a memory controller, the memory access request is made to wait without being added to a queue of existing memory access requests accepted for processing using the last level of cache. All the existing memory access requests in the queue are processed using the last level of cache. The last level of cache is purged to the memory device. The memory access request is processed using an alternative path to the memory device that avoids the last level of cache. A cache device used as the last level of cache is then powered down. | 03-26-2015 |
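The snoop-based prefetching entries above (20100042786 and 20120246406) describe a second core that turns a captured sequence of first-cache miss addresses into a sequence of prefetch addresses. The abstracts do not specify the software algorithm that runs on the second core; the sketch below assumes a simple constant-stride detector as one illustrative choice, and the function names and the lookahead `depth` parameter are hypothetical, not from the patents.

```python
def detect_stride(miss_addresses):
    """Infer a constant stride from the captured miss-address sequence."""
    deltas = [b - a for a, b in zip(miss_addresses, miss_addresses[1:])]
    if deltas and all(d == deltas[0] for d in deltas):
        return deltas[0]
    return None  # no single constant stride found

def generate_prefetches(miss_addresses, depth=4):
    """Emit prefetch addresses running ahead of the observed miss stream."""
    stride = detect_stride(miss_addresses)
    if stride is None:
        return []
    last = miss_addresses[-1]
    return [last + stride * i for i in range(1, depth + 1)]

# Misses one 64-byte cache line apart, as a streaming access might produce
misses = [0x1000, 0x1040, 0x1080, 0x10C0]
print(generate_prefetches(misses))  # prints [4352, 4416, 4480, 4544]
```

In the patents' arrangement the miss sequence would arrive via a hardware storage array and the generated addresses would be issued to memory as prefetch requests; this sketch only shows the address-generation step.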
Patent application number | Description | Published |
--- | --- | --- |
20080292136 | Data Processing System And Method - Embodiments of the invention provide a method of authenticating a physical document, comprising obtaining an electronic representation of at least part of the physical document; extracting at least one error detection code from the electronic representation; and using the at least one error detection code to detect errors in image data within the electronic representation. Embodiments of the invention also provide a method of securing a physical document, comprising obtaining an electronic representation of at least part of the physical document; determining at least one error detection code for image data within the electronic representation; and producing a secure physical document comprising the electronic representation and a machine readable marking including the at least one error detection code. | 11-27-2008 |
20080294557 | Data Processing System And Method - A method of authenticating a transaction, comprising providing details of a card to a merchant; providing transaction identifying information to a data processing device; and sending the transaction identifying information to a third party using the data processing device. | 11-27-2008 |
20090059309 | Document And Method Of Producing A Document - A physical document comprising a human-readable part and a machine-readable part, wherein the machine-readable part comprises markup that describes information on at least one of the document and data within the human-readable part. | 03-05-2009 |
20090103803 | MACHINE READABLE DOCUMENTS AND READING METHODS - A method of independently encoding an image with two information channels comprises generating an image which encodes a primary information channel based on brightness levels. The image is modified to encode a secondary information channel. This image modification comprises applying one of two image output values to an image portion, wherein the brightness of a modified image portion is not changed so as to alter the primary information channel encoding. | 04-23-2009 |
20090259663 | Information Access Device And Network - An information access device is disclosed comprising an interface for connecting the information access device to a network; a further interface for providing the information access device with a string of information request indicators; an interpretation layer for extracting an information source from the string and for generating an instruction for triggering a different application of the device to retrieve the information from the information source; and a processor for executing the generated instruction. | 10-15-2009 |
20100153843 | Processing of Printed Documents - A document processing method comprises adding document markers to predetermined locations of an electronically stored document. These markers are printed with the document. The document is then scanned, and the scanned document markers are used to process the scanned image. This processing comprises at least pixel threshold setting and determination of the locations of the scanned image that are to be processed to derive the pixels of a digital version of the document. This enables local deformations in the paper document to be corrected and correct thresholds for the printing and scanning operations to be applied. The electronically stored document can be processed to derive a set of document properties that can be used when constructing the digital version. | 06-17-2010 |
20120197991 | METHOD AND SYSTEM FOR INTUITIVE INTERACTION OVER A NETWORK - Intuitive interaction may be performed over a network. The interaction may include collection of feedback from participants, wherein the feedback is active, passive, or a combination of both. The feedback from the participants may be aggregated and the aggregated feedback can be provided to at least one participant or a non-participant. | 08-02-2012 |
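The document authentication entry above (20080292136) works by deriving an error detection code from the image data of an electronic representation, embedding it in a machine-readable marking, and later recomputing the code to detect errors. The abstract does not name a particular code; the sketch below uses CRC-32 purely as a stand-in, and the function names are illustrative, not from the patent.

```python
import zlib

def make_marking_code(image_data: bytes) -> int:
    """Derive an error detection code (CRC-32 here) for the image data.
    In the patent's scheme this code would be embedded in a
    machine-readable marking on the secured physical document."""
    return zlib.crc32(image_data)

def authenticate(image_data: bytes, marking_code: int) -> bool:
    """Recompute the code from the scanned electronic representation
    and compare it against the code extracted from the marking."""
    return zlib.crc32(image_data) == marking_code

original = b"pixel data of the document image"
code = make_marking_code(original)
print(authenticate(original, code))         # unaltered image verifies: True
print(authenticate(original + b"x", code))  # altered image fails: False
```

A CRC only detects accidental or casual alteration; a real securing scheme would more likely pair the error detection code with a cryptographic signature, which is outside the scope of this sketch.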