Class / Patent application number | Description | Number of patent applications / Date published |
712025000 | Data driven or demand driven processor | 14 |
20080229060 | MICRO CONTROLLER AND METHOD OF UPDATING THE SAME - A micro controller includes a first storing circuit configured to store program data for performing a power on operation of a system, and a second storing circuit configured to temporarily store algorithm program data for operation of the system loaded from an external storing means while the system operates in response to control of the first storing circuit. | 09-18-2008 |
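The two-level storage scheme in the abstract above (a fixed power-on program plus run-time-loadable algorithm code) can be sketched as follows. The class name, method names, and byte-string representation are illustrative assumptions, not taken from the patent:

```python
class MicroController:
    """Sketch of a two-level storage scheme: a boot ROM (first storing
    circuit) brings the system up, then algorithm code is loaded into RAM
    (second storing circuit) from external storage while the system runs."""

    def __init__(self, boot_rom: bytes):
        self.boot_rom = boot_rom  # fixed program data for the power-on operation
        self.ram = b""            # temporary store for loadable algorithm code

    def power_on(self) -> str:
        # The first storing circuit drives the power-on sequence.
        return f"booting from ROM ({len(self.boot_rom)} bytes)"

    def load_algorithm(self, external_storage: bytes) -> int:
        # Loaded under control of the boot program; replaceable at run time.
        self.ram = external_storage
        return len(self.ram)
```

The point of the split is that the algorithm image can be updated in external storage without reflashing the boot code.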
20090210654 | USING HISTORIC LOAD PROFILES TO DYNAMICALLY ADJUST OPERATING FREQUENCY AND AVAILABLE POWER TO A HANDHELD MULTIMEDIA DEVICE PROCESSOR CORE - A technique is provided for use in a handheld multimedia device that uses the historical load-profile statistics of a particular multimedia stream to dynamically scale the computational power of a computing engine according to the complexity of the multimedia content, thereby reducing power consumption for computationally less intensive content and, consequently, reducing power consumption by a significant amount over a duration of time. | 08-20-2009 |
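A minimal sketch of the frequency-selection idea in the abstract above: pick the lowest available core frequency that covers the stream's historical peak demand. The function name, the units, and the headroom factor are all assumptions for illustration:

```python
def choose_core_frequency(load_history, available_freqs, headroom=1.2):
    """Pick the lowest frequency that covers the historical peak load
    (both expressed in the same MHz-equivalent units) with some headroom.

    load_history: per-interval demand samples for the current stream.
    available_freqs: the discrete operating points the core supports.
    """
    peak_demand = max(load_history)
    for freq in sorted(available_freqs):
        if freq >= peak_demand * headroom:
            return freq  # lowest operating point that still keeps up
    return max(available_freqs)  # content too heavy: run at full speed
```

For a lightweight stream the core can stay at its lowest operating point, which is where the power savings come from; a consistently heavy stream simply pins the highest frequency.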
20120102300 | ASYNCHRONOUS PIPELINE SYSTEM, STAGE, AND DATA TRANSFER MECHANISM - Disclosed are an asynchronous pipeline system, a stage, and a data transfer mechanism. The asynchronous pipeline system, having a plurality of stages based on a 4-phase protocol, includes: a first stage among the plurality of stages; and a second stage among the plurality of stages connected next to the first stage, wherein the first stage transmits, and the second stage receives, bundled data and control data through an always-bundled data channel, and on-demand data through an on-demand data channel according to the needs of the second stage. | 04-26-2012 |
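The 4-phase (return-to-zero) bundled-data handshake underlying the abstract above can be modelled in software. This is a behavioural sketch only; the class name and method structure are assumptions, and real asynchronous hardware performs these phases with request/acknowledge wires, not method calls:

```python
class BundledChannel:
    """Behavioural model of a 4-phase bundled-data handshake between
    two pipeline stages. Data is 'bundled' with the request signal."""

    def __init__(self):
        self.req = False   # request wire, driven by the sender
        self.ack = False   # acknowledge wire, driven by the receiver
        self.data = None   # bundled data lines

    def send(self, data):
        # Phase 1: sender places the bundled data, then raises request.
        self.data = data
        self.req = True

    def receive(self):
        # Phase 2: receiver latches the data and raises acknowledge.
        assert self.req, "receiver fired without a pending request"
        latched = self.data
        self.ack = True
        # Phase 3: sender observes ack and lowers request.
        self.req = False
        # Phase 4: receiver lowers acknowledge (return to zero).
        self.ack = False
        return latched
```

In the patent's scheme the second stage would own two such channels: an always-bundled channel it services on every token, and an on-demand channel it only completes the handshake on when it actually needs the extra data.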
20120216014 | APPLYING ADVANCED ENERGY MANAGER IN A DISTRIBUTED ENVIRONMENT - Techniques are described for abating the negative effects of wait conditions in a distributed system by temporarily decreasing the execution time of processing elements. Embodiments of the invention may generally identify wait conditions from an operator graph and detect the slowest processing element preceding the wait condition based on either historical information or real-time data. Once identified, the slowest processing element may be sped up to lessen the negative consequences of the wait condition. Alternatively, if the slowest processing element shares the same compute node with another processing element in the distributed system, one of the processing elements may be transferred to a different compute node to free additional computing resources on the compute node. | 08-23-2012 |
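The two remedies described in the abstract above, speeding up the slowest processing element or migrating a co-located neighbour off its compute node, reduce to a small planning step. The function and data shapes below are illustrative assumptions:

```python
def plan_speedup(exec_times, placement):
    """Identify the bottleneck processing element and, if it shares a
    compute node with another element, suggest one to migrate away.

    exec_times: {pe_name: average_seconds}, from history or real-time data.
    placement:  {pe_name: node_name}.
    Returns (bottleneck_pe, pe_to_migrate_or_None).
    """
    bottleneck = max(exec_times, key=exec_times.get)  # slowest element
    node = placement[bottleneck]
    neighbours = [pe for pe, n in placement.items()
                  if n == node and pe != bottleneck]
    # Migrating any co-resident element frees resources for the bottleneck;
    # a fuller implementation would pick the cheapest element to move.
    migrate = neighbours[0] if neighbours else None
    return bottleneck, migrate
```

If `migrate` is `None`, the bottleneck has the node to itself and the only option left is raising its execution priority or clock allocation in place.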
20120272041 | MULTIPROCESSING OF DATA SETS - Various arrangements for processing data sets using multiple processors are presented. A plurality of constraints may be received by a computer system. Each constraint may identify a data relationship that requires a subset of records of one or more data sets to be processed by a same processing device. A plurality of final constraints may be calculated. Each final constraint of the plurality of final constraints may be linked with a record. Each final constraint of the plurality of final constraints may be at least partially based on the plurality of constraints. Final constraints of the plurality of final constraints having a same value may be linked with records that are to be processed by the same processing device. At least partially based on the final constraint, the set of records may be distributed to a plurality of processing devices for processing. | 10-25-2012 |
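The constraint-merging step in the abstract above behaves like union-find: constraints that share records collapse into one final constraint, and records with the same final constraint land on the same device. The representation below (constraint keys as strings, device chosen by hashing the final constraint) is an assumption, not the patent's actual encoding:

```python
class DisjointSet:
    """Union-find over constraint keys, with path halving."""

    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)


def assign_processors(records, constraints, num_procs):
    """records: {record_id: constraint_key}; constraints: (key_a, key_b)
    pairs meaning records carrying those keys must share a device.
    Returns {record_id: device_index}."""
    ds = DisjointSet()
    for a, b in constraints:
        ds.union(a, b)
    # Final constraint = the root of each record's key; equal roots
    # guarantee co-location, per the same-value rule in the abstract.
    return {rid: hash(ds.find(key)) % num_procs
            for rid, key in records.items()}
```

Records whose keys were never linked by a constraint hash independently, so unrelated data still spreads across the available devices.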
712026000 | Detection/pairing based on destination, ID tag, or data | 4 |
20080307197 | System and Method for Persistent Hardware System Serial Numbers - A system for computer hardware serial number management includes a computer system chassis comprising a chassis serial number. The chassis serial number is embodied on the computer system chassis as a physical serial number. A first RFID tag is attached to the computer system chassis at a first location. The first RFID tag stores indicia of the physical serial number. A first electronic device couples to the computer system chassis, and comprises a first RFID reader configured to retrieve the stored indicia of the physical serial number from the first RFID tag and to determine the chassis serial number based on the retrieved indicia of the physical serial number. | 12-11-2008 |
20090070551 | CREATION OF LOGICAL APIC ID WITH CLUSTER ID AND INTRA-CLUSTER ID - In some embodiments, an apparatus includes logical interrupt identification number creation logic to receive physical processor identification numbers and create logical processor identification numbers through using the physical processor identification numbers. Each of the logical processor identification numbers corresponds to one of the physical processor identification numbers, and the logical processor identification numbers each include a processor cluster identification number and an intra-cluster identification number. The processor cluster identification numbers are each formed to include a group of bits from the corresponding physical processor identification number shifted in position, and the intra-cluster identification numbers are each formed in response to values of others of the bits of the corresponding physical processor identification number. Other embodiments are described. | 03-12-2009 |
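The bit construction in the abstract above (a shifted cluster ID plus an intra-cluster ID formed from the remaining bits) resembles x2APIC cluster mode, where the intra-cluster part is one-hot. The exact bit widths and field positions below are assumptions for illustration:

```python
def logical_id(phys_id: int) -> int:
    """Build a logical APIC ID from a physical one: the upper bits become
    a cluster ID shifted into position, and the low 4 bits select a
    one-hot intra-cluster ID (x2APIC-style; widths assumed)."""
    cluster = phys_id >> 4          # group of bits, shifted in position
    intra = 1 << (phys_id & 0xF)    # formed from the remaining low bits
    return (cluster << 16) | intra
```

The one-hot intra-cluster field lets an interrupt target any subset of a cluster's processors with a simple bitwise OR of their logical IDs.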
20090210655 | PROCESSOR, METHOD AND COMPUTER PROGRAM PRODUCT INCLUDING SPECIALIZED STORE QUEUE AND BUFFER DESIGN FOR SILENT STORE IMPLEMENTATION - A processor including an architecture for limiting store operations includes: a data input and a cache input as inputs to data merge logic; a merge buffer for providing an output to an old data buffer, holding a copy of a memory location, and two-way communication with a new data buffer; compare logic for receiving old data from the old data buffer and new data from the new data buffer and determining whether the old data matches the new data, and, if there is a match, determining the existence of a silent store; and store data control logic for limiting store operations while the silent store exists. A method and a computer program product are provided. | 08-20-2009 |
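The core comparison in the abstract above is simple: if the new store data matches what memory already holds, the store is "silent" and can be suppressed. A dictionary standing in for the cache line is an assumption of this sketch:

```python
def store(cache: dict, addr: int, new_data: bytes) -> bool:
    """Perform a store only when it would change memory.

    Returns True if the store was performed, False if it was suppressed
    as a silent store (old data already equals new data)."""
    old_data = cache.get(addr)
    if old_data == new_data:
        return False           # silent store: memory already holds this value
    cache[addr] = new_data     # value actually changes, so commit the store
    return True
```

Suppressing silent stores saves store-queue slots and write bandwidth, which is the "limiting store operations" the abstract refers to.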
20140331026 | MULTI-FRAME DATA PROCESSING APPARATUS AND METHOD USING FRAME DISASSEMBLY - A multi-frame data processing apparatus and method using frame disassembly is provided. The multi-frame data processing apparatus includes a data communication unit, a frame processing unit, and a data processing unit. The data communication unit receives a transmission signal from a Line Adaptation Unit (LAU). The frame processing unit disassembles each frame of the transmission signal and acquires information data that is included in the transmission signal. The data processing unit transfers the information data to an Algorithm Processing Unit (APU), and acquires processed information data that is obtained by processing the information data via the APU based on a corresponding algorithm. | 11-06-2014 |
712027000 | Particular data driven memory structure | 5 |
20080301404 | METHOD FOR CONTROLLING AN ELECTRONIC CIRCUIT AND CONTROLLING CIRCUIT - A method for controlling an electronic circuit including selecting at least one pre-stored generating rule from a plurality of pre-stored generating rules according to which a message which is to be transmitted to the electronic circuit for carrying out a controlling function to control the electronic circuit is to be generated, and generating the message according to the at least one selected generating rule. | 12-04-2008 |
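The select-then-generate flow in the abstract above (pick one pre-stored generating rule, then build the control message according to it) can be sketched with a rule table. The rule names and byte layouts here are hypothetical, invented purely for illustration:

```python
# Hypothetical pre-stored generating rules: each maps a controlling
# function to a message layout (opcode byte, optional length prefix).
RULES = {
    "reset":  lambda payload: b"\x01" + payload,
    "update": lambda payload: b"\x02" + len(payload).to_bytes(1, "big") + payload,
}

def generate_message(rule_name: str, payload: bytes) -> bytes:
    """Select one pre-stored generating rule, then generate the message
    to be transmitted to the electronic circuit according to it."""
    rule = RULES[rule_name]   # selection step
    return rule(payload)      # generation step
```

Keeping the rules pre-stored means the sender and the circuit only need to agree on the rule set once; individual messages then carry no layout metadata.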
20090031105 | Processor for executing group instructions requiring wide operands - A programmable processor and method for improving the performance of processors by expanding at least two source operands, or a source and a result operand, to a width greater than the width of either the general purpose register or the data path width. The present invention provides operands which are substantially larger than the data path width of the processor by using the contents of a general purpose register to specify a memory address at which a plurality of data path widths of data can be read or written, as well as the size and shape of the operand. In addition, several instructions and apparatus for implementing these instructions are described which obtain performance advantages if the operands are not limited to the width and accessible number of general purpose registers. | 01-29-2009 |
20110119467 | MASSIVELY PARALLEL, SMART MEMORY BASED ACCELERATOR - Systems and methods for massively parallel processing on an accelerator that includes a plurality of processing cores. Each processing core includes multiple processing chains configured to perform parallel computations, each of which includes a plurality of interconnected processing elements. The cores further include multiple smart memory blocks configured to store and process data, each memory block accepting the output of one of the plurality of processing chains. The cores communicate with at least one off-chip memory bank. | 05-19-2011 |
20120185671 | COMPUTATIONAL RESOURCE PIPELINING IN GENERAL PURPOSE GRAPHICS PROCESSING UNIT - This disclosure describes techniques for extending the architecture of a general purpose graphics processing unit (GPGPU) with parallel processing units to allow efficient processing of pipeline-based applications. The techniques include configuring local memory buffers connected to parallel processing units operating as stages of a processing pipeline to hold data for transfer between the parallel processing units. The local memory buffers allow on-chip, low-power, direct data transfer between the parallel processing units. The local memory buffers may include hardware-based data flow control mechanisms to enable transfer of data between the parallel processing units. In this way, data may be passed directly from one parallel processing unit to the next parallel processing unit in the processing pipeline via the local memory buffers, in effect transforming the parallel processing units into a series of pipeline stages. | 07-19-2012 |
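The buffer-chained pipeline in the abstract above maps naturally onto bounded queues between worker threads: each queue plays the role of a local memory buffer with built-in flow control (a full buffer blocks the producer). This is a software analogy of the hardware scheme, with all names and the poison-pill shutdown convention assumed:

```python
from queue import Queue
from threading import Thread

def make_stage(fn, inbox, outbox):
    """One 'parallel processing unit' acting as a pipeline stage:
    pull from its local input buffer, transform, push downstream."""
    def run():
        while True:
            item = inbox.get()
            if item is None:        # poison pill: propagate shutdown
                outbox.put(None)
                break
            outbox.put(fn(item))    # blocks if the next buffer is full
    return Thread(target=run)

# Two stages chained through small bounded buffers (capacity 4).
buf_in, buf_mid, buf_out = Queue(4), Queue(4), Queue(4)
stages = [make_stage(lambda x: x * 2, buf_in, buf_mid),
          make_stage(lambda x: x + 1, buf_mid, buf_out)]
for s in stages:
    s.start()

for v in [1, 2, 3]:
    buf_in.put(v)
buf_in.put(None)

results = []
while (item := buf_out.get()) is not None:
    results.append(item)
print(results)
```

The bounded capacity is what makes this flow-controlled like the hardware buffers: a slow downstream stage back-pressures its producer instead of requiring a round trip through global memory.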
20130232320 | PERSISTENT PREFETCH DATA STREAM SETTINGS - A prefetch unit includes a transience register and a length register. The transience register hosts an indication of transient for data stream prefetching. The length register hosts an indication of a stream length for data stream prefetching. The prefetch unit monitors the transience register and the length register. The prefetch unit generates prefetch requests of data streams with a transient property up to the stream length limit when the transience register indicates transient and the length register indicates the stream length limit for data stream prefetching. A cache controller coupled with the prefetch unit implements a cache replacement policy and cache coherence protocols. The cache controller writes data supplied from memory responsive to the prefetch requests into cache with an indication of transient. The cache controller victimizes cache lines with an indication of transient independent of the cache replacement policy. | 09-05-2013 |
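The two registers described in the abstract above, a transience flag and a stream-length limit, bound how a data stream is prefetched and how its lines are tagged for eager eviction. The class below is a behavioural sketch; the default depth, line size, and request representation are assumptions:

```python
class PrefetchUnit:
    """Sketch: a transience register plus a stream-length register bound
    data-stream prefetching; lines fetched for a transient stream carry a
    tag so the cache controller can victimize them eagerly, independent
    of its normal replacement policy."""

    def __init__(self, transient: bool, stream_length: int, line_size: int = 64):
        self.transient = transient        # transience register
        self.stream_length = stream_length  # length register
        self.line_size = line_size

    def requests(self, start_addr: int):
        """Generate (address, transient_tag) prefetch requests, capped at
        the stream-length limit when the stream is marked transient."""
        depth = self.stream_length if self.transient else 8  # assumed default
        return [(start_addr + i * self.line_size, self.transient)
                for i in range(depth)]
```

Marking a stream transient tells the cache the data will be used once and discarded, so evicting it early costs nothing and spares resident working-set lines.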