PROCESSING ARCHITECTURE

Subclass of:

712 - Electrical computers and digital processing systems: processing architectures and instruction processing (e.g., processors)

Patent class list (only non-empty classes are listed)

Deeper subclasses:

Class      Description                                                                             Applications
712028000  Distributed processing system                                                           332
712010000  Array processor                                                                         204
712032000  Microprocessor or multichip or multimodule processor having sequential program control  176
712002000  Vector processor                                                                        108
712024000  Long instruction word                                                                    16
712023000  Superscalar                                                                              14
712025000  Data driven or demand driven processor                                                   12
Entries (document number, title, abstract, and publication date):
20130212352 - DYNAMICALLY CONTROLLED PIPELINE PROCESSING (published 08-15-2013)
Systems, apparatuses, methods, and software for processing data in pipeline architectures are provided herein. In one example, a pipeline architecture is presented. The pipeline architecture includes a plurality of processing stages, linked in series, that iteratively process data as the data propagates through the plurality of processing stages. The pipeline architecture includes at least one other processing stage linked in series with and preceded by the plurality of processing stages and configured to iteratively process the data a number of times based at least on an iteration count comprising how many times the data was iteratively processed as the data propagated through the plurality of processing stages.
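The control flow described in this abstract can be sketched in a few lines of Python. This is a hypothetical illustration only: `run_pipeline`, the stage callables, and the use of the upstream iteration count to drive the final stage are assumptions for the sketch, not the application's actual design.

```python
def run_pipeline(stages, final_stage, data):
    """Propagate data through serial stages, tracking an iteration count,
    then have a trailing stage re-process the data based on that count."""
    iteration_count = 0
    for stage in stages:
        data = stage(data)
        iteration_count += 1
    # The trailing stage repeats its work once per upstream iteration.
    for _ in range(iteration_count):
        data = final_stage(data)
    return data
```

For example, with stages `x + 1` and `x * 2` and a final stage `x + 10`, an input of 1 yields 24: two upstream stages mean the final stage runs twice.
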
20130046954 - MULTI-THREADED DFA ARCHITECTURE (published 02-21-2013)
Disclosed is an architecture, system and method for performing multi-thread DFA descents on a single input stream. An executer performs DFA transitions from a plurality of threads, each starting at a different point in an input stream. A plurality of executers may operate in parallel to each other, and a plurality of thread contexts operate concurrently within each executer to maintain the context of each thread that is state transitioning. A scheduler in each executer arbitrates instructions for the thread into at least one pipeline where the instructions are executed. Tokens may be output from each of the plurality of executers to a token processor which sorts and filters the tokens into dispatch order.
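As a rough illustration of multiple DFA descents over a single input stream, here is a minimal single-threaded sketch; the transition-table shape, the function name, and the sorted-token stand-in for "dispatch order" are assumptions, not the application's hardware design.

```python
def dfa_descents(transitions, accepting, start_state, data, offsets):
    """Run one DFA descent per starting offset over a shared input,
    emitting (start, end) tokens whenever an accepting state is reached."""
    tokens = []
    for start in offsets:
        state = start_state
        for pos in range(start, len(data)):
            state = transitions.get((state, data[pos]))
            if state is None:          # no transition: this descent dies
                break
            if state in accepting:
                tokens.append((start, pos + 1))
    return sorted(tokens)              # stand-in for dispatch-order sorting
```

For instance, a DFA matching "ab" (`{(0, "a"): 1, (1, "b"): 2}` with accepting state 2) descended from every offset of `"abab"` emits tokens `(0, 2)` and `(2, 4)`.
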
20120117357 - ENERGY TILE PROCESSOR (published 05-10-2012)
An energy tile processor in which an internal structure of a single processor is divided into a part for supplying instructions and another part for executing the instructions, in order for operating voltages and operating frequencies to be supplied independently. The processor includes an instruction supply unit storing instructions and issuing instructions to be executed, a first execution unit executing an integer operation and a memory operation according to an operation type of the instruction issued by the instruction supply unit, and a second execution unit executing a floating point operation according to an operation type of the instruction issued by the instruction supply unit. The instruction supply unit, the first execution unit, and the second execution unit are driven at operating voltages and operating frequencies which are independently controlled.
20110022820 - SYSTEMS, DEVICES, AND METHODS FOR ANALOG PROCESSING (published 01-27-2011)
A system may include first and second qubits that cross one another and a first coupler having a perimeter that encompasses at least a part of the portions of the first and second qubits, the first coupler being operable to ferromagnetically or anti-ferromagnetically couple the first and the second qubits together. A multi-layered computer chip may include a first plurality N of qubits laid out in a first metal layer, a second plurality M of qubits laid out at least partially in a second metal layer that cross each of the qubits of the first plurality of qubits, and a first plurality N times M of coupling devices that at least partially encompasses an area where a respective pair of the qubits from the first and the second plurality of qubits cross each other.
20080270745 - Hardware acceleration of a write-buffering software transactional memory (published 10-30-2008)
A method and apparatus for accelerating a software transactional memory (STM) system is described herein. Annotation fields are associated with lines of a transactional memory. An annotation field associated with a line of the transactional memory is initialized to a first value upon starting a transaction. In response to encountering a read operation in the transaction, the annotation field is checked. If the annotation field includes a first value, the read is serviced from the line of the transactional memory without having to search an additional write space. A second and third value in the annotation field potentially indicate whether a read operation missed the transactional memory or a tentative value is stored in a write space. Additionally, an additional bit in the annotation field may be utilized to indicate whether previous read operations have been logged, allowing subsequent redundant read logging to be reduced.
20100088488 - QUANTUM GATE METHOD AND APPARATUS (published 04-08-2010)
A method includes causing a common-resonator mode resonating with a transition between |2>
20090164751 - Method, system and apparatus for main memory access subsystem usage to different partitions in a socket with sub-socket partitioning (published 06-25-2009)
Embodiments enable sub-socket partitioning that facilitates access among a plurality of partitions to a shared resource. A round robin arbitration policy allows each partition within a socket, which may utilize a different operating system, access to the shared resource based at least in part on whether an assigned bandwidth parameter for each partition has been consumed. Embodiments may further include support for virtual channels.
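A round-robin policy gated by per-partition bandwidth budgets can be sketched as follows. This is a hypothetical illustration: the `RoundRobinArbiter` class and its credit-counter scheme are assumptions for the sketch, not the application's mechanism.

```python
class RoundRobinArbiter:
    """Grant access to partitions in round-robin order, skipping any
    partition that has consumed its assigned bandwidth budget."""

    def __init__(self, budgets):
        self.budgets = dict(budgets)   # partition -> remaining credits
        self.order = list(budgets)
        self.next_idx = 0

    def grant(self):
        n = len(self.order)
        for step in range(n):
            idx = (self.next_idx + step) % n
            part = self.order[idx]
            if self.budgets[part] > 0:
                self.budgets[part] -= 1
                self.next_idx = (idx + 1) % n
                return part
        return None  # every partition's budget is exhausted
```

With budgets `{"A": 2, "B": 1}`, successive grants go to A, B, A, and then `None` once all credits are spent.
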
20090204787 - Butterfly Physical Chip Floorplan to Allow an ILP Core Polymorphism Pairing (published 08-13-2009)
Improved techniques for executing instructions in a pipelined manner that may reduce stalls that occur when executing dependent instructions are provided. Stalls may be reduced by utilizing a cascaded arrangement of pipelines with execution units that are delayed with respect to each other. This cascaded delayed arrangement allows dependent instructions to be issued within a common issue group by scheduling them for execution in different pipelines to execute at different times. Separate processor cores may be morphed to appear differently for different applications. For example, two processor cores each capable of executing N-wide issue groups of instructions may be morphed to appear as a single processor core capable of executing 2N-wide issue groups.
20090210651 - SYSTEM AND METHOD FOR OBTAINING DATA IN A PIPELINED PROCESSOR (published 08-20-2009)
A pipelined processor including one or more units having storage locations not directly accessible by software instructions. The processor includes a load-store unit (LSU) in direct communication with the one or more units for accessing the storage locations in response to special instructions. The processor also includes a requesting unit for receiving a special instruction from a requester and a mechanism for performing a method. The method includes broadcasting storage location information from the special instruction to one or more of the units to determine a corresponding unit having the storage location specified by the special instruction. Execution of the special instruction is initiated at the corresponding unit. If the unit executing the special instruction is not the LSU, the data is sent to the LSU. The data is received from the LSU as a result of the execution of the special instruction. The data is provided to the requester.
20090249025 - Serial Data Processing Circuit (published 10-01-2009)
A serial data processing circuit that realizes the same performance as pipeline processing with low power consumption. First to fourth latch units receive, in parallel, data sets supplied to a logic circuit. These latch units sequentially latch the data sets sequentially supplied to the logic circuit and output N data sets in parallel. A selector sequentially selects the data sets supplied from these latch units and supplies the selected data sets to the logic circuit. For example, when the first latch unit latches data (a), the selector selects the data (a) and supplies it to the logic circuit. When the second latch unit latches data (b), the selector selects the data (b) and supplies it to the logic circuit. The logic circuit processes N serial data sets during each cycle.
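The latch-plus-selector arrangement amounts to time-multiplexing one shared logic circuit over N buffered data sets per cycle, which can be modeled behaviorally in a few lines. This is a hypothetical sketch; `serial_process` and its chunked loop are modeling assumptions, not the circuit itself.

```python
def serial_process(data_sets, logic, n):
    """Model N latch units plus a selector: each cycle, n buffered data
    sets are fed one at a time through a single shared logic circuit."""
    results = []
    for cycle_start in range(0, len(data_sets), n):
        latched = data_sets[cycle_start:cycle_start + n]  # parallel latch
        for item in latched:                              # selector sweep
            results.append(logic(item))
    return results
```

The output matches what N parallel copies of the logic would produce, but only one instance of the logic circuit is exercised, which is the power-saving point of the design.
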
20080307193 - Semiconductor integrated circuit (published 12-11-2008)
A semiconductor integrated circuit (chip) includes a primary TAP controller and a secondary TAP controller. The primary TAP controller interprets a bit string of n bits included in group 1 having an m-bit length (m ≥ 2) and less than the total number of m bits as an instruction that carries out a processing for a control object, and interprets each bit string having an m-bit length as an instruction that carries out no processing for the control object. The m-bit length is obtained by adding a predetermined single bit string to each bit string included in group 1, which consists of at least two or more bit strings having an n-bit length, respectively. The secondary TAP controller extracts a single bit string denoting an instruction that has an n-bit length and carries out no processing for the control object from each bit string interpreted by the primary TAP controller as an instruction that carries out a processing for the control object, then interprets the single bit string.
20090287905 - PROCESSOR PIPELINE ARCHITECTURE LOGIC STATE RETENTION SYSTEMS AND METHODS (published 11-19-2009)
A solution for retaining a logic state of a processor pipeline architecture is disclosed. A comparator is positioned between two stages of the processor pipeline architecture. A storage capacitor is coupled between a storage node of the comparator and ground to store an output of the earlier of the two stages. A reference logic is provided, which has the same value as the output of the earlier stage. A logic storing and dividing device is coupled between the reference logic and a reference node of the comparator to generate a logic at the reference node, which is a fraction of the reference logic, and to retain a logic state of the information stored on the storage capacitor. Further mechanisms are provided to determine the validity of data stored in the logic storing and dividing device.
20090327652 - METHOD FOR CONSTRUCTING A VARIABLE BITWIDTH VIDEO PROCESSOR (published 12-31-2009)
A method for designing a video processor with a variable and programmable bitwidth parameter. The method comprises selecting logical operations having propagation delay that scales linearly with the bitwidth; determining a desired tradeoff curve; grouping instances of a logic operation having the same properties; and, for a single instance of each logic operation, matching an actual curve of the logic operation to the desired tradeoff curve, wherein the actual curve is determined by the propagation delay and bitwidth of the logic operation.
20090070548 - Programming a Digital Processor with a Single Connection (published 03-12-2009)
A digital processor is coupled to a processor programmer through a single programming connection (e.g., terminal, pin, etc.) coupled to the single conductor programming bus. The processor programmer comprises an instruction encoder/decoder, a Manchester encoder, a Manchester decoder, a bus receiver and a bus transmitter. The digital processor comprises an instruction encoder/decoder, a Manchester encoder, a Manchester decoder, a bus receiver, a bus transmitter, a central processing unit (CPU), and a program memory. The instruction encoder/decoder is coupled to the CPU and the program memory. The bus receivers and bus transmitters are coupled to the single conductor programming bus, which is coupled to a connection, e.g., terminal, pin, ball, etc., on an integrated circuit package containing the digital processor. The instruction encoder/decoder is coupled to a programming console, e.g., a personal computer, workstation, etc.
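Manchester coding, which this application relies on for its single-conductor bus, represents each bit as a transition rather than a level, making the stream self-clocking. A minimal software sketch of the encode/decode pair (function names are ours, not from the application):

```python
def manchester_encode(bits):
    """Encode bits as half-bit symbol pairs (IEEE 802.3 convention):
    0 -> high-to-low (1, 0), 1 -> low-to-high (0, 1)."""
    out = []
    for b in bits:
        out.extend((0, 1) if b else (1, 0))
    return out


def manchester_decode(symbols):
    """Recover bits from consecutive pairs of half-bit symbols."""
    bits = []
    for i in range(0, len(symbols), 2):
        pair = (symbols[i], symbols[i + 1])
        if pair == (0, 1):
            bits.append(1)
        elif pair == (1, 0):
            bits.append(0)
        else:
            raise ValueError("invalid Manchester symbol pair")
    return bits
```

Here 0 is a high-to-low pair and 1 is low-to-high, the IEEE 802.3 convention; the G.E. Thomas convention inverts this. Either way, every bit cell contains a mid-cell transition, which is what lets one wire carry both clock and data.
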
20080250224 - SYSTEM FOR CREATING A DYNAMIC OGSI SERVICE PROXY FRAMEWORK USING RUNTIME INTROSPECTION OF AN OGSI SERVICE (published 10-09-2008)
A system for creating a dynamic client-side service proxy framework using meta-data and introspection capabilities of Open Grid Services Architecture (OGSA) service data is disclosed. The system includes defining an Open Grid Service Invocation Factory configured to create a service proxy and introspecting an Open Grid Service Infrastructure (OGSI) service based on information exposed by the service. An OGSI Service Invocation Proxy is created, defining a set of dynamic interfaces based on service introspection and a meta-data inspection interface of the Service Invocation Proxy. The Service Invocation Proxy exposes both static port type interfaces and dynamic interfaces to give the client more flexibility.
20120005456 - SYSTEMS, METHODS AND APPARATUS FOR LOCAL PROGRAMMING OF QUANTUM PROCESSOR ELEMENTS (published 01-05-2012)
Systems, methods and apparatus for a scalable quantum processor architecture. A quantum processor is locally programmable by providing a memory register with a signal embodying device control parameter(s), converting the signal to an analog signal, and administering the analog signal to one or more programmable devices.
20080313421 - LOW POWER-CONSUMPTION DATA PROCESSOR (published 12-18-2008)
A low power-consumption data processor in which instruction decoding is performed on an instruction memory and an instruction register by an instruction decoding unit through an instruction bus. An instruction decoding circuit is disposed between the instruction register, a program counter, and an arithmetic logic unit, wherein a new value in an instruction is transmitted on a new instruction bus (NIB) when the decoded result is an operation/jump/call instruction; otherwise, the new value is transmitted on the original instruction bus (IB).
20120159119 - SYSTEM AND METHODOLOGY FOR DEVELOPMENT OF A SYSTEM ARCHITECTURE (published 06-21-2012)
Described embodiments relate to methods, systems and computer readable media for developing a system architecture. Resource constraints are defined, where each resource constraint corresponds to the maximum number of each kind of resource available to construct the system architecture. Constraint values are defined for each of at least three optimization parameters, including a final optimization parameter. A design space is defined as a plurality of vectors representing different combinations of the number of each kind of resource available to construct the system architecture. For each of the plurality of optimization parameters, a priority factor function is defined. A plurality of satisfying sets of vectors is determined for each of the optimization parameters except the final optimization parameter. A set of vectors is determined based on an intersection of the plurality of satisfying sets of vectors for the optimization parameters. A vector is selected from the set of vectors based on the ordered list of vectors and the final optimization parameter, where the selected vector is for use in developing the system architecture.
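The enumerate-intersect-select flow described here can be sketched compactly. This is a hypothetical illustration: `select_architecture`, the predicate-based satisfying sets, and the scoring of the final parameter are assumptions for the sketch, not the application's methodology.

```python
from itertools import product


def select_architecture(resource_maxima, satisfies, params, final_score):
    """Enumerate the design space of resource-count vectors, intersect the
    satisfying sets for every optimization parameter except the final one,
    then pick the surviving vector that best scores on the final parameter."""
    # Design space: all combinations of 0..max for each resource kind.
    space = product(*(range(m + 1) for m in resource_maxima))
    # Intersection of satisfying sets, expressed as an all() over predicates.
    surviving = [v for v in space if all(satisfies(p, v) for p in params)]
    return max(surviving, key=final_score) if surviving else None
```

For example, with two resources capped at 2 each, a shared constraint of at most 3 total units, and a final score of `2*a + b`, the selected vector is `(2, 1)`.
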
20120159118 - Lower IC Package Structure for Coupling with an Upper IC Package to Form a Package-On-Package (PoP) Assembly and PoP Assembly Including Such a Lower IC Package Structure (published 06-21-2012)
Disclosed are embodiments of a lower integrated circuit (IC) package structure for a package-on-package (PoP) assembly. The lower IC package structure includes an interposer having pads to mate with terminals of an upper IC package. An encapsulant material is disposed in the lower IC package, and this encapsulant may be disposed proximate one or more IC die. An upper IC package may be coupled with the lower IC package to form a PoP assembly. Such a PoP assembly may be disposed on a mainboard or other circuit board, and may form part of a computing system. Other embodiments are described and claimed.
20090327651 - Information Handling System Including A Multiple Compute Element Processor With Distributed Data On-Ramp Data Off-Ramp Topology (published 12-31-2009)
A symmetric multi-processing (SMP) processor includes a primary interconnect trunk for communication of information between multiple compute elements situated along the primary interconnect trunk. The processor also includes a secondary interconnect trunk that may be oriented perpendicular to the primary interconnect trunk. The processor distributes data on-ramps and data off-ramps across the data lanes of a data trunk of the primary interconnect trunk to enable communication with compute elements and other structures both on-chip and off-chip.
20120226890 - ACCELERATOR AND DATA PROCESSING METHOD (published 09-06-2012)
Process speed and power efficiency are improved, while downsizing is achieved, by implementing an integrated controller in hard-wired logic. A patch circuit enables function modifications without re-designing the integrated hard-wired logic controller itself through high-level synthesis, even when a modification becomes necessary because of a specification change or a design fault discovered after production. Costs are reduced accordingly, since re-designing is unnecessary. An accelerator is therefore provided which improves process speed and power efficiency while accomplishing downsizing, and which markedly reduces the cost of post-production function modification.
20080215850 - SYSTEMS, METHODS AND APPARATUS FOR LOCAL PROGRAMMING OF QUANTUM PROCESSOR ELEMENTS (published 09-04-2008)
Systems, methods and apparatus for a scalable quantum processor architecture. A quantum processor is locally programmable by providing a memory register with a signal embodying device control parameter(s), converting the signal to an analog signal, and administering the analog signal to one or more programmable devices.
20130124822 - CENTRAL PROCESSING UNIT (CPU) ARCHITECTURE AND HYBRID MEMORY STORAGE SYSTEM (published 05-16-2013)
In general, embodiments of the present invention relate to CPU and/or digital memory architecture. Specifically, embodiments of the present invention relate to various approaches for adapting current CPU designs. In one embodiment, the CPU architecture comprises a storage unit coupled to a CPU and/or a memory translator but no memory unit. In another embodiment, the architecture comprises a memory unit coupled to a CPU and/or a storage mapper, but no storage unit. In yet another embodiment, the CPU can be coupled to a memory unit, a storage unit and/or a memory-to-storage mapper. Regardless, the embodiments can utilize I/O hubs, convergence I/Os and/or memory controllers to connect/couple the components to one another. In addition, a tag can be provided for the memory space. The tag can include fields for a virtual header length, a virtual address header, and/or a physical address.
