Patent application number | Description | Published |
20150117636 | SYSTEM AND METHOD FOR PERFORMING A SECURE CRYPTOGRAPHIC OPERATION ON A MOBILE DEVICE - In a mobile communication device, multiple sets of sensor measurement data are obtained, each from a corresponding hardware sensor resident on the device. Insufficiently random data is filtered from each of the data sets to produce random data sets which are combined to produce entropy data which is stored in an entropy data cache. An entropy pool is monitored to determine a level of entropy data available and, based on the level determined, entropy data is provided from the entropy data cache to the entropy pool. Entropy data from the entropy pool is then applied to perform a cryptographic operation such as the generation of an encryption key for encrypting communications sent or received by the mobile communication device. | 04-30-2015 |
20150117637 | SYSTEM AND METHOD FOR PERFORMING A SECURE CRYPTOGRAPHIC OPERATION ON A MOBILE DEVICE SELECTING DATA FROM MULTIPLE SENSORS - In a mobile communication device, multiple sets of sensor measurement data are obtained, each from a corresponding hardware sensor resident on the device. Insufficiently random data is filtered from each of the data sets to produce random data sets which are combined to produce entropy data which is stored in an entropy data cache. An entropy pool is monitored to determine a level of entropy data available and, based on the level determined, entropy data is provided from the entropy data cache to the entropy pool. Entropy data from the entropy pool is then applied to perform a cryptographic operation such as the generation of an encryption key for encrypting communications sent or received by the mobile communication device. | 04-30-2015 |
20150117638 | SYSTEM AND METHOD FOR PERFORMING A SECURE CRYPTOGRAPHIC OPERATION ON A MOBILE DEVICE BASED ON A CONTEXTUAL VARIABLE - In a mobile communication device, multiple sets of sensor measurement data are obtained, each from a corresponding hardware sensor resident on the device. Insufficiently random data is filtered from each of the data sets to produce random data sets which are combined to produce entropy data which is stored in an entropy data cache. An entropy pool is monitored to determine a level of entropy data available and, based on the level determined, entropy data is provided from the entropy data cache to the entropy pool. Entropy data from the entropy pool is then applied to perform a cryptographic operation such as the generation of an encryption key for encrypting communications sent or received by the mobile communication device. | 04-30-2015 |
20150117642 | SYSTEM AND METHOD FOR PERFORMING A SECURE CRYPTOGRAPHIC OPERATION ON A MOBILE DEVICE USING AN ENTROPY POOL - In a mobile communication device, multiple sets of sensor measurement data are obtained, each from a corresponding hardware sensor resident on the device. Insufficiently random data is filtered from each of the data sets to produce random data sets which are combined to produce entropy data which is stored in an entropy data cache. An entropy pool is monitored to determine a level of entropy data available and, based on the level determined, entropy data is provided from the entropy data cache to the entropy pool. Entropy data from the entropy pool is then applied to perform a cryptographic operation such as the generation of an encryption key for encrypting communications sent or received by the mobile communication device. | 04-30-2015 |
20150117643 | SYSTEM AND METHOD FOR PERFORMING A SECURE CRYPTOGRAPHIC OPERATION ON A MOBILE DEVICE COMBINING DATA FROM MULTIPLE SENSORS - In a mobile communication device, multiple sets of sensor measurement data are obtained, each from a corresponding hardware sensor resident on the device. Insufficiently random data is filtered from each of the data sets to produce random data sets which are combined to produce entropy data which is stored in an entropy data cache. An entropy pool is monitored to determine a level of entropy data available and, based on the level determined, entropy data is provided from the entropy data cache to the entropy pool. Entropy data from the entropy pool is then applied to perform a cryptographic operation such as the generation of an encryption key for encrypting communications sent or received by the mobile communication device. | 04-30-2015 |
20150117644 | SYSTEM AND METHOD FOR PERFORMING A SECURE CRYPTOGRAPHIC OPERATION ON A MOBILE DEVICE BASED ON A USER ACTION - In a mobile communication device, multiple sets of sensor measurement data are obtained, each from a corresponding hardware sensor resident on the device. Insufficiently random data is filtered from each of the data sets to produce random data sets which are combined to produce entropy data which is stored in an entropy data cache. An entropy pool is monitored to determine a level of entropy data available and, based on the level determined, entropy data is provided from the entropy data cache to the entropy pool. Entropy data from the entropy pool is then applied to perform a cryptographic operation such as the generation of an encryption key for encrypting communications sent or received by the mobile communication device. | 04-30-2015 |
20150117646 | SYSTEM AND METHOD FOR PERFORMING A SECURE CRYPTOGRAPHIC OPERATION ON A MOBILE DEVICE INCLUDING AN ENTROPY FILTER - In a mobile communication device, multiple sets of sensor measurement data are obtained, each from a corresponding hardware sensor resident on the device. Insufficiently random data is filtered from each of the data sets to produce random data sets which are combined to produce entropy data which is stored in an entropy data cache. An entropy pool is monitored to determine a level of entropy data available and, based on the level determined, entropy data is provided from the entropy data cache to the entropy pool. Entropy data from the entropy pool is then applied to perform a cryptographic operation such as the generation of an encryption key for encrypting communications sent or received by the mobile communication device. | 04-30-2015 |
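The seven applications above share one pipeline: filter weak sensor readings, combine the survivors into entropy data, cache it, top up an entropy pool when it runs low, and derive keys from the pool. A minimal sketch of that flow, assuming SHA-256 as the combiner, a simple repeated-reading filter, and a 32-byte low watermark (the abstracts specify none of these):

```python
import hashlib

def filter_low_entropy(samples):
    """Drop repeated readings that carry little randomness -- a stand-in
    for the patents' entropy filter, whose actual test is unspecified."""
    return [s for prev, s in zip([None] + samples[:-1], samples) if s != prev]

def combine(sensor_sets):
    """Combine the filtered per-sensor data sets into entropy data by hashing."""
    h = hashlib.sha256()
    for data in sensor_sets:
        for sample in data:
            h.update(str(sample).encode())
    return h.digest()

class EntropyPool:
    LOW_WATERMARK = 32  # bytes; the threshold is an assumption

    def __init__(self):
        self.pool = b""
        self.cache = b""

    def add_to_cache(self, entropy_bytes):
        self.cache += entropy_bytes

    def top_up(self):
        # Monitor the pool level and refill from the cache when it runs low.
        if len(self.pool) < self.LOW_WATERMARK and self.cache:
            self.pool += self.cache
            self.cache = b""

    def derive_key(self, nbytes=16):
        # Apply pool entropy to a cryptographic operation (key derivation).
        key = hashlib.sha256(self.pool).digest()[:nbytes]
        self.pool = self.pool[nbytes:]  # consume the used entropy
        return key
```

A real implementation would use a vetted DRBG rather than raw hashing; the sketch only mirrors the data flow the abstracts describe.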
20090172344 | METHOD, SYSTEM, AND APPARATUS FOR PAGE SIZING EXTENSION - A method, system, and apparatus may initialize a fixed plurality of page table entries for a fixed plurality of pages in memory, each page having a first size, wherein a linear address for each page table entry corresponds to a physical address and the fixed plurality of pages are aligned. A bit in each of the page table entries for the aligned pages may be set to indicate whether or not the fixed plurality of pages is to be treated as one combined page having a second page size larger than the first page size. Other embodiments are described and claimed. | 07-02-2009 |
20120131366 | LOAD BALANCING FOR MULTI-THREADED APPLICATIONS VIA ASYMMETRIC POWER THROTTLING - A first execution time of a first thread executing on a first processing unit of a multiprocessor is determined. A second execution time of a second thread executing on a second processing unit of the multiprocessor is determined, the first and second threads executing in parallel. Power is set to the first and second processing units to effectuate the first and second threads to finish executing at approximately the same time in future executions of the first and second threads. Other embodiments are also described and claimed. | 05-24-2012 |
20130054940 | MECHANISM FOR INSTRUCTION SET BASED THREAD EXECUTION ON A PLURALITY OF INSTRUCTION SEQUENCERS - In an embodiment, a method is provided. The method includes managing user-level threads on a first instruction sequencer in response to executing user-level instructions on a second instruction sequencer that is under control of an application level program. A first user-level thread is run on the second instruction sequencer and contains one or more user level instructions. A first user level instruction has at least 1) a field that makes reference to one or more instruction sequencers or 2) implicitly references with a pointer to code that specifically addresses one or more instruction sequencers when the code is executed. | 02-28-2013 |
20130111194 | METHOD AND SYSTEM TO PROVIDE USER-LEVEL MULTITHREADING | 05-02-2013 |
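The page-sizing abstract above (20090172344) describes a bit in each page table entry that marks an aligned group of small pages as one combined larger page. A toy address-translation routine under assumed sizes (4 KiB small pages, 512-entry groups; all names are illustrative):

```python
PAGE_SIZE = 4096       # first (small) page size -- an assumed value
COMBINE_FACTOR = 512   # small pages per combined page -- an assumed value

class PTE:
    def __init__(self, phys_base, combined=False):
        self.phys_base = phys_base  # physical address this entry maps to
        self.combined = combined    # the extension bit from the abstract

def translate(page_table, linear):
    """Translate a linear address using the small or the combined page size,
    depending on the extension bit."""
    index = linear // PAGE_SIZE
    pte = page_table[index]
    if pte.combined:
        # Treat the aligned group of entries as one large page: use the
        # first entry of the group as the base and a large-page offset.
        group_start = (index // COMBINE_FACTOR) * COMBINE_FACTOR
        base = page_table[group_start].phys_base
        offset = linear % (PAGE_SIZE * COMBINE_FACTOR)
        return base + offset
    return pte.phys_base + linear % PAGE_SIZE
```

The point of the combined-page path is that one TLB entry can then cover the whole aligned group instead of 512 separate translations.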
20100052730 | METHOD AND APPARATUS FOR LATE TIMING TRANSITION DETECTION - Two latches store the state of a data signal at a transition of a clock signal. Comparison logic compares the outputs of the two latches and produces a signal to indicate whether the outputs are equal or unequal. Systems using the latches and comparison logic are described and claimed. | 03-04-2010 |
20110154000 | Adaptive optimized compare-exchange operation - A technique to perform a fast compare-exchange operation is disclosed. More specifically, a machine-readable medium, processor, and system are described that implement a fast compare-exchange operation as well as a cache line mark operation that enables the fast compare-exchange operation. | 06-23-2011 |
20130117531 | METHOD, SYSTEM, AND APPARATUS FOR PAGE SIZING EXTENSION - A method, system, and apparatus may initialize a fixed plurality of page table entries for a fixed plurality of pages in memory, each page having a first size, wherein a linear address for each page table entry corresponds to a physical address and the fixed plurality of pages are aligned. A bit in each of the page table entries for the aligned pages may be set to indicate whether or not the fixed plurality of pages is to be treated as one combined page having a second page size larger than the first page size. Other embodiments are described and claimed. | 05-09-2013 |
20130219399 | MECHANISM FOR INSTRUCTION SET BASED THREAD EXECUTION OF A PLURALITY OF INSTRUCTION SEQUENCERS - In an embodiment, a method is provided. The method includes managing user-level threads on a first instruction sequencer in response to executing user-level instructions on a second instruction sequencer that is under control of an application level program. A first user-level thread is run on the second instruction sequencer and contains one or more user level instructions. A first user level instruction has at least 1) a field that makes reference to one or more instruction sequencers or 2) implicitly references with a pointer to code that specifically addresses one or more instruction sequencers when the code is executed. | 08-22-2013 |
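Application 20110154000 above concerns a fast compare-exchange operation enabled by a cache line mark. The hardware details are not in the abstract, but the operation's contract can be sketched in Python, with a lock standing in for hardware cache-line atomicity:

```python
import threading

class AtomicCell:
    """A minimal compare-exchange cell; the lock is a stand-in for the
    hardware atomicity the patent's cache-line-mark mechanism provides."""

    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def compare_exchange(self, expected, new):
        """Atomically: if the cell holds `expected`, store `new`.
        Returns (success, observed_value), like the x86 CMPXCHG pattern."""
        with self._lock:
            observed = self._value
            if observed == expected:
                self._value = new
                return True, observed
            return False, observed

def atomic_increment(cell):
    # Classic retry loop built on compare-exchange; returns the old value.
    while True:
        old = cell._value  # unlocked read is fine: CAS validates it below
        ok, _ = cell.compare_exchange(old, old + 1)
        if ok:
            return old
```

Under contention the retry loop is exactly what the patent's fast path tries to make cheap.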
20100332801 | Adaptively Handling Remote Atomic Execution - In one embodiment, a method includes receiving an instruction for decoding in a processor core and dynamically handling the instruction with one of multiple behaviors based on whether contention is predicted. If no contention is predicted, the instruction is executed in the core, and if contention is predicted data associated with the instruction is marshaled and sent to a selected remote agent for execution. Other embodiments are described and claimed. | 12-30-2010 |
20130160020 | GENERATIONAL THREAD SCHEDULER - Disclosed herein is a generational thread scheduler. One embodiment may be used with processor multithreading logic to execute threads of executable instructions, and a shared resource to be allocated fairly among the threads of executable instructions contending for access to the shared resource. Generational thread scheduling logic may allocate the shared resource efficiently and fairly by granting a first requesting thread access to the shared resource, allocating a reservation for the shared resource to each other requesting thread of the executing threads, and then blocking the first thread from re-requesting the shared resource until every other thread that has been allocated a reservation has been granted access to the shared resource. Generation tracking state may be cleared when each requesting thread of the generation that was allocated a reservation has had its request satisfied. | 06-20-2013 |
20140095831 | APPARATUS AND METHOD FOR EFFICIENT GATHER AND SCATTER OPERATIONS - An apparatus and method are described for performing efficient gather operations in a pipelined processor. For example, a processor according to one embodiment of the invention comprises: gather setup logic to execute one or more gather setup operations in anticipation of one or more gather operations, the gather setup operations to determine one or more addresses of vector data elements to be gathered by the gather operations; and gather logic to execute the one or more gather operations to gather the vector data elements using the one or more addresses determined by the gather setup operations. | 04-03-2014 |
20140149651 | Providing Extended Cache Replacement State Information - In an embodiment, a processor includes a decode logic to receive and decode a first memory access instruction to store data in a cache memory with a replacement state indicator of a first level, and to send the decoded first memory access instruction to a control logic. In turn, the control logic is to store the data in a first way of a first set of the cache memory and to store the replacement state indicator of the first level in a metadata field of the first way responsive to the decoded first memory access instruction. Other embodiments are described and claimed. | 05-29-2014 |
20140297994 | PROCESSORS, METHODS, AND SYSTEMS TO IMPLEMENT PARTIAL REGISTER ACCESSES WITH MASKED FULL REGISTER ACCESSES - A method includes receiving a packed data instruction indicating a first narrower source packed data operand and a narrower destination operand. The instruction is mapped to a masked packed data operation indicating a first wider source packed data operand that is wider than and includes the first narrower source operand, and indicating a wider destination operand that is wider than and includes the narrower destination operand. A packed data operation mask is generated that includes a mask element for each corresponding result data element of a packed data result to be stored by the masked packed data operation. All mask elements that correspond to result data elements to be stored by the masked operation that would not be stored by the packed data instruction are masked out. The masked operation is performed using the packed data operation mask. The packed data result is stored in the wider destination operand. | 10-02-2014 |
20150052333 | Systems, Apparatuses, and Methods for Stride Pattern Gathering of Data Elements and Stride Pattern Scattering of Data Elements - Embodiments of systems, apparatuses, and methods for performing gather and scatter stride instructions in a computer processor are described. In some embodiments, the execution of a gather stride instruction causes conditional storage of strided data elements from memory into the destination register according to at least some of the bit values of a writemask. | 02-19-2015 |
20150277910 | METHOD AND APPARATUS FOR EXECUTING INSTRUCTIONS USING A PREDICATE REGISTER - An apparatus and method are described for executing instructions using a predicate register. For example, one embodiment of a processor comprises: a register set including a predicate register to store a set of predicate condition bits, the predicate condition bits specifying whether results of a particular predicated instruction sequence are to be retained or discarded; and predicate execution logic to execute a first predicate instruction to indicate a start of a new predicated instruction sequence by copying a condition value from a processor control register in the register set to the predicate register. In a further embodiment, the predicate condition bits in the predicate register are to be shifted in response to the first predicate instruction to free space within the predicate register for the new condition value associated with the new predicated instruction sequence. | 10-01-2015 |
20160019067 | MECHANISM FOR INSTRUCTION SET BASED THREAD EXECUTION ON A PLURALITY OF INSTRUCTION SEQUENCERS - In an embodiment, a method is provided. The method includes managing user-level threads on a first instruction sequencer in response to executing user-level instructions on a second instruction sequencer that is under control of an application level program. A first user-level thread is run on the second instruction sequencer and contains one or more user level instructions. A first user level instruction has at least 1) a field that makes reference to one or more instruction sequencers or 2) implicitly references with a pointer to code that specifically addresses one or more instruction sequencers when the code is executed. | 01-21-2016 |
20160055004 | METHOD AND APPARATUS FOR NON-SPECULATIVE FETCH AND EXECUTION OF CONTROL-DEPENDENT BLOCKS - An apparatus and method are described for non-speculative execution of conditional instructions. For example, one embodiment of a processor comprises: a register set including a first register to store a set of one or more condition bits; non-speculative execution logic to execute a first instruction to identify a first target instruction strand in response to a first conditional value read from the set of condition bits, the first instruction to wait until the first conditional value becomes known before causing the first target instruction strand to be fetched and executed, the non-speculative execution logic to execute a second instruction to identify an end of the first target instruction strand and responsively identify a new current instruction pointer for instructions which follow the second instruction; and out-of-order execution logic to fetch and execute the instructions which follow the second instruction prior to the execution of the second instruction. | 02-25-2016 |
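Several entries above (e.g. 20140095831 and 20150052333) involve gather and scatter operations predicated by a writemask: only elements whose mask bit is set are moved. A sketch of the strided, mask-predicated semantics, with memory modeled as a Python list (base, stride, and mask values below are purely illustrative):

```python
def gather_stride(memory, base, stride, writemask, dest):
    """Conditionally load strided elements from `memory` into `dest`:
    a set writemask bit means the element is gathered, a clear bit
    leaves the destination element untouched."""
    for i in range(len(dest)):
        if (writemask >> i) & 1:
            dest[i] = memory[base + i * stride]
    return dest

def scatter_stride(memory, base, stride, writemask, src):
    """Conditionally store elements of `src` to strided memory locations,
    again under control of the writemask."""
    for i, value in enumerate(src):
        if (writemask >> i) & 1:
            memory[base + i * stride] = value
    return memory
```

The gather-setup/gather split in 20140095831 corresponds to computing the `base + i * stride` addresses ahead of the actual loads.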
20120254591 | SYSTEMS, APPARATUSES, AND METHODS FOR STRIDE PATTERN GATHERING OF DATA ELEMENTS AND STRIDE PATTERN SCATTERING OF DATA ELEMENTS - Embodiments of systems, apparatuses, and methods for performing gather and scatter stride instructions in a computer processor are described. In some embodiments, the execution of a gather stride instruction causes conditional storage of strided data elements from memory into the destination register according to at least some of the bit values of a writemask. | 10-04-2012 |
20120254593 | SYSTEMS, APPARATUSES, AND METHODS FOR JUMPS USING A MASK REGISTER - Embodiments of systems, apparatuses, and methods for performing a jump instruction in a computer processor are described. In some embodiments, the execution of a jump instruction causes a conditional jump to an address of a target instruction when all of the bits of a writemask are zero, wherein the address of the target instruction is calculated using the instruction pointer of the jump instruction and a relative offset. | 10-04-2012 |
20130305020 | VECTOR FRIENDLY INSTRUCTION FORMAT AND EXECUTION THEREOF - A vector friendly instruction format and execution thereof. According to one embodiment of the invention, a processor is configured to execute an instruction set. The instruction set includes a vector friendly instruction format. The vector friendly instruction format has a plurality of fields including a base operation field, a modifier field, an augmentation operation field, and a data element width field, wherein the first instruction format supports different versions of base operations and different augmentation operations through placement of different values in the base operation field, the modifier field, the alpha field, the beta field, and the data element width field, and wherein only one of the different values may be placed in each of the base operation field, the modifier field, the alpha field, the beta field, and the data element width field on each occurrence of an instruction in the first instruction format in instruction streams. | 11-14-2013 |
20140149724 | VECTOR FRIENDLY INSTRUCTION FORMAT AND EXECUTION THEREOF - A vector friendly instruction format and execution thereof. According to one embodiment of the invention, a processor is configured to execute an instruction set. The instruction set includes a vector friendly instruction format. The vector friendly instruction format has a plurality of fields including a base operation field, a modifier field, an augmentation operation field, and a data element width field, wherein the first instruction format supports different versions of base operations and different augmentation operations through placement of different values in the base operation field, the modifier field, the alpha field, the beta field, and the data element width field, and wherein only one of the different values may be placed in each of the base operation field, the modifier field, the alpha field, the beta field, and the data element width field on each occurrence of an instruction in the first instruction format in instruction streams. | 05-29-2014 |
20140281387 | CONVERTING CONDITIONAL SHORT FORWARD BRANCHES TO COMPUTATIONALLY EQUIVALENT PREDICATED INSTRUCTIONS - A processor is operable to process conditional branches. The processor includes instruction fetch logic to fetch a conditional short forward branch. The conditional short forward branch is to include a conditional branch instruction and a set of one or more instructions that are to sequentially follow the conditional branch instruction in program order. The set of the one or more instructions are between the conditional branch instruction and a forward branch target instruction that is to be indicated by the conditional branch instruction. The processor also includes instruction conversion logic coupled with the instruction fetch logic. The instruction conversion logic is to convert the conditional short forward branch to a computationally equivalent set of one or more predicated instructions. Other processors are also disclosed, as are various methods and systems. | 09-18-2014 |
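Two entries above describe branch-avoidance techniques: 20120254593 takes a jump only when every writemask bit is zero, with the target computed from the instruction pointer plus a relative offset, while 20140281387 converts a short forward branch into predicated instructions. Both behaviors can be sketched (unit-length instructions and the specific operand values are assumptions):

```python
def mask_jump(ip, relative_offset, writemask):
    """Conditional jump taken only when all writemask bits are zero;
    the target is the instruction pointer plus the relative offset."""
    if writemask == 0:
        return ip + relative_offset  # all bits zero: jump taken
    return ip + 1  # otherwise fall through (unit-length instructions assumed)

def predicated_select(condition, dest, src):
    # A short forward branch guarding a single move can be converted to this
    # branch-free predicated form: the move takes effect only when the
    # predicate holds, so no branch misprediction is possible.
    return src if condition else dest
```

In both cases the win is the same: control-flow hazards are traded for data dependences the pipeline handles cheaply.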
20080206120 | Method For Purifying Waste Gases of a Glass Melting Process, Particularly For Glasses For LCD Display - The invention relates to a method for purifying waste gases of a glass melting process during which SiO2 | 08-28-2008 |
20080219908 | Method For Cleaning Exhaust Gases Produced By A Sintering Process For Ores And/Or Other Metal-Containing Materials In Metal Production - The invention relates to a method for cleaning exhaust gases produced by an ore sintering process in metal production, consisting in mixing ores, possibly associated with other metal-containing materials, with a solid fuel, in sintering said materials by simultaneously combusting said solid fuel, and in carrying out a distillation process in such a way that NOx | 09-11-2008 |
20100284870 | FLUID TREATMENT SYSTEM WITH BULK MATERIAL BEDS OPERATED IN PARALLEL AND METHOD FOR OPERATING SUCH A SYSTEM - A fluid treatment system having bulk beds. The fluid to be treated essentially streams from the bottom up through a bulk bed, while the bulk material migrates through the bulk beds in countercurrent to the fluid essentially from the top down. This is accomplished by removing partial quantities of bulk material at the lower end of the bulk bed, and delivering partial quantities of the bulk material to the bulk bed at the top. At least one charging wagon provided with optionally sealable bulk material outlets is able to traverse a charging channel between a charging position and several partial bulk bed release positions above the bulk beds. Provided below the bulk material outlets and the bulk material valve of the charging wagon are bulk material through pipes, the bulk material outlet mouths of which end on bulk material cones of an underlying bulk bed. | 11-11-2010 |
20100296991 | METHOD AND DEVICE FOR PURIFYING THE FLUE GASES OF A SINTERING PROCESS OF ORES AND/OR OTHER METAL-CONTAINING MATERIALS IN METAL PRODUCTION - In a method for the purifying of the waste gases of a sintering process of ores in the production of metals, in which ore material is sintered with a solid fuel, with the combustion of the solids and passage through a smoldering process, at least the pollutants SO2 | 11-25-2010 |
20120216873 | FLUID TREATMENT SYSTEM WITH BULK MATERIAL BEDS OPERATED IN PARALLEL AND METHOD FOR OPERATING SUCH A SYSTEM - A fluid treatment system having bulk beds. The fluid to be treated essentially streams from the bottom up through a bulk bed, while the bulk material migrates through the bulk beds in countercurrent to the fluid essentially from the top down. This is accomplished by removing partial quantities of bulk material at the lower end of the bulk bed, and delivering partial quantities of the bulk material to the bulk bed at the top. At least one charging wagon provided with optionally sealable bulk material outlets is able to traverse a charging channel between a charging position and several partial bulk bed release positions above the bulk beds. Provided below the bulk material outlets and the bulk material valve of the charging wagon are bulk material through pipes, the bulk material outlet mouths of which end on bulk material cones of an underlying bulk bed. | 08-30-2012 |
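The two bulk-bed entries above (20100284870, 20120216873) describe countercurrent operation: the fluid streams bottom-up while the bulk material migrates top-down, realized by discharging partial quantities at the bottom and recharging fresh material at the top via the charging wagon. A deque models that migration (treating the bed as discrete layers is a simplifying assumption):

```python
from collections import deque

class BulkBed:
    """Moving-bed sketch: bulk material migrates from top to bottom while
    the treated fluid flows the other way through it."""

    def __init__(self, layers):
        # index 0 = top of the bed, last index = bottom
        self.bed = deque(layers)

    def step(self, fresh_material):
        discharged = self.bed.pop()          # remove a partial quantity at the bottom
        self.bed.appendleft(fresh_material)  # charging wagon delivers at the top
        return discharged
```

Each `step` is one removal/recharge cycle, so the oldest (most loaded) material always leaves first, which is the point of countercurrent contact.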
20100228693 | METHOD AND SYSTEM FOR GENERATING A DOCUMENT REPRESENTATION - A method, system and computer program product for generating a document representation are disclosed. The system includes a server and a client computer, and the method involves: receiving into memory a resource containing at least one sentence of text; producing a tree comprising tree elements indicating parts-of-speech and grammatical relations between the tree elements; producing semantic structures each having three tree elements to represent a simple clause (subject-predicate-object); and storing a semantic network of semantic structures and connections therebetween. The semantic network may be created from a user-provided root concept. Output representations include concept maps, fact listings, text summaries, tag clouds, indices, and annotated text. The system interactively modifies semantic networks in response to user feedback, and produces personal semantic networks and document use histories. | 09-09-2010 |
20140019385 | GENERATING A DOCUMENT REPRESENTATION USING SEMANTIC NETWORKS - A method, system and computer program product for developing a semantic network are disclosed. The system includes a server and a client computer, and the method involves: creating a semantic network containing at least one root concept; performing a set of instructions associated with the semantic network, the set of instructions comprising: transmitting information about the semantic network from the client computer to a server; providing information about at least one resource to the server, the at least one resource containing concepts and relations associated with the at least one root concept; receiving information about a modified semantic network from the server; presenting the information about the modified semantic network to a user; receiving a response from the user; based on the response, further modifying the semantic network. The system interactively modifies semantic networks in response to user feedback, and produces personal semantic networks and document use histories. | 01-16-2014 |
20140156635 | OPTIMIZING AN ORDER OF EXECUTION OF MULTIPLE JOIN OPERATIONS - A computer-implemented method, system, and/or computer program product optimizes an order of execution of column join operations. A first partitioning of the first data column splits the first data column into first subsets of rows. A second partitioning of the second data column splits the second data column into second subsets of rows. A first value frequency information indicates a frequency of attribute values within a processed subset of rows of the first data column. A second value frequency information indicates a frequency of attribute values within a subset of rows of the second data column. Cardinalities of sub-tables derived by a respective joining of the subsets of rows of the first and second data columns are estimated, based on the first and second value frequency information. An order of execution of multiple join operations is then optimized based on the estimated cardinalities of the sub-tables. | 06-05-2014 |
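The join-ordering entry above (20140156635) estimates the cardinality of each candidate join from per-column value frequencies, then orders the joins by those estimates. A greedy sketch, assuming equi-joins and the standard sum-of-products frequency estimate (the abstract does not commit to a specific estimator or ordering strategy):

```python
from collections import Counter

def estimate_join_cardinality(freq_a, freq_b):
    """Estimate the row count of an equi-join from per-column value
    frequencies: each shared value v contributes freq_a[v] * freq_b[v]
    matching row pairs."""
    return sum(freq_a[v] * freq_b[v] for v in freq_a.keys() & freq_b.keys())

def order_joins(column_pairs):
    """Return join indices ordered so the smallest estimated result runs
    first -- a simple greedy stand-in for the described optimizer."""
    frequencies = [(Counter(a), Counter(b)) for a, b in column_pairs]
    estimates = [estimate_join_cardinality(fa, fb) for fa, fb in frequencies]
    return sorted(range(len(column_pairs)), key=lambda i: estimates[i])
```

Running the cheap joins first keeps the intermediate sub-tables small, which is exactly the benefit the abstract's cardinality estimates are meant to buy.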
20140358995 | PROVIDING ACCESS TO A RESOURCE FOR A COMPUTER FROM WITHIN A RESTRICTED NETWORK - Disclosed are systems, methods, and machine readable storage media that cause a storage computer and a client computer to perform a method of providing access to one or more resources on the storage computer for the client computer. The storage computer is operable for initiation of a network connection between the client computer and the storage computer. Initiation of the network connection between the client computer and the storage computer by the storage computer is enabled, and initiation of the network connection between the client computer and the storage computer by the client computer is disabled. The client computer and the storage computer are operable for maintaining the network connection between the client computer and the storage computer. | 12-04-2014 |