Patent application number | Description | Published |
20120179601 | OFFSITE FINANCIAL ACCOUNT ONBOARDING - Offsite financial account onboarding is provided which creates a more streamlined process for a customer. The customer accesses a money services business electronic system to request financial account setup. The onboarding system establishes account access to a pooled custodial account managed by the money services business based on preliminary identification (ID) data from the customer. With only preliminary ID data, account access limits are assigned which reduces the risk of fraud or criminal activity with the customer's account access. Because account access is established with just the preliminary ID data, the customer may fully obtain account access directly from a mobile device. The money services business provides additional graduated access levels depending on additional ID data provided by the customer. Thus, depending on the type of ID data provided by the customer, the customer's account access will have corresponding access level rights to the custodial account. | 07-12-2012 |
20120179608 | OFFSITE FINANCIAL ACCOUNT ONBOARDING - Offsite financial account onboarding is provided which creates a more streamlined process for a customer. The customer accesses a money services business electronic system to request financial account setup. The onboarding system establishes account access to a pooled custodial account managed by the money services business based on preliminary identification (ID) data from the customer. With only preliminary ID data, account access limits are assigned which reduces the risk of fraud or criminal activity with the customer's account access. Because account access is established with just the preliminary ID data, the customer may fully obtain account access directly from a mobile device. The money services business provides additional graduated access levels depending on additional ID data provided by the customer. Thus, depending on the type of ID data provided by the customer, the customer's account access will have corresponding access level rights to the custodial account. | 07-12-2012 |
20130036054 | OFFSITE FINANCIAL ACCOUNT ONBOARDING - Offsite financial account onboarding is provided which creates a more streamlined process for a customer. The customer accesses a money services business electronic system to request financial account setup. The onboarding system establishes account access to a pooled custodial account managed by the money services business based on preliminary identification (ID) data from the customer. With only preliminary ID data, account access limits are assigned which reduces the risk of fraud or criminal activity with the customer's account access. Because account access is established with just the preliminary ID data, the customer may fully obtain account access directly from a mobile device. The money services business provides additional graduated access levels depending on additional ID data provided by the customer. Thus, depending on the type of ID data provided by the customer, the customer's account access will have corresponding access level rights to the custodial account. | 02-07-2013 |
20140006127 | Systems and Methods for Earning Virtual Value Associated with Transaction Account Activities | 01-02-2014 |
20140258126 | OFFSITE FINANCIAL ACCOUNT ONBOARDING - Offsite financial account onboarding is provided which creates a more streamlined process for a customer. The customer accesses a money services business electronic system to request financial account setup. The onboarding system establishes account access to a pooled custodial account managed by the money services business based on preliminary identification (ID) data from the customer. With only preliminary ID data, account access limits are assigned which reduces the risk of fraud or criminal activity with the customer's account access. Because account access is established with just the preliminary ID data, the customer may fully obtain account access directly from a mobile device. The money services business provides additional graduated access levels depending on additional ID data provided by the customer. Thus, depending on the type of ID data provided by the customer, the customer's account access will have corresponding access level rights to the custodial account. | 09-11-2014 |
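The graduated access-level scheme described in the onboarding abstracts above can be sketched as a small lookup: each kind of ID data the customer supplies unlocks the next tier of access to the pooled custodial account. This is an illustrative sketch only; the ID kinds, tier count, and limit values are assumptions for exposition, not taken from the applications.

```python
# Map each kind of ID data a customer can supply to an access tier.
# These categories and limits are hypothetical examples.
ID_TIERS = {
    "phone_number": 1,      # preliminary ID data only
    "government_id": 2,     # e.g., a driver's license scan
    "proof_of_address": 3,  # e.g., a utility bill
}

# Per-tier limits on access to the pooled custodial account.
TIER_LIMITS = {1: 100, 2: 1000, 3: 10000}

def access_limit(provided_ids):
    """Return the account access limit for the highest tier the customer
    has qualified for; each tier also requires all lower tiers' ID data."""
    tier = 0
    for id_kind, level in sorted(ID_TIERS.items(), key=lambda kv: kv[1]):
        if id_kind in provided_ids and level == tier + 1:
            tier = level
    return TIER_LIMITS.get(tier, 0)
```

With only preliminary ID data the customer gets the lowest limit, and each additional ID document raises the limit, mirroring the "graduated access levels" the abstracts describe.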
Patent application number | Description | Published |
20110314259 | OPERATING A STACK OF INFORMATION IN AN INFORMATION HANDLING SYSTEM - A pointer is provided for pointing to a next-to-read location within a stack of information. For pushing information onto the stack: a value is saved of the pointer, which points to a first location within the stack as being the next-to-read location; the pointer is updated so that it points to a second location within the stack as being the next-to-read location; and the information is written for storage at the second location. For popping the information from the stack: in response to the pointer, the information is read from the second location as the next-to-read location; and the pointer is restored to equal the saved value so that it points to the first location as being the next-to-read location. | 12-22-2011 |
20110320765 | VARIABLE WIDTH VECTOR INSTRUCTION PROCESSOR - A computer processor, method, and computer program product for executing vector processing instructions on a variable width vector register file. An example embodiment is a computer processor that includes an instruction execution unit coupled to a variable width vector register file containing a number of vector registers; the width of the vector registers is changeable during operation of the computer processor. | 12-29-2011 |
20130019083 | Redundant Transactional Memory (Inventors: Harold W. Cain, III, Hartsdale, NY, US; David M. Daly, Croton on Hudson, NY, US; Kattamuri Ekanadham, Mohegan Lake, NY, US; Michael C. Huang, Rochester, NY, US; Jose E. Moreira, Irvington, NY, US; Mauricio J. Serrano, Bronx, NY, US) - A mechanism is provided for redundant execution of a set of instructions. A redundant execution begin (rbegin) instruction to be executed by a first hardware thread on a first processor is identified in the set of instructions. The set of instructions immediately after the rbegin instruction is executed on the first hardware thread and on a second hardware thread. Responsive to both the first processor and a second processor ending execution of the set of instructions, responsive to a first set of cache lines in a first speculative store matching a second set of cache lines in a second speculative store, and responsive to a first set of register states in a first status register matching a second set of register states in a second status register, dirty lines in the first speculative store are committed, thereby committing a redundant transaction state to an architectural state. | 01-17-2013 |
20130019085 | Efficient Recombining for Dual Path Execution (Inventors: Harold W. Cain, III, Hartsdale, NY, US; David M. Daly, Croton on Hudson, NY, US; Michael C. Huang, Rochester, NY, US; Jose E. Moreira, Irvington, NY, US; IL Park, Seoul, KR) - A mechanism is provided for reducing the penalty of executing the correct branch of a branch instruction. An execution unit in a processor of a data processing system executes a first branch of the branch instruction from a main thread of the processor and executes a second branch of the branch instruction from an assist thread of the processor. The execution unit determines whether the main thread or the assist thread is executing the correct branch of the branch instruction. Responsive to the assist thread being the correct branch of the branch instruction, the execution unit pauses execution of the branch instruction on both the main thread and the assist thread. The execution unit then properly inherits a context of the main thread in order that execution of the second branch may continue. | 01-17-2013 |
20140075121 | Selective Delaying of Write Requests in Hardware Transactional Memory Systems - Techniques for conflict detection in hardware transactional memory (HTM) are provided. In one aspect, a method for detecting conflicts in HTM includes the following steps. Conflict detection is performed eagerly by setting read and write bits in a cache as transactions having read and write requests are made. A given one of the transactions is stalled when a conflict is detected whereby more than one of the transactions are accessing data in the cache in a conflicting way. An address of the conflicting data is placed in a predictor. The predictor is queried whenever the write requests are made to determine whether they correspond to entries in the predictor. A copy of the data corresponding to entries in the predictor is placed in a store buffer. The write bits in the cache are set and the copy of the data in the store buffer is merged in at transaction commit. | 03-13-2014 |
20140095716 | MAXIMIZING RESOURCES IN A MULTI-APPLICATION PROCESSING ENVIRONMENT - Aspects of the present invention provide a solution for maximizing server site resources in a server network. In an embodiment, an application signature is collected for an application. This application signature includes a representation of operating characteristics of the application. The application signature is compared with application signatures collected from other applications in the server network. Based on the comparison, the application is assigned for execution to a server site that hosts a group of applications that have similar application signatures to that of the application. | 04-03-2014 |
20140095718 | MAXIMIZING RESOURCES IN A MULTI-APPLICATION PROCESSING ENVIRONMENT - Aspects of the present invention provide a solution for maximizing server site resources in a server network. In an embodiment, an application signature is collected for an application. This application signature includes a representation of operating characteristics of the application. The application signature is compared with application signatures collected from other applications in the server network. Based on the comparison, the application is assigned for execution to a server site that hosts a group of applications that have similar application signatures to that of the application. | 04-03-2014 |
20140281710 | TRANSACTIONS FOR CHECKPOINTING AND REVERSE EXECUTION - A method of backstepping through a program execution includes dividing the program execution into a plurality of epochs, wherein the program execution is performed by an active core, determining, during a subsequent epoch of the plurality of epochs, that a rollback is to be performed, performing the rollback including re-executing a previous epoch of the plurality of epochs, wherein the previous epoch includes one or more instructions of the program execution stored by a checkpointing core, and adjusting a granularity of the plurality of epochs according to a frequency of the rollback. | 09-18-2014 |
20150032997 | TRACKING LONG GHV IN HIGH PERFORMANCE OUT-OF-ORDER SUPERSCALAR PROCESSORS - Tracking a global history vector in high-performance out-of-order superscalar processors may, in one aspect, comprise providing a shift register that stores a global history vector of branch predictions and outcomes. A counter is maintained to determine the number of bits by which to shift the shift register to recover branch history. In another aspect, the global history vector may be implemented with a circular buffer structure. Youngest and oldest pointers to the circular buffer are maintained and used in recovery. | 01-29-2015 |
20150046752 | Redundant Transactions for Detection of Timing Sensitive Errors - A method for detecting a software-race condition in a program includes copying a state of a transaction of the program from a first core of a multi-core processor to at least one additional core of the multi-core processor, running the transaction, redundantly, on the first core and the at least one additional core given the state, outputting a result of the first core and the at least one additional core, and detecting a difference in the results between the first core and the at least one additional core, wherein the difference indicates the software-race condition. | 02-12-2015 |
20150046758 | REDUNDANT TRANSACTIONS FOR SYSTEM TEST - A method for detecting errors in hardware including running a transaction on a plurality of cores, wherein each of the cores runs a respective copy of the transaction, synchronizing the transaction on the cores, comparing results of the transaction on the cores, and determining an error in one or more of the cores. | 02-12-2015 |
20150143083 | Techniques for Increasing Vector Processing Utilization and Efficiency Through Vector Lane Predication Prediction - Techniques for increasing vector processing utilization and efficiency through use of unmasked lanes of predicated vector instructions for executing non-conflicting instructions are provided. In one aspect, a method of vector lane predication for a processor is provided which includes the steps of: fetching predicated vector instructions from a memory; decoding the predicated vector instructions; determining if a mask value of the predicated vector instructions is available and, if the mask value of the predicated vector instructions is not available, predicting the mask value of the predicated vector instructions; and dispatching the predicated vector instructions to only masked vector lanes. | 05-21-2015 |
20150186145 | Compressed Indirect Prediction Caches - Provided herein is a compressed cache design to predict indirect branches in a microprocessor based on the characteristics of the addresses of the branch instructions. In one aspect, a method for predicting a branch target T in a microprocessor includes the following steps. A compressed count cache table (CTABLE) of branch targets indexed using a function combining a branch address and a branch history vector for each of the targets is maintained, wherein entries in the CTABLE contain only low-order bits of each of the targets in combination with one or more index bits I. A given one of the entries is obtained related to a given one of the branch targets and it is determined from the index bits I whether A) high-order bits of the target are equal to the branch address, or B) the high-order bits of the target are contained in an auxiliary cache table (HTABLE). | 07-02-2015 |
20150293703 | PAGE TABLE INCLUDING DATA FETCH WIDTH INDICATOR - Embodiments relate to a page table including a data fetch width indicator. An aspect includes allocating a memory page in a main memory to an application. Another aspect includes creating a page table entry corresponding to the memory page in the page table. Another aspect includes determining, by a data fetch width indicator determination logic, the data fetch width indicator for the memory page. Another aspect includes sending a notification of the data fetch width indicator from the data fetch width indicator determination logic to supervisory software. Another aspect includes setting the data fetch width indicator in the page table entry by the supervisory software based on the notification. Another aspect includes, based on a cache miss in the cache memory corresponding to an address that is located in the memory page, fetching an amount of data from the memory page based on the data fetch width indicator. | 10-15-2015 |
20150293704 | MEMORY-AREA PROPERTY STORAGE INCLUDING DATA FETCH WIDTH INDICATOR - Embodiments relate to memory-area property storage including a data fetch width indicator. An aspect includes allocating a memory page in a main memory to an application that is executed by a processor of a computer. Another aspect includes determining the data fetch width indicator for the allocated memory page. Another aspect includes setting the data fetch width indicator in the at least one memory-area property storage in the allocated memory page. Another aspect includes, based on a cache miss in the cache memory corresponding to an address that is located in the allocated memory page: determining the data fetch width indicator in the memory-area property storage associated with the location of the address; and fetching an amount of data from the memory page based on the data fetch width indicator. | 10-15-2015 |
20150293849 | COUNTER-BASED WIDE FETCH MANAGEMENT - Embodiments relate to counter-based wide fetch management. An aspect includes assigning a counter to a first memory region in a main memory that is allocated to a first application that is executed by a processor of a computer. Another aspect includes maintaining, by the counter, a count of a number of times adjacent cache lines in the cache memory that correspond to the first memory region are touched by the processor. Another aspect includes determining an update to a data fetch width indicator corresponding to the first memory region based on the counter. Another aspect includes sending a hardware notification from a counter management module to supervisory software of the computer of the update to the data fetch width indicator. Yet another aspect includes updating, by the supervisory software, the data fetch width indicator of the first memory region in the main memory based on the hardware notification. | 10-15-2015 |
20150293851 | MEMORY-AREA PROPERTY STORAGE INCLUDING DATA FETCH WIDTH INDICATOR - Embodiments relate to memory-area property storage including a data fetch width indicator. An aspect includes allocating a memory page in a main memory to an application that is executed by a processor of a computer. Another aspect includes determining the data fetch width indicator for the allocated memory page. Another aspect includes setting the data fetch width indicator in the at least one memory-area property storage in the allocated memory page. Another aspect includes, based on a cache miss in the cache memory corresponding to an address that is located in the allocated memory page: determining the data fetch width indicator in the memory-area property storage associated with the location of the address; and fetching an amount of data from the memory page based on the data fetch width indicator. | 10-15-2015 |
20150293852 | COUNTER-BASED WIDE FETCH MANAGEMENT - Embodiments relate to counter-based wide fetch management. An aspect includes assigning a counter to a first memory region in a main memory that is allocated to a first application that is executed by a processor of a computer. Another aspect includes maintaining, by the counter, a count of a number of times adjacent cache lines in the cache memory that correspond to the first memory region are touched by the processor. Another aspect includes determining an update to a data fetch width indicator corresponding to the first memory region based on the counter. Another aspect includes sending a hardware notification from a counter management module to supervisory software of the computer of the update to the data fetch width indicator. Yet another aspect includes updating, by the supervisory software, the data fetch width indicator of the first memory region in the main memory based on the hardware notification. | 10-15-2015 |
20150293855 | PAGE TABLE INCLUDING DATA FETCH WIDTH INDICATOR - Embodiments relate to a page table including a data fetch width indicator. An aspect includes allocating a memory page in a main memory to an application. Another aspect includes creating a page table entry corresponding to the memory page in the page table. Another aspect includes determining, by a data fetch width indicator determination logic, the data fetch width indicator for the memory page. Another aspect includes sending a notification of the data fetch width indicator from the data fetch width indicator determination logic to supervisory software. Another aspect includes setting the data fetch width indicator in the page table entry by the supervisory software based on the notification. Another aspect includes, based on a cache miss in the cache memory corresponding to an address that is located in the memory page, fetching an amount of data from the memory page based on the data fetch width indicator. | 10-15-2015 |
20150331691 | BRANCH PREDICTION USING MULTIPLE VERSIONS OF HISTORY DATA - Branch prediction is provided by generating a first index from a previous instruction address and from a first branch history vector having a first length. A second index is generated from the previous instruction address and from a second branch history vector that is longer than the first vector. Using the first index, a first branch prediction is retrieved from a first branch prediction table. Using the second index, a second branch prediction is retrieved from a second branch prediction table. Based upon additional branch history data, the first branch history vector and the second branch history vector are updated. A first hash value is generated from a current instruction address and the updated first branch history vector. A second hash value is generated from the current instruction address and the updated second branch history vector. One of the branch predictions are selected based upon the hash values. | 11-19-2015 |
20150338891 | LOCKING POWER SUPPLIES - There is provided an apparatus, a method, and a computer program product for managing one or more components of an electronic machine. A user connects one or more components to an electronic machine in parallel. The electronic machine determines whether the components have failed. An electromechanical latch device, attached to each component, automatically locks a component to the electronic machine if that component has not failed, and automatically releases the component from the electronic machine if it has failed. | 11-26-2015 |
20160092231 | INDEPENDENT MAPPING OF THREADS - Embodiments of the present invention provide systems and methods for mapping the architected state of one or more threads to a set of distributed physical register files to enable independent execution of one or more threads in a multiple slice processor. In one embodiment, a system is disclosed including a plurality of dispatch queues which receive instructions from one or more threads and an even number of parallel execution slices, each parallel execution slice containing a register file. A routing network directs an output from the dispatch queues to the parallel execution slices and the parallel execution slices independently execute the one or more threads. | 03-31-2016 |
20160092276 | INDEPENDENT MAPPING OF THREADS - Embodiments of the present invention provide systems and methods for mapping the architected state of one or more threads to a set of distributed physical register files to enable independent execution of one or more threads in a multiple slice processor. In one embodiment, a system is disclosed including a plurality of dispatch queues which receive instructions from one or more threads and an even number of parallel execution slices, each parallel execution slice containing a register file. A routing network directs an output from the dispatch queues to the parallel execution slices and the parallel execution slices independently execute the one or more threads. | 03-31-2016 |
20160092331 | REDUNDANT TRANSACTIONS FOR SYSTEM TEST - A method for detecting errors in hardware including running a transaction on a plurality of cores, wherein each of the cores runs a respective copy of the transaction, periodically synchronizing the transaction on the cores throughout execution of the transaction, comparing results of the transaction on the cores, and determining an error in one or more of the cores. | 03-31-2016 |
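The push/pop scheme of application 20110314259 in the table above (save the next-to-read pointer on push, restore it on pop) can be sketched as a tiny data structure. This is a hedged illustration; the `PointerStack` class, the fixed-size backing store, and the list of saved pointer values are assumptions made for exposition, not the application's implementation.

```python
class PointerStack:
    """Stack whose pop restores a pointer value saved at push time."""

    def __init__(self, size=16):
        self.store = [None] * size
        self.pointer = 0   # points to the next-to-read location
        self.saved = []    # one saved pointer value per outstanding push

    def push(self, value):
        # Save the pointer, which points to a first location (next-to-read).
        self.saved.append(self.pointer)
        # Update the pointer so it points to a second location.
        self.pointer = (self.pointer + 1) % len(self.store)
        # Write the information for storage at the second location.
        self.store[self.pointer] = value

    def pop(self):
        # In response to the pointer, read from the next-to-read location.
        value = self.store[self.pointer]
        # Restore the pointer to equal the saved value (the first location).
        self.pointer = self.saved.pop()
        return value
```

After matched push/pop pairs the pointer returns to its original value, which is the property the abstract's save/restore protocol provides.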
Patent application number | Description | Published |
20090070628 | HYBRID EVENT PREDICTION AND SYSTEM CONTROL - A system for predicting an occurrence of a critical event in a computer cluster includes: a control system that includes an event log, a system performance log, a memory for storing information related to occurrences of critical events, and a processor. The processor implements a hybrid prediction system; loads the information from the event log and the system performance log into a Bayesian network model; uses the Bayesian network model to predict a future critical event; makes future scheduling and current data migration selections; and adapts the Bayesian network model by feeding the scheduling and data migration selections back into it. | 03-12-2009 |
20090282151 | SEMI-HIERARCHICAL SYSTEM AND METHOD FOR ADMINISTRATION OF CLUSTERS OF COMPUTER RESOURCES - A method for managing clustered computer resources, and particularly very large scale clusters of computer resources, by a semi-hierarchical n-level, n+1-tier approach. Controller resources and controlled resources exist at different hardware levels. The top level consists of controller nodes, and a first tier is defined for at least part of the top level. At a last level, at which controlled nodes are found, a last tier is defined. Additional levels of controlled and controller resources may exist between the top and last levels. At least one logical intermediate tier is introduced between adjacent levels and comprises at least one proxy or set of proxy processes. | 11-12-2009 |
20110258347 | COMMUNICATIONS SUPPORT IN A TRANSACTIONAL MEMORY - A system, method and computer program product are provided for supporting Transactional Memory communications. In one embodiment, the system comprises a transactional memory host with a host transactional memory buffer, an endpoint device, a transactional memory buffer associated with the endpoint device, and a communication path connecting the endpoint device and host. Input/Output transactions associated with the endpoint device executed in transactional memory on the host are stored in both the host transactional memory buffer and the transactional memory buffer associated with the endpoint device. In an embodiment, the Transactional Memory system further comprises an intermediate device located on the communication path between the host and the endpoint device, and an intermediate transactional memory buffer associated with said intermediate device. In this embodiment, the Input/Output transactions associated with said endpoint device are stored in the intermediate transactional memory buffer associated with the intermediate device. | 10-20-2011 |
20110296148 | Transactional Memory System Supporting Unbroken Suspended Execution - Mechanisms are provided, in a data processing system having a processor and a transactional memory, for executing a transaction in the data processing system. These mechanisms execute a transaction comprising one or more instructions that modify at least a portion of the transactional memory. The transaction is suspended in response to a transaction suspend instruction being executed by the processor. A suspended block of code is executed in a non-transactional manner while the transaction is suspended. A determination is made as to whether an interrupt occurs while the transaction is suspended. In response to an interrupt occurring while the transaction is suspended, a transaction abort operation is delayed until after the transaction suspension is discontinued. | 12-01-2011 |
20120239904 | SEAMLESS INTERFACE FOR MULTI-THREADED CORE ACCELERATORS - A method, system and computer program product are disclosed for interfacing between a multi-threaded processing core and an accelerator. In one embodiment, the method comprises copying from the processing core to the hardware accelerator memory address translations for each of multiple threads operating on the processing core, and simultaneously storing on the hardware accelerator one or more of the memory address translations for each of the threads. Whenever any one of the multiple threads operating on the processing core instructs the hardware accelerator to perform a specified operation, the hardware accelerator has stored thereon one or more of the memory address translations for the any one of the threads. This facilitates starting that specified operation without memory translation faults. In an embodiment, the copying includes, each time one of the memory address translations is updated on the processing core, copying the updated one of the memory address translations to the hardware accelerator. | 09-20-2012 |
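The deferred-abort behavior of application 20110296148 in the table above (an interrupt during suspended execution does not abort the transaction until suspension is discontinued) can be sketched as a small state machine. This is an illustrative sketch; the `Transaction` class, its flags, and method names are assumptions for exposition, not the application's hardware mechanism.

```python
class Transaction:
    """Models abort delivery for a suspendable memory transaction."""

    def __init__(self):
        self.suspended = False      # suspended block runs non-transactionally
        self.pending_abort = False  # abort recorded but not yet performed
        self.aborted = False

    def suspend(self):
        self.suspended = True

    def interrupt(self):
        if self.suspended:
            # While suspended, the abort operation is only recorded.
            self.pending_abort = True
        else:
            # Outside suspension an interrupt aborts immediately.
            self.aborted = True

    def resume(self):
        self.suspended = False
        # The delayed abort takes effect once suspension is discontinued.
        if self.pending_abort:
            self.aborted = True
            self.pending_abort = False
```

The point of the deferral is that the suspended block can run to completion "unbroken", with the abort applied only at the suspend/resume boundary.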
Patent application number | Description | Published |
20140075124 | Selective Delaying of Write Requests in Hardware Transactional Memory Systems - Techniques for conflict detection in hardware transactional memory (HTM) are provided. In one aspect, a method for detecting conflicts in HTM includes the following steps. Conflict detection is performed eagerly by setting read and write bits in a cache as transactions having read and write requests are made. A given one of the transactions is stalled when a conflict is detected whereby more than one of the transactions are accessing data in the cache in a conflicting way. An address of the conflicting data is placed in a predictor. The predictor is queried whenever the write requests are made to determine whether they correspond to entries in the predictor. A copy of the data corresponding to entries in the predictor is placed in a store buffer. The write bits in the cache are set and the copy of the data in the store buffer is merged in at transaction commit. | 03-13-2014 |
20150324204 | PARALLEL SLICE PROCESSOR WITH DYNAMIC INSTRUCTION STREAM MAPPING - A processor core having multiple parallel instruction execution slices and coupled to multiple dispatch queues by a dispatch routing network provides flexible and efficient use of internal resources. The dispatch routing network is controlled to dynamically vary the relationship between the slices and instruction streams according to execution requirements for the instruction streams and the availability of resources in the instruction execution slices. The instruction execution slices may be dynamically reconfigured as between single-instruction-multiple-data (SIMD) instruction execution and ordinary instruction execution on a per-instruction basis, permitting the mixture of those instruction types. Instructions having an operand width greater than the width of a single instruction execution slice may be processed by multiple instruction execution slices configured to act in concert for the particular instructions. When an instruction execution slice is busy processing a current instruction for one of the streams, another slice can be selected to proceed with execution. | 11-12-2015 |
20150324205 | PROCESSING OF MULTIPLE INSTRUCTION STREAMS IN A PARALLEL SLICE PROCESSOR - Techniques for managing instruction execution for multiple instruction streams using a processor core having multiple parallel instruction execution slices provide flexibility in execution of program instructions by a processor core. An event is detected indicating that either resource requirement or resource availability will not be met by the execution slice currently executing the instruction stream. In response to detecting the event, dispatch of at least a portion of the subsequent instruction is made to another instruction execution slice. The event may be a compiler-inserted directive, may be an event detected by logic in the processor core, or may be determined by a thread sequencer. The instruction execution slices may be dynamically reconfigured as between single-instruction-multiple-data (SIMD) instruction execution, ordinary instruction execution, and wide instruction execution. When an instruction execution slice is busy processing a current instruction for one of the streams, another slice can be selected to proceed with execution. | 11-12-2015 |
20150324206 | PARALLEL SLICE PROCESSOR WITH DYNAMIC INSTRUCTION STREAM MAPPING - A method of operation of a processor core having multiple parallel instruction execution slices and coupled to multiple dispatch queues by a dispatch routing network provides flexible and efficient use of internal resources. The dispatch routing network is controlled to dynamically vary the relationship between the slices and instruction streams according to execution requirements for the instruction streams and the availability of resources in the instruction execution slices. The instruction execution slices may be dynamically reconfigured as between single-instruction-multiple-data (SIMD) instruction execution and ordinary instruction execution on a per-instruction basis. Instructions having an operand width greater than the width of a single instruction execution slice may be processed by multiple instruction execution slices configured to act in concert for the particular instructions. When an instruction execution slice is busy processing a current instruction for one of the streams, another slice can be selected to proceed with execution. | 11-12-2015 |
20150324207 | PROCESSING OF MULTIPLE INSTRUCTION STREAMS IN A PARALLEL SLICE PROCESSOR - A method of managing instruction execution for multiple instruction streams using a processor core having multiple parallel instruction execution slices provides instruction processing flexibility. An event is detected indicating that either a resource requirement or resource availability for a subsequent instruction of an instruction stream will not be met by the instruction execution slice currently executing the instruction stream. In response to detecting the event, dispatch of at least a portion of the subsequent instruction is made to another instruction execution slice. The event may be a compiler-inserted directive, may be an event detected by logic in the processor core, or may be determined by a thread sequencer. The execution slices may be dynamically reconfigured as between single-instruction-multiple-data (SIMD) instruction execution, ordinary instruction execution, and wide instruction execution. When an execution slice is busy processing a current instruction for one of the streams, another slice can be selected to proceed with execution. | 11-12-2015 |
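The four abstracts above describe routing instructions from multiple streams to whichever execution slices are free, with wide operands handled by slices acting in concert. A minimal toy scheduler can sketch that behavior; the function name, the soonest-free selection policy, and the cycle accounting are illustrative assumptions, not details taken from the patents:

```python
def dispatch(instructions, num_slices=4, slice_width=64):
    """Toy dispatch-routing model: assign each instruction to one or
    more free execution slices.

    instructions: list of (stream_id, operand_width) tuples.
    Returns a list of (stream_id, [slice indices]) assignments.
    """
    busy = [0] * num_slices              # cycle at which each slice frees up
    schedule = []
    for stream_id, width in instructions:
        needed = -(-width // slice_width)    # ceil: wide ops span slices
        # Pick the `needed` slices that free up soonest (assumed policy).
        chosen = sorted(range(num_slices), key=lambda s: busy[s])[:needed]
        start = max(busy[s] for s in chosen)  # paired slices act in concert
        for s in chosen:
            busy[s] = start + 1
        schedule.append((stream_id, chosen))
    return schedule

# Two streams; the 128-bit instruction occupies two 64-bit slices at once,
# and the final instruction waits for the soonest-free slice.
print(dispatch([(0, 64), (1, 64), (0, 128), (1, 64)]))
# → [(0, [0]), (1, [1]), (0, [2, 3]), (1, [0])]
```

The point of the sketch is the dynamic stream-to-slice mapping: no slice is dedicated to a stream, so when one slice is busy, another is selected, and operand width alone decides how many slices are ganged together.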
Patent application number | Description | Published |
20130203517 | GOLF CLUB GRIP WITH HOUSING - The invention generally relates to a grip for a golf club for housing an accessory to enhance the enjoyment of the game of golf. A grip of the invention prevents relative motion between the accessory and the club when the accessory is coupled to the club. | 08-08-2013 |
20130244805 | INTERCHANGEABLE SHAFT AND CLUB HEAD CONNECTION SYSTEM - A releasable connection system for assembling a shaft and a club head, e.g., a golf club shaft and a golf club head. The connection system provides interchangeability between a shaft and a club head and allows the head to be adjusted with respect to the shaft. The mating structures between the shaft and the club head may be indexed for reproducible placement. In an embodiment, the connection system also includes retaining structures that maintain the connection fastener position when the head and shaft are separated. | 09-19-2013 |
20140162802 | GOLF CLUB GRIP WITH DEVICE HOUSING - The invention relates to golf clubs, more particularly to mechanisms for fastening accessories to clubs. The invention provides a golf club configured to house an electronic device such as an RFID tag in a recess within the grip, thereby protecting the device from the stress, shock, and exposure that arises when a golf club is used. | 06-12-2014 |
20140187346 | GOLF CLUB HEAD WITH REMOVABLE COMPONENT - The invention provides a golf club head with a fully removable component that can withstand the stress of repeated hits. When assembled, the removable component is held in place by a fastening mechanism that includes structural elements that distribute the holding force across the component and tend to equalize the forces around the periphery where the component meets the body. The fastening mechanism may include a post that reaches across the open space within the hollow club head, pulling the removable component towards an opposed main club head body. Since a golf club of the present invention can be opened, it may include a mechanism on the inside for use by a golfer, such as an electronic device or an adjustment mechanism. The golf club may include a weight adjustment system that allows the club to be custom-fitted to a golfer. | 07-03-2014 |
20140316542 | SYSTEM AND METHOD FOR FITTING GOLF CLUBS AND SETS - The invention relates to devices, systems, and methods for fitting golf clubs. A golf club set is selected from among many golf clubs. A golfer can give information about an existing club set or hitting abilities. Systems of the invention use that information to identify a yardage gap to be covered by clubs and a number of clubs to span that yardage gap. The system can determine one or more yardages for each of the number of clubs and then propose a specific club for each yardage. | 10-23-2014 |
20150045142 | MILLING PROCESS FOR ROUGHNESS ON GOLF CLUB FACE - The invention generally relates to a golf club in which the ball-striking face has a high surface roughness. The invention provides systems and methods for dual speed milling for improved surface roughness. In certain aspects, the invention provides a method of making a ball striking face for a golf club that includes obtaining a piece of material for use as a club head ball-striking face, milling a surface of the piece of material at a first speed, and milling the surface at a second speed. | 02-12-2015 |
20150306473 | GOLF CLUB WITH ADJUSTABLE WEIGHT ASSEMBLY - The invention generally relates to golf clubs with adjustable mass properties. In certain aspects, the invention provides methods and mechanisms for adjusting a club head center of gravity and/or moment of inertia by way of an adjustable weight assembly positionable along the sole of the club head body. When in a first position, the weight assembly provides a lower center of gravity so as to increase launch angle and reduce spin rate, resulting in greater overall distance of ball flight. When in a second position, the weight assembly provides a greater mass moment of inertia, which effectively enlarges the sweet spot and produces a more forgiving club for off-center hits. | 10-29-2015 |
20150306474 | GOLF CLUB WITH ADJUSTABLE WEIGHT ASSEMBLY - The invention generally relates to golf clubs with adjustable mass properties. In certain aspects, the invention provides methods and mechanisms for adjusting a club head center of gravity and/or moment of inertia by way of an adjustable weight assembly positionable along the sole of the club head body. When in a first position, the weight assembly provides a lower center of gravity so as to increase launch angle and reduce spin rate, resulting in greater overall distance of ball flight. When in a second position, the weight assembly provides a greater mass moment of inertia, which effectively enlarges the sweet spot and produces a more forgiving club for off-center hits. | 10-29-2015 |
20150314173 | INTERCHANGEABLE SHAFT AND CLUB HEAD CONNECTION SYSTEM - A releasable connection system for assembling a shaft and a club head, e.g., a golf club shaft and a golf club head. The connection system provides interchangeability between a shaft and a club head and allows the head to be adjusted with respect to the shaft. The mating structures between the shaft and the club head may be indexed for reproducible placement. In an embodiment, the connection system also includes retaining structures that maintain the connection fastener position when the head and shaft are separated. | 11-05-2015 |
20160089583 | GOLF CLUB WITH ADJUSTABLE WEIGHT ASSEMBLY - The invention generally relates to golf clubs with adjustable mass properties. In certain aspects, the invention provides methods and mechanisms for adjusting a club head center of gravity and/or moment of inertia by way of an adjustable weight assembly positionable along the sole of the club head body. When in a first position, the weight assembly provides a lower center of gravity so as to increase launch angle and reduce spin rate, resulting in greater overall distance of ball flight. When in a second position, the weight assembly provides a greater mass moment of inertia, which effectively enlarges the sweet spot and produces a more forgiving club for off-center hits. | 03-31-2016 |
20160089584 | GOLF CLUB GRIP WITH DEVICE HOUSING - The invention relates to golf clubs, more particularly to mechanisms for fastening accessories to clubs. The invention provides a golf club configured to house an electronic device such as an RFID tag in a recess within the grip, thereby protecting the device from the stress, shock, and exposure that arises when a golf club is used. | 03-31-2016 |