Patent application number | Description | Published |
--- | --- | --- |
20090061099 | ROBOTIC TIRE SPRAYING SYSTEM - A robotic spray system is provided for accurately spraying mold release onto any size or shaped green tire. The system analyzes individual green tires using an integrated vision system. The system controls the robotic spray position, the fan, fluid, atomizing air, and tire rotation speed for optimal spray coverage on both the inside and outside of green tires. The system includes a conveyor, an overhead mounted camera located over an infeed station, and a second camera located perpendicular to the green tire's tread and several feet away from the center of the tire. Pictures of the green tire in the station are used to estimate the center and radius of the tire and locate the angle of the bar code with respect to the center of the tire. Reference points are provided from the camera images and robot positions are calculated to control the spraying. | 03-05-2009 |
20130078385 | MODULAR TIRE SPRAYING SYSTEM - A modular tire spraying system includes a downdraft spray booth for receiving a tire, a fluid delivery system disposed in the spray booth, a robot for transporting the tire to the spray booth, and a platform on which each of the spray booth, the fluid delivery system, and the robot is disposed. The fluid delivery system includes at least one spray gun for delivering a coating to the tire. | 03-28-2013 |
20140041578 | MODULAR TIRE SPRAYING SYSTEM - A modular tire spraying system includes a downdraft spray booth for receiving a tire, a fluid delivery system disposed in the spray booth, a robot for transporting the tire to the spray booth, and a platform on which each of the spray booth, the fluid delivery system, and the robot is disposed. The fluid delivery system includes at least one spray gun for delivering a coating to the tire. | 02-13-2014 |
20150151314 | PRECISION FLUID DELIVERY SYSTEM - A precision fluid delivery system includes a peristaltic pump, a motor, and a controller. The peristaltic pump has a fluid inlet for communication with a fluid source and a fluid outlet for communication with a tire spraying system. The motor is coupled to and configured to drive the peristaltic pump. The controller is in communication with the motor, and operates the peristaltic pump for delivery of the fluid to a tire spraying system during a tire coating operation. | 06-04-2015 |
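The vision step in 20090061099 estimates a green tire's center and radius from camera images. As a minimal illustration (not the patented method, which would fit many detected edge points), the circumcircle of three edge points recovers a center and radius; the function name and three-point input are assumptions for this sketch.

```python
import math

def circle_from_three_points(a, b, c):
    """Estimate a circle's center and radius from three (x, y) points,
    e.g. points detected on a tire's rim in an overhead camera image.
    Uses the standard circumcircle formula."""
    ax, ay = a
    bx, by = b
    cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        raise ValueError("points are collinear; no unique circle")
    ux = ((ax**2 + ay**2) * (by - cy)
          + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx)
          + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    radius = math.hypot(ax - ux, ay - uy)
    return (ux, uy), radius

# Three points on a circle centered at (5, 5) with radius 3.
center, radius = circle_from_three_points((8, 5), (5, 8), (2, 5))
```

A production system would fit a circle to the full edge-point set (e.g. by least squares) and reject outliers, but the geometry per triple is the same.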
Patent application number | Description | Published |
--- | --- | --- |
20110067025 | AUTOMATICALLY GENERATING COMPOUND COMMANDS IN A COMPUTER SYSTEM - A computer system provides a way to automatically generate compound commands that perform tasks made up of multiple simple commands. A compound command generation mechanism monitors consecutive user commands and compares the consecutive commands a user has taken to a command sequence identification policy. If the user's consecutive commands satisfy the command sequence identification policy the user's consecutive commands become a command sequence. If the command sequence satisfies the compound command policy, the compound generation mechanism can generate a compound command for the command sequence automatically or prompt an administrator to allow the compound command to be generated. Generating a compound command can be done on a user by user basis or on a system wide basis. The compound command can then be displayed to the user to execute so that the command sequence is performed by the user selecting the compound command for execution. | 03-17-2011 |
20110301996 | AUTOMATING WORKFLOW PARTICIPATION - A workflow system allows defining criteria for an automated task agent to perform tasks for a participant automatically without input from the participant at the time the task is performed. Automated tasks are performed by an automated task agent according to the participant's history in performing similar tasks in the past. Tasks completed by automated task agents are displayed to a user in a manner that distinguishes automated tasks from manual tasks. | 12-08-2011 |
20110302004 | CUSTOMIZING WORKFLOW BASED ON PARTICIPANT HISTORY AND PARTICIPANT PROFILE - A workflow system allows determining at least one date based on various factors including the complexity of a task, a participant's history as monitored by the workflow system, and a participant's profile as entered by the participant. In addition, the workflow system generates customized notifications according to the participant's reliability in meeting due dates in the past and a notification preference specified by the participant. The result is a powerful and flexible workflow system. The dates determined by the workflow system may include one or more due dates for tasks and one or more dates for notifications to participants. | 12-08-2011 |
20120159247 | AUTOMATICALLY CHANGING PARTS IN RESPONSE TO TESTS - In an embodiment, in response to an error encountered by a test of a program, a rule is found that specifies the error and an action. A part in the program is selected in response to the action, the part is modified, and the test is re-executed. In various embodiments, the part is modified by changing the code in the part or by replacing the part with a previous version of the part. | 06-21-2012 |
20120159491 | DATA DRIVEN DYNAMIC WORKFLOW - A method, system and article of manufacture for workflow processing and, more particularly, for managing creation and execution of data driven dynamic workflows. One embodiment provides a computer-implemented method for managing execution of workflow instances. The method comprises providing a parent process template and providing a child process template. The child process template is configured to implement an arbitrary number of workflow operations for a given workflow instance, and the parent process template is configured to instantiate child processes on the basis of the child process template to implement a desired workflow. The method further comprises receiving a workflow configuration and instantiating an instance of the workflow on the basis of the workflow configuration. The instantiating comprises instantiating a parent process on the basis of the parent process template and instantiating, by the parent process template, one or more child processes on the basis of the child process template. | 06-21-2012 |
20130160015 | AUTOMATICALLY GENERATING COMPOUND COMMANDS IN A COMPUTER SYSTEM - A computer system provides a way to automatically generate compound commands that perform tasks made up of multiple simple commands. A compound command generation mechanism monitors consecutive user commands and compares the consecutive commands a user has taken to a command sequence identification policy. If the user's consecutive commands satisfy the command sequence identification policy the user's consecutive commands become a command sequence. If the command sequence satisfies the compound command policy, the compound generation mechanism can generate a compound command for the command sequence automatically or prompt an administrator to allow the compound command to be generated. Generating a compound command can be done on a user by user basis or on a system wide basis. The compound command can then be displayed to the user to execute so that the command sequence is performed by the user selecting the compound command for execution. | 06-20-2013 |
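The mechanism in 20110067025 and 20130160015 monitors consecutive commands and proposes a compound command once a sequence satisfies the identification and compound-command policies. A minimal sketch of that idea, with `seq_len` and `min_repeats` standing in (as illustrative assumptions) for those two policies:

```python
from collections import Counter

def propose_compound_commands(history, seq_len=3, min_repeats=3):
    """Scan a command history for consecutive sequences of length
    `seq_len` that recur at least `min_repeats` times, and propose
    each as a compound command the user could select to execute."""
    counts = Counter(
        tuple(history[i:i + seq_len])
        for i in range(len(history) - seq_len + 1)
    )
    return ["+".join(seq) for seq, n in counts.items() if n >= min_repeats]

# A user repeats the same three-command task three times.
history = ["open", "edit", "save"] * 3 + ["quit"]
proposals = propose_compound_commands(history)
```

The abstracts leave the policies abstract; a real system could weight by recency, restrict to a single user or apply system-wide, and route the proposal through an administrator before generating the compound command.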
Patent application number | Description | Published |
--- | --- | --- |
20110173413 | EMBEDDING GLOBAL BARRIER AND COLLECTIVE IN A TORUS NETWORK - Embodiments of the invention provide a method, system and computer program product for embedding a global barrier and global interrupt network in a parallel computer system organized as a torus network. The computer system includes a multitude of nodes. In one embodiment, the method comprises taking inputs from a set of receivers of the nodes, dividing the inputs from the receivers into a plurality of classes, combining the inputs of each of the classes to obtain a result, and sending said result to a set of senders of the nodes. Embodiments of the invention provide a method, system and computer program product for embedding a collective network in a parallel computer system organized as a torus network. In one embodiment, the method comprises adding to a torus network a central collective logic to route messages among at least a group of nodes in a tree structure. | 07-14-2011 |
20110173488 | NON-VOLATILE MEMORY FOR CHECKPOINT STORAGE - A system, method and computer program product for supporting system initiated checkpoints in high performance parallel computing systems and storing of checkpoint data to a non-volatile memory storage device. The system and method generates selective control signals to perform checkpointing of system related data in presence of messaging activity associated with a user application running at the node. The checkpointing is initiated by the system such that checkpoint data of a plurality of network nodes may be obtained even in the presence of user applications running on highly parallel computers that include ongoing user messaging activity. In one embodiment, the non-volatile memory is a pluggable flash memory card. | 07-14-2011 |
20110219208 | MULTI-PETASCALE HIGHLY EFFICIENT PARALLEL SUPERCOMPUTER - A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaOPS-scale computing, at decreased cost, power and footprint, and that allows for a maximum packaging density of processing nodes from an interconnect point of view. The Supercomputer exploits technological advances in VLSI that enable a computing model where many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each ASIC computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, with each having full access to all system resources and enabling adaptive partitioning of the processors to functions such as compute or messaging I/O on an application by application basis, and preferably, enable adaptive partitioning of functions in accordance with various algorithmic phases within an application; or, if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five dimensional torus network with DMA that optimally maximizes the throughput of packet communications between nodes and minimizes latency. | 09-08-2011 |
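The barrier embedding in 20110173413 takes inputs from the nodes' receivers, divides them into classes, combines each class to a result, and sends the result back out. A minimal sketch of that class-wise combine, assuming an AND reduction (the abstract says only "combining"; a barrier completes when every participating node has arrived) and illustrative names:

```python
def combine_barrier_inputs(receiver_inputs, node_class):
    """Group per-node barrier signals by class, AND-reduce each class,
    and return the per-class result to broadcast back to the senders."""
    results = {}
    for node, arrived in receiver_inputs.items():
        cls = node_class[node]
        # A class's barrier is complete only if every node in it arrived.
        results[cls] = results.get(cls, True) and arrived
    return results

# Two classes: nodes n0/n1 in class 0 (both arrived), n2 in class 1 (not yet).
node_class = {"n0": 0, "n1": 0, "n2": 1}
done = combine_barrier_inputs({"n0": True, "n1": True, "n2": False}, node_class)
```

In the patented design this combine is done in network logic along the torus, not in software; the sketch only shows the class-partitioned reduction the abstract describes.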
Patent application number | Description | Published |
--- | --- | --- |
20110219280 | COLLECTIVE NETWORK FOR COMPUTER STRUCTURES - A system and method for enabling high-speed, low-latency global collective communications among interconnected processing nodes. The global collective network optimally enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the network via links to facilitate performance of low-latency global processing operations at nodes of the virtual network and class structures. The global collective network may be configured to provide global barrier and interrupt functionality in asynchronous or synchronized manner. When implemented in a massively-parallel supercomputing structure, the global collective network is physically and logically partitionable according to needs of a processing algorithm. | 09-08-2011 |
20130166821 | LOW LATENCY AND PERSISTENT DATA STORAGE - Persistent data storage with low latency is provided by a method that includes receiving a low latency store command that includes write data. The write data is written to a first memory device that is implemented by a nonvolatile solid-state memory technology characterized by a first access speed. It is acknowledged that the write data has been successfully written to the first memory device. The write data is written to a second memory device that is implemented by a volatile memory technology. At least a portion of the data in the first memory device is written to a third memory device when a predetermined amount of data has been accumulated in the first memory device. The third memory device is implemented by a nonvolatile solid-state memory technology characterized by a second access speed that is slower than the first access speed. | 06-27-2013 |
20140115281 | MEMORY SYSTEM CONNECTOR - According to one embodiment a memory system includes a circuit card and a separable area array connector on the circuit card. The system also includes a memory device positioned on the circuit card, wherein the memory device is configured to communicate with a main processor of a computer system via the area array connector. | 04-24-2014 |
20150236001 | IMPLEMENTING INVERTED MASTER-SLAVE 3D SEMICONDUCTOR STACK - A method and apparatus are provided for implementing an enhanced three dimensional (3D) semiconductor stack. A chip carrier has an aperture of a first length and first width. A first chip has at least one of a second length greater than the first length or a second width greater than the first width; a second chip attached to the first chip, the second chip having at least one of a third length less than the first length or a third width less than the first width; the first chip attached to the chip carrier by connections in an overlap region defined by at least one of the first and second lengths or the first and second widths; the second chip extending into the aperture; and a heat spreader attached to the chip carrier and in thermal contact with the first chip for dissipating heat from both the first chip and second chip. | 08-20-2015 |
20150236004 | IMPLEMENTING INVERTED MASTER-SLAVE 3D SEMICONDUCTOR STACK - A method and apparatus are provided for implementing an enhanced three dimensional (3D) semiconductor stack. A chip carrier has an aperture of a first length and first width. A first chip has at least one of a second length greater than the first length or a second width greater than the first width; a second chip attached to the first chip, the second chip having at least one of a third length less than the first length or a third width less than the first width; the first chip attached to the chip carrier by connections in an overlap region defined by at least one of the first and second lengths or the first and second widths; the second chip extending into the aperture; and a heat spreader attached to the chip carrier and in thermal contact with the first chip for dissipating heat from both the first chip and second chip. | 08-20-2015 |
20150289406 | HIGH-DENSITY, FAIL-IN-PLACE SWITCHES FOR COMPUTER AND DATA NETWORKS - A structure for a network switch. The network switch may include a plurality of spine chips arranged on a plurality of spine cards, where one or more spine chips are located on each spine card; and a plurality of leaf chips arranged on a plurality of leaf cards, wherein one or more leaf chips are located on each leaf card, where each spine card is connected to every leaf chip and the plurality of spine chips are surrounded on at least two sides by leaf cards. | 10-08-2015 |
20160105262 | COLLECTIVE NETWORK FOR COMPUTER STRUCTURES - A system and method for enabling high-speed, low-latency global collective communications among interconnected processing nodes. The global collective network optimally enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the network via links to facilitate performance of low-latency global processing operations at nodes of the virtual network and class structures. The global collective network may be configured to provide global barrier and interrupt functionality in asynchronous or synchronized manner. When implemented in a massively-parallel supercomputing structure, the global collective network is physically and logically partitionable according to needs of a processing algorithm. | 04-14-2016 |
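The write path in 20130166821 persists data to a fast nonvolatile device, acknowledges, mirrors it to volatile memory, and flushes accumulated data in bulk to a slower nonvolatile device. A minimal sketch of that three-tier flow; the class name, list-backed "devices", and `flush_threshold` are illustrative assumptions:

```python
class TieredStore:
    """Sketch of a low-latency persistent write path: ack after the
    fast nonvolatile write, mirror to volatile memory, bulk write-back
    to slow nonvolatile storage once a threshold accumulates."""

    def __init__(self, flush_threshold=4):
        self.fast_nvm = []    # first device: fast nonvolatile buffer
        self.dram = []        # second device: volatile working copy
        self.slow_nvm = []    # third device: slow bulk nonvolatile store
        self.flush_threshold = flush_threshold

    def low_latency_store(self, data):
        self.fast_nvm.append(data)        # persist first ...
        acked = True                      # ... then acknowledge the write
        self.dram.append(data)            # mirror to volatile memory
        if len(self.fast_nvm) >= self.flush_threshold:
            self.slow_nvm.extend(self.fast_nvm)   # bulk write-back
            self.fast_nvm.clear()
        return acked

store = TieredStore(flush_threshold=2)
store.low_latency_store("a")
store.low_latency_store("b")   # second write reaches the threshold
```

The latency win comes from acknowledging after the small fast-device write while deferring the slower bulk write, exactly the ordering the abstract lays out.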
Patent application number | Description | Published |
--- | --- | --- |
20080313408 | LOW LATENCY MEMORY ACCESS AND SYNCHRONIZATION - A low latency memory system access is provided in association with a weakly-ordered multiprocessor system. Each processor in the multiprocessor shares resources, and each shared resource has an associated lock within a locking device that provides support for synchronization between the multiple processors in the multiprocessor and the orderly sharing of the resources. A processor only has permission to access a resource when it owns the lock associated with that resource, and an attempt by a processor to own a lock requires only a single load operation, rather than a traditional atomic load followed by store, such that the processor only performs a read operation and the hardware locking device performs a subsequent write operation rather than the processor. A simple prefetching for non-contiguous data structures is also disclosed. A memory line is redefined so that in addition to the normal physical memory data, every line includes a pointer that is large enough to point to any other line in the memory, wherein the pointers determine which memory line to prefetch rather than some other predictive algorithm. This enables hardware to effectively prefetch memory access patterns that are non-contiguous, but repetitive. | 12-18-2008 |
20090259713 | NOVEL MASSIVELY PARALLEL SUPERCOMPUTER - A novel massively parallel supercomputer of hundreds of teraOPS-scale includes node architectures based upon System-On-a-Chip technology, i.e., each processing node comprises a single Application Specific Integrated Circuit (ASIC). Within each ASIC node is a plurality of processing elements each of which consists of a central processing unit (CPU) and plurality of floating point processors to enable optimal balance of computational performance, packaging density, low cost, and power and cooling requirements. The plurality of processors within a single node may be used individually or simultaneously to work on any combination of computation or communication as required by the particular algorithm being solved or executed at any point in time. The system-on-a-chip ASIC nodes are interconnected by multiple independent networks that optimally maximize packet communications throughput and minimize latency. In the preferred embodiment, the multiple networks include three high-speed networks for parallel algorithm message passing including a Torus, Global Tree, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. For particular classes of parallel algorithms, or parts of parallel calculations, this architecture exhibits exceptional computational performance, and may be enabled to perform calculations for new classes of parallel algorithms. Additional networks are provided for external connectivity and used for Input/Output, System Management and Configuration, and Debug and Monitoring functions. Special node packaging techniques implementing midplane and other hardware devices facilitate partitioning of the supercomputer in multiple networks for optimizing supercomputing resources. | 10-15-2009 |
20120311299 | NOVEL MASSIVELY PARALLEL SUPERCOMPUTER - A novel massively parallel supercomputer of hundreds of teraOPS-scale includes node architectures based upon System-On-a-Chip technology, i.e., each processing node comprises a single Application Specific Integrated Circuit (ASIC). Within each ASIC node is a plurality of processing elements each of which consists of a central processing unit (CPU) and plurality of floating point processors to enable optimal balance of computational performance, packaging density, low cost, and power and cooling requirements. The plurality of processors within a single node individually or simultaneously work on any combination of computation or communication as required by the particular algorithm being solved. The system-on-a-chip ASIC nodes are interconnected by multiple independent networks that optimally maximize packet communications throughput and minimize latency. The multiple networks include three high-speed networks for parallel algorithm message passing including a Torus, Global Tree, and a Global Asynchronous network that provides global barrier and notification functions. | 12-06-2012 |
20140237045 | EMBEDDING GLOBAL BARRIER AND COLLECTIVE IN A TORUS NETWORK - Embodiments of the invention provide a method, system and computer program product for embedding a global barrier and global interrupt network in a parallel computer system organized as a torus network. The computer system includes a multitude of nodes. In one embodiment, the method comprises taking inputs from a set of receivers of the nodes, dividing the inputs from the receivers into a plurality of classes, combining the inputs of each of the classes to obtain a result, and sending said result to a set of senders of the nodes. Embodiments of the invention provide a method, system and computer program product for embedding a collective network in a parallel computer system organized as a torus network. In one embodiment, the method comprises adding to a torus network a central collective logic to route messages among at least a group of nodes in a tree structure. | 08-21-2014 |
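The lock scheme in 20080313408 lets a processor attempt ownership with a single load: the processor only reads the lock, and the hardware locking device performs the subsequent write. A minimal behavioral sketch of that device; the class and method names are illustrative assumptions, not the patented interface:

```python
class LockDevice:
    """Behavioral sketch of a hardware locking device: a processor's
    single read of a free lock causes the device itself to record the
    new owner, so the processor never issues the write."""

    def __init__(self):
        self.owner = {}   # resource -> owning processor, maintained by device

    def try_acquire(self, processor, resource):
        # One load: the processor reads the lock word.  If it is free,
        # the device (not the processor) performs the ownership write.
        holder = self.owner.get(resource)
        if holder is None:
            self.owner[resource] = processor   # device-side write
            return True
        return holder == processor

    def release(self, processor, resource):
        if self.owner.get(resource) == processor:
            del self.owner[resource]

locks = LockDevice()
first = locks.try_acquire("cpu0", "queue")    # free lock: granted
second = locks.try_acquire("cpu1", "queue")   # held by cpu0: denied
```

Folding the conditional write into the device is what removes the atomic load-then-store from the processor's path in the abstract's description.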
Patent application number | Description | Published |
--- | --- | --- |
20080209585 | GENES WHICH PRODUCE STAYGREEN CHARACTERISTICS IN MAIZE AND THEIR USES - The present invention is directed to plant genetic engineering. In particular, it is directed to producing green leaves in maize through inhibition of ethylene. The genes involved in producing this phenotype include 1-Aminocyclopropane-1-Carboxylate (“ACC”) synthase, ACC oxidase, ACC deaminase, ethylene response sensor (“ERS”), ethylene resistant (“ETR”), and ethylene insensitive (“EIN”). | 08-28-2008 |
20100281556 | GENES WHICH PRODUCE STAYGREEN CHARACTERISTICS IN MAIZE AND THEIR USES - The present invention provides new methods of delaying senescence in a plant by inhibiting ACC oxidase, or EIN2 activity in the plant. In particular, it is directed to producing green leaves in maize through inhibition of ethylene. The genes involved in producing this phenotype include ACC deaminase, ethylene response sensor (“ERS”), ethylene resistant (“ETR”), and ethylene insensitive (“EIN”). The delay in senescence can be achieved through the production of ACC deaminase, mutated ETR1 and ERS2 proteins, as well as overexpression of wild-type ETR1 and ERS2 proteins. | 11-04-2010 |