Patent application number | Description | Published |
20120033443 | Scanning Backlight with Slatless Light Guide - A backlight includes a light guide and a first and second light source adapted for independent operation and arranged to inject a first and second light beam, respectively, into the light guide through different portions of the light injection surface. Each light source includes a lamp and a concave reflector to partially collimate light from the lamp. One major surface of the light guide includes prismatic structures that are parallel to a first axis. Another major surface of the light guide includes lenticular structures that are parallel to each other but perpendicular to the prismatic structures. The lenticular structures limit spatial spreading for light that remains in the light guide. Each light source cooperates with the light guide to provide light that is substantially laterally confined to a transverse band of the light guide, even though the light guide contains no gaps that define slats to accomplish such confinement. | 02-09-2012 |
20120154450 | DUAL-ORIENTATION AUTOSTEREOSCOPIC BACKLIGHT AND DISPLAY - Stereoscopic displays and backlights include a light guide with individually addressable light sources disposed at opposite edges of the light guide, and a light redirecting film disposed in front of the light guide. Light from one light source is emitted from the backlight as a right eye elongated light beam, and light from the opposite light source is emitted as a left eye elongated light beam. Structured surface features, e.g. linear prismatic or linear lenticular features, on the light guide and/or the light redirecting film are oriented such that the elongated light beams are offset from an optical axis of the backlight. Moreover, each of the elongated light beams is oriented to intersect both a first observation plane and a second observation plane perpendicular to the first observation plane, the first observation plane defined by the optical axis and an in-plane axis along which the light sources are disposed. | 06-21-2012 |
20130170218 | ILLUMINATION DEVICE HAVING VISCOELASTIC LAYER - An illumination device, such as a backlight for electronic display devices, is disclosed. The illumination device includes a lightguide optically coupled to a light source, and a viscoelastic layer and a nanovoided polymeric layer are used in conjunction with the lightguide to manage light emitted by the light source. The viscoelastic layer may be a pressure sensitive adhesive. | 07-04-2013 |
20130314780 | LENS DESIGNS FOR INTEGRAL IMAGING 3D DISPLAYS - Integral imaging 3D films for use with a display panel. The films include a flexible transmissive substrate having a first surface on a viewer side and a second surface for placement on the display panel. Convex lenses are located on the first surface. The second surface is planar for a plano-convex design or has concave lenses registered with the convex lenses for a convex-concave compound lens design. In the plano-convex design, the lens focus is in front of or behind the pixels. In the convex-concave design, the combined focus of the convex and concave lenses is in front of, at, or behind the pixels. In use, the 3D films produce 3D images with motion parallax. | 11-28-2013
20140208624 | SELF ILLUMINATED SIGNAGE FOR PRINTED GRAPHICS - Self-illuminated back- and front-lit signage for a printed graphic. The signage includes a turning film having a structured surface for redirecting light, a diffuser, and a printed graphic. The turning film receives light from an ambient light source and directs the light via the structured surface toward a viewer of the graphic in order to passively illuminate the signage. | 07-31-2014
20140285885 | LENS DESIGNS FOR INTEGRAL IMAGING 3D DISPLAYS - Integral imaging 3D films for use with a display panel. The films include a flexible transmissive substrate having a first surface on a viewer side and a second surface for placement on the display panel. Convex lenses are located on the first surface. The second surface is planar for a plano-convex design or has concave lenses registered with the convex lenses for a convex-concave compound lens design. In the plano-convex design, the lens focus is in front of or behind the pixels. In the convex-concave design, the combined focus of the convex and concave lenses is in front of, at, or behind the pixels. In use, the 3D films produce 3D images with motion parallax. (An illustrative focal-plane relation follows this table.) | 09-25-2014
20140355298 | ADHESIVE LIGHTGUIDE WITH RESONANT CIRCUIT - Optical articles that include adhesive lightguides having resonant circuits and one or more light sources are described. More particularly, optical articles having resonant circuits that, upon a triggering event, cause the one or more light sources to emit light into the adhesive lightguide such that light is transported within the lightguide by total internal reflection are described. Additionally, applications and embodiments that include such optical articles are described. | 12-04-2014
20150068080 | SELF ILLUMINATED SIGNAGE FOR PRINTED GRAPHICS - Self-illuminated back- and front-lit signage for a printed graphic. The signage includes a turning film having a structured surface for redirecting light, a diffuser, and a printed graphic. The turning film receives light from an ambient light source and directs the light via the structured surface toward a viewer of the graphic in order to passively illuminate the signage. | 03-12-2015
20150121732 | HYBRID SELF ILLUMINATED AND ACTIVELY BACK LIT SIGNAGE FOR PRINTED GRAPHICS - Hybrid signage capable of self-illumination and having an active backlight. The signage includes a turning film having a structured surface for redirecting light in order to passively illuminate a printed graphic or shaped sign when the backlight is off. In the shaped sign, the shape provides the content, such as letters, to be conveyed to the viewer instead of a graphic. The signage can be actively illuminated when the backlight is on to supplement the passive illumination. | 05-07-2015
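A brief aside on the plano-convex geometry in 20130314780 and 20140285885 above: the relation below is standard paraxial optics for a single refracting surface, not language from the applications. For lenslets of radius of curvature R formed on a substrate of refractive index n and thickness t (lens vertex to pixel plane), collimated light from the viewer side comes to a focus inside the substrate at depth nR/(n − 1), so comparing t with that depth indicates whether the focus lands behind, at, or in front of the pixels.

```latex
% Paraxial refraction at a single convex surface (air -> substrate of index n):
%   n/v - 1/u = (n - 1)/R,  with 1/u = 0 for collimated light from the viewer side,
% places the focus inside the substrate at depth
\[
  v = \frac{nR}{\,n-1\,} \quad\text{(measured from the lens vertex).}
\]
% Comparing v with the substrate thickness t (lens vertex to pixel plane):
\[
  t < \frac{nR}{n-1} \;\Rightarrow\; \text{focus behind the pixels}, \qquad
  t = \frac{nR}{n-1} \;\Rightarrow\; \text{focus at the pixels}, \qquad
  t > \frac{nR}{n-1} \;\Rightarrow\; \text{focus in front of the pixels}.
\]
```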
Patent application number | Description | Published |
20100058031 | Executing A Service Program For An Accelerator Application Program In A Hybrid Computing Environment - Executing a service program for an accelerator application program in a hybrid computing environment that includes a host computer and an accelerator, the host computer and the accelerator adapted to one another for data communications by a system level message passing module; where the service program includes a host portion and an accelerator portion and executing a service program for an accelerator includes receiving, from the host portion, operating information for the accelerator portion; starting the accelerator portion on the accelerator; providing, to the accelerator portion, operating information for the accelerator application program; establishing direct data communications between the host portion and the accelerator portion; and, responsive to an instruction communicated directly from the host portion, executing the accelerator application program. | 03-04-2010 |
20100058356 | Data Processing In A Hybrid Computing Environment - Data processing in a hybrid computing environment that includes a host computer having a host computer architecture; an accelerator having an accelerator architecture, the accelerator architecture optimized, with respect to the host computer architecture, for speed of execution of a particular class of computing functions; the host computer and the accelerator adapted to one another for data communications by a system level message passing module; and a host application process executing on the host computer. Data processing in such a hybrid computing environment includes starting, at the behest of the host application process, a thread of execution on the accelerator; returning, by the system level message passing module to the host application process, a process identifier (‘PID’) for the thread of execution; and managing, by the host application process, the thread of execution on the accelerator as though the thread of execution were a thread of execution on the host computer. | 03-04-2010
20100064295 | Executing An Accelerator Application Program In A Hybrid Computing Environment - Executing an accelerator application program in a hybrid computing environment with a host computer having a host computer architecture; an accelerator having an accelerator architecture, the accelerator architecture optimized, with respect to the host computer architecture, for speed of execution of a particular class of computing functions; the host computer and the accelerator adapted to one another for data communications by a system level message passing module, where executing an accelerator application program on an accelerator includes receiving, from a host application program on the host computer, operating information for an accelerator application program; designating a directory as a CWD for the accelerator application program, separate from any other CWDs of any other applications running on the accelerator; assigning, to the CWD, a name that is unique with respect to names of other CWDs of other applications in the computing environment; and starting the accelerator application program on the accelerator. | 03-11-2010 |
20110035556 | Reducing Remote Reads Of Memory In A Hybrid Computing Environment By Maintaining Remote Memory Values Locally - Reducing remote reads of memory in a hybrid computing environment by maintaining remote memory values locally, the hybrid computing environment including a host computer and a plurality of accelerators, the host computer and the accelerators each having local memory shared remotely with the other, including writing to the shared memory of the host computer packets of data representing changes in accelerator memory values, incrementing, in local memory and in remote shared memory on the host computer, a counter value representing the total number of packets written to the host computer, reading by the host computer from the shared memory in the host computer the written data packets, moving the read data to application memory, and incrementing, in both local memory and in remote shared memory on the accelerator, a counter value representing the total number of packets read by the host computer. | 02-10-2011 |
20110271059 | REDUCING REMOTE READS OF MEMORY IN A HYBRID COMPUTING ENVIRONMENT - A hybrid computing environment in which the host computer allocates, in the shadow memory area of the host computer, a memory region for a packet to be written to the shared memory of an accelerator; writes packet data to the accelerator's shared memory in a memory region corresponding to the allocated memory region; inserts, in a next available element of the accelerator's descriptor array, a descriptor identifying the written packet data; increments the copy of the head pointer of the accelerator's descriptor array maintained on the host computer; and updates a copy of the head pointer of the accelerator's descriptor array maintained on the accelerator with the incremented copy. | 11-03-2011 |
20120191920 | Reducing Remote Reads Of Memory In A Hybrid Computing Environment By Maintaining Remote Memory Values Locally - Reducing remote reads of memory in a hybrid computing environment by maintaining remote memory values locally, the hybrid computing environment including a host computer and a plurality of accelerators, the host computer and the accelerators each having local memory shared remotely with the other, including writing to the shared memory of the host computer packets of data representing changes in accelerator memory values, incrementing, in local memory and in remote shared memory on the host computer, a counter value representing the total number of packets written to the host computer, reading by the host computer from the shared memory in the host computer the written data packets, moving the read data to application memory, and incrementing, in both local memory and in remote shared memory on the accelerator, a counter value representing the total number of packets read by the host computer. | 07-26-2012 |
20120192204 | Executing An Accelerator Application Program In A Hybrid Computing Environment - Executing an accelerator application program in a hybrid computing environment with a host computer having a host computer architecture; an accelerator having an accelerator architecture, the accelerator architecture optimized, with respect to the host computer architecture, for speed of execution of a particular class of computing functions; the host computer and the accelerator adapted to one another for data communications by a system level message passing module, where executing an accelerator application program on an accelerator includes receiving, from a host application program on the host computer, operating information for an accelerator application program; designating a directory as a CWD for the accelerator application program, separate from any other CWDs of any other applications running on the accelerator; assigning, to the CWD, a name that is unique with respect to names of other CWDs of other applications in the computing environment; and starting the accelerator application program on the accelerator. | 07-26-2012 |
20120331065 | Messaging In A Parallel Computer Using Remote Direct Memory Access ('RDMA') - Messaging in a parallel computer using remote direct memory access (‘RDMA’), including: receiving a send work request; responsive to the send work request: translating a local virtual address on the first node, from which data is to be transferred, to a physical address on the first node; creating a local RDMA object that includes a counter set to the size of a messaging acknowledgment field; sending, from a messaging unit in the first node to a messaging unit in a second node, a message that includes an RDMA read operation request, the physical address of the local RDMA object, and the physical address on the first node from which data is to be transferred; and receiving, by the first node responsive to the second node's execution of the RDMA read operation request, acknowledgment data in the local RDMA object. | 12-27-2012
20120331153 | Establishing A Data Communications Connection Between A Lightweight Kernel In A Compute Node Of A Parallel Computer And An Input-Output ('I/O') Node Of The Parallel Computer - Establishing a data communications connection between a lightweight kernel in a compute node of a parallel computer and an input-output (‘I/O’) node of the parallel computer, including: configuring the compute node with the network address and port value for data communications with the I/O node; establishing a queue pair on the compute node, the queue pair identified by a queue pair number (‘QPN’); receiving, in the I/O node on the parallel computer from the lightweight kernel, a connection request message; establishing, by the I/O node, a queue pair on the I/O node, identified by a QPN, for communications with the compute node; and establishing, by the I/O node, the requested connection by sending to the lightweight kernel a connection reply message. | 12-27-2012
20120331243 | Remote Direct Memory Access ('RDMA') In A Parallel Computer - Remote direct memory access (‘RDMA’) in a parallel computer, the parallel computer including a plurality of nodes, each node including a messaging unit, including: receiving an RDMA read operation request that includes a virtual address representing a memory region at which to receive data to be transferred from a second node to the first node; responsive to the RDMA read operation request: translating the virtual address to a physical address; creating a local RDMA object that includes a counter set to the size of the memory region; sending a message that includes an RDMA write operation request, the physical address of the memory region on the first node, the physical address of the local RDMA object on the first node, and a remote virtual address on the second node; and receiving the data to be transferred from the second node. | 12-27-2012
20130080564 | MESSAGING IN A PARALLEL COMPUTER USING REMOTE DIRECT MEMORY ACCESS ('RDMA') - Messaging in a parallel computer using remote direct memory access (‘RDMA’), including: receiving a send work request; responsive to the send work request: translating a local virtual address on the first node, from which data is to be transferred, to a physical address on the first node; creating a local RDMA object that includes a counter set to the size of a messaging acknowledgment field; sending, from a messaging unit in the first node to a messaging unit in a second node, a message that includes an RDMA read operation request, the physical address of the local RDMA object, and the physical address on the first node from which data is to be transferred; and receiving, by the first node responsive to the second node's execution of the RDMA read operation request, acknowledgment data in the local RDMA object. | 03-28-2013
20130091236 | REMOTE DIRECT MEMORY ACCESS ('RDMA') IN A PARALLEL COMPUTER - Remote direct memory access (‘RDMA’) in a parallel computer, the parallel computer including a plurality of nodes, each node including a messaging unit, including: receiving an RDMA read operation request that includes a virtual address representing a memory region at which to receive data to be transferred from a second node to the first node; responsive to the RDMA read operation request: translating the virtual address to a physical address; creating a local RDMA object that includes a counter set to the size of the memory region; sending a message that includes an RDMA write operation request, the physical address of the memory region on the first node, the physical address of the local RDMA object on the first node, and a remote virtual address on the second node; and receiving the data to be transferred from the second node. | 04-11-2013
20130103926 | ESTABLISHING A DATA COMMUNICATIONS CONNECTION BETWEEN A LIGHTWEIGHT KERNEL IN A COMPUTE NODE OF A PARALLEL COMPUTER AND AN INPUT-OUTPUT ('I/O') NODE OF THE PARALLEL COMPUTER - Establishing a data communications connection between a lightweight kernel in a compute node of a parallel computer and an input-output (‘I/O’) node of the parallel computer, including: configuring the compute node with the network address and port value for data communications with the I/O node; establishing a queue pair on the compute node, the queue pair identified by a queue pair number (‘QPN’); receiving, in the I/O node on the parallel computer from the lightweight kernel, a connection request message; establishing, by the I/O node, a queue pair on the I/O node, identified by a QPN, for communications with the compute node; and establishing, by the I/O node, the requested connection by sending to the lightweight kernel a connection reply message. | 04-25-2013
20130179901 | Executing An Accelerator Application Program In A Hybrid Computing Environment - Executing an accelerator application program in a hybrid computing environment with a host computer having a host computer architecture; an accelerator having an accelerator architecture, the accelerator architecture optimized, with respect to the host computer architecture, for speed of execution of a particular class of computing functions; the host computer and the accelerator adapted to one another for data communications by a system level message passing module, where executing an accelerator application program on an accelerator includes receiving, from a host application program on the host computer, operating information for an accelerator application program; designating a directory as a CWD for the accelerator application program, separate from any other CWDs of any other applications running on the accelerator; assigning, to the CWD, a name that is unique with respect to names of other CWDs of other applications in the computing environment; and starting the accelerator application program on the accelerator. | 07-11-2013 |
20130185375 | CONFIGURING COMPUTE NODES IN A PARALLEL COMPUTER USING REMOTE DIRECT MEMORY ACCESS ('RDMA') - Configuring compute nodes in a parallel computer using remote direct memory access (‘RDMA’), the parallel computer comprising a plurality of compute nodes coupled for data communications via one or more data communications networks, including: initiating, by a source compute node of the parallel computer, an RDMA broadcast operation to broadcast binary configuration information to one or more target compute nodes in the parallel computer; preparing, by each target compute node, the target compute node for receipt of the binary configuration information from the source compute node; transmitting, by each target compute node, a ready message to the source compute node, the ready message indicating that the target compute node is ready to receive the binary configuration information from the source compute node; and performing, by the source compute node, an RDMA broadcast operation to write the binary configuration information into memory of each target compute node. | 07-18-2013
20130185381 | Configuring Compute Nodes In A Parallel Computer Using Remote Direct Memory Access ('RDMA') - Configuring compute nodes in a parallel computer using remote direct memory access (‘RDMA’), the parallel computer comprising a plurality of compute nodes coupled for data communications via one or more data communications networks, including: initiating, by a source compute node of the parallel computer, an RDMA broadcast operation to broadcast binary configuration information to one or more target compute nodes in the parallel computer; preparing, by each target compute node, the target compute node for receipt of the binary configuration information from the source compute node; transmitting, by each target compute node, a ready message to the source compute node, the ready message indicating that the target compute node is ready to receive the binary configuration information from the source compute node; and performing, by the source compute node, an RDMA broadcast operation to write the binary configuration information into memory of each target compute node. | 07-18-2013
20130212253 | Calculating A Checksum With Inactive Networking Components In A Computing System - Calculating a checksum utilizing inactive networking components in a computing system, including: identifying, by a checksum distribution manager, an inactive networking component, wherein the inactive networking component includes a checksum calculation engine for computing a checksum; sending, to the inactive networking component by the checksum distribution manager, metadata describing a block of data to be transmitted by an active networking component; calculating, by the inactive networking component, a checksum for the block of data; transmitting, to the checksum distribution manager from the inactive networking component, the checksum for the block of data; and sending, by the active networking component, a data communications message that includes the block of data and the checksum for the block of data. | 08-15-2013 |
20130212258 | CALCULATING A CHECKSUM WITH INACTIVE NETWORKING COMPONENTS IN A COMPUTING SYSTEM - Calculating a checksum utilizing inactive networking components in a computing system, including: identifying, by a checksum distribution manager, an inactive networking component, wherein the inactive networking component includes a checksum calculation engine for computing a checksum; sending, to the inactive networking component by the checksum distribution manager, metadata describing a block of data to be transmitted by an active networking component; calculating, by the inactive networking component, a checksum for the block of data; transmitting, to the checksum distribution manager from the inactive networking component, the checksum for the block of data; and sending, by the active networking component, a data communications message that includes the block of data and the checksum for the block of data. (An illustrative offload simulation follows this table.) | 08-15-2013
20130263138 | Collectively Loading An Application In A Parallel Computer - Collectively loading an application in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: identifying, by a parallel computer control system, a subset of compute nodes in the parallel computer to execute a job; selecting, by the parallel computer control system, one of the subset of compute nodes in the parallel computer as a job leader compute node; retrieving, by the job leader compute node from computer memory, an application for executing the job; and broadcasting, by the job leader to the subset of compute nodes in the parallel computer, the application for executing the job. | 10-03-2013 |
20130339805 | Aggregating Job Exit Statuses Of A Plurality Of Compute Nodes Executing A Parallel Application - Aggregating job exit statuses of a plurality of compute nodes executing a parallel application, including: identifying a subset of compute nodes in the parallel computer to execute the parallel application; selecting one compute node in the subset of compute nodes in the parallel computer as a job leader compute node; initiating execution of the parallel application on the subset of compute nodes; receiving an exit status from each compute node in the subset of compute nodes, where the exit status for each compute node includes information describing execution of some portion of the parallel application by the compute node; aggregating each exit status from each compute node in the subset of compute nodes; and sending an aggregated exit status for the subset of compute nodes in the parallel computer. (An illustrative aggregation sketch follows this table.) | 12-19-2013
20140136888 | CORE FILE LIMITER FOR ABNORMALLY TERMINATING PROCESSES - Computer program product and system to limit core file generation in a massively parallel computing system comprising a plurality of compute nodes each executing at least one task, of a plurality of tasks, by: upon determining that a first task executing on a first compute node has failed, performing an atomic load and increment operation on a core file count; generating a first core file upon determining that the core file count is below a predefined threshold; and not generating the first core file upon determining that the core file count is not below the predefined threshold. | 05-15-2014 |
20140136890 | CORE FILE LIMITER FOR ABNORMALLY TERMINATING PROCESSES - Computer program product and system to limit core file generation in a massively parallel computing system comprising a plurality of compute nodes each executing at least one task, of a plurality of tasks, by: upon determining that a first task executing on a first compute node has failed, performing an atomic load and increment operation on a core file count; generating a first core file upon determining that the core file count is below a predefined threshold; and not generating the first core file upon determining that the core file count is not below the predefined threshold. (A minimal atomic-counter sketch follows this table.) | 05-15-2014
20140282599 | COLLECTIVELY LOADING PROGRAMS IN A MULTIPLE PROGRAM MULTIPLE DATA ENVIRONMENT - Techniques are disclosed for loading programs efficiently in a parallel computing system. In one embodiment, nodes of the parallel computing system receive a load description file which indicates, for each program of a multiple program multiple data (MPMD) job, nodes which are to load the program. The nodes determine, using collective operations, a total number of programs to load and a number of programs to load in parallel. The nodes further generate a class route for each program to be loaded in parallel, where the class route generated for a particular program includes only those nodes on which the program needs to be loaded. For each class route, a node is selected using a collective operation to be a load leader which accesses a file system to load the program associated with a class route and broadcasts the program via the class route to other nodes which require the program. | 09-18-2014 |
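The checksum offload described in 20130212253 and 20130212258 above separates the component that computes a checksum from the component that transmits the data. The sketch below is an illustrative single-process simulation of that flow, assuming an IPv4-style ones'-complement sum and hypothetical component names; it is not the applications' implementation.

```c
/* Illustrative simulation of the checksum offload flow in
 * 20130212253/20130212258: a checksum distribution manager delegates the
 * checksum for an outgoing block to an inactive networking component, and
 * the active component transmits the block with the returned checksum.
 * Names and the checksum algorithm are assumptions for the example. */
#include <stdint.h>
#include <stdio.h>

struct net_component {
    const char *name;
    int active;                      /* nonzero if carrying live traffic */
};

/* Stand-in for the inactive component's checksum calculation engine:
 * an IPv4-style ones'-complement sum over 16-bit words. */
static uint16_t checksum_engine(const uint8_t *data, size_t len)
{
    uint32_t sum = 0;
    for (size_t i = 0; i + 1 < len; i += 2)
        sum += (uint32_t)((data[i] << 8) | data[i + 1]);
    if (len & 1)
        sum += (uint32_t)(data[len - 1] << 8);   /* pad the odd byte */
    while (sum >> 16)                            /* fold the carries  */
        sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)~sum;
}

int main(void)
{
    struct net_component active   = { "net0", 1 };
    struct net_component inactive = { "net1", 0 };
    const uint8_t block[] = "example payload to be transmitted";
    size_t block_len = sizeof block - 1;

    /* Distribution manager: delegate the checksum to the idle component. */
    uint16_t csum = checksum_engine(block, block_len);
    printf("%s (inactive) computed checksum 0x%04x over %zu bytes\n",
           inactive.name, csum, block_len);

    /* Active component sends the data message with the checksum attached. */
    printf("%s (active) transmits the block with checksum 0x%04x\n",
           active.name, csum);
    return 0;
}
```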
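The exit-status aggregation of 20130339805 above reduces many per-node statuses to one job-level status at a job leader node. The sketch below uses MPI as a stand-in messaging layer, which is an assumption; the application describes its own job-leader mechanism, and taking the maximum status as the aggregate is likewise an illustrative choice.

```c
/* Minimal sketch of the exit-status aggregation in 20130339805, with MPI as
 * a stand-in messaging layer (an assumption, not the application's method).
 * Rank 0 plays the job leader and aggregates per-task exit statuses. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Simulated per-task exit status: 0 = success, nonzero = failure. */
    int my_status = (rank == 2) ? 13 : 0;

    /* Job leader (rank 0) aggregates: keep the worst (largest) status. */
    int aggregated = 0;
    MPI_Reduce(&my_status, &aggregated, 1, MPI_INT, MPI_MAX, 0,
               MPI_COMM_WORLD);

    if (rank == 0)
        printf("aggregated exit status for %d tasks: %d\n", size, aggregated);

    MPI_Finalize();
    return 0;
}
```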
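The core file limiter of 20140136888 and 20140136890 above rests on a single atomic load-and-increment over a shared count. Below is a minimal sketch of that idea, with POSIX threads standing in for concurrently failing tasks; the constants and the simulation harness are assumptions for illustration, not the applications' implementation.

```c
/* Minimal sketch of the idea in 20140136888/20140136890: each failing task
 * performs an atomic load-and-increment on a shared core file count, and
 * only tasks that observe a value below a predefined threshold generate a
 * core file.  Compile with: cc -pthread core_limit.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define CORE_FILE_LIMIT 4    /* predefined threshold */
#define NUM_TASKS       16   /* simulated abnormally terminating tasks */

static atomic_int core_file_count = 0;

/* Called when a task terminates abnormally. */
static void on_task_failure(int task_id)
{
    /* Atomic load-and-increment: each failing task claims a unique slot. */
    int slot = atomic_fetch_add(&core_file_count, 1);

    if (slot < CORE_FILE_LIMIT)
        printf("task %2d: generating core file (slot %d)\n", task_id, slot);
    else
        printf("task %2d: core file suppressed, limit reached\n", task_id);
}

static void *task_main(void *arg)
{
    on_task_failure((int)(long)arg);
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_TASKS];

    for (long i = 0; i < NUM_TASKS; i++)
        pthread_create(&threads[i], NULL, task_main, (void *)i);
    for (int i = 0; i < NUM_TASKS; i++)
        pthread_join(threads[i], NULL);
    return 0;
}
```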