Patent application number | Description | Published |
20090315905 | LAYERED TEXTURE COMPRESSION ARCHITECTURE - Various technologies for a layered texture compression architecture. In one implementation, the layered texture compression architecture may include a texture consumption pipeline. The texture consumption pipeline may include a processor, memory devices, and textures compressed at varying ratios of compression. The textures within the pipeline may be compressed at ratios in accordance with characteristics of the devices in the pipeline that contain and process the textures. | 12-24-2009 |
20110178798 | ADAPTIVE AMBIENT SOUND SUPPRESSION AND SPEECH TRACKING - A device for suppressing ambient sounds from speech received by a microphone array is provided. One embodiment of the device comprises a microphone array, a processor, an analog-to-digital converter, and memory comprising instructions stored therein that are executable by the processor. The instructions stored in the memory are configured to receive a plurality of digital sound signals, each digital sound signal based on an analog sound signal originating at the microphone array, receive a multi-channel speaker signal, generate a monophonic approximation signal of the multi-channel speaker signal, apply a linear acoustic echo canceller to suppress a first ambient sound portion of each digital sound signal, generate a combined directionally-adaptive sound signal from a combination of each digital sound signal by a combination of time-invariant and adaptive beamforming techniques, and apply one or more nonlinear noise suppression techniques to suppress a second ambient sound portion of the combined directionally-adaptive sound signal. | 07-21-2011 |
20110234756 | DE-ALIASING DEPTH IMAGES - Techniques are provided for de-aliasing depth images. The depth image may have been generated based on phase differences between a transmitted and received modulated light beam. A method may include accessing a depth image that has a depth value for a plurality of locations in the depth image. Each location has one or more neighbor locations. Potential depth values are determined for each of the plurality of locations based on the depth value in the depth image for the location and potential aliasing in the depth image. A cost function is determined based on differences between the potential depth values of each location and its neighboring locations. Determining the cost function includes assigning a higher cost for greater differences in potential depth values between neighboring locations. The cost function is substantially minimized to select one of the potential depth values for each of the locations. | 09-29-2011 |
20110267269 | HETEROGENEOUS IMAGE SENSOR SYNCHRONIZATION - A computer implemented method for synchronizing information from a scene using two heterogeneous sensing devices. Scene capture information is provided by a first sensor and a second sensor. The information comprises video streams including successive frames provided at different frequencies. Each frame is separated by a vertical blanking interval. A video output comprising a stream of successive frames each separated by a vertical blanking interval is rendered based on information in the scene. The method determines whether an adjustment of the first and second video stream relative to the video output stream is required by reference to the video output stream. A correction is then generated to at least one of said vertical blanking intervals. | 11-03-2011 |
20110274366 | DEPTH MAP CONFIDENCE FILTERING - An apparatus and method for filtering depth information received from a capture device. Depth information is filtered by using confidence information provided with the depth information, based on an adaptively created, optimal spatial filter on a per-pixel basis. Input data including depth information on a scene is received. The depth information comprises a plurality of pixels, each pixel including a depth value and a confidence value. A confidence weight normalized filter for each pixel in the depth information is generated. The weight normalized filter is combined with the input data to provide filtered data to an application. | 11-10-2011 |
20110298967 | Controlling Power Levels Of Electronic Devices Through User Interaction - A processor-implemented method, system and computer readable medium for intelligently controlling the power level of an electronic device in a multimedia system based on user intent, is provided. The method includes receiving data relating to a first user interaction with a device in a multimedia system. The method includes determining if the first user interaction corresponds to a user's intent to interact with the device. The method then includes setting a power level for the device based on the first user interaction. The method further includes receiving data relating to a second user interaction with the device. The method then includes altering the power level of the device based on the second user interaction to activate the device for the user. | 12-08-2011 |
20110301934 | MACHINE BASED SIGN LANGUAGE INTERPRETER - A computer implemented method for performing sign language translation based on movements of a user is provided. A capture device detects motions defining gestures and detected gestures are matched to signs. Successive signs are detected and compared to a grammar library to determine whether the signs assigned to gestures make sense relative to each other and to a grammar context. Each sign may be compared to previous and successive signs to determine whether the signs make sense relative to each other. The signs may further be compared to user demographic information and a contextual database to verify the accuracy of the translation. An output of the match between the movements and the sign is provided. | 12-08-2011 |
20110304713 | INDEPENDENTLY PROCESSING PLANES OF DISPLAY DATA - Independently processing planes of display data is provided by a method of outputting a video stream. The method includes retrieving from memory a first plane of display data having a first set of display parameters and post-processing the first plane of display data to adjust the first set of display parameters. The method further includes retrieving from memory a second plane of display data having a second set of display parameters and post-processing the second plane of display data independently of the first plane of display data. The method further includes blending the first plane of display data with the second plane of display data to form blended display data and outputting the blended display data. | 12-15-2011 |
20120093320 | SYSTEM AND METHOD FOR HIGH-PRECISION 3-DIMENSIONAL AUDIO FOR AUGMENTED REALITY - Techniques are provided for providing 3D audio, which may be used in augmented reality. A 3D audio signal may be generated based on sensor data collected from the actual room in which the listener is located and the actual position of the listener in the room. The 3D audio signal may include a number of components that are determined based on the collected sensor data and the listener's location. For example, a number of (virtual) sound paths between a virtual sound source and the listener may be determined. The sensor data may be used to estimate materials in the room, such that the effect that those materials would have on sound as it travels along the paths can be determined. In some embodiments, sensor data may be used to collect physical characteristics of the listener such that a suitable HRTF may be determined from a library of HRTFs. | 04-19-2012 |
20120105473 | LOW-LATENCY FUSING OF VIRTUAL AND REAL CONTENT - A system that includes a head mounted display device and a processing unit connected to the head mounted display device is used to fuse virtual content into real content. In one embodiment, the processing unit is in communication with a hub computing device. The processing unit and hub may collaboratively determine a map of the mixed reality environment. Further, state data may be extrapolated to predict a field of view for a user in the future at a time when the mixed reality is to be displayed to the user. This extrapolation can remove latency from the system. | 05-03-2012 |
20120147038 | SYMPATHETIC OPTIC ADAPTATION FOR SEE-THROUGH DISPLAY - A method for overlaying first and second images in a common focal plane of a viewer comprises forming the first image and guiding the first and second images along an axis to a pupil of the viewer. The method further comprises adjustably diverging the first and second images at an adaptive diverging optic to bring the first image into focus at the common focal plane, and, adjustably converging the second image at an adaptive converging optic to bring the second image into focus at the common focal plane. | 06-14-2012 |
20120154542 | PLURAL DETECTOR TIME-OF-FLIGHT DEPTH MAPPING - A depth-mapping method comprises exposing first and second detectors oriented along different optical axes to light dispersed from a scene, and furnishing an output responsive to a depth coordinate of a locus of the scene. The output increases with an increasing first amount of light received by the first detector during a first period, and decreases with an increasing second amount of light received by the second detector during a second period different from the first. | 06-21-2012 |
20120159090 | SCALABLE MULTIMEDIA COMPUTER SYSTEM ARCHITECTURE WITH QOS GUARANTEES - Versions of a multimedia computer system architecture are described which satisfy quality of service (QoS) guarantees for multimedia applications such as game applications while allowing platform resources, hardware resources in particular, to scale up or down over time. Computing resources of the computer system are partitioned into a platform partition and an application partition, each including its own central processing unit (CPU) and, optionally, graphics processing unit (GPU). To enhance scalability of resources up or down, the platform partition includes one or more hardware resources which are only accessible by the multimedia application via a software interface. Additionally, resources outside the partitions may be shared by the partitions or may provide general-purpose computing resources. | 06-21-2012 |
20120245933 | ADAPTIVE AMBIENT SOUND SUPPRESSION AND SPEECH TRACKING - A device for suppressing ambient sounds from speech received by a microphone array is provided. One embodiment of the device comprises a microphone array, a processor, an analog-to-digital converter, and memory comprising instructions stored therein that are executable by the processor. The instructions stored in the memory are configured to receive a plurality of digital sound signals, each digital sound signal based on an analog sound signal originating at the microphone array, receive a multi-channel speaker signal, generate a monophonic approximation signal of the multi-channel speaker signal, apply a linear acoustic echo canceller to suppress a first ambient sound portion of each digital sound signal, generate a combined directionally-adaptive sound signal from a combination of each digital sound signal by a combination of time-invariant and adaptive beamforming techniques, and apply one or more nonlinear noise suppression techniques to suppress a second ambient sound portion of the combined directionally-adaptive sound signal. | 09-27-2012 |
20130044222 | IMAGE EXPOSURE USING EXCLUSION REGIONS - Calculating a gain setting for a primary image sensor includes receiving a test matrix of pixels from a test image sensor, and receiving a first-frame matrix of pixels from a primary image sensor. A gain setting is calculated for the primary image sensor using the first-frame matrix of pixels except those pixels imaging one or more exclusion regions identified from the test matrix of pixels. | 02-21-2013 |
20130208897 | SKELETAL MODELING FOR WORLD SPACE OBJECT SOUNDS - A method for providing three-dimensional audio includes determining a world space object position and a world space ear position of a human subject based on a modeled virtual skeleton. The method further includes providing three-dimensional audio output to the human subject via an acoustic transducer array including one or more acoustic transducers. The three-dimensional audio output is configured such that sounds appear to originate from the object. | 08-15-2013 |
20130208898 | THREE-DIMENSIONAL AUDIO SWEET SPOT FEEDBACK - A method for providing three-dimensional audio is provided. The method includes receiving a depth map imaging a scene from a depth camera and recognizing a human subject present in the scene. The human subject is modeled with a virtual skeleton comprising a plurality of joints defined with a three-dimensional position. A world space ear position of the human subject is determined based on the virtual skeleton. Furthermore, a target world space ear position of the human subject is determined. The target world space ear position is the world space position where a desired audio effect can be produced via an acoustic transducer array. The method further includes outputting a notification representing a spatial relationship between the world space ear position and the target world space ear position. | 08-15-2013 |
20130208899 | SKELETAL MODELING FOR POSITIONING VIRTUAL OBJECT SOUNDS - Providing three-dimensional audio includes determining a world space ear position of a human subject based on a modeled virtual skeleton. A world space sound source position is determined such that a spatial relationship between the world space sound source position and the world space ear position models a spatial relationship between a virtual space sound source position of a virtual space sound source and a virtual space listening position. Three-dimensional audio is output to the human subject via an acoustic transducer array including one or more acoustic transducers. The three-dimensional audio output is configured such that at the world space ear position a sound provided by a particular virtual space sound source appears to originate from a corresponding world space sound source position. | 08-15-2013 |
20130208900 | DEPTH CAMERA WITH INTEGRATED THREE-DIMENSIONAL AUDIO - A three-dimensional audio system includes a depth camera and one or more acoustic transducers in the same housing. Further, the same housing also houses logic for determining a world space ear position of a human subject observed by the depth camera. The logic also determines one or more audio-output transformations based on the world space ear position. The one or more audio-output transformations are configured to produce a three-dimensional audio output configured to provide a desired audio effect at the world space ear position. | 08-15-2013 |
20130208926 | SURROUND SOUND SIMULATION WITH VIRTUAL SKELETON MODELING - A method for providing three-dimensional audio includes determining a world space ear position of a human subject based on a modeled virtual skeleton. The method further includes providing three-dimensional audio output to the human subject via an acoustic transducer array including one or more acoustic transducers. The three-dimensional audio output is configured such that channel-specific sounds appear to originate from corresponding simulated world speaker positions. | 08-15-2013 |
20140316763 | MACHINE BASED SIGN LANGUAGE INTERPRETER - A computer implemented method for performing sign language translation based on movements of a user is provided. A capture device detects motions defining gestures and detected gestures are matched to signs. Successive signs are detected and compared to a grammar library to determine whether the signs assigned to gestures make sense relative to each other and to a grammar context. Each sign may be compared to previous and successive signs to determine whether the signs make sense relative to each other. The signs may further be compared to user demographic information and a contextual database to verify the accuracy of the translation. An output of the match between the movements and the sign is provided. | 10-23-2014 |
Patent application number | Description | Published |
20100253766 | Stereoscopic Device - Systems and methods are disclosed for generating stereoscopic images for a user based on one or more images captured by one or more scene-facing cameras or detectors and the position of the user's eyes or other parts relative to a component of the system as determined from one or more images captured by one or more user-facing detectors. The image captured by the scene-facing detector is modified based on the user's eye or other position. The resulting image represents the scene as seen from the perspective of the eye of the user. The resulting image may be further modified by augmenting the image with additional images, graphics, or other data. Stereoscopic mechanisms may also be adjusted or configured based on the location of the user's eyes or other parts. | 10-07-2010 |
20110300929 | SYNTHESIS OF INFORMATION FROM MULTIPLE AUDIOVISUAL SOURCES - A system and method are disclosed for synthesizing information received from multiple audio and visual sources focused on a single scene. The system may determine the positions of capture devices based on a common set of cues identified in the image data of the capture devices. As a scene may often have users and objects moving into and out of the scene, data from the multiple capture devices may be time synchronized to ensure that data from the audio and visual sources are providing data of the same scene at the same time. Audio and/or visual data from the multiple sources may be reconciled and assimilated together to improve an ability of the system to interpret audio and/or visual aspects from the scene. | 12-08-2011 |
20110310125 | COMPARTMENTALIZING FOCUS AREA WITHIN FIELD OF VIEW - A system and method are disclosed for selectively focusing on certain areas of interest within an imaged scene to gain more image detail within those areas. In general, the present system identifies areas of interest from received image data, which may for example be detected areas of movement within the scene. The system then focuses on those areas by providing more detail in the area of interest. This may be accomplished by a number of methods, including zooming in on the image, increasing pixel density of the image and increasing the amount of light incident on the object in the image. | 12-22-2011 |
20110311144 | RGB/DEPTH CAMERA FOR IMPROVING SPEECH RECOGNITION - A system and method are disclosed for facilitating speech recognition through the processing of visual speech cues. These speech cues may include the position of the lips, tongue and/or teeth during speech. In one embodiment, upon capture of a frame of data by an image capture device, the system identifies a speaker and a location of the speaker. The system then focuses in on the speaker to get a clear image of the speaker's mouth. The system includes a visual speech cues engine which operates to recognize and distinguish sounds based on the captured position of the speaker's lips, tongue and/or teeth. The visual speech cues data may be synchronized with the audio data to ensure the visual speech cues engine is processing image data which corresponds to the correct audio data. | 12-22-2011 |
20120063637 | ARRAY OF SCANNING SENSORS - An array of image sensors is arranged to cover a field of view for an image capture system. Each sensor has a field of view segment which is adjacent to the field of view segment covered by another image sensor. The adjacent field of view (FOV) segments share an overlap area. Each image sensor comprises sets of light sensitive elements which capture image data using a scanning technique which proceeds in a sequence providing for image sensors sharing overlap areas to be exposed in the overlap area during the same time period. At least two of the image sensors capture image data in opposite directions of traversal for an overlap area. This sequencing provides closer spatial and temporal relationships between the data captured in the overlap area by the different image sensors. The closer spatial and temporal relationships reduce artifact effects at the stitching boundaries, and improve the performance of image processing techniques applied to improve image quality. | 03-15-2012 |
20120223967 | Dynamic Perspective Video Window - Systems and methods are disclosed for generating an image for a user based on an image captured by a scene-facing camera or detector. The user's position relative to a component of the system is determined, and the image captured by the scene-facing detector is modified based on the user's position. The resulting image represents the scene as seen from the perspective of the user. The resulting image may be further modified by augmenting the image with additional images, graphics, or other data. | 09-06-2012 |
20120293548 | EVENT AUGMENTATION WITH REAL-TIME INFORMATION - A system and method to present a user wearing a head mounted display with supplemental information when viewing a live event. A user wearing an at least partially see-through, head mounted display views the live event while simultaneously receiving information on objects, including people, within the user's field of view, while wearing the head mounted display. The information is presented in a position in the head mounted display which does not interfere with the user's enjoyment of the live event. | 11-22-2012 |
20130057543 | SYSTEMS AND METHODS FOR GENERATING STEREOSCOPIC IMAGES - Systems and methods are disclosed for generating stereoscopic images for a user based on one or more images captured by one or more scene-facing cameras or detectors and the position of the user's eyes or other parts relative to a component of the system as determined from one or more images captured by one or more user-facing detectors. The image captured by the scene-facing detector is modified based on the user's eye or other position. The resulting image represents the scene as seen from the perspective of the eye of the user. The resulting image may be further modified by augmenting the image with additional images, graphics, or other data. Stereoscopic mechanisms may also be adjusted or configured based on the location of the user's eyes or other parts. | 03-07-2013 |
20130212341 | MIX BUFFERS AND COMMAND QUEUES FOR AUDIO BLOCKS - The subject disclosure is directed towards a technology that may be used in an audio processing environment. Nodes of an audio flow graph are associated with virtual mix buffers. As the flow graph is processed, commands and virtual mix buffer data are provided to audio fixed-function processing blocks. Each virtual mix buffer is mapped to a physical mix buffer, and the associated command is executed with respect to the physical mix buffer. One physical mix buffer may be used as an input data buffer for the audio fixed-function processing block, and another physical mix buffer as an output data buffer, for example. | 08-15-2013 |
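The virtual-to-physical mix buffer mapping described in application 20130212341 above can be sketched as a small buffer pool. The class name, the single `MIX` command, and the frame counts are hypothetical stand-ins for the fixed-function hardware the application describes:

```python
# Minimal sketch: virtual mix buffer IDs lazily bound to physical buffers,
# with commands executed against the physical storage. Illustrative only.
class MixBufferPool:
    def __init__(self, num_physical, frames):
        self.free = list(range(num_physical))        # unbound physical buffers
        self.mapping = {}                            # virtual id -> physical index
        self.physical = [[0.0] * frames for _ in range(num_physical)]

    def physical_for(self, virtual_id):
        """Bind a virtual mix buffer to a physical one on first use."""
        if virtual_id not in self.mapping:
            self.mapping[virtual_id] = self.free.pop()
        return self.physical[self.mapping[virtual_id]]

    def execute(self, command, src_virtual, dst_virtual):
        """Run one command against the physical buffers behind two virtual IDs."""
        src = self.physical_for(src_virtual)
        dst = self.physical_for(dst_virtual)
        if command == "MIX":                         # accumulate src into dst
            for i, sample in enumerate(src):
                dst[i] += sample

pool = MixBufferPool(num_physical=4, frames=3)
pool.physical_for("vA")[:] = [0.5, 0.5, 0.5]         # fill one virtual buffer
pool.execute("MIX", "vA", "vB")                      # mix it into another
print(pool.physical_for("vB"))                       # [0.5, 0.5, 0.5]
```

The indirection lets the flow graph name as many virtual buffers as it likes while the hardware cycles a small fixed set of physical ones.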
Patent application number | Description | Published |
20090258777 | SYSTEM AND METHOD FOR TREATING FLY ASH - A method and system for treating fly ash by evenly dispersing a treating fluid into a flowing stream of fly ash. By dispersing the treating fluid into the fly ash as the fly ash is flowing, the method takes advantage of the natural mixing and particle motion that occur during flow of the bulk solid. The application of treating fluid is advantageously controlled by an automated controller whose inputs and outputs allow it to adjust the flow rate of the treating fluid in correspondence with a measured flow rate of the fly ash. | 10-15-2009 |
20110001255 | Vacuum Removal of Entrained Gasses In Extruded, Foamed Polyurethane - Methods for forming foamed polyurethane composite materials in an extruder including a vacuum section are described. One method includes introducing a polyol, a di- or poly-isocyanate, and an inorganic filler to a first section of an extruder and mixing the components. After mixing, the composite material is advanced to a second section of the extruder, which is maintained at a vacuum pressure. The composite material can begin foaming in the second section and then be extruded from the output end of the extruder. The vacuum pressure of the second section removes non-foaming related gasses entrained in the composite material. A further method includes directing the extruded composite material into a mold. | 01-06-2011 |
20110002190 | Fiber Feed System For Extruder For Use In Filled Polymeric Products - Methods for forming composite materials containing fiber in an extruder are described. A first method includes introducing a polymeric material, an inorganic filler, and a fiber to an extruder. A fiber metering device is used to control the rate the fiber is introduced to the extruder based on the extrusion rate of the extruder. A further method is described that includes introducing a polymeric material and an inorganic filler to an extruder. Then, downstream of the polymeric material and inorganic filler, a fiber metering device introduces a constant weight percentage of fiber to the extruder based on the amount of polymeric material and inorganic filler introduced to the extruder. After the polymeric material, inorganic filler, and fiber are introduced to the extruder by either method, the components are mixed to produce a composite material. | 01-06-2011 |
20130087079 | High Speed Mixing Process for Producing Inorganic Polymer Products - Methods of producing inorganic polymer products are described herein. The methods include mixing reactants comprising a reactive powder, an activator, and optionally a retardant for a mixing time of 15 seconds or less to provide a reaction mixture and forming the reaction mixture into a product. Also described herein are building materials formed according to the methods. | 04-11-2013 |
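The ratio control described in application 20090258777 above (treating-fluid flow adjusted in correspondence with a measured fly ash flow) amounts to a proportional setpoint with a pump limit. The dose ratio and flow limit below are invented for illustration and are not values from the application:

```python
# Minimal sketch of ratio control for treating-fluid dosing. Constants are assumed.
TARGET_RATIO = 0.02      # liters of treating fluid per kg of fly ash (assumed)
MAX_FLUID_RATE = 1.5     # L/s pump limit (assumed)

def fluid_setpoint(ash_rate_kg_s):
    """Treating-fluid flow proportional to the measured ash flow, clamped to the pump limit."""
    return min(TARGET_RATIO * ash_rate_kg_s, MAX_FLUID_RATE)

for ash_rate in [25.0, 50.0, 100.0]:   # example ash flow readings, kg/s
    print(ash_rate, "->", fluid_setpoint(ash_rate))
```

Holding the fluid-to-ash ratio constant as the ash flow varies is what keeps the dispersion even across the bulk solid.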
Patent application number | Description | Published |
20110139757 | METHOD AND APPARATUS FOR PROCESSING SUBSTRATE EDGES - A method and apparatus for processing substrate edges is disclosed that overcomes the limitations of conventional edge processing methods and systems used in semiconductor manufacturing. The edge processing method and apparatus of this invention includes a laser and optical system to direct a beam of radiation onto a rotating substrate supported by a chuck. The optical system accurately and precisely directs the beam to remove or transform organic or inorganic films, film stacks, residues, or particles, in atmosphere, from the top edge, top bevel, apex, bottom bevel, and bottom edge of the substrate in a single process step. An optional gas injector system directs gas onto the substrate edge to aid in the reaction. Reaction by-products are removed by means of an exhaust tube enveloping the reaction site. This invention permits precise control of an edge exclusion width, resulting in an increase in the number of usable die on a wafer. Wafer edge processing with this invention replaces existing methods that use large volumes of purified water and hazardous chemicals including solvents, acids, alkalis, and proprietary strippers. | 06-16-2011 |
20110168672 | METHOD AND APPARATUS FOR PROCESSING SUBSTRATE EDGES - A method and apparatus for processing substrate edges is disclosed that overcomes the limitations of conventional edge processing methods and systems used in semiconductor manufacturing. The edge processing method and apparatus of this invention includes a laser and optical system to direct a beam of radiation onto a rotating substrate supported by a chuck, in atmosphere. The optical system accurately and precisely directs the beam to remove or transform organic or inorganic films, film stacks, residues, or particles from the top edge, top bevel, apex, bottom bevel, and bottom edge of the substrate. An optional gas injector system directs gas onto the substrate edge to aid in the reaction. Process by-products are removed via an exhaust tube enveloping the reaction site. This invention permits precise control of an edge exclusion zone, resulting in an increase in the number of usable die on a wafer. Wafer edge processing with this invention replaces existing solvent and/or abrasive methods and thus will improve die yield. | 07-14-2011 |
Patent application number | Description | Published |
20110064535 | Offset hook and fastener system - A tie-down strap system, an offset adapter for use with a tie-down strap or rope, and kits therefor are disclosed. A tie-down strap system includes an adjustable tie-down strap component having a first strap secured on one end to a strap adjusting mechanism and an anchor point connector on an opposite end, and a second strap having a free end removably and operably connected to the strap adjusting mechanism and an anchor point connector on an opposite end. The anchor point connector includes a connector body with an opening formed at a first body end to which a strap end is secured, and an elongated end portion extending longitudinally away from the connector body for a predefined distance and offset from the plane of the connector body. The offset adapter includes a universal hook adapter having an adapter body with an opening formed at a first body end and sized to receive one of a J-hook, a split hook or an S-hook of a conventional tie-down strap system, and an offset elongated end portion that longitudinally extends from a second body end of the adapter body for a predefined distance and offset from the plane of the adapter body. | 03-17-2011 |
20120198661 | Offset hook and fastener system - An adapter kit for use with conventional tie-down straps or ropes includes a universal hook adapter having an adapter body with an opening formed at a first body end and sized to receive one of a J-hook, a split hook or an S-hook of a conventional tie-down strap or a rope, the adapter body defining a longitudinal axis plane, and an elongated end portion connected to the adapter body opposite the first body end through a bend that positions the elongated end portion to extend from and to continue in a longitudinal direction away from the adapter body and the first body end wherein the elongated end portion is parallel to and offset from the longitudinal axis plane of the adapter body. | 08-09-2012 |
20120313393 | Pickup truck tailgate accessory drill-less adapter - A tailgate accessory drill-less adapter for connecting a tailgate accessory to a tailgate portion of a pick-up truck includes an elongated body having a first side, a bottom edge, a first side edge, and a second side edge, the first side edge having a tailgate latch bolt slot transverse to the first side edge and through the elongated body, the slot positioned a predefined distance from the bottom edge, and an offset hook anchor having a hook portion and a hook body portion, the offset hook anchor connected to the first side adjacent to but spaced from the second side edge where the hook portion is adjustably extendable. | 12-13-2012 |
20130205967 | Bi-directional fence attachment for a power tool table - A fence attachment apparatus for a table power tool equipped with an existing rip fence has a longitudinal member that includes a first side member having a top surface, a bottom surface, a cutting-side lateral surface, and a fence-side lateral surface. The longitudinal member has a bridge member that extends transversely from the first side member and connects to the first side member along a major portion of the first side member adjacent to the top surface. The fence attachment has a sliding member in sliding engagement with the top surface of the longitudinal member, where the sliding member has a top sliding surface and a bottom sliding surface. A slide mechanism is disposed between the bottom sliding surface of the sliding member and the top surface of the longitudinal member, where the slide mechanism provides longitudinal movement to the sliding member. | 08-15-2013 |