Patent application number | Description | Published |
20090132938 | SKINNING SUPPORT FOR PARTNER CONTENT - The technology described herein is a system and methods for generating a branded background for user interfaces. In one embodiment, the background is generated based on a background template. A content partner may customize the background by providing a hue value, artwork and a logo. The background of the user interface is tinted a color associated with the hue value. The artwork and logo are placed in the background, and in one embodiment, the artwork comprises a watermark version of the artwork. Gallery content may also be layered over the background to create a UI having a theme. | 05-21-2009 |
20100277470 | Systems And Methods For Applying Model Tracking To Motion Capture - An image such as a depth image of a scene may be received, observed, or captured by a device and a model of a user in the image may be generated. The model may then be adjusted to mimic one or more movements by the user. For example, the model may be a skeletal model having joints and bones that may be adjusted into poses corresponding to the movements of the user in physical space. A motion capture file of the movement of the user may be generated in real time based on the adjusted model. For example, a set of vectors that define the joints and bones for each of the poses of the adjusted model may be captured and rendered in the motion capture file. | 11-04-2010 |
20100281437 | MANAGING VIRTUAL PORTS - Techniques for managing virtual ports are disclosed herein. Each such virtual port may have different associated features such as, for example, privileges, rights or options. When one or more users are in a capture scene of a gesture based system, the system may associate virtual ports with the users and maintain the virtual ports. Also provided are techniques for disassociating virtual ports from users or swapping virtual ports between two or more users. | 11-04-2010 |
20100302257 | Systems and Methods For Applying Animations or Motions to a Character - A virtual character such as an on-screen object, an avatar, an on-screen character, or the like may be animated using a live motion of a user and a pre-recorded motion. For example, a live motion of a user may be captured and a pre-recorded motion such as a pre-recorded artist generated motion, a pre-recorded motion of the user, and/or a programmatically controlled transformation may be received. The live motion may then be applied to a first portion of the virtual character and the pre-recorded motion may be applied to a second portion of the virtual character such that the virtual character may be animated with a combination of the live and pre-recorded motions. | 12-02-2010 |
20100303291 | Virtual Object - An image of a scene may be observed, received, or captured. The image may then be scanned to determine one or more signals emitted or reflected by an indicator that belongs to an input object. Upon determining the one or more signals, the signals may be grouped together into a cluster that may be used to generate a first vector that may indicate the orientation of the input object in the captured scene. The first vector may then be tracked, a virtual object and/or an avatar associated with the first vector may be rendered, and/or controls to perform in an application executing on the computing environment may be determined based on the first vector. | 12-02-2010 |
20100304813 | Protocol And Format For Communicating An Image From A Camera To A Computing Environment - A media feed interface may be provided that may be used to extract a media frame from a media feed. The media feed interface may access a capture device, a file, and/or a network resource. Upon accessing the capture device, file, and/or network resource, the media feed interface may populate buffers with data and then may create a media feed from the buffers. Upon request, the media feed interface may isolate a media frame within the media feed. For example, the media feed interface may analyze media frames in the media feed to determine whether a media frame includes information associated with, for example, the request. If the media frame includes the requested information, the media feed interface may isolate the media frame associated with the information and may provide access to the isolated media frame. | 12-02-2010 |
20110197161 | HANDLES INTERACTIONS FOR HUMAN-COMPUTER INTERFACE - A system is disclosed for providing on-screen graphical handles to control interaction between a user and on-screen objects. A handle defines what actions a user may perform on the object, such as, for example, scrolling through a textual or graphical navigation menu. Affordances are provided to guide the user through the process of interacting with a handle. | 08-11-2011 |
20110310007 | ITEM NAVIGATION USING MOTION-CAPTURE DATA - A system and method are provided for using motion-capture data to control navigation of a cursor in a user interface of a computing system. Movement of a user's hand or other object in a three-dimensional capture space is tracked and represented in the computing system as motion-capture model data. The method includes obtaining a plurality of positions for the object from the motion-capture model data. The method determines a curved-gesture center point based on at least some of the plurality of positions for the object. Using the curved-gesture center point as an origin, an angular property is determined for one of the plurality of positions for the object. The method further includes navigating the cursor in a sequential arrangement of selectable items based on the angular property. | 12-22-2011 |
20120127176 | Systems And Methods For Applying Model Tracking to Motion Capture - An image such as a depth image of a scene may be received, observed, or captured by a device and a model of a user in the image may be generated. The model may then be adjusted to mimic one or more movements by the user. For example, the model may be a skeletal model having joints and bones that may be adjusted into poses corresponding to the movements of the user in physical space. A motion capture file of the movement of the user may be generated in real time based on the adjusted model. For example, a set of vectors that define the joints and bones for each of the poses of the adjusted model may be captured and rendered in the motion capture file. | 05-24-2012 |
20120144348 | Managing Virtual Ports - Techniques for managing virtual ports are disclosed herein. Each such virtual port may have different associated features such as, for example, privileges, rights or options. When one or more users are in a capture scene of a gesture based system, the system may associate virtual ports with the users and maintain the virtual ports. Also provided are techniques for disassociating virtual ports from users or swapping virtual ports between two or more users. | 06-07-2012 |
20120163520 | SYNCHRONIZING SENSOR DATA ACROSS DEVICES - Techniques are provided for synchronization of sensor signals between devices. One or more of the devices may collect sensor data. The device may create a sensor signal from the sensor data, which it may make available to other devices under a publisher/subscriber model. The other devices may subscribe to the sensor signals they choose. A device could be a provider or a consumer of the sensor signals. A device may have a layer of code between an operating system and software applications that processes the data for the applications. The processing may include such actions as synchronizing the data in a sensor signal to a local time clock, predicting future values for data in a sensor signal, and providing data samples for a sensor signal at a frequency that an application requests, among other actions. | 06-28-2012 |
20130311944 | HANDLES INTERACTIONS FOR HUMAN-COMPUTER INTERFACE - A system is disclosed for providing on-screen graphical handles to control interaction between a user and on-screen objects. A handle defines what actions a user may perform on the object, such as, for example, scrolling through a textual or graphical navigation menu. Affordances are provided to guide the user through the process of interacting with a handle. | 11-21-2013 |
20140085193 | PROTOCOL AND FORMAT FOR COMMUNICATING AN IMAGE FROM A CAMERA TO A COMPUTING ENVIRONMENT - A media feed interface may be provided that may be used to extract a media frame from a media feed. The media feed interface may access a capture device, a file, and/or a network resource. Upon accessing the capture device, file, and/or network resource, the media feed interface may populate buffers with data and then may create a media feed from the buffers. Upon request, the media feed interface may isolate a media frame within the media feed. For example, the media feed interface may analyze media frames in the media feed to determine whether a media frame includes information associated with, for example, the request. If the media frame includes the requested information, the media feed interface may isolate the media frame associated with the information and may provide access to the isolated media frame. | 03-27-2014 |
20140160055 | WEARABLE MULTI-MODAL INPUT DEVICE FOR AUGMENTED REALITY - A wrist-worn input device that is used in augmented reality (AR) operates in three modes of operation. In a first mode of operation, the input device is curved so that it may be worn on a user's wrist. A touch surface receives letters gestured by the user or selections made by the user. In a second mode of operation, the input device is flat and used as a touch surface for more complex single or multi-hand interactions. A sticker defining one or more locations on the touch surface that correspond to a user's input, such as a character, number or intended operation, may be affixed to the touch surface. The sticker may be interchanged with different stickers based on the mode of operation, the user's preference and/or a particular AR experience. In a third mode of operation, the input device receives biometric input from biometric sensors. The biometric input may provide contextual information in an AR experience while allowing the user to have their hands free. | 06-12-2014 |
20140306874 | NEAR-PLANE SEGMENTATION USING PULSED LIGHT SOURCE - Methods for recognizing gestures within a near-field environment are described. In some embodiments, a mobile device, such as a head-mounted display device (HMD), may capture a first image of an environment while illuminating the environment using an IR light source with a first range (e.g., due to the exponential decay of light intensity) and capture a second image of the environment without illumination. The mobile device may generate a difference image based on the first image and the second image in order to eliminate background noise due to other sources of IR light within the environment (e.g., due to sunlight or artificial light sources). In some cases, object and gesture recognition techniques may be applied to the difference image in order to detect the performance of hand and/or finger gestures by an end user of the mobile device within a near-field environment of the mobile device. | 10-16-2014 |
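The difference-imaging step described in 20140306874 can be sketched in a few lines. This is a minimal illustration, not the patented implementation: the frames are assumed to be 8-bit grayscale NumPy arrays, and the function name `near_field_mask` and the `threshold` value are hypothetical parameters chosen for the example.

```python
import numpy as np

def near_field_mask(lit_frame: np.ndarray, unlit_frame: np.ndarray,
                    threshold: int = 40) -> np.ndarray:
    """Subtract the unlit frame from the IR-illuminated frame.

    Because the pulsed IR source only reaches nearby objects, pixels that
    brighten significantly between the two frames belong to the near field;
    ambient IR (sunlight, lamps) appears in both frames and cancels out.
    Returns a boolean mask of near-field pixels.
    """
    # Widen to a signed type before subtracting to avoid uint8 wrap-around.
    diff = lit_frame.astype(np.int16) - unlit_frame.astype(np.int16)
    return diff > threshold

# Toy frames: a 2x2 "hand" region brightens only under illumination,
# while a constant ambient background appears in both frames.
unlit = np.full((4, 4), 50, dtype=np.uint8)
lit = unlit.copy()
lit[1:3, 1:3] = 200  # near-field object reflecting the pulsed IR
mask = near_field_mask(lit, unlit)
```

Only the four illuminated object pixels survive the subtraction; gesture-recognition techniques would then operate on this mask rather than on the raw frame.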