Patent application number | Description | Published |
20090132938 | SKINNING SUPPORT FOR PARTNER CONTENT - The technology described herein is a system and methods for generating a branded background for user interfaces. In one embodiment, the background is generated based on a background template. A content partner may customize the background by providing a hue value, artwork and a logo. The background of the user interface is tinted a color associated with the hue value. The artwork and logo are placed in the background, and in one embodiment, the artwork comprises a watermark version of the artwork. Gallery content may also be layered over the background to create a UI having a theme. | 05-21-2009 |
20100277470 | Systems And Methods For Applying Model Tracking To Motion Capture - An image such as a depth image of a scene may be received, observed, or captured by a device and a model of a user in the image may be generated. The model may then be adjusted to mimic one or more movements by the user. For example, the model may be a skeletal model having joints and bones that may be adjusted into poses corresponding to the movements of the user in physical space. A motion capture file of the movement of the user may be generated in real-time based on the adjusted model. For example, a set of vectors that define the joints and bones for each of the poses of the adjusted model may be captured and rendered in the motion capture file. | 11-04-2010 |
20100281437 | MANAGING VIRTUAL PORTS - Techniques for managing virtual ports are disclosed herein. Each such virtual port may have different associated features such as, for example, privileges, rights or options. When one or more users are in a capture scene of a gesture based system, the system may associate virtual ports with the users and maintain the virtual ports. Also provided are techniques for disassociating virtual ports with users or swapping virtual ports between two or more users. | 11-04-2010 |
20100302257 | Systems and Methods For Applying Animations or Motions to a Character - A virtual character such as an on-screen object, an avatar, an on-screen character, or the like may be animated using a live motion of a user and a pre-recorded motion. For example, a live motion of a user may be captured and a pre-recorded motion such as a pre-recorded artist generated motion, a pre-recorded motion of the user, and/or a programmatically controlled transformation may be received. The live motion may then be applied to a first portion of the virtual character and the pre-recorded motion may be applied to a second portion of the virtual character such that the virtual character may be animated with a combination of the live and pre-recorded motions. | 12-02-2010 |
20100303291 | Virtual Object - An image of a scene may be observed, received, or captured. The image may then be scanned to determine one or more signals emitted or reflected by an indicator that belongs to an input object. Upon determining the one or more signals, the signals may be grouped together into a cluster that may be used to generate a first vector that may indicate the orientation of the input object in the captured scene. The first vector may then be tracked, a virtual object and/or an avatar associated with the first vector may be rendered, and/or controls to perform in an application executing on the computing environment may be determined based on the first vector. | 12-02-2010 |
20100304813 | Protocol And Format For Communicating An Image From A Camera To A Computing Environment - A media feed interface may be provided that may be used to extract a media frame from a media feed. The media feed interface may access a capture device, a file, and/or a network resource. Upon accessing the capture device, file, and/or network resource, the media feed interface may populate buffers with data and then may create a media feed from the buffers. Upon request, the media feed interface may isolate a media frame within the media feed. For example, the media feed interface may analyze media frames in the media feed to determine whether a media frame includes information associated with, for example, the request. If the media frame includes the requested information, the media feed interface may isolate the media frame associated with the information and may provide access to the isolated media frame. | 12-02-2010 |
20110197161 | HANDLES INTERACTIONS FOR HUMAN-COMPUTER INTERFACE - A system is disclosed for providing on-screen graphical handles to control interaction between a user and on-screen objects. A handle defines what actions a user may perform on the object, such as for example scrolling through a textual or graphical navigation menu. Affordances are provided to guide the user through the process of interacting with a handle. | 08-11-2011 |
20110310007 | ITEM NAVIGATION USING MOTION-CAPTURE DATA - A system and method is provided for using motion-capture data to control navigating of a cursor in a user interface of a computing system. Movement of a user's hand or other object in a three-dimensional capture space is tracked and represented in the computing system as motion-capture model data. The method includes obtaining a plurality of positions for the object from the motion-capture model data. The method determines a curved-gesture center point based on at least some of the plurality of positions for the object. Using the curved-gesture center point as an origin, an angular property is determined for one of the plurality of positions for the object. The method further includes navigating the cursor in a sequential arrangement of selectable items based on the angular property. | 12-22-2011 |
20120127176 | Systems And Methods For Applying Model Tracking to Motion Capture - An image such as a depth image of a scene may be received, observed, or captured by a device and a model of a user in the image may be generated. The model may then be adjusted to mimic one or more movements by the user. For example, the model may be a skeletal model having joints and bones that may be adjusted into poses corresponding to the movements of the user in physical space. A motion capture file of the movement of the user may be generated in real-time based on the adjusted model. For example, a set of vectors that define the joints and bones for each of the poses of the adjusted model may be captured and rendered in the motion capture file. | 05-24-2012 |
20120144348 | Managing Virtual Ports - Techniques for managing virtual ports are disclosed herein. Each such virtual port may have different associated features such as, for example, privileges, rights or options. When one or more users are in a capture scene of a gesture based system, the system may associate virtual ports with the users and maintain the virtual ports. Also provided are techniques for disassociating virtual ports with users or swapping virtual ports between two or more users. | 06-07-2012 |
20120163520 | SYNCHRONIZING SENSOR DATA ACROSS DEVICES - Techniques are provided for synchronization of sensor signals between devices. One or more of the devices may collect sensor data. The device may create a sensor signal from the sensor data, which it may make available to other devices on a publisher/subscriber model. The other devices may subscribe to sensor signals they choose. A device could be a provider or a consumer of the sensor signals. A device may have a layer of code between an operating system and software applications that processes the data for the applications. The processing may include such actions as synchronizing the data in a sensor signal to a local time clock, predicting future values for data in a sensor signal, and providing data samples for a sensor signal at a frequency that an application requests, among other actions. | 06-28-2012 |
20130311944 | HANDLES INTERACTIONS FOR HUMAN-COMPUTER INTERFACE - A system is disclosed for providing on-screen graphical handles to control interaction between a user and on-screen objects. A handle defines what actions a user may perform on the object, such as for example scrolling through a textual or graphical navigation menu. Affordances are provided to guide the user through the process of interacting with a handle. | 11-21-2013 |
20140085193 | PROTOCOL AND FORMAT FOR COMMUNICATING AN IMAGE FROM A CAMERA TO A COMPUTING ENVIRONMENT - A media feed interface may be provided that may be used to extract a media frame from a media feed. The media feed interface may access a capture device, a file, and/or a network resource. Upon accessing the capture device, file, and/or network resource, the media feed interface may populate buffers with data and then may create a media feed from the buffers. Upon request, the media feed interface may isolate a media frame within the media feed. For example, the media feed interface may analyze media frames in the media feed to determine whether a media frame includes information associated with, for example, the request. If the media frame includes the requested information, the media feed interface may isolate the media frame associated with the information and may provide access to the isolated media frame. | 03-27-2014 |
20140160055 | WEARABLE MULTI-MODAL INPUT DEVICE FOR AUGMENTED REALITY - A wrist-worn input device that is used in augmented reality (AR) operates in three modes of operation. In a first mode of operation, the input device is curved so that it may be worn on a user's wrist. A touch surface receives letters or selections gestured by the user. In a second mode of operation, the input device is flat and used as a touch surface for more complex single or multi-hand interactions. A sticker defining one or more locations on the touch surface that corresponds to a user's input, such as a character, number or intended operation, may be affixed to the touch surface. The sticker may be interchanged with different stickers based on a mode of operation, user's preference and/or particular AR experience. In a third mode of operation, the input device receives biometric input from biometric sensors. The biometric input may provide contextual information in an AR experience while allowing the user to have their hands free. | 06-12-2014 |
20140306874 | NEAR-PLANE SEGMENTATION USING PULSED LIGHT SOURCE - Methods for recognizing gestures within a near-field environment are described. In some embodiments, a mobile device, such as a head-mounted display device (HMD), may capture a first image of an environment while illuminating the environment using an IR light source with a first range (e.g., due to the exponential decay of light intensity) and capture a second image of the environment without illumination. The mobile device may generate a difference image based on the first image and the second image in order to eliminate background noise due to other sources of IR light within the environment (e.g., due to sunlight or artificial light sources). In some cases, object and gesture recognition techniques may be applied to the difference image in order to detect the performance of hand and/or finger gestures by an end user of the mobile device within a near-field environment of the mobile device. | 10-16-2014 |
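The near-plane segmentation entry above (20140306874) describes subtracting an unilluminated capture from an IR-illuminated capture so that only near-field content survives. A minimal sketch of that difference-image step, assuming NumPy and a simple fixed threshold (the frame contents, threshold value, and function name are illustrative assumptions, not the patent's implementation):

```python
import numpy as np

# Hypothetical frames: 'lit' is captured with the pulsed IR source on,
# 'unlit' with it off. Because IR intensity falls off rapidly with distance,
# only near-field objects (e.g., a hand) differ strongly between captures.
def near_plane_mask(lit: np.ndarray, unlit: np.ndarray, threshold: int = 40) -> np.ndarray:
    """Return a boolean mask of near-field pixels from an illuminated/unilluminated pair."""
    diff = lit.astype(np.int16) - unlit.astype(np.int16)  # signed to avoid uint8 wraparound
    diff = np.clip(diff, 0, 255)                          # ambient IR cancels to ~zero
    return diff >= threshold                              # keep only strongly lit pixels

# Example: a bright near-field blob over a uniform ambient background.
unlit = np.full((4, 4), 30, dtype=np.uint8)
lit = unlit.copy()
lit[1:3, 1:3] = 200        # near-field object reflects the pulsed IR strongly
mask = near_plane_mask(lit, unlit)
```

Gesture recognition would then run only on the masked pixels, which is what suppresses background IR from sunlight or artificial sources.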
20080320413 | Dynamic user interface for previewing live content - A dynamic user interface for previewing live content includes multiple tiles. Information for multiple pieces of live content available from a gallery is obtained, and this information is presented in the multiple tiles of the user interface in accordance with a current user interface layout. In accordance with one aspect, this current user interface layout changes automatically over time as the user interface is displayed. In accordance with another aspect, one or more of the multiple tiles is displayed in the user interface more prominently than the other tiles, and which information is to be displayed in the one or more tiles is based at least in part on a received fee. | 12-25-2008 |
20100175027 | NON-UNIFORM SCROLLING - Embodiments related to the non-uniform scrolling of a scrollable list displayed on a computing device display are disclosed. For example, one disclosed embodiment provides a method of operating a display comprising displaying a scrollable list of items that includes a first pair of list positions separated by a first spacing on the display, and a second pair of list positions separated by a second spacing that is different than the first spacing. The method further comprises detecting a movement of a manipulator from a first location to a second location, and in response, scrolling a first list item on the display between the first pair of list positions at a first scroll distance/manipulator movement distance correspondence, and scrolling a second list item between the second pair of list positions at a second scroll distance/manipulator movement distance correspondence. | 07-08-2010 |
20100302253 | REAL TIME RETARGETING OF SKELETAL DATA TO GAME AVATAR - Techniques for generating an avatar model during the runtime of an application are herein disclosed. The avatar model can be generated from an image captured by a capture device. End-effectors can be positioned and inverse kinematics can be used to determine positions of other nodes in the avatar model. | 12-02-2010 |
20120023442 | DYNAMIC USER INTERFACE FOR PREVIEWING LIVE CONTENT - A dynamic user interface for previewing live content includes multiple tiles. Information for multiple pieces of live content available from a gallery is obtained, and this information is presented in the multiple tiles of the user interface in accordance with a current user interface layout. In accordance with one aspect, this current user interface layout changes automatically over time as the user interface is displayed. In accordance with another aspect, one or more of the multiple tiles is displayed in the user interface more prominently than the other tiles, and which information is to be displayed in the one or more tiles is based at least in part on a received fee. | 01-26-2012 |
20120206452 | REALISTIC OCCLUSION FOR A HEAD MOUNTED AUGMENTED REALITY DISPLAY - Technology is described for providing realistic occlusion between a virtual object displayed by a head mounted, augmented reality display system and a real object visible to the user's eyes through the display. A spatial occlusion in a user field of view of the display is typically a three dimensional occlusion determined based on a three dimensional space mapping of real and virtual objects. An occlusion interface between a real object and a virtual object can be modeled at a level of detail determined based on criteria such as distance within the field of view, display size or position with respect to a point of gaze. Technology is also described for providing three dimensional audio occlusion based on an occlusion between a real object and a virtual object in the user environment. | 08-16-2012 |
20130187943 | WEARABLE DISPLAY DEVICE CALIBRATION - In embodiments of wearable display device calibration, a first display lens system forms an image of an environment viewed through the first display lens system. A second display lens system also forms the image of the environment viewed through the second display lens system. The first display lens system emits a first reference beam and the second display lens system emits a second reference beam. The first display lens system then captures a reflection image of the first and second reference beams. The second display lens system also captures a reflection image of the first and second reference beams. An imaging application is implemented to compare the reflection images to determine a misalignment between the first and second display lens systems, and then apply an alignment adjustment to align the image of the environment formed by each of the first and second display lens systems. | 07-25-2013 |
20130263031 | DYNAMIC USER INTERFACE FOR PREVIEWING LIVE CONTENT - A dynamic user interface for previewing live content includes multiple tiles. User interface layouts can be displayed that each have multiple tiles displaying multiple pieces of content, where the multiple pieces of content includes different types of content and each of the multiple tiles display a piece of the content. A command input can be received to change a current user interface layout, and a transition is initiated to display a next user interface layout that includes one or more of the multiple tiles displaying the multiple pieces of content or different multiple pieces of the content. | 10-03-2013 |
20130268888 | DYNAMIC USER INTERFACE FOR PREVIEWING LIVE CONTENT - A dynamic user interface for previewing live content includes multiple tiles. A selection can be received from a user to define a tiled user interface layout that includes the multiple tiles each configured to display content from an associated content gallery. The content can be displayed on the multiple tiles in the tiled user interface layout, and one or more of the tiles change over time to display different pieces of the content from the associated content gallery of a tile. | 10-10-2013 |
20130286004 | DISPLAYING A COLLISION BETWEEN REAL AND VIRTUAL OBJECTS - Technology is described for displaying a collision between objects by an augmented reality display device system. A collision between a real object and a virtual object is identified based on three dimensional space position data of the objects. At least one effect on at least one physical property of the real object is determined based on physical properties of the real object, like a change in surface shape, and physical interaction characteristics of the collision. Simulation image data is generated and displayed simulating the effect on the real object by the augmented reality display. Virtual objects under control of different executing applications can also interact with one another in collisions. | 10-31-2013 |
20140118397 | PLANAR SURFACE DETECTION - A planar surface within a physical environment is detected enabling presentation of a graphical user interface overlaying the planar surface. Detection of planar surfaces may be performed, in one example, by obtaining a collection of three-dimensional surface points of a physical environment imaged via an optical sensor subsystem. A plurality of polygon sets of points are sampled within the collection. Each polygon set of points includes three or more localized points of the collection that defines a polygon. Each polygon is classified into one or more groups of polygons having a shared planar characteristic with each other polygon of that group. One or more planar surfaces within the collection are identified such that each planar surface is at least partially defined by a group of polygons containing at least a threshold number of polygons. | 05-01-2014 |
20140168261 | DIRECT INTERACTION SYSTEM FOR MIXED REALITY ENVIRONMENTS - A system and method are disclosed for interacting with virtual objects in a virtual environment using an accessory such as a hand held object. The virtual object may be viewed using a display device. The display device and hand held object may cooperate to determine a scene map of the virtual environment, the display device and hand held object being registered in the scene map. | 06-19-2014 |
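The planar surface detection entry above (20140118397) describes sampling localized point triples as polygons, grouping polygons with a shared planar characteristic, and keeping groups above a threshold size. A minimal sketch of that grouping step, assuming NumPy; the tolerance values, triangle data, and function names are illustrative assumptions rather than the patent's implementation:

```python
import numpy as np

def unit_normal(p0, p1, p2):
    """Unit normal of the triangle (p0, p1, p2), canonically oriented; None if degenerate."""
    n = np.cross(p1 - p0, p2 - p0)
    norm = np.linalg.norm(n)
    if norm == 0:
        return None
    n = n / norm
    return n if n[2] >= 0 else -n   # fix orientation so coplanar triangles match

def group_planes(triangles, min_count=2, tol=0.05):
    """Group triangles by (normal, offset) plane parameters; keep well-supported planes."""
    groups = []  # each entry: [normal, offset, count]
    for p0, p1, p2 in triangles:
        n = unit_normal(p0, p1, p2)
        if n is None:
            continue
        d = float(np.dot(n, p0))    # signed distance of the plane from the origin
        for g in groups:
            if np.allclose(g[0], n, atol=tol) and abs(g[1] - d) < tol:
                g[2] += 1
                break
        else:
            groups.append([n, d, 1])
    return [(g[0], g[1]) for g in groups if g[2] >= min_count]

# Example: two triangles on the z=0 plane plus one stray vertical triangle.
tris = [
    (np.array([0.0, 0, 0]), np.array([1.0, 0, 0]), np.array([0.0, 1, 0])),
    (np.array([2.0, 0, 0]), np.array([3.0, 0, 0]), np.array([2.0, 1, 0])),
    (np.array([0.0, 0, 0]), np.array([0.0, 1, 0]), np.array([0.0, 0, 1])),
]
planes = group_planes(tris)   # only the z=0 plane has enough support
```

A real system would sample many such triples from the depth-sensor point cloud and use a much larger `min_count`, but the classify-then-threshold structure is the same.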
20120092328 | FUSING VIRTUAL CONTENT INTO REAL CONTENT - A system that includes a head mounted display device and a processing unit connected to the head mounted display device is used to fuse virtual content into real content. In one embodiment, the processing unit is in communication with a hub computing device. The system creates a volumetric model of a space, segments the model into objects, identifies one or more of the objects including a first object, and displays a virtual image over the first object on a display (of the head mounted display) that allows actual direct viewing of at least a portion of the space through the display. | 04-19-2012 |
20120165964 | INTERACTIVE CONTENT CREATION - An audio/visual system (e.g., such as an entertainment console or other computing device) plays a base audio track, such as a portion of a pre-recorded song or notes from one or more instruments. Using a depth camera or other sensor, the system automatically detects that a user (or a portion of the user) enters a first collision volume of a plurality of collision volumes. Each collision volume of the plurality of collision volumes is associated with a different audio stem. In one example, an audio stem is a sound from a subset of instruments playing a song, a portion of a vocal track for a song, or notes from one or more instruments. In response to automatically detecting that the user (or a portion of the user) entered the first collision volume, the appropriate audio stem associated with the first collision volume is added to the base audio track or removed from the base audio track. | 06-28-2012 |
20130044130 | PROVIDING CONTEXTUAL PERSONAL INFORMATION BY A MIXED REALITY DEVICE - The technology provides contextual personal information by a mixed reality display device system being worn by a user. A user inputs person selection criteria, and the display system sends a request for data identifying at least one person in a location of the user who satisfies the person selection criteria to a cloud based application with access to user profile data for multiple users. Upon receiving data identifying the at least one person, the display system outputs data identifying the person if he or she is within the field of view. If not, an identifier and a position indicator of the person in the location are output. Directional sensors on the display device may also be used for determining a position of the person. Cloud based executing software can identify and track the positions of people based on image and non-image data from display devices in the location. | 02-21-2013 |
20140002491 | DEEP AUGMENTED REALITY TAGS FOR HEAD MOUNTED DISPLAYS | 01-02-2014 |
20140375679 | Dual Duty Cycle OLED To Enable Dynamic Control For Reduced Motion Blur Control With Constant Brightness In Augmented Reality Experiences - A head-mounted display (HMD) device is provided with reduced motion blur by reducing row duty cycle for an organic light-emitting diode (OLED) panel as a function of a detected movement of a user's head. Further, a panel duty cycle of the panel is increased in concert with the decrease in the row duty cycle to maintain a constant brightness. The technique is applicable, e.g., to scenarios in which an augmented reality image is displayed in a specific location in world coordinates. A sensor such as an accelerometer or gyroscope can be used to obtain an angular velocity of a user's head. The angular velocity indicates a number of pixels subtended in a frame period according to an angular resolution of the OLED panel. The duty cycles can be set, e.g., once per frame, based on the angular velocity or the number of pixels subtended in a frame period. | 12-25-2014 |
20150029218 | LATE STAGE REPROJECTION - Methods for generating and displaying images associated with one or more virtual objects within an augmented reality environment at a frame rate that is greater than a rendering frame rate are described. The rendering frame rate may correspond with the minimum time to render images associated with a pose of a head-mounted display device (HMD). In some embodiments, the HMD may determine a predicted pose associated with a future position and orientation of the HMD, generate a pre-rendered image based on the predicted pose, determine an updated pose associated with the HMD subsequent to generating the pre-rendered image, generate an updated image based on the updated pose and the pre-rendered image, and display the updated image on the HMD. The updated image may be generated via a homographic transformation and/or a pixel offset adjustment of the pre-rendered image. | 01-29-2015 |
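The late stage reprojection entry above (20150029218) mentions updating a pre-rendered image via a homographic transformation and/or a pixel offset adjustment. A minimal sketch of the pixel-offset form, assuming NumPy; the field of view, resolution, and function names are illustrative assumptions, and a real HMD pipeline would do this in display hardware, not Python:

```python
import numpy as np

# Assumption: for a small yaw change between render pose and display pose,
# shifting the pre-rendered image horizontally approximates re-rendering.
def pixels_per_degree(resolution_px: int, fov_deg: float) -> float:
    """Horizontal angular resolution of the display."""
    return resolution_px / fov_deg

def late_stage_offset(image: np.ndarray, yaw_delta_deg: float,
                      fov_deg: float = 30.0) -> np.ndarray:
    """Shift the pre-rendered image to compensate a small yaw pose update."""
    shift = int(round(yaw_delta_deg * pixels_per_degree(image.shape[1], fov_deg)))
    out = np.zeros_like(image)          # exposed edge pixels are left blank
    if shift > 0:
        out[:, shift:] = image[:, :-shift]
    elif shift < 0:
        out[:, :shift] = image[:, -shift:]
    else:
        out = image.copy()
    return out

frame = np.arange(12, dtype=np.uint8).reshape(3, 4)      # toy 3x4 pre-rendered image
adjusted = late_stage_offset(frame, yaw_delta_deg=7.5)   # 4 px / 30 deg -> 1 px shift
```

Because the shift is cheap relative to a full render, it can run once per display refresh against the latest pose sample, which is how the displayed frame rate can exceed the rendering frame rate.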