Patent application number | Description | Published |
--- | --- | --- |
20110099476 | DECORATING A DISPLAY ENVIRONMENT - Disclosed herein are systems and methods for decorating a display environment. In one embodiment, a user may decorate a display environment by making one or more gestures, using voice commands, using a suitable interface device, and/or combinations thereof. A voice command can be detected for user selection of an artistic feature, such as a color, a texture, an object, or a visual effect, for decorating the display environment. The user can also gesture to select a portion of the display environment for decoration. Next, the selected portion of the display environment can be altered based on the selected artistic feature. The user's motions can be reflected in the display environment by an avatar. In addition, a virtual canvas or three-dimensional object can be displayed in the display environment for decoration by the user. | 04-28-2011 |
20110221755 | BIONIC MOTION - A camera that can sense motion of a user is connected to a computing system (e.g., video game apparatus or other type of computer). The computing system determines an action corresponding to the sensed motion of the user and determines a magnitude of the sensed motion of the user. The computing system creates and displays an animation of an object (e.g., an avatar in a video game) performing the action in a manner that is amplified in comparison to the sensed motion by a factor that is proportional to the determined magnitude. The computing system also creates and outputs audio/visual feedback in proportion to a magnitude of the sensed motion of the user. | 09-15-2011 |
20110223995 | INTERACTING WITH A COMPUTER BASED APPLICATION - A computing system runs an application (e.g., video game) that interacts with one or more actively engaged users. One or more physical properties of a group are sensed. The group may include the one or more actively engaged users and/or one or more entities not actively engaged with the application. The computing system determines that the group (or the one or more entities not actively engaged with the application) has performed a predetermined action. A runtime condition of the application is changed in response to determining that the group (or the one or more entities not actively engaged with the computer based application) has performed the predetermined action. Examples of changing a runtime condition include moving an object, changing a score, or changing an environmental condition of a video game. | 09-15-2011 |
20110299728 | AUTOMATIC DEPTH CAMERA AIMING - Automatic depth camera aiming is provided by a method which includes receiving from the depth camera one or more observed depth images of a scene. The method further includes, if a point of interest of a target is found within the scene, determining if the point of interest is within a far range relative to the depth camera. The method further includes, if the point of interest of the target is within the far range, operating the depth camera with a far logic, or if the point of interest of the target is not within the far range, operating the depth camera with a near logic. | 12-08-2011 |
20110304632 | INTERACTING WITH USER INTERFACE VIA AVATAR - Embodiments are disclosed that relate to interacting with a user interface via feedback provided by an avatar. One embodiment provides a method comprising receiving depth data, locating a person in the depth data, and mapping a physical space in front of the person to a screen space of a display device. The method further comprises forming an image of an avatar representing the person, outputting to a display an image of a user interface comprising an interactive user interface control, and outputting to the display device the image of the avatar such that the avatar faces the user interface control. The method further comprises detecting a motion of the person via the depth data, forming an animated representation of the avatar interacting with the user interface control based upon the motion of the person, and outputting the animated representation of the avatar interacting with the control. | 12-15-2011 |
20110304774 | CONTEXTUAL TAGGING OF RECORDED DATA - Embodiments are disclosed that relate to the automatic tagging of recorded content. For example, one disclosed embodiment provides a computing device comprising a processor and memory having instructions executable by the processor to receive input data comprising one or more of depth data, video data, and directional audio data, identify a content-based input signal in the input data, and apply one or more filters to the input signal to determine whether the input signal comprises a recognized input. Further, if the input signal comprises a recognized input, then the instructions are executable to tag the input data with a contextual tag associated with the recognized input and record the contextual tag with the input data. | 12-15-2011 |
20110314482 | SYSTEM FOR UNIVERSAL MOBILE DATA - A system and method are disclosed for aggregating and organizing a user's cloud data in an encompassing system, and then exposing the sum total of that cloud data to application programs via a common API. Such a system provides rich presence information allowing users to map and unify the totality of their experiences across all of their computing devices, as well as discover other users and their experiences. In this way, users can enhance their knowledge of, and interaction with, their own environment, as well as open up new social experiences with others. | 12-22-2011 |
20120154618 | MODELING AN OBJECT FROM IMAGE DATA - A method for modeling an object from image data comprises identifying, in an image from a video, a set of reference points on the object and, for each reference point identified, observing a displacement of that reference point in response to a motion of the object. The method further comprises grouping together those reference points for which a common translational or rotational motion of the object results in the observed displacement, and fitting the grouped-together reference points to a shape. | 06-21-2012 |
20120155705 | FIRST PERSON SHOOTER CONTROL WITH VIRTUAL SKELETON - A virtual skeleton includes a plurality of joints and provides a machine readable representation of a human target observed with a three dimensional depth camera. A relative position of a hand joint of the virtual skeleton is translated as a gestured aiming vector control, and a virtual weapon is aimed in proportion to the gestured aiming vector control. | 06-21-2012 |
20120157198 | DRIVING SIMULATOR CONTROL WITH VIRTUAL SKELETON - Depth-image analysis is performed with a device that analyzes a human target within an observed scene by capturing depth-images that include depth information from the observed scene. The human target is modeled with a virtual skeleton including a plurality of joints. The virtual skeleton is used as an input for controlling a driving simulation. | 06-21-2012 |
20120157200 | INTELLIGENT GAMEPLAY PHOTO CAPTURE - Implementations for identifying, capturing, and presenting high-quality photo-representations of acts occurring during play of a game that employs motion tracking input technology are disclosed. As one example, a method is disclosed that includes capturing, via an optical interface, a plurality of photographs of a player in a capture volume during play of the electronic game. The method further includes, for each captured photograph of the plurality of captured photographs, comparing an event-based scoring parameter to an event depicted by or corresponding to the captured photograph. The method further includes assigning respective scores to the plurality of captured photographs based, at least in part, on the comparison to the event-based scoring parameter. The method further includes associating the captured photographs at an electronic storage medium with the respective scores assigned to the captured photographs. | 06-21-2012 |
20120165096 | INTERACTING WITH A COMPUTER BASED APPLICATION - A computing system runs an application (e.g., video game) that interacts with one or more actively engaged users. One or more physical properties of a group are sensed. The group may include the one or more actively engaged users and/or one or more entities not actively engaged with the application. The computing system determines that the group (or the one or more entities not actively engaged with the application) has performed a predetermined action. A runtime condition of the application is changed in response to determining that the group (or the one or more entities not actively engaged with the computer based application) has performed the predetermined action. Examples of changing a runtime condition include moving an object, changing a score, or changing an environmental condition of a video game. | 06-28-2012 |
20120302350 | COMMUNICATION BETWEEN AVATARS IN DIFFERENT GAMES - Synchronous and asynchronous communications between avatars are allowed. For synchronous communications, when multiple users are playing different games of the same game title and their avatars are at the same location in their respective games, the avatars can communicate with one another, thus allowing the users of those avatars to communicate with one another. For asynchronous communications, an avatar of a particular user is left behind at a particular location in a game along with a recorded communication. When other users of other games are at that particular location, the avatar of that particular user is displayed and the recorded communication is presented to the other users. | 11-29-2012 |
20120302351 | AVATARS OF FRIENDS AS NON-PLAYER-CHARACTERS - In accordance with one or more aspects, for a particular user, one or more other users associated with that particular user are identified based on a social graph of that particular user. An avatar of at least one of the other users is obtained and included as a non-player-character in a game being played by that particular user. The particular user can provide requests to interact with the avatar of another user (e.g., calling out the name of that user, tapping the avatar on the shoulder, etc.), these requests being invitations for that user to join in a game with the particular user. An indication of such an invitation is presented to the invited user, who can, for example, accept the invitation to join in a game with the particular user. | 11-29-2012 |
20120306853 | ADDING ATTRIBUTES TO VIRTUAL REPRESENTATIONS OF REAL-WORLD OBJECTS - A method, medium, and virtual object for providing a virtual representation with an attribute are described. The virtual representation is generated based on a digitization of a real-world object. Properties of the virtual representation, such as colors, shape similarities, volume, surface area, and the like are identified and an amount or degree of exhibition of those properties by the virtual representation is determined. The properties are employed to identify attributes associated with the virtual representation, such as temperature, weight, or sharpness of an edge, among other attributes of the virtual object. A degree of exhibition of the attributes is also determined based on the properties and their degrees of exhibition. Thereby, the virtual representation is provided with one or more attributes that instruct presentation and interactions of the virtual representation in a virtual world. | 12-06-2012 |
20120309534 | AUTOMATED SENSOR DRIVEN MATCH-MAKING - A method of matching a player of a multi-player game with a remote participant includes recognizing the player, automatically identifying an observer within a threshold proximity to the player, using an identity of the observer to find one or more candidates to play as the remote participant of the multi-player game, and, when selecting the remote participant, choosing a candidate from the one or more candidates above a non-candidate if the candidate satisfies a matching criterion. | 12-06-2012 |
20120309538 | PHYSICAL CHARACTERISTICS BASED USER IDENTIFICATION FOR MATCHMAKING - One or more physical characteristics of each of multiple users are detected. These physical characteristics of a user can include physical attributes of the user (e.g., the user's height, length of the user's legs) and/or physical skills of the user (e.g., how high the user can jump). Based on these detected one or more physical characteristics of the users, two or more of the multiple users to share an online experience (e.g., play a multi-player game) are identified. | 12-06-2012 |
20120311031 | AUTOMATED SENSOR DRIVEN FRIENDING - A method of finding a new social network service friend for a player belonging to a social network service and having a friend group including one or more player-accepted friends includes recognizing the player, automatically identifying an observer within a threshold proximity to the player, and adding the observer to the friend group of the player in the social network service if the observer satisfies a friending criterion of the player. | 12-06-2012 |
20120311032 | EMOTION-BASED USER IDENTIFICATION FOR ONLINE EXPERIENCES - Emotional response data of a particular user, when the particular user is interacting with each of multiple other users, is collected. Using the emotional response data, an emotion of the particular user when interacting with each of multiple other users is determined. Based on the determined emotions, one or more of the multiple other users are identified to share an online experience with the particular user. | 12-06-2012 |
20130007013 | MATCHING USERS OVER A NETWORK - Various embodiments are disclosed that relate to negatively matching users over a network. For example, one disclosed embodiment provides a method including storing a plurality of user profiles corresponding to a plurality of users, each user profile in the plurality of user profiles including one or more user attributes, and receiving a request from a user for a list of one or more suggested negatively matched other users. In response to the request, the method further includes ranking each of a plurality of other users based on a magnitude of a difference between one or more user attributes of the user and corresponding one or more user attributes of the other user, and sending a list of one or more negatively matched users to the exclusion of more positively matched users based on the ranking. | 01-03-2013 |
20130013093 | PHYSICAL CHARACTERISTICS BASED USER IDENTIFICATION FOR MATCHMAKING - One or more physical characteristics of each of multiple users are detected. These physical characteristics of a user can include physical attributes of the user (e.g., the user's height, length of the user's legs) and/or physical skills of the user (e.g., how high the user can jump). Based on these detected one or more physical characteristics of the users, two or more of the multiple users to share an online experience (e.g., play a multi-player game) are identified. | 01-10-2013 |
20130127994 | VIDEO COMPRESSION USING VIRTUAL SKELETON - Optical sensor information captured via one or more optical sensors imaging a scene that includes a human subject is received by a computing device. The optical sensor information is processed by the computing device to model the human subject with a virtual skeleton, and to obtain surface information representing the human subject. The virtual skeleton is transmitted by the computing device to a remote computing device at a higher frame rate than the surface information. Virtual skeleton frames are used by the remote computing device to estimate surface information for frames that have not been transmitted by the computing device. | 05-23-2013 |
20130135180 | SHARED COLLABORATION USING HEAD-MOUNTED DISPLAY - Various embodiments are provided for a shared collaboration system and related methods for enabling an active user to interact with one or more additional users and with collaboration items. In one embodiment a head-mounted display device is operatively connected to a computing device that includes a collaboration engine program. The program receives observation information of a physical space from the head-mounted display device along with a collaboration item. The program visually augments an appearance of the physical space as seen through the head-mounted display device to include an active user collaboration item representation of the collaboration item. The program populates the active user collaboration item representation with additional user collaboration item input from an additional user. | 05-30-2013 |
20130141419 | AUGMENTED REALITY WITH REALISTIC OCCLUSION - A head-mounted display device is configured to visually augment an observed physical space to a user. The head-mounted display device includes a see-through display and is configured to receive augmented display information, such as a virtual object with occlusion relative to a real world object from a perspective of the see-through display. | 06-06-2013 |
20130141434 | VIRTUAL LIGHT IN AUGMENTED REALITY - A head-mounted display system includes a see-through display that is configured to visually augment an appearance of a physical environment to a user viewing the physical environment through the see-through display. Graphical content presented via the see-through display is created by modeling the ambient lighting conditions of the physical environment. | 06-06-2013 |
20130154958 | CONTENT SYSTEM WITH SECONDARY TOUCH CONTROLLER - A controller is provided for a content presentation and interaction system which includes a primary content presentation device. The controller includes a tactile control input and a touch screen control input. The tactile control input is responsive to the inputs of a first user and communicatively coupled to the content presentation device. The controller includes a plurality of tactile input mechanisms and provides a first set of the plurality of control inputs for manipulating content. The controller includes a touch screen control input responsive to the inputs of the first user and communicatively coupled to the content presentation device. The second controller is proximate the first controller and provides a second set of the plurality of control inputs. The second set of control inputs includes alternative inputs for at least some of the controls and additional inputs not available using the tactile input mechanisms. | 06-20-2013 |
20130194164 | EXECUTABLE VIRTUAL OBJECTS ASSOCIATED WITH REAL OBJECTS - Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to a portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, then determining an intent of the user to interact with the executable virtual object, and, if the intent to interact is determined, then interacting with the executable virtual object. | 08-01-2013 |
20130194304 | COORDINATE-SYSTEM SHARING FOR AUGMENTED REALITY - A method for presenting real and virtual images correctly positioned with respect to each other. The method includes, in a first field of view, receiving a first real image of an object and displaying a first virtual image. The method also includes, in a second field of view oriented independently relative to the first field of view, receiving a second real image of the object and displaying a second virtual image, the first and second virtual images positioned coincidently within a coordinate system. | 08-01-2013 |
20130196757 | MULTIPLAYER GAMING WITH HEAD-MOUNTED DISPLAY - A system and related methods for inviting a potential player to participate in a multiplayer game via a user head-mounted display device are provided. In one example, a potential player invitation program receives user voice data and determines that the user voice data is an invitation to participate in a multiplayer game. The program receives eye-tracking information, depth information, facial recognition information, potential player head-mounted display device information, and/or potential player voice data. The program associates the invitation with the potential player using the eye-tracking information, the depth information, the facial recognition information, the potential player head-mounted display device information, and/or the potential player voice data. The program matches a potential player account with the potential player. The program receives an acceptance response from the potential player, and joins the potential player account with a user account in participating in the multiplayer game. | 08-01-2013 |
20130196772 | MATCHING PHYSICAL LOCATIONS FOR SHARED VIRTUAL EXPERIENCE - Embodiments for matching participants in a virtual multiplayer entertainment experience are provided. For example, one embodiment provides a method including receiving from each user of a plurality of users a request to join the virtual multiplayer entertainment experience, receiving from each user of the plurality of users information regarding characteristics of a physical space in which each user is located, and matching two or more users of the plurality of users for participation in the virtual multiplayer entertainment experience based on the characteristics of the physical space of each of the two or more users. | 08-01-2013 |
20130201276 | INTEGRATED INTERACTIVE SPACE - Techniques for implementing an integrated interactive space are described. In implementations, video cameras that are positioned to capture video at different locations are synchronized such that aspects of the different locations can be used to generate an integrated interactive space. The integrated interactive space can enable users at the different locations to interact, such as via video interaction, audio interaction, and so on. In at least some embodiments, techniques can be implemented to adjust an image of a participant during a video session such that the participant appears to maintain eye contact with other video session participants at other locations. Techniques can also be implemented to provide a virtual shared space that can enable users to interact with the space, and can also enable users to interact with one another and/or objects that are displayed in the virtual shared space. | 08-08-2013 |
20130215454 | THREE-DIMENSIONAL PRINTING - Three-dimensional printing techniques are described. In one or more implementations, a system includes a three-dimensional printer and a computing device. The three-dimensional printer has a three-dimensional printing mechanism that is configured to form a physical object in three dimensions. The computing device is communicatively coupled to the three-dimensional printer and includes a three-dimensional printing module implemented at least partially in hardware to cause the three-dimensional printer to form the physical object in three dimensions as having functionality configured to communicate with a computing device. | 08-22-2013 |
20130286223 | PROXIMITY AND CONNECTION BASED PHOTO SHARING - Photos are shared among devices that are in close proximity to one another and for which there is a connection among the devices. The photos can be shared automatically, or alternatively based on various user inputs. Various different controls can also be placed on sharing photos to restrict the other devices with which photos can be shared, the manner in which photos can be shared, and/or how the photos are shared. | 10-31-2013 |
20130335435 | COLOR VISION DEFICIT CORRECTION - Embodiments related to improving a color-resolving ability of a user of a see-thru display device are disclosed. For example, one disclosed embodiment includes, on a see-thru display device, constructing and displaying virtual imagery to superpose onto real imagery sighted by the user through the see-thru display device. The virtual imagery is configured to accentuate a locus of the real imagery of a color poorly distinguishable by the user. Such virtual imagery is then displayed by superposing it onto the real imagery, in registry with the real imagery, in a field of view of the user. | 12-19-2013 |
20130335594 | ENHANCING CAPTURED DATA - Captured data is obtained, including various types of captured or recorded data (e.g., image data, audio data, video data, etc.) and/or metadata describing various aspects of the capture device and/or the manner in which the data is captured. One or more elements of the captured data that can be replaced by one or more substitute elements are determined, the replaceable elements are removed from the captured data, and links to the substitute elements are associated with the captured data. Links to additional elements to enhance the captured data are also associated with the captured data. Enhanced content can subsequently be constructed based on the captured data as well as the links to the substitute elements and additional elements. | 12-19-2013 |
20140049558 | AUGMENTED REALITY OVERLAY FOR CONTROL DEVICES - Embodiments for providing instructional information for control devices are disclosed. In one example, a method on a see-through display device comprising a see-through display and an outward-facing image sensor includes acquiring an image of a scene viewable through the see-through display and detecting a control device in the scene. The method also includes retrieving information pertaining to a function of an interactive element of the control device and displaying an image on the see-through display augmenting an appearance of the interactive element of the control device with image data related to the function of the interactive element. | 02-20-2014 |
20140125574 | USER AUTHENTICATION ON DISPLAY DEVICE - Embodiments are disclosed that relate to authenticating a user of a display device. For example, one disclosed embodiment includes displaying one or more virtual images on the display device, wherein the one or more virtual images include a set of augmented reality features. The method further includes identifying one or more movements of the user via data received from a sensor of the display device, and comparing the identified movements of the user to a predefined set of authentication information for the user that links user authentication to a predefined order of the augmented reality features. If the identified movements indicate that the user selected the augmented reality features in the predefined order, then the user is authenticated, and if the identified movements indicate that the user did not select the augmented reality features in the predefined order, then the user is not authenticated. | 05-08-2014 |
20140128161 | CROSS-PLATFORM AUGMENTED REALITY EXPERIENCE - A plurality of game sessions are hosted at a server system. A first computing device of a first user is joined to a first multiplayer gaming session, the first computing device including a see-through display. Augmentation information is sent to the first computing device for the first multiplayer gaming session to provide an augmented reality experience to the first user. A second computing device of a second user is joined to the first multiplayer gaming session. Experience information is sent to the second computing device for the first multiplayer gaming session to provide a cross-platform representation of the augmented reality experience to the second user. | 05-08-2014 |
20140267311 | INTERACTING WITH USER INTERFACE VIA AVATAR - Embodiments are disclosed that relate to interacting with a user interface via feedback provided by an avatar. One embodiment provides a method comprising receiving depth data, locating a person in the depth data, and mapping a physical space in front of the person to a screen space of a display device. The method further comprises forming an image of an avatar representing the person, outputting to a display an image of a user interface comprising an interactive user interface control, and outputting to the display device the image of the avatar such that the avatar faces the user interface control. The method further comprises detecting a motion of the person via the depth data, forming an animated representation of the avatar interacting with the user interface control based upon the motion of the person, and outputting the animated representation of the avatar interacting with the control. | 09-18-2014 |
20140320389 | MIXED REALITY INTERACTIONS - Embodiments that relate to interacting with a physical object in a mixed reality environment via a head-mounted display are disclosed. In one embodiment a mixed reality interaction program identifies an object based on an image captured by the display. An interaction context for the object is determined based on an aspect of the mixed reality environment. A profile for the physical object is queried to determine interaction modes for the object. A selected interaction mode is programmatically selected based on the interaction context. A user input directed at the object is received via the display and interpreted to correspond to a virtual action based on the selected interaction mode. The virtual action is executed with respect to a virtual object associated with the physical object to modify an appearance of the virtual object. The modified virtual object is then displayed via the display. | 10-30-2014 |
20150035832 | VIRTUAL LIGHT IN AUGMENTED REALITY - A head-mounted display system includes a see-through display that is configured to visually augment an appearance of a physical environment to a user viewing the physical environment through the see-through display. Graphical content presented via the see-through display is created by modeling the ambient lighting conditions of the physical environment. | 02-05-2015 |
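
Several of the listings above (e.g., 20120155705, FIRST PERSON SHOOTER CONTROL WITH VIRTUAL SKELETON) describe translating the relative position of a hand joint of a virtual skeleton into a gestured aiming vector. The following is a minimal Python sketch of that idea; the `Joint` class, the shoulder-to-hand offset, and the coordinate conventions are illustrative assumptions, not details taken from the application.

```python
import numpy as np
from dataclasses import dataclass


@dataclass
class Joint:
    """A single skeletal joint with a 3-D position in camera space (assumed layout)."""
    name: str
    position: np.ndarray  # shape (3,), metres


def aiming_vector(shoulder: Joint, hand: Joint) -> np.ndarray:
    """Translate the hand position relative to the shoulder into a unit aiming vector."""
    offset = hand.position - shoulder.position
    norm = np.linalg.norm(offset)
    if norm == 0.0:
        raise ValueError("hand and shoulder coincide; aiming vector undefined")
    return offset / norm


# Example: a hand held forward of and slightly above the shoulder.
shoulder = Joint("shoulder_right", np.array([0.2, 1.4, 2.0]))
hand = Joint("hand_right", np.array([0.3, 1.5, 1.4]))
print(aiming_vector(shoulder, hand))  # unit vector from shoulder toward hand
```

A game would then aim a virtual weapon in proportion to this vector, as the abstract describes; smoothing and dead zones would likely be needed in practice.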
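
Application 20130007013 (MATCHING USERS OVER A NETWORK) ranks other users by the magnitude of the difference between corresponding user attributes and returns negatively matched users ahead of more positively matched ones. A hedged sketch of one way such a ranking could work, assuming numeric attribute profiles and a sum-of-absolute-differences distance (both are assumptions, not the application's stated method):

```python
from typing import Dict, List, Tuple

Profile = Dict[str, float]  # attribute name -> numeric value (assumed representation)


def attribute_distance(a: Profile, b: Profile) -> float:
    """Sum of absolute differences over the attributes the two profiles share."""
    shared = a.keys() & b.keys()
    return sum(abs(a[k] - b[k]) for k in shared)


def negatively_matched(user: Profile, others: Dict[str, Profile], count: int) -> List[Tuple[str, float]]:
    """Rank other users by descending attribute distance and return the top `count`."""
    ranked = sorted(
        ((name, attribute_distance(user, profile)) for name, profile in others.items()),
        key=lambda pair: pair[1],
        reverse=True,  # largest difference first, i.e. most negatively matched
    )
    return ranked[:count]


# Example usage with made-up profiles.
me = {"skill": 0.9, "pace": 0.2}
others = {"alice": {"skill": 0.8, "pace": 0.3}, "bob": {"skill": 0.1, "pace": 0.9}}
print(negatively_matched(me, others, 1))  # bob differs most, so he is suggested first
```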
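
Application 20120157200 (INTELLIGENT GAMEPLAY PHOTO CAPTURE) scores captured photographs by comparing an event-based scoring parameter to the event each photograph depicts. The sketch below shows one plausible form such scoring could take; the event names, weights, and data layout are invented for illustration and are not drawn from the application.

```python
from dataclasses import dataclass

# Hypothetical weights for events that tend to make interesting photos.
EVENT_WEIGHTS = {"goal_scored": 1.0, "high_jump": 0.7, "idle": 0.1}


@dataclass
class CapturedPhoto:
    path: str
    event: str  # event depicted by or corresponding to the photo


def score_photo(photo: CapturedPhoto) -> float:
    """Assign a score from the event-based weight for the depicted event (0.0 if unknown)."""
    return EVENT_WEIGHTS.get(photo.event, 0.0)


def best_photos(photos: list[CapturedPhoto], count: int) -> list[CapturedPhoto]:
    """Return the highest-scoring photos, e.g. for presentation after the game."""
    return sorted(photos, key=score_photo, reverse=True)[:count]
```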