Patent application number | Description | Published |
20100199229 | MAPPING A NATURAL INPUT DEVICE TO A LEGACY SYSTEM - Systems and methods for mapping natural input devices to legacy system inputs are disclosed. One example system may include a computing device having an algorithmic preprocessing module configured to receive input data containing a natural user input and to identify the natural user input in the input data. The computing device may further include a gesture module coupled to the algorithmic preprocessing module, the gesture module being configured to associate the natural user input to a gesture in a gesture library. The computing device may also include a mapping module to map the gesture to a legacy controller input, and to send the legacy controller input to a legacy system in response to the natural user input. | 08-05-2010 |
20110035666 | SHOW BODY POSITION - A capture device may capture a user's motion and a display device may display a model that maps to the user's motion, including gestures that are applicable for control. A user may be unfamiliar with a system that maps the user's motions or not know what gestures are applicable for an executing application. A user may not understand or know how to perform gestures that are applicable for the executing application. Providing visual feedback representing instructional gesture data to the user can teach the user how to properly gesture. The visual feedback may be provided in any number of suitable ways. For example, visual feedback may be provided via ghosted images, player avatars, or skeletal representations. The system can process prerecorded or live content for displaying visual feedback representing instructional gesture data. The feedback can portray the deltas between the user's actual position and the ideal gesture position. | 02-10-2011 |
20110221755 | BIONIC MOTION - A camera that can sense motion of a user is connected to a computing system (e.g., video game apparatus or other type of computer). The computing system determines an action corresponding to the sensed motion of the user and determines a magnitude of the sensed motion of the user. The computing system creates and displays an animation of an object (e.g., an avatar in a video game) performing the action in a manner that is amplified in comparison to the sensed motion by a factor that is proportional to the determined magnitude. The computing system also creates and outputs audio/visual feedback in proportion to a magnitude of the sensed motion of the user. | 09-15-2011 |
20110223995 | INTERACTING WITH A COMPUTER BASED APPLICATION - A computing system runs an application (e.g., video game) that interacts with one or more actively engaged users. One or more physical properties of a group are sensed. The group may include the one or more actively engaged users and/or one or more entities not actively engaged with the application. The computing system will determine that the group (or the one or more entities not actively engaged with the application) have performed a predetermined action. A runtime condition of the application is changed in response to determining that the group (or the one or more entities not actively engaged with the computer based application) have performed the predetermined action. Examples of changing a runtime condition include moving an object, changing a score or changing an environmental condition of a video game. | 09-15-2011 |
20110246329 | MOTION-BASED INTERACTIVE SHOPPING ENVIRONMENT - An on-screen shopping application which reacts to a human target user's motions to provide a shopping experience to the user is provided. A tracking system captures user motions and executes a shopping application allowing a user to manipulate an on-screen representation of the user. The on-screen representation has a likeness of the user or another individual and movements of the user in the on-screen interface allow the user to interact with virtual articles that represent real-world articles. User movements which are recognized as article manipulation or transaction control gestures are translated into commands for the shopping application. | 10-06-2011 |
20110304632 | INTERACTING WITH USER INTERFACE VIA AVATAR - Embodiments are disclosed that relate to interacting with a user interface via feedback provided by an avatar. One embodiment provides a method comprising receiving depth data, locating a person in the depth data, and mapping a physical space in front of the person to a screen space of a display device. The method further comprises forming an image of an avatar representing the person, outputting to a display an image of a user interface comprising an interactive user interface control, and outputting to the display device the image of the avatar such that the avatar faces the user interface control. The method further comprises detecting a motion of the person via the depth data, forming an animated representation of the avatar interacting with the user interface control based upon the motion of the person, and outputting the animated representation of the avatar interacting with the control. | 12-15-2011 |
20110304774 | CONTEXTUAL TAGGING OF RECORDED DATA - Embodiments are disclosed that relate to the automatic tagging of recorded content. For example, one disclosed embodiment provides a computing device comprising a processor and memory having instructions executable by the processor to receive input data comprising one or more of a depth data, video data, and directional audio data, identify a content-based input signal in the input data, and apply one or more filters to the input signal to determine whether the input signal comprises a recognized input. Further, if the input signal comprises a recognized input, then the instructions are executable to tag the input data with the contextual tag associated with the recognized input and record the contextual tag with the input data. | 12-15-2011 |
20120155705 | FIRST PERSON SHOOTER CONTROL WITH VIRTUAL SKELETON - A virtual skeleton includes a plurality of joints and provides a machine readable representation of a human target observed with a three dimensional depth camera. A relative position of a hand joint of the virtual skeleton is translated as a gestured aiming vector control, and a virtual weapon is aimed in proportion to the gestured aiming vector control. | 06-21-2012 |
20120157198 | DRIVING SIMULATOR CONTROL WITH VIRTUAL SKELETON - Depth-image analysis is performed with a device that analyzes a human target within an observed scene by capturing depth-images that include depth information from the observed scene. The human target is modeled with a virtual skeleton including a plurality of joints. The virtual skeleton is used as an input for controlling a driving simulation. | 06-21-2012 |
20120165096 | INTERACTING WITH A COMPUTER BASED APPLICATION - A computing system runs an application (e.g., video game) that interacts with one or more actively engaged users. One or more physical properties of a group are sensed. The group may include the one or more actively engaged users and/or one or more entities not actively engaged with the application. The computing system will determine that the group (or the one or more entities not actively engaged with the application) have performed a predetermined action. A runtime condition of the application is changed in response to determining that the group (or the one or more entities not actively engaged with the computer based application) have performed the predetermined action. Examples of changing a runtime condition include moving an object, changing a score or changing an environmental condition of a video game. | 06-28-2012 |
20130002813 | VIEWING WINDOWS FOR VIDEO STREAMS - Techniques are provided for viewing windows for video streams. A video stream from a video capture device is accessed. Data that describes movement or position of a person is accessed. A viewing window is placed in the video stream based on the data that describes movement or position of the person. The viewing window is provided to a display device in accordance with the placement of the viewing window in the video stream. Motion sensors can detect motion of the person carrying the video capture device in order to dampen the motion such that the video on the remote display does not suffer from motion artifacts. Sensors can also track the eye gaze of either the person carrying the mobile video capture device or the remote display device to enable control of the spatial region of the video stream shown at the display device. | 01-03-2013 |
20130016033 | PROVIDING ELECTRONIC COMMUNICATIONS IN A PHYSICAL WORLD (Inventors: Stephen G. Latta, Seattle, WA; Sheridan Martin Small, Seattle, WA; James C. Liu, Bellevue, WA; Benjamin I. Vaught, Seattle, WA; Darren Bennett, Seattle, WA) - Techniques are provided for displaying electronic communications using a head mounted display (HMD). Each electronic communication may be displayed to represent a physical object that identifies it as a specific type or nature of electronic communication. Therefore, the user is able to process the electronic communications more efficiently. In some aspects, computer vision allows a user to interact with the representation of the physical objects. One embodiment includes accessing electronic communications, and determining physical objects that are representative of at least a subset of the electronic communications. A head mounted display (HMD) is instructed how to display a representation of the physical objects in this embodiment. | 01-17-2013 |
20130042296 | PHYSICAL INTERACTION WITH VIRTUAL OBJECTS FOR DRM - Technology is provided for transferring a right to a digital content item based on one or more physical actions detected in data captured by a see-through, augmented reality display device system. A digital content item may be represented by a three-dimensional (3D) virtual object displayed by the device system. A user can hold the virtual object in some examples, and transfer a right to the content item the object represents by handing the object to another user within a defined distance, who indicates acceptance of the right based upon one or more physical actions including taking hold of the transferred object. Other examples of physical actions performed by a body part of a user may also indicate offer and acceptance in the right transfer. Content may be transferred from display device to display device while rights data is communicated via a network with a service application executing remotely. | 02-14-2013 |
20130044130 | PROVIDING CONTEXTUAL PERSONAL INFORMATION BY A MIXED REALITY DEVICE - The technology provides contextual personal information by a mixed reality display device system being worn by a user. A user inputs person selection criteria, and the display system sends a request for data identifying at least one person in a location of the user who satisfies the person selection criteria to a cloud based application with access to user profile data for multiple users. Upon receiving data identifying the at least one person, the display system outputs data identifying the person if he or she is within the field of view. An identifier and a position indicator of the person in the location are output if not. Directional sensors on the display device may also be used for determining a position of the person. Cloud based executing software can identify and track the positions of people based on image and non-image data from display devices in the location. | 02-21-2013 |
20130083008 | ENRICHED EXPERIENCE USING PERSONAL A/V SYSTEM - A system for generating an augmented reality environment in association with one or more attractions or exhibits is described. In some cases, a see-through head-mounted display device (HMD) may acquire one or more virtual objects from a supplemental information provider associated with a particular attraction. The one or more virtual objects may be based on whether an end user of the HMD is waiting in line for the particular attraction or is on (or in) the particular attraction. The supplemental information provider may vary the one or more virtual objects based on the end user's previous experiences with the particular attraction. The HMD may adapt the one or more virtual objects based on physiological feedback from the end user (e.g., if a child is scared). The supplemental information provider may also provide and automatically update a task list associated with the particular attraction. | 04-04-2013 |
20130083062 | PERSONAL A/V SYSTEM WITH CONTEXT RELEVANT INFORMATION - A system for generating an augmented reality environment in association with one or more attractions or exhibits is described. In some cases, a see-through head-mounted display device (HMD) may acquire one or more virtual objects from a supplemental information provider associated with a particular attraction. The one or more virtual objects may be based on whether an end user of the HMD is waiting in line for the particular attraction or is on (or in) the particular attraction. The supplemental information provider may vary the one or more virtual objects based on the end user's previous experiences with the particular attraction. The HMD may adapt the one or more virtual objects based on physiological feedback from the end user (e.g., if a child is scared). The supplemental information provider may also provide and automatically update a task list associated with the particular attraction. | 04-04-2013 |
20130083173 | VIRTUAL SPECTATOR EXPERIENCE WITH A PERSONAL AUDIO/VISUAL APPARATUS - Technology is described for providing a virtual spectator experience for a user of a personal A/V apparatus including a near-eye, augmented reality (AR) display. A position volume of an event object participating in an event in a first 3D coordinate system for a first location is received and mapped to a second position volume in a second 3D coordinate system at a second location remote from where the event is occurring. A display field of view of the near-eye AR display at the second location is determined, and real-time 3D virtual data representing the one or more event objects which are positioned within the display field of view are displayed in the near-eye AR display. A user may select a viewing position from which to view the event. Additionally, virtual data of a second user may be displayed at a position relative to a first user. | 04-04-2013 |
20130093788 | USER CONTROLLED REAL OBJECT DISAPPEARANCE IN A MIXED REALITY DISPLAY - The technology causes disappearance of a real object in a field of view of a see-through, mixed reality display device system based on user disappearance criteria. Image data is tracked to the real object in the field of view of the see-through display for implementing an alteration technique on the real object causing its disappearance from the display. A real object may satisfy user disappearance criteria by being associated with subject matter that the user does not wish to see or by not satisfying relevance criteria for a current subject matter of interest to the user. In some embodiments, based on a 3D model of a location of the display device system, an alteration technique may be selected for a real object based on a visibility level associated with the position within the location. Image data for alteration may be prefetched based on a location of the display device system. | 04-18-2013 |
20130135180 | SHARED COLLABORATION USING HEAD-MOUNTED DISPLAY - Various embodiments are provided for a shared collaboration system and related methods for enabling an active user to interact with one or more additional users and with collaboration items. In one embodiment a head-mounted display device is operatively connected to a computing device that includes a collaboration engine program. The program receives observation information of a physical space from the head-mounted display device along with a collaboration item. The program visually augments an appearance of the physical space as seen through the head-mounted display device to include an active user collaboration item representation of the collaboration item. The program populates the active user collaboration item representation with additional user collaboration item input from an additional user. | 05-30-2013 |
20130141421 | AUGMENTED REALITY VIRTUAL MONITOR - A head-mounted display includes a see-through display and a virtual reality engine. The see-through display is configured to visually augment an appearance of a physical space to a user viewing the physical space through the see-through display. The virtual reality engine is configured to cause the see-through display to visually present a virtual monitor that appears to be integrated with the physical space to a user viewing the physical space through the see-through display. | 06-06-2013 |
20130141434 | VIRTUAL LIGHT IN AUGMENTED REALITY - A head-mounted display system includes a see-through display that is configured to visually augment an appearance of a physical environment to a user viewing the physical environment through the see-through display. Graphical content presented via the see-through display is created by modeling the ambient lighting conditions of the physical environment. | 06-06-2013 |
20130194164 | EXECUTABLE VIRTUAL OBJECTS ASSOCIATED WITH REAL OBJECTS - Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to the portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, then determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, then interacting with the executable object. | 08-01-2013 |
20130194259 | VIRTUAL ENVIRONMENT GENERATING SYSTEM - A system and related methods for visually augmenting an appearance of a physical environment as seen by a user through a head-mounted display device are provided. In one embodiment, a virtual environment generating program receives eye-tracking information, lighting information, and depth information from the head-mounted display. The program generates a virtual environment that models the physical environment and is based on the lighting information and the distance of a real-world object from the head-mounted display. The program visually augments a virtual object representation in the virtual environment based on the eye-tracking information, and renders the virtual object representation on a transparent display of the head-mounted display device. | 08-01-2013 |
20130194304 | COORDINATE-SYSTEM SHARING FOR AUGMENTED REALITY - A method for presenting real and virtual images correctly positioned with respect to each other. The method includes, in a first field of view, receiving a first real image of an object and displaying a first virtual image. The method also includes, in a second field of view oriented independently relative to the first field of view, receiving a second real image of the object and displaying a second virtual image, the first and second virtual images positioned coincidently within a coordinate system. | 08-01-2013 |
20130196772 | MATCHING PHYSICAL LOCATIONS FOR SHARED VIRTUAL EXPERIENCE - Embodiments for matching participants in a virtual multiplayer entertainment experience are provided. For example, one embodiment provides a method including receiving from each user of a plurality of users a request to join the virtual multiplayer entertainment experience, receiving from each user of the plurality of users information regarding characteristics of a physical space in which each user is located, and matching two or more users of the plurality of users for participation in the virtual multiplayer entertainment experience based on the characteristics of the physical space of each of the two or more users. | 08-01-2013 |
20130328762 | CONTROLLING A VIRTUAL OBJECT WITH A REAL CONTROLLER DEVICE - Technology is described for controlling a virtual object displayed by a near-eye, augmented reality display with a real controller device. User input data is received from a real controller device requesting an action to be performed by the virtual object. A user perspective of the virtual object being displayed by the near-eye, augmented reality display is determined. The user input data requesting the action to be performed by the virtual object is applied based on the user perspective, and the action is displayed from the user perspective. The virtual object to be controlled by the real controller device may be identified based on user input data which may be from a natural user interface (NUI). A user selected force feedback object may also be identified, and the identification may also be based on NUI input data. | 12-12-2013 |
20130328927 | AUGMENTED REALITY PLAYSPACES WITH ADAPTIVE GAME RULES - A system for generating a virtual gaming environment based on features identified within a real-world environment, and adapting the virtual gaming environment over time as the features identified within the real-world environment change is described. Utilizing the technology described, a person wearing a head-mounted display device (HMD) may walk around a real-world environment and play a virtual game that is adapted to that real-world environment. For example, the HMD may identify environmental features within a real-world environment such as five grassy areas and two cars, and then spawn virtual monsters based on the location and type of the environmental features identified. The location and type of the environmental features identified may vary depending on the particular real-world environment in which the HMD exists and therefore each virtual game may look different depending on the particular real-world environment. | 12-12-2013 |
20140002444 | CONFIGURING AN INTERACTION ZONE WITHIN AN AUGMENTED REALITY ENVIRONMENT | 01-02-2014 |
20140267311 | INTERACTING WITH USER INTERFACE VIA AVATAR - Embodiments are disclosed that relate to interacting with a user interface via feedback provided by an avatar. One embodiment provides a method comprising receiving depth data, locating a person in the depth data, and mapping a physical space in front of the person to a screen space of a display device. The method further comprises forming an image of an avatar representing the person, outputting to a display an image of a user interface comprising an interactive user interface control, and outputting to the display device the image of the avatar such that the avatar faces the user interface control. The method further comprises detecting a motion of the person via the depth data, forming an animated representation of the avatar interacting with the user interface control based upon the motion of the person, and outputting the animated representation of the avatar interacting with the control. | 09-18-2014 |
20150035832 | VIRTUAL LIGHT IN AUGMENTED REALITY - A head-mounted display system includes a see-through display that is configured to visually augment an appearance of a physical environment to a user viewing the physical environment through the see-through display. Graphical content presented via the see-through display is created by modeling the ambient lighting conditions of the physical environment. | 02-05-2015 |
20150130689 | EXECUTABLE VIRTUAL OBJECTS ASSOCIATED WITH REAL OBJECTS - Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to the portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, then determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, then interacting with the executable object. | 05-14-2015 |
20150254793 | INTERACTION WITH VIRTUAL OBJECTS CAUSING CHANGE OF LEGAL STATUS - Technology is provided for transferring a right to a digital content item based on one or more physical actions detected in data captured by a see-through, augmented reality display device system. A digital content item may be represented by a three-dimensional (3D) virtual object displayed by the device system. A user can hold the virtual object in some examples, and transfer a right to the content item the object represents by handing the object to another user within a defined distance, who indicates acceptance of the right based upon one or more physical actions including taking hold of the transferred object. Other examples of physical actions performed by a body part of a user may also indicate offer and acceptance in the right transfer. Content may be transferred from display device to display device while rights data is communicated via a network with a service application executing remotely. | 09-10-2015 |
20150312561 | VIRTUAL 3D MONITOR - A right near-eye display displays a right-eye virtual object, and a left near-eye display displays a left-eye virtual object. A first texture derived from a first image of a scene as viewed from a first perspective is overlaid on the right-eye virtual object and a second texture derived from a second image of the scene as viewed from a second perspective is overlaid on the left-eye virtual object. The right-eye virtual object and the left-eye virtual object cooperatively create an appearance of a pseudo 3D video perceivable by a user viewing the right and left near-eye displays. | 10-29-2015 |
20160077785 | EXECUTABLE VIRTUAL OBJECTS ASSOCIATED WITH REAL OBJECTS - Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to the portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, then determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, then interacting with the executable object. | 03-17-2016 |
20160086382 | PROVIDING LOCATION OCCUPANCY ANALYSIS VIA A MIXED REALITY DEVICE - The technology provides contextual personal information by a mixed reality display device system being worn by a user. A user inputs person selection criteria, and the display system sends a request for data identifying at least one person in a location of the user who satisfies the person selection criteria to a cloud based application with access to user profile data for multiple users. Upon receiving data identifying the at least one person, the display system outputs data identifying the person if he or she is within the field of view. An identifier and a position indicator of the person in the location are output if not. Directional sensors on the display device may also be used for determining a position of the person. Cloud based executing software can identify and track the positions of people based on image and non-image data from display devices in the location. | 03-24-2016 |
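Several applications in the table above (notably 20100199229) describe the same three-stage pipeline: identify a natural user input, match it against a gesture library, and map the matched gesture to a legacy controller input. A minimal sketch of that flow follows; every name, label, and mapping here is invented for illustration and is not taken from the patent text.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Gesture:
    """A recognized gesture from the gesture library (frozen so it is hashable)."""
    name: str


# Toy gesture library: maps a preprocessed natural-input label to a gesture.
GESTURE_LIBRARY = {
    "right_arm_swipe": Gesture("swipe_right"),
    "both_arms_raise": Gesture("raise"),
}

# Mapping module: associates each gesture with a legacy controller input.
LEGACY_MAP = {
    Gesture("swipe_right"): "DPAD_RIGHT",
    Gesture("raise"): "BUTTON_A",
}


def map_natural_input(input_label):
    """Identify the natural input, look up its gesture, emit a legacy input."""
    gesture = GESTURE_LIBRARY.get(input_label)
    if gesture is None:
        return None                     # unrecognized natural input
    return LEGACY_MAP.get(gesture)      # legacy controller input to send


print(map_natural_input("right_arm_swipe"))  # DPAD_RIGHT
```

The two-dictionary split mirrors the abstract's separation of the gesture module (input to gesture) from the mapping module (gesture to legacy input), so either table can be swapped independently.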
Patent application number | Description | Published |
20110175801 | Directed Performance In Motion Capture System - Techniques for enhancing the use of a motion capture system are provided. A motion capture system tracks movement and audio inputs from a person in a physical space, and provides the inputs to an application, which displays a virtual space on a display. Bodily movements can be used to define traits of an avatar in the virtual space. The person can be directed to perform the movements by a coaching avatar, or visual or audio cues in the virtual space. The application can respond to the detected movements and voice commands or voice volume of the person to define avatar traits and initiate pre-scripted audio-visual events in the virtual space to provide an entertaining experience. A performance in the virtual space can be captured and played back with automatic modifications, such as alterations to the avatar's voice or appearance, or modifications made by another person. | 07-21-2011 |
20110175809 | Tracking Groups Of Users In Motion Capture System - In a motion capture system, a unitary input is provided to an application based on detected movement and/or location of a group of people. Audio information from the group can also be used as an input. The application can provide real-time feedback to the person or group via a display and audio output. The group can control the movement of an avatar in a virtual space based on the movement of each person in the group, such as in a steering or balancing game. To avoid a discontinuous or confusing output by the application, missing data can be generated for a person who is occluded or partially out of the field of view. A wait time can be set for activating a new person and deactivating a currently-active person. The wait time can be adaptive based on a first detected position or a last detected position of the person. | 07-21-2011 |
20110175810 | Recognizing User Intent In Motion Capture System - Techniques for facilitating interaction with an application in a motion capture system allow a person to easily begin interacting without manual setup. A depth camera system tracks a person in physical space and evaluates the person's intent to engage with the application. Factors such as location, stance, movement and voice data can be evaluated. Absolute location in a field of view of the depth camera, and location relative to another person, can be evaluated. Stance can include facing a depth camera, indicating a willingness to interact. Movements can include moving toward or away from a central area in the physical space, walking through the field of view, and movements which occur while standing generally in one location, such as moving one's arms around, gesturing, or shifting weight from one foot to another. Voice data can include volume as well as words which are detected by speech recognition. | 07-21-2011 |
20120047468 | Translating User Motion Into Multiple Object Responses - A system for translating user motion into multiple object responses of an on-screen object based on user interaction of an application executing on a computing device is provided. User motion data is received from a capture device from one or more users. The user motion data corresponds to user interaction with an on-screen object presented in the application. The on-screen object corresponds to an object other than an on-screen representation of a user that is displayed by the computing device. The user motion data is automatically translated into multiple object responses of the on-screen object. The multiple object responses of the on-screen object are simultaneously displayed to the users. | 02-23-2012 |
20120326976 | Directed Performance In Motion Capture System - Techniques for enhancing the use of a motion capture system are provided. A motion capture system tracks movement and audio inputs from a person in a physical space, and provides the inputs to an application, which displays a virtual space on a display. Bodily movements can be used to define traits of an avatar in the virtual space. The person can be directed to perform the movements by a coaching avatar, or visual or audio cues in the virtual space. The application can respond to the detected movements and voice commands or voice volume of the person to define avatar traits and initiate pre-scripted audio-visual events in the virtual space to provide an entertaining experience. A performance in the virtual space can be captured and played back with automatic modifications, such as alterations to the avatar's voice or appearance, or modifications made by another person. | 12-27-2012 |
20130074002 | Recognizing User Intent In Motion Capture System - Techniques for facilitating interaction with an application in a motion capture system allow a person to easily begin interacting without manual setup. A depth camera system tracks a person in physical space and determines a probabilistic measure of the person's intent to engage or disengage with the application based on location, stance and movement. Absolute location in a field of view of the depth camera, and location relative to another person, can be evaluated. Stance can include facing a depth camera, indicating a willingness to interact. Movements can include moving toward or away from a central area in the physical space, walking through the field of view, and movements which occur while standing generally in one location, such as moving one's arms around, gesturing, or shifting weight from one foot to another. | 03-21-2013 |
20130084970 | Sharing Games Using Personal Audio/Visual Apparatus - A game can be created, shared and played using a personal audio/visual apparatus such as a head-mounted display device (HMDD). Rules of the game, and a configuration of the game space, can be standard or custom. Boundary points of the game can be defined by a gaze direction of the HMDD, by the user's location, by a model of a physical game space such as an instrumented court or by a template. Players can be identified and notified of the availability of a game using a server push technology. For example, a user in a particular location may be notified of the availability of a game at that location. A server manages the game, including storing the rules, boundaries and a game state. The game state can identify players and their scores. Real world objects can be imaged and provided as virtual objects in the game space. | 04-04-2013 |
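The intent-recognition abstracts above describe combining location, stance, and movement into a probabilistic measure of a person's intent to engage. A minimal sketch of that idea follows; the factor names and weights are illustrative assumptions, not details from the filings:

```python
# Hypothetical sketch: combine tracked factors into an engagement score.
# Weights and factor choices are assumptions for illustration only.

def engagement_score(facing_camera: bool,
                     distance_from_center: float,
                     movement_toward_center: float) -> float:
    """Return a value in [0, 1] estimating intent to engage."""
    score = 0.0
    if facing_camera:           # stance: facing the depth camera suggests willingness
        score += 0.5
    # being closer to the central area raises the score (clamped contribution)
    score += 0.3 * max(0.0, 1.0 - distance_from_center)
    # moving toward the central area also raises the score
    score += 0.2 * max(0.0, min(1.0, movement_toward_center))
    return min(score, 1.0)

# A person facing the camera, centered, and walking inward scores near 1.0;
# a person facing away at the edge of the field of view scores near 0.
```

An application could compare this score against a threshold to decide when to begin or end an interaction session without manual setup.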
Patent application number | Description | Published |
20100194762 | Standard Gestures - Systems, methods and computer readable media are disclosed for grouping complementary sets of standard gestures into gesture libraries. The gestures may be complementary in that they are frequently used together in a context or in that their parameters are interrelated. Where a parameter of a gesture is set with a first value, all other parameters of the gesture and of other gestures in the gesture package that depend on the first value may be set with their own value which is determined using the first value. | 08-05-2010 |
20100199228 | Gesture Keyboarding - Systems, methods and computer readable media are disclosed for gesture keyboarding. A user makes a gesture by either making a pose or moving in a pre-defined way that is captured by a depth camera. The depth information provided by the depth camera is parsed to determine at least that part of the user that is making the gesture. When parsed, the character or action signified by this gesture is identified. | 08-05-2010 |
20100241998 | VIRTUAL OBJECT MANIPULATION - Systems, methods and computer readable media are disclosed for manipulating virtual objects. A user may utilize a controller, such as his hand, in physical space to associate with a cursor in a virtual environment. As the user manipulates the controller in physical space, this is captured by a depth camera. The image data from the depth camera is parsed to determine how the controller is manipulated, and a corresponding manipulation of the cursor is performed in virtual space. Where the cursor interacts with a virtual object in the virtual space, that virtual object is manipulated by the cursor. | 09-23-2010 |
20100303289 | DEVICE FOR IDENTIFYING AND TRACKING MULTIPLE HUMANS OVER TIME - A system recognizes human beings in their natural environment, without special sensing devices attached to the subjects, uniquely identifies them and tracks them in three dimensional space. The resulting representation is presented directly to applications as a multi-point skeletal model delivered in real-time. The device efficiently tracks humans and their natural movements by understanding the natural mechanics and capabilities of the human muscular-skeletal system. The device also uniquely recognizes individuals in order to allow multiple people to interact with the system via natural movements of their limbs and body as well as voice commands/responses. | 12-02-2010 |
20100306712 | Gesture Coach - A capture device may capture a user's motion and a display device may display a model that maps to the user's motion, including gestures that are applicable for control. A user may be unfamiliar with a system that maps the user's motions, may not know what gestures are applicable for an executing application, or may not know how to perform them. User motion data and/or outputs of filters corresponding to gestures may be analyzed to determine those cases where assistance to the user on performing the gesture is appropriate. | 12-02-2010 |
20110099476 | DECORATING A DISPLAY ENVIRONMENT - Disclosed herein are systems and methods for decorating a display environment. In one embodiment, a user may decorate a display environment by making one or more gestures, using voice commands, using a suitable interface device, and/or combinations thereof. A voice command can be detected for user selection of an artistic feature, such as, for example, a color, a texture, an object, and a visual effect for decorating in a display environment. The user can also gesture for selecting a portion of the display environment for decoration. Next, the selected portion of the display environment can be altered based on the selected artistic feature. The user's motions can be reflected in the display environment by an avatar. In addition, a virtual canvas or three-dimensional object can be displayed in the display environment for decoration by the user. | 04-28-2011 |
20120302350 | COMMUNICATION BETWEEN AVATARS IN DIFFERENT GAMES - Synchronous and asynchronous communications between avatars are allowed. For synchronous communications, when multiple users are playing different games of the same game title and when the avatars of the multiple users are at the same location in their respective games, they can communicate with one another, thus allowing the users of those avatars to communicate with one another. For asynchronous communications, an avatar of a particular user is left behind at a particular location in a game along with a recorded communication. When other users of other games are at that particular location, the avatar of that particular user is displayed and the recorded communication is presented to the other users. | 11-29-2012 |
20120302351 | AVATARS OF FRIENDS AS NON-PLAYER-CHARACTERS - In accordance with one or more aspects, for a particular user one or more other users associated with that particular user are identified based on a social graph of that particular user. An avatar of at least one of the other users is obtained and included as a non-player-character in a game being played by that particular user. The particular user can provide requests to interact with the avatar of the second user (e.g., calling out the name of the second user, tapping the avatar of the second user on the shoulder, etc.), these requests being invitations for the second user to join in a game with the first user. An indication of such an invitation is presented to the second user, who can then, for example, accept the invitation to join in a game with the first user. | 11-29-2012 |
20120309538 | PHYSICAL CHARACTERISTICS BASED USER IDENTIFICATION FOR MATCHMAKING - One or more physical characteristics of each of multiple users are detected. These physical characteristics of a user can include physical attributes of the user (e.g., the user's height, length of the user's legs) and/or physical skills of the user (e.g., how high the user can jump). Based on these detected one or more physical characteristics of the users, two or more of the multiple users to share an online experience (e.g., play a multi-player game) are identified. | 12-06-2012 |
20120311032 | EMOTION-BASED USER IDENTIFICATION FOR ONLINE EXPERIENCES - Emotional response data of a particular user, when the particular user is interacting with each of multiple other users, is collected. Using the emotional response data, an emotion of the particular user when interacting with each of multiple other users is determined. Based on the determined emotions, one or more of the multiple other users are identified to share an online experience with the particular user. | 12-06-2012 |
20130013093 | PHYSICAL CHARACTERISTICS BASED USER IDENTIFICATION FOR MATCHMAKING - One or more physical characteristics of each of multiple users are detected. These physical characteristics of a user can include physical attributes of the user (e.g., the user's height, length of the user's legs) and/or physical skills of the user (e.g., how high the user can jump). Based on these detected one or more physical characteristics of the users, two or more of the multiple users to share an online experience (e.g., play a multi-player game) are identified. | 01-10-2013 |
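The matchmaking abstracts above describe identifying users to share an online experience based on detected physical characteristics. A minimal sketch of one way to do this is shown below; the attribute set (height, jump height) and the distance metric are assumptions for illustration, not details from the filings:

```python
# Hypothetical sketch of characteristics-based matchmaking: pair the two users
# whose measured physical attributes are closest to each other.

from itertools import combinations

def match_users(profiles: dict[str, tuple[float, ...]]) -> tuple[str, str]:
    """Return the pair of users with the smallest attribute distance."""
    def dist(a: tuple[float, ...], b: tuple[float, ...]) -> float:
        # Euclidean distance over the attribute vectors
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(combinations(profiles, 2),
               key=lambda pair: dist(profiles[pair[0]], profiles[pair[1]]))

players = {
    "alice": (170.0, 40.0),   # (height in cm, jump height in cm)
    "bob":   (172.0, 42.0),
    "cara":  (150.0, 20.0),
}
print(match_users(players))   # alice and bob are the closest match
```

In practice a matchmaking service would apply a scheme like this across many candidates, possibly weighting attributes differently or grouping more than two players.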
Patent application number | Description | Published |
20090254445 | SYSTEMS AND METHODS FOR AGGREGATING PACKAGES IN A SHIPPING ENVIRONMENT - Various embodiments for aggregating packages into a pouch in a shipping environment are disclosed. For example, package details associated with a package are compared to one or more incompatibility factors, and if the package details do not match the one or more incompatibility factors, a package identifier uniquely identifying the package is added to a pouch manifest. If the package is not compatible with the pouch, an error message may be displayed to a user indicating that the package is incompatible. In addition, package details may be compared with compatibility factors identifying criteria for packages that are compatible with the pouch and questionable compatibility factors identifying criteria for packages that may be compatible with the pouch. Compatibility factors and questionable compatibility factors may include, for example, service options, delivery notification options, or destination zip code(s). | 10-08-2009 |
20100131420 | APPARATUS, SYSTEMS AND METHODS FOR ONLINE, MULTI-PARCEL, MULTI-CARRIER, MULTI-SERVICE PARCEL RETURNS SHIPPING MANAGEMENT - The present invention provides a computer system (the "System", or the "Return System") that is configured and programmed to provide online stores with a fast, simple, convenient way for eCommerce customers of an online store to return merchandise purchased from that store from within that online store. The Return System provides multi-carrier shipment rating, shipment labeling, shipment tracking, shipment tracking management reports, returns analysis, and returns management reporting. In an exemplary embodiment, the Return System has three major components: 1.) A Returns Manager Subsystem that provides a user interface to each Merchant to set up the Merchant's account, set up the Merchant's return policy and rules, and monitor the status and movement of return shipments; 2.) A Consumer Returns Subsystem (also sometimes referred to as a "Customer Returns Subsystem") that provides each consumer using the Returns System with an online user interface that leads the consumer through the returns process, displays the return policies and rules to the consumer, provides a shipping document to ship the return package if appropriate, and permits the consumer to track their return shipments; and 3.) A Returns Processing Subsystem that, in the exemplary embodiment, provides background shipping and tracking functionality. In one exemplary embodiment of the present invention, the Online Merchant integrates the Merchant's online system with the Returns Processing Subsystem. In another exemplary embodiment, the Returns Processing Subsystem is provided as an independent web-based application service (referred to as a "Return Merchant Service System") operated by a common provider. In such an embodiment, the Merchant's system interacts with the Return Merchant Service System through Application Program Interfaces ("API"). | 05-27-2010 |
20110246384 | Apparatus, Systems and Methods For Online, Multi-Parcel, Multi-Carrier, Multi-Service Enterprise Parcel Shipping Management - The present invention provides a plurality of Enterprises with a single online user interface with which the Enterprise can provide Enterprise Shippers, shipping origination users and shipping intermediary users with an automated parcel management system for a plurality of supported Carriers for a plurality of services. The present invention provides for the hierarchical definition of users, including the establishment of at least one user for each Enterprise as a Super-Administrator with the highest level of privileges and authority for the Enterprise, and the identification of other users as Sub-Administrators, Desktop Users and Shipping Station Users. The present invention also provides for the hierarchical definition of organizational units within each Enterprise, including the definition of sites, groups within a site, and users within a group. The present invention further provides for a distinct definition of policies, privileges, and other types of specifications for each user level, each user, and each organizational unit. The present invention applies the user and organizational policies, privileges and other specifications as they apply to each particular user to drive the interactive interface with each particular user and to provide among other things, shipping options, shipping services, shipping rates, traveler and/or shipping label preparation, and shipment tracking. | 10-06-2011 |
20130179361 | Apparatus, Systems and Methods for Online, Multi-Parcel, Multi-Carrier, Multi-Service Enterprise Parcel Shipping Management - The present invention provides a plurality of Enterprises with a single online user interface with which the Enterprise can provide Enterprise Shippers, shipping origination users and shipping intermediary users with an automated parcel management system for a plurality of supported Carriers for a plurality of services. The present invention provides for the hierarchical definition of users, including the hierarchical definition of organizational units within each Enterprise. The present invention further provides for a distinct definition of policies, privileges, and other types of specifications for each user level, each user, and each organizational unit. The present invention applies the user and organizational policies, privileges and other specifications as they apply to each particular user to drive the interactive interface with each particular user and to provide among other things, shipping options, shipping services, shipping rates, traveler and/or shipping label preparation, and shipment tracking. | 07-11-2013 |
20140337246 | Apparatus, Systems and Methods for Online, Multi-Parcel, Multi-Carrier, Multi-Service Enterprise Parcel Shipping Management - The present invention provides a plurality of Enterprises with a single online user interface with which the Enterprise can provide Enterprise Shippers, shipping origination users and shipping intermediary users with an automated parcel management system for a plurality of supported Carriers for a plurality of services. The present invention provides for the hierarchical definition of users, including the hierarchical definition of organizational units within each Enterprise. The present invention further provides for a distinct definition of policies, privileges, and other types of specifications for each user level, each user, and each organizational unit. The present invention applies the user and organizational policies, privileges and other specifications as they apply to each particular user to drive the interactive interface with each particular user and to provide among other things, shipping options, shipping services, shipping rates, traveler and/or shipping label preparation, and shipment tracking. | 11-13-2014 |
Patent application number | Description | Published |
20110246502 | CREATING AND PROPAGATING ANNOTATED INFORMATION - Content may be collected, annotated, and propagated in a unified process. In one example, a mobile device such as a smart phone is used to collect information. The information may be text, video, audio, etc. The information may be sent to a reaction service, which may return an annotation of the information. The annotation may be attached to the information to create an annotated document. The annotated document may be communicated to other users. Additionally, the annotated document may be stored in a way that associates the annotated document with the user who created or captured the information. The ability to capture information, obtain annotations to the information, and propagate the annotated information may facilitate the creation of social media, such as social network postings or online photo albums. | 10-06-2011 |
20110295878 | ASSISTED CONTENT AUTHORING - An authoring system on a mobile device (or other type of device) may help a user to author a message based on context available on the device. Context data comes to exist on the device in some manner. For example, the context may contain the results of a search that a user has performed on the device. A message may be proposed based on the search query and/or the result — e.g., if a user searches for "Edinburgh," the authoring system may propose the message "Username likes Edinburgh" or "Username is learning about Edinburgh." The authoring system may allow the user to change the message and/or to add additional content and/or links to the message. The user may then send the message over some channel such as e-mail, a social network, a microblogging site, etc. | 12-01-2011 |
20110320560 | CONTENT AUTHORING AND PROPAGATION AT VARIOUS FIDELITIES - Content may be authored on a device using various types of information, and may be propagated at various different fidelities. In one example, a user enters or captures information on a mobile device, such as a smart phone. The entered and/or captured information may be sent to a remote service, which provides information based on the entered and/or captured data. An application on the device then allows the user of the device to author rich content based on the entered and/or captured data, and based on the information returned from the service. The application may allow the user to include text, photos, video, audio, links, or any other type of content. The entire content object that the user creates may be stored in a structured form, and may be propagated at various different fidelities (e.g., text only, etc.) in order to accommodate the limitations of the propagation channel. | 12-29-2011 |
20130104025 | ENABLING IMMERSIVE SEARCH ENGINE HOME PAGES - Systems, methods, and computer-readable storage media for enabling immersive, interactive search engine home pages are provided. Upon receiving a request for a search engine home page, an image is presented that covers only a portion of the available display. The image includes a portion of a larger image but appears as a complete image. Additional image portions are transmitted for presentation on portions of the display not covered by the first image. Collectively, the image and the additional image portions make up a larger image configured to cover the entire available display. Additionally, portions of the larger image may not be visible on the available display absent some type of user interaction with the larger image. Interactions with the larger image, for instance panning, zooming, and the like are enabled providing the user with an immersive, interactive experience with the search engine home page. | 04-25-2013 |
20130104059 | ENABLING IMMERSIVE, INTERACTIVE DESKTOP IMAGE PRESENTATION - Systems, methods, and computer-readable storage media for enabling immersive, interactive desktop image presentation are provided. Upon receiving a request for presentation of a background image of a search engine home page as a desktop image, the background image is transmitted for presentation on a desktop associated with a computing device. In embodiments, the background image, and likewise the desktop image, permits user interaction therewith. For instance, a user may zoom into the image, pan around the image or otherwise interact with enabled regions of the background and/or desktop image that offer additional content and/or navigate the user to another location where additional information may be found. In this way, the user is provided an immersive, interactive experience with the image whether at the search engine home page, the desktop, or both. | 04-25-2013 |
20130173570 | PRESENTING INTERACTIVE IMAGES WITH SEARCH RESULTS - Systems, methods, and computer-readable storage media for presenting interactive images associated with a search engine in association with a search engine results page (SERP) are provided. Upon receiving a search query at a search engine, it is determined that the search query content has a related interactive image associated with the search engine. An interactive image may be associated with the search engine, for instance, by having been previously presented as a background image for a search engine home page. A link to the interactive image may be presented as a search result on the SERP, the interactive image may be automatically presented as a background image of the SERP, or the interactive image may be determined to be related to an algorithmically-derived search result and a visual indicator thereof may be presented in association with the search result. | 07-04-2013 |
Patent application number | Description | Published |
20090314692 | RECOVERY OF REPROCESSABLE MEDICAL DEVICES IN A SHARPS CONTAINER - Reprocessable medical devices that have been disposed of in a sharps container are recovered, disinfected or sterilized, sorted, and packed for reprocessing in a generally continuous process. In one embodiment, either a sharps container or at least a portion of its contents is successively and controllably conveyed, either manually or automatically, through a plurality of processing stations that are configured to perform different operations. The operations performed by the processing stations can include, for example, detecting and identifying contents, cleaning and disinfecting, opening the sharps containers, separating and sorting their contents, and disposal of contents that are non-reprocessable. Reprocessable medical devices can be sorted by type as they move along on a conveyor. | 12-24-2009 |
20090314694 | RECOVERY OF REUSABLE MEDICAL DEVICES IN A SHARPS CONTAINER - A recovery device and method of use to ensure that any reusable medical device that has been disposed of in a sharps container is recovered, cleaned, sterilized, and repackaged for reuse. In one embodiment, a sharps container and sorting surface are manipulated to rotate together and independently, so that the contents of the sharps container are emptied onto the sorting surface, enabling an operator to manually, safely, efficiently, and timely retrieve reusable medical devices from the sorting surface and place these medical devices into a receptacle bin. The non-reusable contents are subsequently dumped into a waste bin, whose contents will subsequently be incinerated or otherwise destroyed. The operator of the recovery device is protected by a shield and an exhaust system that minimize the operator's exposure to airborne biohazardous toxins and enable the sorting to be done without injury to the operator from sharp medical devices. | 12-24-2009 |
20100000915 | RECOVERY OF REPROCESSABLE MEDICAL DEVICES IN A SHARPS CONTAINER - A recovery device and method of use ensure that any reusable medical device that has been disposed of in a sharps container is recovered, cleaned, sterilized, and repackaged for reuse. In one embodiment, a sharps container and sorting surface are manipulated to rotate together and independently, so that the contents of the sharps container are emptied onto the sorting surface, enabling an operator to manually, safely, efficiently, and timely retrieve reusable medical devices from the sorting surface and place these medical devices into a receptacle bin. The non-reusable contents are subsequently dumped into a waste bin, whose contents will subsequently be incinerated or otherwise destroyed. The operator of the recovery device is protected by a shield and an exhaust system that minimize the operator's exposure to airborne biohazardous toxins and enable the sorting to be done without injury to the operator from sharp medical devices. | 01-07-2010 |
20120111770 | Recovery of Reprocessable Medical Devices in a Sharps Container - Reprocessable medical devices that have been disposed of in a sharps container are recovered, disinfected or sterilized, sorted, and packed for reprocessing in a generally continuous process. In one embodiment, either a sharps container or at least a portion of its contents is successively and controllably conveyed, either manually or automatically, through a plurality of processing stations that are configured to perform different operations. The operations performed by the processing stations can include, for example, detecting and identifying contents, cleaning and disinfecting, opening the sharps containers, separating and sorting their contents, and disposal of contents that are non-reprocessable. Reprocessable medical devices can be sorted by type as they move along on a conveyor. | 05-10-2012 |
Patent application number | Description | Published |
20140211624 | Network Device - Disclosed is a network communication switch that facilitates reliable communication of high priority traffic over lower priority traffic across all ingress and egress ports. The network communication switch may monitor the frame storage buffer regardless of egress port, and when the frame storage buffer reaches a predetermined level, the switch may discard lower priority frames from the most congested port. When the frame storage buffer reaches a second predetermined level, the switch may discard lower priority frames before they are stored according to egress port. The network communication switch may further monitor ingress frames for priority, and assign priority to frames according to pre-assigned priority, ingress port, and/or frame contents. | 07-31-2014 |
20140269736 | Transmission of Data Over a Low-Bandwidth Communication Channel - Disclosed herein are various systems and methods that may improve the transmission of data over low-bandwidth communication channels in an electric power delivery system. Devices communicating across a low-bandwidth communication channel may implement several approaches, according to various embodiments disclosed herein, to reduce the data transmitted across the low-bandwidth communication channel and to prioritize the transmission of time-sensitive and/or more important information with respect to other data. Various embodiments disclosed herein may inspect packets to be transmitted across a low-bandwidth communication channel in order to identify high priority data. High priority data may be time-sensitive information, and accordingly, transmission of such data may be prioritized over other data in order to reduce transmission latency of the higher priority data. | 09-18-2014 |
20140280672 | Systems and Methods for Managing Communication Between Devices in an Electrical Power System - Systems and methods for managing communication between devices in an electric power generation and delivery system are disclosed. In certain embodiments, a method for managing communication between devices may include receiving a message including an identifier via a communications interface. In certain embodiments, the identifier may identify a particular publishing device. A determination may be made whether the message is the most recently received message associated with the identifier. If the message is the most recently received message, the message may be stored in a message buffer associated with the identifier, and transmitted from a device using a suitable queuing methodology. | 09-18-2014 |
20140280673 | SYSTEMS AND METHODS FOR COMMUNICATING DATA STATE CHANGE INFORMATION BETWEEN DEVICES IN AN ELECTRICAL POWER SYSTEM - Systems and methods are presented for managing communication between devices in an electric power generation and delivery system. In certain embodiments, a method for managing communication messages performed by a network device included in an electric power generation and delivery system may include receiving a message including an identifier and data state information via a communications interface. A determination may be made that the message represents a data state change associated with the identifier. The message may be stored in a message buffer associated with the identifier. Finally, the stored message may be transmitted from the message buffer to an intelligent electronic device. | 09-18-2014 |
20140280712 | Exchange of Messages Between Devices in an Electrical Power System - Systems and methods are presented for exchanging messages between devices in an electrical power generation and delivery system. In certain embodiments, a method for exchanging messages between devices may include transmitting messages included in a message stream that includes multiple redundant copies of the messages. An indication may be received that at least one message of the message stream was received by an intended receiving device. Transmission of further redundant copies of the message included in the message stream may be determined based on receipt of the indication. | 09-18-2014 |
20140280713 | Proxy Communication Between Devices in an Electrical Power System - Systems and methods for exchanging messages between network devices and intelligent electronic devices of the electric power generation and delivery system are disclosed herein. In certain embodiments, a method performed by a network device for managing the exchange of messages between a first intelligent electronic device (IED) and a second IED included in an electrical power generation and delivery system may include receiving one or more messages configured according to a first communication protocol from the first IED. Based on information regarding one or more communication capabilities of the second IED, a second communication protocol may be determined. The message may be reconfigured according to the second communication protocol to generate at least one reconfigured message. The reconfigured message may then be transmitted to the second IED. | 09-18-2014 |
20140280714 | Mixed-Mode Communication Between Devices in an Electrical Power System - Systems and methods are presented for facilitating mixed-mode communication between stations in an electric power generation and delivery system. In certain embodiments, a method for facilitating mixed-mode communication between a first device configured to communicate according to a first communication protocol and a second device configured to communicate according to a second communication protocol is presented. The method may include installing a network device in a communication channel between the first device and the second device. A communications interface of the network device may be configured to receive messages from the first device and the second device. A message reconfiguration system of the network device may be configured to reconfigure messages received by the network device from the first device to reconfigured messages for transmission to the second device. | 09-18-2014 |
20140282021 | Visualization of Communication Between Devices in an Electric Power System - Systems and methods are presented for visualizing various devices in an electric power generation and delivery system. In certain embodiments, a method for visualizing communication may include receiving configuration information from an electric power generation and delivery system. Based on the configuration information, a plurality of devices included in the electric power generation and delivery system may be identified. Further, a plurality of communication pathways may be identified. Based on the identified plurality of devices and communication pathways, a visual topology of the electric power generation and delivery system may be generated and displayed. | 09-18-2014 |
20150358253 | TRANSMISSION OF DATA OVER A LOW-BANDWIDTH COMMUNICATION CHANNEL - Disclosed herein are various systems and methods that may improve the transmission of data over low-bandwidth communication channels in an electric power delivery system. Devices communicating across a low-bandwidth communication channel may implement several approaches, according to various embodiments disclosed herein, to reduce the data transmitted across the low-bandwidth communication channel and to prioritize the transmission of time-sensitive and/or more important information with respect to other data. Various embodiments disclosed herein may inspect packets to be transmitted across a low-bandwidth communication channel in order to identify high priority data. High priority data may be time-sensitive information, and accordingly, transmission of such data may be prioritized over other data in order to reduce transmission latency of the higher priority data. | 12-10-2015 |
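The prioritized transmission in 20150358253 could be sketched as a priority queue over inspected packets, with FIFO order preserved within a priority level so same-priority data is not reordered. The class names and explicit integer priority tag are assumptions for illustration, not from the application.

```python
import heapq
from dataclasses import dataclass, field
from itertools import count

@dataclass(order=True)
class _Entry:
    priority: int            # 0 = most urgent (time-sensitive) data
    seq: int                 # tie-breaker preserving FIFO within a priority
    payload: bytes = field(compare=False)

class PriorityLink:
    """Queue packets so high-priority (time-sensitive) data is sent first."""

    def __init__(self):
        self._heap = []
        self._seq = count()

    def enqueue(self, payload: bytes, priority: int) -> None:
        # Priority would come from inspecting the packet; here it is given.
        heapq.heappush(self._heap, _Entry(priority, next(self._seq), payload))

    def next_packet(self) -> bytes:
        """Return the next packet to transmit: lowest priority number wins."""
        return heapq.heappop(self._heap).payload
```

A fault event enqueued after routine meter readings would still be transmitted first, reducing its latency on the constrained channel.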
20110055453 | INTERRUPTIBLE NAND FLASH MEMORY - A NAND flash memory logical unit. The NAND flash memory logical unit includes a control circuit that responds to commands and permits program and/or erase commands to be interruptible by read commands. The control circuit includes a set of internal registers for performing the current command, and a set of external registers for receiving commands. The control circuit also includes a set of supplemental registers that allow the NAND flash memory logical unit to have redundancy to properly hold state of an interrupted program or erase command. When the interrupted program or erase command is to resume, the NAND flash memory logical unit thus can quickly resume the paused program or erase operation. This provides significant improvement to read response times in the context of a NAND flash memory logical unit. | 03-03-2011 |
20120130925 | DECOMPOSABLE RANKING FOR EFFICIENT PRECOMPUTING - Methods and computer storage media are provided for generating an algorithm used to provide preliminary rankings to candidate documents. A final ranking function that provides final rankings for documents is analyzed to identify potential preliminary ranking features, such as static ranking features that are query independent and dynamic atom-isolated components that are related to a single atom. Preliminary ranking features are selected from the potential preliminary ranking features based on many factors. Using these selected features, an algorithm is generated to provide a preliminary ranking to the candidate documents before the most relevant documents are passed to the final ranking stage. | 05-24-2012 |
20120130981 | SELECTION OF ATOMS FOR SEARCH ENGINE RETRIEVAL - Methods are provided for populating search indexes with atoms identified in documents. Documents that are to be indexed are identified, and for each document, atoms are identified and are categorized as unigrams, n-grams, and n-tuples. A list of atom/document pairs is generated such that an information metric can be computed for each pair. An information metric represents a ranking of the atom in relation to the particular document. Based on the information metric, some atom/document pairs are discarded and others are indexed. | 05-24-2012 |
20120130994 | MATCHING FUNNEL FOR LARGE DOCUMENT INDEX - Search results are identified and returned in response to search queries by evaluating and pruning candidate documents in multiple stages. The process employs a search index that indexes atoms found in documents and pre-computed scores for document/atom pairs. When a search query is received, atoms are identified from the search query and a reformulated query is generated based on the identified atoms. The reformulated query is used to identify matching documents, and a preliminary score is generated for matching documents using a simplified scoring function and pre-computed scores in the search index. Documents are pruned based on preliminary scores, and the remaining documents are evaluated using a final ranking algorithm that provides a final set of ranked documents, which is used to generate search results to return in response to the search query. | 05-24-2012 |
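The multi-stage funnel in 20120130994 can be sketched as: match on atoms, score cheaply from precomputed doc/atom scores, prune, then rank the survivors with an expensive function. Data shapes (dicts and sets) and the summing preliminary scorer are assumptions for illustration.

```python
def matching_funnel(query_atoms, index, precomputed, final_scorer, keep=3):
    """Two-stage retrieval sketch: cheap preliminary scoring, prune,
    then a more expensive final ranking over the survivors.

    index: atom -> set of doc ids; precomputed: (doc, atom) -> score.
    """
    # Stage 1: match any document containing at least one query atom.
    matches = set().union(*(index.get(a, set()) for a in query_atoms))
    # Stage 2: preliminary score from precomputed doc/atom pair scores.
    prelim = {d: sum(precomputed.get((d, a), 0.0) for a in query_atoms)
              for d in matches}
    # Prune to the top `keep` candidates by preliminary score.
    survivors = sorted(prelim, key=prelim.get, reverse=True)[:keep]
    # Stage 3: final (expensive) ranking over the small survivor set only.
    return sorted(survivors, key=final_scorer, reverse=True)
```

The point of the funnel is that `final_scorer`, the costly step, runs on at most `keep` documents rather than on every match.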
20120130995 | EFFICIENT FORWARD RANKING IN A SEARCH ENGINE - Methods and computer storage media are provided for generating entries for documents in a forward index. A document and its document identification are received, in addition to static features that are query-independent. The document is parsed into tokens to form a token stream corresponding to the document. Relevant data used to calculate rankings of documents is identified, and the position of the data is determined. The entry is then generated from the document identification, the token stream of the document, the static features, and the positional information of the relevant data. The entry is stored in the forward index. | 05-24-2012 |
20120130996 | TIERING OF POSTING LISTS IN SEARCH ENGINE INDEX - A search index includes tiered posting lists. Each posting list in the search index corresponds with a different atom and includes a list of documents containing the particular atom. Additionally, a rank is stored with each document listed in a posting list for a given atom, representing the relevance of the atom to the context of each document. At least some of the posting lists in the search index are tiered. A tiered posting list is divided into a number of tiers, with the tiers being ordered by rank while each tier is internally ordered by document. Employing tiered posting lists within the search index allows a search engine to evaluate search queries in a manner that allows for a number of efficiencies and early stopping. | 05-24-2012 |
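A sketch of the tiered layout in 20120130996: best-ranked postings go into earlier tiers, each tier is kept in document order, and a query walk can stop once enough candidates are collected without touching lower tiers. Tier sizing and the fixed `tier_size` split are illustrative assumptions.

```python
class TieredPostingList:
    """Tiered posting list sketch: tiers ordered by rank (best first),
    each tier internally ordered by document id."""

    def __init__(self, postings, tier_size=2):
        # postings: list of (doc_id, rank); highest-ranked docs fill tier 0.
        ordered = sorted(postings, key=lambda p: p[1], reverse=True)
        self.tiers = [sorted(ordered[i:i + tier_size])   # tier in doc order
                      for i in range(0, len(ordered), tier_size)]

    def top_docs(self, need):
        """Walk tiers best-first; stop early once `need` docs are found."""
        found = []
        for tier in self.tiers:
            for doc_id, _rank in tier:
                found.append(doc_id)
            if len(found) >= need:
                break  # early stopping: lower tiers are never read
        return found[:need]
```

Because tier 0 already holds the highest-ranked documents for the atom, a query needing only a few candidates reads one tier and stops.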
20120173510 | PRIORITY HASH INDEX - A priority hash index provides efficient lookup of posting lists for search query terms. The priority hash index is a data structure in which hash values for terms are distributed across multiple storage devices based on the importance of the terms and the access speeds of the storage devices. Terms are grouped into search lists, with each search list including a storage location on each storage device. When a search query is received, a term is identified and hashed to a location on the first storage device, generating a unique hash value for the term. The locations on the storage devices for the term's search list are sequentially read until the hash value for the term is located, providing access to the posting list for the term. | 07-05-2012 |
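The lookup path in 20120173510 might look like the following sketch: a term hashes to a search list, the list owns one slot per storage device (fastest first), and lookup scans those slots in order until the term's hash is found. The hash construction and in-memory slot layout are assumptions standing in for real device storage.

```python
import hashlib

def term_hash(term: str) -> int:
    """Stable 64-bit hash standing in for the unique per-term hash value."""
    return int.from_bytes(hashlib.sha256(term.encode()).digest()[:8], "big")

class PriorityHashIndex:
    """Sketch: each search list owns one slot per storage device, ordered
    fastest device first; important terms are written to faster slots."""

    def __init__(self, num_lists, num_devices):
        self.num_lists = num_lists
        # slots[list_id][device] -> list of (hash, posting_list) records
        self.slots = [[[] for _ in range(num_devices)]
                      for _ in range(num_lists)]

    def insert(self, term, postings, device):
        # `device` 0 is the fastest tier; importance decides placement.
        lst = term_hash(term) % self.num_lists
        self.slots[lst][device].append((term_hash(term), postings))

    def lookup(self, term):
        """Scan the term's search list device by device until its hash
        is located, then return the associated posting list."""
        lst = term_hash(term) % self.num_lists
        h = term_hash(term)
        for device_slot in self.slots[lst]:        # fastest device first
            for stored_hash, postings in device_slot:
                if stored_hash == h:
                    return postings
        return None
```

Frequent, important terms inserted at device 0 are found on the first read; rarer terms cost extra sequential reads on slower devices.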
20130297621 | DECOMPOSABLE RANKING FOR EFFICIENT PRECOMPUTING - Methods and computer storage media are provided for generating an algorithm used to provide preliminary rankings to candidate documents. A final ranking function that provides final rankings for documents is analyzed to identify potential preliminary ranking features, such as static ranking features that are query independent and dynamic atom-isolated components that are related to a single atom. Preliminary ranking features are selected from the potential preliminary ranking features based on many factors. Using these selected features, an algorithm is generated to provide a preliminary ranking to the candidate documents before the most relevant documents are passed to the final ranking stage. | 11-07-2013 |
20140149401 | PER-DOCUMENT INDEX FOR SEMANTIC SEARCHING - Methods, computer systems, and computer-storage media for generating a per-document index used for semantic searching are provided. A document is received and parsed into a plurality of sections. Each term in each section is translated into at least one of a cache index or a term identifier. Subsequent to translating the terms, each section is separately group encoded to generate the per-document index. The per-document index is stored in association with a data store. | 05-29-2014 |
20150234917 | PER-DOCUMENT INDEX FOR SEMANTIC SEARCHING - Methods, computer systems, and computer-storage media for generating a per-document index used for semantic searching are provided. A document is received and parsed into a plurality of sections. Each term in each section is translated into at least one of a cache index or a term identifier. Subsequent to translating the terms, each section is separately group encoded to generate the per-document index. The per-document index is stored in association with a data store. | 08-20-2015 |
20090054123 | INFORMATION COLLECTION DURING GAME PLAY - Systems and methods allow an on-line game to extract information relevant to a specific need of a game platform or service platform. The specific need relates to management and use of digital content, and is addressed by designing and playing an on-line collaborative game. The rules of the game intend to solve a specific task dictated by the specific need. Players' responses to the game generate a wealth of information related to a specific task objective, such as ranking, sorting, and evaluating a set of digital content items. To compel participation in a game, players can be rewarded with monetary value rewards. As a game illustration, an image selection game (ISG) that exploits human contextual inference is described in detail. The information extracted from ISG is a list of key-image associations, relevant for the task of image sorting and ranking. | 02-26-2009 |
20090271389 | PREFERENCE JUDGEMENTS FOR RELEVANCE - The claimed subject matter provides a system that trains or evaluates ranking techniques by employing or obtaining relative preference judgments. The system can include mechanisms that retrieve a set of documents from a storage device, combine the set of documents with a query or judgment task received via an interface to form a comparative selection panel, and present the comparative selection panel for evaluation by an assessor. The system further requests the assessor to make a selection as to which document included in the set of documents and presented in the comparative selection panel most satisfies the query or judgment task, and thereafter produces a comparative assessment of the set of documents based on the selections elicited from the assessor and associated with the set of documents. | 10-29-2009 |
20100293174 | QUERY CLASSIFICATION - Techniques and systems are disclosed that provide for constructing a query classification index that can be used to classify a query into relevant categories. Where documents in an index are classified into one or more category predictions for a category hierarchy, classification metadata is generated for categories to which a document in the index has been classified. Further, the classification metadata is associated to the corresponding documents in the index. Additionally, a query of the index can be classified using the metadata associated to the documents in the index, and query results can be provided that are classified by the one or more categories identified by the classification of the query. | 11-18-2010 |
20110314023 | Online Stratified Sampling for Classifier Evaluation - To determine if a set of items belongs to a class of interest, the set of items is binned into sub-populations based on a score, ranking, or trait associated with each item. The sub-populations may be created based on the score associated with each item, such as an equal score interval, or on the distribution of the items within the overall population, such as a proportion interval. A determination is made of how many samples are needed from each sub-population in order to make an estimation regarding the entire set of items. Then the precision and variance for each sub-population are calculated and combined to provide overall precision and variance values for the overall population. | 12-22-2011 |
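The combining step in 20110314023 resembles the textbook stratified-sampling estimator: weight each sub-population's sample precision by its population share, and combine variances with squared weights. This sketch uses that standard weighting, which the application may refine; the tuple layout is an assumption.

```python
def stratified_estimate(strata):
    """Combine per-stratum sample precisions into overall estimates.

    strata: list of (population_size, sampled_positives, sample_size).
    Returns (precision, variance) using the standard stratified-sampling
    estimator: population-share weights, squared weights for variance.
    """
    total = sum(n for n, _, _ in strata)
    precision = variance = 0.0
    for n, positives, sampled in strata:
        w = n / total                         # stratum's population share
        p = positives / sampled               # sample precision in stratum
        precision += w * p
        variance += w * w * p * (1 - p) / sampled  # binomial sampling var
    return precision, variance
```

Binning first pays off when strata are internally homogeneous: each stratum's `p * (1 - p)` term shrinks, so fewer samples achieve the same overall variance.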
20120317104 | Using Aggregate Location Metadata to Provide a Personalized Service - Functionality is described herein which generates a plurality of item models based on the aggregate behavior of users, such as the aggregate behavior of the users in selecting network-accessible sites and/or issuing particular queries. In one implementation, each item model estimates a probabilistic distribution of locations for an individual, given that the individual selects a particular item (e.g., a particular site or query). The functionality can use the item models to provide a personalized service to an end user. For example, in one scenario, the functionality can generate a plurality of location-based features based on the item models. The functionality can then learn a ranking model based on the location-based features. In a real-time phase of operation, a query processing system uses the ranking model to personalize search results for an end user. | 12-13-2012 |
20120323828 | FUNCTIONALITY FOR PERSONALIZING SEARCH RESULTS - A query processing system is described herein for personalizing results for a particular user. The query processing system operates by receiving a query from a particular user u who intends to find results that satisfy the query with respect to a topic T | 12-20-2012 |
20130060763 | USING READING LEVELS IN RESPONDING TO REQUESTS - A request can be received and a request reading level representation for the request can be inferred. In response to the request, the request reading level representation can be compared with one or more reading difficulty level representations for one or more response items. Also in response to the request, one or more indications of results of comparing the request reading level representation with one or more reading difficulty level representations for the one or more response items can be returned. The indication(s) may include a ranking of the response items. The ranking can be based at least in part on a request reading level representation for the query and reading difficulty level representations for the response items. The response item(s) may also be returned. | 03-07-2013 |
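The ranking described in 20130060763 could be reduced to ordering response items by how closely their reading-difficulty levels match the inferred request level. Representing both levels as single numbers is a simplifying assumption; the application's representations may be richer.

```python
def rank_by_reading_level(request_level, items):
    """Rank response items by closeness of reading difficulty to the
    request's inferred reading level.

    items: list of (name, difficulty_level); smaller gap ranks higher.
    """
    return sorted(items, key=lambda item: abs(item[1] - request_level))
```

For a request inferred at level 5.0, an item at difficulty 5.5 outranks both an easier item at 2.0 and a harder one at 9.0.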
20140005941 | DYNAMIC DESTINATION NAVIGATION SYSTEM | 01-02-2014 |
20150154307 | USING READING LEVELS IN RESPONDING TO REQUESTS - A request can be received and a request reading level representation for the request can be inferred. In response to the request, the request reading level representation can be compared with one or more reading difficulty level representations for one or more response items. Also in response to the request, one or more indications of results of comparing the request reading level representation with one or more reading difficulty level representations for the one or more response items can be returned. The indication(s) may include a ranking of the response items. The ranking can be based at least in part on a request reading level representation for the query and reading difficulty level representations for the response items. The response item(s) may also be returned. | 06-04-2015 |
20150211874 | DYNAMIC DESTINATION NAVIGATION SYSTEM - The claimed subject matter provides a method for navigating to dynamic destinations. The method includes associating a leader mobile device with a follower mobile device. The method also includes displaying, on the follower mobile device, a first path from a follower vehicle to a first location of a leader vehicle. The follower vehicle is associated with the follower mobile device. The leader vehicle is associated with the leader mobile device. The method further includes displaying, on the follower mobile device, a second path from the follower vehicle to a second location of the leader vehicle. | 07-30-2015 |
20100306282 | Hierarchical Classification - The hierarchical approach may start at the bottom of the hierarchy. As it moves up the hierarchy, knowledge from children and cousins is used to classify items at the parent. In addition, knowledge of improper classifications at a low level is raised to a higher level to create new rules to better identify mistaken classifications at a higher level. Once the top of the hierarchy is reached, a top-down approach is used to further refine the classification of items. | 12-02-2010 |
20110040752 | USING CATEGORICAL METADATA TO RANK SEARCH RESULTS - A system that facilitates ranking search results returned by a search engine in response to receipt of a query is described herein. The system includes a receiver component that receives categorical metadata pertaining to an item and categorical metadata pertaining to the query and a computation component that computes at least one of a document feature pertaining to the item, a query feature pertaining to the query, or a document-query feature pertaining to the item and the query based at least in part upon one or more of the categorical metadata pertaining to the item or the categorical metadata pertaining to the query. The system also includes a ranker component that selectively places the item in a particular location in a sequence of items based at least in part upon the at least one of the document feature, the query feature, or the document-query feature. | 02-17-2011 |
20120117043 | Measuring Duplication in Search Results - Measuring duplication in search results is described. In one example, duplication between a pair of results provided by an information retrieval system in response to a query is measured. History data for the information retrieval system is accessed and query data retrieved, which describes the number of times that users have previously selected either or both of the pair of results, and a relative presentation sequence of the pair of results when displayed at each selection. From the query data, a fraction of user selections is determined in which a predefined combination of one or both of the pair of results were selected for a predefined presentation sequence. From the fraction, a measure of duplication between the pair of results is found. In further examples, the information retrieval system uses the measure of duplication to determine an overall redundancy value for a result set, and controls the result display accordingly. | 05-10-2012 |
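One way to read the fraction-based measure in 20120117043: among impressions of a result pair where the user clicked at least one of the two, count how often exactly one was clicked. If users who click one rarely also click the other, the pair likely duplicates content. This particular ratio is an illustrative interpretation of the abstract, not the filing's exact formula.

```python
def duplication_score(first_only, second_only, both):
    """Estimate redundancy between a result pair from click counts.

    first_only / second_only: impressions where exactly that result was
    clicked; both: impressions where both were clicked. Returns a value
    in [0, 1]; near 1 means users almost never click both (redundant pair).
    """
    clicked = first_only + second_only + both
    if clicked == 0:
        return 0.0  # no click evidence for this pair
    return (first_only + second_only) / clicked
```

A result set's overall redundancy could then be aggregated over its pairs, and highly redundant results demoted or collapsed in the display.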
20120158621 | STRUCTURED CROSS-LINGUAL RELEVANCE FEEDBACK FOR ENHANCING SEARCH RESULTS - A “Cross-Lingual Unified Relevance Model” provides a feedback model that improves a machine-learned ranker for a language with few training resources, using feedback from a more complete ranker for a language that has more training resources. The model focuses on linguistically non-local queries, such as “world cup” (English language/U.S. market) and “copa mundial” (Spanish language/Mexican market), that have similar user intent in different languages and markets or regions, thus allowing the low-resource ranker to receive direct relevance feedback from the high-resource ranker. Among other things, the Cross-Lingual Unified Relevance Model differs from conventional relevancy-based techniques by incorporating both query- and document-level features. More specifically, the Cross-Lingual Unified Relevance Model generalizes existing cross-lingual feedback models, incorporating both query expansion and document re-ranking to further amplify the signal from the high-resource ranker to enable a learning to rank approach based on appropriately labeled training data. | 06-21-2012 |
20120158685 | Modeling Intent and Ranking Search Results Using Activity-based Context - The subject disclosure is directed towards building one or more context and query models representative of users' search interests based on their logged interaction behaviors (context) preceding search queries. The models are combined into an intent model by learning an optimal combination (e.g., relative weight) for combining the context model with a query model for a query. The resultant intent model may be used to perform a query-related task, such as to rank or re-rank online search results, predict future interests, select advertisements, and so forth. | 06-21-2012 |
20130238608 | SEARCH RESULTS BY MAPPING ASSOCIATED WITH DISPARATE TAXONOMIES - Architecture that generates signals/features that capture the match between the intent of a query and the category of documents. For example, for a query intent related to “autos”, documents that belong to categories related to “Autos” receive a higher score than documents of a “computers” category. The architecture can be applied to a search ecosystem where a query intent classification and a document category classifier are available, learns the mapping between query intent and document category, and introduces category-match features to a ranking algorithm, thereby improving search result relevance. The architecture learns the mapping between two existing and different taxonomies to create a category-match signal from which the ranking algorithm can learn. Moreover, the architecture adapts to a complex ecosystem where different taxonomies exist on the query side and the document side by learning a mapping score between at least two taxonomies. | 09-12-2013 |
20130246412 | RANKING SEARCH RESULTS USING RESULT REPETITION - Ranking search results using result repetition is described. In an embodiment, a set of results generated by a search engine is ranked or re-ranked based on whether any of the results were included in previous sets of results generated in response to earlier queries by the same user in one or more searching sessions. User behavior data, such as whether a user clicks on a result, skips a result or misses a result, is stored in real-time and the stored data is used in performing the ranking. In various examples, the ranking is performed using a machine-learning algorithm and various parameters, such as whether a result in a current set of results has previously been clicked, skipped or missed in the same session, are generated based on the user behavior data for the current session and input to the machine-learning algorithm. | 09-19-2013 |
20130346404 | RANKING BASED ON SOCIAL ACTIVITY DATA - Various technologies described herein pertain to using social activity data to personalize ranking of results returned by a computing operation for a user. For each of the results returned by the computing operation, a respective first affinity of the user to a corresponding result and a respective second affinity of the user to the corresponding result can be calculated and used for ranking the results. The respective first affinity of the user to the corresponding result can be calculated based on correlations between social activity data of the user and social activity data of a first group of historical users that clicked the corresponding result. Moreover, the respective second affinity of the user to the corresponding result can be calculated based on correlations between the social activity data of the user and social activity data of a second group of historical users that skipped the corresponding result. | 12-26-2013 |
20140244610 | PREDICTION AND INFORMATION RETRIEVAL FOR INTRINSICALLY DIVERSE SESSIONS - Various technologies described herein pertain to predicting intrinsically diverse sessions and retrieving information for such intrinsically diverse sessions. Search results retrieved by a search engine responsive to executing a query are received. A query classifier can be employed to determine whether the query is intrinsically diverse or not intrinsically diverse based on one or more features of the query and session interaction properties. The query is intrinsically diverse when included in an intrinsically diverse session directed towards a task, where the query and disparate queries included in the intrinsically diverse session are directed towards respective subtasks of the task. An objective function can be evaluated based at least upon the query to compute an optimized value when the query is determined to be intrinsically diverse. The search results can be presented on a display screen according to the optimized value when the query is determined to be intrinsically diverse. | 08-28-2014 |